id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
199468890 | pes2o/s2orc | v3-fos-license | A Neutral “Aluminocene” Sandwich Complex: η1‐ versus η5‐Coordination Modes of a Pentaarylborole with ECp* (E=Al, Ga; Cp*=C5Me5)
Abstract The pentaaryl borole (Ph*C)4BXylF [Ph*=3,5‐tBu2(C6H3); XylF=3,5‐(CF3)2(C6H3)] reacts with low‐valent Group 13 precursors AlCp* and GaCp* by two divergent routes. In the case of [AlCp*]4, the borole reacts as an oxidising agent and accepts two electrons. Structural, spectroscopic, and computational analysis of the resulting unprecedented neutral η5‐Cp*,η5‐[(Ph*C)4BXylF] complex of AlIII revealed a strong, ionic bonding interaction. The formation of the heteroleptic borole‐cyclopentadienyl “aluminocene” leads to significant changes in the 13C NMR chemical shifts within the borole unit. In the case of the less‐reductive GaCp*, borole (Ph*C)4BXylF reacts as a Lewis acid to form a dynamic adduct with a dative 2‐center‐2‐electron Ga−B bond. The Lewis adduct was also studied structurally, spectroscopically, and computationally.
Experimental Details
General Information
All manipulations requiring inert conditions were carried out under an argon atmosphere using standard Schlenk techniques or an MBraun glovebox with an Ar atmosphere. Benzene was obtained from an MBraun SPS and stored over molecular sieves; toluene and ether were distilled from sodium and degassed. Hexane and pentane were distilled from Na/K alloy. THF was distilled from potassium. Benzene-d6 and toluene-d8 were distilled from potassium, degassed, and stored in a glovebox.
NMR spectroscopy
NMR spectra were recorded with a Bruker Avance III 400 NMR spectrometer equipped with a 5 mm BBFO ATM probe head and operating at 400.13 MHz for ¹H; heteronuclear spectra were referenced on the unified Ξ scale (e.g., Ξ = 94.094011 % for ¹⁹F). [1] ¹H and ¹³C spectra were referenced to the specific values of the respective solvent signals. The proton and carbon signals were assigned where possible via detailed analysis of ¹H, ¹³C, ¹H-¹H COSY, ¹H-¹H NOESY, ¹H-¹³C HSQC, and ¹H-¹³C HMBC NMR spectra.
Young-type Teflon-valved borosilicate NMR tubes were used throughout the study.
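The unified Ξ referencing mentioned above can be made concrete with a short numerical sketch. The following Python snippet is illustrative only: it assumes the nominal 400.13 MHz ¹H frequency of an Avance III 400 and uses the Ξ value for ¹⁹F quoted in the text; the function name is ours.

```python
# Illustrative only: absolute 0-ppm frequency of a heteronucleus on the
# IUPAC unified scale. Xi is the ratio (in percent) of the heteronucleus
# reference frequency to the 1H frequency of TMS in the same field.

XI_19F = 94.094011          # percent, value for 19F quoted in the text

def hetero_reference_mhz(nu_1h_tms_mhz, xi_percent):
    """Return the 0-ppm reference frequency (MHz) of a heteronucleus."""
    return nu_1h_tms_mhz * xi_percent / 100.0

nu_1h = 400.13              # MHz, nominal 1H frequency of an Avance III 400
print(f"19F reference: {hetero_reference_mhz(nu_1h, XI_19F):.4f} MHz")
# -> 376.4984 MHz
```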
Mass spectrometry
Mass spectra were recorded by the Zentrale Analytik of the Faculty of Chemistry, Göttingen, applying a liquid injection field desorption ionisation (LIFDI) technique on a JEOL AccuTOF instrument with an inert-sample application setup under an argon atmosphere. The injection capillary was washed several times with dry, distilled toluene injected under inert conditions before the samples were injected. Samples usually had a concentration of 1–2 mmol/L in toluene and were prepared in a glovebox.
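For orientation, the sample preparation above involves only trivial molarity arithmetic. The sketch below uses a hypothetical molar mass purely as a placeholder (not that of compounds 1 or 2).

```python
# Back-of-the-envelope preparation of the 1-2 mmol/L LIFDI solutions.
# The molar mass below is a hypothetical placeholder.

def mass_mg(conc_mmol_per_l, volume_ml, molar_mass_g_per_mol):
    """Mass (mg) of analyte needed for a given concentration and volume."""
    return conc_mmol_per_l * (volume_ml / 1000.0) * molar_mass_g_per_mol

# e.g. 1 mL of a 2 mmol/L solution of a 1000 g/mol compound:
print(f"{mass_mg(2.0, 1.0, 1000.0):.1f} mg")   # -> 2.0 mg
```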
Crystallographic details
Crystals suitable for X-ray analysis grew from benzene solutions carefully concentrated at ambient temperature by evaporation, followed by storage of the highly concentrated liquid for a few days.
Exposed to the ambient atmosphere, the crystals suspended in oil rapidly lose colour and crystallinity; crystal examination and picking were therefore performed using an X-TEMP setup.
[Image: crystal crop from benzene.]
Tabulated crystallographic data for 2.
Data Acquisition and Processing
X-ray data for 1 and 2 were collected on Bruker APEX II CCD diffractometers with Mo Kα radiation from either an IµS or a spinning-anode source. The data were integrated using SAINT as implemented in Bruker's APEX3 programme suite. [6] SADABS [7] or TWINABS [8] was used for multi-scan absorption correction. Structure solution was performed with SHELXT [9] and refinement with SHELXL [10] within the graphical user interface ShelXle. [11] In some cases DSR was applied to treat disordered solvent molecules. [12] All hydrogen atoms were placed with a riding model. Further details on the individual data sets are tabulated in the analytical section of each compound. All structures were deposited with the CCDC.
Crystallographic and Refinement Details 1
Crystals of compound 1 were obtained from three different solvents (toluene, benzene and hexane) from concentrated solutions at ambient or low temperature (−40 °C). The crystals are stable under an argon atmosphere but lose their crystallinity under ambient conditions in inert oil within minutes. Crystals were therefore mounted with an X-TEMP 2 device.
As the crystals of 1 from benzene were twinned, the two reciprocal lattices were sorted using RLATT from within the Bruker Apex 3 2018.7-2 GUI. All three datasets were integrated using SAINT 8.38A.
All three structures showed disorder of the solvent molecules in solvent-accessible voids and within the majority of the structure itself (the entire borole subunit), which in consequence results in very poor intensity of reflections at resolutions higher than about 1.2 Å. To account for the disordered solvent with as few parameters as possible, its contribution was treated with the SQUEEZE routine implemented in PLATON. [13] The disorder of the Ph* and XylF groups was treated differently. The Ph* moiety was modelled using a modified mesityl group as included in the DSR programme, with all positions other than the tBu methyl groups refined as a rigid body. [12] The positions of the bound methyl groups were refined freely (see the rigid-group representation below).
Within the five-membered borole unit, the Cα and Cβ positions were restrained to have similar 1,2 and 1,3 distances. The target symmetry of these restraints is equivalent to a mirror plane through the boron atom and the opposing carbon-carbon bond. All C-C distances from the borole ring to the outer substituents were also restrained to be equivalent. All tert-butyl groups were restrained to have similar 1,2 and 1,3 distances, and equivalent restraints were applied to the trifluoromethyl groups.
Atomic displacement parameters of atoms within the disordered borole moiety were restrained to have similar Uij components to those of their neighbours (SIMU). Additionally, rigid-body restraints for the atomic displacement parameters were applied to these atoms (RIGU).
Given the very similar electron-density patterns of a (C-Ph*) and a (B-XylF) moiety, the quasi-five-fold symmetry, as well as the disorder, pose the question of whether there are additional orientations. All putative combinations of boron positions for the two disorder components were evaluated, with the reported structures showing a significantly lower R value than the alternatives. The difference can, in large part, be attributed to the fit of the CF3 groups. The model should therefore represent the two main positions of the borole moiety. However, due to the nature of the disorder and the limited resolution, additional minor occupancies in which the ring overlaps but is rotated differently cannot be ruled out.
Despite these considerable efforts, the resulting data-to-parameter ratio remained low for all three structures. This is inherent to the structure itself, as mentioned above. However, the derived features are similar across all three structures and consistent with all other experimental and especially theoretical results.
Representation of the rigid group used. The positions of the red atoms were refined as a rigid group; the black methyl groups were refined freely.
Tabulated values for the key structural features of the "Aluminocene" 1 from various data sets. Please note that there are differences in the Al-B distances between Disorder 1 and Disorder 2; this may indicate that the exact assignment/modelling of (C-Ph*) vs. (B-XylF) units is incomplete.
Depictions of the disordered borole subunit within molecule 1: Part 1 (blue), Part 2 (orange). The second fragment is a borole unit rotated by ca. 36° with an inversion of the paddlewheel tilt of the aryl groups. This major disorder, along with further disorder within the tBu groups, causes the low resolution of the obtainable data.
Refinement Details 2
The structure contains one molecule of lattice benzene, which is disordered and was modelled using SIMU, RIGU and SAME commands. Two tert-butyl groups and a CF3 group are disordered and were each modelled over two positions using SIMU, RIGU and SAME commands.
Structure Optimisation, Frequency Calculation and Thermochemical Approximations
For thermochemical approximations, structures were optimised with Gaussian09.D01 [14] applying the BP86 functional [15] and Grimme's D3 dispersion correction [16] with def2-SVP [17] basis sets on all elements. Frequency calculations were performed on these structures, and the absence of imaginary frequencies confirmed true local minima on the potential energy surface.
Thermochemical corrections stem from these calculations. Single point energies were calculated on these structures using a def2-TZVP basis set on all atoms.
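As a worked illustration of how the composite energies described above are typically assembled, the following Python sketch combines a def2-TZVP single-point energy with a BP86-D3/def2-SVP thermal correction. All energies below are hypothetical placeholders, not data from this work; only the Hartree-to-kJ/mol conversion factor is a physical constant.

```python
# Sketch of the composite scheme: def2-TZVP single-point electronic
# energies combined with BP86-D3/def2-SVP thermal corrections (Hartree).

HARTREE_TO_KJ_PER_MOL = 2625.4996   # physical conversion factor

def gibbs(e_single_point, g_correction):
    """Composite Gibbs energy in Hartree: E(def2-TZVP) + G_corr(def2-SVP)."""
    return e_single_point + g_correction

e_prod, g_prod = -2456.123456, 0.987654   # hypothetical product values
e_reac, g_reac = -2456.098765, 0.985432   # hypothetical reactant values

delta_g = (gibbs(e_prod, g_prod) - gibbs(e_reac, g_reac)) * HARTREE_TO_KJ_PER_MOL
print(f"Delta G = {delta_g:.1f} kJ/mol")  # -> about -59.0 kJ/mol
```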
[a] Thermochemical corrections stem from the BP86-D3/def2-SVP optimisation and frequency calculations.

Summary of GIAO-NMR Computations
Computational examination was performed using ORCA (version 4.1). [18] For numerical accuracy, a grid size of "5" and a final-step grid size of "6" were applied. GIAO-NMR spectroscopic properties were calculated as implemented by default in ORCA 4.1, applying the RIJK-PBE0 functional [19] on structures previously optimised using the RI-BP86-D3BJ-def2-TZVP/J model chemistry. [15,17,20] Input structures were based on the X-ray structures of 2 and A. For NMR calculations of the reference set of small molecules, def2-TZVPP basis sets were chosen for B, Al and Ga and def2-TZVP for all other elements. For the rather large molecules 1 and 2, def2-TZVPP basis sets were chosen for B, Al and Ga, while a def2-TZVP basis was chosen for the core carbon atoms (namely the borole Cα and Cβ positions, the ipso-CXylF atom, and the inner cyclopentadienyl carbon atoms), and a def2-SVP basis set was applied for all other atoms.
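Computed GIAO shieldings are usually converted to chemical shifts against a reference computed at the same level of theory. The Python sketch below illustrates that arithmetic; the shielding values are invented placeholders (the paper does not report them here), and BF3·OEt2 appears only as the conventional ¹¹B reference standard.

```python
# Sketch: converting GIAO isotropic shieldings (sigma, ppm) to chemical
# shifts: delta = delta_ref + (sigma_ref - sigma).  Numbers are invented.

def shift_from_shielding(sigma, sigma_ref, delta_ref=0.0):
    """Chemical shift (ppm) from computed isotropic shieldings (ppm)."""
    return delta_ref + (sigma_ref - sigma)

sigma_ref_11b = 100.0     # hypothetical shielding of BF3*OEt2 (delta = 0 ppm)
sigma_borole_b = 45.0     # hypothetical shielding of the borole boron
print(shift_from_shielding(sigma_borole_b, sigma_ref_11b))   # -> 55.0 ppm
```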
Frontier Orbital Depictions
Selected canonical frontier orbitals from the BP86 calculations (vide supra) are shown, all drawn at an isosurface value of 0.04 a.u. using the programme ChemCraft for visualisation. [24] All hydrogen atoms are omitted for the sake of clarity.
[Figure: depictions of the HOMO and LUMO.]
Topology Analyses
Topology analyses and Bader charge analyses [25] were carried out using the Multiwfn programme [26] or AIMAll [27] on the RI-BP86-D3BJ-def2-TZVP wave-function files obtained from ORCA. To shed further light on the structure of the aluminium sandwich complex, further analyses were carried out. The results of the topology analyses did not differ between wavefunctions obtained from BP86 and PBE0 functional calculations, and no qualitative change between def2-SVP and def2-TZVPP basis sets was observed. In all cases the same critical points and bond paths were found, giving the same molecular graphs. We further investigated the parent all-hydrogen-substituted η5,η5-(C4BH5)(C5H5)Al complex. Structures were optimised using both the BP86 and PBE0 functionals with def2-TZVPP basis sets. No imaginary frequencies were found, confirming minimum structures. The geometries obtained are summarised in the following figure. Some features of the QTAIM analyses for both calculations are depicted below. The isodensity surfaces show that the electron density around the boron atom is significantly reduced when compared to the densities at Cα but also Cβ.
| 2019-08-08T13:13:53.066Z | 2019-09-10T00:00:00.000 | {
"year": 2019,
"sha1": "9697f967e199fff22525ce704c8e5a80197b4f1f",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anie.201907749",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "92c2d79bb573ad6156451d602e59d3e8286ea2c5",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
237984787 | pes2o/s2orc | v3-fos-license | Self-reported hearing status and audiometric thresholds among college students using headphones
Purpose: The study aims to investigate the headphone listening habits of college-going students and, for those using headphones, to correlate self-reported hearing status with average audiometric hearing thresholds. Method: Headphone listening habits and awareness of their adverse effects were profiled in college-going students using a questionnaire distributed through an online platform. Hearing thresholds were then compared for those with and without self-reported hearing difficulty. 341 responses were obtained from students between 17 and 23 years of age. For the second part of the study, a convenience sample of 30 willing students from among these 341 was selected. Pure tone thresholds were obtained at various frequencies with a high frequency audiometer. PTA (average of 500, 1000, 2000 Hz) and HFPTA (average of 4000, 6000, 8000, 10000 Hz) were calculated for both ears and compared for those with and without reported hearing difficulty. Results: 78% of students reported headphone usage for less than 3 hours per day, while 22% reported usage for more than 3 hours per day. 77% of respondents were aware that listening to loud sounds can alter hearing sensitivity, but many (54.83%) were not aware of the minimum safe hours of listening. There was a weak positive correlation between self-reported hearing difficulty and poorer-ear HFPTA (r = 0.2304). Conclusion: The majority of students used insert earphones even while knowing their adverse effects. A weak correlation was found between self-reported hearing problems and audiometric hearing thresholds. Implication: More awareness is needed about the ill effects of headphone usage among the young teenage population. Proper counseling and management strategies are required for people who report difficulty in hearing.
Introduction
Moore 1 describes noise as a sound that is unwanted and at an intensity at which it can interfere with verbal communication and may cause discomfort to the ears or a reduction of hearing sensitivity, defined as hearing damage. In our daily surroundings, many sounds are present, some of which may be soothing while others may be perceived as noise leading to annoyance. Sounds at safe levels of loudness or of shorter duration are relatively safe and do not lead to hearing difficulties. However, those higher in intensity or occurring for longer durations may progressively cause hearing problems. Hearing function is affected by both long- 2 and short-duration noise. The human ear, like any other body part, can be damaged by overuse. The inner ear contains tiny hair cells that are vast in number. These tiny hair cells are susceptible to damage by loud sounds and prolonged exposure. 3 Any exposure to noise of significant intensity and duration increases the risk of ear damage and can cause permanent hearing damage, known as noise-induced hearing loss (NIHL). 4 There can be a shift in a person's hearing thresholds after noise exposure. The threshold shift secondary to NIHL occurs in the high frequency region, at around 4000 Hz and higher, producing a dip/notch. The hearing loss starts and predominates at the frequencies of 3, 4, and 6 kHz and eventually progresses to 8, 2, 1, 0.5, and 0.25 kHz. 5 Recreational activity is a leading cause of NIHL and may include activities such as listening to loud music through headphones or insert earphones, target shooting, hunting, snowmobile riding or attending loud concerts. The noises produced during band activities and motor sports and the loud noises in concert halls and nightclubs are known to have harmful effects on hearing. 6 NIHL can occur in any age group and at any period of time due to exposure to loud sounds. Children, teenagers, adults and older people can all be exposed to loud sounds, which can lead to NIHL. It is estimated that as many as 17 percent of teens (ages 12 to 19) have features of their hearing test suggestive of NIHL in one or both ears, based on data from 2005-2006. 7 The incidence of NIHL in children has been increasing day by day. For example, 40% of students between the ages of 16 and 25 years have audiological evidence of NIHL, 8 and 1% of children attending school were reported to have symptoms of NIHL in 1996. 9 Hearing sensitivity can also be lost through improper use of personal listening devices, including smartphones, and exposure to damaging levels of sound in loud entertainment surroundings such as discotheques, pubs and sport events. Studies have shown that exposure to music at high intensity and for longer periods of time is likely to induce many hearing symptoms, such as temporary threshold shift (TTS), tinnitus, hyperacusis, recruitment, distortion, or abnormal pitch perception. 6,11,12 Studies on listening to music with headphones have indicated poorer hearing thresholds for adolescents and young adults who use headphones compared to those who do not. 13 There is an increasing trend towards the use of headphones for listening to music. Most people, especially youngsters, use them for considerable amounts of time but do not realize the harmful effects that prolonged headphone usage can have on hearing.
Children and adolescents are also increasingly using portable digital audio players (DAPs), some at maximum volume settings, with little knowledge of the risks this may pose to their hearing. 14 Most school- or college-going teenagers tend to use headphones for long hours without knowing the damage they are doing to their auditory mechanism. Adolescents and young adults (17-23 years) listen for 1.5 hours per day on average (ranging from 10 minutes to 4 hours), at a sound level of 73-79 dB A on average (ranging from 40 to 93 dB A, depending on the device). 15 Personal music players produce very strong sound levels, and their maximum volume can reach 78-136 dB. 6 Individuals who listen to 15 minutes of music at 100 dB using personal music players may be exposed to the same level of loudness as industrial workers exposed to 85 dB over an 8-hour day. 16 The effect of headphone music listening on hearing depends on several factors such as type of headphones, 17 number of hours of usage, 15 volume, 18 and type of music, to list a few. Numerous studies have indicated that excess hearing exposure can be avoided using different strategies. This can be achieved by using different styles of earphones and following an '80-90 rule'. 18 Some devices show an alert notification if an individual exceeds 60% of the volume, as quoted by Portnuff. 19 It is important to study how many of these young students report hearing-related symptoms and how their self-perception of hearing-related symptoms correlates with their hearing thresholds. The present study was taken up with the following aims: a) to investigate the headphone listening habits of college-going students, and b) for those using headphones, to correlate self-reported hearing status with average audiometric hearing thresholds. The results of the study will help in creating awareness among college students about the detrimental effects of listening to loud music through headphones and in identifying hearing loss caused by such exposure in the participants tested, serving as secondary prevention. The results will further aid in understanding the correlation between self-perception of hearing loss and clinically obtained hearing thresholds in college students who use headphones for listening to music, providing valuable information for counselling this population.
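The 15-minutes-at-100-dB versus 8-hours-at-85-dB equivalence cited above follows from the equal-energy principle with a 3-dB exchange rate (the NIOSH convention): permissible exposure time halves for every 3 dB above the 85 dBA criterion. A minimal Python sketch of that arithmetic:

```python
# Equal-energy ("3-dB exchange rate") rule: permissible exposure time
# halves for every 3 dB above the 85 dBA / 8 h criterion.

def safe_exposure_hours(level_dba, criterion_dba=85.0,
                        criterion_hours=8.0, exchange_db=3.0):
    """Permissible daily exposure time (hours) at a given sound level."""
    return criterion_hours / 2 ** ((level_dba - criterion_dba) / exchange_db)

for level in (85, 94, 100):
    minutes = safe_exposure_hours(level) * 60
    print(f"{level} dBA -> {minutes:.0f} min")
# 85 dBA -> 480 min, 94 dBA -> 60 min, 100 dBA -> 15 min
```

The 100 dBA line reproduces the 15-minute equivalence quoted in the text.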
This study poses major questions such as: What are the headphone listening habits of college-going students? Is there awareness about recreational NIHL among college-going students using headphones? Are there any self-reported hearing-related symptoms in college-going students using headphones? Are pure tone thresholds different for students who report self-perceived hearing difficulty and those who do not? Is there a correlation between self-reported hearing problems and clinical audiometric findings?
In order to achieve the aims of the present study, the following objectives were formulated:
a. To profile the headphone listening habits of college-going students.
b. To study the awareness of college-going students about adverse effects of headphone listening on hearing sensitivity.
c. To obtain information about the hearing-related symptoms in college-going students using headphones.
d. To obtain hearing thresholds of college-going students who use headphones for listening and compare hearing thresholds of college-going students for those with self-report of hearing difficulty and those without self-report of hearing difficulty.
e. To obtain correlation between self-reported hearing status and average hearing thresholds.
Materials and methods
Approval was obtained from the Institutional Ethics Committee and all procedures were as per the approved protocols. The study was divided into two parts. In Part 1, a proforma-based survey to assess headphone listening habits was distributed online to students of many colleges, from which 393 responses were received. In Part 2, 30 participants from Part 1 of the study were seen for audiological assessment to obtain their hearing thresholds.
Participants for part 1 -survey using online proforma
Both male and female students attending degree colleges in Mumbai and its suburbs and comfortable with the English language were included as participants for this phase. A survey form was framed keeping in mind that the participants were college students who may not be well versed in scientific terms. The questions were framed in simple layman's terms and the number of questions was kept to a minimum so that participants would fill the proforma with full attention. Content validity of the developed proforma was established by five audiologists with more than 10 years of clinical experience. The suggestions were incorporated and changes were made as advised by the validators. The final proforma was made using Google Forms online and a link was generated and forwarded to participants. The Google form was distributed on mobile phones via e-mail, text messages or social media to participants fulfilling the inclusion criteria. 393 students responded to the survey; however, some did not fulfil the inclusion criteria and 36 returned incomplete proformas. Hence, a total of 341 participants were finally included in this part of the study. The details of these 341 respondents are shown in Table 1 below.
Participants for part 2 -pure tone audiometry
For the second part of the study, a convenience sample of 30 willing participants (23 males, 7 females) from among these 341 was included, such that they had used headphones for more than 3 hours per day and had a minimum of 2 years of headphone use. Participants may or may not have reported hearing difficulty in the survey. Students reporting the presence of middle ear pathology, any neurological impairment, a history of ototoxicity, or sudden unilateral or bilateral hearing loss were excluded.
Procedure
For the first part of the study, the developed Google link was distributed through e-mails, WhatsApp and other social media to students from many degree colleges in Mumbai and its suburbs. The link contained instructions to answer all questions to complete the proforma. Participants were allowed not to disclose their identity (name) if they wished. While the data for the first part of the study was being collected online, the second part of the study was initiated simultaneously. Students who were willing to visit the clinic at their convenience were asked to report for audiological evaluation. After obtaining their written informed consent, a detailed history was taken on a case record form and pure tone audiometry was conducted in a standard sound-treated room as per ANSI guidelines. Pure tone thresholds were obtained using a calibrated high frequency audiometer (Resonance r37a Clinical) with calibrated Sennheiser HDA 280 headphones for the frequencies 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 6 kHz, 8 kHz, 10 kHz and 12.5 kHz. Prior to audiometry, otoscopy was performed using a Welch-Allyn otoscope to rule out any symptoms associated with the outer or middle ear. The obtained thresholds were recorded on the case record sheet and pure tone averages were obtained as follows: PTA (average of 500, 1000 and 2000 Hz) and HFPTA (average of 4000, 6000, 8000 and 10000 Hz) for each ear. For the purpose of analysing this data to fulfil the objectives, the 30 participants in this part of the study were divided into two groups - those who reported perceiving hearing difficulties (N=12) and those who did not (N=18). Self-perceived hearing difficulty was determined on the basis of two questions from the survey proforma:
1. Do you have difficulty in hearing?
2. Do you have difficulty in understanding the speech of others?
If the answer to any one of these questions was "yes", the respondent was considered to self-perceive a hearing problem (refer), and if both questions were answered "no", the respondent was considered to have no self-reported hearing difficulty (pass).
The responses obtained in each of the five categories of the questionnaire were converted into MS Excel sheets and the percentage of respondents providing the various responses to each question was calculated. For the second part of the study, pure tone averages (PTA right and PTA left) and high frequency pure tone averages (HFPTA right and HFPTA left) were calculated for both groups, i.e., those who did not perceive hearing-related problems and those who did. The Kolmogorov-Smirnov test of normality was performed to check whether the data for PTA right, PTA left, HFPTA right and HFPTA left were normally distributed; the results revealed that the data for PTA right and PTA left were normally distributed while the data for HFPTA right and HFPTA left were not. Based on these results, comparison of the two groups for PTA right and PTA left was done using the unpaired t-test, while the Mann-Whitney U test was used to compare the two groups for HFPTA right and HFPTA left. To obtain the correlation between self-reported hearing status and average high frequency hearing thresholds, the data were analysed in the following manner. The four questions pertaining to self-perception of hearing difficulty (i.e., do you have difficulty in hearing; do you have difficulty in understanding the speech of others; do you often ask for repetition; do you hear ringing or hissing sounds inside the ear in quiet?) were answered as YES or NO. Scoring was done such that "yes" was scored as 1 point and "no" as 0 points. A total score of 4 could thus be obtained, wherein a higher score is indicative of a greater perceived problem. This score was correlated with the HFPTA of the poorer of the two ears using the Pearson correlation coefficient 'r'.
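The averaging, scoring and correlation steps described above are straightforward to express programmatically. The following Python sketch mirrors them; the threshold values shown are hypothetical, not participant data.

```python
from math import sqrt
from statistics import mean

def pta(t):      # conventional pure tone average
    return mean(t[f] for f in (500, 1000, 2000))

def hfpta(t):    # high-frequency pure tone average used in the study
    return mean(t[f] for f in (4000, 6000, 8000, 10000))

def difficulty_score(answers):   # four yes/no items, "yes" = 1 point
    return sum(a == "yes" for a in answers)

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

# Hypothetical participant: thresholds in dB HL at each frequency.
right_ear = {500: 15, 1000: 10, 2000: 15, 4000: 20, 6000: 25, 8000: 20, 10000: 25}
print(pta(right_ear), hfpta(right_ear))               # -> 13.33... and 22.5
print(difficulty_score(["yes", "no", "yes", "no"]))   # -> 2
```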
Results
The results of the study are presented below with reference to the objectives listed in the introduction.

A. Profile of headphone listening habits of college-going students

ii. Volume of the device: The volume levels reportedly used by the participants are depicted in Figure 1. As seen from Figure 1, participants reported varied volume levels at which they usually listen to music. Volume levels 8, 7 and 6 were the most widely used, while very low volume levels such as level 2 or 3 were reported by very few participants. When participants were asked whether loud sounds are intolerable to them, 63 found them intolerable and 179 reported this to happen sometimes. 41 participants found male voices to be better perceived than female voices, and most of them (i.e., 261) had not noticed such a difference in the perception of male and female voices.
iii. Number of hours of usage: 78% of the respondents used headphones for less than 3 hours per day, while 22% used them for more than 3 hours per day. 104 participants (30%) reported having used headphones or PLDs for more than 5 years and 78 (23%) reported having used them for less than 2 years.
iv. Activities during which headphones are used: Respondents were asked to list the activities in their daily routine that involve the most headphone usage. The most common situations in which respondents used headphones were in public places, while commuting and while working out. The type of activity also had an effect on the volume level at which respondents listened, because some activities involved higher background noise. The percentage of respondents using headphones during various activities is depicted in Figure 2.
v. Exposure to other noisy activities: When participants were asked about their involvement in any other recreational activity involving high-intensity sounds, only 18% reported being involved in such activities (regular clubbing, discos, use of loud firecrackers, noisy neighbourhood), while the others were exposed only to sound from their personal listening devices or laptops.
B. Awareness of college-going students about adverse effects of headphone listening on hearing sensitivity
Participants also responded to questions addressing their awareness about the hazardous effects of exposure to loud music and excessive headphone usage and their awareness about how the hazardous effects can be prevented. 261 of the 341 respondents (77%) were aware that listening to loud sounds can alter hearing sensitivity but many (54.83%) did not have awareness about the minimum safe hours of listening and most (64.80%) were not aware about ways to prevent effects related to exposure to loud sounds.
C. Hearing related symptoms in college-going students using headphones
Information about any past or present hearing-related symptoms was also obtained from the respondents. 70% of the respondents had no history of any middle ear infection. 30% had some middle-ear-related issue, but only 6 out of 341 respondents had a history of ear discharge. Only 14% of respondents reported having neurological issues, while 86% had no history of any such issues. Among participants who reported a history of neurological problems, migraine was the most commonly occurring. With reference to systemic illnesses and other infections, 33 of the 341 participants reported having had malaria/dengue in the past. 255 participants had no history of any major illness, and 8 had problems related to blood pressure. Seven participants also reported hearing-related problems or hearing loss in their family. Figure 3 shows the number of respondents with ear-related and neurological problems.
Participants were also asked about hearing-related symptoms that they may be experiencing, the data for which are shown in Figure 4. 203 of them reported having no such difficulty, and the remaining subjects reported some problem or the other in hearing. When participants were asked if they ever needed to ask for repetition by asking 'what', most of them (i.e., 274) said they need to ask for repetition sometimes, 47 reported they never did, and a few (i.e., 20) reported that they always do. 108 participants reported that people around them have stated that they can hear their music outside of the headphones. 44 respondents also reported problems in perceiving the speech of others. Tinnitus was reported by 80 out of 341 participants.
D. Hearing thresholds of college-going students who use headphones for listening
Out of the 341 respondents, 30 were taken up for hearing testing using pure tone audiometry. Two pure tone averages were obtained for each ear of the 30 participants: the average of 500 Hz, 1000 Hz and 2000 Hz (PTA) and the average of 4000 Hz, 6000 Hz, 8000 Hz and 10000 Hz (HF PTA). The 30 participants were divided into 2 groups: 12 who reported perceiving hearing difficulties and 18 who did not report any hearing difficulties. The average PTA and HF PTA values with SD and percentiles for the two groups are shown in Table 2. As seen from Table 2, the mean PTA right for those with no self-perceived hearing difficulty (16.91 dBHL) is higher than that for participants with self-reported hearing difficulty (15.93 dBHL). The mean PTA left for participants with no self-reported difficulty in hearing is also higher (17.28 dBHL) than that for participants with self-reported hearing difficulty (14.42 dBHL). The group with self-reported hearing difficulty had a greater SD for both PTA right and PTA left than the group with no self-reported hearing difficulty. The unpaired t-test (PTA right: t=0.5187, p>0.05; PTA left: t=1.3267, p>0.05) indicates that there is no significant difference between the average PTA right and PTA left of participants with and without self-reported hearing difficulty.
As seen from Table 2, the mean HF PTA right for participants with self-reported difficulty in hearing was higher (18.837 dBHL) than that for participants with no self-reported hearing difficulty (17.7083 dBHL). The group with self-reported hearing difficulty had a greater SD value than the group with no self-reported hearing difficulty. The Mann-Whitney U test (Z=0.52917; p=0.59612) indicates that there is no significant difference between the average HF PTA right of participants with and without self-reported hearing difficulty. The mean HF PTA left for those with no self-perceived hearing difficulty is higher (18.65 dBHL) than that for participants with self-reported hearing difficulty (10.7291 dBHL). Here the group with no self-reported hearing difficulty had the greater SD value. The Mann-Whitney U test (Z=2.2225; p=0.02642) indicates that there is a significant difference between the average HF PTA left of participants with and without self-reported hearing difficulty.
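For reference, the two group comparisons described above map directly onto standard SciPy calls. The sketch below uses simulated values with roughly the reported group means and sizes; it is illustrative only and does not reproduce the study data.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(42)
# Simulated PTA-left values (dB HL) with roughly the reported means/sizes:
no_difficulty = rng.normal(17.28, 5.0, 18)
difficulty = rng.normal(14.42, 7.0, 12)

t_stat, p_t = ttest_ind(no_difficulty, difficulty)      # normal data: t-test
u_stat, p_u = mannwhitneyu(no_difficulty, difficulty)   # non-normal: U test
print(f"t = {t_stat:.3f} (p = {p_t:.3f}); U = {u_stat:.1f} (p = {p_u:.3f})")
```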
E. Correlation between self-reported hearing status and average hearing thresholds
For the 30 participants in part 2 of the study, the self-reported hearing difficulty score was obtained as described above based on the responses provided on the proforma. The mean score was 1.56 (SD=0.989, Range 0-4). The HFPTA of the poorer ear for each of the 30 participants was considered to obtain the mean HF PTA (19.95 dBHL, SD=14.49, Range 7.5-77.5). The correlation coefficient between the self-reported hearing difficulty score and poor ear HF PTA was found to be r = 0.2304, indicating a weak positive correlation between the two measures.
Discussion
In this advancing world, people have come to rely on technology throughout the day, especially teenagers, who are rarely seen without their phones or laptops and are almost always plugged in with headphones/earphones for recreational purposes. The current study aimed at exploring the headphone listening habits of college-going students, studying their awareness about the ill effects of excessive headphone usage, and correlating self-reports of hearing difficulty with audiometric thresholds. The results obtained from the collected data are discussed here for the various objectives of the study.
Profile of headphone listening habits of college-going students
The results indicate that most of the respondents used cell phones as their personal music players, as also reported by Gupta et al., 20 and 80% of the respondents used insert earphones, which was also seen in other studies such as that by McNeill, 21 whose study reported that most participants used insert earphones as their transducers, although the iPod was the most commonly used personal music player in their sample. Use of dedicated personal music players was minimal in the present study, probably due to availability, cost and trends among peers. Results of the present study also indicate that college students usually tend to listen to music at high volume levels, i.e., 70-80% of the maximum volume. This was also a common finding in other studies. 20,22 Herrera et al. 23 reported that 37.40% of their participants used high volume when listening to music, 34.35% medium volume, 16.03% extremely high volume and only about 3.05% low volume. The findings of these past studies as well as the present study imply that most of the young population has a habit of listening at high volume levels. Some studies 22 also state that men usually tend to keep the volume higher; however, a comparison between male and female participants was not done in the present study. Respondents were also asked if they perceive male voices better than female voices, in order to get an indication of any high frequency hearing loss that they might have developed due to noise exposure. Most of them had not noticed such a difference, but 41 respondents noticed that they perceive male voices better than female voices. In the present sample, 78% of respondents used headphones for less than 3 hours per day, which is in agreement with the study by Gupta et al., 20 wherein 77.4% of participants listened to music for an hour or less. Current findings also uphold reports by Berg et al. 24 and Herrera et al. 23 Similar results are reported by Torre, 22 who studied a very large sample (i.e., 1016) in which more than 50% reported using personal music players for 1-3 hours per day, and men usually used personal music players for a longer duration than females; a male-female comparison was not made in the present study. The present results about duration of usage contradict those of Shah, 25 who found most of their sample to report more than 4 hours of usage per day and also reported that females listened to music at high levels for longer durations than men. 30% of the respondents in the present study reported having used headphones for more than 5 years and only about 23% for less than 2 years; the others had exposure of between 2 and 5 years. Considering that these participants are teenagers, already using headphones at loud volumes for long durations over many years, and likely to continue similar behaviours for several more years, their risk of NIHL increases significantly. This necessitates the provision of appropriate counselling regarding the ill effects of exposure to loud music for prolonged durations.
Headphone usage does not occur only at home in a quiet environment; in a teenager's life, with a lot of commuting and other activities, listening to music goes hand in hand. The study sample reported using headphones most commonly in public places, while commuting and while working out. Similar results were obtained in various studies involving questions about headphone and personal music player habits. 21,22,26,27 Such usage puts them at risk of listening to music at higher volumes and louder levels, as reported by Hodgetts et al., 28 wherein preferred listening levels (PLL) were higher in street noise than in multi-talker babble, and both were higher than the PLL in quiet. Noise exposure in a teenager's daily living is not only through headphones. People tend to explore and indulge in many other recreational activities involving loud sounds or noises, which could also increase the total daily noise exposure of the individual. In the present sample, only 18% of respondents reported indulging in such activities regularly. 10% reported spending a lot of time in noisy environments, e.g., railway stations, traffic, main roads, music events, etc. 4% were involved in regular clubbing or going to discotheques. 1% reported involvement in loud gunfire or firecracker activities. Exposure to these activities might have an impact on hearing sensitivity/thresholds. Keppler et al. 29 reported temporary tinnitus in 86% of respondents and also compared hearing thresholds of groups exposed to low, intermediate and high recreational noise. The results showed no significant difference in thresholds between the groups, though long-term exposure may lead to decreased hearing sensitivity in the future, so long-term assessment may be required.
Awareness of college-going students about adverse effects of headphone listening on hearing sensitivity
It is of utmost importance to be aware of the safe levels and hours of usage of personal music players, so that one knows the daily limit within which they can be used without harm to the hearing mechanism. 261 of the 341 respondents (77%) were aware that listening to loud sounds can alter hearing sensitivity, but many (54.83%) were not aware of the minimum safe hours of listening and most (64.80%) were not aware of ways to prevent effects related to exposure to loud sounds. The study by Shah et al. 25 also included a question regarding awareness of NIHL and concerns regarding its effects. Most of their participants were at least slightly concerned about hearing loss with aging (15% were "not concerned"). The majority of participants were also concerned about hearing loss related to the use of their devices (24% were "not concerned"). Another study, by Berg et al., 24 compared knowledge about safety levels amongst males and females: a greater number of female students than male students chose the right answer regarding safe usage of PLDs and safe listening per day. Herrera et al. 23 also studied awareness and concerns regarding hearing health and found that 13.74% were concerned about hearing loss via the use of personal music players, 36.64% were more or less concerned, 12.96% were very concerned, 23.66% were not very concerned and 11.45% were not concerned at all. The same results were seen in other studies. 20

Information about the hearing-related symptoms in college-going students using headphones

As this study focuses on the effects of headphones on hearing, it was important to exclude from the sample those subjects who might already have a hearing loss or may develop hearing-related problems due to causes like middle ear infection, genetic hearing loss, neurological problems, etc. This was important for Part 2 of the study. 70% of the respondents had no history of any middle ear infection; 30% had some middle-ear-related issue, but only 6 out of 341 respondents had a history of ear discharge. Similar findings were reported by Torre 22 and Kumar. 15 Båsjö et al. 30 also reported that 81.1% (331/408) of their sample had normal middle-ear function in both ears; the prevalence of bilateral middle-ear abnormalities was 7.8% (32/408), and 11.0% (45/408) of the children had unilateral middle-ear abnormalities. That study also showed a relationship between middle ear pathology and PTA: participants with a history of middle ear pathology showed poorer thresholds. To better understand whether the respondents self-perceived any hearing-related symptoms, direct and some indirect questions were framed to gain an understanding of the perceived hearing symptoms of the respondents. When respondents were asked if they perceive themselves to have hearing difficulty, 203 reported no such difficulty and the remaining subjects reported some problem or the other in hearing properly. Similar results were reported by Widen et al., 26 where out of 50 adolescents, 30 (60%) reported having no hearing problems and 8 (16%) reported poor hearing. The indirect question, i.e., whether they ever needed to ask for repetition by asking 'what', revealed that most of them (i.e., 274) need to ask for repetition sometimes, 47 reported they never did, and a few (i.e., 20) reported that they always do. Respondents were also asked if people around them state that they can hear the loud music outside their earphones; 108 respondents agreed that this had been pointed out to them.
In this way we can make the respondents realise that, even unconsciously, they are listening to loud music throughout the day without feeling that there might be a problem with their hearing.
Tinnitus is one of the most common symptoms seen in persons who may have NIHL or long-duration music exposure. 6,13 Tinnitus was reported by 80 out of 341 subjects in our sample, i.e., 23.4%. Various studies have reported tinnitus in their samples; adolescents listening for ≥3 hours are more prone to suffering from tinnitus. 20,26,27-30 The prevalence of hearing problems, i.e., tinnitus and perceived hearing impairment, was 6.1% and 5.8% respectively. Perceived hearing impairment and tinnitus were seen more in males than in females, while women perceived themselves to be more sensitive to noise than men. 31
Hearing thresholds of college-going students who use headphones for listening
Out of the 341 respondents, 30 were taken up for hearing testing using pure tone audiometry. Two pure tone averages were obtained for each ear of the 30 participants: the average of 500 Hz, 1000 Hz and 2000 Hz (PTA) and the average of 4000 Hz, 6000 Hz, 8000 Hz and 10000 Hz (HF PTA). Extended high-frequency audiometry has been found to be a sensitive method for early detection of noise-induced hearing loss, as depicted in a study by Peng et al. 32 The 30 participants in the present study were divided into 2 groups: those who reported perceiving hearing difficulties and those who did not. It is seen from the results that for the group of participants who reported hearing difficulty, the audiometric testing indicated a "pass" result for PTA left and HF PTA left, while it indicated a "refer" result for PTA right and HF PTA right. Similarly, for the group of participants who reported no hearing difficulty, the audiometric results (PTA right, PTA left, HF PTA right, and HF PTA left) indicated a "refer" result. This difference was significant for the HF PTA of the left ear, where the mean HF PTA for participants with no reported hearing difficulty was 18.65 dBHL while that for the group with perceived hearing difficulty was 10.72 dBHL. This difference can be primarily attributed to one participant whose HF PTA in the left ear was 77.5 dBHL. The results show that participants who perceive themselves to have hearing problems have variable results on pure tone audiometry, and those who do not perceive any hearing-related problems may comparatively have poorer thresholds. This was in agreement with the study by Jiang et al., 33 where worse hearing thresholds were found in personal listening device users using audiometry, along with significantly poorer otoacoustic emission (OAE) results, even in participants with self-reported 'normal hearing'. These results show how important it is to have a regular hearing check-up even when one does not perceive any hearing difficulty; otherwise it might go unnoticed and progress gradually. Other studies involving hearing threshold measurement using headphones have reported similar results. 25,26,30,32-35

Correlation between self-reported hearing status and average hearing thresholds

One objective of the study was to correlate the self-reported hearing problems with the high frequency pure tone average of the same individuals. The Pearson correlation coefficient between the self-reported hearing problems and audiometric hearing thresholds was found to be r=0.2304, which indicates a weak positive correlation between the two. This was in agreement with Jiang et al., 33 who also showed poor hearing thresholds and poor OAEs in a headphone-using sample, even in those who self-reported normal hearing. Widen et al. 26 showed opposite results, in which more noise exposure and longer headphone usage resulted in poorer hearing thresholds and more self-perceived hearing-related problems. Similarly, Båsjö et al. 30 showed that children who reported tinnitus had significantly poorer hearing thresholds in both ears. A positive correlation was found between hearing threshold, years of usage and volume setting in the study by Kumar et al., 35 which states that subjects with >5 years of PLD use showed a greater difference in their hearing thresholds on extended high-frequency audiometry.
The hearing thresholds were significantly elevated in those listening to PLDs at high volume settings as compared to the normal group and low-volume users.
Conclusions
It can be concluded that the majority of the teenage population listens to music at significantly loud volumes through insert earphones and is aware of the ill effects on the hearing mechanism, but is still not well versed in the ways these could be prevented. Also, no significant difference was found between the pure tone averages of the two groups (those who perceive hearing problems and those who do not); the only difference found was in the HFPTA of the left ear. There was a weak correlation between the self-reported hearing problems and audiometric hearing thresholds.
The study has a few limitations. The sample size considered for audiometry was small. Gender balance of the participants was not controlled. Output levels of different music players were not measured and compared. Within the audiological test battery, the otoacoustic emission test is more sensitive to the ill effects of noise; however, the present study did not incorporate otoacoustic emission testing. The study also did not educate the sample about prevention methods to lessen the ill effects of excessive exposure to loud sounds.
| 2021-08-27T16:56:24.565Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a0dc03a7cd02b65a3da96f9d5f39d24bd19f18ca",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/JOENTR/JOENTR-13-00492.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "dd80ad2fb8c1852dd2b84ba347aac895be799705",
"s2fieldsofstudy": [
"Physics",
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14021474 | pes2o/s2orc | v3-fos-license | Persistent Coxiella burnetii Infection in Mice Overexpressing IL-10: An Efficient Model for Chronic Q Fever Pathogenesis
Interleukin (IL)-10 increases host susceptibility to microorganisms and is involved in the intracellular persistence of bacterial pathogens. IL-10 is associated with chronic Q fever, an infectious disease due to the intracellular bacterium Coxiella burnetii. Nevertheless, accurate animal models of chronic C. burnetii infection are lacking. Transgenic mice constitutively expressing IL-10 in macrophages were infected with C. burnetii by intraperitoneal and intratracheal routes and infection was analyzed through real-time PCR and antibody production. Transgenic mice exhibited sustained tissue infection and a strong antibody response, in contrast to wild-type mice; thus, bacterial persistence was IL-10-dependent, as in chronic Q fever. The number of granulomas was low in the spleen and liver of transgenic mice infected through the intraperitoneal route, as in patients with chronic Q fever. Macrophages from transgenic mice were unable to kill C. burnetii. C. burnetii-stimulated macrophages were characterized by a non-microbicidal transcriptional program consisting of increased expression of arginase-1, mannose receptor, and Ym1/2, in contrast to wild-type macrophages, in which expression of inducible NO synthase and inflammatory cytokines was increased. In vivo results supported the macrophage data. In the spleen and liver of transgenic mice infected with C. burnetii by the intraperitoneal route, the expression of arginase-1 was increased while the microbicidal pathway consisting of IL-12p40, IL-23p19, and inducible NO synthase was depressed. The overexpression of IL-10 in macrophages prevents the anti-infectious competence of the host, including the ability to mount a granulomatous response and a microbicidal pathway in tissues. To our knowledge, this is the first efficient model for chronic Q fever pathogenesis.
Introduction
The interaction between the innate/adaptive immune system and invading bacteria is sufficient to eradicate microorganisms in the majority of bacterial infections. This microbicidal response is based on inflammatory cytokines, such as interferon (IFN)-γ and tumor necrosis factor (TNF), which control the expression of cytokines and chemokines and the production of toxic metabolites [1]. The suppression of the microbicidal response due to genetic disorders leads to reactivation or chronic evolution of infections and to bacterial persistence [2]. In addition, immunosuppressive treatments and anti-inflammatory cytokines such as interleukin (IL)-10 or transforming growth factor (TGF)-β may also disarm microbicidal responses and contribute to the chronic evolution of bacterial infectious diseases [1,3].
IL-10 is known to increase host susceptibility to numerous intracellular microorganisms and is involved in the persistence of bacteria such as Bartonella quintana and Mycobacterium tuberculosis [3,4]. Coxiella burnetii is an obligate intracellular bacterium that replicates in macrophages (Mφ) and is responsible for Q fever. The disease is characterized by a symptomatic primary infection in a minority of individuals, which may become chronic, as culture-negative endocarditis, in patients with valvular damage and immunocompromised patients [5]. The diagnosis of chronic Q fever is based on the presence of high titers of anti-C. burnetii antibodies, and bacteriological methods are of interest for studying cardiac valve specimens [6]. In chronic Q fever, IL-10 is overproduced [7], and in patients with acute Q fever and valvulopathy, the risk of developing Q fever endocarditis is related to IL-10 overproduction [8]. IL-10 interferes with Mφ activation through the inhibition of transcription of inflammatory genes [9] and enables Mφ to support C. burnetii replication [10]. IL-10 also blocks the maturation of C. burnetii-containing phagosomes in monocytes from patients with Q fever endocarditis [11].
While clinical and in vitro studies have suggested a role for IL-10 in the evolution of Q fever, an efficient mouse model for chronic Q fever pathogenesis, which could serve as a platform for anti-C. burnetii drug or immunotherapy development, is lacking. In transgenic mice that overproduce IL-10 in the T-cell compartment, BCG clearance is impaired [12], but this model is inappropriate for studies of Q fever pathogenesis because multiple phenotypes complicate the analysis of the Mφ-bacterium interaction. Similarly, infection of IL-10-deficient mice is uninformative for studies of chronic infections because C. burnetii-infected humans do not lack IL-10. A more robust model is described here that applies transgenic mice with constitutive overexpression of IL-10 in the Mφ lineage (macIL-10tg mice) [13]. We report an efficient mouse model for chronic Q fever pathogenesis, which associates high levels of specific antibodies, sustained tissue infection, and reduced granuloma formation, as in human Q fever. We also found an anti-inflammatory transcriptional program associating increased expression of arginase-1, decreased expression of IL-12p40 and IL-23p19, and altered expression of chemokines in infected tissues.
Results
Persistent C. burnetii Infection in macIL-10tg Mice

When wild type (wt) and transgenic mice were injected with 5 × 10⁵ organisms by the intraperitoneal route, mortality or morbidity was not observed up to 60 d. The infection was assessed by qPCR in tissues and by measurement of circulating specific antibodies (Abs) by immunofluorescence. Tissue infection was maximal at days 7 and 14 post-infection in wt and transgenic mice (Figure 1). At day 28 post-infection, only residual bacterial levels were observed in the organs of wt mice, whereas the infection of spleen, liver, and lungs was persistent in macIL-10tg mice, particularly in the lungs (p < 0.05). At day 42 post-infection, C. burnetii was completely cleared from the spleen, liver, and lungs of wt mice, but bacterial DNA was still present in the spleen, liver, and lungs of transgenic mice; the difference was significant (p < 0.05). After 60 d, no bacterial DNA copies were detected in spleen, liver, and lungs from wt or transgenic mice (unpublished data). The infection of mice was also studied through the specific humoral response. In wt mice, the titer of IgG specific for phase I C. burnetii (Figure 1G) and phase II C. burnetii (Figure 1H) increased transiently. In macIL-10tg mice, the titer of IgG specific for phase I and phase II C. burnetii (Figures 1G and 1H, respectively) was high compared with that found in wt mice (p < 0.05), and it persisted as a plateau at least up to day 42 (Figure 1). Clearly, IL-10 overexpression was associated with a sustained presence of C. burnetii in tissues and high levels of specific Abs, which is reminiscent of chronic Q fever.
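Tissue infection was quantified above as bacterial DNA copies by qPCR. Copy numbers in such assays are typically back-calculated from Ct values via a linear standard curve; the following Python sketch shows that calculation with an illustrative intercept (the paper's actual standard-curve parameters are not given here).

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Copies per reaction from a qPCR Ct via Ct = slope*log10(copies) + intercept.
    A slope of -3.32 corresponds to 100% amplification efficiency; the
    intercept used here is purely illustrative."""
    return 10 ** ((ct - intercept) / slope)

print(f"{copies_from_ct(24.7):,.0f} copies")   # -> roughly 1e4 copies
```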
Decreased Granuloma Formation and Splenomegaly in macIL-10tg Mice
Granuloma formation is indicative of a protective immune response to C. burnetii and is defective in chronic Q fever [5]. While granulomas were easily identified in the liver, they merged with the surrounding lymphoid tissue in the spleen. In wt mice, granulomas detected in liver lobules and portobiliary spaces (Figure 2A) and the splenic red pulp (Figure S1) were mainly composed of Mφ with few lymphocytes and polymorphonuclear leukocytes.
Author Summary
The interaction between immune system and invading bacteria is sufficient to eradicate microorganisms in the majority of bacterial infections, but the suppression of the microbicidal response leads to reactivation or chronic evolution of infections and to bacterial persistence. Coxiella burnetii, an obligate intracellular bacterium, is responsible for Q fever. This infectious disease is characterized by a primary infection that may become chronic as endocarditis in patients with valvular damage and immunocompromised patients. Clinical and in vitro studies have suggested a role for interleukin-10 in the chronic evolution of Q fever. However, an efficient mouse model for chronic Q fever pathogenesis, which could serve as a platform for anti-C. burnetii drug or immunotherapy development, is lacking. Here we use transgenic mice with constitutive overexpression of interleukin-10 in the macrophage lineage to study C. burnetii infection. We report an efficient mouse model for chronic Q fever pathogenesis, which associates high levels of specific antibodies, sustained tissue infection, and reduced granuloma formation, as in human Q fever. We also find an anti-inflammatory transcriptional program and altered expression of chemokines in infected tissues. morphonuclear leukocytes. In transgenic mice, the composition of liver ( Figure 2B) and splenic ( Figure S1) granulomas was maintained, but their size was increased (compare Figure 2A and 2B). In addition, while liver ( Figure 2C) and splenic ( Figure S1) granulomas from wt mice were paucibacillary, liver ( Figure 2D) and splenic ( Figure S1) granulomas were multibacillary in macIL-10tg mice. The number of liver and splenic granulomas was quantified. In the liver ( Figure 3A) and the spleen ( Figure 3C) of wt mice, granulomas were detected at day 7 post-infection and their number increased at day 14 post-infection. Few granulomas were detected at day 28 post-infection. In macIL-10tg mice, granulomas were detected in liver ( Figure 3B) and spleen ( Figure 3D) at day 7 post-infection. However, they were no longer found at day 14 post-infection, in sharp contrast with wt mice. Histologic damages were not observed in the lungs of wt and macIL-10tg mice although bacterial DNA copies were found. Splenomegaly associated with C. burnetii infection was also different in wt and transgenic mice. In wt mice, the spleen weight was about 600 mg at day 7 post-infection versus 120 mg before infection. In macIL-10tg mice, the spleen weight moderately increased after 7 d (300 mg): the differences between wt and transgenic mice were significant (p , 0.05). These results showed that IL-10 inhibited granuloma formation and prevented splenomegaly in C. burnetii-infected mice. Again, the lack of granulomas in transgenic mice is reminiscent of chronic Q fever.
Altered Activity of Mφ from macIL-10tg Mice
We next tested whether the constitutive production of IL-10 by myeloid cells affects the microbicidal activity of Mφ toward C. burnetii and may account for bacterial persistence in transgenic mice. When bone marrow-derived Mφ (BMDMφ) were incubated with C. burnetii organisms at a bacterium-to-cell ratio of 100:1, the initial uptake of C. burnetii was significantly (p < 0.006) impaired in transgenic Mφ. After establishment of the infection, the number of bacterial DNA copies decreased in wt Mφ, whereas it significantly (p < 0.005) increased in transgenic Mφ at day 3 post-infection. It remained significantly (p < 0.003) higher in transgenic Mφ than in wt Mφ at day 6 post-infection (Figure 4A). This difference may be related to impaired bacterial uptake by transgenic Mφ. C. burnetii uptake (about 2 × 10⁴ DNA copies) was rendered similar in wt and transgenic Mφ by using different infective doses of organisms (25:1 and 200:1 bacterium-to-cell ratios for wt and transgenic Mφ, respectively). While wt Mφ cleared C. burnetii organisms, transgenic Mφ allowed moderate and transient replication of C. burnetii (unpublished data). The inability of transgenic Mφ to clear C. burnetii was not restricted to bone marrow-derived Mφ, since wt peritoneal Mφ cleared C. burnetii whereas those of macIL-10tg mice remained infected up to 9 d post-infection (Figure 4B). It is noteworthy that the defective uptake of C. burnetii by transgenic BMDMφ was corrected in peritoneal transgenic Mφ. Thus, IL-10 impaired the microbicidal activity of Mφ, in keeping with previous data using exogenous IL-10 stimulation of C. burnetii-infected Mφ [9]. We wondered whether IFN-γ pre-treatment of transgenic BMDMφ restored their microbicidal competence. IFN-γ had no effect on bacterial uptake by Mφ from wt mice (unpublished data) or macIL-10tg mice (Figure 4A). IFN-γ did not change the microbicidal competence of wt Mφ (unpublished data). It specifically prevented C. burnetii replication in transgenic Mφ (p < 0.001 and p < 0.004 at days 3 and 6, respectively), but was unable to induce bacterial killing (Figure 4A). Finally, we studied the transcriptional profile of BMDMφ induced by C. burnetii. In the absence of infection, the overexpression of IL-10 in transgenic Mφ did not affect the expression of transcripts encoding molecules involved in the microbicidal activity of macrophages, such as inducible NO synthase (iNOS), or molecules such as arginase-1 and mannose receptor (MR) (unpublished data). In wt Mφ, C. burnetii stimulated the expression of transcripts for iNOS, TNF, IL-12p40, IL-23p19, and CXCL-10 (Figure 4C) but did not stimulate arginase-1, MR, Ym1/2, or TGF-β (Figure 4D). This transcriptional pattern of Mφ is consistent with a microbicidal profile. By comparison, in transgenic Mφ, C. burnetii did not affect the expression of transcripts for iNOS, TNF, IL-12p40, IL-23p19, and CXCL-10 (Figure 4C) but stimulated the expression of transcripts for arginase-1, MR, Ym1/2, and TGF-β (Figure 4D). Hence, the constitutive overexpression of IL-10 in C. burnetii-stimulated Mφ is associated with a non-microbicidal transcriptional profile.

Figure 2 (legend fragment). Note that granulomas were mainly composed of macrophages and were greater in macIL-10tg mice than in wt mice. (C and D) Liver sections from wt (C) and macIL-10tg (D) mice were deparaffinized and rehydrated, and C. burnetii organisms were revealed by immunohistostaining. Macrophages in inflammatory granulomas present in liver lobules were packed with granular immunopositive material (indicated using arrowheads). Immunopositive material was more abundant in macIL-10tg than in wt mice. doi:10.1371/journal.ppat.0040023.g002
Tissue Expression of Chemokines and Microbicidal Markers
Granuloma formation in spleen and liver, as a marker of efficient cell-mediated immunity, is associated with the recruitment of immunocompetent cells, which may be impaired in macIL-10tg mice. We investigated the distribution of leukocyte populations in spleen and liver by flow cytometry (Figure 5A–5D). In uninfected mice, only the percentage of NK cells was significantly (p < 0.05) increased in the spleen of macIL-10tg mice as compared to wt mice (16.8% versus 6.6%). At day 7 post-infection, fewer CD8⁺ T cells were recruited in the spleen of transgenic mice (2.9% versus 6.6% in wt mice; p < 0.002). The percentage of recruited DC was decreased in spleen (p < 0.01) and liver (p < 0.05) of macIL-10tg mice as compared to wt mice. We suggest that changes in the tissue distribution of immunocompetent leukocytes are not sufficient to account for defective granuloma formation. As chemokines are required for leukocyte recruitment into granulomas and the development of protective immunity [14], their expression in spleen and liver was assessed. In transgenic mice infected for 14 d, the splenic expression of transcripts for CXCL-1, CXCL-2, and CXCL-16 was decreased, whereas that of CXCL-9, CCL-2, and CCL-5 was unaffected, as compared to infected wt mice (Figure 5E). In the liver, the expression of mRNA for CXCL-1, CXCL-2, CXCL-16, and CCL-2 was markedly reduced, and that of CXCL-9 and CCL-5 was not affected (Figure 5F). The transcriptional pattern of spleen and liver chemokines found after 7 d of infection (unpublished data) was similar to that observed after 14 d. These results showed that the expression of chemokines was selectively altered in tissues. We finally wondered whether the overexpression of IL-10 in mice resulted in a transition from a microbicidal to a non-microbicidal pattern in spleen and liver. The transcriptional pattern of molecules involved in the microbicidal machinery and M1/M2 polarization was similar in uninfected wt and transgenic mice (unpublished data). In 7-d-infected transgenic mice, the expression of arginase-1 mRNA was markedly increased in spleen (Figure 5G) and liver (Figure 5H), as compared to wt mice. The expression of MR was also markedly increased in spleen. Ym1/2 and TGF-β mRNA were also increased, but to a lesser extent. In contrast, in spleen and liver from transgenic mice, the expression of iNOS and TNF mRNA was not affected by infection, and that of IL-12p40 and IL-23p19 was down-modulated. The transcriptional profile of tissues from mice infected for 14 d was similar to that found at day 7 post-infection (unpublished data). These results suggest that the overexpression of IL-10 is related to a non-microbicidal tissue pattern.
Role of the Route of C. burnetii Infection in Bacterial Persistence
When wt and macIL-10tg mice were injected by the intratracheal route with 5 × 10⁵ organisms, mortality or morbidity was not observed up to 28 d. The lung infection was assessed by qPCR (Figure 6A). Sentinel mice were killed after 1 d of infection to determine the bacterial burden in the lungs; there was no difference between wt and transgenic mice. In wt mice, the number of bacterial DNA copies slightly increased at day 7 post-infection, decreased at day 14, and no copies were detected at day 28 post-infection. In contrast, the number of bacterial DNA copies dramatically increased in transgenic mice at day 7 post-infection, demonstrating that C. burnetii organisms replicated within the lungs. At day 14 post-infection, lung infection in transgenic mice decreased but remained significantly (p < 0.05) higher than in wt mice. At day 28 post-infection, a low number of bacterial DNA copies was still found in transgenic mice, whereas wt mice had completely cleared the lung infection. We also found that the intratracheal route of C. burnetii inoculation was unable to induce liver and splenic infection in macIL-10tg mice (unpublished data).
We wondered whether C. burnetii replication within the lungs was accompanied by histological changes. Inflammation was observed at days 7 and 14 post-infection in wt and transgenic mice. Inflammatory infiltrates were largely confined within the walls of the alveoli. Infiltrates were often organized as granulomatous interstitial inflammation and consisted mainly of macrophages with few lymphocytes (Figure 6B). Granulomas of variable diameter were scattered throughout the interalveolar walls of the lung parenchyma. However, lung granulomas were smaller in wt mice than in transgenic mice. The bronchoalveolar air spaces were relatively free of cellular exudates, but some were filled with rare alveolar macrophages. Neither necrosis of the lining alveolar epithelium nor suppuration was observed. These findings are consistent with a mixed interstitial and mild alveolar mononuclear cell pneumonia. Finally, we studied the pulmonary localization of C. burnetii at day 7 post-infection. Organisms were seen as granular immunopositive material in a few alveolar macrophages (unpublished data). They were mainly found in the granulomatous interstitial inflammation, within cells that had the morphology of macrophages. In wt mice, interalveolar wall granulomas were paucibacillary. In contrast, they were multibacillary in transgenic mice (Figure 6B).
Discussion
In humans, C. burnetii infection can be asymptomatic or result in acute or chronic disease [5,6]. Mice are usually used as an animal model of acute Q fever [15]. Different animal models of chronic Q fever are based on dramatic immunosuppression [16,17]; however, the chronic evolution of Q fever is not associated with severe immunosuppression but rather requires IL-10 in humans [7,8]. In fact, an appropriate mouse model for the study of chronic Q fever is lacking. We wondered whether the constitutive overexpression of IL-10 in the myeloid compartment of genetically modified mice would reproduce aspects of the human disease. When C. burnetii was injected by the intraperitoneal route, organisms were rapidly cleared in wt mice, while macIL-10tg mice maintained bacterial loads for at least 42 d. The sustained presence of C. burnetii in tissues was associated with high circulating levels of specific IgG, as in Q fever endocarditis [5]. Anti-C. burnetii IgG2a were prominent in macIL-10tg mice (unpublished data), which is consistent with the increased levels of IgG1 and IgG3 in human Q fever [18]. C. burnetii persistence is also consistent with the role of IL-10 in latent tuberculosis [19].
As humans are most commonly infected with C. burnetii through inhalation of parturient secretions from infected animals [6], wt and transgenic mice were also infected by the intratracheal route. The constitutive overexpression of IL-10 dramatically increased lung infection. The pulmonary lesions, consisting of mixed interstitial and mild alveolar mononuclear cell pneumonia, were previously described in a model of aerosol infection [20]; they were more pronounced in macIL-10tg mice than in wt mice. These results, combined with a previous publication [20], emphasize the role of the route of inoculation in C. burnetii infection in mice as well as in humans.
The second important feature of chronic Q fever is the lack of granulomas, which are replaced by mononuclear cell infiltrates [6]. In macIL-10tg mice, C. burnetii infection was associated with decreased formation of liver and splenic granulomas without alteration of their organization. This finding was specific, since granuloma formation was normal in BCG-infected macIL-10tg mice [13]. The mechanism of the defective microbicidal response to C. burnetii, manifested as bacterial persistence and defective granuloma formation, may involve innate and/or adaptive immunity [21]. Specifically, the immune response in granulomas may be affected by defective recruitment of immune effectors and/or polarization of the protective immune response toward a non-microbicidal immune response. First, we recently reported that transendothelial migration of mononuclear cells is impaired in chronic Q fever; this deficiency is corrected when IL-10 is neutralized [22]. However, we found that the transmigration of murine mononuclear cells was similar in wt and macIL-10tg mice (unpublished data). Second, we hypothesized that IL-10 overexpression may impair trafficking of immune cells, thus preventing cell recruitment into granulomas: this was not the case. The percentages of T cells, B cells, Mφ, and DC were similar in spleen and liver from uninfected wt and macIL-10tg mice, as described elsewhere [13]; only the percentage of NK cells was increased in the spleen of transgenic mice, which is consistent with the ability of IL-10 to stimulate NK cells [23]. C. burnetii infection decreased the proportion of DC in transgenic mice. C. burnetii is known to impair in vitro activation and maturation of DC [24], but it seems unlikely that DC impairment is sufficient to prevent splenomegaly and granuloma formation in spleen and liver. These findings suggest that the recruitment of immune effectors was not significantly impaired in mice constitutively overexpressing IL-10. Rather, they suggest the recruitment of non-microbicidal immune cells into tissues, which may account for defective killing of C. burnetii and defective granuloma formation. This is supported by our in vitro findings with Mφ. Mφ from macIL-10tg mice allowed C. burnetii replication, whereas C. burnetii was killed by wt Mφ. Impaired microbicidal competence was not corrected by IFN-γ. This finding may be critical to understanding bacterial persistence in mice and humans despite an apparently efficient immune response. As the microbicidal activity of Mφ is associated with the transcriptional pattern of classical activation by cytokines and microbial products [25], we studied the transcriptional pattern of C. burnetii-stimulated Mφ. While wt Mφ exhibited a classical inflammatory transcriptional pattern, Mφ from macIL-10tg mice exhibited a non-microbicidal pattern in which iNOS and inflammatory cytokines were not induced and arginase-1 and TGF-β were stimulated. The differential expression of arginase-1 and iNOS mRNA may correlate with the ability of Mφ to control C. burnetii replication, since previous reports have shown increased susceptibility of iNOS-deficient mice to C. burnetii [26]. Similarly, the gradual decrease in iNOS expression is correlated with the transition from latent tuberculosis to progressive pulmonary tuberculosis [27]. Hence, the balance between iNOS and arginase-1 may be essential to regulate inflammation and the microbicidal response to infectious aggression [28]. The responses of host tissues to C. burnetii infection may also reflect Mφ polarization toward a type 2 immune response. Indeed, the expression of CXCL-1, CXCL-2, CXCL-16, and CCL-2 was reduced in the liver of macIL-10tg mice, and it is established that CXCL-1, CXCL-2, and CCL-2 are regulated by IL-10 [29]. In C. burnetii-infected transgenic mice, the expression of arginase-1, MR, Ym1/2, and TGF-β was increased in spleen and, to a lesser extent, in liver, whereas the expression of iNOS, IL-12, and IL-23 was down-modulated. The combination of high expression levels of arginase-1 and Ym-1 and low levels of iNOS in tissues is consistent with the M2-type phenotype described in Mφ [30].

Figure 6 (legend, panel B). Pulmonary lesions in wt and macIL-10tg mice were revealed by hematoxylin-eosin staining. Top panels: representative micrographs of lesions present at day 7 post-infection (×200 original magnification). Note the granulomatous interstitial inflammation. Thickened alveolar walls were heavily infiltrated with mononuclear leukocytes, mainly macrophages. The size of granulomas was greater in transgenic mice than in wt mice. Bottom panels: pulmonary sections were deparaffinized and rehydrated, and C. burnetii organisms were revealed by immunohistostaining with hemalun counterstain. Macrophages in inflammatory granulomas present in interalveolar walls were packed with granular immunopositive material (indicated using arrowheads). Immunopositive material was more abundant in macIL-10tg than in wt mice. Magnification, ×400. doi:10.1371/journal.ppat.0040023.g006
In conclusion, IL-10 was essential for sustained C. burnetii burden in tissues, high levels of Abs, and impaired granuloma formation, three characteristics of chronic Q fever. IL-10 affected tissue inflammatory gene expression and Mφ polarization and may thereby contribute to defective granuloma formation. Hence, constitutive overexpression of IL-10 provides an experimental model of persistent C. burnetii infection that could serve as a platform for anti-C. burnetii drug or immunotherapy development.
Materials and Methods
Infection of mice. C. burnetii (Nine Mile strain) organisms in phase I (virulent organisms) and phase II (avirulent organisms) were obtained as previously described [31]. Control mice (FVB background) and macIL-10tg mice were kept in a specific pathogen-free mouse facility and handled according to the rules of Décret N° 87-848 du 19/10/1987, Paris. The experimental protocol was reviewed and approved by the Institutional Animal Care and Use Committee of the Université de la Méditerranée. Female mice were infected with 5 × 10⁵ C. burnetii organisms by the intraperitoneal route and, in some experiments, by the intratracheal route. The clinical status of the mice was recorded daily. Mice were killed after different infection times up to day 60. Organs were aseptically excised, and tissue samples were embedded in paraffin or frozen at −80 °C.
C. burnetii detection. Tissues (10-25 mg) were incubated with 180 μL of lysis buffer and 20 μL of proteinase K, and DNA was extracted using the QIAamp DNA Mini Kit (Qiagen). Quantitative real-time PCR (qPCR) was performed with the LightCycler system (Roche) using 5-μL DNA samples and specific primers, as previously described [32]. In each qPCR run, a standard curve was generated using serial dilutions ranging from 10⁸ to 10⁴ copies of the intergenic spacer region. Tissue sections were deparaffinized in xylene and rehydrated in graded alcohol. Bacteria were revealed using the Immunostain-Plus kit (Zymed), as previously described [33]. In brief, tissue sections were incubated with rabbit anti-C. burnetii Abs, or normal rabbit serum as a control, at a 1:2,000 dilution, followed by peroxidase-conjugated secondary Abs.
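For orientation, the absolute quantification step can be expressed as a short calculation: a log-linear standard curve is fitted to the Ct values of the dilution series, and unknown samples are interpolated from their Ct. The sketch below illustrates this under stated assumptions; the Ct values, the sample Ct, and the helper name copies_from_ct are hypothetical, not data or code from the study.

```python
import numpy as np

# Hypothetical Ct values for the standard dilution series
# (10^8 down to 10^4 copies of the intergenic spacer region).
std_copies = np.array([1e8, 1e7, 1e6, 1e5, 1e4])
std_ct = np.array([14.2, 17.6, 21.1, 24.5, 27.9])  # illustrative numbers

# Fit the log-linear standard curve: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

# PCR efficiency implied by the slope (a slope near -3.32 means ~100%)
efficiency = 10 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct):
    """Interpolate the copy number of an unknown sample from its Ct."""
    return 10 ** ((ct - intercept) / slope)

sample_ct = 22.3  # hypothetical tissue DNA sample
print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}, "
      f"copies = {copies_from_ct(sample_ct):.3g}")
```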
Antibody determination. Slides with smears of formaldehyde-inactivated organisms in phase I or phase II were incubated with serial dilutions of serum from infected mice, as described elsewhere [32]. Bacteria were labeled with FITC-conjugated goat Abs directed against mouse IgG (Beckman Coulter) and rat Abs against mouse IgG1, IgG2a, IgG2b, and IgG3 (BD Biosciences) at a 1:100 dilution for 30 min. After washing, the slides were examined by fluorescence microscopy. The starting dilution for the serum samples was 1:25, and samples were titrated to end point.
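End-point titration reduces to finding the last dilution that still gives a positive reading. A minimal sketch, assuming a two-fold series and hypothetical positivity calls:

```python
# Hypothetical immunofluorescence readings for a two-fold serum dilution
# series starting at 1:25; True means the smear was scored positive.
dilutions = [25, 50, 100, 200, 400, 800, 1600]
positive = [True, True, True, True, True, False, False]

def endpoint_titer(dilutions, positive):
    """Return the reciprocal of the last dilution scored positive."""
    titer = None
    for d, pos in zip(dilutions, positive):
        if not pos:
            break  # the series is monotonic: stop at the first negative
        titer = d
    return titer

print(endpoint_titer(dilutions, positive))  # -> 400, an end-point titer of 1:400
```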
Determination of granulomas. The 5-μm sections of paraffin-embedded tissues were stained with hematoxylin-eosin to assess the presence of granulomas, defined as collections of ten or more macrophages and lymphocytes. Their number was determined after whole optical examination of at least three tissue sections. They were quantified by image analysis [32]. The results are expressed as the number of granulomas found per surface unit (mm²).
Mφ studies. Bone marrow precursor cells were cultured with 15% L-cell-conditioned medium. After a 7-d culture, more than 90% of cells were Mφ, as determined by morphological and phagocytic criteria. BMDMφ were scraped and plated in 24-well tissue culture dishes at a density of 2 × 10⁵ per well. In some experiments, Mφ were treated with IFN-γ (100 IU/mL, R&D Systems) for 1 d before infection. To determine bacterial phagocytosis, Mφ were incubated with C. burnetii for 4 h and washed to eliminate free organisms (day 0), and the number of bacterial DNA copies was assessed by qPCR, as described above. Mφ were then cultivated for 9 d, and bacterial replication was assessed. In some experiments, peritoneal macrophages were used instead of BMDMφ. Levels of cytokines, iNOS, arginase-1, MR, and Ym1/2 were determined by quantitative real-time RT-PCR (qRT-PCR) as follows. C. burnetii organisms (100:1 bacterium-to-cell ratio) were added to Mφ for 6 h, and total RNA was extracted. cDNA synthesis was carried out with primers designed using the Primer3 tool available at http://frodo.wi.mit.edu/ (Table 1). Reverse transcriptase was omitted in negative controls. The fold change in target gene cDNA was determined relative to the β-actin endogenous control.
Statistical analysis. Results, given as median or mean ± SD, were compared with the Mann-Whitney U test. Differences were considered significant when p < 0.05. Figure S1. Splenic Granulomas in WT and macIL-10tg Mice. Granulomas in the red pulp of spleens and C. burnetii material were revealed by hematoxylin-eosin staining and immunohistostaining, respectively. Representative micrographs of granulomas present at day 7 post-infection are shown (×400 original magnification). Note that granulomas were mainly composed of macrophages and were greater in macIL-10tg mice than in wt mice. Granular immunopositive material is indicated using arrowheads. Immunopositive material was more abundant in macIL-10tg than in wt mice. | 2014-10-01T00:00:00.000Z | 2008-02-01T00:00:00.000 | {
"year": 2008,
"sha1": "6afe0e2f5a57273c8836f3502b6a1bcbd32859c8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.0040023&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6afe0e2f5a57273c8836f3502b6a1bcbd32859c8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
1761424 | pes2o/s2orc | v3-fos-license | Caspase-12 Silencing Attenuates Inhibitory Effects of Cigarette Smoke Extract on NOD1 Signaling and hBDs Expression in Human Oral Mucosal Epithelial Cells
Cigarette smoke exposure is associated with increased risk of various diseases. Epithelial cell-mediated innate immune responses to infectious pathogens are compromised by cigarette smoke. Although many studies have established that cigarette smoke exposure affects the expression of Toll-like receptors (TLRs), it remains unknown whether nucleotide-binding oligomerization domain-containing protein 1 (NOD1) expression is affected by cigarette smoke exposure. In this study, we investigated the effects of cigarette smoke extract (CSE) on NOD1 signaling in an immortalized human oral mucosal epithelial (Leuk-1) cell line. We first found that CSE inhibited NOD1 expression in a dose-dependent manner. Moreover, CSE modulated the expression of other crucial molecules in NOD1 signaling and of human β-defensins (hBDs) 1, 2 and 3. We found that RNA interference-induced Caspase-12 silencing increased NOD1 and phospho-NF-κB (p-NF-κB) expression and down-regulated RIP2 expression. The inhibitory effects of CSE on NOD1 signaling could be partially attenuated through Caspase-12 silencing. Intriguingly, Caspase-12 silencing abrogated the inhibitory effects of CSE on hBD1 and hBD3 expression and augmented the inducing effect of CSE on hBD2 expression. Caspase-12 could play a vital role in the inhibitory effects of cigarette smoke on NOD1 signaling and hBD expression in oral mucosal epithelial cells.
Introduction
Cigarette smoke, including active smoking and passive smoking, has been implicated in many diseases, disability, and death [1]. Cigarette smoke consists of more than 7300 chemical constituents, many of which are potent carcinogens and tumor promoters. A number of specific infections have been closely associated with cigarette smoke, including community-acquired pneumonia, tuberculosis, Helicobacter pylori infections, inflammatory bowel disease, invasive fungal infections, periodontitis, and oral candidiasis. Although cigarette smoke can directly mediate upregulation of bacterial virulence, the pro-infective effects of cigarette smoke are believed to result primarily from interference with host defense [2].
Innate immunity constitutes the first line of defense against microbial infection. As two main classes of innate immune receptors, the Toll-like receptors (TLRs) and NOD-like receptors (NLRs) serve as pattern recognition receptors that recognize conserved structures of pathogens, toxic compounds, or cellular damage known as "danger signals." Through the adapter receptor-interacting protein 2 (RIP2), NOD receptors induce NF-κB activation and nuclear translocation. NF-κB activation promotes the production of proinflammatory cytokines, chemokines, and antimicrobial peptides. The human defensins, one group of small cationic antimicrobial peptides, include the α-defensins of intestinal and neutrophil origin and the β-defensins of skin, oral mucosa, and other epithelia [3]. The human β-defensins (hBDs) play important roles in innate and adaptive immunity, with functions such as antimicrobial activity, antitumor effects, chemoattraction, and immunomodulation [4]. hBD1, 2, and 3 represent the main group of human defensins expressed and secreted by oral mucosal epithelial cells and have been the most investigated.
So far, the best-characterized NLR members are nucleotide-binding oligomerization domain-containing protein 1 (NOD1) and NOD2. As one of the intracellular pattern recognition receptors (PRRs), NOD1 plays a pivotal role in pathogenic microbe clearance and tissue homeostasis of the oral cavity, gastrointestinal tract, and respiratory tract. Sugawara et al. showed that NOD1 and NOD2 in oral epithelial cells are functional receptors that induce antibacterial responses [5,6].
Cigarette smoke directly activates epithelial cells and induces chemokine and inflammatory mediator release. Nevertheless, epithelium-mediated innate immune responses to infectious pathogens are compromised by cigarette smoke [7]. Although many studies have established that cigarette smoke exposure affects the expression of TLRs, data on the effects of cigarette smoke exposure on NLRs remain scarce [8,9,10,11,12,13]. Aldhous et al. reported that CSE delays NOD2 expression and affects NOD2/RIP2 interactions in intestinal epithelial cells [14]. However, it remains unknown whether NOD1 expression is affected by cigarette smoke exposure.
Caspases are cysteinyl aspartate-specific proteases that play a pivotal role not only in the induction of apoptotic cell death but also in inflammatory responses against microbial infection. Caspases are divided into three functional groups: apoptosis induction (Caspase-2, -3, -6, -7, -8, -9, and -10), inflammatory responses (Caspase-1, -4, -5, -11, and -12), and differentiation (Caspase-14). Caspase-1 is activated in the inflammasome, an intracellular protein complex that is formed upon the recognition of intracellular ligands or cellular stresses by sensor molecules such as NOD-like receptors. Caspase-1 activation can induce the production of mature IL-1β/IL-18 and trigger pyroptosis. Under certain conditions, Caspase-11 is required for the activation of the Caspase-1 inflammasome, referred to as the noncanonical inflammasome. In addition, Caspase-8 also contributes to the production of inflammatory cytokines [15]. Notably, only Caspase-12 can dampen the responses to bacterial infection and inhibit IL-1β, IL-18, and IFN-γ production. It has been confirmed that Caspase-12 deficiency not only enhances bacterial clearance and sepsis resistance but also augments the production of antimicrobial peptides, cytokines, and chemokines in response to some pathogens [16,17]. Previous studies have determined that cigarette smoke exposure, or some components of cigarette smoke, can up-regulate the expression of Caspase-12 [18,19,20,21], while Caspase-12 can negatively modulate NOD1 signaling [17]. Based on this established evidence, we hypothesized that cigarette smoke may also have a direct effect on NOD1 signaling and the production of antimicrobial peptides in human oral mucosal epithelial cells by up-regulating the expression of Caspase-12. The first goal of this study was thus to investigate whether CSE affected the expression of crucial molecules in the NOD1 signaling pathway, including NOD1, RIP2, and NF-κB, as well as hBD1, 2, 3, in human oral mucosal epithelial cells. Our second focus was to verify the potential inhibitory effect of Caspase-12 on NOD1 signaling and hBD1, 2, 3 in these cells following CSE exposure.
Preparation of aqueous cigarette smoke extracts (CSE)
Aqueous CSE was prepared as previously described [22,23]. Kentucky 3R4F research-reference filtered cigarettes (The Tobacco Research Institute, University of Kentucky, Lexington, KY), each of which contains 0.73 mg of nicotine and 9.4 mg of tar, were used for CSE preparation. Each cigarette was smoked continuously by a peristaltic pump. Four cigarettes were bubbled through 40 ml of cell growth medium, and this solution was regarded as 100%-strength CSE. The generated CSE solution was filtered (0.22 μm) to remove bacteria and large particles, adjusted to pH 7.45, and used within 15 min of preparation. The content of nicotine in CSE was analyzed in the institutional laboratory using liquid chromatography-tandem mass spectrometry as previously described [24,25,26]. The 100% CSE contained 239 ± 45 mg/ml of nicotine in three separate samples. Working dilutions of CSE (in the range of 0.5% to 8%) were made with culture medium and expressed as percentages (v/v).
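The v/v working dilutions amount to simple proportional mixing of the 100% CSE stock with medium. A minimal sketch, assuming a 10 ml working volume (the preparation volume is not stated in the text):

```python
def cse_dilution(final_percent, final_volume_ml):
    """Volumes of 100% CSE stock and medium for a v/v working dilution."""
    stock_ml = final_volume_ml * final_percent / 100.0
    return stock_ml, final_volume_ml - stock_ml

# Working dilutions used in the study (0.5%-8%), for a hypothetical 10 ml batch
for pct in (0.5, 1, 2, 4, 8):
    stock, medium = cse_dilution(pct, 10.0)
    print(f"{pct:>4}% CSE: {stock:.2f} ml stock + {medium:.2f} ml medium")
```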
Cell culture
The immortalized human oral mucosal epithelial (Leuk-1) cell line was a generous gift from Professor Li Mao at the Department of Oncology and Diagnostic Sciences, University of Maryland Dental School, Baltimore, MD. The Leuk-1 cell line was established from a dysplastic leukoplakia lesion adjacent to a squamous cell carcinoma of the tongue. It exhibits an immortalized but non-tumorigenic phenotype [27]. The cell line was expanded and passaged in keratinocyte serum-free medium [28] supplemented with BPE (25 μg/ml), epidermal growth factor (0.2 ng/ml), and CaCl₂ (0.4 mM). The passaged cells were cultured at 37 °C in humidified incubators with 5% CO₂ and, in certain experiments, were stimulated with CSE at different concentrations (0.5%, 1%, 2%, 4%, and 8%).
Western immunoblot analysis
Western blotting was performed as described [29]. Leuk-1 cells were washed twice with PBS and harvested by trypsinization. Cells were lysed in ice-cold lysis buffer containing 1% Nonidet P-40, 0.5% deoxycholate, 0.1% SDS, protease inhibitor cocktail, and phosphatase inhibitor cocktail. The lysates were incubated on ice for 30 min and centrifuged at 14,000 × g for 10 min at 4 °C to remove cell debris. Total cellular protein was collected, and the protein concentration was measured. Next, 10 to 50 μg of total cell protein was separated on 10% SDS-PAGE gels and transferred to polyvinylidene difluoride membranes (Millipore, Bedford, MA) by wet electroblotting. Membranes were blocked for 1 h at room temperature with 5% bovine serum albumin (BSA) in PBS-0.1% Tween 20 (PBST). After washing with PBST three times, membranes were incubated with the primary antibodies (NOD1, Caspase-12, RIP2, p-NF-κB, and GAPDH antibodies, diluted 1:1,000 in PBST containing 5% BSA) overnight at 4 °C. The next day, membranes were washed with PBST, followed by a 1 h incubation at room temperature with horseradish peroxidase-conjugated secondary antibodies (diluted 1:5,000 in PBST containing 5% BSA). After washing with PBST, immunostained protein bands were detected using an enhanced chemiluminescence (ECL) assay kit and visualized on a FluorChem FC2 system (Cell Biosciences, Santa Clara, CA). Densitometric analyses of bands were performed using ImageJ software (http://rsb.info.nih.gov/ij/); the data for each target protein were normalized to those of the corresponding GAPDH band and expressed as the percentage or fold change relative to the corresponding control, which was set to 1 or 100.
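The densitometric normalization described above can be summarized in a few lines: each target band is divided by its GAPDH band, and the ratio is then expressed relative to the untreated control. The band densities below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical ImageJ band densities (arbitrary units) for one target protein
bands = {
    "control": {"NOD1": 1520.0, "GAPDH": 2050.0},
    "4% CSE":  {"NOD1":  610.0, "GAPDH": 1980.0},
}

# Normalize each target band to its corresponding GAPDH band
normalized = {k: v["NOD1"] / v["GAPDH"] for k, v in bands.items()}
control = normalized["control"]

# Express each condition as fold change relative to the control (set to 1)
for condition, value in normalized.items():
    print(f"{condition}: {value / control:.2f}-fold of control")
```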
RNA extraction and real time quantitative reverse transcription PCR (qRT-PCR)
The qRT-PCR assay was performed as described previously [17]. Total RNA was extracted from Leuk-1 cells using TRIzol reagent (Invitrogen) according to the manufacturer's instructions, and 2 μg of RNA was used for first-strand cDNA synthesis in a 20-μl reaction volume using the RevertAid First Strand cDNA Synthesis Kit (Roche) according to the manufacturer's protocol. The primers used for the PCR amplifications were as follows: hBD1 forward TCA TTA CAA TTG CGT CAG CAG, reverse TTG CAG CAC TTG GCC TTC [30]; hBD2 forward TCC TCT TCT CGT TCC TCT TCA, reverse AGG GCA AAA GAC TGG ATG AC [30]; hBD3 forward CCA TTA TCT TCT GTT TGC TTT GCT C, reverse CCG CCT CTG ACT CTG CAA TAA TA [31]; Caspase-12 forward AAT GGA ATC TGT GGG ACC AA, reverse GAA CCA AAC AAT CCC AGC AC [32]; GAPDH forward TCA AGA AGG TGG TGA AGC AG, reverse CCC TGT TGC TGT AGC CAA AT [30]. Real-time PCR analyses were performed using an ABI 7300 Real-Time PCR System (Applied Biosystems, Foster City, CA), and PCR amplifications were performed using SYBR Green PCR Master Mix (Roche) according to the manufacturer's instructions. Amplification conditions were as follows: 50 °C for 2 min, 95 °C for 10 min, and 40 cycles of 95 °C for 15 s, 58 °C for 30 s, and 72 °C for 30 s, followed by melting-curve analysis, by which the specificity of the primers was confirmed. The experiment was repeated three times. The data are expressed as relative mRNA levels normalized to GAPDH. Fold changes in the expression of each gene were calculated by the comparative threshold cycle (Ct) method using the formula 2^(−ΔΔCt).
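The comparative Ct calculation used here can be made concrete as follows; the Ct values are hypothetical, with GAPDH as the endogenous control, as above:

```python
# Hypothetical Ct values for one target gene and GAPDH in CSE-treated
# versus untreated Leuk-1 cells; the numbers are illustrative only.
ct = {
    "treated":   {"target": 24.1, "GAPDH": 17.3},
    "untreated": {"target": 26.8, "GAPDH": 17.1},
}

dct_treated = ct["treated"]["target"] - ct["treated"]["GAPDH"]        # deltaCt (treated)
dct_untreated = ct["untreated"]["target"] - ct["untreated"]["GAPDH"]  # deltaCt (control)
ddct = dct_treated - dct_untreated                                    # delta-deltaCt

fold_change = 2 ** (-ddct)  # the 2^(-ddCt) comparative Ct method
print(f"ddCt = {ddct:.2f}, fold change = {fold_change:.2f}")
```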
Immunofluorescence, confocal microscopy and densitometry image analysis

Immunostaining was performed as described previously [33]. Briefly, Leuk-1 cells were collected and pipetted onto coverslips that had been placed in six-well culture plates in advance. After overnight incubation, Leuk-1 cells were washed with PBS and fixed in 4% paraformaldehyde for 15 min at room temperature. After being washed in PBS, the cells were permeabilized in 0.5% (v/v) Triton X-100 in PBS, washed, and blocked with 5% BSA in PBS-0.1% Tween 20 for 1 h at 37 °C. Next, the cells were exposed overnight at 4 °C to primary antibodies against the following proteins: NOD1 (1:100), Caspase-12 (1:200), RIP2 (1:50), phospho-NF-κB p65 (1:100), NF-κB p65 (1:100), hBD1 (1:100), hBD2 (1:100), and hBD3 (1:100). The next day, coverslips were washed with PBS and then incubated with Dylight 488 (green)- or Alexa Fluor 555 (red)-labeled goat-anti-mouse or goat-anti-rabbit secondary antibodies for 1 h at room temperature. To stain the nuclei, 4′,6-diamidino-2-phenylindole (DAPI, Sigma) was added for 5 min, and slides were examined with a confocal laser scanning microscope (FluoView FV10i, Olympus, Japan). Densitometry image analysis was performed as previously reported, with some modifications [34]. Five randomly selected discontinuous fields per slice were evaluated. The densitometry analysis of the immunofluorescence results was performed by one blinded investigator using ImageJ software. Briefly, the software was used to obtain the gray image and measure the optical density of the selected pixels within the region of interest (ROI). The calibration procedure was completed before image analysis. The mean value of the optical densities of all selected pixels was the Mean Optical Density (MOD), which represented the corresponding fluorescence intensity of the immunofluorescence staining.
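As a sketch of what the MOD computation amounts to outside ImageJ, per-pixel optical densities are averaged over the ROI. The 8-bit calibration OD = log10(255/gray) is an assumption here, since the exact calibration used is not stated:

```python
import numpy as np

def mean_optical_density(gray_roi):
    """MOD of an 8-bit grayscale ROI: mean of per-pixel optical densities."""
    gray = np.clip(gray_roi.astype(float), 1.0, 255.0)  # avoid log10 of zero
    od = np.log10(255.0 / gray)                         # ImageJ-style calibration
    return float(od.mean())

# Illustrative ROI: a small synthetic patch of pixel intensities
roi = np.array([[120, 95, 88],
                [102, 110, 90],
                [99, 105, 93]], dtype=np.uint8)
print(f"MOD = {mean_optical_density(roi):.3f}")
```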
Enzyme-linked immunosorbent assay (ELISA)
To detect the amounts of hBD1, 2, 3 produced by Leuk-1 cells, ELISA kits were used to measure hBD1, 2, 3 levels in the cell culture supernatants. hBD1, 2, 3 standards were used to construct standard curves. The experiments were performed according to the manufacturer's recommendations. Absorbances were read at 450 nm and 570 nm using a microplate reader, and the absorbance at 570 nm was subtracted from the absorbance at 450 nm.
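Sample concentrations are then read off the standard curve. The sketch below uses hypothetical standards and a simple log-log linear interpolation; many kit protocols recommend a four-parameter logistic fit instead, which would require scipy.optimize:

```python
import numpy as np

# Hypothetical standards (pg/ml) and wavelength-corrected absorbances
# (A450 minus A570); all values are illustrative, not kit data.
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_abs = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.70])

def interpolate_conc(sample_abs):
    """Read a sample concentration off the standard curve (log-log interpolation)."""
    return float(10 ** np.interp(np.log10(sample_abs),
                                 np.log10(std_abs), np.log10(std_conc)))

corrected = 0.62 - 0.04  # hypothetical A450 minus A570 for one supernatant
print(f"concentration = {interpolate_conc(corrected):.1f} pg/ml")
```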
Preparation of siRNA and cell transfection
Caspase-12 silencing was achieved by transfecting FAM fluorescence-labeled siRNA, which was also used to determine the transfection efficiency. Cell transfection was carried out using siRNA as previously described [17,35]. For Caspase-12 siRNA transfection, cells were grown in 60 mm-diameter dishes, and transfection was performed when the cells were 50%–70% confluent. For each dish, 100 pmol of Caspase-12 siRNA and 5 μl of Lipofectamine 2000 were each diluted in 250 μl of Opti-MEM I reduced serum medium. The Caspase-12 siRNA and Lipofectamine 2000 dilutions were then mixed and incubated at room temperature for 20 min before being added to each dish containing cells and K-SFM (final concentration of Caspase-12 siRNA, 40 nM). The cells were incubated at 37 °C in a CO₂ incubator for 24 h, the transfection medium was replaced with complete K-SFM, and the incubation continued for an additional 24 h before the addition of CSE.
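As a consistency check on the transfection arithmetic: pmol per ml is numerically equal to nmol per litre (nM), so 100 pmol at a 40 nM final concentration implies a total volume of about 2.5 ml per dish; this volume is inferred, not stated in the text.

```python
def final_nM(amount_pmol, total_volume_ml):
    """Final siRNA concentration: pmol per ml is numerically equal to nM."""
    return amount_pmol / total_volume_ml

# 100 pmol per dish in an inferred total volume of 2.5 ml gives 40 nM
print(final_nM(100.0, 2.5))  # -> 40.0
```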
Antimicrobial assay of culture supernatants of Caspase-12-silenced cells and control cells following CSE treatment
The antimicrobial assay of culture supernatants was performed as described previously [37]. Candida albicans strain ATCC 10231 was purchased from the China Center of Industrial Culture Collection (CICC). C. albicans was incubated in yeast extract-peptone-dextrose (YPD) liquid medium at 37 °C overnight. Following treatment with various concentrations of CSE for 24 h, the culture supernatants of Caspase-12-silenced cells or control cells were harvested, centrifuged, and filtered. One hundred microliters of each culture supernatant was added to 100 μl of each C. albicans suspension and incubated under 5% CO₂ at 37 °C for 1 h. After serial dilution (up to 10⁴-fold), each C. albicans suspension was applied to YPD agar plates, incubated at 37 °C for 24 h, and subjected to colony counting. The antimicrobial activity of each culture supernatant was measured three times.
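Colony counts from such serial dilutions are converted back to viable counts in the usual way; the colony number, dilution factor, and plated volume below are hypothetical:

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Back-calculate viable counts from a plated serial dilution."""
    return colonies / plated_volume_ml * dilution_factor

# Hypothetical plate: 172 colonies from the 10^4-fold dilution, 0.1 ml plated
print(f"{cfu_per_ml(172, 1e4, 0.1):.2e} CFU/ml")
```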
Statistical analyses
Statistical analyses were performed using SPSS 15.0 (Chicago, IL). Values are expressed as the mean ± SE. Differences between groups were analyzed with an unpaired t-test, and ANOVA was used to compare differences between concentrations within the same treatment group. Two-tailed probability values of <0.05 were considered statistically significant. Error bars in the figures represent SE.
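For reference, both tests named above are available in scipy.stats; the group values below are invented for illustration and do not reproduce any result of the study:

```python
from scipy.stats import ttest_ind, f_oneway

# Hypothetical normalized expression values for two groups
nc_group = [1.00, 0.92, 1.08]   # scrambled siRNA-transfected cells
silenced = [1.65, 1.80, 1.55]   # Caspase-12-silenced cells

# Unpaired two-tailed t-test between the two groups
t_stat, p_ttest = ttest_ind(nc_group, silenced)

# One-way ANOVA across CSE concentrations within one treatment group
cse_0, cse_1, cse_4 = [1.0, 1.1, 0.9], [1.4, 1.5, 1.3], [0.6, 0.7, 0.5]
f_stat, p_anova = f_oneway(cse_0, cse_1, cse_4)

print(f"t-test p = {p_ttest:.4f}; ANOVA p = {p_anova:.4f}")  # significant if p < 0.05
```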
Results
CSE altered NOD1, Caspase-12, RIP2, and p-NF-κB expression in Leuk-1 cells

The first goal of this study was to investigate whether CSE affected the expression of crucial molecules of the NOD1 signaling pathway in human oral mucosal epithelial cells. Leuk-1 cells were treated with various concentrations of CSE for 24 h. As shown by the Western blotting results, the protein levels of NOD1 decreased with increasing concentrations of CSE, while the protein levels of RIP2 rose with increasing concentrations of CSE (Fig. 1A, 1B and 1C). Relatively low concentrations of CSE resulted in elevated p-NF-κB levels compared with the control group, and p-NF-κB protein expression reached its highest level in Leuk-1 cells following 1% CSE treatment. However, p-NF-κB levels significantly decreased upon exposure to higher concentrations of CSE (Fig. 1A and 1D). Caspase-12 expression was significantly increased in Leuk-1 cells following 1%–8% CSE treatment (Fig. 1A and 1E).
Consistent with the Western blotting results, immunofluorescence assays revealed that CSE down-regulated NOD1 protein levels and up-regulated RIP2 protein levels, both in a dose-dependent manner. Relatively low concentrations of CSE elevated p-NF-κB levels, while relatively high concentrations of CSE reduced p-NF-κB levels; the p-NF-κB protein level peaked following 1% CSE treatment. CSE activated Caspase-12 expression in Leuk-1 cells (Fig. 2A). As shown by the confocal microscopy results, marked nuclear translocation of the NF-κB p65 subunit was observed in Leuk-1 cells 24 h after 1% CSE exposure (Fig. 2B).
CSE regulated the expression and release of hBD1, 2, 3 by Leuk-1 cells
To clarify the effects of CSE on hBD expression in human oral epithelial cells, qRT-PCR and immunofluorescence were performed to detect hBD expression at the mRNA and protein levels, respectively. As shown by the qRT-PCR results, relatively low concentrations of CSE resulted in elevated hBD1 levels compared with the control group. Interestingly, the hBD1 mRNA level was highest following treatment with 1% CSE. However, the hBD1 mRNA level significantly decreased upon exposure to relatively higher concentrations of CSE (Fig. 3A). The qRT-PCR results also revealed that hBD2 mRNA levels were up-regulated following CSE exposure (Fig. 3B). In addition, 0.5% and 4% CSE treatment significantly down-regulated hBD3 mRNA levels (Fig. 3C).
To further study the effects of CSE on hBD release from human oral epithelial cells, ELISA assays were performed to detect hBD levels in the culture supernatants of Leuk-1 cells following CSE exposure. As shown in Fig. 3D, 0.5% CSE treatment up-regulated hBD1 release, while CSE treatment at higher concentrations down-regulated hBD1 release. By contrast, CSE increased hBD2 release in a dose-dependent manner (Fig. 3E). hBD3 release was down-regulated following CSE treatment at 0.5%, 2%, 4% and 8% concentrations (Fig. 3F).
Consistent with the qRT-PCR results, immunofluorescence assays indicated that hBD1 expression reached its highest level following treatment with 1% CSE and was down-regulated following treatment with relatively higher concentrations of CSE. The immunofluorescence assays also revealed that hBD2 protein levels were up-regulated following CSE exposure. However, immunofluorescence staining showed that CSE treatment down-regulated hBD3 protein levels at all concentrations tested (Fig. 3G).

Fig. 3. CSE regulated expression and release levels of hBD1, 2, 3 in Leuk-1 cells. A The real-time PCR results showed that hBD1 mRNA levels significantly increased following 0.5% and 1% CSE treatment, especially 1% CSE. B hBD2 mRNA levels were greatly up-regulated by CSE stimulation. C hBD3 mRNA levels were clearly decreased by 0.5% and 4% CSE. Following 1% CSE treatment, hBD2 and hBD3 mRNA expression in Leuk-1 cells reached the highest level. D ELISA analyses revealed that 0.5% CSE significantly increased hBD1 release and 1%–8% CSE greatly decreased hBD1 release. E 0.5%–8% CSE treatment significantly induced hBD2 release. F hBD3 release was significantly down-regulated by 0.5%, 2%, 4%, and 8% CSE treatment. G Immunofluorescence assay and confocal microscopy results showed that hBD1 protein levels were markedly increased following 0.5% and 1% CSE treatment. Notably, hBD1 protein expression reached its highest level following 1% CSE exposure. CSE treatment greatly increased hBD2 protein expression and decreased hBD3 protein expression. The mRNA and ELISA data are expressed as means ± SE (n = 3). Statistical significance: *P<0.05, **P<0.01, ***P<0.001 vs. cells without CSE treatment.

The transfection of Caspase-12 siRNA caused Caspase-12 silencing at mRNA and protein levels

To further examine the relationship between NOD1 signaling and Caspase-12, Caspase-12 was silenced by RNA interference. We selected two siRNA sequences from the Caspase-12 coding region for evaluation. Non-silencing control siRNAs were synthesized using scrambled sequences as a negative control (NC) to assess the interference efficiency of the Caspase-12 siRNAs. These FAM-labelled siRNAs were transfected into Leuk-1 cells. After 24 h, confocal microscopy showed that the transfection efficiency was ~95% (Fig. 4A). The FAM-labelled siRNAs showed a cytoplasmic subcellular localization, especially in the perinuclear area (Fig. 4B). As shown in Fig. 4C, in the Western blot assay, the normalized protein level of Caspase-12 was markedly reduced in the Caspase-12 siRNA-1 and -2 groups compared with the NC group; Caspase-12 siRNA-2 had greater interference efficiency than Caspase-12 siRNA-1. As shown in Fig. 4D, Caspase-12 levels in cells transfected with Caspase-12 siRNA-2 (40 nM) were ~3% of those in the NC group. Real-time PCR showed that transfection of Caspase-12 siRNA-2 caused clear Caspase-12 silencing at the mRNA level (Fig. 4E). Therefore, Caspase-12 siRNA-2 was chosen for the following experiments.
Caspase-12 silencing increased NOD1 and p-NF-κB expression and down-regulated RIP2 expression
RIP2 is a downstream component of Caspase-12, and NOD1 signaling is regulated by Caspase-12 in murine intestinal epithelial cells [17]. To determine whether NOD1 signaling is regulated by Caspase-12 in human oral epithelial cells, Leuk-1 cells were transfected with Caspase-12 siRNA-2 (40 nM). Immunoblotting demonstrated that NOD1 and p-NF-κB levels significantly increased in Caspase-12-silenced cells compared with the NC group. By contrast, Caspase-12 silencing markedly reduced RIP2 levels (Fig. 4F, 4G, 4H and 4I).
Caspase-12 silencing did not significantly alter the expression of hBD1, 2 and 3 at mRNA and protein levels

Since Caspase-12 silencing increased NOD1 and p-NF-κB levels in Leuk-1 cells, we investigated whether Caspase-12 affected hBD1, 2 and 3 at the mRNA and protein levels. First, we examined the relative changes in hBD1, 2 and 3 mRNA levels caused by Caspase-12 silencing. As shown in Fig. 5A, 5B and 5C, no statistically significant change in hBD1, 2 and 3 mRNA levels was observed after Caspase-12 silencing. We then analyzed whether Caspase-12 affected hBD1, 2 and 3 protein levels by immunofluorescence staining. Consistent with the qRT-PCR results, Caspase-12 silencing did not lead to significant changes in hBD1, 2 and 3 protein expression compared with control cells (Fig. 5D).

Fig. 4. Caspase-12 silencing altered NOD1, RIP2 and p-NF-κB protein expression in Leuk-1 cells. A Immunofluorescence and confocal microscopy results showed that the transfection efficiency of the Caspase-12 siRNAs was ~95%. B The FAM-labelled siRNAs showed a cytoplasmic subcellular localization, especially in the perinuclear area. C The Western blot assay indicated that the Caspase-12 protein level greatly decreased in Leuk-1 cells following RNA interference. D According to the image analysis of the immunoblot bands, the normalized protein level of Caspase-12 was markedly reduced in the Caspase-12 siRNA-1 and -2 groups compared with the NC group, especially for Caspase-12 siRNA-2. E Real-time PCR showed that transfection of Caspase-12 siRNA-2 caused clear Caspase-12 silencing at the mRNA level. F The Western blot assay indicated that Caspase-12 silencing altered NOD1, RIP2, and p-NF-κB protein expression. G Immunoblotting demonstrated that NOD1 expression significantly increased in Caspase-12-silenced cells compared with the NC group. H In contrast, Caspase-12 silencing markedly reduced RIP2 levels. I Like NOD1, p-NF-κB expression was up-regulated in Caspase-12-silenced cells. The immunoblot band density and mRNA data are expressed as means ± SE (n = 3). Statistical significance: *P<0.05, **P<0.01, ***P<0.001 vs. scrambled siRNA-transfected cells.
Caspase-12 silencing partially attenuated inhibitory effects of CSE on NOD1 and p-NF-κB expression
Caspase-12 has been reported to negatively regulate NOD1 signaling in enterocytes [17]. To confirm whether Caspase-12 inhibits NOD1 signaling following CSE exposure, we examined the impact of Caspase-12 silencing on the NOD1 signaling pathway in Leuk-1 cells following CSE exposure. As shown by the immunoblotting analysis, NOD1 protein levels significantly increased in Caspase-12-silenced cells compared with controls following 4% CSE treatment (Fig. 6A and 6B). Caspase-12 silencing markedly increased RIP2 levels following 2% and 8% CSE treatment (Fig. 6A and 6C). Like NOD1, p-NF-κB levels also greatly increased in Caspase-12-silenced cells compared with controls following 4% CSE treatment (Fig. 6A and 6D). These results confirmed that Caspase-12 silencing partially attenuated the inhibitory effect of CSE on the NOD1 signaling pathway. Consistently, immunofluorescence results indicated that NOD1 protein levels significantly increased in Caspase-12-silenced cells compared with controls following 4% CSE treatment (Fig. 7A). Caspase-12 silencing significantly increased RIP2 levels following relatively high concentrations of CSE treatment (Fig. 7B). Immunofluorescence results also indicated that p-NF-κB protein levels significantly increased in Caspase-12-silenced cells compared with controls following 4% CSE treatment (Fig. 7C).

Fig. 6. Caspase-12 silencing partially attenuated inhibitory effects of CSE on NOD1 and p-NF-κB expression. A Immunoblot bands indicating the protein expression of NOD1, RIP2, and p-NF-κB in Caspase-12-silenced cells and controls exposed to CSE. B Immunoblot analysis showed that NOD1 protein levels were significantly increased in Caspase-12-silenced cells compared with controls following 4% CSE treatment. C Caspase-12-silenced cells showed greatly increased RIP2 expression following 2% and 8% CSE treatment. D p-NF-κB levels were greatly increased in Caspase-12-silenced cells compared with controls following 4% CSE treatment. Immunoblot band density data are expressed as means ± SE (n = 3). Statistical significance: #P<0.05, ##P<0.01 vs. scrambled siRNA-transfected cells. doi:10.1371/journal.pone.0115053.g006

Fig. 7. Caspase-12 silencing partially attenuated inhibitory effects of CSE on NOD1 and p-NF-κB expression. A Immunofluorescence assay and confocal microscopy results showed that, compared with controls, NOD1 protein levels were remarkably increased in Caspase-12-silenced cells following 4% CSE treatment. B RIP2 levels were up-regulated in Caspase-12-silenced cells compared with controls following relatively high concentrations of CSE treatment. C p-NF-κB levels greatly increased in Caspase-12-silenced cells compared with controls following 4% CSE treatment. doi:10.1371/journal.pone.0115053.g007

Caspase-12 silencing abrogated the suppression of hBD1, 3 expression by CSE and augmented the induction of hBD2 expression by CSE

In the next stage, we examined the impact of Caspase-12 silencing on CSE-stimulated hBD1, 2, 3 mRNA expression in Leuk-1 cells. As shown in Fig. 8A, hBD1 mRNA levels increased and reached their highest level following 1% CSE treatment in control cells and then decreased following higher concentrations of CSE treatment. Caspase-12-silenced cells expressed ~1,000-fold more hBD1 mRNA than the controls following 1% CSE treatment. Strikingly, hBD1 mRNA levels following 8% CSE treatment reached the peak level in Caspase-12-silenced cells, which expressed more than 8,000-fold more hBD1 mRNA than the controls. As shown in Fig. 8B, hBD2 mRNA levels were up-regulated by CSE and reached the highest level following 8% CSE treatment in control cells. The hBD2 mRNA levels increased and reached the highest level following 1% CSE treatment in Caspase-12-silenced cells, which expressed more than 1,000-fold more hBD2 mRNA than the controls. The hBD2 mRNA levels following 8% CSE treatment increased in Caspase-12-silenced cells, which expressed nearly 10-fold more hBD2 mRNA than the controls. As shown in Fig. 8C, hBD3 mRNA levels decreased following 0.5% CSE treatment, increased following 4% CSE treatment, and then decreased following 8% CSE treatment in control cells. The hBD3 mRNA levels following 1% CSE treatment reached the peak level in Caspase-12-silenced cells, which expressed about 100-fold more hBD3 mRNA than the controls. Caspase-12-silenced cells following 8% CSE treatment also reached the peak level, expressing about 300-fold more hBD3 mRNA than the controls.

Fig. 8. Caspase-12 silencing strikingly up-regulated hBD1, 2, 3 mRNA levels in Leuk-1 cells following CSE stimulation. A Real-time PCR results indicated that Caspase-12-silenced cells expressed ~1,000-fold more hBD1 mRNA than the controls following 1% CSE treatment. Strikingly, Caspase-12-silenced cells expressed more than 8,000-fold more hBD1 mRNA than the controls following 8% CSE treatment. B hBD2 mRNA levels increased and reached the highest level following 1% CSE treatment in Caspase-12-silenced cells, which expressed more than 1,000-fold more hBD2 mRNA than the controls. The hBD2 mRNA levels following 8% CSE treatment increased in Caspase-12-silenced cells, which expressed nearly 10-fold more hBD2 mRNA than the controls. C The hBD3 mRNA levels following 1% CSE treatment reached the peak level in Caspase-12-silenced cells, which expressed about 100-fold more hBD3 mRNA than the controls. Caspase-12-silenced cells following 8% CSE treatment also reached the peak level and expressed about 300-fold more hBD3 mRNA than the controls. The mRNA data are expressed as means ± SE (n = 3). Statistical significance: #P<0.05, ##P<0.01, ###P<0.001 vs. scrambled siRNA-transfected cells. doi:10.1371/journal.pone.0115053.g008
Since Caspase-12 silencing significantly increased hBD1, 2, 3 mRNA levels in Leuk-1 cells following CSE exposure, we investigated whether Caspase-12 silencing affected hBD1, 2, 3 protein levels following CSE exposure by immunofluorescence staining (Fig. 9A, 9B and 9C). Consistent with the qRT-PCR results, image analysis of the immunofluorescence results revealed that hBD1 protein expression increased and reached its highest level following 1% CSE treatment in control cells and then decreased following relatively higher concentrations of CSE treatment. By contrast, Caspase-12-silenced cells expressed significantly higher levels of hBD1 than the controls following 1%, 2%, 4%, and 8% CSE treatment (Fig. 9D). As shown in Fig. 9E, CSE up-regulated hBD2 protein levels in control cells, while hBD2 protein levels following 2%, 4%, and 8% CSE treatment were markedly higher in Caspase-12-silenced cells than in control cells. As shown in Fig. 9F, hBD3 protein levels in control cells decreased clearly following CSE exposure, while Caspase-12-silenced cells expressed slightly higher hBD3 levels than the controls following CSE exposure; however, the difference was not statistically significant.
Caspase-12 silencing enhanced the antimicrobial activity of culture supernatants of CSE-exposed Leuk-1 cells

As shown in Fig. 10, the culture supernatant of CSE-exposed control cells could not inhibit C. albicans colony formation. The number of C. albicans colonies clearly increased in control groups following treatment with the culture supernatant of control cells exposed to 2%, 4% and 8% CSE. Interestingly, the culture supernatant of CSE-exposed Caspase-12-silenced cells exhibited significantly higher antimicrobial activity against C. albicans than that of control cells. The number of C. albicans colonies following treatment with the culture supernatant of Caspase-12-silenced cells exposed to 2%, 4% and 8% CSE was remarkably lower than that in the corresponding NC group. These results indicated that Caspase-12 silencing enhanced the inhibitory effect of culture supernatants of CSE-exposed Leuk-1 cells on C. albicans.
Discussion
Many studies have demonstrated that cigarette smoke alters the expression of PRRs, especially TLRs [8][9][10][11][12][13]. However, data on the effects of cigarette smoke on NLRs remain scarce. A recent study indicated that CSE delays NOD2 expression and affects NOD2/RIP2 interactions in intestinal epithelial cells [14]. As fundamental members of the NLR family, NOD1 and NOD2 have very similar structures. Our data indicated for the first time that CSE could inhibit the NOD1 signaling pathway in oral mucosal epithelial cells.
There is a close relationship between cigarette smoke and many diseases. As is well known, smoking is one of the most important risk factors for periodontitis, second only to plaque [2]. A recent study indicated that NOD1 is critical for commensal-induced periodontitis [38]. Apart from periodontitis, cigarette smoke itself, or in combination with other factors, is a well recognized risk factor for oral candidiasis, oral leukoplakia and oral cancer [2,39,40,41,42].

Fig. 9. Caspase-12 silencing abrogated the suppression of hBD1, 3 protein expression by CSE and augmented the induction of hBD2 protein expression by CSE. A-C Immunofluorescence and confocal microscopy showed hBD1, 2, 3 staining in Caspase-12-silenced cells and control cells following various concentrations of CSE treatment. D Densitometry results revealed that Caspase-12-silenced cells expressed markedly higher levels of hBD1 protein than the controls following 1%, 2%, 4% and 8% CSE treatment. E The hBD2 protein levels following 2%, 4%, and 8% CSE treatment were remarkably higher in Caspase-12-silenced cells than in the control cells.
In the oral cavity, the oral mucosal epithelium is the first tissue that encounters cigarette smoke. Very few data exist on the effects of cigarette smoke on the innate immunity of oral mucosal epithelial cells. Given the vital role of NOD1 in innate immunity and tissue homeostasis, the inhibitory effect of CSE on NOD1 expression could result in reduced antibacterial peptide production and the occurrence of oral diseases. NOD1 may be a potential therapeutic target for some diseases in the future.
In this study, we found that CSE increased RIP2 expression in Leuk-1 cells. As an adapter, RIP2 plays a crucial role in NOD1-induced NF-κB activation. Moreover, RIP2 also mediates cell apoptosis and autophagy according to previous studies [43,44]. Recently, Wang et al. found that RIP1 expression remarkably increased in cigarette smoke-exposed mouse lung and was significantly induced by CSE in human bronchial epithelial cells [45]. It is well known that the receptor-interacting protein family consists of several members, RIP1-4, which play a crucial role in cell survival signaling. Based on existing evidence, the increased expression of RIP2 may result from CSE-induced cell damage.

Fig. 10. Caspase-12 silencing enhanced the inhibitory effect of culture supernatants of CSE-exposed Leuk-1 cells on C. albicans. Antimicrobial activities of culture supernatants of Caspase-12-silenced cells and control cells following CSE treatment were compared. A After treatment with culture supernatants, C. albicans colonies were observed and counted at 24 h. B The number of C. albicans colonies clearly increased in the NC group following treatment with the culture supernatant of scrambled siRNA-transfected cells exposed to 2%, 4% and 8% CSE. The number of C. albicans colonies following treatment with the culture supernatant of Caspase-12-silenced cells exposed to 2%, 4% and 8% CSE was remarkably lower than that in the NC group. Colony numbers are expressed as means ± SE (n = 3). Statistical significance: *P<0.05 vs. culture supernatants of cells not exposed to CSE; #P<0.05, ##P<0.01 vs. culture supernatants of scrambled siRNA-transfected cells.
Our earlier results indicated that low concentrations of CSE increased NF-κB expression in murine macrophages, while higher concentrations of CSE inhibited NF-κB activation [46]. Consistent with those data, our present findings further confirmed that CSE-regulated NF-κB activation or suppression depends on CSE concentration. One possible explanation is that exposure to relatively low concentrations of CSE may trigger a cellular stress response to toxic compound stimulation. Organisms have developed an elaborate system of defensive molecules and survival signaling pathways to counteract various toxic and environmental stresses. If the adaptive response is unable to counteract adverse exposure, cells will be eliminated by death processes such as apoptosis [47].
hBDs are antimicrobial peptides expressed by epithelia throughout the body, including the oral cavity. The expression of hBD1, 2, and 3 has been the most investigated. These peptides are produced by oral epithelial cells and may control many commensal and pathogenic bacteria in the oral cavity. hBD1 is constitutively expressed in epithelial cells and may be up-regulated by bacterial products. hBD2 is inducibly expressed in epithelial cells and strongly up-regulated in vitro by commensal and pathogenic bacteria as well as by proinflammatory cytokines. hBD3 is expressed in normal epithelium and is up-regulated by bacteria, IFN-γ and growth factors [48].
Data on the effects of cigarette smoke on hBD1 expression are scarce. Wolgin et al. found that the expression of hBD1 and hBD2 mRNA was significantly reduced in gingival samples of smokers compared to that in non-smokers [30]. An early study indicated that mouse β-defensin (mBD) 1 expression decreased in cigarette smoke-exposed mice, while the expression of mBD2 and mBD3 was greatly elevated in the lungs of cigarette smoke-exposed mice compared with air-exposed mice [49]. Some studies indicated that CSE or whole cigarette smoke exposure modulates hBD2 and hBD3 mRNA expression by human gingival epithelial cells in vitro [10,11]. The current results indicated that CSE could modulate hBD1, 2, and 3 expression levels in Leuk-1 cells. hBD1 expression in Leuk-1 cells was activated by relatively low concentrations of CSE and suppressed by relatively high concentrations of CSE. Moreover, our results indicated that CSE significantly increased hBD2 expression and inhibited hBD3 levels in Leuk-1 cells. In the present results, a difference between hBD mRNA expression levels in Leuk-1 cells and hBD protein levels in the supernatant was observed. On the one hand, the difference could be explained by the regulation of antimicrobial peptide expression at transcriptional, post-transcriptional and post-translational levels [50,51]. On the other hand, this difference may originate from the modulation of hBDs at the secretory level by the epithelial cells themselves, which release distinct amounts of hBDs to control defensive responses to varying extents.
Many studies have confirmed that cigarette smoke or some of its components can activate Caspase-12 expression [18,19,20,21]. In accordance with previous studies, our results suggested that CSE treatment could increase the expression of Caspase-12 in oral mucosal epithelial cells. Caspase-12 is a crucial molecule associated with endoplasmic reticulum (ER) stress-induced apoptosis and inflammasome activation [52]. LeBlanc et al. found that Caspase-12 negatively modulates NOD1 signaling in mouse colonic epithelial cells. Mechanistically, Caspase-12 binds to RIP2 and displaces Traf6 from the NOD1 signaling complex. As a result, its ubiquitin ligase activity is inhibited and NF-κB activation is blunted [17]. Supporting their findings, our results indicated that Caspase-12 silencing down-regulated RIP2 expression in Leuk-1 cells. Intriguingly, our results further showed that Caspase-12 silencing up-regulated the expression of NOD1 and NF-κB. These findings clearly suggested that Caspase-12 is a negative regulator of NOD1 signaling. Further studies are needed to investigate the complicated interaction mechanism between Caspase-12 and crucial molecules in the NOD1 signaling pathway.
The present study in Leuk-1 cells showed that Caspase-12 silencing partially attenuated the inhibitory effects of CSE on NOD1 and p-NF-κB protein expression. After 4% CSE treatment, both NOD1 and p-NF-κB levels significantly increased in Caspase-12-silenced cells compared with those in controls. Therefore, these results indicated that Caspase-12 could be involved in the inhibitory effect of CSE on NOD1 signaling in oral mucosal epithelial cells.
In the present study we found for the first time that Caspase-12 silencing abrogated the suppression of hBD1, 3 expression levels by CSE and augmented the induction of hBD2 expression by CSE. Moreover, our results indicated that Caspase-12 silencing enhanced the inhibitory effect of culture supernatants of CSE-exposed Leuk-1 cells on C. albicans, a common conditional pathogen in the oral cavity. LeBlanc et al. confirmed that Caspase-12-deficient enterocytes hyper-produced antimicrobial peptides after infection, specifically mBD1, a functional homolog of hBD1 [17]. LeBlanc and colleagues' results provide a clue that Caspase-12 regulates antimicrobial peptide production following infection stimulation. Coincidentally, our results suggested that Caspase-12 regulates antimicrobial peptide production following CSE stimulation. Antimicrobial peptide production may be up-regulated in the absence of Caspase-12. Mechanistically, CSE activated intracellular Caspase-12, which negatively regulated NOD1 signaling by suppressing NOD1 and NF-κB expression and inducing RIP2 expression. As part of the downstream molecules of the signaling pathway, hBDs production was subsequently inhibited and the defense response of human oral mucosal epithelial cells to pathogens was dampened (Fig. 11). Saleh et al. confirmed that Caspase-12-deficient mice showed enhanced bacterial clearance and sepsis resistance. Caspase-12 is detrimental to the in vivo handling of systemic bacterial infections and predisposes to sepsis, thereby making it a potentially important target for future therapeutic strategies [53].
Altogether, CSE could suppress NOD1 signaling and modulate the downstream production of hBDs in Leuk-1 cells. Caspase-12 silencing partially attenuated the inhibitory effects of CSE on NOD1 signaling and abrogated the suppression of hBDs expression by CSE. Caspase-12 silencing could enhance the antimicrobial activity of CSE-exposed cells. Caspase-12 may be a potential therapeutic target for some infectious and inflammatory diseases in the future.
Author Contributions
Conceived and designed the experiments: WW XDH WZ XW. Performed the experiments: XW YQ QZ PY. Analyzed the data: XW YQ PY ND QZ. Contributed reagents/materials/analysis tools: XFH YZ JL LH WZ. Wrote the paper: XW XDH WW.

Fig. 11. Schematic depiction of the potential mechanism by which CSE-induced Caspase-12 activation dampens the defense response of oral mucosal epithelial cells to pathogenic microorganisms through inhibition of NOD1 signaling and hBDs production. Mechanistically, CSE activated intracellular Caspase-12, which negatively regulated NOD1 signaling by suppressing NOD1 and NF-κB and inducing RIP2. As part of the downstream effectors of the signaling pathway, hBDs production was subsequently inhibited and the defense response to pathogens was dampened. CS: cigarette smoke; CSE: cigarette smoke extract. doi:10.1371/journal.pone.0115053.g011
"year": 2014,
"sha1": "802627622fc0d39e40ec733760b5144348791e0d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0115053&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "802627622fc0d39e40ec733760b5144348791e0d",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Biology",
"Medicine"
]
} |
A categorial grammar of Spanish auxiliary chains
Spanish auxiliary sequences as in Juan puede haber tenido que estar empezando a trabajar hasta tarde 'Juan may have had to be starting to work until late', traditionally termed auxiliary chains, have two properties that are not naturally captured in phrase-structure approaches to syntax: (i) they follow no a priori fixed order; auxiliary permutations have different meanings, none of which is any more basic than any other (cf. Juan puede estar trabajando 'Juan may be working' and Juan está pudiendo trabajar 'Juan is currently able to work'); and (ii) the syntactic and semantic relations established within a chain go beyond strict monotonicity or cumulative influence; rather, they present different kinds of syntactic relations in distinct local domains. We show that an alternative approach to syntax, grounded in a modification of the categorial grammar introduced in Ajdukiewicz (1935) that closely follows Montague (1973), Dowty (1978, 1979, 2003), and Schmerling (1983a, b, 2019), provides effective tools for subsuming Spanish auxiliary chains in an explicit and explanatory grammar.
Introduction
In this paper, we present a framework for describing and explaining the properties of sequences of auxiliary verbs in Spanish in a theory that equally well accommodates the familiar but very different auxiliary sequences of English. English auxiliaries, which are surely the most widely studied auxiliaries of any language, have been investigated since the early work of Chomsky in the 1950s, in one or another version of phrase structure grammar (PSG) or a computationally equivalent context-free formalism, often supplemented with other types of rules (transformations, feature cooccurrence restrictions, etc.) or a universal template of syntactic projections. The versatility of the framework we present constitutes an important argument in its favour. This introductory section summarises the fundamental properties of Spanish auxiliary verb sequences. Section 2 then addresses in depth what a linguistic theory must provide to permit a revealing account of these properties while at the same time accommodating English-like auxiliary sequences. A novel account of Spanish auxiliary chains that makes use of no independently unmotivated formal apparatus is the topic of Section 3. Section 4 is our conclusion.
Verbal periphrases in Spanish
We begin our introduction to Spanish auxiliary sequences by defining verbal periphrastic constructions (or verbal periphrases). The term verbal periphrasis is characteristic of works written in or about the various Romance languages and has a venerable place in Hispanic linguistics specifically (Roca Pons 1958; Olbertz 1998; Fernández de Castro 1999; Gómez Torrego 1999; García Fernández 2006; RAE-ASALE 2009; Bravo & García Fernández 2016; to cite but a few). Throughout this paper we use as equivalent the expressions verbal periphrasis (or simply periphrasis), auxiliary verb construction, and periphrastic verb construction. As classically used for Spanish, these terms refer to sequences of one or more auxiliary verbs and a nonfinite form of a lexical (or "main") verb, giving rise to a single predication and within the limits of a single clause (RAE-ASALE 2009: §28.5). Constructions with auxiliary verbs are exemplified in (1), with single auxiliaries, and in (2) with auxiliary sequences. The Spanish grammatical tradition refers to sequences of two or more auxiliaries as auxiliary chains (cadenas de verbos auxiliares). As is common in Indo-European languages, each auxiliary determines the form of the following verb (whether auxiliary or lexical verb).1,2

(1) and (2) illustrate that in Spanish, as in many Indo-European languages (though no longer in contemporary English), all auxiliary verbs, with the exceptions in fn. 7 below, may show inflection; modal verbs, for example, have full inflectional paradigms and are identifiable as such primarily by semantic criteria (see Bravo 2016, 2017 for recent overviews of modality in Spanish), whereas in contemporary English the class of modals is defined primarily by a lack of inflection and by a restricted distribution (McCawley 1975; Pullum & Wilson 1977).3 As we have indicated, there is general agreement that only the lexical verb in a verbal periphrasis has argument structure and that the verbs making up the periphrasis jointly express a single eventive predication. This property is usually referred to as monoclausality. The central role of monoclausality in defining verbal periphrases cross-linguistically has been widely recognised in the literature, regardless of framework (see, among many others, Gómez Torrego 1999: 3325; Rochette 1999: 151; Cinque 2004; Wurmbrand 2004; Anderson 2006: 7, 2011; RAE-ASALE 2009; Sag et al. 2020).4 Thus, Anderson (2011: 796) states that "A(uxiliary) V(erb) C(onstructions) are … mono-clausal verb phrases that minimally consist of an auxiliary verb component … and a lexical verb component".

1 We use the following abbreviations: AUX = auxiliary; COND = conditional; CONT = continuative aspect; GER = gerund; HAB = habitual aspect; INCH = inchoative; IPFV = imperfective; INF = infinitive; MOD = modal (auxiliary); PTCP = participle; PASS = passive; PFV = perfective; PL = plural number; PRES = present tense; PROG = progressive; SG = singular number; TNS = temporal auxiliary.

2 Elements like a, de, or que (among others) in auxiliary verb constructions must be distinguished from homophonous prepositions (a and de) and complementisers (que). García offer a detailed study of these items, which do not constitute a unified class, and which they term intermediate elements.

3 As recently as the seventeenth century, English auxiliaries were similar to those in Spanish where inflection was concerned; this included the modals. A detailed account of how various changes in English led to modals' becoming uninflected particles is offered in van Kemenade (1992); see also the references cited there.

4 Within generative grammar, there have historically been differences over whether this monoclausal structure is achieved transformationally or through PS rules (in more recent terms, whether monoclausality is a consequence of Internal or External Merge, the former presumably subsuming incorporation processes like Restructuring; see, e.g., Roberts, 1997). Aissen & Perlmutter's (1976) clause reduction and Chomsky's (1964a) grammar fragment, respectively, serve as early and very clear illustrative examples of these two analytical approaches.
Lexical and functional auxiliaries
Examples of the auxiliary chains of our title are given in (2) above and in (3). We follow Bravo et al. (2015) and García Fernández & Krivochen (2019a, b) in defining an auxiliary chain as any verbal periphrasis in which there are at least two auxiliary verbs. The relative linear position of an auxiliary chain with respect to the lexical verb varies, but in the declarative sentences that we focus on in this paper, the chain always appears immediately to the left of the main verb, as in (2) and (5);5 an extension to other sentence types does not require additional theoretical machinery (see Bach, 1979; Schmerling, 1983b, 2019; Jacobson, 1987).
Spanish auxiliary chains display a variety of internal dependencies and word orders, none of which seems to be derivationally "more basic" than any other. Thus, (5a) and (5b) are equally grammatical; crucially, however, they are not synonymous:

Juan está debiendo trabajar todo el día
J. be.3SG.PRES must.GER work.INF all the day
'J. is having to work all day long' (Aspect > Modality > Verb)

In Section 3 we will pursue the point that this critical property of their syntax motivates our adoption of an approach that departs from syntactic theories grounded in monotonic structure building in two important ways. The first is that it correctly recognises and captures a structural variety that those theories do not. Our second analytical departure involves an interaction between Spanish auxiliary structure building and the semantic properties of auxiliaries: some, which we (following Bravo et al., 2015; García Fernández et al., 2017, and related work) call 'lexical' auxiliaries, delimit domains for the transmission of temporal and aspectual information provided by other, 'functional' auxiliaries (e.g., temporal <ir a + infinitive>, aspectual <estar + gerund>).6 In other words, lexical auxiliaries can be temporally and aspectually anchored independently of main verbs; they are expressions assigned to a category. The functional auxiliaries, in contrast, forgo this kind of anchoring, contributing temporal and aspectual information themselves: they are akin to inflection rather than to basic categorematic expressions of the language. What is "lexical" about lexical auxiliaries is the possibility of their modifying lexical elements while at the same time being able to be modified themselves. Functional auxiliaries, in contrast, modify but cannot themselves be modified; they never take on temporal or aspectual information from other auxiliaries as lexical elements do, including lexical auxiliaries and main verbs. We will focus primarily on the interaction between functional auxiliaries and modal auxiliaries (for non-modal lexical auxiliaries, see García Fernández et al., 2017; García Fernández & Krivochen, 2019, among others; the same formal devices apply).

5 Crucially, the generalisation we have just cited does not hold, e.g., for interrogatives or instances of inversion in verum focus fronting, as in example (ii):
i) Yo tendría que estar muriéndome para no ir a esa fiesta (auxiliaries to the immediate left of the lexical verb) 'I would have to be dying not to go to that party'
ii) Muriéndome tendría que estar yo para no ir a esa fiesta (auxiliaries to the right of the lexical verb) 'Dying I would have to be not to go to that party'
Krivochen & García Fernández (2019) analyse this and other instances of non-declarative sentences where the Aux Chain-V order is disrupted.
The difference between lexical and functional auxiliaries is illustrated in the examples in (5); lexical and functional auxiliaries are marked as such using L(exical) and F(unctional) subscripts. In (5a), what is temporally anchored by the temporal future auxiliary va a is the obligation denoted by the deontic modal tener que, not the aspectual inchoative empezar a or the lexical verb trabajar. The obligation, in turn, pertains to the start of the event of working; that is, va a tener que modifies empezar a, which in turn modifies trabajar. However, tener que, and, by extension, va a, do not modify trabajar: we can see this from the lack of entailment (⇏) indicated in (5a').6 Bravo et al. (2015) call lexical auxiliaries opaque because, as (5a) illustrates, they do not let temporal and aspectual information from functional auxiliaries like ir a through: the future tense contributed by ir a modifies only the lexical auxiliary tener que, not having scope over anything to its right. But in (5b), the functional auxiliary estar intervenes between the lexical modal auxiliary poder and the main verb. If functional auxiliaries are transparent for purposes of modification relations in auxiliary chains, that is, if they let that information through, we predict that the lexical auxiliary modifies the next lexical element, namely, the main verb. This prediction indeed holds. In (5c) (Juan está debiendo llegar a tiempo) the deontic modal auxiliary deber appears in a progressive periphrasis, as the complement of the functional auxiliary estar. As in (5a), the lexical auxiliary deber absorbs the aspectual (imperfective, progressive) modification from this functional auxiliary, so that what is understood progressively is the obligation to arrive on time. The event of arriving per se is not so understood. Deber is representative of the entire class of lexical auxiliaries in its behaviour with respect to the 'absorption' of functional information.

6 These two auxiliaries mark future tense and progressive aspect, respectively.
The examples in (5) demonstrate how lexical auxiliaries define local modification domains; the lexical / functional distinction (or rather, the distinctions in dependency types that it captures) is critical to the adequacy of structural descriptions of auxiliary chains. This distinction will be pursued in Section 3.
The lexical auxiliary / functional auxiliary distinction that we have illustrated in examples (5a-c) is summarised in Table 1.7,8

Table 1. Lexical and functional auxiliaries

Transparent / functional: progressive <estar + GER> 'to be -ing'; perfective <haber + PTCP> (have -en); <ir a + INF> (be going to); <acabar de + INF> (in its 'recent past' reading; have just -en).

Opaque / lexical: phasals (<empezar a / comenzar a + INF> 'to start'; <terminar de / acabar de + INF> 'to finish'; <continuar / seguir + GER> 'to keep -ing'); positionally unrestricted modals (<tener que + INF> 'to have to'; <poder + INF> 'to be able to / to be allowed to'; <deber (de) + INF> 'to have to'); scalars (<llegar a + INF> 'to go as far as to'; <acabar + GER> 'to finish by -ing'); first-position auxiliaries (<soler + INF> 'to be accustomed to -ing'; <haber de + INF> 'to have to'); <haber que + INF> 'it is necessary to'; <tardar en + INF> 'to take (time) to'.

7 Spanish linguistics has traditionally noted positional restrictions on some auxiliaries, notably <soler + INF> and <haber de + INF> (and the impersonal <haber que + INF>, which can only be conjugated in 3SG), which can only appear in declarative clauses and in first position in finite clauses (the infinitives we have cited are strictly citation forms; these auxiliaries also have no gerund or participle; see García Fernández, 2006: 245, 165, respectively). These restrictions reflect the auxiliaries' having defective paradigms, as noted among others in RAE-ASALE (2009) §4.4c and §28.9b and in Bravo & García Fernández (2016): as an example, habitual soler can only be conjugated in the imperfective aspect (and even then, with temporal and modal restrictions: the indicative imperfective future does not exist; there is only one occurrence of the imperfective subjunctive future soliese and two of the alternative form soliera in the CREA corpus, consulted on 10/06/2022). The defective paradigm of soler was noted as early as Correas (1625 [1903]).

8 Examples (5a), (8a,c), and others to be presented contain a further auxiliary, passive ser 'to be', which is in a class by itself. We discuss this auxiliary in Section 3.2.

The distinction between lexical and functional auxiliaries touched on here is critical to our CG analysis of the Spanish auxiliary system, which is the focus of Section 3. In particular, our discussion will focus on the syntactic properties of modal auxiliaries as lexical auxiliaries, but our formal analysis is more general (see the grammar fragment in Appendix A).

Categorial grammar

The theoretical framework for our analysis of Spanish auxiliary chains comes from the tradition of categorial grammar (CG). CG was introduced by the Polish philosopher and logician Kazimierz Ajdukiewicz (1935) and, like the PSG tradition, has evolved in more than one way; the version of CG that we adopt involves expansions upon Ajdukiewicz's original proposal, most notably in Montague (1973), Dowty (1978, 2003), and Schmerling (1983a, b, 2019). CGs have the mathematical structure of an algebra, just as PSGs do; but rather than make use of rewriting operations as a PSG does, a CG's formal operations manipulate a language's expressions rather than grammatical symbols (lexical elements and their phrasal projections in classical PSGs; terminals and non-terminals, in formal language theory). Recall that an algebra consists minimally of a non-empty generator set A and a possibly empty set of operations on A; if the set of operations is non-empty, as it is in any natural language, A is the smallest set that is closed under the operations. The generator set of the CG algebra is a set of basic expressions, and its operations recursively yield a set of derived expressions; the field of the algebra, then, is the union of these two sets. The early extensions of Ajdukiewicz's CG by Bar-Hillel (1953) and Lambek (1958) follow his inasmuch as they recursively define syntactic categories on the basis of two kinds of information: the role they play in the language's compositional semantics and, for derived expressions, the categories of their constituent expressions and how those expressions combine. In the more recent Montague-Dowty-Schmerling variety of CG, in contrast, a language's system of syntactic categories is based only on the first kind of information: their role in the compositional semantics, which we illustrate shortly. Because the categories are no longer based solely on the language's formal operations, the assignment of sets of expressions to categories is now accomplished by supplementing the category indices with a system of syntactic rules. These rules assign sets of expressions to categories, directly in the case of basic expressions and, in the case of derived expressions, by the categories of their constituent expressions and the formal operations deriving them, since the latter are not already encoded in the categories themselves. In Appendix A of this paper, we include examples of both kinds of syntactic rule.
We will assume the basics of CG grammars presented in Montague (1973) and its extensions in Dowty (1979, 2003) and Schmerling (1983a, b, 2019), with some modifications to be developed in Section 3. We follow Montague (1970) and Schmerling (1983a, b, 2019) in defining a language L as containing an algebra <A, P>, where A is a set of expressions and P is a set of formal operations defined over A, or, as they were called especially in the early twentieth-century American linguistics of Franz Boas, Edward Sapir, and their students, processes (the algebraic character of this model of grammar is discussed in Hockett, 1954; see Schmerling, 1983a for extensive discussion). The processes are productive; in mathematical terms, the set A is closed under the processes.9 Within the set A, we distinguish basic and derived expressions; derived expressions are those that are the outputs of formal operations.
Beyond the algebra that constitutes its formal core, a language contains a set of syntactic categories, each of which is a set of expressions indexed according to principles to be discussed shortly. The categories comprise a filter on this algebra. The structure of the system as a whole is shown in the Venn diagram in Figure 1.

9 That is, any output of a process is itself a member of the set of expressions. For example, if the process is prefixation of un-, and if tie, untie, ununtie, and so on are all members of the set A of expressions, then A is closed under un-prefixation: the outputs of repeated applications of this process are also members of A.
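To make the closure property in fn. 9 concrete, here is a minimal Python sketch of a toy algebra <A, P> with un- prefixation as its single process. This is our own illustration, not part of the fragment, and all function and variable names are hypothetical; since A is infinite, the sketch enumerates only a bounded approximation of it.

def prefix_un(expression):
    # The single process P: maps any expression to a new expression.
    return "un" + expression

def closure_up_to(generators, process, depth):
    # Enumerate the members of A reachable from the basic expressions
    # by at most `depth` applications of the process; A itself is the
    # union over all depths and is closed under the process.
    expressions = set(generators)
    frontier = set(generators)
    for _ in range(depth):
        frontier = {process(e) for e in frontier}
        expressions |= frontier
    return expressions

print(sorted(closure_up_to({"tie"}, prefix_un, 3)))
# ['tie', 'untie', 'ununtie', 'unununtie']: every output of the
# process is itself a member of the set of expressions.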
The presence or absence of this intimate syntax/semantics link-in mathematical terms, a homomorphism-, among other formal properties, distinguishes the Montagovian variety of CG adopted here from Combinatory Categorial Grammar (CCG; see especially Steedman, 2014;Steedman & Baldridge, 2011) and perhaps other systems whose names contain the term categorial grammar. Unlike CCG, which derives more from the tradition of Lambek (1958) than the Montagovian tradition we use, the formal operations by which expressions of the language combine do not index the syntactic categories. Since the semantic value of a category A/B expression is always a function from expressions of category B to expressions of category A, such an expression is always appropriate for taking a category B expression as its argument-i.e., as its complement.
CGs, as we have summarised them, can be illustrated by the following Englishbased toy grammar, which includes a very reduced set of categories, expressions, formal operations, and syntactic rules. Syntactic rules for derived expressions must specify the categories of the expression or expressions that are inputs to the rule and the formal operations that derives them. The rules in (6) follow the format in Montague (1973); rules S0-S3 are adapted from Schmerling (2019: §6.8): (7) Categories: FC ( Syntactic rules: S0 (rule S1 in Montague, 1973). BA ⊆ PA, for every category A. (The basic expressions of category A are a subset of all the expressions of category A, for every category A) S1. If α ∈ PFC/NP and β ∈ PNP, then F1(α, β) ∈ PFC, for all α, β. S2. If α ∈ PNP and β ∈ P(FC/NP)/NP, then F1(α, β) ∈ PFC/NP, for all α, β.
With these rules, we can formulate a rigorous proof that the expression John breaks the vase belongs to the language as an expression of category FC:

(8) The vase is a basic expression of category NP.
Breaks is a basic expression of category (FC/NP)/NP.
Breaks the vase is a well-formed expression of category FC/NP, by S2.
John is a basic expression of category NP.
Now if we add a line after the fourth line in (8) in which we make use of F1(breaks the vase, John), then we arrive at what we sought to prove: John breaks the vase is a well-formed expression of category FC, by S1, QED.
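The toy grammar lends itself to direct mechanisation. The following Python sketch is our own illustration (all names are hypothetical and not part of the fragment); it encodes the formal operation F1 and rules S1-S2 and reproduces the proof just given. Note that F1 places its second input leftmost, which is what makes F1(breaks the vase, John) come out as John breaks the vase.

def F1(alpha, beta):
    # The toy grammar's single formal operation: concatenation with
    # the second input placed leftmost.
    return beta + " " + alpha

# Rule S0: basic expressions and their categories (for reference).
basic = {"John": "NP", "the vase": "NP", "breaks": "(FC/NP)/NP"}

def S2(alpha, beta):
    # S2: alpha of category NP, beta of category (FC/NP)/NP;
    # the output F1(alpha, beta) is of category FC/NP.
    return F1(alpha, beta)

def S1(alpha, beta):
    # S1: alpha of category FC/NP, beta of category NP;
    # the output F1(alpha, beta) is of category FC.
    return F1(alpha, beta)

vp = S2("the vase", "breaks")  # 'breaks the vase', category FC/NP
fc = S1(vp, "John")            # 'John breaks the vase', category FC
print(fc)                      # John breaks the vase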
In a Montagovian analysis tree, the relative order of the constituents making up an expression higher in the tree reflects which is the functor and which is the argument. This information is available from the categories of the expressions and the syntactic rule specified, in another departure from PS rules: 'breaks the vase, FC/NP, 2' is exactly equivalent to 'breaks the vase is a well-formed expression of category FC/NP, by S2' (see Montague, 1973: 227). In this sense, analysis trees are more informative than PS trees: at every point we know the expression involved, its category index, and thus, for functors, the category of the expression they can combine with. The analysis tree in (7') shows that the vase does not occur leftmost in the expression breaks the vase and that John does occur leftmost in the expression John breaks the vase; this information is given in syntactic rules 2 and 1, respectively, and (7') shows that they are applied to breaks and the vase, in the first instance, and to breaks the vase and John, in the second. It should now be apparent that the mnemonic value of the fraction notation lies in the way it diagrams that concatenation of an expression of category A/B with an expression of category B yields an expression whose category index is the result of the two B's 'cancelling each other out', yielding an expression of category A: when the FC/NP expression breaks the vase combined with the NP John, the two instances of NP cancelled out, yielding FC as the category of the whole expression John breaks the vase.
A final way in which a Montague-style analysis tree is distinct from a PS tree is that a category index like FC/NP in (7') is not a PS-style label: there is no 'labelling algorithm' (Chomsky, 2013) accompanying structure building or rules of the grammar making reference to labels or structural variables. In contrast to VP or NP in a PSG, FC/NP and FC in a CG are not non-terminal nodes that rewrite as whatever they dominate. CGs are not grounded in a rewrites-as relation; in other words, there is no is-a relation defined for mother node-daughter node pairs as in PSGs. In short: analysis trees in a Montagovian CG are not phrase markers, nor are they reducible to phrase markers.
The grammatical formalism we have chosen for our analysis has the advantage of being both highly adaptable and fully explicit, in terms of both the categories it makes available and the combinatory potential of expressions of those categories. Recall that category indices in a CG are more informative than node labels in PSGs: given the interpretation of the fraction notation introduced above, if we know that an expression is of category A and that one of its constituent expressions is of category B, we can deduce that the category of the other constituent expression is A/B. An important emphasis of Ajdukiewicz (1935) is that his CG allows one to discover previously unknown categories; for example, if we know that an expression is of category FC/NP and that one of its constituent expressions is of category NP, we can deduce that the category of the other constituent expression is (FC/NP)/NP.
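The deductive step Ajdukiewicz emphasises can itself be stated as a one-line computation. The Python sketch below is our own illustration (the function name is hypothetical); it solves for the category of the missing constituent:

def missing_category(whole, part):
    # If an expression of category `whole` has a constituent of
    # category `part`, the other constituent is of category
    # whole/part; derived indices are parenthesised for clarity.
    w = "(" + whole + ")" if "/" in whole else whole
    p = "(" + part + ")" if "/" in part else part
    return w + "/" + p

print(missing_category("FC", "NP"))     # FC/NP
print(missing_category("FC/NP", "NP"))  # (FC/NP)/NP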
Having now summarised the principal features of the variety of CG we are using and noted some of its overall benefits, we turn to a detailed look at the aspects of Spanish auxiliary chains that are problematic for PS-based approaches and a demonstration of the natural accounts of them that are available in our CG alternative.
Where monotonic approaches fall short
We have indicated that the works on Spanish auxiliary chains cited in Section 1, on which our analysis is based, identify technical and empirical difficulties faced by X-bar theory and its comparatively recent incarnations (Merge-based Minimalism; Chomsky, 1995 and much related work; see Bjorkmann, 2011; Harwood, 2014; Ramchand & Svenonius, 2014; Ramchand, 2018 for surveys of Minimalist approaches to auxiliary verbs; also Falk, 2003 for a Lexical Functional Grammar analysis that faces similar difficulties). We will now see that a critical property of Spanish auxiliary chains is that they display a variety of dependencies of varying computational complexity, according to the properties of the specific auxiliaries making them up. This variation is illustrated in examples (8a-c), to which we will return in Section 3.2. Essentially what we have in (8a) are two lexical elements (the lexical auxiliary tener que and the lexical verb ayudar), each modified by a non-lexical auxiliary (the perfect haber and the passive ser, respectively). Ha tenido que in turn modifies ser ayudado, such that the obligation pertains to an event in which someone is helped. An adequate analysis must group ha with tenido que in a syntactic unit that excludes ser and ayudado if it is to capture the semantic properties of the sentence. In (8b), each auxiliary modifies an immediately adjacent element of the chain; we have examples like this whenever the auxiliaries in the chain are all lexical auxiliaries. Example (8b) requires the deontic meaning expressed by debía to affect the modal poder but not the phasal auxiliary empezar a: the subject was obligated to be able to start working earlier, but, as we have already seen with lexical auxiliaries, this does not entail that the subject was obliged to actually start working or that he/she was obliged to work. Because all the auxiliaries in this sentence are lexical auxiliaries, they are each, as we have indicated, opaque to aspectual information expressed by auxiliaries other than the one immediately preceding them. The modification pattern of (8b) is that predicted by a monotonically growing PSG (transformational or not; see Falk, 2003). In (8c), both functional auxiliaries, va a and haber, modify the lexical verb asesinar, as does the passive auxiliary ser, with no one auxiliary modifying any other. Note that if va a modified haber there would be a clash between the future meaning supplied by va a and the temporal-aspectual meaning of haber, which always involves past time reference; haber cannot be localised in time by va a. Sentences like (8c) arise when a sequence of functional auxiliaries is immediately followed by passive ser; in sentences like these none of the auxiliaries absorbs the aspectual and temporal information of the auxiliaries occurring to its left. Recall that functional auxiliaries modify but cannot themselves be modified; this is also true of passive ser. In sentence (8c), then, the auxiliaries all modify the main verb asesinar, as we have indicated. These modification relations yield the correct future perfect interpretation of a passive VP.
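The three patterns in (8a-c) appear to follow from one generalisation implicit in the discussion above: an element modifies the nearest element to its right that can be modified, i.e. the nearest lexical auxiliary or main verb, with functional auxiliaries (and, for this purpose, passive ser) skipped over. The Python sketch below is our own illustration of that generalisation, not the paper's formal analysis; the names and the L/F tags are hypothetical. A monotonic structure like (10) below would instead make every auxiliary modify everything to its right.

LEXICAL, FUNCTIONAL = "L", "F"

def modification_targets(chain):
    # chain: list of (form, kind) pairs, main verb last (and lexical).
    # Each non-final element modifies the nearest following element
    # that is lexical; functional elements are transparent and are
    # never themselves modification targets.
    targets = {}
    for i, (form, _kind) in enumerate(chain[:-1]):
        for later_form, later_kind in chain[i + 1:]:
            if later_kind == LEXICAL:
                targets[form] = later_form
                break
    return targets

# (8a) ha tenido que ser ayudado: ha modifies tenido que, while
# tenido que and passive ser both modify ayudado.
chain_8a = [("ha", FUNCTIONAL), ("tenido que", LEXICAL),
            ("ser", FUNCTIONAL), ("ayudado", LEXICAL)]
print(modification_targets(chain_8a))
# {'ha': 'tenido que', 'tenido que': 'ayudado', 'ser': 'ayudado'}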
Capturing the semantic relations among the items in a chain whose auxiliary members are all functional auxiliaries, as in (8c), is not straightforward in Minimalism. As a consequence of its grammatical architecture, in which structure building is severed from both the lexicon and semantics, it is not possible for internal properties of the elements manipulated by the syntactic operations of Internal and External Merge (including, in our case, being a lexical or a functional auxiliary) to impact the format of phrase markers, so that these always, in the case at hand, have the form [Aux1 [Aux2 [Aux3…[Auxn [VP]]]]]. These properties may interact with structure building only if they are expressed as features that can enter into Agree relations (Adger, 2003; Di Sciullo & Isac, 2008; Wurmbrand, 2014; see also Harwood, 2014 for an approach to auxiliary sequences that relies heavily on operations over lexical features). However, since the Agree operation requires asymmetric c-command relations between Probes and Goals (Chomsky, 2000), the format of the structure itself (the sequence of auxiliary heads mentioned above) still cannot change. Non-monotonicity in sequences of auxiliaries is not contemplated in structurally monotonic approaches.
Let us flesh these points out. Given our formal characterisation of a language (see Schmerling, 2019: 16-17 for a complete formal definition), we can ask whether the algebra <A, P> for Spanish has the property of commutativity. We can see that this is not the case when we consider the Spanish verbal domain; note that (6a-b) (repeated here as (9a-b)), while both grammatical, are not synonymous: There is no evidence independent of the functional hierarchy itself that either (9a) or (9b) is transformationally derived from the other. That is, there is no empirical test to defend the position that one is more basic than the other, nor is there a way to test whether movement has taken place to repair the posited discrepancy between word order and an a priori universal functional hierarchy (Cinque, 1999, 2004).12 This issue arises with any global functional skeleton based on an underlying universal order (e.g., Bjorkmann, 2011; Ramchand & Svenonius, 2014).13,14 Here we reproduce Cinque's (2004: 133) hierarchy (see also Cinque & Rizzi, 2016): If we assumed Cinque's hierarchy, then (9a) would have to be derived via movement of deber (which would be a head Mod_obligation) from a position below estar (which would be a head Asp_progressive) to a functional projection above estar. This is not a peculiarity of deber: the same paradigm emerges with all deontic modals (e.g., está teniendo que trabajar 'is having to work' vs. tiene que estar trabajando 'must be working': either epistemic or deontic) and also in the interaction between tense, aspect, and modality.15 In Cinque's view the functional hierarchy is determined by Universal Grammar (which also determines the format of phrase markers as binary-branching and projecting, as in Chomsky, 1995, 2013; Kayne, 1994, 2018; and much related work). Crucially, the hierarchy translates directly and uniformly into a clausal skeleton in which, if A is higher than B, then the projection headed by A must c-command the projection headed by B. Since the order that emerges from Cinque's hierarchy is (9b), the structure of (9a) must be that in (9a'):

(9a') Juan debe_i estar t_i trabajando todo el día

But apart from the fact that such a view forces us to choose arbitrarily that certain auxiliary sequences are more basic than others, a strictly syntactic interpretation of the Cinque hierarchy runs into problems, most notably because it allows a limitation to a single kind of predication structure; our examples (8a-c) showed that no such limitation exists for Spanish.16 If, with Ladusaw (1980), May (1985), and many others, we define the scope of a node α as the set of nodes in a PS tree that α c-commands, then we are forced to predict that a single kind of modification is possible:

(10) [Aux1 [Aux2 [Aux3 [Lexical verb …]]]]

12 That is, any a priori functional clausal skeleton, as assumed in Exoskeletal models (Borer, 2005) and Nanosyntax (e.g., Baunaz & Lander, 2018).

13 Bravo et al. (2015), García Fernández et al. (2017), and Krivochen & García Fernández (2019) argue that this structural variety cannot be generated by an approach requiring uniformity and monotonicity in structure building, as with a Merge-based system like that in Kayne (1994, 2018) or Chomsky (1995, 2013), or a universal template like Cinque's (1999, 2004).

14 Theories like HPSG diverge from Minimalism on this point: rather than assume a universal underlying fixed order of functional heads, HPSG makes use of sets of linearisation principles that are assumed to hold widely though not universally. See Müller (2019) for discussion. In classical LFG (e.g., Kaplan, 1995) the order of terminals is read directly off c-structure, but more recent developments separate terminal strings from c-structure (Dalrymple & Mycock, 2011).

15 In this respect, note the contrast between (i) and (ii) (from Krivochen, forthcoming): These examples show (a) that perfective aspect is possible above or below the modal, and (b) that the interpretations are not equivalent, since a perfect complement of a modal has a counterfactual interpretation that a perfect modal does not have.

16 An illustration of the procrustean character of a template-based approach is the following quotation from Cinque (2004: 133): "[…] the functional portion of the clause, in all languages, is constituted by the same, richly articulated and rigidly ordered, hierarchy of functional projections […]" [emphasis ours]. In such a scenario, the different orders found in Spanish auxiliary chains must be handled via movement transformations, an approach for which there is no independent motivation and which therefore has the status of an ad hoc stipulation. Furthermore, a functional hierarchy like Cinque's can only generate one kind of modification pattern (the monotonic structure [XP X [YP Y [ZP Z [… ]]]], defining a regular language; this has the problems noted above in delivering the correct segmentations), which, as we argue at length, undergenerates and is thus empirically inadequate; see also García Fernández & Krivochen (2019).
In the context of the analytical tradition for auxiliaries originated in Chomsky (1957, 1964b) and Ross (1969) and developed within X-bar theory and Minimalism,17 the predication structure in (10) is incorrectly predicted to be the only kind of modification pattern that can exist in a Spanish auxiliary chain (or indeed in any auxiliary chain, since the format for phrase markers is universal). As illustrated above, however, recursive monotonicity is only one of several possible modification patterns in auxiliary chains. Even if head movement could, however stipulatively, take care of the issue of auxiliary order in (9a), it would still yield an incorrect segmentation for (9b): the progressive only affects the modal, not the lexical verb. The correct segmentation for (9b), if a syntactic segmentation is to be suitable for the compositional semantics as in the approach we have adopted here, must be [está debiendo] [trabajar], not [está [debiendo [trabajar]]]. A single universal template faces difficulties not only with respect to linear order, but also with respect to the constituent structure assigned to a string.
Structural uniformity is not only a property of generative grammar. The type of dependency in (8c), in which all auxiliaries modify the main verb but no other auxiliary, is the only one explicitly mentioned in the prominent RAE-ASALE Spanish grammar (2009, §28.1a):

The term verbal periphrases refers to syntactic constructions in which an auxiliary verb affects an auxiliated [Sp. auxiliado] verb, variously called main or full, occurring in an impersonal form (that is, an infinitive, gerund, or participle) without giving rise to two distinct predications. The auxiliary verb is usually conjugated (…), but need not be, according to the syntactic properties of the sentence (…). Even so, auxiliary verbs can occur in a chain [translation ours].
The RAE-ASALE definition, representative of the Hispanic grammatical tradition, inevitably leads to the conclusion that auxiliaries, together or individually, affect only the "auxiliated" verb, which can only be the main verb. While this idea is not entirely wrong, it is insufficient, inasmuch as it predicts only the (8c) kind of structure. We have seen that this structure must be distinguished from the (8a) and (8b) structure types.
We have seen that "[t]he order in which auxiliaries appear does not linearly correlate with interpretative effects, for a given string of symbols can display several kinds of structural dependencies which are all in principle applicable […]" (Bravo et al., 2015: 77-78). This point is not trivial. It does not entail that all possible orders (i.e., all logical permutations of terminal symbols) are grammatical (see García Fernández et al., 2017 for analyses of restrictions in chains); it states that more than one order is possible and that each of the grammatical orders given a sequence of auxiliaries corresponds to a distinct interpretation, to which a distinct structural description must correspond. In a monotonic, binary-branching-all-the-way-down generative engine, the only way to build structure is via discrete recursive combinatorics. If the only structure-building operation is (Internal or External) Merge, which always manipulates two elements, the resulting object then being labelled depending on the identity of which of the two elements is the head (Chomsky, 2013), then, without the invocation of an independently unmotivated operation on phrase markers, there is no room for variation in phrase-marker format (see also Kayne, 1994, 2018). In this scenario, instances of {H, H} (two heads) or {XP, YP} (two maximal projections) require some readjustment to yield {H, YP} and restore derivational rhythm. The requisite process is usually movement (Internal Merge), as illustrated in (9a'), which in turn requires either operations to reconstruct the pre-transformational phrase marker or the inclusion of indices to the same effect.

17 See, e.g., Adger (2003: §5.3.2) for a feature- and projection-rich Minimalist view, also Ramchand (2018). Falk (2003) presents a similarly monotonic LFG approach based on VP recursion at c-structure.
As should be apparent by now, the English auxiliary system does not work like the Spanish one (a point that should not be cause for surprise in the categorial system, again because of CG's adaptability). To illustrate the differences between English and Spanish auxiliaries to which we have referred, we note that Schmerling's (1983b) arguments for modal and aspectual auxiliaries' forming a grammatical unit with nominative subjects in Finite Clauses (FCs) and Inverted Finite Clauses (IFCs) in English do not apply to Spanish:

• Only two English auxiliaries inflect like finite verbs for Tense, Aspect, Modality (TAM), and agreement: have-has-had and be-is-was. In Spanish, however, auxiliaries, with the two exceptions noted in fn. 7, inflect for TAM the same way lexical verbs do. Importantly, however, in English only the first auxiliary in a sequence can invert (Quirk et al., 1985 refer to the auxiliary that inverts as the operator in a sequence):

(14) a. He might have been being questioned by the police
b. *Might have he been being questioned by the police?
c. Might he have been being questioned by the police?
• Spanish auxiliaries are not restricted to specific clause types (again, see fn. 7).
• There are many aspects of the Spanish auxiliary system that fall outside the scope of this work, including a detailed account of co-occurrence restrictions of the kind that forbid *está siguiendo cantando ('he/she is continuing singing') (see García Fernández et al., 2017); some of these restrictions are orthogonal to the syntax of auxiliary chains. The focus of the present contribution is the interaction between auxiliaries that can modify other auxiliaries as well as be modified themselves and those which can only modify, and how to provide adequate characterisations for constructions where these appear. To the extent that Spanish is not the only language where modal auxiliaries are not positionally restricted18 as they are in English (cf. e.g., Italian ho potuto lavorare 'I have been able to work', but crucially *sto potendo/dovendo lavorare for most speakers 'I am currently being able to/having to work', unlike Spanish), the theoretical framework specified here has a wide applicability. Given the descriptive observations made above, a categorial segmentation of Spanish that differs from that of English not only in terms of what constitutes a criterion for auxiliary-hood but also in terms of what format the structural descriptions of sequences of auxiliaries require has strong empirical justification.

18 VP ellipsis in Spanish is impossible with perfect haber, future ir a, progressive estar, and passive ser:
i. *María va a llegar tarde y Juan también va a / *pero Juan no va a. 'María is going to arrive late, but Juan is not going to'
ii. *María está trabajando y Juan también está / *pero Juan no está. 'María is working and Juan is too'
iii. *María ha trabajado y Juan también ha / *pero Juan no ha. 'María has worked and Juan has too'
iv. *María fue traicionada y Juan también fue / *pero Juan no fue. 'María was betrayed and Juan was too'
VP ellipsis with modal and phasal auxiliaries forces us to consider data that go beyond the scope of the present paper (e.g., root modals allow for VP ellipsis but not epistemic modals; see Krivochen, forthcoming). Nevertheless, Spanish is crucially different from English in not having an English-like general VP ellipsis rule which applies regardless of the specific auxiliary involved or its interpretation.
A categorial grammar of the Spanish auxiliary system
We can now give the following summary of the properties of Spanish auxiliary chains that we want our analysis to account for:
• There is no a priori upper bound on the number of auxiliaries that a chain may contain.19
• The relative order of auxiliaries is not fixed a priori, and each permutation is semantically significant (such that ha podido trabajar 'has been able to work' is not synonymous with puede haber trabajado 'may have worked').
• There are two kinds of auxiliaries, lexical and functional. Lexical auxiliaries may modify either a saturated FC/NP20 or a basic or derived lexical auxiliary. They may also be modified by a lexical or functional auxiliary: in ha debido hacer eso ('he/she has been under the obligation to do it'), ha modifies debido, and ha debido modifies hacer (eso). Functional auxiliaries may only modify a main verb or a lexical auxiliary; they may not themselves be modified: in va a haber hecho eso ('he/she will have done it'), va a does not modify haber, but only hecho (eso); similarly, haber only modifies hecho (eso).
19 This does not mean, obviously, that auxiliary chains can be infinitely long. It means that, unlike English, it is impossible to formulate a single rule that makes reference to all possible auxiliaries and is valid a priori (cf., e.g., Chomsky's 1957 phrase structure rule for English auxiliary chains).
20 A saturated FC/NP is the category of basic (lexical) or derived intransitive verbs (verb phrases) which require no further arguments to become expressions with a single NP argument, the combination yielding a FC (saturated FC/NPs may still be modified by optional modifiers: lexical auxiliaries and traditional adverbs). In what follows we will often find it convenient to use Montague's abbreviation IV for the (basic or derived) intransitive verb category.
We turn now to the categorial framework we will use to account for these properties.
In the version of CG introduced by Lambek (1958) (cf. Section 1) and in the work of those who take Lambek's system as their point of departure (see Moortgat, 2011 for discussion), the formal operations deriving functors are built into the names of the categories that index them, so that syntactic rules like those we introduced in that section would have been redundant and therefore unnecessary. The toy grammar that we introduced in that section, which did make use of syntactic rules, anticipated the kind of CG that we adopt in the Spanish grammar fragment in Appendix B. For our Spanish grammar fragment, the set C of available categories is defined as the smallest set such that:
• FC, NP ∈ C; and
• For all X, Y, if X, Y ∈ C then X/Y ∈ C.
X/Y designates a category of expressions that combine with expressions of category Y to yield expressions of category X.
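The recursive definition of C can be made concrete in code. The following is a minimal sketch of ours in Python (it is not part of the paper's fragment): basic categories FC and NP plus a slash constructor, whose integer field anticipates the split categories X//Y, X///Y, ... discussed below.

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Basic:
        name: str                  # "FC" or "NP"

    @dataclass(frozen=True)
    class Slash:
        result: "Cat"              # the 'numerator' X
        arg: "Cat"                 # the 'denominator' Y
        slashes: int = 1           # 1 = X/Y, 2 = X//Y, 3 = X///Y, ...

    Cat = Union[Basic, Slash]

    FC, NP = Basic("FC"), Basic("NP")
    IV = Slash(FC, NP)             # intransitive verb: FC/NP
    TV = Slash(IV, NP)             # transitive verb: (FC/NP)/NP
    LEX_AUX = Slash(IV, IV)        # lexical auxiliary: (FC/NP)/(FC/NP), cf. Section 2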
Recall that the "slash categories" represent function/argument relations such that X/Y is a functor, Y is the category of its argument,21 and X is the category of the range of the function denoted by the functor; accordingly, the semantic value of the expression that results from combining X/Y with Y is the semantic value of the functor X/Y applied to the semantic value of its argument Y. Let ⟦α⟧ stand for the semantic value of α (Dowty et al., 1980). Then we can notate this as ⟦(X/Y)⟧(⟦Y⟧); an example that we discuss shortly is the semantic value of empezar a 'to start to' applied to the semantic value of the argument trabajar 'to work', or ⟦empezar a⟧(⟦trabajar⟧). Productive work on natural-language compositional semantics using CG did not make great headway before Montague (1973), but the innovation of basing the syntactic categories on function/argument relations goes back to Ajdukiewicz (1935), and the successful implementation of Ajdukiewicz's insight has been a goal in all the versions of CG introduced after his.
21 In Montague (1973) and work based on it like Dowty (1979, 2003) and Schmerling (1983b, 2019), the operations that effect this combining need not be simple concatenation. Consider for example our operation F3 in Appendix B, which not only concatenates two expressions but also adds the object marker a (see fn. 22).
Expressions and operations
In our analysis of Spanish auxiliary chains, we will assume the formal operations in (16), where α and β are variables over expressions. The syntactic rules we assume are then introduced as this section continues and summarised in Appendix A. In the remainder of this paper, the syntactic rule numbers refer to those in that appendix:
(16) Formal operations:
F1(α,β) = the result of concatenating α to the left of β, for all α, β.
F2(α,β) = the result of concatenating α to the right of β, for all α, β.
F3(α,β) = the result of concatenating α a to the left of β, for all α, β.22
22 This operation applies in the derivation of expressions like interrogar a Juan 'to question Juan', which exhibits the a of so-called Differential Object Marking (DOM) in Spanish. DOM, broadly speaking, introduces animate direct objects; see Fábregas (2013) for a survey of research on this phenomenon. Since CG does not assign any special significance to orthographic words, as we have indicated, F3 is properly seen as including case inflection, marking the direct object Juan as accusative.
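Read operationally, the definitions in (16) are just three string functions; a toy rendering of ours (expressions as plain strings, abstracting away from the inflectional detail mentioned in fn. 22):

    def F1(alpha: str, beta: str) -> str:
        """Concatenate alpha to the left of beta."""
        return alpha + " " + beta

    def F2(alpha: str, beta: str) -> str:
        """Concatenate alpha to the right of beta."""
        return beta + " " + alpha

    def F3(alpha: str, beta: str) -> str:
        """Concatenate alpha, then the DOM marker a, to the left of beta (fn. 22)."""
        return alpha + " a " + beta

    print(F3("interrogar", "Juan"))    # interrogar a Juan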
We have described operations in some detail; we turn now to expressions. Consider, for instance, the different clause types that we can find in English or Spanish: indicative clauses, inverted indicative clauses (including interrogatives), subjunctive clauses, imperative clauses, and various non-finite clauses (infinitival, gerundial, participial). Montague (1973) introduced the innovation of splitting categories into subcategories such as those we have seen, while assigning each subcategory the same type of semantic value. Splitting Montague's clause category t into subcategories like indicative clause, infinitival clause, and so on allows the grammar to recognise different clausal subcategories and hence to capture the differences among them in internal constituency and distribution.23 Because of such differences, Schmerling (1983b) analyses the clause category in English as being split into clausal subcategories that include FC (indicative finite clause), IFC (inverted indicative finite clause), etc. In turn, expressions of each of these categories may have internal structure; expressions of the category FC may be basic (as with Yes or No), but they typically result from the application of operations to their constituent expressions. Where categories are concerned, we have focussed on category names in CG. A category itself is a set of expressions that is indexed by a category name. So, just as a subset of a set is itself a set, a subcategory like FC is itself a category. Accordingly, everything we have said about categories in this paper pertains equally well to subcategories. Category splitting as introduced in Montague (1973) is formally a trivial modification of CG theory; this is true despite the nontrivial increase it makes in CG's adaptability, our focus in Section 3. Consider now that in Schmerling's system the functor in an English FC is not an FC/NP, as in the toy grammar in Section 1, but the subject: a nominative NP, or member of the category FC/IV. This category assignment was motivated, among other things, by facts pertaining to VP ellipsis in English, which may leave a modified subject as a remnant. Neither our toy grammar nor the more ambitious Spanish grammar fragment we present in Appendix A has a category of nominative subjects like Schmerling's; the basic building blocks we motivate for clause formation in Spanish are the crucially different FC/NP and NP. An important thesis of this paper is that the CG system presented here has the adaptability to account for languages that, from a structurally monotonic perspective, can only be considered formally incommensurate.
We have seen that in a CG, basic expressions are distinct from PSG terminals; thus, basic expressions are not to be confused with words, which, as we have indicated, constitute terminals in mainstream approaches. In a CG approach a basic expression can consist of more than one orthographic word, as in (17) and (18) below. These examples use category names from Schmerling's (1983b) analysis of English; the use of multiple slashes will be discussed shortly:
(17) John would rather walk → (FC//IV)/(FC/IV)
(18) John will have walked → (FC////IV)/(FC/IV) (Schmerling, 1983b: 14, 22)
In Schmerling's analysis the subject is defined as belonging to a category that must combine with an expression of category IV. (17) and (18) contain multi-word basic expressions,24 which belong to categories of English auxiliaries that combine with nominative subjects (FC/IV) to yield formally modified expressions of category FC//IV or FC////IV: subjects with which an expression of a subject-modifier category, here would rather or will have, has been concatenated. Following the practice of Montague (1973), these modified subjects are distinguished from the category of non-modified subjects (FC/IV) by the use of additional slashes, as we have indicated: FC//IV in (17) or FC////IV in (18). The "numerators" in the modifying expressions are written FC//IV and FC////IV because the modifiers in (17) and (18) take unmodified nominative subjects as their complements. The motivation for this analysis of auxiliaries as expressions that modify subjects is discussed in Appendix A. When a process applies to a basic expression, the result is a derived expression; in Schmerling's analysis of (17) and (18), John, would rather, will have, and walk are all basic expressions, and John would rather, John will have, John would rather walk, and John will have walked are all derived expressions.
23 Montague (1973: 249) did not in principle limit category splitting to the categories he split in that work, although, since he included only indicative clauses there, t was not among those he split.
24 FC//IV and FC////IV could in principle of course also be category names for derived expressions.
Generative grammar has traditionally analysed English will and have as two independent heads (from Ross, 1969, Huddleston, 1974, and Akmajian et al., 1979, to Cinque, 2004; Bjorkman, 2011; Harwood, 2014; Ramchand & Svenonius, 2014; Cinque & Rizzi, 2016; Ramchand, 2018, and many related works; see also Falk, 2003 for an LFG analysis), projecting functional phrases in a strictly monotonically binary skeleton with a fixed order determined by Universal Grammar (expansions of the Inflectional domain, IP, include several kinds of AspectP, ModalityP, etc.). Schmerling (1983b), in contrast, analyses such expressions as 'will have' as basic, because of their lack of full syntactic and semantic predictability. The formal adaptability of CG makes it a suitable formalism for capturing the structural and semantic nuances of both the English and Spanish auxiliary systems; we detail this adaptability in Appendix B.
We have indicated that, following Montague (1973), Dowty (1979), and Schmerling (1983b, 2019), we use different numbers of slashes to indicate category splits, i.e., expressions of the same category but with different combinatory possibilities. For instance, in a Spanish-like SVO language an expression of category IV (FC/NP; cf. fn. 12) like trabajar must combine with an NP in the formation of an FC, via left concatenation of the NP expression to the IV expression: Juan trabaja. If the FC/NP combines with a functional auxiliary, yielding ha trabajado, the result of such combining still combines with an NP to form an FC, yielding Juan ha trabajado; we can designate the newly derived category FC//NP: a modified FC/NP. We will discuss modification of an FC/NP by a functional auxiliary shortly, as well as the justification for splitting the IV category.
We can now make explicit a point asserted in fn. 11: the fact that the CG theory by its very nature gives us a mathematical basis for rules of compositional semantic interpretation. This is clearest in the case of lexical auxiliaries, which can both modify and be modified; accordingly, we can say that (FC/NP)/(FC/NP) is the category of functions from FC/NPs to FC/NPs (IVs to IVs). To say this is to say two things: (i) that expressions of this category are syntactically defined to take FC/NPs (IVs) as their complements, and (ii) that, as we have discussed, they are at the same time semantically defined as the values of the functions they denote applied to the semantic values of those complements. As a specific example, consider that an expression of the (FC/NP)/(FC/NP) category like empezar a 'to start' is categorially defined to take an expression of the FC/NP (IV) category like trabajar 'to work' as its complement, yielding an expression of category FC/NP like empezar a trabajar 'to start to work'. Simultaneously, an expression of this category is categorially defined to have the default semantic effect of modifying the meaning of that complement. In this case, the architecture of the grammar automatically makes available a semantic rule saying that the result of combining a lexical auxiliary with an IV is a modification of the semantic value of that IV; translating the semantic rule schema just summarised into the interpreted Intensional Logic (IL) of Montague (1973) yields λP(empezar a′(P))(ˆtrabajar′),25 where P is a variable ranging over IV intensions (or senses). Following Montague (1973), the exact way in which the intension of the complement is modified is specified by the extension, or referent, of the auxiliary. Given the lexical semantics of empezar a, we have the result that in empezar a trabajar the internal temporal structure of trabajar is modified so that the beginning of the working is focussed upon rather than the work's entire course (Freed, 1979; Klein, 1992: §3; Laca, 2004).
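The pairing of syntactic and semantic rules just described can be caricatured in code. In this toy of ours, meanings are opaque tags standing in for IL intensions; only the shape of the rule, functor applied to complement, is the point:

    def empezar_a(P):
        """Illustrative meaning of the functor: λP. empezar-a′(P)."""
        return ("INCEPTIVE", P)    # focus on the beginning of the eventuality

    trabajar = ("WORK",)
    print(empezar_a(trabajar))     # ('INCEPTIVE', ('WORK',)) ≈ ⟦empezar a⟧(⟦trabajar⟧)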
We indicated earlier and reiterate now that this intimate syntax/semantics link is an essential feature of the Ajdukiewicz/Montague CG we have adopted in this paper. This fact has a crucial consequence: it makes no sense in this system to speak of something as "being handled in the semantics rather than the syntax" or vice versa; the two work in tandem. It cannot be overemphasised that the relationship between syntax and semantics in the variety of CG we are assuming follows from the architecture of the theory. Because of this intimate connection between syntax and semantics, it is unnecessary, in the case of lexical auxiliaries, to state a semantic rule for each syntactic rule we propose: the default structure of the relevant semantic rules is always inferable from the syntactic rules. The reader should bear this in mind, since we often discuss matters from the syntactic side of things; this does not obscure what sort of semantic rule we are assuming in any given example.26 Where the lexical semantics of the various auxiliaries is concerned, we keep our discussion at a general, informal level, since our focus is on their syntax (but see, e.g., Dowty, 1979; Fernando, 2015 for discussions of the semantics of tense and aspect; also García Fernández, 2000: Chap. 1-3 for a focus on Spanish specifically).
25 In IL, the symbol ′ immediately following a 'non-logical' (lexical) word indicates that it and the immediately preceding word constitute an abbreviation for that word in an IL translation that included lexical as well as logical expressions. Although in Montague-inspired approaches in linguistics it is common to give translations into IL as if those translations were themselves semantic rules, the practice of translation into IL is one Montague devised for his own purposes; we can think of it as shorthand for true semantic interpretation (which he called interpretation induced by translation). IL is itself a language, albeit a formal one, and as such it is in need of semantic interpretation every bit as much as any natural language. We give the IL translation in the text to emphasise the structure of the semantic interpretation we are assuming for lexical auxiliaries generally; an actual semantic rule would be what we summarise for our example with empezar a. The semantics for the whole modified IV empezar a trabajar depends of course on the lexical semantics of the elements of the IV as well (in this case, the simple trabajar) and, for a more complex IV, the semantics of the elements of that IV.
26 A semantic rule we employ with great frequency is functional application; as an example, the value of an IV (FC/NP) derived from a lexical auxiliary and an IV is the value of the function denoted by the X/Y expression (which in the case at hand is the auxiliary, as illustrated in the text) applied to the meaning of its complement as argument. Here, Y in the X/Y schema is IV. Functional application, though the default, is not the only semantic rule our grammar fragment requires (we will see, for example, that the semantics of functional auxiliaries is quite different from that of lexical auxiliaries, and the semantics of the passive auxiliary ser is quite different from either of these). In making use of more than one semantic operation we are following an innovation by Montague (1973) and those whose work is built on his, including Dowty (1979) and Schmerling (1983b, 2019).
Spanish intransitive verbs, basic or derived, belong to the category FC/NP: they take subjects as their complements in the formation of finite clauses. This analysis is sufficient to capture the combinatory behaviour of Spanish auxiliaries in detail. Before getting into our proposal, however, it will be useful to restrict the class of adequate grammars for Spanish auxiliary chains by observing in Section 3.2.2 how neither the traditional Spanish structuralist / functionalist perspective on auxiliary chains nor a strictly monotonic approach captures the variety of internal dependencies within auxiliary sequences.
Functional application and functional modification in auxiliary chains
What we can call the traditional perspective on auxiliary chains, in both structuralist and generative frameworks, is that their structure simply extends the syntax of single-auxiliary constructions; in other words, that their structure is strictly monotonic (see, among others, Ross, 1969; Zwicky, 1993; Guéron & Hoekstra, 1998; Falk, 2003; Cinque, 2004; RAE-ASALE, 2009; Bjorkman, 2011; Ramchand & Svenonius, 2014; Ramchand, 2018). Within this traditional view, we can distinguish (a) approaches in which the auxiliary chain constitutes one syntactically simple predicate (e.g., Alarcos Llorach, 1994; Gómez Torrego, 1999) and (b) approaches in which auxiliaries in a chain are distinct objects but share their configurational properties (i.e., they all head their own projections in an exhaustively binary-branching structural description).
When we focus on what auxiliaries modify, the following issue arises. If auxiliaries modified only saturated FC/NPs (in other words: if auxiliaries did not modify other auxiliaries), then an auxiliary chain would have a structure along the lines of either (20) or (21), the former inspired by approaches of type (a) above and the latter by approaches of type (b). Let us analyse (20) first. Structuralist-functionalist analyses of Spanish auxiliary sequences have traditionally assumed that a verbal periphrasis and an auxiliary chain have the same structure, modifier + modified, the only difference being that in auxiliary chains the modifier contains more than a single auxiliary. These analyses offer no glimpse of any structure internal to the chain. The idea that auxiliary chains act as uniform objects is expressed, for instance, by Gómez Torrego (1999):

On occasion, auxiliarity [auxiliaridad] in a single periphrastic head is given by an auxiliarity chain, that is, by two or more auxiliary verbs linked together which have an influence on the auxiliated verb, which can only be a single one […] Syntactically, we are dealing with simple sentences which can be segmented into auxiliary (the whole chain) and auxiliated (Gómez Torrego, 1999: 3346-3347).

From this perspective, the whole chain is a single modifier, a single expression assigned to a single category, without any discernible internal structure (see also Alarcos Llorach, 1994; Iglesias Bango, 2008; perhaps most radically Morera, 1991: 29). Empirical arguments against this analysis were the focus of examples (8a-c) in Section 2. If, on the other hand, we consider an alternative in which auxiliary modification must be uniformly monotonic and phrase structure must (by axiom) be binary branching, then we have the structure in (21) ((21) omits the usual rule indices, since no actual grammar is involved).27 (21), or any similar monotonically growing structure, such as those based on the so-called functional sequence (Cinque, 1999; Rizzi & Cinque, 2016), is inadequate, because it fails to capture the internal dynamics of the chain: that is, its appropriate modification patterns (see the discussion of our example (8c) above). This is because it is not possible under a strictly monotonic view of syntactic computation to define an object that includes only the analytic future form va a tener que and excludes the rest of the chain (compare the synthetic counterpart tendrá que, which is perhaps more transparently isolable from its complement ser ayudado in tendrá que ser ayudado); recall that only the obligation denoted by tener que is located in the future. If a node has scope over everything in its c-command domain, as we have noted, following Ladusaw (1980), May (1985), and much subsequent work, then (21) predicts that va a (the third-person-singular present form of the auxiliary ir a) should have scope over ayudado; in other words, the event denoted by the saturated FC/NP should be located in the future. But, in fact, va a only affects the deontic obligation to be helped; this auxiliary modifies tener que, but that information does not pass through to lower elements in the tree. In configurational terms, the same objections apply to uniform VP-embedding analyses such as Ross' (1969) and related work (see also Falk, 1984, 2003 for a similar idea within LFG). The structure in (21) permits only dependencies like (8b) to be adequately represented. For (8c), where auxiliaries cannot modify one another (where they must therefore all modify the lexical verb), and for (8a), where we need to define an object that includes only two members of the auxiliary chain and excludes the rest, (21) is descriptively inadequate.
27 Note, however, that while (21) is inspired by the monotonicity of structure building in the Minimalist Program, Kayne's (1994) […] command, which is instrumental to the LCA). Furthermore, note that considerations of familiarity lead us to depart in the top tier from the CG tradition of always writing functors to the left. (21) should be thought of as a hybrid between a PS tree and a Montagovian analysis tree, which we use for expository purposes only.
Having now looked at empirical inadequacies in two prominent approaches to the structure of Spanish auxiliary chains, we turn to the approach we advocate. To the best of our knowledge, there are no previous CG analyses of Spanish auxiliaries; to illustrate our point about the problems of a priori structural uniformity we need to refer to works dealing with English. For example, Bach (1983: 111) offers a model of the English auxiliary system that in fact follows rather closely the postulates of phrase structure grammar, though he separates modals (will, must, may, can…) from aspectual auxiliaries (have, be). Bach's system also incorporates features (à la Chomsky, 1965) as diacritics distinguishing categories, such that will and would are both (T\S)/(e/t),28 differing in the presence of a feature [pres] in the former and [past] in the latter (Bach, 1983: 112). We will return to Bach's proposal for English auxiliaries in Appendix A. In any case, it is important to note that any descriptively and explanatorily adequate theory of the English auxiliary system must be able to capture the fact that auxiliary ordering is rigid in English, quite unlike the case with Spanish:

In a theory of the [English] auxiliary, we would like to be able to account for the ordering of auxiliaries, so that they occur in the right order before the verb. Auxiliary sequences such as will have been eating are not at all uncommon, and can only be well formed with this exact ordering. (Carpenter, 1989: 210)

For Spanish, however, we have shown that there is variability in the position of auxiliaries in a chain that is both restricted and systematic, such that the modal + perfect + progressive + passive template that holds uniformly for English chains, and on which many claims about the purportedly rigid structure of the functional sequence are based, reflects only one of the several auxiliary orders available in Spanish; both (20) and (21) wrongly assign all auxiliaries to a single syntactic class, obscuring differences in distribution and interpretation. We have seen that Spanish auxiliary chains are syntactically and semantically heterogeneous; the approaches we here reject fail to take this heterogeneity as something empirically real. In contrast to the approaches to the structure of auxiliary chains that we have rejected, which are monotonic from theory-internal necessity, the architecture of our CG alternative has the flexibility to permit lexical auxiliaries to define local domains within which downward transmission of temporal and aspectual information in a chain is blocked, as discussed in Section 2.
We are now ready to see how CG is particularly well suited to capturing the systematic syntactic-semantic behaviour we observe in the data; in particular, the possibility of having modals affected by progressive, perfective, and temporal auxiliaries.28 It will be useful here to go step by step through the category definitions we are assuming. From this point forward, the syntactic rules we refer to are those given in Appendix A. We have already mentioned that intransitive verb phrases (IVs) need to combine with NPs to yield Finite Clauses. Lexical verbs are basic expressions of category TV (transitive verb, or (FC/NP)/NP) or IV (or others, in an extended fragment), according as they are transitive or intransitive; expressions of category TV combine with expressions of category NP to yield derived intransitive verb phrases, of a category defined as FC/NP like basic intransitive-verb expressions. Then, expressions of this category need to combine with expressions of category NP, by rule S2, to form expressions of the Finite Clause category (FC).29 We illustrate this in (22), now indicating which syntactic rule has applied at every point:30
(22) Juan ayudó a María, FC, 3
  Juan, NP
  ayudar a María, FC/NP, 4
    ayudar, (FC/NP)/NP
    María, NP
Recall now that we have emphasised a distinction between functional and lexical auxiliaries in Spanish. We have mentioned the following generalisation, due to Bravo et al. (2015) and García Fernández et al. (2017) and related work: lexical auxiliaries can modify and be modified, whereas functional auxiliaries can only modify; they cannot themselves be modified (i.e., anchored temporally or aspectually) by other auxiliaries. Rather, functional auxiliaries like <ir a + INF> or <haber + PTCP> are direct modifications of the expressions they are added to; they are in this respect more akin to inflectional elements than to expressions of a verbal category. We propose that this asymmetry reflects the following generalisation:
Functional auxiliary generalisation: Functional auxiliaries differ from lexical auxiliaries in not being introduced by concatenation.
If functional auxiliaries are not themselves (basic or derived) expressions of the language (i.e., if they are elements of set C in Figure 1 but not set B), then it follows that they cannot be modified: there is nothing to be modified where they are concerned. Our analysis thus explains Bravo et al. (2015)'s generalisation that they modify but cannot themselves be modified.
28 In Bach's Generalised Categorial Grammar, t is the category of truth-value-denoting expressions and e the category of individual expressions. Both are adopted from PTQ (see Montague, 1973: 222). T is the category of Terms, essentially NPs, and S, as in much of generative syntax, is 'sentence'; the symbol \, adopted from Lambek (1958), indicates concatenation of the argument to the left of the functor. Bach's use of features, as well as some of his category names, are usual neither in vanilla (Ajdukiewicz / Lambek / Bar-Hillel) nor in Montague-style CG.
29 We stick to simple verb phrases in this paper, because our focus is auxiliary chains; for these purposes, the choice of lexical verb is of little if any consequence. A full grammar of Spanish lexical verbs must, of course, ultimately capture the semantic and syntactic richness of verb typology, which would require additional categories (for example, ditransitive verbs, inherently pronominal verbs like avergonzarse (de) 'to be ashamed (of)', apoderarse (de) 'to take possession (of)', arrepentirse (de) 'to regret', and many others).
30 Our syntactic rules omit details of verb inflection. Note our rule S4, however, which addresses nominal morphology with a formal operation that concatenates ayudar with a direct object that we treat as marked with the differential object marker a. In its role as an accusative marker, a contributes no lexical meaning of its own to the larger expression of which it is a part. Items with this property are traditionally called syncategorematic. An analysis tree includes all and only the categorematic items (items assigned to categories) that make up the expression at its root.
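The derivation in (22) can be mirrored with the toy string operations introduced above for (16); this is a sketch only, since the real rules also handle inflection (here ayudar surfaces uninflected; cf. fn. 30):

    iv = F3("ayudar", "María")    # DOM rule (cf. fn. 30): TV + NP -> FC/NP
    fc = F1("Juan", iv)           # NP concatenated to the left of the IV -> FC
    print(fc)                     # "Juan ayudar a María"; finite "ayudó" omitted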
We now introduce a function ϕ, defined as in (23), to formalise the analysis of Spanish auxiliary chains defended in Bravo et al. (2015) and the later works we have cited:
(23) ϕ(X/Y) = X//Y
ϕ applied to an expression of category X/Y (e.g., FC/NP) yields a modified expression, as we have indicated; we notate this modification with an additional slash: X//Y (e.g., FC//NP is a modified X/Y expression, in this case a modified FC/NP). Note that, in accordance with our generalisation stated above, ϕ does not concatenate two expressions. The semantics of functional auxiliaries is accordingly different in a significant way from the semantics of lexical auxiliaries that we have discussed: since only one linguistic expression is involved in modification by a functional auxiliary, we are not dealing semantically with a rule of functional application (see fn. 26); rather, a single expression is the input to a rule of functional modification, and a functional auxiliary is accordingly a 1-place operator which, given our syntax, does not belong to a syntactic category. The specific modification involved depends of course on the functional auxiliary, but, as we have noted, functional auxiliaries as a class have to do with modification involving tense or external aspect. To give one example of the semantics of a functional auxiliary, the meaning associated with the addition of <estar + gerund> is progressive aspect; this can modify a lexical auxiliary (as in no está pudiendo ofrecer un buen servicio 'he/she is not currently able to offer good service') or a lexical verb (as in está trabajando 'he/she is working').
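In the category encoding sketched earlier, ϕ is a one-line function on categories: it increments the slash count, and, tellingly, it takes a single category rather than two expressions (again, our sketch, not the paper's formalism):

    def phi(cat: Slash) -> Slash:
        """(23): phi(X/Y) = X//Y; no concatenation of two expressions."""
        return Slash(cat.result, cat.arg, cat.slashes + 1)

    phi(IV)           # FC/NP -> FC//NP (a modified FC/NP)
    phi(phi(IV))      # FC//NP -> FC///NP
    phi(LEX_AUX)      # (FC/NP)/(FC/NP) -> (FC/NP)//(FC/NP)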
We come now to the passive auxiliary <ser + participle> (Bosque, 2014),31 which is in a class by itself, as we have suggested: it is neither a lexical verb nor a lexical auxiliary nor a garden-variety functional auxiliary, because of both its syntax and its semantics. Passive ser does not form a natural class with temporal-aspectual auxiliaries; the semantic rule corresponding to S7 (the rule that adds ser) simply uses a 1-place identity operation. Where passives are concerned, semantic complexity reflects rule S6, according to which the input is detransitivised. This change in diathesis involves not tense or aspect but the distribution of grammatical relations and thematic roles.
Before going into more detail about the rules that govern the introduction of functional auxiliaries, we need to make explicit the properties that differentiate passive ser from functional auxiliaries. In this paper we depart from Bravo et al. (2015) and García Fernández et al. (2017), who group passive ser with functional auxiliaries on the basis of the familiar criterion of its being able to modify but not itself be modified. This property does hold (see (8c) and the discussion that follows).
31 Spanish has a second passive auxiliary, <estar + participle>, which differs semantically from <ser + participle>: it is used to derive resultative passives (Bosque, 2014; RAE-ASALE, 2009: §28.5.2), whereas <ser + participle> forms eventive passives. There seems to be no formal difference between the two auxiliaries, although passive ser appears in auxiliary chains more frequently than estar: as an example, a Google search for ha podido estar ocupado por on 17 February 2021 yields three results, whereas ha podido ser ocupado por yields more than 5600.
Passive ser differs from prototypical functional auxiliaries in two ways. First, its meaning is simply a 1-place identity function; in this respect it has a meaning like that of copular ser, which we have not treated as an auxiliary in this paper. Second, it is introduced into syntactic structures by a simple rule that does not depend on the steps of the formation of a preceding input expression; in this respect it differs from the rules for functional auxiliaries making use of the function ϕ defined in (23) above, rules that we discuss shortly. This formal property of ser corresponds to the informal observation that ser always occurs immediately adjacent to the lexical verb, unlike prototypical functional auxiliaries.
Having seen the syntactic and semantic effects of functional auxiliaries, we can formulate the rule schemata for functional modification that are given in (25). First, however, we must recall that the functional auxiliaries are not assigned to syntactic categories as the lexical auxiliaries are but are introduced directly into structures, much as affixes are (compare our treatment of passive ser). The operations introducing the functional auxiliaries are given in (24); each of these operations plays a role in one of the rule schemata in (25), one schema for each functional auxiliary:32
(25) a. S6. If α ∈ PX/nY, then F5(α) ∈ PX/n+1Y, for all α, X, Y, where X and Y are variables ranging over the "numerators" and "denominators", respectively, of functor categories, and where n is an integer and /n an abbreviation for n slashes.
b. S7. If α ∈ PX/nY, then F6(α) ∈ PX/n+1Y, for all α, X, Y, where X and Y are as in S6.
c. S8. If α ∈ PX/nY, then F7(α) ∈ PX/n+1Y, for all α, X, Y, where X and Y are as in S6.
d. S9. If α ∈ PX/nY, then F8(α) ∈ PX/n+1Y, for all α, X, Y, where X and Y are as in S6.
We can furthermore use the function ϕ introduced in (23) to formulate the meta-rule schema in (26):
(26) S10. If α ∈ PX/nY, then ϕ(α) ∈ PX/n+1Y, for all α, X, Y, where ϕ is a metavariable ranging over F5-F8 and where X, Y, n, and /n are as in (25a).33
32 We use the notation /n as an abbreviation for "n slashes". The use of variables in the rules in (25) maintains CG's objective of having heuristic value for the determination of new grammatical specifications: once a new Spanish functional auxiliary is identified, a ready-made rule is automatically available for introducing it into structures.
33 As of this writing, n in these rule schemata appears to us to range over 0-2.
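To see a schema instance in action, here is a hedged continuation of the toy code. The operations F5-F8 are only named in (24), so the perfect-auxiliary morphology below is our hypothetical stand-in, not the paper's own definition:

    def participle(infinitive: str) -> str:
        """Toy regular morphology only: -ar -> -ado, -er/-ir -> -ido."""
        stem, ending = infinitive[:-2], infinitive[-2:]
        return stem + ("ado" if ending == "ar" else "ido")

    def F_haber(alpha: str) -> str:
        """Hypothetical surface operation for perfect <haber + PTCP>,
        third person singular assumed."""
        head, *rest = alpha.split()
        return " ".join(["ha", participle(head)] + rest)

    def introduce_functional_aux(expr: str, cat: Slash, F):
        """Schema (26)/S10: one input expression of category X/nY comes
        out as an expression of category X/n+1Y; no concatenation."""
        return F(expr), phi(cat)

    print(introduce_functional_aux("poder", LEX_AUX, F_haber))
    # -> 'ha podido', with category (FC/NP)//(FC/NP)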
(25a) pertains to the adding of progressive <estar + gerund> to an expression of category FC/NP or category (FC/NP)/(FC/NP). We refer to S8 as a rule schema because it abbreviates rules adding this functional auxiliary to expressions that are unmodified by another functional auxiliary, modified by one, or modified by two. For example, if the input to S8 belongs to the category FC/NP, then its output belongs to FC//NP; if the input belongs to FC//NP, then the output belongs to FC///NP: the number of slashes reflects the number of times the base expression has been modified in the derivation. This innovation does not distance us from Montague (1973: 223) in any formally significant way; as he observes, […] our syntactic categories diverge from those of Ajdukiewicz only in our introduction of two compound categories (A/B and A//B) where Ajdukiewicz would have had just one. The fact that we need only two copies is merely an accident of English or perhaps of our limited fragment; in connection with other languages it is quite conceivable that a larger number would be required.
The analysis presented in the present paper can be taken to confirm Montague's conjecture that the fact that A//B is the largest compound category is a consequence of his limited fragment; the Spanish fragment developed here requires more than two modified categories, due precisely to the syntax and semantics of functional auxiliaries. S6 is thus responsible for yielding periphrases containing functional auxiliaries; (26) is simply a generalised version of (25). Introducing a functional auxiliary results in an expression assigned to a modified category.
Rule schemata for the remaining functional auxiliaries in Table 1 are given in (25b-d).
As we have noted, when a functional auxiliary is introduced, the logical type of the input category is maintained. (26) below is an example: after introducing the perfect auxiliary ha (third person of haber), the expression still needs to concatenate with an expression of category NP to yield an expression of category FC. Lexical auxiliaries are not introduced in the same way as functional auxiliaries, because, as summarised in Table 1, they differ from them in two important ways: they express unique meaning types (lexical auxiliaries primarily express modality and external aspect, whereas functional auxiliaries express temporal information or internal aspect), and only lexical auxiliaries can be modified by other auxiliaries and also be modifiers themselves. Of the two auxiliary classes, only lexical auxiliaries are basic expressions of the language that are assigned to syntactic categories.
We turn now to the category lexical auxiliaries belong to. We know this cannot be FC/NP; this would wrongly predict that, for example, Juan tiene que is a well-formed expression of category FC, the result of concatenating an NP with an FC/NP according to rule S4. We must capture the fact that lexical auxiliaries are able to combine not only with lexical verbs (saturated FC/NPs) but also with other lexical auxiliaries, as in (28). The result of the combination of a lexical auxiliary with another lexical auxiliary must itself be able to combine with a saturated FC/NP or another lexical auxiliary, and so on. A sequence of lexical auxiliaries is, of course, a chain, the longstanding recognition of which we have pointed out. A chain always modifies a saturated FC/NP. Therefore, a lexical auxiliary must have FC/NP as its 'denominator' category. Then, to be able to be a link in a chain of the sort we have been discussing, and as exemplified in (27), it must also have FC/NP as its 'numerator' category. We thus have (29) as the category of lexical auxiliaries, which we illustrated earlier:
(29) (FC/NP)/(FC/NP)
(30) Juan ha podido trabajar 'Juan has been able to work'
Here, in (30), the perfective functional auxiliary ha (third person of haber) modifies the lexical modal poder but not the lexical verb trabajar. We will illustrate the structure shortly, but it is worth making a preliminary comment on this example. Recall from Section 2 that if the predication structure in (30) were such that haber modified trabajar and that poder modified trabajar, we would be describing an eventuality of having worked plus one of having been able to work, both in the past (we could quasi-formally represent this view as PAST(haber(trabajar) ˄ poder(trabajar))). This, however, is not what (30) means. In (30) we have an event of having been able to work but not necessarily an event of having worked. In other words, the perfective aspectual information does not affect trabajar; it simply affects poder. We can see this from the fact that there is no contradiction in (31), because haber does not modify trabajar. We can now consider how to capture the correct modification relations. If a lexical auxiliary like poder is of category (FC/NP)/(FC/NP), as we have proposed, then it can be modified by a functional auxiliary to yield a modified lexical auxiliary: an expression of category (FC/NP)//(FC/NP). We do not need to assume further rules to account for this kind of interaction between functional and lexical auxiliaries: the rules we have discussed already give us the analysis tree in (32), a proof that Juan ha podido trabajar is an expression of category FC, as required. Before moving forward, we need to return to our earlier observation that auxiliary chains composed of only lexical or only functional auxiliaries are monotonically recursive. Consider (33a) and (33b):
(33) a. Juan va a haber trabajado (el viernes)
b. Juan empieza a poder trabajar
In (33a), the lexical verb trabajar is modified by two functional auxiliaries. All we need, then, is to apply S8 recursively. In a sequence of functional auxiliaries, illustrated in (33a) and diagrammed in (34), each modifies the lexical verb: recall that functional auxiliaries are only modifiers (contributing temporal or aspectual information about an eventuality); haber therefore cannot be modified by ir a.
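Before moving on, the analysis tree in (32) can be mirrored with the toy operations. Note how the correct modification relation falls out in the sketch: F_haber applies to poder before trabajar is ever introduced, so the perfect cannot 'leak' down to the lexical verb:

    ha_podido, cat = introduce_functional_aux("poder", LEX_AUX, F_haber)
                                       # "ha podido": (FC/NP)//(FC/NP)
    iv = F1(ha_podido, "trabajar")     # "ha podido trabajar": FC/NP
    fc = F1("Juan", iv)                # "Juan ha podido trabajar": FC
    print(fc)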
(33a) speaks of a point in time that is located after the moment of utterance but possibly before some other point in the future: in Juan va a haber trabajado el viernes 'Juan will have worked on Friday', the event of working takes place after the moment of utterance but during Friday or before Friday (see Carrasco & García Fernández, 1994; Carrasco, 2008). The pattern of dependencies here is exactly that shown in (8c).
For (33b), diagrammed in (35), we have the lexical verb trabajar and the auxiliary sub-chain empieza a poder, which contains two lexical auxiliaries (empezar a and poder). The modification pattern is that of (8b): empezar a modifies poder (we are talking about the beginning of a possibility), and poder modifies trabajar (that possibility pertains to the event of Juan working). In cases involving sequences of lexical auxiliaries, modification is strictly local: recall again that, unlike functional auxiliaries (which only modify), lexical auxiliaries can both modify and be modified. This accounts for our observation that the presence of a lexical auxiliary blocks the transmission of information downward through the chain. Whatever is above empezar a modifies only empezar a, whatever is between empezar a and poder modifies only poder, and poder modifies trabajar.
Consider now a chain of auxiliaries belonging to all three of the classes we have discussed (lexical, functional, and the passive). In (36), ha tenido que ser ayudado 'he has had to be helped', the lexical auxiliary tener que is modified by the functional auxiliary haber, and the lexical verb ayudar immediately follows the passive auxiliary ser. Since the lexical verb ayudar is not modified by the perfective auxiliary haber, an adequate segmentation for (36) must be equivalent to (37):
(37) [[ha tenido que] [[ser ayudado]]]
That is, only the modal auxiliary tener que is modified by the auxiliary haber, and ha tenido que modifies the passivised lexical verb. We need to take into consideration that (36) contains a passive, so a new operation and two further syntactic rules are needed:
(38) S2. If α ∈ PTV, then F0(α) ∈ PFC//NP, for all α.
The addition of passive ser requires its own syntactic rule because, as we have indicated, it is in a class by itself: it cannot be a lexical auxiliary because it patterns with the functional auxiliaries where modification possibilities are concerned. Unlike functional auxiliaries, however, it has the distinctive property not of expressing temporal or aspectual information but rather of marking diathesis. Diathesis has profound effects on clause organisation, in terms of both grammatical functions and thematic roles. The relations among the elements of (36) are diagrammed in (39). Note that in all examples, there is only one expression of category FC: the concatenation of a chain of auxiliaries (always of the form FC/NP) with an NP is a finite clause. Monoclausality is captured in the CG analysis without additional stipulations.
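The passive pipeline can be sketched on the surface side as well; the text does not spell out what the operation F0 does to the string, so its rendering here as participle formation is our assumption:

    def F0(alpha: str) -> str:
        """Hypothetical surface side of detransitivisation:
        "ayudar" -> "ayudado"."""
        return participle(alpha)

    def F4(alpha: str) -> str:
        """Add the semantically empty auxiliary ser."""
        return "ser " + alpha

    print(F4(F0("ayudar")))            # ser ayudado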
Conclusions
A monotonic approach to structure building has inherent limitations that prevent it from providing adequate structural descriptions for the dependencies we observe in Spanish auxiliary chains, which we showed in Section 1 exhibit formally varying dependencies. Among other limitations, monotonic branching is uniformly to the right or to the left, and semantic relations, based on a syntactic c-command relation (defined either in classical PS terms or in terms of co-containment in sets), are similarly monotonic. One advantage of categorial grammars is that they allow us not only to create non-monotonic structures when these are empirically necessary, but, especially, to be fully explicit in the formal mechanisms that generate those structures while at the same time allowing such cross-linguistic variation as occurs. For example, categorial grammars allow us to choose either the IV or the subject as the functor in a clause, leading us to group auxiliaries with the subject or with the lexical verb depending on the operations that a given natural language licenses. CG also allows us to group auxiliaries by means of either concatenation or functional modification (as opposed to only concatenation, as in mainstream approaches), which constitutes the theoretical novelty of the present paper. A CG approach can yield empirically adequate descriptions without needing to assume an a priori sequence of labelled functional projections. It is significant that the adaptability of CG, as illustrated by the fundamental difference between Spanish and English that we have proposed, already encompasses an account of "parameters"; their existence is derivable from the mathematics of the system of available category indices presented in Section 2. Accounting for this […] has been modified by a functional auxiliary in a derived expression (recall that functional auxiliaries do not have categorial status).35
35 If we were following the formatting of Montague (1973) religiously, we would start our syntactic rules with the following, as in our toy grammar in Section 2: BA (or the set of basic expressions of category A) ⊆ PA (or the set of expressions of category A), for every category A. We use the above table to express the information in this rule, which is essential for our grammar fragment's completeness, as part of the larger presentation of our syntactic categories.
S1. If α ∈ PTV, then F0(α) ∈ PFC//NP, for all α. (This rule forms the heart of the passive construction, by detransitivising the TV ((FC/NP)/NP) so that the NP argument of the FC/NP in its input is now the (only) argument of its output.)
S2. If α ∈ PFC//NP, then F4(α) ∈ PFC/NP, for all α. (This rule adds the semantically empty auxiliary ser to the output of S1, to complete the creation of an FC/NP expression in the passive voice. Recall that diathesis plays a role in distinguishing ser from the functional auxiliaries of Spanish, which are introduced by S7-S10 and which are not semantically empty but modify functor expressions with the addition of temporal or aspectual specifications.)36
Rule schemata for functional modification:38
S6. If α ∈ PX/nY, then F5(α) ∈ PX/n+1Y, for all α, X, Y, where X and Y are variables ranging over functor categories, and where n is an integer and /n an abbreviation for n slashes.
36 Recall that it is the algebraic structure of a categorial grammar that gives us the difference between active and passive IVs, as these are derived in distinct ways. To know what kind of IV we are dealing with in a given instance, we consult the grammatically significant relations existing among the expressions forming that IV, according to the syntactic rules; the syntactic rules recapitulate the relevant algebraic structure. All of this is recoverable from the proof that an active IV is that and the proof that a passive IV is that, as diagrammed in the analysis trees we have presented.
37 This rule is an oversimplification, inasmuch as combinations of transitive verbs with direct objects use F3 only if those objects are animate; otherwise, the operation effecting this combining is F1.
38 Syntactic rule schemata S6-S10 represent an innovation over Montague (1973), but it is purely a matter of notation. These schemata are reminiscent of rule collapsing in relatively early generative grammar (especially in generative phonology), inasmuch as each schema is an abbreviation for a set of garden-variety syntactic rules. Montague (1973: 252) used rule schemata (for a different purpose) as his "rules of quantification".
[…] studied in detail by Akmajian & Wasow (1975). With Modal, Perfective, and Progressive belonging to distinct categories, whose discovery CG makes straightforward (as Schmerling emphasises), and in which these auxiliaries combine first with the subject and the result forms a constituent with the saturated verb phrase (here IC/IV), Schmerling's categories capture both the fixed order of English auxiliaries and the correct generalisations pertaining to VP ellipsis. Spanish, in contrast, exhibits different kinds of dependencies and meaningful variation in auxiliary order (recall, for example, the contrast between poder estar trabajando 'may be working / be able to be working' and estar pudiendo trabajar 'currently be able to work'). Recall, too, that Spanish has nothing corresponding to English-style VP ellipsis; see fn. 18 for illustration. In Spanish, there is no motivation for modals' forming a constituent with the subject; all auxiliaries are contained in a saturated IV, and so there is no way to strand them together with subjects. This difference between English and Spanish means that Spanish lacks the motivation English has for analysing the subject as the functor. Recall the further difference between English and Spanish that whereas English auxiliaries are highly restricted in which clause types permit them, auxiliaries in Spanish can appear in clauses of any type. Schmerling's account, in which the subject is the functor in a clause and auxiliaries combine first with subjects, captures auxiliaries' limitation to occurrence in specific clause types.
For our analysis of Spanish, we have adopted what is essentially a mirror-image of Schmerling's analysis of English, and we have shown it to be empirically successful for that language. We must emphasise that both analyses have strong empirical support from the languages for which our explicit accounts have been provided. In Schmerling's analysis, the IC/IV subject category is keyed to the specifically indicative and inverted indicative clause categories, so that IC/IV is the category of subjects that are specifically nominative. Thus, in John would rather walk and John will have walked, it is a nominative subject that the addition of an auxiliary modifies, yielding expressions of category IC//IV for John will and John would rather. The auxiliary itself must be of an appropriate category to modify a nominative subject; in the cases we have mentioned, will and would rather thus belong to the category (IC//IV)/(IC/IV); they are categorially defined to occur in indicative clauses specifically. It is interesting to note that the definition of nominative subjects in terms of indicative clauses also automatically includes a relationship between Nominative Case and Tense that has been a commonplace observation in generative syntax since Chomsky (1981); see also Pesetsky & Torrego's (2007) proposal that Nominative Case is a T feature in DPs.40 The only other major CG work on auxiliaries that we know of, which is exclusively dedicated to the English system, is Bach's (1983) mixed categorial approach.41 We say "mixed" because whereas Bach's semantic machinery is unabashedly Montagovian, his definition of categories incorporates elements from phrase structure grammars, particularly the strictly context-free version in Gazdar et al. (1982), while remaining CG-based in large measure. Let us illustrate the structure that Bach (1983: 111) assigns to a complete chain of auxiliaries in English (the example and analysis tree are Bach's):
(3) Mary mustn't have been being arrested
We note first that this structure is strictly monotonic (it grows constantly and always at the same rate), homomorphic to what a PSG could generate. That is not a problem in and of itself, as long as the grammar is flexible enough to accommodate the necessary category splits (which are an integral part of a Montagovian framework). In (3), each lexical element is annotated with not only a CG-based category definition but also the inflectional features of its surrounding elements: must selects a bare infinitive (Ø), here the bare form of have, while have selects the past participle -en, and so on. If one element does not select another, only its own inflectional features are specified, as in the passive form of arrest.
40 In Schmerling's analysis, indicative clauses with no auxiliaries get tense inflection on main verbs as a morphological consequence of the combining of expressions of the FC/IV category with expressions of the IV category in the formation of FC expressions.
41 Dowty (1996: §4.6) sketches a treatment of English auxiliaries focused on ordering issues (he refers to his own approach as a 'linear-oriented theory'). All auxiliaries are assigned to the category VP/VP, and the stepwise introduction of lexical functors orders them before the lexical head of the phrase (i.e., the V). However, that requires a definition of lexical head, which is a category that differs from the others assumed in his paper. He presents, as an alternative treatment, the possibility of introducing rules like (i) […]
Bach's version of CG is very much influenced by Gazdar […]
DISTRIBUTION OF PELAGIC FISH IN SOUTH CHINA SEA USING GEOSTATISTICAL APPROACH
Pelagic fish are species that live in the water column at depths of 100 to 200 meters below the surface. They migrate in groups looking for nutrients and spawning grounds. Potential fisheries commodities in Indonesia, including pelagic fish, have high economic value; stock assessment of pelagic fish is therefore important to research. The research was conducted in May-June 2016 in the waters of the South China Sea using the Madidihang 02 Research Vessel operated by the Marine Fisheries Affair (MFA) of the Republic of Indonesia. To estimate the density of pelagic fish, hydroacoustic equipment was used and oceanographic parameters were measured during the campaign. A split-beam echosounder was used to obtain precise positions and the number of fish targets. The highest density of fish was found around Tambelan Island and Anambas Island. Statistically, pelagic fish density correlates with chlorophyll-a, salinity, temperature, and sea current velocity. The statistical analysis between pelagic fish density and those oceanographic parameters (as statistical variables) yields a positive vector correlation.
INTRODUCTION
The southern part of the South China Sea is categorized as shallow water and is part of the Sunda Shelf (Nurhakim et al., 2007). The South China Sea is well known for its variety of productivity and abundance of biodiversity. The biodiversity consists of plants and animals, especially fish, the main commodity among its marine resources (Matsunuma et al., 2011).
Pelagic fish are able to move across wide areas, even beyond state territory, and their spatial structure and distribution are known not to be random (Suman et al., 2016). It is difficult to estimate fish stocks, particularly pelagic ones. Sea surface temperature (SST) is believed to influence fish distribution (Solanki et al., 2005). In addition, SST influences the growth of phytoplankton in the open sea, and the concentration of chlorophyll-a indicates the abundance of phytoplankton (Bertrand et al., 2002). The distribution of pelagic fish is correlated with migration, which is influenced by sea currents. Another physical oceanographic factor is salinity (Kang, 2014): pelagic fish habitat is affected by salinity. Fishing activities that local fishermen carry out in the South China Sea to this day are based on experience, so fishing locations are not determined accurately and can be environmentally unsafe. An approach that may be implemented to establish fishing grounds in the South China Sea is the hydroacoustic method. Data recording technology using the hydroacoustic method has been operated commercially in the fisheries field (Melvin et al., 2015). Hydroacoustic technology utilizes sound wave propagation in the seawater medium. The hydroacoustic method has advantages in terms of data recording and accuracy, and it is not harmful to living organisms in the seawater environment (Priatna and Wijopriono, 2011). This paper discusses the correlation between pelagic fish density distribution and oceanographic factors in the South China Sea using a hydroacoustic approach, in order to propose potential fishing zones.
Oceanographic Data Sampling
The oceanographic data were collected using digital equipment. Samples were taken from stations that represent the seawater environment.
The equipment was a CTD set (Conductivity-Temperature-Depth) that records every depth layer.
The recorded data included temperature, salinity, chlorophyll-a, and sea current.
Acoustic Data Analysis
Hydroacoustic data were analysed using the Target Strength (TS) concept to measure the size and length of fish targets. TS was computed following Johannesson and Mitson (1983) as TS = 10 log10(σ/4π), where σ is the acoustic scattering cross-section of the target. Pelagic fish density (fish/hm³) may be estimated from the volume backscattering strength (SV). The average depth of the recorded data is 50 m; the scattering area (SA, in fish/km²) is then obtained by integrating SV over the depth interval Δd = 50 m. The Target Strength (TS) of fish can be used to estimate fish abundance, and in some cases the abundance gathered by the hydroacoustic method may be used to assess fish stock (Kang, 2014). The variation of fish density depends on the amount of fish in the water area. Some parts of the surveyed waters had high fish density. The maximum density found was 161.800 fish/mile², and the minimum was 0, i.e. no fish detected. Fish density can be seen in the echogram shown in Figure 4. Three locations were found to have high fish density: the first was the waters between Borneo and Natuna Island, the second was around the Anambas Islands, and the last was the waters near Tambelan Island. Sea surface temperatures in those waters were around 30.5 °C - 31 °C, lower than in other survey areas. The average chlorophyll-a concentration was 0.1 - 0.2 mg/L and the salinity was around 32 psu - 33 psu. Safruddin (2014) noted that a water mass with lower temperature than its surroundings may indicate an upwelling area. Locations where upwelling occurs may be correlated with high nutrient concentrations and attract pelagic fish to concentrate. Zainuddin et al. (2013) remark that pelagic fish show consistency at chlorophyll-a of 0.2 - 0.3 mg/L and SST of 30 °C - 31 °C. This study found that density levels in the surveyed area may be categorized as low (0-100 fish/mile²), middle (100-200 fish/mile²) and high (more than 200 fish/mile²). Generally, sea surface currents are generated by winds that propagate along the sea surface. Current velocity is affected by pressure at the sea surface, which decreases with depth (Pond and Pickard, 1978). The spawning and migration patterns of pelagic fish are influenced by sea current velocity (Laevastu, 1993). The recorded survey data showed that the sea current velocity and direction tended to move from the south (Java Sea) to the north (South China Sea). Similar research by Akhir (2012) also showed that the current moves from the South China Sea to the Java Sea from November to March, and vice versa from April to August. Figure 6 describes the oceanographic factors that contribute to pelagic fish density according to the Principal Component Analysis result. The first component shows that temperature has a close correlation with fish density, while sea current velocity has a weaker correlation with fish density. Chlorophyll-a was the only oceanographic factor contributing a positive value in the second component, which means chlorophyll-a is important for the water environment. Temperature affects phytoplankton life as the producer of chlorophyll-a, and the sea current plays an important role in the distribution of nutrients, as fish are always looking for nutrients and an ideal habitat.
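To make the acoustic density estimation concrete, the sketch below converts a target strength into a scattering cross-section and a volume backscattering measurement into fish density; it uses the standard echo-integration relations (TS = 10 log10(σ/4π), density = s_v/σ_bs) as stand-ins for the paper's formulas, which are not fully reproduced above, and all input values are hypothetical.

```python
# Minimal sketch of standard echo integration, assuming the textbook
# relations TS = 10*log10(sigma/4pi) and rho_v = s_v / sigma_bs; all
# numerical inputs are illustrative placeholders, not survey data.
import math

def sigma_from_ts(ts_db):
    """Total scattering cross-section (m^2) from target strength (dB)."""
    return 4.0 * math.pi * 10.0 ** (ts_db / 10.0)

def volume_density(sv_db, ts_db):
    """Fish per m^3 from volume backscattering strength SV and mean TS."""
    s_v = 10.0 ** (sv_db / 10.0)          # linear volume backscattering
    sigma_bs = 10.0 ** (ts_db / 10.0)     # mean backscattering cross-section
    return s_v / sigma_bs

rho_v = volume_density(sv_db=-65.0, ts_db=-45.0)  # fish/m^3
rho_a = rho_v * 50.0                              # integrate over 50 m depth
print(f"{rho_v:.4f} fish/m^3 -> {rho_a:.2f} fish/m^2")
```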
RESULTS AND DISCUSSION
The lowest sea current velocity recorded was 90.385 cm/s and the highest was 224.368 cm/s (Figure 5).
High fish density tended to be found at current velocities around 130 cm/s. This is similar to the result of Rasyid et al. (2014), who stated that the maximum current velocity for fish was 120 cm/s. Based on this current velocity, fish within this water-mass layer may migrate to the Pacific through the South China Sea.
Figure 6. Biplot analysis of fish density against oceanographic factors.
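The biplot in Figure 6 summarises a principal component analysis of fish density against the oceanographic variables; a minimal sketch of such an analysis is given below, with invented placeholder stations standing in for the survey data and scikit-learn assumed to be available.

```python
# Minimal sketch of a PCA/biplot analysis of fish density against
# oceanographic variables; the rows below are invented placeholder stations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Columns: temperature (C), salinity (psu), chl-a (mg/L),
#          current (cm/s), fish density (fish/mile^2)
X = np.array([
    [30.6, 32.1, 0.20, 128.0, 210.0],
    [31.0, 32.8, 0.15, 140.0,  95.0],
    [31.8, 33.0, 0.10, 180.0,  20.0],
    [30.5, 32.3, 0.25, 125.0, 240.0],
])

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Z)
print(pca.explained_variance_ratio_)  # variance captured by PC1 and PC2
print(pca.components_)                # loadings plotted as biplot arrows
```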
CONCLUSION
The highest density of pelagic fish was found around Tambelan Island and in Anambas waters. The oceanographic conditions in those areas were a temperature of 30.5 °C - 31 °C, salinity of 32 psu - 33 psu, chlorophyll-a of 0.2 - 0.3 mg/L, and sea current velocity of 130 cm/s. The environmental temperature proved to have a close correlation with fish density. Based on spatial analysis, areas with high fish density are potential fishing grounds.
Statistically, two parameters were most influential on pelagic fish density in this research: temperature and sea current velocity.
Figure 1. Research location in the South China Sea, Indonesian territory.

Hydroacoustic Data Sampling

Hydroacoustic data were recorded using a Simrad EK80 echosounder. Wave pulses were transmitted and received with an ES200-7C split-beam transducer, made of composite material, operating at a frequency of 200 kHz with a 7° beam angle. A split-beam transducer is commonly divided into four functional quadrants (Figure 2). This equipment was connected to a GPS to record the location of the data. All tools were installed on the ship, so data recording could proceed simultaneously with ship tracking.
Figure 3(a) shows the horizontal distribution of temperature in the South China Sea derived from in-situ data. Data collection was conducted at three different times of day, i.e. morning, noon, and night; therefore large temperature differences were observed in some areas. The lowest temperature was 30.45 °C and the highest was 32.22 °C. Syaifullah (2015) states that sea surface temperature (SST) in Indonesian seas ranges between 30 - 31 °C, yet the contribution of global warming causes anomalies in SST with positive (increasing) or negative (decreasing) trends. The southern part of the study area had SST around 30 - 31 °C, while in the northern part the values dominantly reached 32 °C. This SST distribution may be caused by water-mass mixing in the shallow-water zone; meanwhile, in the deeper waters of the northern part the water mass may not be perfectly mixed, so the sea surface reaches the optimum temperature. Tubalawony (2002) stated that differences in water-mass temperature occur because of vertical water-mass movement. Sadhatomo (2006) stated that the horizontal distribution of sea surface temperature is affected by seasonal factors. The horizontal distribution of chlorophyll-a is shown in Figure 3(b). Very high values (1.5 mg/L - 5 mg/L) were found around the coast of Borneo, while values in the open waters were more homogeneous (0.1 mg/L - 0.2 mg/L). These results match the findings of Kurniawati et al. (2015).
Figure 3. The horizontal distribution of sea surface temperature (a), chlorophyll-a (b) and salinity (c) in the South China Sea.

The horizontal distribution of salinity is described in Figure 3(c). Higher salinity (32 psu - 33 psu) was found in the open-water area, while lower salinity (in the range 28 psu - 30 psu) was mainly distributed near the coasts. This can occur because the circulating mass transport comes from the north side of the South China Sea and then mixes with freshwater from inland. This is supported by Simanjuntak (2009), who noted that the water mass in open water may move from the water column to the surface, bringing higher-salinity water toward the shore. The variation of the salinity coefficient in open waters was about 1 psu, indicating that the water salinity was homogeneous.
Figure 4. Echogram showing the fish density distribution.
"year": 2018,
"sha1": "1e98981e13c1a29240e4ba6299b606350b455edc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.20956/jiks.v4i1.3800",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1e98981e13c1a29240e4ba6299b606350b455edc",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Pressure Induced Densification and Compression in a Reprocessed Borosilicate Glass
Pressure induced densification and compression of a reprocessed sample of borosilicate glass has been studied by X-ray radiography and energy dispersive X-ray diffraction using a Paris-Edinburgh (PE) press at a synchrotron X-ray source. The reprocessing of a commercial borosilicate glass was carried out by cyclical melting and cooling. Gold foil pressure markers were used to obtain the sample pressure by X-ray diffraction using the known equation of state of gold, while X-ray radiography provided a direct measure of the sample volume at high pressure. The X-ray radiography method for volume measurements at high pressures was validated on a known sample of pure α-Iron to 6.3 GPa. A sample of reprocessed borosilicate glass was compressed to 11.4 GPa using the PE cell, and the flotation density of the pressure-recovered sample was measured to be 2.755 gm/cc, showing an increase in density of 24% as compared to the starting sample. The initial compression of the reprocessed borosilicate glass measured by X-ray radiography yielded a bulk modulus of 30.3 GPa, in good agreement with the 32.9 GPa value derived from the known elastic constants. This method can be applied to a variety of amorphous materials under high pressures.
Introduction
The materials for transparent armor applications are generally silicate based, either in the form of glass or glass ceramics, and can exist in many different crystalline and amorphous modifications. Understanding the ballistic impact response of transparent armor materials requires an understanding of the densification process under applied stresses, the yield stress, and the failure mechanisms under dynamic shock loading. A direct measurement of the densification and compression produced under high pressure provides critical data for modeling the ballistic response of transparent armor materials and for the design of novel armor materials. The equation of state of a material provides information about the thermodynamic state of a system under specified physical conditions [1]. For crystalline materials with a well defined unit cell and long range order, the volume and pressure can be determined from X-ray diffraction experiments and a known equation of state of a pressure standard, respectively [1][2][3]. Since amorphous materials lack long range order, X-ray diffraction cannot be used as a method to obtain direct sample volume measurements. Attempts have been made to directly measure the equation of state of solids under diamond anvil cell (DAC) compression using optical microscopy [3]. The main drawback of this method is the use of irregularly shaped samples, which are on the order of tens of microns in diameter.
Another technique, X-ray microtomography, has been used to determine sample densification by rotating the sample in 0.125° increments to obtain a sequence of three-dimensional (3D) tomographic images; this method has the drawback of being more time intensive, as the sample is rotated through multiple angles [4]. By comparison, an X-ray radiography method [5,6] has been developed for bulk millimeter-size samples using a Paris-Edinburgh (PE) press at Beamline 16-BM-B (HPCAT) at the Advanced Photon Source, Argonne National Laboratory. In our research, we have adapted this PE press for carrying out X-ray radiography on reprocessed borosilicate glasses, with images of 1936 px × 1216 px at a resolution of 0.850 µm per pixel. The radiography technique used in this work has the advantage of much shorter experiment times: because a cylindrical sample geometry is used, the volume can be calculated from the sample height and width from just one radiograph per pressure step, which can be obtained in under a second. Radiography studies of samples at Beamline 16-BM-B can also be done in conjunction with energy-dispersive X-ray diffraction, allowing for pressure determination from the measured volume of the gold pressure standard. The X-ray radiography and multi-angle energy-dispersive X-ray diffraction technique has recently been applied to a high-boron-content borosilicate glass; however, direct sample volume measurements were not possible in that study due to limitations of the sample assembly [7].
Results
For validation of the experimental technique, white-beam X-ray radiography was conducted on a cylindrical sample of α-Fe (stable in the body-centered cubic phase) from ambient conditions to a maximum pressure of 6.3 GPa. Radiography images were taken at increasing pressure steps, as seen in Figure 1. The sample was decompressed from 6.3 GPa to 0.4 GPa, and the decompression radiography image is denoted by an '*' in Figure 1.

The radiography images were used to obtain the length and width values at each incremental pressure step. The sample height H and sample width W at each pressure step were measured from the top gold foil to the bottom gold foil and from the left gold foil to the right gold foil, respectively (in pixels). These measurements were then converted to millimeters using the conversion factor 0.850 µm per pixel. The initial ambient image at p = 0.13 GPa (Figure 2) was taken when the top PE anvil is closed onto the sample assembly, but oil pressure has not yet been applied. The heights and widths were normalized by the initial height H₀ and width W₀, respectively. H/H₀ decreases with increasing pressure, while W/W₀ increases with increasing pressure. The volumes were calculated as V = π(W/2)²H with the raw height and width values in pixels, then normalized by the initial volume V₀, such that V/V₀ = 1 at ambient conditions (shown in Figure 2).
Gold foil (2 µm thick, shown in Figure 1 as the dark outline around the α-Fe sample) was used as the pressure standard and as the marker in the white-beam X-ray radiography direct volume measurement. Multiple energy-dispersive X-ray diffraction (EDXD) spectra were collected at each incremental pressure step from the top, bottom, and side gold foil markers and then averaged to determine the sample pressure. The third-order Birch-Murnaghan equation of state [1] was used to determine the bulk modulus B₀ and the first derivative of the bulk modulus B₀′ from the experimental white-beam X-ray radiography data for pure α-Fe. A nonlinear least-squares method was used to determine B₀ with a fixed B₀′ = 5.29. The fit converged with an R-squared value of 0.9968 and B₀ = 167.6 ± 5.3 GPa. The third-order Birch-Murnaghan equation of state fit for the radiography data is shown as a solid curve in Figure 2. The equation of state derived from the ultrasonic values of the bulk modulus and its pressure derivative [8] is also shown in Figure 2.

The experimental data on the equation of state of pure α-Iron presented in Figure 2 for three different methods are summarized in Table 1. The consistency of the fitted equation of state (EOS) parameters for the three methods is evident in Table 1. The first derivative of the bulk modulus B₀′ was fixed at the ultrasonically measured value of 5.29 for the X-ray radiography and X-ray diffraction EOS fits, to obtain a quantitative comparison of the bulk modulus B₀ from these methods with the ultrasonically measured value. The coefficients of determination for the X-ray radiography and X-ray diffraction EOS fits are r² = 0.997 and r² = 0.9978, respectively.
The percent difference between the reference bulk modulus from the ultrasonic data and the bulk modulus obtained from radiography measurements is 0.72%. The percent difference between the bulk modulus from the ultrasonic data and the bulk modulus obtained from X-ray diffraction measurements is 1.21%. The percent difference between the bulk modulus obtained from radiography measurements and the bulk modulus obtained from EDXD measurements is 1.93%. Overall, the percent differences between these methods are small, indicating good agreement between the methods and validating the radiography technique as a method to obtain the equation of state of a solid sample.
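A minimal sketch of this fitting procedure is given below; the pressure-volume arrays are illustrative placeholders for the measured radiography data, and only B₀ is fitted, with B₀′ fixed at the ultrasonic value of 5.29 as in the text.

```python
# Minimal sketch of the third-order Birch-Murnaghan EOS fit described above.
# The arrays below are illustrative placeholders, not the measured data;
# B0p is held fixed at the ultrasonic value 5.29 as in the text.
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V_over_V0, B0, B0p=5.29):
    """Pressure (GPa) from the third-order Birch-Murnaghan EOS."""
    x = V_over_V0 ** (-1.0 / 3.0)        # x = (V0/V)^(1/3)
    return 1.5 * B0 * (x**7 - x**5) * (1.0 + 0.75 * (B0p - 4.0) * (x**2 - 1.0))

# Hypothetical normalized volumes and pressures standing in for the data:
V_over_V0 = np.array([1.000, 0.995, 0.990, 0.985, 0.975, 0.965])
P = np.array([0.13, 1.0, 1.9, 2.7, 4.5, 6.3])  # GPa

popt, pcov = curve_fit(birch_murnaghan, V_over_V0, P, p0=[170.0])
B0_fit, B0_err = popt[0], np.sqrt(pcov[0, 0])
print(f"B0 = {B0_fit:.1f} +/- {B0_err:.1f} GPa")
```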
Reprocessed Borosilicate Glass Sample
The radiography images of the reprocessed borosilicate glass sample at increasing pressure steps are seen in Figure 3. The heights and widths were normalized by the initial height H₀ and width W₀, respectively, and both H/H₀ and W/W₀ decrease with increasing pressure.
The volumes were calculated as V = π(W/2)²H with the raw height and width values in pixels, and then normalized by the initial volume V₀ such that V/V₀ = 1 at ambient conditions (Figure 4). The bulk modulus was calculated as B₀ = −V(dP/dV) using the low-pressure experimental points below 1 GPa, which remain in the elastic region of compression. The bulk modulus obtained is B₀ = 30.34 GPa.

The sample of commercial borosilicate glass has Young's modulus E = 63.1 GPa and Poisson's ratio ν = 0.18 [9]. Using the relation B₀ = E/[3(1 − 2ν)] between the bulk modulus, Young's modulus, and Poisson's ratio, we obtain B₀ = 32.9 GPa. Using the direct volume measurement from the radiography-images technique, the bulk modulus obtained for the reprocessed borosilicate glass sample is B₀ = 30.34 GPa. This gives an 8.1% difference between the experimentally obtained value in this experiment and the value obtained through the elastic constants [9].
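Both estimates quoted in this paragraph can be reproduced schematically as follows; the low-pressure points are illustrative placeholders for the radiography data, used only to show the B₀ = −V(dP/dV) slope estimate, while the elastic-constant value follows from the quoted E and ν.

```python
# Sketch of the two bulk-modulus estimates above. B = -V dP/dV is evaluated
# as the slope of P against -ln(V/V0) over the elastic (P < 1 GPa) points;
# the data arrays are illustrative placeholders, not the measured values.
import numpy as np

P = np.array([0.13, 0.4, 0.7, 0.95])             # GPa, hypothetical points
V_over_V0 = np.array([1.000, 0.991, 0.981, 0.973])

B0_slope = np.polyfit(-np.log(V_over_V0), P, 1)[0]
print(f"B0 from low-pressure slope: {B0_slope:.1f} GPa")   # ~30 GPa

# Elastic-constant estimate B0 = E / [3(1 - 2*nu)] with E = 63.1 GPa, nu = 0.18:
E, nu = 63.1, 0.18
print(f"B0 from elastic constants: {E / (3 * (1 - 2 * nu)):.1f} GPa")  # ~32.9 GPa
```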
Density Measurements
Flotation density measurements were conducted on a reprocessed borosilicate glass that was subjected to high pressures in a PE press. The cylindrical borosilicate glass sample, with a diameter of 1.0 mm and a height of 1.0 mm, was housed in the cell assembly and compressed using a PE press; the gold diffraction pattern at the highest pressure is shown in Figure 5. The measured lattice parameter for gold is 3.999 Å for the data shown in Figure 5, which, combined with the gold equation of state [2], yields a pressure of 11.4 GPa. A photograph of the sample recovered from 11.4 GPa is shown in Figure 6.
Lithium metatungstate was the high-specific-gravity fluid used in the density measurements. Deionized water was used to change the density of the fluid throughout the experiment. The sample floated until the final density step, 2.755 gm/cc, where the sample was just barely submerged under the surface of the liquid. The flotation measurements thus give a final density for the recovered borosilicate glass sample of 2.755 gm/cc. The reprocessed borosilicate glass has an initial density of 2.214 gm/cc, as determined by the Archimedes method on the bulk sample. This indicates a 24.4% increase over the initial density for the reprocessed borosilicate glass sample after decompression from 11.4 GPa.
Figure 5. The observed diffraction peaks from the gold pressure marker at 11.4 GPa. The (hkl) indices for the gold diffraction peaks are indicated, and the peak marked '*' is from the hexagonal boron nitride sample holder. The measured lattice parameter for gold is a = 3.999 Å.

Figure 6. Sample of pressure-treated borosilicate glass after compression to 11.4 GPa. This sample was employed in the measurement of pressure-induced densification.
Discussion
Pressure induced densification and compression in a reprocessed borosilicate glass have been studied using X-ray radiography and X-ray diffraction techniques at a synchrotron source. The borosilicate sample recovered from a pressure of 11.4 GPa shows an increase in density of as much as 24%. Direct volume measurements by X-ray radiography, combined with pressure measurements using a gold pressure marker, reveal a bulk modulus of 30.3 GPa, which is in good agreement with the value derived from the elastic constants. This methodology for measuring densification and compression can be applied to borosilicate glasses of different compositions.
Materials and Methods
The validation of the X-ray radiography method was carried out on a cylindrical sample of 99.995% pure α-Iron wire from Alfa Aesar (Tewksbury, MA, USA). The α-Fe wire had a diameter of 1.0 mm and a height of 0.5 mm. α-Fe has a body-centered cubic structure at ambient conditions and retains this structure to 6.3 GPa. The borosilicate glass samples used in this experiment are reprocessed versions of a commercially available borosilicate glass [10]. The borosilicate glass was reprocessed by the U.S. Army Research Laboratory in Aberdeen Proving Ground, MD to provide base-line data for comparison with glasses of various compositions and to obtain a more optimized glass for use as a transparent armor material. The bulk starting glass was broken up and then subjected to reprocessing via cyclical melting and cooling of the same sample in a furnace, with the temperatures and times given in Table 2. The glass started at 5.25 °C, was heated to 1540 °C, and held for 17.0 h; this cycle was repeated on the same glass sample for the durations given in Table 2. The chemical analysis of the borosilicate glass sample, performed via inductively coupled plasma atomic emission spectroscopy, determined a composition of: 2.40% Al₂O₃, 12.45% B₂O₃, 0.02% BaO, 0.01% Fe₂O₃, 0.57% K₂O, 3.40% Na₂O, 81.10% SiO₂, and 0.03% ZrO₂ (% weight). The cylindrical borosilicate glass sample studied by white-beam radiography had a 1.0 mm diameter and a 0.5 mm height; the borosilicate sample compressed to 11.4 GPa and studied via flotation density measurements had a 1.0 mm diameter and a 1.0 mm height.
White-Beam X-ray Radiography
White-beam X-ray radiography studies were conducted at Beamline 16-BM-B, HPCAT (Argonne, IL, USA), The Advanced Photon Source, Argonne National Laboratory, on a sample of α-Fe and a sample of reprocessed borosilicate glass. Both samples were quasi-hydrostatically compressed at ambient temperature using a Paris-Edinburgh (PE) cell. The cell assembly, seen in Figure 7, consists of a cylindrical glass sample housed within a hexagonal boron nitride (h−BN) cup with a h−BN cap, which is surrounded by a magnesium oxide (MgO) inner ring and a boron epoxy outer ring, all surrounded by a supporting outer polycarbonate plastic (Lexan) ring; this setup is sandwiched between zirconium oxide (ZrO₂) caps, which are shaped to match the PE anvil geometry. Gold foil (2 µm thick, shown in red in Figure 7), an important component of this cell assembly, was used as the pressure standard [2] and as the marker in the white-beam X-ray radiography direct volume measurement. Two separate pieces of gold foil were used, one on top of the sample and one longer piece that fit underneath the sample in a 'U' shape, in order to directly measure the changing sample height and width with increasing pressure. In between each white-beam X-ray radiography measurement, an energy-dispersive X-ray diffraction (EDXD) spectrum was taken of the top gold foil at 2θ = 15.01° in order to determine the pressure. The (220), (311), (222), (400), (331), (420), (422), and (333) Miller indices (hkl) were indexed for the gold foil (space group Fm-3m, number 225) to determine the lattice parameter a and hence the unit-cell volume V. This volume was used with the bulk modulus B₀ = 165.8 GPa, the first derivative of the bulk modulus B₀′ = 5.14, and the initial unit-cell volume V₀ = 67.850 Å³ [3] (at ambient conditions) to obtain the sample pressure using the third-order Birch-Murnaghan equation of state [1].
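As a consistency check, the third-order Birch-Murnaghan relation with these gold parameters can be evaluated directly at the measured lattice parameter a = 3.999 Å; the short sketch below reproduces the 11.4 GPa pressure quoted earlier.

```python
# Sketch of the pressure determination from the measured gold lattice
# parameter via the third-order Birch-Murnaghan EOS, using the gold
# parameters quoted above (B0 = 165.8 GPa, B0' = 5.14, V0 = 67.850 A^3).
V0 = 67.850              # ambient unit-cell volume of gold (A^3)
a = 3.999                # measured lattice parameter (A)
V = a ** 3               # unit-cell volume at pressure (A^3)

B0, B0p = 165.8, 5.14
x = (V0 / V) ** (1.0 / 3.0)
P = 1.5 * B0 * (x**7 - x**5) * (1.0 + 0.75 * (B0p - 4.0) * (x**2 - 1.0))
print(f"P = {P:.1f} GPa")  # ~11.4 GPa, matching the value quoted in the text
```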
Conclusions
A proof-of-concept experiment was conducted on a sample of pure α-Fe to check the validity of the use of X-ray radiography to obtain an equation of state for amorphous materials. The bulk modulus values obtained through a third-order Birch-Murnaghan equation of state fit to experimentally obtained volume data by radiography, X-ray diffraction, and the ultrasonic method showed excellent agreement. Direct volume measurements of a reprocessed borosilicate sample were conducted via white-beam X-ray radiography to a pressure of 4.9 GPa. From the initial compression, we obtain the bulk modulus for the borosilicate glass sample to be B₀ = 30.3 GPa. The percent difference between the bulk modulus of the reprocessed borosilicate sample obtained via white-beam X-ray radiography and that of the commercial borosilicate sample is 8.1%. It is important to note that the borosilicate glass studied in this research was a reprocessed version of the commercially available glass, which could contribute to the 8.1% difference in the bulk modulus. The experimentally obtained density of the recovered borosilicate glass sample compressed to 11.4 GPa was 2.755 gm/cc, showing a densification of 24% when compared to the starting material. In conclusion, direct volume measurements via white-beam X-ray radiography prove to be an effective method to obtain the equation of state of amorphous materials. The method described in this paper can be applied to obtain densification and compression data on a broad cross-section of amorphous materials.
"year": 2018,
"sha1": "5f64bedb61b50189be4d43cb66cff60b1ac52733",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/11/1/114/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37e873f1f79d207b0fe10548d06893b123ec7e3d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Driven black holes: from Kolmogorov scaling to turbulent wakes
General relativity governs the nonlinear dynamics of spacetime, including black holes and their event horizons. We demonstrate that forced black hole horizons exhibit statistically steady turbulent spacetime dynamics consistent with Kolmogorov's theory of 1941. As a proof of principle we focus on black holes in asymptotically anti-de Sitter spacetimes in a large number of dimensions, where greater analytic control is gained. We also demonstrate that tidal deformations of the horizon induce turbulent dynamics. When set in motion relative to the horizon a deformation develops a turbulent spacetime wake, indicating that turbulent spacetime dynamics may play a role in binary mergers and other strong-field phenomena.
INTRODUCTION
The recent observations of black hole mergers [1] add to the increasing evidence that black holes exist in nature. Whilst at early times black hole mergers are described by perturbative post-Newtonian physics [2], and at late times are described by Kerr plus a handful of the longest lived quasinormal modes [3], at intermediate times there is the exciting possibility that we are witnessing the full nonlinearity of general relativity [4].
A remarkably universal consequence of non-linear dynamics is turbulence, seen across a wide variety of systems that exhibit fluid-like behavior at the largest scales. The dynamics of the chaotic cascade of vortices across scales washes out all memory of how those vortices were created in the first place, resulting in universal characteristics. Turbulence is arguably universality par excellence, the consequences of which are witnessed in all corners of nature from galaxy formation to atmospheric dynamics to a cup of tea.
In this paper we examine the possibility that the dynamics of black holes exhibit similar universal features to fluids undergoing turbulent cascades. As is well known, the physics of black hole horizons and the dynamics of fluids are closely related, depending on the context. It was appreciated early on that black-hole horizons can be thought of as fluid membranes [5][6][7]. More recently, a one-to-one map between near-equilibrium black hole solutions in asymptotically anti-de Sitter (AdS) spacetimes and the solutions of conformal hydrodynamics with particular transport coefficients has been established [8][9][10]. Recent connections between near-horizon dynamics and the incompressible Navier-Stokes equations were studied in [11], and moreover it has recently been established that the Stokes equations govern transport properties of inhomogeneous horizons [12]. We may therefore hope that the remarkable universality observed in turbulent cascades is also seen in the dynamics of black holes. This is our present goal.
The most widely celebrated results on the universality of turbulent cascades are captured by Kolmogorov's theory of 1941 [13,14] (K41). Under similarity hypotheses for homogeneous isotropic turbulence, the statistical distributions of the velocity field in the inertial range depend only on the rate of transfer of kinetic energy within the cascade, ε. Dimensional analysis then reveals that the two-point function of the velocity field in momentum space, here written in terms of the kinetic energy spectrum E(k), takes the simple scaling form

E(k) ∝ ε^(2/3) k^(-5/3), (1)

while the higher n-point functions of the velocity field in position space, arranged into longitudinal structure functions, obey

S_n(r) ≡ ⟨(δv_∥(r))^n⟩ ∝ (ε r)^(n/3), (2)

where δv_∥(r) denotes the velocity difference across a separation r, projected along the separation vector. Here the angle brackets denote statistical averages. These results apply in any number of dimensions, so long as the underlying details of the dynamics meet the nontrivial test of the similarity hypotheses. We shall demonstrate that certain driven black hole spacetimes exhibit these universal features; namely, we shall numerically demonstrate that (1) holds over 1-2 decades in momentum space, and (2) holds up to n = 10 over a decade.
These properties are demonstrated for horizons with dynamics restricted to two spatial dimensions (i.e. spacetimes with nontrivial dynamics in 3+1 dimensions). In two spatial dimensions the underlying dynamics which realise the scaling hypotheses and lead to (1) take the form of an inverse cascade, with power moving from a driving scale at some high-k to low-k over time. In two dimensions there is also a direct cascade which moves from the driving-k to the UV [17], and whilst we do see this occur, its properties will not be the focus of our work.
We focus on the simplest possible setting in which we can explore such turbulent horizons, as a proof of principle of the above properties. To this end we restrict our attention to horizons with planar (or toroidal) topology, rather than spherical. In asymptotically flat spacetimes planar horizons present an unstable starting point because of the Gregory-Laflamme instability [18], and are therefore unnatural objects to consider in our present goal of studying the dynamical response to a forcing term. We focus instead on asymptotically AdS spacetimes where planar horizons are intrinsically stable. Such black holes are also of interest as models of strongly interacting many body systems through the AdS/CFT correspondence [19]. Due to the universality of the mechanism of turbulence we may hope that our proof of principle examples shed light onto the universal dynamics of black holes in general, including those in asymptotically flat spacetimes.
In obtaining our results we do not utilise a hydrodynamic expansion, i.e. we do not carry out perturbation theory in gradients. Without this technical crutch, the connection between black holes and fluid-like dynamics is not readily apparent from the Einstein equations. However, the connection is once more seen if one treats the inverse spacetime dimension 1/D as a perturbative parameter according to the seminal constructions of [20][21][22][23].
At large D there is a separation of scales between the black hole size, r 0 and the region occupied by a nontrivial gravitational potential, r 0 /D. A separation of scales signals an effective theory, which can be constructed by analytically solving the integrals for radial evolution. What remains is a set of constraint equations in D − 1 dimensions. It is these constraints that resemble fluid-dynamics equations. Crucially, however, even though they are perturbative in 1/D, they are exact in gradients. Therefore, far from a mere technical simplification, the 1/D expansion allows us to directly connect black holes with the turbulent behaviour of a class of fluid-like equations that are exact in gradients. We thus note that these equations may be valuable in their own right as a natural candidate for the study of turbulent behaviour, as compared to those obtained in an arbitrarily truncated gradient expansion (such as the Navier-Stokes equations). We return to this point in the discussion.
The driving we consider is obtained by adding forcing terms directly to the black hole equations, F_i(t, x), in a way that injects vorticity consistent with the requirements of statistical homogeneity and isotropy. Due to the nature of our particular setup, we are also afforded the opportunity to drive the horizon fluid by turning on a deformation of the gravitational potential at the boundary of AdS, γ_tt(t, x), using a generalisation of the large-D equations derived in [24]. Previous work has explored the decaying turbulent dynamics of black holes that result after starting from unstable initial conditions [25][26][27][28], where [28] also utilised the large D expansion as we do here. To distinguish the work we present here, we do not require starting with unstable initial data, and the driving allows us to achieve a quasistationary turbulent regime that can be compared with the predictions of K41. Motivated by the holographic connection, there has also been a focus on turbulence in 2+1 conformal hydrodynamics [29][30][31][32], where analyses of the energy spectrum and comparisons to (1) are made. Quasi-normal mode resonances of a rapidly spinning Kerr black hole have been argued to result in a phenomenon resembling an inverse turbulent cascade in 2+1 dimensions [33].
LARGE D BLACK HOLE DYNAMICS IN ADS
We shall now record the most salient points regarding the effective theory that describes the dynamics of black branes in asymptotically AdS_D spacetimes at large D, derived in [20][21][22][23] (see also [34]). In taking the large D limit we focus on the near-horizon region of the black brane, so that the resulting theory effectively describes how the near-horizon deformations evolve in time. Furthermore, a mathematical simplification arises that allows us to solve the constraints in the radial direction, so that the basic variables can be readily related to the energy and momentum density of a fluid. This effective theory was later extended in [24] by introducing a general class of boundary conditions which induce changes in the gravitational potential on the boundary of the near-horizon region. These map to sources for the stress-energy tensor in the dual picture. Among the class of deformations derived in [24], here we will only consider the one corresponding to adding a source for the energy density.
Our action is simply Einstein-Hilbert with a negative cosmological constant in D dimensions,

I = (1/16πG) ∫ d^D X √(−g) (R − 2Λ),

where Λ = −(D − 1)(D − 2)/2. We choose a coordinate system adapted to the black brane; we split the spacetime coordinates X^α into X^α = {t, r, x^i, y^a}, where r is the coordinate transverse to the brane, which also plays the role of the holographic coordinate, and t, x^i, y^a are the coordinates along the black brane (i = 1..p, a = 1..ñ), so that D = ñ + p + 2. The reason for the split between x^i and y^a is that we restrict only to dynamics in a subset p of the boundary spatial directions; in other words, we dimensionally reduce on an ñ-torus and keep only the zero modes. As in [20], we take the large D limit by taking ñ → ∞ keeping p fixed. This is facilitated by choosing a metric ansatz adapted to the near-horizon region. As shown in [22][23][24], it is consistent to solve the Einstein equations with this ansatz as a perturbative expansion in 1/ñ, where R = rñ is held fixed in the limit. Here, a and p_i are the energy and momentum density of the black brane, while γ_tt is a deformation of the AdS-boundary metric. As such, we can think of it as providing an external gravitational potential, or an adjustment to the gravitational environment in which the black hole lives. The ansatz can be thought of as a radial foliation of spacetime. As a consequence, the Einstein equations split into evolution equations in the radial direction and constraints on r = const surfaces. The radial equations can be solved order by order in 1/ñ, so we are left with the constraints. At leading order in 1/ñ, these take the form

∂_t a − ∂² a + ∂_i p^i = 0, (9)

∂_t p_i − ∂² p_i + ∂_i a + ∂_j (p_i p_j / a) = F_i, (10)

where in the presence of the boundary metric deformation we have F_i = (a/2) ∂_i γ_tt, and where indices are raised and lowered with the flat metric δ_ij. Equations (9), (10) correspond to conservation equations associated to time and spatial translation invariance, and behave in accordance with expectations for a viscous fluid, complete with sound and shear-diffusion modes as detailed in [23]. These equations will serve as the basis of our analysis. All that remains is to provide the relations between the gravitational variables (a, p_i), in which we carry out the numerical evolution, and a set of fluid variables: the fluid energy density is a, whilst the velocity is v_i = (p_i − ∂_i a)/a; see [24] for details. Note that this large D limit results in a set of non-relativistic equations.
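For concreteness, a minimal finite-difference sketch of the right-hand sides implied by (9)-(10), in the form reconstructed above, is given below; it uses second-order central differences on a periodic grid for brevity (the simulations described later use fourth-order stencils), and is a sketch rather than the production code.

```python
# Sketch of a finite-difference right-hand side for equations (9)-(10),
# in the form reconstructed above, on an N x N periodic grid.
import numpy as np

def rhs(a, px, py, Fx, Fy, h):
    def dx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
    def dy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)
    def lap(f):
        return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / h**2

    # da/dt from (9); dpx/dt, dpy/dt from (10) including the forcing F_i:
    da  = lap(a) - dx(px) - dy(py)
    dpx = lap(px) - dx(a) - dx(px * px / a) - dy(px * py / a) + Fx
    dpy = lap(py) - dy(a) - dx(px * py / a) - dy(py * py / a) + Fy
    return da, dpx, dpy
```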
KOLMOGOROV SCALING
In this section we present the key results of our work: when appropriately driven by a homogeneous and isotropic forcing function, the large-D equations of general relativity in AdS, (9)-(10), exhibit turbulent behaviour consistent with the predictions of K41 theory. We verify the agreement with K41 by carrying out 256 independent random realisations and using them to compute statistical properties of the velocity field, v_i.
To achieve the conditions required for K41, namely homogeneity, isotropy and driving, we replace F_i on the right-hand side of (10) by an explicit forcing function, instead of utilising γ_tt, which we set to zero. While γ_tt is such a forcing function, it appears as the derivative of a scalar and cannot be directly used to supply a source of vorticity. We shall consider flows in the presence of a nontrivial γ_tt later. We adopt periodic boundary conditions for a torus of size L × L. Details of the choice of F_i, the numerical methods used and their implementation are given in the appendix.
To acquaint the reader, we first illustrate the pattern of vorticity, ω = ε^{ij} ∂_i v_j, obtained during a single realisation at late times in figure 1 (left). The time-dependence of this picture resembles that of a liquid of vortices moving in larger coherent structures. Larger structures can be seen in this picture, in accordance with power beginning to accumulate on the largest scales available, i.e. the torus size. This part of the spectrum, subject to finite-size effects, is expected to lie outside the inertial range. The inertial range is seen to grow over time, from the driving scale downwards, i.e. an inverse cascade. The inertial range approaches two decades' worth of K41 scaling before meeting the finite size of the box. At this time power begins to accumulate at low k and the scaling is destroyed there. Our simulations thus demonstrate quasistationary turbulence. Interestingly, since we are working on a torus, the structure begins to resemble that of a square vortex-antivortex lattice at late times, continually fed from high k. This phenomenon also goes by the term 'energy condensation' [35].
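A minimal sketch of how an isotropic energy spectrum E(k) can be extracted from a velocity field on the periodic grid is given below; the velocity arrays are random placeholders standing in for simulation output, and the shell-binning convention is one common choice rather than the paper's exact prescription.

```python
# Sketch of the isotropic kinetic energy spectrum E(k) from a 2D velocity
# field on an L x L periodic grid; vx, vy are placeholders for output data.
import numpy as np

N, L = 256, 2e5
vx, vy = np.random.randn(N, N), np.random.randn(N, N)  # placeholder fields

vxk, vyk = np.fft.fft2(vx) / N**2, np.fft.fft2(vy) / N**2
ek = 0.5 * (np.abs(vxk)**2 + np.abs(vyk)**2)           # spectral KE density

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)             # angular wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2)

# Bin the spectral energy into circular shells |k| = const:
dk = 2 * np.pi / L
shells = (kmag / dk).round().astype(int)
E = np.bincount(shells.ravel(), weights=ek.ravel())
k_shell = dk * np.arange(E.size)
# In the inertial range one expects E(k) ~ eps^(2/3) k^(-5/3).
```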
Next we demonstrate that the longitudinal structure functions (2) are also seen in our simulations, from n = 2 up to n = 10, in figure 2 (right). For this position-space calculation we pick a single direction, x̂, in which we compute velocity differences, average over each row labelled by y, and then over the 256 realisations. The consistency with K41 is indicative of the absence of intermittency corrections; we are driving the system homogeneously and not providing any window of opportunity for intermittent laminar flow to develop.
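A sketch of this structure-function measurement for a single snapshot is given below; vx is a placeholder for the simulated velocity component, and in practice the averages are also taken over rows and over the 256 independent realisations described in the text.

```python
# Sketch of longitudinal structure functions S_n(r) = <(dv_par)^n> along
# the x-direction of a periodic grid; vx is a placeholder for output data.
import numpy as np

N = 256
vx = np.random.randn(N, N)   # placeholder for the simulated velocity field

def structure_function(vx, n, shift):
    """<[vx(x + r, y) - vx(x, y)]^n> for a separation of `shift` points."""
    dv = np.roll(vx, -shift, axis=0) - vx   # longitudinal increment along x
    return np.mean(dv ** n)

seps = np.arange(1, N // 2)
S = {n: np.array([structure_function(vx, n, s) for s in seps])
     for n in range(2, 11)}
# K41 predicts S_n(r) ~ (eps * r)^(n/3) in the inertial range.
```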
So far we have provided evidence for the presence of K41 scaling in the form of power spectra and structure functions. Finally, we provide further support for why K41 predictions are seen here. A crucial part of the K41 analysis is that there is a dominant scale in the problem, ε, the rate of energy transfer, which is taken to be a constant. This governs how quickly kinetic energy is passed between vortices of different scales. Given that our inertial range is growing over time, exhibiting an inverse cascade from the injected k-scale downwards, we must be providing a constant injection of kinetic energy via the F_i forcing. This is indeed the case, as is clearly shown in figure 3. We see a constant growth of v² with time (integrated over the torus), whilst the total enstrophy (Ω = ∫ d²x ω²) remains approximately constant. (For a discussion of the approximate constancy of Ω for unforced equations see [28]; note that here we are forcing the equations, and the approximate constancy of Ω emerges from the detailed dynamics.)
TURBULENT WAKES
So far in this work we have examined universal turbulent dynamics of black hole horizons by explicitly adding a set of custom forcing terms F_i to the Einstein equations which directly source vorticity. Whilst such terms are the most convenient for achieving homogeneous isotropic turbulence and comparing to K41, it is desirable to have a fully gravitational realisation of turbulent dynamics. In fact our setup is also capable of addressing this question directly. To this end, in this section we consider the case where the forcing is instead provided by a gravitational deformation of the AdS conformal boundary metric, γ_tt(t, x). The associated equations for such deformations simply result in a forcing term F_i = a ∇_i γ_tt / 2, as in (10). We emphasize, however, that this means that
the forcing term is entirely physical, and results from an explicit gravitational source. The fact that we can implement such a scheme is another illustration of the power of our setup for the study of gravitational turbulence.
For this simulation we set initial conditions in which the fluid is in uniform motion, and then quench γ_tt from zero to a symmetric Gaussian profile at a fixed location x on the torus. The resultant dynamics, shown in figure 4, clearly demonstrates that this deformation develops a turbulent gravitational wake on the horizon. Note that this example is neither statistically homogeneous nor isotropic, and moreover it is decaying over time, so we do not expect it to fall into the class of turbulent flows meeting the basic requirements to be described by K41. We note that since the drag force varies qualitatively between turbulent and laminar wakes, we expect that this tidally-induced turbulence can impact the dynamics of strong-field gravitational processes. For example, the mechanism we have proposed may be of relevance to the dynamics of quark-gluon plasmas via AdS/CFT. To add further relevance to this scenario, the γ_tt calculation we have performed may be taken in the same spirit as studies of near-extremal Kerr black holes and their near-horizon regions using CFT [36]. There, when a massive body falls into the near-horizon region it appears as a source term in the CFT, as pointed out in [37]. In our case, γ_tt may be viewed as the gravitational deformation due to such a massive body, and the appearance of a turbulent wake in this context may affect the plunge dynamics and associated waveforms for BH-BH or BH-NS mergers. Of course the extrapolation of our results to asymptotically flat spacetimes is not straightforward, particularly with regards to identifying an appropriate hierarchy of scales, and a direct computation would be required to confirm its astrophysical relevance.
DISCUSSION
In this paper we have worked at strictly infinite spacetime dimension, D, though we considered dynamics constrained to 2 + 1 of them. There are however reasons to expect that results we obtained continue to hold in lower dimensions. First, the dimensional analysis behind K41 scaling is dimension independent; indeed, k −5/3 is predicted and seen in a range of 2 + 1 and 3 + 1 dimensional scenarios, as well as 2 + 1 dynamics of an infinitedimensional system as studied here. This universality occurs despite the clear differences in the dynamics that underpin the cascades; in 3 + 1 the cascade is direct, whilst in 2 + 1 there is also an inverse cascade. Thus whilst the detailed dynamics may change substantially as we lower the number of dimensions, we anticipate that K41 remains robust. Second, the map between gravity and hydrodynamics holds in all spacetime dimensions D.
We also highlighted the large-D equations as a potentially useful model in the study of turbulence in general. These equations are simultaneously dissipative and exact in gradients, by virtue of the parametric control afforded by the 1/D expansion. This should be contrasted with the usual treatment of hydrodynamics in a dynamical setting, where one typically truncates at a finite order and treats the resulting system of equations as exact -a procedure which fundamentally changes the theory. As a consequence of this change one discovers physically undesirable qualities such as instabilities and acausal behaviour both for relativistic theories [38], and non-relativistic theories [39]. Furthermore, one must of course also verify post-hoc that the solution remained a good approximation within the framework of a perturbative gradient expansion. None of these concerns apply to our setup.
Finally we emphasise that whilst the large-D equations of motion appear fluid-like, these are the radial constraint pieces of a full solution to the Einstein equations. The radial evolution equations were solved analytically as part of the construction, and so our solutions correspond to full black hole solutions to the Einstein equations. Thus the K41 behaviour we illustrate here from the fluid perspective is naturally encoded -through the a, p i variables -as geometric data in these spacetimes. In a more general setting the appropriate geometric data corresponding to the fluid observables may be difficult to identify, in which case it may be helpful to consider approaches to visualising horizon vorticity, for example [40].
Numerical Method
The equations we are solving in this case are given by (9) and (10), with the right-hand side of (10) supplemented by the additional forcing term F_i(t, x) specified below. Notice that this term enters the equations in such a way as to produce vorticity without directly sourcing a. We consider the system on a torus of side length L. The force F is constructed in such a way as to drive the system isotropically at a fixed energy scale, m, and consists of random combinations of Fourier modes whose wave vectors lie close to a circle of fixed radius in wave-vector space. Specifically, given a discretisation scheme for the spatial directions, F is given by a sum of M such modes, where the k^(i) are a set of M vectors in Fourier space sampled over an annular region |k^(i)| = m ± δm in an isotropic way. The coefficient A is a fixed overall amplitude controlling the strength of the forcing. At times that are integer multiples of Δt, the mode coefficients c^(i) are drawn from a normal distribution with zero mean and variance Δt, normalised such that Σ_i (c^(i))² = 1, while the angles φ^(i) are drawn from a uniform distribution. These random variables are assigned at times that are integer multiples of the interval Δt. For the times in between, i.e. from t₁ = nΔt for some n ∈ Z⁺ to t₂ = t₁ + Δt, the forcing function is interpolated between the two sets of coefficients for t₁ ≤ t < t₂, with weights chosen such that the square-normalisation is maintained during the interpolation, uncorrelated cross-terms cancelling after averaging. We typically used Δt = 10δt, where δt is the time step used for the numerical evolution. A couple of comments are now in order. First of all, the benefit of forcing the system at a particular energy scale, m, is that there will be a sharp peak in the power spectrum at that scale, while the remainder of the spectrum will not be contaminated, allowing easier identification of scaling behaviour at larger and smaller energy scales. Furthermore, interpolating over the random amplitudes in this way allows us to use deterministic time-stepping techniques instead of stochastic ones.
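A sketch of this construction is given below; the transverse (divergence-free) polarisation used to inject vorticity and the unit square-normalisation of the coefficients are assumptions consistent with, but not uniquely fixed by, the text.

```python
# Sketch of the annular random forcing described above. The transverse
# polarisation and the exact normalisation of the mode coefficients are
# labelled assumptions, chosen so that vorticity is sourced.
import numpy as np

rng = np.random.default_rng(0)

def sample_wavevectors(m, dm, M, L):
    """Draw M wave vectors isotropically from the annulus |k| = m +/- dm."""
    theta = rng.uniform(0.0, 2.0 * np.pi, M)
    radius = m + rng.uniform(-dm, dm, M)
    return (2.0 * np.pi / L) * np.column_stack(
        [radius * np.cos(theta), radius * np.sin(theta)])

def draw_coefficients(M, dt):
    """Gaussian amplitudes (variance dt) rescaled so sum of c_i^2 = 1."""
    c = rng.normal(0.0, np.sqrt(dt), M)
    c /= np.linalg.norm(c)
    phi = rng.uniform(0.0, 2.0 * np.pi, M)
    return c, phi

def force_field(X, Y, k, c, phi, A):
    """Assemble F(x) = A * sum_i c_i e_perp(k_i) sin(k_i . x + phi_i)."""
    Fx, Fy = np.zeros_like(X), np.zeros_like(Y)
    for (kx, ky), ci, pi in zip(k, c, phi):
        kmag = np.hypot(kx, ky)
        phase = np.sin(kx * X + ky * Y + pi)
        Fx += A * ci * (-ky / kmag) * phase   # transverse polarisation,
        Fy += A * ci * ( kx / kmag) * phase   # so vorticity is sourced
    return Fx, Fy
```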
Let us now move on to discuss the details of the numerical method used for solving these equations. We utilise a uniformly spaced discretisation of x, y with N_x, N_y grid points respectively, taking N_x = N_y = N. We adopt fourth-order finite difference approximations of the derivative operators. The variables a, p_x and p_y are then evolved forward in time using fourth-order Runge-Kutta time-stepping (RK4), subject to periodic boundary conditions. In addition we add a sixth-order Kreiss-Oliger dissipation term [41] of strength η to the time derivative of each evolved variable, where h = L/N is the grid spacing. Note that this term approaches zero faster than the error in the spatial finite differences as h → 0. At the highest resolution considered, N = 1024, we have also performed numerical evolutions with η = 0. The difference between η = 0 and η ≠ 0 is only visible near the UV in the power spectrum, as expected. The properties of the solutions away from the UV are not affected by η.
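For concreteness, here is a minimal sketch of the two spatial operators just described, in plain C++. The fourth-order centred first-derivative stencil is standard; the 7-point sixth-order Kreiss-Oliger difference is also standard, but the overall -η/(64h) normalisation used here is one common convention and should be read as an assumption, as should all names.

#include <vector>

// Periodic index wrap for a grid of N points.
inline int wrap(int i, int N) { return (i % N + N) % N; }

// Fourth-order centred first derivative on a periodic 1D slice:
//   f'(x_i) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h)
double d4(const std::vector<double>& f, int i, double h) {
    const int N = static_cast<int>(f.size());
    return (-f[wrap(i + 2, N)] + 8.0 * f[wrap(i + 1, N)]
            - 8.0 * f[wrap(i - 1, N)] + f[wrap(i - 2, N)]) / (12.0 * h);
}

// Sixth-order Kreiss-Oliger dissipation contribution added to du/dt at point i.
double ko6(const std::vector<double>& f, int i, double h, double eta) {
    const int N = static_cast<int>(f.size());
    const double s = f[wrap(i - 3, N)] - 6.0 * f[wrap(i - 2, N)]
                   + 15.0 * f[wrap(i - 1, N)] - 20.0 * f[i]
                   + 15.0 * f[wrap(i + 1, N)] - 6.0 * f[wrap(i + 2, N)]
                   + f[wrap(i + 3, N)];
    return -eta / (64.0 * h) * s;
}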
The last piece of information needed for the time evolution is to specify the initial conditions. For simplicity, we consider homogeneous initial conditions for the evolution functions, namely a = 2, p_i = 0.
In the particular evolutions discussed we considered N = 1024, M = 100, η = 0.4, δt = 6.4 × 10^−4 L, A = 0.005, and m = 100. In order to observe turbulence, the associated Reynolds numbers, Re ∼ L, should be large, and in our simulations we use L = 2 × 10^5. Random numbers are generated using the Mersenne twister algorithm [42], and normally distributed variables are obtained using a Box-Muller transform [43]. We work in double-precision floating point.
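A sketch of this random-number pipeline: the C++ standard library's std::mt19937_64 is a Mersenne twister, and the Box-Muller transform converts uniform draws into a standard normal sample. The function names and the variance-∆t scaling helper are illustrative assumptions.

#include <cmath>
#include <random>

// One standard-normal sample via the Box-Muller transform:
//   z = sqrt(-2 ln u1) * cos(2 pi u2),  with u1, u2 ~ U(0, 1).
double gaussian(std::mt19937_64& rng) {
    static constexpr double kTwoPi = 6.283185307179586;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double u1 = 1.0 - uni(rng);   // shift away from 0 so log() is finite
    const double u2 = uni(rng);
    return std::sqrt(-2.0 * std::log(u1)) * std::cos(kTwoPi * u2);
}

// A mode coefficient with zero mean and variance Dt, as described above.
double mode_coefficient(std::mt19937_64& rng, double Dt) {
    return std::sqrt(Dt) * gaussian(rng);
}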
Overview of GPU implementation
Given the relative simplicity of the explicit update steps, we have found the use of GPUs to be of significant utility. The methods used are standard and do not warrant a detailed exposition; however, a high-level overview of the step-by-step procedure used may be of value. This is given below.
Seed the pseudo-random number generator uniquely for each run. Allocate memory on the device to store, for all variables a, p_x, p_y: their spatial derivatives, first time derivatives, and their values at the four RK4 intermediate steps. Allocate memory to store 100 pairs of k_x, k_y values specifying the location of driving points in momentum space, as well as two buffers for the associated amplitudes and phases (two buffers are required because we interpolate between two sets of random variables over time). Initialise the allocated memory with initial data for the run, the driving point locations, and initial random amplitudes and phases (all generated on the CPU). Then, for each complete time step: 1. Check if the random amplitudes and phases need updating at this time step. If so, cycle between the two buffers, compute new random values on the CPU and copy them to the device.
2. For each RK4 intermediate step: • Compute spatial derivatives in the x-direction (of the intermediate RK4 variable appropriate for this step). This is performed by splitting the N × N grid into N thread blocks, one for each y-value, and using N threads in each thread block to perform the computations at each x. In this way we can set the shared memory for each thread block to be the entire y-column, extended to include an extra 8 ghost points for periodicity (a kernel sketch is given after this list). See, for example, [44].
• Transpose and repeat for all y-derivatives (mixed second derivatives do not appear in the equations).
• Compute time derivatives using the previously computed values through the equations of motion.
• Add F_i terms to the time derivatives as required. These are constructed by interpolating between F_i values evaluated using the initial and final amplitude/phase buffers.
• Fill the next RK4 intermediate buffer using the time derivatives. For the final RK4 intermediate step, the next RK4 buffer written is the first RK4 buffer.
3. At infrequent intervals, copy the field values (i.e. the first RK4 buffer) from the device and write them to disk. In practice we found it invaluable to use the visualisation software VisIt [45], and so we also write an appropriate 'brick of values' descriptor file to accompany each binary data file.
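As a concrete illustration of step 2, the following is a minimal CUDA sketch of the shared-memory x-derivative kernel, assuming one block per y-value with N threads each, the line of data at fixed y stored contiguously, and the fourth-order stencil fused directly into the kernel; all names and these layout choices are assumptions made for the example.

// Fourth-order x-derivative on an N x N doubly periodic grid.
// Launch as: deriv_x<<<N, N, (N + 8) * sizeof(double)>>>(f, dfdx, N, 1.0 / (12.0 * h));
__global__ void deriv_x(const double* __restrict__ f, double* __restrict__ dfdx,
                        int N, double inv12h) {
    extern __shared__ double line[];      // N interior points + 8 ghost points
    const int x = threadIdx.x;            // 0 .. N-1
    const int y = blockIdx.x;             // 0 .. N-1

    line[x + 4] = f[y * N + x];           // stage the interior of this line
    if (x < 4) {                          // fill ghost points by periodic wrap
        line[x]         = f[y * N + (N - 4 + x)];   // left ghosts
        line[x + N + 4] = f[y * N + x];              // right ghosts
    }
    __syncthreads();

    const double* p = line + 4 + x;
    // (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h)
    dfdx[y * N + x] = (-p[2] + 8.0 * p[1] - 8.0 * p[-1] + p[-2]) * inv12h;
}

Staging the whole line in shared memory means each global value is read only once per block, and the eight ghost entries keep the stencil itself free of wrap-around conditionals; the y-derivative pass can then reuse the same kernel on transposed data, as described in the list above.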
In this way the bulk of the evolution is performed on the device itself. The exception, which is also the slowest part of the procedure, is the device synchronisation bottleneck in step 1. | 2019-11-29T19:00:04.000Z | 2019-11-29T00:00:00.000 | {
"year": 2021,
"sha1": "d4a1b6953d3b916b38450ff6d946e2a79bb0cf1e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP07(2021)063.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "d4a1b6953d3b916b38450ff6d946e2a79bb0cf1e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
146261833 | pes2o/s2orc | v3-fos-license | Locating mathematics within post-16 vocational education in England
Abstract The political importance of mathematics in post-16 education is clear. Far less clear is how mathematics does and should relate to vocational education. Successive mathematics curricula (e.g. core skills, key skills) have been developed in England with vocational learners in mind. Meanwhile, general mathematics qualifications remain largely disconnected from vocational learning. Following a brief historical survey of mathematics within vocational education, the paper presents findings from a nested case study of student groups in three large Further Education colleges in England. The primary unit of analysis herein is student groups learning Functional Mathematics in two vocational areas: construction and hairdressing. We show how approaches to organising teaching, developing connected curricula and classroom pedagogy tend to isolate or integrate mathematics from/with the vocational experience. Integrated approaches are shown to impact positively on student engagement and attitudes to learning mathematics. The paper concludes by discussing the potential impact of academic qualifications displacing vocationally relevant mathematics.
on vocational courses in Further Education (FE) settings. In England, 16-year-olds complete the General Certificate of Secondary Education (GCSE) in several subjects, including mathematics, before continuing either to further academic study or into vocational education, the majority of which is provided by FE colleges. The influential Wolf Report (Wolf 2011) has resulted in all post-16 students without a grade C or above in GCSE Mathematics being required to resit this examination or be working towards an interim qualification such as Functional Mathematics. 1 Recent figures indicate that 37% of the 2012/13 age-16 GCSE cohort did not achieve a grade C in mathematics (DfE 2014) and that three-quarters of those students transferred to FE colleges (Education and Training Foundation 2014): around 180,000 young people. The impact of these policy changes is therefore particularly significant for FE colleges.
Although other challenges arise for college management (such as a shortage of suitably qualified teachers), our main concern here is how mathematics is positioned within the student learning experience. Students on vocational pathways often have low prior mathematical attainment and find the subject unappealing. Those with a history of failure are more demanding to teach (Turner, Harkin, and Dawn 2000) so, for a subject often associated with disaffection (City & Guilds 2012; Nardi and Steward 2003), strategies to engage students in meaningful learning experiences are important.
To understand this positioning of mathematics within the student experience, we also need to consider the context in which the subject is situated. Mathematics education within FE in England intertwines three areas of policy and practice: academic, vocational and adult education. These strands of the education system, each associated with different ideologies, traditions and sociocultural practices, have distinct historical roots, leading to different views of mathematics. A deeply embedded academic-vocational division continues to have a considerable impact (Pring et al. 2009) and the positioning of mathematics within FE is characterised by tensions between contrasting views of the subject's value in relation to academic or vocational goals. In the academic pathway, mathematics is a core discipline with high status and exchange-value. It acts as the most powerful 'gatekeeper' (Volmink 1994) to further study and future employment opportunities. The relationship between mathematics and vocational education has been more uncertain and this is examined further in the following section. This paper explores how mathematics, in particular Functional Mathematics, is located within the experiences of FE students. Using the twin concepts of integration and isolation, which emerged as strong themes from the research, we describe contrasting locations of mathematics with respect to students' vocational learning and consider the impact of college policies enacted at different levels, particularly in terms of college structures, programme organisation, curriculum and pedagogy. Before we present our empirical findings, the following section outlines relevant historical developments and considers how these have framed mathematics policy and practice within FE colleges. Some recurring historical themes are introduced here that are key to the interpretation of our data.
The historical positioning of mathematics in education
As early as the eighteenth century, academic and vocational educations were distinguishable as separate strands. Schools provided the classical, liberal education favoured by English public schools and universities (Hyland 1999) where knowledge had intrinsic value. Work-related training focused on developing practical skills within the workplace through apprenticeships.
These two pathways signalled the division between academic education and vocational training that has pervaded the English education system to this day, affecting 14-19 education policy (Young 2011), curriculum and organisation (Hodgson and Spours 2008).
Mathematics education based on a classical Euclidean view became established in schools. It was only with the growth of technical education in the late nineteenth century, through the Mechanics Institutes and early technical colleges (Hyland and Merrill 2003; Lucas 2004), that new forms of applied science and mathematics learning emerged that could be described as general work-related knowledge rather than occupation-specific skills or classical subject-focused study. Despite being viewed by some as 'intellectually adrift' (Lucas 2004, 7) due to the distinction from established forms of knowledge, technical education offered a potential middle ground for mathematics between embedded occupation-specific skills and a liberal approach to the subject. In the terms used by Ernest (2004), this represented a wider utilitarian view of mathematics, rather than a restriction to practical work-related knowledge, but was still distinct from the academic approach that incorporated an appreciation of the discipline with advanced specialist knowledge.
During the twentieth century, vocational and academic education remained largely disconnected, despite local authorities gaining responsibility for both strands from the 1944 Education Act onwards (Fieldhouse 1996). Different forms of knowledge were reinforced through distinctive but separate curricula and qualifications for academic and vocational learning. Mathematics in vocational education was focused on work-related skills that were embedded into vocational competencies. In academic pathways, abstract mathematics became highly valued and necessary for progression. These contrasting purposes for mathematics reflected a 'stratification of knowledge' (Young 1998, 51) that privileged academic 'learning' over vocational 'training' (Hyland 1999) and reflected hierarchical societal structures.
The 'middle ground' remained a relatively unexplored area of the mathematics curriculum until the latter part of the twentieth century when high youth unemployment prompted significant growth in vocational education for young people. Hyland and Merrill (2003) describe how various youth-training schemes in the 1970s and 1980s led to a greater recognition of the value of mathematics as a general preparation for employment. A call for employment-readiness, accompanied by concerns about skills gaps in subjects such as mathematics, continues to be echoed by the likes of the Confederation of British Industry (Confederation of British Industry 2015).
The landmark Cockcroft Report of 1982 emphasised the importance of numeracy skills for everyday life and work. Meanwhile, in vocational education, there was considerable interest in identifying generic skills that would be useful for a wide range of occupations (Hyland 1999). Although lists of skills were developed by different bodies in an attempt to define 'core' skills, they did not always include mathematics. With the mandatory inclusion of 'application of number' in new General National Vocational Qualifications (GNVQ) and National Vocational Qualifications 2 (NVQ) in the 1990s, mathematics became an essential component of vocational training (Hyland 1999). The aim of introducing 'core skills' was to help students develop generic employment skills but, in practice, the curriculum was best suited to a narrow embedded approach focused on occupation-specific applications. Despite attempts to introduce these core skills into both academic and vocational post-16 learning, they were soon rejected from academic pathways. Conceptually, core skills were problematic due to the assumed generalisability and transferability of these skills between contexts (Green 1998). Hyland and Johnson (1998) considered such free-standing generic skill qualifications untenable, a position also adopted later by Wolf (2011) regarding subsequent qualifications. Despite multiple criticisms of core skills (Green 1998), these developments laid the foundations for a possible bridging of the academic-vocational divide. The Dearing Report (1996) supported the development of these generic skills but redefined them as 'key skills'. New qualifications appeared and the inclusion of Application of Number as a Key Skills qualification was recommended for students on both academic and vocational pathways (QCA 2000). This approach to developing students' generic mathematical skills suggested a shift towards a common ground where both mathematical knowledge and applications were valued. The assessment included both an application component and a formal test. Approaches that linked the mathematics to the vocational programme and emphasised a vocational purpose for the skills became popular in colleges (Eldred 2005; Roberts et al. 2005). These 'generic' skills therefore became more embedded and contextualised into vocational training, although the test component was disconnected from vocational practices.
On the heels of the Dearing recommendations, Moser (1999) highlighted low levels of numeracy in the adult population, which resulted in an expansion of adult numeracy provision in England. A new core curriculum for adult numeracy (DfES 2001) underpinned the Key Skills Application of Number specifications and was linked to the National Curriculum in schools. It might have appeared that some consistency had been achieved across the academic, vocational and adult strands of education, but the assessments showed some significant differences in the underlying educational purposes. The Adult Numeracy National Test focused on assessing knowledge of basic mathematical processes, whereas the key skills assessment included an identical test but also a portfolio of contextualised work to demonstrate competency with application skills in familiar contexts.
The inadequacies of the mathematics curriculum and qualifications, for both academic and vocational purposes, were brought into focus by the Smith Report (2004) which argued that neither the needs of higher education nor those of employers were being met. The report identified a need to develop new pathways for mathematics in the 14-19 phase and, alongside the recommendations made by Tomlinson (2004), provided the impetus for significant change.
A renewed call for students to develop mathematical skills for everyday life and employment (DfES 2005) led to the replacement of key skills by new Functional Skills qualifications. The Functional Mathematics curriculum centred on applications and problem-solving in 'realistic' life situations (QCA 2007). Moving away from specific applications in familiar contexts towards developing transferable problem-solving skills, the approach was designed to meet the demands of the workplace where tasks are non-routine (Hoyles et al. 2002) and complex situations require interpretation (Hodgen and Marks 2013). Whether this curriculum could fulfil its comprehensive purpose and succeed where core and key skills had failed was, however, debatable (Hodgson and Spours 2008; Wake 2005). In practice, Functional Mathematics in schools became embedded into GCSE Mathematics and quickly disappeared, but in FE it was retained and used extensively with adult and vocational students. The research reported herein investigates the experiences of vocational students, aged 16-18 years, learning Functional Mathematics.
Research design
Although classroom research is undertaken at a particular point in time and space, it is necessarily also research of the imbricated layers of historical policy and educational practices described above (Noyes 2013). The main research study on which this paper is based sought to understand the effects of this amalgam of national-and college-level policies, organisational structures, commonly accepted pedagogic practices and classroom cultures on students learning mathematics in FE. In view of the limited research attention paid to mathematics education in FE, the study was exploratory as well as explanatory and was not strongly framed by a particular theoretical stance. In this paper, three particular questions are explored: (1) How do vocational students view the place and purpose of mathematics?
(2) What features of college policy and practice influence their perspectives?
(3) What impact does this have on their learning experience?
A nested case study methodology employing a multi-method approach was used to generate rich data and allow for triangulation between respondents, methods and sites. Three large FE colleges in the Midlands of England with different organisational approaches to Functional Mathematics formed the primary cases. Preliminary discussions with colleges suggested that their organisational approaches could be appropriately described using the terms centralised and dispersed to indicate the locations of Functional Mathematics teachers within contrasting college staffing structures and the associated management focus. College A had a strong centralised Functional Mathematics team who worked across many of the college departments, although some vocational areas had appointed their own Functional Mathematics teachers who were situated in the department as part of the vocational team. College B had a fully dispersed structure in which Functional Mathematics teachers were situated within vocational departments, whilst College C had recently moved to a dispersed arrangement but still retained a small centralised team.
Each college offered a wide range of courses and this made comparisons possible between colleges in the same vocational areas. Student participants were mainly on level 2 vocational programmes and were working at a similar level of Functional Mathematics (level 1 or 2 3 ). A total of 17 student groups were recruited from the Construction, Hair and Beauty and Public Services 4 areas. Each student group, together with their Functional Mathematics teacher, formed a case study nested within the college. Data were generated over a nine-month period from focus groups, card-sorting activities and from questionnaires and interviews with Functional Mathematics and vocational teachers. In addition, observations of several Functional Mathematics lessons and one vocational lesson were conducted for each of the student groups.
The student focus groups each met three times during the academic year to discuss their transition from school to college, experiences of Functional Mathematics and opinions about a range of exemplar mathematics curriculum materials. The first- and third-term discussions were preceded by individual card-sorting activities in which students indicated how strongly they agreed or disagreed with statements about learning mathematics in school and in college. These responses were coded, summarised and treated as ordinal data in the analysis. The analysis of the qualitative data was conducted using a constant comparison method to identify emerging themes, which were then explored further using cross-case and within-case comparisons.
Research findings
The present paper investigates the location of mathematics within the learning experiences of vocational students, by considering their views of its place and purpose alongside the different policies and practices adopted by colleges. This section begins with a brief description of the changing perspectives towards mathematics and attitudinal shifts experienced by some students in their transition from school to FE college (see Dalby [2014] for more information). Four brief case portraits of Functional Mathematics groups (and their teachers) are then presented to highlight key features of college policy and practice that had an impact on the student experience. They are analytic 'sketches', rather than comprehensive accounts, but the main features are a fair reflection of the full data and illustrate the main themes that emerged from the study. In the following section, the key themes are summarised and we consider how the positioning of mathematics within students' experiences through college policies and practices is related to the concepts of integration or isolation.
Analysis of the first set of individual card-sorting activities showed that many of the students in this study were disaffected by their experience of mathematics in school, which was not unexpected (see Nardi and Steward 2003). There were, however, some significant shifts towards more positive attitudes in college, and the focus group discussions supported this analysis. The results shown in Table 1 indicate that many students considered Functional Mathematics lessons in college to be less stressful, less difficult, less confusing and more interesting than GCSE Mathematics in school. These students reported having more positive relationships with their teachers, better subject understanding and increased confidence.
The reasons for any attitudinal changes were explored in the focus groups. Although reactions were sometimes mixed, student attitudes within groups were generally consistent though not always positive. This bifurcation into more positively and negatively disposed groups is explored by studying contrasting cases of Functional Mathematics classes. Each of the following case portraits has been selected to illustrate several key features and allow for comparison of contrasting aspects of policy and practice. These case portraits present key extracts of relevance from the case studies, and are based on an analysis of triangulated data from lesson observations, teacher interviews, teacher questionnaires and student focus groups.
Case 1: Elliott's Construction group
Elliott is a full-time Functional Mathematics teacher in a dispersed staffing structure, teaching solely within the Construction department. He is managed as part of the vocational team and his teaching is timetabled in classrooms allocated to the department. Sharing a staff room with the Construction teachers means he works closely with vocational staff when designing tasks for his Functional Mathematics lessons to ensure they represent authentic uses of mathematics with accurate contextual details.
Elliott frequently refers to applications of mathematics from the construction industry in his lessons, linking his teaching to vocational schemes of work. He sets problems in contexts relating to building trades and incorporates relevant practical activities. For example, students construct accurate models of houses to use with a scale drawing of a new estate so that they can carry out a series of practical problem-solving tasks, such as designing plans for the supply of the basic amenities and calculating the costs. In another lesson, students use authentic floor plans and images of houses to draw elevations. These types of task connect mathematics to vocational learning in an integrated approach. Students recognise the vocationally related purpose for learning mathematics and contrast this with their school experience:

Table 1. Differences in individual students' ratings of statements about their experience of mathematics in school and college (term 1). * Significant at 5% level.

Because at school you was just doing maths to get grades where you'd do certain types of maths just to get you better grades but here you do maths that's going to help you with future life and stuff that you're always going to need. (Connor)

Connor contrasts GCSE Mathematics, which he perceives as being about gaining a qualification, with Functional Mathematics, which he believes is helping him to develop useful skills for his future. Most of this group describe their previous experience of mathematics in terms of disaffection and low attainment, but they talk positively about the relevance of the tasks used by Elliott and the connections to their vocational curriculum. For them, Functional Mathematics now has a purpose beyond being an academic subject. They are finding that mathematics can be relevant to their vocational training and this in turn results in more positive attitudes and better engagement with lessons.
Case 2: Rachel's Construction group
Rachel is a Functional Mathematics teacher in a dispersed structure, based in the Construction department and teaching in rooms within the department. She has daily interactions with the vocational teachers. Her approach to teaching Functional Mathematics is, in contrast to Elliott, to avoid familiar vocational applications in favour of a diversity of contexts for problem-solving. Rachel's approach to developing general problem-solving skills involves various puzzles and tasks that require the development and implementation of systems or strategies. The contexts for these problems are often fictitious or use scenarios with no connection to the vocational area. The students in this group have difficulty seeing the relevance of Functional Mathematics, as one student explains: 'There was nothing new in it that we did that I can't remember doing at school' (Ethan).
These students make comparisons to GCSE Mathematics in terms of curriculum content and do not consider themselves to have developed any additional skills in using and applying mathematics. From their perspective, Functional Mathematics is similar to GCSE but less challenging. It is, therefore, just another qualification with no particular relevance to their vocational course or their personal development, other than being a college-imposed requirement for progression to the next level of training.
Many of this group do not see the need to study Functional Mathematics as they already have a grade C at GCSE. The policy of the college, however, requires them to take one Functional Skills qualification and the department stipulates mathematics. Through teaching approaches that fail to integrate or connect Functional Mathematics with students' vocational learning, the subject is perceived as irrelevant. Students continue to associate mathematics with gaining a qualification rather than learning useful skills.
Case 3: Richard's Hairdressing group
Richard is a Functional Mathematics teacher within a dispersed structure, situated in the Hair and Beauty department and teaching classes from this vocational area but in various rooms across the college. His lessons are organised around a series of tasks or projects that are directly related to the vocational course, such as refurbishing a hairdressing salon, drawing up a business plan for a new salon or planning an appointments schedule. These tasks use relevant, authentic information and so are strongly linked to the students' vocational programme. Richard also uses scenarios that arise during informal conversations with students to pose impromptu problems or illustrate how mathematics is useful in their personal lives or intended careers.
He's doing it in a scenario so that makes it more interesting and more fun to learn about rather than just being sat down writing on the board and working out. (Leanne)

The use of 'real life' scenarios has stimulated Leanne's interest in learning mathematics. She also describes in the focus group how she has come to understand its relevance to hairdressing. Discovering a practical purpose for mathematics in relation to her vocational learning has helped her to re-engage with mathematics and she later achieves success in the Functional Mathematics examination. Ellie, from the same group, explains a similar impact on her achievement:

I've learned more than I did in school and I was in school for, what, four years? I think I've learned more in the past two years in maths than I have in the past four years in school. (Ellie)

Although the opinions within this group are not unanimous, there is agreement that Functional Mathematics involves learning relevant skills. These students do not already have GCSE Mathematics at grade C and generally lack confidence. Many approached the prospect of a Functional Mathematics course with reluctance and did not initially understand the relevance of mathematics to their vocational learning. The connections to hairdressing convinced most of the group, however, of their need to use mathematics and resulted in an intrinsic purpose for learning.
Case 4: Kathy's Hairdressing group
Kathy is a mathematics specialist within a centralised team of Functional Skills teachers. She teaches groups from several vocational departments. This group of hairdressing students has lessons in a dedicated Functional Skills room some distance from the hairdressing vocational base. Kathy's teaching is based on a workshop approach, in which students work individually, at their own pace, using printed booklets on different mathematical topics. These include questions in a range of contexts but with little connection to the vocational area.
In the focus group the students discussed initially how they valued the flexibility and personal independence of the workshop approach but, as the year progressed, they expressed their boredom with the repetitive use of the booklet system. Once the attractiveness of this new teaching method diminished, they began to question the purpose of learning the subject. Amber explained that 'If I had the choice then I wouldn't go; I'm not going to use it in everyday life' and this reflected the general attitude of the group. For them, Functional Mathematics lacked a practical use and their motivation declined accordingly, despite the qualification being a requirement for completion of their vocational course. These students see little connection between mathematics and hairdressing, although they do acknowledge that some mathematics is needed for their intended occupation and that there is room for improvement in their skills.
This disconnected perception of Functional Mathematics was clear in discussions about the use of contextualised questions, as Naomi explained: 'Because if it was about hairdressing then I'd be thinking "No, you do it like this" instead'. The connection between a mathematics problem in a hairdressing context and typical workplace practices in a salon appears tenuous. The use of context in these lessons is often superficial and unrealistic. Rather than connecting Functional Mathematics with hairdressing, this suggests that mathematics is irrelevant to, or isolated from, students' work experiences. The purpose of learning mathematics for these students is more closely related to passing an examination than it is to supporting their vocational learning.
Key themes
A comparative cross-case analysis of the full set of case studies identified a number of key themes and features within the case study groups that are relevant to the research questions. These features are summarised in this section with reference to the strong twin themes of isolation or integration which provide some common threads through a complex interaction of college policy and practice with students' perceptions of mathematics.
For all of these groups, Functional Mathematics is taught in discrete sessions and therefore the starting point is one of isolation rather than integration or connection between mathematics and vocational learning. Furthermore, students' experiences of GCSE Mathematics in school have generally been of disaffection, and they approach Functional Mathematics with views of the subject as an academic discipline with little relevance or connection to their vocational learning. Table 2 summarises the key features that shaped student perceptions of the isolation or integration of mathematics within their learning experience in college. The cases exemplify the general patterns found through the full data-set.
Elliott and Rachel may teach in the same vocational area and are both located within the vocational department, in dispersed structures, but their students' views of the relevance of mathematics differ considerably. Elliott's use of vocational contexts and connections to the vocational course contributes to his students' views of Functional Mathematics as relevant and useful. Rachel has similar opportunities to make meaningful connections as a teacher situated within the Construction department but chooses an alternative approach to teaching Functional Mathematics that generalises, abstracts and thereby isolates mathematics from vocational learning.
A comparison of Richard's and Kathy's groups, both in the Hairdressing area but in different colleges, shows how using contextualised tasks and making curriculum connections can support students to understand the relevance of mathematics. Kathy is part of a centralised Functional Mathematics team and teaches groups from several vocational areas. Given her isolation from the vocational area, a generic approach to teaching is understandable. Lacking opportunities to plan with vocational teachers, her teaching is disconnected from students' vocational learning and thereby relatively less engaging.
The policy of situating Functional Mathematics teachers within vocational teams presents opportunities to connect mathematical and vocational learning in an integrated approach. Elliott and Richard make use of this potential, making effective connections through the use of mathematical tasks in authentic vocational contexts. They also use informal discussions to enhance connections or explain other applications to students' lives. Students thereby understand the purpose of mathematics in relation to their current interests and values. Although this may be seen as a somewhat utilitarian view of mathematics (Ernest 2004), their students are engaged, which is a significant improvement upon their previous experiences and attitudes.
Connecting mathematics to students' lives and values also contributes to a classroom culture that more closely resembles that of the typical vocational learning environment. In lesson observations, differences between vocational and Functional Mathematics sessions in relation to physical space, student responsibilities, time scales for tasks, peer learning opportunities and teacher roles betray a cultural division that only a few teachers in the study attempted to bridge. The more integrated approach to the curriculum adopted by teachers such as Elliott and Richard mirrors values that are important in the vocational area. This contributes to a classroom culture that is more closely connected to the vocational and is therefore valued by students.

The second policy that had a strong, and sometimes detrimental, influence on the student experience was concerned with which students should take a Functional Mathematics course. In Rachel's group, students who had already achieved GCSE Mathematics at grade C or above were required to take Functional Mathematics but became resentful because they could see no added value in gaining a Functional Mathematics qualification at an equivalent level. In contrast, low-attaining students in Richard and Elliott's groups were initially reluctant to engage with Functional Mathematics, but their on-course experience resulted in changed attitudes as they began to discover a vocationally related purpose for the subject. For these students, discovering a use-value for mathematics was an important factor in their re-engagement.
Student focus groups highlighted how the timetabling and rooming of Functional Mathematics sometimes reinforced perceptions that the subject was an isolated, disconnected addition to their vocational programme. Disjointed arrangements, such as using rooms at a distance from the students' vocational base area, or timetabling Functional Mathematics on a separate day from the vocational programme, generated negative attitudes and gave the impression that the subject was not an integrated part of their study programme.
The data show how connections between mathematics lessons and the vocational area are both (a) variable in practice and (b) important influencers of students' views of the relevance and value of mathematics for their lives. Differences in the systems, curriculum and classroom cultures between mathematics and vocational education were sometimes bridged through the building of meaningful connections and facilitated by structures that integrated mathematics teachers into vocational departments. Although Functional Mathematics lessons were timetabled separately and were sometimes physically distant from the vocational area, making connections at other levels brought the two learning experiences closer together and had a positive impact on student attitudes, engagement and understanding.
The findings discussed above show that the historical division between academic and vocational education (Pring et al. 2009) is still visible in FE colleges in the enactment of mathematics policy and practice. Differences are manifested in various ways and these affect vocational students' views of mathematics. Nevertheless, the case studies also demonstrate the positive impact of concerted efforts to provide a more integrated learning experience, both in terms of colleges' organisational policies and through teachers' approaches to curriculum and pedagogy. Table 3 summarises the key features that contribute to students' perceptions of mathematics learning as isolated from, or integrated with, vocational learning.
Intersecting levels of influence
College policies for staffing and programme organisation impact upon student perceptions of mathematics as an isolated or integrated subject. Government policy affects the type of curriculum offered to students, whilst colleges and teachers have respective agency through internal policies, programme planning and classroom practices. At each level (college, programme and classroom), an integrated approach seems to contribute towards more positive views of mathematics, and the research suggests that this results in improved attitudes and engagement.
The best-case scenario for an integrated approach would be one in which each level of the system is oriented towards integration. When a teacher's Functional Mathematics pedagogy connects mathematics to vocational learning, where there is close collaboration with vocational staff, and where teaching takes place in close proximity to the vocational base, there is greater synergy between the college, programme, curriculum and pedagogic levels. This desirable scenario is not always present, but some of the levels in Table 3 are more critical than others. For example, evidence from other cases within the study suggests that a strongly integrated pedagogy can help to overcome the impact of Functional Mathematics teaching being centralised. In contrast, a Functional Mathematics teacher who is placed within a vocational department might still adopt a disconnected or isolated pedagogy. In both of these cases, activity at one level tends to counteract that at another. This inter-linking reinforces the idea that understanding mathematics education in FE needs to be considered as a complex, multi-scale problem.
College structures with Functional Mathematics teachers dispersed amongst vocational departments provide, on the basis of evidence from this study, better opportunities to integrate mathematics and vocational learning, but positioning a Functional Mathematics teacher into a vocational team is not sufficient to ensure closer connections. Better integration of curriculum and pedagogy is also necessary to develop a stronger sense of the use-value and relevance of mathematics to vocational learning. There is evidence that organisational arrangements, such as the timetabling of Functional Mathematics, can contribute to students' perceptions of the isolation of the subject, but these can be countered by effective pedagogy and an appropriate curriculum.
Curricular and pedagogic relevance
Curriculum and pedagogy are, unsurprisingly, identified from this study as strong influences that are closely linked to the effectiveness of mathematics teaching and learning. Providing a relevant curriculum and developing classroom practices that communicate an integrated view of mathematics is, however, not easy given the direction of current government policy. With respect to the curriculum, some integration was achieved in Functional Mathematics lessons by constructing meaningful connections to students' lives through the use of contextualised problems and 'realistic' scenarios (QCA 2007). Although the Functional Mathematics curriculum is not directly focused on application in familiar contexts, as it was with key skills (QCA 2000), some teachers emphasised a practical purpose for mathematics in relation to students' vocational interests and values. In this way, the Functional Mathematics curriculum, although theoretically positioned in the 'middle ground' as a means of developing a generic set of skills, is experienced by some students as having close connections to their vocational training. These learning experiences highlight the relevance and use-value of mathematics, leading to better engagement and reported improvements in understanding.
In contrast, many of the students saw little relevance in the GCSE Mathematics they had learned in school. Although they acknowledged its exchange-value, they perceived it to be an academic subject and therefore isolated from their current learning experience. These students valued vocational learning over academic study, in contrast to normative knowledge hierarchies (Young 1998). For vocational students to re-engage with mathematics, the curriculum needs to relate to vocational values rather than defaulting to academic expectations. Recognising the personal use-value of mathematics was fundamental to the process of replacing students' previous views of mathematics with ones that were more aligned to their vocational aspirations.
The linking of mathematics and vocational classroom cultures through the alignment of values is also important in developing an integrated approach. Features of the general organisational culture in FE colleges, such as more equitable social structures, informal learning environments and teacher-student relationships, help to generate positive attitudes when enacted in mathematics classrooms. Classes with these characteristics align well with students' vocational experiences and values. This harmonisation of values and practices enables traditional divisions between mathematics and vocational classrooms to be bridged. With a more coherent cultural environment across student timetables and less distinction between mathematics and vocational classroom cultures, mathematics is more readily viewed as an integral part of a student programme as opposed to an unwelcome appendage.
Maintaining the 'middle ground'
The clear separation between an academic and a vocational curriculum makes it difficult to see mathematics as a vocationally relevant subject for students without taking a very narrow approach and teaching only occupation-specific skills. The deeply ingrained historical divisions in education systems and structures, curriculum and cultures demand a multi-level approach to bridge such divisions and avoid the tendency to bifurcate academic and vocational purposes for mathematics in FE. This research shows how integrated approaches to Functional Mathematics offer a transformative learning experience for students that helps to locate mathematics in the 'middle ground' between an isolated GCSE and highly embedded forms such as core and key skills (Eldred 2005; Roberts et al. 2005). The value and status of the Functional Mathematics qualification is, however, contested in current policy debates. It is considered inferior to GCSE Mathematics and this might well undermine the positive impact that Functional Mathematics has had for many learners.
The association of Functional Mathematics with vocational education, and its disappearance from GCSE Mathematics in recent years, reinforces the stratification of mathematics knowledge from the nineteenth century discussed earlier. Any qualification designed for the needs of vocational learners rather than simply for academic progression can easily become perceived as inferior. This is an endemic problem that is indicative of the challenges facing the development of a modern vocational educational system in England. Creating a mathematics curriculum and appropriate pedagogy that can meet both purposes remains elusive in such a divided system (Hayward and Fernandez 2004).
Conclusions
Throughout this paper, questions about the purpose of learning mathematics are never far from the surface. The multiple purposes for mathematics in policy-making, for example as skills for life and work (Cockcroft 1982; DfES 2005) versus the acquisition of a minimum level of mathematical knowledge (Wolf 2011), generate contradictions for vocational students and for FE teachers and managers. Studies of mathematics in the workplace suggest that most occupations only require basic mathematical knowledge, but the ability to interpret situations and apply appropriate mathematics is also vital (Hodgen and Marks 2013; Hoyles et al. 2002). If the purpose of teaching mathematics to vocational students is preparation for work, then integrated Functional Mathematics is arguably more beneficial than the GCSE qualification since it provides opportunities to develop skills in practical problem-solving and application of mathematics using vocationally relevant scenarios. In contrast, GCSE specifications and assessments privilege abstract, isolated knowledge acquisition.
Alternatively, if the aim is to ensure all students reach an accepted minimum standard in mathematics, then the findings from this study indicate that setting this standard as a GCSE Mathematics grade C may reinforce prior disaffection and disinterest amongst post-16 vocational students. Whilst aiming to raise standards, such a policy further disadvantages and demotivates low-attaining learners. Furthermore, the prioritisation of GCSE Mathematics seems to reinforce the long-standing inequity between academic and vocational knowledge by placing greater value on an academic qualification than on the skills required for vocational employment, even for those who have chosen a vocational pathway.
A curriculum with relevance and purpose for vocational students may appear less academically challenging, but this research provides evidence of some significant positive effects on student attitudes and engagement when integrated approaches to teaching are adopted. In a system that continues to place lower value on vocationally related courses, acquiring worthwhile mathematical skills for employment is at risk of taking second place to maintaining disengaged learners in cycles of failure as they strive for 'success' in GCSE Mathematics.
Notes
1. There are fundamental differences in curriculum between GCSE Mathematics, which is a knowledge-based qualification, and Functional Mathematics, which has a narrower range of content but a greater emphasis on 'real life' applications and problem-solving.
2. NVQs were introduced to assess the competencies required for specific occupations. GNVQs involved a wider range of knowledge, skills and competencies that were considered to underpin a range of occupations within a vocational area.
3. Level 2 corresponds broadly to the level of a GCSE grade C or above. Level 1 corresponds broadly to the level of lower GCSE grades (D-G).
4. Public Services programmes in England focus on preparation for entry to the Armed Services or Emergency Services. | 2018-01-04T19:15:50.797Z | 2015-12-16T00:00:00.000 | {
"year": 2016,
"sha1": "aef81dbd03f780031bcd704fe97e2281f3cbc617",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/769206/Locating%20mathematics%20final%20version.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "31412a5bde5d4ed20816a4c820091dba0bec4d2b",
"s2fieldsofstudy": [
"Mathematics",
"Education"
],
"extfieldsofstudy": []
} |
212668636 | pes2o/s2orc | v3-fos-license | REVERSE TRANSFER IN CLITIC COLLOCATION A STUDY ON SPANISH AND BRAZILIAN PORTUGUESE TRANSFERÊNCIA REVERSA E COLOCAÇÃO CLÍTICA UM ESTUDO COM BILÍNGUES PORTUGUÊS BRASILEIRO E ESPANHOL
This study focuses on reverse transfer, that is, the influence of the L2 on the L1, in a situation of L1 dominance. It investigates clitic collocation in verbal complexes in Brazilian Portuguese (BP) and Spanish. Spanish privileges proclisis to the auxiliary verb (pre-verbal position/clitic climbing) or enclisis to the main verb (post-verbal position). BP, in turn, lost clitic climbing in the XIX century. Although schooling tries to recover it, natural speech privileges proclisis to the main verb (medial position). Thus, in principle, it can be assumed that highly educated BP speakers may accept clitic climbing, regardless of any fluency in Spanish. On the other hand, full acceptance of clitic climbing may constitute a case of reverse transfer in the context of bilingual BP/high-proficiency Spanish speakers. In order to observe this situation, a self-paced reading experimental task with a Likert-scale grammaticality judgment was applied, manipulating the position of the clitic in Portuguese sentences, with highly educated BP monolinguals and BP/high-proficiency Spanish bilinguals. The results show that both groups accept clitic climbing, but bilinguals accept it even more and are faster in reading Portuguese sentences with clitic climbing, suggesting a possible Spanish facilitation effect in Portuguese sentence processing.
Introduction
This study addresses the issue of reverse transfer, that is, L2 influences on L1, with L1 as the dominant language, focusing on clitic collocation in verbal complexes and contrasting Brazilian Portuguese (BP) and Spanish. Recent studies have considered that reverse transfer may affect not only individuals who have been immersed in the context of the L2, but also those who show intermediate or high proficiency in their L2 but have their L1 as the dominant language for daily usage (Cook et al. 2003; Pavlenko & Jarvis 2003; Souza, Oliveira, Passos & Almeida 2014).
Although several phenomena may be permeable to reverse transfer, in this study we focus on a particularly interesting phenomenon, since BP once shared with Spanish one of the characteristics of clitic collocation, namely clitic climbing. Clitic climbing was lost in the nineteenth century. Thus, in spontaneous speech, clitic climbing never appears nowadays. However, schooling tries to recover its use and it is particularly encouraged in written production. Thus, highly educated BP speakers may demonstrate some acceptance of clitic climbing. In turn, higher acceptance of clitic climbing by BP/high-proficiency Spanish bilinguals could also be taken as a case of reverse transfer.
This context is thus particularly interesting for the discussion on reverse transfer. We take into account notions such as internalized grammar and peripheral grammar (marked periphery) in the sense of Chomsky (1981) and Kato (2005) in order to discuss the extent to which reverse transfer would actually be occurring. To what extent can the acceptance of clitic climbing be due to reverse transfer from Spanish to BP, or be just a case of identifying some rules from the L2 as possible peripheral rules of the L1, that is, a simple case of reinforcement of peripheral rules? For this discussion, we consider the results of a Likert-scale grammaticality judgment, embedded in a self-paced reading task. The position of the clitic was manipulated in Portuguese sentences presented to highly educated BP monolinguals and BP/high-proficiency Spanish bilinguals.
The paper is organized as follows: The next section presents BP speakers as diglossic in the sense of presenting marked periphery rules, which are only acquired by exposition to literacy. Then, Section 3 contrasts clitic collocation in both Spanish and BP. Section 4 presents the experimental task and the main results obtained. The last section brings our final remarks.
Internalized grammar, marked periphery, and processing demands
Terms like interference or influence have given way to the notion of transfer, especially in second language acquisition, which refers to previously learned patterns (the knowledge of the native language) emerging in a new learning situation (the acquisition of a foreign language). The idea, however, is that this transfer may have a facilitating (positive transfer) or an inhibiting effect (negative transfer) on the learner's progress. This notion was also applied in the opposite direction (transfer from L2 to L1) and was termed reverse transfer. Recent studies have focused on reverse transfer not only for individuals who have been immersed in the context of the L2, but also for those who show intermediate or high proficiency in their L2 but have their L1 as the dominant language for daily usage. Thus, cross-linguistic influence is taken as often bidirectional or even multidirectional (Cook et al. 2003; Jarvis 2003; Pavlenko 2000; Souza et al. 2014). Jarvis (2003) argues that L2 influences L1, expanding its repertoire. The rules of the grammar of L1 remain fully established, but rules from L2 are available and may be occasionally used. Cook (1991) adopts the concept of multi-competence, arguing for the possibility of "two coexisting grammars in the same mind", a major issue for UG-oriented research. The main question is what would most likely be transferred from one system to another and what mechanisms would allow it. This study does not intend to delve into these topics, but aims to present some relevant notions which are of interest for the discussion on reverse transfer, particularly in the case discussed here.
The idea of multi-competence or multiple grammars has also been explored with regard to intra-linguistic variation. BP speakers constitute a clear case. There is a distinction between the grammar of natural oral speech and the grammar of the formal written register. Thus, BP speakers are considered diglossic, insofar as a natural grammar is acquired during early childhood, but schooling/exposure to literacy presents conflicting rules, which ultimately get to be mastered. Kato (2005), following Chomsky (1981), adopts the concept of a marked periphery, which may be added to a core grammar. The natural growth of a grammar during the earliest years of language acquisition constitutes a core grammar. A marked periphery may be added to this core grammar and may be expanded through the following years of a speaker's life, through exposure to formal varieties of language, loans, schooling, etc. Since the attempt to grasp such rules is postponed to an older age, Kato (2005) argues that the process of learning a written Portuguese grammar by a Brazilian individual is similar to a process of learning a second language. This process is subject to greater individual differences in performance, core grammar interference, inconsistent use, and hypercorrections. In turn, very proficient speakers may end up being very accurate and natural in using the rules of the peripheral grammar. Thus, the speed/ease of processing sentences with these marked rules may be an indicator of how natural certain peripheral rules have become for an individual, that is, how proficient he/she is in the written variety.
Clitic collocation is a phenomenon governed by marked-periphery rules in BP, and highly educated individuals are diglossic with respect to it. Low-educated speakers use the proclitic position in sentences with simple verbs and the medial position in verbal complexes, whereas highly educated speakers are also comfortable with enclisis with simple verbs and even with clitic climbing in verbal complexes. The next section contrasts BP and Spanish in relation to clitic collocation in verbal complexes, our focus in this paper.
Clitic collocation in Spanish and Brazilian Portuguese
Spanish and BP behave differently in terms of clitic collocation in verbal complexes, although some similarities may be noticed (González 1994). Spanish presents a stable system with proclisis to the auxiliary verb (pre-position/clitic climbing), as in 1a, or enclisis to the main verb (post-position), as in 1b.
(1) a. La actriz te va a invitar a su cumple.
the actress youACCUS go3PS to invite to her birthday
'The actress will invite you to her birthday.'
b. La actriz va a invitarte a su cumple.
the actress go3PS to invite-youACCUS to her birthday
'The actress will invite you to her birthday.'
BP lost clitic climbing in the nineteenth century (Pagotto 2013). BP oral production, that is, the natural internalized grammar, makes use of proclisis to the main verb, that is, a medial position, as in 2a, or enclisis to the main verb, as in 2b (especially with third-person clitics). However, schooling tries to recover clitic climbing (Kato 2005), and BP normative grammar prescribes it, that is, proclisis (pre-position/clitic climbing) to the auxiliary verb, as in 2c, or, alternatively, enclisis to the auxiliary verb, as in 2d (Azeredo 2010; Bechara 2009):
(2) a. A atriz vai te convidar para o seu aniversário.
the actress go3PS youACCUS invite to the her birthday
'The actress will invite you to her birthday.'
b. A atriz vai convidar-te para o seu aniversário.
the actress go3PS invite-youACCUS to the her birthday
'The actress will invite you to her birthday.'
c. A atriz te vai convidar para o seu aniversário.
the actress youACCUS go3PS invite to the her birthday
'The actress will invite you to her birthday.'
d. A atriz vai-te convidar para o seu aniversário.
the actress go3PS-youACCUS invite to the her birthday
'The actress will invite you to her birthday.'
Thus, BP is a case of diglossia: two distinct varieties of a language coexist within the same speech community. In fact, a highly educated BP speaker fluctuates between different grammars for the oral and written varieties (Kato 2005). The most natural collocation in BP, the medial position, is an ungrammatical possibility in Spanish, as shown in 3.
(3) *La actriz va a te invitar a su cumple.
the actress go3PS to youACCUS invite to her birthday
'The actress will invite you to her birthday.'
Clitics are considered D (determiner) elements, which move from their base argumental position towards a verbal host (Raposo 1998). In verbal complexes this may be a long movement (clitic climbing). Clitic climbing is a natural phenomenon in the Spanish grammar, but it belongs only to an archaic grammar of BP. According to Uriagereka (1995), an FP position above TP hosts the clitic in proclitic position to the auxiliary verb, as in 1 and 2a. In BP, clitic long movement was lost, so clitics no longer reach the F position; they cliticize to the main verb in proclitic (medial) or enclitic (post) position. For the written grammar, BP speakers have to perceive that proclisis or enclisis may target the auxiliary verb. This does not mean that an F position is actually represented; it may constitute a stylistic rule belonging to their marked peripheral grammar, as discussed earlier.
In fact, Rodríguez-Mondoñedo, Snyder and Sugisaki (2006) argue that clitic climbing reflects an early parameter setting (Wexler 1996, 1998) in Spanish and, as suggested by Kayne (1989), that it is related to the availability of null subjects in the language. As BP is no longer considered a prototypical null-subject language (Holmberg, Nayudu & Sheehan 2009; Kato & Duarte 2014), the vernacular language is not expected to present clitic climbing per se. Therefore, the proclitic position to the auxiliary verb is not part of the natural grammar of BP. Nevertheless, it may be acquired through literacy and schooling, becoming part of a highly educated speaker's marked periphery (Chomsky 1981; Kato 2005), that is, associated with formal styles. Could clitic collocation thus be a phenomenon particularly permeable to reverse transfer between Spanish and BP? Could Spanish influence the evaluation and use of variants regarding the position of the clitic in BP, intensifying the acceptance of proclisis to the auxiliary verb, that is, clitic climbing?
To test this, we applied a self-paced reading task, reported in the next section.
Self-paced reading task with a Likert scale for judgement
The self-paced reading task presented forty-eight sentences in Portuguese: twelve test sentences and thirty-six distractors, the latter subdivided into eighteen grammatical and eighteen ungrammatical sentences. Test sentences used four distinct clitics (me (me), te (you), se (himself/herself/itself), nos (us)),1 which could appear in proclitic, medial, or enclitic position. The sentences were segmented: the first segment presented the subject of the sentence; the second, critical segment presented the clitic and the verbal complex; the last segment presented the complement of the verb or adjuncts. Segments 2 and 3 were controlled for number of syllables; segment 2 was the critical one, and segment 3 could provide information about spill-over effects.2 Three distinct lists were created, each presenting three trials for each of the four clitics in the three positions (pre/medial/post); one plausible reading of this counterbalancing is sketched below. Sentences in each list were randomized by the Paradigm 2.5 software, which also registers the reading times for each segment and the answer provided by the participant. He/she was asked to evaluate the acceptability of the sentence on a Likert scale ranging from -2 (least acceptable) to +2 (completely acceptable). The task was administered to two groups: monolingual BP speakers (basic knowledge of English was sometimes reported) and bilingual BP/high-proficiency Spanish speakers.
1 The reflexive clitic pronoun se (himself/herself/themselves) was used instead of the objective third-person clitics o(s)/lo(s)-a(s)/la(s) (him/her/them), given their usual replacement, in informal BP, by null objects and lexical pronouns (Duarte 1989), e.g., 'Pedro is already at the airport and I will pick him up', where informal BP uses the lexical pronoun in object position (gloss: 'I will pick up-he').
2 Spill-over effects are secondary effects that follow from a primary effect, although removed in time or place from the event that caused it. In psycholinguistics, for reading times, a spill-over effect is one measured after the segment containing the conditions under investigation (in the next segment or at the end of the sentence).
Our working hypothesis is that BP/Spanish bilinguals will differ from BP monolinguals in their judgements of clitic collocation if reverse transfer acts upon these bilinguals. Both monolinguals and bilinguals would be expected to accept the medial position of the clitic (the preferred order in natural BP), but to behave differently in relation to the proclitic position if Spanish influences the bilinguals' judgement of Portuguese sentences. Nevertheless, it is important to highlight that this proclitic position is also reinforced by schooling in Brazil: insofar as the monolinguals under investigation are highly educated individuals, a high acceptance of this order by that group could also be expected. Thus, in order to posit that reverse transfer is acting, a clear difference between the groups should be found.
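For concreteness, a minimal sketch of how such a Latin-square counterbalancing could be assembled is shown below. This is an illustration under assumptions (object names such as items, set, and list_id are ours), not the authors' actual Paradigm 2.5 configuration.

```r
## One plausible Latin-square assignment: 12 test items per list (4 clitics x
## 3 item sets), rotating clitic position across the three lists so that each
## item appears in a different position in each list.
clitics   <- c("me", "te", "se", "nos")
positions <- c("pre", "medial", "post")

items <- expand.grid(clitic = clitics, set = 1:3)  # 12 test items
for (list_id in 1:3) {
  # rotate the position assignment by one step per list
  pos <- positions[((items$set + list_id - 2) %% 3) + 1]
  cat(sprintf("List %d:\n", list_id))
  print(data.frame(items, position = pos))
}
```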
Participants
In this research, forty-five participants took part in the experiment: thirty BP monolinguals (aged nineteen to thirty-two) and fifteen BP/high-proficiency Spanish bilinguals. A Spanish proficiency test for the bilinguals was planned; however, as these participants were recruited from a postgraduate course for Spanish teachers, it was deemed unnecessary.
Material
An HP laptop was used, equipped with the Paradigm 2.5 software, which presented the sentences for self-paced reading. The software exports an Excel file with the reading times and Likert-scale answers for each participant.
Procedure
Each participant was invited to take part in the experiment in a quiet room, where he/she sat in front of the computer and received instructions from the investigator. The test began with written instructions, also displayed on the computer screen, and a pre-test (with five sentences) to make sure that the participant had grasped the procedure. He/she was asked to read the segments of the sentences, pressing the space bar to move from one segment to the next. At the end of the sentence, a scale from +2 to -2 appeared on the screen, on which he/she judged how acceptable the sentence was. At times, a comprehension question about the last sentence read could also appear, to be answered with the yes or no buttons. Reading times for the segments were recorded, and the scale point chosen was registered, as well as the response time for the evaluation of the sentence (picking a value on the Likert scale provided).
Main results
Data were analyzed for two aspects: acceptability judgement (Likert scale) and reading times for the critical segment, as a function of clitic collocation. We ran separate ANOVAs for monolinguals and bilinguals.
Acceptability judgements
The clitic medial position in the verbal complex was the best evaluated by both monolingual and bilingual speakers. As for the pre-position, the majority of the evaluations consider the sentence acceptable (completely or almost completely acceptable), but more bilinguals than monolinguals fully accept it, with monolinguals tending not to consider it completely acceptable. The fact that similar behavior is also attested for the post-position seems to indicate that BP speakers are very unsure about clitic collocation. One interesting pattern emerged, however: the evaluation of the post-position seems subject to a satiation effect (Snyder 2000), that is, non-accepted sentences tend to be evaluated as accepted more and more often as exposure to them increases. We therefore examined each participant's behavior, comparing the first two and the last two evaluations given to those sentences. As mentioned, positive evaluation of the clitic post-position increases, suggesting a satiation effect, whereas the clitic pre-position receives less positive evaluation in the last two trials, both from monolingual and bilingual speakers, even though bilinguals rate it as more acceptable than monolinguals do. We also ran an analysis considering the attribution of points on the scale.3 The medial position was the most accepted one, both by monolingual (mean = 4.58) and bilingual speakers (mean = 4.77).
3 These values result from averaging the sum of the chosen scale points (graded from 0 to 5, from least acceptable to plainly acceptable) over all the trials in each condition (pre-/medial/post-position) per participant.
In general, these relations show that bilinguals tend to be faster in their decisions than monolinguals and that, for both groups, the more time spent on the evaluation, the less acceptable the sentence is considered. For the medial position, this means that less time is spent evaluating these sentences, which receive the highest acceptability scores. It is in relation to the pre-position (clitic climbing) that the largest difference in decision time between monolinguals and bilinguals is observed: bilinguals are about 800 ms faster and accept this kind of sentence more (bilinguals: 2335.07 ms; monolinguals: 3053.18 ms). The differences in decision time for the other two sentence types are less pronounced (medial position, around 400 ms: bilinguals 1822.55 ms, monolinguals 2245.57 ms; post-position, around 200 ms: bilinguals 2247.69 ms, monolinguals 2407.86 ms). Thus, in an off-line evaluation task, monolingual and bilingual speakers behave similarly insofar as the medial position of the clitic is the most accepted one. Moreover, the analysis also shows that more bilinguals tend to consider the clitic pre-position (clitic climbing) completely acceptable, whereas monolinguals do not reject that position but tend not to accept it fully. Bilinguals are also faster in deciding on the value of the sentences, particularly for sentences showing clitic climbing.
Reading times
As for reading times of the critical segment (presenting the clitic in pre-, medial, or post-position), we obtained some distinctions between monolingual and bilingual speakers. The ANOVA for the monolingual data shows a main effect of clitic collocation (F(2, 58) = 13.4, p < 0.000017). Pairwise comparisons show significant distinctions in all pairs, including pre- versus medial position (t(29) = 2.39, p < 0.0234) and pre- versus post-position. These results suggest that the medial position is indeed the most natural one, followed by the pre-position, with the post-position felt to be the strangest. As for the bilinguals' data, the ANOVA did not return a main effect of clitic collocation (F(2, 28) = 0.771, p < 0.472011). For bilinguals, the fastest reading times were associated with the pre-position of the clitic, the most natural position in Spanish. However, pairwise comparisons show no statistically significant differences between the pairs: pre- versus medial position (t(14) = 0.28, p < 0.7840), pre- versus post-position (t(14) = 1.28, p < 0.2225), and medial versus post-position (t(14) = 0.77, p < 0.4538).
Graph 6. Mean reading times as a function of clitic collocation, bilinguals (ms).
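For readers who want to reproduce this kind of analysis, the sketch below shows a typical repeated-measures setup in R. The data frame rt and its columns (subject, position, rt_ms) are illustrative assumptions, not the authors' actual code or data.

```r
## Repeated-measures ANOVA on per-participant mean reading times, with clitic
## position (pre/medial/post) as a within-subject factor.
fit <- aov(rt_ms ~ position + Error(subject / position), data = rt)
summary(fit)

## Pairwise paired t-tests between positions, as reported for each group
## (pre vs. medial, pre vs. post, medial vs. post); assumes rows are ordered
## consistently by subject within each position.
pairwise.t.test(rt$rt_ms, rt$position, paired = TRUE, p.adjust.method = "none")
```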
In general, the results show that monolinguals and bilinguals differ in their reading times for the different clitic collocations. Monolinguals are significantly faster for the medial position, whereas bilinguals do not differ significantly in speed, although the critical segment is read fastest in the pre-position condition.
In relation to the segment following the critical one, no spill-over effects were attested.
General discussion
The results obtained in this study suggest that the evaluation and processing of clitic collocation constitute an area of great uncertainty for highly educated BP speakers. Although the medial position concentrates the best evaluation scores and the fastest reading times, the pre- and post-positions also receive some good scores, and reading times for those positions are not much slower either. Nevertheless, one may say that the medial position is the most natural one for BP speakers. As far as the pre- and post-positions are concerned, we have posited a difference between them: on the one hand, schooling pressures may affect the evaluation of the clitic pre-position, considered the legitimate one for Portuguese in formal use; on the other hand, the evaluation of the clitic post-position may have shown satiation effects.
A not entirely similar picture was obtained for the BP/Spanish bilingual speakers. Although the medial position was also the best evaluated, the clitic pre-position received the fastest reading times, cancelling the advantage for the medial position observed among the monolinguals. Moreover, bilingual speakers proved faster both in reading the critical segments and in deciding on their evaluation of the sentences.
In all, the results are not robust enough to indicate an unequivocal transfer from Spanish to BP as far as clitic collocation is concerned. However, the populations do seem to differ clearly in their processing of clitic collocation. What is the nature and source of this difference?
We believe that distinguishing the results of on-line and off-line tasks is relevant in addressing this question. In the off-line task, the Likert-scale assignment, both groups performed more similarly than in the on-line reading task. Acceptability judgements are more likely to reflect metalinguistic awareness: as previously mentioned, highly educated BP speakers have been exposed to clitic climbing as a prestigious form in written texts. In turn, the easier processing bilinguals showed in reading clitic-climbing sentences in the on-line task may suggest some facilitation due to Spanish. This suggests that the results are more likely a matter of processing than of actual transfer of representations. The reinforcement of the marked-periphery rules of BP by the grammar of Spanish could explain the similarities and differences obtained in this study: monolinguals and bilinguals do not differ greatly in relation to the phenomenon investigated here, but they do show differences of degree, since the L2 reinforces rules already available in the BP speakers' L1 marked periphery, facilitating the processing of clitic climbing.
It is also important to bear in mind that results from comprehension are more limited than those from production. A follow-up production study is being planned to assess whether the production of clitic climbing in BP is found among high-proficiency BP/Spanish bilinguals.
Final remarks
This study focused on reverse transfer, considering clitic collocation in verbal complexes and contrasting Brazilian Portuguese (BP) and Spanish. The fact that BP once admitted clitic climbing and that schooling still tries to recover it led us to a discussion of the role peripheral grammar may play in reverse transfer. As highly educated BP speakers could, in principle, still accept clitic climbing regardless of any fluency in Spanish, we argued that only clear differences between monolinguals and BP/high-proficiency Spanish bilinguals in the rates of acceptance of clitic climbing could signal reverse transfer.
The results of a self-paced reading task with Likert-scale grammaticality judgements, manipulating the position of the clitic in Portuguese sentences, with highly educated monolingual BP speakers and BP/high-proficiency Spanish bilinguals, were not clearly conclusive. Sentences with the medial clitic position, the most natural one in BP, are the best evaluated and are read fastest by monolinguals. Bilinguals also tend to rate the medial position best, but do not exhibit faster reading times for those sentences; in fact, they are fastest in reading Portuguese sentences with clitic climbing (although the difference is not statistically significant). Moreover, both groups accept clitic climbing in written BP sentences, but bilinguals accept it even more, and read such sentences faster.
Finally, we have considered that this particular phenomenon may be affected by influences from highly educated BP speakers' peripheral grammar, since clitic climbing can be considered a prestigious form to be used in written texts. A broader contrast, considering the performance of monolingual BP speakers and bilingual BP/English speakers, for example (English being a language without clitics), may help determine the extent to which Spanish is indeed influencing clitic collocation in BP.
"year": 2019,
"sha1": "59bcdb23cf8d0b551e502d05974d94274d764a80",
"oa_license": "CCBYNC",
"oa_url": "http://diacritica.ilch.uminho.pt/index.php/dia/article/download/373/102",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d9311d03999c25b13e1c9a198498b94e48c1c47e",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Self-Reporting of Risk Pathways and Parameter Values for Foot-and-Mouth Disease in Slaughter Cattle from Alternative Production Systems by Kenyan and Ugandan Veterinarians
Countries in which foot-and-mouth disease (FMD) is endemic may face bans on the export of FMD-susceptible livestock and products because of the associated risk for transmission of FMD virus. Risk assessment is an essential tool for demonstrating the fitness of one’s goods for the international marketplace and for improving animal health. However, it is difficult to obtain the necessary data for such risk assessments in many countries where FMD is present. This study bridged the gaps of traditional participatory and expert elicitation approaches by partnering with veterinarians from the National Veterinary Services of Kenya (n = 13) and Uganda (n = 10) enrolled in an extended capacity-building program to systematically collect rich, local knowledge in a format appropriate for formal quantitative analysis. Participants mapped risk pathways and quantified variables that determine the risk of infection among cattle at slaughter originating from each of four beef production systems in each country. Findings highlighted that risk processes differ between management systems, that disease and sale are not always independent events, and that events on the risk pathway are influenced by the actions and motivations of value chain actors. The results provide necessary information for evaluating the risk of FMD among cattle pre-harvest in Kenya and Uganda and provide a framework for similar evaluation in other endemic settings.
Introduction
Foot-and-mouth disease (FMD) is a highly contagious disease of livestock with massive global impact [1,2]. FMD costs billions of dollars annually due to endemic losses and outbreaks [3], and control measures such as vaccination, biosecurity, and stamping out when outbreaks occur are also costly [4,5]. Despite global efforts for FMD control [6,7], FMD remains endemic in many regions [8].
The Agreement on Sanitary and Phytosanitary Standards adopted by the World Trade Organization Member States [9] specifies that trade restrictions based on health hazards associated with the trade of goods should align with the guidance of international standard-setting bodies (the World Organization for Animal Health (OIE) in the case of transboundary animal diseases such as FMD). Actions should be based on the level of risk presented by the trade of goods, as evaluated through objective risk assessment. According to the principle of equivalence, countries are to recognize the actions taken by exporting partners according to the reduction in risk achieved rather than requiring a specific set of protocols (though actual practice is often murkier [10]). For this reason, risk assessments are an essential tool for demonstrating the fitness of one's goods for the international marketplace as well as for understanding and improving animal and public health domestically [10,11]. Import risk assessment is typically used to inform risk management from the defensive standpoint of an importing country; it assesses how to reduce and mitigate the risk of importing a threatening bug or substance based on the probability and consequence of the event occurring. Countries that want to export are evaluated by potential importers using this approach and criteria. In order to export products that could potentially transmit FMD virus, countries have traditionally been required to demonstrate that FMD is not present in the region where cattle (or other source livestock or wildlife) are produced and processed. This requirement is costly, comes with tradeoffs and externalities, and has not been achievable for most of Africa [12,13]. Recent alternatives, which include disease-free compartments and commodity-based trade, encourage the examination of more nuanced, strategic approaches to the development of production and processing systems for export [14,15]. In this context, import risk assessment can be used by the exporting country to evaluate the risk (probability of FMD transmission) experienced by a potential importer under various production and processing scenarios. This analysis could then be used to lobby for access to external markets, or, if unacceptably high, to evaluate the potential value of interventions to reduce risk compared with net benefits from other markets with less stringent entry requirements.
However, in many countries where FMD is present, it is challenging to obtain the necessary data for such assessments, due in part to the small scale and non-standardized value chains that often operate with a mix of formal and informal processes and incomplete documentation of transactions [16]. In this study, we used a hybrid between participatory and expert elicitation techniques to overcome this gap. This novel approach, in which we partnered with local veterinary professionals to characterize risk pathways and parameter values, captured some of the richness and quality of data collected through participatory methods while maintaining the quantitative rigor required to utilize the data in formal risk assessment models.
There is a history in animal and public health fields of using participatory methods to overcome data scarcity challenges for epidemiological surveillance, research, and outreach [17,18]. A participatory approach to risk assessment has been developed and implemented for many studies of food safety in African markets and value chains [16,19,20] and more recently to qualitatively assess the risk of disease introduction and spread [21]. Efforts to marry value chain analysis with risk assessment have also attempted to connect participant knowledge of value chain dynamics with the assessment and management of risks related to animal and public health [22][23][24]. Participatory approaches promote both efficiency and impact by including populations that are affected by decisions made based on study findings [16]. Specifically relating to risk assessment, an advantage over conventional approaches is the chance to capture relevant aspects of human behavior as well as technical causal mechanisms contributing to risk pathways and probabilities [25]. However, a challenge encountered in participatory risk assessments is the need to generate robust evidence of the type that can be used for formal, quantitative risk assessment [16].
The elicitation of expert knowledge from subject matter experts is another approach utilized when data are scarce, unrepresentative, or inadequate to describe the process being studied [26,27]. "Expert" in this usage can refer to a person who can provide information about the question based on their experience with the subject matter of interest [28,29]. This approach has been used within veterinary science to estimate parameter values or prioritize risk factors [30][31][32][33][34][35]. However, when trying to collect information about local systems or informal pathways, a challenge is that those familiar with the subject may not have an academic understanding of the techniques being used. This can impede effective communication and impact the quality of the results if adequate training is not provided [26,27].
The hybrid approach employed here relied on partnership with Kenyan and Ugandan mid-career veterinary professionals who were enrolled in a capacity-building course that covered topics including international trade, transboundary diseases, and risk analysis. Their participation and contribution to the research generated credible data about the risk pathways and parameter values that can be used in a quantitative, probabilistic risk assessment to inform decisions about disease management based on local conditions and priorities. The richness of the data collected gave insight into causal relationships that can help inform appropriate model structure [36] and risk management strategies, including correlations between events in time and space and the influence of actors' incentives on events that contribute to risk.
The objective of this study was to characterize the risk pathways for FMD among cattle at the time of slaughter in Kenya and Uganda through partnership with practicing veterinarians. That objective has been achieved through (a) describing the risk pathways and events; (b) defining the populations of cattle based on the production system of origin, which are expected to have distinct FMD risks associated with baseline conditions and processes; and (c) specifying parameter values to characterize events that require knowledge of the local sale and inspection processes (i.e., what happens between the farm and the abattoir). These results can be used to perform risk assessments, modeling exercises, and economic analyses regarding the expected value of investments based on empirical understanding of the local system. This framework may be used for similar analyses in other endemic settings, ultimately contributing to the analysis and design of targeted interventions for development of risk-based export markets.
Risk Question
The question to be answered for each of four cattle production systems in two countries was: what is the risk that cattle sold for meat are slaughtered while infected with FMD? Mapping and quantifying that risk required system-specific knowledge of the events that occur prior to slaughter for cattle originating from local production systems. Expert knowledge was elicited from practicing veterinarians in Kenya and Uganda, separately, to describe the risk pathways, define the populations of relevance, and quantify parameter values for key variables related to the sale, transportation, and inspection of cattle.
Participant Selection
The subject-matter experts for this study were defined as veterinary professionals living and working in their respective countries (Kenya, Uganda) with at least two years of experience related to livestock production, and training in risk assessment for animal health and international trade. Experts were identified and contacted in the context of an online capacity-building course for mid-career Veterinary Service (VS) professionals (progressvet.umn.edu) in which they were trainees [37]. The procedures for recruitment and selection of participants in the training course differed between Kenya and Uganda. In Kenya, participants were nominated for the course by the national Directorate of Veterinary Services for the country; in Uganda, participants were self-selected with facilitation through Makerere University and the national Ministry of Agriculture, Animal Industry and Fisheries. The training was done in parallel for both countries (i.e., the instructors, materials, and procedures were the same, but there was no interaction between participants in Kenya with those in Uganda). At the time of the research study, which was five months into the program, they had completed five weeks of training on risk analysis applied to animal health and food safety. Thirteen Kenyan and ten Ugandan participants were in the program at the time when the study was conducted and comprised the pool of available subject matter experts.
The elicitation activity, a guided exercise of building and quantifying a risk assessment model based on participant knowledge and experience, was part of the training program. This facilitated an approach that was a hybrid between traditional participatory and expert elicitation techniques. The participants, already experts on the subject matter of local cattle production and disease management systems, were recently trained as a cohort in topics related to the research question, methodology, and context. The context of the training program facilitated data collection through a prolonged, iterative process of gathering descriptive, qualitative information as well as quantitative parameter values, first at the level of individual responses and then through group discussion. The specific steps of data collection are outlined below. Further discussion of the duality of the training and research activities can be found elsewhere [37].
Participants were given the opportunity to opt in for their input during the training exercise to be used for research purposes, with the explanation that their choice would not have any impact on their standing or relationships in the training program. All individuals (n = 13 Kenya, n = 10 Uganda) chose to do so. The University of Minnesota Institutional Review Board for research involving human participants reviewed the study protocol and determined that it met the criteria for exemption from review.
Knowledge Elicitation and Integration
The elicitation activities took place in three stages, referred to as Part A, Part B, and Part C, over a three-week period (see Figure 1). All activities were conducted separately for each country. The three stages comprised a variation of the Delphi method [38], an iterative process of eliciting individual responses and group discussion to reach consensus. Parts A and B were completed individually, helping to avoid dominance of any one opinion in the information gathered [28]. Part A comprised 18 open-ended, short-answer questions. In Part B, participants provided quantitative estimates for parameter value distributions and were asked to respond only for the management systems with which they felt most comfortable. Part C was a group discussion to reach consensus regarding the values of key variables for all management systems; the aggregated values from Part B were provided as a starting point, and all participants were encouraged to comment on how those distributions should be altered to best represent the range and distribution of values in each system.
Figure 1. Schematic of the approach used. Parts A, B, and C were carried out separately for Kenya and for Uganda. Parts A and B were individual activities; the individual results were organized and aggregated to present to the group for discussion, revision, and final consensus in Part C.
Part A
The instructions, background material, and questionnaire for Part A were distributed in a similar manner to all previous assignments in the training program: via email as well as through an online learning platform (Canvas LMS, Instructure, Salt Lake City, UT, USA). Participants were able to fill out and return the questionnaire through either route. This was completed individually by each participant. The questionnaire consisted of four sections with 18 open-ended, short answer questions (Supplementary Materials File S1) interwoven with educational material related to the process of risk assessment and the role of expert opinion. This context-gathering phase, not often included in expert elicitation protocols, provided insight into correlational and causal relationships between events that otherwise may have been overlooked by the modeling team.
The first section contained seven questions about the sale, transportation, and inspection of cattle sold for slaughter in their country, including two questions that asked about possible correlations between events. In the second section, participants walked through the steps and logic of building a fault tree and event tree for a simple example risk model (the risk of sleeping through one's alarm). They were then presented with preliminary outputs (a fault tree and event tree) of the same process applied to the combination of events that would lead to the outcome of cattle infected with FMD at the time of slaughter. They were asked whether the pathways presented made sense, whether they agreed, and whether they could identify any additional pathways. The preliminary model structure was built by the research team after a review of available literature.
In the third section, participants were asked to consider how the risk could differ among animals originating from distinct production systems. Kenya and Uganda each have diverse cattle production systems, including pastoralism, smallholder agropastoralism, and confined extensive and intensive farms. Beef cattle systems in each country have been classified by the FAO through a process that engaged key national stakeholders and synthesized sources of cattle distribution and production data [39,40]. The participants reviewed these classifications for their country, indicated for each of 11 variables whether they believed the value would be the same or different in each system, and were asked whether they would recommend a different way of dividing and identifying subpopulations.
The fourth section consisted of four open-ended questions reflecting on the processes that create and mitigate risk and on the role of Veterinary Services.
The anonymized individual responses were reviewed separately by three researchers, guided by the question: do the participant responses support, expand, or contradict the preliminary model structure (variables, relationships, and populations)? After reviewing the responses individually, the researchers discussed the areas in which the responses indicated a consistent action to be taken and those in which there was contradiction or ambiguity requiring further clarification in later stages. As a result of that discussion, they produced a list of aspects of the model structure to be accepted as is, modifications to be made to the model structure, and additional information to be elicited during Parts B and C.
Part B
Part B was a questionnaire intended to elicit quantitative and qualitative information about key parameter values for the risk model (Supplementary Materials File S2). The questionnaire was completed individually by each participant using web-based survey software (Qualtrics, Provo, UT, USA). Instructions and background information were distributed through email and on Canvas.
The questionnaire opened by presenting the subpopulations (production systems) for the cattle industry in the respective country, and participants were asked to select those for which they had experience and/or felt comfortable giving opinions about FMD risk and the farm-to-market process. For each production system they selected, participants were asked to estimate the minimum, maximum, and most likely value, and explain their reasoning, for 16 variables related to beef cattle production, sale, and inspection processes.
They were instructed to reply "no answer" for any question if they did not feel they could provide a useful estimate.
Results were anonymized and aggregated for a selection of variables to be discussed by the whole group in part C. Variables were prioritized based on those which the population of veterinarians were well equipped to answer and for which there was little other information available.
A noteworthy point of the elicitation process is that each participant provided both a point estimate (most likely value) and a distribution of uncertainty around that value (minimum and maximum possible). This is considered a better measure of uncertainty than simply taking the variability among several individuals' point estimates [26]. Thus, our sample of 10 or 13 experts in each country yielded that many distinct distributions of the point estimate and uncertainty interval for each variable.
The distributions of each individual (specified as PERT distributions) were then combined into a single mixed distribution, weighting each one equally; a computational sketch of this step is shown below. This approach is outlined in risk assessment textbooks [41] and has been used elsewhere [42,43]. In our study, we used that mixed distribution as a starting place for group discussion, so that participants engaged with each other's judgments of the range and most likely values to ultimately reach a consensus on the characteristics of the final appropriate distribution. This aligns with the recommended best practices for expert elicitation: including multiple experts, using a structured protocol for the phases of knowledge elicitation and aggregation, and providing the opportunity to interact and cross-examine reasoning within the group [26,29].
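As a minimal sketch of this aggregation, assuming only that each expert supplied (minimum, most likely, maximum) values and that the PERT distribution is parameterized as a rescaled Beta with shape 4, the equal-weight mixture can be simulated as follows. The object names (experts, n_sim) and the numbers are illustrative, not elicited values.

```r
## Sample from a PERT distribution via its Beta representation (shape = 4).
rpert <- function(n, min, mode, max, shape = 4) {
  a <- 1 + shape * (mode - min) / (max - min)
  b <- 1 + shape * (max - mode) / (max - min)
  min + (max - min) * rbeta(n, a, b)
}

## Hypothetical elicited values: one row per expert (min, mode, max).
experts <- data.frame(min = c(1, 2, 1), mode = c(3, 5, 4), max = c(7, 10, 6))

## Equal-weight mixture: pick an expert uniformly at random for each draw.
n_sim <- 10000
idx <- sample(nrow(experts), n_sim, replace = TRUE)
mix <- rpert(n_sim, experts$min[idx], experts$mode[idx], experts$max[idx])
summary(mix)  # composite distribution presented for discussion in Part C
```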
Answers were excluded from the aggregation if the respondent's rationale indicated that they were estimating something other than what the question was asking. If the distributions and reasoning were similar across the four production systems, then they were merged into a single distribution; otherwise, they were kept distinct for each production system. Some variables were conceptually summarized or manipulated to form a new variable, related to but distinct from that which had been asked in the questionnaire, in order to be better formulated for input to a risk assessment model. More specific information about the aggregation approach for each variable is described below.
• Duration in days between sale and slaughter: direct mathematical aggregation was used for discussion.
• Probability of not commingling: the questionnaire asked about the probability of mixing with animals from other herds. The estimates given by each participant were subtracted from 1 to yield the probability of not mixing with animals from other herds. This complementary probability was aggregated into a composite distribution for each production system and presented for discussion in Part C.
• Number of animals mixed with, when commingling does occur: direct mathematical aggregation was used for discussion.
• Number and probability of inspections: the questionnaire asked participants to estimate the number of times an animal would be inspected for FMD, to describe each inspection, and to estimate certain attributes: the percentage of animals that would undergo the inspection, the sensitivity of the inspection for detecting clinical FMD, and the percentage of positive diagnoses that would be ignored or compromised. The number of inspections was summarized as a range of point values to initiate discussion in Part C. The probability of inspection was handled differently in each country based on the flow of conversation in Part C. In Uganda, the discussion about the number of inspections included the proportion of animals for which that number would be zero. In Kenya, the most likely value for the percentage of animals undergoing each inspection was used to calculate the complementary proportion of animals that do not receive that inspection, which was then combined across all inspections reported by an individual to calculate the proportion of animals that would not receive any inspection (see the sketch after this list). These values were presented to the group in Part C as the starting point for discussion about the probability of bypassing inspection for animals from each production system.
• Effectiveness and type of inspections: for each inspection described by each participant, a distribution for "effectiveness" was calculated by multiplying the minimum, maximum, and most likely values of the sensitivity by the most likely value of the reporting rate (defined as the complement of the most likely value for the proportion of positive results ignored or compromised); this calculation is also illustrated in the sketch after this list. The effectiveness therefore described the percentage of animals that would be detected and detained by each inspection. If no answer was given for the proportion of results ignored, the sensitivity was assumed to functionally represent the effectiveness. In each country, the inspections and corresponding effectiveness estimates were categorized into two types that emerged from the comments and descriptions in Parts A and B. Because of this emergent nature, the definitions of type 1 and type 2 differed between countries, based on the patterns in participant descriptions of inspections. The effectiveness distributions for all inspections of each type were aggregated, as described above, into a single composite distribution of effectiveness for each type of inspection in each country. By synthesizing responses in this way, the distributions for each type of inspection included a variety of specific inspection circumstances and contexts. One distinction that was not explicitly discussed was whether the region in which an inspection occurred was currently under FMD-related quarantine measures. The inspection descriptions were used to quantify how frequently each type occurred at each location (checkpoints, farm, market, slaughter, or unspecified/blended) and what proportion of inspections in each production system took place at each location. This was used to compute the relative frequency (weight) of type 1 and type 2 inspections for each production system.
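The two calculations referenced in the list above can be made concrete with a short sketch. All numbers and object names here are illustrative assumptions, not elicited values.

```r
## (a) Kenya-style bypass probability: complement the most likely proportion
## inspected at each reported inspection point, then multiply across points
## to get the proportion of animals receiving no inspection at all.
p_inspected  <- c(0.90, 0.75, 0.60)     # most likely values, per inspection
p_bypass_all <- prod(1 - p_inspected)   # 0.10 * 0.25 * 0.40 = 0.01

## (b) Inspection effectiveness: scale the sensitivity distribution by the
## reporting rate (complement of the most likely proportion of positives
## that are ignored or compromised).
sens           <- c(min = 0.50, mode = 0.70, max = 0.90)
p_ignored_mode <- 0.02
effectiveness  <- sens * (1 - p_ignored_mode)  # % detected AND detained
```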
Part C
Part C was a structured group discussion held using a web conferencing system with the participants from each country (conducted separately for Kenya and Uganda). The purpose of the discussion was to reach group consensus on the distribution of values for key parameters for each production system.
For each variable to be discussed, the facilitator presented a summary of the related question(s) asked in Parts A and B and representative comments pertaining to the interpretation and estimation of the variable. Then the most likely, minimum, and maximum values specified by each respondent were presented, along with the density plot and summary statistics of the composite distribution. Participants were asked whether the summary presented was an accurate description of the distribution for a particular management system or for all management systems. Whether they agreed or disagreed, they were asked to provide their reasoning and, where relevant, to propose how they would modify the distribution presented. There was limited use of the poll function in the web conferencing system to gather participant opinions; most of the discussion occurred as direct conversation among participants and through the chat. To close the discussion of each variable, the facilitator summarized the consensus reached up to that point and asked if there was any further comment. Once all participants expressed agreement or no objection, the discussion moved on to the next variable.
One variable presented in Part C had no information collected in Part B (it was added after the review of the Part A responses). For this variable, participants were asked to estimate, out of 10 animals infected with FMD, how many would experience each of four distinct outcomes. Participants gave their answers in the chat (Uganda) or in a poll (Kenya) and then discussed with each other the reasons for variation in their responses.
Responses to Part B were unevenly distributed among management systems in each country. Where there were no responses for a certain variable in a certain management system, the group was asked which system they thought it would be most similar to, and then they were asked to explain how they would modify the values for that similar system in order to represent the one for which no Part B data had been provided.
The discussion was recorded and distributed via email so that participants who were unable to attend could view it; they were encouraged to submit any comments they had regarding the discussion.
Final Steps
For the few variables designated as important to quantify through VS opinion but for which there was no time to discuss in Part C, the individual descriptions from Parts A and B were used to thematically classify the responses into relevant summary variables, as described above, and the quantitative estimates were then mathematically aggregated to represent the composite distribution described by all of the responses for each variable.
Following Part C, the modified distribution for each variable (based on group consensus or mathematical aggregation) was summarized as a probability distribution that could be used as input to a probabilistic risk assessment model. Values reflecting VS opinion of a probability were summarized as PERT distributions. Values that were estimates of a scalar (number of animals, inspections, or days) or of test characteristics (inspection effectiveness) were summarized as a common probability distribution with appropriate theoretical characteristics. Where multiple candidate distributions were considered, the one with the lowest AIC was chosen. Distributions were fit using maximum likelihood estimation (package "fitdistrplus" [44], R software version 4.0.2 [45]).
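A minimal sketch of this fitting step, assuming a numeric vector x of values sampled from the consensus distribution and an illustrative set of candidate families, could look as follows.

```r
library(fitdistrplus)

## Fit several candidate families by maximum likelihood to the consensus
## sample x (e.g., days between sale and slaughter).
fits <- list(
  gamma   = fitdist(x, "gamma"),
  lnorm   = fitdist(x, "lnorm"),
  weibull = fitdist(x, "weibull")
)

## Compare candidates by AIC and keep the best-fitting family.
sapply(fits, function(f) f$aic)
best <- fits[[which.min(sapply(fits, function(f) f$aic))]]
summary(best)
```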
The distributions were presented back to each group for final comment, along with the consensus of the discussion and reasons supporting that consensus. Each distribution was described with accessible summary statistics. The report was distributed to the participants via email, and they were asked to review it and respond via email or in a virtual forum with any questions or comments.
Results
In Kenya, there were 12/13 responses to Part A, 13/13 responses to Part B, and 6/13 active participants in Part C. In Uganda, there were 10/10 responses to Part A, 10/10 responses to Part B, and 9/10 active participants in Part C.
The veterinarians in both Kenya and Uganda unanimously confirmed that there was value in evaluating risk separately for distinct cattle production systems. Most respondents (9/10 Uganda, 11/12 Kenya) indicated that the management systems presented were appropriate classifications of beef cattle production systems in their country.
Additional Event Added to the Proposed Risk Pathways
Most participants (8/10 Uganda, 12/12 Kenya) concurred with the risk pathways presented in the preliminary model of Part A. Two individuals in Uganda and three individuals in Kenya proposed an additional event be included on the pathway to represent the inspector's decision to appropriately report and act on an FMD-infected animal. "We assume the right action will be taken but that is not always the case", explained one Kenyan response.
Following these responses, the event tree and risk pathways were updated similarly for each country. The event tree (Figure 2) was included in the final report returned to the participants for review; it includes the steps from the preliminary model that participants supported and the additional step for the probability that appropriate action is taken by inspectors when an infection is suspected. No participant objected to the formulation of the resulting pathway.
Correlations Exist between Events
Four Ugandan and three Kenyan participants indicated that there are points at which an animal with FMD would be more likely to be sold for meat than an FMD-free animal. The Ugandan participants described that farmers at times want to dispose of sick animals, that farmers may sell animals when there is an outbreak in the area but quarantine is weakly enforced, and that during an outbreak farmers may want to dispose of affected animals to avoid losses. They also indicated that there may be temporal (seasonal) correlations between disease incidence and sales volume due to factors related to both demand (e.g., festivals) and supply (e.g., need for income at the beginning of the school year, decreased forage availability during the dry season). Kenyan responses described circumstances in which farmers want to dispose of sick animals and traders want to buy animals at a cheaper rate.
In contrast, three Kenyan and two Ugandan individuals indicated that there was no point at which an animal with FMD would be more likely to be sold for meat compared to a healthy animal. Several responses (six in Kenya and four in Uganda) discussed the possibility of selling FMD-infected cattle but did not address the question of correlation or comparison between sick and healthy animals.
Parameter Values
Participants estimated the minimum, maximum, and most likely value of variables for any or all production systems for which they felt comfortable responding. For Uganda, the numbers of responses per production system were: Semi-intensive: 7; Agropastoral: 6; Ranching: 2; Pastoral: 1. For Kenya, they were: Pastoral: 10; Agropastoral: 3; Feedlot: 1; Ranching: 0.
Individual responses were aggregated into a composite distribution, which was presented and discussed with the cohort to reach a consensus on the characteristics of an appropriate distribution for each variable and each production system. The consensus, final parameters, and summary statistics for each are reported in Tables A1 and A2 in Appendix A for Kenya and Uganda, respectively.
Probability That an Infected Animal Is Sold While Infected
A discussion question was added to Part C following the responses about a possible correlation between the probability that an animal is infected with FMD and the probability that it is sold. The group was asked, out of 10 infected animals taken at random (throughout the year), how many would experience various outcomes, including being sold from the farm without the infection being reported. In Uganda, the group consensus was that two to four out of every 10 infected animals are sold, for all production systems. The participants reasoned that it is hard for a farmer to report to the authorities that an animal is infected, unless it is discovered by a professional, because there is no form of compensation, and that, when farmers realize there is disease in their region, they tend to sell animals to make sure their farms are empty. In Kenya, the group consensus was that, on average across all production systems, two to three out of every 10 infected animals are sold.
Duration of Time (Days) between Sale and Slaughter
The duration in days between when a cow leaves the herd and slaughter was described qualitatively in Part A, estimated in Part B, and discussed in Part C. The group consensus in Uganda was that the distribution for the duration of the process was similar for all production systems and that sources of variation, primarily the distance between origin and destination, could vary within any of the systems. They specified that this range does not include scenarios in which the purchased animals are held by a trader or butcher for extended lengths of time prior to slaughter. The Kenyan cohort concluded that the duration is different between production systems: pastoral and agropastoral systems had longer maximum durations and a larger variation, with pastoral having the longest most likely value (eight days) due to the distances the animals typically travel to reach the final destination. Feedlot and ranching systems had much shorter described durations, maxing out at two and three days, respectively, due to the shorter distance to travel and vertical integration in some systems.
Commingling with Animals from Other Herds: Probability, Number
Situations in which commingling occurs were described qualitatively in Part A. In Part B, participants estimated the proportion of animals from each management system that do not commingle with animals from other herds before slaughter and then, for those which are exposed to animals from other herds, the number of animals with which they are mixed. In both countries, it was agreed that the probability of commingling would vary by management system, and the distribution for the number of animals mixed with when commingling does occur was the same for all cattle regardless of origin. The Ugandan group discussed that the probability of avoiding commingling was highest for animals from ranching systems (most likely value of 40%), and lowest (0%) for animals from pastoral systems. Participants commented on the general trend that in systems where farms have fewer animals, there would be more mixing on the way to market. In Kenya, individual and group discussions highlighted a distinction in the probability of avoiding commingling between systems that trek cattle to market on foot (identified as pastoral, agropastoral) and those that transport animals on trucks directly to a slaughterhouse premise (feedlot, ranching). This was attributed to the length of the journey, opportunities to congregate with other animals at markets or stops, and the number of animals sold at once from a single herd (e.g., enough to fill a truck with animals from the same origin).
Inspection: Probability, Number
Participants described inspection points and procedures from farm to slaughter. The responses from Uganda highlighted differences in the probability of inspection between systems based on the availability of veterinary services and on the motivation of producers to maintain credibility and follow regulations. In the Part C discussion, participants reinforced that it was not uncommon for animals from any system, and especially the three systems other than ranching, to completely bypass inspection before slaughter. They pointed to the movement restrictions in place, at the time, in one district because of an FMD outbreak, noting that cattle were nonetheless being moved and slaughtered through unofficial channels. The consensus after some discussion was that the probability that an animal is never inspected (number of inspections = 0) was influenced most heavily by the slaughter destination: animals taken to designated slaughter points will be inspected; those that miss inspection are those going to undesignated slaughter points ("local slabs"). Animals from ranching systems were more likely than those from other systems to go to a designated slaughter facility and therefore had a lower likelihood of receiving zero inspections.
Five Kenyan participants indicated in Part A that they expected the probability of bypassing inspection completely (i.e., for whom the number of inspections is zero) to be higher among cattle from pastoral or agropastoral systems than those from feedlots and ranches. Individual estimates posited that 1% of animals originating from a feedlot were expected to bypass inspection completely, while up to 20% of agropastoral and 70% of pastoral cattle could potentially reach slaughter without being inspected. They reasoned that pastoral systems include vast areas that are poorly covered by all services including veterinary services, though others pointed out that inspection and permits are mandatory for all animals transported from one point to another. Others commented that buyers are motivated to perform their own inspections and check animals for indications of poor health that may cause losses; they want to "avoid being duped." In the group discussion, the Kenyan cohort concluded that the probability of bypassing inspection differs by management system, with the lowest probabilities for animals from feedlot and ranching systems and a higher frequency and broader distribution of occurrence for animals from agropastoral and pastoral systems. The broad range for pastoral and agropastoral systems included acknowledgment that some of those inspections would be performed by community health workers or other non-veterinarians. The group emphasized that the percentage would be very low for cattle sourced from feedlots, since the animals and systems are closely monitored.
Inspection: Effectiveness
Participants described potential inspection points and estimated the sensitivity as well as non-reporting rate for each.
Among Ugandan responses, there were 27 inspection points described in total (2 pastoral, 10 agropastoral, 4 ranching, 11 semi-intensive). The inspection descriptions and distributions were similar for all production systems, so they were aggregated into a single distribution of effectiveness. Both the descriptions and the distribution indicated there were multiple "types" of inspections being lumped together. Based on the descriptions, inspections were categorized into two types:
• Rigorous (type 1): qualified and experienced personnel conducting exams, thorough inspection, "clinical signs are very clear";
• Lesser (type 2): any of the following: less qualified personnel (different incentives/stakes), less experienced, or less thorough (rushed, poor conditions/facilities, etc.), "clinical signs not always distinctive".
There were 15 inspection points classified as type 1. All 15 individual distributions had a most likely value of 0.70 or greater, and the median value for the combined distribution was 0.83. There were 12 inspections classified as type 2. Ten of the twelve had a most likely value of 0.60 or lower, and the median value for the combined distribution was 0.52. Five of the inspections included an estimate for the probability that a positive result was ignored or compromised, with the most likely value ranging from 0.01 to 0.05 and a median of 0.02.
Kenyan responses described 21 inspection points (2 feedlot, 6 agropastoral, 13 pastoral). Descriptions and reasoning for each inspection delineated two types based on the occasion for inspection and who was performing it.
• Formal (type 1): any inspection performed by veterinary or animal health professionals before movement to the next stage (e.g., a movement permit before transportation or antemortem inspection before slaughter). Results from formal inspections were unlikely, but possible in some instances, to be ignored or falsified;
• Informal (type 2): performed by a trader, owner, butcher, or other middleman before the sale takes place. Results from these inspections were more likely to be compromised or ignored in the opinion of some VS members.
There were 16 inspections classified as type 1. Fifty percent of type 1 inspections had a most likely value of effectiveness greater than 0.90, and the median value for the combined distribution was 0.71. There were five type 2 inspections, four of which had a most likely value of 0.60 or lower. All inspections for feedlot cattle were described to be formal inspections; this was attributed to ranching systems as well based on the descriptions in Part A. Nine of the inspections included an estimate for the probability that a positive result was ignored or compromised, with the most likely value ranging from 0.0003 to 0.9 and a median of 0.2.
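A simple way to reproduce the aggregation step described above is an equal-weight linear opinion pool: sample each expert's triangular (min, most likely, max) distribution and combine the draws before reading off the median of the pooled sample. The per-expert triples below are hypothetical stand-ins, not the elicited values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-expert (min, mode, max) sensitivity estimates; the
# split into "rigorous/formal" and "lesser/informal" inspections mirrors
# the text, but the individual triples are illustrative only.
rigorous = [(0.60, 0.85, 0.95), (0.55, 0.80, 0.90), (0.70, 0.90, 0.99)]
lesser   = [(0.20, 0.50, 0.75), (0.30, 0.60, 0.80), (0.10, 0.45, 0.70)]

def pool(estimates, n_per_expert=10_000):
    """Equal-weight linear opinion pool: draw the same number of samples
    from each expert's triangular distribution and concatenate them."""
    draws = [rng.triangular(a, m, b, n_per_expert) for a, m, b in estimates]
    return np.concatenate(draws)

for label, est in [("type 1", rigorous), ("type 2", lesser)]:
    pooled = pool(est)
    print(f"{label}: pooled median sensitivity = {np.median(pooled):.2f}")
```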
Discussion
In this study, we partnered with veterinarians in Kenya and Uganda to characterize the pathways and events leading to FMD infection at the time of slaughter among distinct populations of cattle in each country. We then estimated values for key variables along those pathways from farm to slaughter based on the expert knowledge of the participating veterinarians. We found that risk processes differ between management systems, that disease and sale are not always independent events, and that events on the risk pathway are influenced by the actions and motivations of value chain actors, including the decision of inspectors to report or to ignore an animal they suspect to be positive for FMD. The findings provide necessary information for evaluating the risk of infection among cattle at the time of slaughter in Kenya and Uganda and provide a framework for similar evaluation in other endemic settings. This knowledge can be used to guide exporter decisions for the development of risk-based export markets. A similar approach may be used to collect data to inform risk-based approaches to support the trade of various commodities from many geographies relating to FMD or other transboundary animal diseases.
The results describe differences in the risk processes among animals from distinct production systems. In the Kenyan systems, a trend emerged with clear delineation between pastoral/agropastoral and ranching/feedlot systems for several variables including the time from farm to slaughter, the probability of commingling en route, and the probability of bypassing inspection. The clustering of production systems whose characteristics extend beyond the farm gate is supported by other studies of Kenyan value chains [46,47]. The delineation between types of systems for factors contributing to the risk of acquiring a new infection en route to slaughter (in particular the probability of commingling with cattle from other herds) may be a strong indicator of which systems have the capacity to most easily adapt to an approach that involves direct transport and completely eliminates opportunities for exposure to other animals. Distinctions in management practices that occur on-farm were not characterized in this study but could be considered for future analysis: for example, routine vaccination against FMD. The impact of this variable would be partially captured by disease prevalence estimates but could also affect outcomes later in the pathway such as the probability of detection due to the rate of subclinically infected animals. A quantitative model could employ sensitivity analysis to explore how sensitive the overall risk is to the probability of displaying clinical signs and to other factors associated with vaccination (e.g., transmission probability from infected animals).
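As a sketch of the sensitivity analysis suggested above, a Monte Carlo version of a toy risk model can rank inputs by their Spearman rank correlation with the output. The model structure and input distributions below are hypothetical, chosen only to illustrate the technique, not taken from the elicitation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 50_000

# Toy model: an animal reaches slaughter infected and undetected if it is
# infected and the single inspection misses it; the inspection catches it
# only when clinical signs are present and the inspector detects them.
p_infected  = rng.beta(2, 50, n)           # prevalence at sale (hypothetical)
p_clinical  = rng.uniform(0.3, 0.9, n)     # probability of visible signs
sensitivity = rng.triangular(0.4, 0.8, 0.99, n)

risk = p_infected * (1 - p_clinical * sensitivity)

for name, x in [("prevalence", p_infected),
                ("P(clinical signs)", p_clinical),
                ("inspection sensitivity", sensitivity)]:
    rho, _ = spearmanr(x, risk)
    print(f"{name:24s} Spearman rho = {rho:+.2f}")
```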
The events of FMD infection and sale for slaughter are not always independent for cattle in Kenya and Uganda due to both causal and correlational factors described by veterinarians in each country. Temporal and spatial patterns in FMD incidence, animal movements, and meat supply and demand have been described elsewhere [46,48,49]. Three participants (two Kenyan, one Ugandan) described the beginning of the school year as another time when producers would be more likely to sell cattle because of the need to pay school fees. These seasonal patterns may cause correlations between disease incidence and the likelihood of being sold, such that the prevalence of FMD infection among animals sold differs from the disease prevalence in a herd or region when expressed as the annual average. Furthermore, responses indicated that the presence of FMD in a region, herd, or individual could impact the probability of sale through various mechanisms. Other sources have reported the practice of informal sales continuing in Uganda even when an FMD quarantine is in place [50,51] and that formal control measures such as ring vaccination may not be implemented for weeks after the initial outbreak event [49,52].
If disease and sale are not independent of one another, it may not be appropriate for a risk assessment to assume that animals sold are chosen at random from a herd and therefore that the risk of infection for that animal is represented by the average risk of infection for any animal in the herd. This assumption is common in risk assessments performed in the field of animal health and is often appropriate for a particular question and context [53][54][55]. However, for risk assessments examining the movement or sale of animals in endemic environments [56][57][58][59], our findings suggest it would be judicious to characterize the relationship between sale and disease of cattle in the population of study and to interpret the results of the risk assessment accordingly. While there are many studies on livestock marketing [60][61][62] and many on FMD epidemiology [63,64], this gap highlights the opportunity for further research on the relationships and mechanisms connecting the two. Such an insight would contribute to a fuller understanding and more accurate assessment of risk among animals originating from distinct production systems in FMD-endemic areas.
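The quantitative consequence of this non-independence follows directly from Bayes' rule: if infected animals are r times as likely as uninfected animals to be offered for sale, the prevalence among sold animals becomes rp/(rp + 1 − p) rather than the herd prevalence p. A minimal sketch with hypothetical numbers:

```python
def prev_among_sold(herd_prev, sale_rr):
    """Prevalence among sold animals when infected animals are sale_rr
    times as likely as uninfected animals to be sold (Bayes' rule)."""
    p = herd_prev
    return sale_rr * p / (sale_rr * p + (1 - p))

p = 0.05  # hypothetical herd prevalence
for rr in (1.0, 2.0, 5.0):
    print(f"sale RR {rr:3.1f}: P(infected | sold) = {prev_among_sold(p, rr):.3f}")
```

With r = 1 (independence) the two quantities coincide; with r = 5 the prevalence among sold animals roughly quadruples, which is exactly the bias that the assumption of random selection from the herd would hide.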
The decisions of value chain actors influence the ultimate risk level in the product. The role of such decisions was highlighted and exemplified by the suggestion, made independently by multiple individuals in each country, to include a variable that accounts for the action taken by the inspector after diagnosing an animal as positive or suspect for FMD. Corruption is a barrier to health care access in many countries [65], has been described during regulatory inspection of pharmacies in Uganda [66], and may be incentivized among livestock producers by quarantine measures and disease control policies that restrict access to markets [67]. Actor motivations and incentives to make a decision in a given situation should be considered when building the structure of a model for risk assessment or economic analysis, especially where there may be feedback loops that could qualitatively change the conclusions of an analysis [10,68,69]. Utilizing risk analyses for identifying opportunities and designing effective policies requires understanding and acknowledging the role of motivation and incentives [70], including how they will change over time and the expected changes in actions taken [71,72]. In this particular case, it should be acknowledged that some participants may have been reluctant to provide quantitative estimates for the occurrence of compromised inspection results or to have open discussion with the group about this topic. This could have resulted in an overestimation of "effectiveness", considering that the sensitivity was assumed to incorporate the impact of compromised results when not explicitly stated.
The approach used here, a partnership with local professionals in a hybrid between participatory and expert elicitation techniques, is a novel contribution to import risk assessments, particularly in disease-endemic and data-scarce settings. Participatory mapping and characterization of the risk pathways and value chains gathered valuable information about the processes and relationships at work, as described above. By utilizing local veterinary expertise to guide the model structure, this approach elicited information to help achieve the purpose of evaluating risk from the perspective of the importer but for the purposes of the exporter, giving insight into causal relationships to help inform an appropriate model structure [36] and risk management strategies [73]. Earlier uses of participatory methods for risk assessment have faced the challenges of "coupling" the beliefs of participating stakeholders with technical contributors when they differ [25]. In this case, since we considered our participants to be subject matter experts, we deferred to their beliefs in the realm of information discussed, and the procedures were in fact designed so that participants would update and improve the research team's preliminary drafts and impressions of the systems obtained from generic or external sources. Robust and systematic procedures for training, eliciting, and reviewing participant knowledge helped to minimize bias and generate risk pathways and parameter estimates suitable for use in a formal model. At the same time, it is the hope and intention that the veterinarians and their communities also benefited from their involvement [37]. As professionals who are invested in improving animal health and livestock systems, their planning and decisions impact the outcome being discussed. It is reasonable to expect that the participatory exercise of mapping and interrogating the system, risk factors, and relationships from many professional viewpoints contributed to an updated understanding of their own role related to FMD and trade [74].
The results of this risk assessment and others can be used to develop risk-based approaches for FMD control at both the country and regional levels. Considering that risk dynamics, including many of the factors characterized here, change over time, it is best that each country's veterinary services take on the task of regular risk assessment to guide risk management activities. The challenge of data scarcity can be addressed through the use of regular VS activities, e.g., data on health certificates, market throughput, and veterinary inspection reports. The results presented should be combined with other data related to herd management and sales, FMD prevalence, and disease transmission to complete a quantitative risk assessment. This should include epidemiological data that are representative of production systems' distribution across each country to account for regional variations in disease occurrence. Risk-based approaches for FMD control, informed by the use of risk modeling and risk analysis, would aid early detection and response to disease outbreaks. Currently, reporting of outbreaks is often delayed or does not occur, and there is limited surveillance for disease. Official outbreak reports differ by source [4] and appear to underestimate the occurrence of disease when compared to seroprevalence estimates. Common strategies for disease control include movement controls and ring vaccination in the face of outbreaks. The effectiveness of such measures is challenged by a lack of resources for robust responses, delays in obtaining and delivering an effective response (median reported times between recognition of an outbreak and deployment of vaccines in Uganda of 25 and 52 days in two separate surveys [49,52]), and underreporting of disease events, compounded by the difficulty of delivering veterinary services to remote areas. Vaccination is used primarily as a reactive rather than proactive measure of disease control in Uganda [49,52]; routine preventive vaccination has increased in Kenya in recent years [4].
The risk pathways reported here need to be coupled with activities at the slaughterhouse to characterize the risk of transmission associated with the final product [75]. Handling practices at the level of the abattoir differ between facilities, based on their target markets. Most beef in Uganda and Kenya goes to domestic consumption and is slaughtered at some level of local slab or abattoir. Studies of abattoirs and butcheries in Uganda have demonstrated unacceptably high microbial contamination in meat samples as well as poor hygiene standards and beef handling practices [76]. Studies of meat handlers at five small and medium slaughterhouses in Nairobi likewise reported poor hygiene practices and microbial profiles that could facilitate cross-contamination of meat [77,78]. Meat inspection is often performed by a Public Health Officer, with greater emphasis on zoonoses and foodborne pathogens than trade-sensitive diseases such as FMD. Larger processing companies with their own abattoir may be expected to have superior sanitary processes due to company standards but handle a small portion of the beef supply (11-13% in Nairobi [46]). There are approximately five abattoirs or companies in each country that export meat and offal to other countries in the region [79]. In addition to hygiene practices, it would be critical to describe the actions taken when an animal or carcass is identified to have FMD in order to understand the implications for risk in associated animals or meat products.
The primary limitations of this study are related to the use of expert knowledge as a surrogate for empirical data [80]. Rigorous methods must be utilized to obtain accurate and reproducible study results in the face of motivational, behavioral, and cognitive biases [81]. This study included many of the core tenets associated with rigorous protocols [26], including multiple experts with diverse backgrounds, training of experts in the necessary vocabulary and concepts, following a structured elicitation protocol that privately recorded individual judgments before encouraging discussion among participants, and quantifying uncertainty around parameter estimates [30,82,83]. One limitation is the potential bias in perspective introduced by including veterinarians as the only profession represented, though they did come from diverse regional and personal backgrounds.
The sample size here (number of participants) may be perceived as relatively small compared to the population of field experts. The definition of sample size when consulting experts is subjective and, in many cases, a sample size of even a single expert has been used to parameterize distributions [84]; see also the discussion of sample size in [80]. Rather than numbers, we focused on giving our population the required training to help them understand what we wanted to estimate and then relied on their expertise and consensus-building to arrive at the best representation of each value. That said, results should be interpreted in light of the relatively few responses in Part B for the feedlot and ranching systems in Kenya and the pastoral systems in Uganda. It is desirable to have several experts contributing knowledge because each tends to be overconfident in their own judgment (i.e., they specify bounds for a parameter that are too narrow), and the aggregation of uncertainty across several experts, as well as interaction and discussion among them, increases the consistency of expert knowledge with reality [28,80]. Because fewer individuals contributed to the aggregate distribution, there may be less uncertainty expressed for the parameter values than would have been covered by a greater number of contributors with expertise in these systems. Even so, the values of the estimates reported by our participants are generally supported: they are plausible compared to known values, supported by the consensus of the group, and aligned with trends shown in other literature.
Finally, the risk model structure and parameters were handled and influenced by the primary researcher and discussion facilitator, who is not from East Africa. This researcher built the preliminary model structure and questionnaires based on a literature review, reviewed and aggregated the individual results, facilitated the group discussion, and was involved in all decisions regarding data analysis and interpretation. The participants were invited to review and discuss the conclusions from each stage of the research process, including the report summarizing the process, the final risk tree, and the parameter distributions. It is possible that misinterpretation [80,85] could have occurred in both directions during communication between the researcher and the participants, and it is certain that the lens of the primary researcher has been incorporated into the final risk-mapping outputs.
Conclusions
The results of this study fill the gap of identifying risk pathways and quantifying key variables, representative of East African cattle management systems and value chains, for which published data are not available. This information could be combined with other available data to perform a systematic risk assessment to estimate the baseline and relative risk of FMD transmission associated with beef products and to identify key variables for intervention, including populations of focus, the design of risk mitigation measures, and the evaluation of what level of risk is reasonably achievable and at what cost. The novel approach builds on prior participatory and expert elicitation approaches to risk assessment to generate credible data from local veterinary professionals appropriate for use in formal risk assessment models.

Institutional Review Board Statement: Ethical review and approval were waived for this study after the University of Minnesota Institutional Review Board for research involving human participants reviewed the study protocol and determined that it met the criteria for exemption from review.
Informed Consent Statement: Written informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available in the article and supplementary material, and are also available upon request.
Acknowledgments:
The authors would like to thank the veterinarians of the ProgRESSVet Kenya and ProgRESSVet Uganda programs for their participation and contributions to this work. We are also thankful to María Sol Pérez Aguirreburualde, Mary Katherine O'Brien, Anna Pendleton, Julia Baker, and Daniella Schettino for the support in all stages of study creation and implementation.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. | 2021-10-23T15:08:43.240Z | 2021-10-20T00:00:00.000 | {
"year": 2021,
"sha1": "351113d66a3e5b263c6abbf421e92838405fa945",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/13/11/2112/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "293841033fd8960f0e8c1332286aa9e019a88c53",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
49186284 | pes2o/s2orc | v3-fos-license | Correlation between dental conditions and comorbidities in an elderly Japanese population
Supplemental Digital Content is available in the text
Introduction
Japan is one of the countries with the most rapidly aging populations in the world; the percentage of persons aged ≥65 years reached 27.3% in 2017. [1,2] The percentage is estimated to exceed 30% by 2025. However, this trend can now be observed globally. [3] Most people can expect to live over 60 years, and this also holds in less developed countries. [4,5] This is the first time in history that mankind has dealt with this issue, and thus medical and preventive care for the elderly population is becoming increasingly important. [4,6] One aspect of health care that is particularly important in the elderly is oral care. [7] Tooth loss can give rise to various problems associated with eating, speaking, and appearance. [8] The number of teeth and the wearing of dentures are also related to swallowing function. [9] Moreover, associations of poor oral health and oral/dental conditions with hypertension [10] and cardiovascular disease [11][12][13] have been reported in several population groups. Dental conditions are also associated with diet and nutritional status. [14] Therefore, oral/dental health problems could affect general health and quality of life both directly and indirectly.
Previous research has shown that older adults who need special care are at an increased risk for dental plaque accumulation on natural teeth or dentures. A higher incidence of caries, gingivitis, periodontal disease, and edentulousness is also observed. [15] Moreover, a study conducted in 11 nursing homes in Japan demonstrated that oral care significantly prevented the onset of aspiration and pneumonia, and reduced the number of pneumonitis-related deaths. [16] These results were preliminary, but the report suggested that diseases related to oral/dental conditions could be prevented or improved by promoting oral care among nursing home residents. This means that studies that focus on correlations between dental conditions and diseases in the elderly population who require care, such as nursing home residents, are relatively more important. In addition to these, several previous studies have suggested associations and correlations between the dental conditions of elderly individuals living in nursing homes and their comorbidities. [17,18] To accumulate literature for use in clinical practice, studies in various settings and comorbidities are important.
The aim of the present study was to investigate the correlation between dental conditions and comorbidities in an elderly population using a database constructed from data obtained from nursing homes in Japan.
Study design
In this study, we used a database constructed from data obtained from 12 nursing homes comprising 1008 individuals in Japan. The database was developed and provided by the Health, Clinic, and Education Information Evaluation Institute (HCEI; Kyoto, Japan), which is a general incorporated association. The registration period was from January 1, 2014, to December 31, 2015, and individuals were consecutively included in the database. The data were retrospectively collected from the nursing homes. The database included the sociodemographic/clinical data of the elderly individuals [age, sex, location before admission, reasons for admission, level of care required, comorbidities, oral condition (number of present, decayed, and filled teeth), and use of dentures].
Study population
Individuals with dental and other medical records from the nursing homes [sociodemographic/clinical data: age, sex, location before admission, reasons for admission, level of care required, comorbidities, oral condition (number of present, decayed, and filled teeth), and use of dentures] were included in the analysis. After excluding individuals with missing data, the associations between dental conditions, comorbidities, and other sociodemographic/clinical factors were analyzed. To maximize the generalizability of the study results, no other exclusion criteria were set.
The following sociodemographic/clinical data were obtained at the time of admission to the nursing home: age, sex, reason for admission, residence before admission (home or hospital), care needs level, dental conditions (number of present teeth, decayed teeth, and filled teeth), and comorbidities (dementia, stroke, bone fracture, arthritis, heart disease, and other common diseases associated with care).
Dental conditions and care needs level
The condition of the teeth was categorized as follows: normal, decayed, and filled. Normal teeth were defined as teeth that were not decayed or filled. According to the World Health Organization criteria, a retained root was considered as a decayed tooth. [19] The care needs level was determined by the local government in Japan, and ranged from 1 (lightest) to 5 (heaviest). [20,21] The process of determination of the level is nationally standardized and includes a visit to the individual's home or hospital by a trained local government official. The computerized algorithm was preliminarily used to determine the level based on 74 items related to physical and mental status. Subsequently, a local committee of specialists determined the level according to the preliminary results and a report of the 12 items of behavioral status by an attending physician. Details were reported by Tsutsui and Muramatsu, [20] but briefly, level 1 means that the individual requires partial help for daily life, and level 5 means that the individual requires full help for daily life.
Statistical analysis
Continuous variables are expressed as mean ± standard deviation (SD), while categorical variables are described as number and percentage (%). The number of teeth (including decayed and filled) across age groups was compared using 1-way analysis of variance. Linear regression models were used to analyze univariate and multivariate associations between the dental conditions, comorbidities, and other sociodemographic/clinical backgrounds. [22] In the multivariate analyses, age, sex, and the care needs level were considered potential confounding variables because of their medical implications. The fit of the model was evaluated by the root mean squared error (RMSE). In the supplementary analysis, the prevalence of comorbidities among individuals with and without decayed teeth was compared using Fisher's exact test. The 2-sided significance level was set at P < .05. All statistical analyses were performed using SAS version 9.4 for Windows (SAS Institute Inc., Cary, NC) and Stata 15 (StataCorp, College Station, TX).

The baseline characteristics of the individuals are summarized in Table 1. Approximately half of the individuals lived at home before admission. Care level 2 was the most frequently required (75, 26.0%), and level 5 the least frequently required (36, 12.5%). Dementia was the most common comorbidity reported (116, 40.1%).
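For readers who want to reproduce an analysis of this form, the sketch below fits the same kind of multivariate linear model (number of decayed teeth regressed on dementia, adjusted for age, sex, and care needs level) in Python with statsmodels rather than SAS or Stata. The data frame is a synthetic stand-in generated only to match the analyzed sample size; its coefficients do not correspond to the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 289  # same size as the analyzed sample

# Synthetic stand-in data; variable names follow the study, values do not.
df = pd.DataFrame({
    "decayed":    rng.poisson(1.4, n),
    "dementia":   rng.binomial(1, 0.40, n),
    "age":        rng.normal(86, 6, n),
    "male":       rng.binomial(1, 0.3, n),
    "care_level": rng.integers(1, 6, n),
})

# Multivariate linear model mirroring the adjustment set described above.
model = smf.ols("decayed ~ dementia + age + male + care_level", data=df).fit()
print(model.summary().tables[1])
print(f"RMSE = {np.sqrt(model.mse_resid):.3f}")
```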
Dental health status
The dental health status of the elderly individuals is summarized in Table 2. The number of present teeth among all individuals (mean ± SD) was 11.6 ± 9.6 and decreased with older age [16.2 ± 9.6 (≤80 years), 12.1 ± 8.9 (81-85 years), 11.9 ± 9.5 (86-90 years), 6.5 ± 8.1 (≥91 years); P < .001]. Dental conditions such as the number of decayed teeth, the number of filled (treated) teeth, and denture use are also presented. In the total population of 289 individuals, the mean ± SD number of decayed teeth was 1.4 ± 3.0, and that of filled teeth was 4.9 ± 5.2. Over half of the individuals used dentures (172, 59.5%).
Univariate and multivariate linear regression
Associations between dental status and other sociodemographic/clinical characteristics were analyzed using linear regression models. In the univariate analysis, there was no significant association between the number of present teeth and the following characteristics: sex, residence before admission, and care level required (Table 3). However, the presence of hypertension, heart disease, and arthritis was significantly correlated with the number of present teeth (P = .035, .005, and .023, respectively). The presence of heart disease and arthritis was also significantly correlated with the number of normal teeth (P = .014 and .034, respectively). Dementia was not significantly associated with the number of present (P = .56) or normal teeth (P = .78), but instead with the number of decayed teeth (P = .018, RMSE = 2.959). In the multivariate regression analysis of comorbidities, dementia remained significantly correlated with the number of decayed teeth even after adjusting for confounding variables (P = .030, RMSE = 2.959; Table 4). The prevalence of dementia was also significantly higher among dentate individuals with one or more decayed teeth than among individuals without decayed teeth (P = .044; see the Supplementary material).
Discussion
The present study investigated the associations between dental conditions and comorbidities in an elderly population living in nursing homes. The number of decayed teeth was significantly correlated with dementia, and this correlation was observed even after adjusting for confounding variables (age, sex, and care needs level). Moreover, the number of treated teeth (filled teeth) was not associated with dementia. These results suggest that dental health could represent a marker of impending dementia, and probably represent a marker of general health status.
In this study population, 107 residents (46.3% of all dentate residents) had at least one untreated decayed tooth. The incidence of dental decay is increasing in the elderly population. [23] North American epidemiological studies have indicated an association between the presence of root caries (decay) and aging. [24] In addition, a community-based study conducted in India reported that the prevalence of dental decay in the population aged ≥60 years was >90%. [25] Previous literature, as well as the current results, indicates that dental decay is problematic in the elderly population, and periodic dental care may have a positive impact on health status. We also focused on other comorbidities. Of these, hypertension, heart disease, and arthritis were significantly associated with the numbers of present and normal teeth. These findings are consistent with those reported in previously published literature [10,12,13,26]; therefore, our results may have generalizability.
However, there are several limitations of this study. The most important limitation was the cross-sectional setting, owing to which causality cannot be inferred. A previous report [27] demonstrated that daily oral hygiene was independently associated with cognitive impairment. Elderly individuals with dementia had difficulty maintaining adequate oral hygiene, and often presented with bacterial dental plaque accumulation and gingival inflammation. [27] In addition, individuals with dementia are less likely to complain about their oral condition, and a lack of access to professional dental care makes it difficult to detect and treat dental decay adequately. The correlations between oral health status and cognitive impairment are still unclear, [28] but even if our findings indicate a reverse causality, periodic dental care in the elderly population should be prioritized. The limited information included in the database was also a potential limitation. There were no data on factors related to oral health such as nutrition, food consumption, tobacco use, and alcohol consumption. [29][30][31] The data used in this study were obtained from 12 nursing homes, but only 289 individuals were included in the analysis. Therefore, the representativeness of the study sample might be limited. Additional investigations including these aspects may provide more beneficial results in clinical practice; nonetheless, our exploratory results provide valuable suggestions for future research on oral and systemic health status. In summary, the number of decayed teeth was correlated with dementia. While causality cannot be inferred from the present results, periodic dental examinations, care, and treatments should be prioritized to slow the onset of age-related diseases such as dementia in the elderly population. Additional longitudinal studies are highly desirable to understand the causal relationships between dental conditions and systemic comorbidities.
"year": 2018,
"sha1": "8f5d74552433be0ca4901d46e416312ddfd4a261",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000011075",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f5d74552433be0ca4901d46e416312ddfd4a261",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9874033 | pes2o/s2orc | v3-fos-license | A Review of Salam Phase Transition in Protein Amino Acids: Implication for Biomolecular Homochirality
The origin of chirality, closely related to the evolution of life on the earth, has long been debated. In 1991, Abdus Salam suggested a novel approach to achieve biomolecular homochirality by a phase transition. In a subsequent publication, he predicted that this phase transition could eventually change D-amino acids into L-amino acids, as the C−H bond would break and the H atom would become a superconductive atom. Since many experiments denied the configuration change in amino acids, the Salam hypothesis aroused suspicion. This paper aims to provide direct experimental evidence of a phase transition in alanine and valine single crystals but to deny the configuration change of D- to L-enantiomers. New views on the Salam phase transition are presented to revalidate its great importance in the origin of homochirality.
Crystals were characterized by elemental analysis (C, H and N), and a good agreement was shown between the theoretical and experimental data [13]. By using X-ray diffraction crystallography at 293 K, the cell dimensions of the D-/L-alanine crystals were determined as the same space group P2₁2₁2₁, orthorhombic, a = 6.0250 Å, b = 12.3310 Å, c = 5.7841 Å, V = 429.72 Å³. The data for the D-/L-valine crystals showed a space group P2₁, monoclinic, a = 9.6686 Å, b = 5.2556 Å, c = 11.9786 Å, V = 608.64 Å³. This indicates that all crystals are pure single crystals containing no crystal water. The rotation angle ζ of the D- and L-alanine solutions was measured on a Polarimeter PE-241 MC at 293 K at a wavelength of 589.6 nm. Using the formula [α] = ζ/(L × C), the corresponding [α] values of D- and L-alanine were shown to be equal in magnitude.
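As a worked example of the specific-rotation formula above, with the observed rotation ζ in degrees, the path length L in dm, and the concentration C in g/mL (the reading and concentration below are hypothetical, not the paper's measurements):

```python
def specific_rotation(zeta_deg, path_dm, conc_g_per_ml):
    """Specific rotation [alpha] = zeta / (L * C), with zeta in degrees,
    L in dm, and C in g/mL, as in the formula above."""
    return zeta_deg / (path_dm * conc_g_per_ml)

# Hypothetical reading: 0.29 deg through a 1 dm cell at 0.02 g/mL
print(specific_rotation(0.29, 1.0, 0.02))  # -> 14.5
```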
In order to maintain the authenticity and accuracy of these experiments, we used dozens of pure single crystals for this series of experiments. A different crystal was used in each experiment, and all of these results were reproducible.
Specific heat measurement
The temperature dependence of the specific heats of the D-/L-alanine and D-/L-valine crystals is shown in Figs. 1-3. An obvious transition was observed at 270 ± 1 K in both the alanine and valine enantiomers by differential scanning calorimetry with an adiabatic continuous-heating method. With all other conditions being equal, it is also shown that the specific heat Cp value of D-valine is larger than that of L-valine, and the same holds for the enantiomers of alanine. In all cases, the biologically dominant L-enantiomer is found to have the lower energy, and the specific heat value reflects this fact.
DC-Magnetic susceptibilities measurements
Magnetic moment (m) and magnetic susceptibility (χρ) of the D-/L-alanine crystals were measured with a SQUID magnetometer (Quantum Design, MPMS-5) from 200 K to 300 K at a field of 1.0 T (with differential sensitivity of 1E-8 emu at 1 T) in the National Laboratory for Superconductivity, Institute of Physics, Chinese Academy of Sciences. Crystals were weighed and determined to be 174.1 mg (D-alanine) and 99.5 mg (L-alanine), then transferred to a straw. The signal from the plastic straw was canceled out while the temperature was measured. It is clearly indicated that both the values and the variations of the magnetic susceptibilities of the D-/L-alanine enantiomers remain the same when the temperature is above 240 K, with no distinct difference between them. It can also be found that, as the temperature approaches 240 K from above, the change in the χρ values becomes slow and subsides. On the other hand, when the temperature falls below 240 K, D-alanine undergoes a magnetic phase transition, as the χρ values show a maximum near 240 K, while the χρ values of L-alanine continue increasing but their rate of variation experiences an abrupt rise, which should also be regarded as a magnetic phase transition whose mechanism is clearly distinct from that of the D-alanine crystal. The experimental results are repeatable for the same samples.

NMR measurements
As for L-alanine, the variation degrees of the peak widths of the Cα−H and N−H peaks agree well (the maximum peak width is about two times as large as that at room temperature) over the whole process, which indicates that the temperature-dependent relaxation effects of the Cα−H and N−H nuclei of the L-alanine molecule are nearly the same in the transition process. In the case of D-alanine, the variation of the Cα−H peak width is much fiercer than that of its enantiomer in the transition temperature range (220-250 K); in addition, its transition temperature seems nearer to 240 K instead of 230 K.
Considering the relative stability of the magnetic field throughout the experimental process, these results show that, in this specific transition, the spin-spin and spin-lattice relaxation mechanisms of the Cα−H nucleus of the D-alanine molecule may be different from those of L-alanine, and its relaxation effects may also be stronger than those of its enantiomer.
Ultrasonic attenuation measurements
The measurements of the ultrasonic attenuation values of D-/L-alanine were performed on a computer-controlled ultrasonic system (Matec model 7700). A ceramic transducer generating a longitudinal ultrasonic wave was used. The frequency of the ultrasonic wave was 5.56 MHz. The attenuation coefficient was obtained from the echo amplitudes as α = ln[A(x1)/A(x2)]/(2d), where d is the thickness of the sample and A(x1) and A(x2) are the amplitudes of the first and second echoes, respectively. The samples used for the ultrasonic measurements were D- and L-alanine crystals with thicknesses equal to 3.00 and 2.70 mm. The transducer was fixed properly to ensure contact with the sample, with no air between them. The temperature of the sample was controlled by a thermocouple. In this way we obtained the ultrasonic attenuation values at various temperatures. For the sake of detecting and describing the difference between the two phase-transition mechanisms more precisely, a relative attenuation curve was drawn for each enantiomer: the maximum attenuation value of each enantiomer was chosen as αmax, and the other temperature-dependent values were expressed relative to this maximum.
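A small helper makes the echo-based calculation above concrete: the second echo travels one extra round trip of 2d, so the attenuation coefficient follows from the amplitude ratio of the first two echoes. The echo amplitudes used here are hypothetical.

```python
import numpy as np

def attenuation(a1, a2, thickness_mm):
    """Attenuation coefficient alpha = ln[A(x1)/A(x2)] / (2 d) in nepers
    per mm, since the second echo travels an extra round trip of 2 d."""
    return np.log(a1 / a2) / (2.0 * thickness_mm)

# Hypothetical echo amplitudes for the 3.00 mm crystal
print(f"alpha = {attenuation(1.00, 0.72, 3.00):.3f} Np/mm")
```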
Temperature-dependent optical rotation measurement
This is the first method suggested by Salam to testify the phase transition. The temperature-dependent optical rotation measurement of the DL-racemic alanine crystal was performed on a solid-state optical measurement system, and the result is shown in Fig. 7. The ϕ value is equal to −0.5° (approaching zero) from 290 K to 252 K, which proves that the crystal is truly racemic. However, when the temperature decreases continuously from 260 K through 252 K to 230 K, the ϕ value decreases rapidly.

The most striking and considerably important result of this part of the study is the apparently different Raman spectral behavior of the DL-racemic alanine crystal when the temperature approaches the transition temperature Tc. A careful observation of this phase transition by Raman spectra showed that (according to the reported assignment of peaks): (1) the relative intensity of the 3002 cm⁻¹ peak (Cα−H stretching) is weaker than that of the 2953 cm⁻¹ peak only at 250 K (shown in Fig. 8); (2) the relative intensity of the 1411 cm⁻¹ peak (COO⁻ symmetric bending) is weaker than that of the 1483 cm⁻¹ peak (Cα−H bending) only at 250 K in the whole temperature region (shown in Fig. 9). These phenomena imply an abrupt change in the vibrational particulars of the Cα−H bond and the COO⁻ group at the transition point around 250 K, which suggests the contribution of the Cα−H bond and the COO⁻ group to the phase transition process of the DL-alanine crystal.
Conclusion
From the above series of experimental results, the existence of a phase transition around 250 K in alanine crystals has been fully proved. Although the detailed transition mechanism is unclear, we may at least draw the cautious conclusion that there exists a phase transition in alanine crystals and that, in the transition process, the D-, L-, and DL-alanine crystals exhibit different transition behavior.
EXPERIMENTAL EVIDENCE THAT DENIES THE CONFIGURATION CHANGE
The authors of ref. [15] observed no change in optical rotation after exposing both racemic DL-cystine and L-cystine to temperatures ranging from 77 K to 0.6 K for three and four days, and thus reported failing to validate the PVED-induced phase transitions predicted by Salam.
In addition to their experiments, a more direct way to testify Salam's phase transition is to conduct temperature-dependent X-ray diffraction or neutron diffraction on alanine enantiomers. If there were a configuration change of D- to L-, it would be easy to catch this phenomenon, for an abrupt change in the atomic coordinates would be observed. The atomic coordinates for the D-/L-alanine single crystals are listed in Table 1.

Establishing chemical systems that are thermodynamically far from equilibrium, Kondepudi and Nelson succeeded in simulating the evolution of dominant chirality by nonlinear analysis of chemical reactions. If we take the specific phase transition into consideration, the enlarged energy difference will greatly shorten the time period needed in their model to achieve homochirality, and it will also reduce the total number of molecules that the system must have in order to surmount fluctuation. What is worth emphasizing is that, according to Salam, the occurrence of the Salam phase transition can enhance the PVED a lot, which will in turn induce a much higher probability of transition. This kind of nonlinear feedback relation is similar to that in the Kondepudi-Nelson scenario: when the PVED value is larger by one to two orders of magnitude than anticipated, the time required to realize homochirality in a 100 km × 100 km × 4 m lake reduces from 10⁴ years to one year [19,20]. In fact, nonlinear effects in asymmetric synthesis and stereoselective reactions have been widely discovered and studied [21,22]. A recently published experiment showed that the chiral amplification of oligopeptides in two-dimensional crystalline self-assemblies on water also involves nonlinear effects [23].
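The kind of nonlinear amplification invoked here can be illustrated with the classic Frank autocatalysis model (a standard textbook scheme, not the authors' model nor Kondepudi and Nelson's full kinetic treatment): each enantiomer replicates autocatalytically while the two inhibit each other, so a tiny initial excess, standing in for a PVED-induced bias, grows toward homochirality. The rate constants and initial excess below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Frank's autocatalytic model: each enantiomer (x = L, y = D) replicates
# autocatalytically at rate k, and the two inhibit each other at rate mu.
k, mu = 1.0, 1.0

def frank(t, z):
    x, y = z
    return [x * (k - mu * y), y * (k - mu * x)]

z0 = [1.0 + 1e-6, 1.0 - 1e-6]      # near-racemic start, tiny L excess
sol = solve_ivp(frank, (0.0, 20.0), z0)
x, y = sol.y
ee = (x - y) / (x + y)             # enantiomeric excess over time
print(f"ee grows from {ee[0]:.1e} to {ee[-1]:.3f}")
```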
Significantly, the biological system in which homochirality was achieved meets well the formation requirements of a dissipative structure: the critical transition point is far from equilibrium; the control parameter PVED, whose value determines the probability of the Salam phase transition, has a transition threshold; and the occurrence of the Salam phase transition will in turn enlarge the PVED nonlinearly.
Hence, if the temperature is kept within the transition temperature range all along, the control parameter PVED can exceed its critical value (threshold) for long enough, and it is possible to form a dissipative structure and finally develop into an ordered and homochiral system. This scenario is being tested in our laboratory. A related and interesting phenomenon was reported by Ellis-Evans et al. [24]: a great lake, Lake Vostok, lies below a flowing ice sheet, and the melting point temperature is −3.15 °C (270 K). Microbiological studies of the Vostok ice core have revealed a great diversity of microbes, including yeasts and actinomycetes (with antibiotic-synthesizing potential) which remain viable in ice for up to 3000 years, and viable mycelial fungi up to 38,600 years old.
At last, we emphasize again the significance of the Salam phase transition in the evolution of homochirality: instead of being the ultimate solution to the problem, it may actually serve as the first step of an amplification mechanism. It connects the microscopic difference (PVED) between biomolecular enantiomers with nonlinear processes in a macroscopic biological system. It also resolves the long-debated suspicion that the PVED is too minor to be enlarged directly by a nonlinear process. Combining the existence of the PVED, the Salam phase transition, and a nonlinear amplification mechanism, we may propose a sound way to understand the chemical evolution of homochirality, as depicted in Fig. 12.
"year": 2002,
"sha1": "369530189b3fd57434a13a5af928022ccc0f61dc",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "369530189b3fd57434a13a5af928022ccc0f61dc",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Physics",
"Biology"
]
} |
247379835 | pes2o/s2orc | v3-fos-license | Genesis and Mechanism of Some Cancer Types and an Overview on the Role of Diet and Nutrition in Cancer Prevention
Cancer is a major disease with a high mortality rate worldwide. In many countries, cancer is considered to be the second most common cause of death after cardiovascular disease. The clinical management of cancer continues to be a challenge, as conventional treatments such as chemotherapy and radiation therapy have limitations due to their toxicity profiles. An unhealthy lifestyle and poor dietary habits are key risk factors for cancer; a healthy diet and lifestyle may minimize the risk. Epidemiological studies have shown that a high fruit and vegetable intake in the regular diet can effectively reduce the risk of developing certain types of cancers due to the high contents of antioxidants and phytochemicals. In vitro and in vivo studies have shown that phytochemicals exert significant anticancer effects owing to their free radical scavenging capacity. There has been extensive research on the protective effects of phytochemicals in different types of cancers. This review attempts to give an overview of the etiology of different types of cancers and assesses the role of phytonutrients in the prevention of cancer, which makes the present review distinct from others available.
Introduction
Cancer is a disease which involves abnormal cell growth, with the potential to invade and metastasize to other parts of the body. It has become a leading cause of death globally, causing nearly 10 million deaths in 2020; in 2018, approximately 9.6 million people died due to cancer [1]. As the prevalence of cancer continues to grow worldwide, new strategies are being sought for disease management. Cancer is a multifactorial disease, and various factors such as diet and lifestyle, exposure to radiation, and hormonal factors can contribute to the development of this fatal disease [2]. Lifestyle factors such as smoking, alcohol consumption, and dietary habits are considered to be significant contributors to the etiology of cancer and are also the main targets for primary prevention. The possible association between diet and the development of cancer cannot be overlooked. For example, diets rich in red and processed meats have been linked to an increased risk of colon cancer, whereas breast cancers have been associated with high-fat diets [3,4]. Salted, pickled, or smoked foods are linked with an elevated risk of stomach cancer. Low-fiber and/or high-fat diets are associated with colon, prostate, pancreas, breast, endometrium, and ovarian cancers. The clinical management of cancer is based on the type and extent of the disease. Most people undergo a combination of treatments, such as surgery along with chemotherapy and radiation therapy. Various other procedures, such as photodynamic and thermal therapy, immunotherapy, and gene therapy, have also emerged as novel treatments for cancer. Phytonutrients are bioactive substances found in plants and are known for their antioxidant and anti-inflammatory effects in humans. Among these phytonutrients, flavonoids and anthraquinones are known to protect the body against various types of cancers [5,6]. Approximately 5000 individual phytochemicals have been identified, and evidence suggests that the additive and synergistic effects of these phytochemicals are responsible for their potent antioxidant and anticancer activities. These phytochemicals play a vital role in apoptosis, cell cycle arrest, the inhibition of angiogenesis, enzyme inhibition, and the modulation of nuclear receptors [7]. This review compiles useful information from all available library databases and electronic searches, including Web of Science, Scopus, Google Scholar, etc., from the period of 1998 to 2021. It highlights the genotoxic and carcinogenic nature of certain food items. It also discusses traditional and novel treatment modalities of cancer.
Cancer Mechanism
Cancer is a genetic disease that can arise from the combined effect of multiple external factors along with internal genetic changes. The development of this malignant disease at the cellular level involves somatic mutation of the DNA following exposure to carcinogenic factors [6]. Such somatic mutation involves the translocation and amplification of particular genes, which translates into distinctive patterns of gene expression; these altered genes are identified as proto-oncogenes. The genetic damage that occurs mostly becomes irreversible through numerous cycles of cell duplication.
Any agent that prompts mutations or DNA destruction in a cell structure is considered to be genotoxic. A genotoxin can act via direct as well as indirect mechanisms. Ethylene imine and its chloromethyl ether are examples of direct-acting genotoxins. Genotoxins such as hepatitis B virus and aflatoxin are implicated in the etiology of hepatocellular carcinoma, while alcohol and tobacco are risk factors for oral cancer. Indirect-acting genotoxins require metabolic activation to elicit a tumorigenic response. Examples include polycyclic aromatic hydrocarbons and aromatic amines, which are linked to lung cancer and bladder cancer, respectively [8].
It is widely believed that diet and lifestyle strongly impact cancer development. Several carcinogenic and mutagenic constituents are present in our food [9,10]. There is concern over the risk that the pesticide content of commercially grown food items poses. Certain naturally occurring carcinogens, such as pyrrolizidine alkaloids, have been identified in plant products, such as in commonly consumed herbal teas [11][12][13]. Hydrazine in edible mushrooms, and safrole and alkenylbenzene in spices and flavorings have shown carcinogenic properties. In addition, mycotoxins such as aflatoxin present in foods spoiled by fungus have been shown to induce cancer and impair the immune system [14,15].
Oxidative damage is associated with tumor formation. Free radicals created by oxidative stress lead to DNA damage. This damage, if left unrepaired, can cause base mutations, DNA cross-linking, strand breakage, and chromosomal fracture and reorganization [16,17]. Phytochemicals present in our diet have the potential to modulate cancer development and retard tumor growth owing to their free radical scavenging capacity. They may positively affect processes of cell signaling, oxidative stress response, and inflammation. There is abundant evidence on the beneficial effects of flavonoids, carotenoids, phenolic acids, and organosulfur compounds on the downregulation of certain carcinogenic pathways [18,19].
Types of Cancer and Their Causes
Cancers are classified either according to the kind of tissue from which they originate, or the organ in which they first developed. In addition, some cancers are of mixed types. The development, progression, and incidence of cancer occur at a slow rate and can take several years to manifest.
Excessive food consumption is one of the key factors in the development of cancer. In developed countries, 30% of all cancer cases have been attributed to poor dietary habits [15]. However, in developing countries, the contribution of diet to cancer risk is relatively lower, accounting for about 20% [16]. It is indicated that poor diet may contribute to 50% of all breast cancers, 70% of colon cancers, and 50% of gallbladder cancer cases [2]. A substantial positive relationship has been established between obesity and high death rates due to various types of cancers, such as pancreatic, uterine, kidney, esophageal, breast, and cervical cancers. Researchers believe this is largely due to the inflammation caused by visceral fat [20,21]. Overweight and obesity account for about 20% of all cancer-related deaths in women and 14% in men [22].
It is estimated that 30-40% of cancers could be avoided by consuming healthy diets, leading a physically active life, and maintaining a healthy body weight [23,24]. Epidemiological research has consistently revealed that a high intake of whole grain products, vegetables, and fruits is strongly linked to fewer deaths due to cancer and cardiovascular ailments, the top two causes of death globally [25,26]. Figure 1 gives a general insight into the relationship between food and cancers.
Colon Cancer
Colorectal cancer is the third leading cause of cancer-related death in males and the second in females [27], and ranks fourth in cancer-related deaths globally [24,27]. It is observed that the incidence of colon cancer is higher in developed countries (Oceania and Europe) than in developing countries such as those of Africa and Asia [19,20]. Several studies have revealed that the Western dietary pattern correlates with an elevated risk of colorectal cancer, whereas diets rich in whole grains and fibers have been linked with a reduced risk of colorectal cancer [28][29][30][31][32].
It was also reported in a meta-analysis that a substantial positive correlation exists between the consumption of red meat and colon cancer [33]. By contrast, white meat consumption does not increase the risk of colon cancer, whereas the heme content of red meat increases the risk tenfold. Cooked or fried protein-rich foods such as fish and meat are the principal sources of mutagens [34]. Amino acids present in proteins tend to react with hexoses, creating heteroaromatic moieties that condense with creatinine to form the imidazo moieties of heterocyclic amines.
In vivo studies of the involvement of gut bacteria in mutagenic activation have revealed high mutagen levels in the urine and feces of conventional rats fed a fried-meat diet compared to germ-free rats. A link between a meat-rich diet and colon and rectal cancer was hence established [35,36].
A high-fat diet has been suggested to enhance bile acid formation and the excretion of neutral sterols that promote colon carcinogenesis. Dietary fats increase fecal bile acid concentrations. The primary bile acids are cholic and chenodeoxycholic acid; these are converted to the secondary bile acids deoxycholic and lithocholic acid, respectively. Secondary bile acids could act as tumor promoters, as shown in animal investigations [37,38].
Chronic inflammation has been stated to play a part in the development of many forms of cancer, including colon cancer [39]. Carbohydrates, total fat, cholesterol, proteins, saturated fatty acids, and trans fats are pro-inflammatory dietary substances. On the other hand, fibers, minerals, vitamins, isoflavones, polyunsaturated fatty acids, β-carotene, and anthocyanidins have been found to exert anti-inflammatory properties [20]. Some studies suggest that Mediterranean diets are protective against colorectal cancer due to the polyphenols present in olive oil, which modulate several metabolic pathways involved in carcinogenesis [39].
Regular intake of fermented dairy products has been shown to diminish the risk of colon cancer [40][41][42][43]. Lactobacillales (lactic acid bacteria) found in fermented dairy products reduce the pro-carcinogen load in the intestine by lowering the concentrations of enzymes that convert pro-carcinogens into carcinogens, including β-glucosidase, β-glucuronidase, nitroreductase, and azoreductase [44]. Even occasional curd consumption was found to have a protective effect against colon cancer. Animal trials investigating the effects of probiotics on cancer incidence over a 20-week period found that rats fed a beef-only diet showed a 77% incidence of colon cancer, whereas an incidence of 40% was noted when they were fed beef along with Lactobacillus acidophilus [45].
Breast Cancer
It has been observed that the development of cancer in some sensitive tissues, such as the breast, is probably related to hormonal imbalance [46]. Breast cancer arises when the DNA in breast cells mutates, disrupting specific functions that control cell growth and division. Risk factors associated with breast cancer include early menarche and late menopause; increased levels of endogenous estrogen in postmenopausal women most likely confer a greater risk of postmenopausal breast cancer. Other risk factors include a late age at first pregnancy, hormonal imbalance, a sedentary lifestyle, obesity, and alcohol intake [47,48]. In addition, women who do not breastfeed are at an increased risk of developing breast cancer [49][50][51][52].
Diet may play a role in both promoting and inhibiting breast cancer development, as concluded by case-control research [53]. The American Cancer Society recommends eating more whole grain foods, vegetables, and fruits, and less red and processed meat and sweets. Evidence from several observational studies suggests that a higher intake of omega-3 fatty acids may be important in decreasing breast cancer risk, as fatty acids can influence cancer cell proliferation, angiogenesis, and metastasis [54,55]. Nevertheless, the results of epidemiologic studies concerning dietary factors are inconsistent, and additional research is needed.
A number of epidemiological studies have investigated the association between dietary fat intake and the risk of breast cancer [51]. Increased consumption of total and saturated fat was found to be positively associated with the development of breast cancer. However, another study conducted on 90,000 nurses found no association between dietary fat consumption and the incidence of breast cancer [39][40][41]. Even though no epidemiological study provides a strong positive correlation between the consumption of particular types of dietary fat and breast cancer risk, at least a moderate association does seem to exist.
Diets with a high glycemic index and glycemic load have been linked to an increased risk of breast cancer. However, a meta-analysis found no causal relationship between glycemic load and the occurrence of postmenopausal breast cancer [42].
Bladder Cancer
Bladder cancer is the most common cancer of the urinary tract and the ninth most common cancer in men [56]. It is substantially more common in males than in females [56]. Smoking is the most important risk factor for bladder cancer, causing about half of all bladder cancers in both men and women. Occupational exposure to aromatic amines used in the dye industry is also linked with bladder cancer; industries carrying a higher risk include leather, rubber, textiles, and paint, as well as printing companies. Certain dietary supplements containing aristolochic acid have the potential to cause urothelial cancers, including bladder cancer [57].
No single food by itself can prevent cancer. However, research shows that a healthy diet filled with various fruits, vegetables, whole grains, and other plant foods could inhibit carcinogenic development, consequently reducing bladder cancer incidence. People who drink a lot of fluids, especially water, each day tend to have lower rates of bladder cancer [58]. Animal studies have shown that the frequency of urination is inversely associated with the level of potential carcinogens in the urothelium. An increase in total fluid intake tends to reduce the contact time between carcinogens and the urothelium by diluting urinary metabolites and increasing the frequency of voiding. In a randomized experiment, increasing the water intake of 65 smokers for fifty days considerably diminished urinary mutagenicity [59].
Epidemiological findings on the relation between total fluid consumption and bladder cancer risk are conflicting. For example, the Health Professionals Follow-up Study indicated that a fluid intake of 2531 mL per day or above was linked with a 49% decrease in bladder cancer risk compared to a lower intake (<1290 mL per day) [60]. On the other hand, a case-control study performed in the U.S. reported a 41% elevation in bladder cancer risk with a total fluid intake of about 2789 mL/day in comparison to a low intake (<1696 mL/day) [61].
Phenolic compounds such as epigallocatechin gallate and resveratrol have displayed anticancer action against T24 cells in vitro [62][63][64][65][66][67][68][69][70]. A study demonstrated that oligomers of epicatechin, resveratrol, and catechin exhibited a noticeable apoptotic influence on the T24 cell line. In contrast, monomers of catechin, resveratrol, and epicatechin did not show anticancer effects on T24 cells, since cell viability was not significantly reduced compared to the control sample [71]. However, the antitumor mechanisms of catechin, epicatechin, and resveratrol oligomers are still not clearly understood [58,59]. Certain case-control investigations provide substantial evidence for the protective role of carotenoids in bladder cancer, particularly for patients subject to DNA damage [72,73]. Carotenoids exert their anticancer effect by inhibiting the development of precancerous lesions and scavenging DNA-damaging free radicals.
Data relating dietary habits to bladder cancer survival are limited. The effect of cruciferous vegetable consumption on bladder cancer survival was investigated, and a strong inverse association was observed [74]. This result was supported by previous clinical records on isothiocyanates, a group of favorable chemo-protective phytochemicals mostly present in cruciferous vegetables [75]. High consumption of fats, specifically animal fats, might elevate bladder cancer risk [76,77]. Mutagens associated with bladder cancer etiology may form during the heating of fried fatty foods. It is reported that substances such as heterocyclic amines and N-nitroso compounds, formed either during cooking or during the salt-drying of protein-rich foods, could be associated with bladder cancer incidence [78][79][80]. In addition, the intake of preserved meat, such as bacon, ham, and sausage, was linked with an elevated risk of bladder cancer [81,82].
Many case-control investigations have reported a considerably elevated risk with the intake of fried eggs [80][81][82][83]. A study showed a strong positive association with cholesterol intake, and it was estimated that half of the cholesterol intake came from eggs [84,85]. Nevertheless, an association with the intake of eggs itself was not established. The regular intake of fermented milk containing Lactobacillus casei strain Shirota, and of skimmed milk, decreased the occurrence of bladder cancer [86][87][88][89][90][91][92][93][94].
Renal Cancer
Although renal cancer is rare and accounts for around 2% of all cancers, its incidence rate has recently begun to increase worldwide. Little is known about the etiology of this type of cancer; however, smoking, hypertension, and obesity are the most established risk factors and, together with specific dietary factors, are believed to account for almost 40% of all cases [95,96]. A number of studies have attempted to determine the impact of macronutrients and micronutrients on cancer risk [97,98]. Numerous epidemiological investigations have established an inverse relationship between a healthy diet and cancer risk. Dietary fiber, one of the most commonly consumed macronutrient components, has demonstrated many biological activities.
In Europe, a meta-analysis was conducted to examine the link between renal carcinoma occurrence and the dietary consumption of carbohydrates, protein, fat, fiber, and many other factors. The results showed no correlation between macronutrient intakes and renal cancer risk, but supported the theory that dietary fiber consumption is negatively linked with renal cancer risk. A cohort study conducted in the United States revealed that total fiber intake was correlated with a significant reduction in renal cancer risk, by about 15-20% [99]. Plant-based, fiber-rich diets high in vegetables and fruits are recommended to prevent cancer and the chronic conditions associated with renal cell carcinoma [100]. Since renal cell carcinoma is an obesity-related malignancy, consuming fiber may be protective, as it enhances satiety and promotes weight loss by increasing stool bulk and shortening transit time [101]. Moreover, fiber may also regulate postprandial blood sugar by slowing the entry of glucose into the bloodstream [102]. Butyrate, a short-chain fatty acid produced by dietary fiber fermentation, has displayed antineoplastic activity. Additionally, fiber might also reduce renal cancer risk by controlling systemic inflammation [103].
A case-control study has indicated a positive correlation between renal cell carcinoma risk and high energy consumption (protein and fat), mainly from animal foods, and an inverse association with polyunsaturated fat consumption [104,105]. Carbohydrate intake has been implicated in several cancers, including renal cancer. An Italian case-control study suggested that foods with a high glycemic load (GL) and glycemic index (GI) were associated with an increased risk of renal cancer [24]. Hypertensive and obese individuals on a high-GI diet are at 2.7 times greater risk of developing renal cancer than individuals without these health issues. The same study established that GI, but not GL, was linked with renal cancer incidence, suggesting that the quality of carbohydrates might play a more significant role than their quantity.
Some foods, such as processed and sugary foods, contain ingredients that can trigger or worsen inflammation [106]. Numerous experimental and clinical investigations associate tumor progression with the up-regulation of pro-inflammatory molecules, and several studies have linked an increase in cancer risk with the inflammatory potential of the diet [79][80][81][82]. For example, the Western diet, comprising high-fat dairy products, refined grains, and red meat, has been linked with increased levels of pro-inflammatory molecules (interleukin-6, fibrinogen, and C-reactive protein) [107]. On the other hand, the Mediterranean diet, characterized by a high olive oil content, whole grains, green vegetables, and fruits, has been correlated with less inflammation [108]. A study observed a positive relationship between the inflammatory potential of the diet and renal cancer among older women in Iowa [88]. This outcome supports the suggestion that those consuming pro-inflammatory foods are at a greater risk [109][110][111][112][113][114][115][116][117][118][119][120].
Nevertheless, the indications from prospective investigations are scarce, and the World Cancer Research Fund established that there is no reliable information for an association between any nutrients or foods and renal cancer risk [121][122][123]. Prior case-control studies have proposed that a high consumption of animal products is linked with an increase in renal cell carcinoma, even though prospective data are limited [124]. A Swedish investigation indicated that a diet rich in fruits and vegetables with modest alcohol intake was associated with increased risk [89]. Findings from a population-based case-control investigation of renal cancer performed in the United States supported the protective role of vegetables and considered high meat intake to be deleterious [125]. On the contrary, the European Prospective Investigation into Cancer and Nutrition observed no correlation with fruit and vegetable consumption [126]. A recent meta-analysis of 13 observational studies proposed an inverse association between vitamin E intake and renal cancer risk [122]. Another study showed an increased risk of renal cancer linked with the consumption of fried/sautéed meat and low intakes of vitamin E or magnesium. Although varied and inconclusive results have been reported on diet and renal cancer, the evidence suggests a possible role of nutrition in renal cancer development. The food groups considered to be protective are green leafy vegetables, fruits, whole grains, nuts, and low-fat dairy, and the food groups showing the greatest increase in risk were butter, fried food, and alcohol [121,127].
Carcinogenic Food Components
Genotoxicity is the property by which chemical agents can damage genetic information within a cell and may induce mutations that lead to cancer. Studies on humans are limited; however, several commonly consumed foods and their components have been screened for their carcinogenic effects in various animal models (Table 1).
Table 1. Foods with reported carcinogenic effects, by cancer site.

Breast / Red meat: (1) Increasing consumption of red meat was associated with an increased risk of invasive breast cancer. (2) Dietary heme iron, fat, and N-glycolylneuraminic acid, compounds found in red meat, are indicated to possibly increase tumour formation. [119,120]

Breast / Alcohol: (1) Increases circulating levels of estrogen in both premenopausal and postmenopausal women, which might occur through reduced steroid degradation and increased aromatase activity, and enhances the transcriptional activity of ER. (2) Contributes to carcinogenesis partly through oxidation from alcohol metabolism and oxidative stress from the production of the alpha-hydroxyethyl radical, a reactive oxygen species that is then metabolized to acetaldehyde. [121,122]

Breast / Dairy milk: (1) Higher intakes of dairy milk were associated with a greater risk of breast cancer. [123]

Colon / Red meat: (1) High intakes of red and, in particular, processed red meat in unbalanced diets contribute to CRC development; PAHs, HCAs, and dietary NOCs can initiate mutations. (2) There is a dose-response relationship between heme iron and the promotion of colon carcinogenesis, through the fat peroxidation pathway and the N-nitroso pathway, in which heme iron from red meat or nitrosyl heme from processed meat catalyses the endogenous production of NOCs and of malondialdehyde, a carcinogen; heme iron also promotes the production of reactive oxygen species (ROS), which induce genetic mutations. (3) Participants who reported consuming an average of 76 g/day of red and processed meat had a 20% (95% confidence interval (CI): 4-37) higher risk of colorectal cancer than those consuming 21 g/day. [124-126]

Bladder / Red meat: (1) Processed meat may be positively associated with bladder cancer risk; red meat was linearly associated with bladder cancer risk in case-control studies, with a pooled RR of 1.51 (95% CI 1.13, 2.02) for every 100 g increase per day. (2) Intake of processed red meat was significantly associated with the incidence of bladder cancer after multivariable adjustment (highest vs. lowest quintile: HR, 1.47; 95% CI, 1.12-1.93; p-trend = 0.008); in contrast, there was only a suggestive, non-significant association between the intake of total processed meat and bladder cancer risk after multivariable adjustment (highest vs. lowest quintile: HR, 1.16; 95% CI, 0.89-1.50; p-trend = 0.073). (3) Increased bladder cancer risk was found for a high intake of organ meat (hazard ratio comparing highest with lowest tertile: 1.18, 95% CI: 1.03, 1.36, p-trend = 0.03). (4) Liver and salami, pastrami, or corned beef were associated with increased bladder cancer risk, as was the consumption of meats with high nitrate/nitrite, amine, and heme content. [127-130]

Renal / Meat: (1) BaP intake, a PAH in barbecued meat, was positively associated with RCC. (2) A meta-analysis indicates a significant positive association between red and processed meat intake and RCC risk; a large prospective cohort study observed an increased risk of RCC with high consumption of nitrate and nitrite, the precursors of NOCs (hazard ratio = 1.28; 95% CI, 1.10-1.49). [131-133]

Our food selection is influenced by availability and the culture we live in. Moreover, based on our experience, we tend to avoid foods that can cause illness, and we ignore many beneficial foods owing to a lack of knowledge of their nutritional content, or because of taste, beliefs, societal associations, or cost. There are estimated to be about 250,000 flowering plants, of which 11,000 are used as foods, spices, or flavoring agents, including vegetables, fruits, and nuts. The foods we eat are therefore only a small percentage of those available around us [128]. The major nutrients in our food are carbohydrates, proteins, fats, vitamins, and minerals; fiber and water, which the body also needs, are present as well. It is useful to review the functional and beneficial roles of these naturally occurring components, along with their effects on cancer development.
In many countries food has a crucial role in influencing cancer incidence. Knowledge of the major sources of macro- and micronutrients is important in order to understand differences in the diet-cancer link across geographical areas and to provide better nutritional guidelines. The type and amount of food consumed can have both beneficial and adverse (carcinogenic) effects on the human body [93][94][95][96][97][98][99][100]. As dietary habits are linked to one-third of all cancers, it is critical to look for genotoxic substances or contaminants in foods. Much emphasis has been placed on cooked, uncooked, fermented, and fresh food materials. Several lines of evidence indicate that cooking conditions and dietary habits can contribute to human cancer risk through the ingestion of genotoxic compounds such as acrylamide, heterocyclic amines, and polyaromatic hydrocarbons found in heat-processed foods [101][102][103][104][105][106][107][108][109][110][111][112][113][114][115][116]. The presence of several highly mutagenic substances in cooked meat and fish has been pointed out by many researchers in the past [116,117]. The heating process generates genotoxicants such as aromatic hydrocarbons and heterocyclic amines in beef through processes involving creatin(in)e, sugars, and amino acids [117,118]. Heterocyclic amines are relatively new as dietary genotoxicants; however, they have been found to induce breast, colon, and prostate cancers in animal research. These genotoxicants promote carcinogenesis by causing DNA damage and gross chromosomal aberrations. The total caloric intake also has a significant impact on cancer incidence.
Effect of Carbohydrate on Carcinogenesis
Carbohydrates are a broad category of biomolecules. On the basis of a few preclinical findings, carbohydrates have been ascribed a deleterious role in the field of cancer research. Carbohydrate intake has been hypothesized to modulate cancer risk depending on the amount and type consumed. Certain studies have reported that carbohydrate consumption induces microbial and epigenetic modulations, as well as endocrine and systemic alterations, that may influence cancer development [129,130]. Many in vitro and animal trials have presented various mechanisms through which carbohydrates influence cancer development. However, the epidemiologic evidence linking dietary carbohydrates to cancer development has remained uncertain [131].
The links between dietary carbohydrates and cancer risk are hypothesized to involve mechanisms that directly implicate players in insulin-mediated pathways across various tissues, as well as the modulation of IGF-1 bioactivity. One of the primary pathways in cellular proliferation is the insulin/IGF-1 signaling axis, which plays a critical role in glucose metabolism.
The interaction of insulin with the insulin receptor (IR) is crucial to glucose uptake and energy homeostasis [132]. Animal models have demonstrated that insulin-IR signaling activates signal transduction pathways directly associated with cellular proliferation. In particular, the hyperstimulation of IR by circulating insulin is a hallmark of various cancers [133]. Many observational studies have supported an association between hyperinsulinemia and an enhanced risk of adiposity-related cancers [134,135].
Dietary fibers are indigestible complex carbohydrates found mainly in plants, consisting of pectin, lignin, cellulose, and hemicellulose. Mammalian digestive enzymes do not break down fiber, which is instead partially metabolized by the colonic microflora. Some fibers are water-soluble, and some are insoluble. The consumption of fiber-rich foods, particularly those high in pentoses, is associated with a decreased colon cancer risk [136].
A Cochrane systematic review suggested that dietary fiber may reduce the risk of adenomatous polyps of the colon, which are believed to be precursors of colon cancer [137]. Studies have suggested that the protective effects of fiber may be associated with the lignans found in whole grain foods. Lignans are a group of diphenolic compounds that exert cytostatic activity against colon cancer cell lines [138]. The American Institute for Cancer Research (AICR) recommends 30 g of dietary fiber each day to lower cancer risk; its report indicates that each 10 g increase in dietary fiber is linked with a 7% decline in the risk of colorectal cancer [24]. Fiber may play a role in lowering the risk of other cancers, but the evidence is still limited, and the data from animal studies are mixed: some studies revealed protection, whereas others showed no effect. Therefore, further rigorous studies are needed to establish the effect of fiber on cancer prevention.
Effect of Fat on Carcinogenesis
Many case-control studies have found positive associations between breast cancer and dietary fat intake; however, cohort studies have failed to replicate these findings. According to several studies, the consumption of red meat is strongly associated with colon cancer due to the mutagenic heterocyclic amines found in cooked meats [139]. A high intake of animal and saturated fats may also be associated with prostate cancer risk [140].
Diets high in fat have been shown to augment the process of carcinogenesis in numerous models [141]. The effect depends on the type as well as the amount of fat consumed. Vegetable oils containing polyunsaturated fatty acids of the linoleic acid family (n-6) are known to enhance mammary tumorigenesis, whereas fish oil containing polyunsaturated fatty acids of the linolenic acid family (n-3) had an inhibitory effect at higher levels of intake.
At present, the exact mechanism by which dietary fat modulates carcinogenesis has not been elucidated. However, an influence on the synthesis of prostaglandins and leukotrienes may be the common mechanism by which dietary fats modulate carcinogenesis [142]. Certain studies have reported a role of dietary fat in altering gene expression, which could lead to cancer development [143].
Effect of Protein on Carcinogenesis
The significant effect of dietary protein on carcinogenesis appears to be due to its caloric value. Excessive dietary protein increases colonic ammonia levels, and ammonia may subsequently enhance the development of chemically induced colonic tumors [144]. However, few epidemiologic studies have reported any implication of dietary protein in cancer development. Some studies show associations of colon and breast cancers with animal protein, which is considered to be a carcinogen [145].
Effect of Micronutrients on Carcinogenesis
An inadequate diet has detrimental effects on the immune system and various metabolic functions of the body, and it lowers tolerance to cancer treatment. Epidemiologic and experimental evidence suggests that several micronutrients, including vitamins and minerals, contribute to cancer prevention, and diets lacking these micronutrients could be associated with an increased risk of cancer [146][147][148][149]. Micronutrients such as vitamin C, vitamin E, retinoids, and selenium are not only antioxidants, but also have many essential metabolic functions. They have immunomodulating and apoptosis-inducing properties and regulate cell proliferation and differentiation. In vitro, animal, and human studies have shown that antioxidants reduce cancer cell growth through a variety of mechanisms, including an increase in cell differentiation and apoptosis [150][151][152][153][154].
Anticancer Bioactive Compounds
Cancer is a devastating disease that has claimed many lives. Natural bioactive agents obtained from plants are gaining popularity for their anticancer activities [155][156][157]. Several studies have found that plant-based bioactive compounds can enhance the efficacy of chemotherapy while also ameliorating some of its side effects. An increasing number of reports show that many phenolic compounds have potential inhibitory effects on cancer invasion and metastasis. Each medicinal plant contains different bioactive compounds that act synergistically to produce the desired protective effect [156]. Natural compounds such as flavonoids, alkaloids, saponins, terpenes, and lignans play an essential role in suppressing cancer cell-activating enzymes, proteins, and signaling pathways [157][158][159][160][161][162][163][164][165][166][167][168]. Other natural compounds with potent anticancer activity are taxol, camptothecin, and vinblastine, isolated from Taxus brevifolia, Camptotheca acuminata, and Catharanthus roseus, respectively [169,170]. Table 2 lists various anticarcinogenic food items along with their components.
Quinoline is a versatile bioactive compound with a wide range of pharmacological activities, such as anticancer and anti-inflammatory effects, and is regarded as a superlative molecule in drug discovery. The quinoline scaffold plays an important role in anticancer drug development by inducing apoptosis, cell cycle arrest, angiogenesis inhibition, and the modulation of nuclear receptor responsiveness [171]. Flavonoids such as catechin, cyanidin, luteolin, epicatechin, quercetin, and kaempferol exert anticancer properties on various cancer cell lines through several mechanisms, such as inhibiting the phosphorylation of the epidermal growth factor receptor (EGFR), increasing DNA fragmentation, suppressing signal transduction enzymes, and counteracting angiogenesis. Various alkaloids, such as piperine, chabamide, guineensine, piperlongumine, and pellitorine, have apoptosis-inducing effects, while terpenoids are capable of inducing cell cycle arrest, down-regulating signal transduction by the antiapoptotic protein Bcl-2, and activating the pro-apoptotic mediators Bax and Bak [172]. There are 78 flavonoids and xanthones isolated from Cudrania tricuspidata that exert inhibitory effects on the invasion and migration of tumor cells and modulate apoptosis.
The chemopreventive and anticancer activities of Aloe vera are due to bioactive compounds such as anthraquinones, chromones, and polysaccharides [173,174], which work by inhibiting the proliferation, invasion, and migration of cancer cells. Honokiol, the major bioactive constituent of Magnolia species, exerts its anticancer effect by targeting apoptosis pathways, inhibiting angiogenesis, and regulating cell cycle arrest [175,176].
Osthol is a natural coumarin extracted from plants of the Umbelliferae family. This bioactive compound induces apoptosis by modulating the activation of different apoptotic proteins, including Smac/DIABLO, poly-ADP ribose polymerase, caspase-3 and caspase-9, and p53. It can also inhibit metastasis through different molecular mechanisms, such as suppressing the HGF/c-Met signaling pathway and inhibiting the expression of Smad 2, 3, and 4 [177,178]. Osthol also has a stimulatory effect on the extrinsic apoptotic pathway via increasing the levels of caspase-8 [179]. Many studies have confirmed the efficacy of osthol as a protective and therapeutic agent in various cancers, such as cervical, ovarian, colon, prostate, and lung cancers and chronic myeloid leukemia [180].
Emerging evidence supports a link between garlic consumption and decreased cancer incidence, attributed to compounds such as S-allylcysteine and S-allylmercaptocysteine, which have antiproliferative potential [181]. Furthermore, some experimental studies have suggested the cytotoxic potential of tanshinone IIA, isolated from Scutellaria herbs, in breast cancer cell lines [182]. Extracts of Nelumbo nucifera (lotus) were found to suppress cell growth in non-small-cell lung cancer [183,184]. Another bioactive compound isolated from Nelumbo nucifera is 7-hydroxydehydronuciferine, which has demonstrated strong anticancer bio-functions and has inhibited melanoma tumor growth in vivo and in vitro [185].

Table 2 (excerpt). Anticancer food constituents, by cancer site.

Breast / … : Induces the intrinsic pathway of apoptosis in breast cancer cells; inhibits tumor progression in mice. [134]

Breast / Pomegranate (Punica granatum L.), ellagic acid: Inhibited the murine breast cancer WA4 cell line, with induction of cell cycle arrest at the G0/G1 phase and apoptosis through caspase-3 activation; reduced cell proliferation and induced apoptosis in MCF-7 cells; showed antiangiogenic potential (significantly inhibited tumour growth and VEGF receptor 2 (VEGFR-2) phosphorylation); produced synergistic cytotoxic effects. [135-138]

Breast / Rosemary, carnosic acid: Decreases cell viability and proliferation, enhances the effect of chemotherapeutics, increases apoptosis, and decreases cell transformation.

Bladder / Turmeric, curcumin: β-catenin expression was significantly up-regulated; cell proliferation, migration, and invasive ability were reduced. [162]

Bladder / Black seed oil, thymoquinone (TQ): Inhibits proliferation and induces apoptosis via an endoplasmic reticulum stress-dependent mitochondrial pathway; attenuates mTOR activity and inhibits PI3K/Akt signalling in T24 cell lines.
Pervilleine, isolated from the roots of Erythroxylum pervillei, has shown anticancer effects in combination with vinblastine against the multidrug-resistant oral epidermoid cancer cell line (KB-V1) [188].
Allopathic Cancer Treatment
Effective cancer treatments include surgery, chemotherapy and radiotherapy, as well as newer techniques such as interventional radiology and immunotherapy. The type of treatment that one receives depends on the type of cancer and how advanced it is. Additionally, a combination of treatments is usually needed to achieve the best results [189].
A brief discussion of the allopathic treatment of cancer is summarized below. Figure 2 illustrates the allopathic treatments involved in cancer, and Table 3 lists the various treatment modalities.
No. Cancer Therapy Details Reference

1. Radiotherapy [175]

Radiotherapy individualization based on hypoxia markers

Elevates the oxygen in the blood by breathing in high oxygen levels before and during the irradiation, to destroy hypoxic cells using bioreductive compounds or to radiosensitize hypoxic cells using oxygen-mimicking drugs.
Radiotherapy individualization based on FDG-PET
Fludeoxyglucose (18F-FDG) intensity on a positron emission tomography (PET) image represents the level of glucose uptake by active malignant cells.
Markers of DNA repair
One of the best biomarkers of tumor radioresponse is γH2AX, a phosphorylated histone protein that appears after the induction of DNA double-strand breaks.
Cancer-stem-cell markers

CD44 is considered one of the best cancer stem cell markers. A significant correlation of CD44 mRNA expression, as well as of the CD44 immunohistochemical score, with local tumor control after radiotherapy was shown in a hypothesis-driven approach.
Radiotherapy individualization based on EGFR status
The application of the anti-EGFR antibody cetuximab improved locoregional tumor control compared to radiotherapy alone.
2. Gene Therapy [176]

Oncolytic Virotherapy

It uses replication-competent viruses, which are able to proliferate selectively in tumor cells. It can directly lyse cancer cells, and it can also introduce wild-type p53 tumor suppressor genes into cells lacking the tumor suppressor gene.
Gendicine is a non-replicative vector, where the E1 gene is replaced with the p53 cDNA gene. The expression of p53 in tumor cells stimulates the anticancer effect by triggering the apoptotic pathway and inhibiting damaged DNA repair.
Oncolytic recombinant ad5 (rAd5-H101)

It has been shown to treat refractory nasopharyngeal cancer. Oncorine is an ad5 virus with a deletion in the E1B 55K gene. In the wild-type virus, the E1B 55K protein inactivates host-cell p53 and thereby blocks activation of the apoptotic pathway; deleting it restricts viral replication to tumor cells with inactivated p53.
Imlygic (Talimogene Laherparepvec)
Administration of Imlygic has been shown to cause apoptosis of cancer cells, improve antigen presentation, and increase the antitumor response.
Rexin-G (Mx-dnG1)
Rexin-G synthesizes cytocidal dnG1 proteins that arrest the cell cycle in the G1 phase, leading cancer cells into the apoptotic pathway.
3. Thermotherapy [177]

Thermal Ablation Options

It destroys and eradicates the tumor by overheating, using temperatures from 55 °C to 100 °C as an external excitation. It can treat many types of cancer, such as kidney, liver, lung, rectal, and prostate cancers.
Radio Frequency Ablation (RFA)
It uses a high-frequency heating source, from 375 to 500 kHz, to kill the targeted cells. It has shown positive results against different kinds of cancer, including breast, liver, and brain cancers.
Microwave Ablation (MWA)
It uses an electromagnetic (EM) signal to heat the selected area and induce a direct hyperthermic injury. The frequency range spans 915 MHz to 2.45 GHz.
High-Intensity Focused Ultrasound (HIFU)

It sends an ultrasound (US) beam focused on overheating a targeted tissue in order to cause coagulation necrosis. It is highly precise in killing tumors and can relieve some related health issues.
LASER Ablation
A LASER (Light Amplification by Stimulated Emission of Radiation) produces a monochromatic, directed, and focused beam of light. It has been used to destroy different tumors, especially brain tumors.
Cryoablation
Cryotherapy uses low temperatures of −30 to −40 °C to create a freezing zone and destroy a targeted region. The probe tip is supplied by a source of nitrogen or argon to cool the tissue to −100 °C.
Chemotherapy
Chemotherapy refers to the use of chemical agents to kill or control cancer cells. These agents frequently, but not exclusively, induce cytotoxicity through apoptosis, a cell-death modality that is non-immunogenic [190]. Many different kinds of chemotherapy drugs are used to treat cancer, either alone or in combination with other drugs or treatments. Paclitaxel is a popular chemotherapeutic agent effective against a broad spectrum of cancers, including head and neck cancer, small-cell and non-small-cell lung cancer, breast and ovarian cancers, colon cancer, melanoma, and multiple myeloma [189].
Radiotherapy
Radiotherapy is traditionally used in combination with surgery or chemotherapy for treating cancer and is the most important non-surgical modality for curative cancer treatment. Radiation-immunotherapy, a combination of radiotherapy and immunotherapy, has also shown effective results [190,191]. Radiotherapy can cure cancers alone or in conjunction with other treatments; it can also palliate the symptoms of incurable cancer.
A key challenge is to maximize radiation to cancer cells while minimizing injury to the adjacent healthy cells [192]. Technological advancements in the field of radiology, such as intensity-modulated radiotherapy, stereotactic body radiotherapy, and image-guided radiation therapy, have helped maintain a balance between cure and toxicity of treatment.
These technologies ensure precise radiation delivery to the target tumor cells and reduce damage to the surrounding healthy cells [193].
Immunotherapy
The immune system plays an important role in regulating tumor growth. Some types of inflammatory responses tend to favor tumor growth, while a tumor-specific adaptive immune response can potentially restrict it [194]. Cancer immunotherapy, also known as immuno-oncology, is a form of cancer treatment that uses a person's immune system to fight, control, and eliminate cancer. The goal of immunotherapy is to boost or restore the ability of the immune system to detect and destroy cancer cells by overcoming the mechanisms by which tumors suppress the immune response [194]. Immunotherapeutic strategies include adoptive cellular immunotherapy, immune checkpoint inhibitors, cytokines, cancer vaccines, oncolytic viruses, and targeted antibodies. The traditional approach to immunotherapy is to increase the frequency of tumor-specific T cells through the administration of cancer vaccines, cytokines, and adoptive cell transfer. Another approach is to trigger innate immune activation and inflammation in the tumor microenvironment with interferons and Toll-like receptor agonists. The most effective strategy to trigger antitumor immune responses is to target various checkpoints of immune cell activation, such as programmed cell death protein 1 (PD1) and cytotoxic T lymphocyte-associated protein 4 (CTLA4), using monoclonal antibodies (mAbs) and the modulation of regulatory T-cells [195]. Immune checkpoint inhibitors have proven successful, as these agents appear to overcome the mechanisms by which tumors suppress antitumor immune responses. It has been shown that the anti-CTLA-4 antibody ipilimumab increases the survival of patients with metastatic melanoma for whom conventional therapies have failed [196,197]. Sipuleucel-T is a therapeutic vaccine found to be effective in prostate cancer, prolonging overall survival [198].
Vascular abnormalities are a hallmark of most solid cancers and are driven by increased levels of proangiogenic factors such as angiopoietin 2 and vascular endothelial growth factor (VEGF). The rational use of drugs that target these factors helps to stimulate the immune response and normalize the abnormal vasculature. Such drugs convert an immunosuppressive tumor microenvironment into an immune-supportive one and trigger the infiltration of immune effector cells into the tumor. Vascular normalization and immune responses are reciprocally regulated. Therefore, combining immunotherapies and antiangiogenic therapies might enhance the potential of immunotherapy and reduce the risk of immune-related side effects [199].
Targeted Therapy
In targeted therapy, drugs are designed to precisely target cancer cells without affecting the surrounding normal cells. These drugs are classified as small molecules and large molecules. Small-molecule drugs are small enough to enter a cancer cell and work by targeting and blocking a specific substance inside the cell, thus destroying the cancer cell. For example, imatinib treats chronic myelogenous leukemia and other cancers by blocking tumor-activating signals [200,201]. Large molecules, such as some mAbs, are too big to fit into a cell; they work by attacking and destroying proteins or enzymes on the cell surface and suppress tumor growth by interrupting the interactions between ligands and receptors. Examples include alemtuzumab, used in chronic leukemias; trastuzumab, used in breast cancers; and cetuximab, used for lung, head, and neck cancers [202,203].
Antibody-targeted therapy: mAb-based treatment has been established as one of the most successful therapeutic strategies for both hematologic malignancies and solid tumors. mAbs exert their actions through various mechanisms, such as antibody-dependent cellular phagocytosis, antibody-dependent cellular cytotoxicity, apoptosis, complement-dependent cytotoxicity, and the blockage of signal transduction. Efforts are being made to maximize the efficacy and minimize the toxicity of these mAbs by loading them with cytotoxic drugs. Such molecules are called antibody-drug conjugates (ADCs), which are believed to be tumor-specific. ADCs combine the specificity and favorable pharmacokinetics of mAbs with the high cytotoxic potential of small-molecule drugs.
Selected examples of targeted cancer therapy, grouped by mechanism of action, are mentioned below [203][204][205][206]. Molecular targeted therapy enables the delivery of anticancer drugs with high-precision targeting. The therapeutic agent is often a small-molecule drug that targets markers inside the cell, or an antibody that attaches to specific targets on the outer surface of cells. Molecular targeted therapeutic agents act on growth factor receptors, cell surface antigens, and the signal transduction pathways that regulate cell death, angiogenesis, and metastasis [206]. Agents used in molecular targeted therapy include mAbs, gene therapy, and immunotherapeutic cancer vaccines. VEGF is a crucial stimulus of angiogenesis, and blocking it is an effective approach to treating cancer in humans. VEGF receptors (VEGFRs) are expressed in different types of leukemias and hematological malignancies. Dovitinib is a potent inhibitor of VEGFRs and has shown efficacy in metastatic melanoma, metastatic RCC, breast cancer, and acute myeloid leukemia. Ligand-targeted therapy ensures selective drug delivery to pathological cells for both therapeutic and diagnostic purposes, with the advantage of limited side effects and toxicity. This targeted approach is based on the observation that receptors are overexpressed on pathological cells compared to normal tissues.
Therapeutic Cancer Vaccines
Therapeutic cancer vaccines are classified as patient-specific and patient-nonspecific vaccines. Patient-specific vaccines are derived from the patient's own cancer cells, whereas patient-nonspecific vaccines work by activating a general immune response that may have an anticancer effect in some patients [207]. Therapeutic cancer vaccines target specific tumor-associated antigens by stimulating T-cell expansion and infiltration, resulting in antigen-specific cytotoxicity. Common protein targets of therapeutic cancer vaccines include viral proteins (e.g., hepatitis C virus, human papillomavirus), tissue lineage antigens (e.g., tyrosinase, prostatic acid phosphatase), and oncofetal antigens (e.g., carcinoembryonic antigen) [208].
Thermal Therapy
The thermal ablation of cancer involves techniques that use heat (hyperthermia) or cold (hypothermia) to kill neoplastic tissue. Cell necrosis develops at temperatures higher than 60 °C or lower than −40 °C, and prolonged exposure to temperatures ranging from 41 °C to 55 °C is effective in destroying tumor cells. Thermal therapy can be performed using five therapeutic strategies, namely cryotherapy (≤ −50 °C for >10 min), moderate cooling (0-10 °C for 10 min), low-temperature hyperthermia (39-41 °C for up to 72 h), moderate-temperature hyperthermia (42-45 °C for 15-60 min), and high-temperature thermal ablation therapy (>50 °C for 4-6 min). These five approaches affect tissues through an increase or decrease in oxygenation, blood perfusion, and cellular metabolism, causing protein denaturation, tissue necrosis, and cellular coagulation [209]. Hyperthermia enhances the sensitivity of tumor cells to radiation. Different clinical trials have shown that hyperthermia combined with radiotherapy and/or chemotherapy significantly reduced tumor size in many types of cancer, including melanoma, sarcoma, and lung, breast, esophageal, and brain cancers. The combination of hyperthermia with chemotherapy also allowed deeper penetration of drugs into tumor tissues, enhancing treatment efficacy [210][211][212]. High tumor interstitial fluid pressure (IFP) reduces oxygenation, causes insufficient blood perfusion, and can impede the delivery of therapeutics to the tumor site. Systemic heating may stimulate effective thermoregulatory responses, reducing tumor IFP and increasing vascular perfusion within cancer tissues.
Gene Therapy
Gene therapy introduces genetic materials such as DNA or RNA into cancerous cells in order to suppress their growth. This can be achieved by replacing the mutated tumor suppressor gene with a normal one, inhibiting tumor angiogenesis and inhibiting the expression of an oncogene [213]. Thus, gene therapy aims to replace, modify, or delete abnormal gene(s) in a targeted cell. Gene delivery systems comprise viral (or bacterial) vectors and non-viral vectors [214]. The most important viral vectors are retroviruses and adenoviruses, whereas non-viral vectors are naked DNA, particle based, and chemical based. Viral vectors are effective in gene delivery and cell transfection, but their application is limited due to their immunogenicity and toxicity. In comparison, non-viral vectors are less toxic, though they require delivery vehicles to invade different types of cells [205].
Anticancer Foods Show Efficacy in Modulating Cell Proliferation and Improve Overall Survival
Diet and nutrition are important factors in the prevention of various cancers and have a significant impact on disease outcome in patients during and after therapy. Plant-based foods are rich in cancer-fighting molecules such as polyphenols, flavonoids, terpenoids, and botanical polysaccharides. Epigallocatechin-3-gallate, gallic acid, gallocatechin-3-gallate, and lupeol are bioactive constituents with anticancer properties found in green tea [215]. Multiple mechanisms of action have been implicated by which dietary agents contribute to the prevention or treatment of various cancers. Fruits and herbs such as black elderberry, guava, and rosemary have chemopreventive and chemotherapeutic potential, and work by targeting key molecular pathways involving NF-κB, cFLIP, FAS, KRAS, PI3K/AKT, and WNT signaling [216]. Phenolics (procyanidins B2 and B1) found in avocado, apple, and bilberry are promising compounds for cancer prevention [215,217]. Vegetables such as broccoli and cabbage are rich in anticancer compounds such as 1H-indole-3-methanol, indole-beta-carboxylic acid, and 4-methoxy-glucobrassicin. These compounds target molecular networks that control cell division, apoptosis, and angiogenesis.
In summary, the influence of nutrition on cancer metabolism is undoubtedly a topic of widespread interest. Cancer risk can be lowered by following a nutritious diet, increasing physical activity, and maintaining a healthy body weight; while "prevention" is probably an exaggeration, risk reduction appears to be backed by research. It has been proposed that the cause of most cancers can be explained in part by the shift from a predominantly plant-based diet to a high-fat, high-sugar diet. According to epidemiologic research, changes in lifestyle and dietary variables may play a role in determining the risk of certain cancers. When a cancer diagnosis is made, many people wonder whether lifestyle changes will help slow tumor progression. Calorie restriction is a well-established dietary strategy for cancer prevention and longevity in animal models.
Conclusions
This review focused on the mechanisms of some types of cancer, as well as aspects of possible prevention and treatment strategies, proposing some anticancer foods that show efficacy in modulating cell proliferation and improving survival in these different types of cancer. Research on the role of nutrition in cancer development is vast, and it is now clear that nutrition plays a major role in cancer development. Diet is just one of the lifestyle factors that influence the risk of developing cancer; alcohol, tobacco, lack of physical activity, and obesity are among the others. Phytonutrients found in fruits and vegetables exert a synergistic effect to lower cancer risk through various mechanisms, including hormone regulation, the downregulation of certain carcinogenic pathways, and the attenuation of inflammation. The ideal diet appears to be a well-balanced one, high in lean proteins, whole grains, fruits and vegetables, and low-fat dairy, and low in red meat, sugar, coffee, and alcohol. There is no scientific evidence that a particular type of diet can cure or treat cancer. However, there is ample evidence suggesting that a healthy diet and lifestyle can help reduce the risk of developing certain types of cancer. Further investigations are required to understand inter-individual and geographical variations in diet and their relative contributions to cancer risk.
"year": 2022,
"sha1": "20b2dd58445dc09d157c89aad59cb05dab2d6f69",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/6/1794/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d20e1879edc520fd8296c25409b5211ee1f49486",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Irradiation of the secondary star in X-ray Nova Scorpii 1994 (=GRO J1655--40)
We have obtained intermediate-resolution optical spectra of the black-hole candidate Nova Sco 1994 in June 1996, when the source was in an X-ray/optical active state (R ~ 15.05). We measure the radial velocity curve of the secondary star and obtain a semi-amplitude of 279 ± 10 km/s; a value which is 30 per cent larger than the value obtained when the source is in quiescence. Our large value for K_2 is consistent with 60 (+9, -7) per cent of the secondary star's surface being heated, compared to 35 per cent, which is what one would expect if only the inner face of the secondary star were irradiated. Effects such as irradiation-induced flows on the secondary star may be important in explaining the observed large value for K_2.
INTRODUCTION
The soft X-ray transients, a subclass of the low-mass X-ray binaries distinguished by their X-ray outbursts, have proved to be an ideal hunting ground for stellar-mass black hole candidates (Tanaka & Shibazaki 1996). The system Nova Sco 1994 (=GRO J1655-40) is particularly interesting since, as well as being a source of superluminal jets (Zhang et al., 1994; Harmon et al., 1995), its optical brightness and partial eclipse features make it one of the few systems that have yielded a reliable estimate of the mass of the collapsed star.
Nova Sco 1994 was discovered on 27 July 1994 with BATSE on board the Compton Gamma Ray Observatory (Zhang et al., 1994). It has been studied extensively during the past few years in X-rays and at optical and radio wavelengths (Bailyn et al., 1995a,b; Zhang et al., 1995; van der Hooft et al., 1998). Strong evidence that the compact object in Nova Sco 1994 is a black hole was presented by Bailyn et al. (1995b), who initially established a spectroscopic period of 2.601 ± 0.027 days, classified the secondary as an F2-F6 IV type star, and suggested a mass function f(M) = 3.16 ± 0.15 M⊙. An improved value of f(M) = 3.24 ± 0.09 M⊙ was presented by Orosz & Bailyn (1997) using both quiescent and outburst data, derived from a radial velocity semi-amplitude of 228.2 ± 2.2 km/s. Shahbaz et al. (1999), using only quiescent data, determined the true radial velocity semi-amplitude K2 = 215.5 ± 2.4 km/s, which gives a revised value for the mass function of f(M) = 2.73 ± 0.09 M⊙. They also measured the rotational broadening of the secondary star, which then gives the binary mass ratio q ~ 0.39 (= M2/M1, where M1 and M2 are the masses of the compact object and the secondary star, respectively).
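For reference, the mass function quoted above follows directly from the orbital period and the radial-velocity semi-amplitude via f(M) = P K2^3 / (2πG). The short script below (our illustrative sketch, not code from any of the cited papers) reproduces the quiescent value and shows how strongly an inflated K2 propagates into f(M):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

def mass_function(P_days, K2_kms):
    """f(M) = P * K2^3 / (2*pi*G), returned in solar masses."""
    P = P_days * 86400.0       # orbital period in seconds
    K2 = K2_kms * 1e3          # semi-amplitude in m/s
    return P * K2**3 / (2.0 * math.pi * G) / M_SUN

# Quiescent semi-amplitude of Shahbaz et al. (1999): K2 = 215.5 km/s
print(mass_function(2.62168, 215.5))   # ~2.73 M_sun, as quoted
# Outburst semi-amplitude measured here: K2 = 279 km/s
print(mass_function(2.62168, 279.0))   # ~5.9 M_sun, formally
```

The cubic dependence on K2 is why an irradiation-biased semi-amplitude matters so much for the inferred compact-object mass.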
The effect of heating of the secondary is to shift the 'effective centre' of the secondary, weighted by the strength of the absorption lines, away from the centre of mass of the star. One expects this to distort the radial velocity curve significantly, rendering a sinusoidal fit inadequate and leading to a spuriously high radial velocity semi-amplitude. In order to quantify this effect we have determined the radial velocity variations of the secondary star in Nova Sco 1994 when it was in outburst, and compared our results with others obtained from data taken when the source was in different X-ray states.

OBSERVATIONS AND DATA REDUCTION

Spectroscopy

The spectroscopic observations were obtained in June 1996 in Chile using the Danish Faint Object Spectrograph and Camera (DFOSC). We used grating #8, which gave a dispersion of 1.26 Å per pixel and a wavelength coverage from 5865-8336 Å. The Loral 2048×2048 CCD was used, binned by a factor of two in the spatial direction in order to reduce the readout noise, but not binned in the dispersion direction. The seeing during the observations was poor and variable (see section 2.2), so we used a slit width of 2.5 arcsec on the first night and 2.0 arcsec on the other nights. This resulted in spectral resolutions of 7.6 Å and 5.5 Å for the first and subsequent nights, respectively. Wavelength calibration was performed using a Cu-Ar arc. A total of 47 spectra were taken, each with an exposure time of 1800 s (see Table 1 for details). The data reduction and analysis were performed using the Starlink figaro package, the pamela routines of K. Horne, and the molly package of T. R. Marsh. The individual bias signal was removed by subtraction of a median bias frame. Small-scale pixel-to-pixel sensitivity variations were removed with a flat-field frame prepared from observations of a tungsten lamp. One-dimensional spectra were extracted using the optimal-extraction algorithm of Horne (1986), and the wavelength scale was calibrated using 5th-order polynomial fits, which gave an rms scatter of 0.03 Å. The stability of the final calibration was verified with the OH sky line at 6300.3 Å, whose position was accurate to within 0.1 Å.
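To illustrate the dispersion-solution step described above: the wavelength scale comes from fitting a polynomial to the measured pixel centroids of arc lines of known wavelength, and the rms of the fit residuals is the scatter quoted. A minimal sketch with an invented Cu-Ar line list (the actual line identifications used are not reproduced here):

```python
import numpy as np

# Hypothetical arc-line identifications: detected pixel centroids and their
# laboratory wavelengths (illustrative values only, not the real line list)
pixels = np.array([102.3, 310.8, 655.4, 901.2, 1204.7, 1503.9, 1790.1, 1988.6])
lams = np.array([5912.1, 6172.3, 6604.9, 6916.5, 7302.2, 7685.8, 8053.4, 8310.0])

# Rescale pixel coordinates to [-1, 1] to keep the polynomial fit well conditioned
x = (pixels - 1024.0) / 1024.0

# 5th-order polynomial dispersion solution, as described in the text
coeffs = np.polyfit(x, lams, deg=5)
residuals = lams - np.polyval(coeffs, x)
print("rms scatter of the fit: %.4f A" % np.sqrt(np.mean(residuals ** 2)))

# The solution then maps any pixel position to a wavelength:
print("wavelength at pixel 1000: %.2f A" % np.polyval(coeffs, (1000.0 - 1024.0) / 1024.0))
```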
Photometry
Using the same setup as for the spectroscopy, we also obtained limited Bessell R-band images of Nova Sco 1994 every night. The data were debiased using a median bias frame, but not flat-fielded, as no flat-field frames were taken. These images were used to estimate the seeing each night (see Table 1). We applied aperture photometry to Nova Sco 1994 and several nearby comparison stars within the field of view. Johnson V- and R-band magnitudes of these comparison stars were made available to us by J. Orosz. We determined the relative magnitude of Nova Sco 1994 with respect to three stars having a range of colours [(V − R) = 0.49, 0.77 and 1.23]. Assuming that the colour correction between the two filter systems is small (<0.05 mag; similar to the accuracy of our photometry) and that Nova Sco 1994 has a colour in the same range as the comparison stars used, we estimate R ∼ 15.05 for Nova Sco 1994.
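The differential photometry step reduces to a flux ratio against a comparison star of known magnitude. The sketch below is a minimal illustration (not the authors' reduction scripts); the aperture counts and comparison magnitude are hypothetical values chosen to reproduce a result near R ∼ 15.05.

```python
# Minimal sketch of differential aperture photometry: the target magnitude
# follows from the flux ratio against a comparison star of known magnitude.
import numpy as np

def relative_magnitude(target_counts, comp_counts, comp_mag):
    """Magnitude of the target from an aperture flux ratio."""
    return comp_mag - 2.5 * np.log10(target_counts / comp_counts)

# Illustrative placeholder counts and comparison magnitude:
print(relative_magnitude(target_counts=15300.0, comp_counts=52000.0,
                         comp_mag=13.72))   # ~15.05
```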
THE SPECTRA OF NOVA SCO 1994
In Figure 1 we show the variance-weighted average and also the nightly averages of the Nova Sco 1994 spectra. A strong Hα emission line (mean equivalent width of 7.5±0.06Å) and a much weaker He I 6678Å line (equivalent width of 0.4±0.03Å) can be seen. In Table 2 we list the Hα equivalent width for the nightly averages. The Fe I absorption blend at 6485, 6496, 6499, 6502Å is also visible. These features are used to determine the radial velocity of the secondary star (see section 4). The 6613Å diffuse interstellar band is also present. The emission lines in Nova Sco 1994 are double-peaked, which is presumably a consequence of the system being at high inclination. We can compare the observed peak-to-peak half separation of the Hα emission line (which arises from the accretion disc) with the projected velocity of the outer disc edge. In a binary system with a mass ratio > 0.25 it is generally assumed that the accretion disc cannot grow larger than the tidal truncation radius, r_d (Paczynski 1977; Whitehurst 1988; Osaki, Hirose & Ichikawa 1993), which is approximately given by r_d = 0.60a/(1 + q) for 0.03 < q < 1, where a is the binary separation (Warner 1995). Given the system parameters (Porb = 2.62168 days; q ∼ 0.39; i ∼ 69 degrees; M1 ∼ 6.7 M⊙; see Shahbaz et al., 1999), the minimum value for the projected velocity of the accretion disc rim is ∼394 km s−1. The observed peak-to-peak half separation of the Hα emission line (see Figure 1) in late June 1996 is 385±8 km s−1 (measured by fitting the profile with a double Gaussian), which implies that the accretion disc is close to its maximum possible size. Soria et al. (1998) estimate the Hα half peak-to-peak separation to be <350/2 km s−1 and <550/2 km s−1 for their August/September 1994 and June 1996 Hα observations respectively, velocities much lower than expected, suggesting that the Hα emission line arises from non-Keplerian regions/flows in the accretion disc.
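The quoted minimum rim velocity can be checked directly from the relations above. The sketch below (a rough check using the values quoted in the text, not the authors' code) combines Kepler's third law for the separation a, the tidal radius r_d = 0.60a/(1 + q), and the projected Keplerian velocity at r_d.

```python
# Rough check of the ~394 km/s projected velocity of the accretion-disc rim,
# using the system parameters quoted in the text.
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
MSUN = 1.989e30          # kg
P = 2.62168 * 86400.0    # orbital period in seconds
q = 0.39                 # mass ratio M2/M1
M1 = 6.7 * MSUN          # compact-object mass
i = np.radians(69.0)     # inclination

M_tot = M1 * (1.0 + q)
a = (G * M_tot * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)   # binary separation
r_d = 0.60 * a / (1.0 + q)                                  # tidal truncation radius
v_rim = np.sqrt(G * M1 / r_d) * np.sin(i) / 1e3             # projected Keplerian rim velocity
print(f"projected rim velocity: {v_rim:.0f} km/s")          # ~392 km/s, close to the quoted ~394
```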
THE RADIAL VELOCITY OF THE SECONDARY STAR
The radial velocities of the F-type secondary star in Nova Sco 1994 were measured from the spectra by the method of cross-correlation (Tonry & Davis 1979) with a template star. Prior to cross-correlation the spectra were interpolated onto a logarithmic wavelength scale (pixel size 55 km s−1) using a sin x/x interpolation scheme to minimize data smoothing (Stover et al. 1980), and then normalised. The template star spectrum (HR 2906; F6 V) was then artificially broadened by 90 km s−1 to account for the rotational velocity of the secondary star. Note that the orbital smearing of the Nova Sco 1994 spectra through the 1800 s exposures is at most only 10 km s−1, much less than the resolution of the data. Only regions of the spectrum devoid of emission lines (6400-6520Å) were used in the cross-correlation. The radial velocity of the template star (−7 km s−1, derived from the position of its Hα absorption line) was then added to the radial velocities of Nova Sco 1994.
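The cross-correlation step can be illustrated schematically. The sketch below is not the molly implementation; it assumes `target` and `template` are continuum-normalised spectra already rebinned onto the same logarithmic wavelength grid, on which a shift of one pixel corresponds to a fixed velocity (here 55 km s−1, as in the text).

```python
# Schematic radial-velocity measurement by cross-correlation on a log-lambda
# grid. A real pipeline would also refine the peak to sub-pixel precision.
import numpy as np

def radial_velocity(target, template, pix_kms=55.0):
    """Velocity shift (km/s) of `target` relative to `template`."""
    t = target - target.mean()
    s = template - template.mean()
    ccf = np.correlate(t, s, mode="full")
    lag = np.argmax(ccf) - (len(s) - 1)   # pixel lag of the CCF peak
    # On a grid where wavelength increases with index, a positive lag means
    # the target is shifted redward of the template.
    return lag * pix_kms
```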
Using the orbital ephemeris given by van der Hooft et al. (1998) we phase-folded and binned the heliocentric radial velocities (see Figure 2). From Figure 2 it can be seen that the radial velocity measurement at phase 0.2 does not fit the general pattern of the sinusoidal modulation present in the data. This data point combined three radial velocity measurements taken on the second night (21st June 1996). Although the seeing and quality of the spectra taken during this night were not as good as the others, no obvious reason could be found as to why these spectra gave much lower radial velocities than expected. A sine wave fit to all the data points does not give an adequate fit (χ²ν = 6.9). However, removing the discrepant data point and then performing a sine wave fit yields a χ²ν of 1.5, a semi-amplitude K2 = 279 ± 10 km s−1, a systemic velocity γ = −155 ± 7 km s−1, and a phase shift of −0.043 ± 0.005 in phase (1-σ errors are given). We also fitted the radial velocity curve with an eccentric orbit, but found the eccentricity to be significant at less than the 50 per cent level.
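The sine-wave fit itself is a three-parameter least-squares problem. The sketch below uses synthetic velocities generated from the fitted parameters above (not the observed data) to show how γ, K2 and a phase shift are recovered from a phase-folded radial velocity curve.

```python
# Minimal sketch of the circular-orbit (sine-wave) fit: synthesize phase-folded
# radial velocities from assumed parameters, then recover them with curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def orbit(phase, gamma, k2, dphi):
    # Circular-orbit radial velocity curve of the secondary star (km/s).
    return gamma + k2 * np.sin(2.0 * np.pi * (phase + dphi))

rng = np.random.default_rng(0)
phase = np.sort(rng.uniform(0.0, 1.0, 40))
vrad = orbit(phase, -155.0, 279.0, -0.043) + rng.normal(0.0, 15.0, phase.size)
verr = np.full_like(vrad, 15.0)

popt, pcov = curve_fit(orbit, phase, vrad, sigma=verr, absolute_sigma=True,
                       p0=[-150.0, 250.0, 0.0])
chi2_nu = np.sum(((vrad - orbit(phase, *popt)) / verr) ** 2) / (phase.size - 3)
print(popt, np.sqrt(np.diag(pcov)), chi2_nu)
```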
THE EFFECT OF IRRADIATION ON THE SECONDARY STAR'S RADIAL VELOCITY
Three absorption-line radial velocity curves have been obtained for Nova Sco 1994, using the same absorption features of the F6 IV secondary star and the standard method of cross-correlation. However, in each case the system was observed to be in a different X-ray state. A sinusoidal fit to the outburst data taken in April/May 1995 by Orosz & Bailyn (1998) gives a radial velocity semi-amplitude of Kobs = 230±2 km s−1. During this period BATSE did not detect the source, so we can only put an upper limit of 2.4×10^36 erg s−1 (<0.03 photons cm−2 s−1 in the BATSE 20-350 keV energy range) on the X-ray luminosity of the source. This upper limit alone does not allow us to state unequivocally that the source was not active at X-ray energies, but optical observations suggest that the source was not in quiescence (V = 16.5; Orosz & Bailyn 1998). In section 4 we determined K2 = 279±10 km s−1 from data taken in June 1996, when RXTE ASM (2-12 keV) observations give an X-ray luminosity of Lx = 6.8×10^37 erg s−1 and the R-band brightness was ∼1 mag brighter than its quiescent value. The BATSE (20-350 keV) count rate was at least a factor of 4 higher than in April/May 1995. Shahbaz et al. (1999) determined the true radial velocity of the secondary star (K2 = 215.5±2.4 km s−1) in 1998 May/June, when the source was finally in optical quiescence. The only X-ray quiescent observations were obtained during March 1996 using ASCA (1-10 keV; Robinson et al., 1997), which gave Lx = 2×10^32 erg s−1.
In Figure 3 we show the observed radial velocity amplitudes relative to the quiescent value as a function of the observed X-ray luminosity at the time of the measurements. We have converted the X-ray luminosities, which were observed with different instruments, into a common energy range (0.4-10 keV) using a hydrogen column density of N_H = 0.89 × 10^22 cm−2 and a photon power-law model with indices 2.8 and 1.5 for the X-ray high and quiescent states respectively (see Table 3; Zhang et al., 1997; Robinson et al., 1997; Hameury et al., 1997). This energy range is where we expect the total radiated power for X-ray transients in both outburst and quiescence to lie (Chen, Shrader & Livio 1997). Note that there is a correlation between X-ray luminosity and the observed radial velocity semi-amplitude; the higher the X-ray luminosity, the larger the observed radial velocity semi-amplitude, exactly as expected. We can use our model to estimate the X-ray luminosity at the time when Orosz & Bailyn (1997) took their radial velocity measurements. We find Lx ∼ 5 × 10^35 erg s−1, which is consistent with the BATSE upper limit.
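The band conversion can be illustrated with a simple power-law integral. The sketch below is a simplification that ignores the interstellar absorption correction (which in the text also uses N_H): for an unabsorbed photon power law, the energy flux in a band scales as the integral of E^(1−Γ), so a luminosity measured in one band can be rescaled to the common 0.4-10 keV band.

```python
# Simplified band-conversion sketch for an unabsorbed photon power law
# N(E) ~ E**(-gamma). Absorption, modelled via N_H in the text, is ignored.
from math import log

def band_integral(e1, e2, gamma):
    """Integral of E**(1-gamma) from e1 to e2 (energies in keV)."""
    if abs(gamma - 2.0) < 1e-9:
        return log(e2 / e1)
    p = 2.0 - gamma
    return (e2 ** p - e1 ** p) / p

def rescale(lx, band_in, band_out=(0.4, 10.0), gamma=2.8):
    """Rescale a luminosity from band_in to band_out for photon index gamma."""
    return lx * band_integral(*band_out, gamma=gamma) / band_integral(*band_in, gamma=gamma)

# e.g. the RXTE ASM 2-12 keV outburst luminosity, rescaled using the
# high-state photon index Gamma = 2.8 quoted in the text:
print(f"{rescale(6.8e37, (2.0, 12.0)):.2g} erg/s")
```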
IRRADIATION OF THE SECONDARY STAR
It has been known for some time, especially in studies of dwarf novae and polars, that substantial heating of the secondary star shifts the effective centre of the secondary, weighted by the strength of the absorption lines, away from the centre of mass of the star. This results in a significant distortion of the radial velocity curve, leading to a spuriously high semi-amplitude and a radial velocity curve that may appear eccentric. Davey & Smith (1992) describe a procedure for detecting the effects of irradiation on the radial velocity curve of the secondary star, whereby one tests the significance of an eccentricity in the orbital solution. Although our data do not allow this eccentricity test, due to the poor orbital phase coverage, we can use the spuriously high radial velocity semi-amplitude to show that X-ray heating is present.
In order to investigate the effects of X-ray heating on the secondary star's radial velocity curve we used the model described by Phillips, Shahbaz & Podsiadlowski (1999). The model uses a crude treatment of X-ray heating, since no satisfactory robust model exists for the effects of external heating in stars; however, it serves to illustrate the extreme effects of X-ray heating. It should be noted that the first-order model of X-ray heating by Brett & Smith (1993), which does not include energy transport effects, does show that the whole temperature structure of the outer layers of the secondary is upset by external heating. Figure 3 shows the effects of different amounts of X-ray luminosity on the secondary star's radial velocity amplitude. Kobs is computed by fitting the predicted curve with an eccentric orbit. The regions on the secondary star that are heated do not contribute to the absorption line flux. The maximum possible change that irradiation can have on Kobs, based purely on geometry, is 15 per cent. However, from the data presented in this paper we observe ∆K2/K2 = 0.30±0.05, which, when compared with the maximum possible value based on geometry, is significant at the 3-σ level.
In Figure 4 we show how much of the secondary star's surface needs to be heated in order to produce the observed radial velocity amplitude. We find that, based purely on geometry, 35 per cent of the secondary star's surface is directly heated by X-rays produced at the compact object. (This fraction only depends on the shape and size of the secondary star, which in turn is determined by q. Using the extreme values for q (Shahbaz et al., 1999), we find that this fraction changes by less than 1 per cent.) However, in order to produce the observed large radial velocity semi-amplitude, 60 (+9, −7) per cent of the secondary star needs to be heated. The 1-σ uncertainties quoted here were estimated using the 1-σ uncertainties in ∆K2/K2. This result may seem surprising at first, since one expects only the regions of the secondary star facing the compact object to be irradiated, and yet our result implies that some of the regions not directly seen by the compact object are also affected by irradiation. However, one should note that effects such as X-ray scattering and irradiation-induced flows on the surface of the secondary star can increase the fraction of the secondary star that responds to the X-ray source. Note that the regions on the secondary star that are shadowed by the accretion disc will be indirectly heated by such mechanisms. Therefore Kobs can be larger than that expected from heating the inner face of the secondary star alone.
DISCUSSION
The existence of circulation in rotating stars was first proved in 1924 by von Zeipel (von Zeipel 1924). He demonstrated that for a rotating homogeneous star, the radiative transport equation and equation of conservation of energy cannot be fulfilled simultaneously. This results in the formation of meridional motions. In order to maintain a stationary state as assumed, one has to demand that these meridional motions contribute to the energy transport. In the case of an irradiated rotating star, the situation is still more complicated, since the radiation will induce additional circulation currents.
Evidence for the existence of significant irradiation-driven circulation is provided by several sources. For example, analysis of the optical light curve of HZ Herculis has shown it to be heated by its accompanying X-ray source Her X-1. Although the main features of the optical light variation are well understood (HZ Her is bright when the X-ray source is in front of it, and its brightness is reduced during the occultation of the X-ray source by the secondary), the minimum at phase 0.0 is sharper than expected and indicates some additional source of optical radiation at this phase. Strittmatter et al. (1973) tried to explain this via the illumination of the disk by HZ Her. Other attempts were made by Pringle (1973) and Bahcall, Joss & Avni (1974). However, the most successful explanation was due to Kippenhahn & Thomas (1979). They estimated the energy transported from the X-ray-illuminated part of the stellar surface to the shadowed side, and demonstrated that the minimum at phase 0.0 could be reasonably well accounted for (X-ray heating without horizontal transport leads to a flat minimum at phase 0.0).
In addition, Schandl et al. (1997) found circulation to be necessary in order to accurately model the optical light curve of CAL 87, an eclipsing supersoft X-ray source. They calculated the light curve based on the assumption that an accreting, steadily burning white dwarf irradiates the accretion disk and the secondary star, as suggested by van den Heuvel et al. (1992). A simple description of energy transport on the secondary surface was used and then integrated over the whole surface, while conserving the total luminosity. They found that significant energy transport of the irradiated flux to non-illuminated parts on the secondary surface is required to simulate the observed lightcurve, particularly around the primary eclipse, when the shadowed hemisphere of the secondary is in view.
Recent models for irradiation-induced flows in binary stars have been computed by Martin & Davey (1995). They considered circulation in gently-heated secondary stars (where the incident flux is less than the intrinsic flux). Their 2-dimensional calculations included the effects of the Coriolis force and showed upwelling of hot material being carried preferentially towards the direction of rotation of the star. They also concluded that all secondary stars should show asymmetric heating, because of the presence of Coriolis forces. Phillips (1999) has recently extended the study of circulation to 3-dimensions. As well as including the effects of X-ray irradiation i.e. the anisotropic heating of the irradiated surface, and the effects of surface radiation stress, he also considers the large-scale effects of the rotation of the system and includes an approximate treatment of the Coriolis force. His results suggest a realistic analysis of the Coriolis force is essential for a full description of stellar circulation.
In order to study the extent of irradiation of the secondary star one requires good quality spectrophotometric studies throughout an X-ray outburst, during which the level of X-ray irradiation and induced heating changes. This will allow the surface intensity distribution across the secondary star to be mapped (see Rutten & Dhillon 1994 and Smith 1996), from which effects such as irradiation-induced circulation or star-spots can be investigated.

Figure 3. The effects of different amounts of X-ray heating on the secondary star's radial velocity semi-amplitude. We show the fractional change in Kobs as a function of Lx. Kobs is computed by fitting the predicted curve with an eccentric orbit. The three observed radial velocity measurements are also shown. The two lower curves show the effects of X-ray heating based purely on geometry, i.e. only those elements of area on the secondary star that are directly seen by the X-ray source are irradiated. The top curve (dashed line) shows the effects of indirect X-ray heating, calculated by extending the radiation horizon as seen by the X-ray source by a further 24 degrees (see Figure 4). The effect of this additional heating is to produce a value for Kobs which is 30 per cent larger than expected purely on the basis of geometry.

Figure 4. The irradiated secondary star's Roche lobe in the (x − z) plane. The compact object is at coordinates (1,0). We have assumed Lx = 8.5 × 10^37 erg s−1 and the extreme geometrical case with a mass ratio of q = 0.44, an inclination angle of i = 71° and a disc angle of 2°. The densely shaded region shows the area (35 per cent) that is irradiated directly by X-rays produced at the compact object; these regions do not contribute to the observed absorption line flux. The less dense region shows the area (60 per cent) which must also be heated indirectly in order to produce the large observed radial velocity semi-amplitude. | 2014-10-01T00:00:00.000Z | 2000-01-13T00:00:00.000 | {
"year": 2000,
"sha1": "2c3c7eaae00ec6ddeab041b5e51d7d6b419981a7",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/314/4/747/2958029/314-4-747.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "2c3c7eaae00ec6ddeab041b5e51d7d6b419981a7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
51749471 | pes2o/s2orc | v3-fos-license | Health effects and standard threshold shift among workers in a noisy working environment
Introduction: Working in a noisy environment is a risk to employee hearing health. Standard threshold shift (STS) can be used as a screening method to detect early indications of hearing deterioration. Objective: To investigate health effects related to STS in motor compressor workers. Methods: A cross-sectional study of 464 motor compressor workers was conducted, including hearing health examination by audiometer, and the noise level in the workplace was monitored. Workers who reported having hobbies relating to noise, e.g. gun shooting, or a personal history of disease relating to the ear were excluded. The relationship between health effects and STS was studied. Results: More of the workers were men (81.90%, mostly aged 31-40 years) than women. The average continuous noise level in the workplace was 84.14 ± 5.21 dB(A). The study showed that working at the factory for more than 14 years (OR = 3.84, 95%CI 1.54-9.56) and being exposed to noise at least 8 hours a day (OR = 2.12, 95%CI = 1.02-4.40) were associated with STS. Workers with STS showed significantly more communication difficulties (OR = 1.89, 95%CI = 1.03-3.49) and more stress/nausea than workers without STS (OR = 1.54, 95%CI = 0.90-2.65), although the latter was not statistically significant. Conclusions: Workers exposed to continuous noise in the motor compressor industry are at risk of STS. Duration of exposure to noise is a key factor in respect of harm to hearing health. STS could be used as a tool to screen workers who have hearing health problems.
INTRODUCTION
Occupational noise exposure and noise-induced hearing loss (NIHL) have long been recognized as classical problems among workers in industry. NIHL occurs due to hair cell damage, which is irreversible. NIHL is generally observed to affect a person's hearing sensitivity at higher frequencies, especially at 4000 Hz [1]. When workers are exposed to noise, it is difficult to detect NIHL symptoms or related health effects until symptoms manifest. Screening tests are therefore useful to detect early symptoms and to prevent problems from worsening.
A screening test is a preliminary test intended to prevent occupational diseases. Standard threshold shift (STS) can be used as a technique to screen a person who has a hearing problem, as it is defined as a change in the hearing threshold relative to a baseline audiogram of an average of 10 dB or more at 2 kHz, 3 kHz, or 4 kHz in either ear [1]. It is therefore different from noise-induced hearing loss (NIHL), because the baseline will change in both ears. Workers exposed to noise levels ranging from 80-90 dB(A) were at risk of noise-induced hearing loss and STS. Health effects may be present in a worker who has STS.
In the present study, workers in a motor compressor factory were recruited to participate in a project designed to study the association between 1) exposure to noise and STS, and 2) STS and health effects among workers.
Study Design and Setting
A cross-sectional study was conducted in March-April 2011 at a motor compressor factory. The factory produced compressors for fridges and parts for air conditioners. The processes of motor compressor production are shown in Appendix 1.
Of 464 workers, those who had been working for the company for at least 6 months and had results from hearing tests taken over the past 2 consecutive years were selected to participate in the project.
Secondary data from hearing tests carried out by audiometer on workers over the past 2 years were used to study STS; a self-administered questionnaire and noise level measurements were also used in the study. Most of the workers worked only a day shift of about 12 hours (8 hours routine work and 4 hours overtime), and spent 8-12 hours working in a noisy environment.
Subjects were excluded if they had hobbies or habits related to noise, e.g. visiting disco pubs or shooting, or personal diseases, e.g. tuberculosis, mumps, or any previous ear-related accidents. Workers who showed a change in hearing threshold relative to a baseline audiogram of an average of 10 dB or more at 2 kHz, 3 kHz, or 4 kHz in either ear were defined as workers with STS. An example of the STS calculation is shown in Appendix 2.
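The STS rule just stated is easy to make precise. The sketch below (with illustrative audiogram values and hypothetical field names, not the study's actual records) flags a worker whose average threshold at 2, 3 and 4 kHz has worsened by 10 dB or more relative to baseline in either ear.

```python
# Minimal sketch of the STS rule: average threshold shift at 2/3/4 kHz
# of 10 dB or more, in either ear, relative to the baseline audiogram.
def has_sts(baseline, current):
    """baseline/current: dicts like {"left": {2000: 5, 3000: 10, 4000: 15}, ...}"""
    freqs = (2000, 3000, 4000)
    for ear in ("left", "right"):
        shift = sum(current[ear][f] - baseline[ear][f] for f in freqs) / len(freqs)
        if shift >= 10.0:
            return True
    return False

# Illustrative audiograms (thresholds in dB HL):
baseline = {"left": {2000: 5, 3000: 5, 4000: 10}, "right": {2000: 5, 3000: 10, 4000: 10}}
current = {"left": {2000: 10, 3000: 20, 4000: 25}, "right": {2000: 5, 3000: 15, 4000: 15}}
print(has_sts(baseline, current))  # True: the left-ear average shift is ~11.7 dB
```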
Participants were provided with assurances of confidentiality, the objectives of the study were explained, and an informed consent form was signed prior to administering the questionnaire. The study was approved by the Ethics Committee of the Faculty of Medicine, Srinakharinwirot University, Thailand.
Questionnaire
All 464 workers were asked to complete a self-administered questionnaire. They were asked about personal information, work history, health status, behaviour in using personal protective equipment, hobbies, and health effects of exposure to noise in the workplace.
Noise Monitoring
Noise levels in the motor compressor factory were measured with a SoundTrack LxT sound level meter (#1, serial number 0001029).
The level of noise was measured for each production process. The sound level meter was calibrated before sampling using a Larson Davis CAL 250 calibrator (#2820).
The pattern of noise level monitoring in the workplace is shown in Appendix 3.
Data Analysis
Statistical analysis of the data was carried out using SPSS. Descriptive analysis was used to describe the characteristics of the study population, work duration, daily duration of noise exposure, and the noise levels in the workplace.
Binary logistic regression analysis was used to study the association between a change from the STS baseline and personal factors, duration of exposure to noise, and work duration at the company, and the association between STS and health effects. Workers with STS and health symptoms, e.g. headache/dizziness, stress/nausea, loss of concentration, and difficulty in communication, were studied. Adjusted odds ratios with 95% CIs are reported.
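The study's analysis was run in SPSS. As an illustration only, the sketch below (with randomly generated, hypothetical data and variable names) shows an equivalent binary logistic regression in Python, exponentiating the fitted coefficients to obtain adjusted odds ratios and 95% confidence intervals.

```python
# Schematic logistic regression for adjusted odds ratios; the data frame
# here is random placeholder data, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "sts": rng.integers(0, 2, 200),          # STS status (0/1)
    "noise_8h": rng.integers(0, 2, 200),     # exposed >= 8 h/day (0/1)
    "age": rng.integers(20, 60, 200),
    "work_years": rng.integers(1, 25, 200),
})

model = smf.logit("sts ~ noise_8h + age + work_years", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)           # adjusted ORs
ci = np.exp(model.conf_int())                # 95% CIs on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```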
Population Characteristics
The study population consisted of 464 workers aged between 20 and 60 years who had completed an audiogram examination over the past 2 years and were not exposed to excessive noise as a result of hobbies or ear-related disease.
The study population consisted mainly of men aged 21-40 years, most of whom had been working in the factory for less than 5 years. Most participants had worked for the company for 1-5 years and were exposed to noise for more than 8 hours per day. The education levels of workers were primary school and vocational or college level. The characteristics of participants are shown in Table 1.
Noise Levels in the Workplace
The present study monitored noise levels in the workplace with a sound level meter. The highest noise level in the workplace was 97 dB(A) and the lowest was 66 dB(A). The average noise level in the production department was 84.14 ± 5.21 dB(A). Workers were exposed to noise in the workplace for more than 8 hours per day. The noise levels on the production line are shown in Table 2.
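One detail worth making explicit, shown in the sketch below with illustrative readings, is that sound levels in dB(A) are logarithmic, so an energy-equivalent average (Leq) generally differs from the arithmetic mean and standard deviation summary reported here.

```python
# Arithmetic mean of dB(A) readings versus the energy-equivalent level Leq.
# The readings below are illustrative placeholders spanning the 66-97 dB(A)
# range reported in the text.
import numpy as np

readings = np.array([66.0, 78.5, 82.0, 84.0, 86.5, 88.0, 90.5, 97.0])  # dB(A)
arith = readings.mean()
leq = 10.0 * np.log10(np.mean(10.0 ** (readings / 10.0)))
print(f"arithmetic mean: {arith:.2f} dB(A); energy-equivalent Leq: {leq:.2f} dB(A)")
```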
Relationship between Personal Factors and STS
Hearing examination results showed STS among 51 workers. The relationship between personal factors and STS is shown in Table 3.
Health Effects and Change of STS
Workers who reported STS were selected for the study of related health effects, analysed by logistic regression. Workers with STS presented headaches/dizziness (Adj. OR = 1.78, 95%CI = 0.50-6.32), stress/nausea (1.54, 95%CI = 0.90-2.65), and loss of concentration (0.60, 95%CI = 0.30-1.21), but these associations were not significant compared to workers without STS. Workers with STS did show statistically significant communication difficulties compared to workers without STS (1.98, 95%CI = 1.03-3.49); the data are shown in Table 4.
Health effects of workers with STS were also studied when they were not at work. The results showed that workers with STS had no significant health effects while not at work compared to workers without STS. However, workers with STS presented headaches/dizziness (Adj. OR = 2.95, 95%CI = 0.84-10.42), stress/nausea (Adj. OR = 1.25, 95%CI = 0.58-2.68), and communication difficulties (Adj. OR = 1.30, 95%CI = 0.59-2.86) while they were off work. The results are shown in Table 5.
DISCUSSION
The questionnaire had a high response rate, and audiogram data from physical examinations conducted by the factory were available for the past 2 years. These data were used to calculate STS, and the noise levels in the workplace were monitored while the research was taking place. Given the information and data available, bias in this study is considered unlikely.
The present study found a correlation between health effects related to STS and exposure to noise for a duration of 8 or more hours a day. The data reveal interesting differences in health effects between workers with STS and workers without STS, although the health effects were generally nonspecific and could therefore be linked to other conditions. Some studies show an association between noise exposure and hypertension, impaired fasting glucose and diabetes even when the noise level was not high [2,3]. Our study confirmed that workers in a noisy environment are at risk of STS and of related headaches/nausea, stress/dizziness, loss of concentration, and communication difficulties.
Most workers in the motor compressor factory were men because the factory is involved in auto part assembly and the parts are quite heavy. The study showed that workers with low education were more susceptible to STS than workers with high education. This may be because workers with higher qualifications can choose jobs with more favourable working conditions while workers with low education cannot.
Factors Related to STS
Previous studies showed that factors related to hearing loss are duration of exposure to noise, cigarette smoking, and noise levels in the workplace [4][5][6][7]. Long durations of exposure to noise have been shown to damage hair cells, and continued exposure to noise could lead to irreversible hair cell damage. Accordingly, workers frequently exposed to noise should be provided with hearing protection.
In this study, gender does not appear to be an influential factor in developing STS, although it should be noted that most of the workers in the motor compressor factory were men. While education level was not a statistically significant factor relating to STS, this may be because most participants had graduated from primary school, high school, or vocational courses, and few of them had a high level of education. As noted above, duration of exposure to noise in the workplace appeared to influence STS in workers, consistent with the fact that STS is a typical occupational problem among workers in a noisy environment. Workers in noisy environments invariably present noise-induced hearing loss [8][9][10][11][12].
Health Effects Due to STS
The study demonstrates an impact of noise on health both inside and outside the workplace. The impact of noise on health was significant for workers exposed to noise for long durations, which is not surprising because noise has been found in previous studies to cause noise-induced hearing loss and to harm hair cells.
Workers exposed to noise for more than 8 hours a day, or who had worked at the factory for more than 14 years, showed a significant risk of health effects. The company should therefore have a hearing conservation programme for those employees who are exposed to noise for long durations or who have worked at the factory for a long period of time, even if symptoms are not yet considered serious, due to the likelihood that symptoms will worsen over time.
Workers with STS reported health symptoms, e.g. headache, dizziness, or nausea. These workers were exposed to noise for more than 8 hours a day. Such workers may have to shout due to hearing problems, and noise in the workplace creates additional difficulties in their environment which could lead to stress. Although not statistically significant, it is also worth noting the symptoms reported by workers exposed to shorter durations of noise (i.e., less than 8 hours per day). This may indicate that even shorter durations of exposure to noise can be harmful to health. Indeed, ISO models predict that exposure to noise for 10 years could be harmful to hearing health [13].
Hearing health may be affected by exposure to either high or continuous noise levels. Regression analysis showed an association between STS and noise exposure, and between STS and health effects such as stress/nausea and communication problems.
In previous studies, workers in a noisy environment suffered from annoyance, sleep disturbance, stress, depression, and fatigue, adversely affecting quality of life [14][15][16].
Regression analysis showed a positive correlation between duration of exposure to noise and negative health effects. One potential solution would be to operate a rota system whereby employees spend fewer hours in a noisy environment, in addition to being provided with PPE at work. Previous studies showed that exposure to noise often manifests as hypertension, tachycardia, increased cortisol release, and increased physiological stress [17,18]. Results from this study are consistent with previous findings that exposure to noise can have adverse effects on health, and STS screening would be a useful indicator for early detection.
Workers who have been diagnosed with STS will benefit from early detection if they are transferred to work in environments where there is less noise.
STS screening tests will help increase workers' awareness of related health effects, which may motivate them to wear the correct protective safety equipment.
Limitations of the Study
The number of workers suffering adverse health effects due to STS was lower than expected, but this may be due to a healthy worker effect bias.
CONCLUSION
Working in a noisy environment is thought to put workers at risk of STS and related health problems. STS screening tests will be beneficial for employee health as they will provide an opportunity to identify those affected and to take the necessary steps to prevent further deterioration and the resulting negative health effects. Workers exposed to noise for more than 8 hours a day should be provided with appropriate hearing protection.
Table 1. Characteristics of study population.
Table 3. Personal factors and adjusted odds ratios relating to STS. 1 Effect estimate adjusted for: age, department, work duration, noise exposure in a day; 2 effect estimate adjusted for: department, work duration, noise exposure in a day; 3 effect estimate adjusted for: age, work duration, noise exposure in a day; 4 effect estimate adjusted for: age, department, noise exposure in a day; 5 effect estimate adjusted for: age, department, work duration.
Table 4. The association between health effects and STS while workers were in the workplace.
Table 5. The association between health effects and STS while workers were outside the workplace. 1 Effect estimate adjusted for: age, department, work duration, noise exposure in a day, noise level, frequent use of hearing protection, PPE hygiene, and PPE sharing. | 2018-07-21T17:04:45.627Z | 2013-08-01T00:00:00.000 | {
"year": 2013,
"sha1": "230fffad0700d341174d7e1d5ab7804f1bd91d51",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=35510",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "230fffad0700d341174d7e1d5ab7804f1bd91d51",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233970616 | pes2o/s2orc | v3-fos-license | Good, homely, troublesome or improving? Historical geographies of drinking places, c. 1850–1950
This paper surveys historical geographies of drinking places designed for the consumption of alcohol between about 1850 and 1950, covering work published in English on sites in Europe, Russia, the Americas, and parts of Britain's empire. Five key aspects of drinking places are identified. The paper first considers them as significant social spaces associated with positive conceptions of both the public sphere and public space before exploring the ways in which drink became a spatial problem for contemporary observers, both in terms of their internal design and layout, and in their arrangements and concentrations in space. Histories and historical geographies of workers and patrons in these sites then suggest that the spatial problems associated with drink might also be classed, gendered, racialized, and sexualized. The last two sections of the paper review work on aspects of drinking places shared across many different social and geographical contexts: licensing and the provision of highly regulated ‘improved' sites for the consumption of alcohol. Similarities across many different contexts may reflect common social patterns or the development of shared strategies for reform. The conclusion suggests a few areas that might be developed.
campaign (Pliley et al., 2016). The provision of "improved" places on the model of the "Gothenburg system" in many of those countries also demonstrates that these ideas travelled from their origins as collective solutions to the problem of alcohol consumption.
Crucially, though, these networks were not universal, and these ideas did not travel everywhere. Licensing was sensitive to local contexts and while Gothenburg-style drinking places appeared in many of the countries listed above, they were managed by different actors and took different forms to grapple with varying problems. Work on licensing and the Gothenburg system provides opportunities to compare drinking places, and to think about the development and dissemination of policy, the ways in which governmental technologies shape and are shaped by their contexts, and more. And because these phenomena concern the governing of spaces, there are points of connection with work on other social problems. This is not just a paper on drinking places, then, and it will hopefully find a wider audience beyond those working in "drinking studies." It is a survey first and foremost, though, and while it returns to some of these questions in the conclusion, there is insufficient room to tackle them at any length.
| "GOOD PLACES"?
Drinking places were significant social institutions during this period. "The tavern took on some of the characteristics of the commune" in the rapidly growing cities of mid-nineteenth-century Russia (Transchel, 2006, p. 28). In early-twentieth-century Germany, the tavern was "the locus of working-class social life" (Roberts, 1984, p. 117), "one of the few places where workingmen regularly met each other as equals" (Roberts, 1991, p. 101). The chapel and the pub were the "twin foci of most nineteenth-century Welsh communities" (Lambert, 1983, p. 13), and the nineteenth-century French café was described as the "church of the working class" (Haine, 1994, p. 16). Churches and drinking places could be rivals, as they were in Britain (Harrison, 1973), but Mass-Observation argued that the pub was a stronger social institution than religion, politics, or popular media (1943), "a public and a political space … a temporally and spatially liberated public sphere" (Hubble, 2006, p. 211). Ben Clarke suggests that for interwar writers like George Orwell and Patrick Hamilton, the pub offered a reconnection to communal life, "a refuge from loneliness and a space that brings this loneliness into focus" (2012, p. 48; 2015).
This widely shared view of drinking places led Ray Oldenburg to suggest that they are a kind of "third place" between home and work (1989), providing hospitality (Bell, 2007) as well as shelter, sustenance, and comfort. Oldenburg's model "great, good place" is the nineteenth- and twentieth-century German-American lager beer garden; they appealed to immigrants of different classes and to families, and therefore to (some) women. This sense that drinking places are neither "public" nor "private" can also be seen in Perry Duis's suggestion that US saloons were "semi-public city spaces" (1983, p. 3). While London's gentlemen's clubs were "seemingly in the heart of the public sphere," they "provided their members the friendly intimacy and privacy ideally located in the home" (Milne-Smith, 2006, p. 797, and see Milne-Smith, 2011). The same was said of Kenyan clubs for European men in the 1930s (Willis, 2002), but the ordinary mid-nineteenth-century pubs of Stalybridge could also be "hybrid" public/private spaces offering a comfortable, domestic sociability (Booth, 2018). W. Scott Haine argued that working-class patrons treated the cafés of nineteenth-century Paris as domestic spaces that offered a form of "communal privacy" (1996, p. 55) and in Roubaix in Northern France, the working-class culture bourgeois observers saw in public cabarets stood in for the private domestic lives hidden from view (Clement, 2020). The blurring of "public" and "private" may have been part of the appeal of these and other drinking places, for different classes of drinkers.
Drinking places were home to rich associational cultures as well as informal sociability. British pubs hosted political groups (Harrison, 1973), artisan botanists (Naylor, 2002; Secord, 1994), sports fans (Collins & Vamplew, 2002), Friendly Societies (Cordery, 2003), and literary societies and eisteddfods (Lambert, 1983; Pritchard, 2012). Social clubs, established as an alternative to pubs, provided facilities for both education and leisure (Cherrington, 2012; Tremlett, 1987). Irish pubs played a similar role in masculine working-class life (Malcolm, 1998) and were important sites of political agitation (Kadel, 2003, 2015). For French working-class men, débits de boissons (drinking places) were sources of information about work, accommodation, and credit, homes for clubs and associations, and significant political spaces (Barrows, 1991; Prestwich, 1988). Leisure and democratic politics also collided in the urban bars of the 1920s Gold Coast, despite strict colonial licensing, high import duties on gin, and the suppression of locally produced akpeteshie (Akyeampong, 1996a, 1996b). Montreal's "Joe Beef's tavern" was a clearing house for information, a platform for debate and political campaigning, and a source of entertainment, accommodation, and emergency funds (DeLottinville, 1981-1982). These stories challenge the arguments of historians who saw drinking places as the target of attempts to pacify popular culture (e.g. Storch, 1975, 1976, 1977). Other historians saw glimpses of self-regulation, or a parallel culture of working-class respectability. In Germany, the associational culture of taverns "disciplined drinking behaviour by subordinating it to other goals and purposes" (Roberts, 1984, p. 117). British publicans may have realized that respectability was good for business (Girouard, 1975). However, many of these examples highlight the exclusions that made these third places "good" for those who were invited to take part; we will return to this later.
We should also note that "third places" do not require alcohol. In the cities of the Ottoman Empire, coffeehouses were "good places," unlike taverns operated by non-Muslims (Matthee, 2014); in Chengdu, China, teahouses and wine houses complemented one another (Wang, 2008). We will return to this question in the conclusion, but we now turn to historical work on the ways in which they were seen as problems by contemporary observers.
| SITES OF CONCERN: MICROGEOGRAPHIES AND DISTRIBUTIONS
Emphasizing the more spectacular aspects of drinking places risks obscuring their role as sites of everyday sociability (Booth, 2018; Yeomans, 2014), but they were often the focus of well-documented political, religious, or moral concern. This section considers two spatial aspects of this problem: the internal microgeographies of these drinking places and their distributions in space.
The character of some "third places" may well have been shaped by an emerging urban sociability characterized by anonymity, detachment, and money's ability to act as "the most frightful leveler" (Simmel, 1950, p. 414).
Developing cash economies made drink more accessible to migrant workers in the rapidly growing cities of tsarist Russia and Southern African mining camps, for example (Ambler & Crush, 1992; Transchel, 2006). Strangers met or separated into groups in drinking places, and these meetings and separations troubled some commentators. Social mixing brought the threat of criminality, class or racial conflict, sexual contact, and moral "contamination"; but the separation of drinkers into homogenous groups could also be troubling because it implied unsupervised drinking, political activity, or immorality.
The internal microgeographies of some English drinking places (vaults, saloon bars, ladies' bars, and more) were thought to reflect their patrons' "instinct for social distinctions, their morbid passion for what Americans call self-stratification" (Gorham, 1949, p. 30). The long bar counter that served an open room, uncluttered by seating, was associated with early gin palaces and a "drink and go" culture of vertical drinking and social mixing (Clark, 1983; Gorham & Dunnett, 1950), but in some British towns, pubs began to close open spaces off into compartments after 1850 (Girouard, 1975). Sala's gin palace had "not only a bar public, but divers minor cabinets, bibulous loose boxes, which are partitioned off from the general area" (1859, p. 72). The idea that this was a consequence of "the ineradicable class-consciousness of the English" (Gorham & Dunnett, 1950, p. 26) had been suggested in the 1890s, and the lack of supervision this afforded was a source of concern across Britain, with "ladies' bars" thought to be an especial temptation to women drinkers (Brandwood et al., 2004; Girouard, 1975; Kenna & Mooney, 1983; Kneale, 1999, 2012). Mass-Observation had a keen eye for the classed and gendered spaces of mid-twentieth-century Bolton's pubs (1943) and Stella Moss (2016) perceptively explores their material cultures and homosocial practices, from the "rough masculinity" of the plain vault (spittoons, sawdust ditches, few seats) to the "associational sociability" of the comfortable taproom (seating, memorials, games).
However, there is of course a geography of these developments as the complex relationships between licensing authorities, local trade organizations, and other social and economic factors made drinking places very sensitive to their local contexts, shaped by market pressures (competition or the lack of it) and regulation (licensing authority strategies, police attention). Pub layouts varied for these reasons, for example, in Bradford, Portsmouth, and Dartmouth (Eley & Riley, 1991;Jennings, 1995Jennings, , 2007, and the different pub types that emerged in Liverpool and Manchester reflected the contrasting relationships between demand, business decisions, and regulation in those cities (Mutch, 2003(Mutch, , 2004(Mutch, , 2006(Mutch, , 2008. The second problematic aspect of drinking places was their location in space. Brian Harrison's suggestion that British pubs were concentrated in working-class areas in this period (1973, andsee Vaughan, 2015) is echoed by work on the locations of drinking places as different as Brighton's beerhouses and pubs (Robinson, 2015), Calcutta's "punch houses" (Fischer-Tiné, 2012), and Mexico City's pulquerías (Toner, 2011). In these widely different contexts, observers noted relationships between residential segregation, class, and drinking. Of course, these sites were also highly visible because they were closely policed (Dhillon, 2015).
Mappings of drink made during this period raise new questions about the nature of drinking places as spatial problems. In the British case, it seems likely that drinking places were seen as causes, not symptoms, of social problems; these "moral geographies" (Driver, 1988) mapped the pernicious influence of drinking places and targeted them for reform. If drinking was an individual moral failing, it made little sense to blame pubs, but in this period "environmental" arguments encouraged reformers to consider the social context of drinking (Kneale & French, 2008). These maps revealed concentrations of drinking places where these influences were supposedly strongest (Kneale, 2001; Vaughan, 2015). Reformers mapped and counted the numbers of doors pubs had because they were worried about their porosity, their connections to streets and homes, allowing patrons and influences to flow in and out (Kneale, 1999, 2012). Beckingham (2017a, pp. 85-124) presents careful readings of a series of these maps created in late nineteenth-century Liverpool, and lists others produced elsewhere. Drink maps were also created in the United States, for very similar purposes (Levine, 1983; Vaughan, 2015).
| PATRONS AND WORKERS
While drinking places could be spatial problems, observers also worried about the people who used them, particularly women bar workers or customers in what were thought to be masculine spaces. Brian Harrison described the British pub as "a 'masculine republic' on every street" (1973, p. 172), borrowing the phrase from H. W.
J. Edwards' description of Rhondda pubs (1938). For Valerie Hey (1986), the pub provided a "female substitute" for British men, another "flight from domesticity." Of course, women owned, ran, and worked in drinking places. For Haine, a "female presence behind the bar ensured that Paris cafés were not ordinary patriarchal spaces" (1996, p. 185). Glamourous images of the British "barmaid" (Bailey, 1990) co-existed with descriptions of the long hours, hard work, and harassment experienced by many women workers in drinking places (Kirkby, 1997; Poutanen, 2017; Upton, 2013). These women were cause for concern in Australia, Singapore, New Zealand, Canada, and Britain at different points in our period (Beckingham, 2017b; Kirkby, 1997; Peleggi, 2012; Poutanen, 2017; Upton, 2013). In Mexico, both wealthy white women tavern-owners and the Indigenous women who sold food outside them faced criticism as "nuisances" or worse (Toxqui, 2014); in Bolivia, between 1870 and 1930, chicheras, women selling maize beer, were accused of criminality and sexual impropriety, often by elite male patrons (Hames, 2003). But a domestic family life was possible in some family-owned British establishments (Booth, 2018) and Mary Anne Poutanen suggests that the "blurring of … social and spatial boundaries" in Montreal's tavern spaces appealed to women (2017, p. 45).
There is broad agreement that women made up a significant proportion of British pub patrons in this period but were rarely as common as men (Beckingham, 2012; Gutzke, 1984, 1994, 2013; Langhamer, 2000; Moss, 2008, 2009; Robinson, 2015). There were exceptions, of course. In late Victorian London compartments, or even whole pubs, might be dominated by women (Gleiss, 2009, p. 56; Jennings, 2007, pp. 116-117), with the writer Arthur Machen complaining of a "monstrous incursion of women" into London's pubs (2015, p. 91). Ellen Ross suggests that a "women's pub culture" provided a crucial support network in Edwardian London (1983, p. 10). The attention paid to women's drinking after the 1908 Children Act made British pubs less welcoming for them (Orwell, 1946; Moss, 2008, 2009), though women returned to pubs in greater numbers during the Second World War (Gutzke, 2013; Langhamer, 2003).
Elsewhere significant numbers of women were present in drinking places in Australia (Kirkby, 1997), Canada (Campbell, 2001; Hamilton, 2004; Malleck, 2012), Guatemala (Carey, 2014), and Russia (Phillips, 2000) in this period. In British East Africa licensed spaces attracted both men and women, though that worried white administrators and educated Africans (Willis, 2002, pp. 10, 104-105). Women drinkers were rarely as common as men, though. In the United States, Madelon Powers suggests that "the saloon trade regarded women as a special and separate class of customers" (1998, p. 32); in Bogorodsk in 1859, men were everyday drinkers, while women joined them for communal holiday drinking (Transchel, 2006, p. 23); and in twentieth-century French cafés, women drank less frequently than men (Prestwich, 1988, p. 90).
Studies of American and British "gay male worlds" suggest licensed premises could be key spaces for queer sociability in this period. While a new, and more hostile, attention was paid to New York's gay men after the Repeal of Prohibition, and San Francisco's bars faced hostile policing (Boyd, 2003; Chauncey, 1994), Seattle's gay bars were "governed heteronormatively but … indirectly and with a relatively soft touch" by the Liquor Control Board (Brown & Knopp, 2016, p. 337). London's pubs were no more important than other third places in this period (Houlbrook, 2005), however.
Finally, drinking places also enforced separations along racialized lines, with alcohol forbidden to African Americans before the end of the Civil War and to Native Americans from the start of the nineteenth century (Herd, 1991; Ishii, 2008). Similar restrictions on Indigenous drinking existed in Canada until 1951 (Campbell, 2001; Heron, 2003; Malleck, 2012) and in Australia from the 1830s (Brady, 2019). In other places, like the European hotels of Colombo and Singapore, drinking places could be "contact zones," "where different social, ethnic, and national groups interacted," though excessive alcohol consumption was "an identity marker that set Westerners apart" (Peleggi, 2012, pp. 125, 138).
Much of the work of enforcing these exclusions was left to licensing systems, and the next section explores work on this important topic in some detail, with the section after that concentrating on drinking places explicitly designed to improve their customers and workers.
| REGULATION: LICENSING
A powerful but subtle technology of liberal governmentality, licensing focuses "on particular spaces, temporalities, and activities," effectively "contracting out the governmental work of preventing disorder and monitoring risks to the private sector" (Valverde, 2003, p. 147). Licensing records provide very different accounts to narratives of the pub's domestication; London's police struggled to convict drinksellers though they successfully prosecuted tens of thousands of drunks (e.g., Jennings, 2013). Licensing authorities dreamed of order though this was often limited by local factors. David Beckingham's exploration of licensing has been enormously productive (2017a, 2017b, 2017c). Licensing was always local, a form of "small and cautious government" where "local influences shaped individual licensing outcomes in remarkably uneven ways" (2017a, p. 44). Between 1830 and 1920, the city of Liverpool experimented with different kinds of licensing, as "the permissive framing of legislation created the spaces for drink coalitions to occupy and so shape everyday local policies" (2017a, p. 250). The hostile attention paid to barmaids in Edwardian Glasgow provides another excellent example, demonstrating licensing's extraordinarily flexible engagement with different elements: drinking places themselves, the objects and people inhabiting them, rhythms of working and drinking (2017b). Beckingham's work on the city of Motherwell's short-lived prohibition of the sale of spirits in 1916 shows that the scales of action required to make and then challenge this ban were actively made by many agents, some of them outside the licensing system (2017c, and see Lester [2014] for another example of jurisdictional differences).
Histories of Mexico City's pulquerías show authorities responding to the same issues that were troubling British licensing authorities. From the 1850s authorities made increasingly insistent demands for improvement; seats and tables were to be removed and the bar positioned immediately inside the entrance to discourage loitering (Toner, 2011; Toxqui, 2014; Toxqui Garay, 2008). As Beckingham's work demonstrates, licensing strategies produce very different outcomes in different contexts. Both Chicago and Boston were 'high license' cities from the early 1880s, for example, as the cities charged a high price for the right to sell alcohol, but their drinking places became recognizably different (Duis, 1983). Boston limited the number of licenses, so every bar prospered though there was no incentive for owners to improve them. In Chicago, where there was no cap on numbers, the cost of licenses prompted intense competition, leading to extravagant interiors and the provision of entertainment and food.
In Ireland, nineteenth-century legislation and policing "shap[ed] the pub into a controlled and well-ordered environment" (Malcolm, 1998, pp. 71, 72). In other colonial contexts, licensing and prohibition were tools of both administration and "development." In 1917, the British took over the production, sale, and consumption of all alcohol in Uganda, also banning ebirabo (clubs for sale of local drinks), off-sales, and public drinking in the Buganda kingdom (Willis, 2002). These "Native Liquor Ordinances" were extended a decade later, though ebirabo were now encouraged and regulated rather than prohibited. In Southern Africa, drink could only be bought through state-owned or licensed outlets by the start of the twentieth century. Drink shops supplied camps and migration routes in Mozambique and the Witwatersrand, while other employers prohibited alcohol from their compounds; "employers and local authorities in towns and cities combined not to destroy the liquor trade but to seize control of it, shape it to their needs, and profit from it" (Ambler & Crush, 1992, p. 18).
The 6 PM early closing legislation adopted in New Zealand, South Australia, Victoria, New South Wales, and Tasmania during the First World War also changed drinking places (Blainey, 2003). Consumption seems to have remained high, as drinkers drank more in the hour between finishing work and closing time (the "six o'clock swill"), but it made drinking places less attractive as "anything that interfered with the fast and efficient dispensing of drink was thrown out" (Phillips, 1980, pp. 251, 250).
Licensing also shaped the character of unlicensed drinking places, including American "blind pigs" (Duis, 1983), British "jerries" or "whisht-," "hush-," or "wabble-" shops (Jennings, 2016; Lambert, 1983), Polish melinas selling moonshine bimber (Kochanowski, 2017), and the Irish "shebeen," a word adopted in Scotland, the United States, and many parts of Africa. In Russia, women acted as bootleggers and illegal sellers before the Revolution (Herlihy, 2002), and the illegal spaces opened up by US Prohibition allowed women patrons and workers into drinking places in greater numbers (Gutzke & Law, 2017; Murphy, 1994). Elsewhere, some women were displaced into other licensed sites. Between 1791 and 1910 Irish spirit grocers sold spirits for consumption off the premises, though drinking in the shop was thought to be widespread (Kearns, 1996; Martin, 2016). In much the same way, British pastrycooks' shops became associated with respectable women's 'secret drinking' after the Refreshment Houses Act of 1860 (Bonea et al., 2019).
The next section concerns two excellent examples of licensing's influence that attracted support around the world, with (again) very different outcomes.

| REGULATION: "GOTHENBURG" AND IMPROVEMENT

Two linked developments had important consequences for drinking places in Scandinavia, North America, and Britain and its empire in this period: the "Gothenburg system" and the search for "improved" drinking places.
The Gothenburg system was named after Göteborg's "disinterested management" scheme (1865). The municipal authority established a private company, granted it a monopoly on spirit sales, and took the profits, with supporters receiving dividends on their investments. Removing the profit motive reduced consumption, eliminated the incentive to sell drink illegally, and made implementing reforms "a mere matter of administration" (Rowntree & Sherwell, 1899, p. 274). Drink sales funded amenities usually seen as alternatives to the drink trade, like eating houses and reading rooms. One British observer described Bergen's company bars as offering "no attractions whatever, except drink. They have no resemblance to bright gin-palaces, nor to bright coffee taverns, nor yet to 'snug' public-houses. They are not places of resort for social intercourse" (in Rowntree & Sherwell, 1899, p. 301).
Patrons were not allowed food, games, newspapers, or chairs.
The first British companies of this kind opened in the 1890s, reaching their peak in the 1920s (Gutzke, 2003). In 1915, when the Central Control Board (Liquor Traffic) sought to manage drinking in places where essential war work was being done, Gothenburg offered an obvious model. The CCB closed some of the pubs it took over, improved others, and built new ones in North London, Cromarty Firth, Carlisle, and Gretna Green, eventually extending its influence across the United Kingdom (Duncan, 2013;White, 2014). Carlisle's pubs remained in state control until 1974.
The CCB clearly inspired elements of what became known as the "improved pub," but the meaning of and reason for improvement shifted, from moral reform to wartime national efficiency, ending as a commercial imperative (Greenaway, 1998). The improved pub was shaped by forward-thinking breweries who saw a convergence between their own interests and those of reformers (Gutzke, 2005); "improvement" meant providing amenities that would slow down drinking in profitable ways. Improved pubs have received a good deal of attention, from their commercial origins to an extensive analysis of their forms, the amenities they offered, and their relationship with interwar social housing construction (Boak & Bailey, 2017; Cole, 2015; Fisher, 2009; Fisher & Preston, 2015, 2018; Gutzke, 2005; Jennings, 2007). However, their designs often suggest an attempt to organize pub space in reforming ways, raising familiar questions of visibility and privacy (Fisher & Preston, 2019; Moss, 2009). Still, not all interwar drinking places were concerned with respectability; roadhouses were both "smart" and "racy" (e.g., Gutzke & Law, 2017; Law, 2009).
'Gothenburg' travelled on through Britain's empire. Hotels operating on these principles opened in Australia from 1897, generating income for municipal projects (Brady, 2019). Gothenburg also offered Britain's African colonies a significant source of funds for development. In 1909 Durban's municipal government established a monopoly in making and selling beer, managing African drinking while funding housing and sanitation. Southern Rhodesia and Johannesburg soon followed suit (La Hausse, 1988; Parry, 1992; Rogerson, 1992). The beerhall-funded construction of townships, residential compounds and recreational spaces for Africans further spatialized ethnic divisions (Ambler & Crush, 1992). Municipal halls were opened in Nairobi, Dar es Salaam, Mombasa, and elsewhere between the 1920s and 1940s (Willis, 2002, 2003). African halls resembled Bergen's company bars; "in the clubs of Nairobi and Mombasa, with their brick and wire-mesh walls, turnstiles, and 'stalwart attendants,' the cheerless colonial vision of urban drinking as a physiological function came closest to realization" (Willis, 2002, p. 129). In Southern Africa, "beerhalls were bleak functional buildings with little character and no charm, as befitted their purpose" (Ambler & Crush, 1992, p. 25). This "improvement" was a long way from the comfortable interwar British pub.
Post-Prohibition Canada offers interesting parallels. Dan Malleck's insightful analysis of public drinking in Ontario supports Beckingham's argument that licensing is always local. The Liquor Control Board of Ontario (LCBO) licensed and monitored privately owned hotels across the province, attempting to govern space and people in the most productive arrangement possible, "a form of generative rationality" producing better conduct (2012, p. 65, original emphasis). However, this often involved a case-by-case compromise between idealism and pragmatism (2012, p. 64). The "two aspects of the architectural ideal of hotels that governed the morality of the hotel's patrons were surveillance and segregation," just as they had been for British pubs in the 1890s (p. 73). The LCBO insisted on separate rooms for men and women from 1934; staff should be able to see patrons easily, but the public should not be able to see in from the street, and men and women should not be able to see each other, except in women's rooms where "escorts" were allowed.
The "beer parlors" that reintroduced public drinking into British Columbia in 1924 shared some similarities with Ontario's hotels. Patrons had to sit to drink in poorly lit, uncomfortable rooms; food, cigarettes and soft drinks were all prohibited. Separate rooms, with separate entrances, were introduced for women customers in 1927.
Seattle's Liquor Board insisted that patrons of gay bars were served and drank at tables, though these bars were more appealing than Vancouver's beer parlors (Brown & Knopp, 2016; Campbell, 2001; Hamilton, 2004). The form of these "improved" sites reflected and shaped questions of class, race, and sexuality.
7 | CONCLUSIONS
These historical geographies and histories of drinking places offer insights into specific issues, which I will briefly recap here before considering avenues for future research. The first important question is the hybrid (public/private, working/domestic) or "third" character of these sites. While interesting, we need to do more to challenge these long-standing and simplistic binary oppositions (Vickery, 1993), perhaps by engaging with a broader set of research materials (Andersson, 2015). A re-evaluation of the idea of "domestication" (Koch & Latham, 2013) might help, as well as a loosening of the idea of place meanings as already given. This might also allow us to extend our analysis to sites like homes, or colonial ships, messes and barracks (Goodman, 2020). Alternatively, we might put public drinking into its wider context as only one of the ways in which alcohol becomes public (Kneale, 2014).
The second point concerns the importance of the relationships between licensing, business and policing as they come together in specific places. Perhaps appropriately, much of the best work considered above-Beckingham, Brown and Knopp, Malleck, Moss-has focused on "the local": on particular, often unspectacular, articulations of people, regulation, and objects. There is still a good deal of potential in comparative analyses of licensing, given the variety of spaces it manages and the multiplicity of outcomes it produces.
Finally, there are three areas of historical-geographical or historical work that seem very promising for future research. The first might extend and synthesize work on imperial alcohol, looking back to earlier commodity chains linked to the trade in sugar and enslaved Africans (Carey, 2015; Courtwright, 2001; Mandelblatt, 2011; Mintz, 1985; Ogborn, 2008). Government taxes on alcohol funded colonial development, contributing 52%-68% of the revenue of the Lagos Colony between 1892 and 1903 (Olorunfemi, 1984, p. 237), and 62%-74% of the revenue of French West Africa between 1908 and 1913 (Pan, 1975, p. 16). Again, what might comparisons of drinking places in imperial Russia, India, and African colonies show us? The second might follow Ruth Slatter in attending to the mobile assemblages of material things that make up apparently closed and static places (2019a, 2019b, 2019c).
Jugs and 'growlers' connected pubs with homes, as stolen drinking vessels roamed local neighbourhoods (Owens et al., 2010;Owens & Jeffries, 2016). Bottles turn up in Victorian rubbish heaps and can help to trace the material geographies of Prohibition in the United States (Licence, 2015; Mosher & Wilkie, 2010). The last area that might receive more attention concerns alcohol itself: what makes pubs different to "great good places" like barbershops, for example? Perhaps we need to think about what the "affective potential of alcohol" might mean for these sites (Latham & McCormack, 2004, p. 717). | 2021-05-08T00:02:57.098Z | 2021-02-25T00:00:00.000 | {
"year": 2021,
"sha1": "31a272d74a136c0dd1890674cdd5b49650dcfa4a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1111/gec3.12557",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "20dcfbdba1d1bbbf14df88d0961ecab82a76e4e1",
"s2fieldsofstudy": [
"History",
"Geography"
],
"extfieldsofstudy": [
"History"
]
} |
256860297 | pes2o/s2orc | v3-fos-license | Bi-level decomposition algorithm of real-time AGC command for large-scale electric vehicles in frequency regulation
INTRODUCTION
With the 'carbon peak' and 'carbon neutrality' goals proposed, the power system based on renewable resources has received extensive attention [1]. While renewable resources bring new development opportunities, their randomness and volatility pose great challenges to the frequency stability of the power grid and increase the frequency regulation (FR) pressure. Nowadays, the traditional FR units in China are mostly thermal power units and hydropower units, with thermal power units as the main component. Delays in data transmission and the low ramp rate of the unit often occur when a thermal power unit participates in AGC, leading to a low adjustment rate, poor adjustment accuracy, slow response speed, and accelerated aging of the unit [2]. Especially under the implementation of the 'Two Rules' [3], FR based on traditional thermal power units can no longer meet the requirements of the new power system. Therefore, it is urgent to study new regulation resources that can improve the frequency stability of the grid.
As a generalized form of energy storage, electric vehicles (EVs) can use their characteristics to provide the FR service for the power grid, which is a promising V2G application [4]. With the 'New Energy Vehicle Industry Development Plan (2021-2035)' and other policies put forward, the sales of new energy vehicles (NEVs) will reach about 20% of total new car sales by 2025, which means the scale of EVs in China will keep growing. Large-scale EVs can provide: 1) power from MW to GW; 2) continuous discharging time at the hour level; 3) response speed in milliseconds; 4) accurate control and stability at any power point. Given this series of characteristics, EVs are well in line with the requirements of the 'Two Rules' in the AGC performance assessment. So there have been some studies focusing on large-scale EVs participating in AGC services in the form of aggregators, which are mainly divided into three aspects: 1) real-time dispatchable ability evaluation of the aggregator; 2) the method by which the aggregator obtains the AGC command; 3) the method by which each EV obtains the AGC command inside the aggregator.
In the first part, current studies mainly evaluate the dispatchable ability of EVs by means of prediction. Considering the temporal-spatial dual uncertainty, reference [5] evaluates the dispatchable ability by the Monte Carlo method using random travel chains. Reference [6] uses a large amount of historical data to predict the dispatchable ability of EVs through machine learning. However, the accuracy of prediction is limited by the amount of historical data. Nowadays, communication technology can already meet the needs of obtaining real-time data of large-scale EVs, so the real-time dispatchable ability of the aggregator can be evaluated from real-time data.
In the second part, there have also been some studies on the command decomposition between the aggregator and AGC units, including decomposition based on frequency domain characteristics, the AGC performance assessment, and economic factors. 1) In terms of frequency domain characteristics, reference [2] uses empirical mode decomposition (EMD) to decompose the FR deviation of thermal power units into high-frequency, medium-frequency, and low-frequency parts. Considering the characteristics of supercapacitors, batteries, and aggregated EV resources, each FR resource responds to the corresponding type of AGC command, so as to realize an effective decomposition of the AGC command between the aggregator and the traditional unit. 2) In terms of the AGC performance assessment, reference [7] proposes a multi-agent cooperative control algorithm for multi-regional and multi-energy microgrid clusters, which takes CPS as the evaluation index to obtain the optimal AGC control strategy for each unit and the aggregator, respectively. Reference [8] uses an improved deep deterministic policy gradient algorithm to dynamically allocate power between AGC units and the aggregator from the three aspects of the AGC performance assessment: regulation rate, response time, and regulation accuracy. 3) In terms of economic factors, references [9-12] model the decomposition of the AGC command between AGC units and the aggregator with the aim of minimizing the AGC response cost, subject to the respective constraints of EVs and AGC units. Reference [13] aims to minimize the ACE tracking error and the AGC response cost according to the different time scales of AGC units and the aggregator under the condition of satisfying the basic constraints.
In the third part, some research has been carried out on the AGC command decomposition among EVs. References [14,15] use the charging time margin to measure the charging urgency of each EV, so as to decompose the AGC command while accounting for the charging demand. References [16,17] decompose the AGC command among EVs according to the dispatchable power of EVs so that the overall power of each EV tends to be the same. However, these two decomposition strategies consider only the charging demand and ignore the battery life loss. References [18,19] combine the life loss and the dispatchable ability of EVs in a weighting method to determine the priority of each EV, but the life loss model is too simple and the weight determination is too subjective. References [20,21] use the idea of optimization to consider the life loss of EVs participating in the AGC service; however, the solution speed of the optimization is difficult to reconcile with the time precision of AGC due to the large number of EVs.
This paper studies the optimal decomposition of the real-time AGC command from the power grid to EVs based on an established framework of information interaction between them. A bi-level decomposition algorithm considering the network model, in combination with real-time dispatchable information of EVs, is proposed. In the bi-level decomposition algorithm, on the one hand, the upper level decomposes the total AGC command between the aggregator and the AGC units with economy as the goal and safety as the constraint, considering the network model. On the other hand, the lower level decomposes the command among the EVs in the aggregator based on the real-time dispatchable power and the battery life loss, under the condition of meeting the demand of EVs. Through the analysis of the study case in this paper, the upper level decomposition algorithm is more compatible with the participation of EV aggregators from the perspective of safety and economy, and the lower level decomposition algorithm effectively considers the charging demand of each EV in terms of travel demand and battery life loss.
INFORMATION INTERACTION FRAMEWORK FROM THE POWER GRID TO EVS
As shown in Fig. 1, in each dispatch cycle, the information interaction process between EVs and the power grid is divided into two parts, real-time information reporting and real-time AGC command decomposition, which will be explained in Sections 3.1 and 3.2, respectively. The information interaction in these two links revolves around the power grid level, the aggregator level, and the EV level. The real-time information reporting link is carried out from bottom to top: the EV level reports the real-time information of each EV to the aggregator level, and the aggregator level reports the real-time dispatchable power and the AGC response cost to the grid level. The real-time AGC command decomposition is carried out from top to bottom: the grid level decomposes the total AGC command between the AGC units and the aggregator level, and the aggregator level decomposes the obtained real-time AGC command to each EV. The power grid obtains the frequency deviation of the control area and the exchange power deviation of the external tie line through real-time monitoring to calculate the corresponding regional control deviation (ACE). This deviation is then input to the filtering and PI control process, whose output is the total AGC command that the entire area needs to respond to [22]. The whole process is shown in Fig. 2.
Real-time information of the aggregator level
1) Real-time dispatchable power of the aggregator level
With the development of communication technologies such as optical fiber and wireless communication, real-time status monitoring of EVs can be realized, so the real-time charging data of EVs can be used to calculate their real-time dispatchable power. Compared with some previous artificial-intelligence-based prediction methods, this use of real-time data has higher calculation accuracy.
Under the condition of meeting the user's charging demand, the feasible region of the dispatchable ability for the corresponding EV can be calculated over the entire dispatchable period, considering the travel constraint, the power constraint, and the capacity constraint in Fig. 3. To meet the time precision of the AGC command, the dispatchable ability over the entire dispatchable period should be turned into the real-time dispatchable power at the current time. If the power of an EV lies between the maximum discharging power and the maximum charging power, it has the ability to respond to the AGC command.
The upper and lower limits of the feasible region can be obtained using the real-time data, including the current capacity status of the EV, the end charging time, and the desired capacity status, so as to calculate the dispatchable power of each EV.
where $S^{\mathrm{EV}}_{j,t}$ presents the planned capacity of the $j$th EV at the current moment; $S^{\mathrm{EV,base}}_{j,t+\Delta t}$ presents the minimum capacity of the $j$th EV at the next moment; $t^{\mathrm{plug\text{-}out}}_{j}$ presents the end charging time of the $j$th EV; $P_{j,t}$ presents the planned charging power of the $j$th EV at the current moment; $\overline{P}_{j,t}$ and $\underline{P}_{j,t}$ present the power of the $j$th EV that can be adjusted up and down at the current moment, respectively; $S^{\mathrm{EV,dep}}_{j}$ presents the desired capacity when the $j$th EV leaves; $P^{\max}$ and $S^{\max}$ present the upper limits of the charging power and of the capacity, respectively; and $\eta_{\mathrm{ch}}$ and $\eta_{\mathrm{dis}}$ present the charging efficiency and discharging efficiency. Using equations (1)-(3), the real-time dispatchable power of the EV can be calculated: equation (1) computes the base capacity at the current moment, so that EVs participating in the AGC service do not affect the user's demand, while equations (2) and (3) give the power that can be adjusted up and down during this period. Because the charging efficiency of EV batteries differs from the discharging efficiency, the down-adjustable power must be treated case by case: when reverse discharge occurs, the discharging efficiency is used in the calculation.
The real-time dispatchable power of a single EV is relatively small, but from the perspective of an aggregator, making good use of the real-time dispatchable power of large-scale EVs can have a great impact on the power grid. By accumulating over all EVs connected to the aggregator during the corresponding period, the real-time dispatchable power of the aggregator can be obtained:
$$\overline{P}^{\mathrm{agg}}_{t}=\sum_{j=1}^{N_{t}}\overline{P}_{j,t},\qquad \underline{P}^{\mathrm{agg}}_{t}=\sum_{j=1}^{N_{t}}\underline{P}_{j,t}\qquad(4)\text{-}(5)$$
in which $\overline{P}^{\mathrm{agg}}_{t}$ and $\underline{P}^{\mathrm{agg}}_{t}$ present the power of the aggregator that can be adjusted up and down at the current moment, respectively, and $N_{t}$ presents the number of EVs connected to the aggregator at the corresponding time.
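As a rough illustration of equations (1)-(5), the following Python sketch computes the up/down-adjustable power of a single EV and sums it over an aggregator. Because the extracted equations are garbled, the exact formulas below are reconstructions from the surrounding text; all names (the EV fields, dispatchable_power, the base-capacity rule) are assumptions for illustration, not the paper's code.

```python
from dataclasses import dataclass

@dataclass
class EV:
    soc_kwh: float          # current capacity S_{j,t} (kWh)
    soc_desired_kwh: float  # desired capacity at departure (kWh)
    t_plug_out_h: float     # remaining time until plug-out (hours)
    p_plan_kw: float        # planned charging power at this step (kW)

P_MAX = 20.0        # charging-pile power limit (kW), both directions
S_MAX = 50.0        # battery capacity upper limit (kWh)
ETA_CH, ETA_DIS = 0.9, 0.9
DT_H = 1.0 / 60.0   # 1-minute dispatch interval (hours)

def dispatchable_power(ev: EV):
    """Return (p_up, p_down): power adjustable up/down vs the plan (kW)."""
    remaining_h = max(ev.t_plug_out_h - DT_H, 0.0)
    # Base capacity (eq. (1) analogue): lowest next-step SOC that still
    # reaches the desired SOC by plug-out when charging at full power after.
    s_base = max(ev.soc_desired_kwh - P_MAX * ETA_CH * remaining_h, 0.0)

    # Up-adjustable power (eq. (2) analogue): pile limit and SOC headroom.
    p_up = max(0.0, min(P_MAX, (S_MAX - ev.soc_kwh) / (ETA_CH * DT_H)) - ev.p_plan_kw)

    # Down-adjustable power (eq. (3) analogue): first cut the planned charging;
    # any further reduction is reverse discharge and uses the discharge efficiency.
    margin_kwh = ev.soc_kwh - s_base
    if margin_kwh > 0.0:
        p_dis = min(P_MAX, margin_kwh * ETA_DIS / DT_H)
        p_down = ev.p_plan_kw + p_dis
    else:
        # Cannot discharge; may only reduce charging down to the required minimum.
        p_down = max(0.0, ev.p_plan_kw + margin_kwh / (ETA_CH * DT_H))
    return p_up, p_down

def aggregator_band(evs):
    """Eq. (4)-(5) analogue: sum per-EV adjustable power over connected EVs."""
    bands = [dispatchable_power(ev) for ev in evs]
    return sum(u for u, _ in bands), sum(d for _, d in bands)
```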
2) Real-time AGC response cost of the aggregator level
For EVs, the economic cost of participating in AGC mainly comes from the battery life loss, so the AGC response cost of the aggregator should consider this factor. At present, some references take the battery life loss into account in the AGC command decomposition for EVs, but the established battery life loss model is either too simple or used only as a criterion for the decomposition priority. Therefore, this paper uses a refined battery life loss model [23], which converts the DOD-cycle life curve obtained from experiment into the corresponding life loss when the SOC state changes. However, due to the complexity of the life loss model, the unit AGC response cost keeps changing during different periods; a linear battery life loss model obtained by piecewise linearization can reduce the change of unit cost effectively. The model [23] is improved based on the fact that different EVs have different battery capacities. In the piecewise model, $F$ is the life loss function corresponding to the change of capacity; $N$ presents the number of segments; $i$ presents the serial number of the segment; $\Delta S_{i}$ presents the length of the corresponding segment; $\overline{\Delta S}$ presents the upper limit of the segment length; $\lambda_{i}$ presents the slope of the corresponding segment; $S$ presents the current capacity of the EV battery; and $S^{\min}$ and $S^{\max}$ present the lower limit and the upper limit of the battery, respectively. The relationship between the piecewise linear life loss model and the original model is shown in Fig. 4.
For each EV, extra battery life loss occurs only because of the discharging behavior in response to the AGC command. Based on the current capacity status of each EV, its position on the curve in Fig. 4 can be determined. Because each EV has a corresponding charging plan, the maximum power that can be adjusted down during the charging scenario must be subtracted to calculate the dischargeable power of each EV in its corresponding life loss segment, in which $P^{\mathrm{dis}}_{i,j,t}$ presents the dischargeable power of the $j$th EV in the corresponding $i$th life loss segment; $\underline{P}^{\mathrm{ch}}_{j,t}$ presents the maximum power that can be adjusted down for the $j$th EV during the charging scenario; and $S^{\mathrm{dis,EV}}_{j,t}$ presents the updated state of the $j$th EV before participating in discharging. By accumulating the dischargeable power of EVs in the corresponding life loss segment during the corresponding period, the dischargeable power of the aggregator in each life loss segment can be obtained, in which $P^{\mathrm{dis,agg}}_{i,t}$ presents the dischargeable power of the aggregator in the corresponding $i$th life loss segment and $N_{i}$ presents the number of EVs connected to the aggregator at the corresponding time in the corresponding $i$th life loss segment.
Under the discharging scenario, the unit AGC response cost of EVs in the same life loss segment is approximately equal. So the unit AGC response cost of the aggregator in each segment can be converted using the life loss and the total cost of the battery, in which $c_{i}$ presents the unit AGC response cost in the corresponding $i$th life loss segment and $C^{\mathrm{BESS}}$ presents the total cost of the battery. Combining the dispatchable power of the aggregator in the corresponding life loss segment and the unit AGC response cost, the unit AGC response cost of the aggregator can be obtained.
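A minimal sketch of the piecewise linear life loss model and the per-segment unit cost conversion follows. The DOD-cycle fitting curve and its coefficients here are placeholders (the paper's LiFePO4 fit is not reproduced), and the symbol-to-variable mapping is an assumption.

```python
import numpy as np

def life_loss(soc):
    """Life loss function F: fraction of one full battery life consumed by
    discharging from SOC = 1 down to `soc`. The quadratic form is a
    hypothetical convex surrogate, not the paper's LiFePO4 fit."""
    return 0.5 * (1.0 - soc) ** 2

SOC_MIN, SOC_MAX, N_SEG = 0.2, 0.8, 5
edges = np.linspace(SOC_MAX, SOC_MIN, N_SEG + 1)      # segment boundaries, high to low
slopes = np.diff(life_loss(edges)) / np.diff(edges)   # dF/dS on each segment (negative)

S_CAP = 50.0                 # battery capacity (kWh)
C_BESS = 1500.0 * S_CAP      # total battery cost: 1500 yuan/kWh * 50 kWh

# Unit AGC response cost (yuan per kWh discharged) in each segment:
# life loss per kWh in that segment times the battery's replacement cost.
unit_cost = [abs(k) / S_CAP * C_BESS for k in slopes]
print(unit_cost)   # rises toward the deeper (low-SOC) segments
```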
Upper level decomposition algorithm
According to the information interaction framework of EVs and the power grid in Section 2, the upper level decomposition algorithm from the power grid level to the aggregator level is studied, which establishes an optimization model, with economy as the goal and safety as the constraint, for the real-time command decomposition between the aggregator and the AGC units.
1) Objective function
Minimizing the total AGC response cost, the objective can be written as
$$\min\; C^{\mathrm{cost}}=\sum_{i}c^{\mathrm{gen}}_{i}\,\bigl|P^{\mathrm{agc,gen}}_{i}\bigr|+\sum_{j}\Bigl(c^{\mathrm{load}}_{j}\,\bigl|P^{\mathrm{agc,load}}_{j}\bigr|+\sum_{k=1}^{N^{\mathrm{seg}}_{j}}c_{k,j}\,P^{\mathrm{dis,res}}_{k,j}\Bigr)$$
in which $C^{\mathrm{cost}}$ presents the total AGC response cost of the AGC units and the aggregators; $c^{\mathrm{gen}}_{i}$ presents the unit AGC response cost of the $i$th AGC unit; $P^{\mathrm{agc,gen}}_{i}$ presents the AGC response power of the $i$th AGC unit; $c^{\mathrm{load}}_{j}$ presents the unit AGC response cost of the $j$th aggregator, which is caused by the deviation from the energy market after participating in AGC; $P^{\mathrm{agc,load}}_{j}$ presents the AGC response power of the $j$th aggregator; $c_{k,j}$ presents the unit discharging cost of the $j$th aggregator in the $k$th life loss segment; $P^{\mathrm{dis,res}}_{k,j}$ presents the discharging response power of the $j$th aggregator in the $k$th life loss segment; and $N^{\mathrm{seg}}_{j}$ presents the number of life loss segments of the $j$th aggregator.
2) Constraints
in which $R^{\mathrm{gen}}_{i}$ presents the maximum power that the $i$th AGC unit is allowed to adjust up and down without considering the active power upper and lower limits, that is, the maximum ramp rate, and $P^{\mathrm{gen,before}}_{i}$ presents the active power of the $i$th AGC unit at the previous moment.
③ Power constraint of the aggregator: $\underline{P}^{\mathrm{agg}}_{j,t}$ and $\overline{P}^{\mathrm{agg}}_{j,t}$ present the active power that can be adjusted down and up of the $j$th aggregator, respectively. They can be calculated by equations (4)-(5).
Equation (23) means that if the $j$th aggregator can satisfy the assigned AGC command using the maximum power that can be adjusted down during the charging scenario, the discharging response power is zero. If it cannot be satisfied, the calculation of the discharging response power needs to subtract the maximum power that can be adjusted down during the charging scenario. Through this constraint, the optimization model fits well with the lower level decomposition algorithm.
④ Flow constraint of the main power grid: the main power grid line adopts the π-type model, which replaces the trigonometric function term in the original constraint with the quartic power term, in accordance with the DistFlow power flow model of the distribution network. Considering the different radial characteristics of the main power grid and the distribution network, the branch power flow model of the main power grid is established [24]. By treating the squared variables directly as variables, the power flow model of the main power grid is further processed into a linear model.
Equations (24)-(32) are the corresponding power flow model of the main power grid. Equations (24) and (25) are the node power balance constraints. Equations (26)-(27) describe the relationship between the phase and amplitude of the node voltage in branch $vw$. Equation (28) describes the relationship between node voltage, branch current, and branch active and reactive power. Equations (29)-(30) are the limits of the node voltage and branch current. Equations (31)-(32) are the node phase and branch phase difference constraints. In these equations, $S^{\mathrm{feeder}}$ and $S^{\mathrm{bus}}$ present the sets of branches and nodes, respectively; $P_{vw}$ and $Q_{vw}$ present the branch active and reactive power from node $v$ to node $w$; $r_{vw}$ and $x_{vw}$ present the resistance and reactance of branch $vw$; $g^{\mathrm{sh}}_{v}$ and $b^{\mathrm{sh}}_{v}$ present the parallel admittance of node $v$; $V^{\mathrm{sqr}}_{v}$ presents the voltage square of node $v$; $I^{\mathrm{sqr}}_{vw}$ presents the current square of branch $vw$ (since only the branch current square and node voltage square appear in this model, the two square terms are used directly as variables to simplify the model); $V^{\min}_{v}$ and $V^{\max}_{v}$ present the lower and upper voltage limits at node $v$; $I^{\max}_{vw}$ presents the upper current limit of branch $vw$; $\theta_{v}$ presents the phase of node $v$; $\theta^{\min}_{\mathrm{bus}}$ and $\theta^{\max}_{\mathrm{bus}}$ present the lower and upper limits of the node phase; $\theta_{vw}$ presents the phase difference of branch $vw$; and $\theta^{\min}$ and $\theta^{\max}$ present the lower and upper limits of the branch phase difference. In order to meet the time precision of the AGC command decomposition between the AGC units and the aggregator, it is necessary to linearize the nonlinear constraints (26) and (28) in the power flow model of the main power grid.
Since the node phase needs to meet the constraints in equations (31) and (32), and considering that the phase difference of a branch is generally small, equation (33) can be used to make a small-angle approximation of the sine function. Since the voltage amplitude square interval allowed by the node voltage constraint is narrow, the error brought by introducing the constant nominal node voltage $V^{\mathrm{bus}}$ is also relatively small, and the linearization of equation (26) is completed, as shown in equation (34). Piecewise linearization is used to linearize equation (28), which means a piecewise linear approximation is performed on the variable square terms in the branch power flow model [25-26], so the nonlinear constraint (28) is transformed into a linear one. A piecewise linear function $f(y,\overline{y},\Gamma)$ is used to approximate the variable square term, in which $\overline{y}$ presents the upper limit of $y$; $\Gamma$ presents the number of discrete segments; $x$ presents the auxiliary 0-1 variable; $M$ presents a large enough constant; and $\varepsilon$ presents a small enough constant. The supplementary constraints (35)-(40) complete the formulation: equations (37) and (38) linearize the absolute value of the variable, and equations (39) and (40) ensure that the length of the previous segment must take its maximum value before the next segment can take a nonzero value.
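The fill-in-order piecewise linearization can be illustrated directly in Python. In an actual MILP, the ordering of segments is enforced with the 0-1 variables $x$ and the big-M constants of constraints (35)-(40); the sketch below simply evaluates the resulting piecewise linear approximation of the square term, which is what those constraints reproduce inside a solver. Function and variable names are assumptions.

```python
import numpy as np

def pwl_square(y, y_max, gamma=8):
    """Piecewise linear approximation of y**2 on [-y_max, y_max], built the
    way constraints (35)-(40) build it: equal-length segments whose chord
    slopes increase, each segment filled to its maximum before the next one
    is used (computed directly here instead of via 0-1 variables and big-M)."""
    seg = y_max / gamma
    # Chord slope of y**2 on [k*seg, (k+1)*seg] is (2k+1)*seg:
    slopes = [(2 * k + 1) * seg for k in range(gamma)]
    z = min(abs(y), y_max)
    deltas = [min(max(z - k * seg, 0.0), seg) for k in range(gamma)]
    return sum(s * d for s, d in zip(slopes, deltas))

# The worst-case error is seg**2 / 4, attained near segment midpoints:
ys = np.linspace(-10.0, 10.0, 201)
err = max(abs(pwl_square(v, 10.0) - v ** 2) for v in ys)
print(err)  # about 0.39 for gamma = 8, shrinking as gamma grows
```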
Lower level decomposition algorithm
According to the information interaction framework of EVs and the power grid in Section 2, the lower level decomposition algorithm of the real-time AGC command from the aggregator level to the EV level considers the battery life loss and dispatchable power in two cases.
Since the reverse discharging of EVs produces additional battery life loss, which is unfavorable for EV users, the planned charging power should first be reduced as much as possible to meet the AGC requirement. However, if all EVs have reached the maximum power that can be adjusted down during the charging scenario and the real-time AGC command still cannot be met, the EVs should respond to the AGC command by discharging, and the additional battery life loss should be minimized as much as possible.
1) First case
In view of the above, if all EVs in the aggregator can satisfy the AGC command by reducing their charging power to the maximum power that can be adjusted down during the charging scenario, then the lower level decomposition algorithm is based on the dispatchable power of EVs.
The idea of ability-contribution is proposed to decompose the response command to EVs based on the consensus algorithm.
in which $\mu_{t}$ presents the ability-contribution factor: when each EV has the same ability-contribution factor, EVs that have more dispatchable power make more contributions, so that their dispatchable power can be used fully and fairly. $P^{\mathrm{res}}_{j,t}$ presents the response power allocated to the $j$th EV and $P^{\mathrm{res}}_{t}$ presents the total response power of the aggregator. Through this strategy, the dispatchable power of EVs is considered to complete the command decomposition inside the aggregator.
It can be turned into the response power allocation weight for each EV by solving these equations. Equation (45) means that when the power needs to be increased, EVs with more up-adjustable power are given priority for the response power; on the contrary, when the power needs to be decreased, EVs with more down-adjustable power are given priority for the response power. A minimal sketch follows.
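The sketch below implements the first-case allocation: every EV receives a share proportional to its adjustable power, so all EVs end up with the same ability-contribution factor. The function name and data layout are assumptions for illustration.

```python
def allocate_first_case(p_res_total, p_band):
    """Share the aggregator's response command in proportion to each EV's
    adjustable power (eqs. (41)-(45) analogue).
    p_res_total > 0 : power must be adjusted up; use each EV's p_up.
    p_res_total < 0 : power must be adjusted down; use each EV's p_down.
    p_band: list of (p_up, p_down) per EV, e.g. from dispatchable_power()."""
    if p_res_total >= 0:
        caps = [up for up, _ in p_band]
    else:
        caps = [down for _, down in p_band]
    total = sum(caps)
    if total == 0:
        return [0.0] * len(caps)
    mu = p_res_total / total          # common ability-contribution factor
    return [mu * c for c in caps]     # per-EV response power

# Example: 100 kW of down-regulation over three EVs with bands (5,10), (2,20), (3,70)
print(allocate_first_case(-100.0, [(5, 10), (2, 20), (3, 70)]))
# -> [-10.0, -20.0, -70.0]
```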
2) Second case
However, if all EVs have reached the maximum power that can be adjusted down during the charging scenario and the real-time AGC command still cannot be met, the EVs should respond to the AGC command by discharging. At this point, the established piecewise linear life loss model is used to comprehensively consider the battery life loss and the real-time dispatchable power when decomposing the AGC command.
In the discharging scenario, all EVs must at least reduce their charging power to the maximum power that can be adjusted down during the charging scenario, so this result is used to update the initial state of each EV. The life loss segment of each EV is then determined based on its current state; the segment with a lower life loss slope has a higher response priority.
First, calculate the total AGC command that needs to be met by discharging, and update the initial state of each EV using equation (11), in which $P^{\mathrm{dis,res}}_{t}$ presents the total AGC command that needs to be met by discharging. Then, the dischargeable power of each EV in each life loss segment can be obtained using equation (13). By accumulating the real-time dischargeable power of EVs in the corresponding life loss segment, the real-time dischargeable power of the aggregator in each life loss segment can be obtained, in which $P^{\mathrm{dis,agg}}_{i,t}$ presents the real-time dischargeable power of the aggregator in the $i$th life loss segment.
Given that the segment with a lower life loss slope has a higher response priority, the corresponding AGC response power of each segment is set according to that priority, with $P^{\mathrm{dis,res}}_{i,t}$ presenting the total AGC command that needs to be met by discharging in the $i$th life loss segment, so that $P^{\mathrm{dis,res}}_{t}=\sum_{i=1}^{N^{\mathrm{seg}}}P^{\mathrm{dis,res}}_{i,t}$. For EVs in the same segment, the priority is judged by the dispatchable power, as shown in Fig. 5.
in which $\mu_{i,t}$ presents the ability-contribution factor in the $i$th life loss segment and $P^{\mathrm{dis,res}}_{j,t}$ presents the response discharging power allocated to the $j$th EV. Through this strategy, both the battery life loss and the dispatchable power of EVs are considered to complete the command decomposition inside the aggregator.
Similarly, it can be turned into the power allocation weight for each EV by solving the corresponding equations. In view of fairness and economy, the lower level decomposition algorithm of the real-time AGC command from the aggregator level to the EV level is thus designed from the perspective of the battery life loss and the dispatchable power, according to the different situations of the dispatchable power.
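A minimal sketch of the second-case allocation follows: life loss segments are filled cheapest-first, and EVs within a segment share the segment's command in proportion to their dischargeable power. Names and data layout are assumptions for illustration.

```python
def allocate_discharge(p_dis_total, seg_caps):
    """Second-case decomposition: fill life-loss segments in order of
    increasing slope (cheapest first); inside a segment, share in
    proportion to each EV's dischargeable power (common factor mu_{i,t}).
    seg_caps: list over segments, ordered cheapest-first, of per-EV
    dischargeable power lists. Returns per-EV discharge power."""
    n_ev = len(seg_caps[0])
    out = [0.0] * n_ev
    remaining = p_dis_total
    for caps in seg_caps:                 # segments by ascending life-loss slope
        seg_total = sum(caps)
        if seg_total <= 0:
            continue
        take = min(remaining, seg_total)  # command assigned to this segment
        mu = take / seg_total
        for j, c in enumerate(caps):
            out[j] += mu * c
        remaining -= take
        if remaining <= 0:
            break
    return out

# 30 kW of discharging over two segments and three EVs:
print(allocate_discharge(30.0, [[10, 5, 5], [20, 20, 0]]))
# segment 1 fully used (20 kW), segment 2 supplies the remaining 10 kW
# -> [15.0, 10.0, 5.0]
```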
REAL-TIME AGC COMMAND DECOMPOSITION PROCESS FROM THE POWER GRID TO EVS
Based on the information interaction framework of EVs and the power grid in Section 2, together with the real-time information acquisition method proposed in Section 3.1 and the bi-level real-time decomposition algorithm in Section 3.2, the real-time AGC command decomposition process from the power grid to the EVs can be obtained, as shown in Fig. 6 (Real-time AGC command decomposition process from the power grid to EVs). The process is divided into the following steps; a simplified sketch follows the list.
Step 1: Each EV determines its real-time status at the current moment and reports it to the aggregator;
Step 2: Based on the real-time information of EVs, the aggregator calculates the real-time dispatchable power and AGC response cost according to the method in Section 3.1.2;
Step 3: According to the ACE information at the current moment, the power grid calculates the total AGC command it needs to respond to according to the method in Section 3.1.1;
Step 4: The power grid determines the actual total AGC response command based on the total AGC command and the real-time dispatchable power of the AGC units and the aggregator;
Step 5: The power grid sends the corresponding AGC response command to the aggregator according to the upper level decomposition algorithm proposed in Section 3.2.1;
Step 6: The aggregator sends the corresponding AGC response command to each EV according to the lower level decomposition algorithm proposed in Section 3.2.2;
Step 7: Judge whether the current moment is within the scheduling period; if so, update the real-time state of the corresponding EV according to the result of Step 6 and return to Step 1; if not, the entire real-time AGC command decomposition during the dispatchable period is complete.
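The compact sketch below ties the steps together for a single aggregator, reusing the helper functions sketched earlier. It omits the grid-level optimization (Steps 3-5 are collapsed into a given command sequence) and the second-case discharge logic, so it illustrates the control loop rather than the full algorithm; all names are assumptions.

```python
def run_dispatch_period(evs, agc_commands, dt_h=DT_H):
    """One pass through Steps 1-7 of Fig. 6 for a single aggregator.
    agc_commands: the aggregator's share of each interval's command (kW,
    positive = charge more, negative = charge less / discharge)."""
    for p_cmd in agc_commands:                           # Step 7 loop
        band = [dispatchable_power(ev) for ev in evs]    # Steps 1-2
        shares = allocate_first_case(p_cmd, band)        # Step 6 (first case only)
        for ev, p in zip(evs, shares):                   # update EV states
            p_actual = ev.p_plan_kw + p                  # realized charging power
            if p_actual >= 0.0:
                ev.soc_kwh += p_actual * ETA_CH * dt_h   # charging
            else:
                ev.soc_kwh += p_actual / ETA_DIS * dt_h  # reverse discharging
            ev.t_plug_out_h = max(ev.t_plug_out_h - dt_h, 0.0)
```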
Case data
In order to verify the correctness and effectiveness of the proposed bi-level decomposition algorithm for the real-time AGC command, this paper chooses the typical IEEE39 node system for the case study. The time section of 21:45-22:00, Jan 3rd, 2021 is selected for simulation analysis, with a time interval of 1 minute. The AGC command data use the historical data of the Shanghai grid during the corresponding time period. The initial state of each AGC unit is the example data of the typical IEEE39 node system. The corresponding unit AGC response cost of the AGC units is given in reference [27], and the cost of the power that can be adjusted up is the same as that which can be adjusted down. The unit AGC response cost of the aggregator is 0.617 ¥/kWh according to the electricity price in the corresponding time period in Shanghai. Node 2 is set as the node of the aggregator that can respond to the AGC command. The real-time data of 5000 EVs in Shanghai corresponding to the time period are selected for analysis. The battery capacity of each EV is 50 kWh and the maximum charging/discharging power of the charging pile is 20 kW/−20 kW. The battery life loss fitting curve is obtained according to the experimental data of LiFePO4 batteries in reference [28]. The upper limit of the SOC of the EV battery is taken as 0.8 and the lower limit as 0.2, so as to reduce the number of excessively deep charging and discharging cycles of the EV battery. The number of segments is set to 5, and the charging and discharging efficiency is 0.9. The unit cost of LiFePO4 batteries is 1500 ¥/kWh.
Case result
According to the process in Chapter 4, the case study is analyzed with the case data in Section 5.1. The real-time dispatchable power of the aggregator in each time interval is calculated and reported to the power grid. The AGC response power and the corresponding AGC response cost of the AGC units and the aggregator in each time interval are shown in Fig. 7. The bar graph in Fig. 7 shows the AGC response power of the AGC units and the aggregator in each time interval obtained by the upper level decomposition algorithm in Section 3.2.1. When the AGC command is positive, the AGC units need to increase their power and the aggregator needs to reduce its load; on the contrary, when the AGC command is negative, the AGC units need to reduce their power and the aggregator needs to increase its load. The line graph corresponds to the total AGC response cost in each time interval.
Based on the AGC response power received by the aggregator, the lower level decomposition algorithm is used to obtain the AGC response power for each EV in each time interval, as shown in Fig. 8 (Clustering result of charging power under the lower level decomposition algorithm). Due to the huge number of EVs in each time interval, the scenes are classified and clustered based on the sign of the AGC response power in order to display the charging power of EVs under the lower level decomposition algorithm. As shown in Fig. 8, the power of EVs gathers between −15 kW and −20 kW in the down-clustering scenario, while the power of EVs concentrates around 15 kW and 20 kW in the up-clustering scenario. Compared with the up-clustering scenario, the power in the down-clustering scenario is more dispersed, because the lower level decomposition algorithm prioritizes reducing the charging power to 0 kW and only then adopts discharging behavior to respond to the AGC command. The result shows that the EV resources are well used to respond to the real-time AGC command.
The process of the real-time AGC command decomposition from the power grid to EVs is completed through the case study, and the feasibility of the proposed bi-level decomposition algorithm for the real-time AGC command in this paper is preliminarily verified.
1) Feasibility analysis of EVs responding to the AGC command
After obtaining the result in Section 5.2.1, in order to verify the feasibility of EVs responding to the AGC command in the form of the aggregator, two comparison scenarios are set up in this section: one where no EV responds to the real-time AGC command, and one where the aggregator participates without V2G. Since the aggregator considering V2G is an important incremental resource, the real-time AGC commands can be satisfied. The key is to compare and verify the economic feasibility of EVs responding to the AGC command, as shown in Fig. 9.
Fig. 9 Economic analysis of EVs responding to the AGC command. It can be found in Fig. 9 that all scenarios can meet the real-time AGC command, and that EVs responding to the AGC command can effectively reduce the cost, which verifies the economic feasibility of EVs responding to the AGC command in the form of the aggregator. Although the two scenarios are economically similar when the power needs to be adjusted up (Times 3, 4, 9, 10, 11, 15), the aggregator with V2G performs better economically because EVs can better exert their discharging capacity (Times 1, 2, 5, 6, 7, 8, 12, 13, 14).
2) Analysis of upper level decomposition algorithm
In the traditional command decomposition process from the power grid to the AGC units, most of the AGC command decomposition methods between the AGC units and the aggregator reviewed in Chapter 1 have difficulty meeting the time precision set in this case study. The two algorithms above that can meet the time precision of real-time AGC command decomposition are compared, and the feasibility of this method is verified from the perspectives of economy and safety.
Algorithm 1: Upper level decomposition algorithm proposed in this paper. Algorithm 2: Decomposition algorithm in a fixed proportion of economy and dispatchable power [29]. Algorithm 3: Decomposition algorithm based on EMD [2]. ① Economy analysis: Algorithm 1 uses the optimization idea to decompose AGC commands with the goal of the lowest AGC response cost, which is more economical than Algorithm 2 and Algorithm 3, as shown in Fig. 9-11. ② Safety analysis: Comparing Fig. 13 with Fig. 14-15, it can be found that the burden rate of each branch in Algorithms 2 and 3 is significantly higher than under the decomposition algorithm in this paper. In Algorithm 2, branches 3 and 13 are overloaded during parts of the time interval (the part above the parallel plane in Fig. 14), and in Algorithm 3, branches 1, 3, 27, and 37 are overloaded during parts of the time interval (the part above the parallel plane in Fig. 15). Under the decomposition algorithm in this paper, no branch is overloaded during the whole period, which verifies the safety feasibility of the decomposition algorithm in this paper.
3) Analysis of lower level decomposition algorithm
① Feasibility analysis
The lower level decomposition algorithm comprehensively considers battery life loss and dispatchable power.The following shows the results from these two perspectives.
Fig. 16 Distribution of battery life loss. In terms of battery life loss, it can be seen from Fig. 16 that the overall life loss distribution is skewed to the left, indicating that the decomposition method in this chapter has considered the battery life loss well. In terms of dispatchable power, EVs use their V2G capability to respond to the AGC command, which makes the box of the end SOC flatter than the box of the initial SOC in Fig. 17, indicating that the overall SOC level is more concentrated. It is verified that the decomposition method in this chapter takes the dispatchable power into account well.
② Effectiveness Analysis
The lower level decomposition algorithm proposed in this paper objectively considers the battery life loss and dispatchable power of EVs. In order to verify its effectiveness, it is compared with a previous traditional method: Method 1: Considering the battery life loss and dispatchable power (the lower level decomposition algorithm proposed in this paper).
Method 2: Considering the dispatchable power only [17]. Through the comparison of the total life loss of the two methods in Tab. 1, as well as the comparison of the life loss distributions in Fig. 16 and Fig. 18, it can be concluded that the lower level decomposition method proposed in this paper has great advantages in terms of life loss. Compared with Method 2, the total life loss is reduced by 9.34%, which verifies the effectiveness of the lower level decomposition method in this paper.
4) Applicability analysis of the decomposition algorithm
In order to verify the applicability of the bi-level decomposition algorithm, the typical IEEE118 node system is simulated and analyzed with the same parameter settings as in Section 5. The case result proves the applicability of the bi-level decomposition algorithm proposed in this paper by decomposing the real-time AGC command well while satisfying the time precision of AGC.
Fig. 1 Information interaction framework between EVs and the power grid
Fig. 2 Real-time AGC command acquisition in the power grid level
Fig. 3 Feasible region of the dispatchable ability for single EV
Fig. 4 Relationship between original life loss model and piecewise linear life loss model
Fig. 7 AGC response power and cost in each time period under the upper level decomposition algorithm
Fig. 17 Box figure of start SOC and end SOC
Fig. 18 Distribution of battery life loss under method 2
① Power constraint of the AGC unit ($\underline{Q}^{\mathrm{gen}}_{i}$ presents the reactive power lower limit of the $i$th AGC unit); ② Ramp rate constraint of the AGC unit; ③ Power constraint of the aggregator.
Lower level decomposition algorithm considering discharging: similarly, the idea of ability-contribution is used to decompose the command to EVs based on the consensus algorithm. Considering that the state of each EV has been updated beforehand, the AGC response power of each EV can be obtained by accumulating the charging and discharging scenarios.
The case result is shown in Tab. 2 (Solution speed and AGC response cost in each time period). | 2023-02-15T16:08:53.999Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "faf9637a89dba753f7769c6549724154e11c338a",
"oa_license": "CCBY",
"oa_url": "https://www.techrxiv.org/articles/preprint/Bi-level_decomposition_algorithm_of_real-time_AGC_command_for_large-scale_electric_vehicles_in_frequency_regulation/21115498/1/files/37461661.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "448b59ef01a817f3ee252e97d5acbb9e0ceb99cf",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
10380043 | pes2o/s2orc | v3-fos-license | ORIGINAL INVESTIGATION Open Access
Assessment of the cardiovascular safety of saxagliptin in patients with type 2 diabetes mellitus: pooled analysis of 20 clinical trials
Abstract. Background: It is important to establish the cardiovascular (CV) safety profile of novel antidiabetic drugs.
Introduction
Cardiovascular (CV) disease is the leading cause of mortality and morbidity in patients with type 2 diabetes mellitus (T2DM) [1]. In the United States, the prevalence of self-reported CV disease in people with T2DM is estimated to be >30% [2], and CV events account for almost 70% of diabetes-related deaths in individuals aged ≥65 years [1].
Although epidemiologic studies suggest that hyperglycemia is associated with adverse CV events [3][4][5], the effects of intensive glycemic control on CV outcomes in interventional studies are not clear [6][7][8]. Moreover, in some studies and with some antihyperglycemic drugs, a tendency toward an increased risk for CV events has been reported [7,9,10]. However, follow-up of prominent clinical trials in type 1 [11] and T2DM [12] suggest that intensive glycemic control may reduce CV events over the long term.
Because of the uncertainty surrounding glycemic control and CV events and the association of increased CV events with some antihyperglycemic drugs, in 2008 the US Food and Drug Administration recommended that CV safety be assessed as a component of the clinical development program of new antihyperglycemic drugs [13].
Saxagliptin is a dipeptidyl peptidase-4 (DPP-4) inhibitor approved as an adjunct to diet and exercise to improve glycemic control in adults with T2DM [14]. DPP-4 inhibitors are oral antihyperglycemic agents that inhibit the inactivation of the incretin hormones, glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide, resulting in increased glucose-dependent insulin secretion and suppression of glucagon secretion [15]. Observational evidence suggests that GLP-1 may have protective effects on the CV system, independent of glucose control [16]. However, DPP-4 is increased in patients with T2DM [17,18] and elevated circulating DPP-4 is associated with subclinical left ventricular dysfunction in these patients [18]. Therefore, it is of interest to assess the CV safety of DPP-4 inhibitors.
In randomized, controlled, clinical trials, saxagliptin was effective and well tolerated over 24 weeks in improving glycemic control when used as monotherapy [19,20] and as add-on therapy to metformin [21], glyburide [22], or a thiazolidinedione [23] in patients with T2DM. The advantages of DPP-4 inhibitors are their tolerability, a low rate of hypoglycemia, and weight neutrality [24].
Results from large outcome trials of saxagliptin in patients with prior CV disease or multiple CV risk factors (SAVOR) [25] and alogliptin in patients after acute coronary syndrome (EXAMINE) have recently been published [26] and have shown that saxagliptin and alogliptin do not increase or decrease major adverse CV events (MACE). In contrast to those trials in patients with T2DM and high CV risk, the current analysis evaluated MACE and its individual component events of CV death, myocardial infarction (MI) and stroke, as well as heart failure, with saxagliptin in the general population of patients with T2DM that participated in the saxagliptin clinical development program. The present analysis expands on a previous assessment of the CV safety of saxagliptin [27] and analyzes MACE in 20 phase 2 and 3 trials of saxagliptin versus placebo or active comparator.
Study design
This post hoc analysis (N = 9156) used pooled data from 20 randomized phase 2b and 3b controlled clinical trials of saxagliptin. These trials were placebo-controlled or active-comparator studies of saxagliptin (2.5, 5, or 10 mg/d in most studies; 20, 40, or 100 mg/d in 1 phase 2b study) as monotherapy or add-on therapy to metformin, a sulfonylurea, a thiazolidinedione, or insulin ± metformin for up to 206 weeks (including long-term extension studies) in patients with T2DM (Table 1). Data from the SAVOR study in patients with prior CV disease or multiple CV risk factors were not included in this analysis. In some studies, rescue medication (metformin, pioglitazone, or titrated insulin) was given during the study if patients met prespecified glycemic criteria. In long-term extension studies of monotherapy, patients in the placebo arm received blinded metformin 500 mg. Detailed methodology and primary findings for these studies have been published (Table 1). Patients were followed until completion of the study or premature discontinuation from the study. The studies were performed in accordance with the Declaration of Helsinki and all patients provided written informed consent. The protocols were approved by a local ethics committee.
Analyses
Adverse events (AEs) and serious AEs (SAEs) were reported by study investigators using standard reporting procedures. AEs were coded using the Medical Dictionary for Regulatory Activities, version 15.0 (MedDRA). AEs occurring up to 1 day following the last treatment day or up to the last visit day in the short-term plus long-term period (where applicable), whichever was later, were included. SAEs occurring up to 30 days following the last treatment day or up to the last visit day in the short-term plus long-term period, whichever was later, were included.
Major adverse CV events, defined as CV death, MI, stroke, and cardiac ischemic events reported by investigators were systematically identified using a list of MedDRA preferred term (PT) diagnoses. All identified potential CV events subsequently went through treatment-blinded adjudication by independent reviewers at the Duke Clinical Research Institute (DCRI; Durham, NC; 8 studies) or the Montreal Heart Institute (MHI; Montreal, QC, Canada; 12 studies).
Briefly, for the studies retrospectively reviewed by DCRI, cases included all deaths, MI, and stroke events as well as all events coded by any of the 148 MedDRA PTs representing possible ischemic events. Methods for full CV event identification have been previously published [27]. For the 12 studies prospectively adjudicated by MHI, the sponsor identified potential cases for adjudication based on AEs and SAEs with PTs that correlated with the following Standardized MedDRA Queries (SMQs) groupings, as defined by the current version of MedDRA: "ischemic heart disease" (adjudicated for possible MI) and "cerebrovascular disorders" (adjudicated for possible stroke). In addition, SAEs (only) with PTs that correlated with the SMQs of "cardiac arrhythmias" or "cardiac failure" were sent for adjudication to determine if the cardiac failure or cardiac arrhythmia was precipitated by MI. Additionally, any event that led to death was identified for adjudication [27]. Heart failure events were not adjudicated and were identified based on PTs from a narrow SMQ for "cardiac failure".
Safety was analyzed in all treated patients, including those meeting rescue medication criteria. Analyses of CV events were performed using the pooled 20-study dataset and a separate pooled subset of 11 studies of saxagliptin add-on therapy to metformin (NCT00575588, NCT00666458, NCT00661362, NCT00121667, NCT00327015 [included saxagliptin + placebo and saxagliptin + metformin vs metformin + placebo], NCT00757588 [included saxagliptin + insulin ± metformin or insulin ± metformin], NCT00683657, NCT00885378, NCT00918138, NCT01006590, NCT00960076). In addition, subgroup analyses of MACE were performed for saxagliptin 2.5 mg versus control and saxagliptin 5 mg versus control in the 20-study pool. The saxagliptin 2.5-mg group included patients who received an initial dose of saxagliptin 2.5 mg once daily, except for those enrolled in the renal impairment study (NCT00614939). The saxagliptin 5-mg group included patients who received an initial dose of saxagliptin 5 mg once daily or 2.5 mg twice daily. Patients receiving doses of saxagliptin <2.5 mg/d or >5 mg/d were not included in the analyses by dose. For MACE and individual CV component events, the number of patients with the event, the time up to an event or censoring (for patients without a MACE), the exposure-adjusted incidence rate (IR), and the incidence rate ratio (IRR), which provides a means to account for differences in study duration and mean follow-up time with saxagliptin and control, were calculated. To account for differences between studies in patients, event rates, and randomization ratios, the IR (number of patients with events per 100 patient-years) with 95% CI was calculated using the Mantel-Haenszel method, stratified by study. Exact 95% CI was calculated for the IRR, stratified by study. In addition, adjudicated MACE were analyzed using a Cox proportional hazards model.
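As a rough illustration (not the study's analysis code), the stratified rate comparison described here can be computed from per-study event counts and patient-years with the standard Mantel-Haenszel estimator for rates. The exact-CI computation described in the text is omitted, and the numbers in the example are hypothetical.

```python
def crude_ir(events, patient_years):
    """Exposure-adjusted incidence rate per 100 patient-years."""
    return 100.0 * events / patient_years

def mh_irr(strata):
    """Mantel-Haenszel incidence rate ratio pooled across studies (strata).
    Each stratum is (events_trt, pt_trt, events_ctl, pt_ctl), with
    person-time in patient-years."""
    num = sum(a * t0 / (t1 + t0) for a, t1, _, t0 in strata)
    den = sum(b * t1 / (t1 + t0) for _, t1, b, t0 in strata)
    return num / den

# Hypothetical two-study example (not data from this analysis):
studies = [(3, 900.0, 4, 450.0), (5, 1100.0, 4, 500.0)]
print(crude_ir(sum(s[0] for s in studies), sum(s[1] for s in studies)))
print(mh_irr(studies))
```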
Patient demographics and clinical characteristics
In the 20-study pool, demographic and clinical characteristics were similar between the saxagliptin (n = 5701) and control (n = 3455) groups (Table 2). Most patients were white and <65 years of age, and 45% (control) to 49% (saxagliptin) had a duration of T2DM of ≤3 years. In the add-on to metformin study pool, demographic and clinical characteristics were also similar between the saxagliptin (n = 2981) and control (n = 2190) groups (Table 3). There was a higher proportion of patients with duration of T2DM of ≤1.5 years in the 20-study pool, compared with the 11-study saxagliptin add-on to metformin pool; otherwise, no notable differences were observed between the pooled populations. The total follow-up time for saxagliptin and control for the 20-study pool was 6051 and 2869 patient-years, respectively, with an average follow-up time of 1.06 years/patient and 0.83 years/patient, respectively. The proportion of patients that prematurely discontinued from the study varied based on the length of study. The rate of premature discontinuation was higher with saxagliptin versus control in 3 studies, higher with control versus saxagliptin in 9 studies, and similar between groups in the remaining studies.
Cardiovascular events
In the 20-study pool, exposure time to the first MACE or censoring was 6039 patient-years in the saxagliptin group. In the 11-study pool of saxagliptin add-on to metformin, the exposure time to a first MACE event or censoring was 3287 patient-years in the saxagliptin group versus 1783 patient-years in the control group. Heart failure was not defined as a component of MACE and was not adjudicated but was analyzed separately. For heart failure (20-study pool only), the IR (SE) was 0.34 (0.08) and 0.62 (0.15) for saxagliptin and control, respectively. IRRs for these individual events ranged between 0.55 and 0.87 (Figure 2).
Discussion
In this pooled analysis of 9156 patients with T2DM from 20 phase 2 and 3 clinical trials, treatment with saxagliptin was not associated with an increased risk of CV events or heart failure compared with placebo or active comparator. These results expand on previous findings on the CV safety of saxagliptin reported in a meta-analysis of 8 phase 2 and 3 trials [27]. In that analysis, a total of 40 MACE events in 4607 patients were reported. The relative risk (95% CI) for saxagliptin versus comparator for a composite endpoint of adjudicated CV death, MI, and stroke was 0.43 (0.23, 0.80), which suggested possible CV protection with saxagliptin. The present analysis expanded on the previous study and included 9156 patients who experienced 74 MACE events.
In this larger population, which should provide a more precise risk estimate, the relative risk (95% CI) for a composite endpoint of adjudicated CV death, MI, and stroke was 0.75 (0.46, 1.21), suggesting no increased risk of MACE in this 20-study pool. Incidence rates for CV events for saxagliptin were not different from those for placebo or comparator in most analyses, with the exception of the lower IR for MACE in the saxagliptin 2.5-mg group in the subanalysis of the 20-study pool. However, it should be noted that only 7 of the 20 studies included patients who had received the 2.5-mg saxagliptin dose. The present findings are also consistent with previously published meta-analyses of CV events from clinical trial programs for other DPP-4 inhibitors, including vildagliptin, sitagliptin, linagliptin, and alogliptin. In a pooled analysis of 25 clinical trials, the relative risk (95% CI) for cardiocerebrovascular events for vildagliptin was 0.88 (0.37, 2.11) for 50 mg once daily and 0.84 (0.62, 1.14) for 50 mg twice daily [49]. In other meta-analyses, the IRR or HR (95% CI) for CV-related events versus comparators was 0.83 (0.53, 1.30) for sitagliptin [50], 0.34 (0.16, 0.70) for linagliptin [51], and 0.64 (1-sided 97.5% CI, 0.0, 1.406) for alogliptin [52]. In addition, a meta-analysis of 70 trials of DPP-4 inhibitors enrolling 41,959 patients reported a reduction in MACE (n = 495 total events of CV death, nonfatal MI, stroke, acute coronary syndromes, and/or heart failure; odds ratio, 0.71 [95% CI, 0.59, 0.86]) [53]. Although these studies are not directly comparable because of different CV endpoints, study designs, adjudication procedures, patient populations, and background medications, all supported the hypothesis that DPP-4 inhibitors do not increase CV risk and may possibly have CV benefits in patients with T2DM.
Results from the large outcome trial of saxagliptin in patients with prior CV disease or multiple CV risk factors (SAVOR) have recently been reported [25]. Results generally consistent with those were also reported from the alogliptin trial (EXAMINE) in patients after acute coronary syndrome [26]. SAVOR demonstrated neutrality on the composite primary endpoint of CV death, MI, or ischemic stroke (HR, 1.00 [95% CI, 0.89, 1.12]). The MACE results reported here, in a much lower-risk population with an event rate approximately a third of that observed in SAVOR, are consistent with SAVOR in demonstrating a safe profile of saxagliptin with respect to MACE events. The fact that SAVOR did not demonstrate superiority compared with placebo raises at least two alternative, though not mutually exclusive, interpretations: (1) evidence suggesting benefit from meta-analysis and preclinical evidence [16,54] was due to chance, or (2) saxagliptin and likely other DPP-4 inhibitors are safe in all populations, and trends toward benefit occur only in the lower-risk general population studied in the phase 3 clinical development program. The latter hypothesis has been previously suggested based on the only positive subgroup interaction in a patient-level meta-analysis of UKPDS, ACCORD, ADVANCE, and VADT [55]. Owing to the marked differences in population characteristics (eg, age, CV history and risk factors, duration of diabetes, background diabetes and CV medications, proportion of patients with baseline glycated hemoglobin <7%) and population risk (3- to 6-fold higher event rate) between SAVOR and EXAMINE and the meta-analyses of the phase 3 programs of saxagliptin and alogliptin, it is difficult to support or dismiss either interpretation for the lack of benefit observed in SAVOR and EXAMINE. SAVOR also demonstrated neutrality on the broader composite endpoint of CV death, MI, stroke, or hospitalization for unstable angina, heart failure, or coronary revascularization (HR, 1.02 [95% CI, 0.94, 1.11]). One component of this broader endpoint, hospitalization for heart failure, did have an HR with a 95% CI that did not include 1 (HR, 1.27 [95% CI, 1.07, 1.51]). As reported here, heart failure in the 20-study pool had an HR (95% CI) of 0.55 (0.27, 1.12). Again, differences in the patient population, background medications, and/or chance may be involved in the relative inconsistency of these results. Moreover, SAVOR was an event-driven trial in a highly defined population (prior CV disease or multiple CV risk factors), whereas the 20 clinical trials analyzed in this study had defined treatment periods ranging from 4 to 206 weeks and included diverse patient populations with T2DM (eg, patients who were treatment naïve, receiving varying background antihyperglycemic medications, or with renal impairment). The phase 3 data presented in this manuscript suggest that the observation of hospitalization for heart failure could not have been anticipated based on the phase 3 development program. It may be that further analysis of SAVOR results or the other prospective CV outcome trials with DPP-4 inhibitors [56,57] will give further clarity to the two issues raised here.
Figure 3: Incidence rate ratios for saxagliptin vs control (point estimates and 95% CI) for CV death, myocardial infarction, and stroke in the add-on to metformin study pool. Numbers in parentheses are total patient-years of exposure (the time up to an event or censoring). CV = cardiovascular; IRR = incidence rate ratio; SAXA = saxagliptin.
Certain limitations of this analysis should be recognized and considered when interpreting the results. To handle missing data resulting from premature discontinuation, the analysis methods assumed that event rates would have been similar had the patient completed the study. However, patients treated with saxagliptin tended to be followed longer and had a lower rate of discontinuation compared with those who received control treatment. Results using this assumption should be interpreted with caution.
The saxagliptin group was also heterogeneous and included patients treated with doses higher than the approved 2.5- and 5-mg once-daily doses. Further, the analyses of the 2.5- and 5-mg doses used distinct study pools because not all studies included 2.5- and 5-mg arms, which precludes direct comparison of results for the 2 doses. It is important to recognize that the pooled patient population in these clinical trials was highly selected, which may have resulted in a lower event rate compared with that observed in clinical practice. Finally, there was relatively limited experience beyond 18 months.
Conclusion
Pooled data from 20 clinical trials involving 9156 patients with T2DM suggest that saxagliptin is safe and not associated with an increased CV risk.
Authors' contributions
NI contributed to the conception and design of the study and the drafting and final approval of the manuscript. AP contributed to the conception and design of the study and the drafting, revision, and final approval of the manuscript. RF contributed to the conception and design of the study and the drafting, revision, and final approval of the manuscript. MD contributed to the conception and design of the study, analyzed the data, and revised and approved the final version of the manuscript. BH contributed to the conception and design of the study and the drafting, revision, and final approval of the manuscript. | 2018-05-08T17:38:20.849Z | 0001-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "e8d22ee6bf33101d368a20288497f62fa61184e1",
"oa_license": "CCBY",
"oa_url": "https://cardiab.biomedcentral.com/track/pdf/10.1186/1475-2840-13-33",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "e8d22ee6bf33101d368a20288497f62fa61184e1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
153417245 | pes2o/s2orc | v3-fos-license | Information and Communication Technologies for Women’s Socioeconomic Empowerment
The report will provide a brief overview of major themes for women and ICTs, including issues for girls versus women; the ICT workforce; and the opportunities versus the threats of ICTs for women's lives. The report will also discuss the issue of women and science and technology. Several policy recommendations will be drawn, among them the following: the economic opportunities women can bring to development through ICTs will not be realized unless policies for all mainstream efforts take gender considerations into account; policy makers should host forums that allow gender experts to debate the issues and arrive at a diversity of perspectives and recommendations that recognize the complexity of the issues and their impact on socioeconomic development; and policy is needed to ensure that investment in ICTs contributes to more equitable and sustainable development for all.
Information and Communication Technologies for Women's Socioeconomic Empowerment
■ Policy makers should host forums that allow gender experts to debate the issues and arrive at a diversity of perspectives and recommendations that recognize the complexity of the issues.
■ Policies are needed to ensure that investment in ICTs contributes to more equitable and sustainable development for all.
■ Policy is critical to produce and maintain local content for women, to make this content's access and usage women-friendly within the local culture, and to create capacity for women and men to maintain and enrich that content.
Suggested recommendations for action are as follows:
■ Implementation efforts should refrain from transforming models and studies into "formulated approaches" or "prescriptive measures" if we are to ensure that the innovative character of ICTs remains in the hands and control of the users themselves.
■ Nontraditional patterns of education, entrepreneurship, and business development are needed to develop opportunities that reach beyond traditional culture.
CHAPTER 1
Overview
Information and communication technologies permeate every aspect of our lives, from community radios in the most rural parts of the globe, to cellular phones in the hands of women and men in every community on earth, to computers in almost every medium to large organization. The advancement of ICTs has brought new opportunities for both knowledge sharing and knowledge gathering for both women and men. To the extent that the global community can reach heretofore unconnected individuals, families, and populations to better understand their needs and challenges, ICTs can provide unlimited opportunities for economic development and social engagement through new, innovative thinking and tools. However, a basic assumption is that all members of our global community benefit from and are part of the growing knowledge society. ICTs have been compared to a double-edged sword: advancing the knowledge society on one hand, and deepening gender and social divides based on pre-existing social divisions on the other. Whether large portions of the global community remain underserved and unengaged is the largest determinant of success for current development efforts. Specifically, without a thoughtful policy, strategy, and execution plan to ensure women's full engagement in the knowledge society, the places in which they work, the families for whom they care, and the communities in which they live and serve will not thrive.
The belief that one policy fits all has clearly demonstrated a lack of effectiveness over the years: billions of dollars and millions of hours of labor have been lost, with little achievement toward the Millennium Development Goals. This paper will present both traditional examples of effective practices for women in developing regions and a new, bold model for development: a "female first" policy that requires all mainstream initiatives to take into consideration the impact on and engagement of women. By asking how women will be affected and engaged, unresolved challenges can be addressed in areas including e-government, agriculture, e-learning, business development, and entrepreneurship. Overcoming these challenges will benefit women, but it will also benefit their families, their communities, and the developing regions.
Currently, in those countries with disproportionately lower income, women face greater constraints than men in four areas:
1. Access and use of ICTs. The work of Sophia Huyer and others has demonstrated that there is no correlation between the saturation of ICTs in a country and women's access to those ICTs.1 Social and cultural factors limit women's access to shared ICT facilities such as cybercafés or telecenters,2 which often become meeting places for young men and hence deter women's absorption and adoption of ICTs to access information and knowledge. Because women and girls often do not control the finances of the home or do not have sufficient personal income, they may lack the financial resources to purchase radios, televisions, or computers or to pay Internet service providers (ISPs) for monthly access to the Internet. Girls and boys may have differing access to computer skills training in primary and secondary schools. Anecdotal evidence suggests boys will often get priority access where computers are equally available, but this needs to be better measured and understood in developing countries before generalizations can be made. Finally, for the large numbers of women employed in the informal sector, there is no possibility of using office computers to access the Internet, a possibility that is more accessible to formal-economy employees.
2. Usability and literacy. Access to education continues to be a greater barrier for women than men; an estimated two thirds of the world's illiterate people are women.3 Education in science and technology is considered a male domain in many cultures.4 Training in ICT skills is rarely gender sensitive or tailored to women's needs5 and is sometimes delivered by a male trainer who has embedded perceptions about women's capabilities inconsistent with a research-based understanding of women's competencies and contributions in these fields.6,7,8,9 Familiarity with basic computer use (establishing an email account, communicating via email, navigating the Web, understanding basic Web etiquette, downloading useful and sometimes life-saving information, using CD-ROMs and other interactive materials, and using electronic communication for distance education) comprises basic learning and communication skills needed for workplace tasks by women as well as men.
3. Development and design. Much of the content on the Internet has not been developed to address the needs of women and girls in developing and developed countries, nor is it available in the languages they speak. Digital technology has also been used for harassment and sexual exploitation of women and girls in the form of pornography, trafficking, and predatory emails.10,11,12 While gender-sensitive men have done much to promote gender-equitable content design, fully addressing these issues can only be done when more women become software engineers, content producers, and entrepreneurs, filling the large need for these resources. There is a growing commercial market, yet significantly underserved in the developing world, to be supplied by women entrepreneurs and employees who can both capture women's knowledge for the marketplace and develop knowledge and resources to serve women, their families, and communities in ways the male-dominated field has not yet considered.13 This content by women for women will provide an excellent economic opportunity through the development of currently underserved niche markets. Concurrently, women can help fill the large demand for skilled labor needed for growth by major multinationals, as well as national and local workforce needs.
4. Leadership and power. In both developing and developed communities, women make up a small percentage of the top leadership on boards of directors and among "C"-level business leaders. This has a significant impact on economic development, as suggested by the work of Catalyst,14 a research organization that studied women's participation at the top level of leadership in Fortune 500 companies in the United States against a number of factors:
a. Return on equity: companies with more women representatives on boards of directors outperformed the others by 53 percent.
b. Return on sales: companies with more women representatives on boards of directors outperformed the others by 42 percent.
c. Return on invested capital: companies with more women representatives on boards of directors outperformed the others by 66 percent.
d. The link between women on boards of directors and corporate performance holds across all industries.
While developing countries may find their performance outcomes differ from those of U.S. companies, the effective policy maker will set standards for policies and implementation based on good practices demonstrated through rigorous research. These outcomes meet that criterion and suggest policies that integrate women into the top level of every organization: not because they are women, but because organizations can no longer afford to ignore their best and brightest minds, including women's.
On the positive side, there is a growing body of evidence demonstrating that women's use of the Internet and cell phones has had a strong and powerful impact on their participation in the knowledge society, from e-banking that safely secures family income to connecting with medical experts for health-care advice in ways never believed possible (see Case #4 below).
Unfortunately, the reality today is that the potential of women continues to be underutilized.
Women are underrepresented in all ICT decision-making structures15 (for example, policy and regulatory institutions, ministries, and boards of ICT companies). Within the ICT industry and the growing Information Technology Enabled Services (ITES) clusters, women are found in disproportionately high numbers in the lowest-paid and least secure jobs at the lower end of the supply chain. They hold data entry, phone operation, clerical, and administrative positions with few benefits and the lowest wages.16 On the other hand, the ICT and ITES industries do enable a better work/life balance for women, who continue to be the primary caretakers of children and the elderly. A 2007 UNESCO publication entitled "Science, Technology and Gender: An International Report" addresses these issues in detail.17 New employment models enabled through technology, including teleworking, give women (and men) a wider range of employment options that can be combined with domestic responsibilities. This pattern is now being replicated in the developed world as a means of decreasing business overhead costs of infrastructure and decreasing workers' carbon footprint. Yet it can backfire and exclude women (and men) from selected career trajectories where "in office" time is critical or mobility is essential. This remains a cultural issue, as more and more executives around the world run companies from remote locations, suggesting that telecommuting provides more of an opportunity than an obstacle for personal and business growth. The current financial crisis, combined with growing interest in climate change, is reshaping debate around telecommuting and telework, and creating more supporters for the concept even amongst very conservative organizations.18 It is expected that the culture will change, and with it the advancement possibilities for home-based female workers employed remotely, but we are not there yet. A similar 'cultural shift' can be seen in the adoption of distance learning and e-based teaching methods by established universities that previously shied away from using ICTs for high-standard course delivery.
Men predominate in higher-paid work in hardware and software engineering and management. At first glance, this may appear to be because of the lack of women engaged in the field, but women's stories from Ghana to Saudi Arabia to Italy to Dubai suggest that employer biases keep women graduates in engineering and computer science from gaining positions in these fields or rising in company hierarchies.19 Lack of disaggregated employment data makes referencing these cases difficult. But any failure to employ skilled or educated women in developing regions in ICT businesses is a loss of capacity that has economic impacts well beyond the life of each individual woman. As seen below, by not hiring or training women software engineers to be employees or entrepreneurs, hundreds of people lose through the direct impact of jobs not created and wealth left unrealized, hundreds of thousands may lose through the failure to develop innovative ICT solutions in countries that would benefit women and their families, and millions of dollars are lost from resources that could have better served the needs of the region.
"The question we have to ponder here is simply this: how does a society hope to transform itself if it 'shoots itself in the foot' by squandering more than half of its capital investment? The truth of the matter is that societies that recognize the real and untapped socioeconomic, cultural and political power of women thrive. Those that refuse to value and leverage women's talent, energies and unique perspectives remain developmental misfits. And I daresay that it is not difficult to demonstrate this with a growing body of evidence." President Kagame of Rwanda, February 2007, Gender, Nation Building and the Role of Parliaments
In addition to the cost of lost opportunities, more women than men have been displaced due to increased automation and computerization of workplaces. Increased demand for more advanced skills, as the technology in the ICT sector rapidly changes, means that workers must continually upgrade their skills. Women are at a disadvantage given their multiple roles in work, family, and community and the cultural bias that tends to value an investment in men's education before women's.20,21,22 We expect the landscape to change slightly for women as increased efforts are being made by select governments to reduce the gender salary gaps in ICT jobs (which benefit from being in a new sector attracting young and entrepreneurial women) and to develop policies to attract young women and adolescents to science and technology careers and on-the-job training. Research has indicated that positive role models often can have an impact on traditional mindsets and behaviors related to women's engagement in ICTs and science. Consequently, forward-thinking countries have launched media campaigns to promote women's full engagement and socioeconomic empowerment through ICTs and have provided special funding for women-owned small and medium enterprises (SMEs) that provide ICT (for example, in South Africa, Qatar, Tunisia, and the United Arab Emirates). But while role models are a necessary condition for women's engagement, they are far from sufficient and should be viewed as only the first step in career awareness. In addition, women's entrepreneurship programs should not be simply men's entrepreneurship programs with women students, because research indicates that women both run their businesses differently and approach the field differently than men. Understanding the unique qualities of women's businesses, how to help women successfully manage what is usually a male-dominated space, and fully appreciating the value women bring to the business community are essential for the long-term sustainability of women-owned ICT businesses. Asking women to behave more like men is a short-term plan for long-term failure. Creating a research-based appreciation of the unique and innovative business models women develop to serve both the mainstream and the underserved populations will add to the rich diversity of talent needed to ensure an economy thrives. The latter will also most likely increase the consumption and production of ICT devices, gadgets, and content for and by women (taking into account segmentation factors such as age, education, culture, language, and local context). This paper presents how and why ICTs impact women and men differently and the implications of women's lack of engagement, participation, and leadership in the knowledge society through ICTs for business and development. Furthermore, the paper will explore the impact of the ICT "gender gap," a new and growing form of discrimination that offsets the benefits that ICTs provide to women. Additional policy development may support positive outcomes and mitigate negative ones.
The paper will also highlight examples of best practices, and weaknesses in assumed best practices, to provide opportunities for full-scale execution of efforts to achieve measurable outcomes toward the Millennium Development Goals (MDGs). An important focus is the need to move many of the carefully incubated gender policies and initiatives, developed through thoughtful leadership in specialized women's programs, into the mainstream. This will help ensure that well-designed initiatives do not inadvertently become "ghettoized" or ignored by the mainstream programs that desperately need the knowledge to enhance and achieve their outcome goals. The collaborative work of the World Bank, described in its recently released report Information and Communication Technology 2009: Extending Reach and Increasing Impact, which this document supports, serves as an example of such integration. This paper's main ideas and key messages should be used by the development community's practitioners and policy makers to support broader discussions on the opportunities and challenges that exist in the ICT sector to benefit all people and to ensure projects have provisions and incentives to include women's participation at all levels. The suggested concepts will hopefully provide grounds for fruitful discussions among government leaders forming ICT policies, support good designs for ICT skills training and education programs, help develop effective guidelines for good business practices that include all talented workers, support entrepreneurship development customized to the learner (not expecting the learner to adapt to the training), and in general develop strategies to eliminate any negative impact a gender digital divide would have on development.
Box 1: Introductory Notes to Keep in Mind
• The paper examines a broad range of cases to provide a diversity of ideas.
• Web-based/online case studies on gender and ICTs are not reproduced in this document but are included as Web resources, references, or publications in Appendixes.
• While the review is predominantly focused on developing countries, projects and policies deliberately targeting women in Australia, Canada, the European Union, and the United States are included.
• Qualitative measurements of the economic, social, and political impacts of ICTs are difficult to articulate in this study, limiting the conclusions that can be drawn. Comprehensive studies measuring the impact of ICTs generally do not disaggregate gender in data collection or analysis.
CHAPTER 2
Women, Gender, and ICTs: Why Does It Matter?
In 1995, the United Nations Commission on Science and Technology for Development (UNCSTD) recognized the growing influence of ICTs in development and the importance of women's participation in discussions regarding their integration globally. To that end, it established a Gender Working Group to address the significant gender issues, from access to control. The United Nations Division for the Advancement of Women (DAW), the International Telecommunication Union (ITU), and the UN ICT Task Force Secretariat released a report in 2002 that focused on ICTs as a tool to advance and empower women. When the World Summit on the Information Society (WSIS) was established, a Gender Caucus was created to ensure women had a seat at the table and a voice in the room.1 The Commission on the Status of Women, during its 47th session in 2003, developed Agreed Conclusions that built upon the DAW report and urged WSIS leaders to integrate gender perspectives in every aspect of the Summit.2 The first WSIS summit, held in Geneva, debated the issue of gender. In their final Declaration of Principles, the body stated: "We affirm that development of ICTs provides enormous opportunities for women, who should be an integral part of, and key actors, in the Information Society. We are committed to ensuring that the Information Society enables women's empowerment and their full participation on the basis of equality in all spheres of society and in all decision-making processes. To this end, we should mainstream a gender equality perspective and use ICTs as a tool…."3 Yet despite the consistent agreement in policy, there have been challenges in the implementation of those policies.
Originally the focus on girls and women in ICTs was intended to address Millennium Development Goal 3, which targets the elimination of gender disparity in primary and secondary education, preferably by 2005, and at all levels of education by 2015. ICTs provide a new model for knowledge dissemination, diffusion, and creation that could, if developed correctly, address a long-standing, intransigent problem of education access and empowerment. To be able to benefit from the new knowledge society, one must have the education and literacy needed to use the ICTs, as well as have access. However, "women and girls are poorly placed to benefit from the knowledge society because they have less access to scientific and technical education specifically, and to education in general."4 Often the Internet is provided in English, and women, particularly in rural areas, do not speak or read English. The impact of having few women Web developers and software programmers, particularly in the developing regions, may be a lack of local content relevant to women's needs (the basic "how to" for health, nutrition, taking care of oneself, family, farming, husbandry, agriculture, and so forth) and interests, but how this impact can be measured needs to be more specifically researched.
Education is generally recognized as a key ingredient for all forms of development, including economic. Educated women increase opportunities for their families and children. ICTs are an important tool for education delivery (e-learning), as well as a series of products about which one needs education. In other words, individuals need to be educated about the use of ICTs to use them, and once this education takes place, additional literacy and education can follow. Because the barriers to education delivery in many remote areas are so problematic, policy makers and development officials often make the mistake of focusing on these challenges alone and assuming that simply getting power and technology access into the region or remote village is sufficient to address ICT education delivery for all. In fact, the continuing high percentage of women's illiteracy compared to men's suggests this assumption is false. Instead, mainstream policy makers and program developers should ask the question, "How will this project affect girls' and women's literacy in addition to men's?" Asking this question would highlight a long series of issues left unaddressed for households. By disaggregating the questions as well as the data, many as yet unexplored issues emerge that remain barriers for economic development around the world. Addressing these barriers at the beginning of a project or policy effort will ensure digital gaps are addressed before they emerge and reinforce current knowledge, power, or image gaps.
Web 2.0 applications are an emerging area of interest to the world, and to women in particular, because of their power to connect and lobby for socioeconomic concerns. The Busoga Rural Open Source and Development Initiative (BROSDI) is an NGO that engages the rural community in sharing knowledge to reduce household poverty in Uganda. Despite issues of literacy and Internet access, the organization feels that Web 2.0 applications encourage collaboration and networking, even in rural areas. Women's interest in community makes these tools of particular importance. The data to date has yet to clarify the extent to which women access, use, and develop these new tools.
Girls and ICTs
Throughout this report, references to both girls and women are used together to illustrate a continuum of impact. For those who might suggest that the differences between girls and boys may be a result of competence or capability, a recent research-based report by the OECD ends the debate with performance indicators for boys and girls over time. The report highlights the achievement of girls in school in both science and mathematics (including computer science). Specifically, the authors note the following:
■ In primary education, there are few gender differences apparent in science or mathematics, although girls excel in reading even at this early age.
■ In secondary education, females had higher average achievement than males in mathematics and science.
■ In tertiary education, while traditional gaps have been narrowing, graduation rates for computer science and mathematics are lower for females than males.5
The authors go on to conclude, "where education and human capital accumulation drive innovation and competitive advantage, increasing graduation rates among female students is for many countries the most immediately available opportunity for increasing the output of graduates in these critical areas."6
This research is supported by a meta-analysis of 5,000 individual studies, which found that boys and girls have similar psychological traits and cognitive abilities when it comes to mathematics and science education, suggesting that focused efforts are needed to help encourage girls to persist in these areas, such as eliminating gender bias about girls' abilities and interests.7 In other words, by engaging more girls and women in the development of ICTs, the world can better ensure there are quality content, products, and services that meet the needs of girls and women as well as their families, communities, and countries. Concurrently, girls' passive participation with ICTs leaves them vulnerable to predators8 and less likely to engage in ICTs for knowledge gathering, sharing, and eventually business development and careers. There is little known about the intersection of the girl child and ICTs in the developing world other than some pilot studies that provide a glimpse as to their value in education for girls as well as boys. But the lack of disaggregated data and indicators makes these issues difficult to discuss, like most other aspects of gender and ICTs. What we do know comes from the work of Margolis and Fisher in their book Unlocking the Clubhouse: Women in Computing: the lack of exposure to IT from a young age can lead to an erosion of confidence, which in turn leads to an increased attrition rate among young women in the IT field.9 This is a trend we observe in most sciences and areas of technology: earlier exposure, usage, and experimentation is always a plus, especially for girls, for whom science and technology are heavily associated with cultural stereotypes, even in the developed countries.
The good news is that lack of data has not kept advocates around the world from working on the issues. Programs like Microsoft's DigiGirlz™ give high school girls the opportunity to learn about technology careers, connect with Microsoft employees as role models, and participate in hands-on activities.10 The program, which started in the United States, has recently been hosted in Dubai, United Arab Emirates, with over 200 girls participating and more interest developing for next year's expansion (see Case Example 9 on p. 35).
Another program, Computer Mania Day for Girls™, is an internationally award-winning event that targets younger girls, ages 10-12, for similar experiences using role models, hands-on activities, and an electronic puppet as the keynote speaker to demonstrate high-tech applications that are both fun and raise awareness. The program also has a side event for parents and teachers to educate them on how to better encourage girls' preparation for technology careers.11 An impact study of Computer Mania Day for Girls™ attendees, conducted after several years, gathered highlights of comments from the girls.
The Kofi Annan Centre in Accra, Ghana, hosts a number of technology courses, including the Cisco Learning Academy classes for youth, which have equal numbers of girls and boys enjoying the course and preparing themselves with twenty-first-century workforce skills.
In all cases (and there are many more), the goal is to create both an awareness of the opportunities for the girls if they choose to study in these fields and also an understanding, as users of technology, of the many current and emerging applications of technology. Once awareness and interest are created, more needs to be done to encourage that interest in schools and in homes and communities where ICTs are accessible and safe. Additional considerations are as follows:
■ Ensure educational content and curriculum is developed for girls' interests as well as boys', but avoid gender stereotypes to achieve this goal.
■ Create safe times and spaces for girls to access ICTs where they will not be in competition with boys, as research shows boys' aggressive behavior tends to push girls out.
■ Educate parents and teachers about girls' capabilities in ICTs, highlighting women's many contributions to date, such as the fact that the first software was created by six women mathematicians called "computers."
■ Educate girls and their families about online predators and child safety through cell phones and the Internet.
Women and ICTs
Women have been engaged in ICT development since its inception. It was a woman who developed the compiler, identified the first computer bug, and created the first programs. Today, example after example highlights the value of women's voices and the importance of their contributions. Women's participation in economic development through microloans to build SMEs has been well documented and publicized. Women's business incubators are emerging throughout the developing world in recognition of the need to provide business opportunities for women as well as men to enhance, grow, and quicken the pace of economic development. The full scale and power of many of these SMEs are yet to be fully realized, but there is a growing awareness of women's ability to use ICTs to expand their work across regions and around the world. Highlighted in this document are a few women-initiated ICT projects that touch every aspect of development, from improving access to health care to promoting peace. Women from the grass roots are using ICTs to expand their mission and drive their passion to improve the world. There is a growing reality that women's engagement in ICTs is important for multiple forms of development, including social and political justice12 as well as economic development. But we do not understand well how women access, use, develop, and/or design technology compared to men. This is in part because of the lack of indicators as well as of disaggregated data. This lack of information is of growing concern, and organizations such as the ITU are doing a better job of gathering household data that looks at gender as a variable. What Huyer and Hafkin found in their work is that there is little correlation between Internet penetration in a country and the percentage of female Internet users (figure 2.2).
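One way to probe such a correlation claim, shown here purely as an illustration with invented country figures rather than Huyer and Hafkin's data, is to correlate national Internet penetration with the female share of Internet users across countries:

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical (Internet penetration %, female share of users %) per country
penetration  = [5, 12, 30, 55, 70]
female_share = [40, 20, 45, 22, 43]

r = correlation(penetration, female_share)
print(f"Pearson r = {r:.2f}")  # a value near zero indicates little correlation
```

With real country-level data disaggregated by sex, a near-zero r would support the finding that rising penetration alone does not close the gender gap.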
Women's full participation in the knowledge society is indeed a necessary condition for development to take place. Conversely, the lack of participation by women will slow progress and negatively impact families and communities.
The current position that one mainstream policy for ICTs fits all is not sufficient to engage women (and many men) in the knowledge society. The consequence of failing to disaggregate the data by gender, to have mainstream policy makers understand gender issues fully, and to create policy and implementation strategies that acknowledge and assuredly engage women's and men's unique needs and contributions is to design a plan for failure, one with which we are all too familiar (Gurumurthy et al. 2006, pp. 35-41).15
CHAPTER 3
Outcomes and Impacts of ICT Policies and Projects for Women
There is a dearth of literature systematically evaluating the impact of ICTs on women's overall welfare. Even among major recent studies evaluating ICT impact on business development or e-government initiatives, data is not disaggregated by gender. Michael Minges, an expert on gender and ICT data, explains that, first, the lack of disaggregated data is a result of many government organizations' failure to collect national ICT statistics at all; of those government agencies that do compile statistics, most do not provide a breakdown by gender. Second, traditional ICT statistics are either obtained from telecommunication organizations (for example, telephone usage) or estimated based on shipment data (for example, sales of personal computers); these organizations have their own operational or analytical reasons for maintaining the data, and unfortunately gender does not factor into their considerations.1 Existing statistics relating to gender and ICTs are most likely to be found in usage data by sex. Socioeconomic status as a factor in access is less likely to be found. Internet penetration data is relatively easier to obtain through the use of Web-based surveys and is available through national agencies as well as market research companies. For example, the China Internet Network Information Center (CNNIC) compiles a breakdown of Chinese Internet users by sex every six months.2 The number of Internet users multiplied by a factor of 200 in under nine years, from an estimated 620,000 in October 1997 to an estimated 123 million by mid-2006. The gender gap declined from about 80 percentage points in 1998 to about 20 percentage points in 2001 but appears to have remained more or less constant since then (see figure 3.1).
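For clarity on how such a gap is read: it is simply the difference between the male and female shares of all Internet users, expressed in percentage points. The splits below are hypothetical examples chosen to be consistent with the gaps cited above:

```python
def gender_gap_points(male_pct: float, female_pct: float) -> float:
    """Gap between male and female shares of users, in percentage points."""
    return male_pct - female_pct

print(gender_gap_points(90, 10))  # 80: a 90/10 split, like the ~80-point 1998 gap
print(gender_gap_points(60, 40))  # 20: a 60/40 split, like the ~20-point gap since 2001
```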
The 2002 World Bank publication "Information and Communications Technologies: A World Bank Group Strategy" notes evidence that gender inequalities are increasing in developing countries as they move toward competitive market economies and new technologies, and states that "unless this regressive process is controlled, the knowledge-based economy will not only be incomplete but will also widen the gender gap and perpetuate some of the worst obstacles to social change."3 Among the factors set forward for prioritizing different types of information infrastructure assistance is ensuring that "minority ethnic groups, women and the disabled should be a focus of network access and applications support." At the same time, anecdotal evidence is plentiful and varied suggesting ICTs may play a key role in the economic opportunities of disadvantaged men and women around the world. Some of these are facilitative, such as the uses of ICTs in credit and loan access and management, or online training opportunities that would otherwise be difficult for women to access.
There has been some research on the impact of accessing ICTs on women's socioeconomic conditions, from saving lives (early warnings in times of natural disasters) to improving human development and health (through access to information on health and nutrition, disease and infection prevention, and clinic locations) to improving competitiveness in the job market (ICTs open new employment sectors for women in new fields and in a wide range of self-employment possibilities).
One growing opportunity that is being recognized in many countries is the need for more qualified workers to fill gaps in the engineering and IT workforce. Increasing human resources in science and technology, for instance, is one of the key targets of the Lisbon agenda in order to boost competitiveness and increase growth. According to the European Commission, the ICT industry alone contributes one fourth of the European Union's total growth and 4 percent of its jobs. Yet the sector is set to face a skills shortage of some 300,000 qualified engineers by 2010. In an attempt to boost the number of qualified computer engineers in the European Union, and recognizing the relatively low numbers of women engineers compared to men, the Commission, together with leading technology companies, is trying to get more young women interested in ICT careers.5
Case Example 1: Women Creating Global Peace through ICTs
Peace is a necessary condition for economic development. Understanding this, Patricia Smith Melton founded Peace X Peace, an international women's peace organization that uses the power of leading-edge technology tools to connect women across all cultures for mutual support and concerted action through "women's circle relationships" and "sister to sister relationships" that together help to shatter barriers, including language, culture, intolerance, and conflict. Current technology tools include Drupal, CiviCRM, and Roundpoint's Cerkle platform to host their Global Network: a secure, profile-based matching system that connects individual women and groups into egalitarian online Circles. The technology platform helps members connect, build mutual support, advocate for change, and mobilize to take action. Nearly 20,000 members in more than 100 countries connect from their personal computer or mobile phone and participate as equals in programs that highlight women's peace-building actions, promote women's leadership in peace processes, and spark specific peace actions at multiple levels and in multiple languages (removing language barriers for English, Arabic, Spanish, and French speakers with real-time message translation). Women in remote locations or those without access to computers can participate through the cell phones. The Website allows women to highlight their stories from the frontlines of conflict and engage and connect women peace builders though their virtual classroom, a multimedia archive, blogs, and best-practice resources for women's circles and connections. Their technology approach helps their members overcome linguistic, geographic, political, and cultural isolation to connect for peace building.
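As a purely illustrative sketch of the profile-matching idea (this is not Peace X Peace's implementation, which runs on Drupal, CiviCRM, and the Cerkle platform; all names, attributes, and weights below are invented), such a system can be reduced to scoring shared attributes between member profiles:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    languages: set = field(default_factory=set)
    interests: set = field(default_factory=set)

def match_score(a: Profile, b: Profile) -> int:
    """Score two members by shared languages and interests."""
    # A shared language is weighted higher: without one (or real-time
    # translation, as the Global Network provides), a Circle cannot converse.
    return 2 * len(a.languages & b.languages) + len(a.interests & b.interests)

members = [
    Profile("A", {"en", "ar"}, {"education", "peacebuilding"}),
    Profile("B", {"ar"}, {"peacebuilding", "health"}),
    Profile("C", {"es"}, {"education"}),
]

# Pair each member with her best-scoring counterpart; a production system
# would form multi-member Circles and respect privacy and consent settings.
for m in members:
    best = max((o for o in members if o is not m), key=lambda o: match_score(m, o))
    print(m.name, "->", best.name, match_score(m, best))
```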
"It is unacceptable that Europe lacks qualified ICT staff. If this shortage of computer scientists and engineers is not addressed, it will eventually slow down European economic growth," said Information Society Commissioner Viviane Reding, addressing a conference 6 exploring the potential for women in the ICT sector. The conference, held on March 6, 2008, two days before International Women's Day, launched a joint initiative by the European Commission and a number of leading IT companies "to give young women a taste of what a job in ICT would be like. " "We need to overcome common stereotypes which describe ICT careers as boring and too technical for women," Reding told the conference, which also discussed best practices on how to get girls and young women interested in taking up ICT careers as well as possible educational barriers. Encouraged by the experience, the European Commission, together with the private sector, is to draft a "European Code of Best Practices for Women in ICT" by next year's Women's Day. 7
Box 2: Key Collections on Gender-Sensitive Polices and Programs
A few key collections of best practices and project summaries have been compiled recently that offer some insights into the implementation of relevant gender-sensitive policies and programs. The foremost are listed below:
• Gender and ICTs for Development: A Global Sourcebook: A Collection of Case Studies on How ICT has Influenced Women in Developing Countries, KIT Royal Tropical Institute, Netherlands, and Oxfam (UK), 2005.
The Role of Women's Use of ICTs in Sustainable Rural Poverty Reduction
Women around the globe play an important role in food production and distribution. Improving women's access to price and product information, increasing their supply chain options for exporters and freighters, and strengthening women's connections to any knowledge that helps increase their competitive power and improve earnings will lead to increased personal wealth and economic development. Examples of successful cases where access to information helped rural women increase their income may lead to an appreciation of the value of improved policies that will both allow increased ICT access for women and ensure that training is provided to build women's capacity to manage the information they receive as effectively as possible. In the book Gender and Digital Economy: Perspectives from the Developing World,8 case studies from Argentina, Morocco, India, Malaysia, and the Philippines showcase how economic opportunities through ICTs can change the position of women within their families and workplace and give them better choices for their livelihood. However, women farmers and agricultural producers have unique challenges that their male counterparts do not face. Specifically, access to the Internet in rural areas is often possible only through common access points, called telecenters or cybercafés. These specialized centers are usually not open to women, and several cultures frown upon women who mingle with men in these locations. Policy makers and practitioners alike need to consider this when implementing their plans. Special provisions need to be created, such as women-only telecenters or women-only capacity-building operations. This will allow women to benefit equally from information access and will reduce the impact of the ICT gender gap on rural development.
ICT-delivered knowledge then becomes a two-way vehicle for both informing women about the potential for their participation in development and better informing agencies and their officers about the impact of engendering ICT policies as a strategy for rural poverty reduction.
The Development Benefits for Communities that Provide Broadband Access for Women
Access to reliable and affordable broadband provides women and men with an opportunity to access the immense sources of knowledge and learning material available online. While much of what is available has been developed by men for men, and specifically for English speakers, there are still resources that allow women to learn new skills and to perfect their existing skills. They can join online professional networks or, where none exist, create them, and meet women in the larger community in ways the current culture or deficit of women will not otherwise provide. Electronic mail provides a safe means to communicate with support networks, family members, and potential business contacts. Broadband networks are improving and transforming health services delivery, as can be seen in Case Study 2 on nurses in Kenya. Several examples are also outlined in Appendix 2.
Box 3: Considering ICTs as General Purpose Technologies
Like electrical power before it, ICTs have been recognized as a "general purpose technology" (GPT) that transforms economic relations, enhances productivity, and creates new services and markets. GPTs have the following three characteristics:
• Pervasiveness: GPTs spread to most sectors. This suggests that impacts should be measured at a higher level than the firm or disaggregated sectors. Higher levels of aggregation internalize the externalities or spillover impacts that arise at low levels of aggregation.
• Improvement: GPTs get better over time and, hence, should keep lowering the costs to their users. In fact, one of the problems associated with the study of ICTs is that the field is constantly evolving. Apart from making quality adjustments for improvements in current technology, new technologies will emerge. ICTs are a moving target.
• Innovation spawning: GPTs make it easier to invent and produce new products or processes. That is, they allow us not only to do things better but to do better things. New possibilities are created, and specialization raises productivity.
Women's economic opportunities are linked directly to women's access to land, labor, financial, and product markets. By allowing women to benefit from new electronic-based services such as land title registration, women can fully participate as developers of economic productivity and wealth to support their families and their communities. Older, manual, paper-based processes did not make any provision for the female citizen and instead required male relatives to fill in the paper forms for land and/or other titles. For many countries, the process of automating and reforming registration has triggered a reform in thinking, which has worked to benefit women. By increasing women's inclusion in the property-titling and asset-ownership activities of their localities, women's knowledge and expertise become another valuable resource in the community, bringing more thought leadership into the development conversation and enriching the knowledge contributed to solving development challenges.
With the exception of a few countries, not much progress has taken place. Some middle-income countries are strongly promoting women's education on ICTs (for example, Tunisia and Cape Verde) while others are focusing on empowering women entrepreneurs venturing into the ICT sector (for example, Qatar, the United Arab Emirates, and Bahrain). However, these examples are limited and have yet to be generalized into worldwide policy formulation. Governments have a critical role to play in reexamining policies for access, an enabling environment, and usability factors that can ensure equal opportunities for full productivity and benefits for men and women.
The Transformative Impact of E-government Services for Women
E-government services can target the needs of women, including up-to-date and cost-free public information and services about women's rights, inheritance and family laws, health care, or housing. For an example, see Case Study 3 on e-Seva in India.
While it is not easy to measure the impact of ICTs in the areas of government, health, and education, the repercussions that information and communication technologies are having in these sectors are real, and a number of studies and surveys have produced some concrete results. There are a number of impacts that can be identified with regard to e-government, including improved information flows, reduction of process time and cost, and an increase in efficiency and transparency. A 2005 European Union study confirmed that e-government services were producing real benefits for European Union citizens, governments, and businesses in terms of saving time and gaining flexibility. Online income tax declarations save European taxpayers an estimated seven million hours per year. When generally available and widely used in all member states, such e-services could save over 100 million hours each year. Compared to the same transaction completed offline, the average online transaction saves 69 minutes for citizens and 61 minutes for businesses.9 (There was no gender analysis, however.)
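As a back-of-the-envelope check on how such aggregate figures scale (only the 69-minute per-transaction saving comes from the study; the filing volume below is invented purely for the arithmetic), per-transaction savings multiply linearly:

```python
minutes_saved_per_filing = 69       # study figure: citizen online vs offline
filings_per_year = 6_100_000        # hypothetical volume, illustration only

hours_saved = minutes_saved_per_filing * filings_per_year / 60
print(f"{hours_saved / 1e6:.1f} million hours per year")  # ~7.0 million
```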
An APC WNSP Europe member recently wrote about her experience preparing for an "ICT and Equal Opportunities of Women and Men" panel for an e-government conference with participation from the Czech Republic and other Visegrad countries (Hungary, Poland, Slovakia). She described the working environment of women within the civil service, using her mother's experience as an example. As a low-level officer playing an essential role in the practical application of e-government ideas, her mother noted that the key problem is the low level of skills amongst staff in the usage of new information and communication systems. Only heads of office were trained in the usage of software, with the expectation that they would pass on the training to other staff, but the skills transfer never happened.
Instead, everyone was issued a 200-page manual for self-education. Her mother stayed a couple of evenings at work to study the manual. However, many others do not feel comfortable with emerging technologies and find it hard to self-educate. Others are not able to stay overtime as she did, because their children are still young. As a result, staff members are struggling with the increase in their workload and the new technical requirements, to the point where some are even considering leaving their jobs despite the high level of unemployment in the region.
She writes:
We usually think about gender issues in e-government from the users' perspective only. The conversation with my mother brought me new insight into this issue, and I left for the Czech Republic, where the conference took place, with emerging questions in my head: would the additional cost of training all staff in local offices as part of a new system implementation be so insurmountable? Especially considering the savings it would bring in terms of time effectiveness and human resources? Are there substantive reasons why the training of all members of staff was not considered an important priority? How does the dominance of men in policy-making processes, and in the ICT sector in general, affect the extent of e-government's effectiveness in addressing women's needs? What are the constraints? And finally, how are women able to benefit from e-government services that are top priorities of national ICT policy and that, incidentally, are paid for by their taxes?
The APC WNSP's panel "ICT and Equal Opportunities for Women and Men" showed gender to be an important issue for the e-government agenda. There are many ways in which e-government affects women's lives. As several speakers mentioned, women are usually in charge of communication with public administrations at the household level, and e-government services can mean less time spent queuing at the doors of different departments. E-government may also bring government closer to women and make it easier for them to monitor state activities and budget spending in their localities in order to influence the decisions that affect their lives. It can facilitate better access for Roma women and other marginalized groups to up-to-date and cost-free public information and services in areas that directly affect them, such as health care or housing.
Finally, women make up a significant share of public administration staff, and e-government programs may bring negative changes to their workload, working conditions, and position in the labor market. For example, many women working as administrative staff in banks or insurance companies lost their jobs with the introduction of ICTs. The panelists offered some good suggestions on next steps that can be taken to ensure that women and men take advantage of national e-government programs. The assessment of women's information and communication needs, and the support of networking and partnership project development among women mayors, are two illustrations. The panelists also highlighted the importance of enabling access to training for women from ethnic minorities. E-government processes are invaluable for all individuals who generally lack information on their legal rights and on the procedures to obtain required services. ICT applications can be applied to land ownership and title databases, procurement, and registration procedures to ensure accountability and transparency for women and men.
Case Example 2: Online Learning for Health Professionals and Nurses in Kenya
The African Medical and Research Foundation (AMREF), in a classic public-private partnership with the Nursing Council of Kenya (NCK), Accenture, the Kenya Medical Training Colleges, several private and faith-based nursing schools, and Kenya's Ministry of Health, pioneered a country-wide e-Learning program for upgrading the skills of nurses. The program commenced in September 2005 with a pilot in four schools serving 145 students. The five-year goal was to upgrade the skills of 22,000 enrolled Community Health Nurses (KECHN) from "enrolled" to "registered" level. Enrolled Nurses (ENs) comprise 70 percent of the nursing workforce and 45 percent of the health workforce in Kenya. They are the first point of contact for communities but are inadequately skilled to manage new and reemerging diseases like HIV/AIDS. This has necessitated their Continuing Professional Development to improve nursing care standards, achieve the health-related Millennium Development Goals (4, 5, and 6), and enable them to respond effectively to disease diversity and complexity. Electronic learning is the preferred mode due to its interactivity, cost effectiveness, ease of revision, and ability to achieve the goal in less time and at lower cost than the residential program. It also enables continued service provision, instant application of learning, and improved quality of care. For Kenya, a country with one registered nurse for every 27,000 citizens, the e-Learning program is revolutionizing healthcare by creating an electronic infrastructure for accelerated nurse education.
• 27 Medical Training Colleges and nursing schools participating, including AMREF's Virtual Nursing School.
• Over 100 computer-equipped training centers in eight provinces, including rural, remote, and marginalized districts (for example, Garissa and the Dadaab refugee camps in the North Eastern Province of Kenya).
• Over 4,000 nurses enrolled in both e-Learning and print-based learning modes.
• Over 300 computers installed in training centers.
• Over 192 implementers trained in IT skills.
Program Structure, Curriculum, and Clinical Experience
The KECHN Upgrading curriculum is designed to produce a well-rounded nurse who can handle new and reemerging diseases. It comprises four modules: General Nursing, Reproductive Health, Community Health, and Specialized Areas. The theory is delivered through a blend of scheduled face-to-face sessions and self-paced computer-based material. In addition, students are required to complete 42 weeks of clinical experience. At the end of the four modules, students sit for their college finals, followed by the NCK licensing exams at the end of the program. AMREF plans to use the program as a model for other African nations struggling with critical nursing shortages similar to Kenya's.
In June 2006, a workshop on ICTs, e-government, and gender took place in Tunis, where participants made a number of broad recommendations calling for national ICT policies, capacity building, and budget allocations in support of delivering comprehensive e-government services to women. 10 In May 2007, an online discussion took place on ICTs, gender, and e-government, leading up to a two-day meeting in Mozambique that reinforced the need for mainstreaming gender into ICT policy. On the African continent a range of e-government policies and initiatives are developing, but very few practical examples of how these affect women are available.
Women's Advanced ICT Education and Lifelong Learning to Ensure a Healthy Economy and Community
As has been said many times, access alone is not sufficient. Education must complement ICT access in order for the technology to provide value. From classrooms to community radio to cell phones and family-friendly Internet cafes, technology itself provides multiple venues for women and men to learn. Further, lifelong learning provides a new formula that allows women to move up from the bottom of the career path (referred to as the "sticky floor") to mid-level and top-level leadership positions.
There has been significant discussion about the importance of educating the girl child as well as the boy child to ensure they fully participate in the knowledge society of the twenty-first century. This is seen as both a basic right and a developmental need. This paper explores the role of ICT education in the workforce development of women, allowing them to benefit both as participants in the knowledge society and as contributors to it.
Specifically, women need to develop skills beyond basic literacy and usability to become creators, developers, designers, and innovators using ICT as a tool in that process. There are two steps to consider.
■ Applied ICT skills: the ability to use and apply generic ICT tools in workplace settings and to upgrade these skills in line with the requirements of business and industry. These skills cover all aspects of information work, such as Web design, call center consulting, analyst programming, information technology management, software project management, desktop publishing, librarianship, computerized sewing, and multimedia.
■ Professional ICT skills: the specific skills required to design, develop, implement, and repair ICT tools (including hardware and software creation and design, manufacturing, electronic manufacturing, network operating systems, cabling, and router programming).
In the United States, an award-winning program called ACTiVATE 11 brings together educated women in science, technology, or business with technologies developed at federal labs and universities. The training program, funded by the National Science Foundation, has exceeded all its goals and is now being disseminated nationally. The program's success clearly demonstrates women's ability not only to work successfully at the entry levels but also, given the opportunity, to excel at the highest levels of rigorous technology entrepreneurship.
For developing countries that have small numbers of highly educated women engineers who are unable to get jobs because they are women, this provides an opportunity for in-country entrepreneurship training to develop the innovative solutions that women identify as needed for their communities. This was the case for a training program in South Africa for 12 women (who later became the Femtrepreneurs 12 ) from a diversity of backgrounds, including townships. The result is a model whereby one woman starts a business that employs 10 women, who care for families of 50, who in turn help distribute the wealth generated to hundreds. As the company grows and thrives, the impact can reach thousands, as in the case of Isabelle Rorke with Anamazing. 13 In developing countries today, ICT jobs can be provided through the booming mobile phone industry. Women have job opportunities in call centers and in sales and repair services, as can be seen in the Cameroon case. Access to information and knowledge in rural areas has a significant impact on women's social and political participation and on women's economic empowerment as agricultural producers. Women can use ICTs and the Internet to access the agri-business supply chain and promote their products for better sales, as can be seen in Case Study 4 on Burkina Faso. 14 Women can also be national, regional, and international change agents through ICTs. Dr. Shahida Saleem, who chairs Sehat First, has brought together her medical knowledge and ICTs to create a customized national health care system that meets the needs of Pakistanis in their communities.
Public policy has a defining role to play in building up a country's human capital and knowledge endowments through promoting quality education, lifelong learning, innovation, and creativity in its workforce. By consolidating national and sector policies, women can more effectively contribute to economic growth as well as serve as agents of change for political moderation and productivity. A review of a variety of best-practice frameworks in ICT implementation calls for a supporting regulatory and policy environment and a participatory mode of working with women. 15 ICTs fundamentally change modes of organization, management, production, and distribution, and by extension change modes of employment. In sum, the proliferation of ICTs has six main impacts on women's work in the context of increased competition:
■ A shift from manual labor to intellectual labor, minimizing the need for brute strength as a workplace criterion.
■ A shift from automation to computerization in the manufacturing sector through the use of computer-aided design and computer-aided manufacturing.
■ Adjustments to disintermediation and intermediation 16 trends in the service sector.
■ The "computerization" of back-office functions.
■ The development of products and services (including education) needed to participate and compete in the workplace, available online or through traditional technologies (such as radio).
■ The introduction of the technologies themselves as a means for business opportunity development (mobile phone operators, for instance).
Case Example 3: e-Seva Centers, Andhra Pradesh, India
The e-Seva project, run by the West Godavari District Administration in Andhra Pradesh State, has established Web-enabled rural e-Seva Centers run by self-help groups of women from the poorest segments of society. The aim is to help the women achieve economic independence and to replace the traditional form of governance, with its accompanying deficiencies, with a modern, more open, transparent, and responsive service delivery system.
The project started in all 46 mandal (block) headquarters in the district, with the first women's e-Seva Center opening in June 2002. More centers were then established in over 200 small villages, large villages, and towns in Andhra Pradesh, delivering services to citizens.
The project is cost effective for both government and beneficiaries, as the centers work offline and access the Internet only as required. Statistics suggest that citizens save around US$0.10 per household as consumers of e-Seva services, which would lead to district-level savings of over US$100,000 per month (US$1.4 million per year). To further improve communication, wireless technology was adopted and 85 nodes were networked, enabling the project to reach more citizens.
The actual number of computers at each center varies from place to place based on local needs. In a small village, an e-Seva Center operates with one computer, a scanner, a Xerox machine, a digital camera, and a printer. In a town there are more computers, Webcams, and so forth. Each center has an Internet connection: villages use dial-up, while towns use a leased line. A very wide range of services is provided, including bill payments, issuance of land and birth certificates, Internet browsing, tele-medicine and tele-agriculture, access to online auctions, the filing of complaints and grievances, and matrimonial services.
In January 2002 there were 46 centers involving 92 member/partners. By January 2004 this had grown to 200 centers with around 292 member/partners. There are currently 384 women running e-Seva Centers, carrying out over two million transactions per year. Income and transactions are increasing month by month and were much higher in 2004 than in 2002.
The major costs for the women running the centers are loan repayments, stationery and consumables, salaries of other staff, and electricity. The service providing the most income was utility payments, used by at least 6,000 people per month and charged at about US$0.03 per payment. Bigger centers make about US$320 per month in excess of income over direct expenditure (from which the women members' salaries are drawn), while smaller centers can expect an excess of income over direct expenditure of about US$90 per month.
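A minimal sketch of the utility-payment arithmetic, assuming only the figures quoted above (the variable names are illustrative, not part of the original case study):

    # Utility-payment arithmetic for an e-Seva center, using only the
    # figures quoted in the case study; variable names are illustrative.
    users_per_month = 6_000      # "at least 6,000 people per month"
    fee_per_payment_usd = 0.03   # "about US$0.03 per payment"

    utility_income_usd = users_per_month * fee_per_payment_usd  # = 180.0

    # Reported monthly excess of income over direct expenditure:
    bigger_center_surplus_usd = 320
    smaller_center_surplus_usd = 90

    print(f"Utility-payment income: US${utility_income_usd:.0f} per month")

Utility payments alone bring in about US$180 a month, on the same scale as the reported surpluses, which is consistent with the statement that utility payments are the largest single source of income.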
Primary Benefits for Women
• Social respect. As the women's incomes increase, they become well trained, educated, and better respected. Villagers coming to the centers take their advice and use their services.
• Employment in their village.
• Self-respect. Working with technology makes the women feel proud.
• Monthly income. Currently the monthly net income for each of the larger e-Seva Centers averages US$300. This is shared among the two to ten women in that center with an average of US$45 per month per woman.
Case Example 4: Shea Butter Sales Increase for Rural Women in Burkina Faso
When the women of the Songtaaba Association, an organization manufacturing shea butter skincare products in Burkina Faso, started using ICTs, their profits more than doubled. ICTs, including cell phones, computers, and technologies such as global positioning systems (GPS), have helped them to run their businesses more efficiently. The association currently provides jobs to more than 3,000 women in 11 villages in the country. To provide the women with regular access to ICTs and improve marketing and sales of their products, the association set up telecenters in two villages, entirely managed by rural women trained by Songtaaba. The organization also set up a Website that the women manage. This has been particularly successful in boosting the visibility of the producers. After the site went online two years ago, orders climbed by almost 70 percent.

16 Disintermediation is the process of cutting out the middle agent. When companies bypass traditional retail channels and sell directly to the customer, traditional intermediaries (such as retail stores and mail-order houses) are no longer employed.
CHAPTER 4
The Threat of ICTs for Women

At first glance, ICTs have had an overall positive impact on women's work, livelihoods, and overall opportunities, but this is not easily quantifiable and there have been opportunity costs incurred. Unless gender considerations are incorporated into employment policies, ICT diffusion strategies, or national policies, strategies may inadvertently result in negative unintended consequences that compound gender and income disparities. These negative consequences include the following:
■ Maximum flexibility, minimum protection: ICTs and the digitization of information enable businesses and companies to locate and manage production away from the main site. This has implications both for the employment of women and for their personal investments in ICT tools, as well as for the growth of clusters of small enterprises and new forms of social production. In theory, ICTs should offer women the possibility of both flexible locations and flexible hours through telecommuting and/or self-employment. Conversely, women's "flexibility" may also result in casual, part-time, piece-rate, and seasonal employment, with little long-term protection or security of income.
■ Supply chain competition: Networks and communications infrastructures have intensified competition in unpredictable ways by facilitating the decentralization of many aspects of supply to manufacturing and service industries. The miniaturization and modularization of products and the intermediation and disintermediation of processes, combined with cheap mobile capital, have an enormous impact on value-added specialization in the supply chain.
Case Example 5: Middle East Women in Technology Initiative
Women in Technology (WIT) is a partnership program between Microsoft, the Middle East Partnership Initiative of the U.S. Department of State, and local partners in nine countries: Bahrain, Iraq, Jordan, Lebanon, Morocco, Oman, Saudi Arabia, the United Arab Emirates, and Yemen. The program expands women's participation in the workforce by providing local partner organizations, and the women they serve, with essential ICT skills through Microsoft's Unlimited Potential program, along with business planning and professional development training. Since its launch in 2005, WIT has trained 3,500 women and built the capacity of 50 local women's organizations in the Middle East. By 2010 WIT will have benefited more than 10,000 participants, creating a strong base of women with vital IT and professional skills, allowing them access to new careers and increasing their role in shaping their societies.
Case Example 6: Addressing Human Development Issues in Pakistan through ICTs
Pakistan is a developing country with contrasting geographical and economic features. The country has an annual population growth rate of 1.9 percent, and 65.1 percent of its population lives in rural areas. The country also suffers from inefficient healthcare infrastructure and inadequate clinical services. The complexity of the situation is further increased by high rates of poverty and illiteracy, placing Pakistan in the low human development zone.
According to the Human Development Report 2007/2008, public sector health expenditure in Pakistan is 0.4 percent of GDP, and it is not equitably distributed across the population. There are only 74 physicians per 100,000 people, with healthcare facilities concentrated in urban areas. The situation in rural areas is worsened by factors such as the scarcity of qualified doctors, the unavailability of specialists, delays in the administration of proper treatment, and the unavailability of appropriate medications in close proximity. A median availability of only 40 percent of essential generic drugs can render simple illnesses, such as gastroenteritis, fatal. The lack of diagnosis and medicines for treatable ailments such as malaria, typhoid, and meningitis further adds to mortality rates.
Patients have reported up to 22 individual visits (with 22 transport costs, 22 patient records, and so on) to obtain appropriate care. That is an enormous waste of resources, human and financial, in a country that can ill afford it. Some nonprofit organizations have made serious attempts to fill these gaps; however, these initiatives are not themselves sustainable.
The only way to overcome the challenges at hand is to utilize technology to maximize available human resources, combined with an effective private-sector pharmaceutical distribution network. Dr. Shahida Saleem set up Sehat First, a unique social enterprise aimed at providing access to basic health care and pharmaceutical services across Pakistan through self-sustaining franchised tele-health centers. Founded in 2008 with an equity investment from the Acumen Fund, Sehat First has served over 4,000 patients, most of whom are women and children.
Sehat First aims to set up 500 Sehat First Health Centers across Pakistan by 2012 and has already established 5 self-sustaining pilot centers in the first year. The centers are set up as franchises by local entrepreneurs employing local men and women. The healthcare providers at these centers are women.
A unique component of this model is the tele-health consulting service, which supports the local clinic staff with videophone consultations with a qualified physician. The simple IP-based phone has enabled access to specialists to whom these patients would otherwise not have access. The open-source medical record system OpenMRS is also being used to manage patient records and ensure the availability of medical records throughout the healthcare system, a facility currently not available in Pakistan.
Source: Sehat First, http://www.sehatfirst.com/#.

■ "The presence of new supply alternatives with radically different economics now takes the traditional 'supplier squeeze' to a new level." 1 Where one is situated within the supply chain is directly linked to one's skill set and ability to negotiate, which usually leaves women at the lower end of the value chain with little chance of upward mobility.
The International Labor Organization report on Work in the New Economy makes the following observations about the ICT sector: patterns of gender segregation are being reproduced in the information economy, where men hold the majority of high-skilled, high value-added jobs, whereas women are concentrated in the low-skilled, lower value-added jobs. As traditional manufacturing industries that previously employed women gradually disappear, the women finding jobs in the new, often ICT-related industries are rarely the same ones as those who lost their jobs in the traditional sectors. New inequalities are therefore emerging between women with ICT-related job skills and those without. 2
The Impact of ICTs on Gender Social Relations
Most examples in this report have been selected for their value in highlighting the positive impacts ICTs can have when coupled with the latent potential of women. However, one example illustrates how mobile phones can have a negative effect on development by reinforcing unequal gender and power relations in Zambia. A three-year study in Zambia examined how mobile phone access and use affected relationships between husbands and wives. Many women benefited from faster, cheaper communication and a strengthening of family, friendship, and business-related social networks. However, mobile phones also provided a new focal point for social conflict between spouses and led to the reinforcement of traditional gender power differences. In some cases husbands determined how wives used their phones, and even whether or not the women were allowed to continue owning a mobile phone.
Interviewees consistently reported problems of insecurity, insensitivity, mistrust, and jealousy, which sometimes resulted in physical and/or verbal abuse by men towards their wives.
■ Some husbands accused their wives of infidelity, thinking they used their mobile phones to communicate with lovers. They inspected call records on the mobile phones for proof, and some ordered their wives to sell their phones.
■ In a widely publicized case in the Zambian media, a man reportedly beat his wife because he suspected her of having an extramarital affair after she refused to let him check her calls and text messages.
■ Men often demanded that their wives make and answer calls in their presence, although they refused to do the same.
■ There are popular songs referring to the social difficulties that mobile phones have introduced between men and women. They are lighthearted but carry an important message about the way this new technology is adversely affecting gender relations.
These findings suggest that new technologies can become another aspect of oppression of women by men and a source of inequality between them. These inequalities are not just social: mobile phones can also reinforce economic gender differentials. Handsets and airtime are still expensive, and women may be less able than men to afford their use.
Box 4: Public Policy: Gender-Transformative Strategies
Gender-transformative strategies aim to change existing inequalities, as opposed to gender-neutral or gender-specific policies that target one gender over another to achieve goals and, in doing so, leave the gender division of labor and resources intact. For example, such strategies provide women with the enabling resources that allow them to take greater control of ICTs, to determine what kinds of ICTs they need, and to devise the policies that help them reach their goals. The development and implementation of ICT policies could be evaluated by asking the following questions:
• Do these policies address gender needs?
• Will they lead to the transformation of gender relations and gender roles?
If women and men are to benefit from ICT interventions, mainstreaming the perspectives and concerns of women is one of the important tasks to be undertaken. Two types of strategies are offered to support this task: top-down and bottom-up. Top-down strategies aim to change ICT institutions and agencies to promote women's equality and empowerment in ICTs. Examples include:
• Using political pressure at international conferences and consultations to demonstrate the importance of gender-sound policies and interventions
• Serving as a watchdog that monitors ICT impacts on women
• Conducting research and gathering data on gender concerns as central to ICTs for more effective lobbying work
• Promoting the use of gender analysis tools such as frameworks, guidelines, checklists, and rosters of women and of ICT and gender experts
• Working within structures to effect change through gender training, financial allocations, staff appointments, and obtaining internal legal mandates.
Bottom-up strategies aim directly to support women's entry into the mainstream of ICT. They include:
• Removing legal or social barriers that limit women's access to ICTs
• Enabling women to take initiative in their involvement in ICT planning and policies
• Extending financial or technical assistance to women to facilitate their access to and control of ICTs by providing credit, training, and education.
However, insufficient official statistics on a range of gender concerns relating to technology mean that these new developments are difficult to analyze. For women, the social and economic advantages of accessing and using a mobile phone far outweigh the disadvantages. But those promoting and making policies for mobile phones must understand that these new technologies create problems as well as solutions. These problems must be recognized if they are to be addressed through a more active effort at gender awareness and concrete, measurable policies and projects. 3
Case Example 7: Access for Rural Women, Armenia
In Armenia, war and the subsequent transitional period left deep economic and social wounds, particularly in rural communities. The focus on survival made the rural-urban divide more acute in terms of educational and technological development. Youth, and in particular young women, lacked education and work. Zartonk-89, an NGO with a mandate to create workplaces for needy families, women, and refugees and to help women solve their health, educational, and social problems, spearheaded an initiative to address these challenges.
The Network and Capacity Building for Rural Women in Armenia project was implemented in rural communities of the Syunik region. The project's aims were threefold: 1) to improve the livelihoods and status of rural women and to support gender equality in the local community by empowering women and teaching them ICT and its usage; 2) to contribute to the establishment of a women's club that would act as a center for networking and information exchange among rural women as well as disseminate up-to-date information and knowledge; and 3) to strengthen existing ties among various agencies and rural women by improving women's access to ICTs. The project design reflected the First Mile Principles, in particular through its solicitation of local women's problems and needs. The initiative was guided by two equally important concepts: ICT education and ICT for education.
Fifty rural women, including 20 jobless refugees aged 16 to 20 living in poverty, participated in the ICT training courses. The women gained new computer and Internet skills, which opened up opportunities in the job market. They created contacts, discovered ways to continue their learning through online distance education, and broadened their perspectives with access to current information and daily news. Digital literacy, access, mobility and control, and convergence of scattered communities were not the only benefits. Gaining marketable skills and working with other women has improved the women's self-esteem and better equipped them with knowledge to fight against discrimination, social injustice, and gender inequality.
Zartonk-89 has facilitated positive realities for rural women; however, its exclusive focus on women may alienate men and further increase women's burden to support household and community life. Rural men also need the skills and knowledge to enter the information age and to work alongside women to fight against discrimination, social injustice, and gender inequality.
Case Example 8: Cell Phone Repair Small Business Development for Women, Cameroon
Mobile phone penetration in Cameroon increased from 0.02 percent in 1999 to over 12 percent in 2005; by 2006 mobile phones represented more than 95 percent of all telephone lines. The number of mobile phone subscribers grew to over 2 million while the number of fixed lines dropped below 100,000. A long-established women's business association, ASAFE, identified this escalation in cell phone use as a viable and expanding business opportunity for young women in peri-urban and rural areas.
The program developed by ASAFE supports the creation of small-scale enterprises in rural and peri-urban areas for the maintenance and sale of cell phones. Women are trained in how to repair cell phones, sell them, and run viable businesses. They receive technical and management training modules (which last for 14 days) and a loan to acquire 10 cell phones, pay for needed equipment, and rent a small space. So far 100 women have been trained. Twenty have already set up repair workshops and earn an average of US$100 per month. Cameroon is made up of 47 subdivisions, and ASAFE plans to train 50 women from each subdivision through the program.
ICTs as an Added Challenge for Women in the Workforce
While teleworking has certainly offered women a range of new employment possibilities, the downside is that women can be excluded from other, better career possibilities. Instead of finding a balance, family responsibilities are combined with paid work, so that women end up acquiring new tasks on top of the old. Another common source of ICT employment for women is the call-service sector. Effective call service often requires "client communication," or emotional labor; the latter tends to be considered a skill "inherent" to women, and it is usually financially undervalued.
Recent studies 4 of women working in call centers in Europe found that, contrary to notions about skill development and flexible career advancement, women's data processing work is often routine, deskilled, and devalued. Women in these centers rarely advance beyond "team leader" roles to managerial positions. Research in India also confirms that employment of women in the software and IT-enabled services sector closely mirrors the prevailing tendency of the market to reinforce existing socioeconomic inequities.
Box 5: Women Encounter Technology
Mitter and Rowbotham's 1995 anthology Women Encounter Technology explores the impact of technology on women's employment and the nature of women's work in third-world countries. Some observations that are particularly relevant in gender analysis are given below.
Gender is one of many factors that determine the impact of IT on women's working lives. Age, class, ethnicity, and religion can play even greater roles in defining women's working position. Similarly, the degrees of exclusivity that arise from the information revolution sharply differentiate regions and communities.
Technological changes affect the quality and quantity of women's work. Along with women's employment benefits from new technologies, there are associated health, environmental, and other costs. Employment issues of concern to women working in technology relate to contractual terms, intensification of workloads, wages, training, and health and safety concerns such as video display unit hazards and repetitive strain injuries.
Increased job opportunities bring new tensions in women's domestic lives. For example, Acero's case study documents the typical life of a woman textile worker in Argentina: "My marriage started to break down when I started to work … I had more chances than he did. So things started to go wrong." Deeper insights are needed into the links between women's status and role at work and at home. Women are rarely represented in the decision-making areas of technology. As a number of essays document, women are found predominantly in blue-collar jobs. In the next phase of technological change, these are precisely the jobs that will be vulnerable.
Upgrading women's skills through a continuous learning process benefits women and society. Radical thinking about training is essential for utilizing women's potential. In particular, training needs to take into account age, class, ethnicity, and religion.
Women's sharing of experiences has proved rewarding at community, national, and international levels. More international exchanges of experience in organizing around some of the new issues relating to the electronic era are needed in order to ensure that women's employment benefits from new technologies are not outweighed by the associated health and environmental costs. There is a risk, particularly in emerging knowledge economies, of regarding women's interface with ICTs solely in terms of upgrading their skills to make them employable in the ICT sector, to the exclusion of the potentially deeper, long-term benefits that ICTs might have for women's overall social and knowledge-based development. In other words, we need to be alert to the reality that ICTs can either reinforce gender differences or help to overcome them.
Implementation Issues for Women and ICTs
For ICTs to have the broadest reach and the most powerful positive impact, all global citizens need to participate fully in the knowledge society, from basic access through the top levels of leadership. But the opportunities women can bring to development will not be realized unless policies for all mainstream efforts take gender considerations into account. This requires not only that women and men be present in mainstream discussions, but also that the table include individuals who are fully educated on the research relating to the interaction of gender and ICTs.
Policy advocates sometimes fail to appreciate the diversity of opinion that arises from the study of gender as a discipline. Just as we accept that different economists have different strategies for addressing the global economic recession, so policy makers need to allow for debate on the issues and arrive at a diversity of perspectives and recommendations. It is only through this natural discourse that we can hone the clear pathways needed to ensure that all women and men benefit. To this end, policy makers should ensure they talk not only to the gender experts in policy but also to the practitioners, business developers, and educators who work daily with the population, as well as to women themselves.
Today, many developing countries are turning to the ICT sector as a new opening for attracting foreign direct investment, primarily in data entry and call center facilities. These facilities, however, are currently located in a handful of countries: India, Israel, Ireland, Mexico, the Philippines, and increasingly China. Concurrently, many U.S. companies that outsourced their call centers are rethinking this option in light of increasing international costs and rising domestic unemployment, which will keep internal expenses down.
The projected development of this labor-intensive, low-skilled ICT work seems not unlike that of the long-established garment and electronics industries: poor wages and working conditions, little to no skill or technology transfer, absence of career growth, and feminization of the low-end, low-pay jobs. But ICTs have also been seen as a means for the development of e-commerce-based initiatives in which women produce crafts or handmade products to market online. In some cases women have little direct control over ICTs per se and are often far removed from the decisions and applications around ICTs, but there are other initiatives where ICTs are integrated comprehensively throughout an existing institution, such as SEWA, where women learn to apply different kinds of ICTs to a wide range of activities.
Creating a Supportive Environment as a Critical Success Factor
Public policy has a defining role to play in building up a country's human capital and knowledge endowments through promoting quality education, lifelong learning, and innovation and creativity in its workforce. In order to promote women's full participation and involvement with ICTs, national and sector policies need to be consolidated to support women's contribution to economic growth as agents of change.
Without careful planning and the development of appropriate policy measures, ICTs may exacerbate differences between the rich and the poor and between men and women. In the absence of deliberate policy, the diffusion and use of ICTs and their intended benefits tend to follow the existing contours of income and economic divides, with the poor being further marginalized or excluded. Due to socio-cultural norms, there are persistent gender inequalities in men's and women's access to ICTs. For example, constraints on women's mobility may limit their access to Internet centers, or ICT training courses may not be advertised in places that women frequent.
"ICTs and policies to encourage their development can have profound implications for women and men in terms of employment, education, health, environmental sustainability and community development. Policy is needed to ensure that investment in ICTs contributes to more equitable and sustainable development as these technologies are neither gender-neutral nor irrelevant to the lives of resourcepoor women." 1
Providing Relevant Content for Women and Men
Warschauer proposes that a better model for understanding access to ICTs is provided by the concept of literacy. The world has considerable experience in literacy acquisition that can also be brought to bear on ICT for development. Referring to the work of Brazilian social activist and educator Paulo Freire, Warschauer argues that "literacy instruction is most effective when it involves content that speaks to the needs and social conditions of the learners. As with ICT-related material, this content is often best developed by the learners themselves." 2
Case Example 9: Microsoft's DigiGirlz
The objective of the Microsoft DigiGirlz program for the Arab region is to attract more high school students to Dubai Women's College (DWC) and to increase the presence of female students in science, technology, engineering, and math (STEM) programs. DWC partnered with Microsoft to bring this U.S. program to the United Arab Emirates as a first step in evaluating whether it could be customized for the region. The goal was to prepare female school students for the challenges of working in a global environment and to engage them throughout the day with emerging technologies. The event was hosted for the first time in the Arab World in May 2009 and was fully integrated into a Year 2 IT marketing course. Prior to attending, the DWC students had to develop an intensive marketing plan that involved building creative ideas to attract the high schools to participate as well as to engage them with meaningful technical activities.
The event met all expectations and attracted 200 students. Since then, seven blogs have been created and seven new schools have expressed an interest in participating in the following year. In addition, multiple businesses beyond Microsoft have offered to support the event through volunteers and sponsorships.
Source: Yousuf et al. 2009.

This has a particular resonance with women, who are usually intimately knowledgeable about their local contexts, issues, and solutions, and can use ICTs to share, consolidate, and represent their interests and perspectives. This also gives women an opportunity to lobby public bodies themselves. They will also benefit from access to information that helps them better serve themselves, their families, and their communities.
Stakeholder Participation
A common criticism of ICT for development projects is that they fail to build on existing systems of work in a participatory way and therefore do not achieve local input and local ownership. There is often a gap between the design of an ICT project and the reality of what unfolds on the ground, with long-term implications for women. To avoid the recurrence of these mistakes, the introduction of ICTs into the activities of a community needs to involve the full participation of women from its very inception. This means engaging women in decisions, implementation, and governance, and in benefiting from revenues, profits, and cost sharing. All development interventions must work with both women and men stakeholders to ensure that women's opportunities to utilize technologies are not inhibited by cultural dictates on seclusion, restrictions on mobility, or the unequal division of labor. While there may be "lessons" to be learned, and business models and case studies that suggest "replicability," in fact no two situations are ever the same. It is important, therefore, to refrain from turning models and studies into "formulated approaches" or "prescriptive measures" if we are to ensure that the innovative character of ICTs remains in the hands and control of the users themselves.
Finally, it is important to involve national and international leadership in broad-based programs, but the knowledge base must come from the grassroots for such programs to be successful. Networks or "collaboratives" have been shown to be successful in bringing multi-stakeholder groups together to provide content and resources, benefit from shared efforts, and ensure sustainability. TARAhaat, run by the NGO Development Alternatives in Bundelkhand, one of the poorest regions of India, offers lessons from which we can all learn. From micro-credit services to skill development to alternative energy sources to markets for rural products produced and procured by women, this organization has created a model of success for women's economic independence in more than 50 villages. Kudumbashree in Kerala, also in India, is another such experiment in the application of ICT to poverty reduction for women.
Contextual Factors
Research has demonstrated that men and women approach technology differently. One study conducted at the University of Maryland, Baltimore County highlighted the difference between men's and women's (or in this case boys' and girls') interest in technology as one of toys versus tools. Boys liked to "play with technology." 3 Girls liked to use technology as a tool to achieve a goal. This has broad implications for education. It also suggests that while boys may be more familiar with the jargon and hardware of technology, girls will bring great value in thinking through the opportunities for innovative problem solving using ICTs.
This understanding of boys' and girls' approaches to technology must also be weighed against an understanding of a country's or region's historical, cultural, political, and economic context. For instance, in the small enterprise arena, research has demonstrated that women's businesses are more successful when project managers appreciate the unique factors women contribute to business development and what women need in terms of support and services. 4 The value women bring to the business and education community will be lost if traditional patterns of entrepreneurship and business development are applied without thought for the industry and its culture. In fact, there will always be women who are successful at fitting into the traditional male models of professional development, but this speaks more to the women themselves and the broad diversity they represent. Having a one-size-fits-all plan may be easier, but it is completely contradictory to the innovation and creativity that is needed in the development space. Remaining current with changes in the economic, social, and political climates of different countries is important because these climates influence entrepreneurial ambitions in specific directions at different points in time. 5
Empowering Women through ICTs
Experience from recent policy efforts at the international level suggests that gender biases in the information society will persist for the foreseeable future. However, ICTs may give women the opportunity to be agents of their own development. Women are not "waiting" for access to ICTs but rather are using ICTs, when they are available, to get around the constraints they face in politics, society, and the economy. Case studies on gender and ICTs from around the world highlight efforts by women and their organizations to negotiate the "digital divide" independently; this is apparent from the case studies introduced in this paper. ICTs are not "gender neutral": they take on the gender of their developers, from basic content to use to functionality to beneficiary. Many women know the importance of information and the power these technologies hold for breaking out of systematic discrimination and gender violence in the household, workplace, and village. They also see the new opportunities that ICTs provide for personal business development and growth. Like men, women are not waiting for policy making to bridge the "digital divide" but rather are taking action as agents of their own opportunities, using conventional ICTs such as radio to access information sources and communication processes in pursuit of their development goals, both for their households and for their communities. In the papers written by Blythe McKay about the community radio station Radio Ada in south-eastern Ghana 6 and by Mercy Wambui about radio listening group projects in postwar Sierra Leone, 7 it is clear that control of the ICTs and radio tenure or usufruct rights (radio programs by and for women) are of central importance. This consideration must be emphasized in policy that calls for public access, which in itself may not be sufficient to provide a voice for rural, resource-poor women.
ICT access statistics on their own, however, are not a true indicator of women's empowerment. Nancy Hafkin's brief "Are ICTs Gender Neutral? A Gender Analysis of Six Case Studies of Multi-Donor ICT Projects" 8 outlines how women's higher education, participation in small businesses, and ICT access compared to men in the Philippines and in Thailand do not translate into women's equal representation in leadership or government positions. Similarly, the mere fact that more women are employed in the ICT-facilitated manufacturing sector does not necessarily mean that these same women are benefiting from literacy or higher learning programs or gaining leadership, communications, or negotiation skills. Hafkin refers to Amartya Sen's argument for the centrality of women in the knowledge society and writes, "knowledge is not only for economic growth but its foremost use should be to empower and develop all sectors of society to understand and use knowledge to increase the quality of people's lives and to promote social development. A socially inclusive knowledge society empowers all members of society to create, receive, share and use information and knowledge for their economic, social, cultural and political development." 9 It is therefore imperative, from the perspective of gender and ICTs for development, that focus be placed on gender relations in communication and learning rather than simply on women and technology. To this end, we may see that the information society is not an end in itself but rather the innovation of ordinary people. 10
Box 7: Eight Habits of Highly Effective ICT-Enabled Development Initiatives
1. Implement and disseminate best practice
2. Ensure ownership, get local buy-in, find a champion
3. Conduct needs assessments
4. Set concrete goals and take small achievable steps
5. Critically evaluate efforts, report back to clients and supporters, and adapt as needed
6. Address key external challenges
7. Make it sustainable
8. Involve groups that are traditionally excluded on the basis of gender, race, religion, or age.
Box 8: Ways in Which ICTs Can Contribute to Women's Economic Opportunities
1. An increased ability for women to work from home
2. Improved employment opportunities for women in the ballooning IT sector
3. Increased ability of informal-sector women to shift to the formal sector
4. Improved global market access for craftswomen through e-commerce
5. Transformation of traditional gender roles
6. Improved access of women, especially rural women, to distance learning and distance work programs
7. Improved ability to share experiences among women's organisations concerned with the economic well-being of women in the informal sector
8. Increased ability to avoid gender bias by having a gender-opaque medium.

This guideline is developed as a cross-sectional look at potential indicators that directly affect economic development. For a deeper, well-developed discussion of the issues of gender indicators for science, engineering, and technology (SET), please see Huyer and Westholm (2007), which also includes an appendix of multiple sources of gender-disaggregated data in SET. Special thanks also go to Eftimie and others for their report "Mining for Equity: Gender Dimensions of the Extractive Industries" (forthcoming in September 2009), which provided a useful framework and concepts for this appendix.
Impact: Improved economic outcomes through women's increased access to ICTs to benefit women, their families and communities
The main activities of the unit are PC assembly and installation, service, and sales. The unit also undertakes computer training and data-entry operations. Currently, the unit is planning a further (limited) diversification: supplying reconditioned second-hand computer systems from a minimum price of US$200, since demand for cheap systems is rising. The unit has ten core women members, including the group leader and the secretary, all in their twenties.
The enterprise is owned by a man and a woman, who manage it jointly with one other male employee. At inception, the woman owner/manager purchased all the hardware (computers, peripherals, network cables, clips, clipping tools, RJ45 connectors, etc.) and supervised the business set-up (she designed the network cabling, supervised the laying of the network cables, and oversaw carpentry work such as tables and partitioning). She is now in charge of hiring staff, with input from other members, and actively participates in problem solving and management decisions.
The female staff essentially assist customers in accessing the Internet, surfing the Web, sending and reading email messages, transferring and saving files, Internet telephoning, and sending or printing fax messages. They also print tickets for customers and purchase net-2-phone credits from the representative in Lagos. They maintain the systems, switching them off and on and updating files. If the system slows down, they check for viruses, and they can also check the radio on the mast by pinging it both at the site and at the provider's end. They also check that the volume of bandwidth consumed is as requested.
The female co-owner has a PhD. It is evident that even in an organization with a history of employing women, women workers are generally given routine jobs and lower salaries than men. At the same time, professional training provided in ITC 1 allowed 27 percent of the women to take on jobs requiring higher technical efficiency, and four of them were promoted to management positions. VTI aims to promote greater participation of women in leadership and is planning to have a woman manager in every division where women form at least 30 percent of the workforce.
Access to broadband by women and impact on economic opportunities and poverty reduction
Project, policy or program initiative: 10,000 Women
Brief description of impact to date & current status: 10,000 Women targets women already engaged in small businesses in developing countries and provides them with further training in subjects including marketing, e-commerce, accounting, and accessing capital. The certificate training lasts from five weeks to six months.
Website or other reference: http://www.10000women.org/

Project, policy or program initiative: PROMIS
Brief description of impact to date & current status: A holistic service for SMEs that is going to market in five European countries. PROMIS provides European SMEs and consultants with tailored business services and eTraining in the fields of environment, health & safety, and quality (EHS-Q). PROMIS supports them in complying with complex legal, commercial, and social requirements at the national and international level, thereby strengthening their competitive advantage at an affordable price.
Website or other reference: http://www.promis.eu/
Access to information and knowledge in rural areas and specific impact on women's social and political participation
Project, policy or program initiative: Ek Duniya Ek Awaaz, One World South Asia
Brief description of impact to date & current status: The project works towards the creation of a common, shared platform of knowledge for local, marginalized urban and rural communities so that they can contribute and participate actively, putting forth their own concerns and issues in their own voice. By imparting training in basic technical know-how, the program aims to employ the technology of radio to convert passive receivers/audiences of media, particularly people from marginalized communities, into active producers of information relevant and topical to their socio-cultural and economic circumstances. The platform of learning was provided to a number of NGOs/CBOs and the young men and women working with communities in India and Nepal. The process has been replicated in Bangladesh and Sri Lanka.
Knowledge networks
The majority of the world's poor men and women have limited access to information and knowledge that would enable them to overcome their own poverty. Development organizations, social networks, information brokers, and others with a mandate to combat poverty all have a responsibility to understand what knowledge people already have, what information they need and in what form, and to communicate effectively in order to improve poor men and women's access to information. ITDG (Practical Action) has a mission to make knowledge networks work for the poor. We are seeking participation in the creation of a global knowledge network to improve access to information on technology for poverty reduction.
At the local level, the role of the information intermediary (whether an organization or an individual) was considered, and the process of distillation and transformation that information undergoes was discussed.
Podcasting Pilot Peru
In Peru, Practical Action is testing the potential of podcasting to disseminate knowledge and information for poverty reduction, using a mixture of new and old technology. Radio has long been acknowledged as a medium that reaches grassroots groups; until recently, however, it has been relatively expensive to start up and has had various regulatory hurdles to overcome. Podcasting is now believed to offer a low-cost way of broadcasting audio to defined groups of people.
Practical Action Latin America is conducting a pilot project in the rural region of Cajamarca, northern Peru, to analyze the viability of podcasting for the generation and diffusion of knowledge in poor areas of Peru. The program content is tailored to local needs and interests in the different areas of Cajamarca. In Chanta Alta, for instance, the programs provide information about cattle raising and dairy production, while in Chiliete they concentrate on growing grapes and beans. The language is kept simple, to make the broadcasts more accessible than technical leaflets. It is hoped that if the pilot project proves successful, the scheme could be replicated by Practical Action in Sri Lanka and Zimbabwe.
Practical Action — Local voices in Peru: http://practicalaction.org/?id=podcasting_peru
Capacity building through ICT and networking, Lithuania: The project aims to improve the conditions for sustainable human development for women in rural and less dynamic areas by empowering women (through women's NGOs and self-help organizations) via the provision of ICTs to support information flows and knowledge building, for better understanding of and increased involvement in the social, political and economic spheres of life at both the local and regional level.
The study found that a participatory approach employed in the initiative encouraged local people to assume ownership of development programs, informing them of the design and implementation process. Communication strategies used in achieving this goal included: (1) holding gram sabha (village council) meetings; (2) the creation of a local GIS database and a local GIS unit; (3) assigning academic research units to particular districts; and (4) promoting a public process of progress and outcome reporting. It is argued that a participatory process can be effectively enabled through communicative action incorporating local communities and indigenous knowledge into the management of local information tools.
[…] microenterprises, because participant firms were not using ICT in their regular business. The vast majority of MSEs surveyed could not afford individual access to ICT; few of them reported even occasional use of the telephone. Those who used phones reported reduced operational costs (for example, by substituting for travel), increased income, or reduced uncertainty in transactions with suppliers and customers. Evidence suggested that the information needs of rural MSEs were quite localized and likely to be met more by informal, organic information systems (social networks) than by formal, ICT-based systems. Social networks and social capital were the most valuable information-management resources for rural MSEs. Business owners also placed greater trust and value in information received from personal sources and channels. Information delivered by institutional, non-commercial sources (for example, government agencies, non-governmental organizations (NGOs), and donor agencies) was second in importance.
| 2018-12-22T17:53:00.179Z | 2009-10-02T00:00:00.000 | {
"year": 2009,
"sha1": "3c478325f6bf4f65a22d2d44ec726ae65c0dbd97",
"oa_license": "CCBY",
"oa_url": "https://openknowledge.worldbank.org/bitstream/10986/5935/1/518310PUB0REPL101Official0Use0Only1.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "0e596c6f3eac96915bab3728dd2908e5252dc2cb",
"s2fieldsofstudy": [
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
259310020 | pes2o/s2orc | v3-fos-license | Editorial: Microbiology of deep-sea carbon cycling
COPYRIGHT © 2023 Liu, Wang and Webster. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
The ocean is a vast carbon sink and mediates global carbon cycling, which is essential for mitigating climate change. The deep-sea pelagic and sub-seafloor environments represent the largest microbial habitats on Earth and are key sites for organic matter remineralization and storage in the biosphere. Moreover, diverse unique and extreme habitats — e.g., seamounts, trenches, cold seeps, and hydrothermal vents — exist in the deep sea, harboring specialized and active microbial communities and metabolic processes that significantly impact global carbon cycling. It is therefore important to understand the diversity, activity and metabolism of deep-sea microorganisms, particularly their mechanisms for utilization and transformation of organic matter, and the environmental factors affecting these processes. The main aim of this Research Topic is to collect recent work focusing on the diversity and metabolic activities of microorganisms in different deep-sea habitats, in order to understand the microorganisms that drive carbon cycling in the deep ocean.
Marine sediments harbor diverse physicochemical properties that regulate the assemblages of microorganisms. However, it is unclear how variations in sediment physicochemical properties impact microorganisms on a global scale. Bradley et al. investigated patterns in the distribution of microbial cells, organic carbon, and the amounts of power used by microorganisms in global sediments. They found that trends in cell abundance, particulate organic carbon storage and degradation, and microbial power utilization are mainly structured by depositional settings and redox conditions, rather than sediment depth and age. Sediments deposited on continental shelves and margins are predominantly anoxic and contain active microbial cells that decline in power utilization in deeper and older settings. Conversely, microorganisms in abyssal sediments use consistently low amounts of power across large gradients in sediment depth and age. Overall, the study demonstrated broad global-scale connections between depositional settings and activity of deep biosphere microorganisms.
Zhang et al. compared the composition and functions of the microbial communities in sediments from deep-sea seamounts, trenches and cold seeps in the Pacific Ocean, via amplicon sequencing and metagenomic analysis. They demonstrated that the microbes in deep-sea sediments are diverse and functionally distinct (in terms of biogeochemical cycling) across the seamount, trench, and cold seep ecosystems. These results help improve understanding of the composition, diversity and function of microbial communities in deep-sea environments.
Deep-sea seeps are extreme environments with high hydrostatic pressure, yet seep systems have a great impact on global carbon cycling through the discharge of methane and petroleum hydrocarbons. Webster et al. characterized the microbial diversity, geochemistry and methanogenic activities of prokaryotic communities in seven Gulf of Cádiz mud volcanoes. They found marked differences between the microbial biogeochemistry of mud volcano sediments and deep-sea control sediments. Methanogenic activity from methyl compounds, especially methylamine, within the top two meters of sediment was much higher than with the substrates H2/CO2 or acetate. The archaea potentially responsible for these methanogenic metabolisms were explored, and sediment enrichments were dominated by Methanococcoides methanogens.
Lyu et al. investigated the potential and activities of deep-sea microorganisms for alkane degradation in the sediments of cold seep areas. They enriched five oil-degrading consortia from sediments collected from the Haima cold seep areas of the South China Sea, and further isolated seven efficient alkane-degrading bacteria belonging to Acinetobacter, Alcanivorax, Kangiella, Limimaricola, Marinobacter, Flavobacterium, and Paracoccus. Degradation rates were highest for medium-chain alkanes. This study provides insights into the community structure and oil-degrading activity of the bacterial inhabitants of the Haima cold seep areas of the South China Sea, and offers bacterial resources as candidates for oil-bioremediation applications.
Author contributions
RL wrote the draft. YW and GW revised and provided essential comments on the article. All authors have proofread and approved it for publication.
Funding
RL acknowledges the support from the National Natural Science Foundation of China (Grant No. 42276149). | 2023-07-03T13:11:57.444Z | 2023-07-03T00:00:00.000 | {
"year": 2023,
"sha1": "87ae9cc9025fe77af6b4a47068d69fcf26f43694",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "87ae9cc9025fe77af6b4a47068d69fcf26f43694",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
234852256 | pes2o/s2orc | v3-fos-license | The Ethnomathematics Practices of Eskaya Tribe
This research sought to describe the ethnomathematical practices of the Eskaya tribe of Taytay, Duero, Bohol, using an ethnographic research design to explore those practices through the lived experiences of the informants. Employing purposive sampling, selected teachers, parents, and students from the tribe served as the key informants of the study. Data collection took almost a year of observation, documentation of lived experiences, and interviews. The study was able to describe some of the ethnomathematical practices of the Eskaya tribe, namely the skills and processes the Eskaya commonly use in their daily life: counting, measuring, ciphering, ordering, classifying, inferring, and modeling patterns. These skills and techniques were examined in studying the Eskaya numeration system — the Eskaya numbers and numerals, the Eskaya names of the basic shapes and the four fundamental operations — and the use of Eskaya numbers in measuring time, days, and months.
Furthermore, Ethnomathematics is a research program in the history and philosophy of Mathematics, with pedagogical implications, focusing on the arts and techniques of explaining, understanding, and coping with different sociocultural environments. It aims to contribute both to the understanding of culture and the understanding of Mathematics, and primarily to appreciate the beauty of Mathematics and culture.
One of the contexts of the K to 12 Basic Education Mathematics Curriculum, besides beliefs and environment, is language and culture, which includes traditions and practices as well as the learner's prior knowledge and experiences. Ulep (2014) also emphasized that K to 12 enhances literacy through multilingualism, including age- and culture-appropriate print and electronic texts in the Teachers' Guide. One teaching technique in K to 12 through which many subject areas and skills are organized and linked is to provide an integrative instructional program by integrating local culture and indigenous resources (Dimaano, 2015).
Mathematics education should foster a greater understanding of how mathematics is applied in an increasingly technologically driven world. School mathematics needs to expand its parameters and become more inclusive of the mathematics formed in the world that students inhabit (Avilla, 2016). Mathematical education should include cultural issues, which could help students in learning mathematics (Shaljan, 2003).
Existing literature reveals that the field of ethnomathematics links students' diverse ways of knowing and learning, through the use of embedded knowledge, with the academic mathematics curriculum. Rosa and Orey (2011) highlighted that ethnomathematical approaches to the mathematics curriculum are intended to make school mathematics more relevant and meaningful for students and to promote the overall quality of their education. Such approaches help develop students' intellectual, social, emotional, and political learning by using their unique cultural referents to impart knowledge, skills, and attitudes.
The study on indigenous quantification techniques of the Agta in the Sierra Madre Mountains, the Philippines, by Cadorna (2015) found that the Agta's quantification system is unique and simple, anchored in their physical attributes, based on their immediate primary needs for survival, transmitted by their elders, and tied to the availability of natural resources. The indigenous quantification techniques were used in formulating the IP education curriculum and activities, to preserve them from disappearance and to sustain cultural survival amid technological development.
Ethnomathematics creates a learning environment in which students sense applicability and practicality in concrete situations. It amplifies knowledge of the subject being studied and helps students to understand, explain, and reflect upon their reality (Shaljan, 2003). Rubio (2016) explained in her study that "learning ethnomathematics is learning the applications of some mathematical concepts in real-life situations." To accept a concept or knowledge, as well as its importance, students must have a connection to it.
The results of this study serve as one basis for preparing instructional materials for teaching mathematics in which this topic is integrated and emphasized in particular lessons. As Rowlands and Carson (2002) reflected in their critical review of ethnomathematics, four possibilities have been considered in this area: (1) ethnomathematics should replace the academic Mathematics curriculum; (2) it should be supplemental to the Mathematics curriculum; (3) it should be used as a springboard for academic Mathematics; and (4) it should be taken into consideration when preparing learning situations. Undeniably, ethnomathematical studies aim to help the teacher establish cultural models of beliefs, thought, and behavior, in the sense of contemplating not only the potential of pedagogic work that takes into account the "knowledge" of the students, but also learning inside the school that is more meaningful and empowering (D'Ambrosio, 2001).
On the other hand, although various studies vis-à-vis ethnomathematics have been done across the globe, the majority revealed that contextualized teaching and learning of the Mathematics curriculum differs significantly in terms of how concepts and ideas are understood. Many studies on the same aspect have been attempted in the Philippines, specifically in Jose Panganiban, Camarines Norte. However, teachers' receptiveness to the idea remains limited, owing to a poor background in how to integrate Ethnomathematics into the teaching of Mathematics.
This motivated the researcher to study the Eskaya tribe, to bridge the gap between the past and the future in mathematics education and to illuminate the same field of study by converging on the culture and local practices of the indigenous people of Taytay, Duero, Bohol, particularly the Eskaya tribe. The focus of this study is to determine their ethnomathematical practices in daily life and the inclusion of those practices in the teaching and learning of Mathematics from a cultural perspective.
The researcher envisioned that studying the tribe could greatly help the Eskaya tribe and the province of Bohol, Philippines in general. This research would help the people of Eskaya identify and preserve their ethnomathematical cultural practices, so that students will respect the culture of the tribe and understand their mathematical roots. It will also contribute to the understanding of both culture and Mathematics, and mainly lead to an appreciation of the tribe's unique ethnomathematical culture. Moreover, it will give ideas on how Mathematics may be valued by students and even teachers through understanding its other side. This study will enhance and enrich one's vision of and experience with mathematics, helping everyone to better understand and learn the subject. Additionally, it will help preserve the culture of the Eskaya, and, most importantly, the researcher hopes to integrate Ethnomathematics into K-12 instruction.
The study"s focus is to determine Ethnomathematics the Eskayas as practiced in their daily lives and its inclusion in the teaching and learning of Mathematics from a cultural perspective. Specifically, the study aimed to discover and document some of the socio-cultural characteristics of the Eskaya tribe.
Materials and Methods
To provide a complete picture of the study, the researcher used a qualitative method, specifically ethnography, which focuses on a rich and thick description of the culture, including its different components such as the ethnomathematical practices of the Eskaya tribe. The study is a kind of qualitative research with roots in cultural anthropology, in which researchers immerse themselves within a culture and describe the values, beliefs, and practices of cultural groups (Camilar-Serrano, 2016).
The researcher utilized the Outline of Cultural Materials (OCM) coding (Bernard, 2011; DeWalt & DeWalt, 2011; Murdock et al., 2004). The OCM is commonly used to interpret ethnographic studies. It provides coding for the categories of social life traditionally included in ethnographic descriptions, such as history and demography, which deal with descriptions of cultural systems (DeWalt & DeWalt, 2011, p. 184). OCM coding is appropriate for ethnographic studies, as it focuses on culture and covers the different components of Eskaya culture, such as the ethnomathematical practices.
The study is about the Ethnomathematics of the Eskaya tribe, specifically their ethnomathematical practices in ciphering, simple counting, classifying, measuring, ordering, inferring, and modeling patterns arising from the environment. The way schoolchildren use the learned ethnomathematical concepts in studying formal Mathematics is also described, to bring their culture to light and to develop instructional materials depicting the local practices of the tribe.
The study was conducted at Taytay, Duero, Bohol, where the Eskaya tribe lives. The Eskaya are an indigenous tribe found in the hinterlands of the towns of Duero, Guindulman, Pilar, and Sierra Bullones in Bohol's southeast interior. Also known as the "Visayan-Eskaya," the community is found only in the island province of Bohol. They have a unique cultural heritage, a distinct language and literature, and traditional practices and arithmetic. The group was eventually recognized, and the community was awarded a Certificate of Ancestral Domain Claim (CADC) in 1996 and classified as an indigenous group under Republic Act (R.A.) No. 8371, "The Indigenous Peoples' Rights Act of 1997."
Plate 1: Eskaya Tribal School
The Eskaya Tribal School is shown in Plate 1. The research participants were 5 "totoban" (teachers), 5 "estowas" (students), and 5 parents ("sila"/"nima") of the Eskaya tribe. They were purposely selected and went through formal and informal interviews, observations, and focus group discussions in a natural context, to make sense of situations in the context of meaning. The researcher observed the daily activities of the people in Eskaya, ate with them, attended the tribal annual celebrations, and even stayed overnight with them. The researcher also attended the tribal Sunday mass and classes and lived among and observed the day-to-day activities of the tribe.
The researcher used purposive, non-probability sampling, which identifies a particular group of people — here, the people of the Eskaya tribe. The researcher used data-recording instruments such as paper and pencil, a video recorder, and other gadgets to capture the data needed. For ethical purposes, the researcher made sure that interviewees were informed and that permission was obtained, through informed consent, before recording information. With regard to sensitive issues, respondents could ask for privacy or anonymity.
The researcher employed a semi-structured interview guide, and also utilized Spradley's participant observation and ethnographic interview questions. These materials helped in collecting pertinent information, which was examined and led to the generation of themes.
The researcher asked permission from the chieftain of the Eskaya tribe and the Community Development Officer of the NCIP office in Tagbilaran City, Bohol, Philippines, and made sure to respect the cultural practices and beliefs of the tribe.
The study attempted to illuminate the field of study covering the culture and local practices of the indigenous people of Taytay, Duero, Bohol, Philippines, particularly the Eskaya tribe. Data were analyzed by identifying patterns, themes, or categories recurrent in the data. The key themes or terms of the study include ciphering, classifying, simple counting, ethnomathematics, indigenous people, inferring, instructional materials, measuring, modeling patterns, and ordering. These terms were defined conceptually to ensure a common understanding. The extensive discussion concerned how the Eskaya tribe learns the ethnomathematical concepts manifested in their daily lives. The way schoolchildren use these learned concepts in studying formal Mathematics was also described, to bring their culture to light.
The significant data could not be collected all at once; the researcher needed to be in the field for 6–7 months to capture the different events relevant to the study. All observations were recorded through field notes and subjected to interpretation. To establish reliability and validity, triangulation, respondent feedback, and a series of consultations with the chieftain and the "timama" (teachers) were carried out to arrive at correct findings and conclusions about the data. These activities helped greatly in arriving at a good research output. Instructional materials depicting the local practices of the tribe were developed, as well as an emergent theory of how the tribe learns mathematics.
3. Results and Discussion
Context is a locale, situation, or set of conditions of the Filipino learner that may influence his or her study and use of mathematics to develop critical thinking and problem solving. Context refers to beliefs, environment, language, and culture — including traditions and practices — and the learner's prior knowledge and experiences, as stated in the twin goals of the DepEd Curriculum Guide (2011). Results showed that the ethnomathematical practices of the Eskaya tribe consist of skills and techniques such as counting, measuring, ciphering, ordering, classifying, inferring, and modeling patterns.
Counting
Every society needs to count. The Eskaya people do simple counting using their fingers. A numeration system is a way in which humans represent numbers. Aside from the 46 Eskaya letters of the alphabet, the tribe has its own ethnomathematical numeration system — simple counting expressed in the Eskaya dialect and written in their own symbols or script, as shown in Tables 1 to 3, which the tribe uses in its tribal Sunday class. The symbols can be extended by combining the Eskaya numbers. For example, 1001 in Eskaya is "oy-man-oy," 1002 "oy-man-tre," 1003 "oy-man-koy," 1004 "oy-man-pan," 1005 "oy-man-seng," and so on.
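A minimal sketch of how such compound numerals could be generated. The digit names below are inferred from the paper's own examples (1001 = "oy-man-oy" through 1005 = "oy-man-seng"), assuming "oy-man" denotes 1000 and that oy, tre, koy, pan, seng name the units 1–5; they are not taken from a published Eskaya lexicon:

```python
# Hypothetical mapping inferred from the examples 1001="oy-man-oy" ... 1005="oy-man-seng".
UNITS = {1: "oy", 2: "tre", 3: "koy", 4: "pan", 5: "seng"}

def eskaya_numeral(n: int) -> str:
    """Compose an Eskaya-style numeral for 1001-1005, per the paper's examples."""
    thousands, units = divmod(n, 1000)
    if thousands != 1 or units not in UNITS:
        raise ValueError("only 1001-1005 are attested in the paper's examples")
    return f"oy-man-{UNITS[units]}"

for n in range(1001, 1006):
    print(n, "->", eskaya_numeral(n))
```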
The tribe has its own terms for the four fundamental operations — addition ("as"), subtraction ("ton"), multiplication ("bret"), and division ("pin") — and the computations are carried out in the usual mathematical way. The symbols are shown in Table 4.
The Eskaya were taught to apply these symbols in computation. Multiplication was used in doubling when computing income, especially from selling vegetables and flowers, and in counting money; division was used in sharing food equally, through the idea of distribution; and marking and tallying were used during the tribe's elections.
Moreover, the tribe has its own names for the basic shapes ("liliyamor"): "molyera" for circle, "pinla" for oval, "maldera" for rectangle and square, and "lawde" for triangle, as shown in Tables 5 and 6.
Measuring
Similar to the ordinary days of the week, the Eskaya people have their own names for the days: Monday (Leni), Tuesday (Mimati), Wednesday (Mibol), Thursday (Hubir), Friday (Bene), Saturday (Sanubi), and Sunday (Liongo), as shown in Table 7.
The tribe's local measuring practices show a stimulating connection to ethnomathematics. For instance, the elders usually read the time from the environment — the position of the sun or the shadows cast by objects — and from the crowing of roosters early in the morning. Elders have used, and mostly still use, their bare hands to measure the rice to be cooked for the family; one elder used her middle finger to measure the rice when cooking. Elders also still use a "gantangan" and "salmonan" in measuring rice ("magtakal"); one gantangan is equivalent to 6 salmonan, according to the Eskaya librarian.
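A minimal sketch of the gantangan–salmonan conversion stated above. The 1:6 ratio is taken from the text; the function names are ours:

```python
SALMONAN_PER_GANTANGAN = 6  # ratio stated by the Eskaya librarian

def gantangan_to_salmonan(g: float) -> float:
    return g * SALMONAN_PER_GANTANGAN

def salmonan_to_gantangan(s: float) -> float:
    return s / SALMONAN_PER_GANTANGAN

print(gantangan_to_salmonan(2))   # 2 gantangan -> 12 salmonan
print(salmonan_to_gantangan(9))   # 9 salmonan -> 1.5 gantangan
```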
Meanwhile, an elder carpenter said he used a "sipilya" as a planer and an "alapres" for making corners in doors or decorative work, respectively, as shown in Plate 2.
The tribe has a "yakon" for P100 per kilo which according to "Naning" (an Eskaya elder) they believed "yakon" is a good anti-oxidant or body cleansing for long life.
Plate 3: The Eskaya tribe"s Agri products Every society has a design using patterns. Observing the ambience and terrain where the community settles can enrich one"s mind in the real context of ethnomathematics. In this study, the patterns considered are produced by nature and the "manmade" creations by the tribe.
Plate 4: Eskaya Tribe"s native "pugaran and poo-so" An interesting activity of the tribe applying patterns is making a "poo-so" is rice wrapped and later boiled in a triangular casing made of woven coconut "lukay" and a native "pugaran" made of coconuts fronds which are used as shelters for an egg-laying hen. The tribe also uses patterns in making a design in a bamboo chair which could serve as a venue in appreciating the real-life context of Mathematics associated with tessellations in geometry.
Classifying
Classifying is categorizing something into a certain group or system based on certain characteristics. Classifying was observed in the selling of flowers: red anthuriums are sold according to size (small, medium, and large) — P0.50 each for small, P0.75 for medium, and P1.00 for large, or P5 for an assorted bundle of 5 pieces. The "cedar," a white anthurium, is sold for P25 each. Notice that each activity involves ethnomathematics that guides the tribe's local routines.
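A minimal sketch of the flower-pricing classification described above, using the prices given in the text; the function and variable names are ours:

```python
# Prices in Philippine pesos, as reported in the text.
RED_ANTHURIUM = {"small": 0.50, "medium": 0.75, "large": 1.00}
ASSORTED_BUNDLE = 5.00   # per 5 assorted pieces
WHITE_ANTHURIUM = 25.00  # "cedar", each

def price_red(size: str, quantity: int) -> float:
    """Price a quantity of red anthuriums of a given size class."""
    return RED_ANTHURIUM[size] * quantity

print(price_red("small", 10))   # 5.0
print(price_red("large", 3))    # 3.0
print(ASSORTED_BUNDLE * 2)      # two assorted bundles -> 10.0
```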
Ordering
Ordering is putting things into their correct place following some rule or reason. Some of the tribe's stimulating local practices can be observed through ordering, which the researcher takes to mean the arrangement or sequencing of the tribe's things and activities following some rule or reason. The common planting practices among the Eskaya carry significant ethnomathematical concepts, an idea that can be seen in the following activities: making bamboo sticks to guard the plants, weighing the crops and vegetables, putting available organic fertilizer around the roots, harvesting the crops, vegetables, and flowers, and selling poultry, livestock, and harvest at the local market, among others.
Ciphering
Ciphering is the act of writing in a code in which the letters of a text are replaced with others according to a system. This is somewhat similar to the way the tribe ciphers. The activity involves signs, symbols, and human gestures, which are other ways of conveying messages to their fellow "Eskayananons."
The ethnomathematical practices of the tribe in ciphering were also observed in activities such as the sounding of a "budjong," a shell used to inform the tribe of meetings or of a death in a family. The budjong is sounded five times to announce a meeting and three times if a member of the tribe dies.
Inferring
Applying inference in Mathematics is the act or process of deriving logical conclusions from premises known or assumed to be true; the laws of valid inference are studied in the field of logic. In this research, however, inferring is used in the sense of deriving meanings from observed information in the environment. Drawing inferences from observed information is a basic human activity for interpreting events in daily life. In the case of the Eskaya, when events unfold as inferred, the inferences become true for them and eventually become part of their beliefs and culture.
Inferring is also observed in local activities of the Eskaya — for instance, predicting the weather from cloud formations and inferring whether a place is suitable for building a house. Another is following the phase of the moon in planting crops: herbaceous vine crops are planted before a full moon, while non-vine herbaceous crops are planted after it. The Eskaya elders also believe that a very red sunset portends a calamity. These are situations that can be adapted to connect lessons in Mathematics, especially in the field of Statistics.
The contexts mentioned in this study, drawn from different activities of the tribe, can be connected to Mathematics taught in school through several mathematical concepts — geometry, arithmetic, statistics, and algebra — to make Mathematics more stimulating and meaningful for students. The relevance of ethnomathematics is established by connecting the concepts to students' daily-life activities. These ethnomathematical practices can be considered the ethnomathematical skills found in the Eskaya tribe, as summarized in Figure 1.
Implications
The ethnomathematical practices of the Eskaya tribe were found in their daily activities, such as gardening and farming. The way of life observed was mostly related to their culture and local practices; daily living in this rural area is very simple. What was most interesting about the tribe was their unique language, mathematical numeration, and writing system.
The tribe's ethnomathematical practices appear in day-to-day activities such as counting with the Eskaya numbers and numerals; they also have their own names for the basic shapes.
The Eskaya still use some traditional methods of measuring. Tribal members classify the prices of flowers by sorting them according to size (small, medium, or large) and by certain characteristics of the vegetables. Inferring was observed in local activities of the Eskaya, such as judging the weather by looking at the sky and predicting an incoming calamity from the color of the sunset. Cloud formations were used in deciding when to plant crops, as was observing the moon and planting before a full moon. One tribal member said, "I know a calamity is coming if the branches of trees fall even without a typhoon."
The researcher was able to identify some vital contents of Eskaya Ethnomathematics that were taught to the Eskaya pupils. These contents are: numbers and number sense, which includes the Eskaya numeration system (the Eskaya numbers and numerals), the Eskaya names of the basic shapes, and the four fundamental operations; and measurement, which includes the use of Eskaya numbers and measures to describe, understand, and compare mathematical and concrete objects in the tribe, such as the tribal names for measuring time, days, and months.
Recommendations
The researcher recommends the continued preservation of Eskaya culture through further research, to enhance the preservation program for Eskaya Ethnomathematics. This is to empower, appreciate, recognize, respect, and promote Eskaya ethnomathematical practices for the attainment of national unity and development. The researcher hopes for the integration of Eskaya Ethnomathematics into the K to 12 Curriculum of the Department of Education.
Acknowledgements
With sincere appreciation and gratitude, the researcher would like to thank the Holy Name University administration, the NCIP of Tagbilaran City, the tribal leaders and chieftain of Taytay, Duero, Bohol, and the researcher's friends and family. | 2021-05-21T16:57:21.918Z | 2021-04-10T00:00:00.000 | {
"year": 2021,
"sha1": "27feeae93bc4b0736518bcb2f1f909b5a5745e84",
"oa_license": "CCBY",
"oa_url": "https://turcomat.org/index.php/turkbilmat/article/download/1660/1407",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c8621694f92ff9dc8c82e70abbfcc93ea757e975",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Geography"
]
} |
262103567 | pes2o/s2orc | v3-fos-license | Neuropeptide Y-expressing dorsal horn inhibitory interneurons gate spinal pain and itch signalling
Somatosensory information is processed by a complex network of interneurons in the spinal dorsal horn. It has been reported that inhibitory interneurons that express neuropeptide Y (NPY), either permanently or during development, suppress mechanical itch, with no effect on pain. Here we investigate the role of interneurons that continue to express NPY (NPY-INs) in adulthood. We find that chemogenetic activation of NPY-INs reduces behaviours associated with acute pain and pruritogen-evoked itch, whereas silencing them causes exaggerated itch responses that depend on cells expressing the gastrin-releasing peptide receptor. As predicted by our previous studies, silencing of another population of inhibitory interneurons (those expressing dynorphin) also increases itch, but to a lesser extent. Importantly, NPY-IN activation also reduces behavioural signs of inflammatory and neuropathic pain. These results demonstrate that NPY-INs gate pain and itch transmission at the spinal level, and therefore represent a potential treatment target for pathological pain and itch.
INTRODUCTION

[…] hypersensitivity of the ipsilateral paw, compared to pre-surgery thresholds. Both the mechanical and heat hypersensitivity were blocked in CNO-treated mice (Figures 4E and 4F). Because de novo expression of NPY is known to occur in injured A-fibre afferents following nerve injury 33,38-40, it was possible that capture of these afferents by the virus would contribute to the blockade of neuropathic pain that we observed. […] We therefore conclude that this effect is due to activation of spinal inhibitory NPY-INs. We also assessed mCherry expression in the L4 and L5 DRG of 5 CFA-treated AAV.flex.hM3Dq-mCherry-injected NPY::Cre mice, 3 days following CFA injection. In contrast to nerve injury, neuropeptide upregulation is not observed in rodent DRG under inflammatory conditions 33,40. As expected, we observed no mCherry-labelled cells in the contra- or ipsilateral L4 or L5 DRG of these mice (data not shown).
Spinal NPY signalling has been implicated in the suppression of neuropathic pain through inhibition of NPY Y1 receptor (Y1R)-expressing excitatory interneurons in the dorsal horn 29,43,44. Therefore the suppression of neuropathic hypersensitivity that we observed during chemogenetic activation of NPY-INs could be due to GABAergic transmission, NPY signalling, or a combination of both. To assess the potential role of Y1R signalling, we systemically co-administered CNO and the Y1R-selective antagonist BMS 193885 21 prior to behavioural testing in AAV.flex.hM3Dq-mCherry-injected NPY::Cre mice that had undergone SNI surgery. Administration of the Y1R antagonist had no effect on the CNO-mediated suppression of tactile and heat hypersensitivity in these mice (Figures 4E and 4F), suggesting that action of NPY on Y1 receptors is not required for this effect.
In addition to evoked hypersensitivity, peripheral nerve injury induces ongoing neuropathic pain in rodents, as well as engaging affective-emotional responses to pain 45. To determine the contribution of NPY-INs to ongoing pain, we tested whether CNO induced conditioned place preference (CPP) in a separate cohort of AAV.flex.hM3Dq-mCherry-injected NPY::Cre mice following SNI surgery (Figure 4G).
A wildtype control group that had undergone SNI was also included to test for any possible preference for (or aversion to) the effects of CNO that could have resulted from off-target effects independent of DREADD activation. CNO did not induce […]

Toxin-mediated silencing of NPY interneurons causes spontaneous itch and enhances pruritogen-evoked itch but does not alter nocifensive reflexes

We then tested whether tetanus toxin light chain (TeLC)-mediated silencing of […] (Figure 5A). AAV.flex.eGFP-injected mice of the same genotype were again used as a control group. NPY::Cre;GRPRCreERT2 mice that received injections of AAV.flex.TeLC.eGFP showed no significant difference in CQ-induced itch compared to AAV.flex.eGFP-injected controls (P=0.34, 2-way ANOVA with Tukey's post-test, Figure 5B). However, when comparing NPY::Cre and NPY::Cre;GRPRCreERT2 mice that had received injections of AAV.flex.TeLC.eGFP, we found that the NPY::Cre;GRPRCreERT2 mice showed significantly less CQ-induced itch behaviour than NPY::Cre mice (P<0.0001, 2-way ANOVA with Tukey's post-test, Figure 5B). Furthermore, AAV.flex.TeLC.eGFP-injected NPY::Cre;GRPRCreERT2 mice did not display a significant increase in spontaneous biting prior to CQ administration (compared to AAV.flex.eGFP-injected controls; P=0.82, 2-way ANOVA with Tukey's post-test, Figure 5C) and never developed skin lesions (Figures 5D and 5E). These data demonstrate that both the spontaneous itch and the increased pruritogen-evoked itch […]

[…] coding for Cre-dependent constructs 7,16. While this approach failed to capture a minority of NPY-expressing neurons, it enabled us to target a large number of these cells. Importantly, expression was restricted to those cells that continue to express NPY. This was confirmed by our finding that up to 85% of the virally transfected cells contained detectable levels of NPY.
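The behavioural comparisons in the passages above are reported as two-way ANOVA with Tukey's post-test. A minimal sketch of that analysis in Python (statsmodels) on an invented toy dataset — the column names and numbers are illustrative only, not the study's data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented toy data: itch bouts by genotype and virus (not the study's data).
df = pd.DataFrame({
    "genotype": ["NPY_Cre"] * 6 + ["NPY_Cre_GRPR"] * 6,
    "virus":    (["TeLC"] * 3 + ["eGFP"] * 3) * 2,
    "bouts":    [42, 51, 47, 12, 15, 10, 18, 22, 20, 11, 14, 13],
})

# Two-way ANOVA with interaction between the two factors.
model = ols("bouts ~ C(genotype) * C(virus)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's post-test across the four genotype-virus groups.
df["group"] = df["genotype"] + "_" + df["virus"]
print(pairwise_tukeyhsd(df["bouts"], df["group"]))
```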
The main differences in interpreting the roles of NPY cells are likely to depend on whether the cells were inactivated (through ablation or synaptic silencing) or chemogenetically activated. In agreement with Bourane et al 18, we found that silencing NPY cells had no effect on acute nociceptive thresholds. However, chemogenetically activating these cells increased thresholds for both thermal and mechanical stimuli. […]
Here we show that activating NPY cells also strongly suppresses CQ-evoked itch.
This is at odds with findings of Acton et al 21, who reported that chemogenetic activation of NPY-lineage neurons failed to alter scratching in response to CQ. There are technical differences between these studies, since Acton et al used a reporter mouse line to express hM3Dq, and injected CQ intradermally behind the ear. The discrepancy between the results of these studies is most likely to result from higher levels of DREADD expression following viral transfection, and therefore more effective neuronal activation. However, there may also have been a contribution from regional differences in the itch tests used (hindlimb versus head), as well as in the neuronal populations targeted (as noted above). Although Bourane et al 18 reported that ablating ~70% of NPY-lineage neurons had no effect on itch evoked by CQ, we found that synaptic silencing of the NPY cells with TeLC increased CQ-evoked itch, and often resulted in development of skin lesions, presumably secondary to the spontaneous itch-related biting that was also observed. In fact, the antipruritic action […]

[…] (Figure 7A). This inhibitory input to GRPR cells appears to be even more powerful than that originating from the dynorphin/galanin cells, since NPY-immunoreactive boutons accounted for 45% of the inhibitory synapses on the GRPR cells, compared to 21% from dynorphin-immunoreactive boutons. Consistent with this, we found that optogenetic activation of NPY cells elicited oIPSCs in all of the GRPR cells tested. Interestingly, these were of much higher mean amplitude (~250 pA) than the ~80 pA oIPSCs reported by Liu et al 22 in GRPR cells when galanin cells were optogenetically activated using a very similar experimental approach. The inhibition of GRPR cells by NPY-INs is likely to be predominantly GABAergic, since oIPSCs were reduced by gabazine in all cells (with one also sensitive to strychnine).
Also, consistent with previous evidence showing that the majority of GRPR cells lack Y1 receptors 21,23, we did not detect outward currents in any of the GRPR cells that were tested with a Y1 agonist.
Although GRPR-expressing excitatory interneurons have been strongly implicated in itch, we have recently shown that these cells respond to noxious as well as pruritic stimuli, and that they correspond morphologically to a class of SDH excitatory interneurons […]

Activating NPY cells suppresses hypersensitivity in persistent pain states

Importantly, in addition to its effect on acute nocifensive reflexes and itch, activating NPY cells also blocked thermal and mechanical hypersensitivity in both inflammatory and neuropathic pain states. In the SNI model, we found that administration of a Y1 antagonist had no effect on the reversal of mechanical and heat hypersensitivity when NPY cells were activated. NPY acting on Y1 receptors expressed by spinal neurons is known to reduce signs of neuropathic pain 29,43,59; however, it appears that chemogenetic activation of NPY cells generated GABAergic inhibition that was sufficiently powerful to reverse the hypersensitivity independently of Y1 signalling.

[…] and spinal cord tissue was processed for imaging and analysis as described below. […] were injected into the L3 segments of wild-type mice (n = 8 for both groups), and […] | 2023-02-13T14:10:42.066Z | 2023-03-31T00:00:00.000 | {
"year": 2023,
"sha1": "79cd28afdcccfbdb213634c2b59180b1cba3f939",
"oa_license": "CCBY",
"oa_url": "https://eprints.gla.ac.uk/291916/2/291916.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "79cd28afdcccfbdb213634c2b59180b1cba3f939",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
208563557 | pes2o/s2orc | v3-fos-license | Dengue Virus: Clinical Manifestations and Advances in Diagnosis, treatment with a Special Focus on Strategies to limit Mosquito Spread
Dengue is a viral disease transmitted worldwide by the Aedes aegypti mosquito. It is globally prevalent and has no effective treatment to date. Approximately 50,000 people suffer from dengue annually, of which about 10% of cases are due to dengue hemorrhagic fever. This distressing situation underscores the need to develop an effective anti-dengue agent to combat this epidemic infection. The use of bioinformatics tools, high-performance computing, and molecular modeling programs has been driving advances in the design and in silico screening of therapeutically active molecules against dengue. For drug design, bioinformatics is advancing docking studies of the NS3 protease and envelope proteins of DENV-2 and DENV-3 with publicly available drug-design tools that use Artificial Neural Networks (ANN) and Hidden Markov Models (HMM). Most chemical insecticides used for the eradication of mosquito vectors are not safe, because of their harmful side effects on living organisms. This review focuses on the various clinical and immunological manifestations of dengue viral infection. There is a need for biocontrol agents — including microorganisms, fishes, and their metabolites — for the elimination of vectors and […]
[…] which encodes a polyprotein that is post-translationally cleaved into the envelope, membrane, and capsid structural proteins and seven non-structural proteins. Previous research has reported that the non-structural 48-kDa glycosylated protein NS1 plays an effective role in viral replication and immune evasion (Nivedita et al., 2013).
NS1 is initially translated as a monomer and glycosylated in the endoplasmic reticulum (ER), then rapidly dimerizes. The dimer associates with the viral replication complex on the surface of the ER membrane in infected cells and attaches to the plasma membrane through a glycophosphatidylinositol linkage; a soluble lipophilic hexamer is also secreted by infected cells, and hexameric NS1 can bind back to the surface of uninfected cells through glycosaminoglycan interactions (Gutsche et al., 2011). According to WHO, epidemiological analysis of dengue places India in category A, owing to its persistent public health problems, its position as a paramount cause of hospitalization and death among children, and hyperendemicity with all five dengue virus serotypes. At present, the increased risk of dengue is due to factors such as lifestyle changes, rapid urbanization, and improper water-storage practices, which lead to rapid escalation in mosquito breeding (Natasha et al., 2013). This review presents details of the dengue virus and its manifestations with regard to public health, prevention, and treatment, through understanding the mechanisms of the serious forms, Dengue Hemorrhagic Fever and Dengue Shock Syndrome.
Mode of transmission
The life cycle of Aedes aegypti is explained in two phases: Aquatic phase involves Larvae and Pupae form; and Terrestrial Phase includes eggs and Adult stages. Because of its fastest adaptations to the new environment there is a high chance of occurrence of dengue outbreak and it should be considered as an important infection affecting public health (Helenice et al., 2018). The transmission of dengue virus is aided by mosquitoes by feeding on blood of infected persons. The viral replication smoothly starts by infecting and replicating in the midgut epithelium of the vector and continuous its replication in the insect hemolymph and are transmitted to other organs finally reaching salivary glands after 10-14 days of exposure thereby aiding vector to bite another person for blood meal (Nedjadi et al., 2015). The entry of DENV is facilitated by the viral envelope glycoprotein receptor -mediated endocytosis to its targets such as dendritic cells, macrophages and monocytes. The glycoprotein E is the major component which aids the viral entry. It is also reported as the viral entry is mediated via clathrin-mediated endocytosis pathway (Rigau-Perez 2006). The DENV pathway is mainly based on the viral strains and the cell type.
In the classical endocytic pathway, clathrin-coated vesicles take up the receptor-bound virus and fuse with endosomes to deliver it into the cytoplasm. Glycoprotein E undergoes conformational rearrangement as endosomal pH falls (Van Dam and Walton, 2008). As discussed, the viral envelope protein (E) plays a major role in the replication process. The ectodomain of the envelope protein comprises three functional domains: (a) EDI, the central region, an eight-stranded β-barrel involved in structural organization; (b) EDII, the dimerization domain, with 12 β-strands and 2 α-helices, carrying the highly conserved fusion loop; and (c) EDIII, an immunoglobulin-like domain of 10 β-strands involved in receptor binding. The structure of the three domains differs among serotypes (Niyati and Ira, 2016).
Clinical Manifestations
In dengue fever, the illness begins suddenly and progresses through three phases: an initial febrile phase, a critical phase, and a spontaneous recovery phase.
Febrile phase
The main characteristics of the febrile phase are headache, vomiting, myalgia, high temperature, joint pain, and macular rashes. Mild hemorrhagic manifestations, such as petechiae and bruising at the venepuncture site, and a palpable liver may be seen. The febrile phase lasts 3–7 days after onset, after which most patients recover without further complications.
Critical phase
The critical phase is marked by the onset of a systemic vascular leak syndrome, accompanied by rising hemoconcentration, pleural effusions, ascites, and hypoproteinemia. It occurs during the transition out of the febrile phase, typically on days 4–7, and brings the main clinical complications. Dengue shock syndrome is diagnosed when the pulse pressure narrows to 20 mm Hg or less and peripheral vascular collapse is observed. Once hypotension develops, systolic pressure drops abruptly, and irreversible shock and death may follow despite rigorous attempts at resuscitation. Continual deterioration manifests as nausea, progressive severe abdominal pain, a tender enlarged liver, a fluctuating hematocrit corresponding to thrombocytopenia, serosal effusions, mucosal bleeding, and lethargy or restlessness (Cameron et al., 2012).
Recovery Phase
The change in vascular permeability is only temporary; after 48 to 72 hours it resolves and the patient recovers quickly, with rapid improvement in symptoms. A second rash may arise, ranging from a mild maculopapular rash to a severe, itchy lesion suggesting leukocytoclastic vasculitis, which resolves with desquamation over a period of 1 to 2 weeks. Adults, however, may have severe fatigue that lasts for several weeks after convalescence (Cameron et al., 2012).
Clinical manifestations of serious illness include cold extremities, weak pulse, low urine output, signs of mucosal bleeding, and abdominal pain. One report expanded the clinical spectrum to gastrointestinal and hepatic syndromes, including asymptomatic elevation of liver enzymes, fulminant hepatic failure, acute pancreatitis, peritonitis, splenic rupture, acalculous cholecystitis, subacute intestinal obstruction, and kidney failure (Bijaya et al., 2019).
Diagnosis and treatment
The IgM immunoassay (MAC-ELISA) is widely used for rapid confirmation of the viral illness (Rigau-Perez, 2006). A false-negative result may be obtained if samples are collected within the first six days of illness. An IgM result is confirmed as positive when both acute- and convalescent-phase specimens are analysed by hemagglutination inhibition (HI) or enzyme immunoassays; these assays provide definitive serologic testing for acute dengue virus infection. For a patient with a negative IgM result, the acute-phase serum sample can be tested for the dengue viral NS1 antigen (Blacksell et al., 2007).
Dengue-specific IgM antibodies become detectable in blood by MAC-ELISA around the sixth day of illness and persist for 30 to 90 days. Compared with the HI assay, the sensitivity and specificity of MAC-ELISA are lower, owing to blunting of the IgM antibody response in secondary dengue virus infections and the potential for positive results to reflect a recent rather than acute infection (Vaddadi and Vaddadi, 2015). Confirmation of current infection relies mainly on virus isolation and on detection of viral RNA or protein in acute-phase serum. RT-PCR is widely performed for epidemiologic purposes as part of clinical research. The preferred specimens for virus isolation are serum and plasma, although virus can still be isolated from liver tissue after it has been cleared from the serum. RT-PCR is the only method that detects virus within a very short period (one to two days), with sensitivity comparable to viral isolation (Chien et al., 2006).
Dengue viral proteins in tissue samples can be detected using immunohistochemical staining, with the highest yields of viral proteins detected in liver tissue. During the first five to six days of illness, the nonstructural protein 1 (NS1) of dengue virus can be detected in plasma (Moi et al., 2013). Plasma leakage in DHF can be detected by ultrasound (Srikiatkhachorn et al., 2007). No specific treatments or vaccines are yet available for this disease.
New approaches for rapid dengue diagnosis are currently in development, including micro/paper fluidics, in vivo micropatches, isothermal PCR, and piezoelectric and electrochemical detection (David et al., 2017). Experiments in Vietnam, Australia, and other countries suggest that Wolbachia-infected mosquitoes can successfully invade natural mosquito populations, with the infection passed on through female mosquitoes to establish new strains (Anum et al., 2016).
Biological Control of Mosquito Vectors
The chemical insecticides used for the eradication of mosquito vectors are not safe, because of their harmful side effects on living beings. There is a need for biocontrol agents — including microorganisms, insects, fishes, and their compounds — as have been applied against other emerging viruses such as Zika virus. Important arthropod predators include dragonflies ("mosquito hawks") and the aquatic stages of damselflies, which prey on mosquitoes, although damselflies are not as effective for mosquito control as dragonflies. Two beetles, the predaceous diving beetle and the water scavenger beetle, readily eat the aquatic stages of mosquitoes. Spiders become mosquito predators by encasing and eating mosquitoes that inadvertently fly into their webs.
Carassius auratus, Poecilia reticulata, Lepomis macrochirus, and Siluriformes sp. are effective in reducing mosquito numbers under field conditions. Among these, the most important fish predator is Gambusia affinis, commonly known as the mosquito fish (Mario et al., 2012).
Birds and tadpoles have also been reported to feed, though infrequently, on mosquito larvae. A. aegypti usually lays eggs in discarded water-filled plastic containers, tires, and the like; many anti-dengue campaigns therefore include removal of such containers from the environment as part of vector control (Muhammad, 2015). The red-eared slider is the turtle that feeds most readily on mosquito larvae.
One scientific study reported that, instead of eradicating the vector, infecting mosquitoes with Wolbachia pipientis can shift their age structure and thereby shorten the lives of dengue-infected mosquitoes. When a mosquito is infected with dengue virus, it takes about eight to twelve days before it can transmit the infection to a healthy person. The mosquito then continues to infect people throughout its lifetime, generally around three to four weeks. An infected mosquito with a shortened lifetime has only a brief opportunity to transmit dengue (McMeniman et al., 2009).
Synthetic pyrethroids are the most harmful chemicals to users, while Bacillus thuringiensis and insect growth regulators are among the least toxic. Bioformulations based on bacteria such as Bacillus sphaericus and Bacillus thuringiensis are well-known examples of bioinsecticides that are effective against larval populations, affordable to produce, and less harmful to people and the environment. A Bacillus sp. spray acts by paralyzing the insect gut (depending on the strain used and the mosquito). The active ingredient is the bacterial protein that paralyzes the gut, which is used in formulations instead of viable bacterial spores; thus, the disease does not continue to spread among the insect population. The potential of low-cost virus production for use against A. aegypti has not yet been developed in developing countries, nor has the production of protozoans and microsporidians through artificial cultures been evaluated as parasites of the dengue vector (Mario et al., 2012). Beneficial nematodes are an example of live natural enemies that are inundatively released. These nematodes travel through the soil or over its surface and actively attack their insect hosts. Inside the host body, they release symbiotic bacteria, which rapidly multiply and kill the host. The nematodes feed on the bacteria and insect tissue, then mate and reproduce. After one to two weeks, new young nematodes emerge from the dead insect body and seek new hosts. A recent study reported that a toxic compound of the Pseudomonas fluorescens KUN2 strain, extracted in petroleum ether, showed larvicidal activity against the dengue vector Aedes aegypti (Lalithambika and Vani, 2016). Further studies of this cytotoxin and the development of larvicidal agents could provide cost-effective control of dengue fever.
CONCLUSION
Mosquito-borne diseases are among the world's most hazardous health problems. Several species belonging to the genera Anopheles, Culex and Aedes are vectors for the pathogens of diseases such as malaria, filariasis, Japanese encephalitis, dengue and dengue hemorrhagic fever, yellow fever and chikungunya. A number of approaches have been developed to control mosquito spread. Control of mosquito-borne disease is aimed at killing mosquitoes at the larval stage with integrated pest management and effective biological agents. This review concludes by encouraging more extensive research, with valid scientific discoveries, aimed at controlling the spread of infection and reducing the mortality burden on public health. The development of antiviral drugs is a key route for future insight into dengue pathogenesis, through which the underlying mechanisms involved in Dengue Hemorrhagic Fever and Dengue Shock Syndrome can be tackled.
ACKNOWLEDGEMENTS
We would like to thank the Department of Biotechnology, Karunya Institute of Technology and Sciences, and also Mathew C Abraham, Susan Mathew and Blesson Mathew for all their support. | 2019-10-03T09:03:19.107Z | 2019-09-30T00:00:00.000 | {
"year": 2019,
"sha1": "d954519eb27e60c8852b83e1606d10e3dfa6beb0",
"oa_license": "CCBY",
"oa_url": "https://www.microbiologyjournal.org/download/27326/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1a31dda8d2e50d71c16dd6812bd86f90d7f09790",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
219072852 | pes2o/s2orc | v3-fos-license | Improvement of the student evaluation system based on the ICT use
Today, considerable attention is paid to higher education quality issues. The problem is addressed by using tests that should provide a reliable student evaluation. The article presents a technology for improving test tasks. It includes functional procedures that specify the test and test task improvement sequence. It was found that specialized computer applications are best suited for their implementation, which is why this technology involves the use of the author program "Statistical Analysis of Test Results". This program calculates the indicators – the item difficulty, discrimination, reliability and validity – from empirical student testing data. The indicators help identify test tasks of unsatisfactory quality and improve the student assessment means, as the program derives recommendations. The steps set out by the testing result processing technology, carried out with the help of a statistical package, increase the efficiency of the improvement process. Correlation and factor analyses help identify the tasks that put the highest load into the test score. These procedures inform the decision on whether a test task needs to be reviewed. The technology involves repeated checking procedures. The presented technology has been tested at Zaporizhzhya National University and Zaporizhzhya Regional Institute of Postgraduate Teacher Education. ANOVA has helped prove its effectiveness.
Introduction
The sustainable development is associated with solving problems that humanity will face in the near future. That is why ambitious goals such as providing universal and high-quality education; creating conditions that enable children to get free, equal and high-quality secondary education; ensuring equal access for women and men to high-quality education, including the university one; facilitating the students' acquisition of knowledge and skills necessary to promote the sustainable development are set before education [1]. According to the World Education Monitoring Report, learning helps solve global environmental problems, promotes economic growth, helps overcome gender and social inequalities and is considered to be a conflict and violence prevention means.
From the standpoint of higher education, the important issues are access, accessibility and quality. Access to higher education reflects a number of indicators one of which is the university entering preparedness level. In Ukraine, the level is determined by the external independent evaluation (testing) results, so the use of high-quality tests is very important. In addition to funding, accessibility is associated with higher education enrollment of differently-abled young people, including the disabled ones. The problem is solved by introducing information and communication technologies into the educational process that will enable to create comprehensive and effective learning conditions for everyone. And the use of computer-based testing expands the disabled students' education availability.
Achieving sustainable development goals is ensured by high education quality. Nowadays the future specialist training quality problem is considered by many researchers. The standard introduction in educational institutions, the use of learning practice research data, the inclusion of alternative qualifications in training, the communication with educational centers, the external evaluation implementation are considered to be the ways to improve the future teacher training quality [2]. To improve the future English Philology Masters' education quality, the innovation introducing new educational technologies and new learning methods into the educational process is important [3]. To improve the future programmer training quality, it is proposed to modernize the content and methods of programming learning in accordance with international standards; to develop variable modules taking into account the modern labor market standards and needs; to carry out the constant future software engineer training quality monitoring at all levels; to monitor the labor market in order to determine the employers' requirements and to adjust the training content in accordance with the latter [4].
The education quality is determined on the basis of university rankings or according to student performance results. As acknowledged by the authors of the report, a university ranking is not a reliable way to determine education quality and is closer to a marketing tool [1]. A more reliable criterion is student assessment, which should be transparent and understandable. Testing as a way of assessing the knowledge level is an essential and integral part of operative intermediate, stage and final learning result assessment. This once again confirms the relevance of research in this direction as a means of reducing the cost of assessing students' knowledge levels.
The evaluation is carried out mostly through testing with computer programs dominantly applied for its implementation. They are required to implement testing, obtain initial information, accumulate and store students' performance data. Such programs are computer knowledge testing systems (Brainbench, INDIGO, Hot Potatoes, MyTest, OpenTEST2, TCExam, etc.) and learning management systems (Blackboard, Inkling, MOODLE, Sakai, WebTutor, etc.). Most of them provide "technological testing cycle", that is preparation of the test task bank; test development; testing; testing result report making [5].
Nowadays the use of learning result testing computer programs is considered quite actively in terms of ongoing monitoring, final assessment and qualification examinations.
In the paper titled Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment, the authors presented a comparative analysis of the use of paper-based and computer-based tests in high-stakes exams [6]. The authors drew attention to the significance of random test forming and the importance of using computer-based tests at the intermediate learning stage. The results of their study demonstrate that most students are ready to pass high-stakes exams based on the use of computer-based tests. Their positive attitude is explained by the possibility of getting a mark after passing the test.
The possibilities of using computer-based tests in the technical drawing assessment of students are discussed in Development of Computer-Based Tests Mode of Assessment for Technical Drafting Students by L. Aquino. The computer-based test development was carried out in four stages: Planning Stage; Development Stage; Validation and Acceptability Stage; Final Revision Stage [7]. The computer-based tests were analyzed and evaluated by five experts in the field of technical design according to the following parameters: Utility, Accuracy, Content and Navigation. At the same time, a computer-based test was evaluated according to the criteria of preferences in use, item difficulty level, readiness for computer-based testing and fraud prevention. As a result of the research, the author concludes that computer-based tests are appropriate and acceptable for technical drawing learning result assessment.
Recently, universities have been using learning management systems to enhance real learning opportunities. The use of such programs enables students to study in a convenient place at a convenient time, which is the basis for transforming the existing higher education system into Education for You [8]. The use of such programs also adapts young specialists to passing qualification examinations that already form the basis for enterprise personnel selection. The learning management system opportunities are expanding through mobile applications that meet the challenges of the fourth industrial revolution. The use of mobile learning management systems will allow universities to move away from traditional learning approaches, implement innovations, and form effective human capital [9].
Regardless of the means chosen for testing, all of them should implement an adequate learning result assessment and ensure the effective functioning of the educational process monitoring system. In this regard, the evaluation tool quality analysis and improvement is more relevant today than ever, regardless of the tools used in its implementation, whether by using a specialized program, with the help of a statistical package, or by formula calculation in a word processor environment.
These calculations are based on the Classical Testing Theory (CTT) and Item Response Theory (IRT) provisions. In general, the IRT results are considered to be more reliable than the CTT ones [10]. However, studies showing a link between the parameters obtained through these two theories have recently been conducted.
The paper titled Validation of a developed university placement test using classical test theory and Rasch measurement approach [11] presents a sequential economy test analysis that was conducted by using item difficulty, discrimination, and reliability indicators. Testing data was analyzed by using Classical Testing Theory and Item Response Theory. To calculate the CTT and IRT indicators, the authors used such specialized software as ITEMAN 4.3 and WINSTEPS 3.72.3. The data obtained proved a correlation between the results processed with the two models. It is important that the paper tested the task suitability to measure the desired result.
In the source [12], the authors considered the use of the CTT and IRT models in evaluating open test tasks. In order to obtain reliable results, open-ended test tasks were evaluated by experts and by using a developed scale. The estimates obtained were compared by using the two models, and the open test task item difficulty indicators were calculated. The results demonstrated a high level of correspondence between them. Methods of mathematical statistics (factor analysis, correlation analysis, the Chi-square criterion) that proved the correspondence of the constructed model to the real data were used in the paper.
The paper titled Comparative Analysis of Classical Test Theory and Item Response Theory Based Item Parameter Estimates of Senior School Certificate Mathematics Examination
[13] provides the mathematics examination result analysis by using the CTT and IRT methods. The indicators obtained by using the two theories were compared by the factor analysis methods (principal component analysis) and correlation analysis (Fisher Correction, Olkin and Pratt Correction, Point-Biserial). Factor analysis proved the unidimensionality of all the tasks included in the examination. Correlation indicators indicated the absence of discrepancies between the item difficulty and discrimination indicators calculated by the two author-selected methods. The authors have also found that the item difficulty and discrimination indicators obtained are independent of sample size: n=100 and n=1000.
This review proves that statistical calculations (descriptive statistics, correlation analysis, statistical hypothesis testing, factor analysis, variance analysis, etc.) necessary to draw conclusions are used to carry out the test and test task analysis. However, the calculations become a big problem for teachers unschooled in mathematical statistics, and it is better to use a specialized program for this. Of course, there are specialized programs designed for test analysis [11,14,15]: Iteman, Winsteps, Test_Results, Computer-based system of quality analysis of test items, etc. Some of these programs are local solutions that are not available to the general public: Test_Results, Computer-based system of quality analysis of test items.
Their functionality analysis has shown that they only output test quality indicators (in numerical or graphical form), and it is more logical to provide recommendations to assessment means developers. The availability of such programs cannot be a panacea to address the problem of improving the assessment means quality for students.
Hypothesis of our study. Based on a theoretical analysis of scientific publications and pedagogical experience, we assume that the use of a special technology to improve test tasks will allow: the gradual creation of adequate and reliable tests for student learning result assessment; the constant checking of their validity; and an efficient, simple implementation of the procedure. To this end, we have developed specialized software.
Methods and instruments
The study hypothesis checking was carried out by using a set of methods. To determine the indicators necessary to improve the learning result assessment means quality, the methods of scientific and methodological literature data theoretical analysis and generalization were used. The analysis of the publications allowed to determine the test quality indicators. Their calculation is based on the test theory and statistical methods.
In the process of developing the test improvement technology, series of computational procedures were carried out that made it possible to select the most effective test theory and statistical methods. They are the test and test task item difficulty determination; task discriminative ability test; test reliability and validity evaluation; correlation analysis; factor analysis, ANOVA. Computational procedures used empirical student test data (the control paper, training test, test and examination results) derived from the LMS Moodle.
In the process of an experimental work, the pedagogical experiment method that took place in vivo was used. 20 lecturers agreed to take part in it. In the process of an experimental work, the testing results of 2283 students were processed. The results were generalized that led to the test improvement technology development.
In addition to the LMS Moodle (version 3.7), the specialized author computer program "Statistical Analysis of Test Results" and the SPSS statistical package (version 20) were also used in the research.
The test task improvement technology
As a number of studies indicate, learning management systems are quite popular nowadays [16,17]. And the MOODLE (Modular Object-Oriented Dynamic Learning Environment) LMS is considered to be the most effective and widespread [16]. The orientation to the MOODLE LMS environment is also due to the fact that this system is widely used for the learning process didactic support in universities. The control event results are exported to a spreadsheet document (.xlsx or .ods file) that contains:
- general information about the student;
- test duration (the test start and end time and the time spent);
- the test score as a whole;
- answer results for each task (task types are Multiple choice, Matching, Calculated, Short answer, Numerical, Embedded answers, Drag and drop, etc.).
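As an illustration, a minimal sketch of the first processing step is given below: reading an exported result file and converting it into the dichotomous (0/1) matrix that the later indicator calculations operate on. The column names ("Q1".."Qn") are illustrative assumptions, since real MOODLE exports use different headers; the authors' own program is written in C#, so Python here is only a stand-in.

```python
import pandas as pd

def load_dichotomous_matrix(path: str, n_tasks: int) -> pd.DataFrame:
    """Read an exported result file and binarize it: 1 = full credit, 0 = otherwise."""
    results = pd.read_excel(path)  # the .xlsx control event export
    task_cols = [f"Q{i}" for i in range(1, n_tasks + 1)]  # assumed column names
    max_scores = results[task_cols].max()  # full credit observed per task
    return (results[task_cols] == max_scores).astype(int)
```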
We developed a technology of the assessment means improvement, which is based on the educational measurement theory. There are a number of scientifically sound criteria for the quality of the test as a whole and for the individual test tasks from which we have chosen the item difficulty, discrimination, reliability and validity [10].
The item difficulty is associated with both the individual task and the test as a whole. For example, according to the item difficulty, the tasks are divided into the most difficult, the most successful, quite simple and very simple ones. The simplest and quite simple tasks should be at the beginning and in the end of the test, and the most difficult ones should be at the center of the test. The total test item difficulty is divided into 4 levels: very high test item difficulty, the test is not balanced; the test is balanced according to the item difficulty; the test item difficulty is sufficient; the test item difficulty is bad.
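A minimal sketch of how such an item-difficulty check might be computed from a dichotomous response matrix is given below; the numeric band boundaries are illustrative assumptions, since the paper reports only the qualitative levels, not its exact cut-offs.

```python
import numpy as np

def item_difficulty(X: np.ndarray) -> np.ndarray:
    """Share of correct answers per task; rows = students, columns = tasks."""
    return X.mean(axis=0)

def difficulty_band(p: float) -> str:
    # Band boundaries are illustrative assumptions, not the authors' cut-offs.
    if p < 0.25:
        return "most difficult"
    if p < 0.75:
        return "most successful"
    if p < 0.90:
        return "quite simple"
    return "very simple"
```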
The index of discrimination means the task ability to differentiate students from the better to the worse ones [18]. High discrimination is considered to be an important indicator of a successful test task. The index value is in the range of [-1; 1] and the qualitative values may be as follows: the task is functioning quite satisfactorily; a small task correction is required; the task should be reviewed; the task should be deleted.
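The sketch below illustrates one common way to compute such an index, contrasting upper and lower scoring groups; the 27% group share and the numeric thresholds mapped to the four qualitative recommendations are conventional assumptions, not values taken from the paper.

```python
import numpy as np

def discrimination_index(X: np.ndarray, share: float = 0.27) -> np.ndarray:
    """Difference in success rate between the top and bottom scoring groups."""
    order = np.argsort(X.sum(axis=1))        # students sorted by total score
    k = max(1, int(len(order) * share))      # 27% split: a common convention
    low, high = X[order[:k]], X[order[-k:]]
    return high.mean(axis=0) - low.mean(axis=0)  # one value in [-1, 1] per task

def discrimination_advice(d: float) -> str:
    # Thresholds follow a widely used convention (assumed, not from the paper).
    if d >= 0.40:
        return "the task is functioning quite satisfactorily"
    if d >= 0.30:
        return "a small task correction is required"
    if d >= 0.20:
        return "the task should be reviewed"
    return "the task should be deleted"
```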
The reliability is considered as the test result stability degree during repeated measurements [10]. That is, the test is reliable if it provides high measurement accuracy and the results are resistant to external factors.
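The paper does not state which reliability formula its program uses; the sketch below assumes KR-20 (a special case of Cronbach's alpha for dichotomous items) purely for illustration.

```python
import numpy as np

def kr20(X: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for a dichotomous response matrix."""
    k = X.shape[1]                         # number of tasks
    p = X.mean(axis=0)                     # item difficulties
    total_var = X.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)
```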
The test must be valid. It is a characteristic that reflects its ability to get the results corresponding to the testing purpose [10].
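One simple way to operationalize this, sketched below, is criterion validity: correlating test scores with an external criterion such as final work results. Treating validity as a Pearson correlation is an assumption here, as the paper does not name its exact formula.

```python
import numpy as np

def criterion_validity(test_scores, criterion_scores) -> float:
    """Pearson correlation between test scores and an external criterion."""
    return float(np.corrcoef(test_scores, criterion_scores)[0, 1])
```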
In addition to the mentioned test quality criteria, you should also consider the time indicator: the correlation between performance and testing time. The time interval when the students made the least mistakes is determined in accordance with the testing data.
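A minimal sketch of such a time analysis follows: attempts are bucketed by time spent and the interval with the fewest mistakes is reported. The 5-minute bin width and the variable names are illustrative assumptions.

```python
import numpy as np

def best_time_interval(time_spent_min, error_counts, bin_width=5):
    """Return the (start, end) of the time bin with the lowest mean error count."""
    bins = (np.asarray(time_spent_min) // bin_width).astype(int)
    mean_errors = {b: np.mean([e for e, bb in zip(error_counts, bins) if bb == b])
                   for b in set(bins.tolist())}
    best = min(mean_errors, key=mean_errors.get)
    return best * bin_width, (best + 1) * bin_width  # e.g. (15, 20) minutes
```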
According to the pedagogical test development algorithm, the following stages are gradually carried out: the test task bank development; the testing for the task approbation purpose (the item difficulty and discrimination checking); the test forming and the second testing session conducting (the test item difficulty, reliability and validity checking); the standardization procedure implementation (the preparation of several parallel test variants, the testing time calculation) [19].
Also, after testing, important indicators that provide additional information about the test tasks are: the point-biserial coefficient for each task, nominative correlation coefficients, and factor analysis and analysis of variance results [10].
The authors have developed a phased test task improvement technology:
1) the test task bank forming (LMS Moodle);
2) the probation testing using the bank tasks (LMS Moodle);
3) the discriminativity and item difficulty level determination after the probation testing ("Statistical Analysis of Test Results");
4) based on the "Statistical Analysis of Test Results" recommendations, some test tasks are deleted from the bank, the rest are improved or remain unchanged;
5) the testing is carried out (LMS Moodle);
6) the test task item difficulty level is determined ("Statistical Analysis of Test Results");
7) based on the "Statistical Analysis of Test Results" recommendations, the tasks are redistributed in the test;
8) the testing is carried out (LMS Moodle);
9) the test reliability and validity are checked ("Statistical Analysis of Test Results");
10) the optimal testing time is determined ("Statistical Analysis of Test Results");
11) based on the "Statistical Analysis of Test Results" recommendations, adjustments are made to the test, if necessary;
12) the calculation of correlation coefficients such as the point-biserial and nominative ones (SPSS);
13) based on the SPSS calculation results, the tasks that should be deleted are determined;
14) the factor analysis implementation (SPSS);
15) based on the SPSS calculation results, the tasks that are the most significant for obtaining an objective assessment are determined, and adjustments are made, if necessary;
16) the testing is carried out (LMS Moodle), and empirical data are accumulated;
17) the ANOVA implementation to compare the student test results over several years (SPSS);
18) based on the SPSS calculation results, the final decision is made on the test effectiveness.
The technology is represented in the model (Fig. 1).
Fig. 1. The test task improvement technology.
Specialized computer program "Statistical Analysis of Test Results"
The "Statistical Analysis of Test Results" is a base for the introduced technology assessment means improvement, so let's take a look at this specialized computer program The C# programming language in Microsoft Visual Studio 2017 and the Windows Presentation Foundation technology have been selected for program implementation. When choosing the development means, we were guided by the following considerations: a convenient form designer and powerful means for working with arrays; the universal interface provides an integrated design and application component implementation.
The work with the "Statistical Analysis of Test Results" software starts from the main window, which is organized around buttons opening the corresponding system modules (Fig. 2). Clicking the [Define the test item difficulty] button opens the Test item difficulty dialogue window (Fig. 3). The system is focused on the testing results in the LMS MOODLE, so it provides for downloading files with these results (the [Download the file] button). You can get:
- the item difficulty of each task;
- the test item difficulty;
- the item difficulty of each task and the test item difficulty.
Fig. 3. The test item difficulty dialogue window
The results from the downloaded file are transferred to a dichotomous matrix, and the initial test indicators are calculated: the ratios of correct and incorrect answers; if the item difficulty of each task checkbox is selected, the variance is calculated; if the test item difficulty checkbox is selected, the average task item difficulty level is calculated. After that, a window showing the results of the test task and whole test item difficulty checking is displayed. The numerical value of the item difficulty indicator and its level are derived for the test, and the item difficulty level is determined for each test task. The task discrimination ability module (Fig. 4) helps determine the task discrimination ability of one test or derives recommendations on test tasks from the test task bank. That is why the user can choose only one of the checkboxes after downloading the result file: the discrimination of the tasks from the test task bank or the test discrimination. After the discrimination checking of all the bank tasks, the results are displayed in groups: 1) first, the tasks functioning satisfactorily are listed; 2) then, a list of those requiring a small correction is displayed; 3) next, there is a list of test tasks that should be reviewed; 4) at the end, there are the tasks that should be deleted. To do this, the test task bank statistics file is downloaded and the discrimination of the tasks from the test task bank checkbox is selected; the tasks are then grouped according to the discrimination indicators.
The test task discrimination checking derives recommendations for each test task. This is necessary when the test is generated by bypassing the test task bank. To do this, the index of discrimination is calculated for each task and a recommendation for each test task is derived according to the numerical value.
After clicking the [Define the test reliability and validity] button, the test reliability and validity dialogue window opens (Fig. 5). If the test reliability checkbox is selected, the program calculates the reliability indicator and derives a qualitative reliability characteristic; if the test validity checkbox is selected, it calculates the validity indicator and derives a qualitative validity characteristic. These indicators can be obtained individually or together from two downloaded files.
A feature of the "Statistical Analysis of Test Results system is that it derives not numerical values but qualitative characteristics of the test and its tasks. This is convenient because the teacher does not need to analyze numerical values, define the item difficulty, discrimination, reliability and validity level and make decisions about the test and its tasks.
Test improvement technology implementation
The assessment means improvement technology was used at Zaporizhzhya National University in the course of current, final and rectorial control, and was also tested at Zaporizhzhya Regional Institute of Postgraduate Teacher Education during the special course and training "The basics of testology and student computer-based testing".
After the development of test tasks, the approbation testing is carried out. According to its results, the task item difficulty and discrimination are checked using the "Statistical Analysis of Test Results" software. The data help identify the tasks that need to be improved or deleted (Fig. 6). It should be noted that the process is sufficiently long and lasts all the time during which teachers use the testing. In addition, the knowledge and skill level of students of different study years is still different, so the system provides test task discrimination checking (Fig. 7).
According to the recommendations of educational measurement specialists, the test should include 20% of the most difficult tasks, 20% of very simple and quite simple tasks, while the other tasks should be the most successful [10]. The test task distribution should be as follows: the simplest and quite simple ones should be at the beginning and at the end of the test, and the most difficult ones should be in the center of the test, unless the test mode involves task randomization.
Fig. 7. The test task discrimination checking results
Fig. 8 presents the test task item difficulty checking results. From these data it is clear that the test tasks are not placed in a balanced way. After the task improvement and redistribution according to the item difficulty, an optimal distribution was obtained (Fig. 9). According to our observations, the simple tasks were mostly the closed ones (multiple choice and conformity) and the most difficult ones were the embedded answers. The item difficulty, reliability and validity indicators helping evaluate the test quality are calculated for the tests. The developed tests are repeatedly used in the higher education institution educational process; often the final control (credit or examination) is carried out with their help. There is also the practice of using a pilot test, through which students conduct self-monitoring of their test preparation.
The results of any test are processed and a level of difficulty is obtained. The teacher can continue the task improvement, adding more or less difficult tasks, if the test item difficulty is bad, the test item difficulty is merely sufficient (Fig. 10) or the test is not balanced (Fig. 11). The reliability checking is performed according to two parallel testings (the pilot and the control one), and the validity checking is based on the control and final work results. Fig. 12 and Fig. 13 show two sufficiently divergent variants of the test reliability and validity checking. An unsatisfactory test validity or reliability is a signal for task change. As noted above, an important problem of testing is the time allotted for it; both an insufficient amount of time and its excess are disadvantages. In this regard, the developed program defines the optimal time to pass the appropriate test according to the testing results (Fig. 14). The calculation of the point-biserial correlation for each task helped check the task differentiation (Table 1). Since all indicators are greater than 0,2, all the tasks differentiate students well. In the process of obtaining the difficulty indicators, it was found that the closed tasks (Multiple choice and Matching) are among the simplest ones according to the item difficulty. The point-biserial correlation also proved this.
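For reference, a sketch of the point-biserial computation is given below: the correlation between a 0/1 item response and the total score on the remaining tasks. Excluding the item itself from the total is a common correction and an assumption here; the paper reports only the resulting coefficients.

```python
import numpy as np

def point_biserial(X: np.ndarray, item: int) -> float:
    """Correlation of one 0/1 item with the total score on the remaining tasks."""
    y = X[:, item]
    rest = X.sum(axis=1) - y  # corrected total: the item itself is excluded
    return float(np.corrcoef(y, rest)[0, 1])

# A coefficient above 0.2 is read as "the task differentiates students well".
```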
Some lecturers do not go beyond theoretical closed tasks (Multiple choice and Matching) when developing tests; therefore, a study of two tests from the same discipline was conducted by using factor analysis (the same students took both tests). One test included solely theoretical tasks, and the other one included open-ended tasks of different types in addition to the former ones. The first test included tasks identical to the second test tasks: task 1_1 was identical to task 2_1, task 1_2 was identical to task 2_5, and task 1_3 was identical to task 2_4. The factor analysis results showed the following: factor 1 (informativeness of 24,7%) included all the open tasks; the identical tasks fell into the same factor (2, 3 or 4) in pairs. Therefore, open test tasks put a higher load into the test score.
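A sketch of this kind of factor extraction is shown below, using principal component analysis as a stand-in for the SPSS procedure actually used by the authors; this substitution and the number of factors are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def item_loadings(X: np.ndarray, n_factors: int = 4) -> np.ndarray:
    """Factor loadings of each task, approximated by principal components."""
    pca = PCA(n_components=n_factors)
    pca.fit(X.astype(float))
    # loadings[i, j]: load of task j on factor i; explained_variance_ratio_[0]
    # would correspond to the "informativeness" of factor 1 reported above.
    return pca.components_ * np.sqrt(pca.explained_variance_)[:, None]
```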
The variance analysis results also proved that the inclusion of the test improvement factor contributed to a more adequate assessment. The score of students who were tested with improved means was lower than that of the groups tested with non-improved tasks. The test improvement factor had a significant impact on the assessment adequacy.
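A minimal sketch of such a comparison with one-way ANOVA is given below; the variable names and significance level are illustrative assumptions, and the authors performed the actual analysis in SPSS.

```python
from scipy.stats import f_oneway

def improvement_effect(scores_improved, scores_not_improved, alpha=0.05):
    """One-way ANOVA: does the test improvement factor affect the scores?"""
    stat, p_value = f_oneway(scores_improved, scores_not_improved)
    return stat, p_value, p_value < alpha  # True -> statistically significant
```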
428 tests (37,7%) for different disciplines (higher mathematics, computer science, programming, pedagogy, economics) used to evaluate the students at Zaporizhzhya National University and to train teachers at Zaporizhzhya Regional Institute of Postgraduate Teacher Education were analyzed by using the presented technology. This program was used to check 1375 (32,9%) tests for item difficulty and discrimination that in turn helped improve them. As a result, it was found that 63,7% of the test tasks functioned quite satisfactorily while others required a small correction according to the discrimination; 66,9% of the tasks had a sufficient test item difficulty level, and 31,1% of the tasks needed to be balanced according to the item difficulty (the item difficulty is very high or not sufficient); 82,3% of the tests were reliable; 74,9% of the tests showed high and medium validity levels.
Conclusions
So, the need to improve the future specialist training quality rests on an effective higher education system, which should not only create conditions but also have reliable tools for assessing the student knowledge level.
The effective functioning of the future specialist training system depends largely on the perfection and quality of the assessment means, the most common of which are tests. Tests must meet the requirements for the item difficulty, discrimination, reliability and validity indicators. A study of the formulas used to make the calculations showed that a computer program could be an effective solution to the test quality checking problem. The paper presents a specialized computer program, "Statistical Analysis of Test Results", that consists of four independent modules and derives the qualitative characteristics of the indicators, providing the basis for deciding on the need for test task improvement, as well as for defining the optimal testing time. Furthermore, a special procedure to increase the test quality, helping improve the assessment means, is needed. For this purpose, special indicators are applied: item difficulty, discrimination, reliability and validity. This procedure is presented in the form of a special technology that includes testing in the LMS MOODLE environment, calculating the main test quality indicators by using a specialized author program, and statistical processing of empirical data with the help of the SPSS program environment. The results of the approbation of the assessment means improvement program and technology in the process of testing the applicants of Zaporizhzhya National University and the postgraduate education system students proved their effectiveness.
The test improvement technology introduction has made the tests transparent and objective. Such test improvement will improve the future specialist training quality. It can then be expected that in the future these specialists will be able to think critically, generate creative ideas, make original decisions, and strive to ensure global environmental safety, economic prosperity, justice and equality.
Possible directions for further development of the program include test task distractor analysis, optimal test length determination and test task calibration. It is also desirable to introduce a special course on educational measurements for students in pedagogical disciplines and practicing teachers. | 2020-04-23T09:09:59.562Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "6a0a085d20dadd02152cd25e6fabf013a21f02af",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/26/e3sconf_icsf2020_10018.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "633e2a679bb887fb9c14194d6b14faa15f7a1a8b",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
229010653 | pes2o/s2orc | v3-fos-license | Enhancing Decision Making Skills among Postgraduate Students Using Alternative Assessment Approach
Abstract Postgraduate learning in Malaysian public or private universities has undergone a tremendous shift in its landscape, such that a continuous diversity of trends has emerged and postgraduate programmes are now offered to a large number of local and foreign postgraduate students. One of the expected aspects of being a postgraduate is having to make decisions that are not only wise, but also sound and apt. This requires postgraduates to acquire high-level and high-value decision-making skills that would prevent them from making catastrophic decisions, such as when choosing the right subject matter to investigate, supervisors and appropriate research instruments. Thus, the purpose of this study is to enhance the level of decision-making skills among master students in a Strategic Management course using alternative assessment. Using an explorative qualitative approach, this study employed action research in which participants' decision-making skills were evaluated, followed by intervention treatments that would help them to make better decisions. Ten postgraduate students were asked to complete a self-filled questionnaire and participated in a focus group discussion in the first phase of the data collection. SWOT analysis and a presentation rubric were used as intervention tools in this study. Results indicated an increase in score from 79% to 85% at the end of the assessment. This leads to the conclusion that the alternative assessment helps students to make better decisions. The findings illustrated that alternative assessment differs from the restrictive forms of traditional assessment and is advantageous to students, stakeholders, employers, funding bodies and others, as it helps to establish capabilities and competencies as well as academic knowledge.
Introduction
Global education requires educators to continuously explore various methods to improve learning outcomes. This study aims to examine the feasibility of implementing alternative assessments to enhance decision-making skills in educational organizations for students pursuing a course in Strategic Management in Education. This is a compulsory course for postgraduate students pursuing the Master of Science, specializing in Educational Management, at Universiti Utara Malaysia (UUM). The objective of this programme is to develop and enhance students' ability and skills in school management. Based on the Course Learning Outcomes (CLO), after taking this course, these in-service and prospective school administrators should be able to apply appropriate decision-making skills in the actual environment of their respective educational organizations. Accordingly, students' ability to master problem-solving skills in a real-world environment would ensure that other related CLOs are achievable as well.
Based on the observations and evaluations of researchers who have taught this course for three semesters, it is concluded that most students were unable to relate and apply the learned theories and concepts to their actual situation at schools, either as administrators or general educators. Thus, this simply illustrates that on a practical basis, decision-making skills are vital for these students' careers as they hold significant positions in their respective organizations. Therefore, decision-making skills play a crucial role in shaping these postgraduates and educational organizations.
Decision Making Skills
In all organizations, decisions must be made at each level of the organization. The decision-making process ranges from strategic management decisions for the entire organization to day-to-day operations. The decision-making process in an educational organization is similar to that in any other business organization, since both aim to achieve a certain set of goals. In addition, emotions play an important role in the decision-making process [1]. Nevertheless, effective decision-making is not just making choices; it also involves the process of identifying problems, listing and selecting alternatives, implementing alternatives and solutions, and evaluating which alternative is the best and has the highest chance of success. Organizations that have the vision of becoming successful and staying ahead of others are those that have a strong line of administrators with excellent critical thinking and decision-making skills [2]. Change would be difficult if organizations overlooked the importance of these thinkers (i.e. administrators) within their organizations.
Meanwhile, critical thinking is reflection using reasonable justifications and perceptions of our beliefs [3]. This serves as a guide for an individual to act effectively and aptly and to make appropriate decisions in a given situation [3]. Critical thinking is also portrayed as a practical skill, yet at the same time, it is a careful strategic structure and planning as well as extensive involvement in testing not only one's own thinking but the thinking of others. On the other hand, decision-making skill is a mental process that leads to some actions being made consciously or unconsciously [4]. It involves critical thinking since it is highly required during the decision-making process.
Alternative Assessment
Alternative assessment, also referred as authentic assessment, includes all sorts of assessments that are used to measure students' capability in undertaking complex tasks that are related to the learning outcomes. Some examples of alternative assessments are students' portfolios, project work, problem-based assessment, portfolio assessment and technology-based assessment. In the context of this study, performance-based assessment (PBA) is used as one of the assessment tools in alternative assessment. It is a form of assessment that deviates from the traditional paper-and-pencil assessment. As PBA is closely related to the context that mimics the workplace, it is chosen specifically in here where students taking this particular course are expected to complete the assigned tasks using their working experience and knowledge.
The alternative assessment introduced in this study uses a performance-based assessment (PBA) technique, an assessment that does not use pencil-paper-based traditional assessment method. Instead, PBA is a task assigned to students based on the actual circumstances or issues in a particular school environment or educational organization. The assigned tasks are expected to extract specific knowledge, skills, and characteristics among the students. In the context of this study, the students are expected to extract specific knowledge and skills and apply them in their tasks.
The assessment was designed in accordance with the Programme Educational Objectives (PEO), Programme Learning Outcomes (PLO) and Course Learning Outcomes (CLO) set for the course Strategic Management in Education, adhering to Malaysian MQA requirement.
Alternative assessment is introduced not only to assess students' knowledge but also to evaluate their ability to apply knowledge. Alternative assessment consists of multiple dimensions that look at students' learning process through the lens of behavioral changes within real contexts. This method of assessment is criterion-based, and one of the measurement tools used is rubric [5]- [8]. [9] stated that alternative assessment is an authentic indicator for measuring students' application of knowledge and skills. It also enhances students' skills through projects, portfolios, and activities. This method of assessment is also said to be more comprehensive since it is based on clearly defined tasks that the students need to perform in his/her own workplace (authentic) [9].
PBA has many benefits for students and lecturers [15], [16]. These include: 1) knowledge of decision-making skills that can be applied in real-life contexts and environments; 2) knowledge that is applied as students learn to think, analyze, innovate, and develop their talents for creative and critical thinking; 3) communication skills, especially during the presentation session, which improve as students learn to clearly explain to the assessor the justification for all actions taken during the decision-making process.
As such, an assessment such as PBA is more realistic, as it increases knowledge and understanding. Since skills and knowledge are simultaneously measured, the assessment will produce more accurate and comprehensive results. Moreover, it provides ample space and opportunities for lecturers to understand effective teaching methods better (in this case, in the form of reflection) [18]. Meanwhile, through alternative assessment, students are given the same opportunity to apply the knowledge gained to the best of their ability. This type of assessment could also motivate students to become autonomous learners.
Alternative assessment is not limited to content knowledge; it also assesses knowledge and skills related to capacity building [5]. In addition, alternative assessment also reflects behavior as a result of the learning process [10] and evaluates high-level thinking skills [11]. Alternative assessment is not only used in face-to-face learning but has reportedly been used in blended and online learning [9].
The purpose of assessment has also changed from traditionally assessing the level of knowledge that students could acquire to assessing students' ability to apply what they have learned [12]. In addition, alternative assessment also examines the development of students' values, attitudes, and behaviors both inside and outside the classroom through various assessment methods [13]. As a result of this transition of assessment to meet current needs, new approaches need to be considered for measuring the performance of 21st-century students. Among the principles to be considered in implementing the PBA are [15], [16]: 1) assignments given to students involve assessment of high-level thinking skills across three learning domains, namely cognitive, affective, and psychomotor; 2) authenticity: assessment is made based on actual issues or problems that occur in the educational organizations; 3) parallel (aligned): the assessment is made in accordance with the predefined Course Learning Outcome. A clear rubric is essential to guide the assessor/lecturer in assessing students' performance, and students should also be aware of the assessed criteria.
Methodology
In this study, a qualitative approach was chosen to examine the feasibility of implementing alternative assessments to enhance decision-making skills among postgraduate students. Focus group discussions and semi-structured interviews were used. During the focus group interview, all responses were treated with the following procedures: i) Using protocols: data recording protocols are designed and used by qualitative researchers to record information during interviews. ii) Transcribing data: transcribing is converting speech to text word for word. Transcribing is a common practice when conducting interviews because it enables analysis; transcription is the process of converting a tape recording into text data. iii) Analysis: in this study, the researchers used thematic hand analysis. Hand analysis of qualitative data means that the researchers read the data a few times and marked and categorized them according to the themes, which were i) the levels of decision making and ii) the understanding of SWOT analysis for problem solving. iv) Design of qualitative action research: action research was chosen as the research design since the researchers aimed at improving their own teaching techniques as well as their understanding of best educational practices. Action research has been shown to be suitable for studying issues in teaching [14]. This approach also enables the knowledge of teachers or lecturers to become part of the educational literature and, thus, contributes to the construction of educational structures.
Before the assessment could be conducted, it had to be validated by the Committee of Academic and Examination. The result analysis used percentages before and after the assessment.
The ten participants chosen for this research were those enrolled in the Strategic Management in Education course: two males and eight females. Their ages range from 26 to 40 years old, with a mean of 33, as illustrated in Table 1 below. This study was based on three phases.
Phase 1: Identification of the level of decision-making skills
In this first phase, students were asked to answer a set of questions (inventory) to determine their level of decision-making skills. The level was determined by the provided decision-making skills rubric, adapted from the Catalina Foothills School District 21st Century Learning Rubric - Skill: Critical and Creative Thinking [17].
In this inventory, participants were required to rate their level of decision-making skills based on the items given. Once they completed the questionnaire, they were informed of their levels of decision making. Next, focus group discussions were conducted in which the students' decision-making skills were explored, determined and analyzed, and the factors that lead them to make certain decisions were also established. Both the questionnaire and the focus group discussions were compared against the SWOT analysis and presentation rubric before the participants' levels of decision making and the factors leading them to make decisions could be compared.
Phase 2:
Intervention 1 (SWOT Analysis)
The second phase was the initial phase of the intervention, which was an introduction to decision-making skills. During this phase, students were introduced to decision-making skills through a series of SWOT analysis procedures. Firstly, students were briefed on what constitutes a SWOT analysis before being guided through the process using the strengths, weaknesses, opportunities, and threats components of the analysis. Next, students were asked to apply decision-making skills to solve actual problems and issues taking place in their respective organizations. The SWOT analysis matrix was used by the students as the basic guideline for the problem-solving process (a minimal illustration is sketched below), and this subsequently led them to make appropriate decisions. Then, an assessment was made of the students' decision-making skill level based on their own SWOT analysis matrices. During this phase, students were asked to prepare reflective reports on their decision-making and problem-solving skills so as to determine their level of decision-making and problem-solving skills.
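For illustration, a minimal sketch of a SWOT matrix as a data structure follows, together with a TOWS-style pairing of internal and external factors to generate decision options. The entries and the pairing step are illustrative assumptions; the paper does not publish the participants' matrices or prescribe TOWS pairing.

```python
# All entries are invented for illustration; the participants' matrices
# are not published in the paper.
swot = {
    "strengths":     ["experienced teaching staff"],
    "weaknesses":    ["limited ICT infrastructure"],
    "opportunities": ["district funding programme"],
    "threats":       ["falling student enrolment"],
}

def pair_strategies(matrix):
    """TOWS-style pairing: cross internal with external factors to get options."""
    options = []
    for internal in ("strengths", "weaknesses"):
        for external in ("opportunities", "threats"):
            for i in matrix[internal]:
                for e in matrix[external]:
                    options.append(f"consider '{i}' against '{e}'")
    return options
```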
Intervention 2 (Alternative Assessment)
The second intervention was conducted to enhance the impact of alternative assessment on decision-making skills among postgraduate students. This intervention was intended to improve students' understanding of decision-making skills and subsequently solve the identified problems in their particular organizations. Additionally, presentation sessions and group discussions on decision making and problem-solving were conducted. Apart from that, students' reflective writing was also assessed using a scoring rubric. Next, the SWOT analysis matrices generated by the students were assessed before further assessments were made of their level of decision-making skills.
Phase 3:
The third and final phase was the evaluation phase, in which the researchers evaluated the students' presentations to identify whether they had improved their decision-making and problem-solving skills. During this evaluation process, students were required to present one of their tasks, namely the findings on the problem or issue that they had successfully resolved. The overall scoring of the presentations was the final process of this action research.
Findings and Discussions
The result for each phase was analyzed, and the findings were presented as follows.
Phase 1: Participants' level of decision-making skills
At the beginning, students were asked to answer a set of questions to determine and measure their level of decision-making based on the Catalina rubric. The total rubric result was at the level of 67.5%-81.7%, which is equivalent to a moderate level. In addition, data from the semi-structured focus group discussion illustrated the diversity of responses regarding decision-making skills, as stated by R1, R2, R4 and R5 respectively: R1 - ask someone; R2 - depends on the situation; R4 - ask for opinion; R5 - I will change, but [will] consider the cause and effect. Meanwhile, R3 indicated that she has a clear stand on her decisions, as stated in the following statement.
R3-usually does not change
On the other hand, R6 demonstrated flexibility when making important decisions. This proved that the respondent has good judgment on the situation in her decision-making skills. This is displayed as follows:
R6-I am flexible based on the circumstances
The overall findings from phase one showed that students were at a moderate level and were still unclear and unsure about their decision-making skills in their lives and work environment.
Phase 2: Intervention 1 (Introduction to SWOT Analysis)
During this phase, the intervention was conducted on students through the following four situations: i) students were first introduced to decision-making skills; ii) students/lecturer determined the organizations that needed to be studied; iii) students set real issues to be studied and applied decision-making skills; and iv) students applied the SWOT analysis matrix.
Next, a matrix of the SWOT analysis was generated by the students and subsequently, this SWOT analysis was introduced as a tool for improving decision-making skills, where a rubric was built to measure self-reflection ability.
Observations were also conducted using an observation checklist to evaluate the process of completing the assigned tasks among students. The following were the findings of the participants' SWOT analyses: R1: SWOT sequences were not structured; R2: SWOT analysis not clear; R3: SWOT lacks accuracy; R4: lack of structure; R6: unsatisfactory.
Most students scored moderately in their reflections, ranging from 70-90%, illustrating that students had only general ideas of SWOT analysis.
The overall reflection scores are shown in Table 2.
Phase 2: Intervention 2 (SWOT Analysis)
After discovering that the overall students' rubric score was at the moderate level (79%), the researcher conducted alternative assessments by asking students to: i). choose an issue they wish to resolve in their organization, ii). use SWOT analysis to solve the identified issue or problem in their organization, iii). write reflective writing using the given scoring rubrics.
As a result of the alternative assessments conducted, the researchers found an increase from 79% to 85%. The overall rubric of students' reflective writing is presented in Table 3.
Table 3. Reflective Writing Scoring (Alternative Assessment) for intervention 2 (columns: Respondent, Scores, Percentage)
Six students managed to obtain 90%, as compared to 3 before the introduction of the alternative assessment. In addition, five students scored consistently in both phases. Only one student, R10, received a static mark of 70%. Student R4 showed the most significant improvement, with a 20% increase. Finally, four students had a similar increase of 10%.
Phase 3
In this last phase, the researchers evaluated the students' presentations to determine whether they had improved their decision-making and problem-solving skills. In this phase, students also presented their findings on the problem or issue that they had successfully resolved. The overall scoring of the presentations was the final process of this action research. Each score was given based on the student presentation rubric. The overall final scores showed excellent results, with five students scoring A and three students scoring A-. All in all, these results indicate a significant improvement in the Strategic Management in Education course, illustrating a significant increase in students' decision-making skills.
Conclusion
The research concluded that the students' decision-making and problem-solving skills improved significantly. This implies that alternative assessment is effective and could help students make better decisions when solving problems in their respective organizations. The two-stage intervention also proved effective in improving decision-making skills among postgraduate students. The use of alternative-assessment elements based on the actual situation in the students' organizations succeeded in enhancing their level of decision-making and problem-solving skills. The SWOT analysis matrix introduced during environmental analysis throughout the study also helped to enhance decision-making skills, showing that the action research approach was appropriate as a tool to enhance classroom teaching. By the same token, the results demonstrated that the interventions helped students integrate and implement the theories and concepts of this course in their actual situations at school, as administrators specifically or as educators in general. | 2020-11-19T09:12:44.633Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "ac95fdb0151f270e6a4d81f9015f37820d168bb9",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20201030/UJER71-19516199.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0ae4da660d54775a8988e810aed56ab86e96a9cc",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
42180632 | pes2o/s2orc | v3-fos-license | CASE REPORT Neuroendocrine Small Cell Uterine Cervix Cancer in Pregnancy: Long-Term Survival Following Combined Therapy
A 22-year-old woman carrying twin gestations at 30 weeks presented with preterm labor and a prolapsing cervical mass. Following Cesarean section birth, she was treated with multiagent chemotherapy followed by pelvic radiotherapy for a Stage IIA small cell cancer of the uterine cervix. She is without evidence of disease 5.5 years after diagnosis and is the first reported long-term survivor of a small cell cervical carcinoma diagnosed during pregnancy.
INTRODUCTION
Endocrine tumors account for only 1-2% of all uterine cervix cancers. A variety of descriptive terms have been used to account for the broad morphologic spectrum encountered, including neuroendocrine tumor, small cell carcinoma, oat cell carcinoma, carcinoid tumor, argyrophil cell carcinoma, and apudoma. In an effort to facilitate comparisons of the clinicopathologic characteristics and biologic behavior of these rare tumors, a workshop was sponsored by the College of American Pathologists and the National Cancer Institute to construct a uniform terminology system for the endocrine tumors of the uterine cervix [1]. The classification scheme developed is similar to that of the neuroendocrine tumors of the lung and includes the classical carcinoid tumor, the atypical carcinoid, the large cell neuroendocrine carcinoma, and the small (oat) cell carcinoma.
The small cell carcinomas are histologically indistinguishable from oat cell pulmonary cancer and portend a grave prognosis. More than half of patients with apparent early-stage disease have nodal involvement, and most patients treated by radical surgery and radiotherapy succumb to widespread metastases [2]. Because of its poor prognosis and propensity for hematogenous spread, systemic chemotherapy is a mainstay of treatment of small cell carcinoma of the cervix, with many patients experiencing prolonged disease-free survival. We report the first known survivor of a small cell cervical carcinoma diagnosed during pregnancy.
CASE HISTORY
In July 1992, a 22-year-old Caucasian woman, gravida 5, para 3, presented with preterm labor and a prolapsing cervical mass. She was carrying a twin gestation at 30 2/7 weeks. Her most recent Papanicolaou test had been performed 6 months previously at an outside facility and reportedly was within normal limits. A pelvic examination revealed an 8-cm, pedunculated, tan, friable mass arising from the anterior cervix, assumed to be a myoma. She was admitted and underwent intravenous tocolysis with magnesium sulfate. A course of intramuscular corticosteroids was administered. Ultrasonography demonstrated concordant twins with no anomalies, both in vertex presentation. The patient was kept at bedrest and the cervical mass was reduced. Several pessaries were fitted, but the mass continued to prolapse.
On the ninth hospital day, the patient was taken to the operating room for an examination under anesthesia, which revealed an 8-cm friable, necrotic cervical mass extending into the upper vaginal wall (Fig. 1). An acute hemorrhage prompted an emergency Cesarean section with delivery of viable twin female infants with birth weights of 1,625 and 1,800 g. The lower uterine segment was palpably normal and there was no gross disease in the abdomen or pelvis. A portion of the cervical mass was sent for pathologic analysis. Hemostatic sutures and a vaginal pack were placed to control bleeding.
The specimen was composed entirely of malignant cells without cystic spaces, papillary structures, or normal cervical mucosa. Small round cells with hyperchromatic nuclei and scant cytoplasm, characteristic of a small cell carcinoma of the uterine cervix, were observed (Fig. 2). Neurosecretory granules were present on electron microscopy. Immunohistochemistry studies demonstrated tumor cells which stained positively with antibodies to cytokeratin and focally with antibodies to chromogranin. The tumor cells failed to stain with antibodies to synaptophysin. The placenta was without evidence of metastatic disease. Pelvic washings retrieved at the time of Cesarean section did not contain malignant cells.
Examination 3 weeks postpartum revealed an 8 × 10 cm friable fleshy cervical mass extending to the left vaginal fornix. There was no parametrial thickening or nodularity. A metastatic workup included a normal chest roentgenogram, bone scan, and cranial computed tomography. Abdominal and pelvic computed tomography showed no retroperitoneal or liver involvement.
Consolidation pelvic radiotherapy was administered to 45 Gy followed by two tandem and ovoid implants, bringing the point A dose to 90.2 Gy and the point B dose to 64.1 Gy. She received four additional courses of cisplatin (80 mg/m²) and etoposide (400 mg/m²) administered from April to July 1993, during which time she had no evidence of disease.
All pelvic examinations, Papanicolaou tests, endocervical curettage specimens, and chest roentgenograms have been within normal limits since completion of therapy. She is alive, 5 years after diagnosis, with no evidence of recurrent disease or significant complications of therapy.
DISCUSSION
Sheets and colleagues reported 14 patients with Stage IB or IIA small cell cervical cancer treated either by radical surgery alone or by surgery and postoperative radiotherapy at the University of California, Irvine [2]. Twelve patients were dead of disease within 3 years of diagnosis and the 2 survivors had recurred. Abeler and co-workers reported a 5-year survival rate of 14% among 26 patients with small cell cervical cancer treated at the Norwegian Radium Hospital [3]. Fifteen of these patients had Stage I disease, and of these, 11 had died with disease. These two studies suggest that traditional nonsystemic modes of cervical cancer treatment were not efficacious in the management of small cell tumors.
In light of similar histologic appearances and clinical behavior, Pazdur recommended that the bronchogenic neuroendocrine tumors serve as a model to guide the therapy of endocrine cancers arising from the cervix [4]. He treated three patients with advanced disease with chemotherapeutic agents known to be active in the management of pulmonary oat cell carcinoma. One patient experienced tumor regression in the pelvis and supraclavicular nodal regions of 11 months' duration, prior to developing brain metastases. Utilizing a bronchogenic neuroendocrine protocol, Sutton and colleagues documented a partial response of nodal metastases to VAC chemotherapy [5].
The first sustained remission following administration of VAC chemotherapy was reported by Sheets at the University of California, Irvine [2]. The patient underwent radical hysterectomy, bilateral pelvic and para-aortic lymphadenectomies, and postoperative whole pelvis radiotherapy for a 9-cm diameter exophytic small cell cervical cancer which had metastasized to numerous pelvic and para-aortic lymph nodes. The patient received four courses of VAC chemotherapy and survived. Further reports describe patients with small cell cervical cancer who have experienced complete responses with systemic therapy lasting 2 to 4 years [6]. Finally, at 21 to 60 months follow-up, Morris and coworkers noted disease-free survival in three patients with Stage IB disease who had received cisplatin, doxorubicin, and etoposide [7].
The model for the induction phase of treatment for our patient was based on protocols used in the treatment of bronchogenic oat cell carcinoma of the lung. Turrisi and colleagues demonstrated a 93% complete response rate in patients with small cell lung cancer treated with chemoirradiation [8]. Alternating etoposide-cisplatin with VAC was tested in several randomized trials which demonstrated efficacy of this regimen in the treatment of small cell lung cancers [9]. The complete response to chemotherapy and radiotherapy experienced by our patient has subsequently translated into long-term survival.
We were able to find six other case reports of small cell carcinoma of the cervix complicating pregnancy [10-15]. In their clinicopathologic study of 26 patients with small cell cervical cancer, Abeler and colleagues cite a pregnant patient with Stage IB disease who was alive 54 months following definitive surgical treatment and adjunctive chemotherapy [3]. A personal communication from Professor Abeler revealed that this patient did not have a neuroendocrine small cell cervical carcinoma. Rather, the patient had the intermediate cell variety of small cell cervical cancer. Thus, we have not included this patient in our review. Table 2 summarizes the outcomes of the six cases and the current case.
(1) Pregnancy outcome. Cesarean delivery was carried out for the five women who were at least 25 weeks pregnant at the time of diagnosis. Five had favorable neonatal outcomes and one neonate who delivered at 26 weeks gestation died of prematurity. The one case diagnosed during the first trimester was treated before fetal maturity could be achieved, resulting in a spontaneous abortion.
(3) Survival. With the exception of the current report, all patients have died of widespread metastatic disease within 3 years of diagnosis of Stage IB (n = 4) or Stage IIA (n = 2) disease. Four patients survived only 2, 6, and 9 months. These four had been treated initially by radical surgery and either postoperative radiotherapy [14] or adjunctive chemotherapy [10,12,15]. The other two patients died 24 and 32 months after diagnosis. One had been treated with a single course of cisplatin preoperatively, followed by radical surgery and postoperative radiotherapy [13], while the other received radiotherapy for a Stage IIA lesion [11].
(4) Systemic therapy. Turner and colleagues describe treatment of a Stage IB cervical cancer with a chemotherapy regimen similar to ours [10]. Their patient began systemic treatment 10 days after having undergone radical hysterectomy and pelvic lymph node dissection. Metastatic disease had been found in one hypogastric node. The patient experienced an upper abdominal recurrence and died 9 months after diagnosis. Although a modest treatment delay to allow fetal maturity for some patients with Stage I squamous cell carcinomas of the cervix has not been found to impact negatively on maternal survival [16], such a delay is contraindicated in pregnancies complicated by neuroendocrine cervical cancer. Given the aggressive nature of this tumor type, we advise that therapy be instituted immediately following diagnosis. A viable fetus should be delivered by classical Cesarean section to avoid the lower uterine segment. Induction of systemic therapy followed by radiotherapy should begin in the immediate postpartum period. Patients who insist on treatment delays in order to permit gestational advancement in early pregnancy may be candidates for antepartum systemic therapy and fetal surveillance. Once fetal viability is attained, delivery should be accomplished by Cesarean section with surgical assessment of the para-aortic lymph nodes and debulking of enlarged pelvic lymph nodes. Postoperatively, continued systemic therapy followed by radiotherapy to ports which encompass the extent of disease found on exam and lymph node assessment is required. | 2018-04-03T05:16:36.127Z | 1998-10-01T00:00:00.000 | {
"year": 1998,
"sha1": "bee56307163f7a0f5aa6f953034303bbf294d447",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt13n5s0b9/qt13n5s0b9.pdf?t=ow47mi",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a50351c7dbc528f5a3ed4e01198839358b69dc10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119294880 | pes2o/s2orc | v3-fos-license | Effect of spatial inhomogeneity on the mapping between strongly interacting fermions and weakly interacting spins
A combined analytical and numerical study is performed of the mapping between strongly interacting fermions and weakly interacting spins, in the framework of the Hubbard, t-J and Heisenberg models. While for spatially homogeneous models in the thermodynamic limit the mapping is thoroughly understood, we here focus on aspects that become relevant in spatially inhomogeneous situations, such as the effect of boundaries, impurities, superlattices and interfaces. We consider parameter regimes that are relevant for traditional applications of these models, such as electrons in cuprates and manganites, and for more recent applications to atoms in optical lattices. The rate of the mapping as a function of the interaction strength is determined from the Bethe-Ansatz for infinite systems and from numerical diagonalization for finite systems. We show analytically that if translational symmetry is broken through the presence of impurities, the mapping persists and is, in a certain sense, as local as possible, provided the spin-spin interaction between two sites of the Heisenberg model is calculated from the harmonic mean of the onsite Coulomb interaction on adjacent sites of the Hubbard model. Numerical calculations corroborate these findings also in interfaces and superlattices, where analytical calculations are more complicated.
I. INTRODUCTION
Strongly interacting fermions are part of some of today's most studied physical systems. In cuprate and manganite systems, for example, strongly correlated electrons are held responsible for high-temperature superconductivity and colossal magnetoresistance, respectively. [1][2][3] Systems of strongly interacting fermionic atoms can be realized in optical lattices, and are currently under intense investigation due to the possibility of using them as quantum simulators for understanding phenomena of condensed matter physics. [4][5][6] Since the seminal works of Wigner on the low-density electron crystal 7 and of Mott on the metal-insulator transition 8 it is known that strong repulsive particle-particle interactions suppress itineracy and favor localization. 9 In the localized state, the repulsive interaction is minimized, and charge degrees of freedom are frozen out. The dominating interactions in this state are magnetic.
Mathematically, strongly interacting fermions are frequently described by the Hubbard model, which in one dimension is defined by the Hamiltonian

$$\hat H = -t\sum_{i,\sigma}\left(\hat c^\dagger_{i\sigma}\hat c_{i+1,\sigma}+\mathrm{h.c.}\right) + U\sum_i \hat n_{i\uparrow}\hat n_{i\downarrow}, \qquad (1)$$

where ĉ†_iσ and ĉ_iσ are fermionic creation and destruction operators, n̂_iσ = ĉ†_iσ ĉ_iσ is the particle-density operator, U the onsite interaction and t the hopping parameter.
For sufficiently strong interactions, the Hubbard model can be expanded in powers of t/U. (Below we quantify what interactions can be considered 'sufficiently strong'.) The leading term of this expansion is the t-J model,

$$\hat H_{tJ} = -t\sum_{i,\sigma}\hat P\left(\hat c^\dagger_{i\sigma}\hat c_{i+1,\sigma}+\mathrm{h.c.}\right)\hat P + J\sum_i\left(\hat{\vec S}_i\cdot\hat{\vec S}_{i+1}-\frac{\hat n_i\hat n_{i+1}}{4}\right), \qquad (2)$$

where Ŝ_i is the spin one-half vector operator at each site and P̂ projects onto the subspace with no doubly occupied sites. This model is frequently taken to be the starting point in investigations of doped cuprates. For a half-filled system, in which the number of fermions N equals the number of lattice sites L, the average density n = N/L is unity. Since there are no empty sites, hopping is suppressed, and the t-J model reduces to the antiferromagnetic Heisenberg model

$$\hat H_{\mathrm{Heis}} = J\sum_i\left(\hat{\vec S}_i\cdot\hat{\vec S}_{i+1}-\frac14\right), \qquad (3)$$

where J = 4t²/U and charge fluctuations are completely frozen out. The original system of strongly interacting itinerant fermions (U/t ≫ 1) has thus been mapped on a system of localized spins with weak antiferromagnetic interactions (0 ≤ J/t ≪ 1). The mathematics and the physics of this mapping are very well understood and discussed in the textbook literature. 10,11 The mapping of strongly interacting itinerant fermions on weakly interacting localized spins is a standard concept of condensed-matter physics, routinely used in the interpretation of experiments on strongly correlated solids. Recently, however, three important classes of systems have been discovered or created that call for a reconsideration and more detailed investigation of this mapping.
It is well known that many strongly correlated systems are characterized by nanoscale spatial inhomogeneity. Such inhomogeneity can take the form of irregular spatial variations of system properties, such as observed by scanning-tunneling microscope techniques in many cuprates and similar materials, 2,12-16 or the form of regular spatial variations such as in naturally occurring or man-made superlattice structures. [17][18][19][20][21] In the presence of either type of inhomogeneity, the parameters characterizing the model Hamiltonian become site dependent. In the simplest case, with which we are mostly concerned here, the above homogeneous Hubbard model is replaced by an inhomogeneous model of the form

$$\hat H = -t\sum_{i,\sigma}\left(\hat c^\dagger_{i\sigma}\hat c_{i+1,\sigma}+\mathrm{h.c.}\right) + \sum_i U_i\,\hat n_{i\uparrow}\hat n_{i\downarrow}, \qquad (4)$$

in which the onsite interaction U_i varies from site to site. A second class of systems we are concerned with here are nanoscale devices. In the modeling of such devices, inhomogeneities in the model parameters occur simply because on the nanoscale the effect of the surface can no longer be neglected, and also because a typical device combines more than one material, with the resulting interface automatically implying the existence of spatial variations in the system parameters.
Finally, in still another line of research, ultracold atom gases have been trapped optically and arranged in optical lattices. 4,5 In optical traps the system parameters can be controlled and varied in ways not possible in solid-state situations. In particular, the onsite interaction U_i can attain values U/t ≈ 100 or larger. Such values are far beyond what is considered 'strongly correlated' in solid-state physics.
Motivated by all these systems, we present here a combined analytical and numerical study of the mapping from the Hubbard model to the Heisenberg model in the presence of spatial inhomogeneity. In Sec. II we investigate the rate of the mapping, as measured by the difference in ground-state energies of the Hamiltonians. We allow the interaction strength to go beyond its typical solid-state values and to enter the ultrastrong regime attainable for cold atoms.
In Sec. III we turn to our main subject, the inhomogeneous Hubbard model of Eq. (4). In Sec. III A we investigate analytically the case of a single impurity, described as one site with a value of U differing from all others, and show that the Hubbard-to-Heisenberg mapping is preserved essentially in its homogeneous form, provided the effective J is calculated from the harmonic mean of the values of U on the sites connected by J. We illustrate this finding numerically, by contrasting, for an impurity system, results obtained from the harmonic mean with results obtained from the arithmetic, geometric and quadratic means. In Sec. III B we show that the harmonic mean allows to extend the Hubbard-to-Heisenberg mapping to systems with more complicated types of spatial inhomogeneity, such as superlattices, disordered systems and interfaces between different materials. Section IV contains our conclusions.
II. RATE OF THE MAPPING FOR TRANSLATIONALLY INVARIANT SYSTEMS
In a first step, to provide the background for the later investigations, we consider spatially homogeneous infinite Hubbard and Heisenberg chains, and investigate the rate of the approach of the ground-state energies of both models as a function of U . This allows us to quantify the rate at which charge fluctuations are frozen out.
The per-site ground-state (GS) energy of the Heisenberg chain (3) in the thermodynamic limit is

$$e_{\mathrm{Heis}}(J) = -J\ln 2. \qquad (5)$$

The per-site GS energy of the infinite half-filled (n = 1) Hubbard chain at U/t → ∞ is

$$e_{\mathrm{Hubb}}(n=1,\,U\to\infty) = -\frac{4t^2\ln 2}{U}. \qquad (6)$$

Both expressions become identical for J = 4t²/U. In order to quantitatively investigate the mapping at finite U/t, we calculate e_Hubb(n = 1, U) by numerically solving the Bethe-Ansatz integral equations 22,23 as a function of U and compare the result to the energy of the Heisenberg model at the corresponding value of J, i.e. e_Heis(J = 4t²/U). The result is displayed in Fig. 1 (Ref. 24).
In Table I we show the relative percentage deviation between the GS energies, for various representative values of U. This comparison between the two models becomes trivial once the Bethe-Ansatz solution is available, but it already leads to a first, somewhat unexpected conclusion. Frequently, the Heisenberg model is taken to be the starting point for a description of undoped antiferromagnetic insulating parent compounds of high-temperature superconductors. The effect of doping is accounted for by going from the Heisenberg to the t-J model, arguing that the latter should be a reasonable approximation to the Hubbard model for the involved large values of U. What the comparison in Fig. 1 and Table I shows is that for values of U that are representative of cuprate materials the t-J or Heisenberg models provide at best a semiquantitative approximation to the Hubbard model. At U = 6 the difference between the two ground-state energies is approximately 10%. Charge fluctuations are thus not yet frozen out for such U, even at half filling. The rather large deviation observed shows that the mapping of strongly interacting fermions onto weakly interacting spins is not quantitatively reliable for, e.g., cuprate systems at realistic values of U. There is no doubt, of course, that the t-J model captures the correct physics of the large-U Hubbard model; the above questioning only refers to the accuracy to which one needs to obtain a solution of the former, given that in the parameter regime typical of strongly-correlated solids it is itself only a moderately accurate representation of the latter.
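This comparison is straightforward to reproduce numerically. The sketch below is our illustration (not the authors' code): it evaluates the Lieb-Wu closed-form integral for the per-site GS energy of the half-filled Hubbard chain and compares it with e_Heis(J = 4t²/U) = -J ln 2, i.e., it assumes the -1/4 bond convention of Eq. (3); all function names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, expit

def e_hubbard(U, t=1.0):
    # Per-site GS energy of the infinite half-filled Hubbard chain,
    # from the Lieb-Wu Bethe-Ansatz closed-form integral.
    def integrand(w):
        bessel = j0(w) * j1(w) / w if w > 0 else 0.5  # limit for w -> 0
        return bessel * expit(-0.5 * w * U / t)       # 1/(1 + exp(wU/2t))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return -4.0 * t * val

def e_heisenberg(J):
    # Per-site GS energy of the Heisenberg chain (3), with the -1/4 shift.
    return -J * np.log(2.0)

for U in [4.0, 6.0, 10.0, 20.0, 100.0]:
    eh, es = e_hubbard(U), e_heisenberg(4.0 / U)   # t = 1
    print(f"U = {U:6.1f}: e_Hubb = {eh:8.5f}, e_Heis = {es:8.5f}, "
          f"deviation = {100 * abs(es - eh) / abs(eh):5.2f} %")
```

As U grows the deviation shrinks, reproducing the trend discussed above: around U ≈ 6 the two energies still differ by roughly ten percent, while only for much larger U does the mapping become quantitatively reliable.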
We note that this analysis is based on GS energies. An alternative comparison between the two Hamiltonians would proceed in terms of the overlap of their wave functions, instead of the difference of their energies. Our main interest in this initial investigation is to determine for which values of U the mapping breaks down, and for this it is enough to find one quantity that is not properly reproduced. Thus, if we use our analysis to indicate when the mapping does not hold, we are on the safe side by using energies. Still another, and indeed more fundamental, mode of analysis proceeds by directly comparing the Hamiltonians. We use this procedure in Sec. III A, where our interest is not only in when the mapping breaks down, but also in how it can be restored.
III. MAPPING IN THE PRESENCE OF SPATIAL INHOMOGENEITY
We now turn to systems where the inhomogeneity occurs not only at the surface, due to finite size, but also in the bulk. A typical case is that of a localized impurity or defect, modeled by one site with onsite interaction differing from that of all the others. We take this simple system as representative of Hubbard models with broken translational symmetry, and focus most of our analysis on it. The extension of our conclusions to interfaces and superlattices is discussed in Sec. III B.
Intuitively, one would expect that a localized perturbation of the homogeneous Hubbard model should only produce a similarly localized perturbation of the homogeneous Heisenberg model. However, the relation between both models involves a projection on the subspace with no double occupation, together with an expansion in inverse powers of U , 10,11 and it is not clear from the outset to what extent these operations preserve the above naive expectation of locality.
In fact, there is one sense in which the mapping, if it continues to exist, cannot be local: U is defined on one site, while J connects two adjacent sites. While this difference is almost irrelevant in the homogeneous case where all sites are equivalent and translational symmetry rules, it becomes important in the inhomogeneous case, where any change in the onsite U on the Hubbard model must affect the corresponding intersite J for at least two sites of the Heisenberg model. This is illustrated in Fig. 2.
The questions to address are thus (i) whether the mapping still exists in the absence of translational symmetry, (ii) how to calculate the Heisenberg J from the Hubbard U in inhomogeneous systems, and (iii) whether the mapping is as local as possible, i.e., involves only sites adjacent to the impurity site, or requires a higher degree of nonlocality. In the next subsections we address these questions analytically and numerically.
A. A single impurity
We start with the one-dimensional Hubbard model in the presence of a single impurity at site k with onsite interaction U′ differing from the background value U on all other sites i ≠ k,

$$\hat H = -t\sum_{i,\sigma}\left(\hat c^\dagger_{i\sigma}\hat c_{i+1,\sigma}+\mathrm{h.c.}\right) + U\sum_{i\neq k}\hat n_{i\uparrow}\hat n_{i\downarrow} + U'\,\hat n_{k\uparrow}\hat n_{k\downarrow}. \qquad (8)$$

The standard proof of the mapping from the Hubbard to the t-J model 10,11 can be repeated for this Hamiltonian, under the assumption that both U and U′ are much larger than t, and leads to an inhomogeneous t-J model in which the bonds touching site k acquire modified couplings. Now we take the average density n = N/L = 1. A priori this average can be obtained from many different distributions n_i. However, since we are already in the limit U, U′ ≫ t, the total interaction energy on the background and the impurity sites is minimized by the particular distribution n_i = 1 for all i. Deviations from this are due to hopping processes that become increasingly suppressed as U and U′ grow. Thus, at n = 1, the Hamiltonian reduces to a pure spin model: charge fluctuations are frozen out, and only the two bonds connecting the impurity site k to its neighbors k − 1 and k + 1 differ from the background. This can be written as

$$\hat H = J\sum_{i\neq k-1,k}\left(\hat{\vec S}_i\cdot\hat{\vec S}_{i+1}-\tfrac14\right) + J'\left(\hat{\vec S}_{k-1}\cdot\hat{\vec S}_{k}+\hat{\vec S}_{k}\cdot\hat{\vec S}_{k+1}-\tfrac12\right), \qquad (13)$$

which has the form of a spatially inhomogeneous Heisenberg model with background interaction J = 4t²/U and two bond defects J′ = 4t²/Ū_H, where Ū_H is the harmonic mean of U and U′,

$$\bar U_H = \frac{2UU'}{U+U'}. \qquad (14)$$

This derivation tells us that the Hubbard model with a single impurity can indeed be mapped onto a Heisenberg model with two bond defects, provided the impurity site and the background sites both have repulsive interactions that are much larger than t, and that the J connecting the impurity site with its neighbors to the left and to the right is calculated from the harmonic mean of the two onsite interactions. The mapping then still exists and is seen to be as local as possible, in the sense explained above.
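As a quick numerical sanity check of this harmonic-mean result (our illustration, not part of the original study; the parameter values below are arbitrary), one can exact-diagonalize a half-filled two-site Hubbard dimer with unequal onsite interactions and compare the exact singlet-triplet splitting with J′ = 4t²/Ū_H:

```python
import numpy as np

def exchange_from_dimer(U1, U2, t=1.0):
    # Half-filled two-site Hubbard model in the basis
    # {|up,dn>, |dn,up>, |updn,0>, |0,updn>}; the relative signs of the
    # hopping elements follow from the fermionic anticommutation relations.
    H = np.array([[0.0, 0.0,  -t,  -t],
                  [0.0, 0.0,   t,   t],
                  [ -t,   t,  U1, 0.0],
                  [ -t,   t, 0.0,  U2]])
    E_singlet = np.linalg.eigvalsh(H)[0]  # lowest state is the spin singlet
    return -E_singlet                     # the triplet sits at E = 0

U1, U2, t = 20.0, 30.0, 1.0               # arbitrary illustrative values
J_exact    = exchange_from_dimer(U1, U2, t)
J_harmonic = 4 * t**2 * (U1 + U2) / (2 * U1 * U2)   # = 4 t^2 / U_H
print(f"singlet-triplet splitting: {J_exact:.4f}")
print(f"4 t^2 / harmonic mean:     {J_harmonic:.4f}")
# The two numbers agree up to corrections of order t^4/U^3.
```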
In order to investigate the rate of the mapping in inhomogeneous systems, we now perform a numerical investigation of both models. For illustration we also include in these calculations the quadratic, arithmetic and geometric means,

$$\bar U_Q = \sqrt{\tfrac12\left(U^2+U'^2\right)},\qquad \bar U_A = \tfrac12\left(U+U'\right),\qquad \bar U_G = \sqrt{UU'}. \qquad (15)$$

Both the inhomogeneous Hubbard model with n = 1 and one U′ ≠ U and the inhomogeneous Heisenberg model with two J′ ≠ J are diagonalized numerically, and compared by means of the deviation between their GS energies. We use the same method of analysis as in Sec. II, this time, however, applied to impurity systems.
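The kind of numerical comparison described here can be reproduced with a minimal exact-diagonalization sketch (ours, not the authors' code; chain length and interaction values are illustrative). It assumes the bond convention Ĥ_Heis = Σ_b J_b(Ŝ_b·Ŝ_{b+1} − 1/4) of Eq. (13), and uses the fact that on an open chain with a Jordan-Wigner ordering, nearest-neighbor hops carry no extra fermionic sign:

```python
import numpy as np
from itertools import combinations

def hop_matrix(L, N, t=1.0):
    # Nearest-neighbour hopping for one spin species on an open chain.
    configs = [frozenset(c) for c in combinations(range(L), N)]
    index = {c: a for a, c in enumerate(configs)}
    T = np.zeros((len(configs),) * 2)
    for c in configs:
        for i in range(L - 1):
            if i in c and (i + 1) not in c:
                c2 = (c - {i}) | {i + 1}
                T[index[c2], index[c]] = T[index[c], index[c2]] = -t
    return T, configs

def hubbard_gs(L, Uvec, t=1.0):
    # Ground-state energy of the half-filled inhomogeneous Hubbard chain (4).
    T, configs = hop_matrix(L, L // 2, t)
    occ = np.zeros((len(configs), L))
    for a, c in enumerate(configs):
        occ[a, list(c)] = 1.0
    dim = len(configs)
    H = np.kron(T, np.eye(dim)) + np.kron(np.eye(dim), T)
    D = (occ * np.asarray(Uvec)) @ occ.T   # D[a,b] = sum_i U_i n_i^up n_i^dn
    H += np.diag(D.reshape(-1))
    return np.linalg.eigvalsh(H)[0]

def heisenberg_gs(Jvec):
    # Open spin-1/2 chain, H = sum_b J_b (S_b . S_{b+1} - 1/4).
    L = len(Jvec) + 1
    mats = [np.array(m, dtype=complex) / 2 for m in
            ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
    op = lambda m, s: np.kron(np.kron(np.eye(2**s), m), np.eye(2**(L - s - 1)))
    H = np.zeros((2**L, 2**L), dtype=complex)
    for b, J in enumerate(Jvec):
        H += J * (sum(op(m, b) @ op(m, b + 1) for m in mats)
                  - 0.25 * np.eye(2**L))
    return np.linalg.eigvalsh(H)[0]

L, t, U, Uimp, k = 6, 1.0, 20.0, 30.0, 3   # single impurity at site k
Uvec = np.full(L, U); Uvec[k] = Uimp
e_hub = hubbard_gs(L, Uvec, t)
means = {"harmonic": 2*U*Uimp/(U + Uimp), "geometric": np.sqrt(U*Uimp),
         "arithmetic": (U + Uimp)/2, "quadratic": np.sqrt((U**2 + Uimp**2)/2)}
for name, Ubar in means.items():
    Jvec = np.full(L - 1, 4*t**2/U)
    Jvec[k-1] = Jvec[k] = 4*t**2/Ubar      # the two bonds touching site k
    D = 100 * abs(heisenberg_gs(Jvec) - e_hub) / abs(e_hub)
    print(f"{name:10s} mean: deviation D = {D:.3f} %")
```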
Our key result is contained in Fig. 3, which displays the relative percentage deviation between the inhomogeneous Hubbard and the inhomogeneous Heisenberg models with J′ = J(Ū) for each of the four averages. The solid line represents the corresponding deviation obtained for the impurity-free Hubbard and Heisenberg models, where U and J = 4t²/U are the same across the system. Figure 3(a) demonstrates clearly that if the background U is so small that the mapping is not quantitatively reliable even in the homogeneous system (this includes the values found in cuprates), then it is also not reliable in inhomogeneous systems, regardless of the choice made for relating the bond defect to the impurity interaction. On the other hand, Fig. 3(b) makes a different statement: once the background U is large enough to permit the basic Hubbard-to-Heisenberg mapping to function, all four averages lead to values of J′ = 4t²/Ū for which the deviation is either essentially equal to (H) or somewhat different from (A, G, Q) that obtained in the homogeneous system. This is unequivocal numerical evidence that the mapping of strongly interacting fermions on weakly interacting spins survives in the presence of impurities and defects. In order to probe the locality of the mapping we have also performed numerical experiments with averages over more than two neighboring sites, calculating J′(Ū) from, e.g., a weighted average of the interactions at the sites connected by J′ and their nearest-neighbor sites. No improvement (and frequently even worse results) with respect to the simple two-site averages was obtained, indicating that the mapping is indeed local. At first sight more surprising is that the alternative averaging procedures produce even smaller deviations for some values of U, Fig. 3(b). However, for still larger values of U and U′ (where the mapping should as a matter of principle get better and better) all these alternative averages produce deviations that continue to grow and yield D > 0, while the harmonic mean correctly approaches the limit D = 0 as U, U′ → ∞. This shows that only the harmonic mean has a chance to correctly describe the fermion-to-spin mapping in inhomogeneous systems, while the lower deviations of the other averages for some parameter regimes only occur because the curves are monotonic, so that there is always a range of values for which they are close to zero.
We note that in Fig. 3 the ratio of U and U′ was held fixed, so as to guarantee that for all values of U, U′ was always substantially different from U (U′ = 3U/2). In Fig. 4 we present the complementary analysis, in which the background U is held fixed and the impurity interaction is varied from U′ < U to U′ > U. For this comparison we consistently adopted the harmonic mean. A first feature that is immediately apparent is that a single site with U′ < U is enough to substantially deteriorate the mapping. By contrast, a single site with U′ > U leads only to a slight reduction of the deviation between GS energies, much less than the deterioration observed for U′ < U. This behavior arises from the hopping terms. Both at U′ > U (more repulsive impurity site) and at U′ < U (less repulsive site) the on-site density at the impurity site slightly deviates from that at the background sites, as long as U and U′ are both finite. In the latter case, however, hopping processes involving the impurity site increase as U′ is reduced, and the Hubbard-to-Heisenberg mapping becomes correspondingly worse, while in the former case hopping continues to be strongly suppressed. The behavior displayed by Fig. 4 is thus consistent with what one would expect on the basis of the derivation leading from Eq. (8) to Eq. (13).
Independently of, but in agreement with, our previous analytical derivation, we thus find that the harmonic mean solves the problem exactly for a single impurity and for sufficiently large interactions, while the other possible averages do not. However, one question concerning this result remains and is addressed in the next section: is the harmonic mean able to recover the mapping for more complex inhomogeneities?
B. More complex inhomogeneities
In view of our initial discussion of naturally occurring or man-made inhomogeneity in strongly correlated systems, it becomes important to extend our analysis to more complex inhomogeneities than boundaries or single impurities. We here briefly describe our findings on three of these: interfaces, superlattices and disordered systems.
Interfaces and superlattices can be described in the Hubbard and Heisenberg models as shown schematically in Fig. 5. While the description of superlattices by means of periodic spatial variations of U is the standard choice, 17 which we here also adopt, it has been pointed out that a periodic modulation of local electric potentials can bring about a much larger change in the system properties. 18,19,25 Here, however, we are interested in the Hubbard-to-Heisenberg transition, which is driven by the interaction and not by local potentials, and therefore we follow the usual prescription to ignore possible local electric fields in the superlattice structure.
We note that in the superlattice and the interface geometry we now have three different spin-spin interactions: one, J, calculated from U; another, J′, from U′; and the last, J*, from U and U′, as indicated in Fig. 5. Figure 6 shows that for large U and U′ the GS energies of the Heisenberg model, when calculated from the harmonic mean, become identical to those of the corresponding Hubbard model, for all investigated geometries. In this sense, the mapping continues to work and to be as local as possible. Note that this would not be true for the arithmetic, geometric and quadratic means, whose deviations increase for large U and U′.
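In code, the harmonic-mean prescription turns any onsite-interaction profile directly into a bond-coupling profile, one value per bond; the short sketch below (our illustration, with arbitrary parameter values) produces J, J′ and J* automatically for interface and superlattice profiles:

```python
import numpy as np

def bond_couplings(Uvec, t=1.0):
    # J on each bond from the harmonic mean of the onsite interactions of
    # the two sites it connects: J_b = 4 t^2 / U_H(U_b, U_{b+1}).
    U1, U2 = np.asarray(Uvec[:-1], float), np.asarray(Uvec[1:], float)
    return 4 * t**2 * (U1 + U2) / (2 * U1 * U2)

U, Up = 20.0, 30.0
interface    = [U]*4 + [Up]*4            # U U U U U' U' U' U'
superlattice = ([U]*2 + [Up]*2) * 2      # U U U' U' U U U' U'
print(bond_couplings(interface))         # J in the bulk, one J* at the junction, then J'
print(bond_couplings(superlattice))      # more J* bonds than in the interface
```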
The fact that the Hubbard-Heisenberg deviation for the superlattice is larger than that for the interface can be understood on purely geometric grounds, as a consequence of the locality of the mapping and the nature of the harmonic mean: by comparing the distribution of J values, Fig. 5, we see that in going from the interface to the superlattice the number of interactions J and J′ is reduced by the same amount, while that of interactions J* increases. Since J′ < J* < J, a reduction of the number of bonds with J′ worsens the mapping while a reduction of the number of bonds with J improves it. To see what the net effect is we must take into account the interaction J*, which replaces J and J′. This interaction is calculated from Ū, the harmonic mean of U and U′. The harmonic mean of any two positive numbers is less than or equal to their arithmetic mean, so that Ū is closer to U than to U′ and J* is closer to J than to J′. The substitution of an equal number of J and J′ by J* thus effectively increases the number of 'bad' bonds, and therefore deteriorates the quality of the mapping. This is what the data show: the deviation for the superlattice is larger than that for the interface, if all other parameters are chosen the same.
Fig. 6 caption: Relative percentage deviation between the GS energies of the Hubbard and the Heisenberg model for a superlattice structure, an interface, a single-impurity system and a system that is spatially homogeneous (except for the surface). In the superlattice and the interface system the number of sites with U and U′ is the same, respectively, the only difference being in their geometric distribution. System parameters: L = 8, U′ = 3U/2, J′ = 4t²/U′ and J* = 4t²/Ū, where Ū is calculated from the harmonic mean of U and U′.
This analysis shows that the harmonic-mean prescription continues to be usable for these more complex geometries and that the effect of the geometrical distribution of U and U ′ sites across the system can be understood and analyzed essentially on a site-by-site basis. This is a direct consequence of the locality of the mapping.
Disordered systems can be modeled simply by considering a random distribution of impurities instead of just one. While a complete analysis of disordered systems requires statistical analysis of data resulting from a large number of realizations of the disorder, an analysis of a few representative cases is enough to conclude that the single-impurity results are not changed, in their essence, when the impurity concentration is increased.
Specifically, we find that if all impurities have U′ > U a higher concentration of impurities reduces the deviation between the GS energies. Keeping the concentration fixed and increasing U′/U also reduces the deviation, but to a much smaller degree. On the other hand, if all impurities have U′ < U the agreement is naturally worsened. However, in the U′ < U case the decisive factor is not so much the concentration of impurities but their strength, as measured by U′/U. This inversion in the effect of concentration and strength of the impurities can be understood on the basis of Fig. 4, which shows that for a single impurity with U′ < U the deviation rapidly increases as U′ becomes more different from U, while for U′ > U it decreases only very slowly and almost saturates as the impurity sites effectively drop out of the system. This discussion shows that it is possible to control the degree of the fermion-spin mapping by means of the introduction of a suitable concentration of impurities of suitable strength. This possibility may be useful in the design of nanoscale devices based on strongly correlated systems, whose properties can be tailored from electron-like to spin-like by introducing suitable disorder. We note in passing that this is a strong-interaction effect, completely different from the itinerant-to-localized transition resulting from disorder in Anderson localization.
IV. CONCLUSIONS
In the homogeneous situation, in which all sites are equivalent, the mapping is characterized by the behavior of the system as a function of the onsite interaction U. Not unexpectedly, the Heisenberg model is found to be a good approximation to the t-J and the Hubbard models at n = 1 for sufficiently large U. Somewhat more unexpected is that the Heisenberg model is a rather bad approximation to the Hubbard model at n = 1 even for values of U that are considered strongly correlated in solid-state applications. Only at U near 20 has the deviation dropped to about one percent. The standard mapping thus only becomes quantitatively reliable for values of U that are hard to reach in the solid state, but have already been demonstrated in cold-atom systems.
In inhomogeneous situations, translational symmetry is broken. An analytical calculation for the simple case of a single impurity suggests that the mapping can be preserved in terms of the harmonic mean. Moreover, in terms of this mean the mapping is as local as possible, i.e., the value of the Heisenberg J between two sites is determined only from the value of the Hubbard U at these two sites. Numerical calculations illustrate and corroborate this finding. This is more than a mathematical, or numerical, result: it means that the physics of the mapping, i.e., the gradual freezing out of the charge degrees of freedom and the localization arising from the concomitant suppression of double occupation, is essentially the same regardless of the geometry and the presence or absence of translational symmetry. The harmonic-mean prescription can be used easily and reliably for a wide variety of spatial inhomogeneities. Once the basic mapping is understood, the harmonic-mean prescription allows one to interpret and even to predict the behavior of much more complicated systems, without going through detailed analytical or computationally expensive numerical calculations. | 2010-07-02T17:06:35.000Z | 2010-07-02T00:00:00.000 | {
"year": 2010,
"sha1": "40fcec31fbb16dcc8cedf0f6e564ab89b403c68f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1007.0400",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "40fcec31fbb16dcc8cedf0f6e564ab89b403c68f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15178388 | pes2o/s2orc | v3-fos-license | Broadband chiral metamaterials with large optical activity
We study theoretically and experimentally a novel type of metamaterial with hybrid elements composed of twisted pairs of cross-shaped meta-atoms and their complements. We reveal that such two-layer metasurfaces demonstrate large, dispersionless optical activity at the transmission resonance accompanied by very low ellipticity. We develop a retrieval procedure to determine the effective material parameters for this structure, which has lower-order symmetry ($\mathrm {C}_4$) than other commonly studied chiral structures. We verify our new theoretical approach by reproducing numerical and experimental scattering parameters.
I. INTRODUCTION
Chiral structures with optical activity and circular dichroism have been instrumental for many applications, including biological and chemical sensing [1]. In particular, chiral metamaterials can have optical activity several orders of magnitude larger than the effects found in nature. A chiral metamaterial formed by twisting planar structures, such as a pair of crosses or split-ring resonators, can result in large optical activity or giant gyrotropy over a range of frequencies [1][2][3]. The resonant modes of these twisted structures will be dominated by either an electric or a magnetic dipole response, leading to the impedance being mismatched to free space. Also, optical activity is highly dispersive over the transmission band, and it should be accompanied by ellipticity due to the Kramers-Kronig relations [3][4][5][6][7]. This is undesirable for many polarization-based applications requiring linearly polarized light.
The Babinet principle states that an infinitely thin, perfectly conducting complementary structure illuminated by a complementary incident field generates a field equivalent to the field excited in the original structure, but with the electric and magnetic fields exchanged [8][9][10][11]. Intuitively, by coupling an element together with its complement, these electric and magnetic responses become coupled, matching the impedance over the transmission peak, which should overcome the previously stated shortcomings of rotated structures.
The approach based on combining a meta-atom with its complement has previously been used to study nonchiral effects, such as dual-band ultra-slow modes 12 and a broad bandpass filter at THz frequencies 13 . The coupling mechanisms of this approach have also been studied at optical frequencies, and circular dichroism observed 14 .
Previously, we proposed a hybrid meta-atom resulting from a combination of a cross and its complement, and suggested that the use of the Babinet principle may address the above-mentioned problems with twisted structures [15]. We predicted that this structure may have large, dispersionless optical activity at the transmission resonance, accompanied by very low ellipticity. A numerical study of a similar structure was reported recently for the THz regime [16]. Importantly, such structures have C4 symmetry, which is of lower order than the symmetry of commonly studied metamaterial structures created by twisted identical resonators, which have D4 symmetry [2,11].
To further understand this new type of metasurface, it is very important to calculate the effective parameters of such structures. Obtaining the material parameters of metamaterial structures is a well-established procedure for isotropic, achiral media [17]. The approach has been extended to the cases of chiral, bianisotropic and inhomogeneous media [18][19][20][21]. An alternative approach based on the state-transition matrices has also been proposed for isotropic chiral media [22]; however, none of these methods can be employed for the case of the C4 symmetry group. The parameters for structures with C4 symmetry were retrieved in Ref. [23] under the assumption that the two bi-anisotropic parameters are related by a frequency-independent constant. This assumption is not valid for general structures, including the one proposed here. This lower symmetry results in the reflection being dependent on the propagation direction, due to the structure being physically different when seen from opposite directions.
In this paper, we study theoretically and experimentally the properties of metamaterials composed of twisted pairs of cross-shaped meta-atoms and their complements, and develop a retrieval procedure to determine the effective material parameters for meta-structures with C4 symmetry. We verify our new theoretical approach by reproducing both numerical and experimental scattering parameters. The paper is organized as follows. In Sec. II we experimentally verify our previous results, finding good agreement with numerical simulations and confirming our previous findings. We then develop an approach in Sec. III to retrieve the effective parameters for such structures in a unit-cell configuration, based on the eigenvalues of the scattering-transfer matrix. Finally, we verify this approach by recalculating the scattering matrix through the substitution of the retrieved material parameters. Section IV concludes the paper.
II. EXPERIMENTAL RESULTS
We choose the cross and its complement to have arms of length 27 mm and width 1.5 mm. They are separated by a Rogers R4350 board 1.5 mm thick, with dielectric constant 3.48, and twisted through 20°. The metal components are made of copper, 30 µm thick. We conduct the experiment inside a circular waveguide, measuring the scattering matrix for both right- and left-handed polarizations. A schematic of the two elements rotated through an angle θ is shown in Fig. 1(a).
Simulations are performed using CST Microwave Studio, using a linearly polarized input wave propagating along the z-axis, where the first two polarization-degenerate modes are excited. The first mode is assigned to that with the electric field oriented along the y-axis, and the second along the x-axis. We simulate the co- and cross-polarized transmission coefficients for both linear polarizations (Sxx, Syy, Sxy and Syx), and use these to calculate the transmission for the two circularly polarized waves. As our structure has four-fold rotational symmetry, Syy = Sxx and Sxy = -Syx. The magnitudes of the right- and left-handed polarizations are compared with the experimental results in Fig. 2(a). We see that there is little difference between the two polarizations; however, the resonances are blue-shifted in the experiment, which is most likely due to imperfect electrical connection between the metallic sample and the waveguide walls. We also plot the phase for both polarizations in Fig. 2(b), and see good agreement apart from the shift in resonance.
The optical activity is related to the difference in phase between the two circularly polarized waves, while the ellipticity is related to the difference in their transmission magnitudes. We calculate these values using the equations outlined in Ref. [15]. Fig. 3(a) shows the calculated optical activity φ (in degrees), comparing the experiment with the numerical simulations. We see good agreement, with large, flat optical activity over the transmission band. The ellipticity is plotted in Fig. 3(b). The magnitude of the ellipticity is very small, as intended with this design, so the measured values are comparable to the experimental uncertainties. As the ellipticity corresponds to the gradient of the optical activity, it is not surprising that we see very low ellipticity in the region of the transmission resonance, accompanying the low dispersion in the optical activity. These results are consistent with our previous findings, where we compared the response of our mixed structure against that of a pair of crosses and a pair of complementary crosses [15].
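The post-processing behind Fig. 3 can be sketched as follows (our illustration; the handedness and sign conventions below are one common choice and need not coincide with those of Ref. [15], and the numerical values are toy inputs):

```python
import numpy as np

def rotation_and_ellipticity(t_xx, t_xy):
    # Convert linear-basis transmission coefficients of a C4-symmetric
    # structure into circular ones (assumed convention), then compute the
    # polarization rotation (optical activity) and the ellipticity.
    t_plus, t_minus = t_xx + 1j * t_xy, t_xx - 1j * t_xy
    phi = 0.5 * np.degrees(np.angle(t_plus) - np.angle(t_minus))
    eta = 0.5 * np.degrees(np.arcsin(
        (abs(t_plus)**2 - abs(t_minus)**2) /
        (abs(t_plus)**2 + abs(t_minus)**2)))
    return phi, eta

# Toy values for illustration only; in practice t_xx and t_xy are arrays
# over frequency and np.unwrap should be applied to the phases.
print(rotation_and_ellipticity(0.6 * np.exp(0.3j), 0.2 * np.exp(0.9j)))
```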
Since the system is achiral when θ = 0° or 45°, we expect that by changing θ we can control the optical activity. We measured the transmission for θ = 0° to 45°, in 2.5° steps. The resulting optical activity at the transmission resonance is plotted as a function of θ in Fig. 4, both numerically and experimentally, showing that the optical activity is highly dependent on the twist angle. The small disagreement between numerics and experiment can be explained by imperfections in the fabrication. We also see, from the numerical simulations, that the angle of maximum optical activity is actually about 17.5°, while we would expect it to be at 22.5°, as that is the angle at which the system is furthest away from a symmetric configuration. The reason for this discrepancy is the retardation over the gap between the elements, as explained further in Ref. [24].
These experimental results verify our previous numerical findings of large, dispersionless optical activity at resonance and very low ellipticity [15].
III. RETRIEVAL OF THE EFFECTIVE PARAMETERS
To calculate the material parameters we use a unit-cell model periodic in the x and y directions for simplification, as the waveguide mode is not uniform in the transverse direction, making it equivalent to a non-normal angle of incidence. The cross and its complement are modeled as having arms 28 mm in length and are separated by 1.5 mm. The metal is modeled as PEC. All other parameters remain the same, except that the complementary cross and the boards are now square in shape, to fill up the unit cell, as shown in Fig. 1(b). The system is excited using a plane wave at normal incidence, described using the time convention exp(iωt).
The most general case for our structure, inclusive of all angles, has C4 symmetry. At normal incidence there is no z component of the macroscopic fields, allowing us to model the transverse components using reduced 2 × 2 material tensors, where ε is the effective permittivity, μ the effective permeability, κ the chirality, and ξ a bi-anisotropic parameter which is not present in isotropic chiral media and is introduced by the lower order of symmetry in our system. The off-diagonal components of ε̄ and μ̄ are 0, due to time-reversal symmetry [25]. The resulting constitutive relations at normal incidence couple D to both E and H, and B to both H and E, through κ and ξ, where J̄ = ẑ₀ × Ī is the 90° rotator in the x−y plane. We then have the following parameters to calculate: ε, μ, κ and ξ. The currently established approaches do not cover general structures with this particular symmetry [18,21,23], so we need to develop a new approach. We have the added complication that, due to the meshing in the CST model not preserving 90° rotational symmetry, the eigenstates are not perfectly circularly polarized in the numerical model. To account for this we develop a much more robust method, where we find the scalar parameters of the eigenmodes of the scattering-transfer matrix and use them to assign effective parameters for a medium with circular eigenstates.
A. Eigenmode analysis
We start by solving for the eigenvalues of the scattering-transfer matrix, which are then used to find the refractive index n and the impedance Z. The impedance is a tensor, but due to symmetry there are only a few unique values, which we will find. When dealing with the tensors, we will denote ⇒Z as the impedance for waves travelling in the +z direction, and ⇐Z in the −z direction. To calculate n and Z from the scattering parameters, we make use of the scattering-transfer matrix T_S [26], which can be constructed from the scattering matrix; here S11, S12, S21 and S22 are 2 × 2 arrays including both linear polarizations at each port. We then find the eigenvalues λ_n of T_S from the relation T_S F = λ_n F, with λ_n = exp(∓iα), where α = n k₀ d is the phase advance across the unit cell of thickness d. The eigenvector F collects the amplitudes a_n and b_n of the waves propagating towards and away from the structure, which are linear combinations of the transverse electric and magnetic fields weighted by z₀; the value of n refers to the mode being considered and z₀ is the impedance of free space. Reference [21] uses this eigenvalue relation with the transmission matrix; however, it still holds when using scattering-transfer parameters as well. The four eigenvalues correspond to the forwards and backwards modes of the two polarizations. The refractive indices can then be found from the eigenvalues as n_j = ±i ln(λ_j)/(k₀d), where d is the thickness of the sample (the substrate thickness plus the thickness of both metal resonators) and k₀ is the wavenumber in free space. The resulting indices for the forwards direction of the two polarizations are plotted in Fig. 5(a) and (b). By finding the eigenvectors F corresponding to these eigenvalues, we can study the fields in the structure. We can determine the eigenstates in our structure by looking at the eigenvectors (not shown). The eigenstates are almost circularly polarized. These relations can be rearranged to find the ratio E/H, from which the scalar impedances follow; for circularly polarized waves the impedances are found from the corresponding eigenvectors.
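The block algebra of this step can be sketched as follows (our illustration, not the authors' code). With b1 = S11 a1 + S12 a2 and b2 = S21 a1 + S22 a2, one assumed convention for the scattering-transfer matrix maps (b2, a2) to (a1, b1); the logarithm branch is left at its principal value, which in practice must be tracked across frequency:

```python
import numpy as np

def transfer_from_scattering(S11, S12, S21, S22):
    # Scattering-transfer matrix T_S such that [a1; b1] = T_S [b2; a2],
    # built from the four 2x2 scattering blocks (assumed port convention).
    S21i = np.linalg.inv(S21)
    return np.block([[S21i,         -S21i @ S22],
                     [S11 @ S21i,   S12 - S11 @ S21i @ S22]])

def effective_indices(S11, S12, S21, S22, k0, d):
    # Eigenvalues come in forward/backward pairs, lam = exp(-+ i n k0 d);
    # the refractive indices follow from the complex logarithm.
    lam = np.linalg.eigvals(transfer_from_scattering(S11, S12, S21, S22))
    return 1j * np.log(lam) / (k0 * d)
```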
B. Parameter retrieval
Now that we have the scalar index of refraction and impedance for each eigenmode, we can calculate the effective medium parameters. Using equations (8.6)-(8.10) from Ref. [25], modified for a plane wave at normal incidence, we find the refractive index of the two circular polarizations in the form n± = √(εμ − ξ²) ± κ. We can then find the impedance from equations (8.6), (8.7) and (8.38) from Ref. [25], by assuming a plane wave of the form exp(−ink₀d) at normal incidence.
We can find the eigenvalues z for the different polarizations and propagation directions, which give us the impedances for the eigenstates in the medium. For ⇒Z₁,₂ and for ⇐Z₁,₂ we obtain closed-form expressions in terms of ε, μ, κ and ξ, and we see that of the four eigenvalues, only two are unique. This supports our earlier argument that the impedance depends only on the propagation direction, as shown in Fig. 5(c-d).
Using these eigenvalues, we can rearrange the expressions to obtain retrieval equations for the parameters μ, ε, κ and ξ. Both the real and imaginary parts of these retrieved parameters are plotted in Fig. 6. In Fig. 6(a) we see that the imaginary part of μ becomes positive, which violates passivity. However, this is a known problem with assigning local parameters to metamaterials [17], despite which the effective parameters can still yield useful insights. In Fig. 6(c) we have κ, whose real part is directly related to the optical activity and whose imaginary part defines the ellipticity. We see relative flatness in the real part, which is consistent with our earlier findings on the optical activity. We also see that the imaginary part is very low, corresponding to the very low ellipticity reported.
The real and imaginary parts of ξ are plotted in Fig. 6(d). This reproduces the asymmetry of the structure as shown in the reflection coefficients.
In order to verify the accuracy of this approach, we used our retrieved parameters to recalculate the scattering parameters by re-substitution, using equation (8.39) from Ref. [25] to calculate the admittance, and then equations (8.40)-(8.46) and (8.51)-(8.52) to calculate the scattering parameters. The results for both polarizations are plotted in Fig. 7, and show near-perfect agreement between our original simulations and the recalculations. We can also see the nearly constant difference between the transmission phases in Fig. 7(d), consistent with the flat optical activity. The reflection plotted is that for forward incidence; to recalculate the opposite direction, the sign of ξ needs to be changed. These calculated scattering parameters confirm the accuracy of our retrieval approach, and also justify treating the polarizations of the eigenmodes as circular, as this is the assumption made in calculating the parameters.
IV. CONCLUSIONS
We have demonstrated experimentally that the metasurface composed of twisted pairs of meta-atoms with their complement exhibits large, flat optical activity and very low ellipticity. We have studied the response of our structure to a changing twist angle and found the optimal twist angle for maximum optical activity. Because this metasurface has C4 symmetry, we have developed a novel retrieval method for calculating the effective material parameters which is applicable to structures with C4 symmetry. This approach can be easily extended for use in more general media, potentially including structures inside a waveguide. We have verified the accuracy of this approach by calculating the scattering parameters theoretically and comparing them with results obtained from numerical simulations and experiment. | 2014-01-30T23:59:28.000Z | 2014-01-30T00:00:00.000 | {
"year": 2014,
"sha1": "83b6a8029143e03217801c6805e5ce20b8a201e2",
"oa_license": null,
"oa_url": "https://openresearch-repository.anu.edu.au/bitstream/1885/69574/2/01_Hannam_Broadband_chiral_metamaterials_2014.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f1d9e1481ffe728a95c581887c3620ab20ab33cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
261923021 | pes2o/s2orc | v3-fos-license | Polymeric Inclusion Membranes Based on Ionic Liquids for Selective Separation of Metal Ions
In this work, poly(vinyl chloride)-based polymeric ionic liquid inclusion membranes were used for the selective separation of Fe(III), Zn(II), Cd(II), and Cu(II) from hydrochloric acid aqueous solutions. The ionic liquids under study were 1-octyl-3-methylimidazolium hexafluorophosphate, [omim+][PF6−], and methyl trioctyl ammonium chloride, [MTOA+][Cl−]. For this purpose, stability studies of different IL/base polymer compositions against aqueous phases were carried out. Among all polymer inclusion membranes studied, [omim+][PF6−]/PVC membranes at a ratio of 30/70 and [MTOA+][Cl−]/PVC membranes at a ratio of 70/30 were able to retain up to 48% and 82% of the weight of the initial ionic liquid, respectively, after being exposed to a solution of metal ions in 1 M HCl for 2048 h (85 days). It was found that polymer inclusion membranes based on the ionic liquid methyl trioctyl ammonium chloride allowed the selective separation of Zn(II)/Cu(II) and Zn(II)/Fe(III) mixtures, with separation factors of 1996 and 606, respectively, and, to a lesser extent but still satisfactorily, of Cd(II)/Cu(II) mixtures, with a separation factor of 112. Therefore, selecting the appropriate ionic liquid/base polymer mixture makes it possible to create polymeric inclusion membranes capable of selectively separating target metal ions.
Introduction
Some human activities release harmful substances into water sources, posing significant risks to aquatic ecosystems and the environment [1]. Among the contributors to water pollution, heavy metal ions stand out due to their high toxicity, non-degradability, and potential to bioaccumulate and biomagnify in the food chain [2]. The presence of heavy metal ions in aquatic ecosystems can have direct or indirect detrimental effects on living organisms [3]. Moreover, these ions also threaten plants and animals in soil environments, as they can be absorbed by plants and subsequently reach animals and humans [4].
Industrial and academic researchers are dedicating their efforts to encouraging the recycling of waste materials for their utilization in various industrial applications [5]. This practice of recycling waste materials contributes to mitigating the environmental impact of industrial activities and helps conserve natural resources. Certain wastes exhibit a metallic nature, as exemplified by the spent pickling hydrochloric acid effluent generated by the galvanic industry. This effluent contains high levels of Zn(II), Fe(III), and small quantities of other heavy metals [6]. By recycling this effluent, the metals can be recovered and subsequently reused in other industrial processes [7].
Electronic devices represent yet another significant source of metallic waste [8]. Burning these devices entails significant dangers for both the environment and human well-being [9]. Within the European Union, the incineration of electronic waste has been calculated to release 36 tons of mercury and 16 tons of cadmium into the atmosphere annually [10]. The solid waste stemming from metallurgical zinc industries predominantly comprises copper: the process generates a sludge rich in copper, accompanied by other metal ions such as iron and cadmium. The extraction and isolation of these metal ions from their mixed solutions hold substantial importance in hydrometallurgical processes, both for the potential to reclaim valuable metals and to address environmental pollution issues [11].
Numerous technologies exist for eliminating toxic metals from liquid effluents, such as adsorption, membrane, chemical, electric, and photocatalytic-based treatments [12]. However, many of these methods are complex, require significant energy input and/or produce large quantities of waste [13].
Membrane-based separation processes have emerged as a promising substitute for addressing the limitations of conventional separation techniques, including high energy usage and the need for severe operating conditions. These processes operate with low energy consumption and can be carried out under milder conditions [14,15].
Supported liquid membranes (SLMs) have gained increasing interest as a membrane-based separation technique over the past few decades. SLMs involve incorporating an organic liquid into the pores of a polymeric support, where capillary forces retain it [16]. The benefits of SLMs encompass the utilization of a small volume of organic solvent and carrier compared to liquid-liquid extraction (enabling the use of costly carriers), simultaneous extraction and re-extraction steps, high separation factors, ease of scaling up, low energy demands, operation under modest conditions, and cost-effectiveness in terms of both capital and operating expenses [16]. The stability of SLMs can be enhanced through a novel approach that employs ionic liquids as the liquid phase [17]. This approach has found practical applications in selectively transporting various organic compounds [18], metal ions [19] and inorganic salts [20]. Ionic liquids (ILs) are organic salts characterized by their ability to remain liquid below room temperature [21]. Typically, ILs are composed of an organic cation paired with a monoatomic or polyatomic inorganic anion, though the trend is shifting towards using organic anions. One of the primary advantages of ILs compared to organic solvents is their remarkably low vapor pressure, rendering them highly stable.
Furthermore, ionic liquids (ILs) show advantageous chemical and thermal stability and high viscosity [22]. Their solubility within neighboring phases can be meticulously regulated by selecting the appropriate cation and anion [13]. The distinctive attributes of ionic liquids allow them to be used as an extracting phase in metal ion liquid-liquid extraction [23] or even in novel liquid-liquid extraction approaches [24]. Ionic liquid properties also contribute to the establishment of a more enduring membrane phase. Supported liquid membranes (SLMs) that incorporate ionic liquids are named supported ionic liquid membranes (SILMs) [17].
Over the last years, various categories of ionic liquid membranes have emerged. Among them, polymer ionic liquid inclusion membranes (PILIMs) have garnered considerable attention for their potential in diverse chemical operations and applications like metal ion extraction, salt separation, CO2 removal or selective electrodes [25][26][27][28]. However, far fewer examples of PILIMs than of SILMs can be found in the literature, owing to their more recent appearance. PILIMs are characterized by entrapping the ionic liquid within a polymeric structure, effectively reducing its release into the surrounding phase; PILIMs are also known as ionogels. While both SILMs and PILIMs operate on similar separation mechanisms, PILIMs stand out due to their superior mechanical and chemical properties [29]. This advantage is due to the immobilization of the ionic liquid within a polymeric matrix, setting them apart from supported membranes [29]. Previous studies have identified [omim+][PF6−] and [MTOA+][Cl−] ionic liquids as good liquid-liquid extracting agents for metals such as Cu(II), Fe(III), Zn(II) and Cd(II) [26], and also in the form of ionic liquid membranes for the same purpose [19,30]. These findings suggest that employing ionic liquids as a substitute for conventional extraction agents in the selective separation of heavy metal ions holds significant promise.
The present work aims to explore the feasibility and efficiency of using PILIMs for the extraction and separation of Zn(II), Cd(II), Fe(III), and Cu(II) from hydrochloric acid aqueous solutions in the absence of a chelating agent. [omim+][PF6−] and [MTOA+][Cl−] were used as the ionic liquid phases. Their stability against hydrochloric acid and aqueous solutions has been tested, as well as their metal ion separation efficiency.
Polymer Ionic Liquid Inclusion Membranes Preparation
Different mixtures of an ionic liquid and a polymer were dissolved in 3 mL of different solvents to prepare 0.3 g polymer ionic liquid inclusion membranes of different compositions. The casting method, described in other articles [29], was used to create the different PILIMs by pouring the prepared mixtures into a Fluka glass ring (30 mm height, 28 mm internal diameter) placed on a borosilicate glass plate and allowing all the solvent to evaporate for 48 h.
Analysis of Membrane Stability
The stability of the new polymer inclusion membranes toward an aqueous phase was assessed by immersing the PILIMs in a continuously agitated (300 rpm) 60 mL water bath set at 30 °C. Each experiment comprised four 24-h cycles, each followed by a 24-h drying period and subsequent weighing of the membranes. Before commencing the next cycle, the aqueous phase was replaced with fresh water. To analyze the stability of the PILIMs, the weight loss of the ionic liquid was determined by weighing the membrane before and after each cycle.
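For clarity, the sketch below reproduces this bookkeeping in Python. The membrane masses and the PVC mass are hypothetical placeholders; since PVC is insoluble in water, any weight lost between cycles is attributed entirely to leached ionic liquid.

```python
# Minimal sketch of the weight-loss bookkeeping described above. The masses
# are hypothetical; m_pvc_mg is the (insoluble) polymer mass, so any weight
# lost between cycles is attributed to leached ionic liquid.
def retained_il(masses_mg, m_pvc_mg):
    """masses_mg: membrane dry weights [initial, after cycle 1, 2, ...] in mg.
    Returns % of the initial IL retained after each cycle."""
    il0 = masses_mg[0] - m_pvc_mg            # initial IL content
    return [100 * (m - m_pvc_mg) / il0 for m in masses_mg[1:]]

# 0.3 g membrane at 70/30 IL/PVC: 210 mg IL, 90 mg PVC (illustrative numbers)
print(retained_il([300, 280, 271, 268, 267], 90))  # -> % IL after each cycle
```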
Membrane Transport Studies
We assessed the transport of Zn(II), Cu(II), Cd(II), and Fe(III) through the PILIMs at a temperature of 303.15 K using the setup depicted in Figure 1. The setup consisted of a glass diffusion cell with two separate compartments, each containing 30 mL of solution and separated by the PILIM. Both compartments were mechanically stirred at 300 rpm to avoid concentration polarization at the membrane interfaces, which improves the extraction conditions and reduces the instability of the membranes, as recommended in previous works [31]. The receiving phase was composed of MilliQ water. Once 30 mL of the respective solutions had been added to each compartment, the experiment was started. Atomic absorption spectrophotometry was chosen to monitor the assay, performing periodic analyses on 100 µL taken from each compartment, as described in Section 2.5. Sampling was concluded once the concentrations of the metal ions in both phases had stabilized.
The pertraction factor (PF) was the variable used to evaluate the efficacy of the pertraction procedure. Its value was determined by applying Equation (1):

PF = CrM/CfM (1)

where CrM represents the concentration of the metal ion in the receiving phase and CfM that in the feed phase. To ensure the repeatability of the assay, triplicate determinations were performed, and the resulting mean values are presented. The assay showed a high level of repeatability, with a relative standard deviation of 3% or less.
To evaluate the ability of the PILIMs to separate target metal ions, a separation factor (αM1/M2) was determined according to Equation (2). This factor indicates the efficiency of the separation of two metal ions by a specific PILIM: the higher αM1/M2, the more selective the separation.

αM1/M2 = (CrM1/CfM1)/(CrM2/CfM2) (2)
where CrM1 and CfM1 represent the concentrations of metal ion M1 in the receiving and feed phases, respectively, and CrM2 and CfM2 represent the concentrations of metal ion M2 in the receiving and feed phases, respectively.
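As an illustration of Equations (1) and (2), the following Python sketch computes both factors from concentration readings; the concentration values are invented for the example.

```python
# Illustrative implementation of Equations (1) and (2): the pertraction
# factor PF = C_r/C_f and the separation factor alpha_M1/M2 = PF_M1/PF_M2.
# All concentration values below are hypothetical.
def pertraction_factor(c_receiving, c_feed):
    return c_receiving / c_feed

def separation_factor(c_r1, c_f1, c_r2, c_f2):
    return (c_r1 / c_f1) / (c_r2 / c_f2)

# e.g., Zn(II) 95 mg/L in the receiving phase vs 5 mg/L left in the feed,
# Cu(II) 10 mg/L received vs 90 mg/L in the feed (made-up numbers):
pf_zn = pertraction_factor(95, 5)          # 19.0
alpha = separation_factor(95, 5, 10, 90)   # ~171
print(pf_zn, alpha)
```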
Analytical Method
The determination of the metal ion concentrations, including Zn(II), Cd(II), Cu(II), and Fe(III), was carried out using a Varian Spectra 10 Plus atomic absorption spectrophotometer. Specific hollow cathode lamps for zinc, cadmium, iron and copper were used as the internal emission sources. The equipment was calibrated using metal ion standards at concentrations of 0, 0.1, 0.5, 1, 1.5 and 2 mg/L, prepared from a commercial 1000 mg/L standard. The correlation coefficient of the calibration curves (r2) was greater than 0.99. The working conditions were air as oxidant (3.5 mL/min) and acetylene as fuel (1.5 mL/min). To monitor the sorption of metal ions, periodic samples were withdrawn from the aqueous solutions for analysis.
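The following sketch illustrates this external-calibration procedure: a straight-line fit of absorbance against the standard concentrations, checked with the r2 > 0.99 criterion quoted above, then inverted to read out a sample concentration. The absorbance readings are hypothetical.

```python
# Hedged sketch of the external calibration described above; the absorbance
# values are made up for illustration.
import numpy as np

std_conc = np.array([0, 0.1, 0.5, 1.0, 1.5, 2.0])                   # mg/L standards
absorbance = np.array([0.002, 0.013, 0.061, 0.118, 0.180, 0.239])   # hypothetical

slope, intercept = np.polyfit(std_conc, absorbance, 1)
pred = slope * std_conc + intercept
r2 = 1 - np.sum((absorbance - pred) ** 2) / np.sum((absorbance - absorbance.mean()) ** 2)
assert r2 > 0.99  # acceptance criterion used in this work

sample_abs = 0.095
print((sample_abs - intercept) / slope)  # sample concentration in mg/L
```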
Stability of Polymer Ionic Liquid Inclusion Membranes Based on [omim+][PF6−] and [MTOA+][Cl−] to Aqueous Medium/Hydrochloric Acid Solutions

Water's widespread use as a universal solvent is attributed to its polarity. Research studies have shown that some supported ionic liquid membranes, for instance those based on nylon supports, exhibit instability when they come into contact with polar solvents like water [17]. The immobilization of the ionic liquid as a polymeric ionic liquid inclusion membrane could enhance the stability of the ionic liquid membrane and increase the amount of active phase (ionic liquid) in the membrane.
In this work, the stability of PILIMs based on [omim+][PF6−] and [MTOA+][Cl−] at different ionic liquid/PVC ratios toward aqueous media has been investigated. [omim+][PF6−] and [MTOA+][Cl−] were selected because of their low water solubility [29,32]. The low solubility of the ionic liquid in the surrounding phase has been demonstrated to be a crucial factor in the stability of PILIMs [17].
Figure 2 shows the profile of the weight losses of ionic liquid from the membranes prepared with different IL/PVC ratios during four cycles, with fresh water in each cycle. It can be observed that the percentage of retained ionic liquid increases as the proportion of the base polymer increases. The weight variations observed after the four cycles are attributed to losses of IL (ionic liquid), given PVC's insolubility in water.
The results indicate a rise in IL losses when using high [omim+][PF6−] to PVC ratios. This fact implies that a given amount of PVC can only retain a specific quantity of IL. Furthermore, an increase in the initial amount of IL used in the PILIM does not necessarily result in a higher amount of IL retained. Figure 2A illustrates the behavior of the membranes based on [omim+][PF6−], where the retained amount of ionic liquid stabilizes after the second cycle for the higher ionic liquid concentrations (50% and 70%). The membrane prepared with 30% IL achieves the highest retained amount of this ionic liquid at the end of the test.
In the case of the PILIMs based on [MTOA+][Cl−] (Figure 2B), the retained amount of ionic liquid tends to stabilize after the fourth cycle, at approximately 55 mg for the membranes prepared at 20%, 30% and 50%, and 77 mg for the membranes prepared at 70%.
The selection of membranes with the maximum retained amount of the active phase (ionic liquid) is essential for the pertraction process. The ionic liquid/PVC ratio giving the highest retention varies depending on the specific ionic liquid used. Therefore, the stability of the obtained membranes is influenced not only by the solubility of the ionic liquids but also by the specific interactions between the ionic liquid and the polymer, PVC in this case. In water, [omim+][PF6−] exhibits greater solubility than [MTOA+][Cl−] [17,32], resulting in the need for a larger amount of PVC to immobilize a given quantity of IL. In spite of the higher stability of the PILIMs with 20%, 30%, and 50% [MTOA+][Cl−] with respect to the 70% one (due to their higher PVC/IL ratios), the 70% IL membrane was chosen for the pertraction experiments, since this membrane contains the highest amount of immobilized active phase ([MTOA+][Cl−]). Furthermore, the smaller amount of PVC in [MTOA+][Cl−]/PVC at 70/30 (w/w%) could facilitate the transport of metal ions through the membrane, as explained above.
Based on the obtained results, membranes containing 30% [omim+][PF6−] and 70% [MTOA+][Cl−] were chosen for the pertraction tests presented in the subsequent sections, owing to the higher amount of ionic liquid immobilized after the four cycles.
Considering that the feed phase in the pertraction tests consists of a hydrochloric acid solution, it is essential to assess the stability of these membranes against the hydrochloric acid aqueous medium. The stability tests were conducted using 60 mL of 1 M HCl under conditions similar to those of the pertraction assays.
Figure 3 shows the profile of the weight losses of ionic liquid from the membranes prepared with different IL/PVC ratios for [MTOA+][Cl−] and [omim+][PF6−] during the four cycles in contact with the hydrochloric acid solution (1 M). In the case of the ionic liquid [omim+][PF6−], the PIM exhibits good stability. Although more ionic liquid is lost during the cycles than when the membrane is in contact with pure water, approximately 80% of the initial ionic liquid is retained. Conversely, with the IL [MTOA+][Cl−], a significant loss of the IL is observed during each cycle; however, this loss is smaller than in the purely aqueous phase.
Furthermore, in this case, the amount of final active phase in the membrane is higher than that in the membrane based on 30% [omim+][PF6−]. Notably, the stability of the [MTOA+][Cl−]-based PIMs improves when exposed to hydrochloric acid solutions. The presence of the same counteranion, [Cl−], in the aqueous phase (HCl, 1 M) could stabilize the ionic liquid in the polymer inclusion membrane.
The use of casting techniques to immobilize ionic liquids in PIMs enables the production of resilient PILIMs capable of enduring hydrochlorinated aqueous environments, even with loading capacities of up to 79 mg (232 mM) and 93.2 mg (231 mM) for [omim+][PF6−] and [MTOA+][Cl−], respectively, after the four-cycle operation, corresponding to loadings of up to 12.8 and 15.1 mg IL/cm2 (37.6 and 37.4 mM IL/cm2), respectively. Practically the same millimoles of [omim+][PF6−] were immobilized as of [MTOA+][Cl−], with the difference that, in the case of [MTOA+][Cl−], less PVC (30%) was needed for immobilization. It is important to highlight that the optimal ionic liquid concentration will depend on the nature of the ionic liquid, specifically on its water solubility and its interaction with the organic polymer support, PVC in our case. In previous work, polymeric Nylon membranes of 25 mm diameter were used to immobilize, by adsorption, different imidazolium-based ionic liquids to create supported ionic liquid membranes (SILMs). The maximum amount of ionic liquid immobilized in the membrane pores before the stability experiment was around 90 mg, corresponding to a loading of up to 18.3 mg IL/cm2. After seven days of stability experiments, almost all of the ionic liquid immobilized in the membrane was lost (from 98.5% to 100%) when highly polar solvents (DMSO and water) were used in the receiving phase [17].

Selective Separation of Fe(III), Zn(II), Cd(II) and Cu(II) from 1 M HCl Aqueous Solution through PILIMs Based on [omim+][PF6−] and [MTOA+][Cl−] Using Milli-Q Water as Receiving Phase

As mentioned above, [omim+][PF6−] and [MTOA+][Cl−] have shown good results in terms of metal ion separation in other studies, both in liquid-liquid extraction and in the form of supported ionic liquid membranes [19,23]. However, no studies have so far reported the use of polymeric inclusion membranes based on these ionic liquids to separate metal ions in hydrochloric acid solutions. In this study, we have used polymer inclusion membranes (ionogels) based on [omim+][PF6−] and [MTOA+][Cl−] at 30% and 70% ionic liquid concentrations, respectively; as shown in the previous section, these are the best concentrations for obtaining stable membranes with a high ionic liquid charge. In this experiment, a mixture of the four metal ions at a concentration of 100 ppm each, dissolved in 1 M HCl, constitutes the feed phase, while the receiving phase consists of milli-Q water at pH 6. Figure 4 shows the pH profiles in the feed and acceptor phases and, separately, the concentrations of the metal ions in the pertraction of Fe(III), Zn(II), Cd(II) and Cu(II) from the 1 M HCl solution. The difference in HCl concentration between the phases acts as the driving force for ion pertraction.
As can be seen in Figure 4, during the initial moments of the experiments (0-100 h), there is a small decrease in the metal concentrations in the feed phase, but this does not translate into a quantitative increase in the concentrations of the metal ions in the receiving phase. Instead, the initial concentrations in the feed phase are practically re-established. The same happens with the pH profiles, which decrease and increase in consonance with the concentrations of the ions in their respective phases. Hence, the driving forces for transport could arise from differences in both metal ion concentration and HCl concentration between the feed and receiving phases. In previous work [23], we studied the selective recovery of Zn(II), Cd(II), Cu(II) and Fe(III) from hydrochloric acid aqueous solutions (0.1 g/L, HCl 1 M). The extraction capability of [omim+][PF6−] for these metals was the following: Cd (≈70%) > Zn (≈30%) > Fe (≈20%) > Cu (≈5%). The operation with a polymer inclusion membrane does not improve on the results found in the liquid-liquid extraction experiments. We should consider that in the liquid-liquid extraction experiments the ratio of the 1 M hydrochloric acid aqueous solution to the IL phase was 1 to 1 (v/v), whereas in the polymer inclusion membrane experiment the ratio of the feed phase to the ionic liquid phase was approximately 30,000/50 (v/v) (see the stability comments above). The unfeasibility of these dialkyl imidazolium cation-based ionic liquid inclusion polymeric membranes is most likely due to the fact that the amount of extractant phase was insufficient to extract the metals present in the feed phase, and to the larger amount of PVC needed, which hinders transport through the membrane.
Regarding the membrane's stability following the pertraction test, the observed outcome was not as anticipated. Surprisingly, the membrane retained only 48% of the initial ionic liquid, significantly less than expected based on the earlier stability tests. This suggests that a high metal concentration could adversely affect the stability.
Selective Separation through Polymer Inclusion Membrane Based on Methyl Trioctylammonium Chloride, [MTOA+][Cl−]
As mentioned above, with the ionic liquid [MTOA+][Cl−] it was possible to prepare membranes with a higher IL/PVC ratio than that used with [omim+][PF6−]. Figure 5 shows the results of the pertraction test of a solution of Fe(III), Zn(II), Cd(II) and Cu(II), with a concentration of 100 ppm of each, dissolved in 1 M HCl, using pH = 6 milli-Q water as the receiving phase.
As is noticeable in Figure 5a, the extraction of Fe(III) takes place gradually during the first 1100 h of the test; from this point, the concentration of Fe(III) stabilizes in the feed phase, and a high amount of Fe(III) is recovered in the receiving phase. The Fe(III) concentration in the receiving phase increases progressively until it stabilizes at 1100 h of operation. The outcome is deemed highly satisfactory despite the substantial duration of the operation; the kinetic aspects of the metal ion separation are discussed below. Figure 5b shows that practically all the Zn(II) present in the feed solution disappears after about 100 h of operation using the polymeric inclusion membrane based on [MTOA+][Cl−], but this is not matched by the evolution of the Zn(II) concentration in the receiving phase. Figure 5c depicts the concentration profiles of Cd(II) and the pH in both the feed and receiving phases under operating conditions identical to those of the previous experiment, using a polymeric inclusion membrane based on [MTOA+][Cl−].
Interestingly, Cd(II) showed a similar behaviour to Zn(II) (Figure 5b), with the difference that the final concentration of Cd(II) in the receiving phase only reached one-third of that reached by Zn(II) during the first 100 h of operation. In the case of Cu(II) permeation (Figure 5d), a continuous decrease in the feed phase concomitant with a continuous increase in the receiving phase is observed; during the experiment, the concentrations in the feed and receiving phases do not reach equilibrium. As a first observation, we can highlight that much better pertraction results were reached with the membrane based on [MTOA+][Cl−] than with that based on [omim+][PF6−]. This fact could be explained by (i) the higher amount of ionic liquid (active phase) immobilized in the [MTOA+][Cl−] membrane, (ii) the smaller amount of PVC (which is inert) in the [MTOA+][Cl−] membrane and, mainly, (iii) the better results in the extraction of the target metal ions in liquid-liquid experiments, as we demonstrated in previous work. In that work [23], we also studied the selective recovery of Zn(II), Cd(II), Cu(II) and Fe(III) from hydrochloric acid aqueous solutions (0.1 g/L, HCl 1 M) using [MTOA+][Cl−]: the extraction capability of [MTOA+][Cl−] was near 100% for Cd, Zn and Fe and near 80% for Cu, whereas the maximum extraction with [omim+][PF6−] was around 60%, in the case of Cd(II).

It is worth mentioning that when we perform a mass balance between the feed and receiving phases for some of the metal ions, Zn(II) and Cd(II), the total amount of metal in the feed and receiving phases is less than the initial amount of metal ion placed in the feed phase. This decrease in metal ion concentration can be explained by two factors: (i) some of the metal ions have precipitated in either the feed or receiving phase, or (ii) a fraction of the metal ions remains absorbed within the immobilized ionic liquid phase of the membrane. To test these hypotheses, we utilized the software "Medusa v.1" (accessible at https://www.kth.se/che/medusa, accessed on 1 September 2023), which determines the predominant species based on pH and metal ion concentration in the medium. Figure 6 illustrates the diagrams of the predominant areas of all the ions present in the hydrochloric acid solution under study, showing the dominant forms of these metal ions at different concentrations in the medium and pH values (from −1 to 13). Our investigation covered a range of concentrations from log[Mx] = −7 (equivalent to a zero concentration of any metal ion present in either of the two solutions) to log[Mx] = −2, the upper limit set by the maximum concentration that can be observed for any metal ion in either the feed or the receiving solution.
In the case of Zn(II), if we perform a mass balance analysis of the feed and receiving phases, we observe a gradual decrease in the total Zn(II) over time. Referring to the Zn(II) predominance diagram (Figure 6b), we note that Zn(II) begins to precipitate at pH = 7 for high Zn(II) concentrations and at pH > 7.7 for low Zn(II) concentrations. These values were not reached during this experimental procedure. Therefore, the Zn(II) loss detected through the mass balance cannot be attributed to precipitation; instead, it can be attributed to the retention of the metal ion in the polymeric inclusion membrane. As for Zn(II), the mass balance does not close for Cd(II) between the feed and receiving phases. Returning to the predominance area diagram for this ion (Figure 6c) and considering the pH profiles obtained, it can be concluded that the Cd(II) ion does not precipitate in the form of hydroxides; rather, most of it is retained in the polymeric inclusion membrane. These results agree with those of previous studies, where the pertraction of the Cd(II) ion through supported liquid membranes based on [MTOA+][Cl−] [21] showed a behaviour similar to that observed in this study. It is interesting to note that after 1000 h the concentration of Cd(II) in the feed phase increases, probably due to stripping and redissolution of the ion retained in the membrane. Conversely, Cd(II) could be stripped into the receiving phase more extensively in PILIMs than in SILMs [19].
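The membrane-retention argument above amounts to a simple mass balance. The sketch below illustrates it with invented numbers, assuming equal 30 mL phase volumes as in the pertraction cell and no precipitation (as indicated by the predominance diagrams).

```python
# Simple mass-balance sketch for estimating the fraction of a metal ion held
# up in the membrane: whatever is missing from (feed + receiving) relative to
# the initial charge is attributed to membrane retention, assuming no
# precipitation. All concentration values below are hypothetical.
def membrane_retention(c0_feed, c_feed, c_receiving, v_feed=0.030, v_rec=0.030):
    """Concentrations in mg/L, volumes in L. Returns (mg retained, % retained)."""
    m0 = c0_feed * v_feed                       # initial metal charge, mg
    m_now = c_feed * v_feed + c_receiving * v_rec
    retained = m0 - m_now
    return retained, 100 * retained / m0

# Cd(II): 100 mg/L initially, 5 mg/L left in the feed, 30 mg/L in the receiving phase
print(membrane_retention(100, 5, 30))  # -> (1.95 mg, 65.0 %)
```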
Regarding the stability of the polymer inclusion membrane, it was weighed before and after the 2048 h (85 days) test. The percentage of ionic liquid remaining in the membrane after the test was 82% of the initial ionic liquid weight. This demonstrates the high operational stability of the [MTOA+][Cl−]/PVC polymeric inclusion membrane at 70% ionic liquid. It is interesting to note that much more [MTOA+][Cl−] was retained during the pertraction test than after the four washing cycles of the stability test (see the previous section): in the pertraction experiment, the feed and receiving phases became saturated with the ionic liquid, allowing a higher stability of the membrane.
It is also important to point out that the objective of this pertraction process is to achieve the separation of the metal ions in the receiving phase. For this purpose, the pertraction factor (PF) was used to evaluate the recovery of these metal ions in the receiving phase; the pertraction factor can help identify the point at which the operation should be stopped to obtain the maximum separation of a specific metal ion in the stripping phase. Figure 7 shows the evolution of the pertraction factor over time for Fe(III), Zn(II), Cd(II) and Cu(II). The highest PF was reached by Zn(II), because a high concentration of Zn(II) was reached quickly in the receiving phase.
Meanwhile, the concentration of Zn(II) was drastically reduced in the feed phase, possibly due to Zn(II) absorption in the membrane. Cd(II) also reached a high PF for similar reasons: the concentration of Cd(II) in the feed phase was quickly reduced owing to the absorption of Cd(II) in the liquid membrane, while some Cd(II) crossed the membrane towards the receiving phase. The profile observed for Fe(III) is due to the concomitant increase of its concentration in the receiving phase and its reduction in the feed phase, which allows a continuous increase of the PF up to 1000 h. In the case of Cu(II), the PF tends to 1, since the Cu(II) concentrations tend to become equal on both sides of the membrane.
In the case of Fe(III) (Figure 7a), we can see a gradual increase of this parameter over time, reaching a maximum value of around 5 at 1100 h. This point represents the optimum operating time for the removal of this ion. In the case of Zn(II) (Figure 7b), the pertraction factor reaches a maximum of 391 at 500 h, indicating the point at which the separation of Zn(II) between the feed and receiving phases is highest. However, this would not be the optimum point at which to stop the separation operation. Instead, relying on the concentration profiles of the different metal ions (Figure 5), the appropriate time to stop the operation would be 100 h, corresponding to a pertraction factor of approximately 25, which is still acceptable while reducing the operation time considerably.
The maximum Cd(II) pertraction factor, PF = 22, is reached at t = 600 h; lower pertraction factors could be acceptable at shorter times in order to reduce the operation time. In the case of Cu(II) pertraction, the concentration of Cu(II) in the feed phase decreases continuously while it increases in the receiving phase, although this mass transport takes place at a low velocity. After more than 2000 h of operation (about 85 days), only 30% of the ion in the feed phase has passed to the receiving phase, and the concentration of Cu(II) in the receiving phase was never higher than in the feed phase.
As a consequence, as seen in Figure 7, the Cu(II) pertraction factor did not exceed unity. In this case, there is no accumulation of Cu(II) in the membrane, since the decrease in the concentration of this metal ion in the feed phase matches its increase in the receiving phase. Furthermore, the diagram of predominant areas for Cu(II) (Figure 6d) shows that the pH profiles in the experiment do not allow the precipitation of Cu(II), which makes it possible to satisfy the Cu(II) mass balance between the feed and receiving phases. The results obtained for Cu(II) extraction make sense if we consider that in previous studies, carried out under the same conditions as this test, about 80% of the Cu(II) was extracted using [MTOA+][Cl−] in liquid-liquid extraction [23], whereas when SILMs based on [MTOA+][Cl−] were used, Cu(II) could practically not be extracted [19]. In liquid-liquid extraction, the ratio of the 1 M hydrochloric acid aqueous solution to the IL phase was 1 to 1 (v/v) [23]. In the supported ionic liquid membrane experiments, the ratio of the feed phase to the ionic liquid phase was approximately 30,000/80 (v/v) [19]; the amount of occluded ionic liquid was insufficient to extract the metal ions. In the experiment with polymer inclusion membranes based on [MTOA+][Cl−] (this work), the ratio of the feed phase to the ionic liquid phase was approximately 30,000/172 (v/v), more than double that of the supported liquid membranes, which improves the efficiency of the operation and allows a partial extraction, around 30%, of the Cu(II) present in the feed phase. It is important to point out that, as commented above, we studied the separation of Fe(III), Zn(II), Cd(II) and Cu(II) through supported ionic liquid membranes based on [MTOA+][Cl−] in previous work [19]. The feed phase consisted of a hydrochloric acid aqueous solution (HCl, 1 M) of the four metal ions at 0.1 g/L. When the receiving phase was milli-Q water, the pertraction factors were lower than those obtained here with polymer inclusion membranes using the same feed and receiving phases. The selective separation of Cd(II) and Cu(II) with a polymer inclusion membrane based on CTA and the ionic liquids Cyphos IL 101 and Cyphos IL 104 as carriers has also been studied: at low HCl concentration, high extraction of Cd(II) was reached while Cu(II) was almost not extracted into the receiving phase, and increasing the HCl concentration in the feed enhanced Cu(II) extraction and decreased the selectivity coefficient for Cd(II) over Cu(II) [33].
To gain a deeper understanding of the separation efficiency for a mixture containing Zn(II), Cd(II), Cu(II) and Fe(III) metal ions, we calculated the separation factors (α) for the six possible pairs of these metal ions, which serve as indicators of the efficiency of the selective separation of two metals between the feed and receiving phases. If one metal remains predominantly in the feed phase while the other undergoes significant transfer to the receiving phase, high separation factors are obtained. Figure 8 shows the evolution of the separation factors calculated for the six possible pairs of the four metal ions in solution, obtained from the concentrations measured in the feed and receiving phases over the test time. The separation factors reached by the Zn(II)/Cu(II) and Zn(II)/Fe(III) pairs stand out for their high values, with maximums of 1996 and 606, respectively. These high values are due to the fact that Zn(II) can diffuse through the PIM based on [MTOA+][Cl−] and accumulate in the receiving phase. In the case of the Zn(II)/Fe(III) pair, Zn(II) diffuses faster, allowing an effective separation of Zn(II) at 100 h of operation. At this point, the receiving phase could be renewed and thus recover Fe(III), which has slower diffusion kinetics. Recently, there have been advancements in developing polymer inclusion membranes (PIMs) that incorporate phosphonium-based ionic liquids as carriers, alongside o-nitrophenyl octyl ether as a plasticizer and cellulose triacetate as the polymer matrix. This composite material was formulated and applied for the specific purpose of separating Zn(II) from Fe(III), employing 1 mol L−1 hydrochloric acid (HCl) as the stripping phase for Fe(III) and thereby facilitating its removal.
Meanwhile, a significant portion of the Zn(II) ions was effectively retained in the initial feed phase; the separation factor (S Fe(III)/Zn(II)) was 8.85. It should be noted that that work used continuous operation [34].
In the case of the Zn(II)/Cu(II) pair, the selectivity in the extraction of one ion over the other is even higher than that seen for the Zn(II)/Fe(III) pair. Since the diffusion kinetics of Zn(II) through the membrane are much faster than those of Cu(II), Zn(II) can be separated from Cu(II) successfully. For the Zn(II)/Cd(II) and Fe(III)/Cd(II) pairs, relatively high separation factor values, 28.8 and 3.7 respectively, are obtained, mainly because 80% of the Cd(II) present in the feed phase is retained in the membrane while the other ions can diffuse to the receiving phase. For the Cd(II)/Cu(II) pair, a maximum separation factor of 112 is reached because of the low permeability of Cu(II) with respect to Cd(II) at short times. For the Fe(III)/Cu(II) pair, high separation factor values are reached at the end of the experiment, when the highest concentrations of Fe(III) are reached in the receiving phase, while Cu(II) remains mainly in the feed phase owing to its slower diffusion through the [MTOA+][Cl−]-based polymeric inclusion membrane.
The results obtained in the pertraction test with the [MTOA+][Cl−]/PVC inclusion membrane were satisfactory in terms of separation capacity. However, the times required for this operation are very long, considering that after more than 2000 h of testing the concentrations in the feed and receiving phases had not equalized for any of the ions studied; furthermore, at the end of the experiment there was still an HCl gradient of 0.93 M, indicating that equilibrium had not been reached. As commented above, in previous work the selective separation of Fe(III), Zn(II), Cd(II) and Cu(II) through supported ionic liquid membranes was studied. Generally, the separation factors obtained with supported ionic liquid membranes were lower than those obtained with polymer inclusion membranes, but the kinetics through the polymer inclusion membranes were slower than those through the supported ionic liquid membranes [19]. The use of polymer inclusion membranes, which are dense, can increase the metal ion selectivity and the membrane stability; however, it reduces the permeation kinetics with respect to supported ionic liquid membranes, in which the ionic liquid is simply adsorbed on a polymeric material. Consequently, in SILMs the ionic liquid has "more movement ability" than when it is occluded in a dense membrane (PILIMs).
The metal ion fluxes through the PILIMs were calculated from the initial slopes of the metal ion concentration profiles (see Table 1). The highest fluxes were achieved by Zn(II) and Cd(II), at 10.9 and 9.39 mg m−2 h−1, respectively, while lower fluxes were obtained for Fe(III) and Cu(II), at 1.82 and 2.92 mg m−2 h−1, respectively. Recently, the selective separation of Pt(IV), Pd(II), and Rh(III) through polymer ionic liquid inclusion membranes was carried out using different receiving solutions. The PILIM contained the ionic liquid trioctyl(dodecyl)phosphonium chloride (40 wt %), the polymer PVDF-co-HFP (50 wt %), and 2-NPOE as plasticizer (10 wt %). The membranes maintained the pertraction factor and the purity of the extract in the receiving phase over the course of 4 cycles (four weeks), demonstrating that the membrane was relatively stable. The fluxes were 200-1000 mg m−2 h−1 for the more permeable metals, Pt and Pd, respectively, and the transport mechanism was described as ionic exchange [35].
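As an illustration of how such fluxes can be obtained, the sketch below fits the initial slope of a receiving-phase concentration profile and converts it to a flux as J = (dC/dt)·V/A. The data and the exposed membrane area (taken here from the 28 mm internal diameter of the casting ring) are assumptions for the example, not values from Table 1.

```python
# Hedged sketch of the flux estimate described above: fit the initial slope
# of the receiving-phase concentration profile and convert it to a flux,
# J = (dC/dt) * V / A. The data points and the exposed membrane area are
# assumed for illustration only.
import numpy as np

t_h = np.array([0, 24, 48, 72, 96])              # sampling times, h
c_mg_L = np.array([0.0, 1.4, 2.9, 4.2, 5.7])     # receiving-phase conc., mg/L

slope = np.polyfit(t_h, c_mg_L, 1)[0]            # initial slope, mg L^-1 h^-1
V = 0.030                                        # receiving-phase volume, L
A = np.pi * (0.028 / 2) ** 2                     # assumed exposed area, m^2

J = slope * V / A                                # flux, mg m^-2 h^-1
print(round(J, 2))
```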
New alkylimidazolium bromides were tested for the selective separation of Cd(II), Cu(II), Pb(II), and Zn(II) ions from hydrochloric acid aqueous solutions. The most effective ionic liquid was the imidazolium ionic liquid with the longest alkyl chain, which is consequently the most water-insoluble one. The observed permeability order was as follows: Cd(II) > Zn(II) > Pb(II) > Cu(II). Similar to our work, the permeabilities of Cd(II) and Zn(II) were higher than that of Cu(II). Increasing the HCl concentration in the feed solution enhances Cd(II) ion transport but decreases the transport selectivity defined by the relative fluxes [36].
The main limitation of polymer inclusion membranes could be the slow kinetics of ion transport through the membrane; however, in some applications, such as membranes in sensors, these kinetics could be sufficient [28]. It would be interesting to consider, in future studies, different means of decreasing the flow resistance shown by this type of membrane, such as ultrasound or microwaves, in order to improve the permeation rate through the polymeric support.
Regarding the transport mechanism, in the case of [MTOA+][Cl−], which is a quaternary ammonium salt, the extraction mechanism is suggested to involve an ion-exchange step, as hypothesized by Juang et al. [37], who studied the separation of Zn(II) and Cd(II) from chloride solutions using Aliquat 336, and by Wang et al. [38], who studied the separation of the Zn(II)/Fe(III) pair from chloride solutions, also using Aliquat 336. The hypothesized mechanism could be represented, for a generic anionic chlorometalate complex [MCl4]2−, by the following chemical equation:

2 [MTOA+][Cl−](org) + [MCl4]2−(aq) ⇌ [MTOA+]2[MCl4]2−(org) + 2 Cl−(aq)

In contrast, for imidazolium ionic liquids, and particularly [omim+][PF6−], it was demonstrated [18] that an increase in HCl concentration enhances the extraction efficiency for Zn(II), Fe(III), and Cd(II); this behavior was not observed in the case of Cu(II). This observation suggests that hydrochloric acid could play a role in the extraction of metal ions by imidazolium-based ionic liquids. In the present experimental context, the immobilized [MTOA+][Cl−] extracts Fe(III) under different HCl concentrations in the feed and receiving phases, allowing a higher concentration of the metal ion in the receiving phase than the equilibrium concentration (50% in each phase). Hence, it is likely that the extraction process is influenced by the presence of hydrochloric acid in the feed phase, so that the compound formed by the metal ions, the IL and hydrochloric acid in the feed phase could dissociate in the receiving phase owing to its lower HCl concentration. Consequently, the transport driving forces are determined by the differences in both metal ion concentration and HCl concentration between the feed and receiving phases. A transport of HCl from the feed to the receiving phase was observed (Figures 4a and 5c), which could be due to the formation of ion pairs with the IL and the metal ions or to the transport of free HCl through the membrane. Considering the comments above, the ionic nature of ionic liquids can result in various extraction mechanisms, including solvent ion-pair extraction facilitated by HCl, ion exchange, or a simultaneous combination of both.
Polymeric inclusion membranes based on [omim+][PF6−] and [MTOA+][Cl−] are more stable towards aqueous phases than the analogous supported ionic liquid membranes. Furthermore, it is possible to immobilize larger amounts of ionic liquid in [omim+][PF6−] and [MTOA+][Cl−] polymer inclusion membranes than in the analogous supported ionic liquid membranes. The optimum IL/PVC ratio depends on the nature of the ionic liquid. Despite obtaining very stable polymeric inclusion membranes based on the [omim+][PF6−] ionic liquid at an IL/PVC ratio of 30/70, these membranes failed in the separation of the mixture of the metals under study. On the other hand, with the ionic liquid [MTOA+][Cl−] and PVC as base polymer, membranes with an IL/PVC ratio of 70/30
Figure 2. Profile of the weight losses of ionic liquid from the PILIMs after each cycle in contact with fresh water for different IL/PVC ratios: (A) [omim+][PF6−]/PVC; (B) [MTOA+][Cl−]/PVC.
Figure 3. Profile of the weight losses of ionic liquid from PILIMs after each cycle in contact with the hydrochloric acid solution for different ionic liquids: (A) [omim+][PF6−]/PVC (30/70); (B) [MTOA+][Cl−]/PVC (70/30).
Figure 4. Metal ion concentrations and pH profiles in the feed and receiving phases in the pertraction of Fe(III) (a), Zn(II) (b), Cd(II) (c) and Cu(II) (d) through a PILIM based on [omim+][PF6−]/PVC with 30% of IL. The feed phase was a mixture of the four metal ions at 100 mg/L in HCl (1 M). The receiving phase was milli-Q water at pH = 6.
Figure 5. Metal ion concentrations and pH profiles in the feed and receiving phases in the transport of Fe(III) (a), Zn(II) (b), Cd(II) (c) and Cu(II) (d) through a PILIM based on [MTOA+][Cl−]/PVC with 70% of IL. The feed phase was a mixture of the four metal ions (100 mg/L each) in HCl (1 M). The receiving phase was milli-Q water at pH = 6.
Figure 8. Profile of the separation factors for the pairs Zn(II)/Fe(III), Zn(II)/Cd(II), Zn(II)/Cu(II), Fe(III)/Cd(II), Cd(II)/Cu(II) and Fe(III)/Cu(II) using a PILIM based on [MTOA+][Cl−]/PVC with 70% of IL. The feed phase consists of a mixture of the four metal ions (100 mg/L each) in HCl (1 M). The receiving phase was milliQ water at pH = 6.

The separation factors reached by the Zn(II)/Cu(II) and Zn(II)/Fe(III) pairs stand out for their high values, with maximums of 1996 and 606, respectively. These high pertraction factors are due to the fact that Zn(II) can diffuse through the PIM based on [MTOA+][Cl−] and accumulate in the receiving phase. In the case of the Zn(II)/Fe(III) pair, Zn(II) would diffuse faster, allowing an effective separation of Zn(II) at 100 h of operation. | 2023-09-16T15:18:09.810Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "1c8141f657acb9e82509aaad3debc22d338ae2cb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0375/13/9/795/pdf?version=1694660207",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a1761f4259e7cca38158313e635386d117771d8",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": []
} |
91555261 | pes2o/s2orc | v3-fos-license | Development of a Comprehensive Antibody Staining Database Using a Standardized Analytics Pipeline
Large-scale immune monitoring experiments (such as clinical trials) are a promising direction for biomarker discovery and responder stratification in immunotherapy. Mass cytometry is one of the tools in the immune monitoring arsenal. We propose a standardized workflow for the acquisition and analysis of large-scale mass cytometry experiments. The workflow includes two-tiered barcoding, a broad lyophilized panel, and the incorporation of a fully automated, cloud-based analysis platform. We applied the workflow to a large antibody staining screen using the LEGENDScreen kit, resulting in single-cell data for 350 antibodies over 71 profiling subsets. The screen recapitulates many known trends in the immune system and reveals potential markers for delineating MAIT cells. Additionally, we examine the effect of fixation on staining intensity and identify several markers where fixation leads to either gain or loss of signal. The standardized workflow can be seamlessly integrated into existing trials. Finally, the antibody staining data set is available as an online resource for researchers who are designing mass cytometry experiments in suspension and tissue.
INTRODUCTION
Immune monitoring (IM) is a systems biology approach for the quantitative evaluation of the state of the immune system (1,2). Changes in hematopoietic cell subset composition and in the cytokines and other proteins these cells produce can indicate the nature and severity of the stress the body is confronting. These immune correlates establish measurable proxies to the hidden details of disease or the effects of treatment, and promise to become a central component of clinical research (3). Mass cytometry, which can measure over forty parameters per single cell (4,5), has potential applications for IM in a wide variety of contexts, including cancer (6), allergy (7,8), infectious diseases (9)(10)(11)(12), trauma (13), organ transplantation (14,15) and neonatal development (16). Furthermore, there is growing interest in incorporating mass cytometry into large studies such as clinical trials through the Cancer Immune Monitoring and Analysis Centers (CIMAC) and Partnership for Accelerating Cancer Therapies (PACT) initiatives.
Any large-scale study will introduce challenges such as sample quality control, batch effects, and inter-operator variability. There are a plethora of methods to address potential data quality issues in mass cytometry. These include the incorporation of normalization beads into the sample (17), reduction of technical variability and doublets through multi-sample barcoding (18,19), measurement of batch effects using spiked-in references (20), compensation of signal spillover across different masses (21), and others. However, despite the well-developed ecosystem, there is no clear standard on how to run a large-scale mass cytometry study, and researchers are often forced to reinvent the wheel by designing experiments de novo with no clear guidance on best practices.
The situation is even more problematic in the computational biology arena. Numerous mass cytometry analysis methods have been published. These can be broadly classified into one of two categories. Clustering algorithms, such as SPADE (22), PhenoGraph (23), and FlowSOM (24), group cells together based on marker expression patterns. Dimensionality reduction algorithms, such as t-SNE (25, 26), embed the single cell data in a two-dimensional map that can be more easily visualized. These approaches require the operator to review their output and label cells based on his or her judgement. Despite the existence of automatic methods (27), attempts to provide streamlined analysis workflows (28) and online tools such as Cytobank (PMID: 24590675), identifying appropriate analysis methods in large scale IM studies remains a challenge, and many users resort to manual gating (29), which is time consuming, error prone, susceptible to operator bias, and not easily scalable.
Finally, the insights gained from mass cytometry ultimately depend on the antibodies used in a given staining panel, and as with any other antibody-guided assay, antibody selection is a central component of mass cytometry experiment design. While there is some consensus on appropriate markers to identify major circulating immune subsets (30), much of the potential of mass cytometry is in its ability to characterize the roles of less-studied markers (31)(32)(33) and, by extension, in identifying relevant biomarkers for immunotherapy. However, there have been no systematic studies of the expression of a broad set of markers across a broad set of cell subsets to help guide antibody selection in IM studies. This problem is further exacerbated for studies involving fixed samples, since fixation can alter surface epitopes and unpredictably change antibody expression patterns (34). A comprehensive catalog of antibody staining expression patterns across immune cells would represent a valuable resource to establish a starting point for marker selection and panel design.
In order to address the above, we developed a streamlined mass cytometry pipeline that combines a lyophilized antibody panel, two-tier barcoding, efficient batched sample acquisition and a novel cloud-based analytics service. We applied this efficient sample and data processing pipeline to screen the expression of 326 antibodies across all major peripheral blood mononuclear cell (PBMC) subsets from multiple donors on both fresh and fixed cells. This represents one of the largest mass cytometry data sets to date, with approximately 63 million events acquired over a month of operation. The workflow incorporates multiple mechanisms that address and monitor intra- and inter-sample variability, quality control, standardization and automation. The result is a comprehensive antibody staining data set, which screens marker expression in every major immune subset on a single-cell level. These antibody expression data have been made available as an interactive companion website at https://www.antibodystainingdataset.com. This represents a powerful resource that allows researchers to quickly identify potential markers for inclusion in novel mass cytometry studies. Finally, the overall workflow represents a systematic framework that can readily be applied for performing IM in large experiments such as clinical trials.
Samples and Processing
Peripheral blood mononuclear cells (PBMCs) for the primary LEGENDScreen experiment were isolated by Ficoll gradient centrifugation from leukapheresis products derived from 3 independent de-identified donors (New York Blood Center). Additional validation experiments used blood collected from consented healthy donors under an existing IRB protocol at the HIMC. For the primary screen experiment, approximately 120 million cells from each donor were incubated for 20 min at 37 °C in RPMI media containing 10% FBS, 1 µM Rh103 to label dead cells and 50 µM IdU to label actively cycling cells. The samples were then washed, Fc-blocked (FcX, Biolegend) and stained for 30 min on ice with a lyophilized core antibody cocktail comprised of markers to allow identification of all major immune subsets (Supplementary Table 1). All the antibodies in the core panel were conjugated in-house using X8 MaxPar conjugation kits (Fluidigm), and the titrated panel was lyophilized and dispensed as single test aliquots (Biolyph). The reconstituted panel was filtered through a 0.1 micron Amicon filter prior to use.
After staining, the samples were then divided into two aliquots, one of which was fixed with freshly diluted 1.6% formaldehyde in PBS for 20 min, while the other was left untreated. Each of the 6 samples was then barcoded using a combinatorial CD45-based barcoding scheme (Figure 1), allowing the 6 treatments to be combined as a single sample. This pooled sample of ∼300 million cells was then evenly distributed across each of the 372 wells of a LEGENDScreen kit (BioLegend) containing reconstituted PE antibodies (Supplementary Table 2), and incubated for 30 min on ice. Cells from each well were then washed and fixed with 1.6% formaldehyde in PBS for 20 min. To reduce the overall number of samples and facilitate subsequent processing and data acquisition, the samples were washed with barcode permeabilization buffer (Fluidigm), and sets of 10 wells were barcoded and pooled using a combinatorial palladium-based barcoding strategy (Figure 1) (18,35). The pooled samples were then washed and stained with saturating concentrations of 165Ho-conjugated anti-PE antibodies. The samples were then washed and incubated in freshly diluted 2.4% formaldehyde containing 0.02% saponin, 125 nM Ir intercalator (Fluidigm) and 300 nM OsO4 (ACROS Organics) for 30 min. The samples were then washed, frozen in FBS containing 10% DMSO and stored at −80 °C until acquisition.
Data Acquisition and Initial Data Processing
Samples were thawed immediately prior to acquisition, washed once in PBS, once in CAS buffer (Fluidigm) and then resuspended in CAS buffer containing a 1/20 dilution of EQ normalization beads (Fluidigm). Following routine instrument tuning and optimization, the samples were run at an acquisition rate of <300 events per second on a Helios mass cytometer (Fluidigm) modified with a wide-bore injector (Fluidigm). Upon completion of the acquisition, FCS files associated with each barcoded batch of wells were concatenated and normalized using the bead-based normalization algorithm in the Fluidigm software resulting in 38 FCS files.
Mass Cytometry Data Analysis
FCS files were uploaded to the Astrolabe Cytometry Platform (Astrolabe Diagnostics, Inc.) where transformation, debarcoding, cleaning, labeling, and unsupervised clustering were performed. Data were transformed using arcsinh with a cofactor of 5, and all marker intensities presented in the paper are post-transformation. Batches were debarcoded using the Ek'Balam algorithm (see below), resulting in 2,232 individual samples, each corresponding to one (donor, treatment, antibody) combination. Data from 12 antibody wells were excluded due to insufficient cell recovery or ambiguous barcoding resulting from known pipetting errors during sample preparation, leaving 2,160 samples. For batches 23, 25, and 34, between 50 and 75% of events were removed due to loss of stability, as described in the main text.
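For readers unfamiliar with this transform, the following is a minimal sketch of the arcsinh scaling described above. Python is used purely for illustration; the actual pipeline runs inside the Astrolabe platform, and the function name is ours.

```python
import numpy as np

def arcsinh_transform(raw, cofactor=5.0):
    """Arcsinh scaling with a cofactor, the standard mass cytometry
    transform: approximately linear near zero, logarithmic at high counts."""
    return np.arcsinh(np.asarray(raw, dtype=float) / cofactor)

# Example: raw dual counts from one channel
print(arcsinh_transform([0, 10, 100, 1000, 10000]))
```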
The individual samples were then labeled using the Ek'Balam algorithm (Supplementary Table 3). Each cell subset was clustered using the profiling step in Astrolabe (see below). For the purpose of the Ek'Balam algorithm, gdTCR intensities were compensated by 1.9% of CD8 intensity due to known signal spillover, caused by oxide formation, from the 146Nd-CD8 channel into the 162Dy gdTCR channel. Platform output was downloaded in the form of R Programming Language RDS files (36) for manual follow-up analysis. Figures were generated using ggplot (37). To evaluate the quality of the debarcoding, clustering and annotation in Astrolabe and to perform independent analyses, a subset of samples was processed in parallel using a Matlab-based debarcoding algorithm (19) and uploaded to Cytobank for manual gating of major immune subsets.
The Ek'Balam Algorithm
Ek'Balam is a hierarchy-based algorithm for labeling cell subsets which combines the strength of a knowledge-based gating strategy with unbiased clustering. It receives a user-defined subset hierarchy which details gating rules such as "Cells which are CD3+ are T Cells." Subsets can branch through additional rules, for example, "T Cells which are CD4+ are CD4+ T Cells." The hierarchy is organized into levels which correspond to parallel steps when gating. For example, the first level could include "CD3+ are T Cells," "CD19+ are B Cells," and "CD33+ are Myeloids." Ek'Balam then iterates over the levels. At each iteration, the data is clustered with FlowSOM (24), using only the markers that appear in the rules of that level. Each cluster is then labeled according to the rules of that level. Labeling is done by optimizing the Matthews Correlation Coefficient (MCC) over the clusters and marker intensity values with a greedy algorithm. The process continues until all cells are assigned to a label which has no rules branching out of it. A formal definition of the algorithm is provided in the supplement.
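As a rough illustration of the level-by-level logic (not the actual implementation, which uses FlowSOM and greedy MCC optimization as described in the supplement), the sketch below substitutes k-means for FlowSOM and a simple median-threshold rule check for the MCC step. The hierarchy, rule format, threshold, and all names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical two-level hierarchy; a rule here means "the cluster's median
# for each listed marker must exceed the positivity threshold"
# (arcsinh-transformed units).
HIERARCHY = [
    {"T Cells": ["CD3"], "B Cells": ["CD19"], "Myeloid": ["CD33"]},
    {"CD4+ T Cells": ["CD4"], "CD8+ T Cells": ["CD8"]},
]

def label_level(data, rules, threshold=1.0, k=8):
    """Cluster on this level's markers only, then assign each cluster the
    first subset label whose rule its marker medians satisfy.
    `data` maps marker names to NumPy arrays of transformed intensities."""
    markers = sorted({m for ms in rules.values() for m in ms})
    X = np.column_stack([data[m] for m in markers])
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    labels = np.array(["Unassigned"] * X.shape[0], dtype=object)
    for c in range(k):
        mask = clusters == c
        for name, ms in rules.items():
            if all(np.median(data[m][mask]) > threshold for m in ms):
                labels[mask] = name
                break
    return labels

# The real algorithm then recurses: cells assigned to each parent label are
# re-clustered on the next level's markers before that level's rules apply.
```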
Cell Subset Profiling
Profiling refers to a variation of unsupervised clustering using the FlowSOM algorithm. The variant differs from classic FlowSOM in two significant aspects. One, each cell subset is clustered separately. This guarantees that the output will not include biologically irrelevant clusters that combine multiple cell subsets. Two, the clusters are labeled according to the markers that differentiate between them the most, according to the MCC. The labeling makes the output more accessible to the researcher by providing an initial intuition about the differences between the clusters. A formal definition of the profiling algorithm is provided in the supplement.
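A hedged sketch of this per-subset profiling idea follows; again, k-means stands in for FlowSOM, and the gap-in-medians score used to name clusters is a simplification of the MCC criterion described above (all function and variable names are ours).

```python
import numpy as np
from sklearn.cluster import KMeans

def profile_subset(data, labels, subset, markers, k=4):
    """Cluster one labeled subset on its own (so clusters never mix
    subsets), then name each cluster by the marker whose median differs
    most between the cluster and the rest of the subset."""
    mask = labels == subset
    X = np.column_stack([data[m][mask] for m in markers])
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    names = {}
    for c in range(k):
        inside, outside = X[clusters == c], X[clusters != c]
        gaps = np.abs(np.median(inside, axis=0) - np.median(outside, axis=0))
        j = int(np.argmax(gaps))  # most-separating marker for this cluster
        hi = np.median(inside[:, j]) > np.median(outside[:, j])
        names[c] = f"{subset} {markers[j]}{'hi' if hi else 'lo'}"
    return clusters, names
```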
Relevance Metrics
The following metrics were employed when comparing the computational debarcoding and labeling results to manual methods. Metrics were calculated for each class separately, where class is either a barcode (for debarcoding) or a cell subset (for labeling). The class was set as the target and all other classes as not-target. In all cases, the manual method is assumed to be the correct solution.
TP, FP, TN, and FN are true positive, false positive, true negative, and false negative, respectively.
Precision is the frequency of correctly classified target events out of all events classified as target, or TP / (TP + FP).
Recall is the frequency of correctly classified target events out of all target events, or TP/(TP + FN).
The F1 score is the harmonic mean of precision and recall, or F1 = 2 × (Precision × Recall) / (Precision + Recall).
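In code, these three per-class metrics can be computed as follows (an illustrative helper, not part of the Astrolabe platform):

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class relevance metrics as defined above; the manual method is
    treated as ground truth when tallying TP, FP, and FN."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 90 events correctly called as the target class, 10 false calls,
# 5 target events missed
print(precision_recall_f1(90, 10, 5))  # (0.9, ~0.947, ~0.923)
```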
Average Overlap Frequency (AOF)
FIGURE 1 | A standardized workflow for mass cytometry experiments and its implementation in generating a comprehensive antibody staining reference. (A) Blood was acquired from three healthy donors and stained with a lyophilized panel of 21 metal-conjugated antibodies to allow identification of major immune cell types. The samples were split into two treatments, fresh and formaldehyde-fixed. Each donor and treatment pair was barcoded using a combination of two out of four CD45 channels. Samples were divided between the four 96-well plates of the LEGENDScreen antibody panel. Finally, the antibodies were organized into batches of ten samples each, which were in turn barcoded using a combination of two out of five palladium channels. (B) The 38 batches were acquired using a Helios instrument over a period of 5 weeks, leading to approximately 63 million events. (C) Samples were automatically debarcoded and tested for quality control using the Average Overlap Frequency (AOF), and immune populations were clustered, annotated, analyzed and visualized using the Astrolabe Cytometry Platform.

The average overlap frequency is a metric of staining and clustering quality of a given marker (38). It assumes that the marker has two modalities, denoted negative and positive. The AOF is a value between 0 and 1, where 0 is complete separation between the modalities and 1 is complete overlap, and is defined as

AOF = ½ (|X−h| / |X−| + |X+l| / |X+|),

where X− is the values of all events in the negative modality, X+ is the values of all events in the positive modality, X−h is the negative values that are greater than the 5th percentile of a normal distribution with the mean and standard deviation of X+, and X+l is the positive values that are lower than the 95th percentile of a normal distribution with the mean and standard deviation of X−.
Given a set of samples, we can extend the AOF into a sample quality score by calculating the Scaled AOF for each (marker, sample) pair, where m indexes over markers and i indexes over samples, and then calculating the Quality AOF for each sample over its markers.
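The following sketch implements the per-marker AOF as reconstructed above, approximating the 5th and 95th normal percentiles with the usual ±1.645 standard deviation offsets. It assumes the two modalities have already been assigned, and is our illustration rather than the reference implementation from (38).

```python
import numpy as np

def aof(neg, pos):
    """Average Overlap Frequency for one marker (0 = fully separated
    modalities, 1 = fully overlapping). Counts negatives above the 5th
    percentile of a normal fit to the positives, and positives below the
    95th percentile of a normal fit to the negatives, then averages the
    two overlap fractions."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    lo_pos = np.mean(pos) - 1.645 * np.std(pos)   # 5th pct of N(mean+, sd+)
    hi_neg = np.mean(neg) + 1.645 * np.std(neg)   # 95th pct of N(mean-, sd-)
    neg_h = np.mean(neg > lo_pos)   # negatives intruding into positive range
    pos_l = np.mean(pos < hi_neg)   # positives intruding into negative range
    return 0.5 * (neg_h + pos_l)
```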
Percent Positive Events
For each (profiling subset, antibody) pair, the percent of positive events is the percent of events whose intensity is greater than the 99th percentile of all events in the Blank LEGENDScreen well (well A1 in plate 1, see Supplementary Figure 3). This well does not include any PE-conjugated antibodies, so its intensity distribution serves as a background for anti-PE measurement using the Helios. In order to assess the potential effect of the isotype control on the baseline, we calculated an alternative percent positive based on the 99th percentile of the respective isotype for each antibody. The correlation between the Blank-based and the isotype-matched percent positive values was 0.94 and the median difference was 1%. Due to this minor difference, we decided to use the same Blank 99th percentile for all antibodies.
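A minimal sketch of this calculation (illustrative only; the cutoff convention follows the description above):

```python
import numpy as np

def percent_positive(subset_intensities, blank_intensities):
    """Fraction of events in a profiling subset whose anti-PE intensity
    exceeds the 99th percentile of the Blank well (no PE antibody)."""
    cutoff = np.percentile(blank_intensities, 99)
    return 100.0 * np.mean(np.asarray(subset_intensities) > cutoff)
```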
Design of an Integrated Pipeline for the Acquisition and Analysis of Large Immune Monitoring Experiments
Conducting a large-scale immune monitoring experiment over a long period of time using mass cytometry raises several challenges. One, it is imperative to monitor instrument performance and evaluate sample data quality to identify transient fluctuations in instrument performance, which can manifest as diminished staining for one or more markers or higher-than-usual debris or doublet counts. Two, batch effects due to experimental or instrument variation can be a significant concern. While researchers should always be aware of how technical sources could lead to variation, this is especially pertinent when data are gathered and acquired over weeks or months. Experiment design should therefore include mechanisms that detect both types of failures and alert the researcher appropriately. Finally, the role of human operators should be minimized in order to reduce human-introduced variability. Decision making should follow a clear protocol or be entrusted to computational methods. The antibody expression data set described in this study integrates multiple techniques to maximize experimental and technical reproducibility and streamline data acquisition and analysis (Figure 1). Peripheral blood mononuclear cell (PBMC) samples from three healthy donors (Figure 1A) were stained with a 21-marker antibody panel comprised of markers to unambiguously identify all the major immune compartments: B Cells, myeloid cells, NK Cells, and T Cells, together with further granularity for subsets within these compartments (such as CD16+/− monocytes or naive vs. transitional B Cells). This core antibody panel was lyophilized as a single cocktail and the same batch was used throughout sample acquisition to minimize experimental variability due to reagents or pipetting. The panel only utilizes a subset of the channels available in mass cytometry, allowing researchers to incorporate an additional 10-15 markers to address experiment-specific questions.
Following initial core antibody panel staining, the samples were split into two groups to evaluate the impact of fixation on each of the antibody epitopes subsequently evaluated in this screen. This design also typifies a common experimental design where a treatment (fixation) is compared to control (fresh samples). The six patient x treatment combinations were barcoded and pooled using a live cell-compatible doublet-free barcoding strategy leveraging CD45 antibodies conjugated to 4 distinct isotopes. This barcode approach streamlines sample processing and minimizes potential variability due to acquiring different patients or treatments at different times. The isotopes used for barcoding were specifically chosen to ensure that potential spillover due to isotopic impurities or oxide formation from these barcoding channels would not influence any of the other antibody channels being measured in this experiment. Next, the samples were evenly distributed across each of the 372 wells of a LEGENDScreen kit, each of which includes a PE-conjugated antibody against a distinct epitope. Following this with a metal-conjugated anti-PE antibody enabled the measurement of a comprehensive set of surface markers across all the cell subsets identified by the broad lyophilized panel. Finally, to streamline data acquisition, sets of 10 wells were further barcoded and combined using a combinatorial strategy leveraging five palladium channels.
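The choose-two-of-N logic behind both barcoding tiers is simple to express in code; the sketch below shows why the scheme yields the 6 donor x treatment codes and 10 well codes per batch used in this study. The channel names are placeholders, not the isotopes used experimentally.

```python
from itertools import combinations

# Placeholder channel names -- the study's actual isotope choices were made
# specifically to avoid spillover into measurement channels.
cd45_channels = ["CD45-A", "CD45-B", "CD45-C", "CD45-D"]  # 4 channels
pd_channels = ["Pd-1", "Pd-2", "Pd-3", "Pd-4", "Pd-5"]    # 5 channels

# Exactly-two-of-N codes double as doublet filters: a valid singlet must be
# positive in exactly two channels of each tier.
cd45_codes = list(combinations(cd45_channels, 2))  # C(4,2) = 6 donor x treatment codes
pd_codes = list(combinations(pd_channels, 2))      # C(5,2) = 10 wells per batch

print(len(cd45_codes), len(pd_codes), len(cd45_codes) * len(pd_codes))  # 6 10 60
```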
The resulting 38 batched samples were then acquired using a Helios mass cytometer (Figure 1B). Acquisition required around 400 h of instrument time over 5 weeks of operation and resulted in a total of 63 million events. Analyzing such a large amount of data manually would have been time-consuming and risked operator-introduced variability. To avoid these two issues, we employed the standardized Astrolabe Cytometry Platform to debarcode and clean the data, label cell subsets, and conduct unsupervised clustering (Figure 1C). The Astrolabe analysis took 24 h, and the platform's "Analysis" export was employed in all follow-up analyses.
Debarcoded Sample Data Is Robust and Consistent Across the Screen Samples
The antibody staining data set involves a high number of samples, complex experiment design, a long acquisition period, and advanced computational analysis, any of which could potentially introduce variability or other artifacts. Several tests inspect the various stages of the experiment (Figure 2). First and foremost, accurate debarcoding is critical for all follow-up analyses. This step is especially challenging due to the two-tiered barcoding scheme employed: CD45-based barcoding of patient x treatment and palladium-based barcoding of each batch of 10 LEGENDScreen antibodies. Astrolabe correctly identifies all 60 codes, and their channel profiles are distinct and follow the expected design (Figure 2A). In order to validate the computational debarcoding approach, the results were compared to manually-debarcoded data for one of the batches. The two methods showed high concordance according to four different statistical metrics (Figure 2B), supporting the use of the more efficient computational approach to debarcode all 2,232 samples.
The starting point for the data set was blood from three healthy donors. After the fixed vs. fresh treatment and the introduction of the kit's antibodies, each of these individuals gives rise to several hundred different samples. However, the individual donor immune profiles across each set of samples are expected to be identical and therefore the acquired data should be highly comparable. This is reflected in the principal component analysis (PCA) map over the sample cell subset frequencies (Figure 2C). The samples are distributed across three well-separated islands. Each island corresponds to one individual, signifying that the immune profile is consistent throughout acquisition.
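A sketch of this consistency check (PCA over the matrix of per-sample subset frequencies; illustrative only):

```python
import numpy as np
from sklearn.decomposition import PCA

def sample_map(frequencies):
    """2D PCA embedding of samples from their cell subset frequency
    vectors (rows = samples, columns = subsets). Samples from the same
    donor are expected to form a single island."""
    return PCA(n_components=2).fit_transform(np.asarray(frequencies, float))
```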
We further applied Average Overlap Frequency (AOF) as a metric to evaluate individual marker staining quality across all sample batches (38). This QC step identified issues with staining of multiple markers in three of the batches (Supplementary Figure 1A). Further inspection of the score highlighted several problematic markers (Supplementary Figure 1B). Evaluation of the single-cell data for one of these markers, CD27, revealed a time-dependent increase in background staining resulting in reduced marker resolution over time, which we attribute to a Helios instrument malfunction during acquisition (Supplementary Figure 1C). However, restricting analysis to only the events in the first quarter of the acquisition window for these batches resulted in AOF values within the range of other batches, allowing recovery of valid antibody screening data despite the technical issues (Supplementary Figure 1D). The rapid identification, isolation, and resolution of these technical artifacts was facilitated by a standardized quality control approach using the well-defined AOF metric. Except for the batch effects identified by the AOF QC, the data set was consistent across cell subsets and marker intensities (Figure 2D). For four major cell subsets (from top to bottom: T Cells, B Cells, NK Cells, and CD14+ Monocytes), we examined the frequency in each sample (top panel of each, ordered by batch). Subset frequency shows very little variation across all the samples of a given donor. Additionally, the distribution of the canonical marker of each subset (CD3, CD19, CD56, and CD14, respectively) is also consistent across the samples (bottom panel of each, one box for each batch).
The combination of the above quality control measures highlights the overall robustness of the antibody staining data set. The overall staining data were cohesive for each donor, and for each cell subset across donors, and specific acquisition issues were identified and addressed using automated QC metrics.
The Astrolabe Platform Correctly Labels Cell Subsets and Provides Meaningful Unsupervised Clustering
The Astrolabe platform automatically labeled canonical immune cell subsets (Figure 3). As with debarcoding, it is imperative to verify that automated cell annotation methods correspond to historical definitions by calculating the overlap with manual gating. The Matthews Correlation Coefficient (MCC) between the two methods was >0.8 for almost all of the cell subsets ( Figure 3A). Biaxial plots of canonical markers further reinforced the overlap ( Supplementary Figures 2A-C). Four of the subsets had a score lower than 0.8, which indicated some discrepancy between computational labeling and manual gating. In all four cases, the disagreement was due to subjective thresholding of a specific marker (Supplementary Figure 2D): these are cases where the exact marker intensity threshold for a given subset is ambiguous, such as where to draw the line on CD24 to distinguish Naive and Transitional B Cells. Importantly, the automated approach allowed consistent thresholding across all samples in these ambiguous cases, avoiding potential human subjectivity and variability in assigning gates across samples.
The marker intensity profiles for each of the subsets labeled by the platform largely follow the consensus HIPC definitions [ Figure 3B, (30)]. Astrolabe consistently identified 11 T Cell subsets (including CD4+ and CD8+ T cells, and Naive, EMRA, EM and CM subsets within each), 6 B Cell subsets, several myeloid subsets, NK Cell subsets, granulocytes, and NKT Cells. Examining cell subset frequencies across the three donors highlighted clear variability in their respective immune profiles (Figure 3C), which further reinforces the previous PCA results.
The discovery of novel cell subsets defined by previously unappreciated marker expression patterns is one of the most exciting promises of high-complexity cytometry such as mass cytometry. While cell subset labeling follows established trends, unsupervised clustering has the potential to unearth previously unknown signals. Astrolabe includes a profiling step, where each defined cell subset is clustered separately (Figure 3D). The number of clusters is decided via a heuristic which depends on the number of cells in each subset and on marker heterogeneity. In the antibody staining data set, the platform returns 71 profiling subsets, which are then labeled according to the marker or markers that provide the greatest separation between them. Notably, several CD8+ T Cell subsets are broken down based on CD161, suggesting MAIT-like T Cells (39). Naive B Cells are differentiated based on IgD, while NK Cells are broken up according to CD8. Similar to the canonical cell subsets, profiling subset frequencies vary between the three donors ( Figure 3E), hinting at a wider heterogeneity within the population.
The Antibody Staining Data Set Defines Expression Patterns of Hundreds of Surface Markers Across 71 Cell Subsets
With 350 measured antibodies over 71 profiling subsets, the antibody staining data set is a rich source of information about expected expression patterns in a healthy immune system. In order to provide an initial view into the full expression dataset, we calculated two metrics for each profiling subset and antibody combination (Figure 4A, Supplementary Figure 3). The first metric is the median marker intensity, which is most useful in defining expression of markers that show a unimodal distribution within a given subset. To better reflect bimodal expression patterns, or those in which only a subset of cells are positive for a given marker, we used a blank well that lacked any PE primary antibody to establish a baseline for the second metric, percent positive cells. We set an arbitrary cutoff at the 99th percentile of the blank well and defined any cell above this value as positive for the marker. The resulting heat map provides two separate summary statistics of marker expression over all profiling subsets.
Focusing on any specific section of the heat map reveals a plethora of relevant patterns. The top of the map is populated with well-established markers (Figure 4B) such as CD7, which is present on all T Cell and NK Cell profiles, and CD11b, which is most highly expressed by monocytes. This section also highlights a limitation of the data set with CD5: while this is generally considered a pan-T cell marker, the screen only showed expression on Naive CD4+ T Cells, and not any other CD4+ T Cells. This idiosyncratic staining pattern could be due to many potential reasons, such as limitations of the LEGENDScreen kit, the antibody clone used, or specifics of the Helios protocol that we employed. This serves as an important reminder to researchers who are looking to utilize this resource: as with any other biological screen, specific signals should be further validated before being relied upon.
Lower sections of the heat map allow investigation of many surface markers that appear less frequently in the scientific literature (Figure 4C). Notably, the screen reproduces the expression of CD180 in B Cells (40) and the expression of CD193 in basophils (41), while revealing new potential patterns such as the expression of CD181 by granulocytes and basophils. Additionally, many markers are expressed by myeloid cell subsets to some degree. It remains to be seen whether this is an artifact of the experimental technique employed here, or whether there is a high degree of myeloid cell heterogeneity that still remains to be defined. This trend continues throughout the heat map (Figure 4D), as do some more elusive signals, such as CD371, which has a checkered expression pattern across diverse and seemingly unrelated profiling subsets.
In order to provide some outside validation for the dataset, we conducted a second independent LEGENDScreen experiment using PBMCs from a fourth donor and compared marker medians (Figure 4E) and percent positive (Figure 4F) between the two experiments (Supplementary Table 4). Together, these tests show that the trends seen in this data set are generalizable. With that said, unlike the main data set, CD5 is uniformly expressed across all T Cell subsets in the validation data (Supplementary Figure 4), further reinforcing the importance of validating screens. Examining the set of markers that are distant from the diagonal does not reveal any clear trends, and it is possible that they are a result of donor-specific differences, technical variation between the experiments, or random noise.
Several Markers Are Differentially Expressed Between CD161+ and CD161-CD8+ T Cells
This comprehensive antibody resource offers opportunities to identify markers to further interrogate or stratify specific immune cell subsets. As a proof of principle of this approach, we leveraged the inclusion of CD161 in the core antibody staining panel, a marker that is highly expressed on mucosal associated invariant T (MAIT) cells (42). MAIT cells are a subset of T cells that display innate-like qualities (43), including an invariant TCRα chain (44) and an inherent capacity to respond to infection (45). The Astrolabe profiling identified CD161hi and CD161lo subsets for both Central Memory (CM) CD8+ T Cells and Effector Memory (EM) CD8+ T Cells (Figure 3D). These profiling subsets were further explored for differential marker expression trends (Figure 5). Comparing the percent positive metric for each antibody and looking for a consensus across all three donors identified six differentially expressed markers in CM cells ( Figure 5A) and four markers in EM cells (Figure 5B).
Two of these trends overlap between the two cell subsets: an increase in CD26 and a decrease in CD49d. CD26 has been previously associated with MAIT cells (46). When examining anti-PE in the CD26 LEGENDScreen well (Figure 5C), there is a 4.5-fold increase in intensity on average between CD161− and CD161+ CM cells and a 7.2-fold increase on average between CD161− and CD161+ EM cells. For CD49d (Figure 5D), the average decrease in intensity is 1.2- and 1.5-fold, respectively, which is to be expected given the overall low intensity for that marker. CD192 (CCR2) was differentially expressed between CD161hi and CD161low CM cells, with a 3.6-fold average increase in intensity in the CD161hi subset (Figure 5E, left). It was only differentially expressed for two of the three donors in EM cells (Figure 5E, right). CD192 is involved in recruitment of monocytes to inflammatory sites (47), a function that could potentially be shared by MAIT cells. When examining marker intensities on a single-cell level, the CD161hi cells are situated between the CD161low cells and monocytes, and would thus be classified as CD192mid using standard gating nomenclature. In addition to these markers that were selectively upregulated on CD161hi cells, the screen highlighted reduced expression of CD183 on CD161hi CM cells (Figure 5F) and CD57 on CD161hi EM cells (Figure 5G).
One of the limitations of this screening approach is that each of the antibodies is profiled independently, which precludes co-expression analyses of markers in the screen. To validate and further explore the co-expression patterns of the markers identified in the screen, we independently stained a healthy donor PBMC sample with a panel incorporating several of the differentially expressed markers identified in the screen together with Va7.2 TCR to definitively identify MAIT cells (Supplementary Table 5). tSNE analysis on the gated CD8 T cells revealed that the CD161hi population had a distinct phenotype in high-dimensional space defined by co-expression of many of the markers identified in the screen (Figure 5H and Supplementary Figure 5). The differential expression patterns of CD26, CD192, CD183, and CD57 between the CD161hi and CD161low populations largely mirrored those seen in the initial screen, independently validating these results (Figure 5I).
Sample Fixation Leads to Both Loss and Gain in the Intensity of Specific Markers
Formaldehyde fixation is a useful approach to preserve cell samples but has been associated with changes in cell surface epitopes and marker expression profiles [ (34,48), Supplementary Figure 6]. However, given the prevalence and importance of fixation in cytometry experiments, there is an urgent need for a systematic study of the effect of fixation on marker intensity to better inform marker selection and panel design in studies involving fixed samples.
The antibody staining data set includes two conditions for each donor and antibody sample: one stained fresh and one stained following fixation with 1.6% formaldehyde. Two hundred fifty-five of the LEGENDScreen markers have cells whose intensity is higher than the blank threshold. For each of these markers, we calculated the ratio between median expression in each of the conditions over all cell subsets (Figure 6A). We arbitrarily set a threshold of 2-fold change as indicative of a significant intensity shift between the conditions. 173 (68%) of the markers were below that threshold, suggesting that they are not notably affected by fixation.
Sixty-five of the markers have a 2-fold or more increase in fixed samples relative to fresh (Figure 6B). In other words, these markers gained additional signal when the sample was fixed. This increase in expression can either be an artifact of fixation or true expression of an antigen that was not detected in the corresponding fresh sample. While formaldehyde fixation may be expected to partially compromise the cell membrane, the samples in this screen were not explicitly treated with any permeabilizing agents, so we do not anticipate significant exposure of intracellular antigens. Furthermore, gains in expression were largely seen across most cell subsets, suggesting that in most cases these reflect non-specific staining artifacts following fixation. At the opposite end of the spectrum, 17 markers showed a 2-fold or more decrease from fresh to fixed and were thus classified as loss of signal (Figure 6C). Since only an existing signal can diminish, the loss pattern is specific to certain subsets.
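The gain/loss/stable classification described above reduces to a simple ratio test; a sketch, with the 2-fold threshold as a parameter (medians are assumed to be positive):

```python
def classify_fixation_effect(median_fixed, median_fresh, fold=2.0):
    """Classify one (marker, subset) pair by the ratio of fixed to fresh
    median intensity, using the 2-fold threshold chosen above."""
    ratio = median_fixed / median_fresh
    if ratio >= fold:
        return "gain"
    if ratio <= 1.0 / fold:
        return "loss"
    return "stable"
```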
Examining the ratio between the medians enables a broad survey of all antibodies over all subsets. However, it ignores the single-cell nature of the data. Closer examination of several marker intensity distributions reveals that when the ratio is around zero, the underlying distribution is usually maintained from fresh to fixed as well (Figure 6D). When marker intensity is gained, it typically only affects some of the cells within the subsets, while the low expression persists in others (Figure 6E). On the other hand, when signal is lost, it appears that fixation diminishes it completely (Figure 6F). These trends further reinforce the hypothesis that the signal gained by fixation is due to the protocol rather than the underlying biology. In almost all cases, changes in marker expression patterns showed similar trends across subsets expressing that marker. One notable exception was CD22, which was found to be expressed on both B cells and basophils in the fresh samples using the clone contained in the LEGENDScreen panel (S-HCL-1), consistent with previous descriptions of clone-specific CD22 expression on basophils (49,50). However, fixation resulted in loss of expression specifically on basophils, but not on B cells (Figure 6G), reflecting differences in the fixation sensitivity of the CD22 conformational epitopes that are differentially expressed between B cells and basophils (51).
The LEGENDScreen kit includes antibodies conjugated to PE which are then measured by mass cytometry using an anti-PE secondary. It is possible that the effects of fixation observed here are not due to effects on the underlying antibody, but rather due to a more complex interaction that potentially includes the marker antibody, PE, and anti-PE. We therefore performed a validation experiment where seven of the gain or loss markers were incorporated into the mass cytometry panel (Figure 6H). For the three loss markers, the validation results confirm the effect we saw in the data set: the same subsets express these markers and lose their signal after fixation. On the other hand, the results for the gain markers were mixed. While one of them (CXCR3) fully reproduced the screen results, the other two only lost their signal in some of the cell subsets.

Figure legend: Scatter plot comparing the ratio of fixed and fresh between the LEGENDScreen experiment and a validation experiment where the indicated antibodies were part of the mass cytometry panel (not conjugated to anti-PE). The x-axis is the ratio in the validation, the y-axis is the ratio in LEGENDScreen. Each dot is a (cell subset, antibody) combination. Color is antibody, shape is category (gain or loss).

Cytometry experiment design can be a daunting task due to the high number of variables that need to be considered. There are many factors that could influence results in unknown ways, especially when employing a method such as fixation that has the potential to perturb the chemistry and kinetics underlying the assay. This antibody staining data set represents an accessible resource to identify and anticipate such potential effects.
DISCUSSION
We present a standardized workflow for the acquisition and analysis of large-scale immune monitoring studies using mass cytometry. The workflow incorporates several established experimental techniques in order to reduce signal variation within samples, across samples, and across operators. One, it utilizes a lyophilized core antibody panel that allows clear identification of major compartments of the immune system and provides higher resolution into T Cell, B Cell, and other subsets. Lyophilization streamlines sample processing and eliminates the variability inherent in pipetting small volumes from a large number of individual antibody vials. Two, a two-tiered barcoding scheme assures that all donors and treatments are acquired together and that samples are organized into batches. This reduces the technical variation associated with the instrument and its operation. Three, a fully automated cloud-based analytics platform (Astrolabe) runs the same quality control, data cleaning, cell subset labeling, and unsupervised clustering over the entire data set. Taken together, the workflow provides a flexible framework that can be easily adapted to clinical trial immune monitoring or other large-scale experiments and greatly improves the quality, reproducibility, robustness and utility of mass cytometry data.
We leveraged this standardized workflow as part of a comprehensive screen to establish the expression of 350 surface markers across all major circulating immune subsets at single cell resolution. Acquisition of the entire expression dataset across three donors required more than a month of Helios operation and culminated in over 60 million events; one of the largest single mass cytometry datasets recorded to date.
Several quality control approaches were included in order to ensure the accuracy and quality of the antibody staining dataset. First, we employed a two-tier barcoding approach to minimize technical variability in performing the screen. The barcoded samples were deconvolved using an automated debarcoding approach that was directly compared and shown to perform comparably to manual debarcoding. Second, we used average overlap frequency (AOF) as a metric to evaluate the consistency of individual marker staining quality across all samples, which allowed us to identify and address acquisition batch effects. Third, we used an automated approach to identify and label cell subsets, the accuracy of which was validated against manual gating of each of the analogous subsets, demonstrating high overlap and consistency between these approaches. Fourth, we performed the screen using three independent donor blood samples to allow for an evaluation of the biological reproducibility of individual marker expression profiles, and each donor presented a consistent and distinct cell subset profile across the entire experiment, with both the frequencies of the major immune compartments and the intensities of their canonical markers showing low variability across the entire acquisition period. Finally, the reproducibility of the antibody expression profiles in our primary screen was further validated using a second independent screen performed using an additional donor. Taken together, these steps highlight the fidelity of the antibody staining resource. However, it is still important to note the limitations of this data set as a high-throughput screen; any findings require independent follow-up to confirm whether the reported expression patterns truly reflect hitherto unknown phenotypic diversity or may reflect specific biological or technical aspects of this screen. As an illustration of this approach, we used the screen to identify potential markers to characterize CD161+ MAIT cells, and then performed an independent experiment where we incorporated these markers as part of a single CyTOF panel. This allowed us both to independently validate the markers identified in the screen and to further explore their co-expression patterns, confirming that CD161hi MAIT cells can be further characterized as being CD26hi, CD192hi, CD183low, and CD57low.
In addition to screening marker expression patterns on fresh cells, we also introduced formaldehyde fixation as a treatment, thoroughly examining the influence that this standard perturbation could have on surface marker staining. When examining the effect of fixation on marker expression patterns, 173 out of 255 expressed markers had no change in their intensity. Sixty-five gained some signal from fresh to fixed. We hypothesize that this gain is an artifact of the fixation protocol rather than a novel biological signal since it was subset agnostic and only affected some of the cells in each profiling subset. Seventeen markers lost their existing signal after fixation. In almost all cases, the loss of signal affected all expressing subsets. The one exception was CD22, where one expressing subset (basophils) lost the signal, while another (B Cells) did not. It has previously been suggested that the CD22 epitope on basophils is conformationally distinct from that on B cells (51). Our data provide further evidence suggesting a difference in the fixation sensitivity of the CD22 epitopes expressed on these two cell types.
The overall antibody staining data set is a powerful asset for immunologists seeking to investigate the immune system through the lens of less-explored markers and develop antibody panels to focus on specific cell subsets. To maximize the utility of this versatile qualitative resource, these results are fully accessible through an interactive website at https://www.antibodystainingdataset.com. We included two aggregate statistics for each (marker, subset) combination: median anti-PE intensity and percent positive cells (which was calculated based on the background intensity available in the Blank LEGENDScreen well). In addition to interacting with the dataset through heat maps and surveying aggregate statistics for their marker(s) and cell subset(s) of choice, investigators can delve deeper into the single-cell resolution data and the relevant distributions. Overall, this dataset represents an accessible and unbiased resource for assessing potential expression of various markers over a large range of immune subsets in healthy individuals, and surveying the statistics in the entire data set reveals intriguing signals for potential expression of less-studied markers. This study offers a valuable new resource to aid in the design of high-dimensional antibody panels for immune monitoring studies, and further offers a template for a robust experimental workflow incorporating several components to ensure the accuracy and robustness of data generated using mass cytometry technology.
DATA AVAILABILITY
All datasets acquired in the course of this study will be available on FlowRepository (https://flowrepository.org/id/FR-FCM-Z23S) and on ImmPort (https://www.immport.org) upon manuscript publication.
AUTHOR CONTRIBUTIONS
EA contributed to experiment design, analysis, and writing the manuscript. BL, PB, and XG contributed to sample acquisition and analysis. MG contributed to analysis. MM contributed to experiment design and writing the manuscript. AR contributed to experiment design, sample acquisition and analysis, and writing the manuscript.
FUNDING
This work was partly supported by IOF Projects awarded to AR and MM under the parent Human Immunology Project Consortium grants U19-AI-118610-01 and U19 AI128949-01. This work utilized a Helios mass cytometer purchased using NIH Instrument grant S10 OD023547-01. | 2019-04-03T13:11:45.372Z | 2019-02-28T00:00:00.000 | {
"year": 2019,
"sha1": "ecee26588c0f92cc4fcd38a2dce068cfeaaf84a1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2019.01315/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecee26588c0f92cc4fcd38a2dce068cfeaaf84a1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Computer Science"
]
} |
231587287 | pes2o/s2orc | v3-fos-license | A 3D Plasmonic Crossed-Wire Nanostructure for Surface-Enhanced Raman Scattering and Plasmon-Enhanced Fluorescence Detection
In this manuscript, silver nanowire 3D random crossed-wire woodpile (3D-RCW) nanostructures were designed and prepared. The 3D-RCW provides rich “antenna” and “hot spot” effects that are responsible for surface-enhanced Raman scattering (SERS) and plasmon-enhanced fluorescence (PEF). The optimal construction mode for the 3D-RCW, based on the ratio of silver nanowire and the control compound R6G, was explored and established for use in PEF and SERS analyses. We found that the RCW nanochip is capable of emission- and Raman-enhanced detection using microliter-scale analysis volumes. Consequently, SERS and PEF of pesticides (thiram, carbaryl, paraquat, fipronil) were successfully measured and characterized, and their detection limits were within 5 µM~0.05 µM in 20 µL. We found that the designed 3D plasmon-enhanced platform can not only collect the SERS of pesticides, but also enhance the fluorescence of a weak emitter (pesticides) by more than 1000-fold via excitation of the surface plasmon resonance, which can be used to extend the range of a fluorescence biosensor. More importantly, solid-state measurement using a 3D-RCW nanoplatform shows promising potential based on its dual applications in creating large SERS and PEF enhancements.
Introduction
Surface plasmon resonance (SPR) is a phenomenon based on collective oscillations of surface electrons in metallic nanostructures. The SPR character strongly depends on the noble metal species, size, and shape of the nanostructures [1]. Plasmon-enhanced optical sensors built using metallic nanostructures can be designed to detect analytes in various fields [2]. With respect to SPR, localized SPR (LSPR) involves nonpropagating oscillations of surface electrons, which can concentrate the incident electromagnetic (EM) field around the nanostructure. Thus, the local EM field, which can be several orders of magnitude stronger than the incident field [3], can enhance optical processes such as fluorescence, giving rise to plasmon-enhanced fluorescence (PEF), and Raman scattering, giving rise to surface-enhanced Raman scattering (SERS). On the other hand, LSPR occurs when the dimensions of a metallic nanostructure are less than the incident light wavelength. The plasmon energy (peak position) of the LSPR will change with shape, and a more redshifted plasmon will be observed in larger nanostructures [4]. The localized surface plasmon resonances (LSPR) accompanied by electromagnetic field enhancements exhibited by metallic nanostructures have found utility in photocatalysis [5,6], medical diagnostics [7,8], biological and molecular sensing [9,10], and surface-enhanced Raman scattering (SERS) [9,11,12].
From the perspective of morphology, it is known that at small nanostructure sizes, the energy of the LSPR is quickly converted into heat by electron-electron scattering, which leads to strong absorption. Alternatively, electron-electron surface scattering is reduced in larger nanostructures, and the energy of the plasmons is reradiated, leading to a strong scattering cross section [13]; that is, the LSPR energy is reradiated into the far field as scattered radiation [14]. Furthermore, nonspherical nanostructures can support anisotropic plasmons that drive larger SERS enhancements of analytes at their crossing sections (cross stacking), sharp tips, or edges. These hot-spot regions can possess an electric field amplitude that is orders of magnitude larger, which leads to intense near-field EM and a much shorter decay than that found for typical SERS [15].
Following the discovery of SERS, plasmon-enhanced fluorescence (PEF) was soon characterized as one of the surface-enhanced spectroscopy techniques [16,17]. When excited, metal nanoparticles such as gold or silver nanoparticles commonly show a broad plasmon spectrum, and the optical extinction cross section (absorption + scattering) can be several orders of magnitude higher than that of fluorophores [18]. The main factor in PEF is an increase in the sample's absorption and emission cross sections, which is ascribed to the local field enhancement associated with the excitation of an LSPR in the metal nanostructure. On the other hand, once the fluorophore's emission energy couples with a metal plasmon, it can cause the metal to radiate with enhanced intensity in situ as fluorophore luminescence, which is also called metal-enhanced fluorescence (MEF) [19]. The overlap between the LSPR of a metal nanoparticle and the molecular absorption and emission spectra of the fluorophore is predicted to yield the highest fluorescence enhancement factor [20]. PEF not only offers enhanced emission and a decreased lifetime but also allows an expansion of the field of fluorescence by incorporating weak quantum emitters, avoiding photobleaching, and providing the opportunity of imaging with resolutions significantly better than the diffraction limit. It also opens up a window to a new class of photostable probes by combining metal nanostructures and quantum emitters [21].
Silver metal (Ag) nanostructure materials such as nanoparticles (NPs) and nanowires (NWs) have attracted widespread interest due to their unique and tunable optical properties that arise mainly from the LSPR effect [22,23]. In particular, Ag NWs have a high length-to-diameter aspect ratio and can thus be used as building blocks for fabricating two-dimensional and three-dimensional nanostructures. Such nanostructures can be constructed to form transparent, flexible, conductive 2D layers for use in flexible electronic devices as potential replacements for transparent conducting oxide films [24]. Ag NWs can also be assembled into 3D stacked plasmonic substrates for use in various sensing applications, including surface-enhanced Raman spectroscopy (SERS) and plasmon-enhanced fluorescence (PEF) sensors [25,26]. In a previous study, we proved that, for either a small organic molecule or an organic nanoparticle, a nanowire offers more apparent metal-enhanced fluorescence than a nanoparticle, and we built a double emission enhancement (DEE) sensor platform based on a nanowire-based chip [27].
This study provides a key design consideration for the use of hot spots in anisotropic nanostructures for SERS and PEF, beginning with the synthesis of Ag NWs and the fabrication of 3D Ag NWs on a nanostructure substrate to form a 3D random crossed-wire woodpile (3D-RCW). After identifying the conformation, we evaluated the plasmonic properties of the 3D-RCW according to the criteria of plasmon generation and found that 3D multilayer stacks of Ag NWs can provide both in-plane LSPR coupling among the parallel NWs and out-of-plane coupling at the cross-points at which two nanowires are closely stacked. That is, 3D stacked Ag NWs enable concentrated plasmons at closely packed Ag NW structures, which are several orders of magnitude stronger than those of a 2D substrate. Therefore, the local electromagnetic (EM) field can amplify the Raman scattering of adsorbed molecules and mediate fluorescence in fluorescent species. Here, we successfully applied the 3D-RCW to the plasmon-enhanced spectroscopic techniques SERS and PEF to execute a variety of chemical and biological sensing applications. The related principles and protocols of the 3D-RCW used for SERS and PEF conditions are listed in Section 2 and in Figure 1.
Apparatus
Absorption spectra were recorded using a Thermo Genesys 6 UV-visible spectrophotometer (Waltham, MA, USA), and fluorescence spectra were recorded using a HORIBA Jobin Yvon Fluoromax-4 spectrofluorometer (Kyoto, Japan) with a 1 nm bandpass and a 1 cm path-length cell at room temperature. AFM images of the nanostructures were obtained using a NanoMagnetics Instruments ezAFM (Oxford, UK). TEM images of the nanostructures were taken using a JEOL JEM-2100F microscope (Akishima, Japan) at an accelerating voltage of 100 kV; an aqueous solution containing the compound was deposited onto a carbon-coated copper grid. Dynamic light scattering (DLS) measurements were recorded using a HORIBA SZ-100 (Kyoto, Japan). Fluorescence images were taken using a Leica AF6000 fluorescence microscope (Wetzlar, Germany) with a DFC310 FX digital color camera through the relevant filter cubes. In this study, the UV cube (excitation through a 390/10 nm band pass filter, emission collected through a 410 nm long pass filter), blue cube (excitation through a 470/20 nm band pass filter, emission collected through a 510 nm long pass filter), and green cube (excitation through a 520/20 nm band pass filter, emission collected through a 590 nm long pass filter) were used to collect the fluorescence images of the pesticides and R6G.
Synthesis of Ag Nanowires (Ag NWs)
The synthesis of Ag nanowires followed a procedure reported elsewhere [28]. A typical synthesis uses ethylene glycol (EG) as both the solvent and the reducing agent, with AgNO3 and poly(vinylpyrrolidone) (PVP, MW = 40,000) as the Ag precursor and the polymeric capping agent, respectively. CuCl2 can be added to facilitate the anisotropic growth of the Ag nanowires. In a typical synthesis, 20 mL of EG was added to a disposable glass vial containing a Teflon stirrer bar; the vial was then suspended in an oil bath (150 °C) and heated for 1 h under magnetic stirring (400 rpm). After 1 h, 160 µL of a 4 mM CuCl2 solution in EG was injected into the heated EG, and the solution was heated for an additional 15 min. Next, 6 mL of a 0.147 M PVP solution in EG (concentration calculated in terms of the repeating unit) was injected into the heated EG, followed by the addition of 6 mL of a 0.094 M AgNO3 solution in EG. The color of the reaction solution changed as follows: initially clear and colorless, to yellow (within 1 min), to red-orange (within 3 min), to green and beginning to become cloudy (within 5 min), to cloudy with a gradual shift from green to brown-red (within 30 min), and finally to an opaque gray with wispiness, indicating the formation of long nanowires (within 1 to 1.5 h). Upon formation of the Ag nanowires, the reaction was quenched by cooling the vial in a room-temperature water bath. The solution was centrifuged at 10,000 rpm for 10 min to collect the products completely, washed with double-distilled water and centrifuged again (10,000 rpm, 10 min), and then washed with ethanol and centrifuged three times (6000 rpm, 15 min) to remove EG and PVP from the surface of the products. The final products were preserved in ethanol for further characterization.
Construction of a 3D Nanowire Chip and Measurement
We prepared a solid thin film; the 3D random crossed-wire woodpile (RCW) nanostructures and sample preparation are shown schematically in Figure 1. A circular well on a glass plate, 0.5 cm in diameter and 0.02 cm deep, was used, onto which we dispensed 20 µL of the Ag NW-containing solution (concentration about 2.6 optical density (OD), as shown in Figure 2a) and allowed it to dry; this step was repeated several times to create a disordered 3D nanowire network. In this way, we obtained 3D-RCW chips with 20 µL × 1, 20 µL × 2, ..., 20 µL × N coatings, ready for drop casting of the analyte-containing aqueous solution, drying, and SERS and PEF detection.
Raman Measurement
Raman micro-spectroscopy measurements were performed on a Micro Raman Identify Dual system (MRID-Raman, ProTrusTech Co., Ltd., Tainan, Taiwan) fitted with a TE-cooled CCD of 1024 × 256 pixels, as integrated by ProTrusTech. The system, with a 50× long-working-distance lens (Olympus America Inc., New Hyde Park, NY, USA), was operated at an excitation wavelength of 532 nm with ~1 mW power in order to avoid laser-induced degradation. Raman spectra were recorded at a spectral resolution of 1 cm−1 in the range between 400 and 2500 cm−1. The exposure time was 1 s and each spectrum was a single accumulation; unless stated otherwise, the accumulation time and laser power were the same for all Raman spectra. The measurement procedure was as follows: we dropped the analyte aqueous solution at given concentrations (20 µL at a time) into the well of the 3D-RCW nanochip, as shown in Figure 1, and dried it in a dry-bath incubator (40 °C) before measurement. To assess the reproducibility and repeatability of the 3D-RCW nanochip, SERS spectra were collected from seven different places on each chip and averaged to give a standard spectrum; the data deviation was obtained by subtracting the standard spectrum from each individual spectrum. The stability of the 3D-RCW nanochip was demonstrated by the SERS signals remaining detectable, with an intensity drop of less than 10%, after a chip had been left in the atmosphere for several days.
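In practice, this averaging-and-subtraction procedure reduces to a few lines of array arithmetic. The following is a minimal Python sketch of that logic, using synthetic placeholder spectra rather than the measured data:

```python
import numpy as np

def reproducibility(spectra):
    """Assess spot-to-spot SERS reproducibility on one chip.

    spectra : (n_spots, n_wavenumbers) array of baseline-corrected
              SERS spectra collected at different places on the chip.
    Returns the averaged "standard" spectrum and the relative
    deviation of each individual spectrum from it.
    """
    standard = spectra.mean(axis=0)            # averaged standard spectrum
    residual = spectra - standard              # intensity subtraction per spot
    peak = standard.argmax()                   # evaluate at the strongest band
    rel_dev = np.abs(residual[:, peak]) / standard[peak]
    return standard, rel_dev

# Illustrative use: 7 spots, 2101 points covering 400-2500 cm-1
rng = np.random.default_rng(0)
spectra = 1000 + 50 * rng.standard_normal((7, 2101))
standard, rel_dev = reproducibility(spectra)
print(f"max relative deviation: {rel_dev.max():.1%}")  # compare to the 10-15% criterion
```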
Characterization of Ag NWs
The silver nanowires were synthesized using a solution-based polyol process [28]. Figure 2a shows the absorption spectra of the Ag NWs in double-distilled water. The dominant surface plasmon resonance (SPR) peaks of the silver nanostructures in solution are consistent with the typical optical properties of silver nanoparticles and nanowires synthesized via the polyol process [12]. Here, we focus on the nanowires and assign the SPR peak at 400 nm to the transverse SPR (LSPR) mode of the Ag NWs [13]; the broad absorption covers the visible emission wavelengths of most commercially available fluorophores, which makes it suitable for PEF applications. In addition, the absorption spectrum of the Ag NWs in aqueous solution displays a broadening that can be attributed to SPR coupling caused by the decreased spacing between the nanowires. The inset of Figure 2a shows a real-color photograph of the Ag NWs in solution, with a cloudy yellow-green color (typical of silver colloids) and a dominant transverse SPR peak at approximately 400 nm. Figure 2 also presents optical microscopy, AFM, and SEM images of the silver nanowires after synthesis (Figure 2b-d), from which the average diameter and length were determined. Although the size dispersion is broad, characteristic values of (146.7 ± 12) nm in diameter and (55.3 ± 8) µm in length were obtained by dynamic light scattering (DLS). Statistical analysis of several SEM images gave widths of 80-120 nm and lengths >10 µm, corresponding to an average aspect ratio (length/diameter) of more than 100. The thickness of the PVP coating on the surface of the nanowires was measured to be 20-25 nm.
Construction of an Ag NW 3D Nanostructure
The schematic in Figure 1 shows the procedure used for fabricating Ag NW-constructed 3D random crossed-wire woodpile (3D-RCW) nanostructures through a very simple sprinkling method. First, 20 µL of 2.6 OD Ag NW solution was drop cast onto a well (0.5 cm diameter, 0.2 mm deep) on a glass slide, and the solvent was evaporated in vacuum; this corresponded to one spreading cycle. In this way, multistacked 3D-RCW nanostructures with various numbers of layers were fabricated, the layer number and density of the crossed-wire 3D-RCW being spread-cycle dependent. A dry 3D nanostructure chip was thereby prepared, onto which analytes could be placed for collection of plasmon-enhanced spectra. To establish the optimal plasmon enhancement, we examined the SERS characteristics of the 3D-RCW chip using rhodamine 6G (R6G). Figure 3a shows a comparison of the SERS intensity of R6G obtained from 3D-RCW chips of variable density. The Raman intensity for a constant amount of R6G (10 µM × 20 µL) increased markedly up to the third cycle of Ag NW sprinkling (20 µL × 3). This high SERS enhancement can be explained by the fact that more spreading cycles produce more layers, more cross-stacked nanowires, and thus more z-direction hot spots in the 3D-RCW, eventually inducing strong plasmonic coupling along the vertical direction [29,30]. On the other hand, the signal intensity decreased beyond three spreading cycles, which can be explained by the reduced penetration depth of denser 3D nanostructures, affecting both the incident laser and the analyte SERS signal. The transmittance of the 3D-RCW decreases dramatically as the number of stacked layers increases, meaning the incident light is increasingly impeded, and it becomes harder for the CCD to collect the SERS signals of analytes buried in the deeper layers. Meanwhile, if the molecules are distributed over more layers, the number of molecules in each layer decreases. Transmittance is therefore a key criterion for building an optimized 3D-RCW for SERS measurements. Figure 3b presents data on the reproducibility, repeatability, and stability of the 3D-RCW for R6G SERS signals. The spread in the data was between 10% and 15% across measurement points, and the signals could still be detected on a chip that had been left in the atmosphere for over 80 days. Finally, the minimum detection limit for R6G was determined to be 10−11 M in 20 µL volumes using a 3D-RCW (20 µL × 3) chip to measure the SERS signal of R6G (Figure 3c). Figure 3 caption: (a) SERS intensity of R6G on 3D-RCW chips of variable density (10 µL of analyte was used to detect the SERS spectra). (b) SERS collection at seven different locations on a freshly prepared 3D-RCW chip (top); these signals dropped by less than 10% when using a 3D-RCW prepared 80 days previously and left in the atmosphere. (c) SERS detection limit for R6G using a 3D-RCW chip with a 20 µL × 3 Ag NW coating density.
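To put such detection limits in perspective, substrate performance is often summarized by an analytical enhancement factor (AEF): the SERS intensity per unit concentration relative to normal Raman under identical acquisition conditions. The sketch below illustrates the calculation with placeholder intensities and concentrations, not measured values from this study:

```python
def analytical_ef(i_sers, c_sers, i_raman, c_raman):
    """Analytical enhancement factor: SERS intensity per unit
    concentration relative to normal Raman, assuming identical laser
    power, objective, and integration time for both measurements."""
    return (i_sers / c_sers) / (i_raman / c_raman)

# Placeholder values: an R6G band just detectable at 1e-11 M on the chip,
# with a comparable normal-Raman signal requiring ~1e-3 M.
print(f"AEF ~ {analytical_ef(1.0, 1e-11, 1.0, 1e-3):.0e}")  # ~1e+08
```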
Plasmon-Enhanced Fluorescence
A high enhancement in fluorescence emission, improved fluorophore photostability, and a significant reduction in fluorescence lifetimes were obtained using a high density of Ag NWs. These quantities depend on the surface loading of Ag NWs on the glass slide, with the enhanced fluorescence emission increasing with the density of Ag NWs. We therefore also examined the 3D-RCW for PEF of R6G. Figure 4a shows fluorescence microscopy images of Ag NW-based plasmonic-emission enhancement for R6G: the area covered by Ag NWs clearly shows fluorescence emission much higher than that of the area without Ag NWs. Using a similar approach, Figure 4b-f compare the fluorescence images for PEF of R6G obtained at variable densities of 3D-RCW; the corresponding spectra and emission intensities are shown in Figure 4g. The emission enhancement initially increases rapidly with the density of the Ag NW network platform before flattening out. That is, at the highest coating densities of 3D-RCW, the additional PEF gain vanishes, for reasons similar to those given above; however, the signal decrease is less critical here because fluorescence has a much larger cross section than Raman scattering. Nevertheless, we found that the PEF of the 3D-RCW enhanced R6G emission more than 1000-fold compared with free R6G in the solid state. The results in Figures 3 and 4 confirm the strong dual SERS and PEF effects of our 3D-RCW chip. The intensified LSPR arises from the strong electromagnetic field enhancement at the abundantly formed hot spots, which is ideal for LSPR-based SERS and PEF mechanisms [21,31]. Figure 5 clearly shows that the most apparent fluorescence highlights occur at the intersections and ends of the nanowires, and we observed that the aspect ratio of the NWs controls the occurrence of hot spots and/or antennas. The larger aspect ratios of the nanoneedles present more intersections and thus more hot spots with weaker antenna effects because of hindered conduction (Figure 5a), while shorter nanorods present more bright spots at both ends and fewer crossover points (Figure 5b). That is why we did not observe a similar result in the nanoparticle system. Nevertheless, this is the first time that hot spot formation on a nanoplasmonic sensor has been unequivocally confirmed by utilizing PEF imaging.
Appearance of 3D-RCW
As shown in Figure 1, a 3D-RCW composed of 3D multilayered stacks of Ag NWs can provide both in-plane (xy-plane) and out-of-plane (z-direction) plasmonic coupling effects for both parallel and crossed nanowire stacking. Several studies of 3D nanowire stacking have determined that cross stacking gives the best simulated electric-field enhancement; such 3D Ag NW woodpile structures offer orthogonal nanowires piled up along the z-direction [23,30,32,33]. That is, the xy-plane hot spots combine with the z-direction hot spots to generate a 3D hotspot region, and the maximum electric-field enhancement should be stacking-layer dependent. In a regular hot spot array constructed from 3D nanowires, the maximum E-field enhancement increases from 2 up to 5-6 stacking layers, and the plasmonic nanostructures exhibit a redshifted LSPR peak with increasing number of stacking layers; a ca. 200 nm thick 3D hotspot network is therefore optimal [30]. In our case, three cycles of Ag NW spreading plausibly provide sufficient thickness for good optical properties and SERS performance while remaining within the penetration-depth limit for visible light.
With a regular 3D nanowire film, as mentioned in the above references, there is always a defect (gap, interstice) effect. In other words, different minimum mesh sizes around the nanowires must be constructed to adjust the defect spacing to analytes of different sizes. Furthermore, one must consider the distance and kinetic-equilibrium problems between molecules and metal in solution-state detection. That is why most chip designs cannot be used for both SERS and PEF. We used the 3D-RCW chip to analyze molecules by drop casting an aqueous solution and allowing it to evaporate. Under this condition, every molecule is adsorbed onto the metal surface; there are no solubility or solvent problems, so both water-soluble and insoluble molecules can be collected. Consequently, owing to the random mesh intervals characteristic of the 3D-RCW, the same molecule may show SERS (when it remains on the surface of a nanowire) and PEF (when it sits a short distance from another nanowire). An abundant and evenly distributed 3D hot spot network is the decisive factor. The SERS spectra obtained for R6G on a freshly prepared 3D-RCW composite film and on one stored for 80 days without vacuum sealing (Figure 3b) revealed that the SERS intensity from the stored substrate was reduced by less than 10%, indicating that the 3D-RCW composite film is quite stable when stored over a period of time. We conclude that the 3D-RCW has the following advantages over a regular 3D nanowire film: (1) solid-state detection, which accommodates molecules of many sizes and solubilities; (2) a tiny testing volume, with a small-area well that lets the analyte spread uniformly; (3) both SERS and PEF can be measured on the same chip; and (4) low-cost chip fabrication, requiring no antibodies, no labeling, and no electrical-etching technology.
Application of 3D-RCW
To further investigate the practical applications of the 3D-RCW nanochip, a test of organic pollutant pesticides relevant to environmental monitoring was performed. Thiram, a dithiocarbamate compound, is widely used as a fungicide in agriculture and as a bactericide in medical treatment [34]. The SERS spectra of different concentrations of thiram dispersed onto a 3D film were tested, as shown in Figure 6a. In comparison with the normal Raman spectrum of thiram, the vibrational peaks of thiram appeared at 567, 1150, 1386, and 1516 cm−1, corresponding to the C-S stretching, N=C=S stretching, C-N stretching, and C-H wagging modes, respectively [35]. The inset shows the calibration curve of the 1386 cm−1 peak intensity, which is linear over detection amounts from 0.1 to 10 µM, with a limit of detection (LOD) of ~0.1 µM (~0.02 mg/Kg), much lower than the maximum residue limit (MRL) of 7 mg/Kg in fruit prescribed by the U.S. Environmental Protection Agency (EPA). Here, the LOD was defined as the lowest quantity of analyte that could be detected in our system, i.e., the point with the smallest amount in the inset of Figure 6. Figure 6b shows the SERS spectrum of carbaryl, a carbamate pesticide that is banned in many countries; for example, the MRL for carbaryl in apples is 1 mg/Kg (GB2763-2012). Peak features at 1382, 1442, and 1578 cm−1 are clearly observed in the Raman spectra, although changes in both the relative intensities and the positions of the bands were noted [36,37]. The strong peak at 1382 cm−1 is due to the symmetric vibration of the naphthalene ring; the peak at 1442 cm−1 arises from C-H wagging modes of the monosubstituted naphthalene ring; and the strong peak at 1578 cm−1 can be assigned to the stretching of C=C double bonds in the naphthalene ring. The inset shows the calibration curve of the 1382 cm−1 peak intensity, linear over detection amounts from 5.0 to 100 µM, with an LOD of ~5.0 µM (1.00 mg/Kg). The assignments of the Raman modes of the pesticides used in this work are listed in Table 1, taken from the references indicated in the manuscript. Among the pesticides, paraquat, which has moderate toxicity, is widely used in agricultural practice, and its permissible residue on apples and pears is regulated to below 0.05 mg/Kg in the USA, China, and most other countries [38]. Figure 6c shows the characteristic paraquat SERS peaks at 837, 1191, 1293, and 1642 cm−1, attributed to the C-N stretching, C=C bending, C-C structural distortion, and C=N stretching modes, respectively [39]. The inset shows the 1642 cm−1 SERS peak intensity versus detection amount, linear from 0.1 to 50 µM, with an LOD of ~0.1 µM (0.02 mg/Kg). We also showed that the 3D-RCW can be used to detect fipronil, for which effective quick screens are rare. Fipronil is highly effective against insects that are resistant to cyclopentadiene, organophosphorus, organochlorine, pyrethroid, and carbamate pesticides, and shows no cross-resistance with existing pesticides [40]. Recently, fipronil sulfone has been detected in eggs at levels much higher than the maximum residue limit; the European Food Safety Authority (EFSA) has set a stringent limit of 0.005 mg/Kg in poultry muscle and eggs [41].
From the spectra shown in Figure 6d, the strongest characteristic peak occurs at approximately 2253 cm−1, likely due to the nitrile (−C≡N) group [42], which is unique and can be differentiated from many other analytes. The inset shows that the Raman signal intensity at 2253 cm−1 was positively correlated with the amount of fipronil, linear from 5.0 to 200 µM, with an LOD of 5.0 µM (2.18 mg/Kg). Fipronil is known to be difficult to detect by Raman spectroscopy, mainly because of its weak interaction with metal and its low solubility, which makes it crystallize easily out of aqueous solutions. In our study, microliter volumes of analyte solution were used and then dried; this solid-state detection platform with tiny sample portions demonstrates the better performance and higher sensitivity of the 3D-RCW.
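The conversions between the molar LODs and the mg/Kg values quoted above follow directly from the molecular weights of the pesticides (c[mg/L] = c[µmol/L] × M[g/mol]/1000, and 1 mg/L ≈ 1 mg/Kg for dilute aqueous solutions). A quick numerical check, using standard literature molecular weights that are our assumption rather than values given in the paper (for paraquat, the cation MW of 186.25 g/mol reproduces the 0.02 mg/Kg figure; the dichloride salt, 257.16 g/mol, would give ~0.03 mg/Kg):

```python
# Standard molecular weights in g/mol (literature values, not from the paper)
MW = {"thiram": 240.43, "carbaryl": 201.22, "paraquat": 186.25, "fipronil": 437.15}
LOD_uM = {"thiram": 0.1, "carbaryl": 5.0, "paraquat": 0.1, "fipronil": 5.0}

for name, c_uM in LOD_uM.items():
    mg_per_kg = c_uM * MW[name] / 1000.0  # umol/L x g/mol -> mg/L ~ mg/Kg in water
    print(f"{name}: {c_uM} uM ~ {mg_per_kg:.2f} mg/Kg")
# Output: thiram ~0.02, carbaryl ~1.01, paraquat ~0.02, fipronil ~2.19 mg/Kg,
# in agreement with the LOD values quoted in the text.
```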
Finally, to demonstrate the proof-of-principle use of the 3D-RCW in PEF-based applications, the pesticides of Figure 6 (thiram, carbaryl, paraquat, and fipronil) were drop cast onto chips (coated with Ag NWs at 20 µL × 3) and measured by fluorescence spectroscopy and fluorescence imaging. Based on the observations and discussion above, we used the same chip configuration as in Figure 6 to ensure that the systemic emission did not drop at a high concentration of nanowires. Figure 7 shows real-color photographs of the fluorescence emission from the 3D-RCW chips taken through emission filters, together with the measured fluorescence emission spectra of the pesticides. Clear blue, cyan, and blue-green fluorescence microscopy images were obtained on our 3D-RCW chips for carbaryl, paraquat, and fipronil, respectively. The spectral measurements in the insets of Figure 7 indicate nonlinear PEF enhancement with increasing amounts of pesticide; we therefore estimated the LODs for carbaryl (50 µM, 10.00 mg/Kg), paraquat (5 µM, 1.28 mg/Kg), and fipronil (20 µM, 8.74 mg/Kg) from the lowest amount of each pesticide, non-fluorescent on the control chip (without nanowires), that could be detected by fluorescence microscopy. This is the first time that pesticides have been detected through fluorescence images. Since carbaryl and fipronil are difficult to detect by Raman or SERS assays, PEF provides an opportunity to detect pesticides through fluorescence signals, with LODs similar to or better than those of SERS. We summarize some previous reports in Table 2 and compare the SERS substrates and analytes used. To achieve high sensitivity, most of these chips were fabricated with rare materials, complicated processes, or more than one chemical reaction, making many of them expensive to produce. The benefits of our chip compared with those listed in Table 2 include:
1. Most used Au, while we used a very stable Ag.
2. Most used nanoparticles, while we used nanowires.
3. A 3D substrate was constructed in our chip.
4. Our chip and analytes were prepared simply.
5. Most detected just one or two pesticides, while we detected four pesticides belonging to three types of pesticides (carbamate, paraquat, and fipronil).
6. Fipronil is hard to detect by SERS, but our chip detected it.
7. We detected fluorescence and Raman using the same chip.
Further Applications
Ag NWs can be impregnated within or assembled onto solid, flexible substrates such as filter paper, fiber mats, elastomers, and plastics, producing 3D flexible plasmonic substrates. Such flexible plasmonic substrates attach extremely well to curved surfaces; direct attachment of flexible SERS substrates to human skin could also enable in vitro detection of biochemicals or biomarkers from perspiration.
The 3D-RCW can also be used to enhance the emission intensity of fluorophore-based sensors, improving their detection limits.
The combination of a portable spectrometer with a low-cost yet highly sensitive and flexible plasmonic substrate could be commercialized for on-site chemical analysis in environmental monitoring, food safety, forensic science, and point-of-care medical diagnostics.
Conclusions
This article has emphasized the utility of 3D stacked Ag NWs for enhancing plasmonic coupling effects. Plasmon-enhanced fluorescence (PEF) and surface-enhanced Raman scattering (SERS) data can be collected using a nanochip with a 3D-RCW platform. Our results show that a 3D-RCW nanostructure provides rich antenna and hot spot effects, and the PEF and SERS detection limits were optimized through the layer-by-layer construction of the silver nanowire stacks. We successfully observed PEF and SERS effects for R6G and pesticides on this platform, which can be used to build a novel dual-mode analysis and to extend the range of fluorescence biosensors. This 3D nanoplatform shows promise as a cheap, robust, and portable sensing platform for future applications. Data Availability Statement: The data presented in this study are available in this article. | 2021-01-13T06:17:19.554Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "1a4650179d23b9d8c8bd5d746c28dcc72837796d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/2/281/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "133e57313ddab3db2a36f39a72987d10aa9367b4",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
174807021 | pes2o/s2orc | v3-fos-license | Bilingualism and language similarity modify the neural mechanisms of selective attention
Learning and using multiple languages places major demands on our neurocognitive system, which can impact the way the brain processes information. Here we investigated how early bilingualism influences the neural mechanisms of auditory selective attention, and whether this is further affected by the typological similarity between languages. We tested the neural encoding of continuous attended speech in early balanced bilinguals of typologically similar (Dutch-English) and dissimilar languages (Spanish-English) and compared them to results from English monolinguals we reported earlier. In a dichotic listening paradigm, participants attended to a narrative in their native language while ignoring different types of interference in the other ear. The results revealed that bilingualism modulates the neural mechanisms of selective attention even in the absence of consistent behavioural differences between monolinguals and bilinguals. They also suggested that typological similarity between languages helps fine-tune this modulation, reflecting life-long experiences with resolving competition between more or less similar candidates. The effects were consistent over the time-course of the narrative and suggest that learning a second language at an early age triggers neuroplastic adaptation of the attentional processing system.
lexical access as introduced by the BIA framework [14][15][16] , which is strongly supported by findings that both languages are simultaneously active in the bilingual's brain, and that bilinguals regularly switch between them and inhibit the unwanted one [17][18][19][20][21][22][23] . Additionally, a number of studies reported that the same neural network underpins the processing of both languages 24,25 . This constant need to inhibit the activation of the non-target language within the same network was argued to elicit the enhancement of attentional control and the ability to inhibit unwanted information 26,27 . While many studies reported that bilinguals tend to outperform monolinguals in tasks of attentional control and inhibition 28,29 (but see 30,31 ), there are also questions about the reliability of such findings, or about the specific contexts of bilingual language learning and use that may give rise to such differences 5,32 .
Neuroplasticity as a Function of L2 Experience
Even if the behavioural findings about enhanced attentional control cannot be generalized across tasks and different types of bilinguals, it is unequivocal that learning and using multiple languages represents a major environmental demand, which can modify the way the brain processes information. This reflects the brain's capacity to adapt to changes in the environment, and is equivalent to learning-induced neural changes seen across other cognitive domains [33][34][35] . A number of studies investigated how experience with a second language modifies the underlying neural processing, exploring both anatomical and functional differences between monolinguals and bilinguals 36,37 . Results suggest that bilinguals show increased grey matter density [38][39][40] and white matter connectivity compared to monolinguals 41,42 ; as well as less activation in structures related to executive control while still outperforming monolinguals 43 , arguably indicating the presence of a more effective control network.
In the auditory domain the evidence is somewhat limited, with some studies focusing on the processing of isolated syllables only. The existing results show stronger subcortical encoding of the fundamental frequency (F0) and more consistent responses to attended syllables in both subcortical and cortical areas in bilinguals 6,44 , as well as an earlier frontal positivity for primed spoken words, indicating enhanced selective attention 45 . A recent study 46 found that bilingualism can modify the early processing of sound even during pre-attentive listening. Yet, while these studies provide evidence for neural changes in response to the demands of bilingualism, the literature on the relationship between bilingualism and indices of managing interfering information remains inconsistent 47 . In particular, how bilingualism modifies the way speakers track and encode natural continuous speech in the presence of interference remains largely unknown.
Neural Encoding of Attended and Unattended Speech
The speech signal is strongly encoded in the brain. Studies have shown significant correlations between neural activity and the attended speech envelope [48][49][50] , with modulations of the speech envelope (corresponding to syllabic or phonetic rate of speech) robustly synchronized to the low-frequency neural oscillations 51,52 . This phenomenon has been referred to as the Selective Entrainment Hypothesis [53][54][55] . Encoding can also be observed for higher-level lexical information, with the brain responding to the semantic content of words in a time-locked manner 56 . The mechanisms underlying the neural encoding of speech were suggested to reflect both the enhancement of the attended stream and suppression of the unattended one 49 . Our recent study 50 showed that the nature of the interfering stream significantly modulates attentional encoding, with fully-intelligible distractors causing the strongest encoding of both attended and unattended streams and latest dissociation between them, and non-intelligible distractors causing weaker encoding and earlier dissociation.
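Concretely, envelope-tracking analyses of this kind typically extract the broadband amplitude envelope of the speech, downsample it to the EEG sampling rate, and cross-correlate it with each EEG channel over a range of stimulus-to-brain lags. Below is a minimal Python sketch of such a pipeline, intended as a generic illustration rather than the exact analysis code used in these studies:

```python
import numpy as np
from scipy.signal import hilbert, resample

def speech_envelope(audio, sr_audio, sr_eeg):
    """Broadband amplitude envelope of the speech waveform,
    resampled to the EEG sampling rate."""
    env = np.abs(hilbert(audio))                 # Hilbert amplitude envelope
    return resample(env, int(len(audio) * sr_eeg / sr_audio))

def xcorr_eeg_envelope(eeg, env, max_lag):
    """Normalized cross-correlation between each EEG channel and the
    (attended or unattended) speech envelope, for lags 0..max_lag
    samples with the EEG lagging the stimulus.

    eeg : (n_channels, n_samples) array; env : (n_samples,) array."""
    env = (env - env.mean()) / env.std()
    out = np.empty((eeg.shape[0], max_lag + 1))
    for ch in range(eeg.shape[0]):
        x = (eeg[ch] - eeg[ch].mean()) / eeg[ch].std()
        for lag in range(max_lag + 1):
            n = len(env) - lag
            out[ch, lag] = np.dot(x[lag:lag + n], env[:n]) / n
    return out  # peaks near ~100-150 ms and ~300 ms are expected for attended speech
```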
Current study
The current study used neural encoding of the speech envelope to investigate whether and how bilingualism modifies the mechanisms of auditory selective attention. Following our previous study 50, we employed a cocktail-party paradigm in which participants attended to a narrative in their native language presented to one ear, while ignoring a competing talker in the other ear. By manipulating the type of competing stream, we created interference at different levels of intelligibility. In the first condition, the interfering narrative presented to the unattended ear was also in the participant's native language (Native-Native condition), arguably creating the most distracting listening environment. In the second condition, the interfering narrative was also linguistic in nature but in a language that participants did not understand (Native-Unknown condition). In the third condition, the interfering stream was Musical Rain (MuR), a non-linguistic stimulus closely matched to the acoustic properties of speech that does not trigger a speech percept (Native-MuR condition). Finally, the fourth condition was the 'Single Talker' condition, in which participants attended to a narrative presented to one ear, with no interference presented to the other ear.
Based on the existing evidence 49,57,58 we predicted that attention would increase speech encoding in all conditions compared to the non-attended stream. Furthermore, following the results from monolingual listeners presented with the same types of interference 50, we hypothesized that the nature of the interfering stream might further modify attentional encoding, with intelligible interference (which is most difficult to dissociate from the attended stream) triggering late dissociation and strong enhancement of the attended stream. However, if the demands of learning and using multiple languages from an early age can indeed modify the mechanisms of selective attention, we could also expect a different pattern of results to that seen in monolinguals. This might be manifested in different timing of the dissociation between attended and unattended streams, or a different distribution of the attentional capacity needed to achieve this across conditions, both potentially reflecting a reconfiguration of the underlying mechanisms of focusing on the attended stream and distinguishing it from interference. In line with evidence that the brain adapts to environmental demands to enable task performance 35, we assumed that any such changes to the neural mechanisms of selective attention in bilinguals would serve to enable their optimal behavioural performance in this arguably more challenging processing environment, rather than to provide a behavioural advantage to bilinguals over monolinguals. In order to make this inference however, and to ensure that any differences between monolinguals and bilinguals are not driven by differences in behavioural performance (which may or may not exist 29,31), it was necessary to keep the task demands such that both groups were able to perform optimally and equally well. We therefore simply asked the participants to listen attentively and then answer comprehension questions after the recording of neural activity had taken place.
Effects of Language Similarity
Finally, the current study also explored whether the typological similarity between the bilingual's two languages plays an additional role in modifying the mechanisms of selective attention. Typological similarity is similarity in structural and functional features between languages, describing their commonalities in the phonological, lexical or syntactic domain. Whilst there is no universally accepted index of language similarity, and the outcome of any comparison depends on the specific criterion used, it is widely acknowledged that languages within the same genus (e.g., English and Dutch, both belonging to the Germanic genus of the Indo-European family) are more similar than those from different language genera (i.e., Slavic, Romance, Germanic). We therefore adopted a widely accepted classification 59 , which uses the typological similarity in phonology, vocabulary and grammar to classify languages within families or genera. On this basis, we selected to compare bilinguals whose languages either belong to the same genus of the Indo-European family (English and Dutch, both members of the Germanic genus) or a different one (English and Spanish, belonging to the Germanic and Romance genera respectively). Besides typological criteria, everyday experience attests that the vocabulary, inflectional systems and sound patterns of Dutch and English (including stress and intonation) are much more similar than that of Spanish and English, allowing Dutch learners to easily perceive and produce oral English, and acquire near-native accents. Table 1 lists experimental conditions for both groups of bilinguals.
The existing literature on the effects of language similarity on bilinguals' cognitive performance is mixed. Some studies have shown that any combination of languages or dialects, irrespective of their typological similarity, alters the performance on tasks of attentional control and inhibition of unwanted information. For instance, a meta-analysis 60 reported that bilingualism had a reliable effect on attentional control across language pairs as diverse as Chinese-English and French-English, while another study 61 reported that Chinese-English, French-English and Spanish-English bilingual children all performed better than the monolingual controls on a colour-shape switching task, while showing no differences between the three groups. The same pattern was shown to hold even in cases of bidialectalism 62 , with speakers of two closely related varieties of Greek (Cypriot Greek and Standard Modern Greek) also performing better than monolinguals on tasks requiring switching and ignoring irrelevant information. However, a more recent meta-analysis 31 found no evidence for the effects of bilingualism in general, and language similarity in particular, on the behavioral performance of bilinguals. Yet, whether and how language similarity might influence the neuroplastic changes to the mechanisms of selective attention in bilinguals remains unclear.
One hypothesis arising from the existing data is that, given the well-established parallel activation and competition between the bilinguals' languages 14,27 , any combination of languages or dialects will modulate the systems that monitor for the presence of conflict and its resolution 29 . However, there is also evidence that competition between activated words can be modulated by variables like the degree of orthographic or phonological similarity between them, or the specific task that participants are performing [63][64][65] . For instance, while bilinguals generally recognize cognate words (i.e., words that share meaning and form across languages) faster than language-specific words, phonological overlap between words produces inhibitory effects in lexical decision tasks 63 , while cross-language orthographic similarity produces inhibitory effects when the task is to decide which language words belong to 64 . The alternative hypothesis is therefore that the degree of overlap between co-activated lexical entries can modulate the mechanisms of selection between them, triggering different activation patterns for selection between more similar ones (English and Dutch), compared to the more distant ones (English and Spanish). In this latter case, language similarity would emerge as another variable that helps fine-tune the underlying neural processes to enable optimal performance, without necessarily causing any apparent behavioural differences between the groups.
In sum, the current study investigated how the cognitive demands of using two languages modulate the neural mechanisms of selective attention, and whether the similarity between the languages plays a further role in shaping these processes. To this end, we tested how early Dutch-English and Spanish-English bilinguals encode attended speech in the presence of different types of interference, before comparing these results with the patterns observed in monolinguals using multivariate Representational Similarity Analysis 66 .
Results
Behaviour. Participants completed the comprehension task with a mean accuracy of 93.5% (SD = 4.9%) in the Spanish-English group and 88.68% (SD = 6.1%) in the Dutch-English group, indicating that the target speaker was attended to as instructed. A one-way repeated measures ANOVA showed no difference in the number of correct responses across the four conditions in the Spanish-English group [F(3,63) = 1.38, p = 0.26], but a significant difference between conditions in the Dutch-English group [F(3,51) = 6.46, p = 0.001]. Post-hoc t-tests showed that this was driven by the Single Talker condition, where the number of correct responses was lower than in the Native-Native and Native-MuR conditions (p < 0.05). This also affected the comparison of the overall performance of the two groups (t = −3.0, p < 0.01). Subsequent analyses however revealed that this unexpected Single Talker result in the Dutch-English group arose from two ambiguous questions, which the majority of participants answered incorrectly. We also compared bilinguals with the monolingual results we reported earlier 50 (M = 94.3%, SD = 3.8%). An independent-samples t-test showed that monolinguals and Spanish-English bilinguals did not differ from each other (t = 0.61, p = 0.54). However, monolinguals scored higher than the Dutch-English bilinguals (t = −3.9, p < 0.001).
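For completeness, the statistical pipeline described here (a one-way repeated-measures ANOVA over the four listening conditions, followed by group-level t-tests) can be expressed compactly. The sketch below uses statsmodels and scipy on a hypothetical long-format accuracy table; it is not the authors' analysis script and all numbers are placeholders:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format accuracy data: 22 subjects x 4 conditions
rng = np.random.default_rng(0)
conds = ["Native-Native", "Native-Unknown", "Native-MuR", "Single Talker"]
df = pd.DataFrame({
    "subject": np.repeat(np.arange(22), 4),
    "condition": np.tile(conds, 22),
    "accuracy": rng.normal(0.93, 0.05, 22 * 4),
})

# One-way repeated-measures ANOVA across the four listening conditions
print(AnovaRM(df, depvar="accuracy", subject="subject", within=["condition"]).fit())

# Independent-samples t-test between groups (e.g. bilinguals vs monolinguals)
biling = rng.normal(0.935, 0.049, 22)
mono = rng.normal(0.943, 0.038, 22)
t, p = stats.ttest_ind(biling, mono)
print(f"t = {t:.2f}, p = {p:.3f}")
```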
Effects of attention on neural encoding of speech. Across the two bilingual groups, continuous EEG data were recorded from participants listening to narratives in Spanish or Dutch under four different listening conditions (Native Language, Unknown Language, or MuR as interference, and Single Talker). The first set of analyses aimed to establish the overall patterns of encoding of attended and unattended speech in bilinguals, and the extent to which these follow the pattern seen in monolinguals 50. Cross-correlations for attended and unattended speech envelopes for bilinguals (averaged across participants and conditions) are depicted in Fig. 1. The attended cross-correlation functions (Fig. 1C,D) show robust neural encoding of the attended speech envelope, with major clustering of peaks around 100-150 ms and 300 ms post-onset, and a less prominent one around 550 ms, comparable to the results seen in monolinguals (overlaid in blue in Fig. 1D). The averaged cross-correlation functions for unattended speech (Fig. 1E) show that only a limited number of EEG channels cross the significance threshold, indicating that attention had a major effect on the encoding of the speech envelopes in both groups. The shape of the unattended cross-correlation functions differs from the attended ones, replicating previous results 49,50 and suggesting that the unattended cross-correlations are not merely a weakened representation of the attended ones. Scalp topographies for average attended cross-correlations (Fig. 1F) are plotted for latency ranges of 100-160 ms, 290-350 ms, and 510-570 ms, based on the concentration of peaks at those time points. They are comparable across the two bilingual groups, with a posterior central distribution of effects in the earlier time windows and a more frontal distribution of the later effects. Comparisons across conditions: attended speech. One of the key findings in monolinguals 50 was that the type of interference significantly modulated attentional encoding, with increasing intelligibility of the distractor causing stronger encoding of the attended stream (Native > Unknown > MuR), and the Single Talker (no interference) condition triggering the strongest attentional encoding overall. To assess whether the same pattern holds for bilinguals, we subjected attended cross-correlations (including the Single Talker condition) in each dataset to a one-way repeated measures ANOVA, followed by pairwise post-hoc cluster-based permutation t-tests. In the Spanish-English group, the ANOVA results (FDR corrected for multiple comparisons) showed significant differences across conditions; post-hoc t-tests revealed that this was driven by the Single Talker condition, which showed the strongest envelope encoding (Table 2). Importantly however, there were no significant differences between the encoding of the attended streams across the three interference conditions (Native-Native, Native-Unknown, Native-MuR). In the Dutch-English dataset, a significant ANOVA followed by post-hoc t-tests again revealed that this was driven by the Single Talker condition, which differed from the Native-Native condition from 330 ms post onset. Once more however, post-hoc t-tests showed no significant differences between attentional encoding in the other three interference conditions.
This set of results conveys two key points. Firstly, and consistently with the results in monolinguals, it shows that selective attention requires processing capacity 12,67, such that the presence of interference diminishes the capacity for entrainment to the attended stream compared to the Single Talker (no interference) condition. More importantly however, it shows that the nature of the distractor does not directly influence the strength of encoding of the attended stream in bilinguals. This is in stark contrast to the results from the equivalent analysis in monolinguals, which showed significant modulation of attentional encoding by the intelligibility of the interfering stream (Fig. 2). This clearly points to a modulation of selective attention mechanisms by the experience of speaking multiple languages.
Comparisons across conditions: unattended speech. Next, we compared cross-correlation functions between the EEG data and the unattended envelopes across the three interference conditions for both bilingual groups, following the same procedure as above. Results showed no significant differences between conditions in either group, replicating the results seen in monolinguals, where only subsequent post-hoc analyses revealed subthreshold differences between unattended conditions. We explored such potential differences in the current data too, by comparing the unattended cross-correlation functions in each group using pairwise cluster-based permutation t-tests. In the Spanish-English group, the post-hoc t-tests showed no significant differences between the unattended Native and Unknown streams, suggesting comparable encoding of unattended linguistic interference; however, both unattended linguistic interferences were more strongly encoded than the unattended MuR stream (Table 3). In the Dutch-English bilinguals, all types of unattended interference were equally encoded, indicating no differences between the encoding of unattended linguistic and non-linguistic interference.
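The cluster-based permutation t-tests used throughout these comparisons control for multiple comparisons over time by thresholding pointwise paired t-values, summing t-values within contiguous suprathreshold clusters, and comparing each observed cluster mass against a null distribution obtained by randomly sign-flipping condition differences within subjects. A bare-bones Python sketch of this logic for a single channel or time course follows (full analyses, e.g. in FieldTrip or MNE, also cluster over channels); it is a generic illustration, not the study's code:

```python
import numpy as np
from scipy import stats

def cluster_perm_ttest(a, b, n_perm=1000, alpha=0.05, seed=0):
    """Paired cluster-based permutation test between two conditions.
    a, b : (n_subjects, n_times) arrays of cross-correlation values."""
    rng = np.random.default_rng(seed)
    diff = a - b
    thr = stats.t.ppf(1 - alpha / 2, df=diff.shape[0] - 1)

    def cluster_masses(d):
        # pointwise paired t-statistic at each time sample
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
        masses = []
        for sign in (1, -1):        # positive and negative clusters separately
            run = 0.0
            for v in sign * t:
                if v > thr:
                    run += v
                elif run:
                    masses.append(run)
                    run = 0.0
            if run:
                masses.append(run)
        return masses

    observed = cluster_masses(diff)
    # null distribution of the maximum cluster mass under sign flipping
    null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1, 1], size=diff.shape[0])[:, None]
        null[i] = max(cluster_masses(diff * flips), default=0.0)
    return [(mass, (null >= mass).mean()) for mass in observed]

# e.g. attended vs unattended cross-correlations, 20 subjects x 60 lags
rng = np.random.default_rng(1)
a = rng.standard_normal((20, 60)) + 0.8   # offset creates a real cluster
b = rng.standard_normal((20, 60))
print(cluster_perm_ttest(a, b, n_perm=200))
```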
Comparisons within conditions: attended vs unattended speech. The next set of analyses aimed to establish the timing of the dissociation of attended from unattended speech under different types of interference, by directly comparing attended and unattended cross-correlations in each condition separately. The equivalent analysis in monolinguals 50 showed the latest dissociation between the two streams when the interference was fully intelligible (the Native-Native condition), and differences right from the onset in the Native-MuR condition. In the Dutch-English bilinguals, these analyses showed a comparable overall pattern (Table 4), with the differentiation of attended and unattended streams emerging around 300 ms and peaking as late as 540 ms in the Native-Native condition, emerging around 150-200 ms and peaking at 300-400 ms in the Native-Unknown condition, and emerging right from the onset in the Native-MuR condition.
Table 2. Cluster-based permutation t-tests between attended cross-correlation functions across conditions, giving the onset and peak latency (ms), p value, T value (sum of all t values within the cluster), and effect size (Cohen's d at cluster peak) for positive and negative clusters; N.S. = no significant cluster.

Spanish-English:
- Native-Native vs Native-Unknown: N.S. (both clusters)
- Native-Native vs Native-MuR: N.S. (both clusters)
- Native-Unknown vs Native-MuR: N.S. (both clusters)
- Single Talker vs Native-MuR: positive cluster N.S.; negative cluster onset 50 ms, peak 340 ms, p = 0.001, T = −3323.6, d = 0.4

Dutch-English:
- Native-Native vs Native-Unknown: N.S. (both clusters)
- Native-Native vs Native-MuR: N.S. (both clusters)
- Native-Unknown vs Native-MuR: N.S. (both clusters)
- Single Talker vs Native-Native: positive cluster N.S.; negative cluster onset 330 ms, peak 560 ms, p = 0.016, T = −1329.7, d = 0.9
- Single Talker vs Native-Unknown: N.S. (both clusters)
- Single Talker vs Native-MuR: N.S. (both clusters)
Importantly however, the relative onsets of the differentiation of linguistic interference in Dutch-English bilinguals were delayed by an average of 150 ms compared to the results seen in monolinguals. Spanish-English bilinguals also showed early differentiation of attended and unattended envelopes in the Native-MuR condition (starting from onset and peaking at 280 and 590 ms for positive and negative effects, respectively), followed by the Native-Unknown condition (emerging at 30 ms and peaking at 200 ms). However, there were no statistically significant differences in this dataset between the encoding of attended and unattended streams in the Native-Native condition (Fig. 3 and Table 4).
These results reveal that, comparable to the results in monolinguals, the nature of interference affects how early the listeners can differentiate attended from unattended streams, with non-linguistic noise differentiated right from the onset, and linguistic interference differentiated later on. However, they also reveal that bilingualism, as well as the typological similarity of bilinguals' languages, modulates this process, with Dutch-English bilinguals showing evidence of delayed differentiation of the two types of linguistic interference, and Spanish-English speakers showing equivalent encoding of attended and unattended streams when the interference is in their native language.
Attention over time. The continuous nature of the stimuli allowed us to test whether the effects of attention on neural encoding remain constant over time. To this end, we assessed the differences between the encoding of the 'beginning', 'middle', and 'end' of each narrative across subjects. There were no significant differences in the strength of neural encoding over time in any condition (all p > 0.05), for either attended or unattended streams, indicating that the effects were constant throughout the narratives.
Representational similarity analysis (RSA). The pattern of results reported above suggests that bilingualism modifies some of the key mechanisms of auditory selective attention, namely the strength of attentional encoding under different types of interference, as well as the timing of its differentiation from the unattended stream. To confirm these findings and directly compare attentional encoding across monolinguals and bilinguals - whilst circumventing the unavoidable use of different stimuli in each group - we took advantage of RSA 66, a multivariate pattern analysis that allows us to abstract away from the direct item-to-representation similarities and test for patterns of encoding in listeners presented with the same types of interference (second-order isomorphism). To this end, we extracted patterns of encoding for all attended and unattended conditions in each group, in the time windows of consistent attentional effects (100-160 ms, 290-350 ms and 510-570 ms, Fig. 1D). These patterns were compiled into 7 × 7 representational dissimilarity matrices (RDMs, one per time window per group) and compared within each window. The results are summarized in Fig. 4. As shown there, significant differences in the patterns of encoding emerged from the comparisons between monolinguals and bilinguals, with monolinguals differing from Dutch-English bilinguals at all time windows (100-160 ms, 290-350 ms, 510-570 ms) and from Spanish-English bilinguals in the early (100-160 ms) and late (510-570 ms) time windows. This adds support to the argument that bilingualism modifies mechanisms of selective attention, and that this modification to some degree reflects the typological similarity of the bilinguals' languages.
Discussion
This study aimed to establish whether the demands of learning and using a second language influence the neural mechanisms of auditory selective attention, and whether this might be further affected by the typological similarity between the two languages. To this end, we tested the neural encoding of continuous attended speech in early balanced bilinguals of typologically similar Dutch and English, and typologically dissimilar Spanish and English, and compared them to results from English monolinguals reported earlier 50. In a cocktail-party paradigm, participants attended to a narrative in their native language while ignoring a competing narrative in the other ear. The competing stream varied from a fully intelligible story in the participant's native language, to linguistic interference in a language unknown to the listener, to well-matched non-linguistic noise (Musical Rain). The results clearly revealed that the experience of knowing and speaking multiple languages modulates the neural mechanisms of selective attention, even in the absence of consistent behavioural differences between monolinguals and bilinguals. They also suggested that the lifelong effects of the demands imposed by the typological similarity of bilinguals' languages may help refine how the brain selects relevant information, tuning it towards the type of information recurrently used to dissociate between the co-activated languages. We elaborate on these findings below.
The neuro-cognitive consequences of bilingualism are a hotly-debated topic 32,68. One controversial issue is how the experience of learning and using a second language affects the capacity to selectively attend to a stimulus in the presence of interference; some studies report that bilinguals outperform monolinguals in such tasks 1,28,29, and others question those findings 5,31,32. Yet, as argued earlier, it is unequivocal that learning and using multiple languages presents a major demand for our neurocognitive system, with parallel activation of both languages within the same network triggering competition and inhibition of the unwanted one 14,69. Across domains as diverse as learning to juggle or read, memorising a sequence or acquiring detailed spatial knowledge, the brain responds to such environmental demands by neuroplastic adaptation and modulation of both its structural and functional architecture 33,34. It is therefore unsurprising that similar effects have been observed in bilinguals too, with structural and functional changes including grey and white matter density 38,40,41, connectivity 70, or activation in frontoparietal regions 43, as well as altered processing of aspects of auditory information 6,44,46. Our results complement these findings by showing that bilingualism modulates the neural mechanisms of selective attention, without necessarily causing any apparent behavioural differences between monolinguals and bilinguals.
The evidence emerged from both the analysis of how attended speech is encoded across different types of interference for each group separately, and from direct comparisons of activation patterns between monolinguals and bilinguals using multivariate RSA. In line with the literature 49,50,57,71, the cross-correlation results showed that attention strongly modulated the neural tracking of speech envelopes, with stronger encoding observed for attended than for unattended speech. We also saw that the Single Talker condition, where the attended stream was presented in the absence of any interference, triggered more robust encoding than attended speech in the interference conditions - replicating the findings that attention 'consumes' processing capacity 12,67. However, and in stark contrast to the results observed in monolinguals 50, the type of distractor did not have an effect on the strength of encoding of the attended stream in bilinguals. The finding that monolinguals enhance the tracking of the attended stream as interference becomes more intelligible 50 conforms to the predictions of flexible accounts of selective attention 12,13: selection between streams is less demanding when the distractor is unintelligible and can be dissociated using lower-level perceptual information, while the dissociation between two fully intelligible streams requires the use of higher-level semantic and syntactic information, consuming more processing capacity and causing stronger encoding of the attended stream but delayed dissociation. However, this effect was not evident in either Spanish-English or Dutch-English bilinguals, both of which showed equal encoding of the attended streams across the three interference conditions.
                                     Positive Cluster                                 Negative Cluster
                                     Onset (ms)  Peak (ms)  P value  T value  d       Onset (ms)  Peak (ms)  P value  T value  d
SPANISH-ENGLISH
Native-Native vs Native-Unknown      n/a         n/a        N.S.     n/a      n/a     n/a         n/a        N.S.     n/a      n/a
Native-Native vs Native-MuR          n/a         n/a        N.S.     n/a      n/a     0           60         0.
DUTCH-ENGLISH
Native-Native vs Native-Unknown      n/a         n/a        N.S.     n/a      n/a     n/a         n/a        N.S.     n/a      n/a
Native-Native vs Native-MuR          n/a         n/a        N.S.     n/a      n/a     n/a         n/a        N.S.     n/a      n/a
Native-Unknown vs Native-MuR         n/a         n/a        N.S.     n/a      n/a     n/a         n/a        N.S.     n/a      n/a

Table 3. Cluster-based permutation t-tests between unattended cross-correlation functions across conditions. Onset (ms) and Peak (ms) = cluster onset and peak latencies; T value = sum of all t values within the cluster; Cohen's d = effect size at cluster peak; n/a = absence of a significant cluster.
                                     Positive Cluster                                 Negative Cluster
                                     Onset (ms)  Peak (ms)  P value  T value  d       Onset (ms)  Peak (ms)  P value  T value  d
Native-Native                        n/a         n/a        N.S.     n/a      n/a     n/a         n/a        N.S.     n/a      n/a

Table 4. Cluster-based permutation t-tests between attended and unattended cross-correlation functions in each condition. Onset (ms) and Peak (ms) = cluster onset and peak latencies; T value = sum of all t values within the cluster; Cohen's d = effect size at cluster peak; n/a = absence of a significant cluster.

The RSA results further support these findings, with data showing that monolinguals differed from Dutch-English bilinguals in all time windows tested and from Spanish-English bilinguals in the early (100-160 ms) and late (510-570 ms) time windows, implying a modulation of both early and late attentional processing, where information is dissociated based on perceptual and lexico-semantic analysis, respectively 12. This complements the evidence that the type of interference - and the analysis it requires - does not impact attentional encoding in bilinguals the same way as it does in monolinguals. Yet despite the same overall pattern, some of the finer-grained results do not replicate across the two bilingual groups, suggesting that the typological similarity of the bilinguals' languages further shapes this neural modulation - a result to which we return later.
A possible reason for the lack of links between attentional encoding and the intelligibility of interference in bilinguals is that this reflects their ability to utilize fewer resources in difficult listening situations. This would be in line with the argument that consistent suppression of the non-target language experienced by bilinguals leads to enhanced capacity for selective attention 26,72,73. This practice might then reduce the attentional capacity needed for efficient encoding of the attended stream, which in turn would not vary as a function of the nature of interference - while still providing the basis for optimal behavioural performance across all interference conditions. Another possible explanation, however, links to the evidence that selective attention is a cognitive faculty with limited capacity. According to this interpretation, the process of selecting the target language and inhibiting the non-target one will unavoidably utilize some of the existing attentional capacity, thus limiting the resources available for further attentional enhancement as a function of the type of interference. As a result, there would be no increase in attentional encoding due to an increase in the intelligibility of interference - a pattern replicated in both Spanish-English and Dutch-English bilinguals. Either way, the present findings add to the substantial body of evidence about neuroplastic changes in response to environmental demands on our neurocognitive system, of which bilingualism is one prominent example. Yet, as previously noted 27, one notable difference is that in many other domains the neuroplastic change is usually either closely related or in the same domain as the experience driving it (e.g., improved visuospatial coordination as a result of juggling 33), while with bilingualism the effects go beyond language, extending into domain-general capacities like selective attention. Even more interestingly, however, the current results show that this apparent modulation of the neural mechanisms of selective attention in bilinguals does not necessarily result in changes to their behavioural performance. Put differently, our results suggest that bilinguals recruit mental resources differently from monolinguals in order to achieve the same performance, pointing to a different organization of the underlying neurocognitive mechanisms in the two groups.
The pattern of findings about the influence of language similarity on the way the brain selects relevant information is more complex. Here, the existing evidence is mixed, with some indicating that any combination of languages can modify bilinguals' performance on tasks requiring inhibition and attentional control 60,63, consistent with the findings that both languages are activated in parallel regardless of language combinations, or even modalities (i.e., spoken and signed 74,75); and other data contradicting these findings 32. Since we were not interested in behavioural differences between the groups, and the task was designed to allow optimal and comparable performance across the board (i.e., simple comprehension), our focus was firmly on how the varying demands of selection between more- or less-similar languages shape the underlying mechanisms of selective attention. In this context, language similarity is seen as an additional variable that helps fine-tune this neuroplastic adaptation. Our results suggest that there is indeed a subtle neural difference in the encoding of attended speech between bilinguals who speak a combination of typologically similar (Dutch-English) or dissimilar languages (Spanish-English). Despite the two groups being comparable in their absence of attentional boosting for intelligible interference, the Dutch-English bilinguals appear to show a more comprehensive modulation of the underlying attentional mechanisms, with results showing differences across all three time windows tested in RSA, and delayed dissociation of the two types of linguistic interference (where the comparable effects in monolinguals emerged 150 ms earlier on average 50). This is particularly surprising for the unknown-language interference (Serbian), as Dutch and Serbian belong to different genera of the Indo-European family and have very different phonologies, which should in principle be easy to differentiate for Dutch speakers.
This pattern arguably points to a modification of the mechanisms of selective attention due to the life-long experience of interference from English to Dutch (where resolving competition might rely on stronger top-down processing), which we then see applied even when resolving interference from other languages. In other words, life-long experience with particular processing demands shapes attentional processing accordingly, such that Dutch-English bilinguals in this case use the strategy honed for dealing with their two similar languages, even with an interfering language that is less similar. This would be in line with the adaptive control hypothesis 76, which suggests that control processes themselves can be adapted to the recurrent processing demands placed upon them. This modification is then just another example of adaptive changes of the mechanisms of selective attention by the demands of bilingualism - in this case the more specific variable of similarity between the co-activated entries. Whether this interpretation is correct or not, our findings suggest that the necessity to choose between typologically similar languages leads to a more comprehensive modification of the mechanisms of selective attention, compared to the effects triggered by less similar languages. Another interesting difference between Dutch-English and Spanish-English bilinguals concerns the dissociation of attended and unattended speech in the Native-Native condition (Fig. 3). Here, Dutch-English bilinguals showed late dissociation of the interference in their native tongue as discussed above (starting from 270 ms but peaking as late as 540 ms), while Spanish-English bilinguals encoded both attended and unattended native streams equally throughout the tested period. This surprising finding is most likely driven by the strong encoding of unattended linguistic interference in the Spanish-English group (Table 3), which nevertheless did not impair their comprehension of attended narratives in this condition. Further research is, however, needed to clarify this.
In sum, this research revealed that bilingualism modulates the neural mechanisms of selective attention, with the typological similarity of the two languages helping refine this process to reflect the requirements of resolving competition between more- or less-similar competitors. This is consistent with the view that learning and using multiple languages represents a major cognitive demand, which triggers neuroplastic adaptation of our processing system. The finding that this holds even in the absence of consistent behavioural differences between monolinguals and bilinguals shows that this reconfiguration is indeed adaptive in nature, aimed at allowing optimal behavioural performance. It also points to a different organization of the underlying neurocognitive mechanisms in early bilinguals, which may or may not be fully met or harnessed in the current educational systems - an intriguing hypothesis that requires further investigation. To our knowledge, this is the first study to investigate attentional encoding of natural continuous speech in bilingualism.

Design and Methods

Participants. Forty-six early bilinguals who learned English as their second language before the age of 6 were recruited from the University of Cambridge. Twenty-eight were native speakers of Spanish and 18 were native speakers of Dutch. Participants were recruited if they were balanced and fully proficient in both languages and did not report a dominant language. They completed the Bilingual Language Profile Questionnaire 77, which assesses language dominance through self-report and takes into account age of acquisition, length of formal education in L1 and L2, environment where the languages are spoken, and dominance. There were no significant differences between the groups on any of these variables (p > 0.05; see Supplementary Materials for details). All participants were right-handed with no history of hearing problems. Six participants from the Spanish-English group were excluded from data analyses due to technical problems, thus 40 participants contributed to the present study (17 males; mean age: 26.3). Participants were provided with detailed information regarding the purpose of the study and gave written consent. The study was approved by the Cambridge Psychology Research Ethics Committee and carried out in accordance with the relevant guidelines and regulations. The two groups of bilinguals were also compared to a group of 22 right-handed English monolingual listeners (10 males; mean age 21.5 years), whose results we reported earlier 50.

Stimuli and procedure. The stimuli for each group of bilingual listeners consisted of ten stories and two matched Musical Rain (MuR) sets that acted as a non-linguistic acoustic baseline. For the Spanish-English bilinguals, eight stories were in Spanish (native language) and two were in Serbian (a language unknown to the participants, which belongs to the Slavic genus of the Indo-European family). Two native Spanish female speakers recorded four stories each, and one native Serbian female speaker recorded the Serbian stories. Stories were simple children's narratives, such as "Abdula y el genio". For the Dutch-English bilinguals, eight stories were in Dutch (native language) and two were in Serbian (also unknown to the participants), recorded by female native speakers of the two languages. Gender was kept constant to reduce segregation strategies based on the talker's gender 78.
All stories were transcribed into 120 sentences each, with each sentence ranging from 2.5 to 3.1 s in length, and were normalised to have equivalent root-mean-square sound amplitude. From each story, the first 60 sentences (first half) were strung together, and the second 60 sentences (second half) were strung together (with a 300 ms silence gap between each sentence), to create two blocks of approximately 3.2 minutes (192 s) in length. The full list of stimuli is presented in the Supplementary Materials.
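For illustration, the normalisation and concatenation steps can be sketched in Python/NumPy as below. The study itself used MATLAB; the sampling rate and target RMS value here are assumptions.

```python
import numpy as np

def rms_normalise(signal, target_rms=0.05):
    """Scale a waveform so that its root-mean-square amplitude equals target_rms."""
    rms = np.sqrt(np.mean(signal ** 2))
    return signal * (target_rms / rms)

def build_block(sentences, fs=44100, gap_s=0.3, target_rms=0.05):
    """Concatenate RMS-normalised sentences with a fixed silence gap between them."""
    gap = np.zeros(int(round(gap_s * fs)))
    pieces = []
    for s in sentences:
        pieces.append(rms_normalise(s, target_rms))
        pieces.append(gap)
    return np.concatenate(pieces[:-1])  # drop the trailing gap

# Example: 60 fake 2.8 s "sentences" yield a block of roughly three minutes.
rng = np.random.default_rng(0)
fs = 44100
sentences = [rng.standard_normal(int(2.8 * fs)) for _ in range(60)]
block = build_block(sentences, fs=fs)
print(block.size / fs / 60, "minutes")
```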
The MuR acoustic baseline is a signal that closely tracks the acoustic properties of speech, while at the same time not being interpretable as speech 79. To produce it, we extracted temporal envelopes from the recorded stimuli and filled them with jittered fragments of synthesized speech. MuR thus preserves the spectrotemporal energy distribution, root-mean-square level, and the temporal envelope of the speech stimuli, but due to the absence of continuous formants it does not elicit a speech percept. MuR was generated using MATLAB (The Mathworks Inc., 2010, Natick, MA, USA).
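The envelope-extraction step underlying MuR generation can be illustrated with a Hilbert-transform sketch; this is a Python stand-in for the MATLAB implementation, and the low-pass cut-off is an assumption.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def temporal_envelope(signal, fs, lp_cutoff=30.0):
    """Broadband temporal envelope: magnitude of the analytic signal,
    low-pass filtered to keep only the slow amplitude fluctuations."""
    env = np.abs(hilbert(signal))             # instantaneous amplitude
    b, a = butter(4, lp_cutoff / (fs / 2.0))  # 4th-order low-pass
    return filtfilt(b, a, env)                # zero-phase smoothing

# Example on a synthetic amplitude-modulated tone
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)
am = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))    # 4 Hz modulation, a speech-like rate
env = temporal_envelope(carrier * am, fs)
```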
The study used a dichotic-listening task. In each condition, participants were instructed to attend to four blocks of stories (4 × 60 sentences, 240 sentences in total), which were counterbalanced between their left and right ear. A distractor stream was simultaneously presented in the other ear (Fig. 1A). Participants always attended to stories in their native language. There was no repetition of attended sentences (i.e., each sentence was attended to only once). The Single Talker condition was always presented first in order to familiarize the participants with the demands of attending left/right, and the remaining three conditions were presented in a random order. The order of stories within each condition was also randomized for each participant. In total, participants attended to 960 sentences across four conditions. The total number of unattended sentences was 720, due to the lack of interference in the Single Talker condition. This is the same experimental procedure as used in the study with monolinguals 50 , which we use for comparison with the bilingual data. For the duration of the experiment, participants sat in a comfortable chair in a sound-attenuated room. They were instructed to fix their gaze on a cross placed 150 cm in front of them. All stimuli were delivered through E-A-RTONE 3a earphones, with a mean intensity of 65 dB SPL, and presented using MATLAB's Psychophysics Toolbox 80,81 . Prior to data acquisition we assessed the participants' hearing using a short test which evaluated the perception of pure tones at different frequencies and dB levels. All participants achieved a 100% score on the hearing test.
Behavioural measures.
To ensure that participants were paying attention, to keep the task requirements natural, and to enable optimal behavioural performance, they were asked to simply listen attentively to the instructed side, and informed that they would be completing a set of comprehension questions after each block. There were ten yes/no questions after each block, for a total of 160 responses per participant.
Data collection and preprocessing. We recorded EEG using a 128-channel Ag/AgCl electrode net (Electrical Geodesics Inc., Eugene, OR, USA). Thirty-six channels were excluded from the recording, as they are located in the outer layers of the net and measure significantly more muscle noise, which is of no interest in the current study. Voltages for the remaining 92 channels were recorded at a sampling rate of 500 Hz, with net impedances kept below 100 Ω. Data was down-sampled to 250 Hz, filtered between 1-100 Hz, and pre-processed in MATLAB (EEGLAB Toolbox 82). We epoched data at the sentence level (2 seconds) with a −200 ms pre-stimulus time window, which resulted in 960 attended and 720 unattended trials per participant. Artifact rejection was carried out per epoch, with bad trials removed and bad channels interpolated. In order to isolate independent components and identify artifacts such as eye blinks and non-brain activity, we used the Infomax Independent Component Analysis (ICA) algorithm. Artifacts were rejected according to their topography, time course, and spectral traits. Data was then re-referenced to the average of all channels.

Speech envelopes. The temporal envelope of the speech was calculated for all attended and unattended stories and the MuR sets. Speech envelopes were computed using Mel-frequency cepstral coefficients (MFCC). EEG data were down-sampled to 100 Hz to match the speech envelopes. The acoustic properties of the envelopes (i.e., the distribution of their mean frequency components) were matched across the three types of interference in both groups (F < 1; p > 0.05), ensuring the validity of comparisons between them using the cross-correlation approach.
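A minimal NumPy/SciPy sketch of the band-pass filtering and sentence-level epoching steps described above (the actual pipeline used EEGLAB; the filter order and the synthetic data are assumptions, while the channel count, rates and epoch window follow the text):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_and_epoch(eeg, fs, onsets_s, tmin=-0.2, tmax=2.0):
    """Band-pass 1-100 Hz, then cut sentence-level epochs around each onset.
    eeg: (n_channels, n_samples); onsets_s: sentence onsets in seconds."""
    b, a = butter(4, [1.0 / (fs / 2), 100.0 / (fs / 2)], btype="band")
    eeg = filtfilt(b, a, eeg, axis=1)
    n0, n1 = int(tmin * fs), int(tmax * fs)
    epochs = []
    for t0 in onsets_s:
        i = int(round(t0 * fs))
        if 0 <= i + n0 and i + n1 <= eeg.shape[1]:
            ep = eeg[:, i + n0:i + n1]
            # baseline correction: subtract the mean of the pre-stimulus samples
            ep = ep - ep[:, :-n0].mean(axis=1, keepdims=True)
            epochs.append(ep)
    return np.stack(epochs)  # (n_epochs, n_channels, n_times)

# 92 channels at 250 Hz, epochs from -200 ms to 2 s around each sentence onset
fs = 250
eeg = np.random.randn(92, fs * 600)   # 10 minutes of placeholder data
onsets = np.arange(1.0, 590.0, 3.1)   # one sentence every ~3.1 s
ep = preprocess_and_epoch(eeg, fs, onsets)
```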
Data analysis. The relationship between the EEG channels and the speech envelopes was characterized by calculating the Pearson correlation coefficient r as a function of lag. This procedure shows EEG activity that encodes the speech envelopes. If a speech envelope is in synchrony with an EEG channel at a particular latency, a non-zero cross-correlation will be shown at a lag equal to that latency. The cross-correlation function 83 assumes a linear relationship between the acoustic envelope and neural activity, and has been widely used in the literature 49,84. We calculated this correlation for each 10 ms lag in the range from −200 ms before to 600 ms after the onset of a sentence, a time window that covers the range of the effects reported in the literature 85. We cross-correlated the 92 EEG channels with the attended, unattended and control speech envelopes of each sentence. Control cross-correlations (which are due to chance, Fig. 1B) were obtained by cross-correlating speech envelopes of non-matching sentences with the EEG channels for each dataset separately. Control cross-correlation functions were then averaged across time and channels to form a Gaussian distribution, which was used to define the confidence interval at 95%. Attended and unattended cross-correlation values below the 2.5th percentile or above the 97.5th percentile were deemed to be significantly different from zero (p < 0.05, before correction for multiple comparisons).
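A simplified single-channel sketch of the lagged cross-correlation and the chance-level control described above (the full analysis runs over 92 channels and later corrects for multiple comparisons):

```python
import numpy as np

def lagged_xcorr(eeg, env, fs=100, lags_ms=range(-200, 601, 10)):
    """Pearson r between one EEG channel and a speech envelope at each lag.
    Positive lags pair EEG at time t+lag with the envelope at time t."""
    r = []
    for lag in lags_ms:
        k = int(lag * fs / 1000)
        if k >= 0:
            x, y = env[:len(env) - k], eeg[k:]
        else:
            x, y = env[-k:], eeg[:len(eeg) + k]
        r.append(np.corrcoef(x, y)[0, 1])
    return np.array(r)

def control_bounds(eeg_trials, env_trials, rng, n=1000, alpha=0.05):
    """Chance-level distribution from non-matching sentence/EEG pairings;
    returns the 2.5th/97.5th percentile bounds of the null."""
    null = []
    for _ in range(n):
        i, j = rng.integers(len(eeg_trials)), rng.integers(len(env_trials))
        if i == j:
            continue  # skip matching pairs
        null.append(lagged_xcorr(eeg_trials[i], env_trials[j]))
    null = np.concatenate(null)
    return np.percentile(null, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```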
In each dataset, we first computed average cross-correlation functions across all attended and all non-attended trials by averaging the correlation values for all participants and conditions at each time lag. This was followed by calculations of the attended and non-attended cross-correlations in each condition separately. The cross-correlation functions for all attended and all non-attended trials were not directly compared due to differences in the overall numbers of attended and unattended trials (960 vs 720). To test for differences between attended cross-correlation functions across the four conditions, we compared attended values per electrode in the −200 to 600 ms time window in a one-way repeated measures ANOVA, using a non-parametric permutation approach as implemented in the statcond function in the EEGLAB Toolbox. Control for multiple comparisons was achieved using the False Discovery Rate (FDR, p < 0.05) 86 implemented in the fdr_bh function. The ANOVAs were followed by the non-parametric cluster-based permutation pairwise t-tests described below. The same approach was used to look at the differences between unattended cross-correlation functions across the conditions.
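The FDR step corresponds to the standard Benjamini-Hochberg procedure; a compact sketch equivalent in spirit to the fdr_bh function:

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR: return a boolean mask of rejected hypotheses.
    A sorted p-value p_(k) is significant if p_(k) <= (k/m)*q, for all k up
    to the largest index satisfying the inequality."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresh
    mask = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.nonzero(below)[0].max()
        mask[order[:kmax + 1]] = True
    return mask

# Example: 92 electrode-wise p-values, a few of them genuinely small
rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 87)])
print(fdr_bh(p, q=0.05).sum(), "electrodes survive FDR")
```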
In order to evaluate the differences between pairs of attended or unattended cross-correlation functions, and also to compare attended and unattended cross-correlation functions in each condition, we carried out non-parametric cluster-based permutation pairwise t-tests, as implemented in the Fieldtrip MATLAB Toolbox 87. To this end, pairs of experimental conditions were compared in 10 ms steps for each electrode in the −200 to 600 ms time window. All samples whose t values exceeded the threshold corresponding to p < 0.05 (two-tailed test) were clustered on the basis of temporal and spatial adjacency, and corrected for multiple comparisons using Monte Carlo randomisation. Here, trials are randomly drawn from a combined pool of the two experimental conditions and placed into two subsets. This process was repeated 1000 times to create a histogram of cluster-level t values and to compute the proportion of random partitions with a value greater than the observed one. If this proportion (the p value) was less than 0.05, the conditions were considered to be significantly different from each other. For each cluster of significant differences we report T values (representing the summed t values across all significant electrodes) and the effect size (Cohen's d) at the peak. To calculate Cohen's d we collapsed the relevant electrodes and time points (defined as 10 ms before and after the peak) into a vector of N participants for each dataset, and computed the difference between their means. This was done for each comparison in turn.
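A stripped-down, time-only version of this cluster-mass permutation test might look like the sketch below; Fieldtrip additionally clusters over neighbouring electrodes and partitions trials, whereas this simplified paired version uses subject-level sign flips.

```python
import numpy as np
from scipy import stats

def cluster_perm_test(diff, n_perm=1000, alpha=0.05, seed=0):
    """Paired cluster-mass permutation test over time.
    diff: (n_subjects, n_times) condition differences. Returns observed
    clusters as (start, end, cluster_T, p_value)."""
    rng = np.random.default_rng(seed)
    n, nt = diff.shape
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)

    def cluster_masses(d):
        t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n))
        sig = np.abs(t) > tcrit
        clusters, start = [], None
        for i in range(nt + 1):
            if i < nt and sig[i]:
                if start is None:
                    start = i
            elif start is not None:
                clusters.append((start, i, t[start:i].sum()))
                start = None
        return clusters

    observed = cluster_masses(diff)
    null = np.zeros(n_perm)
    for k in range(n_perm):                       # random sign flips per subject
        flips = rng.choice([-1.0, 1.0], size=(n, 1))
        masses = [abs(c[2]) for c in cluster_masses(diff * flips)]
        null[k] = max(masses, default=0.0)
    return [(s, e, T, float((null >= abs(T)).mean())) for (s, e, T) in observed]

def cohens_d_at_peak(diff, peak, halfwidth=1):
    """Effect size at the cluster peak: collapse lags around the peak per
    subject, then mean/SD across subjects (paired design)."""
    v = diff[:, peak - halfwidth:peak + halfwidth + 1].mean(axis=1)
    return v.mean() / v.std(ddof=1)

# Demo: 20 subjects, 80 lags, an effect injected between lags 30 and 45
rng = np.random.default_rng(1)
diff = rng.standard_normal((20, 80))
diff[:, 30:45] += 0.8
print(cluster_perm_test(diff))
```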
Attention over time.
To assess whether the tracking of both attended and unattended acoustic envelopes changed as the story unfolded over time, we compared the neural encoding of sentences at the beginning, middle and end of the narrative. To this end, each block (60 sentences) was split into three equal parts consisting of 20 sentences (beginning: 1-20; middle: 21-40; end: 41-60), and items were then pooled across all 'beginning', 'middle' and 'end' parts per condition. This resulted in 80 sentences per group in each condition (e.g., condition 1 = 1a, 1b, 1c; where a = beginning, b = middle, c = end), which were compared for attended and unattended cross-correlations using the non-parametric cluster-based permutation t-tests described above.
Representational similarity analysis.
To directly compare the patterns of neural encoding across the groups, we used Representational Similarity Analysis (RSA), a multivariate pattern analysis that examines the patterns of neural activity elicited by different experimental items 66. At the heart of RSA is a distinction between first-order and second-order isomorphism 88, where a first-order isomorphism captures resemblance between an item and its neural representation, while a second-order isomorphism captures the similarity structure of the items to the similarity structure of their representations. This allows us to abstract away from the direct item-to-representation similarities (which could be affected by different languages presented to each group) and look for similarities in the patterns of attentional encoding in bilinguals and monolinguals presented with the same types of interference. To this end, we used RSA to compute representational (dis)similarity matrices (RDMs) of cross-correlations observed for attended and unattended conditions in each group at time windows of consistent attentional effects (100-160 ms, 290-350 ms and 510-570 ms post sound onset, Fig. 1D). Each entry in an RDM represents dissimilarity (1 minus the correlation value) between activation patterns elicited by a pair of experimental conditions in a specific time-window, averaged across participants and electrodes. To determine the similarity of encoding patterns across the groups, we correlated the RDMs in each time window (Spearman's ρ) and assessed these correlations against a null-hypothesis. The null hypothesis distribution of correlations was obtained by repeatedly randomizing the labels in one RDM and comparing it against the other. Correlations were deemed significant if they fell outside a 97.5% CI (one-tailed) after Bonferroni adjustment for multiple comparisons.
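A minimal sketch of the RDM construction and label-permutation test described above; the activation patterns here are synthetic placeholders, whereas the real analysis uses cross-correlation values averaged across participants and electrodes.

```python
import numpy as np
from scipy.stats import spearmanr

def make_rdm(patterns):
    """patterns: (n_conditions, n_features) mean activation patterns.
    Returns an RDM of 1 - Pearson correlation between condition pairs."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b, n_perm=1000, seed=0):
    """Spearman rho between the upper triangles of two RDMs, with a
    label-permutation null obtained by shuffling the conditions of one RDM."""
    rng = np.random.default_rng(seed)
    n = rdm_a.shape[0]
    iu = np.triu_indices(n, k=1)
    rho = spearmanr(rdm_a[iu], rdm_b[iu])[0]
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(n)
        null[i] = spearmanr(rdm_a[np.ix_(perm, perm)][iu], rdm_b[iu])[0]
    pval = (null >= rho).mean()   # one-tailed
    return rho, pval

# 7 conditions (4 attended + 3 unattended), e.g. 92 electrode features each
rng = np.random.default_rng(2)
rdm_mono = make_rdm(rng.standard_normal((7, 92)))
rdm_bili = make_rdm(rng.standard_normal((7, 92)))
print(rdm_similarity(rdm_mono, rdm_bili))
```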
Data Availability
The datasets generated and analysed in the current study are available on request from the corresponding author. | 2019-06-05T13:13:27.509Z | 2019-06-03T00:00:00.000 | {
"year": 2019,
"sha1": "7e856a26c6cf0825d87249a6bd76e79d79789c44",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-44782-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e856a26c6cf0825d87249a6bd76e79d79789c44",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
17956275 | pes2o/s2orc | v3-fos-license | Origin of the Growing Length Scale in M-p-Spin Glass Models
Two versions of the M-p-spin glass model have been studied with the Migdal-Kadanoff renormalization group approximation. The model with p=3 and M=3 has at mean-field level the ideal glass transition at the Kauzmann temperature and at lower temperatures still the Gardner transition to a state like that of an Ising spin glass in a field. The model with p=3 and M=2 has only the Gardner transition. In the dimensions studied, d=2,3 and 4, both models behave almost identically, indicating that the growing correlation length as the temperature is reduced in these models -- the analogue of the point-to-set length scale -- is not due to the mechanism postulated in the random first order transition theory of glasses, but is more like that expected on the analogy of glasses to the Ising spin glass in a field.
One of the leading contenders for a theory of glasses is the random first-order transition theory (RFOT) [1][2][3]. It had its genesis in p-spin glass models [1]. The particular p-spin models which might be relevant to the properties of structural glasses have a mean-field limit in which there are two critical temperatures T_d and T_K. The upper temperature T_d marks the temperature at which dynamical singularities appear and are like those found in mode-coupling theory [4]. The lower temperature T_K is the temperature of the ideal glass transition. This occurs where the configurational entropy (or log of the number of metastable states) vanishes [5]. Mean-field-like calculations on glass-forming liquid models support this picture [6,7].
It has always been recognized that the dynamical transition at T_d will disappear outside the mean-field limit due to activated processes out of the metastable states. These activated processes make even the existence of metastable states problematical. In a recent paper, Franz et al. [8] calculated a dynamical correlation length using a field-theoretic approach and found, on comparing with numerical data, that the agreement was only good when the length scale was of the order of a particle diameter. The field theory predicts that this length scale should diverge, but the simulations reveal that the correlation length remains small [9], even though time scales increase rapidly: the dynamical transition is an example of an avoided transition.
The ideal glass transition at T_K at mean-field level (or infinite dimension) is associated with a static (equilibrium) transition to a state with one-step replica symmetry breaking (1RSB) [1]. The order parameter q jumps from zero in the high-temperature phase to a finite value at and below T_K. It is this jump in q which leads to the "first-order" part in the name of the RFOT theory. The configurational entropy (or complexity) vanishes as the temperature approaches T_K from above. While there is widespread agreement that the transition at T_d becomes just a crossover or avoided transition in finite dimensions, there is no consensus about what happens to the ideal glass transition outside the mean-field limit. One of us has argued that the 1RSB transition must also be avoided in any finite dimension [10], just like the dynamical transition at T_d. In other words, the lower critical dimension of the 1RSB state is infinite.
In this paper we examine M-p-spin models; in particular the cases of p = 3 with M = 2 and M = 3. These variants of the p-spin model have been extensively studied [11][12][13][14][15]. Their significance is that calculations and simulations can be done with them both at the mean-field level and in finite dimensions. In the M-p-spin model, there are M Ising spins σ_i^(α), α = 1, 2, ..., M, on each site i of (say) a hypercubic lattice. The spins interact with each other via a p-body interaction. The Hamiltonian involves terms of products of p spins chosen from the spins in a pair of nearest-neighbor sites. For the p = 3 case, the Hamiltonian is given by

H = −Σ_⟨ij⟩ [ Σ_{α<β} Σ_γ J_ij^{(αβ);γ} σ_i^(α) σ_i^(β) σ_j^(γ) + Σ_α Σ_{β<γ} J_ij^{α;(βγ)} σ_i^(α) σ_j^(β) σ_j^(γ) ],   (1)

where the notation ⟨ij⟩ means that the sum is over all nearest-neighbor pairs i and j. The number of different coupling constants J per nearest-neighbor bond is then M²(M−1). All these couplings are usually chosen independently from a Gaussian distribution with zero mean and width J.
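For concreteness, the bond energy implied by the two-term form of Eq. (1) can be evaluated as in the following sketch; the coupling dictionary layout is illustrative, and the counting check reproduces M²(M−1) = 18 couplings per bond for M = 3.

```python
import numpy as np
from itertools import combinations

def bond_energy(si, sj, J1, J2):
    """Energy of one nearest-neighbor bond in the p=3, M-spin model.
    si, sj: arrays of M Ising spins (+/-1) on sites i and j.
    J1[(a, b), c]: coupling for sigma_i^a sigma_i^b sigma_j^c (a<b);
    J2[c, (a, b)]: coupling for sigma_i^c sigma_j^a sigma_j^b (a<b)."""
    M = len(si)
    E = 0.0
    for (a, b) in combinations(range(M), 2):
        for c in range(M):
            E -= J1[(a, b), c] * si[a] * si[b] * sj[c]
            E -= J2[c, (a, b)] * si[c] * sj[a] * sj[b]
    return E

# Gaussian couplings of width J
rng = np.random.default_rng(0)
M, J = 3, 1.0
J1 = {((a, b), c): rng.normal(0, J)
      for a, b in combinations(range(M), 2) for c in range(M)}
J2 = {(c, (a, b)): rng.normal(0, J)
      for a, b in combinations(range(M), 2) for c in range(M)}
si = rng.choice([-1, 1], M)
sj = rng.choice([-1, 1], M)
print(len(J1) + len(J2), bond_energy(si, sj, J1, J2))  # 18 couplings for M=3
```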
The versions with M = 2 and M = 3 are of particular interest as at mean-field level they have quite different kinds of behavior. The model with M = 3 at mean-field level has both a dynamical transition at T_d and an ideal glass transition at T_K. The model with M = 2 is completely different at mean-field level. It has neither of these transitions. The origin of the differences can be glimpsed by putting the M-p-spin model into a field-theoretical framework.
The standard way of doing this is to use the Hubbard-Stratonovich transformation on the replicated partition function and then trace over the spins. The resulting field theory associated with this model is the following Ginzburg-Landau-Wilson Hamiltonian,

H_GLW = ∫ d^d r [ (1/2) Σ_{a≠b} {(∇q_ab)² + t q_ab²} − (w_1/6) Σ_{a,b,c} q_ab q_bc q_ca − (w_2/6) Σ_{a≠b} q_ab³ ],   (2)

where q_ab(r) is the order parameter and a and b are replica indices running from 1 to n with n → 0. At mean-field level, this model has been known for a long time to show very different behavior depending on the value of R = w_2/w_1 [16]. When R > 1, there are two transitions at the mean-field level as described above: a dynamical transition at some temperature T_d and a thermodynamic transition at a lower temperature T_K to a state with one-step replica symmetry breaking. When R < 1 neither of these transitions will occur. In Ref. [17], the ratio R was evaluated for the M-p-spin model for general values of M and p. The cases we are interested in in this paper, namely p = 3, M = 2 and p = 3, M = 3, correspond to R ≈ 0.879 and R = 2, respectively. Therefore the two models should indeed show very different mean-field behavior. At temperatures below T_K for the case where R > 1, there is yet another transition, the Gardner transition, to a state with full replica symmetry breaking (RSB) [18]. For the case where R < 1, there is at mean-field level only one transition -- the Gardner transition, a continuous transition to a state which is expected to have full RSB (although this has never been checked explicitly, to the best of our knowledge).
The transition discovered by Gardner is thus present for both the M = 2 and M = 3 models at mean-field level [18]. She showed that the state with full replica symmetry breaking was very similar to that of the Ising spin glass in an applied field h. For this model, there is a line in the h−T phase diagram, the de Almeida-Thouless (AT) line [19], which separates the paramagnetic replica-symmetric state from the state with full replica symmetry breaking. Arguments have been presented [20,21] that the lower critical dimension of states with full replica symmetry breaking is 6. The Gardner transition, which is in the same universality class as the AT transition, should be another avoided transition for all d ≤ 6.
In this paper we have studied both the models with M = 2 and M = 3 within the Migdal-Kadanoff (MK) renormalization group approximation in dimensions d = 2, 3 and 4 to determine how thermal fluctuations modify the mean-field picture of these two models. The MK approximation is one of the few approximations which is reliable for the study of spin glasses in low dimensions [11,12]. The details of our calculation are as in [11,12]. We are interested in particular in whether, in the physically relevant dimensions d = 2 and d = 3, there are any vestiges left of the mean-field transitions. One can see in the molecular dynamics study of Kob et al. [9] clear remnants of the dynamical transition. Only equilibrium properties are studied in this paper, so the only remnants of transitions which could be seen are those of the ideal glass transition and the Gardner transition for the case with M = 3, and just the Gardner transition for the case M = 2. We determined the correlation length ξ by the same method which was used in Refs. [11,12,15], that is, via the decay of the interactions J_ij with distance L on the MK hierarchical lattice: J_ij ∼ exp(−L/ξ). As the temperature T is reduced to zero this correlation length grows to a value ξ(0), which is strikingly large, especially for d = 4. The data on ξ(0) are presented in Table I. The large values of ξ(0) certainly suggest there is an avoided transition mechanism at work. Even in d = 3, ξ(0) is considerably larger than those which have been obtained in simulations of realistic glass models, at least down to temperatures which are currently practical [9].
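Extracting ξ from the decay J_ij ∼ exp(−L/ξ) amounts to a straight-line fit of ln J against L; a minimal sketch, with synthetic couplings standing in for the output of the MK recursion:

```python
import numpy as np

def correlation_length(L, J_widths):
    """Fit |J(L)| ~ exp(-L/xi): the slope of ln J versus L gives -1/xi.
    L: block sizes (e.g., 2**n at MK iteration n); J_widths: typical coupling
    strength (e.g., the standard deviation of the renormalised couplings)."""
    slope, _ = np.polyfit(L, np.log(J_widths), 1)
    return -1.0 / slope

# Synthetic example: couplings decaying with xi = 12 lattice spacings
xi_true = 12.0
L = 2.0 ** np.arange(1, 8)   # 2, 4, ..., 128
noise = 1 + 0.02 * np.random.default_rng(3).standard_normal(L.size)
J = np.exp(-L / xi_true) * noise
print(correlation_length(L, J))   # ~ 12
```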
It is useful to measure temperature T on the scale of the mean-field transition temperature T_c (defined as where the coefficient t in Eq. (2) equals zero). For M = 2, T_c = (√2 z)^{1/2} J, while for M = 3, T_c = (3z)^{1/2} J, where z = 2d is the number of nearest neighbors on the hypercubic lattice [17]. In Fig. 1 the ratio ξ/ξ(0) has been plotted as a function of T/T_c. It shows that, as a function of T/T_c, the ratio ξ/ξ(0) is essentially the same for both M = 2 and M = 3. We had expected to see for the case of M = 3 features which could be associated with a possibly avoided ideal glass transition at T_K. None is visible in Fig. 1. This result is our main finding. What is its significance?
The correlation length studied here is the equivalent of the point-to-set length scale [9,22,23] studied in glassy supercooled liquids. In the RFOT theory [2,3] this length scale grows as the temperature is reduced and eventually diverges at T_K. In that theory the growth of the correlation length is driven by the decrease of the configurational entropy (or complexity) to zero as the temperature approaches T_K. Since the M = 2 model has no ideal glass transition and yet is almost identical in its properties to the M = 3 model, the growing correlation length cannot be a remnant of the ideal glass transition. It must instead be a remnant of the Gardner transition, which is common to both models. Thus the mechanism behind the growing correlation length as the temperature is reduced cannot be that envisaged in the RFOT, but instead must be that associated with the growing correlation length which arises in the Ising spin glass in a field as the temperature is lowered. According to the droplet picture [24-26], the correlation length increases as the temperature is decreased and saturates at T = 0 to a value set by equating the interface energy between a droplet of size ξ(0) and its time reverse, ∼ Jξ(0)^θ, to the energy gained from the field on flipping the droplet, ∼ hξ(0)^{d/2}; this balance gives ξ(0) ∼ (J/h)^{2/(d−2θ)}. The exponent θ ≈ −0.28 for d = 2 and θ ≈ 0.24 for d = 3 (see Ref. [27] for a review of the value of θ in various dimensions d). Table I shows that ξ(0) gets larger as the dimensionality goes up, and when there is an AT line, i.e. when d > 6, it would be expected to be infinite. The MK approximation itself is a low-dimensional approximation (it is exact in one dimension), and cannot be trusted to be even qualitatively correct in dimensions as high as six.
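The droplet balance above is easy to evaluate numerically; in this sketch the field strength h is an illustrative free parameter:

```python
# Droplet estimate of the zero-temperature correlation length:
#   J * xi**theta = h * xi**(d/2)  =>  xi = (J/h)**(2/(d - 2*theta))
def xi0(J, h, d, theta):
    return (J / h) ** (2.0 / (d - 2.0 * theta))

for d, theta in [(2, -0.28), (3, 0.24)]:
    print(d, xi0(J=1.0, h=0.1, d=d, theta=theta))
```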
We have argued before that the growing (point-to-set) length scale seen in supercooled liquids as the temperature is reduced [28,29] is a consequence of their being in the same "universality class" as the Ising spin glass in a field. However, until now, we could not rule out the possibility that the growing length scale might arise through a 1RSB transition as in RFOT (but possibly avoided because of the mechanism in Ref. [10]). The similarity between the M = 2 and M = 3 models shown in Fig. 1 now removes that possibility for the M = 3 model. In a recent paper [30], it has been argued that the discontinuous 1RSB transition in the M = 3 model might be

In RFOT theory, the configurational entropy is argued to go to zero at the Kauzmann temperature T_K. For M-p-spin glass models it is not clear how the configurational entropy should be determined outside the mean-field limit, but we have studied their total entropy by numerically differentiating the free energy calculated within the MK approximation (which leads to some inaccuracy near T = 0). It is plotted in Fig. 2 as a function of the reduced temperature T/T_c. Once again the models with M = 2 and M = 3 behave almost identically, indicating that when the correlation length gets large, a form of universality is present. At high temperatures where the correlation length is small, the two types of model have very different entropies: the high-temperature limit of the entropy per site is k_B M ln 2. While there is no sign in Fig. 2 of the entropy vanishing below some temperature T_K, the entropy is smaller at lower temperatures for the d = 4 versions of the model, and behavior in four dimensions is going to be closer to mean-field theory than behavior in two dimensions.
We have also determined the Edwards-Anderson order parameter

q = [⟨σ_i^(α)⟩²]_av,

where ⟨⋯⟩ denotes the thermal average and [⋯]_av the average over the bond realisations. q is independent of the value of α (which runs from 1 to M). Under the MK iteration scheme the couplings flow to the high-temperature fixed point, where the block spins are decoupled and only single-site terms remain. As a consequence it is easy to evaluate q. The Edwards-Anderson order parameter q is plotted as a function of the reduced temperature in Fig. 3. The figure shows that q → 1 as T → 0, which is to be expected. At low temperatures, where the correlation length is large, both the M = 2 and M = 3 models behave almost identically, which is another example of the "universality" emerging in the problem.
What is striking is that q is non-zero at any temperature, although it does become very small when T ≫ T_c. q is a measure of the extent to which the spin at site i remembers its initial orientation, i.e.

q = lim_{t→∞} [⟨σ_i^(α)(0) σ_i^(α)(t)⟩]_av.
Thus in the p-spin models the spins never completely forget their initial orientations, no matter how high the temperature. This behavior is not a consequence of using the MK approximation. It is a feature which arises in any model described by the H_GLW of Eq. (2) with a non-zero value of w_2, the term which breaks the time-reversal symmetry. At mean-field level q does vanish at temperatures above T_d. p-spin models are meant to be useful models for understanding the properties of supercooled liquids, so this feature of a non-vanishing q is hard to reconcile with the properties of supercooled liquids. These forget their initial conditions after a time of the order of the alpha relaxation time, so for them q is zero on long time scales. Maybe p-spin models are useful for describing the properties of supercooled liquids but only on timescales less than the alpha relaxation time. Given the huge effort which has gone into investigating their properties, this is certainly to be hoped.
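For illustration, q can be estimated in a Monte Carlo simulation from the overlap of two independently evolved replicas with identical bonds, which avoids the noise bias of squaring a single time average; the sketch below assumes pre-computed, equilibrated spin histories.

```python
import numpy as np

def edwards_anderson_q(spins_rep1, spins_rep2):
    """q = [<sigma_i>^2]_av estimated as the site-averaged product of the
    thermal averages from two independent replicas with the same couplings.
    spins_repX: (n_time, n_sites) arrays of +/-1 spins after equilibration."""
    m1 = spins_rep1.mean(axis=0)      # thermal average per site, replica 1
    m2 = spins_rep2.mean(axis=0)      # thermal average per site, replica 2
    return float(np.mean(m1 * m2))    # site (and, in practice, bond) average

# Toy check: perfectly frozen spins give q = 1, free spins give q ~ 0
rng = np.random.default_rng(4)
frozen = np.tile(rng.choice([-1, 1], 100), (500, 1))
free1 = rng.choice([-1, 1], (500, 100))
free2 = rng.choice([-1, 1], (500, 100))
print(edwards_anderson_q(frozen, frozen), edwards_anderson_q(free1, free2))
```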
Finally we note that if we had used the MK approximation with the further approximations which were made in Ref. [31], a Kauzmann transition would have been found for the model with M = 3. It is only by carrying out the MK calculation exactly that one recovers the correct behavior [11]. | 2012-11-19T05:06:38.000Z | 2012-08-15T00:00:00.000 | {
"year": 2012,
"sha1": "62d794f1b851bd8281bc166303c102c888d084bb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1208.3044",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "62d794f1b851bd8281bc166303c102c888d084bb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
258376831 | pes2o/s2orc | v3-fos-license | Separating Daily 1 km PM2.5 Inorganic Chemical Composition in China since 2000 via Deep Learning Integrating Ground, Satellite, and Model Data
Fine particulate matter (PM2.5) chemical composition has strong and diverse impacts on the planetary environment, climate, and health. These effects are still not well understood due to limited surface observations and uncertainties in chemical model simulations. We developed a four-dimensional spatiotemporal deep forest (4D-STDF) model to estimate daily PM2.5 chemical composition at a spatial resolution of 1 km in China since 2000 by integrating measurements of PM2.5 species from a high-density observation network, satellite PM2.5 retrievals, atmospheric reanalyses, and model simulations. Cross-validation results illustrate the reliability of sulfate (SO42–), nitrate (NO3–), ammonium (NH4+), and chloride (Cl–) estimates, with high coefficients of determination (CV-R2) with ground-based observations of 0.74, 0.75, 0.71, and 0.66, and average root-mean-square errors (RMSE) of 6.0, 6.6, 4.3, and 2.3 μg/m3, respectively. The three components of secondary inorganic aerosols (SIAs) account for 21% (SO42–), 20% (NO3–), and 14% (NH4+) of the total PM2.5 mass in eastern China; we observed significant reductions in the mass of inorganic components by 40–43% between 2013 and 2020, slowing down since 2018. Comparatively, the ratio of SIA to PM2.5 increased by 7% across eastern China except in Beijing and nearby areas, accelerating in recent years. SO42– has been the dominant SIA component in eastern China, although it was surpassed by NO3– in some areas, e.g., Beijing–Tianjin–Hebei region since 2016. SIA, accounting for nearly half (∼46%) of the PM2.5 mass, drove the explosive formation of winter haze episodes in the North China Plain. A sharp decline in SIA concentrations and an increase in SIA-to-PM2.5 ratios during the COVID-19 lockdown were also revealed, reflecting the enhanced atmospheric oxidation capacity and formation of secondary particles.
INTRODUCTION
Fine particulate matter with diameters less than 2.5 μm (PM 2.5 ) poses a major environmental health risk around the world, especially in low-and middle-income countries. 1,2 Its chemical composition includes organic matter, black carbon, sulfate (SO 4 2− ), nitrate (NO 3 − ), ammonium (NH 4 + ), chloride (Cl − ), mineral dust, and trace elements, among others. These components can be categorized into primary and secondary aerosols. The former refers to fine particles directly emitted from different pollution sources, and the latter refers to new particles formed from gaseous or particulate pollutants through photochemical and heterogeneous reactions. Secondary inorganic aerosols (SIA = SO 4 2− + NO 3 − + NH 4 + ) are closely associated with anthropogenic emissions from the energy, industrial, and agricultural sectors. 2−4 Cl − is an important component of sea-salt aerosols, while anthropogenic sources include coal/biomass combustion and industrial processes, influencing aerosol particle growth, atmospheric chemical reactions, PM 2.5 and ozone air quality, especially in developing countries. 5−9 For effective policy-making, monitoring changes in these inorganic components can better reflect changes in specific aerosol sources relative to the total PM 2.5 . 10,11 PM 2.5 composition has noticeable impacts on the ecological environment, ambient air quality, and Earth's climate. Acid rain formed by sulfuric and nitric acid particles via the oxidation of sulfur dioxide (SO 2 ) and nitrogen oxides (NO x ) affects plant growth. 12,13 The formation of SIA components is a main cause of severe haze pollution. 14,15 Sulfate aerosols with moderately long life cycles can cause significant local pollution and even affect global climate change through atmospheric transport and climate response. 16 Different PM 2.5 constituents impact human health in different ways. 17,18 Recent studies have suggested that carbonaceous aerosols from agricultural residue biomass burning and wildfires, 19,20 ultrafine particles from automobile exhaust, 21,22 and severe haze episodes caused by fine particles 23,24 have strong toxicities. Despite the general recognition of the strong health impacts of PM 2.5 , the possible effects of chemical composition on these health hazards are less clear, largely due to the lack of adequate monitoring in diverse environments.
China is an emerging country with rapid industrialization and economic development in recent decades, where PM 2.5 pollution (especially SIAs) has always been a major concern in urban air quality. 25 Many studies have investigated the sources and impacts of PM 2.5 composition. However, most of these studies have involved only a few individual observation stations or specific observation periods in megacities or urban agglomerations. 26−30 Such studies likely have limitations in spatial representation because they mostly reflect the atmospheric composition around a ground site or during short-term periods. Insights from atmospheric chemistry models, e.g., Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2), Goddard Earth Observing System (GEOS)-Chem, and Weather Research and Forecasting-Community Multiscale Air Quality (WRF-CMAQ), can provide regional-to-global perspectives of long-term variations in different PM 2.5 species but tend to be biased toward certain species given limitations in main inputs such as emission inventories. 3,31 Also, model simulations are computationally costly and generally have a coarse horizontal grid resolution of tens of kilometers, limiting their applications across urban-residential scales. PM 2.5 has been estimated from satellite remote sensing of aerosol optical depth (AOD), but little has been done with regard to aerosol composition. The long-term evolution of global PM 2.5 components was assessed by integrating satellite PM 2.5 retrievals, GEOS-Chem-simulated PM 2.5 composition information and coincident profiles, and surface PM 2.5 and composition observations. 10,32−34 PM 2.5 composition has also been estimated in other regions at moderate to high resolutions by incorporating surface measurements, such as in southern California at 4.4 km using a generalized additive model, 35 in the northeastern United States at 1 km combining a chemical transport model and geographically weighted regression model 11 or a land-use regression model, 36 and in North America at different resolutions with spatially smoothing models. 37,38 However, PM 2.5 composition may have strong spatial gradients due to localized sources and short lifetimes, leading to large estimation uncertainties, especially in regions with sparse observations. This has also been the case in China, where only a handful of studies relying strongly on model simulations have been done. 39,40 What are the concentrations of major species and their proportions of the total PM 2.5 mass in China, and what are their temporal changes and impacts on air quality? To address these outstanding questions, we have generated a long-term, daily, seamless PM 2.5 chemical composition product in China at a 1-km resolution by applying artificial intelligence to a large ensemble of data sets consisting of ground-based observations of aerosol composition, satellite-derived PM 2.5 at a 1-km resolution, 25 and various auxiliary data, i.e., meteorological reanalyses, pollution emission inventories, and model simulations. Different from previous studies, a stronger deep forest model, which takes advantage of multiple tree-based machine-learning models, was adopted, with extensions for multidimensional spatiotemporal heterogeneity, to construct robust nonlinear relationships between each PM 2.5 component and the total PM 2.5 mass concentration.
The applicability of this unique data set is also demonstrated in the analysis of atmospheric composition and changes during heavy haze episodes and the coronavirus pandemic.

MATERIALS AND METHODS

2.1.1. Ground-Based Measurements. A national network of ground stations measuring PM 2.5 chemical composition was used ( Figure S1). The network is relatively uniform, with dense clusters located in the three major economic zones (outlined in purple) centered around the major megacities in China, where emissions are strong and more localized. The urban and suburban/rural sites account for 61% and 39%, respectively. As such, their spatial representativeness is sound overall, similar to the national PM 2.5 observation network. 41 Quartz filter membranes were used to collect PM 2.5 samples, and daily SO 4 2− , NO 3 − , NH 4 + , and Cl − concentrations at each station from 2013 to 2020 were used for model training and validation. Besides, daily mean in situ total PM 2.5 concentrations, provided by the China National Environmental Monitoring Centre network, were collected. For spatial matching, ground measurements were assigned to the nearest 1-km 2 grid cell matching the individual site, and if two or more sites fell in the same cell, their values were averaged.
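For illustration, the grid-matching step described above can be sketched as follows; the data-frame column names and the 0.01° (~1 km) grid spacing are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import pandas as pd

def match_sites_to_grid(df, res=0.01):
    """Assign each monitoring site to its nearest ~1-km (0.01 deg) grid cell
    and average measurements when several sites fall in the same cell."""
    df = df.copy()
    # Snap site coordinates to grid-cell centers.
    df["glat"] = np.round(df["lat"] / res) * res
    df["glon"] = np.round(df["lon"] / res) * res
    # Average all observations sharing a cell on the same day.
    return (df.groupby(["date", "glat", "glon"], as_index=False)
              [["SO4", "NO3", "NH4", "Cl", "PM25"]].mean())
```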
2.1.2. Satellite PM 2.5 and Auxiliary Data. Our latest version (V4) of the long-term (2000 to present) daily seamless data set of ground-level PM 2.5 across China, i.e., the ChinaHighPM 2.5 data set from the ChinaHighAirPollutants (CHAP) database, was used in this study. It was generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) Multiangle Implementation of Atmospheric Correction (MAIAC) AOD product at a 1-km resolution, 44 together with ground-based measurements of surface PM 2.5 and ample auxiliary variables, using a space-time extra-trees model. 25 The data set is of high quality with an out-of-sample cross-validated coefficient of determination (CV-R 2 ) of 0.92, an average root-mean-square error (RMSE) of 10.76 μg/m 3 , and a mean absolute error (MAE) of 6.32 μg/m 3 , compared to surface observations. It has been widely used in studies concerning public health, the environment, and the economy; 45−49 thus, it is employed here as the primary constraint of total PM 2.5 mass for separating different chemical components. Other satellite remote-sensing products related to land surface cover, topography, and population density were also used, derived from MODIS vegetation index (1 km), Shuttle Radar Topography Mission (90 m), and LandScan (1 km) products, respectively.
2.1.3. Model and Reanalysis Data.
Model data employed were hourly and every 3 h surface mass concentrations of primary aerosol diagnostics (i.e., PM 2.5 inorganic components) derived from the MERRA-2 (∼0.625°× 0.5°) and GEOS forward processing (GEOS-FP) (∼0.3125°× 0.25°) models. We also used monthly precursors of aerosol particles indicating anthropogenic emissions and chemical reactions, including ammonia (NH 3 ), NO x , SO 2 , and volatile organic compounds (VOCs) from the Copernicus Atmosphere Monitoring Service (CAMS) monthly global emission inventories (∼0.1°× 0.1°). 50 Meteorological conditions affect the formation, transport, and removal of air pollutants, as well as PM 2.5 composition through particle hygroscopic growth and chemical reaction rates. Thus, hourly meteorological data were employed as model inputs, including temperature, precipitation, evaporation, winds near the surface (10 m) and in the middle troposphere (850 hPa), and surface pressure (∼0.1°× 0.1°) from the ERA5-Land reanalysis, 51 as well as boundary layer height and relative humidity (∼0.25°× 0.25°) from the ERA5 climate reanalysis. 52 All hourly and 3-hourly meteorological and chemical composition simulations were averaged or accumulated to obtain daily values. The finer-resolution input parameters were aggregated, while the coarser-resolution ones were resampled to the 1-km resolution (≈0.01°× 0.01°) using the bilinear interpolation approach. 25 Table S1 provides detailed information on all of the data used.
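As a minimal sketch of the resampling step, assuming regular ascending latitude/longitude input grids (the actual processing chain of Ref. 25 may differ):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def resample_bilinear(field, lats, lons, target_lats, target_lons):
    """Bilinearly resample a coarse (lat, lon) field onto the 1-km grid.
    `lats` and `lons` must be strictly ascending 1-D coordinate arrays."""
    interp = RegularGridInterpolator((lats, lons), field,
                                     bounds_error=False, fill_value=None)
    glat, glon = np.meshgrid(target_lats, target_lons, indexing="ij")
    pts = np.column_stack([glat.ravel(), glon.ravel()])
    return interp(pts).reshape(glat.shape)
```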
2.2. Methodology. 2.2.1. PM 2.5 Composition Modeling and Validation.
Relative to PM 2.5 , the sources and changes of its chemical composition are more complex. To improve the PM 2.5 -composition separation, adopted here is a more powerful deep-learning model with a stronger data-mining capability, i.e., deep forest. 53,54 It is similar in spirit to a deep neural network but is constructed from nondifferentiable decision trees instead of differentiable neurons: forest models are stacked in an ensemble with further optimization, which mitigates the overfitting that can occur in deep layers. The multi-grained scanning method is applied to generate input features via a sliding window. The deep forest framework is built in a cascade forest structure adopting two kinds of forests, i.e., random forests and completely random tree forests. Each layer of training is independent supervised learning, and the number of model layers can be adaptively determined through iterative validation. The multilayer training results are integrated using the Light Gradient Boosting Machine (LightGBM) model to determine the final output (a schematic sketch is given below). Compared with traditional deep learning based on neural networks, it trains faster and uses fewer hyperparameters that do not need much adjustment. Also, the complexity of the model can be automatically adjusted.
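The cascade idea can be illustrated with off-the-shelf tools. The sketch below is a schematic analogue, not the authors' implementation: the layer count, tree numbers, and the use of scikit-learn and LightGBM estimators are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import cross_val_predict
from lightgbm import LGBMRegressor

def cascade_forest_fit_predict(X, y, X_new, n_layers=3):
    """Schematic cascade forest: each layer appends out-of-fold predictions
    of a random forest and a completely-random (extra) trees forest to the
    feature matrix; a LightGBM model integrates the final features."""
    Xa, Xn = X.copy(), X_new.copy()
    for _ in range(n_layers):
        rf = RandomForestRegressor(n_estimators=200, n_jobs=-1)
        et = ExtraTreesRegressor(n_estimators=200, n_jobs=-1)
        # Out-of-fold predictions avoid leaking the training labels.
        p_rf = cross_val_predict(rf, Xa, y, cv=5)
        p_et = cross_val_predict(et, Xa, y, cv=5)
        rf.fit(Xa, y)
        et.fit(Xa, y)
        Xa = np.column_stack([Xa, p_rf, p_et])
        Xn = np.column_stack([Xn, rf.predict(Xn), et.predict(Xn)])
    gbm = LGBMRegressor(n_estimators=500).fit(Xa, y)
    return gbm.predict(Xn)
```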
Air pollutants are spatiotemporally heterogeneous, and accounting for this in models can enhance the model capability. 55,56 Here, we extend the deep forest by optimizing how spatiotemporal information enters the independent variables, i.e., using weighted effects referenced to polar coordinates compared to previous studies, 54,56,57 leading to a new model called the four-dimensional spatiotemporal deep forest (4D-STDF) model ( Figure S2). The unequal autocorrelation and separation of points in space (Ps) are expressed in Euclidean space with three spherical coordinates [S1, S2, S3] (eq 1). The temporal characteristics of points (Pt) are expressed by three helix-shape trigonometric vectors [T1, T2, T3] (eq 2) to include both seasonal cycles and daily variations of air pollution. 58
Ps = [S1, S2, S3] (1)
Pt = [T1, T2, T3] (2)
where Lon and Lat indicate the longitude and latitude of one point in space, respectively, and DOY and N represent the day of the year and the total number of days in a year, respectively. Ground-based measurements of PM 2.5 chemical components serve as the ground truth, and satellite-retrieved PM 2.5 is taken as the main input to the 4D-STDF model, together with all auxiliary factors, including MERRA-2 and GEOS-FP simulations of PM 2.5 components, CAMS emission inventories, ERA5 meteorological fields, three surface-related and population variables, and space-time terms, for training. Here, three widely used 10-fold cross-validation approaches, i.e., out-of-sample, out-of-station, and out-of-day procedures, performed by randomly withholding 10% of the data samples, monitors, and days, respectively, are used to generate independent training and validation samples and to characterize the model performance (overall accuracy, spatial prediction ability, and temporal prediction ability, respectively) in separating different PM 2.5 components. 17 The linear regression equation and coefficient of determination (R 2 ) are used to quantitatively evaluate the model accuracy, and RMSE and MAE are used to evaluate the model uncertainty.
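A hedged sketch of the space-time encoding of eqs 1 and 2 is given below. The exact functional forms used in the paper are not recoverable from the extracted text, so the spherical transform and the helix-shaped temporal terms here are one common choice and should be read as assumptions.

```python
import numpy as np

def spacetime_features(lat, lon, doy, n_days=365):
    """Encode station location on the sphere (Ps) and day-of-year (Pt)
    as smooth, periodic features; one common choice, possibly differing
    from the paper's exact eqs 1-2."""
    lat, lon = np.radians(lat), np.radians(lon)
    s1 = np.cos(lat) * np.cos(lon)
    s2 = np.cos(lat) * np.sin(lon)
    s3 = np.sin(lat)
    t1 = doy / n_days                       # linear term (helix axis)
    t2 = np.sin(2 * np.pi * doy / n_days)   # seasonal cycle
    t3 = np.cos(2 * np.pi * doy / n_days)
    return np.stack([s1, s2, s3, t1, t2, t3], axis=-1)
```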
2.2.2. Model Variable Importance. Our model is superior in physical interpretation to traditional black-box deep-learning models because it can quantitatively evaluate the contribution (importance) of each input variable in separating PM 2.5 composition. Being the sum of all species, PM 2.5 carries the bulk of the changes and contributes 36−45% of the importance for the three SIA components ( Figure S3), followed by meteorological conditions (importance score = 22−31%). The boundary layer height is particularly critical through its effect on the vertical distribution and mixing of air pollutants. Spatiotemporal information is important for modeling, accounting for 7−11%. Emission inventories and model simulations also have large impacts of 7−10% and 3−8%, respectively. The remaining factors are less important but still contribute >1%, so they are included in our model.
RESULTS AND DISCUSSION
Applying the 4D-STDF model to the aforementioned input data sets, we have created a daily, spatially complete (gapless) data set of PM 2.5 inorganic composition over China at a 1-km resolution from 2013 to 2020 and extended the model to reconstruct historical records dating back to 2000, called ChinaHighPMC, one of a series of ChinaHighAirPollutants (CHAP) data sets. Figure S4 shows the annual maps of the four inorganic components, and our estimates agree well with ground-based observations of SO 4 2− , NO 3 − , NH 4 + , and Cl − , respectively (Figure 1). A few stations in northwest and central China had large estimation uncertainties. In general, the spatial pattern of SO 4 2− reflects the dominant sources of SO 2 , with coal combustion being a major source. Enhanced NO 3 − mainly occurs in cities and industrial centers, with large contributions from traffic and industrial emissions of NO x . 59−61 NH 4 + resembles the spatial pattern of the above two species but with a lower concentration because it mainly comes from agricultural emissions of NH 3 that neutralize SO 4 2− and NO 3 − . 61,62 By contrast, Cl − concentrations are usually less than 2 μg/m 3 in eastern China, 3−5 times lower in population-weighted mean content than the other three inorganic components (Table S2). High values are mainly localized to heavily industrialized zones, such as Beijing−Tianjin−Hebei (BTH), and coastal areas, such as the Bohai Rim. These areas have abundant coarse-mode particles, e.g., sea salt, and fine particles produced by the combustion of fossil fuels like coal and by biomass burning. 9 Similar to PM 2.5 , strong seasonal variations are revealed in our estimates (Figure 3). All inorganic components are at their highest levels in winter, especially in northern China, where coal burning for heating is the primary source, 63 compounded by the low boundary layer height. 64,65 By contrast, population-weighted mean concentrations are 1.4−3.3 times lower in summer than in winter in eastern China, especially NO 3 − in the Pearl River Delta (PRD, 4 times lower) (Table S2), mainly due to evaporative loss under high-temperature conditions. 66 There are also significant north−south differences due to different meteorological conditions, e.g., more abundant precipitation in the south promoting the wet removal of particulates. In terms of contributions to total PM 2.5 (Figures 2 and 3), SO 4 2− accounts for 20.5% in eastern China, with higher values in the south than in the north, e.g., the PRD fraction is 1.4 times that of BTH. The SO 4 2− contribution reaches a maximum of 27.6% in summer (Table S3), especially in Shanxi Province, where high amounts of SO 2 are emitted from coal power plants. 41 The high temperature and stronger radiation also significantly enhance the chemical conversion from SO 2 to SO 4 2− . 10,39,69 NO 3 − accounts for 19.8% of the total PM 2.5 but has seasonal changes opposite to those of SO 4 2− , with the lowest value in summer (15%) and higher values in cold seasons in eastern China (Figure 3). This is explained by lower temperatures and more NH 3 available after neutralizing sulfates, which favor nitrate aerosol partitioning. 61,70 The annual NH 4 + fraction is 13.9% in eastern China, showing a weaker seasonal contrast, with a somewhat higher value of ∼14% in summer, presumably due to higher NH 3 emissions from agricultural sources. 62,71 NH 4 + resides mostly in the form of ammonium sulfate in summer but in ammonium nitrate in winter. 61 Annual and seasonal Cl − -to-PM 2.5 ratios are much smaller (average = 3−4.6%) than those of the other three inorganic species.
In winter, the Cl − contribution is higher in the vast northern and western regions and major urban areas due to the saline-alkali soils with dry meteorological conditions and large emissions from anthropogenic sources like coal combustion and residential biomass burning. 7−9 In general, the four main inorganic aerosols account for 58.1% of the total PM 2.5 in eastern China, of which the fraction of SIA is more than half (54.2%), reaching a maximum of 56.6% in summer and a minimum of 49.4% in winter. This dominant presence of SIAs underscores the need for persistent regulation of emissions of the relevant precursor gases (i.e., SO 2 , NO x , and NH 3 ).
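The population-weighted regional means quoted throughout (e.g., Table S2) can be computed as in the following generic sketch, assuming collocated concentration and population grids:

```python
import numpy as np

def population_weighted_mean(conc, pop):
    """Population-weighted mean concentration over a region; cells with
    missing data are excluded from both the numerator and denominator."""
    m = np.isfinite(conc) & np.isfinite(pop)
    return np.sum(conc[m] * pop[m]) / np.sum(pop[m])
```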
Temporal Variation and Trend.
Steady declines are seen in the concentrations of the four inorganic components from 2013 to 2020 in eastern China at annual rates of −0.63, −0.5, −0.34, and −0.11 μg/m 3 /year for SO 4 2− , NO 3 − , NH 4 + , and Cl − (p < 0.001), respectively, especially in BTH (−0.19 to −1.06 μg/m 3 /year, p < 0.001) (Figure 4). This is highly consistent with the significant decline in total PM 2.5 (−2.78 to −5.44 μg/m 3 /year, p < 0.001), attributed to substantial reductions in anthropogenic emissions benefiting from the implementation of new national environmental protection policies. 25,72 While the SO 4 2− contribution decreased sharply in Beijing, Tianjin, and surrounding areas, it increased rapidly in the south. The former decrease may have stemmed from flue gas desulfurization of coal-fired boilers and the shift from coal combustion to gas and electricity during the heating season in northern China. 73 The proportion of NO 3 − also greatly increased in eastern China (0.36%/year, p < 0.001), especially since 2018 in BTH (0.81%/year, p < 0.001), highlighting the importance of NH 3 and NO x controls for preventing future PM 2.5 pollution. 76 The NH 4 + -to-PM 2.5 ratio increased at a rate of 0.26% per year (p < 0.001) in eastern China as a result of the combined effects of changes in SO 4 2− and NO 3 − , and this increase was higher in the south where the fractions of both anions increased. The Cl − contribution overall did not change much over time. The increase in the inorganic-to-PM 2.5 ratio may reflect the faster decline in other components (−1.54 to −4.10 μg/m 3 /year, p < 0.001), especially the significant reduction in primary PM 2.5 emissions (mostly organic carbon) following nationwide regulations after 2013. 59,60,72,77 Increased oxidation rates due to rising surface ozone levels could also speed up the formation of SIAs. 17,78 In addition, this may be partly attributed to reduced dust because coarse-mode aerosol particles (PM 10 , PM 2.5−10 ) were observed to have declined considerably. 79,80 In general, annual population-weighted mean concentrations of the inorganic chemical components dropped by 40−43% in eastern China from 2013 to 2020, with the largest declines in SO 4 2− in BTH (by 54%) and NO 3 − in the PRD (by 51%). Seasonally, SO 4 2− and NH 4 + always decreased the most in summer in eastern China and the three key regions (by 40−58% and 40−53%, respectively), while NO 3 − dropped the most in autumn by 45−52% ( Figure S10). By contrast, the SIA contribution has been continuously increasing in eastern China during the last eight years (slope = 0.9%, R 2 = 0.95), with SO 4 2− still being the main secondary component in 2020 (population-weighted mean concentration = 6.9 μg/m 3 and SO 4 2− -to-PM 2.5 ratio = 21.7%), 8% higher in concentration and 1.6% higher in proportion than NO 3 − . Nevertheless, NO 3 − (slope = 0.36%, R 2 = 0.96) gradually approached SO 4 2− (slope = 0.29%, R 2 = 0.86). Great contrasting regional differences exist, however: SO 4 2− remained dominant in the PRD (slope = 0.43%, R 2 = 0.8), while NO 3 − continuously increased. In BTH, the SO 4 2− contribution has been declining (slope = −0.13%, R 2 = 0.28), while the NO 3 − contribution has been rapidly rising (slope = 0.54%, R 2 = 0.9), becoming the dominant component since 2016 ( Figure S11).
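The annual trends and p-values reported above can be reproduced from regional mean time series with ordinary least squares; a minimal sketch follows (the authors' exact procedure, e.g., any weighting or deseasonalization, may differ):

```python
import numpy as np
from scipy.stats import linregress

def annual_trend(years, values):
    """Least-squares trend (units of `values` per year) with its
    two-sided p-value, as used for the 2013-2020 component trends."""
    res = linregress(np.asarray(years, float), np.asarray(values, float))
    return res.slope, res.pvalue
```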
In terms of policy implementation periods ( Figure S12), the decline in PM 2.5 inorganic components was most dramatic during the Clean Air Action Plan (2013−2017) in eastern China, especially for SO 4 2− and NO 3 − (−0.72 and −0.60 μg/m 3 /year, p < 0.001) (Table S4), with SO 2 and NO x emissions falling by 59% and 21%, respectively. 60,81 The largest downward trends occurred in the North China Plain (NCP), consistent in spatial pattern with the changes in SO 2 and NO 2 concentrations at the surface. 41,82 However, during the Blue Sky Defense War (2018−2020), the downward trends slowed significantly. Areas with significant decreases shrank in size, mainly located in a handful of provinces (e.g., Beijing and Anhui) and urban agglomerations (e.g., PRD and YRD). In particular, SO 4 2− decreases occurred mainly in the core urban areas of central Shanxi province after 2018 due to the enforcement of the clean heating policy, 83−85 leading to a sharp reduction of 33% in surface SO 2 pollution. 41 Except for Cl − , increases in the SIA proportions have accelerated overall in recent years. However, this was not seen in densely populated large cities like Beijing and Guangzhou, where the SIA proportion remained relatively stable from 2018 to 2020.
To validate the data reliability, we further compared national and regional PM 2.5 components and their ratios to total PM 2.5 calculated from surface observations (Tables S2 and S3). They are highly consistent, with small differences at both annual and seasonal levels. However, certain differences from the satellite-retrieval averages exist and increase as the region expands (e.g., eastern China), caused by significant differences in spatial representation. 25 Also, considering the large difference in sampling time between satellite-derived and ground-based observations, for a fair comparison we calculated their temporal trends with collocated data at each site ( Figure S13). The results illustrate that our data set can accurately capture the variations of aerosol components (R 2 = 0.85) and reproduce well the changes in the proportions of PM 2.5 species (R 2 = 0.8).
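For reference, the accuracy metrics used in these comparisons can be computed as below; this is a generic sketch of R 2 , RMSE, and MAE, not code from the study:

```python
import numpy as np

def evaluation_metrics(obs, est):
    """R2, RMSE and MAE between collocated observations and estimates."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((obs - est) ** 2))
    mae = np.mean(np.abs(obs - est))
    return r2, rmse, mae
```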
Preliminary Investigation before 2013.
Using our newly developed model, we also reconstructed historical data records of PM 2.5 chemical composition before 2013 to fill gaps in surface observations. Figure S14 shows the multiyear mean and annual temporal trends of the four inorganic components during 2000−2012. Their spatial patterns are particularly similar to those of 2013−2020 in eastern China, but with 16%, 7%, 7%, and 13% higher population-weighted mean concentrations of SO 4 2− , NO 3 − , NH 4 + , and Cl − , respectively; in contrast to the later period, these four components show significant increasing trends at annual rates of 0.15, 0.11, 0.07, and 0.03 μg/m 3 /year (p < 0.001), respectively. Highly polluted conditions and greater trends were observed in the NCP, associated with the significant increase in total PM 2.5 , mainly due to more anthropogenic emissions of major pollutants caused by the rapid economic growth of the country. 25,39 Similar findings were also reported from previous model-simulation studies. 39,77 Nevertheless, given the absence of historical observations, an independent analysis is needed to validate the reliability of the data records estimated for this earlier period.
3.3.1. Haze Episode in the North China Plain. The unique advantage of the daily seamless data set enables us to capture short-term episodes of heavy PM 2.5 pollution and analyze their causes with the help of chemical composition and its changes. Figure 5 illustrates a typical example of a severe wintertime haze episode that occurred during December 16−22, 2016, in the NCP. Our satellite-derived results are highly consistent with ground-based observations in terms of spatial pattern and amplitude of change. In particular, we filled spatial gaps, providing a seamless insight into this pollution episode where observations of PM 2.5 components were scarce. Here, overall low concentrations of PM 2.5 and inorganic components at the initial stage were captured as moderate pollution gradually formed near the major cities of Tianjin, Baoding, and Shijiazhuang. Subsequently, atmospheric pollution increased substantially, spreading rapidly to surrounding areas and finally to the entire NCP. Pollution reached its peak around December 20. Heavy pollution broadly affected several provinces, including Beijing, Tianjin, Hebei, Henan, Shandong, and even Hunan in southern China. Extremely high SO 4 2− hotspots are seen in heavily industrial cities like Shijiazhuang and Zibo, while NO 3 − hotspots were observed in core hub cities like Beijing and Tianjin, mainly due to differences in the main secondary-pollutant emission sources (e.g., heavy industrial production and transportation) in local areas. After December 20, high concentrations of PM 2.5 and its components covered less area, finally dropping to background levels over the NCP.
A meta-analysis of multiple heavy haze episodes from 2013 to 2020 shows consistently high loadings of SO 4 2− and NO 3 − (population-weighted mean PM 2.5 = 155.6 μg/m 3 ) (Table S5). This suggests the important role of these two PM 2.5 components in regional pollution due to intensive anthropogenic emissions. 86 The continuous growth of SIA fractions (slope = 1.46%, R 2 = 0.87) became the main driver of winter haze in the NCP. In particular, NO 3 − (slope = 0.96%, R 2 = 0.72) became increasingly prominent over SO 4 2− (slope = 0.17%, R 2 = 0.18), with their rapid rises proving to be the key factors for the explosive growth of PM 2.5 pollution. 87,88

3.3.2. Impacts of the COVID-19 Lockdown. This data set allows us to quantify changes in PM 2.5 composition more accurately during dramatic short-term events at a fine scale and investigate their influential factors and mechanisms. Figure 6 compares changes in SIA components and proportions during the COVID-19 lockdown in eastern China, where the former significantly decreased by 16.8% (especially NO 3 − by 19.7%) and the latter increased by 3.5% (especially SO 4 2− by 7.3%) as a whole (Table S6). The spatial patterns are striking: drastic declines of more than 20% in all SIA components in central (e.g., Henan, Hubei, Shandong, Jiangsu, and Anhui) and southern (e.g., Guangdong) China, in contrast to significant increases of more than 40% in the north and northeast (e.g., Beijing, Tianjin, northeastern Hebei, and western Liaoning) and parts of the southeast and southwest. The main reason was strict domestic restrictions on industry and transportation that sharply reduced anthropogenic emissions, e.g., NO x and SO 2 , by 36% and 27%, respectively. 54,89,90 Adverse meteorological conditions, which offset or even reversed the effect of anthropogenic emissions on air quality, may explain the anomaly in the north. 91−94 By contrast, the ratios of SIA components to PM 2.5 increased across most of the mapping domain, especially in the north, with the exception of the PRD, where they decreased (Table S6). Regarding the NO 3 − fraction, opposite declines were observed in the areas worst hit by the epidemic, i.e., Hubei province (↓2.4%) and surrounding areas (e.g., Anhui), due to rapid reductions in NO x , CO, and VOC emissions. These led to a significant increase in surface ozone in NO x -saturated northern China, 95 greatly enhancing the atmospheric oxidation capacity and the formation of secondary PM 2.5 . For the NO x -controlled area in southern China, the impact was the opposite. 17,96,97

3.4. Comparison with Related Data Sets and Previous Studies. We first compared our data set with model simulations of PM 2.5 inorganic composition by collecting daily mean surface mass concentrations (kg m −3 ) of SO 4 2− , NO 3 − , NH 4 + , and Cl − from the GEOS-FP reanalysis and SO 4 2− and Cl − from the MERRA-2 reanalysis and validating them against ground-based measurements from 2013 to 2020 in China ( Figure S15). Besides their coarse spatial resolutions (0.25°−0.625°), the chemical transport models did a poorer job in simulating PM 2.5 inorganic components, seriously underestimating the concentrations with stronger deviations, especially for SO 4 2− and Cl − (e.g., R 2 ≤ 0.05, slope ≤ 0.18). By contrast, our estimates improved the spatial resolution drastically by 25−63 times, increased the correlations by 3−66 times, and reduced the RMSE (MAE) values by 36−61% (40−65%) compared to chemical-model simulations, benefiting from the integration of big data and deep learning.
Two previous estimates of SIAs over China were derived from chemical transport modeling data with composition-specific conversions based on satellite AOD or PM 2.5 estimates but showed poor agreement with in situ measurements (R 2 ≈ 0.38) for the SO 4 2− , NO 3 − , and NH 4 + components ( Figure S16). Compared to these studies, our new ChinaHighPMC data set has a ten times higher spatial resolution (1 km) and higher data quality (R 2 = 0.71−0.75 and RMSE = 4.2−6.7 μg/m 3 ) for the three SIA components; our data set also includes another inorganic component, i.e., Cl − . One reason for the better performance of our model is that it relies on a denser network of direct ground-based observations rather than depending more heavily on chemical-model conversions. Another reason is the stronger data-mining ability of our deep-learning model.
Limitations and Prospects.
Despite the encouraging results, limitations still exist. The in situ PM 2.5 composition network is much sparser than that of total PM 2.5 , resulting in insufficient spatial representation. Some input parameters simulated by chemical transport models also suffer from large biases in regions without observations. These undoubtedly bring significant uncertainties to our estimates. The current study only focuses on the inorganic composition of PM 2.5 ; future work will focus on the organic fraction as well as black carbon, which may have greater toxicity and environmental effects. More detailed satellite-based aerosol information, such as that conveyed in NASA's Multi-angle Imaging SpectroRadiometer (MISR) products (e.g., aerosol shape, size, and extinction), will be explored to improve the estimates in the future. | 2023-04-29T06:18:12.542Z | 2023-04-28T00:00:00.000 | {
"year": 2023,
"sha1": "e48f7ae5e705a61c6ceadbef0e0e7950841f6ce9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dfe99cf9588f78fa828083c7885251a16bd59049",
"s2fieldsofstudy": [
"Chemistry",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55510750 | pes2o/s2orc | v3-fos-license | Nonadiabatic Van der Pol oscillations in molecular transport
The force exerted by the electrons on the nuclei of a current-carrying molecular junction can be manipulated to engineer nanoscale mechanical systems. In the adiabatic regime a peculiarity of these forces is negative friction, responsible for Van der Pol oscillations of the nuclear coordinates. In this work we study the robustness of the Van der Pol oscillations against high-frequency bias and gate voltages. For this purpose we go beyond the adiabatic approximation and perform full Ehrenfest dynamics simulations. The numerical scheme implements a mixed quantum-classical algorithm for open systems and is capable of dealing with arbitrary time-dependent driving fields. We find that the Van der Pol oscillations are extremely stable. The nonadiabatic electron dynamics distorts the trajectory in the momentum-coordinate phase space but preserves the limit cycles in an average sense. We further show that high-frequency fields change both the oscillation amplitudes and the average nuclear positions. By switching the fields off at different times one obtains cycles of different amplitudes which attain the limit cycle only after considerably long times.
I. INTRODUCTION
Research activity on the interaction between electrons and nuclei began more than a century ago and still continues to stimulate new ideas and pose challenging problems today. Some of the open issues in this field go back to the early studies by Peierls on one-dimensional lattice instabilities, 1 continuing with the works of Feynman, Fröhlich and Holstein on polarons, 2 the study of charge and heat conduction, 3 and arriving at present-day open questions about the role of phonons in superconductivity/magnetism for layered structures, 4 to mention a few significant examples. Modern research also covers more fundamental aspects. The electron-nuclei interaction (ENI) coupling is typically derived from the potential energy surfaces of the Born-Oppenheimer approximation. As the coupling relies on an approximation, there has been a significant effort in constructing a formally exact theory. Progress has been made in this context too. The Born-Oppenheimer ansatz for the electron-nuclear wave-function is exact in both the static 5 and time-dependent 6 case, and hence the potential energy surfaces constitute a very useful concept even in an exact treatment.
In the last few decades, a more quantitative approach to the understanding of the ENI became possible via computer simulations. For example ab-initio molecular dynamics, 7 with a mixed quantum-classical time evolution for electrons and nuclei, was used to study phenomena as different as lattice vibrations and melting, vacancy diffusion, gas-surface dynamics, etc. Since the advent of nanotechnology, the ENI problem has attracted considerable attention in open nanoscale systems out of equilibrium as well. 8,9 Assessing the nature of ENI and its dependence on the device support in these low-dimensional geometries is a key ingredient for controlling carrier decoherence and thermal dissipation, in other words, for engineering the ENI to increase device efficiency. 10 While the theoretical study of ENI for steady-state quantum transport has been the subject of large interest, 11-24 a real-time description of phenomena like, e.g., nuclear rearrangement, multi-stability, electromigration, etc., has received less attention (examples of work done in this less developed area are Refs. [25][26][27][28][29]). Recently, the discovery of the nonconservative nature of steady-state forces 30,31 has re-awakened the interest in time-dependent phenomena. Two additional types of forces, both linear in the velocity of the nuclear coordinates, have been proposed. One force stems from the friction induced by particle-hole excitations 32,33 and the other force is a Lorentz-like force in which the magnetic field is the curl of the Berry's vector potential of the Born-Oppenheimer approximation. 34 All these forces are contained in the Ehrenfest dynamics, which evolves the electrons quantum-mechanically in the classical field generated by the nuclei and, at the same time, the nuclear coordinates according to the classical Newton equation in which the forces are generated by the nuclei and the electrons. Assuming that the nuclear motion is slow on the electronic time-scale and that the electrons are fully relaxed in the instantaneous nuclear configuration, one can expand the electronic force in powers of the nuclear velocities (and their derivatives). The zeroth-order term corresponds to the nonconservative steady-state force whereas the first-order term corresponds to the sum of the friction force and the Lorentz-like force. 35,36 We refer to this approximate nuclear dynamics as the Adiabatic Ehrenfest Dynamics (AED). From the explicit expression of the AED forces, either in terms of scattering matrices 35 or nonequilibrium Green's functions, 36 one can show that (i) the steady-state force is nonconservative only provided that we are at finite bias and that the number of nuclear degrees of freedom is larger than one, (ii) at zero bias the friction force is always opposite to the nuclear velocity but it can change sign at finite bias (negative friction 32,37 ) and (iii) the Lorentz force vanishes if the number of nuclear degrees of freedom is one.
In this work we go beyond the AED by evolving both electrons and nuclei according to the full Ehrenfest dynamics (ED). The ED has so far been employed to study fast vibrational modes in DC regimes. 38,39 Here, instead, we break the adiabatic condition in a different way. We consider the physical situation of heavy nuclei (and hence slow vibrational modes) and drive the system out of equilibrium by high-frequency AC biases or gate voltages. In fact, our scheme can deal with arbitrary driving fields at the same computational cost and is not limited to the wide band limit approximation for the leads. Furthermore, although our scheme can also include several vibrational modes, in this first study we consider only one vibrational mode and focus on one specific issue, namely the negative friction force. The AED predicts the occurrence of limit cycles in the nuclear momentum-coordinate phase space. These cycles are similar to those of a van der Pol oscillator 36 and imply that a steady state is not reached. Is this prediction confirmed by the full ED? What are the qualitative and quantitative differences? How robust are the van der Pol oscillations against ultrafast driving fields? To anticipate our conclusions, we confirm the existence of limit cycles, even though the shape and, more importantly, the period of the oscillations are different from those of the AED. Our main finding, however, is that these cycles are remarkably stable against ultrafast driving fields for which the electrons are far from being relaxed, and hence the AED is not justified. In the next Section we discuss the ED and its adiabatic version. In Section III we introduce the model Hamiltonian with a single vibrational mode and present results on the time-dependent electron current, density and nuclear coordinate. Details on the numerical implementation can be found in Appendix A. Our conclusions and outlook are drawn in Section IV.
II. THEORETICAL FRAMEWORK
We consider a system consisting of a left (L) and right (R) metallic electrode coupled to a central (C) molecular junction. The whole system is initially in the ground state and then driven out of equilibrium by exposing the electrons to an external time-dependent bias V α (t) in lead α = L, R and possibly to some time-dependent gate voltage v C (t) in C. We describe the metallic regions L and R by the free-electron Hamiltonians
Ĥ α = Σ k ε kα ĉ † kα ĉ kα , (1)
with α = L, R. In region C the electrons interact with the classical field generated by the nuclear degrees of freedom x = (x 1 , . . . , x N ):
Ĥ C (x) = Σ ij h ij (x) ĉ † i ĉ j , (2)
where the sum runs over the M one-electron states of C.
The nuclear Hamiltonian has the general form
Ĥ cl (p, x) = Σ ν p ν 2 /(2M ν ) + U cl (x), (3)
where p = (p 1 , . . . , p N ) is canonically conjugated to x and U cl (x) is the classical potential. Finally, the metallic electrodes are connected to C through the non-local tunneling operator
Ĥ T = Σ kα,i T kα,i ĉ † kα ĉ i + H.c. (4)
Thus, the full electron Hamiltonian reads
Ĥ el (x, t) = Σ α [Ĥ α + V α (t) N̂ α ] + Ĥ C (x) + v C (t) N̂ C + Ĥ T , (5)
with N̂ α and N̂ C the electron-number operators of the corresponding regions.
A. Ehrenfest dynamics
We are interested in calculating the time-dependent density, current, and nuclear coordinates. In the limit of heavy nuclear masses the nuclear wavefunction is sharply peaked around the classical nuclear coordinates. Then, an expansion around the classical nuclear trajectory leads to a Langevin-type (or stochastic) equation. 38 Ignoring the stochastic forces in this equation corresponds to implementing the ED. Denoting by |Ψ(t)⟩ the many-electron state at time t, the ED for electrons and nuclei is governed by the equations
i d/dt |Ψ(t)⟩ = Ĥ el (x(t), t) |Ψ(t)⟩, (6)
ẋ ν (t) = p ν (t)/M ν , (7)
ṗ ν (t) = −∂U cl (x)/∂x ν − Σ ij ρ ji (t) ∂h ij (x)/∂x ν , (8)
where in the last equation
ρ ij (t) = ⟨Ψ(t)| ĉ † j ĉ i |Ψ(t)⟩ (9)
is the time-dependent one-particle density matrix. Equations (6)−(8) are first-order differential equations in time.

To solve them we need to specify the boundary conditions. As the system is initially in equilibrium, |Ψ(0)⟩ = |Ψ g ⟩ is the electronic ground state, x(0) = x g are the ground-state coordinates and p(0) = 0 (we set t = 0 as the time at which the external bias or gate voltage is switched on). The coordinates x g can be calculated from the zero-force equation (see right hand side of Eq. (8))
0 = −∂U cl (x g )/∂x ν − Σ ij ρ g,ji ∂h ij (x g )/∂x ν . (10)
For t < 0 the Hamiltonian Ĥ el (x, t) is a time-independent free-electron Hamiltonian for any x, and hence its ground state is the Slater determinant formed by the occupied one-electron wavefunctions ψ s [x]. Consequently the ground-state density matrix reads
ρ g,ij = Σ s occ ψ s (i) ψ s *(j), (11)
where ψ s (i) = ψ s [x](i) is the amplitude of ψ s on the i-th one-electron state of C. Equations (10, 11) constitute a set of coupled equations for the unknowns x g and Ψ g . In Appendix A we describe a numerical procedure to solve these equations for one-dimensional electrodes.
To solve them we need to specify the boundary conditions. As the system is initially in equilibrium, |Ψ(0) = |Ψ g is the electronic ground state, x(0) = x g are the ground-state coordinates and p(0) = 0 (we set t = 0 as the time at which the external bias or gate voltage are switched on). The coordinates x g can be calculated from the zero-force equation (see right hand side of Eq. (8)) For t < 0 the HamiltonianĤ el (x, t) is a time-independent free-electron Hamiltonian for any x and hence its ground is the Slater determinant formed by the occupied one-electron wavefunctions . Consequently the ground-state density matrix reads where ψ s (i) = ψ s [x](i) is the amplitude of ψ s on the i-th one-electron state of C. Equations (10,11) constitute a set of coupled equation for the unknown x g and Ψ g . In Appendix A we describe a numerical procedure to solve these equations for one-dimensional electrodes. In order to solve the time-dependent and coupled equations (6-8) in practice we extract from Eq. (6) an equation for ρ ji (t). SinceĤ el is a free-electron Hamiltonian at all times we have where ψ s (i, t) is the time-evolved one-electron wavefunction which, by definition, fulfills with boundary condition ψ s (i, 0) = ψ s (i). This equation can be further manipulated to express the amplitudes ψ s (kα, t) in the electrodes in terms of the amplitudes ψ s (i, t < t) in C with times earlier than t. 40 We then obtain a close set of equations for x(t) and ρ ji (t). This wavefunction approach has been proposed in Ref. 27 and has the advantage of not being limited to wide-band leads and/or to DC biases. In Appendix A we provide some numerical details on the time-propagation algorithm.
An alternative, but equivalent, method to calculate ρ ji is the NonEquilibrium Green's Functions (NEGF) technique. 41 The Green's function is defined as the contour-ordered correlator
G ij (z, z ) = −i ⟨T γ [ĉ i (z) ĉ † j (z )]⟩, (14)
where γ is the Keldysh contour going from −∞ to ∞ and back to −∞, and z, z are contour variables. A contour variable can either be on the forward branch (−∞, ∞) or on the backward branch (∞, −∞) of γ. For any real time t we denote by z = t − the contour time on the forward branch and by z = t + the contour time on the backward branch. The lesser Green's function is defined according to
G < ij (t, t ) = G ij (t − , t + ) = i ⟨ĉ † j (t ) ĉ i (t)⟩, (15)
and hence
ρ ij (t) = −i G < ij (t, t). (16)
For any finite t, t the lesser Green's function can be written in matrix form as
G < (t, t ) = [G R · Σ < · G A ](t, t ), (17)
provided that no bound states are present in the spectrum of Ĥ el when t → ∞. 42 The retarded/advanced Green's functions can be calculated from the equation of motion
[i d/dt − h el (t)] G R (t, t ) − ∫ dt̄ Σ R (t, t̄) G R (t̄, t ) = δ(t − t ), (18)
with G A (t, t ) = [G R (t , t)] † . The lesser and retarded components of the embedding self-energy appear in Eqs. (17, 18). These quantities are completely determined by the parameters in Ĥ α and Ĥ T (Eqs. (19, 20)); in them, f(ω) is the zero-temperature Fermi function and μ is the chemical potential of the system in equilibrium. This set of equations provides an alternative way to implement the ED.
B. Adiabatic Ehrenfest dynamics
Let us now consider the case of slowly varying driving fields. As the nuclei are much heavier than the electrons, the electronic Green's functions G R/A (t, t ) and G < (t, t ) depend slowly on the center-of-mass time T = (t + t )/2. In the adiabatic limit G = G ss depends only on the time difference and equals the steady-state Green's function of a system with constant bias V α , constant gate voltage v C and steady-state coordinates x ss . The steady-state coordinates can be determined similarly to the equilibrium case. In Eqs. (10, 11) we have to replace Ψ g by Ψ ss , where Ψ ss is the steady-state Slater determinant formed by all right-going scattering states with energy below μ + V L and all left-going scattering states with energy below μ + V R . Alternatively, we can calculate x ss using NEGF. From Eq. (16) the steady-state one-particle density matrix is
ρ ss = −i ∫ (dω/2π) G < ss (ω), (21)
and from Eq. (17)
G < ss (ω) = G R ss (ω) Σ < (ω) G A ss (ω). (22)
At the steady state the solution of Eq. (18) is simply
G R ss (ω) = [ω − h(x ss ) − Σ R (ω)] −1 = [G A ss (ω)] † , (23)
with, see Eq. (19), the embedding self-energy evaluated at the bias-shifted lead energies (Eq. (24)). Taking into account that the Fourier transform of the lesser self-energy is
Σ < ij (ω) = i Σ α f(ω − V α − μ) Γ α,ij (ω), (25)
we can write ρ ss = ρ ss (x ss ) in terms of x ss and then determine x ss from the solution of the zero-force equation
0 = −∂U cl (x ss )/∂x ν − Σ ij ρ ss,ji ∂h ij (x ss )/∂x ν . (26)
For slowly varying fields it is therefore convenient to change variables and express the Green's functions in terms of T = (t + t )/2 and τ = t − t . If we Fourier transform the lesser Green's function with respect to the relative time,
G < (T, ω) = ∫ dτ e iωτ G < (T + τ/2, T − τ/2), (27)
then we can rewrite Eq. (16) as
ρ(T) = −i ∫ (dω/2π) G < (T, ω). (28)
To first order in the nuclear velocities, Bode et al. 36 have shown that G < (T, ω) can be written as the steady-state G < ss (ω) plus a correction linear in the velocities ẋ ν (Eq. (29)), where the matrix Λ ν = Λ ν (x(t)) ≡ ∂h(x(t))/∂x ν and all steady-state Green's functions, see Eqs. (22, 23), are calculated in x ss = x(t). Substitution of Eq. (29) into Eq. (28) and the subsequent substitution of ρ into Eq. (8) allows us to decouple the electron and nuclear dynamics: the nuclear coordinates obey
M ν ẍ ν = F cl,ν + F ss,ν + F fric,ν + F L,ν , (30)
where F cl,ν = −∂U cl /∂x ν is the classical force, F ss,ν = −Tr[Λ ν ρ ss ] is the nonconservative steady-state force, F fric,ν = −Σ μ γ (+) νμ ẋ μ is the friction force and F L,ν = −Σ μ γ (−) νμ ẋ μ is the Lorentz-like force. In the last two equations γ (±) νμ denote the symmetric (+) and antisymmetric (−) parts of the kernel multiplying the nuclear velocities (Eq. (35)). All these forces are well-defined functions of x, and therefore we can evolve the nuclear coordinates in time without evolving the electronic wavefunctions. This is the adiabatic version of the ED and relies on the fact that for any t the electronic wavefunctions are steady-state wavefunctions (right- and left-going scattering states) of the Hamiltonian Ĥ el (x(t), t). The AED is no longer justified if the system is perturbed by driving fields varying on a time scale much smaller than the nuclear time scale.
For V L = V R one can show that the curl ∂F ss,ν /∂x μ − ∂F ss,μ /∂x ν = 0 and hence that the steady-state force is conservative. Instead, for V L ≠ V R , i.e., when current flows through the molecular junction, this property is not guaranteed. 30,36 Of course, in the presence of only one degree of freedom, x = x, the steady-state force is, by definition, conservative and we can define the total potential
U tot (x) = U cl (x) − ∫ x dx F ss (x ). (36)
The minima of this potential correspond to stable nuclear coordinates in the current-carrying system. We now consider the friction matrix γ (+) νμ . If we define the spectral function A(ω) = i(G R ss (ω) − G A ss (ω)), the friction matrix can be expressed in terms of A(ω) and G < ss (ω) (Eq. (37)). When V L = V R = V the system is in equilibrium at chemical potential μ + V. Then, from the fluctuation-dissipation theorem, −iG < ss (ω) = f(ω − μ − V) A(ω), and hence Eq. (37) reduces to a manifestly non-negative quadratic form (Eq. (38)). The friction matrix is therefore positive definite. This implies that with no current the friction force is opposite to the nuclear velocities and its effect is to damp the nuclear oscillations around a stable position. Again, this property can be violated in the current-carrying system; see Refs. 33, 36, and 38, as well as the next Section. Finally, we observe that the Lorentz-like force vanishes for only one nuclear degree of freedom. In the next Section we analyze this case and study the interplay between F ss and F fric in a current-carrying system. This will be done both in terms of AED and full ED simulations, to illustrate how the adiabatic picture changes under ultrafast driving fields.
III. NUMERICAL RESULTS
We consider the same model molecular junction as in Refs. 36 and 38 describing, e.g., a polar diatomic molecule and a stretching vibrational mode. We assign one single-particle basis function to each atom and model the molecule with the 2 × 2 Hamiltonian
h(x) = ( λx  T C ; T C  −λx ), (39)
and the coordinate x moves in the classical harmonic potential
U cl (x) = (1/2) M Ω 2 x 2 . (40)
The junction is coupled through molecule 1 to the left lead and through molecule 2 to the right lead, see Fig. 1. We choose the leads as one-dimensional tight-binding metals with nearest-neighbor hopping |T lead | ≫ |T C | and zero onsite energy. Thus ε kα = ε k = 2T lead cos(k) with k ∈ (0, π). The tunneling amplitude from molecule 1 (2) to the left (right) lead is denoted by T T . If we measure all energies in units of λ 2 /(M Ω 2 ) then the Hamiltonian of region C for electrons and nuclei reads
Ĥ C = T C (ĉ † 1 ĉ 2 + ĉ † 2 ĉ 1 ) + x̄ (n̂ 1 − n̂ 2 ) + (1/2)(x̄ 2 + p̄ 2 ), (41)
where x̄ is a dimensionless coordinate and n̂ i ≡ ĉ † i ĉ i is the electron occupation operator on molecule i = 1, 2. We consider the following equilibrium parameters: T lead = −10, T T = −√3, T C = −0.7 and Ω = 0.1.
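A minimal sketch of the molecular Hamiltonian h(x̄) in these dimensionless units is given below. The sign convention of the electron-nuclear coupling and the placement of the gate on both sites are assumptions consistent with the zero-force relation x̄ ss = −(ρ ss,11 − ρ ss,22 ) quoted in the next subsection.

```python
import numpy as np

def h_molecule(x, t_c=-0.7, v_c=0.0):
    """2x2 electronic Hamiltonian of the diatomic junction (dimensionless
    units): gate v_C on both sites, coupling +/- x on the two sites, and
    intramolecular hopping T_C. Sign conventions are assumed."""
    return np.array([[v_c + x, t_c],
                     [t_c, v_c - x]])
```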
A. AED analysis
We calculate the total potential U tot and the friction coefficient γ (+) (in this model the friction matrix is a scalar) in and out of equilibrium. For the steady-state values of bias and gate voltage we take v C = 0.2 and V L = −V R = 1. Since |T lead | ≫ |T C | we evaluate the embedding self-energy in the Wide Band Limit Approximation (WBLA). The WBLA corresponds to taking the limit T lead , T T → ∞ in such a way that 2T T 2 /T lead = Γ is a finite constant (with our parameters Γ = 0.6). Then Σ R (ω) = −i(Γ/2) 𝟙 2 , with 𝟙 2 the 2 × 2 identity matrix, is independent of frequency. In Fig. 2 we display the total potential as defined in Eq. (36). In equilibrium U tot (x̄) exhibits a shallow double minimum. The positions of the minima correspond to stable nuclear coordinates. The minima are symmetric around x̄ = 0, consistent with the reflection symmetry of the Hamiltonian. In the presence of a bias this reflection symmetry breaks and the current-carrying system has only one stable coordinate, x̄ ss ≈ 0.088. The nonequilibrium minimum is much deeper than the equilibrium ones and occurs at a positive x̄ ss . From the zero-force equation x̄ ss = −(ρ ss,11 − ρ ss,22 ) we infer that the occupation on molecule 2 is larger than on molecule 1.
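Within the WBLA the steady-state Green's functions that enter U tot and γ (+) reduce to simple matrix algebra. A sketch, assuming the frequency-independent Σ R (ω) = −i(Γ/2)𝟙 2 given above:

```python
import numpy as np

def retarded_gf_wbla(omega, h, gamma=0.6):
    """Steady-state retarded Green's function in the wide-band limit,
    G^R(w) = [w - h + i*Gamma/2]^{-1}, and the spectral function
    A(w) = i (G^R - G^A)."""
    n = h.shape[0]
    gr = np.linalg.inv((omega + 0.5j * gamma) * np.eye(n) - h)
    a = 1j * (gr - gr.conj().T)  # G^A = (G^R)^dagger
    return gr, a
```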
Next we calculate the friction coefficient γ (+) . The results are displayed in Fig. 3. As expected, in equilibrium the friction is positive for all values of x̄. Instead, the nonequilibrium friction turns negative in a narrow window of positive x̄ values. Interestingly, the steady-state coordinate x̄ ss belongs to this window. This means that if we perform AED simulations there is no guarantee that a steady state is reached. For the model that we are considering the AED equations reduce to
d 2 x̄/dt̄ 2 = −x̄ − [ρ ss,11 (x̄) − ρ ss,22 (x̄)] − γ̄ (+) (x̄) dx̄/dt̄, (43)
where t̄ = Ωt and γ̄ (+) is the dimensionless friction coefficient. This equation has the same structure as that of a van der Pol oscillator, ÿ = −y − γ(y 2 − 1)ẏ, for which the function multiplying ẏ is negative (negative friction) when y is in the range (−1, 1) where the stable solution y = 0 lies. As a consequence of this fact one finds a limit cycle in the momentum-coordinate phase space. In Fig. 4 we show the solution of Eq. (43) in the p̄−x̄ plane (with p̄ = dx̄/dt̄) for a situation in which the system has initially a nuclear coordinate x̄(0) = 0.5 and evolves without any bias or gate voltage (top panel), and for a situation in which the system has initially a nuclear coordinate x̄(0) = 0.04 and evolves in the presence of a bias V L = −V R = 1 and gate voltage v C = 0.2 (bottom panel). For comparison we also illustrate the periodic trajectories corresponding to the solution of Eq. (43) with γ (+) = 0. The main difference between the two panels is that in the current-carrying system the nuclear oscillations are not damped. Due to the negative friction the trajectory expands outward until reaching a limit cycle. We have checked numerically (not shown) that starting from different x̄ the trajectory always tends to the same limit cycle. [Fig. 4, bottom panel caption: The system evolves with a bias V L = −V R = 1 and gate voltage v C = 0.2 starting from an initial nuclear coordinate x̄(0) = 0.04. Without friction we observe a periodic trajectory. Instead, with friction the trajectory expands outward until reaching a limit cycle. This is a consequence of the negative friction, see Fig. 3.]
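The limit-cycle behavior can be reproduced with the textbook van der Pol oscillator quoted above, which shares the structure of Eq. (43); the sketch below integrates it numerically (γ = 0.1 is an illustrative value, not a parameter of the junction model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Textbook van der Pol oscillator: negative friction for |y| < 1 drives
# any small initial orbit outward onto the limit cycle.
def vdp(t, z, gamma=0.1):
    y, p = z
    return [p, -y - gamma * (y**2 - 1.0) * p]

sol = solve_ivp(vdp, (0.0, 400.0), [0.04, 0.0], max_step=0.05)
# sol.y[0] (coordinate) vs sol.y[1] (momentum) traces the outward spiral
# onto the limit cycle, analogous to the bottom panel of Fig. 4.
```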
B. Ehrenfest dynamics simulations
We now perform full ED simulations and, instead of studying the evolution of the system when the initial coordinate is arbitrarily chosen by us, we take the system initially in equilibrium and then drive it away from equilibrium using external time-dependent biases and/or gate voltages. In Fig. 5 we suddenly switch on a bias V L = −V R = 1 and a gate voltage v C = 0.2. These are the same parameters as in the previous Section. Panels a) and b) show the time-dependent current between atoms 1 and 2 and the atomic occupations, respectively. After a fast transient (see insets) during which the electrons are not relaxed, these quantities start to oscillate on a nuclear time scale. Despite the DC bias, no steady state is reached. In panel c) we compare the ED with the AED for the x̄ coordinate. In both cases we observe persistent oscillations of similar amplitude. However, the period of these oscillations is different and the curves go out of phase after a few periods. Also the shape of the oscillations is slightly different. In panels d) and e) we put side by side the ED and AED trajectories in phase space. The AED reaches the limit cycle much faster than the ED. Apart from these quantitative differences the AED remains a good approximation since during the electronic transient the x̄ coordinate moves very little. Thus at times t ∼ 20/Ω the electrons are essentially relaxed in the initial x̄-coordinate.
New physical scenarios may emerge if the electrons are kept away from their relaxed state. In this context, a central question is: Do the van der Pol oscillations disappear or get distorted? To address the issue, we consider two different time-dependent protocols. As a first protocol, we superimpose on the original DC bias a high-frequency AC component. In order to isolate the effects of the ultrafast AC component, we switch on the DC bias smoothly.
The bias is ramped as V L (t) = −V R (t) = sin 2 (πt/(2 × 250)) for t < 250, whereas for the gate voltage v C (t) = v C sin 2 (πt/(2 × 250)) for t < 250; both are held at their DC values afterwards (time in units of Ω −1 ), and a high-frequency AC component of frequency ω is then superimposed on the bias. The resulting current (panel a), occupations (panel b) and nuclear coordinate (panel c) are shown in Fig. 6. In this figure we also show results (curve "AC pulse") of simulations in which the superimposed AC bias is switched off after a time t = 700 + 2π × 10. Remarkably, the van der Pol oscillations persist in this highly nonadiabatic regime. A glance at the nuclear coordinate (panel c) would suggest that the AC bias is only responsible for increasing the amplitude of the oscillations. This is, however, not the case. The trajectory in phase space (panel d) reveals that the nuclear coordinate feels the nonadiabatic electron dynamics. In fact, we observe cycles with superimposed oscillations of the same frequency ω as the AC bias. Interestingly, an AC pulse can be used to manipulate the radius of the cycles. In the "AC pulse" curve of panel d) the inner cycle sets in before the pulse while the outer cycle sets in after the pulse. The ED is nonperturbative in the velocities and their derivatives, and we are not aware of any mathematical results on the uniqueness of the limit cycle. We therefore addressed this issue numerically. A close inspection of panel d) shows that the inner cycle is moving outward whereas the outer cycle is moving inward, thus suggesting the uniqueness of the limit cycle even within the ED. We performed several simulations with different switching-on protocols of the DC bias and found that cycles with radius larger (smaller) than a critical radius move inward (outward). On the basis of this numerical evidence we conclude that there exists only one limit cycle within the ED. In contrast with the AED, however, the time to attain the limit cycle is considerably longer; hence cycles of different radius can, de facto, be considered as quasi-stable limit cycles for practical purposes. Similar conclusions are reached when the system is perturbed with a second protocol for a time-dependent perturbation, namely an ultrafast gate voltage. In Fig. 7 we study the response of the system to a train of square pulses in the molecular junction. After the same smooth switching-on of the DC bias and gate as in Fig. 6, we superimpose on v C = 0.2 the time-dependent gate voltage δv C (t) = v 0 Σ n S(t − t n ) (a train of square pulses), where t n = 600 + nΔ and S(t) = 1 if |t| < Δ/4 and zero otherwise (time is in units of Ω −1 ). The calculations are performed with Δ = 10 and v 0 = 0.1. In the figure the "wave" curves refer to simulations in which δv C (t) is never switched off, while the "pulse" curves refer to simulations in which δv C (t) is switched off after a time t = 695. The van der Pol oscillations are stable in both cases. An interesting common feature of Figs. 6 and 7 is that the radius of the quasi-stable limit cycle can be tuned by switching off the time-dependent fields at different times. However, if we also want to tune the center of the cycle then the time-dependent fields have to remain on. Both the "AC pulse" curve of Fig. 6 (panel d) and the "pulse" curve of Fig. 7 (panel e) exhibit two concentric cycles. Overall, these features suggest that much more complex nuclear trajectories are to be expected when the electron dynamics in a junction is nonadiabatic.
IV. CONCLUSIONS AND OUTLOOK
We have studied the robustness of nuclear van der Pol oscillations in molecular transport when the junction is subject to ultrafast driving fields. In this ultrafast regime the electrons have no time to relax and the adiabatic Ehrenfest dynamics (AED) is no longer justified. We therefore implemented the full Ehrenfest dynamics (ED) using a wavefunction approach. The numerical scheme can deal with arbitrary time-dependent perturbations at the same computational cost and is not limited to wide-band leads. We found that the van der Pol oscillations are extremely stable. In the DC case the AED results are in good qualitative agreement with the full ED simulations, as expected. However, the ED period of the oscillations as well as the damping time to attain the limit cycle are both longer than those obtained within the AED. In the presence of ultrafast fields the van der Pol oscillations are distorted by the nonadiabatic electron dynamics. In all cases we observed superimposed oscillations of the same frequency as the driving field. We showed that high-frequency biases or gate voltages can be used to tune the amplitude of the oscillations and to shift the average value of the nuclear coordinate. By switching the field off, the amplitude remains large for very long times while the average nuclear coordinate goes back to its original value rather fast. Thus, ultrafast fields can be used to set up quasi-stable limit cycles of desired amplitude. Our numerical evidence suggests that every quasi-stable limit cycle eventually attains a unique limit cycle. We are not aware of any rigorous mathematical proof of this fact.
In this first work we focused on one aspect of current-induced forces, namely the negative friction. In order to observe the nonconservative nature of the steady-state force or the Lorentz-like force one has to consider at least two vibrational modes. The research on current-induced forces is still in its infancy, and interesting applications like nanomotors have started to appear in the literature. 30,43−45 Our results here show that ultrafast fields constitute another knob to tweak nanomechanical engines and that the theoretical scheme we proposed offers a tool to carry out investigations along these lines.
Appendix A: Numerical implementation

We consider an arbitrary central region of dimension M described by the one-particle matrix h. We choose a basis set {|φ i ⟩, i = 1, . . . , M} such that an electron can hop to the left only through the state |φ 1 ⟩ and to the right only through the state |φ M ⟩. (Here and in the following we use the Greek letter φ for states strictly localized in region L/R or C and ψ for states of the entire system S = L+C+R.) Let T L , T R be the corresponding hopping parameters (in our model system T L = T R = T T ). The electrodes are described by semi-infinite one-dimensional tight-binding models with nearest-neighbor hopping parameter T lead = T (the same for left and right). The one-particle eigenstates of system S can be classified according to their energies. The isolated left and right electrodes have a continuous energy spectrum between −2|T| and 2|T|. Therefore, one-particle eigenstates with energy < −2|T| are bound states with exponential tails in L and R. On the other hand, one-particle eigenstates with energy in the band (−2|T|, 2|T|) are extended states delocalized all over the system.
Below we compute the degenerate extended states ψ (a) q , a = 1, 2, and the bound states ψ b in C. We also describe a damped ground-state dynamics for the self-consistent solution of Eqs. (10, 11). Finally we present an efficient algorithm for the time propagation.
Extended states
Delocalized states are twice degenerate and we denote by ψ (1) q and ψ (2) q the two eigenfunctions with eigenenergy ε q = 2T cos(q), q ∈ (0, π). The eigenvalue equation in region C relates the amplitudes ψ q (i), i = 1, . . . , M, to the amplitudes ψ q (L) and ψ q (R) on the first sites of the left and right electrode (Eq. (A1)). We diagonalize h and find eigenstates |λ μ ⟩ with eigenenergies ε μ (the index μ runs between 1 and M). In terms of |λ μ ⟩ and ε μ , Eq. (A1) can be rewritten as a linear system for the amplitudes in C (Eq. (A2)) with i = 1, . . . , M. This equation allows us to obtain the amplitude of extended states in C provided that ε q ≠ ε μ , ∀μ. Indeed, we can exploit the degeneracy of ε q and choose the electrode amplitudes as we please; the two choices must, of course, yield two independent eigenvectors ψ (1) q and ψ (2) q . Having the projection of |ψ (a) q ⟩ onto region C, we can match it to the analytic form in the leads. We first use the Schrödinger equation to calculate ψ (a) q (k) on sites k = L + 1, L + 2 (second and third sites of electrode L) and k = R + 1, R + 2 (second and third sites of electrode R). Then we use these amplitudes to fix the plane-wave form of ψ (a) q in the electrodes. The two degenerate eigenfunctions ψ (1) q and ψ (2) q are independent by construction but not orthonormal. In order to orthonormalize them we need the overlap matrix N ab = ⟨ψ (a) q |ψ (b) q ⟩.
Bound states
Without loss of generality we choose the hopping parameter T < 0 in the left and right electrodes. Let |ψ b ⟩ be a possible bound state of energy ε b < −2|T|. As for the extended states, the wavefunction in region C is completely determined by the amplitudes ψ b (α), α = L, R, on the first site of electrode α, see Eq. (A1). The amplitudes ψ b (L) and ψ b (R) can be expressed in terms of ψ b (1) = ⟨φ 1 |ψ b ⟩ and ψ b (M) = ⟨φ M |ψ b ⟩, respectively. We have
ψ b (L) = T L g(ε b ) ψ b (1), ψ b (R) = T R g(ε b ) ψ b (M), (A5)
where g(ω) is the retarded Green's function of a semi-infinite chain. For ω < −2|T| it reads
g(ω) = [ω + √(ω 2 − 4T 2 )]/(2T 2 ). (A6)
Using Eqs. (A5), the Schrödinger equation in region C reads
[ω 𝟙 − h − Σ R (ω)] ψ b = 0, (A7)
where the embedding self-energy Σ R (ω) is an M × M matrix having only two non-vanishing matrix elements: the (1, 1) element, which is equal to T 2 L g(ω), and the (M, M) element, which is equal to T 2 R g(ω). Bound-state energies ε b are given by the solutions of
det[ω 𝟙 − h − Σ R (ω)] = 0 (A8)
with ω < −2|T|. The corresponding bound state in C can be calculated from Eq. (A7).
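A sketch of the bound-state search is given below, using the decaying branch of the surface Green's function derived above; the grid-scan bracketing of the secular equation is an illustrative choice, not necessarily the authors' root finder.

```python
import numpy as np
from scipy.optimize import brentq

def g_surface(omega, t_lead=-10.0):
    """Decaying branch of the semi-infinite-chain Green's function for
    omega < -2|T|, obtained from the recursion g = 1/(omega - T^2 g)."""
    return (omega + np.sqrt(omega**2 - 4 * t_lead**2)) / (2 * t_lead**2)

def bound_state_energies(h, t_l, t_r, t_lead=-10.0):
    """Real roots of det[w - h - Sigma^R(w)] = 0 below the band edge."""
    m = h.shape[0]
    def det(w):
        sig = np.zeros((m, m))
        sig[0, 0] = t_l**2 * g_surface(w, t_lead)
        sig[-1, -1] = t_r**2 * g_surface(w, t_lead)
        return np.linalg.det(w * np.eye(m) - h - sig)
    # Scan a grid below -2|T| and bracket sign changes of the determinant.
    ws = np.linspace(-2 * abs(t_lead) - 20, -2 * abs(t_lead) - 1e-6, 4000)
    vals = [det(w) for w in ws]
    return [brentq(det, ws[i], ws[i + 1])
            for i in range(len(ws) - 1) if vals[i] * vals[i + 1] < 0]
```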
In analogy with the procedure described in Section A.1, we extended the bound-state wavefunction up to the second and third sites of electrode $\alpha = L, R$, matched it to the analytic form in the leads, and calculated the amplitudes $A_{b,\alpha}$ and penetration lengths $\lambda_{b,\alpha}$. Knowing $|\psi_b\rangle$ in region C and in the leads, we can calculate and normalize the bound-state wavefunction.
Ground state
The parametric dependence of $\hat{H}_{\rm el}$ on the coordinates x renders every eigenstate a function of x. We use the notation $|\psi_q[x]\rangle$ for these eigenstates; in the ground state, all states with energy below the chemical potential $\mu = 2|T|\cos(q_F)$ are occupied. The ground-state value $x_g$ of the nuclear coordinates can be computed from the zero-force equation (10). In our practical implementation we constructed the one-particle density matrix of Eq. (11) and then evolved the coordinates according to a fictitious damped dynamics with friction coefficient $\gamma > 0$. Due to the multivalley nature of the potential, the damped dynamics might not converge to the lowest-energy solution. We therefore embedded C in finite rings of increasing length L, found the energy minimum $x_g(L)$, and used its extrapolated value $x_g(L \to \infty)$ as the initial condition for the fictitious dynamics. This concludes the description of the numerical algorithm used to find the ground-state configuration of $H_{\rm el}[x] + H_{\rm cl}[p, x]$. In the next section we present how to propagate the electronic wavefunctions and the nuclear coordinates when the system is disturbed by external driving fields.
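Before moving on, here is a minimal sketch of the fictitious damped relaxation just described. The text does not spell out the exact form of the damped equation, so the common choice $\ddot{x} = F(x) - \gamma\dot{x}$ with unit masses is assumed; `force` is a hypothetical callback returning the forces entering the zero-force equation.

```python
import numpy as np

def relax_to_ground_state(force, x0, gamma=0.5, dt=1e-2,
                          tol=1e-8, max_steps=200_000):
    """Assumed damped dynamics x'' = F(x) - gamma * x' (unit masses):
    integrate with friction until the force vanishes, i.e. until the
    zero-force condition for the ground-state coordinates is met."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(max_steps):
        F = force(x)
        if np.max(np.abs(F)) < tol:    # zero-force condition reached
            break
        a = F - gamma * v              # damped acceleration
        v += a * dt                    # semi-implicit Euler step
        x += v * dt
    return x
```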
Time evolution
The evolution is governed by Eqs. (6,7,8). It is convenient to rearrange the electronic Hamiltonian matrix $h_{\rm el}$ of the entire system S = L+C+R as $h_{\rm el}(x(t), t) = h_0(x(t), t) + v(t)$, where $h_0$ depends on time only through the central region, whereas $v(t)$ describes the perturbation due to the bias. For the time-propagation we discretize the time as $t_m = 2m\delta$ (where $\delta$ is a small time step and m is an integer). We first propagate all occupied wavefunctions from $t_m$ to $t_{m+1}$ using the algorithm of Ref. 40. Then we propagate coordinates and momenta from $t_m$ to $t_{m+2}$ using a Verlet-like algorithm. Finally, we complete the propagation of a full time step $\Delta = 4\delta$ by evolving the wavefunctions from $t_{m+1}$ to $t_{m+2}$. The overall scheme for the time-propagation is summarized below.
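A minimal sketch of one full step $\Delta = 4\delta$ of this interleaving follows. The Crank-Nicolson propagator is only a stand-in for the algorithm of Ref. 40, the ordering of the force evaluations in the Verlet update is an assumption, and `h_of_xt` and `force` are hypothetical callbacks.

```python
import numpy as np

def full_step(h_of_xt, force, psis, x, p, t, delta):
    """One step Delta = 4*delta: wavefunctions t_m -> t_{m+1} -> t_{m+2},
    nuclei t_m -> t_{m+2} with a Verlet-like update (unit masses)."""
    def cn(psis, x, t0, dt):
        # Crank-Nicolson stand-in: psi <- (1 + i dt h/2)^{-1} (1 - i dt h/2) psi
        h = h_of_xt(x, t0 + 0.5 * dt)              # midpoint Hamiltonian
        M = h.shape[0]
        A = np.eye(M) + 0.5j * dt * h
        B = np.eye(M) - 0.5j * dt * h
        return [np.linalg.solve(A, B @ psi) for psi in psis]

    psis = cn(psis, x, t, 2 * delta)               # t_m -> t_{m+1}
    D = 4 * delta
    F = force(psis, x, t)
    x_new = x + D * p + 0.5 * D**2 * F             # Verlet position update
    # Simplified: the end-point force is evaluated with the half-step psis
    p_new = p + 0.5 * D * (F + force(psis, x_new, t + D))
    psis = cn(psis, x_new, t + 2 * delta, 2 * delta)  # t_{m+1} -> t_{m+2}
    return psis, x_new, p_new
```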
"year": 2014,
"sha1": "a75c68ee62a3cf7e11418e73756df62e21d403f6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1305.1811",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a75c68ee62a3cf7e11418e73756df62e21d403f6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Dynamic mode evolution and phase transition of twisted light in nonlinear process
We report on the dynamic evolution of orbital angular momentum (OAM)-carrying light in sum frequency generation (SFG). The dynamic evolution of the SFG beam is studied in different cases, in which the two pump beams are either a single OAM mode or an OAM superposition mode. For the cases when the two pumps are both in a superposition mode, two kinds of spatial patterns and evolution behaviors are observed; one set of spatial patterns is non-diffractive and unchanged with propagation. In addition, the SFG of two pump beams with opposite OAM evolves into a kind of quasi-Gaussian mode. These observations show that the pumps' phases are coherently transferred to the SFG beam in the conversion process. Our findings give insight into the physical picture of OAM light transformation in nonlinear processes, which can be used in OAM mode engineering and de-multiplexing of different OAM modes.
Recent years have seen vast progress in the generation and detection of structured light, with potential applications in high-capacity optical data storage and optical communications. It is well known that photons can carry both spin and orbital angular momentum (OAM); the spin is associated with the polarization and the OAM with a helical phase structure in the paraxial regime [1]. It was shown by L. Allen et al. that a photon with a helical phase of the form $e^{il\theta}$ carries an OAM of $l\hbar$. The singularity of OAM light has been found to be a useful tool in many applications [2][3][4][5][6][7][8]. OAM modes with different l values are orthogonal; therefore, information can be encoded in a higher dimension in contrast to polarization modes. The combination of OAM and polarization encoding has been demonstrated to increase the channel capacity of communication systems [8]. In addition, the tolerance of quantum key distribution can be increased by encoding in a multi-level OAM basis [9,10].
Phase-matched nonlinear processes include second- or third-order nonlinear processes such as sum frequency generation (SFG), spontaneous parametric down-conversion (SPDC), four-wave mixing (FWM) and stimulated Raman scattering. Phase matching of the longitudinal wave vectors results in momentum correlation, but it also allows for the transfer of transverse phase information. The study of structured light in various nonlinear processes has a long history.
In the various studies of OAM-carrying light transformation in phase-matched nonlinear processes, the dynamic mode evolution of the generated light upon propagation has not yet been given. In this work, the dynamic evolution of 525.5 nm structured light generated from SFG of 795 nm and 1550 nm OAM-carrying light is studied. The two pump beams are either single OAM modes or superpositions of OAM-carrying light. By tracking the mode evolution from the near field to the far field, abundant phenomena are observed. The first interesting phenomenon is that the SFG of two pump beams with opposite OAM evolves from a ring structure in the near field to a quasi-Gaussian mode in the far field. The second interesting phenomenon is that two sets of patterns and dynamic evolution behaviors are observed when the two pump beams are both in OAM-superposition modes. These two sets of patterns are obtained by changing the relative phase between the OAM-superposition modes in one of the pump beams, which shows the transfer of phase from the pump beams to the SFG beam. The above phenomena have not been studied and observed before. Our findings give a clear picture of OAM mode evolution in the SFG process, which can be used for OAM mode engineering and de-multiplexing.
The experimental setup is depicted in Figure 1. The 795 nm OAM-carrying light is generated with a spatial light modulator (SLM), and the 1550 nm OAM-carrying light is generated using a vortex phase plate (VPP). The superposition of 1550 nm OAM-carrying light is obtained using a modified Sagnac interferometer. The two pump beams are focused using lenses L1 and L2 separately and combined using a dichroic mirror (DM) before entering the periodically poled KTP crystal (PPKTP, Raicol Crystals, dimensions 1 mm × 2 mm × 10 mm, poling period 9.375 µm). Before the SFG mode is measured with a charge-coupled device (CCD) camera, the SFG beam is imaged using lens L3 and passed through filters to remove the pump beams.
For SFG of two OAM-carrying beams with OAM indices m and n, the SFG beam has the analytical form given by equation (1) of Ref. [17], where λ and k are the wavelength and wave number of the SFG beam, respectively, z is the propagation direction, w_1 and w_2 are the two pump beams' waists, and w_s represents the SFG beam's waist. The analytical expression of equation (1) provides all theoretical simulation results for our experiments in the discussion below. To investigate the mode evolution dynamics of OAM modes, the case when the two pump beams carry OAM with opposite signs is studied first. The experimental results are shown in Figure 2. Images in rows A1-A3 are the experimental results when the 795 nm pump beam carries OAM indices of 1, 2, 3 and the 1550 nm pump beam carries OAM indices of -1, -2, -3, respectively. The images in each row are obtained by moving the CCD camera from the near field to the far field. In the near field, the intensity distribution of the SFG beam has a single-ring shape (images of columns 1, 2 in each row). Then, as the beam propagates away, the intensity of the ring dims and a central point appears; the central point becomes brighter and brighter with propagation distance. Finally, in the far field, the SFG beam evolves to a spatial shape with a central bright point and dim concentric rings outside. In the far-field approximation, the SFG mode takes a form similar to a standard LG mode up to a constant Γ, but differs by a factor of 2 in the argument of the Laguerre polynomials. This difference makes the intensity in the radial direction of the SFG beam decrease much more rapidly than for a standard LG mode; therefore, the outer rings in rows A2, A3 cannot be seen experimentally. For OAM indices of 1, 2, 3, there should be 1, 2, 3 outer rings for the SFG beam in the far field. Another property of the far-field SFG beam is that most of the power is concentrated in the central bright point; the power of the outer rings can be ignored, and therefore the mode can be treated as a quasi-Gaussian mode. This property has potential application for OAM mode de-multiplexing. The corresponding theoretical simulation results for rows A1-A3 are shown in rows B1-B3, respectively. When one pump beam carries a single OAM mode and the other carries an OAM superposition mode, the results are shown in Figure 3. Viewing Figure 3, we conclude that the SFG beams in the near field are petal-like interference patterns (the number of petals is 2l or l_1 + l_2). After propagating away, a central point appears and becomes brighter and brighter with distance. At the same time, the petals are twisted and rotated in the same direction, like a windmill. The SFG of two OAM beams with the same sign evolves to a standard LG mode in the far field [17]; therefore, the SFG modes in the far field in the present case are superpositions of a standard LG mode and a non-standard LG mode for rows A1-A3 and A4, respectively. The corresponding theoretical results are shown in rows B1-B4; the slight difference between them results from mode distortion in the conversion process. When the two pump beams are in orthogonal superposition modes, the experimental and simulation results are depicted in Figure 5. Both the near- and far-field spatial shapes are petal-like structures, and the shapes are unchanged with propagation. The number of petals is 4l. This non-diffractive behavior is rather different from case I. The defects of the patterns in the far field arise from imperfect destructive interference of the cross terms in equation (4).
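Since the full SFG expression of equation (1) is not reproduced here, the sketch below only illustrates the petal counting for an OAM superposition: the intensity of LG_{+l} + e^{iα} LG_{-l} (lowest radial order, normalization omitted) has 2l azimuthal petals, and the relative phase α rotates the pattern, consistent with the behavior described above.

```python
import numpy as np

def lg_mode(r, phi, l, w):
    """Lowest-radial-order Laguerre-Gaussian mode, up to normalization:
    LG_l(r, phi) ~ (sqrt(2) r / w)^|l| * exp(-r^2/w^2) * exp(i l phi)."""
    return ((np.sqrt(2) * r / w) ** abs(l)
            * np.exp(-r**2 / w**2) * np.exp(1j * l * phi))

l, w, alpha = 3, 1.0, 0.0
x = np.linspace(-3, 3, 400)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
# Superposition of opposite OAM indices: azimuthal dependence cos^2(l*phi + alpha/2)
I = np.abs(lg_mode(r, phi, +l, w)
           + np.exp(1j * alpha) * lg_mode(r, phi, -l, w)) ** 2
# I shows 2l bright petals; changing alpha rotates the petal pattern.
```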
In conclusion, the dynamic mode evolution of OAM in SFG is studied for different cases, and a variety of spatial patterns are observed. For two pump beams carrying opposite OAM, the SFG beam evolves from a single ring in the near field to a quasi-Gaussian mode in the far field. In the case when one pump beam carries a single OAM mode and the other carries an OAM superposition mode, the SFG beam evolves from a petal-like interference pattern in the near field to twisted petals with a central bright point. There are two sets of patterns and propagation behaviors when both pump beams carry OAM superposition modes. For two pump beams with the same OAM superposition mode, the spatial pattern evolves from a petal-like structure to a central bright point with a dim symmetric structure around it, while for two pumps in orthogonal OAM superposition modes, the spatial pattern is non-diffractive and unchanged along propagation. Our results show that the phases of the two pump beams are preserved in the SFG process. The mode evolution dynamics of OAM light in this study is not limited to the SFG process; it also applies to other second- or third-order nonlinear processes. The present study gives insight into OAM mode conversion in SFG and has potential applications for OAM mode de-multiplexing and engineering.
"year": 2015,
"sha1": "e5bada1a400514f280b8d67219b66fc42519334e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1506.05201",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "008b1d7b6d4687647340b55eb3324e08fabeb4df",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Effective factors in planning, implementation, and management of educational program evaluation in medical sciences: A practical guide
BACKGROUND: Educational program evaluation is a complex issue, and it is essential to have knowledge of the potential challenges and solutions throughout the whole process. The present study aimed to identify the influential components in the planning, implementation, and management of educational program evaluation in medical sciences and then provide an applied guide to help evaluators of educational programs guarantee the best possible evaluation. MATERIALS AND METHODS: This descriptive study was conducted in three steps. First, the effective components in the planning, implementation, and management of educational program evaluation in medical sciences were reviewed. Second, experts' opinions were sought through a focus group discussion regarding the mentioned components. Third, the opinions of 40 medical education and program evaluation experts on the compiled applied guide were collected using a checklist. RESULTS: The applied guide for the planning, implementation, and management of educational program evaluation in medical sciences consists of eight stages: determining the evaluation questions and standards, determining the type of information required, determining resources to collect information, determining methods and tools to collect information, determining data analysis methods, determining the timing and frequency of reporting, determining the appropriate ways of reporting, and determining strategies to maintain the cooperation of data sources. CONCLUSION: The spread of educational programs in medical sciences universities leads to an increasing need for program evaluation to provide evidence of their effectiveness and improvement. The present research provides an applied guide to make the evaluation of educational programs feasible by using a set of concepts, principles, methods, theories, and models.
Introduction
Evaluating educational programs in medical sciences is one of the most fundamental aspects of educational interventions. [1,2] Evaluation is an inseparable part of every educational program, and it is a continuous and dynamic method to identify errors in the teaching-learning process. [3,4] Evaluation is also required due to the recent developments in medical sciences education systems and the considerable expenditure of money and time on educational programs every year. Therefore, an accurate evaluation is one of the regulators' concerns: it reveals the strengths, weaknesses, and effectiveness of educational programs, as well as ways to improve them. [5,6] Although a great number of studies have been conducted regarding various
approaches to educational program evaluation, there are few studies presenting solutions for the challenges that occur while implementing program evaluation [7]. Every challenge presents itself in ways that cannot be expected; therefore, it is difficult for evaluators to have access to detailed guidance [8]. Different approaches and methods of evaluation are mainly generic in nature, and it needs to be clarified what details must be followed by evaluators when facing challenges. [9] Some evaluation challenges introduced by Grandisson et al. [10] in 2014 include scarcity of resources, multiple factors related to the program's effectiveness, and many beneficiaries with unique needs. According to Grandisson et al., [10] evaluators need to consider various aspects of every program before proceeding to evaluation. Guyadeen et al. (2018) [11] emphasized the importance of providing necessary practical training for evaluators.
Since no guide has been developed in Iran regarding the 'implementation' of educational program evaluation in medical sciences, compiling an applied guide could lead to a major breakthrough. The present study aimed to identify the effective components involved in planning, implementing, and managing the evaluation of educational programs in medical sciences and to develop a guide using these components to improve program evaluation.
Study design and setting
This descriptive study was conducted at Kerman University of Medical Sciences in 2022 in three steps: a literature review, a focus group, and a checklist survey.
Study participants and sampling
For participation in the focus group, an e-mail was sent to 10 medical education experts. The participants of the third step were 45 medical education experts and educational program evaluators.
Data collection tool and technique
The first step included a literature review regarding the effective components for designing a guide. The keywords plan evaluation, planning evaluation, medical evaluation, design management, and program evaluation were searched from 2010 to 2020 in Medline, Scopus, Web of Science, and EMBASE. The presence of the keywords in the title and abstract of articles was considered the inclusion criterion. The exclusion criteria were unrelated content; studies that did not address the components of planning, implementation, and evaluation management in medical sciences educational programs; studies that compared different methods of evaluation; and studies that investigated the effectiveness of different methods of evaluation.
In the second step, one focus group discussion was conducted with several medical education experts.
The findings of the previous step were presented and completed during this session by the experts, who had been invited to attend the focus group discussion via an e-mail sent to 10 medical education experts. The focus group discussion was held for 2 hours. The collected data regarding the planning, implementation, and evaluation management of medical sciences educational programs were reviewed and discussed. Finally, an eight-stage applied guide was designed and compiled.
In the third step, the compiled guide was given to 45 medical education experts and educational program evaluators as an online checklist with 35 items (23 closed-ended questions and 12 open-ended questions).
They were asked to state their opinion regarding the clarity and practicality of each guidance step. Sampling was carried out by census due to the limited number of experts. The closed-ended questions evaluated the clarity and practicality of the guidance using a dichotomous scale (yes or no), and the open-ended questions were used to collect opinions and suggestions. Data were analyzed using SPSS. Finally, the ultimate guide was developed using the participants' opinions. The steps of the study are shown in Figure 1.
Ethical consideration
This study was approved by the Research Ethics Committee of Kerman University of Medical Sciences (No. IR.KMU.REC.1400.075). Participants did not receive any incentives, and participation was voluntary. Informed consent for participation was obtained based on the proposal approved by the ethics committee. The participants were also assured of the confidentiality of their information, and it was explained that the results would only be used for research objectives.
Results
In the first step, 53 articles were found, and the titles and abstracts of 49 articles were reviewed. Finally, based on the inclusion criteria, nine articles in Persian and 30 in English were thoroughly studied. Then, the effective components in the planning, implementation, and evaluation management of medical educational programs were established. Table 1 presents these effective components divided into three categories.
In the second step, the mentioned factors were presented in a focus group discussion to six medical education experts, and, based on their opinions, step-by-step practical guidance was developed.
In the third step, 40 of the 45 online checklists were returned (response rate: 88.8%). 45% of the respondents were male, and the rest were female. The mean work experience of the respondents was 8 years, and about 35% of them had less than 5 years of work experience. The minimum work experience was 8 months, and the longest was 15 years. More than 90% found the content of the guidance clear and unambiguous, and 93.6% found the content of the guidance practical. According to the respondents, the eighth stage was the least practical, with a practicality rating of 74.4%. Based on the Chi-square test, there was no significant difference between the frequencies of women's and men's responses regarding the clarity of the overall content of the guidance and its practicality. The frequency distribution of the responses regarding the clarity and practicality of the content revealed that the mean work experience of respondents answering 'yes' was higher than that of respondents answering 'no'; however, this difference in mean work experience was not statistically significant.
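For readers who wish to reproduce this kind of comparison outside SPSS, a minimal sketch of the Chi-square test of independence is given below. The counts are purely hypothetical (chosen only to match the 18/22 male/female split), not the study's raw data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = gender (male, female),
# columns = answer (yes, no) on the clarity of the guidance.
table = np.array([[16, 2],
                  [21, 1]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p > 0.05 -> no significant difference
```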
Finally, the applied guide for planning, implementation, and evaluation management of educational programs in medical sciences includes eight stages: determining the questions and standards, determining the type of information required, determining appropriate resources for data collection, determining tools and methods for data collection, determining the data analysis method, determining the timing and frequency of reporting, determining an appropriate way to present reports, and finally determining strategies to maintain the cooperation of data sources.
Discussion
At the time of implementation or at the end of any educational program, such as educational classes, faculty development programs, conferences, and seminars, the people involved, including policy-makers, planners, instructors, and evaluation experts, evaluate the implemented program. The ultimate aim of the evaluation process is to judge and make a decision based on the evidence; in other words, they need to decide whether the program is allowed to continue or requires modification. The present research aimed to provide an applied guide for professional evaluators to evaluate educational programs carefully. According to the results, the effective components in evaluating educational programs were identified across the eight stages of planning, implementation, and evaluation management of medical educational programs.
It is essential to identify questions and standards in relation to evaluation when planning educational programs. When the questions are designated, it will be apparent where the evaluation is headed, and all the following steps will be identified. Regarding the distinction between questions, criteria, and standards of evaluation, Yarbrough (2017) stated that the evaluation question reflects the purpose of the evaluation, the evaluation criteria state the characteristics of a successful program, and, last but not least, the evaluation standard states the appropriate characteristics of a program. [12] According to Nobrega et al. (2021), [13] the process of developing evaluation questions comprises two primary stages, namely divergent and convergent. In the divergent stage, efforts are made to gather all the questions that seem appropriate to the experts and stakeholders of the program. In the convergent stage, the goal is to categorize and reduce the number of questions based on their importance and relevance to the objective of the evaluation. Finding questions in the divergent stage and selecting the questions in the convergent stage are carried out in collaboration with program stakeholders. The sources of evaluation questions are varied. According to the study by Jayaratne in 2016, various resources can be involved in determining evaluation questions. These resources include the questions of stakeholders, program evaluation models, standards, checklists, tools designed for similar evaluations, the perspective and experience of experts, and the evaluator's personal experience and judgment. [14] Once evaluation questions have been selected, it is essential to establish standards for each question; if no standard is defined for a question, it is critical to establish one. [12] It can sometimes be challenging to determine a standard for questions, and it may not be possible to determine a specific level as a standard. Therefore, evaluators need to have a general understanding of standards. When setting standards, evaluators must always be careful to avoid setting standards that are too high or too low. [15] Ahmady et al. (2009) [16] recommended obtaining feedback from stakeholders with different perspectives in order to avoid subjectivity in setting standards.
The implementation steps of educational program evaluation consist of data collection, analysis, and interpretation. Lemire et al. (2020) [17] stated that there are four essential steps in data collection: determining what information is needed, determining appropriate resources to collect data, determining the required methods and tools to collect data, and finally determining appropriate conditions for data collection. According to Nielsen et al. (2022), [18] evaluators must have a plan for coding, organizing, maintaining, retrieving, and analyzing the data. In addition, the interpretation of the findings is one of the important steps in program evaluation, since statistical data mean nothing without the right interpretation.
Proper planning and implementation of the evaluation are helpful; however, if the results are not effectively reported, the chances of using them cannot be significant. Husereau et al. (2022) [19] conducted a study on consolidated health economic evaluation reporting standards and emphasized various areas, including continuous reporting of evaluation results, identifying and applying various reporting methods, and identifying audiences and reporting results based on their requirements and characteristics.
Evaluators must constantly be in contact with the evaluation audience and communicate the results to them. This is essential since it provides the evaluator with an opportunity to understand the audience's unexpected reactions and a chance to manage them. Moreover, the audiences can grasp the results and develop a sense of ownership toward them; as a result, they feel motivated to make changes in order to eliminate the imperfections of the program. [20] According to Portell et al. (2015), [21] the timing and frequency of presenting a report depend on the purpose of the evaluation; in formative evaluations, reporting is more frequent.
The timing of intermediate reports can be flexible: they can be presented either at the end of each stage of the program or at the end of each stage of information collection, or even spontaneously whenever unexpected results are obtained.
There are various ways to present evaluation reports, some less interactive and some more interactive. The methods that involve the least interaction between the evaluator and program stakeholders are reporting through newsletters, summaries, brochures, websites for posting news, or news media.
In the middle of the mentioned spectrum, there are other ways, such as oral presentations, PowerPoint, video reports, posters, images, caricatures, animations, and poetry. At the end of the spectrum, the most interactive methods involve the greatest interaction between the evaluator and the stakeholders of the program, including meeting reports, either individually or using simultaneous electronic communication. [22,23] Educational program evaluators must be aware of the audience's needs in relation to the evaluation reports. Some common mistakes when presenting evaluation reports to various audiences include forgetting a particular audience, not considering their specific needs, and considering too broad or too narrow an audience. [24] Reporting negative results is of great importance. When presenting adverse reports, it is better to start by presenting positive aspects and to bring up negative aspects in face-to-face and friendly meetings; first, an intermediate written report should be provided, and then the stakeholders' reactions should be examined. After this stage, the final report is sent for review and then finalized. Moreover, informing stakeholders about negative results cannot be postponed. [25] One way to make negative results more effective while reporting is to ask the audience's opinion regarding how to present them. [20]
Conclusions
The spread of educational programs in medical sciences universities leads to an increasing need for evaluating the programs to investigate their effectiveness and improvement. Based on a set of concepts, principles, methods, theories, and models in the field of program evaluation, this research provides an applied guide for planning, implementing, and managing educational program evaluation in medical sciences. It consists of eight stages: setting evaluation questions and standards, identifying required information, selecting appropriate resources for data collection, determining data collection methods and tools, selecting data analysis methods, determining the timing and frequency of evaluation reporting, selecting reporting methods, and identifying strategies to maintain collaboration among information resources.
Figure 1: Steps of the study
"year": 2024,
"sha1": "f73e9473c2b5cd1dce734b06d1e99996cf0fa977",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jehp.jehp_308_23",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea4d38ceb2791a48197415b7f4b40aa409082292",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": []
} |
Macular Buckling Surgery for Retinal Detachment Associated with Macular Hole in High Myopia Eye
A 68-year-old woman presented to our clinic with a 1-month history of central scotoma and visual loss in her right eye. The best corrected visual acuity (BCVA) was hand motion in her right eye. Fundus examination showed myopic chorioretinal degeneration in association with posterior staphyloma, and the retina was slightly elevated throughout the macula. Optical coherence tomography (OCT) revealed retinal detachment involving the posterior pole with a macular hole and staphyloma. The patient underwent pars plana vitrectomy, internal limiting membrane peeling, macular buckling, and perfluoropropane gas tamponade. At 3-month follow-up, her BCVA had improved to counting fingers at 1 meter, and a flattened retina with a closed macular hole was observed by OCT. Myopic macular holes with retinal detachment associated with posterior staphyloma represent a challenge regarding their management, and several surgical techniques have been described. Although satisfactory anatomical improvement is achieved in these eyes after surgery, the visual acuity outcomes may be poorer than expected due to the chorioretinal atrophy at the posterior pole.
Introduction
Although macular hole is reported to be a rare cause of retinal detachment (RD), accounting for approximately 0.5% of all detachment cases, figures of 9% and above have been reported in some populations. 1,2 One of the most common causes of macular holes leading to RD is high myopia. 1 Although the pathogenesis is not fully understood, various mechanisms have been suggested to play a role in the development of RD associated with macular hole (MHRD) in highly myopic patients. These include increased vitreous traction due to posterior staphyloma, reduced chorioretinal adhesion due to posterior chorioretinal atrophy, stiffening of the internal limiting membrane (ILM), increased tension in retinal vessels, and tangential forces created by increased cortical vitreous contractions. 3,4 The treatment of MHRD in high myopia is difficult. Several surgical approaches have been recommended, such as pneumoretinopexy and pars plana vitrectomy (PPV) with ILM peeling or macular buckling (MB). In this study, we present the outcomes of PPV, ILM peeling, MB, and perfluoropropane (C3F8) gas tamponade performed to treat MHRD in a patient with high myopia and posterior staphyloma.
Case Report
A 68-year-old female patient presented with complaints of low vision and central vision loss in her right eye for the past month. Her best corrected visual acuity was hand motion in both eyes. Intraocular pressure was 19 mmHg in the right eye and 17 mmHg in the left eye. Slit-lamp examination revealed bilateral nuclear sclerosis. On fundus examination, bilateral posterior staphyloma with myopic degenerative changes was observed, as well as a shallow RD associated with the posterior staphyloma in the right eye. Examination by optical coherence tomography (OCT) showed RD associated with a full-thickness macular hole in the center of the posterior staphyloma of the right eye (Figure 1A and Figure 1B). The anterior-posterior axial length was 33.65 mm. B-mode ultrasonography showed significant posterior bulging of the sclera (Figure 2A). Surgical repair was done by dissecting the conjunctiva and Tenon's capsule in an approximately 150-160 degree area of the superotemporal region of the right eye, and bridle sutures were passed through the superior and lateral rectus muscles. In the superotemporal region, 5/0 nylon sutures were placed in the sclera approximately 20 mm from the limbal zone, where the implant would be fixed between the insertion points of the superior and inferior oblique muscles. Following phacoemulsification and intraocular lens implantation in the posterior chamber, triamcinolone acetonide (TA)-assisted PPV and ILM peeling were performed. Before the explant (AJL Ophthalmic) was secured to the superotemporal region, a fiber-optic light attached to the explant was used to check by transillumination where the explant contacted the posterior pole (Figure 3). Laser photocoagulation was applied to the hole and to degenerative areas in the peripheral retina, followed by fluid-gas exchange using C3F8.
The patient was recommended to lie in prone position for 3 days postoperatively. Fundus examination and B-mode ultrasonography performed at postoperative 2 months revealed a bulge in the macular area associated with the local explant ( Figure 2B). At postoperative 3 months, the patient's visual acuity was counting fingers from 1 meter. Fundus examination showed that the macular hole had closed and the retina was attached. These findings were confirmed with OCT ( Figure 1C).
Discussion
The treatment of MHRD in patients with high myopia presents a considerable challenge, and several surgical approaches have been suggested for such cases. Since 1982, PPV has generally been accepted as the preferred surgical approach for the treatment of MHRD in highly myopic eyes. 5 Using TA during PPV facilitates the detection of vitreous cortex remnants and the differentiation and visualization of the epiretinal membrane. Compared to TA-assisted procedures, patients undergoing PPV without TA show a higher rate of repeat surgery due to the postoperative development of preretinal fibrosis. 6 ILM peeling eliminates the risk of prefoveal vitreous cortex remnants following PPV. Moreover, ILM peeling with PPV improves the chances of surgical success by reducing the amount of tangential traction at the macular hole. 7 In light of these data, we also applied TA-assisted PPV with ILM peeling in our case. Previous studies have reported anatomical success rates of 70-92% with PPV, ILM peeling, and gas tamponade in the treatment of MHRD in highly myopic eyes. 8,9,10 However, although PPV with ILM peeling and gas tamponade is the primary surgical approach for such cases, it may not adequately address certain pathophysiological factors, such as the tension created by the posterior staphyloma. The presence of posterior staphyloma in these patients may lead to complications such as foveoschisis. An earlier study 3 showed that the incidence of RD in eyes with macular holes was associated with the degree of myopia, chorioretinal changes, and the presence of posterior staphyloma. Wei et al. 11 reported that greater axial length, severe chorioretinal atrophy, and posterior staphyloma negatively affected postoperative anatomic success in high myopia patients with MHRD. Therefore, MB methods have been proposed to prevent increased tension due to posterior staphyloma.
MB is an old surgical technique used to counteract the pulling effect of the staphyloma. 12 However, it is difficult to accurately position the material during the procedure so that it will have the desired effect on the macula. A second difficulty with this procedure is the availability of explants. Various materials such as silicone sponge, silicone-coated polymethylmethacrylate, a silicone plate containing metal wire (Ando), and polytetrafluoroethylene are used as explants in MB. Theodossiadis and Theodossiadis 13 reported achieving anatomic success in 88% of patients with high myopia and MHRD using MB with silicone sponges. Numerous studies have reported anatomical success rates of 90% or higher after MB in cases with MHRD. 14,15 These high reported rates of anatomical success in highly myopic MHRD patients have led to MB gaining prominence, especially when treating patients with posterior staphyloma.
By flattening the excessive concavity in the posterior pole caused by the posterior staphyloma, MB reduces the anterior-posterior traction caused by both the posterior staphyloma itself and the tension in the retinal arteries. However, PPV and ILM peeling applied in addition to MB may be effective in preventing the recurrence that is sometimes seen in these cases. PPV and ILM peeling eliminate the tangential and centripetal traction which can result from the ILM and epiretinal membrane. Therefore, combined surgical approaches have been proposed to increase both anatomic success and the likelihood of macular hole closure. Alkabes et al. 16 reported that a combination of PPV, ILM peeling, and MB resulted in macular hole closure in 81% and retinal reattachment in 95% of MHRD cases. In the same study, 16 this combined procedure led to macular hole closure in 57% and retinal reattachment in 90.5% of patients who had not responded well to previous surgical approaches. Similarly, in a large prospective study by Ma et al. 17 comparing the outcomes of PPV with ILM peeling versus combined PPV, ILM peeling, and MB in patients with MHRD, the combined procedure was associated with significantly higher rates of both macular hole closure and retinal reattachment. We also performed PPV, ILM peeling, MB, and gas tamponade in our MHRD patient due to the findings of increased axial length, posterior staphyloma, and chorioretinal atrophy, and we observed both macular hole closure and retinal reattachment postoperatively. However, there was not as much functional improvement as we expected, and the increase in visual acuity was limited. Even if no intraoperative complications are noted, damage to vascular structures or the optic nerve may occur while the explant is placed. In addition, both the pre-existing RD and the thin, delicate retinas in eyes with high myopia and MHRD can cause serious complications during ILM peeling, such as the formation of new holes in the retina. Nevertheless, we observed no intraoperative or postoperative complications related to ILM peeling in our case.
The combination of PPV, ILM peeling, MB, and gas tamponade may be effective in patients with high myopia and MHRD. However, although the anatomical success is high with this procedure, functional success may be limited due to chorioretinal atrophy resulting from high myopia. In our patient, limited functional improvement was achieved due to chorioretinal atrophy in the macular region. Therefore, the fact that the severity of chorioretinal atrophy in the posterior pole will limit functional success should be considered prior to surgical intervention in these patients.
Ethics
Informed Consent: Obtained. Peer-review: Externally peer-reviewed.
"year": 2017,
"sha1": "b6b0ac5c5d686898b620c57bb5cee6bae1209b99",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4274/tjo.55453",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6b0ac5c5d686898b620c57bb5cee6bae1209b99",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Influence of Bed Geometry on the Drying of Skimmed Milk in a Spouted Bed
In the present work, the fluidynamics and the drying of skimmed milk in conical and conical-cylindrical spouted beds were analyzed as a function of bed geometry and operating conditions. Three internal cone angles (45°, 60° and 75°), different loads of inert particles (1.50, 3.00 and 4.50 kg) and a fixed static bed height (20.50 cm) were used. Polyethylene particles of 4.38 mm diameter and 930.50 ± 0.3 kg/m³ specific mass were used as inert particles. An artificial neural network model was trained to predict the peak pressure drop and the minimum spouting velocity from an experimental data bank. The experimental results showed a significant effect of the geometric characteristics of the bed on the fluidynamic parameters. It was also observed that, for the operating conditions studied, the conical spouted bed and the cone angle of 45° were more suitable for drying skimmed milk. The neural network provided predictions in good agreement with the experimental data.
Introduction
Several studies have been conducted to investigate the effect of operating conditions on the fluidynamics and the drying of pastes in spouted beds. Mathur and Epstein [1] reported that the minimum fluid velocity at which a bed remains in the spouted state depends on solid and fluid properties on the one hand and on bed geometry on the other. Olazar et al. [2] conducted a study using contactors with different geometries, solids of different characteristics, and a wide range of air velocities. The authors observed that there are limits for the cone angle, D_o/D_i and D_o/d_p; from that, the design parameters for stable operation in a spouted bed were obtained. Olazar et al. [3] investigated the effect of the operating conditions (base angle, air inlet diameter, stagnant bed height, particle diameter and air velocity) on the fountain geometry. It was seen that the contactor base angle had a major influence on the fountain geometry. San Jose et al. [4] reported that the cone angle had a significant effect on the air velocity, the particle trajectories, and the height of the fountain.
Pham [5] verified the existence of stagnant regions and the difficulty of particle circulation in the annular region when pastes are introduced into the bed. Thus, it was concluded that pastes significantly change the fluidynamic parameters as well as the solid and fluid circulation patterns. Medeiros et al. [6] investigated the influence of chemical composition on spouted bed performance in drying pulps of tropical fruits. The authors found that pulps with high concentrations of sugars provoked spout instability, while the presence of fat favored the bed dynamics. Recently, Nascimento et al. [7] studied different concentrations of milk fat and found that the pulp composition also affected the process, since the absence of fat in skimmed milk caused significant changes in the flow of the inert particles and therefore a pronounced increase in pressure drop.
The context presented shows that several factors affect the fluidynamic parameters and the drying process in spouted beds. However, it is difficult to obtain a good prediction of the minimum spouting velocity, the pressure drop behavior and the evaporation capacity of the equipment as a function of different experimental conditions, since there are gaps in the information provided by the literature. In general, the influence of bed geometry has been analyzed in dry beds, while in operations using pastes the bed geometry was fixed. In order to improve knowledge on this subject, the experimental analysis was divided into three stages. First, the fluidynamic parameters (peak pressure drop and minimum spouting velocity) were determined. In the wet experiments, distilled water was used as a standard paste because it is the constituent corresponding to 75% to 97% of the weight of real pastes. With the knowledge from the previous steps, the drying of real pastes was carried out. According to Ochoa-Martinez et al. [8] and Nascimento et al. [7], the absence of fat in skimmed milk leads to difficulties in the circulation of the inert particles, accumulation of paste on the surface of the inert particles and, consequently, an increase in the values of pressure drop. Thus, skimmed milk was used to show the potential improvement of drying with three different cone angles. In addition, a neural network was designed and trained with the data bank to predict the minimum spouting velocity and the peak pressure drop of the spouted bed for the experimental conditions studied.
Equipment and Operations Conditions
The fluidynamic experiments were performed in a spouted bed consisting of a stainless steel cylindrical vessel, 120 cm in height and 30 cm in diameter, with an inlet diameter of 3 cm and three lower cone angles (45°, 60° and 75°). Figure 1 shows the schematic diagram of the experimental setup. Figure 2 presents the bed configurations used in this study, and the bed dimensions are presented in Table 1. A Venturi nozzle is employed, whose geometric factors are defined in Figure 3. Polyethylene particles of 4.38 mm diameter and 930.50 ± 0.3 kg/m³ specific mass were used as inert particles in the experiments.
Experimental Description
First, the characteristic peak pressure drop and minimum spouting velocity in both conical and conical-cylindrical spouted beds were obtained from the curve of pressure drop versus superficial air velocity, recorded for both increasing and decreasing air velocity, according to the methodology proposed by Mathur and Epstein [1]. The experimental operating conditions are summarized in Table 2. All measurements were performed in triplicate in order to check the reproducibility of the data. The reported pressure drop values were obtained by subtracting the empty-bed pressure drop from the pressure drop measured with the load of particles. After that, experiments employing distilled water as a standard paste were carried out in order to build an initial background on drying pastes. Finally, drying experiments using skimmed milk were conducted. Measurements of inlet air temperature and velocity, bed pressure drop, and dry and wet bulb temperatures at the cyclone exhaust were available. Data were collected every 30 seconds by the acquisition system, 1024 points at a frequency of 500 Hz. Dry and wet bulb temperature measurements were converted to relative humidity values of the exhaust air. The inlet air temperature and inlet air velocity were 100°C and 1.30 u_mj, respectively.
Artificial Neural Networks
An artificial neural network (ANN) was proposed to predict the minimum spouting velocity and the pressure drop of the spouted bed for different cone angles and loads of inert particles. ANNs can represent non-linear processes with complex structures and, in some cases, provide better results than empirical correlations [9]; they may therefore be an interesting and promising alternative for estimating fluidynamic parameters for different bed configurations. Another advantage, which fits the approach of this study, is that neural networks are a simple alternative for processes that involve phenomena which are complex and difficult to describe mathematically.
The neural network was developed using the Neural Network Toolbox in MATLAB 2007. It was a three-layer feedforward neural network, with a single hidden layer, two inputs and two outputs, as shown in Figure 4.
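For illustration only, a minimal NumPy re-implementation of the 2-3-2 feedforward architecture described here is sketched below (three hidden neurons, as reported in the next paragraph). It is not the MATLAB code used in the study; the sigmoid hidden activation, linear output layer, learning rate and input scaling to [0, 1] are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2 inputs (e.g., cone angle, inert load) -> 3 hidden units -> 2 outputs
# (e.g., minimum spouting velocity, peak pressure drop), all scaled to [0, 1].
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def train(X, Y, lr=0.1, epochs=20000):
    """Plain batch backpropagation on a mean-squared-error loss."""
    global W1, b1, W2, b2
    for _ in range(epochs):
        H = sig(X @ W1.T + b1)           # hidden layer activations, shape (N, 3)
        out = H @ W2.T + b2              # linear output layer, shape (N, 2)
        err = out - Y                    # prediction error
        gW2 = err.T @ H / len(X)
        gb2 = err.mean(axis=0)
        dH = (err @ W2) * H * (1 - H)    # error backpropagated through sigmoid
        gW1 = dH.T @ X / len(X)
        gb1 = dH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2   # gradient-descent updates
        W1 -= lr * gW1; b1 -= lr * gb1
```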
The number of neurons in the hidden layer was chosen by trial and error, as suggested by Himmelblau [9], starting with 2 neurons and adding more until the network performance in estimating the correct output was satisfactory. The number of neurons in this study was 3. Backpropagation was used as the learning method.

Minimum Spouting Velocity

For all operational conditions employed, the minimum spouting velocity was influenced by the cone angle (θ) and by the load of inert particles (m_p). As seen in these results, a decrease in θ leads to higher values of u_mj for all values of m_p studied. This is due to the fact that, for a fixed load of inert particles, increasing the cone angle decreases the static bed height, H_0 (Figure 6), and, consequently, a smaller air velocity is needed to support the spouted state. In agreement with these results, Mathur and Epstein [1] reported that the minimum spouting velocity depends on the solid characteristics, the fluid properties and the bed geometry. Mathur and Epstein [1] also observed that for conical-cylindrical spouted beds the intensity of the cone angle effect depends on other variables, for example, the column diameter, D_c.
Another aspect that should be analyzed is the influence of the cone angle on the fluidynamics for a fixed static bed height. Figure 7 presents the values of minimum spouting velocity as a function of cone angle obtained for polyethylene particles, a fixed static bed height and 100°C.
Figure 7 shows that u_mj increases as θ is increased from 45° to 60°; on the other hand, the minimum spouting velocity remains practically constant as θ increases from 60° to 75°. This is due to the fact that, for a fixed static bed height (Table 2), the m_p difference was larger when the cone angle varied from 45° to 60° than from 60° to 75° (1.36 and 0.84 kg, respectively). Thus, a higher air velocity is required to maintain the spouting regime when θ increases from 45° to 60°.
Another aspect that must be taken into account is the bed configuration used. As one can observe in Figure 7, two distinct bed configurations were used: conical and conical-cylindrical spouted beds. According to Wang et al. [10], in conical spouted beds, where the solids inventory is restrained within the conical region below the cylindrical section, the hydrodynamics are quite different from those of conventional cylindrical spouted beds. Kmiec [11] reported that the minimum spouting velocity in a conical spouted bed is more dependent on the bed height than in conical-cylindrical spouted beds. Mathur and Epstein [1] analyzed the predictions of the Mathur-Gishler equation against different experimental data provided in the literature for conical-cylindrical spouted beds. The authors observed that for columns of up to 30.5 cm diameter the cone angle did not have a significant effect on the minimum spouting velocity, although in a 61 cm column the spouting velocity for wheat was 10% higher with an 85° cone than with a 45° cone. It was also observed that, for the large column diameter, D_c, the results were better correlated if the exponent of the ratio D_i/D_c was reduced to 0.23 for 45° and 60° cone angles and to 0.13 for 85°. However, Olazar et al. [12] found that D_c should not be used in a correlation to predict u_mj in conical beds, because according to the authors this velocity remains unchanged with variation in D_c as long as the bed remains entirely in the conical section.
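For reference, the correlation discussed above is commonly written as u_ms = (d_p/D_c)(D_i/D_c)^(1/3) [2 g H_0 (ρ_s − ρ_f)/ρ_f]^(1/2). A quick sketch for order-of-magnitude estimates follows; the input values are illustrative, loosely based on this study's geometry, and the air density at 100 °C is an assumption.

```python
import numpy as np

def mathur_gishler_ums(d_p, D_c, D_i, H0, rho_s, rho_f, g=9.81):
    """Classic Mathur-Gishler correlation for the minimum spouting velocity:
    u_ms = (d_p/D_c) * (D_i/D_c)**(1/3) * sqrt(2*g*H0*(rho_s - rho_f)/rho_f).
    All quantities in SI units; intended here only for rough estimates."""
    return (d_p / D_c) * (D_i / D_c) ** (1.0 / 3.0) * np.sqrt(
        2.0 * g * H0 * (rho_s - rho_f) / rho_f)

# Illustrative values (assumptions): 4.38 mm particles, 30 cm column,
# 3 cm inlet, 20.5 cm static bed, polyethylene density, hot air density.
u_ms = mathur_gishler_ums(d_p=4.38e-3, D_c=0.30, D_i=0.03,
                          H0=0.205, rho_s=930.5, rho_f=0.95)
print(f"u_ms ~ {u_ms:.2f} m/s")
```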
Olazar et al. [2] conducted a study to delimit the application ranges of spouting regimes in conical contactors. The experiments were carried out with solids of different characteristics, contactors with different geometries, and wide ranges of gas velocity. The authors found that the particle diameter, d_p, had a great effect, since there are restricted ranges of cone angles and inlet diameters for each value of d_p. This indicates that the subject has to be treated with care. The authors also observed that for high static bed heights the effect of the cone angle on the hydrodynamics was more pronounced. Similar behavior was observed by Wang et al. [10].
Peak Pressure Drop
Figure 8 shows the values of peak pressure drop as a function of base angle obtained for polyethylene particles, different loads of inert material and an inlet air temperature of 100°C. As observed in Figure 8, an increase in the cone angle decreases the peak pressure drop. As presented before, for a given m_p, a decrease in θ increases the static bed height, H_0 (Figure 6). Consequently, there is more resistance to the air flow, and a higher pressure drop is required to break the bed and open the spout [13].
It was also observed that the effect of the cone angle was less pronounced for 1.50 kg of inert particles. This is explained by the fact that only the conical spouted bed was used in this case, and the variation with cone angle was very small. However, for 3.00 and 4.50 kg of inert particles, both conical and conical-cylindrical spouted beds were used, as shown in Table 2. These results show that for higher loads of inert particles the effect of the bed configuration was more pronounced. According to Mathur and Epstein [1] and Moustoufi, Kulah and Koksal [14], the hydrodynamics of conical spouted beds are significantly different from those of conventional spouted beds.
Another aspect studied in this work was the influence of the cone angle on the fluidynamic parameters for a fixed bed height. The literature provides conflicting information on this point. According to Bi [15], the Mukhlenov-Gorshtein [16] correlation predicts that ΔP_max increases with increasing cone angle for a given static bed height. The opposite is predicted by the correlations of Gelperin et al. [17] and Olazar et al. [18]. Wang et al. [10] did not observe a clear effect of the cone angle on ΔP_max. In order to clarify this point, experiments were conducted using a fixed static bed height and distinct cone angles, as shown in Figure 9.
The experimental evidence presented in Figure 9 demonstrates the effect of the cone angle on the peak pressure drop. It was observed that, for a fixed bed height, an increase in θ leads to higher values of ΔP_max. As shown in Table 2, for a fixed static bed height, the load of inert material increases as the cone angle is increased; consequently, the measured values of ΔP_max increase as the cone angle is increased. In the same way, it is observed that the effect of the cone angle was more pronounced between 45° and 60° and between 45° and 75°, whereas from 60° to 75° the ΔP_max variation is very small, since the m_p variation was not large enough to provoke a pronounced increase in ΔP_max. From Figure 9, it can be seen that the intensity of the cone angle effect depends on the bed configuration.
The experimental results obtained in this work showed that the fluidynamic parameters of the spouted bed depend on the bed configuration, the bed geometry and the operating conditions. Thus, a reliable tool to predict the minimum spouting velocity and the peak pressure drop for different bed configurations is very useful and necessary.
Neural Network
A neural network model was designed from the experimental data to predict the minimum spouting velocity and the peak pressure drop of the spouted bed for different cone angles and loads of inert particles. The neural network proposed in this work was trained and evaluated with experimental data; for measured values of u_mj (30.72 m/s) and ΔP_max (983.51 Pa), the network predicted 30.70 m/s and 1027.60 Pa, respectively. It was observed that the network best fitted the measured values of the minimum spouting velocity, since the error provided by the network is close to the experimental error.
Water Evaporation
The information obtained and discussed in the previous sections was essential to initiate the experimental runs using distilled water. Table 3 shows the maximum water evaporation capacity obtained from experimental measurements under stable conditions. The experimental investigation showed that the effect of m p on the maximum water evaporation capacity, Q, was more pronounced than that of θ. As previously mentioned by Almeida et al. [19], the load of inert particles had a strong influence on the maximum allowable feed flow rate. This is because an increase in m p increases the specific area of the bed. Since evaporation occurs on the particle surfaces, an increase in the specific area increases the evaporation rate.
Referring to the effect of cone angle on the maximum water evaporation capacity, it was observed that Q improved by only 5 ml/min as θ varied from 75˚ to 60˚ or from 75˚ to 45˚, for both loads of inert particles evaluated. Rodrigues [20] studied the effect of the cone angle on the maximum capacity for evaporating water. The author observed that the rate of water evaporation per unit of inert solid volume increases as θ varies from 60˚ to 30˚, and also verified that the annulus aeration improves as θ decreases. However, that information cannot be compared directly, since the author used a fixed bed height; the author thus assigned this behavior to the fact that a higher θ implies an increase in m p, which consequently increases Q. It is noted that the literature provides insufficient information about the effect of cone angle on the maximum water evaporation capacity, mainly for the drying of pastes. Most studies analyze the influence of bed geometry on dry beds.
Referring to the effect of water on the fluidynamic behavior, Figure 11 shows the dimensionless bed pressure drop as a function of time for 100˚C, 1.30 u mj, 4.50 kg of inert particles and a cone angle of 60˚.
As Figure 11 shows, the dimensionless bed pressure drop practically did not change for feed flow rates below 50 ml/min (ΔP t /ΔP t=0 close to 1). However, it decreased for feed flow rates above 55 ml/min. Similar results have been reported by Patel et al. [21], Schneider and Bridgwater [22], Almeida et al. [19] and Bitti et al. [23]. According to the abovementioned authors, less air passes through the annulus as the amount of paste fed to the dryer is increased; the main air stream goes through the spout channel, reducing, in this way, the bed pressure drop. Another possible way to understand this phenomenon is that the presence of a liquid phase increases particle agglomeration, slowing down particle motion in the annulus. The same behavior was also observed for the other cone angles studied.
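For reference, the dimensionless pressure drop used in Figures 11 and 12 is simply the instantaneous bed pressure drop normalized by its value before paste injection. A minimal sketch of how such a signal can be computed and screened for spout degradation is given below; the time series and the 10% tolerance band are illustrative assumptions.

```python
# Sketch of the dimensionless pressure drop ratio used in Figures 11-12,
# computed from a logged pressure-drop time series. The series and the
# 10% tolerance are illustrative assumptions, not the authors' values.
import numpy as np

t = np.arange(0, 60, 5)                        # time, min (hypothetical)
dP = np.array([980, 975, 982, 978, 970, 965,
               955, 930, 900, 870, 845, 820])  # bed pressure drop, Pa

ratio = dP / dP[0]                             # dimensionless drop, dP_t / dP_t=0
unstable = ratio < 0.90                        # flag >10% decay from the dry-bed value

for ti, ri, flag in zip(t, ratio, unstable):
    mark = "  <- possible spout degradation" if flag else ""
    print(f"t = {ti:2d} min  dP_t/dP_0 = {ri:.3f}{mark}")
```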
Drying of Skimmed Milk
Based on the experimental results presented in Table 3, the highest Q was obtained for 4.50 kg for all cone angles used. Thus, the drying experiments using skimmed milk were conducted at 4.50 kg, 1.30 u mj, 100˚C and different cone angles. Figures 12(a)-(c) show the dimensionless bed pressure drop as a function of time for cone angles of 45˚, 60˚ and 75˚, respectively.
Although the cone angle did not have a great effect on the maximum water evaporation capacity, Figures 12(a)-(c) show its effect on the dimensionless pressure drop during skimmed milk drying. By comparing Figures 12(a)-(c), it became clear that, for the same operating conditions, the dimensionless pressure drop behaved differently for θ = 60˚: it deviated from the straight line corresponding to ΔP t /ΔP t=0 = 1 for feed flow rates varying from 10 to 35 ml/min, and its values were higher than one for most feed flow rates employed. Similar behavior has been reported by Almeida et al. [19] and Nascimento et al. [7]. Comparing these figures, it was noteworthy that the dimensionless bed pressure drop practically did not change (ΔP t /ΔP t=0 close to 1) for feed flow rates below 45 ml/min for 45˚ and below 40 ml/min for 75˚. However, it decreased for feed flow rates above 45 and 40 ml/min for cone angles of 45˚ and 75˚, respectively. Similar results have been reported using water as the paste by Almeida et al. [19], Patel et al. [21], Schneider and Bridgwater [22] and Bitti et al. [23].
The experimental evidence indicated that the dimensionless pressure drop for skimmed milk behaved similarly to that for distilled water for cone angles of 45˚ and 75˚, as observed by comparing Figure 11, Figure 12(a) and Figure 12(c). It can be said that the bed did not "feel" the presence of the paste. However, it was also observed that the maximum feed flow rate of skimmed milk was smaller than that of water, as shown in Table 4. Similar results have been reported by Almeida et al. [19] and Nascimento et al. [7].
An aspect that must be taken into account is the bed configuration used in this work (conical and conical-cylindrical spouted beds). According to Olazar et al. [12], for conical spouted beds, cone angles higher than 60˚ are not recommended because the solid circulation rate is very low. Elperin et al. [24] reported that a cone angle of 40˚-45˚ was the optimum for maximizing the solids circulation rate. Thus, the stable values of ΔP t /ΔP t=0 for the conical spouted bed with a cone angle of 45˚ can be related to the high circulation of the inert particles, which minimized the effect of paste adhesion, as reported by Nascimento et al. [7]. On the other hand, the lower limit of cone angle for conical beds is 28˚, as the bed is unavoidably unstable at lower angles [12].
The high values of dimensionless pressure drop for the cone angle of 60˚ can be a consequence of the dead zone (zone of stagnant solids) that is formed near the inlet orifice for high values of cone angle [25]-[27]. Moreover, the absence of fat in skimmed milk hinders particle circulation [7]. The dead zone reduces solid circulation and is not desirable for the drying of pastes. However, Figure 12(c) showed that the cone angle of 75˚ was more stable than 60˚ (Figure 12(b)), even though conical-cylindrical spouted beds were used in both cases. According to Mathur and Epstein [1], for conical-cylindrical beds, an increase in cone angle leads to increases in the cross flow rate in the upper region of the bed. Thus, there was more solid circulation for the cone angle of 75˚, which favored the drying of skimmed milk.
The analysis of the three steps (fluidynamics without paste, water evaporation and paste drying) showed that the spouted bed technique is complex and that several factors must be taken into account to understand it. For instance, the cone angle had no great effect on the fluidynamics using water, although a significant influence was observed when a real paste (skimmed milk) was used.
Conclusions
From the results of this study, it was found that the fluidynamic parameters and the maximum water evaporation capacity were influenced by both the cone angle and the load of inert particles in the range of operating conditions analysed. It was also found that the effect of the load of inert material on water evaporation was more pronounced than that of the cone angle. The neural network proved to be a good prediction tool when a data bank is available.
The drying experiments showed that the cone angle had a significant effect on the dimensionless pressure drop. The evidence shows that a conical spouted bed with a cone angle of 45˚ is recommended for drying skimmed milk.
Figure 5
Figure 5 shows the values of minimum spouting velocity as a function of cone angle obtained for polyethylene particles and different loads of inert material. The air temperature was kept at 100˚C. It is seen in Figure 5 that, for the
Figure 4.
Figure 4. Schematic representation of the neural network used to estimate the fluidynamic parameters, where m p is the load of inert material, θ is the cone angle, u mj is the minimum spouting velocity and ∆P max is the peak pressure drop.
Figure 5.
Figure 5. Minimum spouting velocity as a function of cone angle obtained for different loads of inert particles. * c-conical spouted bed, ** cc-conical-cylindrical spouted bed.
Figure 7.
Figure 7. Minimum spouting velocity as a function of cone angle obtained for fixed static bed height. * c-conical spouted bed, ** cc-conical-cylindrical spouted bed.
Figure 8.
Figure 8. Peak pressure drop as a function of cone angle obtained for different loads of inert particles. * c-conical spouted bed, ** cc-conical-cylindrical spouted bed.
Figure 9.
Figure 9. Peak pressure drop as a function of cone angle obtained for fixed static bed height. * c-conical spouted bed, ** cc-conical-cylindrical spouted bed.
Figure 10(a) and Figure 10(b) show the values of the fluidynamic parameters provided by the neural network together with the experimentally obtained data. As observed in Figure 10(a) and Figure 10(b), the predictions of the neural network agree well with the experimental data, since the predicted values lie close to the 45˚ line. The results presented in Figure 10(a) and Figure 10(b) indicate that the ANN is a useful method for predicting the minimum spouting velocity and the peak pressure drop of conical and conical-cylindrical spouted beds.
Figure 10.
Figure 10. Experimental and estimated data for the neural network: (a) u mj; (b) ∆P max.
Figure 11.
Figure 11. Dimensionless pressure drop as a function of time for T = 100˚C, m p = 4.5 kg and θ = 60˚, parametrizing Q.
Figure 12.
Figure 12. Dimensionless pressure drop as a function of time, parametrizing Q sm: (a) cone angle of 45˚; (b) cone angle of 60˚; and (c) cone angle of 75˚.
Table 3.
Maximum allowable water feed flow rates for different cone angles, 1.3 u mj and 100˚C.
Table 4.
Maximum flow rate of distilled water and skimmed milk in spouted bed: 1.30 u mj , 100˚C. | 2018-12-13T10:21:34.895Z | 2015-08-25T00:00:00.000 | {
"year": 2015,
"sha1": "dd908d80fb15490b3e9d9bfa808c95e838bd7ed7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=60199",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "dd908d80fb15490b3e9d9bfa808c95e838bd7ed7",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
261468837 | pes2o/s2orc | v3-fos-license | Criteria for central respiratory chemoreceptors: experimental evidence supporting current candidate cell groups
An interoceptive homeostatic system monitors levels of CO2/H+ and provides a proportionate drive to respiratory control networks that adjust lung ventilation to maintain physiologically appropriate levels of CO2 and rapidly regulate tissue acid-base balance. It has long been suspected that the sensory cells responsible for the major CNS contribution to this so-called respiratory CO2/H+ chemoreception are located in the brainstem—but there is still substantial debate in the field as to which specific cells subserve the sensory function. Indeed, at the present time, several cell types have been championed as potential respiratory chemoreceptors, including neurons and astrocytes. In this review, we advance a set of criteria that are necessary and sufficient for definitive acceptance of any cell type as a respiratory chemoreceptor. We examine the extant evidence supporting consideration of the different putative chemoreceptor candidate cell types in the context of these criteria and also note for each where the criteria have not yet been fulfilled. By enumerating these specific criteria we hope to provide a useful heuristic that can be employed both to evaluate the various existing respiratory chemoreceptor candidates, and also to focus effort on specific experimental tests that can satisfy the remaining requirements for definitive acceptance.
Introduction
The respiratory control system is responsible for homeostatic regulation of blood gases and rapid control of tissue pH, with dedicated sensors to detect the principal regulated variables, O2 and CO2/H+, and drive the appropriate ventilatory responses. It has long been known that O2 sensing is mediated primarily by the carotid bodies, with Corneille Heymans winning the Nobel Prize in 1938 for this discovery; the molecular mechanisms by which carotid glomus cells sense hypoxia remain an area of active investigation (Buckler, 2015; Mokashi et al., 2021; López-Barneo, 2022). It has also been long known that detection of CO2/H+ takes place mainly in the brainstem. However, in this case the cellular identity of the relevant chemosensors has remained elusive, and thus the cellular and molecular mechanisms for CO2/H+ detection have been less clear.
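Throughout this review, CO2 and H+ are treated as a coupled stimulus; the familiar Henderson-Hasselbalch relation for the bicarbonate buffer system makes the coupling explicit (standard textbook values shown, not specific to any study cited here):

$$\mathrm{pH} \;=\; \mathrm{p}K_a \;+\; \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}\right),\qquad \mathrm{p}K_a \approx 6.1,$$

with [HCO3−] in mmol/L and PCO2 in mmHg (0.03 mmol/L/mmHg is the CO2 solubility coefficient). At a typical [HCO3−] of 24 mmol/L and PCO2 of 40 mmHg this gives pH ≈ 7.4, and an acute rise in PCO2 lowers pH nearly log-linearly, which is why hypercapnic acidosis and direct acidification are often used interchangeably as stimuli.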
The hunt for central chemoreceptors has been active for more than a century, at least since the description of the hypercapnic ventilatory reflex (HCVR) by Haldane and Priestley in 1905 (Haldane and Priestley, 1905). Subsequent research pointed to the brainstem as the most likely site for the cells controlling the chemoreflex, and various inventive approaches have been used to examine CO2/H+ sensitivity in various brainstem regions and link the putatively chemosensitive cells in those regions to breathing regulation. Historically, these approaches have included: determining in vivo activation of cells by CO2, often via proxy measures such as Fos expression; identifying CO2/H+ sensitive cells, mostly using various in vitro preparations; measuring effects of focal acidification on breathing in vivo; and examining effects of localized, but relatively non-specific, chemotoxic lesions on respiration and the HCVR (Feldman et al., 2003). Informed by these approaches, some initial evidentiary criteria were enumerated for establishing respiratory chemoreceptor sites, and this yielded support for multiple regions/cell types to be proposed as respiratory chemoreceptors, with each contributing differentially under specific physiological conditions (e.g., during sleep and wake) (Feldman et al., 2003). At the same time, however, appropriate cautions regarding the criteria engendered by those experimental approaches, with their inherent limitations, were already apparent (Feldman et al., 2003). Since then, there have been staggering technological advances that have allowed precise phenotypic characterization and genetic access to distinct cell types, cell-specific manipulation of activity using novel optogenetic and chemogenetic tools, and molecular identification of putative substrates for CO2/H+ detectors. These new approaches obviate some of the earlier limitations and also permit elaboration of a more exacting set of criteria for defining a cell as a respiratory chemoreceptor.
We have proposed the following set of five criteria that can be used to standardize the interpretation of work to identify central chemoreceptors (Guyenet and Bayliss, 2022): 1) activation and inhibition of the candidate cell group have opposite effects on respiration; 2) inhibition of candidate cells blunts the respiratory response to CO2; 3) cell activity in vivo tracks pH or PCO2; 4) CO2/H+ modulation of cell activity is a direct effect, at least in part; and 5) interfering with the specific molecular mechanism(s) by which a cell senses CO2/H+ inhibits the normal hypercapnic ventilatory response (Guyenet and Bayliss, 2022). Conditions 1 and 2 are obviously necessary, but also not sufficient, i.e., it is possible to obtain those effects by acting on component(s) of the respiratory system downstream of the actual "chemoreceptors." Similarly, condition 3 is also necessary but not sufficient, since alterations in cell activity could be solely due to synaptic mechanisms and not reflect an intrinsic CO2/H+ sensitivity that would be required for a true "sensor." Conditions 4 and 5 are the most stringent and address the specific molecular mechanisms by which cells sense and respond to CO2/H+. Condition 5 is the only criterion that is both necessary and sufficient. This review will focus on the primary cell groups that have been put forward as chemoreceptors, evaluate whether the existing evidence satisfies these new criteria, and identify gaps in knowledge that remain to be filled.
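Criterion 2 is usually quantified as a change in the slope of the steady-state ventilatory response to CO2. A minimal sketch of that convention is shown below; the data points are hypothetical placeholders, not values from any study discussed here.

```python
# Sketch of how an HCVR slope can be quantified from steady-state ventilation
# measured at several CO2 levels; the data points are hypothetical, chosen only
# to illustrate the linear-fit convention (slope in mL/min per mmHg).
import numpy as np

pa_co2 = np.array([38.0, 42.0, 46.0, 50.0])      # arterial PCO2, mmHg
v_e    = np.array([120.0, 210.0, 310.0, 405.0])  # minute ventilation, mL/min

slope, intercept = np.polyfit(pa_co2, v_e, 1)
print(f"HCVR slope ~ {slope:.1f} mL/min per mmHg PaCO2")

# A blunted reflex (e.g., after inhibiting a candidate chemoreceptor,
# criterion 2) would appear as a proportional reduction in this slope.
```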
Retrotrapezoid nucleus
The retrotrapezoid nucleus (RTN) was first identified as a group of cells near the ventral surface of the rostral medulla, inferior to the facial motor nucleus and posterior to the trapezoid bodies, that project to the dorsal respiratory group (DRG) and ventral respiratory group (VRG) in the brainstem (Smith et al., 1989; Connelly et al., 1990; Ellenberger and Feldman, 1990). The anatomical location of these RTN neurons coincided well with an acid-sensitive region of the rostral ventral medullary surface first identified in 1963 (Mitchell et al., 1963), prompting an early and prescient speculation that RTN neurons might be the relevant anatomical substrate for these respiratory chemoreceptors (Smith et al., 1989). It is now known that RTN neurons project to various respiratory-related regions, including the preBötzinger complex (preBötC), nucleus of the solitary tract (NTS), Kölliker-Fuse (KF), and the lateral parabrachial nucleus (lPBN). This region also receives diverse chemical inputs from the NTS, the medullary and dorsal raphe nuclei, KF, A5, and the lPBN (Figures 1Ai-Aii) (Rosin et al., 2006; Bochorishvili et al., 2012).
As mentioned, the RTN appellation was originally applied to cells in the parafacial region that project to the DRG and VRG. The RTN name has been used by some groups to reference the parafacial region more generally, including all the various cells located therein. We choose a more restrictive definition, to respect both the initial hodological definition of RTN neurons and to acknowledge the subsequent characterization of those cells based on developmental lineage and molecular phenotype that has allowed further refinement of their key defining features. By this definition, RTN neurons share a common lineage, emerging from the dB2 domain of rhombomere 5 and expressing transcription factors Egr2, Phox2b, Lbx1, and Atoh1 at various times during early development as they differentiate and migrate to their ultimate destination in the rostral ventrolateral medulla (van der Heijden and Zoghbi, 2020). The intersectional combination of Phox2b and Atoh1 expression selectively identifies just two cell groups in the mouse brainstem: the peri-facial (periVII) neurons comprising the RTN, and a second peri-trigeminal (periV) cell population that controls lapping behavior in mice (Huang et al., 2012; Hirsch et al., 2013; Ruffault et al., 2015; Dempsey et al., 2021). Of the transcription factors associated with RTN development, only Phox2b expression persists at appreciable levels in postnatal RTN neurons; however, Phox2b is also found in other neurons, including the nearby C1 adrenergic neurons and facial motoneurons (Stornetta et al., 2006). Additional work using immunochemical and single cell molecular approaches has produced a more precise and limited phenotypic definition for RTN neurons (Figure 1Aiii) (Shi et al., 2017; Cleary et al., 2021). In addition to Phox2b expression, all RTN neurons express Slc17a6 (VGlut2); they can be differentiated from other nearby Phox2b-expressing populations, like C1 neurons and motoneurons, by the absence of tyrosine hydroxylase (TH) and choline acetyltransferase (ChAT) expression (Stornetta et al., 2006; Guyenet et al., 2019). All RTN neurons express the excitatory neuropeptide PACAP (pituitary adenylate cyclase activating peptide), and subsets also express variable levels of the inhibitory neuropeptides enkephalin and galanin, but these are not specific for the RTN (Stornetta et al., 2009; Shi et al., 2017; Cleary et al., 2021). Of particular note, RTN neurons can be most definitively identified in this region of the rostroventrolateral medulla by their unique and universal expression of the neuropeptide, Neuromedin B (NMB) (Stornetta et al., 2006; Shi et al., 2017). NMB-positive RTN neurons express a variety of receptors for other neuromodulators, including serotonin (primarily 5-HT2C), substance P (NK1R), orexin (Hcrt1/Hcrt2), and ATP (P2Y12) (Guyenet et al., 2019). Finally, the majority of RTN neurons (>80%) express transcripts for two putative pH sensors, the proton-activated G-protein coupled receptor GPR4 and the proton-inactivated K2P background K+ channel TASK-2 (encoded by Kcnk5) (Shi et al., 2017); as discussed below, both GPR4 and TASK-2 have been implicated in mediating pH sensitivity of RTN neurons. Coming full circle, the NMB+ cells project to multiple pontine and medullary respiratory regions, including to the DRG and VRG that served as the original defining hodological feature of the RTN (Souza et al., 2023). For these reasons, we now use this constellation of specific features to define these neurons within the parafacial region as the RTN.
Figure 1 legend (displaced fragments): … Panel adapted from (Souza et al., 2023), Figure 7D. (E) Inhibition of Phox2b-expressing RTN neurons with the inhibitory opsin ArchT reduces VE; this effect depends on arterial pH (upper) or PaCO2 (lower) and tracks most closely with pHa. The more pronounced inhibition at lower pHa reflects greater RTN neuron activity. Panel adapted from (Basting et al., 2015), Figures 11C, F. (F) Phox2b-expressing RTN neuron activity tracks inspired CO2 level when measured by extracellular recording in anesthetized rats in vivo; example of a recorded RTN neuron (biotinamide) that was immunopositive for Phox2b. Panel adapted from (Stornetta et al., 2006), Figures 2A, C.
Criteria #1 and 2
Several different methods have been used to obtain activation and inhibition of RTN neurons, and these manipulations in turn activate or inhibit respiration in both conscious and anesthetized animals. Inhibition (acute) or ablation (chronic) of the RTN also blunts or abolishes the HCVR, both in vivo and ex vivo.
The RTN region is crucial for maintaining normal respiration. Acute ablation (via local kainic acid injection or electrolysis) decreases phrenic nerve activity, often to the point of apnea (Nattie et al., 1988; Nattie and Li, 1990), and this non-targeted disruption of the RTN region is also sufficient to abolish the HCVR (Nattie et al., 1988). Selective developmental elimination of the RTN has been achieved using various mouse genetic models (e.g., by Atoh1 deletion in Phox2b cells, inactivation of Phox2b in Atoh1 cells, expression of Phox2b polyalanine expansion or Lbx1 frameshift mutations); this physical deletion of the RTN in turn leads to disrupted baseline breathing in embryos and neonates, and severely blunts CO2-evoked breathing stimulation at birth (Dubreuil et al., 2008; Pagliardini et al., 2008; Marina et al., 2010; Patwari et al., 2010; Ramanantsoa et al., 2011; Ruffault et al., 2015; Hernandez-Miranda et al., 2018). Moreover, selective intersectional deletion of VGlut2 from Phox2b-Atoh1 neurons reduces baseline ventilation and eliminates the HCVR in P0 mouse pups. Likewise, essentially complete ablation of the RTN in adults (~90-95% loss of Nmb+ neurons), either by targeted bilateral injection of saporin-conjugated substance P in rats or viral-mediated Cre-dependent expression of caspase in Nmb-Cre mice, reduces baseline breathing (partially compensated by carotid body input) and nearly completely abolishes the HCVR (Figure 1B) (Souza et al., 2018; Souza et al., 2019; Souza et al., 2023).
Transient activation of RTN neurons via photoactivation of channelrhodopsin 2 (ChR2) expressed in RTN neurons under the control of a Phox2b-responsive promoter (PRSx8) increases minute ventilation (VE) through effects on both tidal volume and frequency, and occludes further activation by CO2. These effects are observed in both conscious and anesthetized animals, and ChR2-mediated increases in VE depend on glutamatergic transmission from the RTN (Figure 1C) (Abbott et al., 2009; Abbott et al., 2011; Basting et al., 2015; Holloway et al., 2015; Souza et al., 2020). Conversely, acute inhibition of Phox2b- or Nmb-expressing neurons in the RTN with the inhibitory opsin, ArchT, transiently decreases VE in room air, and silences CO2-stimulated RTN neuronal activity and VE (Figures 1D, E) (Basting et al., 2015; Souza et al., 2023). Similarly, inhibition of the RTN with an inhibitory GPCR (Drosophila allatostatin receptor) blunts phrenic nerve discharge intensity and frequency at baseline, as well as during an acute hypercapnic challenge, in an ex vivo brainstem-spinal cord preparation (Marina et al., 2010).
Criterion #3
There is good evidence for CO2-evoked activation of RTN neurons in vivo. For example, neurons in the RTN anatomical region, as well as the molecularly defined Phox2b+/NMB+ cells, express high levels of the neuronal activity marker Fos after acute hypercapnic challenge (Sato et al., 1992; Teppema et al., 1994; Okada et al., 2002; Kumar et al., 2015; Shi et al., 2017). Direct electrophysiological assessments by extracellular recordings in anesthetized rats in vivo and in isolated brainstem-spinal cord preparations have identified neurons within the anatomical boundary of the RTN displaying "respiratory modulated" activity at baseline as well as CO2-stimulated activity during hypercapnic challenge (Figure 1F) (Pearce et al., 1989; Connelly et al., 1990; Nattie et al., 1993; Kawai et al., 1996; Mulkey et al., 2004; Guyenet et al., 2005; Stornetta et al., 2006; Marina et al., 2010; Basting et al., 2015). As expected for RTN neurons, the CO2-stimulated cells are Phox2b+, as demonstrated by post hoc immunostaining of the juxtacellularly-labeled recorded neurons (Figure 1F) (Stornetta et al., 2006). The CO2-modulated RTN cell firing activity occurs in the absence of feedback from the central pattern generator, i.e., it initiates at a CO2 threshold lower than required for phrenic nerve activity and persists after carotid body denervation, glutamate receptor blockade, or pharmacologic silencing of the respiratory central pattern generator (Figure 1F) (Mulkey et al., 2004; Guyenet et al., 2005).
It is important to point out that these in vivo electrophysiological recordings were obtained in anesthetized animals, and because anesthetics can exert complex direct and indirect effects on RTN neurons and other respiratory nuclei (Lazarenko et al., 2010a), this leaves open the possibility that the cells might respond differently if recorded in conscious animals. In this respect, indirect measures of RTN neuron function in freely behaving rats are also consistent with CO2-modulated neuronal activity. That is, the ventilatory-depressant effects of ArchT-mediated inhibition of RTN neurons are enhanced under conditions of elevated CO2 or lower arterial pH, implying that RTN neuronal activity and contribution to respiratory drive is similarly enhanced under those conditions (Figure 1E) (Basting et al., 2015). More recent work applying implanted miniscope imaging of neuronal GCaMP6f dynamics in the region containing the RTN demonstrates the presence of neurons in freely behaving mice that track inspired CO2 via graded increases in Ca2+ signal, along with other CO2-insensitive cells. Whereas these experiments represent an advance in visualizing neuronal activity in a deep medullary structure, like the RTN, those specific chemosensitive cells were not directly targeted and the molecular identity of the recorded neurons was not confirmed. Thus, it remains unclear whether the mixed population that was imaged included the chemosensitive RTN neurons in the region (i.e., Phox2b+/Nmb+, with GPR4 and/or TASK-2 expression), and it seems certain that the sampling was diluted by recording from the multiple other neuronal subtypes present in the general parafacial region (Bhandare et al., 2022). Future experiments using this technique will undoubtedly use currently available targeting approaches to sample the behavior of specific phenotypically-defined cell populations. Overall, the available evidence provides strong support for the conclusion that RTN neuronal activity tracks with CO2/H+ in vivo, in both anesthetized and conscious animals, even if direct recordings of that activity in freely behaving animals still remain elusive.
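For reference, the kind of analysis such GCaMP experiments rely on is simple to state: normalize fluorescence to a baseline estimate (ΔF/F) and ask whether the normalized signal covaries with the CO2 stimulus. The sketch below uses synthetic traces and a percentile-based baseline; real pipelines add motion correction, neuropil subtraction, and cell segmentation.

```python
# Minimal sketch of dF/F normalization for a GCaMP trace plus a simple test of
# whether a cell "tracks" inspired CO2. The traces are synthetic placeholders,
# not recorded data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 600, 0.1)                         # 10 min sampled at 10 Hz
co2 = np.where((t > 200) & (t < 400), 8.0, 0.0)    # step of inspired CO2 (%)
f = 100 + 3.0 * co2 + rng.normal(0, 1.0, t.size)   # fluorescence with CO2 gain

f0 = np.percentile(f, 10)                          # baseline from lower decile
dff = (f - f0) / f0                                # dF/F

r = np.corrcoef(dff, co2)[0, 1]
print(f"correlation of dF/F with inspired CO2: r = {r:.2f}")
```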
Criterion #4
RTN neurons are intrinsically sensitive to changes in CO2/H+ across a variety of in vitro preparations, including brainstem-spinal cord preparations, acute or cultured brainstem slices and, importantly, acutely dissociated neurons (Figure 2A) (Mulkey et al., 2006; Mulkey et al., 2007a; Lazarenko et al., 2010b; Hawryluk et al., 2012; Wenker et al., 2012; Wang et al., 2013a; Sobrinho et al., 2014; Hawkins et al., 2015; Kumar et al., 2015; Wu et al., 2019). During early development, a group of CO2/H+ sensitive, Phox2b-expressing neurons in the parafacial region display rhythmic pre- and post-inspiratory firing patterns in brainstem-spinal cord preparations; these have been called the embryonic parafacial oscillator (ePF) or, in the early postnatal period (P0-P2), the parafacial respiratory group (pFRG), and are most likely early precursors to the RTN (Onimaru et al., 2008; Thoby-Brisson et al., 2009; Ruffault et al., 2015). In slightly older neonatal brainstem slice preparations (>P6), RTN neurons are tonically active at physiological pH levels, depolarize and increase action potential firing during bath acidification, and hyperpolarize and decrease firing during bath alkalization. This modulation is observed with changes in fixed acid in HEPES-based buffers and with changes in CO2 in HCO3−-based buffers (Figure 2B); these effects appear to track with changes in extracellular pH since RTN neuron firing is increased by hypercapnic acidosis and reduced by normocapnic alkalosis in CO2/HCO3−-based solutions (Mulkey et al., 2004). The pH sensitivity of RTN neurons is retained in acute slices in the presence of tetrodotoxin (TTX, to block action potential-dependent release) and in low Ca2+/high Mg2+ synaptic blockade solutions (Mulkey et al., 2004). In addition, pH-dependent modulation of RTN neurons is preserved when slices are exposed to a variety of neurotransmitter receptor blockers, e.g., for glutamate (CNQX, APV), GABA (bicuculline), glycine (strychnine), ATP (suramin, reactive blue 2, PPADS, MRS2179), 5-HT (ketanserin, SB269970), and substance P (spantide, L-703606) (Mulkey et al., 2004; Mulkey et al., 2006; Mulkey et al., 2007a). Finally, individual GFP-positive cells dissociated from the parafacial region of two distinct lines of Phox2b-GFP mice, which were verified as bona fide RTN neurons by single cell RT-PCR (i.e., Phox2b+, VGlut2+, TH−, ChAT−), were also found to retain their CO2/H+ sensitivity (Lazarenko et al., 2010b; Wang et al., 2013b; Wu et al., 2019). Together, these data make a compelling case that RTN neurons are intrinsically chemosensitive, and they also suggest a molecular basis for direct modulation of neuronal activity by CO2/H+. However, it should be noted that respiration is exquisitely sensitive to changes in CO2, and the effects of CO2/H+ on RTN firing in vitro appear to be quantitatively less robust than those effects in vivo, even in anesthetized animals (Guyenet et al., 2005). Thus, whereas direct actions of CO2/H+ on RTN excitability seem certain, this does not preclude additional indirect effects by modulators that enhance baseline excitability or convey information regarding CO2/H+ changes that are sensed remotely.
Figure 2 legend (displaced fragments): … Panel adapted from (Wang et al., 2013a), Figure 2E; (Kumar et al., 2015), Figure 2C. (E) Whole body knockout of TASK-2 and/or GPR4 blunts (GPR4−/− or TASK-2−/−) or nearly eliminates (GPR4−/−; TASK-2−/−) the HCVR as measured by whole body plethysmography in conscious mice. Panel adapted from (Guyenet et al., 2019), Figure 6G. (F) Schematic of RTN neuron illustrating mechanisms mediating tonic firing and K+ channel modulation by hypercapnic acidosis. Panel adapted from (Guyenet et al., 2019), Figure 5B.
Multiple neurotransmitters, including those that arise from alternative candidate chemoreceptor cells, are known to affect RTN neuronal excitability and may thereby also modulate the firing response to CO2/H+ (Moreira et al., 2021). These include serotonin and substance P (from raphe neurons) (Mulkey et al., 2007a), orexin (from the lateral hypothalamus) (Lazarenko et al., 2011), and ATP (from local astrocytes) (Mulkey et al., 2006; Gourine et al., 2010; Wenker et al., 2010). In the case of 5-HT and ATP, it has been suggested that these modulators are themselves responsible for conferring an apparent pH sensitivity onto RTN neurons that instead originates from CO2/H+ sensitive raphe neurons and/or astrocytes (Gourine et al., 2010; Wu et al., 2019). However, the evidence for such an obligatory role of 5-HT and ATP is inconclusive. For example, ketanserin (5-HTR2 antagonist) or SB269970 (5-HTR7 antagonist) can block RTN activation by exogenous 5-HT in vitro (Mulkey et al., 2007a; Wu et al., 2019), but these same blockers are reported in different in vitro preparations either to have no effect or to abrogate the CO2/H+ sensitivity of RTN neurons (Mulkey et al., 2007a; Wu et al., 2019). In vivo, direct injection of SB269970 into the RTN of conscious mice blocked respiratory stimulation by a co-injected 5-HT7 agonist but did not alter CO2-stimulated breathing (Shi et al., 2022). Similarly inconsistent results have been obtained with purinergic P2X/Y receptor antagonists (i.e., with suramin, PPADS, MRS2179, reactive blue 2), which have variably been shown to dampen (Gourine et al., 2010; Wenker et al., 2012) or to have no effect on (Mulkey et al., 2004; Mulkey et al., 2006; Mulkey et al., 2007a; Onimaru et al., 2012) CO2/H+-induced RTN neuronal activity.
Collectively, the available data support the overall conclusion that the CO2/H+ sensitivity of RTN neurons is a cell-intrinsic effect, at least in part. This fulfills criterion #4. It also seems certain that input from other presumptive chemoreceptors (raphe, astrocytes), along with additional modulators from various other cell groups (muscarinic, noradrenergic), can enhance baseline activity of RTN neurons and thereby facilitate their response to CO2/H+ (Moreira et al., 2021). It is possible that these combined effects, excitatory neuromodulation superimposed on intrinsic CO2/H+ sensitivity, may account for the difference in CO2/H+ sensitivity that has been observed between in vitro and in vivo recordings, and perhaps also for various physiological changes in CO2/H+ sensitivity (e.g., between sleep and wake states) (Guyenet et al., 2005).
Criterion #5
Under voltage clamp, in the presence of TTX and a cocktail of blockers of fast synaptic transmission, acid-evoked depolarization of the RTN is mediated by inhibition of a pH-dependent background K+ current (Figure 2C). Activation of RTN neurons by CO2/H+, as well as full expression of the HCVR, requires the expression and activity of two pH-sensitive molecules: TASK-2 and GPR4 (Gestreau et al., 2010; Wang et al., 2013a; Kumar et al., 2015).
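The logic of this mechanism is worth spelling out: with high intracellular and low extracellular K+ concentrations, the K+ equilibrium potential sits well below the resting potential, so a background K+ conductance holds the membrane hyperpolarized, and closing it (as acidosis does to TASK-2) necessarily depolarizes the cell. For typical mammalian concentrations (assumed here only for illustration):

$$E_{\mathrm{K}} \;=\; \frac{RT}{zF}\,\ln\!\frac{[\mathrm{K}^+]_o}{[\mathrm{K}^+]_i} \;\approx\; 26.7\,\mathrm{mV}\times\ln\!\left(\frac{3.5}{140}\right) \;\approx\; -98\,\mathrm{mV}\quad(37\,^{\circ}\mathrm{C}),$$

so any reduction in background K+ current shifts the membrane potential away from EK and toward the firing threshold.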
TASK-2 is a background K+ channel expressed in RTN neurons and in a limited number of additional brainstem cell groups (Gestreau et al., 2010). It shows highest sequence similarity to the TWIK-related alkaline-activated (TALK) subgroup of K2P channels, compared to the similarly named, and also pH-sensitive, TASK-1 and TASK-3 channels (Lesage and Barhanin, 2011). Inhibitory gating of TASK-2 occurs through the physiological pH range and is mediated via independent intracellular and extracellular pH sensor domains, each with a pH50 of ~8.0 (Niemeyer et al., 2006; Niemeyer et al., 2007; Niemeyer et al., 2010; Lesage and Barhanin, 2011; Li et al., 2020). Inhibition of TASK-2 by acidification leads to membrane depolarization and increased cell excitability. It has not been directly tested whether changes in internal and/or external pH account for TASK-2-mediated activation of RTN neurons although, as mentioned above, experimental manipulation of CO2 and HCO3− levels in bath solutions suggests a primary role for extracellular pH. Whereas nearly all GFP-expressing RTN neurons with wild-type TASK-2 alleles are pH-sensitive in brain slices from Phox2b-GFP mice (~95%), only 56% of those GFP+ RTN neurons are pH-sensitive in TASK-2-deleted mice; the pH-sensitive background K+ current is reduced in pH-sensitive cells from these TASK-2 global knockout mice, and eliminated in the ~44% of cells that emerged as pH-insensitive after TASK-2 deletion (Wang et al., 2013a). In TASK-2 global knockout mice, the stimulation of breathing by CO2 is strongly reduced (by ~60% at 8% CO2) while baseline respiration is unaffected (Gestreau et al., 2010; Wang et al., 2013a; Kumar et al., 2015). We should note that TASK-2 global knockout mice present with a slight metabolic acidosis (ΔpH: −0.03) (Warth et al., 2004), and it is possible that this could have influenced the HCVR. However, the HCVR is unaffected when a more severe metabolic acidosis is induced chronically in mice by NBCe1 deletion from the kidney (ΔpH: −0.2) (Li et al., 2023), or acutely in human subjects by treatment with carbonic anhydrase inhibitors (ΔpH: −0.1) (Teppema and Dahan, 1999; Teppema et al., 2020).
GPR4 is a proton-sensing GPCR expressed in RTN neurons (Kumar et al., 2015; Hosford et al., 2018); it senses extracellular proton concentration via protonation/deprotonation of multiple histidine residues on its outward-facing surface (Ludwig et al., 2003; Liu et al., 2010; Tobo et al., 2015). Depending on the expression system, it can couple to Gαs- and Gαq-mediated signaling pathways with a pH50 of 7.2-7.6 (Ludwig et al., 2003; Tobo et al., 2007; Liu et al., 2010; Kumar et al., 2015; Tobo et al., 2015; Hosford et al., 2018). In addition to the RTN, GPR4 transcript is also detectable in a limited number of brain nuclei, including the caudal and dorsal raphe nuclei, the lateral septum, and C1, as well as in endothelial cells (Kumar et al., 2015; Shi et al., 2017; Hosford et al., 2018). In the acute slice, treatment with a GPR4 antagonist (Dalton M46; Niemeyer et al., 2010), or whole-body knockout of GPR4, alters the ratio of pH-sensitive to pH-insensitive RTN neurons, with the appearance of a pH-insensitive population that accounts for ~40% of the recorded cells (Figure 2D) (Kumar et al., 2015). The remaining pH-responsive population of RTN neurons are presumably those that have intact TASK-2-mediated pH sensitivity. CO2-dependent activation of RTN neurons in vivo (Fos expression) is also reduced in GPR4 global knockout mice, while activation of caudal raphe neurons (pallidus, obscurus, magnus, and parapyramidal) is unaffected by GPR4 deletion (Kumar et al., 2015). Administration of the GPR4 antagonist NE 52-QQ57 to mice and rats via an intraperitoneal (i.p.) bolus injection (20 mg/kg) blunts the HCVR by a small, but significant, amount in conscious animals (Hosford et al., 2018). The concentration NE 52-QQ57 reaches at the relevant GPR4-expressing populations after systemic administration is unknown, so this inhibition may represent only a small fraction of receptor antagonism in vivo. Localized application of NE 52-QQ57 on the ventral surface of the medulla had no effect on the HCVR in anesthetized animals, but it is not clear whether the compound reached efficacious levels for GPR4 inhibition at the RTN (Hosford et al., 2018). Importantly, genetic elimination of GPR4 reduced the HCVR (by ~60% at 8% CO2), and selective re-expression of GPR4 in the RTN alone restores CO2-induced Fos expression in RTN neurons and rescues the respiratory defects observed in GPR4 global knockout animals (Kumar et al., 2015). This indicates that the expression of GPR4 specifically in RTN neurons may be especially crucial for both RTN neuronal activation and the HCVR. Notably, simultaneous global deletion of both GPR4 and TASK-2 in mice nearly completely abolishes the HCVR (by ~90% in 8% CO2) (Figure 2E) (Kumar et al., 2015), approximating the deficit in HCVR observed with gross ablation of RTN neurons (Souza et al., 2018; Souza et al., 2019; Souza et al., 2023). The effect of RTN-specific deletion of either proton sensor on baseline respiration or the HCVR has not yet been reported.
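To make the division of labor between the two sensors concrete, each can be sketched as a simple sigmoid function of extracellular pH using the published midpoints (TASK-2 pH50 ~8.0, channel open at alkaline pH; GPR4 pH50 ~7.4, receptor active at acidic pH). The Hill-type slope used below is an illustrative assumption, not a measured value, and real gating is more complex.

```python
# Sketch contrasting the two RTN proton sensors as sigmoid functions of
# extracellular pH. Midpoints follow the published pH50 values cited in the
# text; the slope of 1.0 is an illustrative assumption.
import numpy as np

def sigmoid(ph, ph50, slope):
    # fraction in the "alkaline" state; rises toward 1 as pH increases
    return 1.0 / (1.0 + 10.0 ** (slope * (ph50 - ph)))

ph = np.linspace(6.8, 8.2, 8)
task2_open = sigmoid(ph, 8.0, 1.0)         # fraction of TASK-2 current remaining
gpr4_active = 1.0 - sigmoid(ph, 7.4, 1.0)  # fraction of GPR4 protonated/active

for p, k, g in zip(ph, task2_open, gpr4_active):
    print(f"pH {p:.1f}: TASK-2 open ~{k:.2f}, GPR4 active ~{g:.2f}")

# Acidosis thus excites RTN neurons by two convergent routes: background K+
# current is reduced (depolarizing) while GPR4 signaling increases.
```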
It is also worth noting that RTN neurons fire action potentials in a steady pacemaker-like pattern both in vitro and in vivo, when other respiratory-related inputs are eliminated (Mulkey et al., 2004; Guyenet et al., 2005; Lazarenko et al., 2010b). The ionic basis for this tonic firing involves a background Na+ current, carried by NALCN, and a Ca2+-activated cationic current with TRPM4-like properties (Figure 2F) (Shi et al., 2016; Li et al., 2021). These channels contribute to cell excitability, basal activity, and the firing responses to neuromodulators and H+. However, neither is directly responsible for intrinsic CO2/H+ sensing by RTN neurons (Shi et al., 2016; Li et al., 2021). Nevertheless, the HCVR is significantly blunted in vivo after either shRNA-mediated knockdown of NALCN or pharmacological inhibition of TRPM4 in the RTN (Shi et al., 2016; Li et al., 2021). These examples provide a cautionary note: they illustrate how cellular and molecular manipulations that affect general cell function and excitability can modulate the HCVR, even when the targets are not responsible for intrinsic CO2/H+ sensitivity (i.e., when they are not "sensors").
Summary
For the RTN, there is compelling evidence, albeit not yet complete, supporting each of the enumerated criteria. It is clear that positively and negatively modulating RTN neuron activity in vivo has the expected effects of facilitating and inhibiting respiration; in addition, RTN neuron activation can occlude effects of CO2 on breathing, and inhibiting/ablating RTN neurons blunts/eliminates the HCVR. Single unit recordings in anesthetized rats clearly demonstrate CO2 modulation of RTN neuronal activity in vivo, but only indirect evidence based on Fos expression or implied from effects of ArchT inhibition has been obtained from conscious animals. This particular criterion would be better supported by recordings of activity specifically from RTN neurons in freely behaving animals (e.g., GCaMP for either photometry or single cell imaging of Nmb+ cells). It is also clear that RTN neurons are intrinsically sensitive to CO2/H+, and that this intrinsic sensitivity is imparted by expression of both TASK-2 and GPR4 (Figure 2F). It will be important to understand why intrinsic CO2/H+-dependent activation, as measured in vitro, is less robust than that observed in vivo. If this reflects additional neuromodulatory effects in vivo from other CO2/H+-sensitive neuron populations, a better delineation of the relative contributions to overall CO2/H+-stimulated RTN activity would be helpful. Finally, it remains a formal possibility that TASK-2 and GPR4 are only necessary to maintain excitability of RTN neurons and that pH sensitivity is conferred to those cells through other inputs. This proposition seems unlikely, as there are no deficits in baseline respiration of either GPR4 or TASK-2 knockout animals like those that occur with silencing of the RTN via chemo/optogenetic means, indicating that RTN activity remained at or above the threshold necessary to provide baseline respiratory drive in both single and double knockout animals. Importantly, the amino acid determinants of intrinsic pH sensitivity are known for both GPR4 and TASK-2, and so it should be possible to generate genetic models to test whether selective elimination of pH sensitivity, per se, is sufficient to recapitulate the observed respiratory effects of the cognate gene knockouts.
Serotonergic raphe
The brainstem raphe nuclei include the dorsal raphe (DR), median raphe (MnR), raphe magnus (RMg), raphe pallidus (RPa), raphe obscurus (ROb), and the parapyramidal (PPy) cell groups; they contain all the serotonergic neurons in the CNS, along with other non-serotonergic neurons. Among the serotonergic raphe neurons there is a wide diversity of neuronal subtypes, as defined by both developmental origin and molecular phenotype (Figure 3A). Elegant recent work has meticulously catalogued these serotonergic neuron subtypes while also providing a range of intersectional genetic tools that have begun to find use in probing differential functions of raphe neurons (Brust et al., 2014; Hennessy et al., 2017; Okaty et al., 2019; Okaty et al., 2020; Senft et al., 2021).
Criteria 1 and 2
The specific contribution of serotonergic raphe neurons to respiration and the HCVR has been examined by gross ablation approaches and by using chemogenetic or optogenetic tools. Mice with developmental depletion of serotonergic neurons exhibit a blunted HCVR and impaired CO2-induced arousal, along with an inability to regulate body temperature during a thermal challenge (Hodges et al., 2008; Hodges et al., 2009; Smith et al., 2018). Acute chemotoxic ablation of SERT+ or NK1R+ neurons in the midline raphe causes a decrease in basal respiration as well as a blunted HCVR, mainly through decreases in tidal volume (VT) (Nattie et al., 2004). More recently, cell-specific chemogenetic or optogenetic approaches have assessed the effect of acute raphe activation or inhibition on respiration at rest and during a hypoxic or hypercapnic challenge. For example, after viral-mediated Cre-dependent expression of ChR2 in Epet-Cre mice, optogenetic activation of ROb neurons increased the frequency and amplitude of diaphragmatic EMG in anesthetized mice (Figure 3B); in the presence of CO2, the respiratory stimulation by ChR2 activation of ROb neurons was proportional to the CO2-elevated baseline respiratory output (DePuy et al., 2011). Conversely, inhibition of the SERT-Pet1, Egr2-Pet1 and Tac1-Pet1 subpopulations of neurons via intersectional expression of the hM4Di inhibitory DREADD and subsequent CNO administration blunted the HCVR without any effect on baseline respiration (Figure 3C) (Ray et al., 2011; Brust et al., 2014; Hennessy et al., 2017). These data provide strong support for the conclusion that serotonergic raphe neuronal activity can modulate respiratory output and the HCVR, satisfying the first two criteria.
Criterion 3
There is evidence for activation of serotonergic raphe neurons by CO2 in vivo, but it remains somewhat ambiguous. Support for CO2-induced raphe neuron activation has been obtained by using indirect proxy measures, such as changes in 5-HT levels or Fos expression after in vivo exposure to CO2 (Sato et al., 1992; Larnicol et al., 1994; Johnson et al., 2005; Kanamaru et al., 2007; Iceman et al., 2013; Kumar et al., 2015). However, direct measures of CO2 effects on raphe firing activity in vivo have yielded inconsistent results, and opposing conclusions. For example, early extracellular recordings in unanesthetized cats supported the idea that the activity of raphe neurons can be modulated by CO2 in vivo (Veasey et al., 1995; Veasey et al., 1997). Veasey, Jacobs and others found that a subset of recorded neurons from both the dorsal (8/36) and caudal (6/27) raphe displayed firing activity that tracked with inspired CO2; moreover, firing activity in CO2 correlated impressively with increased VE (Figure 3Di) (Veasey et al., 1995; Veasey et al., 1997). Notably, the CO2 sensitivity of caudal raphe neurons was state-dependent (absent in slow wave sleep), even though that is a period when respiration is strongly dependent on chemoreceptor input. In addition, the activity of all CO2-sensitive caudal raphe neurons increased during motor activity (treadmill locomotion), consistent with a general role in motor function. Although the neurochemical phenotype of these recorded neurons was not definitively established, the physiological, pharmacological, and functional characteristics, together with their anatomical location, indicate that they were likely serotonergic. Overall, these early data suggested that only a subset of serotonergic raphe neurons is CO2 sensitive, and they are thus consistent with the recent recognition of multiple genetically, developmentally, and functionally diverse subgroups of serotonergic raphe neurons in mice (Okaty et al., 2019; Okaty et al., 2020).
In subsequent work, direct electrophysiological recordings of CO2-stimulated activity in rodent raphe neurons in vivo have been sought, but not yet obtained. For example, in halothane- or isoflurane-anesthetized rats (n = 37 cells, N = 4 rats) and mice (n = 20, N = 4), neurons recorded in medullary raphe nuclei (ROb, RPa, PPy) were generally insensitive to increases in inspired CO2 (Figure 3Dii) (Mulkey et al., 2004; DePuy et al., 2011). These neurons showed functional characteristics expected of serotonergic neurons, and this was further verified either by juxtacellular labeling and post hoc tryptophan hydroxylase (TPH) immunostaining in rats (Mulkey et al., 2004) or by opto-tagging following ChR2 expression in ePet-cre mice (DePuy et al., 2011). It is notable that, despite their CO2 insensitivity, optogenetic activation of spontaneously active, ChR2-expressing serotonergic raphe neurons was able to stimulate respiratory output in a manner proportional to the effects of CO2. As mentioned, raphe neurons are functionally diverse, and it is possible that the neurons sampled in these experiments did not include the subset of CO2-sensitive cells that were identified, perhaps fortuitously, in the cat raphe. It is also possible that in vivo CO2 sensitivity of raphe neurons is affected by anesthesia. This possibility is supported by experiments using an in situ unanesthetized decerebrate rat brainstem preparation in which CO2-sensitive serotonergic raphe neurons were identified, and for which subsequent exposure to isoflurane caused membrane hyperpolarization and inhibited spontaneous and CO2-evoked firing (Massey et al., 2015). Although it was suggested that raphe neuron inhibition by anesthesia could be due to anesthetic activation of pH-sensitive TASK-1/TASK-3 channels, which are enriched in serotonergic raphe neurons, it is important to point out that proton-mediated inhibition of those TASK channels is maintained, and actually enhanced, in the presence of anesthetics (Figure 4A) (Hirshman et al., 1977; Patel et al., 1999; Sirois et al., 2000; Talley et al., 2001; Massey et al., 2015). Thus, any CO2/H+ sensitivity attributable to those TASK channels would be retained even in the presence of halothane or isoflurane.
At this point, support for CO2 sensitivity of serotonergic raphe neuron activity in vivo remains equivocal; further in vivo recordings, directed specifically toward the subgroup of putative chemosensitive raphe neurons (i.e., Egr2-Pet1 cells) and perhaps incorporating GCaMP-enabled fiber photometry or cell imaging, would be particularly helpful in resolving the question of the chemosensitivity of serotonergic raphe neurons in freely behaving animals in vivo. The latter was recently attempted; miniscope recordings of GCaMP6s-expressing serotonergic neurons in RMg and RPa of conscious mice uncovered multiple types of CO2-dependent responses (e.g., transient activation, inhibition), with a graded response to CO2 observed in some cells (8/26) (Bhandare et al., 2022).
Criteria 4 and 5
Despite the ambiguous results from the in vivo studies described above, it is abundantly clear from extensive experiments that medullary raphe neurons are directly activated by CO2/H+ in vitro; this has been repeatedly demonstrated in the acute slice, in slice culture and, importantly, under conditions of fast synaptic blockade and/or in dissociated neurons where indirect activation is precluded (Richerson, 1995; Wang et al., 1998; Richerson et al., 2001; Wang et al., 2002; Severson et al., 2003; Richerson et al., 2005; Brust et al., 2014; Massey et al., 2015). The CO2/H+ sensitivity is found preferentially in the serotonergic (TPH+/Pet1+) population of raphe cells recorded in vitro, and recent work indicates that this property appears to be specific to the Egr2-Pet1-expressing subset of serotonergic neurons (Figure 3E) (Brust et al., 2014).
The molecular substrate(s) for modulation of serotonergic raphe neuron activity by CO2/H+ have not been definitively identified, and thus criterion #5 has not been conclusively tested for these putative respiratory chemoreceptors. Nevertheless, some candidates merit discussion. The anesthetic-activated and pH-sensitive background K+ channels TASK-1 and TASK-3 are expressed at high levels in serotonergic raphe neurons throughout postnatal development (Figure 4B), and a corresponding anesthetic- and pH-sensitive background current was observed in recordings from serotonergic neurons of the dorsal and caudal raphe in vitro (Talley et al., 2001; Washburn et al., 2002); importantly, this current is eliminated in TASK-1/TASK-3 double knockout mice (Figure 4C), as is the effect of pH on firing in caudal raphe neurons (Mulkey et al., 2007b). Notably, global knockout of these TASK channels has no effect on the HCVR (Figure 4D) (Mulkey et al., 2007b; Trapp et al., 2008). So, if these channels are responsible for the pH sensitivity of these cells, then this result is not consistent with a role for serotonergic raphe neurons in the HCVR. Some caveats are worth noting. The widespread expression of TASK-1/TASK-3 in all serotonergic raphe neuron cell groups (~80%) does not align with the proposed selective CO2 sensitivity limited to only the Egr2-Pet1-expressing subset of neurons in the caudal raphe. In addition, the recordings of TASK channel-dependent pH sensitivity in raphe neurons were obtained in slices from neonatal knockout mice, and it is possible that some alternative molecular proton detector mediates CO2 sensitivity in adult animals (despite the continued TASK expression) and/or compensates for loss of TASK channels in knockout mice. One potential alternative is GPR4, one of the proton detectors in RTN neurons that is also expressed in serotonergic raphe neurons (Kumar et al., 2015; Hosford et al., 2018). However, there is no functional evidence for a GPR4 contribution to pH modulation of raphe neuron excitability in vitro, GPR4 expression is not necessary for CO2-induced raphe activation in vivo (as assessed by Fos expression), and the inhibition of the HCVR in GPR4-deleted mice is fully rescued by GPR4 re-expression limited only to the RTN (Kumar et al., 2015). It is also worth mentioning that the molecular determinants for pH sensitivity of both TASK-1/TASK-3 channels and GPR4 are localized to extracellular domains, whereas pH modulation of raphe neuron firing is thought to involve changes in intracellular pH (Wang et al., 2002), leaving open the possibility of another, still unidentified proton detector in raphe neurons. In this regard, it was suggested that the pH-dependent current and depolarization in medullary raphe neurons may be due to a novel pH-sensitive and Ca2+-dependent nonselective cation channel (Richerson, 1995; Wang et al., 2002; Mulkey et al., 2007a; Trapp et al., 2008). However, no molecular candidate has yet been revealed that fits those still preliminary and unpublished electrophysiological observations.
Summary
There is substantial evidence supporting a role for medullary serotonergic raphe neurons in driving respiratory output and supporting the HCVR, and this now appears to be a function selectively of both the CO2/H+-sensitive Egr2-Pet1 and the presumably CO2/H+-insensitive Tac1-Pet1 subtypes of serotonergic neurons in ROb and RPa (Brust et al., 2014; Hennessy et al., 2017). There is no consensus yet as to whether these cells are activated by CO2 in vivo, but there is strong support for direct activation of serotonergic neurons by CO2/H+ in reduced preparations; this is true of dorsal and caudal raphe neurons but, within the caudal raphe, it appears that this may be a property selective for the Egr2-Pet1-expressing subset of serotonergic neurons in RPa (Brust et al., 2014). Future experiments should take advantage of the intersectional genetic approaches now available to record specifically from this subset of neurons in vivo. Among proposed molecular substrates for the intrinsic CO2/H+ sensitivity of raphe neurons, the clearest evidence supports a role for TASK-1/TASK-3 channels, at least in vitro. Notably, even as genetic deletion of those TASK channels in mice eliminated pH sensitivity in neonatal raphe neurons, loss of TASK channels had no effect on the HCVR in adult mice, a finding inconsistent with a role for serotonergic neurons as respiratory chemoreceptors. However, those data cannot rule out a compensatory mechanism in the knockout mice, or an alternative mechanism in adult mice. As yet, no alternative molecular CO2/H+ sensor has been identified. The ability to use intersectional approaches to selectively mark the putative subgroup of chemosensitive medullary serotonergic raphe neurons raises the possibility of a differential transcriptomic analysis that may uncover further candidate sensor molecules to examine in the context of addressing the crucial criterion #5 for these cells (Okaty et al., 2015).
Medullary astrocytes
Astrocytes were historically considered to provide a simple supporting role for neuronal function, but work over the last several decades has made it abundantly clear that they are much more intimately involved in shaping neural activity. Astrocytes respond to various physicochemical factors and neuromodulators via intracellular calcium signaling and, in turn, they can regulate the activity of neighboring astrocytes and neurons by secreting various gliotransmitters (e.g., ATP). In the context of respiratory control by CO2, the possibility that astrocytes could serve as respiratory chemoreceptors initially derived from the observations that ATP release is stimulated by CO2/H+ in brainstem regions conventionally associated with chemoreception (Gourine et al., 2005); that ATP can drive respiration via actions on P2 purinergic receptors (Gourine et al., 2003); and that inhibition of P2 receptors can blunt the effect of CO2 on respiratory output (Mulkey et al., 2006; Wenker et al., 2012; Sobrinho et al., 2014; Barna et al., 2016). In addition to metabolic support and direct gliotransmitter actions on nearby brain cells, astrocytes are also well poised to modulate local blood flow during metabolic/respiratory challenge via the close apposition of their end feet to CNS vessels (Gourine et al., 2010; Wenker et al., 2010; Hawkins et al., 2017). In brainstem chemoreceptor regions of the ventral medullary surface (VMS), where CO2-dependent astrocytic ATP release has been measured, hypercapnia causes a P2-dependent vasoconstriction that decreases the rate of washout of CO2 and allows for further activation of chemosensitive neurons and astrocytes (Kasymov et al., 2013; Mishra et al., 2016; Hawkins et al., 2017; Cleary et al., 2020; Marina et al., 2020; Wenzel et al., 2020; Hosford et al., 2022). Together, these observations suggest the presence of distinct populations of astrocytes in different brainstem regions and multiple mechanisms by which the CO2/H+ sensitivity of astrocytes could translate into enhanced respiratory output. For the purposes of this review, we will discuss these different astrocytic populations and/or cellular mechanisms, noting the anatomical regions in which the functional characterization has been performed.
Criteria 1 and 2
Studies of the effect of exogenous activation of astrocytes have focused on the astrocytes in the region near the RTN and/or the preBötC.Parapyramidal astrocytes have not been specifically targeted for exogenous activation or inhibition experiments.For a number of experiments, spatial delineation was not fine enough to enable distinction between different astrocyte populations.
Optogenetic ChR2-mediated stimulation of ventral medullary astrocytes in an ex vivo preparation (likely containing the RTN area) drives astrocytic calcium transients (Gourine et al., 2010); in brain slice culture, ChR2 stimulation of the same astrocytes leads to depolarization and increased firing of nearby RTN neurons (identified via expression of Phox2b) (Gourine et al., 2010).Additionally, ChR2 activation of VMS astrocytes in anesthetized rat increases phrenic discharge and can drive phrenic activity during hypocapnic apnea (Gourine et al., 2010).The induction of RTN firing as well as phrenic discharge resulting from ChR2 excitation of VMS astrocytes is blocked by MRS2179, a P2Y receptor antagonist (Figure 5B) (Gourine et al., 2010).It is important to note the caveat that ChR2 stimulation of astrocytes can lead to elevations in extracellular K + , which may contribute to depolarization and firing of RTN neurons, although direct K +dependent neuronal excitation would not be sensitive to P2Y receptor blockers.In addition, depolarization of RTN-adjacent astrocytes with fluorocitrate, a putatively astrocyte selective metabolic disrupter, can increase the firing frequency of individual RTN neurons in slices, and increase phrenic discharge and respiratory frequency during CO 2 exposure in anesthetized animals (Erlichman et al., 1998;Holleran et al., 2001;Erlichman and Leiter, 2010;Wenker et al., 2010;Sobrinho et al., 2017).There have been no experiments reported on the effects of acute, transient inhibition of RTN area astrocytes on basal respiration or the HCVR.
Chemogenetic activation of preBötC astrocytes with the excitatory Gq-coupled DREADD increases the respiratory rate in conscious animals in an ATP-dependent manner (Figure 5A) (Sheikhbahaei et al., 2018).A variety of molecular approaches were developed and applied to disrupt astroglial signaling in the preBötC; these include viral-mediated expression in astrocytes of proteins that block vesicular release mechanisms (i.e., dominant-negative SNARE, tetanus toxin subunit) or ATP signaling (i.e., ectonucleotidase) (Sheikhbahaei et al., 2018).Notably, these manipulations of preBötC astrocytes interfere with respiratory responses to multiple stimuli (hypoxia, hypercapnia, exercise), consistent with a general contribution to maintaining respiratory network function.These experiments have not been repeated for the specialized subset of CO 2 /H + -activated VMS astrocytes that are proposed to mediate respiratory chemosensitivity.
Criterion 3
Astrocyte activation is typically assayed using fluorescent probes that assess increases in intracellular calcium (e.g., GCaMP, Ca2+-sensitive dyes). Ca2+ imaging of "ventral surface astrocytes" in vivo in the anesthetized rat and ex vivo in the acute horizontal slice during an acute pH challenge (HEPES 7.45 → 7.25) reveals a marked increase in astrocytic Ca2+ throughout the ventral surface of the brainstem, regions including the RTN and PPy groups of astrocytes (Figure 5D) (Gourine et al., 2010). ATP is proposed to be a primary mediator for astrocytic control of breathing, and ATP levels indeed increase at two locations on the VMS in anesthetized rats exposed to CO2, albeit by different magnitudes (Figure 5E). The level of ATP released from the area around the facial motor nucleus, containing the RTN, in response to a hypercapnic challenge is significantly lower than that from the parapyramidal area just caudal to the hypoglossal nerve root, possibly reflecting different magnitudes or mechanisms of CO2/H+ sensitivity (Gourine et al., 2005; Huckstepp et al., 2010a). Neither Ca2+ dynamics nor ATP release from preBötC astrocytes during different respiratory challenges have been reported. Note that ATP release is a less satisfying surrogate of astrocyte activation since ATP can also be released from neurons and microglia, and the ATP sensors employed do not provide the spatial/temporal resolution available with Ca2+ imaging techniques.
A demonstration of astrocytic response to CO 2 /H + in the RTN, cPPy, and/or preBötC regions containing the proposed astrocyte chemoreceptors of the intact, unanesthetized animal, perhaps by measuring Ca 2+ activity (with GCaMP) or even ATP release with cellular sensors (iATPSnFR, GRAB ATP ), will be required to better satisfy criterion 3.
Criteria 4 and 5
Multiple mechanisms for activation of distinct populations of VMS astrocytes by CO2/H+ have been proposed in the context of respiratory chemosensitivity. Astrocytes near the RTN depolarize in response to both low external pH and high CO2 exposure (Figure 5Fi, ii) (Wenker et al., 2010), with these VMS astrocytes broadly displaying an increase in intracellular calcium and sodium (Gourine et al., 2010; Turovsky et al., 2016). The proposed molecular bases for these astrocytic CO2/H+ responses include: a) direct activation of connexin 26 (Cx26) by molecular CO2 in astrocytes of the parapyramidal region; b) activation of a Na+/HCO3− cotransporter (NBCe1) by CO2-mediated intracellular acidification of preBötC and RTN astrocytes; and c) direct inhibition of Kir4.1/5.1 by intracellular H+, leading to depolarization of astrocytes adjacent to the RTN (Figure 6A).
In these cases, the physiological coupling from astrocytes to nearby respiratory-related neurons is thought to occur via paracrine purinergic signaling following CO 2 /H + -stimulated ATP release and/ or by altering the local CO 2 /H + concentration around respiratory chemosensory neurons.These two general effects are not mutually exclusive and can theoretically occur in parallel.For example, released ATP may directly activate respiratory-related neurons while also provoking a local vasoconstriction to reduce washout of metabolic byproducts (e.g., CO 2 ); likewise, HCO 3 − uptake due to activation of NBCe1 by intracellular acidification or depolarization can remove buffering equivalents from the extracellular space and further accentuate acidification of the extracellular space.Note that VMS astrocytes are unique compared to other CNS populations in that they induce vasoconstriction, opposed to dilation, of nearby vessels during hypercapnia, likely via a P2Y2-dependent mechanism (Figure 6B) (Kasymov et al., 2013;Mishra et al., 2016;Hawkins et al., 2017;Cleary et al., 2020;Marina et al., 2020;Wenzel et al., 2020;Hosford et al., 2022).Here, we describe these three proposed molecular CO 2 /H + sensors, the associated physiological coupling mechanisms, and outline the brainstem regions where they have been examined for a role in respiratory chemosensitivity in vivo.
Activation of Cx26 in parapyramidal astrocytes
Connexins can form hemichannels capable of mediating release of the gliotransmitter ATP from astrocytes; together with its breakdown product ADP, these signaling molecules are proposed to act on nearby VMS respiratory neurons and vasculature to control the chemoreflex.Pharmacological inhibition of connexins decreases the level of ATP released on the VMS during a hypercapnic event in the anesthetized and ventilated rat (Figure 6C).The same inhibition also blunts the effect of 10% CO 2 on respiration but does not affect baseline respiration in anesthetized/ventilated rats (Huckstepp et al., 2010a).Based on its expression pattern, Cx26 was proposed to be the most likely connexin conduit for ATP release from VMS/preBötC astrocytes (Solomon et al., 2001a;Solomon et al., 2001b).Subsequently, heterologous Cx26 expression was found to confer CO 2 sensitivity to nonchemosensitive cells, and Cx26 was shown to be gated by molecular CO 2 (not H + ) via a lysine carbamylation event (Huckstepp et al., 2010a;Huckstepp et al., 2010b;Meigh et al., 2013;Dospinescu et al., 2019).Development of a dominant negative Cx26 (dnCx26) allowed for lentiviral-based manipulation of Cx26 carbamylation in subsets of VMS astrocytes at various levels along the respiratory column to assess effects on respiration (van de Wiel et al., 2020).It should be recognized that these experiments represent a test of the final and most stringent criterion we have outlined-i.e., examining effects on the HCVR of disrupting a molecular CO 2 -sensing mechanism in the relevant cells.In fact, expression of dnCx26 in the caudal parapyramidal area (cPPy) reduced the V T component of the HCVR at 6% CO 2 and was noted only at the level of the cPPy, not rostrally near the RTN or more caudal to the cPPy (van de Wiel et al., 2020).However, respiratory effects were limited only to changes in V T , were observed at a single intermediate CO 2 concentration (not at 3% or 9% CO 2 ) and did not persist for all timepoints tested.The effects of dnCx26 expression on CO 2 -evoked ATP release in the cPPy were not reported (Meigh et al., 2013).
Activation of NBCe1 in the RTN and/or preBötC areas
Astrocytes express high levels of NBCe1, an electrogenic Na+-HCO3− cotransporter that buffers acid-base changes associated with high levels of neuronal activity. In this proposed model of medullary astrocyte activation (Turovsky et al., 2016), elevated CO2 leads to an increase in intracellular [H+], which drives import of HCO3− through NBCe1 to buffer changes in intracellular pH. The concomitant increase in intracellular Na+ leads to the reversal of a Na+-Ca2+ exchanger (NCX), providing the Ca2+ uptake required for vesicular release of ATP. Aside from initiating ATP release, the uptake of HCO3− can remove buffering equivalents from the extracellular space, potentially exacerbating local acidification. Consistent with this mechanism, the CO2-dependent Ca2+/Na+ signal in astrocytes is completely blocked in vitro by the NBCe1 inhibitor S0859 and partially blocked by inhibition of NCX (Figure 6Di, ii) (Turovsky et al., 2016). In addition, deletion of NBCe1 from medullary astrocytes decreases the frequency of acid-induced Ca2+ transients (Figure 6E) (Turovsky et al., 2016). Although this cellular mechanism is well documented in vitro, recent work from NBCe1 knockout mice has failed to support a role for astrocytic NBCe1 in the HCVR. That is, no significant difference in the HCVR was observed in multiple conditional knockout models in which recombination of floxed NBCe1 alleles was achieved in astrocytes by using GFAP-Cre, Aldh1l1-CreERT2, and GLAST-CreERT2 mouse lines (Figure 6G) (Hosford et al., 2022; Li et al., 2023). In the GFAP-Cre line, consistent with GFAP expression patterns, NBCe1 deletion was particularly prominent in astrocytes along the ventral medullary surface, including near the RTN (Hosford et al., 2022); the more widespread NBCe1 deletion obtained with the two tamoxifen-inducible CreERT2 lines was achieved in adults, avoiding potential issues with developmental compensation (Hosford et al., 2022; Li et al., 2023). Although these data suggest that astrocytic NBCe1 expression is not required for the HCVR, it remains possible that elimination of NBCe1 was incomplete in these models and/or spared some select population of astrocytes that are involved in chemosensation.
Inhibition of K ir 4.1/5.1 channels in RTN area astrocytes
Astrocytes display inwardly rectifying K+ currents that have been attributed to Kir4.1/5.1 heteromeric channels; these channels are directly inhibited by a decrease in intracellular pH, such as occurs during a hypercapnic challenge, in medullary astrocytes (Figure 6F) (Tanemoto et al., 2000; Xu et al., 2000; Pessia et al., 2001; Patterson et al., 2021; Zhang and Guo, 2023). Inhibition of the K+ channel causes astrocytic depolarization, promoting NBCe1-mediated Na+ and HCO3− uptake due to the electrogenic nature of the transporter (Wenker et al., 2010; Mulkey and Wenker, 2011). As described above, the associated Na+ influx and reversal of NCX can provide the increased intracellular Ca2+ import for vesicular release of ATP. In support of this model, preliminary observations suggested a reduced VT response to CO2 in astrocyte-specific Kir4.1 knockout mice (Hawkins et al., 2014), and whole-body knockout of the gene coding for Kir5.1 leads to profound metabolic acidosis and blunting of the HCVR and HVR (Figure 6Hi, ii) (Trapp et al., 2011; Puissant et al., 2019). The blunted chemoreflex in Kir5.1 knockout mice could reflect chemoreflex desensitization due to sustained metabolic acidosis rather than any specific effect of Kir5.1 deletion on chemosensing by astrocytes (Trapp et al., 2011). Moreover, as noted below, there is also evidence for expression of Kir5.1 in LC neurons, which have been separately implicated in respiratory chemosensitivity.
Summary
There is abundant evidence from ex vivo and in vivo preparations showing that increases in CO 2 /H + can drive calcium signaling in astrocytes and provoke release of ATP in multiple regions associated with respiratory chemosensitivity, at least in part from astrocytes.A demonstration of astrocytic activation by CO 2 has not yet been realized in unanesthetized animals.It has also been demonstrated that ChR2 activation of VMS astrocytes can activate RTN chemosensitive neurons and stimulate breathing, likely via P2Y (and possibly P2X) receptors.Notably, there is disagreement over the necessity for purinergic stimulation in CO 2 /H + activation of RTN neurons, but it seems likely that engagement of P2 receptors plays some role.The approaches used to inhibit astrocyte signaling in the preBötC support a contribution to the HCVR but those same manipulations also affect respiratory stimulation by a number of other stimuli.Moreover, they have not yet been applied in the ventral medullary regions where astrocytes were proposed to regulate CO 2 /H + sensitivity via nearby RTN neurons (Gourine et al., 2010).Although this is consistent with broadly distributed respiratory chemosensory function, it is also possible that this reflects a relatively non-specific support of neuronal activity.Indeed, interpretations of experiments disrupting astrocyte activity in terms of specific chemosensory functions are complicated by the baseline functions of astrocytes in regulating K + buffering, neurotransmitter recycling/release, synaptic function, local blood flow, etc., and by the fact that modulation of the HCVR by astrocytes must ultimately be channeled through neurons within respiratory circuits.Finally, there is evidence for at least three different molecular mechanisms for CO 2 and/or H + sensing by distinct populations of astrocytes throughout the medulla.It is unknown whether these groups of medullary astrocytes differ in their reliance on any particular sensory mechanism.To date, in vivo tests of each mechanism in the context of the HCVR have yielded results that are either inconclusive (e.g., K ir 4.1/5.1 inhibition) or do not support a necessary role (Cx26 carbamylation, NBCe1 activation).For K ir channels, studies that eliminate their function specifically in astrocytes would be helpful.It is also possible that these mechanisms are redundant during hypercapnia in vivo, and that simultaneous inhibition of more than one mechanism is necessary to uncover some more prominent role.Finally, alternative molecular mechanisms for proton sensing by astrocytes may yet be uncovered.
Locus coeruleus
The locus coeruleus (LC) is a brainstem structure located in the rostral pons, lateral and ventral to the fourth ventricle; it comprises ~3,000 noradrenergic neurons in mouse or rat, providing the primary noradrenergic innervation throughout the central nervous system (Loizou, 1969; Schwarz and Luo, 2015; Liu et al., 2021; McKinney et al., 2023). Its activity is tightly correlated to arousal levels and stress. Neurons within the LC are electrically coupled via gap junctions, particularly in their dendritic processes, and they exhibit a steady pacemaker-like baseline firing pattern with a pronounced subthreshold oscillation (Ishimatsu and Williams, 1996; Oyamada et al., 1999; Ballantyne et al., 2004). Like the serotonergic raphe, the LC is historically considered a part of the "reticular activating system" and, as such, it influences the activity of many targets in an arousal-state-dependent manner. There is considerable evidence that LC neurons can influence respiration and display intrinsic CO2/H+ sensitivity, but the most powerful of the new technical advancements in neuroscience have not yet been applied to addressing the significant gaps in fulfilling the criteria that would be necessary for acceptance as bona fide central respiratory chemoreceptors.
Criteria 1 and 2
The effects of LC activation and inhibition on respiration and the HCVR have been tested in several ways, mostly indirect.Targeted ablation of the LC via injection of SP-saporin or anti-DBH-saporin decreases the magnitude of the HCVR without effects on basal respiration in room air (Figure 7A) (Li and Nattie, 2006;de Carvalho et al., 2010).Microinjection of the carbonic anhydrase inhibitor acetazolamide to produce a local acidification in the LC caused an increase in phrenic nerve output in anesthetized cats and rats (Coates et al., 1993).Likewise, microinjection of agonists of purinergic signaling or antagonists of serotonergic or glutamatergic signaling into the LC augment the HCVR via effects on tidal volume (De Moreno et al., 2010;Biancardi et al., 2014).Similar targeted injection of antagonists of purinergic, orexinergic, or gap junctional activity attenuate the whole body HCVR predominantly through effects on tidal volume (De Moreno et al., 2010;Taxini et al., 2013;Biancardi et al., 2014;Patrone et al., 2014;Vicente et al., 2016).Acute electrical excitation of the LC in an ex vivo, brainstem spinal cord preparation can increase C4 burst frequency, albeit by a small amount (Figure 7B) (Hakuno et al., 2004).In a more direct test, acute inhibition of the LC via activation of an exogenously expressed inhibitory allatostatin receptor has no effect on baseline respiration but this was able to blunt the ventilatory response to 7% CO 2 (Magalhães et al., 2018).Analogous experiments using chemo or optogenetics to determine the effects of acute LC activation on respiration in the conscious, behaving animal have not yet been reported.
Criteria 4 and 5
There is good evidence for direct chemosensitivity of LC neurons in vitro. For example, LC neurons increase action potential firing in response to a hypercapnic challenge in vitro; this is likely a direct effect of CO2/H+ since it is resistant to synaptic block with kynurenic acid, picrotoxin, or low Ca2+/high Mg2+ solution and, importantly, is retained in LC neurons studied after
acute dissociation (Ito et al., 2004; Ritucci et al., 2005; Johnson et al., 2008; Nichols et al., 2008; Erlichman et al., 2009). The cellular and ionic bases for CO2-dependent changes in cell excitability and action potential firing have been explored extensively. In contrast to the RTN, LC activity tracks internal and not external pH (Figure 7C) (Pineda and Aghajanian, 1997; Filosa et al., 2002; Hartzler et al., 2008). The pH-sensitive Kir4.1/5.1 channels are expressed in LC neurons at relatively high levels, at least when compared to other candidate chemoreceptor cell groups, and genetic deletion of Kir5.1 attenuates the LC firing response to NH4Cl-induced internal acidification in vitro (Wu et al., 2004; D'Adamo et al., 2011). The expression of Kir4.1/Kir5.1 in both LC neurons and astrocytes, where they are also implicated as potential pH sensors, confounds interpretation of physiological experiments performed in whole-animal Kir5.1 knockouts and further supports a move to cell-type-specific manipulations moving forward. The L-type calcium channels that contribute to subthreshold oscillations and action potential firing are not directly affected by internal pH, and LC neurons still depolarize in response to CO2 when oscillations are blocked with nifedipine (Figure 7E) (Filosa and Putnam, 2003; Imber and Putnam, 2012; Li and Putnam, 2013; Imber et al., 2014; Imber et al., 2018; Li et al., 2021). However, the subthreshold oscillations are modulated by cAMP/PKA signaling and can be indirectly facilitated by CO2-dependent intracellular acidification, uptake of bicarbonate, and activation of a bicarbonate-sensitive soluble adenylate cyclase (sAC) (Imber et al., 2014). In addition, calcium-activated K+ channel (BK) activity acts as a brake on CO2-activated firing in LC neurons (Imber et al., 2018), but this contribution of BK channels to the cellular response appears to be distinct from any role as direct sensors for CO2/H+.
Even as these mechanisms for CO 2 /H + regulation of LC neuron activity have been examined in vitro, their role in initiating or supporting the whole animal HCVR is relatively unknown.As mentioned earlier in the discussion of astrocytes, global genetic deletion of K ir 5.1 can blunt the HCVR, but it is not possible to attribute this effect to an action on the LC.It has been demonstrated that microinjection of paxilline, a BK inhibitor, into the LC of the adult rat can augment the HCVR via effects on tidal volume, presumably by removal of the oscillatory brake (Figure 7F) (Imber et al., 2018).The effects of targeted inhibition of the other identified components of pH sensitivity in the LC have not yet been reported.
Summary
There is good evidence that the firing activity of LC neurons can be modulated by CO 2 /H + in vitro.Aside from the chemogenetic experiment with allatostatin, most of the evidence for in vivo modulation of respiration and the HCVR lacks cell specificity and would benefit from application of more targeted optogenetic/ chemogenetic approaches.The effects of CO 2 on LC activity have been observed in vivo under anesthesia, but have not yet been examined in freely behaving animals.Finally, despite identification of various ion channel contributors to CO 2 /H + -dependent firing in LC neurons, a principal molecular candidate for the intracellular pH sensor has not been forthcoming, and it has therefore not been possible to examine effects of disrupting direct CO 2 /H + sensing in LC neurons on the HCVR or respiration generally.
Lateral hypothalamus
The lateral hypothalamus (LH) is a highly heterogeneous region which contains a large proportion of the orexin producing neurons within the CNS.The orexin system has been a focus of recent research on arousal state, cardiorespiratory control, and environmental stress response.The orexinergic neurons in the LH have a broad range of targets throughout the brain, including to the RTN, LC, raphe, and preBötC (Peyron et al., 1998;Trivedi et al., 1998;Date et al., 1999;Nambu et al., 1999;Marcus et al., 2001; et al., 2005;Rosin et al., 2006;Puskás et al., 2010;Lazarenko et al., 2011;Tupone et al., 2011;Nattie and Li, 2012).Application of exogenous orexin to a number of these nuclei increases their firing activity and excitability and leads to an altered cardiorespiratory state (Shirasaka et al., 1999;Machado et al., 2002;Zhang et al., 2005;Corcoran et al., 2010;Lazarenko et al., 2011;Young et al., 2005;Shahid et al., 2011;Luong and Carrive, 2012;Shahid et al., 2012;Sugita et al., 2014;Loiseau et al., 2019).Of relevance to central chemoreception, animals exposed to a hypercapnic challenge show increased Fos expression in the orexin cells of the LH (Figure 8A), perfusion of the LH with low pH solution increases respiration rate in an orexin dependent manner, and the ventilatory response to CO 2 is attenuated in orexin knockout mice (Kayaba et al., 2003;Nakamura et al., 2007;Sunanaga et al., 2009;Song et al., 2012;Li et al., 2013).Treatment with pharmacological antagonists of orexin receptors, which theoretically mediate the downstream effects of orexinergic neuron activation, leads to a blunted HCVR in the whole animal and decreased effects of CO 2 on phrenic nerve activity in an isolated rat brainstem spinal cord preparation (Figure 8B) (Dias et al., 2009;Corcoran et al., 2010;Li and Nattie, 2010;Vicente et al., 2016;Fukushi et al., 2022).Together, these data support a role for the orexinergic system in respiratory control and provide a rationale to assess the evidence supporting orexinergic LH neurons for their potential roles as respiratory chemoreceptors.
Criteria 1 and 2
The orexin neurons of the LH are interspersed with a number of other cellular subtypes in the LH.Disinhibition of LH neurons via microapplication of the GABA A receptor antagonist bicuculline or direct activation via electrical stimulation leads to an increased heart rate, blood pressure, and frequency of respiration (Figure 8C) but these cell activation methods do not specifically target orexin neurons in this heterogenous area (Kayaba et al., 2003;Iigaya et al., 2012).There are no reports describing the effect of inhibition of LH orexin neurons on baseline respiration or the HCVR.
Criterion 3
As mentioned above, animals exposed to an acute hypercapnic challenge demonstrate increased expression of the activity marker Fos in the orexin neurons of the LH.Other measures of activity during respiratory challenge in vivo have not yet been reported for the orexinergic LH neurons.
Criteria 4 and 5
The activity of orexin neurons in acute slices of the LH is modulated by CO 2 /H + , and pH mediated depolarization is maintained in the presence of TTX suggesting a cell-intrinsic sensitivity (Figure 8D) (Williams et al., 2007;Song et al., 2012).The molecular mechanism(s) controlling this intrinsic CO 2 /H + modulation are largely uncharacterized.LH orexin cells express high levels of pH sensitive potassium channels TASK-1/TASK-3; those channels regulate cell excitability in orexin neurons but are not necessary for their CO 2 sensitivity (Gonzalez et al., 2009;Guyon et al., 2009).Respiration can be activated by microinjection into the LH of an extremely low pH solution (pH 6.5); this effect relies on acid-sensitive ion channels (ASICs) but it is unknown if this requirement is due to altered chemosensitivity or just due to overall decreased excitability as seen in the TASK-1/TASK-3 knockout system (Song et al., 2012).The properties of ASIC channels seem poorly suited to homeostatic regulation of ventilation, particularly their pH sensitivity in extreme acidic ranges typically associated with pathophysiology (pH 6-7); moreover, global knockout of ASIC1, ASIC2 or ASIC3 had no effect on HCVR in conscious, unrestrained mice (Guyenet et al., 2016;Detweiler et al., 2018).
Summary
There is good evidence that the orexinergic system can provide a general excitatory drive to respiratory circuits, likely via orexin signaling and in an arousal state-dependent manner.However, the evidence addressing the criteria required for a bona fide respiratory chemosensory function is less well developed.Experiments examining effects on respiration and the HCVR using cell-specific methods for activating and inhibiting orexin neurons would be helpful.In addition, although orexin neurons appear to be sensitive to CO 2 in vivo (Fos), like most of the other cell groups reviewed here, there have been no direct measures of this CO 2 -mediated neuronal activation in freely behaving animals.It also seems certain that orexin neurons in the LH can be activated by CO 2 /H + in vitro, likely directly, but the cellular and ionic mechanisms so far suggested for intrinsic chemosensitivity of those neurons have not held up to experimental scrutiny, at least in the context of CO 2 -regulated breathing.Thus, better satisfying a number of these criteria, especially identifying and manipulating a relevant molecular CO 2 /H + sensor, will be crucial to support a role for these cells as chemosensors.
Conclusion
There has been a long-term quest to identify the brainstem sensory cells that detect changes in CO 2 /H + and drive the respiratory circuits that adjust ventilation to correct deviations from normal physiological set points for PaCO 2 and tissue acid-base balance.As cellular candidates have emerged, there have been additional efforts to use various technical advances to define those cell types with greater phenotypic clarity, seek molecular substrates for their CO 2 /H + sensitivity, and validate their physiological role in respiratory chemosensitivity.To formalize evaluation of these ongoing efforts, we have enumerated a set of increasingly stringent criteria that we believe are necessary and, for the final criterion sufficient, to declare a candidate as a bona fide respiratory chemoreceptor (Guyenet and Bayliss, 2022).
Here, we examined the extant experimental support for the most prominent current chemoreceptor candidates and can confidently conclude that none have yet surpassed the full evidentiary bar demanded by these criteria.However, for a number of these cell types we could identify strong, albeit partial, support for many of the criteria.
In the case of the developmentally and biochemically defined RTN neurons, experimental modulation of their activity has the expected effects on respiratory output, and they are directly responsive to CO 2 /H + in vitro via two identified proton detectors (TASK-2, GPR4) that are both required for full elaboration of the HCVR.The CO 2 /H + modulation of RTN in vivo remains to be directly observed in unanesthetized animals, and the genetic elimination of TASK-2 and GPR4 was global and did not disrupt the pH sensing mechanism, per se.Nonetheless, both RTN ablation and combined TASK-2/GPR4 knockout eliminate the HCVR nearly completely in conscious animals, consistent with a particularly prominent role for both RTN neurons and their molecular pH sensors.The effect of RTN ablation also suggests that these neurons may be a point of convergence for inputs from other presumptive chemoreceptors.Indeed, RTN neurons are modulated by several transmitters and peptides from those other cell groups, and such a convergent action may support the more pronounced CO 2 /H + sensitivity of RTN neurons in vivo, by comparison to in vitro.
The other chemoreceptor candidates that have accrued the most experimental support are the serotonergic raphe neurons and brainstem astrocytes.For raphe neurons, recent elegant intersectional approaches have revealed remarkable molecular and functional diversity within the serotonergic system, and focused attention specifically on the Egr2-Pet1 subset of caudal raphe neurons as potential respiratory chemoreceptors.These particular neurons are directly CO 2 /H + sensitive in vitro, an observation not yet verified in vivo, and inhibition of this subset of serotonergic cells blunts the HCVR.To date, TASK-1/TASK-3 channels are the only molecularly identified pH sensors in serotonergic raphe neurons, but genetic deletion of those TASK channels has no effect on the HCVR in mice.For astrocytes, there is good evidence that they are activated by CO 2 /H + to mobilize intracellular Ca 2+ , but this has not been validated in conscious animals.Optogenetic activation of VMS astrocytes evokes ATP release and stimulates local RTN neurons and respiration via a P2Y receptor mechanism; conversely, inhibition of gliotransmitter release and ATP signaling in preBötC neurons blunts the HCVR, along with various other respiratory reflexes.It remains to be clarified whether there is a specific site for astrocytic modulation of CO 2 -dependent respiratory output, and the molecular specializations proposed to support CO 2 /H + sensing by astrocytes have not yet been clearly linked to the HCVR.For LC and orexin neurons, which can modulate respiratory output and may indeed be CO 2 /H + sensitive in vitro, there is much less direct evidence for the various criteria.
If this set of criteria can be fulfilled by one or more of these cell types and molecular sensors, then it will also be important to quantify their relative contributions and determine whether they function together in series, in parallel, or both.Our current working model holds that respiratory chemoreception and the HCVR is primarily subserved by a multicellular sensory apparatus.In particular, we see the RTN as both a direct CO 2 /H + sensor and as a principal integrative center that transduces local environmental variations in CO 2 /H + and neuromodulatory input from the other presumptive chemosensory cell groups for onward transmission to the respiratory rhythm and pattern generator circuits.These inputs modulate the excitability of RTN neurons, increasing their CO 2 /H + sensitivity and input-output gain.To the extent that those other cell groups encode CO 2 /H + in vivo, their inputs may confer a secondary CO 2 /H + signal to RTN neurons while imparting their own chemosensitivity onto other elements of the respiratory control and output networks.Many predictions of this working model have not been directly tested, and those together with the chemoreceptor criteria we outlined here, can hopefully serve as a guide for future experiments.Regardless of whether any of these cell groups fulfill all the listed criteria for bona fide respiratory chemoreceptors, it is clear that they each provide important modulatory influences on downstream respiratory networks that enhance how changes in CO 2 are ultimately translated into an effective homeostatic ventilatory response.Finally, it is also important to recognize that these cell groups could serve chemoreceptor functions for other non-respiratory effects of CO 2 (arousal, anxiety, etc.).
FIGURE 4
FIGURE 4 TASK-1 and TASK-3 expression underlies a TASK-like pH- and halothane-sensitive K+ current in raphe neurons but is not required for CO2-stimulated breathing. (A) Acidosis-stimulated activity of caudal raphe neurons persists under halothane inhibition. Panel adapted from (Washburn et al., 2002), Figure 7C, copyright 2002 Society for Neuroscience. (B) TASK-1 (Kcnk3) and TASK-3 (Kcnk9) expression in caudal raphe serotonergic (Tph+) neurons. Panel adapted from (Washburn et al., 2002), Figure 4, copyright 2002 Society for Neuroscience. (C) Dorsal raphe neurons demonstrate a pH-sensitive whole-cell current that is lost in TASK-1 and TASK-3 single or double knockout animals. Panel adapted from (Mulkey et al., 2007b), Figure 3D, copyright 2007 Society for Neuroscience. (D) Whole-body plethysmography shows no effect of single or double knockout of TASK-1 and/or TASK-3 on the HCVR in freely behaving mice. Figure adapted from (Mulkey et al., 2007b), Figure 6B, copyright 2007 Society for Neuroscience.
FIGURE 5
FIGURE 5 During hypercapnic acidosis, ventral medullary astrocytes are activated, ATP levels increase, and purinergic signaling is necessary for astrocyte activation to drive respiration. (A) Activation of preBötC astrocytes with a Gq-coupled DREADD leads to an increased frequency of respiration (fR) in the conscious mouse in an ATP-dependent manner (i.e., the increase is not present with co-expression of the ectonucleotidase TMPAP). Panel adapted from (Sheikhbahaei et al., 2018), Figure 2K. (B) Activation of astrocytes in the RTN region using ChR2 leads to increased phrenic nerve discharge in a purinergic-dependent manner (MRS 2179, P2Y receptor antagonist). Panel adapted from (Gourine et al., 2010), Figure 4C. (C) Inhibition of preBötC astrocytic vesicular release using dominant-negative SNARE (dnSNARE) or tetanus toxin (TeLC) expression leads to decreased fR at baseline and during exposure to elevated CO2. Panel adapted from (Sheikhbahaei et al., 2018), Figures 1G, H, 4B. (D) Astrocytes on the ventral surface of a medullary slice in the RTN region respond to low pH by increasing intracellular Ca2+. Panel adapted from (Gourine et al., 2010), Figure 1B. (E) ATP levels increase on the ventral surface of the brainstem in the isolated heart-brainstem preparation perfused with high-CO2 solution. Panel adapted from (Gourine et al., 2005), Figure 1A. (F) Whole-cell recordings of astrocytes in the RTN region of rat brain slices show membrane depolarization and development of a weakly rectifying CO2/H+-sensitive current during bath acidification (i) or exposure to elevated CO2 (ii). Panel adapted from (Wenker et al., 2010), Figures 1A, G, 3A, E.
FIGURE 6
FIGURE 6 Multiple mechanisms are proposed for astrocyte CO2 sensing, CO2-dependent ATP release, and effects on CO2-stimulated breathing. (A) Schematic of proposed astrocytic mechanisms of CO2/H+ sensitivity and physiological signaling. Figure prepared in Biorender. (B) Medullary vessels in brain slices vasoconstrict in response to CO2, and this vasoconstriction is inhibited by a P2 receptor blocker (PPADS). The reverse is observed in cortical vessels. Panel adapted from (Hawkins et al., 2017), Figures 1A, 3A. (C) Treatment of the ventral medullary surface with connexin blockers in anesthetized rats attenuates but does not abolish CO2-induced phrenic discharge. Panel adapted from (Huckstepp et al., 2010a), Figure 13. (D) Blockers of NBCe1 and/or NCX activity blunt pH-induced Ca2+ (i) and/or Na+ (ii) transients in astrocytes in organotypic slice culture. Panel adapted from (Turovsky et al., 2016), Figure 2F. (E) Whole-body knockout of NBCe1 reduces the frequency of pH-induced Ca2+ transients in cultured astrocytes. Panel adapted from (Turovsky et al., 2016), Figures 5A, B. (F) RTN astrocyte pH-sensitive current is decreased in Kir5.1 (Kcnj16) knockout animals. Panel adapted from (Patterson et al., 2021), Figure 7F. (G) Loss of astrocytic NBCe1 via tamoxifen-induced GLAST-Cre-driven deletion does not affect the HCVR. Panel adapted from (Hosford et al., 2022), sup. Figure 6B. (H) Both the HVR (i) and HCVR (ii) are significantly blunted in Kir5.1 whole-body knockout mice. Panel adapted from (Trapp et al., 2011), Figures 1A, B.
FIGURE 7
FIGURE 7 Locus coeruleus neurons are activated by CO2 and they contribute to a normal HCVR. (A) Targeted ablation of LC neurons using SP-saporin leads to a blunted HCVR through altered CO2 effects on tidal volume and not frequency. Panel adapted from (de Carvalho et al., 2010), Figure 4. (B) Electrical stimulation of the LC in the isolated brainstem spinal cord preparation leads to a small increase in hypoglossal nerve burst frequency (respiratory rate, RR). Panel adapted from (Hakuno et al., 2004), Figure 7A. (C) In rat brainstem slices, LC neuronal activity (integrated firing rate, IFR) tracks internal pH and does not require changes in CO2. Panel adapted from (Filosa et al., 2002), Figure 7. (D) In extracellular recordings from anesthetized rat, LC neuron firing increases during hypoxia (KCN, potassium cyanide) and hypercapnia (7% CO2). Panel adapted from (Magalhães et al., 2018), Figure 1C. (E) In rat brainstem slices, LC neurons exhibit a subthreshold membrane potential oscillation that increases in power and frequency during hypercapnic acidosis (see thickening and darkening of trace); the oscillation is blocked by extracellular Co2+ and nifedipine (L-type Ca2+ channel blocker). Panel adapted from (Filosa and Putnam, 2003), Figure 3. (F) Injection of the BK inhibitor paxilline directly into the LC to remove the oscillatory brake does not affect baseline respiration but augments the HCVR. Panel adapted from (Imber et al., 2018), Figure 9B.
FIGURE 8
FIGURE 8 CO2/H+ can activate orexin-expressing lateral hypothalamus neurons, and blocking orexin receptors reduces the HCVR. (A) Orexin-expressing (red) lateral hypothalamus cells in rat express increased levels of Fos (green) after an acute CO2 challenge. Panel adapted from (Sunanaga et al., 2009), Figures 1A, B. (B) Treatment with the orexin receptor antagonist suvorexant slightly blunts the HCVR in conscious mice. Panel adapted from (Fukushi et al., 2022), Figure 5A. (C) Locations in lateral hypothalamus where electrical stimulation increased respiratory frequency in the anesthetized mouse. Panel adapted from (Kayaba et al., 2003), Figure 1. (D) Orexin neuron membrane potential is sensitive to bath pH changes in acute slices from rat under TTX treatment. Panel adapted from (Williams et al., 2007), Figure 4B, copyright 2007 Society for Neuroscience.
"year": 2023,
"sha1": "cdd80d797171e78369df13ff7be906071a97916a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2023.1241662/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0fd118770e9e5b74d5b3c8389a2755b04d8ba9c6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232035866 | pes2o/s2orc | v3-fos-license | Special functions associated with automorphisms of the space of solutions to special double confluent Heun equation
The family of quads of interrelated functions holomorphic on the universal cover of the complex plane without zero (for brevity, pqrs-functions), revealing a number of remarkable properties, is introduced. In particular, under certain conditions the transformations of the argument $z$ of pqrs-functions represented by lifts of the replacements $ z \leftarrow -1/z $ $ z \leftarrow -z $, and $ z \leftarrow 1/z $ are equivalent to linear transformations with known coefficients. Pqrs-functions arise in a natural way in constructing of certain linear operators acting as automorphisms on the space of solutions to the special double confluent Heun equation (sDCHE). Earlier such symmetries were known to exist only in the case of integer value of one of the constant parameters when the predecessors of pqrs-functions appear as polynomials. In the present work, leaning on the generalized notion of pqrs-functions, discrete symmetries of the space of solutions to sDCHE are extended to the general case, apart from some natural exceptions.
Hence all solutions to the above system are holomorphic in some vicinity of any point $z_0 \neq 0$.
One may also regard pqrs-functions as solutions of the Cauchy problem for Eqs. (1)–(4) with arbitrary (but not totally null) initial data specified at an arbitrary given $z = z_0 \neq 0$. Obviously, such a local solution can be analytically continued to any other point of $\mathbb{C}$ except zero. In particular, all solutions to Eqs. (1)–(4) (i.e., pqrs-functions) are single-valued holomorphic functions on any connected and simply connected subset of $\mathbb{C}^* = \mathbb{C}\setminus\{0\}$.
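Eqs. (1)–(4) themselves appear earlier in the paper and are not reproduced in this excerpt, so the following minimal numerical sketch keeps the coefficient matrix abstract: it integrates a generic linear homogeneous system $w' = A(z)\,w$, $w = (p, q, r, s)$, along a user-chosen path avoiding $z = 0$, which is exactly the analytic continuation described above. The function A below is a toy stand-in (with known continuation behaviour) used purely to make the sketch executable; the reader should substitute the actual matrix of Eqs. (1)–(4). The half-circle path in semi_monodromy anticipates the operator $M^{1/2}$ defined in Theorem 3 below.

import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in (assumption, for demonstration only): with A(z) = (ell/z) * I
# the solutions are w(z) = z^ell * w(1), so continuation along a half-turn
# around zero must multiply them by e^{i*pi*ell}.  Replace this with the
# actual 4x4 coefficient matrix of Eqs. (1)-(4) from the paper.
ELL = 0.75
def A(z):
    return (ELL / z) * np.eye(4, dtype=complex)

def continue_along(w0, z_of, dz_of):
    """Analytic continuation of the local solution with w(z_of(0)) = w0
    along the parametrized curve t -> z_of(t), t in [0, 1], which must avoid
    z = 0; dz_of is the t-derivative of the parametrization.  Complex
    4-vectors are carried as 8 real components so the choice of integrator
    does not depend on complex-number support."""
    def rhs(t, y):
        w = y[:4] + 1j * y[4:]
        dw = dz_of(t) * (A(z_of(t)) @ w)   # chain rule: dw/dt = z'(t) A(z(t)) w
        return np.concatenate([dw.real, dw.imag])
    w0 = np.asarray(w0, dtype=complex)
    y0 = np.concatenate([w0.real, w0.imag])
    sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)
    y = sol.y[:, -1]
    return y[:4] + 1j * y[4:]

def semi_monodromy(w0, z):
    """Continuation along the counter-clockwise half-circle t -> z e^{i pi t},
    i.e. the semi-monodromy operator M^{1/2} discussed below."""
    return continue_along(w0,
                          lambda t: z * np.exp(1j * np.pi * t),
                          lambda t: 1j * np.pi * z * np.exp(1j * np.pi * t))

# Sanity check for the toy A: the computed factor matches e^{i*pi*ELL}.
print(semi_monodromy(np.ones(4), 1.0 + 0j)[0], np.exp(1j * np.pi * ELL))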
At the same time it has to be noted that, except for very special conditions, the natural (inextendible) domain of holomorphicity of pqrs-functions is neither $\mathbb{C}^*$ nor any of its subsets but rather the universal cover of $\mathbb{C}^*$. This is the Riemann surface $\widetilde{\mathbb{C}}^*$ diffeomorphic to $\mathbb{C}$, the covering projection $\Pi\colon \widetilde{\mathbb{C}}^* \simeq \mathbb{C} \to \mathbb{C}^*$ being realized by the natural exponential function. However, in what follows we shall consider, unless otherwise specified, only a part of $\widetilde{\mathbb{C}}^*$ (a subdomain), denoting it $\widehat{\mathbb{C}}^*$. It is representable as the result of removing from $\mathbb{C}^*$ the ray $\mathbb{R}_-$ of negative reals, $\widehat{\mathbb{C}}^* = \mathbb{C}^* \setminus \mathbb{R}_-$. When considered on $\widehat{\mathbb{C}}^*$, any instance of pqrs-functions combines the four single-valued holomorphic functions uniquely defined by their values (which may be arbitrary but not all zero) at any given point $z_0 \in \widehat{\mathbb{C}}^*$. The two one-sided continuations to $\mathbb{R}_-$ also exist, giving rise to real analytic functions in the common domain $\mathbb{R}_-$. However, as a rule, these one-sided limits do not coincide pointwise.
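For concreteness, the covering map just mentioned can be written down explicitly (a standard fact, restated here in the notation of this excerpt):
\[
\Pi\colon \mathbb{C} \to \mathbb{C}^*, \qquad \Pi(\zeta) = e^{\zeta},
\]
so that the deck transformations are the shifts $\zeta \mapsto \zeta + 2\pi i k$, $k \in \mathbb{Z}$, and the subdomain $\widehat{\mathbb{C}}^*$ lifts to the horizontal strip $|\Im\zeta| < \pi$.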
Pqrs-functions reveal a number of noteworthy properties. The first of them is expressed by the following statement.
Remark 1 Eq. (5) is obviously implied by Eq. (7) alone. The remaining three equations, when evaluated at $z = i$, either turn out to be fulfilled identically or follow from Eq. (5) (and, thus, from Eq. (7)).
Remark 2
The constraint (5) does not affect the value of the function $s$ at the selected point and, moreover, $s(\cdot)$ is present only in Eq. (9), which might be considered as decoupled from the preceding ones. However, there is an indirect influence of the selection of $s$ (via the unrestricted setting of $s(i)$) on the other pqr-functions in view of their "unbreakable interrelation" implied by Eqs. (1)–(4).
Remark 3
The involutive transformation which we shall here refer to as the transformation A, signified in the left-hand sides of Eqs. (6)–(9) by the replacement
\[
z \leftrightarrow -1/z \qquad (10)
\]
of the argument $z$ of the functions involved, is here tacitly regarded as the map keeping the particular argument $z = i$ unchanged. This point is worth mentioning because, in the case we deal with, i.e., for functions possessing domains distinct from $\mathbb{C}^*$, "the reflected imaginary unit" $-i$ is not a fixed point of the implied transformation of the arguments, albeit $-1/(-i) = -i$, formally. Moreover, there is another transformation (let us denote it $\tilde{A}$) or, one might say, another implementation of the rule (10), recognizing just $-i$, but not $+i$, as a fixed point in the domain of the (this time) $\tilde{A}$-transformed functions. Accordingly, as long as we consider $+i$ as the fixed point of the transformation signified by the argument replacement (10), there exist connected and simply connected open sets containing $+i$ and contained in $\widehat{\mathbb{C}}^*$ such that their images under the transformation A also contain $+i$ and are contained in $\widehat{\mathbb{C}}^*$. On them, the asserted relations expressed by Eqs. (6)–(9) are well defined. It is here preferable to consider the subdomain $\widehat{\mathbb{C}}^*$ only. The extension of Eqs. (6)–(9) to the whole domain of pqrs-functions (the universal cover of $\mathbb{C}^*$) by means of analytic continuation is obviously feasible, although in general it might prove to be not representable by the original formulas.
To clarify some specialties of the above interpretation, we consider the following example. Let $z$ be continuously moving from $i \in \widehat{\mathbb{C}}^*$ towards some $x \in \mathbb{R}_+ \subset \widehat{\mathbb{C}}^*$ along a concave curve. Then $-1/z$, also starting from $+i$ but further differing from $z$, is moving around zero in the opposite angular direction, arriving ultimately at $-x^{-1} \in \mathbb{R}_-$, which does not belong to $\widehat{\mathbb{C}}^*$. Thus, when dragging $z$ farther across $\mathbb{R}_+$ inward the half-plane $\Im z < 0$, the corresponding A-transformed argument $-1/z$ leaves $\widehat{\mathbb{C}}^*$ across 'the upper edge' of the cut along the ray $\mathbb{R}_-$. Notice that we may not consider it entering $\widehat{\mathbb{C}}^*$ again through the lower cut edge, disconnected from the upper one. This means that in the course of the above process the literal applicability of the formulas (6)–(9) breaks down on the ray of positive reals. Thus, to ensure their meaningfulness, one is compelled to obey the restriction $\Im z > 0$. At the same time, it is obvious that such a limitation is only a consequence of a certain simplification we had adopted for convenience. It would not arise in the case of consideration of pqrs-functions on their full domain. However, then yet another complication, related to a certain non-uniqueness of interpretation of Eqs. (6)–(9), would appear. In total, we still prefer here to restrict consideration to the subdomain $\widehat{\mathbb{C}}^*$, keeping in mind the limitations induced by such a simplification.
Eq. (5) singles out some subset of pqrs-functions, constraining their values (i.e., the initial data for Eqs. (1)–(4)) at $z = i$. Yet another property leans on their parametrization by the values at $z = 1$. It reads as follows.
Remark 4
For z " 1 the replacement of argument of the functions on the left in Eq.s (13)-(16) (we shall refer to it as the transformation C) reveals no effect. Accordingly, there exist open sets containing`1 which remain invariant under the action of the transformation C. Then it is reasonable to consider first the equalities (13)-(16) on such neighborhoods of the unity and then utilize analytic continuation for their extending to greater domains.
Remark 5 Besides z " 1, the point z "´1 (excluded, by definition, from C˚) is also unaffected by the replacement z Ø 1{z utilized in Eq.s (13)-(16), formally. However, it can not be considered as a fixed point of the transformation C. More precisely, claiming of z "´1 to be a fixed point, one must replace the transformation C by "yet another implementation"C of the above argument replacement. For it, the former fixed point z " 1 loses such a property. Besides, forC, the associated (sub-)domain of pqrsfunctions, playing role of C˚, has to contain R´but not R`. Having thus noted the presence of certain ambiguity in the interpretation of Eq.s (13)-(16), we limit ourselves with the above remark and shall not consider here this issue in greater details.
Combining the conditions of the two above theorems, we obtain one more relationship, in accordance with the following.
Theorem 3 Let pqrs-functions obey the conditions of both Theorem 1 and Theorem 2, i.e., they are holomorphic on a connected and simply connected open set containing $+i$ and $+1$ and meet the constraints (5), (11), and (12). Then the equalities
\[
M^{1/2}[p] = e^{i\ell\pi}(\lambda+\mu^2)^{-1}\bigl(\mu z^2 r + s\bigr),
\qquad
M^{1/2}[q] = -e^{i\ell\pi}\Bigl((\mu z^2 p + q) + \mu(\lambda+\mu^2)^{-1} z^2\bigl(\mu z^2 r + s\bigr)\Bigr)
\]
hold true, where the arguments $z$ of all the functions coincide and hence are suppressed, and where the operator $M^{1/2}$ carries out analytic continuation of the function it acts on along the circular arc starting at $z$, centered at zero, subtending an angle $\pi$, and oriented counter-clockwise. Moreover, the products of pqrs-functions times the power function $z^{-\ell}$ are single-valued and holomorphic on $\mathbb{C}^*$.
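Reading off coefficients from the two equalities displayed above, the first two rows of the matrix $M^{1/2}_{\ell}(z)$ used in the proof below can be written out. This partial transcription is offered for orientation only, since the equalities fixing the rows corresponding to $r$ and $s$ are not reproduced in this excerpt:
\[
M^{1/2}_{\ell}(z) = e^{i\ell\pi}
\begin{pmatrix}
0 & 0 & \mu z^2(\lambda+\mu^2)^{-1} & (\lambda+\mu^2)^{-1}\\
-\mu z^2 & -1 & -\mu^2 z^4(\lambda+\mu^2)^{-1} & -\mu z^2(\lambda+\mu^2)^{-1}\\
* & * & * & *\\
* & * & * & *
\end{pmatrix},
\]
where the starred entries are determined by the remaining two equalities of the theorem.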
Remark 6 As opposed to the transformations of pqrs-functions treated by Theorems 1 and 2, the transformation of the arguments of the functions on the left in Eqs. (17)–(20) (let us call it the transformation B) admits no fixed points and is not involutive. Moreover, applying the transformation B twice, the resulting effect turns into the analytic continuation of the function to be transformed along the loop projected to (essentially, coinciding with) the full circle. Such kind of analytic continuation around a singular point (in our case, the center $z = 0$) is commonly named the monodromy transformation. We denote it by the symbol $M$. We have therefore $M^{1/2} \circ M^{1/2} = M$ by definition. The effect of the operator $M^{1/2}$ can thus be named the semi-monodromy transformation. In our case $M$ is the linear operator which sends, in particular, the values of pqrs-functions on the "lower" edge of the cut along the ray $\mathbb{R}_-$ to the (generally speaking, distinct) values they assume on its "upper" edge. Since pqrs-functions obey on both edges the same system (1)–(4) of linear homogeneous ODEs (whose coefficients are invariant with respect to $M$), such a transformation is represented by a constant $4\times 4$ matrix. Evading such a complication, we shall assume $\Im z < 0$ for simplicity, unless otherwise specified. Analytic continuation has to be applied for relaxation of this limitation, extending to a greater domain the local form of the equalities (17)–(20), in which the $M^{1/2}$-transformation is regarded as the inversion of the sign of the function argument.
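In symbols, writing $\gamma_z$ for the arc described in Theorem 3 (the notation $\gamma_z$ is ours, introduced for convenience), the two operators just discussed are
\[
\gamma_z(t) = z\,e^{i\pi t},\quad t\in[0,1];\qquad
\bigl(M^{1/2}[f]\bigr)(z) := \bigl(\text{continuation of } f \text{ along } \gamma_z\bigr)\bigl(z e^{i\pi}\bigr);\qquad
M = M^{1/2}\circ M^{1/2},
\]
so that $M$ is continuation counter-clockwise around the full circle $|w| = |z|$.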
Remark 8 In the general case, given a prescribed set of constant parameters, simultaneous fulfillment of Eq. (5) and Eqs. (11), (12) for the same instance of pqrs-functions should be achievable by means of their appropriate selection. Indeed, the set of all pqrs-functions can be indexed by the quad of their values at $z = 1$, fixed up to multiplication by an insignificant (associated with a decoupled degree of freedom) non-zero common factor, i.e., by points of a projective space $\mathbb{CP}^3_{\{1\}}$. The two linear equations (11), (12) single out a projective line embedded therein. This projective line is conveyed (pushed forward) by the vector flow associated with the equations (1)–(4) into another projective space $\mathbb{CP}^3_{\{i\}}$, indexing the same set of pqrs-functions by their values (also considered up to a common constant factor) at $z = i$. In the latter projective space, the equation (5) singles out a certain embedded projective plane. The question equivalent to the issue of consistency of Eq. (5) with Eqs. (11) and (12) reads: does the former (conveyed) projective line intersect the latter projective plane or not? This problem remains open as yet, but numerical computations point in favor of the affirmative upshot, at least under apparently generic conditions. Thus, most plausibly, inconsistency of Eq. (5) with Eqs. (11) and (12), and the subsequent inanity of Theorem 3, if any, could only occur under very special conditions (currently unknown). We may state therefore the following.
Corollary 4 There exists a set of pairwise linearly independent quads of holomorphic functions $p, q, r, s$ parameterized by points of $\mathbb{CP}^2$ such that the equations (6)–(9) are fulfilled.

The last assertion of Theorem 3 says how the pqrs-functions referred to in the above Conjecture are expressed through functions which are single-valued and holomorphic on $\mathbb{C}^*$.
We proceed now with proofs of the three above theorems.
Proof of Theorem 1. Let us denote the four differences of the left- and right-hand sides of Eqs. (6), (7), (8), (9) by the symbols $\Delta^A_p$, $\Delta^A_q$, $\Delta^A_r$, $\Delta^A_s$, respectively, considering them, as they stand, as functions of $z$. For example, one of these definitions reads $\Delta^A_p(z) = p(-1/z) + e^{i\ell\pi} z^{2(1-\ell)}\,p(z)$, etc. As is shown in Appendix A, they obey the system (21) of linear homogeneous ODEs provided Eqs. (1)–(4) are fulfilled. Using the explicit definitions, let us compute the particular values $\Delta^A_✪(i) = \Delta^A_✪(e^{i\pi/2})$ for ✪ ∈ {p, q, r, s}. Notice that for such a choice of the argument $z$ one has $-1/z = -e^{-i\pi/2} = i$, $z^{-2\ell} = e^{-i\ell\pi}$, $z^{2(1-\ell)} = -e^{-i\ell\pi}$. Then it follows from Eq. (6) that $\Delta^A_p(i) = 0$. The values $\Delta^A_✪(i)$ of the other differences are not automatically zero, but one easily finds that, in accordance with the definitions, $\Delta^A_✪(i) = \zeta_✪\cdot\bigl(q(i) - \mu p(i) + r(i)\bigr)$ for ✪ ∈ {q, r, s}, with $\zeta_q = \zeta_r = 1$, $\zeta_s = \mu$.
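The three elementary evaluations invoked here are immediate; for completeness:
\[
z = e^{i\pi/2}:\qquad
-\frac{1}{z} = -e^{-i\pi/2} = i,\qquad
z^{-2\ell} = e^{-i\ell\pi},\qquad
z^{2(1-\ell)} = e^{i\pi(1-\ell)} = e^{i\pi}e^{-i\ell\pi} = -e^{-i\ell\pi}.
\]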
Thus, if Eq. (5) is fulfilled, then $\Delta^A_✪(i) = 0$ for all 'the indices' ✪ ∈ {p, q, r, s}. This implies the vanishing everywhere of all the functions $\Delta^A_✪(z)$, in view of the uniqueness of solutions of the Cauchy problem for Eqs. (21) with the null initial data posed at $z = i$. □

Proof of Theorem 2. Building on the notations utilized in the preceding proof, we denote the differences of the left- and right-hand sides of Eqs. (13), (14), (15), (16) by the symbols $\Delta^C_p(z)$, $\Delta^C_q(z)$, $\Delta^C_r(z)$, $\Delta^C_s(z)$, respectively. It is shown in Appendix B that they obey the system (22) of linear homogeneous ODEs provided Eqs. (1)–(4) are fulfilled.
Thus if the constraints (11) and (12) are fulfilled then all the differences $^{C}\Delta_✪(z)$ vanish at $z = 1$. But then they are identically zero functions, $^{C}\Delta_✪(z) = 0$, as a consequence of Eqs. (22). This means exactly that Eqs. (13)-(16) hold true.

Proof of Theorem 3. As above, let us denote the differences of the left- and right-hand sides of Eqs. (17), (18), (19), (20) by the symbols $^{B}\Delta_p(z)$, $^{B}\Delta_q(z)$, $^{B}\Delta_r(z)$, $^{B}\Delta_s(z)$, respectively. It is shown in Appendix C that, in case of fulfillment of Eqs. (1)-(4), they obey the system (23) of linear homogeneous ODEs. The next step should be the computation of the particular values $^{B}\Delta_✪(-i)$, $✪ \in \{p, q, r, s\}$. However, carrying this out by means of the mere substitution $z = -i$ into the definitions of $^{B}\Delta_✪$, some ambiguity may arise due to the possibility of overlapping of sheets of the branching domain the pqrs-functions live on. To make the computation univocal, we consider first the "deformed" versions $^{\epsilon B}\Delta_✪$ of the differences $^{B}\Delta_✪$, where $\epsilon$ plays the role of a deformation parameter. Their distinction is that in the case of $^{\epsilon B}\Delta_✪$ the factor in the argument of the pqrs-function on the left is distinct from the one involved in Eqs. (17)-(20), see Remark 7. The common exponential multiplier on the right is also modified. Namely, the factor $e^{i\epsilon\pi}$, where $\epsilon \in [0,1]$ is an auxiliary real parameter, is used instead of $-1 = e^{i\pi}$; for example, $^{\epsilon B}\Delta_p$ involves $p(e^{i\epsilon\pi} z)$ in place of the transformed function on the left. Now let us notice that in the case $\epsilon = 0$ all the arguments of the pqrs-functions utilized for the computation of $^{0B}\Delta_✪$ coincide with $z$, and no ambiguity in their evaluation can thus arise. Then, starting from these values, we carry out analytic continuation varying $\epsilon$ through the segment $[0,1]$. We define $^{B}\Delta_✪(z)$ to be "the final values" the functions $^{\epsilon B}\Delta_✪(z)$ arrive at as $\epsilon \nearrow 1$. Such an interpretation leaves no room for ambiguity in the meaning of the definitions of $^{B}\Delta_✪$ and, more generally, of the relations Eqs. (17)-(20) represent.
Assuming the above interpretation of $^{B}\Delta_✪$, it is shown in Appendix D that the equations (24) are fulfilled for arbitrary functions p, q, r, s holomorphic on the circular arc passing through the points $-i$, $+1$, and $+i$.
The symbols $^{C}\Delta_✪$, $✪ \in \{p, q, r, s\}$, were already utilized in the proof of Theorem 2. They denote the differences of the left- and right-hand sides of Eqs. (13)-(16), considered, as they stand, as functions of $z$. Every equation from the system (24) can therefore be regarded as the coincidence, upon simplifications, of a pair of certain linear combinations of 4+4 instances of pqrs-functions, of which some are evaluated at $z = i$ and others at $z = -i$.
On the other hand, the conditions of the theorem to be proven imply, in particular, the fulfillment of the assertion of Theorem 2, which establishes the vanishing of all four functions $^{C}\Delta_✪(z)$ irrespectively of the choice of their arguments. Thus all the terms in Eqs. (24) involving those factors may be discarded. Now, taking into account the fulfillment of Eq. (5), we see that all the expressions on the left in (24), i.e. the functions $^{B}\Delta_✪(z)$, $✪ \in \{p, q, r, s\}$, evaluated at $z = -i$, actually vanish. Since these functions obey the system of linear homogeneous first-order ODEs (23), they reduce to identical zero. This means exactly that the equalities (17)-(20) hold true.
Let us consider the second claim of the theorem, which establishes, under the restrictions assumed, a simpler domain $\mathbb{C}\setminus\{0\}$ for the products of pqrs-functions times $z^{-\ell}$, as compared to the pqrs-functions themselves, which are not single-valued on $\mathbb{C}\setminus\{0\}$ and hence must be considered on its universal cover. We note first that the four-element vector consisting of the right-hand sides of Eqs. (17)-(20) can be obtained by means of the multiplication of the vector $(p(z), q(z), r(z), s(z))^{\mathsf{T}}$ by a matrix which we denote $M^{1/2}_{\ell}(z)$. Thus, under the conditions assumed, the action of the operator $M^{1/2}$ on pqrs-functions is completely described by the matrix $M^{1/2}_{\ell}$. Let us examine the effect of this action applied twice. In the language of matrices it is described by the product of the matrices associated with the mentioned transformation. However, whereas the first operator $M^{1/2}$ of the composition, defined as, in a sense, a rotation of the function argument, is associated with $M^{1/2}_{\ell}(z)$, the second one 'starts' with the arguments already rotated, acting separately on the matrix $M^{1/2}_{\ell}(z)$ and on the vector of pqrs-functions. In other words, the operator composition $M^{1/2} \circ M^{1/2}$ has to be associated with the matrix product $M^{1/2}_{\ell}[M^{1/2} z] \cdot M^{1/2}_{\ell}(z)$. We can compute it making use of an identity in which $I$ is the unit matrix and $\epsilon \in [0,1]$ is the real parameter. The analytic continuation ("the rotation of the argument") carried out by the operator $M^{1/2}$ can be represented by the evaluation of the limit as $\epsilon \nearrow 1$. Then the factor in parentheses in front of the second summand on the right goes to zero and we obtain $M^{1/2}_{\ell}[M^{1/2} z](z) \cdot M^{1/2}_{\ell}(z) = I$. We see therefore that under the conditions of the theorem the monodromy transformation $M = M^{1/2} \circ M^{1/2}$ of pqrs-functions reduces to their multiplication by the constant $e^{2i\ell\pi}$. Accordingly, the products of pqrs-functions times $z^{-\ell}$ reveal the trivial (identical) monodromy transformation. Thus they can be continuously extended in both directions across the cut along $\mathbb{R}_{\leq 0}$ distinguishing $\mathbb{C}^*$ from $\mathbb{C}\setminus\{0\}$. Since they also obey a system of first-order ODEs (which can easily be derived from Eqs. (1)-(4)), no branching appears, showing that they are actually single-valued holomorphic on $\mathbb{C}\setminus\{0\}$ itself.
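The cancellation of the monodromy factor by the power function, which concludes the argument, can be displayed explicitly; a short worked equation (only the multiplicativity of analytic continuation and $M[z^{-\ell}] = e^{-2i\ell\pi} z^{-\ell}$ are used):

\[
M\big[z^{-\ell} p(z)\big]
= \big(e^{-2i\ell\pi} z^{-\ell}\big)\,\big(e^{2i\ell\pi} p(z)\big)
= z^{-\ell} p(z),
\]

and likewise for $q$, $r$, $s$: the products return to their initial values after a full circuit around $z = 0$.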
The theorem is proved.

It sometimes proves useful to take into account the following noteworthy property of all solutions to Eqs. (1)-(4).
Theorem 6
0. If holomorphic functions p, q, r, s obey Eqs. (1)-(4) then the value of the expression
\[
D = z^{2(1-\ell)} \big( p(z) s(z) - q(z) r(z) \big) \qquad (25)
\]
does not depend on $z$;
1. if holomorphic functions p, q, r, s obey Eqs. (6)-(9) then Eq. (26) holds, where and in what follows $\{D\}$ denotes the right-hand side of Eq. (25) considered as a function of $z$;
2. if holomorphic functions p, q, r, s obey Eqs. (13)-(16) then Eq. (27) holds;
3. if holomorphic functions p, q, r, s obey Eqs. (17)-(20) then Eq. (28) holds.
It has to be added that the precise meaning of the argument replacements $z \mapsto -1/z$, $z \mapsto 1/z$, and $z \mapsto M^{1/2} z\ (\simeq -z)$ involved in the above formulas is the same as in the corresponding systems of equations claimed to be fulfilled.
Proof. We shall consider the above assertions one by one.
Assertion 0. Let us expand the expression of the derivative of the right-hand side of Eq. (25) in the case of arbitrary holomorphic functions p, q, r, s. A straightforward computation establishes the identity (29), in which the symbols $\Delta✪$, $✪ \in \{p, q, r, s\}$, denote the differences of the left- and right-hand sides of Eqs. (1), (2), (3), (4), respectively, as they stand. Hence if the latter equations are fulfilled then the derivative (29) vanishes and the value of $\{D\}$ does not depend on $z$.
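For assertion 0, the computation referred to begins with the product rule; a sketch of the expansion before the substitutions from Eqs. (1)-(4) are made (this merely rewrites the derivative of (25); the cancellation itself depends on the explicit form of Eqs. (1)-(4) and organizes the result into the combination denoted (29)):

\[
\frac{d}{dz}\{D\}
= 2(1-\ell)\, z^{1-2\ell}\,\big(p s - q r\big)
+ z^{2(1-\ell)}\,\big(p' s + p s' - q' r - q r'\big),
\]

into which the right-hand sides of Eqs. (1)-(4) are substituted for $p', q', r', s'$.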
Assertion 1. Its validity follows from the equality (30), holding true for arbitrary functions p, q, r, s holomorphic at (and in the vicinity of) $z = i$. Here the symbols $^{A}\Delta_✪$, $✪ \in \{p, q, r, s\}$, denote the differences of the left- and right-hand sides of Eqs. (6), (7), (8), (9), respectively, considered, as they stand, as functions of $z$.
Assertion 2. Let us consider the identity (31), which is verifiable by straightforward computation. Here $^{C}\Delta_✪(z)$, $✪ \in \{p, q, r, s\}$, denote the differences of the left- and right-hand sides of Eqs. (13), (14), (15), (16), respectively, considered, as they stand, as functions of $z$. The equality (31) holds true for arbitrary functions p, q, r, s holomorphic at (and in the vicinity of) $z = 1$. It is extended to any other $z \neq 0$ by means of analytic continuation.
Turning to assertion 3, let us consider the equation (32). Here the symbols $^{B}\Delta_✪$, where $✪ \in \{p, q, r, s\}$, denote the differences of the left- and right-hand sides of the equations (17), (18), (19), (20), respectively. 'The diacritic mark' ð denotes the transformation of the function argument defined as follows: $ð✪(z) = \lim_{\epsilon \nearrow 1} ✪(e^{i\epsilon\pi} z)$, where $\lim_{\epsilon \nearrow 1}$ should be understood as the analytic continuation along the image of the segment $[0,1] \ni \epsilon$ to the end point corresponding to $\epsilon = 1$. In Theorem 3 such a transformation is associated with the operator $M^{1/2}$.
Eq. (32) follows from the identity (65) given in Appendix E. In turn, under the conditions of the theorem, Eq. (28) is an obvious consequence of Eq. (32).

The constant $D$ (in fact, a first integral for the system (1)-(4)) may vanish. Indeed, if it is null at some point (this is a quadratic constraint on the values of pqrs-functions thereat) then it is zero everywhere. Such a case bears many signs of a degeneracy, being nevertheless in no way meaningless. Following here the requirement of genericity, we assume throughout that $D \neq 0$ without separate mentioning. It is also worth noting that there is another case for which many of the relations discussed here degenerate; namely, this takes place if $\lambda + \mu^2 = 0$. We evade here the clarification of its specialties as well.
On applications of pqrs-functions
The properties of pqrs-functions established above make them an object of notable interest in itself. However, they arose originally in the context of another important problem, namely, the study of symmetries of the space of solutions to the following ordinary second-order linear homogeneous differential equation:
\[
z^2 E''(z) + \big((\ell+1) z + \mu (1 - z^2)\big) E'(z) + \big(\lambda - \mu(\ell+1) z\big) E(z) = 0. \qquad (33)
\]
Here $E = E(z)$ is the unknown holomorphic function and $\ell, \lambda, \mu$ are the constant parameters, which may be identified with the ones involved in Eqs. (1)-(4); Refs. [3,4] contain some more recent bibliography. Since a generic double confluent Heun equation (DCHE) is characterized by four constant parameters, whereas Eq. (33) involves only three, Eq. (33) was named a special double confluent Heun equation (sDCHE). This naming is adopted in the present work as well.
It should be noted that Eq. (33) was segregated within the DCHE family because of its intimate relation (in fact, equivalence) to the following nonlinear first-order ODE
\[
\dot{\varphi} + \sin \varphi = B + A \cos \omega t,
\]
in which $\varphi = \varphi(t)$ is the unknown function, the symbols $A, B, \omega$ denote real constants, $t$ is a free real variable, and the overdot denotes differentiation with respect to $t$. The latter equation and its generalizations are, in turn, well known due to their emergence in a number of problems in physics (most notably in the modeling of Josephson junctions) [5,6,7], mechanics [8,9], dynamical systems theory [10], and geometry [11].
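Since the behavior of this equation motivates much of what follows, a minimal numerical sketch may be useful; it integrates the equation with scipy and is purely illustrative: the parameter values A = 1.2, B = 0.5, ω = 1.0 are arbitrary choices, not taken from the text.

import numpy as np
from scipy.integrate import solve_ivp

A, B, omega = 1.2, 0.5, 1.0  # illustrative constants, not from the paper

def rhs(t, phi):
    # The equation rewritten for the derivative: phi' = B + A*cos(omega*t) - sin(phi)
    return B + A * np.cos(omega * t) - np.sin(phi)

t_span = (0.0, 60.0)
t_eval = np.linspace(*t_span, 2000)
sol = solve_ivp(rhs, t_span, [0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Mean winding rate of phi over the tail of the run; for suitable (A, B, omega)
# it locks onto rational multiples of omega (phase locking), the phenomenon
# studied in the Josephson-junction literature cited above.
half = len(t_eval) // 2
rate = (sol.y[0, -1] - sol.y[0, half]) / (t_eval[-1] - t_eval[half])
print(f"mean d(phi)/dt over the tail: {rate:.4f}  (omega = {omega})")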
In earlier investigations the functions obeying equations equivalent to Eqs. (1)-(4) were utilized for constructing linear operators sending the space of solutions to Eq. (33) into itself [12]. It was found that the transformations they determine generate a group which can be regarded as a discrete symmetry of the noted space of solutions. (More precisely, in the case of real parameters, one of three groups arises depending on their values.)
The principal limitation of those considerations was, however, the restriction of the parameter $\ell$ (sometimes called the order of Eq. (33)) to integers only. The simplification following from this assumption (in fact, the starting point of the derivation of the mentioned symmetry transformations) is the reduction of the functions equivalent to our pqrs-functions to polynomials in $z$ as well as in the parameters $\lambda, \mu$. Moreover, there exists a recurrent scheme enabling one to compute these polynomials for any given positive integer $\ell$.
The definition of pqrs-functions considered in the present work needs no such restriction, which enables us to make a crucial step in revealing discrete symmetries of the noted space of solutions in the case of non-integer $\ell$. We apply an approach closely following the one utilized in the case of integer order, although some specific subtleties still have to be taken into account.
To that end, let us define two families of linear operators, $^{\epsilon}L_A$ and $^{\epsilon}L_B$, depending on the real parameter $\epsilon \in [-1, 1]$. They act on arbitrary functions (denoted $E$) holomorphic in $\mathbb{C}^*$ in accordance with the formulas (34) and (35).
The functions p, q, r, s are assumed to be holomorphic in the same domain.
If $\epsilon = 0$ then the common argument of the functions $p, q, E$ and $r, s, E$ in the right-hand sides of (34) and (35) is $z$ or $1/z$; the operators $L_A$ and $L_B$ themselves are defined by the formula (36) as the limits of $^{\epsilon}L_A$ and $^{\epsilon}L_B$ as $\epsilon \nearrow 1$.

Theorem 7 If the functions p, q, r, s obey Eqs. (1)-(4) and $E$ is a solution to Eq. (33), then $L_A[E]$ and $L_B[E]$ are solutions to Eq. (33) as well.

Proof. Let us introduce the operator $H$ associated with Eq. (33), i.e. let
\[
H[E](z) = z^2 E''(z) + \big((\ell+1) z + \mu(1 - z^2)\big) E'(z) + \big(\lambda - \mu(\ell+1) z\big) E(z). \qquad (37)
\]
Composing it with the operator $L_A$, the expansion (38) of the slightly modified result of their combined action on an arbitrary holomorphic function $E$ can be obtained, provided the functions $E, p, q, r, s$ of the variable $z$ are holomorphic at (and in the vicinity of) $z = i$. Here the symbols $\Delta✪$, where $✪ \in \{p, q, r, s\}$, denote the differences of the left- and right-hand sides of Eqs. (1)-(4) considered as functions of $z$ (they were already used in the proof of Theorem 6), and $H' = d/dz \circ H$.
In the case of the operator $L_B$, the similar expansion is given by Eq. (39).
The symbols $\Delta✪$ have the same meaning as in Eq. (38). 'The diacritic mark' ð denoting the semi-monodromy transformation was also used in the proof of Theorem 6. It is worth reminding that $ð✪(z) = \lim_{\epsilon \nearrow 1} ✪(e^{i\epsilon\pi} z)$. The equalities (38), (39) follow from the identities (66) and (67), respectively, given in Appendix F. In turn, the theorem's assertion follows from Eqs. (38) and (39), since the fulfillment of Eqs. (1)-(4) implies $\Delta✪ = 0$, and the identical vanishing of $H[E]$ is equivalent to the fulfillment of Eq. (33), which had also been assumed.

Remark 9 The transformations realized by the operators $L_A$ and $L_B$ carry out the (lifted) replacements $z \mapsto -1/z$ and $z \mapsto -z$ of the arguments of the functions involved. There exists a third operator, which we denote $L_C$, also sending any solution to Eq. (33) to some of its solutions, utilizing the missing replacement $z \mapsto 1/z$ of arguments, expressing the composition of the preceding ones, and constituting in conjunction with them the Klein group of maps naturally acting on $\mathbb{C}^*$. $L_C$ is not linked to pqrs-functions and is well defined for any choice of the constant parameters. It can be represented by the formula (40).
In view of the nontrivial structure of the domain of solutions to Eq. (33), "the implementation" of the rule (40) is not unique. In particular, for one of them (the lift of) $+1$ is the fixed point of the map indicated by the argument replacement $z \mapsto 1/z$, whereas for the other one it is (the lift of) $-1$ which plays a similar role.
Theorem 8 The transformations of the space of solutions to Eq. (33) associated with pqrs-functions possess the properties of quasi-involutions similar to the ones found earlier in the case of integer order $\ell$ (cf. Ref. [14], Eqs. (34), (35)): the repeated application of $L_A$ reduces to multiplication by a constant factor (Eq. (41)), while the repeated application of $L_B$ reduces, up to a known constant factor, to the monodromy transformation $M$ (Eq. (42)).
Proof. The above claims follow from equalities of the form
\[
(L_A \circ L_A)[E](z) + e^{i\ell\pi} \{D\}\, E(z) = \big(s(z) + \mu z^{-2} q(z)\big) E(z) + \dots,
\]
involving arbitrary functions $E, p, q, r, s$ and their derivatives. These are, in turn, consequences of the identities (68) and (69), given in Appendix G.
Concerning the notations utilized therein, let us recall that $\{D\}$ denotes the right-hand side of Eq. (25) considered as a function of $z$. The symbols $\Delta✪$, $^{A}\Delta_✪$, and $^{B}\Delta_✪$, where $✪ \in \{p, q, r, s\}$, denote the differences of the left- and right-hand sides of Eqs. (1)-(4), of Eqs. (6)-(9), and of Eqs. (17)-(20), respectively. They are also considered as functions of $z$.
There are also two kinds of 'diacritic marks' in use. Of them, 'the accent' ð indicates the transformation of the function argument carrying out its continuous anti-clockwise rotation in the complex plane through the angle $\pi$. It was earlier named the semi-monodromy map. In Theorem 3 such a transformation is associated with the operator $M^{1/2}$. Evidently, if $\operatorname{Im} z < 0$ then $ð✪(z)$ is simply $✪(-z)$. However, if $\operatorname{Im} z \geq 0$ then the semi-monodromy transformation sends such an argument out of the subdomain $\mathbb{C}^*$ and this cannot be expressed by the inversion of the sign. It is worth noting here that $ðp(z^{-1})$ (see the last but one line in Eq. (43)) is well defined provided $\operatorname{Im} z > 0$. Indeed, then $\operatorname{Im} z^{-1} < 0$ and the argument of evaluation of the function $p$ when computing $ðp(z^{-1}) = \lim_{\epsilon \nearrow 1} p(e^{i\epsilon\pi} z^{-1})$ belongs to $\mathbb{C}^*$.
The second 'accent' ö has a similar meaning, but "the rotation angle" of the function argument is here twice as large, amounting to $2\pi$. Such a transformation looks like a full revolution in $\mathbb{C}^*$ around zero, but it does not lead to the identical map in view of the non-trivial structure of the domains of the functions we consider (which are distinct from the complex plane or any subset of the complex plane). Rather, it corresponds to the monodromy transformation.
For some reasons we had agreed above to consider pqrs-functions on their subdomain $\mathbb{C}^* = (\mathbb{C}\setminus\{0\}) \setminus \mathbb{R}_-$. Here, however, this is not enough and we are forced to introduce for a time a somewhat extended one. Indeed, if $z \in \mathbb{C}^*$ then the point of evaluation of a monodromy-transformed function does not belong to $\mathbb{C}^*$ due to the cut along the ray of negative reals, which the circular path of analytic continuation inevitably meets. "The minimally extended subdomain", where the monodromy map can still be consistently defined, is constructed, for instance, by means of the addition of another copy of $\mathbb{C}^*$ and the gluing of it to the original one along the opposite edges of their cuts (the two complementary ones remain free). Then if $z$ belongs to the "lower" (original) sheet of this "double $\mathbb{C}^*$" then the point of evaluation of the analytic continuation of the function to be monodromy-transformed belongs to the upper one, and in this way all the constituents of Eq. (44) can be consistently computed (and it is finally fulfilled).
The important circumstance is, however, that under the conditions of the theorem the evaluation of many of these functions, and the handling of the associated subtleties it implies, is superfluous. Indeed, the fulfillment of certain equations required by the theorem conditions means the vanishing of the expressions concerned.
Summary
We define a family of quads of holomorphic functions (referred to, for brevity, as pqrs-functions) as the non-trivial solutions to the system of linear homogeneous first-order ODEs (1)-(4). Each instance of such functions can be constructed as a solution of the Cauchy problem for initial data specified at any given point $z_0$ except zero. It is shown that, fixing the initial data at $z_0 = i$ and claiming fulfillment of the linear homogeneous constraint (5), one obtains pqrs-functions which obey the equalities (6)-(9) (Theorem 1). Similarly, if the initial data are specified at $z_0 = 1$ and obey thereat the two linear homogeneous constraints (11), (12), then the pqrs-functions obey the equalities (13)-(16) (Theorem 2). Lastly, if all three mentioned linear constraints (imposed at two distinct locations) are met, then the equalities (17)-(20) involving the semi-monodromy map take place as well (Theorem 3). This case is the most important, since for it the monodromy transformation can also be easily computed. It turns out to coincide with multiplication by a known numerical factor, showing that pqrs-functions are products of a certain power function and functions holomorphic on $\mathbb{C}\setminus\{0\}$ (instead of the universal cover of $\mathbb{C}\setminus\{0\}$).
Pqrs-functions have found application (in fact, arose) in the framework of the investigation of properties of solutions to the special double confluent Heun equation (33). Under the conditions assumed here, the operators $L_A$, $L_B$ defined by the formulas (36), (34), (35) turn out to define maps of the space of its solutions into itself (Theorem 7). Moreover, they possess quite remarkable composition properties. In particular, the operator $L_A$ is "almost involutive" (see Theorem 8, Eq. (41)), while $L_B$, being applied twice, reduces, up to a known constant factor, to the monodromy transformation (see Eq. (42)). Besides, they define automorphisms of the space of solutions to Eq. (33) (Corollary 9).
In the special case of integer values of the constant parameter $\ell$, functions almost identical to our pqrs-functions were originally introduced in Ref. [12]. The distinction of the functions with the same notations considered therein from the present ones reduces to different normalizations of the functions $p$ and $q$. It is worth mentioning that the variant of pqrs-functions considered in [12] deals exclusively with polynomials. Moreover, they are polynomial not only in $z$ but also in the parameters $\lambda$ and $\mu$ (while $\ell$ determines the polynomial degrees). Thus we may claim that in the case of a (positive) integer $\ell$ Eqs. (1)-(4) admit a polynomial solution.
A Identities leading to Eq.s (21)

Let us notice now that $\lim_{\epsilon \nearrow 1} {}^{\epsilon A}\Delta_✪(z)$ coincides with the functions $^{A}\Delta_✪(z)$ introduced at the beginning of the proof of Theorem 1 and involved in Eqs. (21). It is worth reminding that they were defined as the differences of the left- and right-hand sides of the equations (6)-(9). They are correctly defined if the functions p, q, r, s are holomorphic in the vicinity of $z = i$. Besides, for $\epsilon = 1$, the last summands in the right-hand sides of the equalities (45)-(48), which are proportional to either $(1 + e^{i\epsilon\pi})$ or $(1 + e^{-i\epsilon\pi})$, vanish. Finally, it remains to note that if the functions p, q, r, s obey Eqs. (1)-(4) then the differences $\Delta✪(z)$ become identically zero for all $✪ \in \{p, q, r, s\}$ and all the summands which contain them can also be dropped. After such simplifications, comparing the resulting form of Eqs. (45)-(48) with Eqs. (21), one easily finds that they coincide. Thus the equalities (21) hold true.
B Identities leading to Eq.s (22)
Eqs. (22) follow from the identities (50)-(53), which can be, in principle, verified by straightforward computations; they hold true for any functions p, q, r, s holomorphic, at least, in the vicinity of $z = 1$. Here the symbols $^{C}\Delta_✪(z)$, $✪ \in \{p, q, r, s\}$, used already in the proof of Theorem 2, stand for the differences of the left- and right-hand sides of Eqs. (13)-(16). The symbols $\Delta✪(z)$, $✪ \in \{p, q, r, s\}$, denote the differences of the left- and right-hand sides of the equations (1)-(4). Thus the equalities (50)-(53) signify the pairwise coincidences, upon simplification, of certain expressions constructed in two different ways from arbitrary holomorphic functions p, q, r, s and their first-order derivatives. Obviously, these expressions are correctly defined if the above four functions p, q, r, s are holomorphic in the vicinity of $z = 1$. Finally, if the functions p, q, r, s are not arbitrary but verify Eqs. (1)-(4), then the differences $\Delta✪(z)$ vanish and the identities (50)-(53) convert to Eqs. (22), which are therefore a direct consequence of Eqs. (1)-(4).
C Identities leading to Eq.s (23)
Eqs. (23), utilized in the proof of Theorem 3, can be obtained from the four identities (54)-(57), which are verifiable by straightforward computations. They express the derivatives $\frac{d}{dz}{}^{\epsilon B}\Delta_✪(z)$ through linear combinations of the differences $\Delta_✪$ evaluated at $z$ and at $e^{i\epsilon\pi} z$, of the functions p, q, r, s themselves, and of terms proportional to $(1 + e^{i\epsilon\pi})$ or $(1 + e^{-i\epsilon\pi})$, with coefficients involving $e^{i\ell\epsilon\pi}$, $\mu$, $z^2$, and $(\lambda + \mu^2)^{-1}$. Here $\epsilon \in [0,1]$ is the auxiliary real parameter, and the symbols p, q, r, s stand for arbitrary functions holomorphic in the vicinity of an arc of the circle connecting $e^{-i\epsilon\pi/2}$ with $e^{i\epsilon\pi/2}$ and passing in between them through $+1$ counter-clockwise. The functions $^{\epsilon B}\Delta_✪(z)$, $✪ \in \{p, q, r, s\}$, are defined by the formulas (58).
Thus the equalities (54)-(57) signify the pairwise coincidences, upon simplification, of certain expressions constructed in two different ways from arbitrary holomorphic functions p, q, r, s and their first order derivatives.
Let us now consider the case $\epsilon = 1$. The definitions (58) are pertinent if the domain of the functions p, q, r, s covers $\pm i$ and $1$, i.e., in particular, if they are holomorphic on the circular arc passing through $-i$, $1$ and $+i$. The solutions of the Cauchy problem for Eqs. (1)-(4) with initial data specified at $z = 1$ possess such a property. One has for them, by definition, $\Delta✪(z) = 0$. Besides, due to the above choice of $\epsilon$, the summands in the right-hand sides of Eqs. (54)-(57) involving either the factor $(1 + e^{i\epsilon\pi})$ or the factor $(1 + e^{-i\epsilon\pi})$ have to be discarded as well.
Taking the above simplifications into account, Eqs. (23) follow, since the argument at which the transformed pqrs-functions on the left in Eqs. (17)-(20) have to be evaluated is exactly the limit of $e^{i\epsilon\pi} z$ reached as $\epsilon \nearrow 1$ (provided $z$ belongs to the vicinity of $-i$, at least).
D Identities leading to Eq.s (24)

The expressions $^{C}\Delta_✪$, $✪ \in \{p, q, r, s\}$, were introduced in the proof of Theorem 2. They denote the differences of the left- and right-hand sides of Eqs. (13)-(16).
In each of the above four pairs of equalities (59)-(62), the first ones are merely the expansions of the corresponding definitions (58) with regard to the particular value of $z$ picked out above. On the contrary, the second equalities are "the genuine identities", in which the right-hand sides represent certain rearrangements of the left-hand ones, several constituents of which are aggregated into the expressions $^{C}\Delta_✪$. Thus Eqs. (59)-(62) express the coincidences, upon simplification, of some linear combinations of arbitrary fixed functions p, q, r, s evaluated at $z = e^{i\epsilon\pi/2}$ and at $z = e^{-i\epsilon\pi/2}$.
If $\epsilon = 0$ then the argument of all the pqrs-functions, and of the expressions $^{\epsilon B}\Delta_✪$ considered as functions of $z$, is $+1$. Let $\epsilon$ be further varied through the segment $[0,1]$. Then the arguments of the functions involved in Eqs. (59)-(62) move along circular arcs, either clockwise or counter-clockwise. The values the functions assume thereat can be regarded as the result of their analytic continuation from the vicinity of $+1$. At the end points of the noted arcs corresponding to $\epsilon = 1$, the arguments of the functions become either $e^{i\pi/2} = i$ or $e^{-i\pi/2} = -i$, while the expressions $^{\epsilon B}\Delta_✪$ on the left turn into $^{B}\Delta_✪ = \lim_{\epsilon \nearrow 1} {}^{\epsilon B}\Delta_✪$ evaluated at $i$. Besides, it holds $\{\text{-}1\} = \{[\text{-}1]\} = -1$, $\{0\} = 0$, $\{2\} = 2$ thereat.
Taking all these simplifications into account, one finds that in the particular case under consideration the equalities of the first and the last expressions in each of the formulas (59)-(62) combine to Eqs. (24).
E Identities utilized in the proof of Theorem 6
The following two identities, (64) and (65), verifiable by straightforward computation, hold true for arbitrary holomorphic functions p, q, r, s.
Here $\epsilon \in [-1, 1]$ is the auxiliary real parameter and $\{D\}$ denotes the right-hand side of Eq. (25). Eqs. (49) play the role of definitions of the symbols involved; the renderings of all these abbreviations, as they stand, are considered as functions of $z$. Here we also employ in the recording some tricks allowing a somewhat more compact presentation of the formulas than in the preceding Appendices. In particular, 'the accent' $^{\epsilon}$ð denotes the transformation of rotation of the function argument through the angle $\epsilon\pi$, i.e. $^{\epsilon}ð✪(z) = ✪(e^{i\epsilon\pi} z)$. Note that the arguments of the functions are displayed in Eq. (64) (except for $\{D\}$) but are suppressed in Eq. (65), because in the latter case the arguments of all the functions coincide and are equal to $z$.
To guarantee the meaningfulness of the formulas (64) and (65), one has to ensure that the values of the arguments at which the functions involved in them are evaluated belong to the appropriate domain. These values depend on $\epsilon$. In particular, if $\epsilon = 0$ then all the functions are evaluated at either $z$ or $1/z$. In such a case one may take any $z \in \mathbb{C}^*$, for which both formulas (64), (65) prove to be correctly defined, and the equalities they represent hold true. Further, starting with $\epsilon = 0$, we carry out analytic continuations of all the constituents of Eqs. (64) and (65), varying $\epsilon \in [0,1]$ from 0 to 1. In the limit $\epsilon \nearrow 1$ (i.e. at the end point of the arc of analytic continuation) the expressions denoted $^{\epsilon A}\Delta_✪$ and $^{\epsilon B}\Delta_✪$ become identical to the expressions $^{A}\Delta_✪$ and $^{B}\Delta_✪$, respectively (see the discussion following Eq. (30) and Eq. (32)), while the transformation indicated by 'the accent' $^{\epsilon}$ð converts to the semi-monodromy transformation denoted earlier by 'the accent' ð (indicating application of the operator $M^{1/2}$). Inspecting the result of the outlined analytic continuation along the image of the segment $[0,1] \ni \epsilon$, one finds that this is nothing else but the equations (30) and (32), provided that $z$ belongs to the vicinity of $+i$ in the former case and $\operatorname{Im} z < 0$ in the latter one.
G Identities utilized in the proof of Theorem 8

If $\epsilon = 0$ then all the instances of the functions $E, p, q, r, s$ and their derivatives involved in Eqs. (68), (69) are evaluated either at $z$ or at $1/z$. As a consequence, all the constituents of these formulas are well defined for arbitrary $z \in \mathbb{C}^*$ and the equalities they represent hold true. Next, we allow the parameter $\epsilon$ to vary through the segment $[0,1]$ and carry out analytic continuation along the corresponding curves (in fact, circular arcs) in the function domains. At their end points corresponding to $\epsilon = 1$ the coefficients represented by the abbreviations $\{\text{-}1\}$, $\{[\text{-}1]\}$, $\{0\}$, $\{2\}$ acquire the values $-1$, $-1$, $0$, $2$, respectively, see Eqs. (70). This allows us, in particular, to ignore the last lines in both formulas (68) and (69). Simultaneously, the effect of the rotation of the function arguments tagged by 'the accent' $^{\epsilon}$ð turns into the action of the semi-monodromy operator $M^{1/2}$ (see Theorem 3), which we indicate also by 'the accent' ð over the function symbol, see, e.g., the proof of Theorem 8. As to the exceptional symbols $^{\epsilon}ð\,{}^{B}\Delta_✪$, $✪ \in \{p, q, r, s\}$, it is easy to see that at the end point of the curve of analytic continuation they become equal to the expressions denoted in Eq. (44) by the symbols $ð\,{}^{B}\Delta_✪$. A separate note on the effect of 'the accent' $^{2\epsilon}$ð is necessary. In the limit $\epsilon \nearrow 1$ it also 'rotates' the argument of the function to be transformed, but now the rotation angle amounts to $2\pi$, meaning, in a sense, a full revolution. It had been noticed that such transformations are termed monodromy. We denoted the operator carrying out the monodromy transformation by the symbol $M$, but in some formulas (e.g. in Eq. (44)) it is also indicated by 'the diacritic mark' ö.
It has also to be noted that in the case of the monodromy transformation some precaution concerning the structure of the domain of the function on which it acts needs to be taken. This point is briefly discussed in the proof of Theorem 8. Now, collecting all the modifications of the formulas (68) and (69) arising when the analytic continuation corresponding to $\lim_{\epsilon \nearrow 1}$ has been carried out, one finds that they finally convert to Eqs. (43) and (44), respectively. | 2021-02-25T02:15:55.385Z | 2021-02-23T00:00:00.000 | {
"year": 2021,
"sha1": "de3f91321d0081adfe71fb1556f1160d5a47754a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fedaa48aa8358fff97353cd830959d759cffecc9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
4373693 | pes2o/s2orc | v3-fos-license | Peptidylarginine Deiminases—Roles in Cancer and Neurodegeneration and Possible Avenues for Therapeutic Intervention via Modulation of Exosome and Microvesicle (EMV) Release?
Exosomes and microvesicles (EMVs) are lipid bilayer-enclosed structures released from cells and participate in cell-to-cell communication via transport of biological molecules. EMVs play important roles in various pathologies, including cancer and neurodegeneration. The regulation of EMV biogenesis is thus of great importance and novel ways for manipulating their release from cells have recently been highlighted. One of the pathways involved in EMV shedding is driven by peptidylarginine deiminase (PAD) mediated post-translational protein deimination, which is calcium-dependent and affects cytoskeletal rearrangement amongst other things. Increased PAD expression is observed in various cancers and neurodegeneration and may contribute to increased EMV shedding and disease progression. Here, we review the roles of PADs and EMVs in cancer and neurodegeneration.
The Interplay of PADs and EMVs in Cancer
The presence of PADs has been confirmed in EMVs released from various cancer cells [90]. Based on a search in the Vesiclepedia dataset (http://www.microvesicles.org/), using gene symbol identifiers, PADs have been reported in EMVs from breast, colon, kidney, lung, melanoma, ovarian, and prostate cancer cell lines [90], as well as colorectal cancer cells [91]. It may be postulated that the increased EMV release observed in cancers is partly driven by elevated PAD expression and that PAD enzymes, which are amongst the cargo packaged in EMVs, are carried into plasma where they can deiminate target proteins [92] and thus aid indirectly in the spread of cancer.
In metastatic prostate PC3 cancer cells, both PAD2 and PAD4 isozymes were found to be elevated and to undergo increased nuclear translocation in correlation with increased EMV release [26].
Both PAD2 and PAD4 have been shown to translocate to the nucleus in response to TNFα upregulation [93][94][95]. As part of the inflammatory response, it may be postulated that increased EMV release also causes upregulation of TNFα which may lead to a feed-back loop of PAD translocation and EMV shedding in an ongoing inflammatory environment.
Which of the PAD isozymes is the main player in EMV release, and which are the critical respective target proteins for successful MV and/or exosome shedding, has to be further investigated. The different PADs may well be either selectively or collectively involved, with different, albeit equally important, roles. In addition, the specific effect of PAD isozymes involved in EMV biogenesis will need to be taken into consideration dependent on tumour type. The selectivity of potential EMV inhibitors and their combinatory application with chemotherapeutic agents is thus of great interest. Most potential EMV inhibitors tested so far have displayed a preferential tendency for inhibition of either MVs or exosomes [22,34,59,61,[96][97][98], and thus the effect of the PAD inhibitor Cl-amidine observed on both vesicle types indicates their potential usefulness. A combination of selective EMV inhibitors may indeed encourage re-testing of chemotherapeutic drugs currently not in favour due to severe side effects and poor effectiveness, as for example 5-FU treatment of prostate cancer [99].
Deiminated Target Proteins and PAD-Interacting Proteins Identified in EMV Biogenesis
Depending on the target protein preference of PAD2 and PAD4, EMV release may occur via cytoskeletal and/or epigenetic pathways, as the different PAD isozymes have indeed demonstrated distinct substrate preferences, with PAD4 showing more restrictive substrate specificity compared to PAD2 [100][101][102][103]. While PAD4 prefers sequences with highly disordered conformation, PAD2 has a broader sequence specificity, which might partly be reflected by the broader tissue expression of PAD2 [104]. PAD2 deiminates β- and γ-actins [100] and has been shown to affect histone H3 deimination [84], while PAD4 has been shown to deiminate histones H3 and H4 [104,105] and to regulate histone arginine methylation levels [80].
Targets of PAD activation observed in EMV release include cytoskeletal actin, which contributes to the reorganisation of the cytoskeleton necessary for successful vesicle release [15]. The presence of deiminated β-actin, increased in cells that were stimulated for EMV release, was markedly diminished after pre-treatment with PAD inhibitor [26]. β-Actin, one of six different human actin isoforms, is a cytoskeletal protein involved in cell structure and integrity, cell migration, and movement [106]. This provides evidence for the importance of PAD-mediated deimination of target proteins that are involved in cytoskeletal rearrangement-such as β-actin, actin α1, and glyceraldehyde-3-phosphate dehydrogenase (GAPDH)-as an essential step for successful EMV biogenesis, as the process of multivesicular body recruitment to the plasma membrane to release exosomal cargo likely involves actin and microtubular elements of the cytoskeleton [107]. During vesicle formation, both β- and F-actin stress fibres play important roles in the redistribution of the actin cytoskeleton through the activation of Rho/Rho-associated kinase (ROCK) pathways during apoptosis and thrombin stimulation [14]. Deiminated β- and γ-actins have indeed also previously been detected in sera and synovial fluid from RA patients [108] and been identified as a substrate for PAD2 in ionomycin-activated neutrophils [100]. Other deiminated protein targets identified in association with EMV release included GAPDH, which is reported to be exosome associated ([109]; http://www.exocarta.org). It is a multifunctional enzyme involved in glycolysis, nuclear functions such as transcription and DNA replication, as well as apoptosis [110]. GAPDH has also been shown to contribute to the regulation of intracellular Ca2+ levels via binding to integral membrane proteins, such as the inositol-1,4,5-triphosphate receptor (IP3R) and the sarcoplasmic reticulum Ca2+ (SERCA) pump [111,112]. Cytosolic GAPDH also catalyzes microtubule formation and polymerization by binding the cytoskeletal protein tubulin [113] and is associated with endoplasmic reticulum (ER) to Golgi vesicular transport [114]. Based on a STRING analysis (https://string-db.org/), putative binding partners of PADI2 and PADI4 were identified and found to be present in EMVs based on a search by gene symbol in the Vesiclepedia protein data set (Figure 1).
These included histone H3, known to be deiminated [84,104,105,115]; p53, which is known to be regulated by PAD4 [74,116]; interleukin 6 (IL6), one of the major cytokines in the tumour microenvironment [117]; epidermal growth factor (EGF), which is a crucial mitogenic factor, including in prostate cancer [118]; Tripartite Motif Containing (TRIM) 9 and TRIM 67, which are associated with microtubule binding [119], lung cancer [120], and neuronal differentiation [121]; Arginase 2 (ARG2), which has roles in suppressing macrophage cytotoxicity and myeloid-derived suppressor cell function [122] and is elevated in breast cancer [123]; Zinc-finger and BTB domain-containing protein 17 (ZBTB17/Miz1), which modulates Myc, a multifunctional nuclear phosphoprotein involved in cell cycle progression, apoptosis, and cellular transformation, which is enhanced in tumours [124]; Adenosine Deaminase, RNA Specific B1 (ADARB1), which is overexpressed in various cancer cell types and transformed stem cells [125]; Annexin A4 (ANXA4), the upregulation of which promotes the progression of tumours and chemoresistance of various cancers [126]; and Major histocompatibility complex, class II (HLA-DRB1), which, besides known functions in autoimmunity, including the generation of anti-citrullination antibodies [127], is also associated with carcinoma [128].
PADs in Central Nervous System (CNS) Damage and Neuroprotective Effects of PAD Inhibitors

In two animal models of acute CNS damage, pharmacological pan-PAD inhibition has been shown to be neuroprotective in vivo following administration straight after insult and for up to two hours post-injury, indicating a clinically relevant time window for intervention [53,54,129]. Firstly, in a spinal cord injury model, a significant reduction was observed in infarct size, accompanied by reduced neuronal cell death and histone H3 deimination, compared to non-treated control injuries [53]. Secondly, two murine models of neonatal hypoxic ischaemic encephalopathy (HIE) showed similar neuroprotective effects as estimated by infarct volume analysis, reduced cell death, and histone H3 deimination, and in addition a significant impact on neuroinflammatory responses as reflected in reduced microglial activation in all affected brain regions [54]. The fact that these neuroprotective effects of PAD inhibitors are translatable between CNS injury and animal models is indeed promising for effective application also in other cases of neuronal damage. Interestingly, while increased protein deimination has also been detected in the pathology of traumatic brain injury [130], EMV release has been associated with cerebral hypoxia induced by acute ischaemic stroke [131,132], and mesenchymal stromal cell-derived EMVs have recently been shown to protect the foetal brain following hypoxia-ischaemia in an experimental ovine model [133], and to be neuroprotective in stroke [134,135] and traumatic brain injury [136] rat models. The significance of EMV release in relation to pharmacological PAD manipulation requires further investigation in acute CNS damage.
EMVs in Neurodegenerative Diseases
EMVs are increasingly being associated with neurodegenerative disease progression and pathologies [137][138][139][140][141][142][143]. In the CNS, EMVs have been shown to be produced by several cell types including neurones, microglia, oligodendrocytes, astrocytes, and embryonic neural stem cells [8,[144][145][146] and to play important roles in the development and function of the nervous system [147]. Roles for EMVs in neurodegenerative disease progression include intercellular communication and neuroinflammation due to transport of parent-cell specific cargo that can be translated in recipient cells and also affect gene regulation [148][149][150]. In Amyotrophic Lateral Sclerosis (ALS), exosomes have for example been shown to export misfolded mutant superoxide dismutase 1 (SOD1) [151,152]; in relation to ALS and Frontotemporal dementia (FTD) to export TAR DNA-binding protein 43 (TDP-43) [153,154]; and there is increasing evidence emerging for critical roles for miRNA transport in the pathogenesis of FTD-ALS [155,156]. In tauopathies, EMVs have been shown to export phosphorylated tau [157,158]; in Parkinson's disease (PD), exosomes were shown to export α-synuclein and leucine-rich repeat kinase 2 (LRRK2) [159][160][161]; and in Alzheimer's disease (AD), they export amyloid β (Aβ) [162,163]. All of these proteins form aggregates involved in the disease pathologies [164]. As EMVs have the capability to travel further via the blood or cerebrospinal fluid, misfolded proteins may spread via this pathway in a prion-like manner [165][166][167][168][169][170]. In addition, functional effects of such a protein transport have been indicated for Aβ, which progressively accumulates in EMVs with age, while the β-site cleavage of amyloid precursor protein (APP) has been reported to occur inside EMVs [171]. Also, the phosphorylation of tau differs in exosomes compared to total cell lysates, indicating functional consequences for its seeding capability [157]. In AD, neuroinflammation has been linked to circulating TNFα [172][173][174], which causes nuclear translocation of PADs [94,95], and to neutrophil extracellular trap formation [175], which is PAD4-dependent [38,94] and causes externalization of deiminated histones [176] and release of active PAD enzymes [177]. In addition, in PD, α-synuclein induces TNF-α containing exosomes from microglia [161] while TNF-α has been shown to promote EMV shedding from endothelial cells [162]. In light of this increasing evidence for crucial roles of EMVs in neuroinflammation, and the transfer and spreading of neurodegenerative protein aggregates alongside other cargo, the mechanisms of EMV biogenesis and routes of modulation are pivotal. It has also to be considered that the primary changes in most neurodegenerative diseases occur in specific brain locations followed by propagation into well-defined brain regions. The levels of secretion and cargo composition may thus not be homogenous among brain regions [142].
Although some deiminated target proteins have been described, most remain to be identified. Using proteomic analysis of deiminated proteins in the injured CNS, several proteins with neurodegenerative implications were identified, including proteins with roles in neuroinflammation and perivascular drainage of Aβ [53,54,193]. In AD patients, β-amyloid has been shown to be deiminated [44,181]. In hippocampal lysates from AD patients, glial fibrillary acidic protein (GFAP), an astrocyte-specific marker protein, and vimentin were identified as deiminated proteins, and the deimination of GFAP was shown to be PAD2-specific [194]. In vitro studies demonstrated that amyloid peptides bind to PAD2, resulting in catalytic fibrillogenesis and formation of insoluble fibril aggregates [42]. In PD brain samples, increased levels of total protein deimination and deimination-positive extracellular plaques were observed [178]. Mutated misfolded α-synuclein protein has been related to increased protein deimination, amyotrophic lateral sclerosis (ALS) spinal cords show an increase in deiminated proteins [44], and Creutzfeldt-Jakob Disease (CJD) brain samples indicate roles for deiminated enolase [195]. In AD brains, pentatricopeptide repeat-containing protein 2 (PTCD2), a mitochondrial RNA maturation and respiratory chain function protein [196], is present in a deiminated form and is an antigen target of an AD diagnostic autoantibody. There are thus indications that disease-associated autoantibodies are generated due to the production and release of deiminated proteins and deiminated protein fragments, which may be released from damaged cells in regions of pathology [197,198]. In AD, both PAD2 and PAD4 were shown to be expressed in the cerebral cortex and hippocampus, the brain regions most vulnerable to AD pathology, with PAD2 localized in activated astrocytes and PAD4 selectively expressed in neurones [197]. Evidence for increased PAD expression with progression of neurodegenerative disease has also been obtained by analysis of whole genome microarrays from mouse models carrying TAU and APP+PSEN1 mutations. A significant increase of PADI2 transcription was found in cortex and hippocampus in both mutants with disease progression compared to age-matched controls [193]. PAD4 expression has been shown to co-localize with amyloid-β-42 in pyramidal neurones in the cerebral cortex and in large hilar neurones of the hippocampus, which were also surrounded by activated astrocytes and microglia. These neurones contained cytoplasmic accumulations of deiminated proteins [197]. Using iPSC neuronal models derived from fibroblasts from patients [199] carrying the FTD/ALS-associated valosin-containing protein mutations VCPR155C and VCPR191Q, expression of both PAD2 and PAD4, accompanied by significantly increased pan-protein deimination, has been observed compared to control (non-mutation carrying) neurones, with significant increases in histone H3 deimination in VCPR155C-carrying neurones [193]. Similar changes were also observed for α-synuclein triplication [200] compared to control neurones [193]. The release of deiminated proteins from necrotic neurones has been thought to cause an increased exposure of deiminated neuronal proteins to the immune system. In addition, the continual return of cerebrospinal fluid to the circulation via the arachnoid villi, containing modified deiminated proteins and protein fragments, has been suggested to be a key step in the ongoing pathology due to the generation of autoantibodies [197].
PADs are thus expressed in neurones residing in brain regions that are engaged in neurodegenerative pathological changes and inflammatory changes such as reactive astrogliosis and microglial migration and invasion. This brain-region specific increase observed in PAD expression may affect local exosome or microvesicle release specifically, contributing to the spread of pathology in these regions. Figure 2 summarises the proposed interplay of PADs and EMVs in neurodegenerative disease pathologies.
Figure 2. Mechanisms of peptidylarginine deiminases (PADs) in central nervous system (CNS) injury and neurodegenerative pathologies and the proposed effect of PAD inhibitors. Upon CNS injury (hypoxic ischaemic encephalopathy, HIE), Ca2+ entry is facilitated via the reversal of the Na+/Ca2+ exchanger due to overactivation of the Na+/H+ exchanger (NHE). Ca2+ entry can also be facilitated via membranolytic pathways including the complement membrane attack complex (MAC) and perforin. Increased cytosolic Ca2+ triggers the neurotoxic cascade, which includes activation of the Ca2+-dependent PAD enzymes. Neurodegenerative disease mutations cause protein aggregation and impaired calcium buffering, which activates the downstream PAD cascade. Both in acute CNS injury and in neurodegeneration, PAD activation causes protein deimination and further protein misfolding, affecting cell motility, autophagy, phagoptosis, and mitochondrial function, leading to neurotoxic events. Deiminated neo-epitopes and leakage of deiminated proteins from dying cells contribute to neuroinflammation, which in turn may upregulate TNFα; this causes nuclear translocation of PADs, leading to histone deimination and also to the formation of neutrophil extracellular traps (NETosis). PAD-mediated cytoskeletal protein deimination and nuclear PAD translocation, which can affect histone deimination, contribute to EMV release, resulting in export of misfolded proteins, DNA, RNA, miRNAs, enzymes, and other EMV cargo that can contribute to pathologies. The PAD inhibitor Cl-amidine targets PAD activation and reduces deimination of target proteins and neuroinflammatory responses. Cl-amidine also significantly reduces EMV shedding, resulting in decreased transport of noxious EMV cargo (red arrows emphasise the main events associated with PAD activation and PAD inhibition that affect EMV release; blue arrows indicate additional downstream changes due to PAD-mediated protein misfolding; based on [26,129]).
Conclusions
Recent studies have emphasized roles for both EMVs and PAD enzymes in cancers and neurodegeneration. Critical roles for PADs and their pharmacological inhibition have been established in cancers and neuroinflammation. PAD-mediated mechanisms have been shown as a novel mediator in the biogenesis of EMVs, which may contribute in part to increasing EMV shedding from cancer cells and act as a protective mechanism to expel chemotherapeutic drugs. In the context of neurodegeneration, EMVs are increasingly implicated in the spread of pathologies via transfer of miRNAs and misfolded proteins. While Cl-amidine [201] remains the most used experimental pan-PAD inhibitor to date, the therapeutic potential and generation of second generation and selective isozyme-specific PAD inhibitors is receiving ever increasing attention [45,49,96,[202][203][204][205][206][207]. The use of targeted isozyme-selective PAD inhibitors in synergy with other EMV modulators-aimed at either exosomes, MVs, or both populations in conjunction-present promising combinatory therapies for both cancers and neurodegenerative diseases.
Conflicts of Interest:
The authors declare no conflicts of interest. | 2017-07-25T02:36:54.655Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "f703214d37e2ccb69c192d6ce357359937d037c4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/18/6/1196/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5e7e82d70e1807ac6d6d22cb539e00dad5adca61",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55411822 | pes2o/s2orc | v3-fos-license | Kinetics of the aerobic decomposition of Talauma ovata and Saccharum officinarum
The aim of this study is to evaluate the kinetics of aerobic decomposition of Saccharum officinarum and Talauma ovata leaves. For each species, decomposition chambers (leaves and water) were set up, which were maintained under controlled conditions. On each sampling day (1, 7, 15, 30, 39, 58, 72 and 90 days), the concentrations of total organic carbon, pH and electrical conductivity (EC) were determined in the dissolved fraction, while the mass and cell wall fractions (CWF) were determined in the particulate fraction. The pH stabilization of the chambers with T. ovata and S. officinarum leaves occurred in an alkaline (ca. 8-8.5) and a close-to-neutral (ca. 7-7.5) environment, respectively. The EC values were on average 1.6 times higher in incubations with T. ovata leaves. The mass loss did not differ between the species (mean = 53.85%); however, the decay coefficient was higher for S. officinarum (k4 = 0.007 day⁻¹) than for T. ovata (k4 = 0.005 day⁻¹) leaves. The CWF mass loss (mean = 50.16%) and its decay coefficient (0.0090 day⁻¹) were similar. S. officinarum decomposed faster due to its high concentrations of energetic compounds of interest to the microbiota. The slower decomposition of T. ovata may have occurred due to the presence of secondary compounds with negative effects on the microorganisms.
Introduction
Riparian vegetation is the transition zone between the terrestrial and aquatic ecosystem (RICHARDSON et al., 2007). This zone has functions of great importance to the environment: (i) avoiding the erosion of stream banks and the consequent widening of the channel; (ii) shaping the channel morphology by the heterogeneity introduced via plants or large woody debris that change the water flow direction; (iii) altering channel hydraulics, also by large woody debris structures, reducing or increasing water velocity and changing its residence time; (iv) impacting water quality, either by acting as filters, reducing nutrient and pesticide movements into rivers by taking up and storing them, or even by releasing nutrients through decomposition or exuding components; (v) controlling the stream microclimate via shading and evapotranspiration; (vi) working as ecological corridors, enabling biological connections through environmental gradients and (vii) providing habitat, refuge and food for the fauna (NAIMAN; DÉCAMPS, 1997; RICHARDSON et al., 2007; TABACCHI et al., 2000). However, one of the most relevant functions is the supply of organic matter into the water. In low order forested streams, allochthonous matter is the most important source of energy to support the biotic communities inhabiting the site (ABELHO, 2001). This occurs because the dense canopies shade the stream, reducing the penetration of solar radiation, promoting low temperatures and consequently limiting primary production (ABELHO, 2001). In many cases, leaves are the most abundant fraction of the allochthonous particulate organic matter (e.g., GONÇALVES JUNIOR et al., 2006a). Thus, leaf litter breakdown is a fundamental function performed in streams (PASCOAL et al., 2005). It depends on both litter quality (e.g., leaf chemistry such as secondary compounds - tannins and lignins for example - and nutrient concentrations) and stream characteristics (e.g., temperature, pH, specific conductivity, salinity, total dissolved solids and nutrient concentrations), which affect biofilm formation, microbial decomposition and invertebrate colonization (LEROY; MARKS, 2006; TREVISAN; HEPP, 2007; WRIGHT; COVICH, 2005).
The leaf litter breakdown in streams is characterized by three distinct phases, which act simultaneously: leaching, conditioning and fragmentation (GESSNER et al., 1999). Leaching is the release of soluble leaf constituents, which is generally quick, accounting for a substantial reduction in initial mass (ABELHO, 2001; GESSNER et al., 1999). Conditioning is the colonization of leaf litter by microorganisms that enhance breakdown by grinding, metabolizing and incorporating leaves into secondary production (ABELHO, 2001). Microorganisms also increase detritus palatability for invertebrate shredders, although leaf decomposition does not necessarily end in the feeding of shredders (GESSNER et al., 1999). The microbial community is basically composed of fungi and bacteria (GONÇALVES JUNIOR et al., 2006b). However, fungi, especially aquatic hyphomycetes, are of greater importance than bacteria in this process in terms of biomass and activity (ABELHO et al., 2005; GULIS; SUBERKROPP, 2003; HIEBER; GESSNER, 2002; PASCOAL; CÁSSIO, 2004). Finally, fragmentation can occur in two distinct ways: (i) physical fragmentation occurs through abrasion and shear stress caused by the flowing water, and (ii) biotic fragmentation occurs through microbial enzymatic degradation and the feeding of shredders, which transform coarse into fine particulate organic matter (ABELHO, 2001; GESSNER et al., 1999; GRAÇA, 2001). Subsequently, the dissolved and fine particulate organic carbon is converted into CO2 and other inorganic compounds (mineralization) by oxidation (CUNHA-SANTINO; BIANCHINI JUNIOR, 2000; GESSNER et al., 1999).
Although riparian and riverine systems have always played a fundamental role in human life, providing the most diverse ecosystem services, these systems are subject to anthropogenic degradation all over the world (KYLE; LEISHMAN, 2009). Inappropriate agricultural practices, for example, have led to the loss of riparian vegetation and its replacement by monocultures of great economic interest, such as sugar-cane (Saccharum officinarum). This replacement modifies the quality and quantity of matter entering the stream, consequently affecting its communities and functional processes (e.g., BELTRÃO et al., 2009; CORBI; TRIVINHO-STRIXINO, 2008). Brazil is the largest sugar-cane producer in the world, and the crop is cultivated mainly in the southeastern and northeastern parts of the country (MORIYA et al., 2007). In São Paulo State, particularly in recent years, this plant has been extensively cultivated and usually replaces the original riparian forest. Talauma ovata is among the species threatened by this process. It is a late secondary or climax plant particularly found in the Atlantic Rain Forest, with substantial representation in the gallery forests of the Brazilian Savanna and in wetland environments (LORENZI, 2002).
We hypothesized that the decomposition of T. ovata leaves is slower than that of S. officinarum leaves, since T. ovata is rich in secondary compounds (STEFANELLO et al., 2005), which can have antimicrobial action. Taking this into account, the aim of this study is to describe the kinetics of the aerobic decomposition of Talauma ovata and Saccharum officinarum leaves. This was done by analyzing the particulate organic carbon decay, the carbon balance and the release of hydrosoluble compounds from the leaves under controlled conditions.
Experimental procedures
Talauma ovata (Magnoliaceae) is a perennial species (ANTUNES; RIBEIRO, 1999) with glabrous leaves and reticulate nervure, simple blades with entire margins and an acute apex and base. Its leaves were collected on the banks of the Espraiado stream at the beginning of flowering, i.e., in the dry season. Leaves at the senescence stage were taken directly from the adult plant, just before abscission. Saccharum officinarum (Poaceae) has hairy leaves (silica) with parallel nervure, simple blades with ciliate margins, an invaginating base and an acute apex. Its leaves were collected in a plantation located between the cities of Araraquara and Ibaté (21°52'S and 48°0.5'W), also in the dry season, when the plant was in the ripening period (i.e., moments before harvest). After collection, the leaves of both species were washed in running water in order to remove any material that interferes with the gravimetric method (e.g., inorganic material, small organisms, animal feces). Afterwards, they were dried in an oven at 45°C until constant mass was reached and then fragmented (Ø = 4.01 ± 1.21 cm).
For each species, decomposition chambers (n = 24) were set up with ca. 0.5 g of leaf fragments and 50 mL of water sampled from the Espraiado stream. This water was previously filtered using a cellulose acetate membrane (pore Ø = 0.45 μm; Millipore) to remove all particulate organic material. The chambers were maintained in the dark under aerobic conditions at 22.6°C for 90 days. On every sampling day (1, 7, 15, 30, 39, 58, 72 and 90 days), three chambers of each resource were fractionated into particulate and dissolved fractions through a nylon mesh (Ø = 400 μm). In the dissolved fraction, the following were determined: (i) the concentrations of total organic carbon (TOC), by combustion and infrared detection (Shimadzu TOC-5000A); (ii) the pH values, using the potentiometric method (Qualxtron, model 8010); and (iii) the electrical conductivity (EC) values, also using the potentiometric method (Digimed, model DM3). The remaining particulate organic matter (POM) was dried at 45°C to a constant weight, and its mass was determined by gravimetry (WETZEL; LIKENS, 1991). The cell wall fraction (CWF; lignin, cellulose and hemicellulose) of the POM was determined by the modified method proposed by Van Soest and Wine (1967). The POM was converted to a carbon basis (POC) using a factor of 0.40. This value represents the mean obtained from a compilation of 45 studies of different species conducted by Bianchini Junior and Cunha-Santino (2008).
Data treatment
The temporal variations of pH, EC and mass loss of the two species were tested using the Shapiro-Wilk normality test. For the data that presented a normal distribution (mass loss), Student's t-test was then applied. For the data with a non-normal distribution (pH and EC), the Kruskal-Wallis test was applied.
The half-life times (t1/2) of the decomposition processes of T. ovata and S. officinarum were calculated using Equation 5.
For the temporal variations of CWF, a first-order kinetic model (single exponential) was applied.
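For reference, a minimal sketch of the fitted forms is given below, assuming the standard first-order single- and double-exponential parameterizations implied by the coefficients reported in the Results (Equation 5 itself is not reproduced in this excerpt; for any first-order coefficient k, the half-life follows directly, e.g., ln 2 / 1.071 day⁻¹ ≈ 0.6 day, matching the value reported below, with small discrepancies attributable to rounding of the reported coefficients):

\[
POC(t) = POC_{LS}\,e^{-k_T t} + POC_R\,e^{-k_4 t},
\qquad
CWF(t) = CWF_0\,e^{-k t},
\qquad
t_{1/2} = \frac{\ln 2}{k}
\]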
Results
The pH of the incubations with T. ovata leaves did not differ statistically from that with S. officinarum leaves (Kruskal-Wallis test, p = 0.0934). Initially, the dissolved fraction of both species showed an acidic character (pH 5.18). Then, the pH of the chambers with T. ovata leaves increased, reaching its maximum value (8.53) on the 39th day of decomposition, and afterwards tended to stabilize in a basic pH medium (mean = 8.28). The pH of the chambers with S. officinarum leaves tended to stabilize from the 30th day onward at close to neutrality (mean = 7.37), reaching its maximum value (7.49) on the 72nd day of decomposition (Figure 1).
The electrical conductivity (EC) of the incubations with T. ovata and S. officinarum leaves differed statistically (Kruskal-Wallis test, p = 0.004718), and the values obtained for T. ovata were 1.6 times higher than those obtained for S. officinarum. The initial water EC value was 19.18 μS cm⁻¹ for both species. After the first day, this value increased dramatically, reaching 797 and 559 μS cm⁻¹ in the incubations with T. ovata and S. officinarum leaves, respectively. The EC of the incubation with T. ovata leaves increased progressively, reaching its maximum value (1134 μS cm⁻¹) on the 30th day, and subsequently began to decrease gradually, arriving at 888 μS cm⁻¹ at the end of the experiment (90th day). In the decomposition of S. officinarum leaves, the maximum EC value (718 μS cm⁻¹) was found on the 58th day, decreasing from then on and reaching 532 μS cm⁻¹ at the end of the experiment (90th day) (Figure 1). The DOC presented an increase of 15.69% in the incubation with T. ovata leaves and of 9.16% in that with S. officinarum leaves after the first day, which are the maximum values recorded. From then on, the DOC concentrations decreased gradually, reaching their minimum values on the 90th day of the experiment (1.22% for the incubations with T. ovata leaves and 1.59% for those with S. officinarum leaves) (Figure 2). T. ovata leaves showed a loss of 54.63% of their initial mass after the 90 days of the experiment. This value was very close to that lost by S. officinarum leaves in the same period (53.07%). The POC decay (mass loss) was not statistically different between the two species (t-test, F = 1062, p = 0.4671) (Figure 2). In general, the kinetic model suggests that the particulate organic carbon (POC) from the selected resources had two fractions: a labile and/or soluble fraction (POC_LS) and a refractory one (POC_R). The POC_LS is represented by the fast mass loss during the first days of decomposition through the leaching process, while the POC_R had a slower decomposition. T. ovata leaves showed a POC_LS content of 30.11%, almost twice the value of S. officinarum leaves (16.15%). Due to the more labile character of T. ovata leaves, 20.23% of their initial mass was lost after the first day of decomposition, double the amount lost by S. officinarum (10.63%). A higher content of POC_R was consequently shown by S. officinarum (83.82%, compared to 69.88% for T. ovata leaves). The percentage of POC_R was, on average, 53.75% higher than that of the POC_LS fraction. The parameter values obtained for the kinetic model are shown in Table 1.
The values of the global decay coefficients of POC_LS (k_T) for T. ovata (1.071 day⁻¹) and S. officinarum (0.954 day⁻¹) leaves were high and very close, with the value for T. ovata leaves slightly higher. The corresponding t1/2 values were 0.6 and 0.7 day, respectively. The POC_R mineralization coefficients (k_4) were low, and in this case the mass loss of S. officinarum leaves (0.007 day⁻¹) was slightly faster than that of T. ovata leaves (0.005 day⁻¹). Thus, the process t1/2 was lower for S. officinarum leaves (97 days) than for T. ovata leaves (126 days).
The dissolved organic carbon (DOC), formed from the mass loss of POC_LS, corresponded to 74.29% of the leachate for T. ovata leaves (k_2 = 0.796 day⁻¹). For S. officinarum leaves, this value was 11.95% lower (k_2 = 0.594 day⁻¹). The rest of the POC_LS was immediately mineralized by direct oxidation at a rate (k_1) of 0.275 day⁻¹ for T. ovata leaves. For S. officinarum leaves, this value was 30% higher (0.359 day⁻¹). The direct mineralization from POC_LS (IN_1) of T. ovata leaves accounted for 7.74% of the total particulate organic carbon, while the DOC mineralization (IN_2) corresponded to 22.36% and the POC_R (IN_3) to 69.88%. S. officinarum leaves showed lower percentages of POC_LS mineralization, 6.08 and 10.07%, respectively, and consequently a higher percentage of carbon was mineralized from POC_R (83.82%). The total mineralized carbon curve from the three pathways is shown in Figure 2.
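The reported routing percentages are mutually consistent with a simple partition of POC_LS between direct oxidation (k_1) and leaching to DOC (k_2); the following is a sketch under the assumption that k_T = k_1 + k_2, which the reported coefficients satisfy (0.275 + 0.796 = 1.071 for T. ovata; 0.359 + 0.594 ≈ 0.954 for S. officinarum):

\[
IN_1 = \frac{k_1}{k_T}\,POC_{LS},
\qquad
IN_2 = \frac{k_2}{k_T}\,POC_{LS},
\qquad
IN_3 = POC_R
\]

For T. ovata, (0.275/1.071) × 30.11% ≈ 7.74% and (0.796/1.071) × 30.11% ≈ 22.37%, in line with the IN_1 and IN_2 values above.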
The initial cell wall fraction (CWF) was 88.19% and 59.14% of the total mass for S. officinarum and T. ovata leaves, respectively. After the first day of incubation, the CWF of T. ovata was enriched by 12%; this was followed by a gradual decrease, reaching 52.14% of the initial value at 90 days of the experiment. S. officinarum leaves presented a slightly greater reduction, reaching 48.19% in the same period (Figure 3). According to the kinetic model adopted, there was no difference between the CWF decay coefficients of the two species (0.0090 day⁻¹).
Discussion
The leaching process that occurred in the early stages of decomposition in this experiment was responsible for the rapid release of organic and inorganic compounds present in the protoplasm and hydrosoluble fractions of the detritus (DAVIS III et al., 2003). This led to chemical changes in the environment, among them the increase in pH values observed at the beginning of the process. The subsequent stabilization of the pH occurred, according to Cunha-Santino and Bianchini Junior (2004), as a result of the formation of humic substances, which behaved as buffers. The pH probably did not have any negative effect on the decomposition, since stabilization occurred in circumneutral and basic environments, not in acidic ones. Acidity has negative effects on decomposition (WEBSTER; BENFIELD, 1986), reducing microbial metabolism and the richness and abundance of invertebrates and making leaf breakdown substantially slower, as demonstrated by numerous studies in naturally acidic systems (e.g., DANGLES; GUÉROLD, 1998; DANGLES; CHAUVET, 2003; DANGLES et al., 2004; SUBERKROPP, 1995). On the other hand, Suberkropp (2001) analyzed leaf breakdown in a circumneutral stream and in a basic one (pH 6.7 and 8.0, respectively) and observed higher rates of decomposition and fungal production in the basic stream. Considering that the decay rate is correlated with pH, since the degradation process involves hydrolytic microbial enzymes that depend on pH, it is possible that this factor contributed in some way to the faster decomposition of S. officinarum relative to T. ovata leaves.
The release of hydrosoluble compounds, including ions, was also responsible for the abrupt increase in EC observed after the first day of incubation (PAGIORO; THOMAZ, 1999). The dissociation of carbonic acid deriving from the oxidation of labile compounds is another factor that may have contributed to the increase in EC. On the other hand, the decrease in EC values observed at the end of the experiment can be attributed to the assimilation of ions by the microorganisms (CUNHA-SANTINO; BIANCHINI JUNIOR, 2004). Leaching also explains the peak of DOC observed after the first day of incubation. According to Wetzel (1995), more than 40% of the total organic carbon of detritus is often leached during the first 24 hours of decomposition. The gradual decrease shown thereafter can also be attributed to the mineralization of DOC, which explains the gradual increase of the MC, and to the formation of microbial biomass. The high values of EC and DOC observed in incubations with T. ovata indicate the more labile character of its leaves compared to S. officinarum leaves. This is confirmed by the higher content of POC_LS in T. ovata. The mass loss observed at the beginning of the experiment (also due to the leaching process) was high for both species. However, this loss may be overestimated. The leaves that fall naturally into streams are usually fresh, while those used in experiments are usually dried beforehand, whether at room temperature or in an oven (BÄRLOCHER, 1997). This procedure is designed to homogenize the samples and make the quantification of initial mass more accurate (BÄRLOCHER, 1997; TAYLOR; BÄRLOCHER, 1996). However, drying the leaves causes cell death and loss of tissue integrity, thus accelerating the leaching process (GESSNER et al., 1999).
The higher percentages of POC_R in relation to POC_LS, as well as the higher values of k_T in relation to k_4 obtained for T. ovata and S. officinarum, indicate the predominance of the slow process of decomposition. This tendency was also observed by Bianchini Junior (1999), who calculated, from data presented in different references, the content of labile and refractory fractions, as well as their decay coefficients (k_T and k_4), for 118 resources decomposed under different environmental conditions. The author obtained a variation of the labile particulate organic matter from 0 to 71.6% and of the refractory fraction between 28.4 and 100%, with average values equal to 26.6 and 73.4%, respectively. The k_T values observed by the author were, on average, about 118 times higher than k_4. In the present study, the k_T/k_4 ratios were 214 and 136 for the leaves of T. ovata and S. officinarum, respectively. This confirms that the labile fraction of the detritus is generally smaller and is lost faster than the refractory one. The S. officinarum k_4 was slightly higher than that of T. ovata. Other values of k found in the literature can be seen in Table 2. The values found ranged between 0.0002 day⁻¹ for Fagus sylvatica (DANGLES; GUÉROLD, 1998) and 0.0672 day⁻¹ for Hura crepitans (ABELHO et al., 2005), with an average of 0.01226 day⁻¹. It is important to highlight that the k values presented in the table are comparable to the k_4 of this study, as these authors did not evaluate separately the decay of labile and refractory organic matter. The large differences in decay rates can be attributed to numerous factors, including the structural differences of each species (hardness) and its nutrient content (MUN et al., 2001), the environmental conditions imposed on the decomposition and methodological limitations (BIANCHINI JUNIOR, 1999). The enrichment of CWF observed for T. ovata leaves after the first day of incubation occurred due to the leaching of soluble organic matter: in the initial material the soluble fraction was still present, reducing the overall percentage of CWF. The initial CWF found for S. officinarum in this study was higher than other values found in the literature. Azevêdo et al. (2003), analyzing three different varieties of sugar cane, found an average CWF of 46.7%, a value 41.3% lower than that found here. Bakshi and Wadhwa (2007) determined the CWF of the leaves of nine tree species and obtained values ranging between 35% (Melia azedarach and Morus alba) and 60% (Ficus glomerata). The authors also obtained a value of 58% for Albizzia lebbock, similar to those obtained for T. ovata leaves in this experiment. Abdulrazak et al. (2000) determined the CWF of the leaves of six Acacia trees and obtained values ranging between 15.4% (A. nubica) and 31.2% (A. nilotica; about half the value obtained for T. ovata). It can be noted that, although T. ovata leaves presented a CWF content considerably lower than S. officinarum, this value is quite high when compared to other tree species. The high percentages of CWF in both species investigated here can result in a great contribution to particulate organic matter accumulation in lotic ecosystems. This occurs because structural compounds are difficult to decompose.
Conclusion
In conclusion, T. ovata showed a greater content of POC_LS, while S. officinarum showed a higher percentage of POC_R. Although there were no significant differences between the mass losses of the leaves of the two species, the decomposition of S. officinarum was relatively faster, even with its high content of CWF. This may be due to the high concentrations of energetic compounds in its biomass, such as mono- and polysaccharides, which are of great interest to the decomposing microorganisms. T. ovata, as expected, presented slower decomposition, probably due to its high content of secondary compounds (e.g., terpenoids; STEFANELLO et al., 2005), which may have negatively affected the decomposing organisms.
Figure 1. Mean values and standard deviations (n = 3) of the temporal variations in pH and electrical conductivity of the incubations with Talauma ovata and Saccharum officinarum leaves.
Figure 2. Mean values and standard deviations (n = 3) of the temporal variations during the decomposition of particulate organic carbon (POC), dissolved organic carbon (DOC) and mineralized carbon (MC) of detritus from Talauma ovata and Saccharum officinarum leaves.
(*) Values proportional to the formation of IN_1; (**) Values estimated by the difference between POC_LS and DOC.
Figure 3. Mean values and standard deviations (n = 3) of the temporal variations in the cell wall fraction (CWF) of detritus from Talauma ovata and Saccharum officinarum during the decomposition process.
Table 1. Values of the parameters obtained from the kinetic model used for T. ovata and S. officinarum leaves. Where: POC_LS = labile/soluble particulate organic carbon; POC_R = refractory particulate organic carbon; DOC = dissolved organic carbon; IN_1 = content of organic carbon easily oxidized and mineralized according to k_T; k_T = global decay coefficient of POC_LS; t1/2 = half-life time.
Table 2. Values of the decomposition coefficient (k) obtained from various studies for different tree leaves decomposed in streams under different environmental conditions. | 2018-12-06T00:25:04.297Z | 2012-03-23T00:00:00.000 | {
"year": 2012,
"sha1": "c73277c455c8c9b84b9a37df4430ed0d14193004",
"oa_license": "CCBY",
"oa_url": "https://periodicos.uem.br/ojs/index.php/ActaSciBiolSci/article/download/9396/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c73277c455c8c9b84b9a37df4430ed0d14193004",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
251169575 | pes2o/s2orc | v3-fos-license | Adequate Dietary Intake and Vitamin D Supplementation: A Study of Their Relative Importance in Determining Serum Vitamin D and Ferritin Concentrations during Pregnancy
Vitamin D is essential for human health. However, it is not clear if vitamin D supplementation is necessary for all pregnant women. This study examines the relative importance of dietary patterns and vitamin D supplementation frequency in determining serum 25-hydroxyvitamin D (25(OH)D) and ferritin concentrations among pregnant women in Hong Kong, China. A total of 572 healthy women were recruited from antenatal clinics at 25–35 weeks pregnant. Participants completed an electronic version of the food frequency questionnaire and a web questionnaire on supplement use. Their blood samples were tested for serum 25(OH)D and ferritin. The associations of dietary patterns and vitamin D supplementation frequency with serum 25(OH)D and ferritin concentrations were analyzed using moderated hierarchical regression. Two dietary patterns were identified. The adequate dietary intake was characterized by the high probability of meeting recommended daily food group servings, whereas the inadequate dietary intake was characterized by inadequate consumption of vegetables, fruits, meat, fish, and eggs, or alternatives. The association between adequate dietary intake and serum ferritin concentrations was independent of vitamin D supplementation frequency (β = 0.05, p = 0.035), but dietary patterns interacted with vitamin D supplementation frequency to determine serum 25(OH)D concentrations (β = −13.22, p = 0.014). The current study presents evidence on the relative importance of dietary patterns and vitamin D supplementation in maintaining sufficient vitamin D and iron in pregnancy. Antenatal nutrition counselling services should be provided to pregnant women who show signs of inadequate dietary intake.
Introduction
Vitamin D is a steroid hormone that helps to maintain calcium homeostasis and facilitates bone mineralization in the body. Vitamin D deficiency during pregnancy can cause bone loss, preeclampsia, preterm delivery, and low birth weight [1]. Iron-deficiency anemia is another common nutritional-deficiency disease during pregnancy. Under normal circumstances, serum ferritin concentrations can be indicative of the total amount of iron stored in the body. While there are many iron-rich foods such as red meat, livers, nuts, and green, leafy vegetables, the food sources of vitamin D are relatively limited [2]. In humans, vitamin D is primarily produced from the skin by way of exposure to ultraviolet light. However, due to the increased nutritional needs for body metabolism and fetal growth, pregnant women are advised to take daily vitamin D supplements [3]. A previous study showed that the most effective and safest dose for achieving optimal serum vitamin D levels among pregnant women in all races is 4000 IU per day [4]. It is however not clear if vitamin D supplementation is still required for healthy women who have adequate dietary intake to obtain sufficient vitamin D during pregnancy.
Previous research has shown no meaningful differences in iron status improvement among women receiving different doses of vitamin D supplementation during gestation (4200 IU, 16,800 IU, and 28,000 IU per week) [5]. While some evidence shows that vitamin D supplementation cannot improve serum ferritin concentrations in pregnant women [5][6][7], low vitamin D status was found to be associated with an increased risk of prenatal iron deficiency [8]. Notably, although previous research has examined the effect of vitamin D supplementation on both vitamin D and ferritin concentrations, it remains unclear whether vitamin D supplementation could have simultaneous effects on both vitamin D and iron levels. Furthermore, while natural foods are important sources of iron, little work has been undertaken to examine the effect of diet and its potential interaction with vitamin D supplementation on vitamin D status, particularly during pregnancy.
Hong Kong provides a good setting for research to examine the relative effects of diet and supplementation on vitamin D concentrations because of its small changes in sunlight intensity between months and sunlight exposure duration between individuals. Thus, when the research is conducted in Hong Kong, the effects of sunlight exposure on vitamin D status can be minimized. Furthermore, given the nutritional benefits of natural foods and the amount of nutrients needed during pregnancy, the dietary intake recommended by Hong Kong's local health agencies for pregnant women who are in the second and third trimesters (14th to 40th week), have a normal body mass index, and weigh between 45 and 60 kg before pregnancy is characterized by five food groups including grains (3.5-5 servings per day); vegetables (4-5 servings per day); fruits (2-3 servings per day); meat, fish, eggs, or alternatives (5-7 servings per day); and dairy or alternatives (2 servings per day) [9]. Local researchers can use these recommended daily servings as the reference threshold levels to classify pregnant women who have adequate dietary intake and those who do not in the population. The objective of this study is thus to examine the relative importance of dietary patterns (adequate/inadequate dietary intake) and vitamin D supplementation frequency (at least/less than 1 time per week) in determining serum 25-hydroxyvitamin D (25(OH)D) and ferritin concentrations among pregnant women in Hong Kong, China. We hypothesized that the association between dietary patterns and serum ferritin concentrations will remain significant after controlling for vitamin D supplementation frequency, whereas regular vitamin D supplementation will compensate for inadequate dietary intake to improve serum 25(OH)D concentrations during pregnancy.
Participants
All women making a visit to the antenatal clinic of local public hospitals during the period of July 2019 to December 2020 were invited to participate if they were healthy, Hong Kong Chinese citizens, aged 18 or above, at 25 to 35 weeks gestation, residing in Hong Kong, and literate in Chinese.
Design and Procedure
Data collection took place in the waiting area of the antenatal clinic. Upon providing written informed consent, participants completed demographic questionnaires including questions about the weekly frequency of vitamin D supplementation during pregnancy. The electronic version of the Food Frequency Questionnaire (eFFQ) was also administered. In addition, their peripheral blood samples were collected by trained phlebotomists. As a token of appreciation for their participation in this study, an incentive of HKD 50 supermarket voucher was offered to the participants at the end of the session. The research protocol was approved by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster Research Ethics Committee (UW 13-055).
Dietary Patterns
Participants completed the eFFQ to report their food intake over the past month. Details of the eFFQ development can be found in the previous publication [10]. Briefly, the eFFQ is a modified version of the original FFQ, which was validated in the general adult population [11]. A total of 311 food items were categorized into twelve food groups, namely fish and seafood, mushrooms, eggs, dairy beverages, beans, fruits, grains, meats, snacks, soups, vegetables, and condiments and oil. Frequency options include once a month, 2-3 times per month, once to twice a week, 3-4 times per week, 5-6 times per week, and every day. Portion size was reported freely using either standardized household measurements or gram weight for food items. It was then converted into the number of daily servings according to the local guidelines [9]. In this study, the identification of latent subgroups was based on the categorization of whether the recommended number of daily servings of vegetables, fruits, meat, fish, eggs, or alternatives was met. Grains and dairy food groups were excluded because grains are a food staple in the Chinese diet, whereas dairy consumption levels are generally low for people living in Hong Kong [12]. Thus, these food groups were not considered helpful for differentiating dietary classes in this study.
Serum 25(OH)D and Ferritin Concentrations
Serum was extracted from the collected peripheral blood samples. In this study, the liquid chromatography-tandem mass spectrometry (LC-tandem MS) method was adopted to determine serum 25(OH)D concentration, defined as the sum of 25(OH)D3 and 25(OH)D2 minus 3-Epi-25(OH)D3, using the QTRAP 5500 LC-MS/MS system (AB SCIEX Instruments, Framingham, MA, USA). The method has been verified against samples from the Vitamin D External Quality Assessment Scheme with satisfactory performance (within ± 15% of the target value) [13]. Blood serum samples were also tested for ferritin by a local accredited laboratory.
Weekly Vitamin D Supplementation Frequency
Participants were asked, "how often do you take vitamin D supplements over the course of your current pregnancy?". The answer options provided included "everyday", "6-4 times per week", "1-3 times per week", "less than 1 time per week", and "never". Participants who selected "everyday", "6-4 times per week", and "1-3 times per week" were grouped as the "at least 1 time per week" subgroup, whereas those who selected "less than 1 time per week" and "never" were grouped as the "less than 1 time per week" subgroup.
Demographics
Demographics, including chronological age, gestational age, gravidity, parity, marital and employment status at enrollment, highest education level, monthly family income, and history of chronic diseases, were self-reported by the participants.
Data Analysis
All data were analyzed using both the SPSS statistical software package (Version 26.0, SPSS Inc., Chicago, IL, USA) and the R statistical software (version 4.1.1, R Core Team, Vienna, Austria). Descriptive statistics were calculated to summarize the characteristics of participants and their families. A series of independent t-tests (for continuous variables) and chi-square analyses (for categorical variables) were performed to determine whether demographics and pregnancy characteristics, including vitamin D supplementation patterns and serum 25(OH)D and ferritin concentrations, differed between the dietary pattern groups. Data of ferritin were normalized using log transformation. There were no missing values on any of the key variables. Assumptions of linear regression including normality and linearity were assessed and verified graphically. There was no evidence of outliers and multicollinearity between variables. Latent class analysis (LCA) was first applied to identify latent subgroups of pregnant women with similar dietary patterns as indicated by the dichotomous status (sufficient/insufficient) of the three food group parameters using the R package "poLCA" (R Core Team, Vienna, Austria) [14]. Simple models with fewer classes were preferred and selected based on statistical fit indices (Akaike Information Criteria (AIC), Bayesian Information Criteria (BIC), adjusted BIC (aBIC), and consistent AIC (cAIC)) [15]. Lower values indicate better fitting models. The associations of vitamin D supplementation frequency and dietary patterns with serum 25(OH)D and ferritin concentrations were tested using hierarchical linear regression analyses [16]. The first model (Model 1) included the main categorical variable "inadequate dietary intake" and the covariate "age at assessment". In the next model (Model 2), the other main categorical variable "infrequent vitamin D supplementation" was added. In Model 3, the interaction term "diet × supplementation" was created and included in the adjusted regression models. All regression coefficients were unstandardized, with p-values < 0.05 denoting statistical significance.
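To make the moderated hierarchical regression concrete, a minimal Python sketch on synthetic data is shown below; the column names (vitd, age, inadequate_diet, infrequent_supp) are hypothetical placeholders, and the original analyses were run in SPSS and R (including the poLCA package), so this is an illustration rather than the authors' code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study data (n = 572 pregnant women).
rng = np.random.default_rng(0)
n = 572
df = pd.DataFrame({
    "age": rng.normal(34, 4, n),                 # age at assessment (covariate)
    "inadequate_diet": rng.integers(0, 2, n),    # 1 = inadequate dietary intake
    "infrequent_supp": rng.integers(0, 2, n),    # 1 = supplements < 1 time/week
})
# Build an outcome with an interaction effect, mimicking the reported pattern.
df["vitd"] = (75
              - 13 * df["inadequate_diet"] * df["infrequent_supp"]
              + rng.normal(0, 10, n))

# Model 1: dietary pattern plus the age covariate.
m1 = smf.ols("vitd ~ age + inadequate_diet", data=df).fit()
# Model 2: add vitamin D supplementation frequency.
m2 = smf.ols("vitd ~ age + inadequate_diet + infrequent_supp", data=df).fit()
# Model 3: add the diet x supplementation interaction term.
m3 = smf.ols("vitd ~ age + inadequate_diet * infrequent_supp", data=df).fit()
print(m3.params)  # the interaction coefficient is the moderation estimate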
Sample Characteristics
The overall sample consisted of 572 women (average age at assessment: 34 years) at 26 weeks gestation on average. Table 1 shows their demographic and pregnancy characteristics. The majority of women were pregnant for the first time as indicated by the average gravidity of 1.80 and parity of 0.62. In the study sample, 500 (90.9%) had no history of chronic diseases, 325 (57.3%) had completed tertiary education, and 367 (75.7%) were either full- or part-time employees at the time of assessment. In addition, 276 (48.2%) had inadequate dietary intake, whereas 296 (51.8%) had adequate dietary intake. The adequate and inadequate dietary intake subgroups had similar demographic characteristics and serum 25(OH)D and ferritin concentrations, but the subgroups differed in terms of vitamin D supplementation frequency (p = 0.01), with a higher proportion of pregnant women in the inadequate dietary intake subgroup (27.5%) taking supplements less than once per week compared to those in the adequate dietary intake subgroup (18.6%). The number of participants with vitamin D deficiency (<50 nmol/L) was low in both subgroups (inadequate: 21 (7.6%); adequate: 14 (4.7%)).
Model Selection and Latent Subgroups
Models with one through four latent classes were compared in order to select a model of multiple food groups. Table 2 shows the values of the information criteria. The statistics suggested the two-class model (AIC = 2249.30 for two latent classes versus AIC = 2257.30 for three latent classes; BIC = 2279.74 for two latent classes versus BIC = 2305.14 for three latent classes; cAIC = 2286.74 for two latent classes versus cAIC = 2316.14 for three latent classes; and aBIC = 2257.52 for two latent classes versus aBIC = 2270.22 for three latent classes), and its parameter estimates also presented a solution with a logical, substantive interpretation. Each latent class corresponds to an underlying subgroup of pregnant women characterized by a particular pattern of food consumption. The parameter estimates shown in Table 3 provide the necessary information for interpreting and labeling each diet subgroup. Specifically, the first latent class labelled Inadequate is characterized by a high probability of having insufficient servings of vegetables (0.85), fruits (0.70), or meat, fish, eggs, or alternatives (0.62), whereas the second latent class was labelled Adequate due to the low probability of pregnant women in this subgroup reporting insufficient servings of the three food groups.
Associations of Vitamin D Supplementation Frequency and Dietary Patterns with Serum 25(OH)D and Ferritin Concentrations among Pregnant Women
Serum 25(OH)D and ferritin concentrations had a small yet statistically significant correlation with each other (r = 0.14, p < 0.01). Table 4 shows the estimates of associations of dietary patterns and vitamin D supplementation frequency with serum 25(OH)D and ferritin concentrations after adjusting for age at assessment. There was a significant association between dietary patterns and serum ferritin concentrations among pregnant women, with lower serum ferritin concentrations found in the inadequate dietary intake subgroup compared to the adequate dietary intake subgroup (β = −0.06, p = 0.03). This association remained significant after further adjusting for vitamin supplementation frequency (β = −0.05, p = 0.04). No significant interaction between vitamin D supplementation frequency and dietary patterns on serum ferritin concentrations was found. On the other hand, although serum 25(OH)D concentrations had no association with dietary patterns, its association with vitamin D supplementation frequency remained significant after adjusting for dietary patterns, with lower serum 25(OH)D concentrations observed in those who took vitamin D supplements less than once a week compared to the more frequent supplementation group (β = −6.89, p = 0.01). The results of moderation analysis showed that only the interaction between vitamin D supplementation frequency and dietary patterns was significant (β = −13.22, p = 0.01). As illustrated in Figure 1, the differences in serum 25(OH)D concentrations between the groups of vitamin D supplementation frequency were trivial when the dietary intake amount was adequate, but for those with inadequate dietary intake, serum 25(OH)D concentrations were higher among pregnant women who took vitamin D supplements at least once a week compared to those who did not.
Discussion
In this study, latent class analysis techniques were employed to identify subgroups of pregnant women in Hong Kong based on their dietary patterns during pregnancy. Results revealed two dietary patterns, which were labelled as adequate and inadequate dietary intake subgroups, respectively. The adequate dietary subgroup had a high probability of meeting the recommended number of daily servings for all three target food groups (vegetables; fruits; and meat, fish, eggs, or alternatives), whereas the inadequate dietary subgroup had a high probability of not meeting the recommendations. A previous study of 285 healthy pregnant women in Ireland also identified two dietary patterns [17]. The unhealthy diet cluster consisting of 124 women (43.5%) reported significantly higher median intakes of white bread, refined breakfast cereals, confectionery, chips, processed meats, and high-energy beverages; the health-conscious diet cluster consisting of 161 women (56.5%) reported significantly higher intakes of wholegrain breads and breakfast cereals, fruits, vegetables, fruit juice, fish, low-fat milk, and white meats. Another study found that no pregnant women in Australia had achieved the daily food group recommendations provided in the Australian Guide to Healthy Eating [18]. Overall findings suggest that pregnant women, regardless of geographical locations, tend to make consistent food choices (either sufficient or insufficient intake). Considering the adverse health consequences of inadequate dietary intake for mothers and their offspring [19,20], antenatal nutrition counseling together with dietary monitoring and education services should be provided to pregnant women, particularly for those who show signs of inadequate dietary intake.
Given that previous research has focused on either food intake or supplementation but not both, this study expands the scope of previous research by demonstrating a link between food preferences and use of vitamin supplements during pregnancy. We found that although the adequate and inadequate dietary intake subgroups had similar demographic backgrounds, the adequate dietary intake subgroup was more likely to have regular vitamin D supplementation (i.e., at least 1 time per week) than the inadequate dietary intake subgroup. This finding can be explained partly by factors such as motivation to take supplements and knowledge about healthy diets, all of which can influence supplementation frequency. Dietary supplements are intended to supplement the diets of people who eat insufficiently. However, people who eat insufficiently tend to have low health consciousness. It has been posited that pregnancy is a time when women have increased motivation to make dietary improvements [21]. Research has found that pregnant women receiving dietary advice, regardless of the frequency of prenatal visits and sociodemographic characteristics, had a higher probability of using multivitamins compared to those lacking information [22]. The attendance of prenatal education sessions was also related to supplement intake [23]. Hence, healthcare professionals play an important role in explaining to pregnant women whether taking dietary supplements is necessary for improving their nutrient intake.
In addition, we found that the inadequate dietary intake subgroup had lower serum ferritin concentrations compared to the adequate dietary intake subgroup, after adjusting for their vitamin D supplementation frequency status. The findings are consistent with previous studies reporting that vitamin D supplementation cannot improve serum ferritin concentrations in pregnant women [5][6][7]. In addition, since serum ferritin is a recognized indicator of iron status, our findings highlight the implications of dietary patterns in determining the risk of iron deficiency. In this study, the adequate dietary intake subgroup was more likely than the inadequate dietary intake subgroup to meet the recommended daily servings of vegetables, fruits, meats, fishes, eggs, and other alternatives. Red meat is the major source of dietary intake of heme iron, which represents more than 95% of functional iron in the human body [24]. A previous review documented evidence showing the positive association between meat consumption and iron status among young women [25]. Vitamin C (ascorbic acid) in fresh fruits and vegetables can also act as an enhancer to promote iron solubility and absorption through the conversion of iron chelators and ferric iron to ferrous iron [26,27]. On the other hand, our findings suggest that the association of dietary patterns with serum 25(OH)D concentrations may be conditional upon the frequency of vitamin D supplementation. As reported by previous studies [28][29][30][31][32][33], individuals with low baseline serum vitamin D levels could have greater responses to vitamin D supplementation than those with higher baseline serum vitamin D levels. Therefore, regular vitamin D supplement use is particularly important for pregnant women who eat insufficiently to achieve sufficient vitamin D levels.
This study has several limitations. First, while the participants were recruited from different Hong Kong local hospitals, they were not fully representative of Hong Kong's pregnant women. However, the cluster analysis identified two latent subgroups which are largely consistent with previous research on this topic. Second, the number of participants who were vitamin D deficient in our sample was small, and other lifestyle factors (e.g., physical activity and smoking) were not assessed in this study. The findings thus may not be applicable to special subgroups such as those who lack vitamin D due to genetic factors or those who smoke. Third, dietary patterns and vitamin D supplementation frequency were self-reported by participants using questionnaires. Their reports may have recall bias. Fourth, only vitamin D supplementation frequency was examined. Future research should assess the dosage and frequency of other supplements such as multivitamins and iron supplements. In addition, more studies should be conducted to identify the best dietary composition for achieving the daily recommended dietary intake of iron and vitamin D.
Conclusions
This study expands the current understanding of vitamin D supplementation use by showing the relative importance of dietary patterns and vitamin D supplementation frequency for pregnant women. It also replicates previous research demonstrating that dietary sources are more important than vitamin D supplementation for promoting ferritin expression. The findings are particularly useful for the allocation of limited health care resources. They point to the need for simple dietary screening in early pregnancy and targeted nutrition counseling and education services at antenatal clinics to improve the dietary knowledge and behavior of at-risk pregnant women, such as those who have inadequate dietary intake. More studies are needed to examine the misconceptions of food safety and nutrition in these women and their behavioral barriers to meeting dietary recommendations in pregnancy.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data that support the findings of this study are available upon request from the corresponding author. | 2022-07-30T15:05:09.007Z | 2022-07-27T00:00:00.000 | {
"year": 2022,
"sha1": "c384723d45e6e8f0d341af16c2a99c5cdf55dc18",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/14/15/3083/pdf?version=1659058585",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f963501bc58de98bac2813e062f4f5443f11f5c2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
243913986 | pes2o/s2orc | v3-fos-license | Refresh Rate Identification Strategy for Optimal Page Replacement Algorithms for Virtual Memory Management
Operating systems offer a service known as memory management, which manages and controls primary memory. It moves processes back and forth between disk and main memory during execution. The process of temporarily moving a process from primary memory to the hard disk, so that the memory becomes available for other processes, is known as swapping. Page replacement techniques are the methods by which the operating system decides which memory pages to swap out and write to disk whenever a page of main memory needs to be allocated. There are different policies regarding how to select the page to be swapped out when a page fault occurs, in order to create space for a new page. These policies are called page replacement algorithms. In this paper, a strategy for identifying the refresh rate for the 'Aging' page replacement algorithm is presented and evaluated.
I. INTRODUCTION
Operating systems offer a service known as memory management, which manages and controls primary memory. It moves processes back and forth between disk and main memory during execution [1]. The process of temporarily moving a process from primary memory to the hard disk, so that the memory becomes available for other processes, is known as swapping. A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of the hard disk that is set up to emulate the computer's RAM. Virtual memory is generally achieved through demand paging. Paging is a commonly used memory management technique in which memory is divided into fixed-size pages [2]. Paging is used for accessing data rapidly. The operating system copies a certain number of pages from the hard disk into main memory, so that whenever a program requires a page, it can be found in primary memory. A page table is a data structure used by the virtual memory system in a computer's operating system to store the mapping between virtual addresses and physical addresses. The accessing process uses virtual addresses, while physical addresses are used by the hardware and, most specifically, by the RAM subsystem [3]. Whenever a program attempts to reference a page that is not available in RAM, the processor treats it as an invalid memory reference, or page fault, and transfers control from the program to the operating system [4]. Page replacement techniques are the methods by which the operating system decides which memory pages to swap out and write to disk whenever a page of main memory needs to be allocated. A page replacement algorithm works with the limited information about page accesses provided by the hardware and tries to choose which pages should be replaced so as to minimize the total number of page misses, while balancing this against the costs of primary memory and of the processor time consumed by the algorithm itself [5]. The main memory capacity of laptops and mobile devices has been rapidly increasing due to the ever-increasing degree of multitasking and the improving quality of multimedia data. For example, the random access memory (RAM) size of the reference was 4 GB in 2018, while it was only 512 MB in 2010, an eight-fold increase in 8 years. The demand for a larger memory capacity is still strong. However, increasing the memory capacity is a challenging issue because it results in higher manufacturing costs and poor energy efficiency [6], [7]. Conventional computing systems, such as personal computers and servers, have achieved larger main memory space than the RAM capacity by swapping out infrequently accessed pages to secondary storage devices. However, because the access speed of swap storage devices is usually lower than that of the RAM by up to 100 times, the swap-to-secondary-storage scheme is undesirable for consumer electronics, which require an immediate response to user inputs. In addition, the structure of flash memory, which is popularly used as the storage device in mobile consumer electronics for its high speed, robustness, and small form factor, is transitioning from single- (SLC) to multi-level cell (MLC), including triple- and quadruple-level cell (TLC and QLC, respectively) technology [8].
Because the write endurance cycles of TLCs and QLCs are 10-100 times smaller than those of SLCs [9], the frequent small random writes caused by page swap-out adversely impact the life span of mobile devices. In the last two decades, various in-memory compressed swap schemes have been proposed to accommodate more data than the RAM size and to improve system performance by reducing page-swapping I/O operations [10]-[15].
As shown in Figure 1.1, such compressed swap schemes compress cold pages, which are not expected to be accessed soon, and store the compressed pages, called zpages, in a swap page in the compressed swap pool in the RAM. When a zpage stored in the pool is requested by the main memory, it is decompressed and moved back to the main memory.
II. AGING PAGE REPLACEMENT ALGORITHM
The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2), before adding the referenced bit to the left of that binary number. For instance, if a page has referenced bits 1,0,0,1,1,0 in the past 6 clock ticks, its referenced counter will look like this: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority over pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen. Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest 16/32 (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have referenced counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.
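A minimal Python sketch of the counter mechanics just described is given below, assuming 8-bit counters; this is an illustrative toy, not the simulator evaluated in the next section.

def age_counters(counters, referenced, bits=8):
    """One clock tick: shift every counter right and OR the R bit
    of pages referenced during this tick into the most significant bit."""
    msb = 1 << (bits - 1)
    for page in counters:
        counters[page] >>= 1
        if referenced.get(page):
            counters[page] |= msb
        referenced[page] = False  # clear R bits for the next tick

def pick_victim(counters):
    """Evict the page with the lowest counter (least recently used estimate)."""
    return min(counters, key=counters.get)

# Page A is referenced on both ticks, page B only on the first one.
counters = {"A": 0, "B": 0}
refs = {"A": True, "B": True}
age_counters(counters, refs)          # after tick 1: A=10000000, B=10000000
refs["A"] = True
age_counters(counters, refs)          # after tick 2: A=11000000, B=01000000
print(f"A={counters['A']:08b} B={counters['B']:08b}")  # A=11000000 B=01000000
print("evict:", pick_victim(counters))                 # evict: B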
III. REFRESH RATE IDENTIFICATION STRATEGY FOR AGING ALGORITHM
The Aging Algorithm is likely the most complex. Aging works by keeping an 8-bit counter and marking whether each page in the page table was used during the last 'tick', a time period of evaluation that must be passed in by the user as a 'refresh rate' whenever the Aging Algorithm is selected. All refresh rates are in milliseconds on my system, but this relies on the implementation of Python's "time" module, so it is possible that this could vary on other systems. For aging, a refresh rate of 0.01 milliseconds is suggested, passed in on the command line as "-r 0.01", in the 2nd-to-last position in the arg list. This minimizes the values in testing, and going lower does not positively affect anything. In order to find a refresh rate that would work well, it was decided to start at 1 ms and move 5 orders of magnitude in each direction, from 0.00001 ms to 100000 ms. To ensure that the results were not biased toward being optimized for a single trace, both traces were tested to confirm that the refresh rate would work well for all inputs. The graphs in Figures 1 to 4 below show the page faults and disk writes found during each test.
Fig. 1 Aging Algorithm -- Page Faults / Refresh Rate
GCC.TRACE reaches its minimum for page faults at 0.0001 ms and SWIM.TRACE reaches its own minimum at 0.01 ms. The lines cross at around 0.01 ms. Additionally, 0.01 ms seems to achieve the best balance if a single time must be selected for both traces. The number of disk writes also bottoms out at 0.01 ms for SWIM.TRACE. It is relatively constant for GCC.TRACE across all of the different timing options. Because of this, 0.01 ms is suggested as the ideal refresh rate. This is because it is optimal for SWIM.TRACE. For GCC.TRACE, it is not the absolute best option, but it is still acceptable, and so this selection achieves a good balance. For all tests, a frame size of 8 was chosen, since this small frame size is most sensitive to the algorithm used. At higher frame sizes, all of the algorithms tend to perform better across the board. So the focus was on testing at the smallest possible size, preparing for a 'worst case' scenario.
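The sweep itself can be expressed compactly; the following Python sketch assumes a hypothetical run_trace helper that wraps the simulator invocation (e.g., passing the "-r <rate>" option described above) and returns (page_faults, disk_writes).

def run_trace(trace, refresh_rate_ms):
    """Hypothetical stand-in for invoking the simulator with the
    '-r <rate>' option and parsing its reported counters.
    Stubbed here for illustration."""
    return (0, 0)

# 1 ms plus five orders of magnitude in each direction: 0.00001 .. 100000 ms.
rates_ms = [10.0 ** e for e in range(-5, 6)]
for trace in ("gcc.trace", "swim.trace"):
    for rate in rates_ms:
        faults, writes = run_trace(trace, refresh_rate_ms=rate)
        print(f"{trace} @ {rate} ms -> faults={faults}, writes={writes}")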
IV. CONCLUSIONS
In this paper, a strategy for identifying the refresh rate for the Aging page replacement algorithm was presented. In order to find a refresh rate that would work well, it was decided to start at 1 ms and move 5 orders of magnitude in each direction, from 0.00001 ms to 100000 ms. To ensure that the results were not biased toward being optimized for a single trace, both traces were tested to confirm that the refresh rate would work well for all inputs. GCC.TRACE reaches its minimum for page faults at 0.0001 ms and SWIM.TRACE reaches its own minimum at 0.01 ms. The lines cross at around 0.01 ms. Additionally, 0.01 ms seems to achieve the best balance if a single time must be selected for both traces. | 2021-11-10T16:10:28.778Z | 2021-11-30T00:00:00.000 | {
"year": 2021,
"sha1": "1ed0117fd6792939739d3406bb140182212782b9",
"oa_license": null,
"oa_url": "https://doi.org/10.22214/ijraset.2021.38770",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "be3046c1e69ef01217558950c7538b2d8ee38013",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
258177240 | pes2o/s2orc | v3-fos-license | The right to reject an unwanted citations: do we need it?
In a Letter to the Editors published in No 6/2021, Teixeira da Silva and Vuong (2021) open a discussion on ‘‘the right to refuse unwanted citations’’ (mainly in predatory or lowlevel journals or by ‘problematic’ authors). The basic idea behind their contribution is that ‘problematic citations can certainly harm a scientist’s reputation’. However, the entire published letter to the editors does not provide any evidence for that assertion, nor do the authors show how the authors of the cited work might be harmed. On the contrary, it actually builds on the mindset that if publishers’ predatory practices or authors’ ‘problematic’ behaviour are bad, the victim must also be bad. In a healthy environment of evaluation of research, this is not possible. I consider it prudent to reject such a construction in its infancy. I see two main problems lying in the background of the paper by Teixeira da Silva and Vuong:
• Why should an author be responsible for citations by other authors?
• Who decides on the predatory or 'problematic' behaviour?
The largest group of cases for which an author would supposedly need the right to refuse to be cited are those where the citation comes from 'problematic' authors, such as fake authors or authors of retracted papers, or from authors who cite in a 'problematic' way (e.g., misinterpretation). Anyone, not just the cited author, can point out 'problems' in a published article to the editors of the journal concerned. Serious publishers have mechanisms for correcting, revising or retracting articles after publication. Explaining where a published article is wrong or inaccurate is more useful to the readers than the information that the cited author "John Doe" does not wish to be cited in the published article. Therefore, it is not clear how 'having the right not to be cited' would help. Moreover, retraction is a standard procedure in the scientific publication process, and COPE provides clear guidance for this procedure (COPE Council, 2019). In many cases, a retraction is not evidence of unethical or 'problematic' behaviour.
Since Teixeira da Silva and Vuong (2021) mention the deliberate omission of cited articles by rival scientists, should the reverse also be mentioned, where rival scientists would deliberately reject citations from their opponents? The idea of the right not to be cited raises further questions for me. If authors should have the right not to be cited, shouldn't they also have the right to agree in advance to be cited? Won't we end up requiring the authors of articles to document the consent of all cited authors?
The fight against predatory journals is gradually degenerating into a witch hunt, at least in some cases. Instead of evaluating a journal's content, the identification of predatory journals/publishers is based on descriptive criteria only. These criteria were originally intended to serve as a warning about possible collusion by publishers. The fact that these criteria do not always clearly identify abusive behaviour has been raised virtually throughout the time the subject has been discussed (e.g., Butler, 2013; de La Blanchardière et al., 2021). As Eriksson and Helgesson (2018) put it, we should 'distinguish between journals that deceive scientists and journals that are just amateurish, annoying, or of low quality'. But some researchers identify individual journals or publishers as predatory on these criteria alone and determine how many 'predatory' journals are included in one of the indexing databases (e.g., Cortegiani et al., 2020; Manca et al., 2017). Other researchers have studied how many citations were obtained by articles in journals so labelled (e.g., Akça & Akbulut, 2021; Frandsen, 2017; Oermann et al., 2019; Ross-White et al., 2019).
The existence of predatory journals is a problem for the entire scientific community, and there is a need to spread awareness of the risks of publishing in predatory journals. Still, is it necessary to have tabloid-style headlines for editorials like 'Readers beware! Predatory journals are infiltrating citation databases' (Severin & Low, 2019)? Publishing in predatory journals is described as misconduct by Moher et al. (2017), but should the articles themselves therefore automatically be described as bad? The authors of that study themselves state that, although the overall level of articles in so-called predatory journals was poor, some studies met normal standards and were registered with the relevant authorities.
Some publishers' practices are unacceptable and have nothing to do with the dissemination of scientific knowledge, while other publishers operate in a so-called 'grey area' between acceptable and unacceptable publishing practices (Siler, 2020). The world is not black or white. A problem is the unclear definition of a publisher's 'predatory practices'. Forty-three experts from 10 countries agreed on the following definition: 'Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices' (Grudniewicz et al., 2019). This definition perfectly captures the idea of predatory journals but provides no clear clues for their practical identification. Oddly enough, the essential characteristic of predatory behaviour, which is the publication of dubious articles of no or minimal scientific value, is expressed only indirectly as 'deviation from best editorial and publication practices'. On the other hand, even the largest publishers, such as Elsevier, could be classified as predatory according to this definition (Tennant, 2020). In a general condemnation of predatory journals, the assessment of the quality of individual articles is lost. I consider this a fundamental violation of the integrity of research.
An author is to be criticised for the content of the work, not for where it is published (which the author can at least influence) and certainly not for who cites it (which the author cannot influence in any way). Yet the latter is likely to happen once we accept that an author should not be cited in 'problematic' or predatory journals and should (actively) reject citations in such journals.
Concluding note
Every scientist has the right to express his or her opinion and present his or her thoughts. But I find the idea that a scientist should have the right to reject a citation because it might be associated with a shoddy article, a 'problematic' citing author, or a predatory publisher absurd. I admit that many ideas, though they may sound absurd or unlikely at the time of their presentation, can be pioneering and open a path that takes humanity further. In the case of the article by Teixeira da Silva and Vuong, I have my doubts. Even ideas, or mere topics for discussion, need to be supported by strong arguments, and these are lacking in this article.
Funding

The preparation of this article was not supported by any financial resources.
Conflict of interest
The author has no relevant financial or non-financial interests to disclose.
Ethical approval
The manuscript was not submitted to another journal for consideration and has not been published elsewhere in any form or language (partially or in full).
Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2023,
"sha1": "9232c8fffc5e87aa3fc9af4a94cc764831dcaf39",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11192-023-04702-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "bea5224899f0bf9335f7ab68b63975717c7ed111",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Subaxillary Replacement Flap Compared with the Round Block Displacement Technique in Oncoplastic Breast Conserving Surgery: Functional Outcomes of a Feasible One-Stage Reconstruction
Background: For selected women diagnosed with breast cancer (BC), partial reconstructive techniques involve displacement or replacement procedures to improve cosmesis without compromising oncological safety. This study aims to evaluate the surgical outcomes of the round block (RB) compared with the subaxillary flap (SF) technique for patients with upper outer quadrant tumors. Patients and Methods: Thirty-three patients treated with oncoplastic conserving surgery (15 RB and 18 SF) were enrolled in this retrospective study. After a comparison of baseline characteristics, all cases were recruited for postoperative evaluation of oncological and cosmetic parameters. Moreover, we investigated several scoring combinations to check whether they could discriminate surgeon and patient satisfaction according to different functional results. Results: Median age (p < 0.05), average tumor size (p > 0.05), estimated resection volume (p > 0.05), and nodal involvement (p > 0.05) were slightly higher in the SF group. A greater frequency of DCIS (p < 0.05) in the RB series correlated with reintervention for positive margins (p < 0.001). At a mean follow-up of 19 months, no locoregional recurrences were recorded, and early and late complications were comparable (p > 0.05). The overall satisfaction with cosmesis was characterized by similar proportions of good results (p > 0.05), with some details more related to each procedure. Conclusion: The proposed techniques represent effective solutions for reshaping after upper outer wide excision, achieving comparable complication rates, low reintervention rates, and good aesthetic results in relation to technical and social functioning evaluations. However, it is crucial to establish careful patient selection in order to manage correct surgical planning while predicting any potential sequelae or complications.
Introduction
Partial breast reconstructive techniques are increasingly being used in breast cancer (BC) surgery to improve cosmesis and patient satisfaction without compromising the oncological outcome.
These procedures involve reshaping or volume replacement to fill the parenchymal defect after tumor excision, potentially reducing the incidence of margin involvement, while respecting body image and breast parameters [1].
The advantage of performing a unilateral or bilateral approach is debated. Frequently, a very large resection may be taken using "therapeutic mammoplasty" and the chances of incomplete lumpectomy are very small. However, several reduction techniques may require a contralateral tissue redistribution to achieve symmetry, thus not limiting the number of surgical procedures [2].
In this regard, the indications for modern oncoplastic conserving surgery might be clarified based upon the volume predicted to be excised for any given cancer relative to breast size, while improving the preoperative assessment of the patient's expectations [3].
Furthermore, an increasing demand for reduced scars has led to the development of numerous minimal or "invisible" incision procedures [4]. Among these options, the round block (RB) reconstruction is a useful method for resection of centrally located breast malignancies, especially for patients with small defects and less than 20% of volume removal. It is a technically challenging operation associated with a wide skin-sparing dissection to manipulate tissue through a periareolar doughnut incision. The mobilization of the gland is a key component of breast reshaping after tumor excision, especially in cases with dense tissue, moderate hypertrophy, and no severe ptosis [5]. The radial closing of the parenchymal defect to recone the breast favours not only the ability to perform a re-excision of margins compared to the Wise-pattern procedure, but also a low rate of subsequent contralateral surgery, with excellent symmetry at follow-up [6].
As opposed to breast reshaping, the replacement repair is based on flaps from outside the breast to restore the defect, as in the case of subaxillary, thoracolateral, thoracoepigastric, bilobed, and myocutaneous flaps [7]. These procedures are most appropriate for patients with small-to-medium sized breasts, who cannot afford to lose the volume associated with displacement techniques or wish to avoid contralateral surgery.
In this regard, the subaxillary dermocutaneous fat flap (SF) is a novel approach for reconstruction of the upper outer quadrant and can be used if no more than 25% of tissue is removed for tumor clearance. After a wide local excision is carried out through the lateral mammary crease, the de-epithelialized flap can be rotated and transposed to fill the defect [8]. Although very thin patients with little subcutaneous fat are not suitable, with this procedure the shape and symmetry of the conserved breast are well maintained and a contralateral procedure is seldom necessary [9].
There is limited evidence in the literature on the safety outcomes of different approaches in oncoplastic conserving surgery, but future horizons are likely to see the development of better selection tools to allow the best personalized strategies, while providing data on how patients feel about their management.
The purpose of this retrospective study is to review a single institution's experience of round block repair compared with the subaxillary replacement flap for patients with upper outer quadrant BC, in order to assess surgical, oncological, and aesthetic outcomes related to these different partial reconstructions.
Patients and Methods
A total of 33 patients with upper outer BC underwent oncoplastic conserving surgery using the RB or the SF technique at the Academic Breast Unit of Campus Bio-medico University Hospital of Rome, from January 2018 to December 2022. Patients were eligible if they were older than 18 years, had a histological diagnosis of invasive or in situ BC within 3 months, had a primary tumor deemed technically appropriate for surgical resection, and had no clinical evidence of distant metastatic disease. We did not exclude patients based on tumor size, age, lymph node status, or previous primary systemic treatment. All patients underwent a multidisciplinary approach, involving medical and surgical oncologists, plastic surgeons, breast radiologists, and radiation oncologists. In order to perform a breast conserving procedure, we localized the lesions the day before surgery using ultrasound examination; the sentinel lymph node (SN) was localized preoperatively by injecting Tc99m-nanocoll. Axillary lymph node dissection, chemotherapy, radiotherapy to the breast, and endocrine therapy were performed according to our standard protocols if required. All patients had regular postoperative follow-up after any adjuvant treatment provided. The medical records, including tumor characteristics, pathologic reports, complications, and surgeon or patient satisfaction assessed by a questionnaire, were evaluated. All patients treated at our Institute provided informed written consent (ethical statement n. 07.19 PAR V ComET CBM).
The indications for displacement or replacement partial reconstruction surgery and study eligibility were based on excision volume, tumor location, glandular density, breast size, and level of ptosis.
Surgical Technique
The resection was planned preoperatively, and markings were made with the patient in the upright position. The success of the procedure depended on patient selection and careful intraoperative management to perform an individual "customized" repair.
The central round block approach is a volume displacement reconstruction with a varying amount of skin adjustment to consider, depending on the volume loss or skin reduction required, in patients with small-to-medium-sized breasts with low to moderate ptosis who may not require contralateral surgery for symmetrization (Figure 1). It is a versatile technique that requires sophisticated glandular reshaping while limiting scars, with good aesthetic results. A circumareolar incision down to the dermis is made delimiting the outline of the areola, together with another parallel concentric or oval incision not far from the first, taking into account the location of the tumor and any potential asymmetry with the contralateral breast. A further trick to avoid scar widening and changes in areolar shape is to keep the two incisions of the doughnut skin excision as close as possible, ideally within 20 mm [10]. After de-epithelializing the intervening ring area, slightly wider in the quadrant opposite the resection to prevent deviation of the nipple-areolar complex (NAC), the dermis is cut up to 180° at the lesion side to provide good exposure. The NAC remains vascularized by its posterior glandular base through the underlying plexus and the Würinger septum. The adjacent breast skin is widely undermined in the mastectomy plane between the subcutaneous fat and parenchyma at the level of the superficial fascia, preserving subdermal vascularization and limiting the dissection to the tumor-bearing quadrant where possible. Dissection of up to half of the breast is enough to obtain a good operative field for wide excision and tissue reapproximation, thus decreasing the operation time and the possibility of postoperative seroma [11]. Excision of a radially oriented ellipse of parenchyma, which facilitates closure, follows, after mobilizing the gland in the prepectoral plane if necessary. A surgical clip is subsequently inserted into the excised cavity to facilitate radiation therapy. Loosely tied 2-0 absorbable sutures were used to close the defect radially and recone the breast tissue, often over a drain according to the extent of reshaping. In our approach, keeping periareolar skin removal to a minimum, we used simple interrupted inverted dermal 3-0 sutures to support mild skin tension and maintain a good circular form. A running 4-0 absorbable suture was used to secure the skin edge, in an attempt to lower the chances of scarring. An SN biopsy was often performed through a separate transverse skin incision. This procedure was indicated for cases whose excision volumes were 10-20% of the total [12] and was assumed to be best suited for treating periareolar lesions. Patients with very large and fatty breasts, or with a lot of additional skin and a peripheral tumor location, were not considered for this approach, nor for an extensive dual-plane dissection. Our goal was to limit breast detachment as much as possible, to maximize the vitality of the glandular flaps and to ensure a better conical shape. Concerning the skin, the trend was to limit the amount of periareolar de-epithelialization in order to prevent complications such as bad scarring and flattening owing to excess tension in the periareolar area, while achieving the best symmetry with the contralateral markings. However, careful attention to patient selection with good anatomic conditions, as well as to some operative details, was critical in performing this procedure.
For the replacement repair, the technique basically employs a transposition flap of skin and subaxillary fat for the breast defect, using the excess of subcutaneous autologous tissue in the lateral extramammary thoracic region (Figure 2) [9]. Its major clinical application is in patients who refuse higher-morbidity procedures or are not candidates for more extensive reconstruction, particularly with myocutaneous flaps. The main indication for this local flap was primarily in patients with small-volume breasts with or without ptosis, and included cases with a moderate lateral defect where there was not enough breast tissue to perform the reconstruction with local glandular flaps or reduction mammoplasty techniques [13]. For this purpose, a convex flap design was used that, rather than depending on an axial blood vessel for nourishment, relied upon the dermal-subdermal plexus of blood vessels. A lateral contour access is performed and the parenchyma of the upper part of the breast is widely mobilized under the skin and above the muscle to the level of the tumor, in order to perform the lumpectomy. A full-thickness glandular resection is performed, and the adequacy of surgical margins is evaluated by intraoperative macroscopic assessment. Usually, a rhomboid or wedge-shaped flap was designed on the subaxillary region and the amount of tissue available was determined by a pinch test. The base of the flap varied from 4 to 9 cm and the length from 6 to 10 cm. For small defects, the flap is planned as a triangle located exclusively on the axilla. For moderate and large excisions, the distal limit can reach the whole lateral thoracic region, designing the inferior and superior limits more obliquely and with curved borders. A surgical clip is subsequently inserted into the excised cavity to facilitate radiation therapy. The skin and subcutaneous axillary fat are dissected from the underlying muscles in a medial direction. Then, this "tissue bridge" is rotated into the lateral breast quadrant and fixed with loosely tied 2-0 absorbable sutures to the chest wall or breast parenchyma to close the resection cavity.
The defect is closed under the skin after the flap is de-epithelialized, or with skin replacement when necessary, and the contour is restored by subcutaneous axillary adipofascial tissue with preservation of perforating vessels, in order to ensure a sufficient blood supply to the lateral part of the partial reconstruction [4]. The wound is closed in two layers with 3-0 and 4-0 absorbable sutures, respectively, over a drain, producing a neat single curvilinear scar in the lateral mammary crease, with mild skin tension and a good circular form according to this anatomical subunit. An SN biopsy is performed through the same skin incision in all cases. All patients with superolateral tumors are potential candidates for subaxillary flap reconstruction, although some limitations to this procedure exist in thin patients with insufficient tissue to replace the volume removed and reduce the deformity to an acceptable level. In this regard, it is important to ensure optimal positioning of the axillary incisions to avoid scar widening or an ischemic flap with subsequent necrosis [14]. Moreover, a correct approach requires that the primary tension of the closure never displace neighboring structures or prominent landmarks, especially the nipple and the inframammary fold.
Study Design and Conduct
All cases were retrospectively recruited for postoperative evaluation of oncological and cosmetic outcomes after any oncoplastic treatment, including early and late surgical complications.
The size of the tumours was measured and classified according to the American Joint Committee on Cancer (8th edition) staging criteria, i.e., T1a-b (<10 mm), T1c (11-20 mm), T2 (21-50 mm), or T3 (>50 mm) [15]. Nuclear grade and axillary node metastases were also examined by histopathology. The intrinsic BC subtypes were identified according to the clinicopathological criteria recommended by the 2013 St. Gallen International Expert Consensus Report [16]. The patients were categorized based on the receptor status of their primary as follows: luminal A [oestrogen receptor-positive (OR+) or progesterone receptor-positive (PR+) and HER2-]; luminal B HER2- (OR+, HER2-, and at least one of Ki-67 "high" or PR "negative or low"); luminal B HER2+ (OR+, HER2 overexpressed or amplified, any Ki-67 value, any PR); HER2 (OR- and PR- and HER2+); or triple-negative (OR-, PR-, and HER2-). Tumours were considered HER2-positive only if they were scored as 3+ by immunohistochemistry (IHC; strong, complete membrane staining in >10% of cancer cells) or showed HER2 amplification (ratio >2) using fluorescence in situ hybridization (FISH). In the absence of positive FISH data, tumours scored as 2+ on IHC were considered negative for HER2. Tumours were also classified as luminal or nonluminal according to hormone receptor expression. The primary disease was classified as multifocal at the time of the initial diagnostic work-up if the radiological or histological assessment available after surgery described two or more lesions separated by ≥1 cm of normal parenchyma. Furthermore, in our institution, we routinely measured the estimated breast volume resected, using a method previously described by Behluli et al. In brief, after each primary surgical intervention, we approximated the resection volume from the dimensions of the specimens using the following formula: V = a × b × c, where V was the resection volume (cm³) and a, b, and c were the specimen lengths (cm) in the medial/lateral, superior/inferior, and anterior/posterior directions, respectively [17]. The tumor volume was estimated as a cube, based on the formula V = L³, where V was the tumor volume (cm³) and L was the maximum histological lesion diameter (cm).
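As a quick arithmetic illustration of these two estimates (a hypothetical Python helper, not code from the study), the following computes both volumes from lengths in centimetres:

def resection_volume_cm3(a_cm, b_cm, c_cm):
    # V = a * b * c: specimen lengths in the medial/lateral,
    # superior/inferior and anterior/posterior directions (cm).
    return a_cm * b_cm * c_cm

def tumor_volume_cm3(max_diameter_cm):
    # V = L**3: tumor volume estimated as a cube with side L,
    # the maximum histological lesion diameter (cm).
    return max_diameter_cm ** 3

# Example: a 6 x 5 x 4 cm specimen containing a 2 cm lesion
print(resection_volume_cm3(6, 5, 4))  # 120 (cm^3)
print(tumor_volume_cm3(2))            # 8 (cm^3)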
The oncological endpoint was evaluated by the rate of positive or close margins, defined by the presence of cancer cells less than 2 mm from the specimen's margins at histological examination. The tumor and specimen volumes were compared with the rate of margin involvement and the need for further surgery, either re-excision or mastectomy. Complete data on the oncological outcomes of local recurrence, metastasis, and overall survival were also reported.
At the time of the cosmetic evaluation, all patients had completed their postoperative radiation therapy. The surgeon's technical opinion and the patient's self-assessment were obtained by rating the shape and symmetry between the treated and untreated breast, the position of the nipple-areola complex, the perception of the surgical scar, and the appraisal of the overall cosmetic outcome and satisfaction. All parameters were evaluated on a 3-point scale (excellent, good, and fair) based on these five items [8]. The observer evaluated patients' photographs, taken at their last follow-up appointment, in frontal views with arms in the neutral position on the hips, with arms raised, and in profile. Quality of life outcomes after surgery were further described using a questionnaire based on the following criteria assessed by the patients: psychological wellness with social functioning, physical discomfort, and side effects of adjuvant radiotherapy [19]. Finally, we compared group A (round block: RB) and group B (subaxillary replacement flap: SF) for all the selected outcome variables, in order to assess the operative measures and quality outcome variables for each type of surgery performed.
Statistical Analysis
Data originated from patient dossiers, including electronic access to and review of operation and pathology reports, as well as discharge letters. Continuous variables were summarized using the mean, median, and standard deviation; categorical variables were summarized as percentages. Furthermore, the associations between the type of surgery and clinicopathological factors (age, invasive tumor size, nuclear grade, lymphovascular invasion, ER, PR, and HER2 status, molecular subtypes, and Ki-67 labeling index) were examined. For hypothesis testing, we applied the two-tailed t-test and the chi-squared test, followed by Fisher's exact test for confirmation. Statistical differences were considered significant for p < 0.05. All statistical and stratified analyses were performed using IBM SPSS 23 software (IBM, SPSS Statistics, Chicago, IL, USA).
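For readers who wish to reproduce this kind of two-group comparison outside SPSS, a minimal sketch with SciPy might look as follows. The continuous arrays are invented illustrative values, not the study data; the 2x2 table, by contrast, uses the re-excision counts reported in the Results below (2/15 RB vs. 0/18 SF):

import numpy as np
from scipy import stats

# Hypothetical resection volumes (cm^3) for the two groups (illustrative only)
rb = np.array([107.2, 95.0, 120.3, 88.1, 130.5])
sf = np.array([128.9, 140.2, 110.7, 150.3, 125.0])

# Two-tailed t-test for a continuous variable
t_stat, p_val = stats.ttest_ind(rb, sf)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Fisher's exact test for a 2x2 categorical comparison
# (re-excision yes/no by technique)
table = [[2, 13],   # RB: 2 re-excisions out of 15
         [0, 18]]   # SF: 0 re-excisions out of 18
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact p = {p_fisher:.3f}")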
Results
The series included 33 patients who underwent oncoplastic breast conserving surgery for unilateral upper outer BC, of which 15 were treated with the round block technique and 18 with the subaxillary replacement flap. Table 1 shows the demographic information of the patients and their primary tumor characteristics. The median age at surgery was 58 years (range = 34-80), 53 in the RB (range = 34-73) and 62 (range = 44-80) in the SF group, respectively (p = 0.022). Of all the cases, 69.6% (n = 23/33) were peri- or postmenopausal (p = 0.701).
Oncologic Outcome
The mean resection volume was slightly larger in the subaxillary replacement cases than in the round block cases (SF: 128.9 ± 89.4 cm³ vs. RB: 107.2 ± 75.8 cm³), although the distributions did not statistically differ (p = 0.158).
Regarding the approximated tumor volume, we observed no significant differences between groups (SF: 7.6 ± 8.8 cm³ vs. RB: 7.2 ± 8.1 cm³), also in consideration of the dimensional homogeneity of the cases included (p = 0.612).
We reported a higher re-excision rate in the RB (2/15 = 13.3%) than in the SF series (0/18 = 0%), in which no further resection proved necessary (p < 0.001). All positive-margin patients had a previous DCIS diagnosis. Only one patient, in the RB group, underwent contralateral reduction after a second re-excision due to extensive disease.
No locoregional recurrences were observed at the end of the follow-up, and only 3% (1 of 33) developed distant relapse and subsequent BC-related death (Table 2). This patient had a G3 triple-negative invasive ductal carcinoma and showed disease progression with brain recurrence after primary systemic treatment and a complete pathological response (ypT0N0) subsequent to subaxillary replacement repair and axillary dissection.
Postoperative Complications
Overall, there was a 3% rate of major complications, all in the subaxillary flap group (1/18 = 5.5%), exclusively represented by delayed healing of the surgical wound (p = 0.374).
An incidence of 18.2% (n = 6/33) of minor complications was reported, 11.1% (n = 2/18) in the SF and 26.6% (n = 4/15) in the RB group (p < 0.001), characterized mainly by the onset of seroma or hematoma in the postoperative period, with a higher trend in the displacement (19.9%) compared to the replacement technique (5.5%).
Onset of late complications (hypertrophic scar, adipose necrosis, fibrosis) was recorded in 24.2% (n = 8/33), 27.7% (n = 5/18) in the SF and 20% (n = 3/15) in the RB cases (p = 0.697). However, in the first group, late complications were represented exclusively by the appearance of hypertrophic scar (27.7%), while in the second group by fibrosis and fat necrosis (16.6%), (Table 3). All major and minor complications were managed conservatively and responded to outpatient treatment with local wound care. No unanticipated readmission or return to the operating room were documented. No cases of surgical site infection were reported, considering that a short-term antibiotic prophylactic therapy (5 days) was prescribed to all patients.
Cosmetic Outcome and Patient Satisfaction
Aesthetic results were evaluated following radiotherapy in 29/33 patients, after excluding 4 cases (two in each group) for incomplete data (Tables 4 and 5). The overall satisfaction with the cosmetic outcome assessed by the patient was considered excellent in 8/16 (50%) in the SF and in 6/13 (46%) in the RB series (p = 0.220); the overall satisfaction assessed by the surgeon was considered excellent in 8/16 (50%) in the SF and in 10/13 (77%) in the RB series (p = 0.09). There were no significant differences in shape, breast, and NAC symmetry between patient and surgeon assessments, with similar proportions of good and poor outcomes in both series (p = 0.541). A significant difference was found for scar appearance, where the surgeon's evaluation recorded excellent results in 8/13 (62%) of the RB compared to 2/16 (13%) of the SF group (p = 0.001).
Among the quality of life outcomes assessed by the patients after surgery (Table 6), psychological wellness and social functioning were rated with a maximum score in 75.8% (n = 22/29) of the cases included in the study: 81.3% (n = 13/16) of the subaxillary rotation flap and 69.2% (n = 9/13) of the round block group, respectively (p > 0.05). Furthermore, persistent postoperative pain, skin tension in the treated breast, ipsilateral arm impairment or signs of lymphedema were considered representative elements of physical discomfort. In this context, a minimum score was reported by 72.4% (n = 21/29) overall, 81.3% (n = 13/16) in the SF and 61.5% (n = 8/13) in the RB group (p = 0.41).
Moreover, the appearance of thickening, redness, skin dehydration, or pain was recorded to evaluate the outcomes of adjuvant radiotherapy and its potential side effects.
Discussion
Correct surgical planning means offering a variable number of options of heterogeneous complexity while, at the same time, predicting and managing any sequelae or complications that may result from that type of treatment. In this context, it is crucial to establish a careful selection of patients who can potentially undergo conservative management, based on disease location, glandular or fatty volume composition, and breast size, always taking measurements of symmetry and degree of ptosis [20]. Furthermore, the aim of maintaining a good cosmetic result is directly linked to a wide excision with radical margins and to the ability to prevent local recurrences or disease progression [21].
The present study aims to compare two different reliable approaches for upper outer quadrantectomies, both respecting oncological safety and preserving a pleasant breast contour. The analysis focuses not only on the patients' clinical or pathological features, but also on the rates of complications and of satisfaction after surgery, in order to refine the selection criteria for these techniques. Although both groups showed homogeneous characteristics (Table 1), there was a significant trend toward the round block repair in younger patients, confirming that this procedure is applied more in breasts with higher glandular density and lower ptosis [13].
By contrast, the subaxillary rotation flap has been more frequently indicated in cases with a median age of 62 (range 44-80 years), also in consideration of the greater frequency of adipose morphology or associated comorbidities [22].
Although a significant difference in resection volumes was not reported, we noticed a tendency towards greater excisions in the replacement approach, which can achieve better oncological radicality in the outermost tumors, considering also the possibility of removing the overlying skin if necessary. However, the patient's morphology is always determinant in establishing the amount of tissue available in the lateral thoracic region to fill the cavity defect, and this treatment is often not applicable in young and thin patients [23].
The greater resection efficiency of the replacement technique must also be related to the larger tumor size in this group and to the significantly higher nodal involvement. In all these cases, axillary dissection was possible through the same incisional pattern used to raise the flap, which, despite the larger scar compared to the round block, offered excellent locoregional control of the neurovascular bundles [24]. Moreover, in line with a more aggressive biological profile, the SF group showed a significantly higher proportion of tumors with Ki-67 >20% (when categorized as ≤20% vs. >20%), an increased frequency of luminal B HER2+ or TN BC, and wider use of primary systemic treatment.
In general, these techniques often allow for wider local tumor excision, potentially reducing the incidence of margin involvement, while enhancing the cosmetic outcome [25].
However, in accordance with the hypothesis of a lower volume of resection in the RB group, as an intrinsic limit of the procedure in our hands, all cases of re-excision for positive margins occurred in these patients, and always for a DCIS diagnosis [14]. In this context, several authors assessed that subaxillary dermocutaneous fat flaps can be used in all patients, regardless of breast size, if no more than 25 per cent of the breast tissue is removed for tumour clearance [26]. On the other hand, the proportion of resection volume achieved with the round block approach is usually slightly inferior (up to 20% of tissue loss), always considering a technical procedure that potentially should not require a contralateral approach. Thus, moderate-to-large-sized breasts tend to exhibit poor outcomes due to asymmetrical breast size caused by a shrinking volume if the excision volume is >20%, plus potential problems of late-onset scar widening or changes in the areola shape, and in this case, the round block approach should be considered in combination with other techniques [27].
Moreover, this analysis highlights that the two approaches may have different spectra of complications. For instance, the longer incisional pattern of the subaxillary flap was subject not only to a higher occurrence of delayed healing in the early postoperative period but also to more frequent hypertrophic long-term scarring [28]. These features could represent the basis for optimizing the surgical technique, accurately defining the tension lines during partial breast reconstruction, and guiding wound management during follow-up.
On the other hand, the onset of seroma or hematoma was the most frequent complication of the round block, probably linked to the extensive dual-plane undermining achieved by subcutaneous and prepectoral dissection. These minor events could be related in the long term to the development of fat necrosis and fibrosis, thus potentially decreasing the overall aesthetic outcome. To better manage the risk of these problems, the background breast composition must be carefully evaluated on preoperative mammograms, since most fat necrosis occurs in the low-density tissue group with a predominantly fatty composition [29]. Moreover, in these higher-risk patients, the surgical technique may include a "cold dissection" (with scissors or blade) to decrease the tissue damage caused by the electrocautery, if alternative oncoplastic approaches are not feasible.
Finally, we did not observe any significant difference in terms of cosmetic outcomes between the two groups, which can act as evidence that both the procedures represent a safe and effective solution for the reshaping that follows upper outer resection with comparable patient and surgeon satisfaction [30,31].
The major scarring consequences in the subaxillary cases had a negative impact on the overall assessment both from a subjective or technical point of view. For this reason, more recent interest in "short" scar procedures has prompted breast surgeons to reduce complications more carefully, while always providing for more aesthetic and long-lasting shapes [32]. However, this concept must not compromise the technical choice to obtain an adequate shape based on the type of patient, although this can result in more complex incisional patterns. Furthermore, the fact that psychosocial well-being was judged to be slightly better in the replacement group may be motivated by the assumption that the scar is not directly visible in the frontal position when the patient looks in the mirror, resulting in a lower cognitive impact. Therefore, the patient's evaluation is important in determining the quality of cosmetic outcome after oncoplastic surgery [30]. Indeed, the cultural and emotional aspects elaborated by every single patient, also based on their own social development, might explain why, in this study, the patient assessment was often characterized by lower scores than those obtained from the specialist's technical evaluation [31].
In conclusion, to decrease breast deformities, it is strongly suggested to think in terms of aesthetic subunits during partial resections and reconstructions. Meticulous handling of tissue is essential to avoid necrosis, decreased sensation, lymphedema, and unnecessary undermining, which might violate perforators. Moreover, the breast size, shape, and degree of ptosis must be considered when discussing the potential for poor cosmetic results and designing the best reconstructive option, since unreasonable expectations are best managed before surgery, as dealing with them postoperatively can be difficult [32].
Consequently, we must properly evaluate the case to select the recommended surgical procedure and to be able to obtain reproducible results, since to be an oncoplastic surgeon is a great responsibility, the patient's life and hopes being in our hands.
This study presents the limitation of being retrospective and a longer follow-up could further confirm the oncological and cosmetic safety of the proposed surgical approaches. Furthermore, the ratio of the patient's original breast volume to the excised tissue cannot be evaluated since the breast volume was not measured. In this regard, the resection volume in comparison with the tumor size could also have been inaccurate, because their assessment was conventionally performed through a mathematical model and not through clinical or instrumental evaluation.
Another drawback concerns the relatively small number of cases; moreover, this analysis did not address the complexities of selection, which clearly involves many factors, including both patient and physician concerns and biases. As such, our protocol should be considered exploratory in nature. Optimally, we would have had sufficient cases to evaluate the impact of different clinicopathological criteria in determining the surgical treatment, but further studies with larger numbers and a longer follow-up may ascertain potential differences between these parameters.
In summary, our study has shown that the proposed procedures represent safe and effective solutions for reshaping following upper outer breast wide excision, achieving comparable complication rates, low reinterventions for positive margins, and good aesthetic results in relation to technical and social functioning evaluations.
In this context, further analyses may allow more patients to be considered for this multiparametric comparison and could provide meaningful information and longer-term prognostic data.
"year": 2022,
"sha1": "a70710711f7478ec33b25daca48d133c526fda56",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1718-7729/29/12/736/pdf?version=1669801739",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2d2c1cc0eaf071e843869c58e5e48770e5a9d28",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Alzheimer's amyloid-β peptide disturbs P2X7 receptor-mediated circadian oscillations of intracellular calcium
Recent data indicate that Alzheimer's disease (AD) is associated with disturbances of the circadian rhythm in patients. We examined the effect of amyloid-β (Aβ) peptide, the main component of the senile plaques playing a critical role in the deregulation of calcium (Ca2+) homeostasis in AD, on the circadian oscillation of cytosolic Ca2+ levels in vitro. The experiments were carried out in human primary skin fibroblasts. This cell line was previously shown to exhibit circadian rhythms of clock genes. Moreover, the basic clock properties of these peripheral cells closely mimic those measured physiologically and behaviorally in humans and do not change during aging. In this study we showed that (i) cytosolic Ca2+ oscillations depend on the activation of purinergic P2X7 receptors, and (ii) these oscillations are abolished in the presence of Aβ. Taken together, our new findings may help to deepen our understanding of the molecular mechanisms involved in AD-related circadian alterations.
Introduction
Alzheimer's disease (AD) is the most common neurodegenerative disorder and is characterized by progressive neuronal loss, mainly in the brain cortex and hippocampus [28]. One of the main neuropathological hallmarks of this disease is the deposition of extracellular senile plaques containing amyloid-β (Aβ) peptides, derived from processing of the amyloid-β precursor protein (APP) [3,18]. Many previous data point to a critical role for deregulation of calcium (Ca2+) homeostasis in the pathogenesis of AD. Thus, the levels of intracellular Ca2+ concentration and of calcium-regulated enzymes (e.g., calpains, proteases, phospholipases) were found to be elevated in animal models of AD [24] as well as in the brains of AD patients [47,48]. The "calcium hypothesis" was further supported by demonstrations that, during brain aging, the molecular processes responsible for Ca2+ regulation were impaired. This involved the mechanisms of Ca2+ sequestration into its intracellular stores (endo(sarco)plasmic reticulum (ER) and mitochondria) as well as Ca2+ influx into the cytoplasm through voltage-gated Ca2+ channels and ionotropic or metabotropic receptors [13,21,64]. In AD, disturbed Ca2+ homeostasis is not restricted to neurons but represents a global phenomenon affecting virtually all cells in the brain. AD-related aberrant Ca2+ signaling in astrocytes and microglia probably contributes profoundly to an inflammatory response that, in turn, impacts neuronal Ca2+ homeostasis and brain function [8]. It has been shown that Aβ release induces intracellular calcium overload and activates intracellular calcium-dependent events, leading to a decrease in learning and memory as well as cognitive dysfunction [27,43,62]. Previous findings suggested that N-methyl-D-aspartate receptors (NMDARs) are the main mediators of enhanced Ca2+ entry evoked by Aβ [19,35]. However, more recent discoveries showed that soluble Aβ oligomers induce their toxic effects by disrupting the integrity of the cell plasma membrane, leading to uncontrolled fluxes of Ca2+ into the cells [66].
Moreover, recent data indicate that AD progression is associated with disturbances of the circadian rhythm in patients. Circadian rhythms govern a wide variety of physical, behavioral and metabolic processes that follow a roughly 24-hour cycle, responding primarily to the light/dark cycle. These are controlled by the circadian clock machinery, in which rhythm-generating mechanisms are encoded by a transcription-translation feedback loop. The mammalian circadian clock machinery is regulated by a central pacemaker in the suprachiasmatic nucleus (SCN) of the brain that synchronizes oscillators in peripheral tissue [16]. The entrained signals from SCN neurons are distributed through different target organs by efferent neural and humoral mechanisms, such as circulating melatonin, producing changes in metabolism, core body temperature, and sleep. Calcium ions are a potent second messenger coupling the clock gene oscillation and the rhythmic firing of action potentials in SCN neurons [33,37,58,60]. Calcium mediates intracellular clock signals, such as entrainment processes [6,23,30], clock gene expression [33,37,45,55], and output signaling [2]. Moreover, a topological specificity of the circadian Ca2+ rhythm in the SCN was observed, suggesting that calcium plays a role in the hierarchical organization of rhythmicity in the central pacemaker [25].
Alterations of the SCN as well as of melatonin secretion are the major factors in circadian clock disruption [72]. Insomnia, nocturnal behavioral changes, sundowning syndrome and excessive daytime sleepiness are the circadian disturbances commonly observed in AD patients as well as in patients with mild cognitive impairment [7,17,67,68,70]. Studies in animals and humans demonstrated that the Aβ level in the cerebrospinal fluid is modulated by sleep-wake cycles [5,36,40]. This raises the possibility that disturbances in the circadian rhythm cause brain Aβ accumulation over time, suggesting a causative rather than an associative link between sleep loss and Aβ accumulation [52]. However, early-stage AD events such as Aβ aggregation and disturbances in calcium homeostasis may also induce molecular changes that lead to circadian clock disruption. Therefore, the aim of this study was to deepen our understanding of the molecular mechanisms involved in AD-related circadian clock alterations by investigating (i) the clock-dependent regulation of intracellular calcium levels in a peripheral tissue, and (ii) the effect of Aβ peptides on the changes of cytosolic calcium levels around the clock. For this purpose, we used primary cultures of human fibroblasts because (i) fibroblasts from AD patients present a disturbed Ca2+ homeostasis [31,39], and (ii) fibroblasts are a valuable in vitro model of peripheral oscillators [10,56,57].
Ethical permission
Prior ethical consent to the use of human skin tissues was given by the Ethical Committee of Basel, and informed written consent to participation in this study was obtained from all human subjects.
Synchronization of cells and timetable to study cellular circadian rhythms
For all experiments, cells were seeded onto collagen-coated 48-well or 96-well dark plates at a density of 1.4 × 10⁵ cells per mL. Cells were synchronized by treatment with DMEM containing 50% horse serum for 2 hours at 37°C. After the synchronization, cells were washed with PBS and the medium was changed to DMEM/2% FBS according to [1]. Experiments were performed every 4 hours, starting 4 h after synchronization and continuing until 48 h.
Preparation of Aβ species and cell treatment
Aβ42 was dissolved in PBS to make 500 μM stocks and stored at −80°C until use. Aging of the peptides was induced by shaking the diluted solution (50 μM) at 1000 rpm overnight at 37°C. The cells were treated after synchronization with a final concentration of 0.5 μM Aβ42. In selected experiments, after measurement of basal Ca2+ levels, cells were treated either with a sarco/endoplasmic reticulum Ca2+ ATPase (SERCA) inhibitor (thapsigargin; 10 nM) or with an agonist (ATP; 1 mM) of purinergic P2X7 receptors. The measurements were repeated immediately or after 5 min of incubation, respectively.
[Ca2+]i measurements

[Ca2+]i measurement was carried out using the fluorescent indicator Fluo-4 acetoxymethyl (AM) ester (200 μM stock solution in DMSO). At the specific time points after cell synchronization, fibroblasts were loaded with 4 μM Fluo-4 AM supplemented with 0.02% Pluronic® F-68 for 60 min at 37°C in standard HBSS. The cells were washed 3 times with HBSS and, to ensure complete AM ester hydrolysis, kept for 30 min at 37°C in the dark. After a second washing step, the fluorescence was measured using a Fluoroskan® counter at 485/520 nm. To study the involvement of purinergic receptors and endoplasmic reticulum (ER) stores in cytosolic calcium levels, cells were treated with an agonist (ATP, 1 mM) or an antagonist of purinergic P2X7 receptors (Coomassie Brilliant Blue G, 5 μM) for 1 min, or with a non-competitive SERCA inhibitor for 30 seconds, as described in the figure captions, and the fluorescence of Fluo-4 was measured.
Statistical analysis
The results were expressed as mean values ± SEM. Differences between means were analyzed using Student's two-tailed t-test. P < 0.05 was considered statistically significant. Cosinor analysis software was used to estimate the parameters of the circadian rhythm (period, mesor (rhythm-adjusted mean), amplitude and acrophase).
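To make the cosinor step concrete, here is a minimal sketch of a single-component cosinor fit with a fixed 24-h period, written in Python with NumPy least squares. It illustrates the method only and is not the software used in this study; the sampling times mirror the 4-h schedule described above, while the data are simulated:

import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    # Single-component cosinor: y(t) = M + A*cos(2*pi*t/period + phi),
    # linearized as y = M + b*cos(w*t) + g*sin(w*t) and solved by
    # ordinary least squares.
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    (M, b, g), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b, g)     # A = sqrt(b^2 + g^2)
    acrophase = np.arctan2(-g, b)  # phi, in radians
    return M, amplitude, acrophase

# Simulated example: samples every 4 h from 4 to 48 h post-synchronization
t = np.arange(4.0, 49.0, 4.0)
y = 10 + 2 * np.cos(2 * np.pi * t / 24 - 1.0) + np.random.normal(0, 0.2, t.size)
mesor, amp, phi = cosinor_fit(t, y)
print(f"mesor = {mesor:.2f}, amplitude = {amp:.2f}, acrophase = {phi:.2f} rad")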
Results
Since intracellular calcium was previously shown to exhibit a well-defined circadian rhythm in neuronal populations of the SCN [32], we first verified whether the Ca2+ level exhibited circadian rhythmicity in peripheral oscillators (Fig. 1). We used human primary fibroblasts since these cells are an excellent in vitro model of peripheral oscillators [10,56,57] and show a disturbed Ca2+ homeostasis in AD patients [31,39].
In the current study, fibroblasts presented changes in Ca2+ accumulation, with a peak 16 h post-synchronization (TP16) and a trough 28 h post-synchronization (TP28) (Fig. 1A). Under these conditions, relative Fluo-4 fluorescence in cell cultures was significantly different (p < 0.001) between the peak and the trough (Fig. 1D).
Since endoplasmic reticulum (ER) Ca2+ stores are important in the regulation of Ca2+ signaling in cells, we quantified [Ca2+]i in fibroblasts after treatment with 10 nM thapsigargin (THAPS), a SERCA inhibitor. Depletion of ER Ca2+ stores with THAPS did not alter the circadian rhythm of [Ca2+]i but increased the total Ca2+ level (Fig. 1B). The significant difference between [Ca2+]i at TP16 and TP28 was still present (Fig. 1D).
Considering the controlling mechanisms of cytosolic Ca2+ fluctuations, it is possible that receptor-mediated Ca2+ influx is involved in the regulation of the circadian rhythm of [Ca2+]i. Since primary human fibroblasts are electrically non-excitable and do not express voltage-gated Ca2+ channels, Ca2+ could be transported via purinergic P2X receptors, especially the P2X7 subtype, which is widely distributed in skin tissue [65]. Using the specific antagonist of the P2X7 receptor, Brilliant Blue G (BBG), we showed that treatment with this compound abolished circadian oscillations of [Ca2+]i (Fig. 1C). Thus, the significant difference between [Ca2+]i at TP16 and TP28 was no longer observed (Fig. 1D).
Disturbances of Ca2+ homeostasis have been demonstrated to be associated with Aβ neurotoxicity. Therefore, we investigated the effect of Aβ peptides on the circadian fluctuation of Ca2+ levels (Fig. 2). Our data showed that extracellular treatment with Aβ42 (aged peptide) completely abolished circadian oscillations of intracellular Ca2+ and impacted the levels of [Ca2+]i at TP16 and TP28.

Fig. 1. Circadian oscillations of cytosolic calcium depend on activation of P2X7 receptors but not on calcium uptake through SERCA. A) Cytosolic calcium levels were evaluated using the fluorescent dye Fluo-4 (4 μM) in synchronized human skin fibroblasts, from the 12 hours post-synchronization time point, every 4 hours for 7 time points (n = 5). B-C) Cytosolic calcium levels were evaluated using Fluo-4 (4 μM) in synchronized human skin fibroblasts, from the 12 hours post-synchronization time point, every 4 hours for 7 time points, in the presence of (B) thapsigargin (THAPS, 10 nM), an inhibitor of SERCA, or (C) Coomassie Brilliant Blue G (BBG, 5 μM), an antagonist of purinergic P2X7 receptors. D) Relative cytosolic calcium level at 16 hours post-shock (peak: TP16) and at 28 hours (trough: TP28) compared to non-treated cells (CTRL) (n = 3-5). The emitted fluorescence is linearly related to the cytosolic calcium content. *p < 0.05, ***p < 0.001; Student's two-tailed t-test comparing single time points. Data are represented as average ± SEM.
Discussion
Accumulating evidence has suggested that sleep disturbances may be early indicators of dementia and may actually precede the onset of cognitive symptoms in AD [53]. Moreover, the sleep-wake cycle was shown to be a critical regulator of Aβ release, and loss of slow-wave sleep resulted in higher cumulative levels of neuronal activity and a higher Aβ concentration in CSF [12]. It was previously suggested that intracellular Ca2+ may be a coordinator of the circadian timing system and biochemical reactions owing to its ubiquitous role as a metabolic regulator [4]. Therefore, the disturbances in Ca2+ homeostasis observed in AD brains could be partly associated with the deregulation of patients' circadian rhythms. However, the role of Ca2+ in regulating clock function in pathophysiology is unknown. In this study, we showed for the first time that Alzheimer's Aβ peptides can negatively influence circadian fluctuations of Ca2+ in peripheral oscillators. This may subsequently alter calcium-dependent molecular processes involved in circadian clock regulation in AD.
It was demonstrated that the intracellular Ca2+ concentration exhibits circadian rhythms in pacemaker neurons of the SCN [15,38]. The oscillatory physiology of Ca2+ was shown to be regulated by circadian fluctuations in the Ca2+ currents generated by voltage-dependent calcium channels (VDCC) [41,60]. Calcium fluctuations were also shown in astrocytes of the SCN, but, unlike in neurons, they were regulated by Ca2+ release from ER stores [11,20]. Our study extends previous results by showing the existence of daily fluctuations in cytosolic calcium in peripheral oscillators, the human skin fibroblasts. These cells were previously shown to exhibit circadian rhythms of clock genes, and the clock properties of these peripheral cells closely mimic those measured physiologically and behaviorally in humans [10,57]. Therefore, skin fibroblasts are a good in vitro model for studying molecular mechanisms of circadian rhythms. Moreover, it was shown that aging does not alter the basic clock properties (period length, amplitude, and phase) of fibroblasts [56]. We observed that calcium oscillations in fibroblasts correspond to the previously demonstrated changes in transcript levels of the clock genes Per2, Bmal1, Rev-Erb, and Cry1 in those cells [9]. Former data showed that Ca2+ mobilized from internal deposits modulates the molecular circadian clock of hepatic cells ex vivo, in a manner that did not depend on the entrainment cue (meal or light) [4]. This suggests that Ca2+ signaling is a key regulator of circadian rhythms in peripheral tissues, in contrast to the central pacemaker mediating the hierarchical organization of rhythmicity [25].
Calcium signaling in non-excitable cells is initiated by mobilization of Ca2+ from intracellular ER stores through IP3 and ryanodine receptors. In our experiments, inhibition of SERCA significantly elevated the cytosolic Ca2+ level but did not alter calcium fluctuations. A similar effect was observed in SCN astrocytes [11]. Moreover, in SCN neurons, the expression of SERCA was shown to follow a circadian pattern [50]. Together, these data suggest that ER stores are not necessary for controlling daily [Ca2+]i oscillations in non-neuronal cells.
Previous data reported crosstalk between circadian oscillation of intracellular Ca2+ and rhythmic extracellular ATP accumulation in SCN astrocytes [11]. Exogenous ATP was shown to be a mediator of intercellular communication in physiology and neurodegeneration, acting on cell-surface receptors, including ligand-gated ion channels (P2X) and G-protein-coupled (P2Y) receptor subtypes. It was demonstrated that ATP selectively promotes the expression of the clock gene Per1 through gene transactivation after stimulation of P2X7 purinergic receptors in microglial cells [54]. Moreover, endogenous purinergic receptors were shown to determine local clock activity in urinary bladder cells [69]. Therefore, ATP signaling may also be involved in changes of Ca2+ fluctuations in peripheral oscillators. Indeed, our study demonstrated that the circadian rhythm of the calcium level in fibroblasts depends on activation of the ATP-binding receptor P2X7.
In AD pathology, changes in neuronal Ca2+ concentration are responsible for oxidative stress as well as altered metabolism of APP and overproduction of Aβ peptides. On the other hand, Aβ neurotoxicity has been associated with disturbances of intracellular Ca2+ homeostasis in neurons as well as in glial cells. Studies using APP transgenic mouse models of AD identified significantly elevated numbers of neurites with overloaded cytosolic Ca2+, and this effect was positively correlated with the distance from Aβ plaques [43]. Many previous studies demonstrated impaired Ca2+ regulation in fibroblasts of AD patients [31,39]. Our study confirms those reports by demonstrating that exposure to Aβ42 (aged peptide) abolished Ca2+ fluctuations in the cytosol of fibroblasts.
A disruption of Ca2+ regulation in the ER was previously shown to mediate signal-transduction alterations associated with AD [44]. Moreover, mutations that cause familial Alzheimer's disease have been linked to disturbances in intracellular calcium signaling pathways [34]. Skin fibroblasts from humans harboring a mutation in presenilin 1 (PS1-A246E) showed exaggerated Ca2+ release from IP3-gated stores compared to controls after treatment with bombesin and bradykinin [39]. Elevated Ca2+ release from the ER, evoked by activation of the IP3 [14] or ryanodine [61] receptors, was shown to increase the Aβ level. Overexpression of SERCA was also shown to increase Aβ production [29]. Furthermore, the ER is also a potential intracellular target for the Aβ protein [26,71], which disrupts the function of the intracellular Ca2+ stores. In our study, thapsigargin treatment did not restore the physiological oscillations of [Ca2+]i that were significantly altered by Aβ. These data suggest that the Aβ-mediated disruption of intracellular Ca2+ homeostasis may be evoked by an excess of calcium influx across the plasma membrane.
Furthermore, previous studies indicated that altered activity of the purinergic P2X7 receptor mediates pro-inflammatory processes in a transgenic AD model and in the brains of AD patients [46,49,59]. The observations that Aβ may cause ATP release from microglia and that the P2X7 receptor is an obligate participant in microglia activation by Aβ establish ATP and P2 receptors as key players in neurodegeneration [42,63]. Recent data demonstrated that in vivo inhibition of P2X7 receptors significantly reduces amyloid plaque formation in hippocampal structures through activation of α-secretase activity [51]. The mechanism of P2X7R-specific cleavage of APP was shown to be independent of ADAM9, -10, and -17 activity, but to involve Erk1/2 and JNK phosphorylation [22]. In our study, we demonstrated that the Aβ peptide significantly interferes with the P2X7 receptor-mediated circadian oscillations of intracellular Ca2+; however, the mechanism underlying this phenomenon needs to be further investigated.
In summary, our data provide the first evidence that Alzheimer's Aβ42 peptides induce disturbances of P2X7 receptor-mediated Ca2+ oscillations in peripheral oscillators. These findings may therefore be helpful for a better understanding of the circadian rhythm disruption related to AD.
Fig. 2. Cytosolic calcium oscillations are abolished in the presence of Aβ. A-B) Cytosolic calcium levels were evaluated using the fluorescent dye Fluo-4 (4 μM) in synchronized human skin fibroblasts, from the 12-hour post-synchronization time point, every 4 hours for 7 time points, in the presence of Aβ42 (aged peptide) at 0.5 μM (n = 3). (B) Aβ-treated cells were co-treated with ATP (1 mM) to overcome the dampening due to the presence of Aβ (n = 3). (C) Relative cytosolic calcium level at 16 hours post-shock (peak: TP16) and at 28 hours (trough: TP28) compared to non-treated cells (CTRL) (n = 3). Aβ treatment completely abolished differences in [Ca2+]i concentration between peak and trough time points, while ATP treatment rescued the circadian oscillation of cytosolic calcium. The emitted fluorescence is linearly related to the cytosolic calcium content. **p < 0.01, ***p < 0.001; Student's two-tailed t test comparing single time points. Data are represented as average ± SEM. | 2018-04-03T06:25:05.425Z | 2016-12-27T00:00:00.000 | {
"year": 2016,
"sha1": "33c24d4e766775a3138b27c6959c6aa29be9ec49",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-20/pdf-28976-10?filename=Alzheimer's.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "33c24d4e766775a3138b27c6959c6aa29be9ec49",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
233392382 | pes2o/s2orc | v3-fos-license | COVID-19 lockdown and its impact on orthodontic patients
Background: WHO declared COVID-19 a worldwide pandemic. As a measure to control the spread of the disease, national emergencies and lockdowns have been declared in many countries. Dental appointments were suspended considering the risk of transmission, and orthodontic patients have been stranded, with only emergency services provided to them instead of their scheduled appointments. This can increase the total duration of treatment, affect patients' mental attitude, and increase apprehensive behaviour among patients undergoing orthodontic treatment. Aim: The aim of this study is to assess the impact and attitude of the COVID-19 lockdown among patients undergoing active orthodontic treatment. Materials and Methods: A self-designed questionnaire consisting of 10 questions was widely circulated among orthodontic patients using communication media such as WhatsApp, messaging apps, and e-mail. Only 208 subjects returned the questionnaire, which constituted the final study sample. Participation in the study was voluntary, and identifying information was not collected from the study subjects. Results: The study revealed that the majority of female patients were apprehensive, stressed, and anxious owing to increased treatment duration, discomfort during the lockdown period, financial worries, and concerns about treatment outcomes due to their missed follow-ups during the lockdown. The overall future outlook post lockdown showed anxiety among orthodontic patients. Conclusion: Proper communication with patients is a must to reassure them regarding treatment-related concerns, anxiety, fear, and disturbed mental well-being. © This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2; first coined as the 2019-novel coronavirus or 2019-nCoV by WHO) 1 has rapidly spread throughout the world and has led to major health and financial concerns. As a measure to control the spread of the disease, national emergencies and lockdowns have been declared in many countries.
COVID-19 was first reported in Wuhan, Hubei province, central China, in December 2019, 2 where bats are suspected to be the primary host. [3][4][5] Currently, COVID-19 spreads within cities through local or community transmission.
Human transmission occurs predominantly through the respiratory tract via droplets, respiratory secretions (cough, sneeze), and direct contact, where the virus enters the mucous membranes of the mouth, nose, and eyes. [6][7][8] Adequate management of patients' oral health becomes crucially important during the COVID-19 epidemic period. 1 The entire country is in a state of 'Lockdown', and the government is issuing daily advisories for its citizens, particularly regarding the delivery of essential health services and the various protective measures to be taken to guard oneself against infection, the most important being staying at home (isolation and social distancing). Although most dental clinics are closed during these times, orthodontic patients have been stranded, with only emergency services provided to them instead of their scheduled appointments. The regular scheduled appointments have been affected by travelling restrictions, since this is a communicable disease that can only be prevented through social distancing norms. Knowledge about the impact of such an emergency lockdown during COVID-19 on patients undergoing orthodontic treatment is lacking. The effects of unsupervised orthodontic treatment during the COVID-19 lockdown could create unintended, undesirable detrimental effects. This can increase the total duration of treatment, affect the mental attitude, and increase apprehensive behaviour among patients undergoing orthodontic treatment. Information and guidelines for the clinical orthodontic management of patients during the COVID-19 pandemic are lacking. 2 The aim of this study is to assess the impact of the COVID-19 lockdown among patients undergoing active orthodontic treatment.
Ethical clearance and informed consent
Informed consent was taken from the subjects for their participation in the study. The study was conducted in May 2020. Participation in the study was voluntary and identification information was not collected from the study subjects.
Study population and study sample
The present study was a descriptive cross-sectional (questionnaire) study conducted at the School of Dental Sciences, Karad. The study population consisted of orthodontic patients who were undergoing treatment before the lockdown and whose treatment had been suspended since the onset of the COVID-19 lockdown. The questionnaire was widely circulated among 350 orthodontic patients using communication media such as WhatsApp, messaging apps, and e-mail, since a 'Lockdown' had been implemented by the government to prevent the spread of the virus. However, only 208 subjects returned the questionnaire, and these constituted the final study sample.
A self-designed questionnaire consisting of 10 questions written in the English language was created specifically for the study. The questionnaire was designed keeping in mind the anxiety, level of apprehension, and concerns of patients undergoing orthodontic treatment, and was assessed for validity and reliability by orthodontists. It collected demographic details such as gender and age, followed by the study questions, and the subjects were given 10 days to complete it.
Statistical analysis
Calculations were done using descriptive statistical analysis. Numbers and percentages were used to tabulate the results. The SPSS package version 19.0 (SPSS, Chicago, IL, USA) was used to statistically analyze the results, and the chi-square test was used to compare the samples.
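As an illustration of the kind of comparison described here, the Python sketch below performs a chi-square test on a contingency table with scipy, which gives results equivalent to the SPSS procedure used in the study. The counts shown are placeholders for illustration, not data from this survey.

import numpy as np
from scipy.stats import chi2_contingency

# Placeholder 2x2 table: rows = gender, columns = anxious vs. not anxious.
observed = np.array([[49, 41],
                     [71, 47]])

# chi2_contingency returns the statistic, p-value, degrees of freedom,
# and the table of expected frequencies under independence.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")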
Results
Demographic data of the study subjects are depicted in Tables 1 and 2. The present study included a total of 208 subjects, consisting of male subjects (90, 43.3%) and female subjects (118, 56.7%). The subjects were grouped into three groups according to their age, with the majority in the age group of 21-25 years (72.6%). Table 3 depicts the responses of orthodontic patients when asked for their views on their orthodontic treatment during the lockdown. The study subjects were asked questions under the domains commonly encountered during the COVID-19 lockdown; the questions comprised the apprehensive behaviours commonly encountered by the subjects. Overall, 49% of the subjects reported being anxious and worried about their orthodontic treatment during the lockdown, and female subjects (71) were apprehensive regarding the lockdown. Around 16.1% of the subjects were less apprehensive and anxious, and the remaining 13.9% were unsure but were more or less anxious regarding their suspended orthodontic treatment during the lockdown.
However, the lockdown did not affect the compliance domain: females showed maximum compliance with oral hygiene and diet instructions compared to males (Table 4). Females were also the most apprehensive regarding missed scheduled appointments during the lockdown (Table 5). Since missed appointments can have undesirable effects on tooth movement, females (86, 72.9%) expressed greater concern about this than males (43, 47.8%) (Tables 6 and 8). Subjects most frequently reported discomfort from orthodontic appliances (115, 55.3%) (Table 9), because regular appointments and follow-ups were not possible considering the lockdown. Concern about the financial aspect of the treatment was minimal (151, 72.6%) (Table 7). The overall outlook of orthodontic patients in the pandemic period included worry about increased treatment charges post lockdown, concern about the disinfection measures practised in the dental office owing to fear of disease transmission, and concern about the treatment duration lost during the lockdown (Table 10).
Discussion
The current study evaluated the most common domains encountered by orthodontic patients: their monthly appointments, their concerns about treatment outcomes along with their financial outlook towards the treatment, their compliance with treatment, any discomfort they experienced during the lockdown, and their outlook towards orthodontic treatment during and after the COVID-19 lockdown era. As a measure to prevent further spread of the disease, national emergencies such as lockdowns have been implemented in many countries. The efforts taken by health organizations include lockdowns, restrictive travel movements, and emergency healthcare services with social distancing norms, owing to the spread of infection and the difficulty of its containment. The suspension of orthodontic treatment has affected various domains of concern and anxiety among these patients. Aerosol and air-droplet spread of infection is more common in a dental office; hence, patients who are willing to attend orthodontic appointments might fear visiting an orthodontist. Due to its high mortality and infection rate, the severe acute respiratory syndrome (SARS) epidemic caused anxiety and panic in the affected countries. The majority of patients' treatment was suspended due to travel restrictions and the implementation of the lockdown.
In this study, we found that female patients were more likely to have mental distress, which might be attributed to the biologic nature of their responses to stress and risk; in addition, transportation difficulties were encountered for regular monthly visits by rural patients, as most dental clinics and hospitals are located in cities. 9,10 Therefore, they were apt to have mental distress and anxiety about treatment outcomes and about missing their regular appointments, and were concerned about the delay in finishing treatment, similar to the studies by Shenoi et al. and Sayers et al. 11,12 Beckwith et al. reported that each missed appointment added 1.09 months to treatment time, and 40.9% of the patients held the view that the pandemic would extend the entire treatment. A prolonged delay during a lockdown could potentially lead to a further increase in the severity and number of patients who develop anxiety and mental distress. 13 This highlights awareness of the need for regular follow-ups, in accordance with 66.8% of patients indicating that they would attend regular follow-ups after the lockdown.
Bartsch et al. stated that compliance is a major problem in orthodontics, but around 75% of our patients showed good compliance with their treatment. Communication with the patient is key, and timely reassurance provides patient satisfaction and cooperation for adequate compliance with the treatment. 12,14 Gyawali et al. reported that the most common reasons for orthodontic emergencies included the loosening of brackets or bondable buccal tubes, loosening of bands, soft tissue trauma by the overextension of distal wire, loosening of ligature ties, and dislodgement of elastomeric chains. 15 Around 55.3% of our subjects reported similar complaints of discomfort. Furthermore, 12% of subjects expressed fear of increased treatment costs, similar to the results of the study by Shenoi et al. 11 The orthodontist must assure patients that any additional cost charged would be for the protective equipment needed to provide safety for both the health care professional and the patient. The severity and seriousness of disease transmission must be properly explained to all patients, making them aware of the importance of social distancing and the need for personal protection even after the lockdown has been lifted. The concern of patients about visiting their orthodontist after the lockdown (Table 10) reflects their future outlook towards orthodontic treatment. The use of telehealth consultations to support long-distance health care has been a good alternative to face-to-face service, especially in disasters and public health emergencies. 16,17 Technologies include phone calls, live video/teleconferencing, text messages via WhatsApp or social media, and e-mails, allowing orthodontists and dental staff to communicate 24/7 with patients. 18 Our study helps to assess the impact of the COVID-19 lockdown among orthodontic patients.
Conclusion
Proper communication with patients is a must to reassure them regarding their concerns over orthodontic treatment. This study identifies the significant impact of the COVID-19 lockdown on patients undergoing orthodontic treatment, in terms of concerns, anxiety, fear, and disturbed mental well-being. The end of the lockdown period will mark the beginning of a new approach to the management of orthodontic treatment.
Conflict of Interest
None declared.
Source of Funding
None. | 2021-04-26T02:29:18.952Z | 2021-02-15T00:00:00.000 | {
"year": 2021,
"sha1": "a7505ae32d133659fa9aa281510413b526d33161",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.idjsronline.com/journal-article-file/13162",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a7505ae32d133659fa9aa281510413b526d33161",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221108038 | pes2o/s2orc | v3-fos-license | Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists
Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. The binary classification task, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and Flair) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is employed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: utilizing transfer learning, two pre-trained convolutional neural network (CNN) models, namely VGG16 and VGG19, were used for feature extraction. In the third step, a correntropy-based joint learning approach was implemented along with the extreme learning machine (ELM) for the selection of the best features. In the fourth step, the partial least square (PLS)-based robust covariant features were fused into one matrix. The combined matrix was fed to the ELM for final classification. The proposed method was validated on the BraTS datasets, and an accuracy of 97.8%, 96.9%, and 92.5% was achieved for BraTS2015, BraTS2017, and BraTS2018, respectively.
Introduction
A brain tumor is an abnormal growth of brain cells in an uncontrollable way [1,2]. Brain tumors can be cancerous or noncancerous. The gravity inside the skull can accelerate the growth of a brain tumor, and in the worst case, it can cause brain damage, which can be life-threatening [15].
In this work, we propose a deep learning scheme for multimodal brain tumor classification. To handle the problem of shallow contrast, we implemented a linear contrast enhancement technique, which was further refined through histogram equalization. Transfer learning was used for feature extraction from two different CNN models, and a fusion was performed. The motivation behind fusing the two CNN models was to obtain a new feature vector with more information. Although this process improved accuracy, the computational time was adversely affected. To further enhance efficiency and computational time, we proposed a feature selection technique. The robust features obtained using this technique were later classified through the Extreme Learning Machine (ELM). The main contributions of this work are as follows:
• We divided the image into two clusters based on a K-means clustering algorithm and applied edge-based histogram equalization to each cluster. Further, the discrete cosine transform (DCT) was utilized for local information enhancement.
• Deep learning features were extracted from two pre-trained CNN models through transfer learning (TL). The last FC layer was used in both models for feature extraction.
• The partial least square (PLS)-based features of both CNN models were fused into one matrix.
• The robust features were selected using correntropy-based joint group learning and were finally classified using the ELM classifier.
• Three datasets, BRATS 2015, BRATS 2017, and BRATS 2018, were used for the experiments and the statistical analysis, to examine the scalability of the proposed classification scheme.
Related Work
Classification of multimodal brain tumors (i.e., T1, T2, T1CE, and Flair) requires the determination of altered features, such as shape and texture, in the MRI image [16]. A popular approach to the diagnosis of these tumors, widely adopted among computer vision researchers, is the computer-aided diagnosis (CAD) system [1,17]. A CAD system involves two main stages: first, tumor preprocessing and detection, and second, classifying the tumor into the relevant category. In this work, we focus on the classification task for multimodal brain tumors. For classification, we used the BRATS series, drawing on a few top submissions [18][19][20][21]. Amin et al. [22] introduced a CNN framework for brain tumor classification. In the presented method, a DWT fusion process was performed to improve the original MRI scan, and then a partial diffusion filter was employed for noise removal. Later on, they used a global thresholding algorithm for tumor extraction, which was passed to the CNN model for classification of tumors into the related categories. Five BRATS datasets, namely BRATS2012, 2013, 2015, 2018, and BRATS2013, were used, and the method showed improved performance with the fusion approach. Sajjad et al. [23] presented a CNN-based multimodal tumor classification system. They initially segmented the tumor regions in the MRI scans using a CNN. Then, they performed extensive data augmentation to train a good CNN model. Later on, they fine-tuned the pre-trained CNN model using the augmented brain data. In the presented method, the last layer was used for the classification of tumors, and it was shown that the augmented data gave better results on the selected datasets.
Sharif et al. [24] presented an active deep learning system for the segmentation and classification of brain tumors. They initially performed contrast enhancement, and the resultant image was passed to the Saliency-based Deep Learning (SbDL) method for the construction of a saliency map. Thresholding was applied in the next step, and the resultant images were used to fine-tune the pre-trained CNN model Inception V3. Further, they also extracted dominant rotated local binary pattern (DRLBP) features, which were fused with the CNN features. Later on, a PSO-based optimization was performed, and the optimal vector was passed to the Softmax classifier for final classification. They used the BRATS 2015, 2017, and 2018 datasets for evaluation and achieved improved classification accuracy. In [25], the authors presented a CNN-based scheme for the classification of brain tumors. They considered the problem of structural variability of the tumor around the adjacent regions. For this purpose, they designed small kernels to keep the weights of each neuron very small. Taking advantage of these weights, they achieved an accuracy of 97.5%.
Vijh et al. [26] presented an adaptive particle swarm optimization (PSO) with the Otsu method to find the optimal threshold value. Later, they applied anisotropic diffusion (AD) filtering to brain MRI images to cancel noise and improve image quality. Features were extracted from the enhanced images and used both for training the CNN and for performing the classification. Other methods were also introduced in the literature for brain tumor classification, such as a generative adversarial network (GAN)-based approach [19], artificial neural network (ANN)-based learning [27], ELM-based learning [28], residual networks [29], standard-features-based classification [30,31], adaptive independent subspace analysis [32], transfer learning-based tumor classification [33], and Excitation DNN [34]. In addition, Togaçar et al. [35] proposed a hybrid method based on CNN and feature selection for the classification of brain tumors, achieving an improved accuracy of above 90%. However, these techniques did not report the computational time, which is essential for every automated system in the current era. More recently, Muhammad et al. [36] presented a detailed review of multi-grade brain tumor classification. They provided a detailed description of the brain tumor classification (BTC) steps, such as tumor preprocessing, deep learning features, and classification, discussed the limitations and achievements of existing deep learning techniques for BTC, and also highlighted the importance of transfer learning for deep learning feature extraction.
Proposed Methodology
In this section, the proposed methodology for multimodal brain tumor classification using deep learning is presented. The proposed method consists of five core steps: linear contrast stretching, deep learning feature extraction using transfer learning, a correntropy-based joint learning approach along with ELM for the selection of the best features, the PLS-based fusion of the selected features, and finally the ELM-based classification. The testing of the proposed method was performed on the BRATS datasets. The performance of the approach was checked using standard performance measures, such as accuracy and false negative rate (FNR). Furthermore, the performance of the proposed work was also reported by measuring the execution time. A detailed flow of the proposed methodology is illustrated in Figure 2. In the following, the technical description of each step is provided.
Linear Contrast Enhancement
Improving the graphic features of an image is the primary objective of contrast enhancement. It is a preprocessing step used in many applications such as biomedical imaging and the diagnosis of agricultural infections [37][38][39][40][41][42]. Low-contrast images are not useful for feature extraction, as tumors are not clearly visible, making the process error prone. Therefore, in this step, we improved the linear contrast of the image, which mainly impacts the tumor region. For this purpose, we implemented a hybrid technique. In this technique, we initially split the image into two parts using the K-means clustering algorithm; then, edge-based texture histogram equalization (HE) was applied; later on, the DCT was applied to combine both clusters into one image. The resulting image has enhanced contrast compared to the original one. The mathematical formulation of this method is given as follows. Consider a dataset ∆ = {τ_1, τ_2, τ_3, ..., τ_N}, τ_N ∈ R^d, and let τ(x, y) be an MRI image of dimension N × M, where N = 256 and M = 256 are the numbers of rows and columns, respectively. Let τ̄_i denote the average of cluster K_i; the criterion function is then defined as follows:

S = Σ_{i=1}^{K} Σ_{τ ∈ K_i} ||τ − τ̄_i||^2, (1)

where S denotes the sum of squared errors over all pixels, τ_i denotes the input images, and K is the number of clusters initialized in this work. In K-means, the Euclidean distance was used as the distance criterion, defined as follows:

d(τ_i, y_i) = sqrt( Σ_j (τ_{i,j} − y_{i,j})^2 ), (2)

where τ_i and y_i are two vectors. This formulation yields two clusters. Using the resultant image, defined by τ_1(x, y), where τ_1(x, y) ∈ S, we employed edge-based texture HE. For the resultant image τ_1(x, y), the gradient was computed as follows:

|∇τ_1(x, y)| = sqrt( G_x^2 + G_y^2 ), (3)

where G_x and G_y denote the x- and y-derivatives of τ_1(x, y), respectively. Later, the edge map was constructed using a threshold function:

E(x, y) = 1 if |∇τ_1(x, y)| ≥ T, and E(x, y) = 0 otherwise. (4)

From this equation, we considered the pixels with values higher than the threshold (T = 0.55). These pixels were used for the texture histogram computation (HC). Later on, α and β were calculated, where α denotes the minimum and β the maximum pixel value. The grey levels whose values lie between α and β are represented in HC. Finally, the cumulative distribution function (CDF) and the transfer function were applied to obtain an enhanced image, as defined by Equations (5) and (6):

CDF(i) = Σ_{k=α}^{i} HC(k) / Σ_{k=α}^{β} HC(k), (5)

F_τ(i) = α + (β − α) · CDF(i). (6)

The resultant image τ_2(x, y) ∈ CDF(i) & F_τ was passed to the DCT method to refine the local contrast of the tumor region. Mathematically, the DCT basis was computed as follows:

τ_xy = sqrt(1/N) for x = 0, and τ_xy = sqrt(2/N) · cos( (2y + 1)xπ / (2N) ) for x > 0. (7)

Hence, using τ_xy, the DCT method was applied to the image τ_2(x, y) as follows:

τ_3(x, y) = τ_xy · τ_2(x, y) · τ_xy^t. (8)

As τ_xy is a real orthogonal matrix, its inverse can be computed as:

τ_xy^{-1} = τ_xy^t, (9)

where t denotes the transpose. Hence, the representation of the final DCT-enhanced image τ_3(x, y) is depicted in Figure 3. In this figure, the sample enhancement results are presented for each step (top to bottom).
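As an illustration of this pipeline, the sketch below reimplements the three enhancement stages (K-means split, edge-driven histogram equalization, and DCT refinement) in Python with OpenCV and scikit-learn. It is a minimal approximation under stated assumptions, not the authors' MATLAB code: the input is assumed to be a 256 × 256 8-bit grayscale slice, and the DCT high-frequency boost factor is illustrative.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def enhance_mri(img, k=2, grad_thresh=0.55):
    """Split a grayscale MRI slice into K intensity clusters, equalize each
    cluster via an edge-pixel histogram, then refine local contrast via DCT.
    Assumes img is a 2-D uint8 array with even dimensions (e.g. 256 x 256)."""
    h, w = img.shape
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        img.reshape(-1, 1).astype(np.float32)).reshape(h, w)

    # Sobel gradients -> edge map thresholded at T = 0.55 (normalized magnitude)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    edges = grad / (grad.max() + 1e-8) > grad_thresh

    out = img.copy()
    for c in range(k):
        mask = labels == c
        # Histogram equalization driven by the edge pixels of this cluster
        ref = img[mask & edges] if (mask & edges).any() else img[mask]
        hist, _ = np.histogram(ref, bins=256, range=(0, 255))
        cdf = hist.cumsum() / max(hist.sum(), 1)
        lut = np.round(255 * cdf).astype(np.uint8)
        out[mask] = lut[img[mask]]

    # DCT -> mild boost of non-DC coefficients -> inverse DCT (illustrative)
    coeffs = cv2.dct(out.astype(np.float32))
    coeffs[8:, 8:] *= 1.2
    return np.clip(cv2.idct(coeffs), 0, 255).astype(np.uint8)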
Deep Learning Features
The deep learning features were extracted using two pre-trained deep CNN models, VGG16 and VGG19. The visual representations of both models are shown in Figures 4 and 5, respectively.
The VGG16 model consists of 13 convolution layers, 15 ReLU activation layers, five max-pooling layers, three fully connected (FC) layers, and one Softmax layer as a classification layer. The input layer size is 224 × 224 × 3. The number of filters in the first convolution layer is 64, with a filter size of 3 × 3 × 3 and a stride of 1 × 1. In the next convolution layer, the number of filters is unchanged, but the filter size is updated to 3 × 3 × 64. Further, the dimension of the learnable weights is 3 × 3 × 64 × 64, compared to 3 × 3 × 3 × 64 in the first convolution layer; the learnable weights of each convolution layer are updated according to the number of filters and the filter size. In the first max-pooling layer, a 2 × 2 filter size was chosen along with a stride of 2 × 2. After the convolution layers, three FC layers are added. The learnable weights dimension of the first FC layer is 4096 × 25088. After a 50% dropout, the weight matrix size of the second FC layer is 4096 × 4096. Another dropout layer is added with a ratio of 50%. The resultant weight matrix, used as the input of the third FC layer (denoted FC8), returns a weight matrix of dimension 1000 × 4096. Finally, the Softmax function and the classification layer are added for the final classification.
The VGG19 model consists of a series of 16 convolution layers, 18 ReLU activation layers, five max-pooling layers, three FC layers, and one Softmax layer as a classification layer. The input layer size is 224 × 224 × 3. The number of filters in the first convolution layer is 64, with a filter size of 3 × 3 × 3; this filter size is updated according to the number of filters. In the first max-pooling layer, a 2 × 2 filter size was chosen along with the same stride. After the convolution layers, three FC layers are added. The weights dimension of the first FC layer is 4096 × 25088. After a 50% dropout, the weight matrix size of the second FC layer is 4096 × 4096. The resultant weight matrix, used as the input of the third FC layer (denoted FC8), returns a weight matrix of dimension 1000 × 4096.
Network Modification for Transfer Learning
Using domain-adaptation transfer learning (TL) [43], we retrained both models (VGG16 and VGG19) on the BRATS datasets without changing any parameters. In the tuning process, we first loaded the brain datasets, set the training/testing ratio to 60:40, and defined the labels of each image. Then, we set the input and output layers for training. This process was conducted for both deep learning models. For the VGG16 model, the input convolution layer (conv_1) was employed, where the number of filters is 64 and the filter size is 3 × 3 × 64. The selected output layer was FC8. We then performed activation on this layer and trained a new, modified CNN network that included only the brain image features. The last two layers, namely the classification and Softmax layers, were removed. In the output, the resultant learnable weight vector length was 4 × 4096, and the feature length was 1 × 1000; hence, for n images, the feature vector has dimension N × 1000, denoted by η_i. Similarly, for the VGG19 model, the last two layers were removed. The convolution layer (conv_1) was employed as the input, with 64 filters of size 3 × 3 × 64. The selected output layer was FC8, on which the activation function was performed, and a new, modified CNN network was trained that included only the brain image features. The dimension of the learnable weight matrix was 4 × 4096, and the length of the extracted feature vector was 1 × 1000; for n brain images, the feature vector has dimension N × 1000, denoted by η_j.
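A hedged sketch of this feature-extraction step is given below, using torchvision's pretrained VGG models as stand-ins for the MATLAB networks used in the paper. In torchvision, the VGG forward pass already ends at the FC8 linear layer (there is no Softmax layer to remove), so its output directly yields the N × 1000 feature matrices η_i and η_j; the preprocessing constants are the standard ImageNet values, an assumption rather than the paper's setup.

import torch
from torchvision import models, transforms

# Standard ImageNet preprocessing (assumed; the paper's normalization is not stated).
prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def fc8_features(model, pil_images):
    """Return the (N, 1000) FC8 activations for a list of PIL images."""
    model.eval()
    batch = torch.stack([prep(im) for im in pil_images])
    return model(batch)          # torchvision VGG forward ends at FC8 (no Softmax)

vgg16 = models.vgg16(weights="IMAGENET1K_V1")
vgg19 = models.vgg19(weights="IMAGENET1K_V1")
# eta_i = fc8_features(vgg16, images); eta_j = fc8_features(vgg19, images)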
Feature Selection
The main motive of the feature selection step was to remove the redundancy among features and select only those features that are robust for correct classification. The second motive of this step was to minimize the number of predictors, which helps speed up the testing process. Motivated by these two essential functionalities, we implemented a technique named correntropy via mutual learning and ELM (CML-ELM). The working of this method is presented in Algorithm 1 (proposed feature selection method using CML-ELM).
In the above algorithm, the notation η_i denotes the original feature vector of the VGG16 deep learning model, S_w(i) denotes the selected feature vector, LR denotes the regularization parameter, b_i is a selected parameter, A_i is an affine combination of S_w(i) and S_w(i − 1), and MSER denotes the mean squared error, computed by Equation (10); the features S_w(i + 1) are updated via Equation (11).
MSER = (1/n) Σ_{i=1}^{n} (LR_i − L̂R_i)^2, (10)

where LR_i denotes the observed features and L̂R_i denotes the predicted features. Each time, the MSER was calculated; if its value was greater than or equal to 0.1, the features were updated, and this process was iterated up to 1000 times. If the target was not achieved, the features from the last iteration were selected for the classification. Finally, a robust vector was obtained, with dimension X_1 × K, denoted by η_Sw(1), where K stands for the number of selected features and X_1 denotes the total number of images. This feature selection process was also performed for the VGG19 feature vector η_j, yielding a robust feature vector of dimension X_2 × K, denoted by η_Sw(2), where X_2 is the number of observations and K represents the number of selected features.
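Since the full pseudocode of Algorithm 1 is not reproduced here, the following sketch illustrates the overall control flow: iterate, measure the mean squared error of Equation (10), stop once it drops below 0.1, and otherwise keep re-weighting the candidate features. The re-weighting rule shown is a simple stand-in assumption; the authors' correntropy-based update of Equation (11) is not given in the text.

import numpy as np

def select_features(eta, k, lr=0.9, target=0.1, max_iter=1000):
    """eta: (N, D) deep feature matrix; returns the K retained columns.
    The weight update below is an illustrative stand-in for Equation (11)."""
    w = np.ones(eta.shape[1])                # per-feature weights S_w(i)
    for _ in range(max_iter):
        keep = np.argsort(w)[-k:]            # current top-K candidate set
        # How well do the kept columns reconstruct the full feature matrix?
        recon = eta[:, keep] @ np.linalg.pinv(eta[:, keep]) @ eta
        mser = np.mean((eta - recon) ** 2)   # Equation (10): mean squared error
        if mser < target:                    # stop once MSER drops below 0.1
            break
        # Shift weight toward features that best explain the residual
        w = lr * w + (1 - lr) * np.mean(np.abs(eta.T @ (eta - recon)), axis=1)
    return eta[:, np.argsort(w)[-k:]]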
Feature Fusion and Classification
Finally, the selected feature vectors were fused into one matrix using the PLS-based fusion approach. Consider η_Sw(1) and η_Sw(2), the two selected feature vectors of dimension X_1 × K and X_2 × K, and suppose η_Sw(j) represents the fused vector of dimension X_3 × K. Further, we assumed that the variables are mean-centered. Using PLS, a pair of directions (u_i, v_i) was found as follows:

(u_i, v_i) = arg max_{||u|| = ||v|| = 1} cov( η_Sw(1) u, η_Sw(2) v ).

These pairs were combined in one matrix, and a resultant vector of dimension X_3 × K was obtained; the fused vector is represented by η_Sw(j). Later on, this vector was passed to ELM [44] for the final classification. The ELM formulation is given as follows. For L hidden-layer nodes with activation function g(x):

Σ_{i=1}^{L} β_i g(u_i · u_j + B_i) = O_j, j = 1, ..., N, (19)

where L denotes the number of hidden-layer nodes (initialized as one in this work), β_i denotes the output weight vector, u_i is the input weight vector coming to the hidden layer, B_i denotes the offset value, H is the hidden-layer output matrix, u_i · u_j denotes the inner product, and O is the expected output. Writing Equation (19) compactly as Hβ = O, it was solved as:

β̂ = H† O, (20)

where H† is the Moore-Penrose generalized inverse of H. To further improve the stability of ELM, we defined a minimization function as:

Minimize (1/2)||β||^2 + (c/2) Σ_{i=1}^{N} ξ_i^2, subject to h(u_i) β = t_i − ξ_i,

where ξ_i denotes the training error, t_i indicates the label corresponding to the sample u_i, and c denotes the penalty parameter. The labeled results of the proposed architecture are given in Figure 6.
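The sketch below mirrors these two stages in Python: a PLS step that pairs directions of maximal covariance between the two selected feature sets, and an ELM whose output weights are obtained in closed form as β = H†T, matching the pseudo-inverse solution of Equation (20). The use of scikit-learn's PLSCanonical, the tanh activation, and the default sizes are assumptions for illustration, not the authors' exact choices.

import numpy as np
from sklearn.cross_decomposition import PLSCanonical

def pls_fuse(f1, f2, n_dirs=50):
    """Project two selected feature sets onto paired PLS directions (u, v)
    that maximize their covariance, then stack the scores as the fused matrix."""
    pls = PLSCanonical(n_components=n_dirs).fit(f1, f2)
    u, v = pls.transform(f1, f2)
    return np.hstack([u, v])                 # fused matrix of shape (N, 2*n_dirs)

class ELM:
    def __init__(self, n_hidden=1000, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):                     # T: one-hot labels, shape (N, classes)
        # Random input weights and biases are fixed; only beta is learned.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)     # hidden-layer outputs g(u.x + B)
        self.beta = np.linalg.pinv(H) @ T    # beta = H^+ T (Moore-Penrose)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)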
Experimental Results and Analysis
We present the classification results for the proposed ELM classifier using three datasets, namely BraTS 2015, BraTS 2017, and BraTS 2018. For all datasets, a 60-40 split ratio was used along with 10-fold cross-validation. The results are provided for two different pipeline procedures, namely: (i) feature extraction from FC layer seven, followed by the feature selection approach, feature fusion, and classification; and (ii) the proposed architecture, as given in Figure 2. For the sake of comparison, we also provide the results for four well-known classifiers, namely Naïve Bayes, Multiclass Support Vector Machine (MSVM), Softmax, and Ensemble Tree, as baselines. The performance of all classifiers was validated by the accuracy and FNR measures. Furthermore, the clock time taken by each classifier was also reported to give the reader an idea of the classification time during the testing process. All simulations of the proposed technique were conducted in MATLAB 2019b (MathWorks, Natick, MA, USA). A personal desktop computer with 16 GB RAM and a 128 GB SSD was used for these experiments. A graphics processing unit (GPU) was also utilized for feature extraction and classification, which significantly helped in improving the classification time. The execution time was also noted during the testing process; however, it was not consistent, as it depends on the execution platform.
Results for the BraTS 2015 Dataset
Table 1 presents the classification results for the BraTS 2015 dataset. The results are provided for the proposed classifier as well as for the existing well-known classifiers, such as Naïve Bayes, MSVM, Softmax, and Ensemble Tree, for the two experimental pipeline procedures mentioned above. Apart from the validation measures in terms of accuracy and FNR, the results also include the classification time in seconds. The entries in bold represent the best results. It can be seen from Table 1 that the minimum accuracy achieved was 91.48%, for Softmax. The maximum accuracy of 98.16% (FNR = 1.74%) was achieved by the ELM classifier using the proposed method. Here, Pro-FC7 denotes feature extraction from the FC7 layer followed by feature selection and fusion, and 'Proposed' denotes the proposed classifier architecture, as given in Figure 2. The best values are shown in bold.
The proposed selection scheme also reduced the classification time during the testing process. In Table 1, the time is given for all classifiers, which clearly shows that the time for the proposed method was lower than that of Pro-FC7. The classification time for Softmax was the minimum (81.02 s) using the proposed method. Though the classification time for the proposed classifier was not the minimum (87.41 s), it was still quite close to that of Softmax and considerably lower than those of the rest of the classifiers.
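For reference, the reported accuracy and FNR values can be derived directly from multiclass confusion matrices such as those in Tables 2 and 3. A small helper is sketched below; the macro-averaging of the per-class FNR is an assumption, since the text does not state how the FNR was aggregated across the four tumor classes.

import numpy as np

def accuracy_and_fnr(cm):
    """cm: square confusion matrix with true classes along the rows.
    Returns overall accuracy and the macro-averaged false negative rate."""
    cm = np.asarray(cm, dtype=float)
    acc = np.trace(cm) / cm.sum()                      # correct / total
    per_class_fnr = 1.0 - np.diag(cm) / cm.sum(axis=1) # FN / (FN + TP) per class
    return acc, per_class_fnr.mean()

# e.g. an overall accuracy near 0.98 implies an average FNR near 0.02,
# consistent with the 98.16% / 1.74% figures quoted for BraTS 2015.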
The results of the proposed method on the ELM classifier were also verified by the confusion matrix values presented in Table 2. The diagonal values show the correct classification rate of each tumor class. The maximum achieved accuracy of Pro-FC7 was 96.02%, for ELM (Table 1), which can also be verified by the confusion matrix in Table 3.
Results for the BraTS 2017 Dataset
Table 4 presents the classification results for the BraTS 2017 dataset. Results are provided for the proposed method along with several other well-known classifiers, such as Naïve Bayes, MSVM, Softmax, and Ensemble Tree, for the two experimental pipeline procedures mentioned above. Apart from the validation measures in terms of accuracy and FNR, results are also provided for the classification time in seconds. It can be clearly seen from Table 4 that the ELM classifier using the proposed method had an accuracy of 97.26% and an FNR of 2.74%. The minimum accuracy achieved was 90.09%, for Softmax. The best values are shown in bold.
The proposed selection scheme also reduced the classification time during the testing process, as is evident from the results shown in the last column of Table 4. The classification time for ELM was the minimum (89.64 s) using the proposed method, which clearly shows the improved efficiency of the ELM classifier.
The results of the proposed method on the ELM classifier can also be verified by the confusion matrix in Table 5. The diagonal values show the correct classification rate of each tumor class, which was 96.24%, 98.66%, 97.20%, and 97% for the T1, T1CE, T2, and Flair tumors, respectively. The maximum achieved accuracy of Pro-FC7 was 95.82%, for ELM, which can be further verified by the results in Table 6.
Results for the BraTS 2018 Dataset
Table 7 presents the classification results for the BraTS 2018 dataset. Results are provided for the proposed method as well as for other well-known classifiers, such as Naïve Bayes, MSVM, Softmax, and Ensemble Tree, for the two experimental pipeline procedures discussed earlier in Section 3. It can be seen from this table that the maximum achieved accuracy was 93.40%, for the ELM classifier using the proposed method; the noted FNR was 6.60%. The minimum achieved accuracy was 89.49%, obtained using the proposed method with the Naïve Bayes classifier. The best values are shown in bold.
The classification accuracy was also computed for Pro-FC7 to analyze the proposed results. For Pro-FC7, the maximum achieved accuracy was 91.69%, for the ELM classifier. The accuracy of ELM using the proposed method and Pro-FC7 is further verified through Tables 8 and 9; in both tables, the diagonal values represent the correct prediction rate for each tumor class, i.e., T1, T2, T1CE, and Flair.
The time was measured for each classifier during the testing process and is presented in Table 7. We used tic-toc commands to compute the testing computational time of the proposed method. From this table, it can be observed that the best execution time (63.83 s) was obtained by the ELM classifier using the proposed method. However, this time depends on the execution platform (GPU, system RAM, etc.). Based on the presented accuracy results and the testing execution time, the effectiveness of the proposed method for accurate and efficient brain tumor type classification is apparent.
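For readers working outside MATLAB, the tic-toc pattern used for these measurements translates directly to Python's monotonic high-resolution timer; the classifier call in the sketch is a placeholder.

import time

t0 = time.perf_counter()                       # tic
# predictions = classifier.predict(X_test)     # placeholder for the timed call
elapsed = time.perf_counter() - t0             # toc, elapsed time in seconds
print(f"testing time: {elapsed:.2f} s")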
Statistical Analysis of Results
To examine the stability of the proposed method's results, a detailed statistical analysis was conducted in terms of variance, standard deviation, and standard error of the mean (SEM). The reported values were obtained after 1000 iterations. The detailed analysis of the proposed method for the BraTS2015 dataset is given in Table 10. In this table, the accuracy of ELM has low variability, with an SEM of 0.1862, which is better than that of the other methods. Table 11 shows the detailed analysis of the proposed method using the BraTS2017 dataset; the accuracy of ELM is better than that of the other listed classifiers (SEM of 0.0754). Table 12 illustrates the analysis results for the BraTS2018 dataset, where the SEM for the proposed method is 0.2875, better than those of the other classifiers, and the results are stable after the selected iterations. Overall, the results of the proposed method are the most stable among all listed classifiers. Moreover, we also plotted the confidence interval of ELM at different confidence levels (CL), such as 90%, 95%, and 99%, as shown in Figures 8-10. As shown in Figure 8, at the 95% CL, the margin of error was 97.763 ± 0.365 (±0.37%). Similarly, in Figures 9 and 10, the margin of error at the 95% CL was 97.1 ± 0.148 (±0.15%) and 92.79 ± 0.564 (±0.61%), respectively. Based on these values, it is shown that our method is significantly better than the other classifiers.
The best values are shown in bold. Min, Avg, and Max are the minimum, average, and maximum accuracy, respectively. SEM: standard error of the mean.
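As a quick consistency check, the quoted 95% margins follow from the tabulated SEM values under a normal approximation: for Table 10, 1.96 × 0.1862 ≈ 0.365, reproducing the 97.763 ± 0.365 interval above. A minimal sketch, assuming scipy is available:

from scipy.stats import norm

def ci_margin(sem, confidence=0.95):
    """Margin of error z * SEM for a two-sided normal confidence interval."""
    z = norm.ppf(0.5 + confidence / 2.0)   # z = 1.96 at the 95% level
    return z * sem

print(ci_margin(0.1862))                    # ~0.365, matching Figure 8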
Discussion
We now discuss the results of the proposed method from a critical point of view. The labeled results are illustrated in Figure 6. Three BraTS datasets were used for the validation of the proposed method, and the numerical results are presented in Tables 1, 4, and 7. The results presented in these tables were validated through the two pipeline procedures mentioned in Section 3, and they show that the accuracy of Pro-FC7 is lower than that of the proposed architecture. The main reason for the degradation of the classification accuracy was the number of features: for the architecture of VGG19, the feature length at FC7 is 4096, whereas the feature length at FC8 is 1000; therefore, during the selection process, the target MSER could not be met. Moreover, due to the higher number of features, the execution time was also higher for Pro-FC7 than for the proposed method.
To give the reader an idea of the comparison with existing techniques, we briefly mention some published results. In [24], the authors presented a deep-learning-based system and used the BraTS dataset series for the experimental process, achieving an accuracy of 97.8%, 96.9%, and 92.5% for BraTS2015, BraTS2017, and BraTS2018, respectively. Sajjad et al. [23] presented a deep learning model and evaluated it on two datasets, Brain Tumor and Radiopaedia, achieving accuracies of 94.58% and 90.67%, respectively. Togaçar et al. [35] achieved an average accuracy of 96.77% for the classification of healthy and tumor MRI images. The proposed method achieved an accuracy of 98.16%, 97.26%, and 93.40%, which is better than the accuracies reported for these state-of-the-art techniques. Additionally, the worst-case time complexity of our algorithm is O(n^3 + k + C), where k represents the number of iterations and C is a constant term.
In addition, we also calculated the Matthews correlation coefficient (MCC) for the ELM classifier; the results are given in Table 13. In this table, it is shown that the MCC values were close to 1, indicating the good prediction performance of the proposed scheme. The best values are shown in bold.
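For completeness, the multiclass MCC can be computed from a confusion matrix with the generalized (Gorodkin) formula; the helper below is a hedged sketch, not the authors' implementation.

import numpy as np

def multiclass_mcc(cm):
    """Generalized MCC from a square confusion matrix (rows = true classes)."""
    cm = np.asarray(cm, dtype=float)
    t = cm.sum(axis=1)          # true-class totals
    p = cm.sum(axis=0)          # predicted-class totals
    c = np.trace(cm)            # correctly classified samples
    s = cm.sum()                # total samples
    num = c * s - t @ p
    den = np.sqrt((s ** 2 - p @ p) * (s ** 2 - t @ t))
    return num / den if den else 0.0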
Conclusions
This paper presents a fully automated deep learning system, along with contrast enhancement, for multimodal brain tumor classification. The strength of this work lies in three steps. First, in the preprocessing step, contrast stretching using edge-based texture HE was employed to increase the local contrast of the tumor region. Second, robust deep learning features were selected by implementing correntropy via mutual learning and ELM (CML-ELM); the robust features computed with CML-ELM were fused through the PLS-based approach at a later stage. Third, the ELM classifier was implemented for the classification of tumors into the relevant category. The experimental process was conducted on the BraTS datasets, and the results showed improved accuracy (98.16%, 97.26%, and 93.40% for the BraTS2015, BraTS2017, and BraTS2018 datasets, respectively). The feature selection process was not only helpful for improving the classification accuracy but also resulted in a reduction of the computational time. Finally, the accuracy results of the proposed method were stable, as can be concluded from the presented results. | 2020-08-13T10:03:21.851Z | 2020-08-01T00:00:00.000 | {
"year": 2020,
"sha1": "281d37c17e16ed974671feb9665f6d874c5e127c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4418/10/8/565/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93dafd5408cdee070b232d46d7d9fa0ed2795739",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
211252775 | pes2o/s2orc | v3-fos-license | Extending Gaia DR2 with HST narrow-field astrometry. II. Refining the method on WISE J163940.83-684738.6
In the second paper of this series we perfected our method of linking high-precision Hubble Space Telescope astrometry to the high-accuracy Gaia DR2 absolute reference system, to overcome the limitations of relative astrometry with narrow-field cameras. Our test case here is the Y brown dwarf WISE J163940.83-684738.6, observed at different epochs spread over a 6-yr time baseline with the infrared (IR) channel of the Wide Field Camera 3. We derived significantly improved astrometric parameters compared to previous determinations, finding (μ_α cos δ, μ_δ, parallax) = (577.21 ± 0.24 mas/yr, −3108.39 ± 0.27 mas/yr, 210.4 ± 1.8 mas). In particular, our derived absolute parallax corresponds to a distance of 4.75 ± 0.05 pc for the faint ultracool dwarf.
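As a quick arithmetic check of the quoted distance, inverting the parallax with first-order error propagation reproduces the value above; the sketch uses d[pc] = 1000/π[mas].

plx, sig = 210.4, 1.8                 # parallax and its uncertainty, in mas
d = 1000.0 / plx                      # ~4.753 pc
sig_d = 1000.0 * sig / plx ** 2       # ~0.041 pc, in line with the quoted +/-0.05
print(f"d = {d:.2f} +/- {sig_d:.2f} pc")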
INTRODUCTION
Distance is a crucial parameter for investigating the basic physical properties of any astronomical object. Indeed, precise distances are essential to connect measured properties to intrinsic characteristics (e.g. apparent to absolute magnitude), and therefore to compare observations to theoretical predictions.
Current atmospheric and evolutionary models struggle to reproduce the photometric properties of the lowest-mass and coolest brown dwarfs (Schneider et al. 2016; Leggett et al. 2017). Measurements of accurate distances allow for the determination of absolute fluxes and unbiased spectral energy distributions, making such measurements a necessary step to improve the characterisation and modelling of low-mass objects (e.g. Kirkpatrick et al. 2019). Precise distance estimates can also be used to compare the appearance of individual objects to well-calibrated colour-magnitude diagrams. In particular, the identification of outliers along the standardised locus can probe secondary attributes of these substellar objects. For example, over-luminous sources may be indicative of unresolved binarity (Manjavacas et al. 2013; Tinney et al. 2014; Kirkpatrick et al. 2019). Likewise, excessively red or blue colours can trace a deviant surface gravity or metallicity, or be evidence for diverse atmospheric features like clouds (Knapp et al. 2004; Chiu et al. 2006; Cruz et al. 2007, 2009).
Finally, the study of well-defined and complete samples in space allows for the development and testing of formation and evolution theories (e.g. Kirkpatrick et al. 2019). Current observations of substellar mass functions and space densities are in tension with model predictions (Burgasser 2004;Allen et al. 2005;Pinfield et al. 2006;Kirkpatrick et al. 2012). High-confidence volume-limited samples can only be achieved through measurements of distances, which are thus required to obtain a comprehensive portrait of the local substellar population.
Parallaxes are the most direct measures of distance for stellar and substellar objects. With the extensive sky coverage of large astrometric missions (e.g. Gaia, Hipparcos), most stars in the solar neighbourhood and nearby moving groups or star-forming regions have reliable parallax measurements. Isolated brown dwarfs and free-floating planetary-mass objects, on the other hand, are generally too faint and too red to be detected by these broad surveys, and very few substellar objects are typically included in these astrometric catalogues.
Spectrophotometric distances (based on expected relations between absolute magnitude and spectral type or apparent photometry) are often the only viable way to estimate distances for intrinsically faint objects. However, significant disagreements have been found between model-derived spectrophotometric distances and parallactic measurements (e.g. Kirkpatrick et al. 2011, 2012), and the former estimates are often viewed as unreliable (Cushing et al. 2011; Liu et al. 2011). Some dedicated programs aim at deriving trigonometric parallaxes for brown dwarfs, such as the Hawaii Infrared Parallax Program (Dupuy & Liu 2012; Liu et al. 2016) or the Brown Dwarf Kinematics Project (see also Dupuy & Kraus 2013; Manjavacas et al. 2013, 2019; Martin et al. 2018; Kirkpatrick et al. 2019 for other compilations of parallactic distances). Despite these remarkable efforts, the typical precision reached in these observationally-expensive campaigns results in substantial uncertainties in the underlying distances, and large inconsistencies remain between programs for the faintest targets (e.g. Beichman et al. 2014).
We recently devised in Bedin & Fontanive (2018) (hereafter, Paper I) a new method to improve the astrometric precision of Hubble Space Telescope (HST) observations and derive astrometric parameters with Gaia-level precisions for sources too faint to be detected with Gaia. This provides a powerful procedure to infer highly-precise distances for faint, ultracool brown dwarfs. For our test case target, the Y1 brown dwarf WISE J154151.65−225024.9 (Cushing et al. 2011;Schneider et al. 2015), we achieved a precision at the milli-arcsecond (mas) level on parallax and at the sub-mas level on proper motion, improving by an order of magnitude the uncertainties from previous estimates.
In this paper, we further improve our method and apply it to the Y dwarf WISE J163940.83-684738.6 in order to constrain its astrometric parameters to unprecedented levels.
W1639−6847
WISE J163940.83−684738.6 (hereafter W1639−6847) was first reported by Tinney et al. (2012), after using ground-based methane imaging to carefully resolve the near-infrared counterpart of a blended WISE source. The authors estimated a Y0−Y0.5 spectral type based on near-infrared spectroscopy. Tinney et al. (2014) subsequently found W1639−6847 to show under-luminous J and W2 absolute magnitudes and to be more consistent with a later type of Y0.5. The authors also concluded that some photometric properties of the brown dwarf were in better agreement with Y1 brown dwarfs. Using HST spectroscopy, Schneider et al. (2015) found that the J-band peak of W1639−6847 matched well with the Y0 spectral standard, in agreement with previous spectral type estimates. However, the Y-band peak appeared to be significantly blueshifted when compared to the T9 spectral standard, and the Y − J colour seemed unusual relative to other Y0 dwarfs. This led Schneider et al. (2015) to classify W1639−6847 as Y0-Peculiar (Y0pec), which has since been the adopted spectral type of this object. Opitz et al. (2016) studied W1639−6847 as part of a multiplicity survey, attempting to resolve close Y dwarf binaries with the Gemini Multi-Conjugate Adaptive Optics System. The authors were able to rule out secondary companions down to 3.5 mag fainter at separations beyond 0.5 AU. However, the search for companions was limited to the inner 2.5 AU around the primary. No search for wide binary companions around W1639−6847 has been reported in the literature to date.
From atmospheric fits to the observed spectrum and photometry of W1639−6847, Schneider et al. (2015) estimated an effective temperature of 400 K and a high surface gravity for the target, although such model-derived physical characteristics are likely to be somewhat unreliable (Schneider et al. 2015). Based on Gemini spectroscopic data, Leggett et al. (2017) derived a similar effective temperature (360−390 K) to Schneider et al. (2015), but found a lower surface gravity. Using evolutionary models, they obtained a mass of 5−14 MJup for an age of 0.5−5 Gyr. More recently, Zalesky et al. (2019) performed detailed atmospheric retrieval analyses on late-T and Y brown dwarfs using HST data. While the large majority of their studied objects appeared consistent with the physics of radiative-convective equilibrium, the retrieved structure for W1639−6847 deviated strongly from typical temperature-pressure profiles under that assumption. The obtained fit provided rather unrealistic results, with a high effective temperature of ∼650 K, and very small radius (0.5 RJup) and mass (1.5 MJup) values. The authors concluded that their data-driven atmospheric retrieval was poorly adapted to explain the deviant physical characteristics of this unique ultracool brown dwarf.
As noted by several authors, the majority of such analyses are highly sensitive to the adopted distances of the studied objects. Tinney et al. (2012) initially derived a parallactic distance of 5.0 ± 0.5 pc for W1639−6847. They also reported a very large proper motion (∼3 arcsec yr−1) and measured a significant tangential velocity. They deduced from kinematic arguments that the source was likely older than the overall field population, in agreement with Leggett et al. (2017), who found it to be consistent with thin disk membership. Tinney et al. (2014) then refined the proper motion and parallax estimates, significantly reducing the size of previous uncertainties. Recent work by Martin et al. (2018) and Kirkpatrick et al. (2019) provided updated astrometry for W1639−6847 based on Spitzer images, refining its distance to 4.39 (+0.18, −0.17) pc (Martin et al. 2018) and 4.72 ± 0.06 pc (Kirkpatrick et al. 2019), respectively. Existing parallax results based on various datasets remain discrepant by up to 2.9 σ. Additional and independent reliable astrometric measurements of W1639−6847 will thus be crucial to understand the nature, and further characterise the peculiar features, of this distinct object.
OBSERVATIONS
W1639−6847 was observed at three different epochs with the Wide Field Camera 3 (WFC3) instrument onboard the Hubble Space Telescope (HST ). All data were collected using the infrared (IR) channel of WFC3. The first visit was obtained in the WFC3/IR F125W filter. It was split into 4 dithered images of 602.937 s exposure each, for a total exposure time of 2411.749 s. Each image was taken in MultiAccum mode with NSAMP=14 samplings and using the sequence SAMP-SEQ=SPARS50.
The photometric data acquired for W1639−6847 on October 26th and 27th, 2013, each consist of 3 dithered, shallow images of duration 127.935 s in the F105W bandpass (SAMP-SEQ=SPARS25 and NSAMP=7), for total exposures of 383.805 s. The data from October 29th, 2013, consist of 4 slightly deeper exposures obtained in the F125W filter, adding up to a combined exposure time of 986.749 s: 2 images of 277.938 s each using SAMP-SEQ=SPARS25 and NSAMP=13, 1 exposure of duration 252.937 s with the same SAMP-SEQ sequence and NSAMP=12, and a final image of 177.936 s with NSAMP=9 samples. The rest of these orbits were dedicated to spectroscopic observations, which we do not consider in this work. The final, most recent epoch consists of one HST orbit, split between the F127M and F139M filters, the combination of these two bandpasses being highly suited to identify substellar objects through a deep water absorption feature (see Fontanive et al. 2018 for details). In each filter, 4 dithered images of equal duration (327.939 s) were acquired in MultiAccum mode, with SAMP-SEQ=SPARS25 and NSAMP=15, for a total exposure time of 1311.756 s in each band. Due to the faintness of our target in the F139M filter, only the F127M data are considered in the astrometric analysis presented in this work.
Therefore, a total of 18 individual images (4+3+3+4+4) are employed for the analysis described in the following.
ANALYSIS
We first briefly summarise the data reduction and analyses described in Paper I.
We have extracted positions and magnitudes in every single WFC3/IR FLT image with the software developed by J. Anderson (Anderson & King 2006) and publicly available for WFC3/IR. This software also produces a quality-of-fit parameter (Q; Anderson et al. 2008) that essentially measures how well the flux distribution resembles the point-spread functions (PSFs). In these data sets, the parameter Q is close to 0.02 for the best measured stars, degrading to Q ∼ 0.75 for the faintest stars. Artefacts, resolved galaxies, and compromised or blended measurements always have larger Q values than point sources of the same brightness. The derived positions for detected sources are in raw pixel coordinates and are then corrected for the nominal distortion of the camera, which is also publicly available.
Given the expected highest signal-to-noise ratio for the sources measured in images from the first epoch, we chose 2013.12427 (Feb. 15th, 2013) as our reference epoch. Four images are available for epoch 2013.12427. The distortion-corrected positions for the sources measured in all four images are combined to compute a more robust estimate of their relative positions. This provides us with 436 sources defining our reference frame (X, Y).
Next, we link our (X, Y) reference frame to Gaia DR2 (Gaia Collaboration et al. 2016, 2018), in order to transform our measured positions into the ICRS. To do that, the Gaia DR2 (α, δ) source positions, which are given at epoch 2015.5, are first re-positioned at the 2013.12427 epoch using (when available) the tabulated proper motions (pms) of those sources. Then a tangent point is adopted, and the coordinates on the tangent plane (ξ, η) are computed. At this point, for all common sources, it becomes possible to compute the most general linear transformations that bring any measured position on the master frame into the tangent plane, and then those tangent-plane positions via trivial transformations (see equations 1-4 in Paper I) into the ICRS. We initially consider all sources in our master frame, including those with no pms in the Gaia DR2 catalog. Once the match is found, we then restrict this sample to sources that are not saturated in the first-epoch images, have Gaia DR2 pms, and have positions consistent within at most 0.03 WFC3 pixels (i.e., 3.6 mas, for the pixel scale of 120.9918 mas derived from this transformation; see Section 3.4 of Paper I) between Gaia DR2 at the reference epoch and the reference system. This reduced our available number of common sources to 55.
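As a rough sketch of this step, the following Python fragment projects catalog positions onto the tangent plane and fits the six-parameter linear transformation by least squares (a simplified stand-in under our own assumptions, not the authors' pipeline code):

```python
import numpy as np

def gnomonic(ra, dec, ra0, dec0):
    """Project (ra, dec) [rad] onto the tangent plane at (ra0, dec0) [rad]."""
    d = np.sin(dec0) * np.sin(dec) + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0)
    xi = np.cos(dec) * np.sin(ra - ra0) / d
    eta = (np.cos(dec0) * np.sin(dec)
           - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / d
    return xi, eta

def fit_linear(xy, xi, eta):
    """Least-squares six-parameter linear transform (X, Y) -> (xi, eta)."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    cx, *_ = np.linalg.lstsq(A, xi, rcond=None)
    cy, *_ = np.linalg.lstsq(A, eta, rcond=None)
    return cx, cy  # each: (a, b, c) such that xi = a*X + b*Y + c

# usage: cx, cy = fit_linear(xy_master, xi_gaia, eta_gaia)
```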
Finally, measured positions in all the images from all epochs can be linked to the very same reference frame (X, Y ), now made of Gaia DR2 sources that can be repositioned to the corresponding epoch using the tabulated pms. This enables us to transform to the ICRS the positions of every measured object (including sources much fainter than those detectable in Gaia), in every single image, of every single epoch. We refer the reader to Paper I for a more extensive description of the entire procedure.
Improving the method
In our previous work, when re-positioning the Gaia DR2 sources at the corresponding epoch of each individual image, we only considered the pms, and not the parallaxes. However, the reference sources are all at finite and different distances, which, if ignored, would inevitably lead to underestimates of the absolute parallax of the target. Given the size of the uncertainty (∼2 mas) in the parallax of the target of Paper I (WISE J154151.65−225024.9), and the already complicated nature of the method, we opted not to add in that work the further complication of dealing with the individual parallaxes of the reference sources. Instead, we simply applied an a posteriori relative-to-absolute correction for the target parallax, which was of the order of 0.2 mas (i.e., << 2 mas), taken as the median of the Gaia DR2 parallaxes of the reference objects (after rejecting the one with the most significant parallax). Now that the bulk of the procedure has been presented in Paper I, we further refine our method and develop the procedure to include the parallaxes of all the reference sources as well. As we will see, this turns out to be a rather unnecessary step given the currently available data for the specific case of W1639−6847 analysed in the present paper. Nevertheless, it is the appropriate occasion to improve the method in order to obtain absolute astrometric parameters, which might be a necessity for future applications with data sets of higher precision.
First of all, we need to consider only the sources in Gaia DR2 that, in addition to positions and proper motions, also have a parallax estimate, and then compute their astrometric place at each of the observation epochs, this time including their parallaxes. To compute the positions of the sources in the reference frame, we make use of the sophisticated tool developed by the U.S. Naval Observatory, the Naval Observatory Vector Astrometry Software, hereafter NOVAS (in version F3.1, Kaplan et al. 2011), which accounts for many subtle effects, such as the accurate Earth orbit, perturbations by major bodies, nutation of the Moon-Earth system, etc.
In particular, we employ NOVAS's subroutine ASSTAR, which computes the astrometric place of a star. This subroutine takes as input for a source: the ICRS coordinates at epoch 2000.0, the proper motions, the parallax, and the radial velocity (RV, which we set identically to RV = 0.0 km s−1 for all sources). The routine in turn produces, at a specified location in the Solar System, the astrometric place of the source in right ascension and declination at a specified Julian date. Therefore, as Gaia DR2 ICRS positions are given at the epoch 2015.5, we first need to re-position the ICRS coordinates to epoch 2000.0, using Gaia DR2 pms, before passing them to the ASSTAR subroutine.
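Conceptually, and ignoring the many subtle effects that NOVAS models, the astrometric place returned by ASSTAR can be approximated to first order by a proper-motion term plus a parallactic shift; a simplified Python sketch with illustrative names:

```python
import numpy as np

MAS = np.radians(1.0 / 3.6e6)  # one milliarcsecond in radians

def astrometric_place(ra0, dec0, pm_ra_cosdec, pm_dec, plx, dt_yr, obs_xyz_au):
    """First-order astrometric place: proper motion plus parallactic shift.
    ra0, dec0 in radians (at the reference epoch); pm_* in mas/yr; plx in mas;
    dt_yr = t - t0 in years; obs_xyz_au = observer barycentric position in au."""
    x, y, z = obs_xyz_au
    # classical parallax factors on the sky
    f_ra = x * np.sin(ra0) - y * np.cos(ra0)
    f_dec = (x * np.cos(ra0) + y * np.sin(ra0)) * np.sin(dec0) - z * np.cos(dec0)
    ra = ra0 + (pm_ra_cosdec * dt_yr + plx * f_ra) * MAS / np.cos(dec0)
    dec = dec0 + (pm_dec * dt_yr + plx * f_dec) * MAS
    return ra, dec
```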
The need for an existing (and positive) parallax measurement significantly restricts the sample of usable Gaia DR2 sources. Most of the images have over 20 detected Gaia DR2 sources satisfying these criteria, with a maximum of 25 and a minimum of 14 common sources. Nevertheless, even the image with the minimum number of detected sources in common with Gaia DR2, i.e., 14, has 14×2D positions, which are more than adequate to constrain, at the sub-mas level, the six parameters of the most general linear transformation that brings the detected positions on that individual image into the ICRS. [Note that for a six-parameter transformation, 3×2D data points would be sufficient.] We are thus able to exploit the Gaia DR2 reference sources in each of the 18 individual images employed in this work to carefully study the motion in the field surrounding W1639−6847.
Stack Images
With the coordinate transformations from each image to the reference frame (X, Y), we can create stacked images within each epoch, and for each filter. Stacked images give the best view of the astronomical scene and can be used to independently check the nature of sources in the images. In the left panel of Fig. 1, we show the obtained stacks for the three main epochs in the two filters with a similar effective wavelength, i.e., F125W (for 2013.1 and 2013.8) and F127M (for 2019.2), in the patch of sky crossed by W1639−6847 between these epochs. The right panel shows the entire field of view of the F127M observations. We saved our stacked images in FITS format, and put in their headers our absolute astrometric solution with World Coordinate System keywords. These five stacked images, one for each filter/epoch combination, are released as supplementary electronic material of this work. Note that the (X, Y) coordinates in this paper are not in the same pixel-coordinate system as these stacked images, which are instead super-sampled by a factor of two (i.e., each pixel is ∼60 mas in size).
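As an illustration of how such headers can be produced, the sketch below writes a stacked image to FITS with a tangent-plane WCS using astropy; the array, reference pixel, and tangent point are placeholders, with only the ∼60 mas super-sampled pixel scale taken from the text:

```python
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS

stack = np.zeros((2048, 2048), dtype=np.float32)   # placeholder stacked image

w = WCS(naxis=2)
w.wcs.ctype = ["RA---TAN", "DEC--TAN"]
w.wcs.crpix = [1024.5, 1024.5]                # illustrative reference pixel
w.wcs.crval = [249.92, -68.794]               # illustrative tangent point (deg)
w.wcs.cdelt = [-60.0 / 3.6e6, 60.0 / 3.6e6]   # ~60 mas super-sampled pixels

fits.PrimaryHDU(data=stack, header=w.to_header()).writeto(
    "stack_F127M.fits", overwrite=True)
```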
Determination of the Astrometric Parameters
Our 18 images (in 2D) gave 36 individual data points, from which to extract the five astrometric parameters: positions (X, Y), proper motions (µX, µY), and parallax (ϖ) for W1639−6847. As motivated in Paper I, we keep our calculations in the observational plane (X, Y).
Again, NOVAS is used to predict the astrometric place of W1639−6847. We then use a Levenberg-Marquardt algorithm (the FORTRAN version lmdif, available in MINPACK; Moré et al. 1980) to minimise the [observed − calculated] values over the five parameters: X, Y, µX, µY, and ϖ.
Our best-fit astrometric solution is given in Table 1 and shown in Fig. 2. We note that the estimated parallax is already in an absolute reference system. To assess the uncertainties of our solution, we perform 25 000 simulations, adding random errors following Gaussian distributions with dispersions derived from the observed data of W1639−6847 for each of the five filter/epoch combinations (i.e., F125W@2013.1, F105W@2013.812, F105W@2013.820, F125W@2013.826, and F127M@2019.2). The intrinsic ∼0.050 mas systematic uncertainties inherent to the Gaia DR2 parallaxes (Lindegren et al. 2018) need to be added to the error budget, although they are completely insignificant compared to the estimated errors on the parallax.
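A minimal sketch of this fit-plus-simulations scheme, using SciPy's MINPACK-based Levenberg-Marquardt wrapper (the time stamps, measured positions, per-epoch dispersions, and parallax factors fx, fy are assumed precomputed, e.g. with NOVAS; all names are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, x, y, fx, fy):
    """p = (X0, Y0, muX, muY, plx); fx, fy = parallax factors per data point."""
    X0, Y0, muX, muY, plx = p
    return np.concatenate([x - (X0 + muX * t + plx * fx),
                           y - (Y0 + muY * t + plx * fy)])

def fit_and_simulate(t, x, y, fx, fy, sx, sy, p0, n_sim=25_000, seed=0):
    """Best fit, then Gaussian Monte Carlo resampling for the uncertainties."""
    best = least_squares(residuals, p0, args=(t, x, y, fx, fy), method="lm").x
    rng = np.random.default_rng(seed)
    sims = np.empty((n_sim, 5))
    for i in range(n_sim):
        xs = x + rng.normal(0.0, sx)   # sx, sy: per-epoch dispersions
        ys = y + rng.normal(0.0, sy)
        sims[i] = least_squares(residuals, best,
                                args=(t, xs, ys, fx, fy), method="lm").x
    return best, sims.std(axis=0)      # parameters and their 1-sigma errors
```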
We note that the two epochs with the widest time baseline also have the best astrometric accuracies and were taken almost exactly at the same phase of the year, making our pms exquisitely accurate, at the quarter-of-a-mas level. However, with only three phases of the year mapped, the parallax estimate relies entirely on (and is therefore limited by) the weakest measurements, at epochs ∼2013.8.
The astrometric precisions at this epoch are, unfortunately, significantly worse than those at the other two epochs, for several reasons. First, all images within this epoch are affected by contaminating light coming primarily from scattered Earthlight. This anomaly is often present for IR observations made when the limb angle, i.e., the angle between HST's pointing direction and the nearest limb of the bright Earth, is less than ∼30 degrees. Second, the total exposure times, and therefore the average signal-to-noise ratios, in each of these images are significantly lower than for those taken in 2013.1 and 2019.2. Third, the close proximity of a relatively bright star at ∼3.5 pixels from W1639−6847 might also have contributed to enlarging the errors (see Fig. 1).
In addition to Fig. 2 and its insets, we show in Fig. 3 the parallax ellipse along with the HST measurements [proper motion subtracted]. This representation better reveals the sampling of the parallactic motion, which, with only three main epochs, could be problematic. The fact that the 2013.8 epoch is made of three sub-epochs separated by about a day (on 26, 27, and 29 October 2013, respectively) slightly alleviates this problematic situation in the parallax estimate. While our parallax best fit provides a formal error of only ∼2 mas, a close look at our best fit compared to the observed points at these ∼2013.8 epochs seems to suggest a marginally larger parallax, which could be larger by as much as ∼0.04 WFC3/IR pixel (i.e., ∼5 mas), or possibly residuals caused by the closeness of the aforementioned field star at ∼3.5 pixels in that epoch. Indeed, with only 3 main annual phases probed, it is hard to highlight the presence of unaccounted systematic errors in these values. A single future measurement could be sufficient to significantly refine and consolidate our new parallax estimate.
Improved vs. old method, and RVs
Even if we expect negligible differences for the case of W1639−6847, it is worth comparing the numerical results of the procedure from Paper I with the new procedure presented in this work, which includes the parallaxes of the Gaia DR2 reference sources. In our first test, we performed the astrometric-parameter fit using the very same sample of reference stars in each image (14-25), but this time not including their parallaxes (i.e., assuming them to be at infinite distances, and therefore setting their parallaxes to zero). We obtained a parallax of π = 209.74 mas, which is slightly smaller than the value ϖ = 210.35 mas reported in Table 1. This reduced parallax for W1639−6847 goes in the right direction: it is an apparent parallax (π), obtained with respect to reference sources that are not at infinite distances, and is therefore expected to be smaller than the absolute parallax (ϖ), as it does not contain the parallax of the reference sources. However, it is only marginally smaller, as ∼0.6 mas compared with an estimated uncertainty for ϖ of 1.8 mas (1-σ) corresponds to a ∼0.3 σ significance. Finally, we note that all the other astrometric parameters (positions and proper motions) show even less significant changes.
As a second test, we compute transformations using all the Gaia DR2 stars with proper motions, even when no (positive) parallaxes were available. This results in an enlarged sample of reference objects (57-79 vs. 14-25). The derived apparent parallax in this case is π = 210.02 mas, thus even closer (∼0.3 mas) to our derived absolute parallax (ϖ = 210.35 mas), and consistent with it at the ∼0.2 σ level. We note that the consistency in positions between Gaia DR2 stars and their positions in the HST images is always better than ∼3 mas (Paper I, Fig. 3, as well as this work), and that the inconsistencies are dominated by random errors in the positions measured in the HST images. Therefore, going from ∼20 to ∼70 reference sources, we could hope to reduce the errors in our transformations (from the coordinate system of the individual HST images to the Gaia DR2 system) at most from ∼0.65 mas to ∼0.35 mas, both of which are well within the uncertainties of our individual measurements, and also within the error on our fitted absolute parallax, σϖ = 1.8 mas.
Figure 3 caption: Our solution for the parallax ellipse in the (X, Y) 2013.12 coordinate system. Individual HST data points are indicated with star symbols, which are connected with small segments to their expected positions according to our best fit. Smaller ellipses in magenta, green, and blue indicate the 1-σ X,Y of individual data points within each epoch. Note how the ellipses are significantly smaller for the first and last epochs, compared to the 2013.8 sub-epochs. Insets in gray have the same scale, and show zoomed-in views around the locations marked by gray boxes.
In our third and last test, we explore the impact of RVs on our final astrometry. In our derivations of the astrometric parameters, we have assumed the RVs of all the stars, W1639−6847 included, to be identically zero. However, a non-null radial velocity means that an object changes its distance to the observer in time, and therefore its parallax changes in time as a result of projection effects. Essentially all reference stars in our studied field are significantly further away than our science target. Therefore, neglecting their RVs has a much smaller effect than neglecting the target RV, as their distances change by a much smaller percentage than for W1639−6847. Assuming arbitrary RV values for the target simply causes fluctuations of our fit within the noise, for RVs up to ±1000 km s−1. This is not surprising, as even for the most extreme case of Barnard's Runaway Star, which has an RV of −110.6 km s−1 and ϖ = 547.45 mas, we expect a parallax change rate of only dϖ/dt = +34 µas yr−1 (Dravins et al. 1999).
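The size of this effect follows from differentiating the distance-parallax relation; as a back-of-the-envelope check of the quoted rate (our own illustration, not from Dravins et al.):

dϖ/dt = d(1/d_pc)/dt = −ϖ² v_r   (ϖ in arcsec, v_r in pc yr⁻¹).

With v_r = −110.6 km s⁻¹ ≈ −1.131 × 10⁻⁴ pc yr⁻¹ and ϖ = 0.54745 arcsec, this gives dϖ/dt ≈ −(0.54745)² × (−1.131 × 10⁻⁴) ≈ +3.4 × 10⁻⁵ arcsec yr⁻¹ = +34 µas yr⁻¹, in agreement with the quoted value.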
Nevertheless, it is interesting to note that astrometry could be used, in turn, to estimate RVs, and that these astrometric RVs do not suffer from spectroscopic biases such as gravitational redshifts (as high as 25 km s−1 for white dwarfs), convective bubble motions (∼0.5 km s−1 for red giants), etc. (indeed, any spectroscopic measurement is always model-dependent, while an astrometric one is purely geometrical). The secular changes of trigonometric parallaxes are well-known effects that can be used to determine model-independent astrometric RVs (see the paper series by Dravins et al. 1999 for a review). Astrometric RVs are well within the reach of Gaia precision for several close-by (or fast-moving) stars, but extremely hard to measure with traditional HST images (at least in non-trailing mode).
CONCLUSIONS
In this work, we have perfected the procedure developed in Paper I (Bedin & Fontanive 2018), exploiting the power of Gaia DR2 to improve imaging astrometry with narrow-field cameras. Our method makes use of the positions, proper motions, and parallaxes of stars in the Gaia DR2 catalog to derive highly precise astrometric solutions for sources too faint for Gaia that are observed in multiple epochs of HST data. The technique was refined in this paper to include the Gaia DR2 parallaxes of the astrometric reference sources in the analysis, allowing us to directly obtain absolute astrometric parameters.
This improved procedure was applied to the brown dwarf WISE J163940.83−684738.6, a Y0pec dwarf with puzzling photometric and spectroscopic features. The distance and proper motion of this unusual object were poorly constrained, with significant inconsistencies between existing estimates. Using three epochs of HST/WFC3 data acquired over a period of ∼6 years, we were able to constrain its parallax to ϖ = 210.4 ± 1.8 mas, and its proper motion to µα cos δ = 577.21 ± 0.24 mas yr−1, µδ = −3108.39 ± 0.27 mas yr−1.
With achieved precisions of ∼2 mas in parallax and at the sub-mas level in proper motion, these new astrometric parameters represent considerable improvements relative to previous estimates, as summarised in Table 2. On one hand, our proper motion measurements are in good agreement with other estimates from the literature. In particular, our derived µ α cos δ and µ δ values are consistent with the results from Tinney et al. (2014) and Martin et al. (2018) within 2 σ, although our obtained uncertainties are smaller by more than an order of magnitude. On the other hand, larger disparities (>3 σ) are observed between our proper motion measurements and those from Kirkpatrick et al. (2019), which were the most accurate to date.
Our estimates of the astrometric parameters for W1639−6847 are completely independent from those obtained with Spitzer data and, because of this, are valuable in their own right. For the same reason, it would also be interesting to combine them properly. Indeed, while unaccounted systematic errors in our estimated parallax could be as large as ∼5 mas, due to the problematic epochs around 2013.8 (see Sect. 4.3), based on our experience we can hardly expect residual systematic errors larger than 1 mas yr−1 in the proper motions derived from HST data (e.g., Bellini et al. 2018 and references therein). As we do not have the expertise to analyse Spitzer data at the same level of accuracy as we have achieved for the HST data (not only the distortion and positioning, but particularly the way to simultaneously fit HST data with data from a telescope on a significantly different, Earth-trailing, heliocentric orbit), we list in Table 3 our individual HST measurements to allow future investigators to properly combine the two space-based datasets.
In terms of parallax, results from previous works were more discordant, with a ∼10% discrepancy between the best estimates available so far (Tinney et al. 2014; Martin et al. 2018; Kirkpatrick et al. 2019). Interestingly, our newly-derived value falls between the ground-based and Spitzer determinations from Tinney et al. (2014) and Martin et al. (2018), respectively, and this time is in excellent agreement with the Spitzer-derived value from Kirkpatrick et al. (2019), which used additional epochs of data compared to the work of Martin et al. (2018). The corresponding distance of 4.75 ± 0.05 pc we obtain here for W1639−6847 makes our result the most accurate distance measurement available for this Y dwarf.
As previously discussed, our parallax estimate for W1639−6847 relies entirely on the epoch with the lowest astrometric precision, and will require an additional epoch of observations to be further validated and refined. An accurate measurement of the distance to W1639−6847 will certainly be the key to modelling and understanding the peculiar atmospheric characteristics observed to date for this object. Nevertheless, we have successfully demonstrated that our powerful procedure allows us to place strong constraints on the parallax and proper motion of extremely faint objects, based on only three epochs of observations taken over a baseline of ∼half a decade.
The Hubble Space Telescope indeed provides a unique opportunity to reach such results for faint and red brown dwarfs, with an ideal compromise between the ∼121 mas plate scale of the WFC3/IR channel and a wide field of view allowing for numerous astrometric references, combined with the exquisite stability achieved from space. In contrast, other space-based telescopes generally have significantly broader pixel sizes (>1−2 arcsec), leading to lower astrometric resolution and increased chances of blended sources (as was originally the case for our target W1639−6847 in WISE; Tinney et al. 2012). While ground-based facilities typically have much higher angular resolutions, mitigating the broad plate-scale drawbacks, observations from the ground are constrained by sensitivity, rendering observations of the faintest brown dwarfs extremely challenging. In addition, ground-based data generally suffer from atmospheric aberrations and numerous systematic errors that can be difficult to quantify and account for when comparing near-infrared brown dwarf targets with field stars of very different colours.
HST therefore represents a superior platform for high-precision astrometry of ultracool dwarfs, and for a method like the one developed in this paper to be applied. The derivation of new distance measurements for a number of additional Y brown dwarfs via such an approach will be crucial to the characterisation of these objects, and will undoubtedly shed new light on substellar studies, at both the individual and population levels.
The remarkable spatial and spectral resolution of the anticipated James Webb Space Telescope (JWST) will soon allow for unparalleled probes of ultracool brown dwarfs at near-infrared wavelengths, by observing at wavelengths where Y dwarfs are orders of magnitude brighter than they are at HST wavelengths. In particular, between the very large field of view of the Near Infrared Camera (NIRCam) instrument and its exceptional angular resolution of 32 mas at 2 µm, we will be able to take our technique a step further with JWST, and measure precise distances to the coldest objects in the Solar neighbourhood to even greater accuracies. This will in turn tremendously enhance our understanding of planet-like atmospheres and will provide unique opportunities to calibrate theoretical models at the low-mass end of the substellar regime. Table 3. (only for the on-line version): For each of the 18 HST images analysed in this work we list: the Modified Julian Day, our estimated coordinates for W1639−6847 in the ICRS at the epoch of the image, its positions on the master frame (X, Y), the image archival root-name, and finally the measured raw coordinates of the target in pixels for that image. | 2020-02-24T02:01:05.058Z | 2020-02-21T00:00:00.000 | {
"year": 2020,
"sha1": "7b39e497bdd62dc52a97d3b195def3c4907286c0",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/494/2/2068/33096559/staa540.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "7b39e497bdd62dc52a97d3b195def3c4907286c0",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
248214391 | pes2o/s2orc | v3-fos-license | Wheat Germ Fermentation with Saccharomyces cerevisiae and Lactobacillus plantarum: Process Optimization for Enhanced Composition and Antioxidant Properties In Vitro
Wheat germ, a by-product of the flour milling industry, is currently commercialized mainly for animal feed applications. This study aims to explore and optimize the process of wheat germ fermentation to achieve products with enhanced nutritional composition and biological properties, and to further characterize the fermented products generated under the optimum conditions. The type of microorganism (Saccharomyces cerevisiae 5022 (yeast) and Lactobacillus plantarum strain 299v (bacteria)), pH (4.5, 6, and 7.5), and fermentation time (24, 48, and 72 h) were optimized using response surface methodology (RSM), aiming to achieve fermented products with high total phenol content (TPC), dimethoxy benzoquinone (DMBQ) content, and antioxidant activities. Optimum fermentation conditions were achieved using L. plantarum at pH 6 for 48 h, generating extracts containing TPC of 3.33 mg gallic acid equivalents/g, DMBQ of 0.56 mg DMBQ/g, and DPPH radical scavenging of 86.49%. These optimally fermented products had higher peptide concentrations (607 μg/mL) and gamma-aminobutyric acid (GABA) contents (19,983.88 mg/kg) compared to non-fermented or yeast-fermented products. These findings highlight the influence of the fermentation conditions of wheat germ and the promising industrial application of wheat germ fermentation for developing food products with enhanced biological properties, promising for their commercialization as functional foods.
Introduction
Wheat is one of the main staple foods in several countries around the globe, serving as an essential commodity for over one-third of the world's population and contributing more than any other crop to the caloric intake of this population [1]. Wheat grain constituents can be divided into endosperm (80-85%), bran (13-17%), and germ (2-3%), the latter currently being considered a by-product of the flour milling industry. The presence of wheat germ negatively affects the technological and quality attributes of flour and the stability of dough when the flour is used in bread-making processes [2,3]. Wheat germ has been described as a source of macronutrients (proteins and peptides, carbohydrates, and lipids) as well as of minor compounds with proven health benefits when used as functional foods, such as tocopherols, phytosterols, carotenoids, thiamin, riboflavin, niacin, phenolics, saponins, flavonoids, γ-aminobutyric acid (GABA), and quinones [3]. Furthermore, wheat germ is also known to possess a well-balanced amino acid profile and relatively rich contents of essential amino acids, especially lysine, methionine,
Biological Materials
Wheat germ (Shiraz wheat cultivar) was obtained from Khousheh Fars Flour Milling Plant (Shiraz, Iran) and stored at −18 °C to avoid lipid oxidation and other undesirable changes in the biological material prior to further processing. YGC (Yeast Extract Glucose Chloramphenicol) medium and MRS (Modified deMan, Rogosa, and Sharpe) culture media were purchased from Merck Co. (Darmstadt, Germany). Lyophilized S. cerevisiae (PTCC 5022) and L. plantarum (PTCC 299V) cultures were purchased from the Iran Organization for Research and Technology's culture collection (Tehran, Iran). The YGC culture medium was incubated in a shaking incubator at 120 rpm and 28 °C for 48 h. Thereafter, an aliquot of 100 µL of L. plantarum was added to 5 mL of the MRS culture medium and incubated further at 37 °C for 24 h. Before the inoculation, to efficiently activate the MRS medium, 2 mL of this culture medium containing the L. plantarum was added to 50 mL of the MRS media and incubated under identical conditions for 48 h.
Wheat Germ Fermentation
10 g of wheat germ were mixed into 200 mL of sodium phosphate buffer solution (0.05 M). Bacterial and yeast cells were then separated from their culture media by centrifugation (6000× g, 5 min at room temperature). The harvested cells were washed with sterile phosphate buffer multiple times, resuspended in water to achieve a cell population of 10⁸ CFU/mL, and homogenized using a vortex unit. Fermentations with yeast and bacteria were carried out at 28 °C and 37 °C, respectively, with variable fermentation times (24, 48, and 72 h) and pH levels (4.5, 6.0, and 7.5), as required for the optimization process described in detail in Section 2.6. Upon the completion of each fermentation process, the samples were freeze-dried (Christ ALPHA 1-2 LD plus, Osterode am Harz, Germany) and preserved at −20 °C for further chemical analyses.
Chemical Analyses
All the chemical analyses were performed in triplicate.
Proximate Composition Analyses
The moisture, ash, fat, and protein contents of the samples were determined according to the official methods of analysis AACC 44-15, 08-12, 30-10, and 46-12, respectively (AACC, 2001). For moisture content determination, samples (1 g) were weighed in pre-weighed Petri dishes and oven-dried at 105 ± 2 °C for 5 h. The samples were subsequently cooled to room temperature in a desiccator and weighed. The difference in sample weight before and after oven-drying represents the moisture content of the samples. For ash content determination, samples (1 g) were weighed in pre-weighed crucibles and placed in a muffle furnace at 550 °C for 4 h. The crucibles were cooled down in desiccators and re-weighed. The difference in sample weight before and after the process is the ash content. For protein content, samples (1 g) were digested with 50 mL sulphuric acid in the presence of catalyst tablets. The digestion process consists of heating the samples for 30 min at 220 °C followed by 120 min at 420 °C. Subsequently, samples were cooled down to room temperature in a desiccator and were distilled using an Auto-Kjeldahl apparatus (BUCHI Labortechnik AG, Flawil, Switzerland). The fat contents were determined using the Soxhlet extraction method with 1 g of sample and 100 mL hexane for 6 h at 68 °C.
Total Phenolic Content (TPC) Analysis
TPC was measured using the method adapted from Liu, Chen, Shao, Wang, and Zhan [10]. Briefly, the Folin-Ciocalteu phenol reagent (2 N) was diluted ten times with distilled water. 0.1 mL of sample or standard (gallic acid, 0.1-10 mg/mL) was mixed with 0.75 mL of the diluted Folin-Ciocalteu phenol reagent, and the mixtures were incubated at 20 °C for 10 min. Following this incubation, 0.75 mL of sodium carbonate solution (2% w/v) was added to each mixture, vortexed, and incubated in dark conditions for 45 min. The absorbance of the mixtures was read at 765 nm using a spectrophotometer (UV-1650PC; Shimadzu Corp., Kyoto, Japan). The TPC results were expressed as mg gallic acid equivalents (GAE) per g of freeze-dried sample (mg GAE/g).
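The conversion from absorbance to mg GAE/g follows from the gallic-acid calibration line; a small Python sketch in which the slope, intercept, and mass/volume bookkeeping are illustrative placeholders rather than values from this study:

```python
def tpc_mg_gae_per_g(a765, slope, intercept, extract_ml, sample_g, dilution=1.0):
    """mg GAE per g of freeze-dried sample from a 765-nm absorbance,
    given a gallic-acid calibration a765 = slope * c + intercept (c in mg/mL)."""
    c_mg_ml = (a765 - intercept) / slope           # gallic-acid equivalents, mg/mL
    return c_mg_ml * dilution * extract_ml / sample_g

# e.g. tpc_mg_gae_per_g(0.42, slope=0.11, intercept=0.01,
#                       extract_ml=10.0, sample_g=1.0)
```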
DMBQ Analysis
DMBQ contents were measured using an HPLC system. The samples were prepared for DMBQ analysis following the protocol described by Zheng et al. [15]. Briefly, 10 g of sample were dissolved in 250 mL of distilled water and extracted three times using 200 mL of chloroform. The chloroform layers were collected, washed three times with distilled water, and dried over anhydrous sodium sulfate. The filtrates were evaporated using a vacuum evaporator (Rotavapor RII, BUCHI, Flawil, Switzerland) at 30 °C. The dried samples were re-dissolved in the mobile phase (20% acetonitrile and 80% water, v/v) and filtered through 0.45 µm filters before their injection into the HPLC system. The HPLC system was equipped with a quaternary pump (Knauer pump 1000, Berlin, Germany), a UV detector (245 nm), and a C-18 column (5 µm, 250 × 4.6 mm; Nucleodur C18 pyramid 250/4.6, Macherey-Nagel, Düren, Germany). The mobile phase consisted of a 20% acetonitrile−80% water (v/v) mixture at a flow rate of 0.5 mL/min and a temperature of 25 °C. Peaks were detected based on retention time, and DMBQ concentrations were determined by comparison with the standard (DMBQ 97%, ACROS Organics). All the measurements were conducted in triplicate, and the results were reported as mg DMBQ per g of freeze-dried sample [15].
Peptide Content Analysis
Peptide content analysis was performed following the protocol described by Liu, Chen, Shao, Wang, and Zhan [10] with slight modifications. Briefly, 0.25 mL of freeze-dried wheat germ extract were mixed with 2 mL of 0.2 M sodium phosphate buffer (pH 8.2), followed by 2 mL of 0.1% (v/v) trinitrobenzenesulfonic acid (TNBS). The mixtures were incubated at 60 °C for 1 h under dark conditions, and the reaction was stopped by adding 4 mL of HCl (0.1 M). The absorbance of each sample was recorded at 340 nm, and the peptide content of each sample was quantitatively determined using the amino acid L-leucine as the standard at concentrations ranging from 0 to 1.2 mg/mL [4].
γ-Aminobutyric Acid (GABA) Analysis
The GABA content of the samples was determined according to the method described by Donkor et al. [16] with modifications. 0.25 g of wheat germ were mixed with 1 mL of 70% (v/v) ethanol, homogenized for 10 min in a vortex, and centrifuged (10,000 rpm, 10 min, 4 °C). This process was repeated twice, the supernatants were pooled, and the ethanol was evaporated at 40 °C. The samples were re-dissolved in 1 mL of distilled water and passed through a 0.45 µm filter. The GABA content was determined by injecting 20 µL of the extract into the same HPLC system (Nucleodur C18 Pyramid column, 125 × 3 mm, 5 µm) equipped with a refractive index (RI) detector (Wyatt, Optilab rEX) and a pump (Knauer 1000, Germany). The temperature of the column was set to 25 °C, and HPLC-grade water was used as the mobile phase at a flow rate of 0.6 mL/min.
DPPH Radical Scavenging Activity Determination
DPPH radical scavenging activity assays were performed in triplicate following the protocol described by Liu, Chen, Shao, Wang, and Zhan [10] with some modifications. Briefly, 2 mL of wheat germ extract were diluted with 100 mL of 90% aqueous methanol. 2 mL of this diluted extract were mixed with 1 mL of DPPH stock solution (4 mg per 100 mL of 90% methanol), and the mixtures were incubated in the dark for 45 min. The absorbance of the samples was read at 517 nm. A methanolic solution containing all reagents without the addition of a test compound was used as a control. The DPPH radical scavenging activity of the samples was calculated using the following equation:
DPPH radical scavenging activity (%) = [(A_C − A_S)/A_C] × 100
where A_C is the absorbance of the control, and A_S represents the absorbance of the samples.
Experimental Design for Optimization
Response surface methodology (RSM) was used to optimize the fermentation of wheat germ using the software Design Expert (v 12.0, Stat-Ease, USA). The optimization of the fermentation process focused on the parameters pH (X1), time (X2), and type of microorganism (X3), with the aim of achieving products with maximum TPC and DMBQ contents as well as maximum DPPH radical scavenging activities. The type of microorganism is a categorical factor introduced in the design as level 1 = bacteria and level 2 = yeast, while the numerical independent variables were coded as X1 (−1 = 4.5, 0 = 6, +1 = 7.5) and X2 (−1 = 24, 0 = 48, +1 = 72).
Twenty-six experimental runs were performed following a central composite design. The different combinations of the process parameters were studied, and the main responses achieved in each fermentation run are summarized in Table 1. The correlation between independent and dependent variables was described through a second-order polynomial model of the following form:
Y = β0 + Σ βi Xi + Σ βii Xi² + ΣΣ βij Xi Xj
where Y stands for a predicted response (TPC, DMBQ, or DPPH radical scavenging activity); β0, βi, βii, and βij represent the regression coefficients; and Xi and Xj are the coded independent factors. One model was generated for each dependent variable.
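For a single level of the categorical microorganism factor, the coefficients of this model can be estimated by ordinary least squares; a minimal NumPy sketch (Design Expert fits the full design including the categorical factor, so this is only an illustration):

```python
import numpy as np

def design_matrix(x1, x2):
    """Columns 1, X1, X2, X1*X2, X1^2, X2^2 for coded levels -1/0/+1."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_second_order(x1, x2, y):
    """Least-squares betas: (b0, b1, b2, b12, b11, b22)."""
    beta, *_ = np.linalg.lstsq(design_matrix(x1, x2), np.asarray(y, float),
                               rcond=None)
    return beta
```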
Statistical Analyses
All experiments and measurements were performed in triplicate, and the data were analyzed using a randomized complete block design with the statistical software SPSS (v. 19). Duncan's tests were used to perform mean comparisons and to determine the significance of the differences. In all cases, the criterion for statistical significance was p < 0.05.
Non-Fermented Wheat Germ Sample Composition
The chemical composition of the original wheat germ biomass prior to fermentation was 14% fat, 32% protein, 15.5% moisture, and 2.5% ash.
Modelling the Fermentation Process of Wheat Germ
The matrix design and the experimental responses (TPC, DMBQ, and DPPH radical scavenging activity) for each run are presented in Table 1. There was considerable variation across the different responses analyzed, with ranges for TPC of 1.59-3.99 mg GAE/g freeze-dried sample, for DMBQ of 0.06-0.64 mg DMBQ/g freeze-dried sample, and for DPPH radical scavenging activity of 50.01-89.15%. The highest yields of TPC (3.99 mg GAE/g freeze-dried sample), DMBQ (0.64 mg DMBQ/g freeze-dried sample), and DPPH radical scavenging activity (88.95%) were achieved when wheat germ was fermented at a pH of 6, for 48 h, using the bacterium (L. plantarum).
Modelling TPC during Fermentation
Contour plots (2D) and response surface plots (3D) were generated from the previously described model equations for TPC as a function of different pH values and fermentation times, when using either bacteria or yeast (Figure 1).
Overall, the TPC of the fermented samples ranged between 1.59 and 3.99 mg GAE/g freeze-dried sample (Table 1). Keeping the pH constant, and independently of the type of microorganism used for the fermentation, the TPC of the fermented wheat germ increased with fermentation time, reaching its maximum level at 48 h. Further increases in fermentation time resulted in unchanged or even slightly reduced TPC levels. When the fermentation was performed using yeast, the highest TPC was 3.45 mg GAE/g freeze-dried sample, achieved at pH 6.0 and 48 h of fermentation time.
Modelling DMBQ Contents during Fermentation
The levels of DMBQ ranged from 0.06 to 0.64 mg DMBQ/g freeze-dried sample, depending on the fermentation conditions. Fermentation with either bacteria or yeast contributed to increased levels of DMBQ, particularly at a pH of 6.0 and a fermentation time of 48 h, while further increases in either of these parameters resulted in reduced DMBQ contents.
Modelling DPPH Radical Scavenging Activity during Fermentation
The influence of the fermentation conditions on the DPPH radical scavenging activity of fermented wheat germ, as a function of different pH values and fermentation times when using either bacteria or yeast, is shown in Figure 1. The DPPH radical scavenging activity of fermented wheat germ ranged from 50.01 to 88.95%. Overall, at the same pH values, the DPPH radical scavenging activity increased with increasing fermentation time, achieving its highest value of 89.1% at a pH of 6.0 and a fermentation time of 72 h. The process of fermentation significantly increased the DPPH radical scavenging activity of the samples compared to that of non-fermented wheat germ.
Optimum Conditions for Wheat Germ Fermentation
The coefficients provided in Table 2 indicate the effect of each independent parameter (pH (X1), time (X2), and type of microorganism (X3)) on the dependent variables. The magnitude of these coefficients relates to the weight of their effect, and the sign of the relationship (positive or negative) indicates an increase or decrease in the experimental responses, respectively. The results of the ANOVA indicated that the goodness-of-fit of the quadratic polynomial models for all dependent variables was significant (p < 0.0001) (see Table 2). The mathematical models generated from the experimental data for TPC (Y1), DMBQ content (Y2), and DPPH (Y3) are second-order polynomials of the form given above, with the regression coefficients reported in Table 2. The high values of R² and adjusted R² (>0.80 in all cases) indicated that the suggested models work well to elucidate the relationship between the proposed variables.
The CV values for all the dependent variables were also low (<10% in all cases), indicating that the variation around the mean value is low and the proposed models have sufficient precision and reliability. The adequate precision measures the signal-to-noise ratio, and a ratio > 4 is considered desirable [17]. The adequate precision values of the current models (10.79 for TPC, 10.091 for DMBQ, and 23.162 for DPPH) suggest that the fitted models have a very good signal-to-noise ratio. Furthermore, the lack-of-fit values were non-significant for all response models of the current study. Based on the model equations provided by RSM for each of the defined optimization objectives, the independent variables were subsequently adjusted using the RSM package's response optimizer. A numerical optimization was performed to predict the optimum levels of each of the independent variables to obtain maximum values of TPC, DMBQ, and DPPH radical scavenging activity. The corresponding optimum conditions were achieved by the bacterial fermentation of wheat germ at pH 6 for 48 h, with a high desirability coefficient of 0.89. The desirability lies between 0 and 1 and represents the closeness of a response to its ideal value. Under these optimum conditions, the levels of TPC, DMBQ, and DPPH radical scavenging activity were 3.33 mg GAE/g, 0.56 mg DMBQ/g, and 86.49%, respectively.
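The desirability coefficient quoted above follows the standard Derringer-Suich construction: each response is rescaled to [0, 1] with a "larger is better" function, and the overall desirability is their geometric mean. A small Python sketch using the observed response ranges from Table 1 (our own illustration; Design Expert evaluates the fitted models rather than the raw observations):

```python
import numpy as np

def d_larger_is_better(y, lo, hi):
    """Derringer-Suich desirability for a response to be maximised."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def overall(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, float)
    return ds.prod() ** (1.0 / ds.size)

d = [d_larger_is_better(3.33, 1.59, 3.99),     # TPC at the optimum
     d_larger_is_better(0.56, 0.06, 0.64),     # DMBQ
     d_larger_is_better(86.49, 50.01, 89.15)]  # DPPH (%)
print(round(overall(d), 2))  # ~0.84 with observed values; Design Expert
                             # evaluates the fitted models, hence 0.89
```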
Further Chemical and In Vitro Biological Activity of Optimally Fermented Wheat Germ
Fermented wheat germ samples generated using the optimum fermentation conditions (bacteria, pH 6, and 48 h fermentation) were further analyzed for their contents of certain bioactive compounds (peptides and GABA) with proven health benefits frequently described when fermenting wheat germ. Moreover, the chemical composition of non-fermented wheat germ and yeast-fermented wheat germ (pH 6 and 48 h fermentation) are also reported for comparison purposes.
Peptide Contents of Optimally Fermented Wheat Germ
The peptide content of wheat germ in this study was 35.5 µg/mL, rising to 607 µg/mL following the optimized fermentation process using bacteria; this is also higher than the 532.50 µg/mL measured for wheat germ fermented with yeast under similar fermentation parameters. Figure 2 summarizes the main changes in GABA contents between raw, yeast-fermented, and bacteria-fermented wheat germ. The GABA content of non-fermented wheat germ samples was 2421.67 mg GABA/kg freeze-dried sample, increasing up to 13,675.62 mg/kg following yeast fermentation and reaching maximum levels following bacterial fermentation (19,983.88 mg/kg).
Discussion
The results of the non-fermented wheat germ composition were in close agreement with those of Zhang, Xiao, Dong, Wu, Yao, and Zhou [2], who reported the protein content of wheat germ before fermentation with L. plantarum to be 32.9%. The TPC, DMBQ, and DPPH radical scavenging activity levels of non-fermented wheat germ in the current study were 0.77 mg GAE/g, 0.12 mg DMBQ/g, and 23.22%, respectively.
This study showed great variation in the TPC of wheat germ after the samples were fermented under different processing conditions (see Table 1). Polyphenols, as a group of antioxidant molecules, play key roles in the prevention of several diseases, including cancer. Fermentation has been reported as an effective method to considerably enhance the content of polyphenols in the resulting products [18]. It is worth mentioning that the fermented wheat germ samples exhibited a significant enhancement in their TPC compared to their non-fermented counterparts, particularly when using the optimized conditions designed in this study to enhance the fermentation process. These results are in agreement with those of Zheng et al. [19], who achieved the highest phenolic contents in wheat germ fermented using Saccharomyces cerevisiae after 48 h of fermentation (3.6 mg GAE/g sample), declining to 1.5 mg GAE/g sample when the fermentation time was increased. The TPC of wheat germ following a 48 h bacterial fermentation was significantly higher than that described using yeast under the same experimental conditions. Similar results were also achieved by other researchers using various types of microorganism during the fermentation of wheat germ. Liu, Chen, Shao, Wang, and Zhan [10] reported TPC levels of 10.55 mg GAE/g in non-fermented wheat germ that increased up to 26.02 mg GAE/g following a 72 h fermentation process using Bacillus subtilis. Sandhu, Punia, and Kaur [18] also reported that the fungal fermentation of wheat germ with Aspergillus awamori nakazawa increased the TPC from 1.3 mg GAE/g to 3.54 mg GAE/g after 2 days. LAB and S. cerevisiae, which are the focus of the current study, contain a wide range of enzymes (β-glucosidase, carboxylase, α-glucosidase, and phosphokinase) that are able to disrupt most of the fibers present in the wheat germ's cell walls, such as cellulose, hemicellulose, and pentosans [3,15,20]. Thus, during the fermentation process, these enzymes break the polyphenol-hemicellulose bonds, which ultimately leads to the increases in TPC also observed in this study.
DMBQ is a derivative of quinones that contributes greatly to the beneficial biological properties attributed to the consumption of wheat germ [19]. Overall, the results confirm that fermentation resulted in a significant increase of these beneficial compounds; thus, these fermented products can have an increased value when sold as nutraceuticals or functional foods, particularly when using the optimized fermentation conditions determined in this study. Zheng, Guo, Zhu, Peng, and Zhou [15] used a combined artificial neural network and genetic algorithm strategy to optimize wheat germ fermentation by Saccharomyces cerevisiae, achieving a maximum quinone content of 0.939 mg/g sample. Similarly, Zhang, Xiao, Dong, Wu, Yao, and Zhou [2] reported that wheat germ contained approximately 33.8 µg DMBQ/g, and after fermentation with Lactobacillus plantarum dy-1 the concentration of DMBQ increased to 181.1 µg DMBQ/g. Rizzello, Mueller, Coda, Reipsch, Nionelli, Curiel, and Gobbetti [3] also demonstrated increases in DMBQ from 0.035 to 0.252 mg/g achieved by LAB fermentation. The mechanism of release of hydroquinones (which exist as β-glucosides) from wheat germ during fermentation is attributed to the action of β-glucosidase released during both yeast and bacterial fermentations. When these compounds are released via the breakage of β-glucosidic bonds, they are oxidized to DMBQ. Moreover, in wheat germ, high levels of β-glucosidase and peroxidase enzymes can be naturally present, contributing further to the formation of DMBQ [20].
Increased antioxidant activity of different metabolites has been linked to other biological properties also displayed by these compounds, including their anticarcinogenic activity [20]. Thus, during the process of optimization, antioxidant activities were used as a marker of in vitro biological properties of the fermented wheat germ. Liu, Chen, Shao, Wang, and Zhan [10] fermented wheat germ using L. plantarum and reported differences in the antioxidant activities (expressed as % DPPH radical scavenging activity) of samples at early stages of fermentation and those fermented after 72 h. The authors reported antioxidant activities of 10% in the raw wheat germ samples that increased to reach levels of approximately 78% when fermenting the products with L. plantarum. Rizzello, Nionelli, Coda, De Angelis and Gobbetti [4] also reported that the fermentation of wheat germ with L. plantarum LB1 and L. rossiae LB5 led to an enhancement of 33% in the antioxidant activities of fermented wheat germ. The improved antioxidant activity reported in multiple studies as a result of the fermentation process may be mainly related to the production of phenolic and flavonoid compounds [3] as well as to the release of peptides through microbial-derived hydrolysis during the process of fermentation [10].
Overall, the process of fermentation increased the release of bioactive peptides from wheat germ, especially when using the optimized protocol designed in this study. Bioactive peptides can be produced by enzymatic hydrolysis during the processes of fermentation, germination, and ripening [5], and they may have an active role in contributing to the antioxidant and anticarcinogenic activities of wheat germ. Liu, Chen, Shao, Wang, and Zhan [10] reported that the peptide contents of wheat germ increased from 4.31 to 29.68% during the first 48 h of fermentation with Bacillus subtilis, while these levels were reduced to 25.80% at 72 h. The authors attributed this increased peptide content to the activity of a proteinase secreted by Bacillus subtilis which could hydrolyze protein to several peptides. These findings were further supported by Niu et al. [21], who reported increased peptide content in wheat germ samples when fermented for less than 48 h, while the concentration of these compounds declined following additional fermentation time.
GABA is a four-carbon non-protein amino acid that is involved in multiple biological processes relevant to human health, including blood pressure control and antidiabetic, anticarcinogenic, anti-obesity, and tranquilizing effects, which minimize the risks of heart disease and Alzheimer's disease [22,23]. The significant increase in GABA content observed when fermenting wheat germ using the optimum protocol developed in this study therefore indicates the potential additional health benefits that could be achieved following the fermentation conditions explored here. Similar to the current study results, Rizzello, Nionelli, Coda, De Angelis, and Gobbetti [4] reported an increase in GABA contents from 903 mg/kg in raw samples to 2043 mg/kg when the samples were fermented by Lactobacillus plantarum LB1 and Lactobacillus rossiae LB5. The higher levels of GABA in the fermented samples of the current study can be attributed to the different bacterial strains used to optimize the fermentation of wheat germ.
Conclusions
The bacterial fermentation of wheat germ using L. plantarum was more efficient than yeast fermentation using S. cerevisiae for the generation of bioactive compounds and increased in vitro biological activities of fermented wheat germ. Moreover, the fermentation process using L. plantarum was also optimized for increased bioactive compounds and biological properties. Under optimum fermentation conditions (pH 6, 48 h), bacterial fermentation significantly improved the TPC, DMBQ content, and DPPH radical scavenging activity of fermented wheat germ to levels higher than those described for both raw and yeast-fermented wheat germ. Modifications of the fermentation parameters, such as increased fermentation time (72 h) or increases of pH beyond the optimum conditions, did not improve and, in some cases, even reduced the generation of the compounds analyzed and their in vitro biological properties. The insights gained into the effects of different fermentation parameters of wheat germ fermented by L. plantarum and S. cerevisiae can potentially be used at an industrial level by the food industry to obtain value-added products with specific functional properties, such as antioxidant activity, from wheat germ, which is currently considered a low-value by-product of the flour and milling industries.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article. | 2022-04-17T15:04:55.529Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "a00df784ba841b15ce6fa4df3ea43b1229faed00",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/11/8/1125/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0524f7d35fffc25aad6b018cd35f3551c46b5f87",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15336783 | pes2o/s2orc | v3-fos-license | Amenability constants for semilattice algebras
For any finite unital commutative idempotent semigroup S, a unital semilattice, we show how to compute the amenability constant of its semigroup algebra l^1(S), which is always of the form 4n+1. We then show that these give lower bounds to amenability constants of certain Banach algebras graded over semilattices. We show that there is no commutative semigroup with amenability constant between 5 and 9.
In conjunction with V. Runde [13], the third named author proved that for a locally compact group G, G is compact if and only if its Fourier-Stieltjes algebra B(G) is operator amenable with operator amenability constant less than 5. In a subsequent article [14], examples of non-compact groups G 1 were found for which the operator amenability constant is exactly 5. In related work of Dales, Lau and Strauss [3,Corollary 10.26], improving on [16,Theorem 3.2], it was shown that a semigroup algebra ℓ 1 (S) has amenability constant less than 5, if and only if S is an amenable group. For the multiplicative semigroup L 1 = {0, 1}, it is known that the amenability constant of ℓ 1 (L 1 ) is 5. These parallel facts are not coincidences since for the special groups G 1 , mentioned above, B(G 1 ) is ℓ 1 -graded over L 1 , i.e. there are 1-operator amenable subalgebras A 0 and A 1 such that B(G 1 ) = A 0 ⊕ ℓ 1 A 1 , and A 0 is an ideal.
We are thus led to consider the general situation of Banach algebras graded over semilattices, i.e. commutative idempotent semigroups, which we define in Section 2. To do this, in Section 1 we develop a method for computing the amenability constants associated to finite semilattice algebras. The results in Section 1 have a similar flavour to some results in the recent monograph [3], and are very similar to some results of Duncan and Namioka [4]. However, our method is explicit and quantitative, and thus is a nice complement to their work. In Section 2 we obtain a lower bound for the amenability constant of Banach algebras graded over finite semilattices. We show a surprising example which indicates our lower bound is not, in general, the amenability constant. We show, at least for certain finite dimensional algebras graded over linear semilattices, that our lower bound is achieved. We close with an answer to a question asked of us by H.G. Dales: we show that there does not exist a commutative semigroup G such that 5 < AM(ℓ¹(G)) < 9.
2000 Mathematics Subject Classification. Primary 46H20, 43A20; Secondary 20M14, 43A30. Key words and phrases. amenable/contractible Banach algebra, semilattice, graded Banach algebra.
Research of the third named author supported by NSERC Grant 312515-05.
There are natural examples of Banach algebras from harmonic analysis, due to Taylor [17], Inoue [9], and Ilie and Spronk [7,8], to which our techniques apply. We refer the reader to [7] and [13] for more on this. We feel that ideas developed here may lead to a tool to help classify which locally compact groups admit operator amenable Fourier-Stieltjes algebras B(G). Our hope is that the operator amenability constants AM_op(B(G)) can all be computed. We conjecture they are a subset of {4n + 1 : n ∈ N}, motivated by Theorem 1.7 and Theorem 2.2, below. We hope that these values will serve as a tool for classifying for which groups G, B(G) is operator amenable.
Interest in amenability of semigroup algebras, in particular for inverse semigroups and Clifford semigroups, goes back at least as far as Duncan and Namioka [4]. Grønbaek [5] characterised commutative semigroups G for which ℓ¹(G) is amenable. A recent extensive treatise on ℓ¹-algebras of semigroups has been written by Dales, Lau and Strauss [3], which includes a characterisation of all semigroups G for which ℓ¹(G) is amenable. Biflatness of ℓ¹(S), for a semilattice S, has recently been characterised by Choi [1]. 0.1. Preliminaries. Let A be a Banach algebra. Let A ⊗γ A denote the projective tensor product. We let m : A ⊗γ A → A denote the multiplication map, and we have left and right module actions of A on A ⊗γ A given on elementary tensors by a·(b ⊗ c) = ab ⊗ c and (b ⊗ c)·a = b ⊗ ca. A bounded approximate diagonal (b.a.d.) for A is a bounded net (D_α) in A ⊗γ A such that, for each a in A, lim_α (a·D_α − D_α·a) = 0 and lim_α m(D_α)a = a. (0.2) Following Johnson [10], we will say that a Banach algebra A is amenable if it admits a b.a.d. A quantitative feature of amenability was introduced by Johnson in [11], for applications to Fourier algebras of finite groups. The amenability constant of an amenable Banach algebra A is given by AM(A) = inf{ sup_α ||D_α||_γ : (D_α) is a b.a.d. for A }. The problem of understanding amenable semigroup algebras in terms of their amenability constants has attracted some attention [16,3].
We call A contractible if it admits a diagonal, i.e. an element D in A ⊗γ A for which m(D)a = a = am(D) (0.3) and a·D = D·a (0.4) for each a in A. Note, in particular, that then A must be unital and the norm of the unit is bounded above by AM(A).
If A is a finite dimensional amenable Banach algebra, then A ⊗ γ A is a finite dimensional Banach space, so any b.a.d. admits a cluster point D.
Since any subnet of a b.a.d. is also a b.a.d., the cluster point must be a diagonal, whence A is contractible.
We record the following simple observation.
Proposition 0.1. If A is a contractible commutative Banach algebra, then the diagonal is unique.
Proof. We note that A ⊗γ A is a Banach algebra in an obvious way: on elementary tensors, (a ⊗ b)(c ⊗ d) = ac ⊗ bd. If D and D′ are both diagonals then, using commutativity together with (0.3) and (0.4), DD′ = D·m(D′) = D and DD′ = D′·m(D) = D′, so D = D′. It will also be useful to observe the following.
Proposition 0.2. Let A and B be contractible Banach algebras, with respective diagonals D_A = Σ_i a_i ⊗ a′_i and D_B = Σ_j b_j ⊗ b′_j. Then A ⊗γ B has diagonal D = Σ_{i,j} (a_i ⊗ b_j) ⊗ (a′_i ⊗ b′_j). Proof. It is simple to check the diagonal axioms (0.3) and (0.4).
Amenability constants for semilattice algebras
A semilattice is a commutative semigroup S in which each element is idempotent, i.e. if s ∈ S then ss = s. If s, t ∈ S we write s ≤ t when st = s. (1.1) It is clear that this defines a partial order on S. We note that if S is a finite semilattice, then o = Π_{s∈S} s is a minimal element for S with respect to this partial order. We note that if S has a minimal element, then it is unique. Also if S has a unit 1, then 1 is the maximal element in S.
A basic example of a semilattice is P(T ), the set of all subsets of a set T , where we define στ = σ ∩ τ for σ, τ in P(T ). The minimal element is ∅, and the maximal element is T . We call any subsemilattice of a semilattice P(T ) a subset semilattice. This type of semilattice is universal as we have a semilattice "Cayley Theorem": for any semilattice S, the map s → {t ∈ S : t ≤ s} : S → P(S) (or s → {t ∈ S \ {o} : t ≤ s} : S → P(S \ {o})) is an injective semilattice homomorphism (by which o → ∅).
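This "Cayley theorem" can be checked mechanically. Below is a minimal Python sketch for the three-element semilattice with two incomparable elements; the element names and the meet table are illustrative stand-ins, not data from the paper. The down-set map sends meets to intersections, which is exactly the homomorphism property verified by the first assertion.

    # Semilattice F_2 = {o, s1, s2} with s1*s2 = o, embedded into (P(S), ∩)
    # via the down-set map s -> {t in S : t <= s}.
    def meet_F2(a, b):
        # s1 and s2 are incomparable; every mixed product collapses to o.
        return a if a == b else "o"

    S = ["o", "s1", "s2"]
    down = {s: frozenset(t for t in S if meet_F2(s, t) == t) for s in S}

    # Homomorphism check: down(st) == down(s) ∩ down(t) for all s, t.
    assert all(down[meet_F2(s, t)] == down[s] & down[t] for s in S for t in S)
    # Injectivity check: distinct elements have distinct down-sets.
    assert len(set(down.values())) == len(S)
    print(down)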
For any semilattice S we define ℓ¹(S) = { f = Σ_{s∈S} f(s)δ_s : ||f||_1 = Σ_{s∈S} |f(s)| < ∞ }, where each δ_s is the usual "point mass" function. Then ℓ¹(S) is a commutative Banach algebra under the norm ||·||_1 with the product f * g = Σ_{s,t∈S} f(s)g(t) δ_{st}. In particular we have δ_s * δ_t = δ_{st}. We shall consider the Banach space ℓ∞(S), of bounded functions from S to C with supremum norm, to be an algebra under usual pointwise operations. The Cayley map, indicated above, extends to an algebra homomorphism Σ : ℓ¹(S) → ℓ∞(S), given on each δ_s by Σδ_s = χ_{{t∈S : t≤s}} = Σ_{t≤s} χ_t. We note that if S is finite, then Σ is a bijection. In this case a formula for its inverse is given by Σ⁻¹g = Σ_{t∈S} ( Σ_{s≥t} µ(t,s) g(s) ) δ_t, (1.3) where χ_s = χ_{{s}} and µ : {(t,s) ∈ S×S : t ≤ s} → R is the Möbius function of the partially ordered set (S, ≤) as defined in [15, §3.7]. Our computations in this section will be equivalent to explicitly computing µ, though we will never need to know µ directly. It follows from [4, Theorem 10] that ℓ¹(S) is amenable if and only if S is finite. Thus it follows from (0.3) that ℓ¹(S) is unital if S is finite. If S is unital, then δ_1 is the unit for ℓ¹(S). If S is not unital, the unit is more complicated. We let M(S) denote the set of maximal elements in S with respect to the partial ordering (1.1).
Proposition 1.1. Let S be a finite semilattice. Then ℓ¹(S) has a unit u = Σ_{p∈S} u(p)δ_p whose coefficients satisfy u(p) = 1 − Σ_{s>p} u(s) (1.4) for each p in S, where we adopt the convention that an empty sum is 0. Moreover u(p) = Σ_{s≥p} µ(p,s). (1.5)
Proof. While we have already established existence of the unit above, let us note that we can gain a very elementary proof of its existence. Indeed, since Σ : ℓ¹(S) → ℓ∞(S) is a bijection, u = Σ⁻¹(𝟙) is the unit, where 𝟙 denotes the constant function 1; and thus, inspecting the coefficient of δ_p in Σu = 𝟙, we obtain (1.4). Note that if p ∈ M(S) the formula above gives u(p) = 1, and for any p we may apply (1.3) to g = 𝟙 and thus obtain (1.5).
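As an illustration, the recursion (1.4) is easy to run downwards from the maximal elements. The following Python sketch models S as a three-element chain of nested frozensets (an illustrative choice) and checks that Σ(u) is the constant function 1.

    # Unit recursion (1.4): u(p) = 1 - sum_{s > p} u(s), computed from the
    # maximal elements downwards.  S is the chain 0 < 1 < 2.
    S = [frozenset(range(k)) for k in range(3)]

    u = {}
    for p in sorted(S, key=len, reverse=True):   # maximal elements first
        u[p] = 1 - sum(u[s] for s in S if p < s)

    # Sigma(u) should be constantly 1: sum_{s >= t} u(s) = 1 for every t.
    assert all(sum(u[s] for s in S if t <= s) == 1 for t in S)
    print(u)   # on a chain, only the top element carries coefficient 1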
We note that if S is a finite semilattice then S \ M (S) is a subsemilattice, in fact an ideal, of S. We also note that S×S is also a semilattice and the partial order there satisfies (s, t) ≤ (p, q) ⇔ s ≤ p and t ≤ q.
The following gives an algorithm for computing the diagonal for ℓ 1 (S).
Lemma 1.2. Let S be a finite semilattice, let u = Σ_{p∈S} u(p)δ_p be the unit of ℓ¹(S), and let D = Σ_{(s,t)∈S×S} d(s,t) δ_s ⊗ δ_t be the diagonal for ℓ¹(S). Then
(a) d(p,p) = u(p) − Σ_{(s,t)>(p,p), st=p} d(s,t) for each p in S;
(b) d(p,q) = −Σ_{t>q} d(p,t) and d(q,p) = −Σ_{t>q} d(t,p) whenever q ≱ p; and
(c) the coefficients d(p,q) are integers, determined inductively by (a) and (b) from the base values d(p,p) = 1 and d(p,q) = 0 for distinct maximal p, q.
Proof. The identity (0.3), m(D) = u, gives Σ_{st=p} d(s,t) = u(p) upon inspecting the coefficient of δ_p, from which we obtain (a). In particular, if p ∈ M(S) we obtain an empty sum in (a) and find d(p,p) = 1. The equation (0.4) implies that δ_q·D = D·δ_q, and hence we obtain Σ_{(s,t)} d(s,t) δ_{qs} ⊗ δ_t = Σ_{(s,t)} d(s,t) δ_s ⊗ δ_{tq}. (1.8) If q ≱ p then there is no s in S for which qs = p. Hence examining the coefficient of δ_p ⊗ δ_q and δ_q ⊗ δ_p, respectively, in (1.8), yields (b). We can see for any pair (p,q) with p ≠ q, so that p ≰ q or q ≰ p, that d(p,q) is determined by coefficients (s,t) > (p,q). Hence by induction, using the coefficients d(p,p) and d(p,q) for distinct maximal p, q as a base, we obtain (c). For example, if q ∈ M(S \ M(S)), then (b) implies for every p > q that d(p,q) = −Σ_{t∈M(S), t>q} d(p,t). It is clear, from the above induction, that each d(p,q) is an integer.
Let us see how Lemma 1.2 allows us to compute the diagonal D of ℓ¹(S) for a finite semilattice S.
Step 1. Form the decreasing sequence of ideals S_0 = S and S_{k+1} = S_k \ M(S_k) for k = 0, 1, ..., n(S) − 1, (1.10) where n(S) is least with S_{n(S)} = {o}.
Step 2. We label S = {s_0, s_1, ..., s_{|S|−1}} in any manner for which each ideal S_k from (1.10) is an initial segment {s_0, ..., s_{|S_k|−1}} of the list. Thus, the elements of M(S_k) comprise the last part of the list of S_k for k = 1, ..., n(S). In particular, s_0 = o and s_{|S|−1} ∈ M(S).
Step 3. The diagonal D will be represented by an |S|×|S| matrix [D] = [d(s i , s j )]. The lower rightmost corner will be the |M (S)|×|M (S)| identity matrix. We can then proceed, using formulas (b) and (a) from the lemma above, to compute the remaining entries of the lower rightmost (|M (S)| + 1)×(|M (S)| + 1) corner of [D], etc., until we are done.
In order to describe certain semilattices S, we define the semilattice graph Γ(S) = (S, e(S)), where the vertex set is S and the edge set is given by ordered pairs e(S) = {(s, t) ∈ S×S : s > t and there is no r in S for which s > r > t}.
To picture such a graph for a finite semilattice S it is helpful to describe levels. Let S_0, S_1, ..., S_{n(S)} be the sequence of ideals of S given in (1.10). For s in S we let the level of s be given by λ(s) = n(S) − k, where k is the unique index for which s ∈ M(S_k). Note that for the power set semilattice P(T), λ(σ) = |σ|, the cardinality of σ. However, this relation need not hold for a subsemilattice of P(T), as is evident from Example 1.4, below. A 6-element, 4-level semilattice is illustrated in (2.7).
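A small Python sketch of the graph and level computations follows; the formula λ(s) = n(S) − k for s in M(S_k) is the reconstruction used above, chosen to match the stated property λ(σ) = |σ| on P(T). Elements are again frozensets ordered by inclusion.

    def maximal(T):
        """Maximal elements of a set of frozensets under inclusion."""
        return {s for s in T if not any(s < t for t in T)}

    def levels(S):
        """lambda(s) = n(S) - k, where s is maximal in the ideal S_k."""
        stages, T = [], set(S)
        while T:
            M = maximal(T)
            stages.append(M)
            T -= M
        n_S = len(stages) - 1
        return {s: n_S - k for k, M in enumerate(stages) for s in M}

    def hasse_edges(S):
        """e(S): pairs (s, t) with s > t and nothing strictly between."""
        return {(s, t) for s in S for t in S
                if t < s and not any(t < r < s for r in S)}

    # On the power set of {1, 2}, the level of each subset is its size.
    P2 = [frozenset(c) for c in ([], [1], [2], [1, 2])]
    assert all(lvl == len(s) for s, lvl in levels(P2).items())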
We apply this algorithm to obtain the following examples. We denote, for a finite semilattice S, the amenability constant AM(S) = AM(ℓ¹(S)) = ||D||_γ = Σ_{(s,t)∈S×S} |d(s,t)|.
Example 1.3. Let L_n = {0, 1, ..., n} be a "linear" semilattice with operation st = s ∧ t = min{s, t}. Then we obtain a diagonal with (n+1)×(n+1) tridiagonal matrix [D]: d(k,k) = 2 for k < n, d(n,n) = 1, and d(k,k+1) = d(k+1,k) = −1. Hence AM(L_n) = 4n + 1.
Example 1.4. Let F_n = {o, s_1, ..., s_n}, where s_is_j = o for i ≠ j. Then we obtain a diagonal with (n+1)×(n+1) matrix [D] given by d(o,o) = n + 1, d(o,s_i) = d(s_i,o) = −1, d(s_i,s_i) = 1 and d(s_i,s_j) = 0 for i ≠ j. Hence AM(F_n) = 4n + 1.
Example 1.5. Let F¹_n = {o, s_1, ..., s_n, 1} be the unitisation of F_n, above. Then we obtain a diagonal with (n+2)×(n+2) matrix [D] given by d(o,o) = (n−1)² + n + 1, d(o,s_i) = −n, d(o,1) = n − 1, d(s_i,s_i) = 2, d(s_i,s_j) = 1 for i ≠ j, d(s_i,1) = −1 and d(1,1) = 1, with d(s,t) = d(t,s) throughout. Hence AM(F¹_n) = 4n² + 4n + 1. The next example is less direct than the previous ones, so we offer a proof.
Example 1.6. Let P_n = P({1, ..., n}) with multiplication st = s ∩ t. Then the diagonal D has 2ⁿ×2ⁿ matrix which is, up to permutative similarity, the Kronecker product [D(L_1)]^⊗n, where [D(L_1)] is the 2×2 matrix with rows (2, −1) and (−1, 1). Hence AM(P_n) = 5ⁿ.
Proof. P_n is isomorphic to the n-fold product L_1 × ... × L_1, so ℓ¹(P_n) ≅ ℓ¹(L_1) ⊗γ ... ⊗γ ℓ¹(L_1). By Proposition 0.2 the diagonal for ℓ¹(P_n) is the corresponding tensor power of the diagonal for ℓ¹(L_1), whose matrix is the Kronecker power above; hence AM(P_n) = AM(L_1)ⁿ = 5ⁿ.
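Examples 1.3 through 1.6 can be verified computationally. The Python sketch below uses the pullback formula from the proof of Theorem 1.7 below, d(s,t) = Σ_{r≥s, r≥t} µ(s,r)µ(t,r), with the Möbius function computed by its defining recursion; the representation of elements as frozensets is an illustrative choice. Running the assertions confirms AM(L_n) = 4n + 1 and AM(P_n) = 5ⁿ for small n.

    from itertools import chain, combinations, product

    def mobius(S):
        """Möbius function mu(t, s) for t <= s, via the standard recursion
        mu(t, t) = 1 and sum_{t <= r <= s} mu(t, r) = 0 for t < s."""
        mu = {}
        for t in S:
            for s in sorted(S, key=len):          # subsets before supersets
                if not t <= s:
                    continue
                mu[t, s] = 1 if t == s else -sum(
                    mu[t, r] for r in S if t <= r and r <= s and r != s)
        return mu

    def amenability_constant(S):
        """AM(S) = sum |d(s, t)| with d(s,t) = sum_{r>=s,t} mu(s,r)mu(t,r)."""
        mu = mobius(S)
        d = {(s, t): sum(mu[s, r] * mu[t, r]
                         for r in S if s <= r and t <= r)
             for s, t in product(S, S)}
        return sum(abs(v) for v in d.values())

    def L(n):   # the linear semilattice L_n as a chain of nested sets
        return [frozenset(range(k)) for k in range(n + 1)]

    def P(n):   # the power-set semilattice P_n
        base = range(1, n + 1)
        return [frozenset(c) for c in chain.from_iterable(
            combinations(base, k) for k in range(n + 1))]

    for n in range(5):
        assert amenability_constant(L(n)) == 4 * n + 1   # Example 1.3
    for n in range(4):
        assert amenability_constant(P(n)) == 5 ** n      # Example 1.6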
We have the following summary result.
Theorem 1.7. If S is a finite semilattice, then AM(S) = 4n + 1 for some integer n ≥ 0. All such numbers are achieved.
Proof. We first establish that for p in S, d(p,p) ≥ 0. This does not seem obvious from Lemma 1.2. We use a calculation from [1, §3] which exploits the Möbius function. We have that Σ : ℓ¹(S) → ℓ∞(S) is invertible and Σ_{r∈S} χ_r ⊗ χ_r is the diagonal for ℓ∞(S). Thus, using (1.3), we have that d(p,q) = Σ_{r≥p, r≥q} µ(p,r)µ(q,r), and in particular d(p,p) = Σ_{r≥p} µ(p,r)² ≥ µ(p,p)² = 1 > 0. Moreover, applying in each tensor leg the character f ↦ Σ_{s∈S} f(s) shows that Σ_{(s,t)∈S×S} d(s,t) = 1. Since the coefficients are integers with d(s,t) = d(t,s), and the diagonal coefficients d(p,p) are non-negative, AM(S) − 1 = Σ_{(s,t)} |d(s,t)| − Σ_{(s,t)} d(s,t) = 2 Σ_{(s,t): d(s,t)<0} |d(s,t)| is divisible by 4, since the strictly negative coefficients occur off the diagonal in symmetric pairs. Hence AM(S) = 4n + 1 for some integer n ≥ 0. Finally, Examples 1.3 and 1.4 provide us with semilattices admitting amenability constants 4n + 1, for each integer n ≥ 0.
We now gain a crude lower bound for AM(S), which we will require for Proposition 2.4.
Corollary 1.8. If S is a finite semilattice, then AM(S) ≥ 2|S| − 1.
Proof. By the proof of Theorem 1.7, d(p,p) ≥ 1 for each p in S while Σ_{(s,t)} d(s,t) = 1, so Σ_{s≠t} d(s,t) ≤ 1 − |S| and hence Σ_{s≠t} |d(s,t)| ≥ |S| − 1. Thus AM(S) ≥ |S| + |S| − 1 = 2|S| − 1.
We note that if S is unital, then for p < 1, u(p) = 0, and since d(s,t) = d(t,s) for (s,t) > (p,p), we find from Lemma 1.2 (a) that d(p,p) is even; in particular d(p,p) ≥ 2. The proof above may be adapted to show AM(S) ≥ 4|S| − 3 in this case. We conjecture the estimate AM(S) ≥ 4|S| − 3 holds for any finite semilattice S.
Banach algebras graded over semilattices
A Banach algebra A is graded over a semigroup S if we have closed subspaces A_s for each s in S such that A = ℓ¹-⊕_{s∈S} A_s and A_sA_t ⊆ A_{st} for each s, t in S. We will be interested strictly in the case where S is a finite semilattice. Notice in this case each A_s is a closed subalgebra of A. The next proposition can be proved by a straightforward adaptation of the proof of [14, Proposition 3.1]. However, we offer another proof.
Proposition 2.1. Let S be a finite semilattice and A be graded over S. Then A is amenable if and only if each A s is amenable.
Proof. Suppose A is amenable. If s ∈ S, then A_{≤s} = ℓ¹-⊕_{t≤s} A_t is an ideal in A which is complemented and hence an amenable Banach algebra (see [12, Theorem 2.3.7], for example). It is easy to check that the projection π_s : A_{≤s} → A_s is a quotient homomorphism. Hence it follows that if (D^s_α) is an approximate diagonal for A_{≤s} then π_s ⊗ π_s (D^s_α) is an approximate diagonal for A_s. (This quotient argument is noted in [12, Corollary 2.3.2] and [3, Proposition 2.5].)
Now suppose that each A_s is amenable. Let S_0, S_1, ..., S_{n(S)} be the sequence of ideals from (1.10). For each n = 0, 1, ..., n(S) we set A_n = ℓ¹-⊕_{s∈S_n} A_s and observe, for each n = 0, 1, ..., n(S) − 1, that we have an isometrically isomorphic identification A_n / A_{n+1} ≅ ℓ¹-⊕_{s∈M(S_n)} A_s, where multiplication in the latter is pointwise, i.e. A_sA_t = {0} if s ≠ t in M(S_n). The pointwise algebra ℓ¹-⊕_{s∈M(S_n)} A_s is amenable as each A_s is amenable; if (D_{s,α}) is a bounded approximate diagonal for each A_s, then in ℓ¹-⊕_{(s,t)∈M(S_n)×M(S_n)} A_s ⊗γ A_t the net of elements D_α = Σ_{s∈M(S_n)} D_{s,α} is an approximate diagonal. Thus if A_{n+1} is amenable, then A_n must be too by [12, Theorem 2.3.10]. The algebra A_{n(S)} = A_o is amenable, and hence we may finish by an obvious induction.
In the computations which follow, we will require one of the following linking assumptions which are very natural for our examples.
(LA1) For each s in S there is a bounded approximate identity (u s,α ) α in A s , such that for each t ≤ s and a t ∈ A t we have lim α u s,α a t = a t = lim α a t u s,α .
(LA2) For each s ∈ S there is a contractive character χ_s : A_s → C such that for each s, t in S, a_s ∈ A_s and a_t ∈ A_t, we have χ_{st}(a_sa_t) = χ_s(a_s)χ_t(a_t). Notice that in (LA1), each (u_{s,α})_α is a bounded approximate identity for A_{≤s} = ℓ¹-⊕_{t≤s} A_t. Thus, since A_{≤s} is an essential A_s-module, Cohen's factorisation theorem [6, 32.22] tells us that A_{≤s} = A_sA_{≤s}. There is a right factorisation analogue, and the result also holds on each A_s-module A_t, where t ≤ s. We note that (LA2) is equivalent to having a contractive character χ : A → C such that χ|_{A_s} = χ_s for each s.
We note that many natural Banach algebras, graded over semilattices, which arise in harmonic analysis, satisfy (LA2). However, (LA1) can be used whenever each component algebra A_s admits no characters. For example, if we have a (finite unital) semilattice S, a family of algebras {A_s}_{s∈S} each having no characters, and a system {η^s_t : s, t ∈ S, s ≥ t} of homomorphisms, we can make ℓ¹-⊕_{s∈S} A_s into a Banach algebra by setting a_sa_t = η^s_{st}(a_s)η^t_{st}(a_t) for a_s in A_s and a_t in A_t. (This construction is analogous to that of the Clifford semigroup algebras which will be presented in Section 2.1.)
Theorem 2.2. Let S be a finite semilattice and let A = ℓ¹-⊕_{s∈S} A_s be graded over S, where each A_s is amenable and (LA1) or (LA2) holds. Then AM(A) ≥ AM(S).
Proof. A is amenable by the proposition above. Let us suppose (LA1) holds. For each p in S we let π_p : A → A_p be the contractive projection. We define, for a, b ∈ A, π_p(a ⊗ b) = π_p(a) ⊗ b and (a ⊗ b)π_p = a ⊗ π_p(b). Clearly these actions extend linearly and continuously to define π_pD and Dπ_p for any D ∈ A ⊗γ A.
We let (D_α) be a bounded approximate diagonal for A and D = Σ_{(s,t)∈S×S} d(s,t) δ_s ⊗ δ_t be the unique diagonal for ℓ¹(S). We will prove, for p, q ∈ S and a ∈ A_p, b ∈ A_q, that (⋆) lim_α am(π_pD_απ_q)b = d(p,q)ab.
This requires induction and we will need some preliminary steps. Suppose that q ≠ p in S, say q ≱ p. If v_q ∈ A_q then (0.2) implies that lim_α (v_q·D_α − D_α·v_q) = 0. We note that on an elementary tensor in A ⊗ A we have (2.1). We then have, in analogy to Lemma 1.2 (b), using (2.2) and (2.3), the relation (b_1'). Similarly we see (b_2'). Note that if p, q ∈ M(S) with p ≠ q, then (b_1') takes the form lim_α am(π_pD_απ_q)b = 0 = d(p,q)ab, and a similar version holds for (b_2'). Thus (⋆) holds in this case. Now we show, for p ∈ S and b in A_p, that (2.4) holds, where u = Σ_{p∈S} u(p)δ_p is the unit for ℓ¹(S).
In particular, this holds directly if p ∈ M(S). The equation (2.4) then follows inductively from (2.5) and (1.4), using the case of maximal p as a base. Now we establish an analogue of Lemma 1.2 (a). For an elementary tensor a ⊗ b in A ⊗ A we have (2.6), and it then follows from (2.4) and the preceding display that (a') holds. Note that if p ∈ M(S), then by Proposition 1.1, (a') becomes lim_α am(π_pD_απ_p)b = ab = d(p,p)ab. Thus (⋆) holds in this case.
We now prove (⋆) by induction on pairs (p, q) in S×S with pairs (p, q) ∈ M (S)×M (S) as a base. If p ∈ S, the induction hypothesis is that for a, b ∈ A p lim α am(π s D α π t )b = d(s, t)ab for (s, t) > (p, p) with st = p.
Notice that in the hypothesis above we have A_{≤p} ⊆ A_{≤s} ∩ A_{≤t} and, moreover, either s ≱ t or t ≱ s. But then it follows from (a') and Lemma 1.2 (a) that lim_α am(π_pD_απ_p)b = d(p,p)ab, which establishes (⋆) in this case. Also, if q ≠ p, say q ≱ p, then for a in A_p and b in A_q the induction hypothesis is that lim_α am(π_pD_απ_t)b = d(p,t)ab for t > q.
Combining this with (b_1') and Lemma 1.2 (b) we obtain the equation (⋆) for this case. We can use (b_2') in place of (b_1') above to achieve (⋆) with p and q interchanged. We now use (⋆) to finish the proof. For p, q in S let η(p,q) = sup{ ||ab|| / (||a|| ||b||) : a ∈ A_p, b ∈ A_q, a ≠ 0 ≠ b }.
We note that our assumption (LA1) provides that η(p,q) > 0. For ε > 0 let a_ε in A_p and b_ε in A_q be such that ||a_εb_ε|| ≥ (1 − ε)η(p,q) ||a_ε|| ||b_ε||. Then by (⋆) we have liminf_α ||π_pD_απ_q||_γ ≥ (1 − ε)η(p,q)|d(p,q)|. Thus AM(A) ≥ AM(S), where the relevant equality (†) holds because of the isometric identification A ⊗γ A ≅ ℓ¹-⊕_{(s,t)∈S×S} A_s ⊗γ A_t. It might seem plausible that in the situation of the theorem above, if it were the case that AM(A_s) = 1 for each s, then AM(A) = AM(S). Indeed this phenomenon was observed for S = L_1, in a special case, in [14, Theorem 2.3]. However this does not seem to hold in general, as we shall see below.
2.1. Clifford semigroup algebras. Let S be a semilattice, and for each s in S suppose we have a group G_s, and for each t ≤ s a homomorphism η^s_t : G_s → G_t such that for r ≥ s ≥ t in S we have η^s_s = id_{G_s} and η^s_t ∘ η^r_s = η^r_t. Then G = ⊔_{s∈S} G_s (disjoint union) admits a semigroup operation given by x_sy_t = η^s_{st}(x_s)η^t_{st}(y_t) for x_s in G_s and y_t in G_t. It is straightforward to check that G is a semigroup, and it is called a Clifford semigroup, as such a semigroup was first described in [2]. We note that the set of idempotents E(G) is {e_s}_{s∈S}, where e_s is the neutral element of G_s, and E(G) is a subsemigroup, isomorphic to S. It is clear that ℓ¹(G) = ℓ¹-⊕_{s∈S} ℓ¹(G_s), and that ℓ¹(G) is thus graded over S. Note that ℓ¹(G) satisfies (LA1) by design, and satisfies (LA2) where the augmentation character is used on each ℓ¹(G_s). As with semilattices we will write AM(G) = AM(ℓ¹(G)). Consider the semilattice S = {o, s_1, s_2, s_3, s_4, 1} whose graph is given below.
Using the algorithm following Lemma 1.2, with the semilattice ordered as presented, we obtain a diagonal D whose matrix yields amenability constant AM(S) = 41. Now let n ≥ 2 be an integer and G_n be the Clifford semigroup graded over S for which G_{n,s_3} = {e_3, a, ..., a^{n−1}} and G_{n,s_i} = {e_i} for all i ≠ 3, and all connecting homomorphisms are trivial. Here, {e_3, a, ..., a^{n−1}} is a cyclic group, and each other {e_i} is the trivial group. Then ℓ¹(G_n) is a finite dimensional commutative amenable algebra, and hence admits a unique diagonal by Proposition 0.1. It is straightforward to compute the matrix for the diagonal if we order the semigroup as {o, e_1, e_2, e_3, a, ..., a^{n−1}, e_4, 1}. The constant AM(G_2) = 43 is the smallest amenability constant we can find for a commutative semigroup which is not of the form 4n + 1.
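A minimal Python sketch of the Clifford multiplication x_s y_t = η^s_{st}(x_s) η^t_{st}(y_t) follows. The semilattice, groups, and connecting maps here are illustrative stand-ins (a two-element semilattice with the cyclic group Z_4 over the top element and a trivial group at the bottom), not the six-element semilattice of this example, whose meet table depends on the graph above.

    # Elements of G are pairs (s, g) with g in G_s; the product lands in G_{st}.
    def meet(a, b):
        return a if a == b else "o"          # L_1 = {"o", "s"} with bottom "o"

    order = {"o": 1, "s": 4}                  # |G_o| = 1, |G_s| = 4 (Z_4)
    def eta(frm, to, g):
        return g % order[to]                  # collapses everything when to == "o"

    def mul(x, y):
        (s, g), (t, h) = x, y
        st = meet(s, t)
        return (st, (eta(s, st, g) + eta(t, st, h)) % order[st])

    # Associativity spot-check over all of G (5 elements).
    G = [(s, g) for s in ("o", "s") for g in range(order[s])]
    assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
               for x in G for y in G for z in G)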
2.2. Algebras graded over linear semilattices. We note that if G is a finite Clifford semigroup, graded over a linear semilattice L_n, then AM(G) = AM(L_n) = 4n + 1. Indeed, this holds more generally, by the following proposition.
Proposition 2.3. If A = ℓ¹-⊕_{k∈L_n} A_k is a graded Banach algebra which satisfies (LA1), and A_k is contractible with AM(A_k) = 1 for each k in L_n, then AM(A) = 4n + 1.
Proof. We have from Theorem 2.2 that AM(A) ≥ AM(L_n) = 4n + 1, hence it suffices to exhibit a diagonal D with ||D||_γ ≤ 4n + 1. We will show that such D exists by induction. Write L_n = {0, 1, ..., n}. We identify L_k as an ideal of L_n for each k = 0, 1, ..., n − 1 in the usual way. Let us note that if (u_{k,α}) is a bounded approximate identity for A_k which satisfies (LA1), then the unit e_k of A_k is the limit point of (u_{k,α}), and hence e_k is the unit for A_{≤k} = ℓ¹-⊕_{j∈L_k} A_j. Note, moreover, that the assumption that AM(A_k) = 1 forces ||e_k|| = 1.
Write D_k = Σ_{i=1}^∞ a_i ⊗ b_i for the diagonal of A_{≤k}. Applying the multiplication map, and noting that m(D_k) = e_k, we have m(D_{k+1}) = e_{k+1}, so (0.3) for D_{k+1} is satisfied. Now if a ∈ A_{k+1} then by property (0.4) for D_k it follows that a·D_{k+1} = D_{k+1}·a. Now if a ∈ A_{≤k}, then each aa_i ∈ A_{≤k}, so Σ_{i=1}^∞ [aa_i(e_{k+1} − e_k)] ⊗ (e_{k+1} − e_k) + (aa_i)·D_k·b_i = Σ_{i=1}^∞ D_k·(aa_ib_i) = D_k·a = a·D_k, which, by a symmetric argument, is exactly the value of D_{k+1}·a. Since any a ∈ A_{≤(k+1)} is a sum a = π_{k+1}(a) + (a − π_{k+1}(a)), where π_{k+1}(a) ∈ A_{k+1} and a − π_{k+1}(a) ∈ A_{≤k}, we obtain (0.4) for D_{k+1}.
We note that to generalise our proof of the preceding result to amenable but not contractible Banach algebras, we would require at each stage approximate diagonals D k α such that m(D k α ) = 1, which we do not know how to construct, in general. We point the reader to [13,Theorem 2.3] to see a computation performed on a Banach algebra graded over L 1 .
We note that we can modify the proof of Proposition 2.3 to see that a Banach algebra A = ℓ¹-⊕_{s∈F¹_2} A_s graded over F¹_2, where each A_s is contractible with AM(A_s) = 1, satisfies AM(A) ≤ 45. This is larger than AM(F¹_2) = 25 from Example 1.5. We have found no examples of such Banach algebras A with AM(A) > 25. However, we conjecture, only for semilattices S = L_n, that a Banach algebra A = ℓ¹-⊕_{s∈S} A_s graded over S, where each A_s is amenable with AM(A_s) = 1, satisfies AM(A) = AM(S). It would be interesting to find non-linear unital semilattices over which this conjecture holds.
2.3. On allowable amenability constants. We close by partially answering a question posed in [3]. There it is proved that there is no semigroup G such that 1 < AM(G) < 5. It is further conjectured that there are no semigroups G for which AM(G) ∈ (5, 7) ∪ (7, 9). In [3] there is an example given of a noncommutative semigroup G with AM(G) = 7. For commutative semigroups there is a further gap.
Proposition 2.4. There is no commutative semigroup G for which 5 < AM(ℓ¹(G)) < 9.
Proof. Since G is commutative, it is proved in [5, Theorem 2.7] that if ℓ¹(G) is amenable, then G is a Clifford semigroup, whose component groups are abelian, graded over a finite semilattice S. If AM(G) < 9, then by Theorem 2.2, AM(S) < 9, and hence by Theorem 1.7 and the corollary which follows it we have 2|S| − 1 ≤ AM(S) ≤ 5, so |S| ≤ 3. Clearly, if |S| = 1, S = L_0, and if |S| = 2, S = L_1. If |S| = 3 then S is either unital, in which case S = L_2, or S has 2 maximal elements, in which case S = F_2; in either case AM(S) = 9, contradicting our assumptions. Thus S = L_0 or L_1. But it then follows by a straightforward adaptation of [13, Theorem 2.3] that AM(G) = 1 or 5. In particular AM(G) ≤ 5.
"year": 2009,
"sha1": "08c8b1a0717013a146f3d86956344229805df215",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0705.4279",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9ae383455286b70340324af91dd7c9dea32863e2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
240417438 | pes2o/s2orc | v3-fos-license | Efficiency, Adequacy, and Uniformity for Normal Distribution of the Application Depths
The research aims to produce a set of figures, for the case of normally distributed application depths, that represent the relationships among the parameters used to describe the performance characteristics of an irrigation system. Using 6010 values for each of the dimensionless application depth, the dimensionless cumulative area, and the coefficient of variation, and with the help of the SPSS statistical program, an empirical equation was obtained to estimate the dimensionless application depth as a function of the dimensionless cumulative area and the coefficient of variation. Five figures were developed containing the relationships between the dimensionless net depth of irrigation, application adequacy, coefficient of variation or uniformity coefficient, application efficiency or deep percolation losses, and storage efficiency or deficit coefficient. By knowing two of these parameters, the rest of them can easily be found from these figures.
INTRODUCTION
The distributions of the application depths over the irrigated area are commonly taken to be either linear or normal [1]. In sprinkler irrigation, the distribution of the application depths usually coincides with the normal distribution and corresponds well to its average value when the Christiansen uniformity coefficient for irrigation depths is greater than 70% [2,3]. At lower uniformity coefficients, it is closer to a linear than to a normal distribution [4]. In drip irrigation, the system can be considered to have a normal distribution due to its high distribution uniformity [5]. For surface irrigation, the distribution is closer to linear than to normal [1]. [6] used a statistical method to divide infiltration depths into net irrigation and deep percolation based on a normal distribution, and mentioned that this method is able to find the average irrigation depth required to obtain full yields for a specific part of the field.
A set of equations was derived to express both the storage efficiency and the deficit coefficient as functions of the application adequacy and the application uniformity, and as functions of the application adequacy and the deep percolation losses [7]. The efficiency, adequacy, and uniformity of irrigation are parameters for designing and evaluating the performance of the irrigation system. When highly efficient irrigation is combined with good water distribution uniformity, this reflects positively on water use efficiency and productivity [8]. There is no adequate and efficient irrigation without good uniformity [9]. The efficiency of any irrigation system is related to the uniformity of water distribution and is affected by any change that occurs to it [10]. In addition, managing the irrigation system requires finding efficient irrigation and distribution [11].
In the normal distribution of infiltration depths, published relationships between the efficiency, uniformity, and adequacy of irrigation and deep percolation are lacking, and these parameters are important in the design and evaluation of irrigation projects. Therefore, the research aims to produce a set of figures for the case of normally distributed application depths. The figures represent relationships among the parameters used to describe the performance characteristics of the irrigation system: the application adequacy, the coefficient of variation or uniformity coefficient, the application efficiency or the deep percolation losses, and the storage efficiency or the deficit coefficient.
DISTRIBUTION OF DIMENSIONLESS APPLICATION DEPTHS
By adopting the standard normal distribution and 601 values of the standard variable Z from -3 to 0 to +3 with an interval of 0.01, the dimensionless cumulative area Q(Z), with values from 0 to 0.5 to 1, was found from the following cumulative distribution function [12]: Q(Z) = (1/sqrt(2*pi)) * integral from -infinity to Z of exp(-z^2/2) dz (1). The dimensionless application depth corresponding to each Z value is Y = 1 + Z*CV (3), where Y is the dimensionless application depth and CV is the coefficient of variation, which is equal to the quotient of the standard deviation divided by the mean ((s.d.)/Y-bar). Using the 601 standard Z values from -3 to 0 to +3 with an interval of 0.01, the dimensionless application depth Y was found from equation (3) at different values of CV from 0.05 to 0.5 with an interval of 0.05; thus, we have 6010 values for each of the variables Y, Q, and CV. Using the SPSS statistical program, an empirical equation (4) was derived to estimate the dimensionless application depth Y as a function of the dimensionless cumulative area Q and the coefficient of variation CV, with determination coefficient R^2 = 0.999 and RMSE = 0.0475. Figure (2) shows the distribution of the estimated dimensionless application depths from equation (4) with the dimensionless cumulative area for different variation coefficients.
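A short Python sketch of this data generation follows; the linear form Y = 1 + Z*CV is the reconstruction of equation (3) used above, and scipy's norm.cdf plays the role of equation (1).

    import numpy as np
    from scipy.stats import norm

    # 601 standard-normal values Z in [-3, 3] at interval 0.01, the
    # cumulative area Q = Phi(Z), and the dimensionless depth Y = 1 + CV*Z
    # at ten CV levels: 10 x 601 = 6010 (Y, Q, CV) triples.
    Z = np.linspace(-3.0, 3.0, 601)
    Q = norm.cdf(Z)                       # equation (1): ~0.0013 -> 0.5 -> ~0.9987
    CVs = np.arange(0.05, 0.501, 0.05)    # CV = 0.05, 0.10, ..., 0.50
    Y = 1.0 + np.outer(CVs, Z)            # assumed reconstruction of equation (3)
    print(Y.shape, Q[0], Q[300], Q[-1])   # (10, 601)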
IRRIGATION SYSTEM PERFORMANCE CHARACTERISTICS
There are many parameters that are used to characterize the performance of an irrigation system [1, 7, and 13], in addition to those related to monitoring the infiltration depths within the root zone of a particular crop during the growing season, resulting from the addition of irrigation water and losses by water consumption and deep percolation. In Figure (3), which shows the distribution of the dimensionless application depths with the dimensionless cumulative area, area a1 represents the irrigation water stored in the root zone, area a2 represents irrigation water that percolates below the root zone, and area a3 represents the deficit of irrigation water in the root zone. The parameters that characterize the performance of the irrigation system are: First: Application adequacy. It is the ratio of the area receiving an application depth equal to or greater than the net depth of irrigation NDI to the total area; it represents A in Figure (3). The dimensionless net depth of irrigation NDI (the irrigation water required in the root zone) is obtained by setting Q equal to A in equation (4): NDI = Y(Q = A, CV) (5). Second: Application efficiency. It gives a general indication of how well an irrigation system is performing [14]; it represents the ratio of irrigation water stored within the root zone to the amount of water given. Accordingly, application efficiency E is the following: E = a1/(a1 + a2) (6), i.e. E = [NDI*A + integral from A to 1 of Y dQ] / [integral from 0 to 1 of Y dQ] (7), where the denominator, the mean dimensionless applied depth, equals 1.
RESULTS AND DISCUSSION
Based on equation (5), the dimensionless net depth of irrigation was calculated for several values of the coefficient of variation from 0 to 0.625 (or, equivalently, for the uniformity coefficient according to equation (9)) for different levels of application adequacy from 0.5 to 1. Figure (4) shows the relationship between the dimensionless net depth of irrigation and the coefficient of variation or uniformity coefficient for different application adequacies.
The application efficiency was also found from equation (7), or equivalently from the deep percolation losses according to equation (8), for several values of the coefficient of variation from 0 to 0.625 (or, equivalently, for the uniformity coefficient) for different levels of application adequacy from 0 to 1. Figure (5) shows the relationship between the application efficiency or deep percolation losses and the coefficient of variation or uniformity coefficient at different levels of application adequacy.
The storage efficiency was found from equation (12), or equivalently from the deficit coefficient according to equation (14), for several values of the coefficient of variation from 0 to 0.625 (or, equivalently, for the uniformity coefficient) for different levels of application adequacy from 0 to 1. Figure (6) shows the relationship between the storage efficiency or deficit coefficient and the coefficient of variation or uniformity coefficient at different levels of application adequacy. The storage efficiency was also found from equation (13) for several values of application efficiency from 0 to 1 (or, equivalently, for deep percolation losses according to equation (8)) for different levels of the dimensionless net depth of irrigation from 0 to 1. Figure (7) shows the relationship between storage efficiency or deficit coefficient and application efficiency or deep percolation losses for different values of dimensionless net depth of irrigation. The data obtained from equation (7) for application efficiency and the data obtained from equation (12) for storage efficiency were used to clarify the relationship between the storage efficiency or the deficit coefficient and the application efficiency or deep percolation losses at different levels of application adequacy in Figure (8).
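The relationships in Figures (4)-(8) can be reproduced numerically. The Python sketch below is built on reconstructed closed forms and clearly labelled assumptions: depths are ranked from high to low so that NDI = 1 + CV*Phi^{-1}(1 - A); E = NDI*A + integral from A to 1 of Y dQ with unit mean applied depth (equations (6)-(7)); and DPL = 1 - E, UC = 1 - 0.798*CV, Es = E/NDI, and Cd = 1 - Es as stand-ins for the lost equations (8), (9), (13), and (14).

    import numpy as np
    from scipy.stats import norm

    def performance(A, CV):
        """All five parameter groups for adequacy 0 < A < 1 and a given CV.
        Uses the normal-distribution identity
        integral_A^1 Y dQ = (1 - A) - CV * pdf(ppf(1 - A))."""
        z_a = norm.ppf(1.0 - A)
        ndi = 1.0 + CV * z_a                           # dimensionless net depth
        e = ndi * A + (1.0 - A) - CV * norm.pdf(z_a)   # application efficiency
        return {"NDI": ndi, "UC": 1.0 - 0.798 * CV,
                "E": e, "DPL": 1.0 - e,
                "Es": e / ndi, "Cd": 1.0 - e / ndi}

    # Example lookup in the spirit of Table (1): adequacy 0.8, CV 0.2.
    print({k: round(v, 3) for k, v in performance(0.8, 0.2).items()})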
CONCLUSION
There are five groups of parameters used to characterize the performance of the irrigation system: the dimensionless net depth of irrigation, the application adequacy, the coefficient of variation or the uniformity coefficient, the application efficiency or the deep percolation losses, and the storage efficiency or deficit coefficient. From knowing two of them, the rest of the parameters can be determined easily from Figures (4)-(8). An example of this is presented in Table (1) and indicated on the Figures, starting from knowledge of the dimensionless net depth of irrigation and the coefficient of variation or coefficient of uniformity.
"year": 2021,
"sha1": "050447948acea54f96ee4127af46acff1ffbfbf0",
"oa_license": "CCBY",
"oa_url": "https://rengj.mosuljournals.com/article_169434_86b08d78b4b3b0bad8b013ccb95d1890.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bd97ca5dba9ff34cd22e3e685c27da35d4ff1424",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
239129775 | pes2o/s2orc | v3-fos-license | Altered Hypoxia-Induced and Heat Shock Protein Immunostaining in Secondary Hair Follicles Associated with Changes in Altitude and Temperature in Tibetan Cashmere Goats
Simple Summary Cashmere goats in Tibet are adapted to a high altitude, cold climate, high solar radiation, and hypoxia. The aim of the present study was to compare the morphology of the secondary hair follicles and immunostaining of hair follicle regulatory proteins in Tibetan cashmere goats from a high altitude and low temperature (Rikaze) to goats from a lower altitude and comparatively warm temperature (Huan). We conclude that, at the same time of the year, the secondary hair follicles were at different development stages. HIF-1a protein immunostaining in the inner root sheath (IRS) and hair shaft (HS) was higher than the immunostaining in the outer root sheath (ORS). In contrast, immunostaining for HIF-2a protein in the ORS and IRS was higher than that present in the HS. Immunostaining for HIF-3a protein was higher in the ORS than the IRS while HOXC13 protein immunostaining was higher in the ORS than the IRS and HS. Immunostaining in secondary hair follicles for HIF-1a, HIF-2a, and HSP27 protein in the cashmere goats living in Rikaze was significantly higher than that in the secondary hair follicles of cashmere goats from Huan. In contrast, HOXC13 protein immunostaining was significantly higher in cashmere goats from Huan than from Rikaze. These results are useful in understanding how altitude and temperature influence secondary hair follicle development. Abstract This experiment compared secondary hair follicles (SFs) in Tibetan cashmere goats from two different steppes that were at different altitudes and had different temperatures. Twenty-four 2-year-old goats were studied. Twelve goats were from Rikaze in Tibet which is at an altitude of above 5000 m with an average temperature of 0 °C. The other 12 studied goats were from Huan County of Gansu Province which is around 2000 m above sea level with an average temperature of 9.2 °C. The structural features of SFs were assessed using light microscopy and transmission electron microscopy. The presence of HIF-1a, HIF-2a, HIF-3a, HSP27, and HOXC13 proteins was studied using immunohistochemistry and immunofluorescence. Light and electron microscopy revealed that the SFs of the Tibetan cashmere goats that lived in the Rikaze Steppe were in the proanagen stage in May. However, the SFs of the goats from the lower warmer Huan County were in the anagen stage at the same time. Immunohistochemistry revealed intense immunostaining for HIF-1a protein in the inner root sheath (IRS) and hair shaft (HS); immunostaining against HIF-2a in the outer root sheath (ORS) and IRS; HIF-3a protein immunostaining in the ORS; HSP27 immunostaining in the ORS, IRS, and HS; and HOXC13 immunostaining in the ORS and HS. HIF-1a protein expression in the IRS and HS was higher than the expression in the ORS (p < 0.05) while the expression of HIF-2a protein was higher in the ORS and IRS than the HS (p < 0.05). The expression of HIF-3a protein was higher in the ORS than in the IRS (p < 0.05). Expression of HOXC13 protein was higher in the ORS than in the IRS and HS (p < 0.05). Immunostaining of HIF-1a, HIF-2a, and HSP27 protein was significantly higher in SFs from cashmere goats from Rikaze than in goats from Huan (p < 0.05). In contrast, HOXC13 protein immunostaining was significantly higher in cashmere goats from Huan than from Rikaze (p < 0.05). Significant differences were observed in the SFs of cashmere goats from two locations that differ in altitude and temperature.
This suggests the differences in the secondary hair follicles could be due to the hypoxia and lower temperatures experienced by the goats in Rikaze. These results are useful in understanding how altitude and temperature influence SF development. Hair produced by the SFs is used for down fiber. Therefore, understanding the factors that influence SF development will allow the production and harvest of these valuable fibers to be maximized.
Introduction
China produces around half of all cashmere fiber in the world [1]. This fiber consists of the hair shafts from the secondary hair follicles (SFs) of cashmere goats. Cashmere goats are farmed across mainland China and live in a diverse range of conditions that range from warm sea-level pasture to cold high-altitude (over 5000 m above sea level) farms on the Tibetan Plateau [2].
Due to the value of the cashmere fibers, cashmere goats are economically important animals for farmers on the Tibetan Plateau. Cashmere goats in Tibet have adapted to high altitudes, extreme cold, and hypoxia [3][4][5][6][7][8][9][10][11]. Goats from this region are famous for producing high-quality down fiber, suggesting the high altitude and cold contribute to the high-quality fiber from these animals.
Hypoxia-inducible factors (HIFs), including HIF-1a, HIF-2a, and HIF-3a, are important in protecting the body against the low-oxygen environment that is present at high altitude [12][13][14]. The expression of HIFs in animals living at high altitude has been reported in many tissues [7,[15][16][17]. However, the expression of HIFs in secondary hair follicles has not previously been investigated.
Heat shock protein 27 (HSP27) regulates actin polymerization [18] and has been found to be expressed more in anagen hair follicles than in telogen hair follicles [19] and expression of this protein is also correlated to epidermal differentiation in human skin [20]. In addition, in Longdong cashmere goats, HSP27 immunostaining in secondary hair follicles was found to be different in extensively fed animals compared to those that were in an intensively fed group [21].
HOXC13 is also involved in hair follicle formation and growth [22,23] and this protein has been shown to be expressed in the epidermis and outer root sheath (ORS) of SFs and correlated with cashmere goat skin thickness [24,25].
The purpose of the presently reported experiment is to compare the SFs of cashmere goats from a high altitude and cold climate to the follicles of cashmere goats from a lower altitude and more moderate temperatures. To do this, goats from the Rikaze Steppe (an average altitude of over 4000 m with a yearly average temperature of 0 • C) were compared to goats from the Huan County Steppe (an average altitude of 2000 m and a yearly average temperature of 9.2 • C). The presence of HIF-1a, HIF-2a, HIF-3a, HSP27, and HOXC13 in the SFs in samples taken from Tibetan cashmere goats from Rikaze was compared to the presence of these proteins in SFs from goats from Huan. The presence of differences would enable greater understanding of how goats respond to hypoxia and changes in temperature and may inform methods to improve Tibetan cashmere goat down fiber production. To the authors' knowledge, differences in the SFs of cashmere goats from different environments have not been previously studied.
Tibet Cashmere Goats in Rikaze Steppe and Huan County Steppe
Twenty-four two-year-old non-pregnant female cashmere goats were studied. Twelve goats were from the Rikaze Steppe (29°15′0″N, 88°52′59″E) of Tibet while the others were from Huan County (36°35′59.99″N, 107°05′60.00″E) in Gansu. The goats from Huan County had been born in the same herd as the goats from Rikaze, but had been transported to Huan when they were 1 year old. All goats were grazed on pasture.
Skin Sample Collection
A sample of skin was removed from the same area on all goats. The area sampled was from the dorsum at the level of L2. All samples were taken over 3 days in May 2020 immediately after slaughter at a commercial abattoir that conformed with all regional regulations regarding the slaughter process. Collected samples were preserved in formalin for light microscopy and in modified Karnovsky's fixative (3% glutaraldehyde (v/v), 2% formaldehyde (w/v) in 0.1 M phosphate buffer (pH 7.2)) for samples that were to be used for electron microscopy.
Wax Section and Ultrathin Section Analysis
Hematoxylin and eosin sections of the skin samples were prepared following standard methods. Samples were prepared and examined by transmission electron microscopy (TEM) as previously described [25].
Immunohistochemical and Immunofluorescence Analysis
For immunohistochemistry, 5 µm tissue sections were mounted on charged slides and antigen retrieval was performed in 10 mM citrate buffer (pH 6.0) for 30 min in a microwave oven. After washing in PBS, the sections were preincubated for 60 min at room temperature in 5% goat serum, and incubated overnight at 4 °C with rabbit primary antibodies against HIF-1a, HIF-2a, HIF-3a, and HOXC13 (all antibodies diluted 1:200; Bioss, China; bs-20398R, bs-1447R, bs-5989R, and bs-13599R) and a mouse primary antibody against HSP27 (ab-2790, Abcam, Hong Kong). After washing five times for 5 min with PBS, the sections were incubated with rabbit and mouse Histostain-Plus Kits (SP-0022 and SP-0024, Bioss, Beijing, China), respectively, and antibodies were visualized using diaminobenzidine (DAB, Bioss, Beijing, China) with a Meyer's hematoxylin counterstain. To detect immunofluorescence, 5 µm tissue sections were rehydrated and incubated in PBS with the rabbit antibodies against HIF-1a, HIF-2a, HIF-3a, and HOXC13 or the mouse antibody against HSP27 (as described for immunohistochemistry except diluted 1:100, overnight at 4 °C), then the fluorescent secondary antibodies (donkey anti-mouse, Abcam, Hong Kong, GR112688-1; donkey anti-rabbit, Abcam, Hong Kong, GR115771-1) were added at a dilution of 1:1000 in PBS. The primary antibody was replaced with PBS for negative controls. The stained slides were examined and photographed using CaseViewer.
Measurements and Statistical Analysis
To quantify the integrated optical density (IOD) of immunoreactivity of sections of secondary follicles, images of immunostained sections were taken and the IOD was measured using Image Pro-Plus 6.0 software (Media Cybernetics, Inc., Bethesda, MD, USA) as previously described [15]. The data were expressed as the mean ± SD and were analyzed by one-way ANOVA using SPSS software (version 17.0). A p-value of <0.05 was considered statistically significant. An independent sample t-test was used to compare the optical densities of immunostaining from goats from the Rikaze Steppe with goats from Huan County. A p-value of <0.05 was considered statistically significant.
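A minimal Python sketch of the statistics described above, using scipy in place of SPSS; the IOD arrays are hypothetical stand-ins for the Image-Pro Plus measurements (n = 12 goats per group).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical IOD values for the three follicle layers of one group.
    iod_ors, iod_irs, iod_hs = (rng.normal(m, 2.0, 12) for m in (20, 28, 27))

    # One-way ANOVA across the three layers (significance at alpha = 0.05).
    f_stat, p_anova = stats.f_oneway(iod_ors, iod_irs, iod_hs)

    # Independent-sample t-test: Rikaze vs Huan total immunostaining.
    rikaze, huan = rng.normal(30, 3.0, 12), rng.normal(24, 3.0, 12)
    t_stat, p_t = stats.ttest_ind(rikaze, huan)
    print(f"ANOVA p = {p_anova:.4f}; t-test p = {p_t:.4f}")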
Morphological Analysis of Secondary Hair Follicle and Histomorphological Evaluations
At the time that the samples were taken (May), SFs from cashmere goats in Rikaze were in the proanagen phase while these hair follicles from goats in Huan County were in the anagen phase of growth ( Figure 1A-C). Using light microscopy, hair follicles in the anagen phase were characterized by a high density of SFs ( Figure 1D,E), a rounded morphology of the hair bulbs ( Figure 1F), and the presence of inner root sheaths (IRSs) and ORSs. Hair bulbs showed complete division; the cells of the dermal papilla had high-density pear-shaped granules ( Figure 1F). Using TEM, the proanagen phase was identified by the hair follicle rudiments which were visible and surrounded by the cells of the ORS (Figure 2A,B). The elongation zone contained undifferentiated keratinocytes ( Figure 2C,D). TEM revealed that the IRS consisted of three concentric layers which is the typical ultrastructure of anagen secondary hair follicle ( Figure 2E-H), including Huxley's layer and Henle's layer, which was the outermost layer ( Figure 2F,G).
Immunohistochemical Detection of HIF-1a, HIF-2a, HIF-3a, HSP27, and HOXC13 in Secondary Hair Follicles
Immunostaining for HIF-1a, HIF-2a, HIF-3a, HSP27, and HOXC13 was present in the ORS, IRS, and HS in samples from both groups of cashmere goats. Intense immunostaining for HIF-1a protein was present in the IRS and HS ( Figure 3A,B), HIF-2a immunostaining was visible in the ORS and IRS ( Figure 3C,D), while HIF-3a immunostaining was visible in the ORS ( Figure 3E,F). HSP27 expression was present in the ORS, IRS, and HS ( Figure 3G,H). However, HOXC13 expression was only visible in the ORS and HS ( Figure 3I,J). HIF-1a immunofluorescence was present in the IRS and HS ( Figure 4A) while HIF-2a was present in the ORS and IRS ( Figure 4B). HIF-3a was found in the ORS ( Figure 4C), and HSP27 was found in the ORS, IRS, and HS ( Figure 4D). The immunofluorescence of HOXC13 was found in the ORS and HS ( Figure 4E).
(p < 0.05) in the IOD values between goats from Rikaze and goats from Huan in that protein in that layer of the secondary hair follicle.
When the total immunostaining throughout the SFs in goats from Rikaze was compared to that in the SFs of goats from Huan, total immunostaining for HIF-1a, HIF-2a, and HSP27 protein was higher in hair follicles from Rikaze goats than Huan goats (p < 0.05; Figure 6). However, there was less HOXC13 protein immunostaining in the hair follicles of goats living in Rikaze compared to those of goats from Huan County (p < 0.05). Figure 6. Total immunostaining of HIF-1a, HIF-2a, HIF-3a, HSP27, and HOXC13 in the secondary hair follicles of Tibetan cashmere goats. Rikaze indicates that the goat was from Rikaze Province while Huan indicates the goat was from Huan County. ** Indicates a significant difference in which the p-value was between 0.01 and 0.05 while *** indicates a significant difference with a p-value < 0.01.
Discussion
In the presently described experiment, all twenty-four Tibetan cashmere goats were born in Rikaze and lived at high altitude in a cold climate for their first year of life. Twelve of these goats were then moved to a lower altitude and a warmer climate for a year before skin samples were taken. This allowed evaluation of the differences in the SFs between the two groups of goats and, therefore, of some of the adaptations made by the goats in response to the lower altitude and warmer climate. Evaluation of the SFs revealed significant differences between Tibetan cashmere goats from the different steppes. The SFs in goats from Huan County were in anagen, suggesting the hair follicles developed faster than the hair follicles of goats from the higher, colder Rikaze Province. This suggests hypoxia and low environmental temperature could delay entry of the SF into the anagen stage. These results support previous studies that suggested growth of SFs is influenced by the sunshine period and environmental temperature [21,26]. They also suggest that a warm environment and lower altitude may promote the growth of cashmere down fiber.
The heat shock proteins influence cell growth and differentiation, and HSP expression increases in response to heat, oxidative stress, or glucose deprivation [27]. HSP27 has previously been shown to be expressed in the epidermis [28][29][30], with levels of expression of this protein increasing with increased epidermal differentiation and trichilemmal keratinization [20]. A study of cashmere goats revealed that HSP27 protein expression may be influenced by the level of nutrition and, therefore, this protein may allow adaptation to adverse environmental conditions [21]. In the present study, it was shown that HSP27 was present in all layers of the SFs, and the expression of HSP27 in goats from Rikaze was higher than in those from Huan County. The harsher environment of Rikaze compared with Huan appeared to increase expression of this protein, suggesting that HSP27 expression in SFs may be influenced by environmental stress. HOXC13 expression has been shown to be associated with goat hair follicle formation and hair growth [21,22], with protein expression correlated with cashmere goat skin thickness [24]. In addition, the expression of HOXC13 appeared to be correlated with the activity of the SF, suggesting a role of this protein in stimulating new hair follicle development [21]. In the present research, it was found that HOXC13 was mainly present in the ORS, and its expression was higher in goats from the lower, warmer region than in goats from the harsher environment. These results indicate that HOXC13 expression is not induced by cold or hypoxia, although this protein also appears important in regulating development of the SFs.
In the present experiment, HIF-1a, HIF-2a, and HIF-3a immunostaining was present in all layers of the SFs. However, the immunostaining of HIF-1a was highest in the IRS and HS, HIF-2a was highest in the ORS and IRS, and HIF-3a was highest in the ORS. It is speculated that HIF-1a has a key role in hair shaft growth, while HIF-2a and HIF-3a influence secondary hair follicle reconstruction. Comparing the HIF immunostaining between the two groups of cashmere goats showed that HIF-1a and HIF-2a were higher in goats living in the harsher Rikaze Steppe compared to goats from Huan County. This suggests that HIF-1a and HIF-2a may be involved in the response of SF development to harsh environmental stress. The immunostaining of HIF-1a and HIF-2a proteins in SFs showed the same trend between the two groups, which suggests the two isoforms may be regulated by similar factors. However, the immunostaining of HIF-3a in the goats that lived in Rikaze was lower than in goats from Huan County. This may suggest that, although HIF-3a influences the development of the SFs, expression of this protein is not increased by a harsh environment. These results are consistent with a previous study that also found differences in HIF-3a expression due to altitude-induced hypoxia [16]. The Tibetan cashmere goats in Huan County were in the anagen stage, which is the phase in which the greatest development of SFs occurs. As the immunostaining of HIF-3a and HOXC13 was higher in goats from Huan County, it is possible that HIF-3a and HOXC13 may influence the development of SFs within the anagen phase.
Conclusions
The results of this study suggest: (1) Hypoxia and environmental temperature may influence SF development and entry of the follicle into the anagen phase.
(2) HSP27, HIF-1a, HIF-2a, and HIF-3a have a role in the response of the hair follicle to adverse environmental conditions. (3) HOXC13 expression was not increased in animals subjected to a harsh environment.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-10-19T15:19:23.431Z | 2021-09-25T00:00:00.000 | {
"year": 2021,
"sha1": "88be6767c9e3363182b317b297b381f7381dd293",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/10/2798/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c69787d91a7a297e450148220a02821e27f5d16a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252362846 | pes2o/s2orc | v3-fos-license | Determination of Tropifexor in Beagle Dog Plasma by UPLC-MS/MS and Its Application in Pharmacokinetics
The primary objective of this study was to develop and validate an efficient and accurate ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method to detect tropifexor plasma concentrations in beagle dogs and to study its pharmacokinetic profile. Chromatographic separation of tropifexor and oprozomib (internal standard, ISTD) on the column was achieved with 0.1% formic acid aqueous solution-acetonitrile as the mobile phase, after rapid protein precipitation and extraction with acetonitrile. A Xevo TQ-S triple quadrupole tandem mass spectrometer operated in the positive ion mode under selective reaction monitoring (SRM) was used for quantification. The mass transitions of tropifexor and oprozomib (ISTD) were m/z 604.08 ⟶ 228.03 and m/z 533.18 ⟶ 199.01, respectively. The calibration curves for tropifexor displayed good linearity over the range of 1.0 to 200 ng/mL, and the lower limit of quantification (LLOQ) was 1.0 ng/mL. Intraday and interday accuracy for the analyte ranged from −4.86% to 1.16%, with a precision of <7.31%. The recoveries of the analytes were >88.13% and were free of significant matrix effects. The stability met the requirements for the quantification of plasma samples under various conditions. Finally, the pharmacokinetic profile of tropifexor in beagle dog plasma following oral administration of 0.33 mg/kg tropifexor was determined using this method.
Introduction
The term nonalcoholic fatty liver disease (NAFLD) refers to a syndrome of pathology dominated by excessive fat deposition in the hepatocytes due to causes other than alcohol consumption and other well-defined liver injuries. NAFLD includes steatosis simplex and nonalcoholic steatohepatitis (NASH). Steatosis simplex is the accumulation of triglycerides. In contrast, NASH, as a prolonged disease, will likely progress to hepatic fibrosis, hepatocirrhosis, and eventually hepatocellular cancer if it continues to progress. Currently, the proportion of NASH in NAFLD is 10%-30% [1], which is 10% more than the figure in the 2010 China NAFLD diagnosis and treatment guidelines [2]. The global prevalence of NAFLD is increasing, from 15% in 2005 to 25% in 2010. Similarly, among patients with NAFLD, the incidence of progression to NASH has almost doubled [3]. Therefore, the prevention and treatment of NASH have become a research focus, as NASH is an important stage in the progression from steatosis simplex to hepatic fibrosis, hepatic cirrhosis, and liver cancer. The pathogenesis of NASH is currently considered unclear, and the "multiple parallel strikes" theory suggests that NASH is the result of parallel interactions between multiple risk factors, multiple cell types, and multiple tissues and organs. Insulin resistance, lipotoxicity, oxidative stress, endoplasmic reticulum stress, systemic low-grade inflammatory response, immune, cytokine, or mitochondrial function alterations, and apoptosis are the pathogenic pathways involved in the development and progression of NASH [4]. For the treatment of NAFLD, basic therapies such as weight reduction, diet control, and exercise should be recommended first. Due to poor patient compliance, these approaches are not ideal in the management of NASH and hepatic fibrosis [5]. At the same time, drugs to treat and improve NASH have been developed, including insulin sensitizers, angiotensin receptor blockers, lipid-lowering drugs, antioxidants, hexoketo cocaine, and ursodeoxycholic acid. However, because of the obvious limitations of these drugs, their effectiveness and safety still warrant ongoing investigation with clinical trials. So, there is a real demand for continuous research of current and potential drugs both clinically and preclinically to augment NASH therapy.
Hence, there is an urgent clinical imperative to discover new drugs against NASH, and interventions targeting these pathogenic mechanisms have become a hot topic of current research. Most of the current research is focused on single-target interventions.
Theoretically, therapeutic targets need to be balanced against hepatic steatosis, inflammation, hepatocyte injury, and fibrosis.
The combination of multiple drug interventions targeting different targets in the pathogenesis of NASH is a direction for future research.
Currently, there are six agonists of the nuclear receptor farnesoid X receptor (FXR) entering clinical trials worldwide for the NASH indication [6]. Tropifexor, one of the new high-potency agonists for FXR, shows an EC50 value of 0.2 nM. In addition, tropifexor produced a strong, concentration-dependent induction of BSEP and small heterodimer partner (SHP) genes in primary cells. At a concentration as low as 1 nM, the induction of BSEP was higher than that of the control (DMSO), while at 10 nM, a 15-fold stronger SHP induction than the control was observed, and at 1 nM, a moderate SHP induction, three times higher than the control, was observed [7]. In parallel, pharmacokinetic studies in rats revealed that tropifexor was slowly cleared, with a CL value of 9 mL·min−1·kg−1, and it also possessed a long terminal t1/2 of 3.7 h. Formulated as an aqueous microemulsion, tropifexor had an oral bioavailability of 20% in rats. In mice, administered by intravenous injection, tropifexor displayed both low clearance and a narrow volume of distribution, with a half-life of 2.6 hours. In dogs, when administered intravenously, tropifexor showed a t1/2 of 7.4 hours and a volume of distribution of 0.46 L/kg [7]. Tropifexor targets FXR in the intestinal epithelial cells and causes a concentration-dependent increase in FGF-19 levels in single- and multiple-dosing experiments [8]. In a model of ANIT-induced liver injury, tropifexor could ameliorate hepatic transaminases and fibrosis [9]. In the Stelic animal model of NASH (STAM), tropifexor regressed well-established fibrosis and decreased NAFLD activity scores and hepatic triglycerides. Meanwhile, tropifexor dramatically downregulated steatohepatitis, fibrosis, and fibrogenic gene expression in the diet-induced amylin liver NASH (AMLN) model [10].
To date, LC-MS/MS approaches have been used for pharmacokinetic-related studies of tropifexor [11]. However, a bioanalytical method for the detection of tropifexor in biological samples by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) has not been available. Therefore, this work aims to validate a facile and precise UPLC-MS/MS assay that measures tropifexor in the plasma of beagle dogs, as well as to investigate the pharmacokinetic profile of tropifexor in beagles. Tropifexor (Figure 1(b)) (purity > 98%) and oprozomib (Figure 1(a)) (internal standard, ISTD, purity > 98%) were sourced from Shanghai Tronsai Technology Co. HPLC-grade methanol and acetonitrile were obtained from Merck (Darmstadt, Germany). Ultrapure water was prepared by filtration with a Milli-Q reagent system (Millipore, Bedford, USA).
Instrumentation and Analytical Conditions.
The apparatus used in the chromatographic analysis was a Waters ACQUITY ultra-performance liquid chromatography (UPLC) system (Milford, Massachusetts, USA). The basic procedure was as follows. First of all, chromatographic separation was performed on a Waters ACQUITY UPLC BEH C18 column (50 mm × 2.1 mm, 1.7 μm) fitted with a precolumn. Then, thorough separation of the analytes was achieved by an efficient gradient process with acetonitrile (B) and 0.1% formic acid aqueous solution (A) as the mobile phases. The column temperature was controlled at 40°C for the entire elution period, with the autosampler (FTN) set at 10°C and the sample chamber temperature at 4°C. The injection volume was 2.0 μL and the flow rate was constant at 0.3 mL/min. The whole gradient elution process lasted 2.0 min: the acetonitrile fraction was 10% from 0 to 0.5 min, reached 90% by 1 min and was maintained until 1.4 min, then dropped back to 10% at 1.5 min and was held until 2.0 min.
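As an aside for readers reproducing the gradient, the programme above can be written as a simple time table. The sketch below is a minimal Python representation; the helper name and the assumption of linear ramps between set points are ours (most UPLC controllers interpolate linearly between gradient rows), not details from the paper.

```python
import numpy as np

# Gradient programme from the text: (time in min, % acetonitrile (B))
GRADIENT = [(0.0, 10), (0.5, 10), (1.0, 90), (1.4, 90), (1.5, 10), (2.0, 10)]

def percent_b(t: float) -> float:
    """% B at time t, assuming linear ramps between the tabulated set points."""
    times, pcts = zip(*GRADIENT)
    return float(np.interp(t, times, pcts))

print(percent_b(0.75))  # mid-ramp: 50% B under the linear-ramp assumption
```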
A Waters Xevo TQ-S triple quadrupole tandem mass spectrometer (Milford, MA, USA) equipped with an electrospray ion source (ESI), with a source temperature of 550°C and a spray voltage of 5500 V (positive), was used to perform positive ion scanning in selective reaction monitoring (SRM) mode. Finally, as summarized in Table 1, the control parameters and statistics of the MS/MS system were acquired with MassLynx 4.1.
Preparation of Standard and Quality Control (QC) Samples.
The weighed tropifexor and oprozomib were each dissolved in methanol, brought to volume in a 10 mL volumetric flask, and mixed thoroughly to form a stock solution and an internal standard stock solution, each with a mass concentration of 1.0 mg/mL. To obtain working solutions, appropriate amounts of the stock solution were removed and gradually diluted with methanol; the same was done for the ISTD working solution, eventually diluting both down to 200 ng/mL. After preparing the standard working solution, 10 μL of it was taken and 90 μL of blank beagle dog plasma was added to give a plasma standard solution. The standard concentrations for the tropifexor calibration curve were as follows: 1, 2.5, 5, 10, 25, 50, 100, and 200 ng/mL. Blank samples and different concentrations of standard working solutions were precisely aspirated to prepare quality control (QC) samples at three concentrations of 2.5, 50, and 150 ng/mL as low quality control (LQC), medium quality control (MQC), and high quality control (HQC), respectively. All of the above preparations were kept at 4°C for further experimental studies.
Sample Preparation.
A precise amount of 100 μL of the plasma specimen was placed in a 1.5 mL plastic centrifuge tube. Then, 20 μL of ISTD working solution was added, followed by 250 μL of acetonitrile to precipitate the protein.
Then, the mixtures were vortexed for 2.0 min and centrifuged at 10,000 × g and 4°C for 15 min. Afterward, the supernatant was pipetted into an autosampler vial with an insert tube, and a final volume of 2.0 μL was injected for analysis and determination.
Method Validation.
The selectivity, standard curve, precision and accuracy, matrix effect, and stability of the proposed approach were validated as per the "Guidelines for the Validation of Quantitative Methods for the Analysis of Biological Samples" in the fourth general rule of the Chinese Pharmacopoeia, 2020 edition. Three different types of beagle dog samples were selected: real plasma samples obtained in pharmacokinetic studies, blank plasma spiked with tropifexor and ISTD, and six blank beagle dog plasma samples from different batches. These were analyzed separately to evaluate the selectivity of the protocol.
Linearity of the analytical procedure requires that the results be proportional to the concentration of the sample. The tropifexor regression was calculated using a weighted (W = 1/X²) least squares algorithm at eight different concentrations over the range of 1.0-200 ng/mL, with the concentration of the test substance in plasma as the horizontal coordinate X (ng/mL) and the ratio of the peak area of the test substance to that of the internal standard as the vertical coordinate Y. The calibration curve was established by regression with the weighted (W = 1/X²) least squares method. The back-calculated LLOQ of the regression equation should be within ±20% of the nominal value.
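To make the weighting concrete, a 1/X² weighted linear fit can be computed as in the minimal sketch below. The concentrations are the paper's calibration levels, but the response values are synthetic (generated from the reported regression equation plus noise). Note that numpy's polyfit weights multiply the residuals before squaring, so w = 1/x realises W = 1/x² on the squared residuals.

```python
import numpy as np

# Calibration levels from the text (ng/mL); responses are synthetic, generated
# from the reported regression Y = 0.01921*X + 0.01570 plus small noise.
x = np.array([1, 2.5, 5, 10, 25, 50, 100, 200], dtype=float)
y = 0.01921 * x + 0.01570 + np.random.default_rng(0).normal(0, 0.005, x.size)

# polyfit minimises sum((w*(y - p(x)))**2), so w = 1/x gives W = 1/x**2 weighting
slope, intercept = np.polyfit(x, y, 1, w=1.0 / x)
print(f"Y = {slope:.5f} * X + {intercept:.5f}")

# Back-calculate each level to check accuracy (within +/-15%; +/-20% at the LLOQ)
back = (y - intercept) / slope
print(np.round(100 * (back - x) / x, 2))  # relative error, %
```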
Three QC samples, at three concentration levels, were used to assess accuracy (RE, %) and precision (RSD, %), with samples of each concentration level analyzed within one day for intraday values, as well as interday precision and accuracy over three consecutive days. The accepted values for accuracy and precision should be limited to ±15%. The extraction recovery was assessed by dividing the response value of the analyte recovered from the sample matrix by the response value generated by the standard at three quality control levels (2.5, 50, and 150 ng/mL). The analytes were added at concentrations of 2.5, 50, and 150 ng/mL to assess the matrix effect (ME), by comparing the peak area in the presence of the matrix with the corresponding one without matrix. The ME of the ISTD was assessed the same way at a working concentration of 100 ng/mL. The stability of tropifexor was appraised in five replicates at levels of 150, 50, and 2.5 ng/mL under the following storage conditions, using a newly prepared calibration curve. Initially, the samples were subjected to room temperature for 24 hours to ascertain their short-term stability. In addition, the long-term stability was evaluated by storing the samples at −80°C for 60 days. Also, the freeze-thaw stability of the analytes in plasma was studied over three freeze-thaw cycles. Eventually, the extracted samples were stored in the sample manager (10°C) for 8 hours to determine the autosampler stability. The final value obtained should be within ±15% of the concentration at the time of the preliminary analysis.
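For clarity, the recovery and matrix-effect comparisons described above are simple ratios; the sketch below shows the arithmetic with made-up peak areas (all variable names and values are illustrative, not measured data).

```python
# Illustrative peak areas (arbitrary units) -- not measured values
area_extracted = 8900.0    # analyte spiked into plasma before extraction
area_neat = 10000.0        # standard at the same level, no matrix
area_postspiked = 10100.0  # standard spiked into extracted blank matrix

recovery_pct = 100 * area_extracted / area_neat        # ~89%, cf. reported 88.13-93.13%
matrix_effect_pct = 100 * area_postspiked / area_neat  # ~101%, cf. reported 99.88-101.93%
print(recovery_pct, matrix_effect_pct)
```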
Animal Experiments and Pharmacokinetic Study.
Six healthy adult beagles (2-3 years old), weighing between 8.0 and 10.0 kg, were selected by the Experimental Animal Center of Henan University of Science and Technology (Luoyang, China) and kept in the experimental kennel for a week of acclimatization on a normal diet (production license No. SCXK(E) 2021-0020). Throughout the experimental process, the animal studies followed the Guide for Ethical Review of Laboratory Animal Welfare (GB/T35892-2018).
After fasting for more than 12 hours the day before the experiment, each beagle was given 0.33 mg/kg tropifexor prepared in 0.5% sodium carboxymethylcellulose (CMC-Na) by oral administration. Next, approximately 1.0 mL of venous blood was collected at 0, 0.25, 0.5, 1, 2, 4, 6, 8, 12, 24, and 48 hours and stored in heparin-containing 1.5 mL polyethylene tubes. The blood samples were then centrifuged at 4°C for 10 minutes at 3000 rpm. The supernatant was taken immediately after centrifugation and stored at −80°C for subsequent analysis. The established UPLC-MS/MS method was used to detect the concentration of tropifexor in the beagle dog plasma. The Drug and Statistics (DAS) 2.0 software was utilized to execute a non-compartmental analysis to derive the primary pharmacokinetic parameters.
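For orientation, the core of a non-compartmental analysis on this sampling schedule can be sketched in a few lines of Python. This illustrates the standard calculations (trapezoidal AUC, terminal-slope half-life), not the DAS 2.0 implementation, and the concentration values are placeholders.

```python
import numpy as np

# Sampling times from the study (h); concentrations below are made-up placeholders
t = np.array([0, 0.25, 0.5, 1, 2, 4, 6, 8, 12, 24, 48], dtype=float)
c = np.array([0, 5, 12, 20, 18, 12, 8, 5, 2.5, 0.6, 0.05])  # ng/mL, illustrative

cmax, tmax = c.max(), t[c.argmax()]
auc_0_t = np.trapz(c, t)                   # linear trapezoidal AUC(0-t)

# Terminal elimination from a log-linear fit of the last three positive points
tt, cc = t[-3:], c[-3:]
lam_z = -np.polyfit(tt, np.log(cc), 1)[0]  # terminal rate constant (1/h)
t_half = np.log(2) / lam_z
auc_0_inf = auc_0_t + cc[-1] / lam_z       # extrapolation to infinity

print(cmax, tmax, round(t_half, 2), round(auc_0_inf, 1))
```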
Method Development and Optimization.
The standard solution (1 μg/mL) was scanned in both positive and negative ion modes under continuous infusion with a syringe pump, and the findings indicated that the [M + H]+ ion of the compound had better stability and sensitivity in the positive ion mode. The mass spectrometry parameters were automatically optimized to select the optimal spectral conditions and characteristic daughter ions for the target analyte.
Acetonitrile was chosen as the organic phase due to its low column pressure and high mass spectrometric response. Then, this experiment compared the effects of water-acetonitrile, 0.1% formic acid aqueous solution-acetonitrile, and 0.1% acetic acid aqueous solution-acetonitrile on the response of the target compounds; the target analytes showed a high response and good specificity when the mobile phase was 0.1% formic acid aqueous solution-acetonitrile. Meanwhile, this experiment compared the Waters ACQUITY UPLC BEH C18 column (50 mm × 2.1 mm, 1.7 μm) with the Waters ACQUITY UPLC HSS T3 column (50 mm × 2.1 mm, 1.8 μm), and the results showed that the former had a better separation effect than the latter and could meet the requirements of instrumental analysis. Finally, acetonitrile was chosen as the reagent for the protein precipitation method because of its lack of significant endogenous interference and its high extraction rate compared with methanol.
Sample Selectivity.
We verified the selectivity of the protocol by comparing the chromatograms obtained from blank plasma specimens of beagle dogs with those from plasma samples to which the standard solution had been added, as well as with those of plasma samples from beagle dogs after oral administration of the drug. As illustrated by Figure 2, tropifexor and the ISTD were not affected by the blank plasma samples, with retention times of 1.53 and 1.90 min, respectively. These outcomes indicate that this method was reproducible, selective, and specific. As shown in Table 2, the coefficient of determination (r²) of the linear regression analysis remained above 0.99 throughout the validation test, and the standard curve of tropifexor was linear in the range of 1 to 200 ng/mL. The regression equation, as verified by this study, is Y = 0.01921 × X + 0.01570 (r² = 0.9994). Lastly, the LLOQ is the minimum concentration of analyte in the sample that can be reliably quantified; the LLOQ value for tropifexor in this study was 1.0 ng/mL, with relative precision and accuracy within 20%.
Precision and Accuracy.
The accuracy and precision for tropifexor were obtained by performing multiple replicate assays at four different concentrations, as illustrated in Table 2, with in-depth analysis at the LLOQ and the three QC levels. The obtained accuracies and precisions were within ±15%. These findings suggest that this approach was reliable and accurate for the measurement of tropifexor in beagle plasma. As shown in Table 3, the average extraction recoveries of the analyte from beagle plasma for QC samples at three concentration levels were 88.13%-93.13%, indicating the high reproducibility of the method. The ME of tropifexor in this work ranged from 99.88% to 101.93%, indicating that no clear matrix effects were exhibited.
Stability.
After studying and analyzing the stability of tropifexor at concentrations of 2.5, 50, and 150 ng/mL, the analyte proved stable under short-term, long-term, freeze-thaw, and sample manager (10°C) conditions, as shown in Table 4. The RSD was less than 15% for all storage conditions, in accordance with the requirements of the "Guidelines for the validation of quantitative methods for the analysis of biological samples" in the fourth general rule of the Chinese Pharmacopoeia, 2020 edition.
Pharmacokinetic Study.
The concentration of tropifexor in plasma after oral administration of 0.33 mg/kg of tropifexor in beagle dogs was determined by the newly developed UPLC-MS/MS method. Figure 3 shows the mean blood concentration-time curve.
Conclusions
In conclusion, an accurate and reliable UPLC-MS/MS method was developed for the measurement of tropifexor in plasma, and the pharmacokinetic profile of tropifexor in beagles was characterized. The optimized method was demonstrated to have low interference, good reproducibility, high accuracy and precision, and good linearity. This method is suitable for tropifexor drug interaction studies due to its good sample adaptability and high stability.
Data Availability
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare that there are no conflicts of interest. | 2022-09-19T15:06:38.935Z | 2022-09-17T00:00:00.000 | {
"year": 2022,
"sha1": "24ff71c4ca42dede05e093377eed9790eae739c3",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jamc/2022/2823214.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72aafeb3ed2cacdfcd3a50b30d6d0a582d3e932d",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
6602872 | pes2o/s2orc | v3-fos-license | A Numerical Scheme for the Quantum Boltzmann Equation Efficient in the Fluid Regime
Numerically solving the Boltzmann kinetic equations with the small Knudsen number is challenging due to the stiff nonlinear collision term. A class of asymptotic preserving schemes was introduced in [6] to handle this kind of problems. The idea is to penalize the stiff collision term by a BGK type operator. This method, however, encounters its own difficulty when applied to the quantum Boltzmann equation. To define the quantum Maxwellian (Bose-Einstein or Fermi-Dirac distribution) at each time step and every mesh point, one has to invert a nonlinear equation that connects the macroscopic quantity fugacity with density and internal energy. Setting a good initial guess for the iterative method is troublesome in most cases because of the complexity of the quantum functions (Bose-Einstein or Fermi-Dirac function). In this paper, we propose to penalize the quantum collision term by a 'classical' BGK operator instead of the quantum one. This is based on the observation that the classical Maxwellian, with the temperature replaced by the internal energy, has the same first five moments as the quantum Maxwellian. The scheme so designed avoids the aforementioned difficulty, and one can show that the density distribution is still driven toward the quantum equilibrium. Numerical results are presented to illustrate the efficiency of the new scheme in both the hydrodynamic and kinetic regimes. We also develop a spectral method for the quantum collision operator.
Introduction
The quantum Boltzmann equation (QBE), also known as the Uehling-Uhlenbeck equation, describes the behaviors of a dilute quantum gas. It was first formulated by Nordheim [13] and Uehling and Uhlenbeck [16] from the classical Boltzmann equation by heuristic arguments. Here we mainly consider two kinds of quantum gases: the Bose gas and the Fermi gas. The Bose gas is composed of Bosons, which have an integer value of spin, and obey the Bose-Einstein statistics. The Fermi gas is composed of Fermions, which have half-integer spins and obey the Fermi-Dirac statistics.
Let f(t, x, v) ≥ 0 be the phase space distribution function depending on time t, position x and particle velocity v; then the quantum Boltzmann equation reads:

∂f/∂t + v · ∇_x f = (1/ε) Q_q(f).   (1.1)

Here ε is the Knudsen number, which measures the degree of rarefaction of a gas. It is the ratio between the mean free path and the typical length scale. The quantum collision operator Q_q is

Q_q(f)(v) = ∫_{R^{d_v}} ∫_{S^{d_v−1}} B(|v − v_*|, cos θ) [ f' f'_* (1 ± θ₀ f)(1 ± θ₀ f_*) − f f_* (1 ± θ₀ f')(1 ± θ₀ f'_*) ] dω dv_*,   (1.2)

where θ₀ = ℏ^{d_v} and ℏ is the rescaled Planck constant. In this paper, the upper sign will always correspond to the Bose gas while the lower sign to the Fermi gas. For the Fermi gas, we also need f ≤ 1/θ₀ by the Pauli exclusion principle. f, f_*, f' and f'_* are the shorthand notations for f(t, x, v), f(t, x, v_*), f(t, x, v') and f(t, x, v'_*) respectively. (v, v_*) and (v', v'_*) are the velocities before and after collision. They are related by the following parametrization:

v' = (v + v_*)/2 + (|v − v_*|/2) ω,   v'_* = (v + v_*)/2 − (|v − v_*|/2) ω,   (1.3)

where ω is the unit vector along v' − v'_*. The collision kernel B is a nonnegative function that only depends on |v − v_*| and cos θ (θ is the angle between ω and v − v_*). In the Variable Hard Sphere (VHS) model, it is given by

B(|v − v_*|, cos θ) = C_γ |v − v_*|^γ,   (1.4)

where C_γ is a positive constant. γ = 0 corresponds to the Maxwellian molecules, γ = 1 is the hard sphere model. When the Knudsen number ε is small, the right hand side of equation (1.1) becomes stiff and explicit schemes are subject to severe stability constraints. Implicit schemes allow larger time steps, but a new difficulty arises in seeking the numerical solution of a fully nonlinear problem at each time step. Ideally, one wants an implicit scheme allowing large time steps that can be inverted easily. In [6], for the classical Boltzmann equation, Filbet and Jin proposed to penalize the nonlinear collision operator Q_c by a BGK operator:

∂f/∂t + v · ∇_x f = [ (Q_c(f) − λ(M_c − f)) / ε ] + [ λ(M_c − f) / ε ],   (1.5)

where λ is a constant that depends on the spectral radius of the linearized collision operator of Q_c around the local (classical) Maxwellian M_c. Now the term in the first bracket of the right hand side of (1.5) is less stiff than the second one and can be treated explicitly. The term in the second bracket will be discretized implicitly. Using the conservation property of the BGK operator, this implicit term can actually be solved explicitly. Thus they arrive at a scheme which is uniformly stable in ε, with an implicit source term that can be inverted explicitly. Furthermore, under certain conditions, one can show that this type of scheme has the following property: the distance between f and the Maxwellian will be O(ε) after several time steps, no matter what the initial condition is. This guarantees the capturing of the fluid dynamic limit even if the time step is larger than the mean free time.

(This work was partially supported by NSF grant DMS-0608720 and NSF FRG grant DMS-0757285. FF was supported by the ERC Starting Grant Project NuSiKiMo. SJ was also supported by a Van Vleck Distinguished Research Prize and a Vilas Associate Award from the University of Wisconsin-Madison.)
Back to the quantum Boltzmann equation (1.1), a natural way to generalize the above idea is to penalize Q q with the quantum BGK operator M q − f . This means we have to invert a nonlinear algebraic system that contains the unknown quantum Maxwellian M q (Bose-Einstein or Fermi-Dirac distribution) for every time step. As mentioned in [7], this is not a trivial task compared to the classical case. Specifically, one has to invert a nonlinear 2 by 2 system (can be reduced to one nonlinear equation) to obtain the macroscopic quantities, temperature and fugacity. Due to the complexity of the quantum distribution functions (Bose-Einstein or Fermi-Dirac function), it is really a delicate issue to set a good initial guess for an iterative method such as the Newton method to converge.
In this work we propose a new scheme for the quantum Boltzmann equation. Our idea is based on the observation that the classical Maxwellian, with the temperature replaced by the (quantum) internal energy, has the same first five moments as the quantum Maxwellian. This observation was used in [7] to derive a 'classical' kinetic scheme for the quantum hydrodynamical equations. Therefore, we simply penalize the quantum collision operator Q_q by a 'classical' BGK operator, thus avoiding the aforementioned difficulty. At the same time, we have to sacrifice a little bit on the asymptotic property. Later we will prove that for the quantum BGK equation, the resulting numerical solution satisfies

f^n − M^n_q = O(Δt), for n > N, for any initial data f⁰,   (1.6)

i.e. f will converge to the quantum Maxwellian beyond the initial layer with an error of O(Δt).
Another numerical issue is how to evaluate the quantum collision operator Q_q. In fact, (1.2) can be simplified as

Q_q(f) = ∫∫ B(|v − v_*|, cos θ) [ f' f'_* (1 ± θ₀ f ± θ₀ f_*) − f f_* (1 ± θ₀ f' ± θ₀ f'_*) ] dω dv_*,

since the quartic terms cancel, so Q_q is indeed a cubic operator. Almost all the existing fast algorithms are designed for the classical Boltzmann operator based on its quadratic structure. Here we will give a spectral method for the approximation of Q_q. As far as we know, this is the first time the full quantum Boltzmann collision operator has been computed with spectral accuracy. The rest of the paper is organized as follows. In the next section, we give a brief introduction to the quantum Boltzmann equation: the basic properties, the quantum Maxwellians and the hydrodynamic limits. In section 3, we present the details of computing the quantum collision operator by the spectral method as well as the numerical accuracy. Our new scheme to capture the hydrodynamic regime is given in section 4. In section 5, the proposed schemes are tested on the 1-D shock tube problem of the quantum gas for different Knudsen numbers ε ranging from the fluid regime to the kinetic regime. The behaviors of the Bose gas and the Fermi gas in both the classical regime and quantum regime are included. Finally some concluding remarks are given in section 6.
The Quantum Boltzmann Equation and its Hydrodynamic Limits
In this section we review some basic facts about the quantum Boltzmann equation (1.1).
• At the formal level, Q q conserves mass, momentum and energy.
• If f is a solution of QBE (1.1), the following local conservation laws hold:

∂_t ∫ f φ(v) dv + ∇_x · ∫ v f φ(v) dv = 0,   φ(v) = (1, v, |v|²/2)^T.

Define the macroscopic quantities: density ρ, macroscopic velocity u, specific internal energy e as

ρ = ∫ f dv,   ρu = ∫ v f dv,   ρe = (1/2) ∫ |v − u|² f dv,

and stress tensor P and heat flux q as

P = ∫ (v − u) ⊗ (v − u) f dv,   q = (1/2) ∫ (v − u) |v − u|² f dv;

the above system can then be recast as

∂_t ρ + ∇_x · (ρu) = 0,
∂_t (ρu) + ∇_x · (ρu ⊗ u + P) = 0,
∂_t (ρe + ½ρ|u|²) + ∇_x · ((ρe + ½ρ|u|²) u + P u + q) = 0.

When f = M_q, one has P = pI with p = 2ρe/d_v and q = 0, and the system closes into the quantum Euler equations

∂_t ρ + ∇_x · (ρu) = 0,
∂_t (ρu) + ∇_x · (ρu ⊗ u + pI) = 0,   (2.9)
∂_t (ρe + ½ρ|u|²) + ∇_x · ((ρe + ½ρ|u|² + p) u) = 0,

where M_q is the quantum Maxwellian given by

M_q(v) = (1/θ₀) · 1 / ( z^{−1} exp(|v − u|²/(2T)) ∓ 1 ),   (2.8)

where z is the fugacity, T is the temperature (see [7] for more details about the derivation of M_q). This is the well-known Bose-Einstein ('−') and Fermi-Dirac ('+') distribution.
With the macroscopic variables ρ, u and e, they are exactly the same as the classical Euler equations. However, the intrinsic constitutive relation is quite different. ρ and e are connected with T and z (used in the definition of M_q (2.8)) by a nonlinear 2 by 2 system:

ρ = (2πT)^{d_v/2} / θ₀ · Q_{d_v/2}(z),   ρe = (d_v/2) · (2πT)^{d_v/2} / θ₀ · T · Q_{d_v/2+1}(z),   (2.10)

where Q_ν(z) denotes the Bose-Einstein function G_ν(z) and the Fermi-Dirac function F_ν(z) respectively,

G_ν(z) = 1/Γ(ν) ∫₀^∞ x^{ν−1} / (z^{−1} e^x − 1) dx,   (2.11)

F_ν(z) = 1/Γ(ν) ∫₀^∞ x^{ν−1} / (z^{−1} e^x + 1) dx.   (2.12)

The physical range of interest for a Bose gas is 0 < z ≤ 1, where z = 1 corresponds to the degenerate case (the onset of Bose-Einstein condensation). For the Fermi gas we don't have such a restriction and the degenerate case is reached when z is very large. For small z (0 < z < 1), the integrands in (2.11) and (2.12) can be expanded in powers of z,

G_ν(z) = Σ_{k≥1} z^k / k^ν,   F_ν(z) = Σ_{k≥1} (−1)^{k−1} z^k / k^ν.   (2.13)

Thus, for z ≪ 1, both functions behave like z itself and one recovers the classical limit.
On the other hand, the first equation of (2.10) can be written as

ρ / (2πT)^{d_v/2} = Q_{d_v/2}(z) / θ₀,

where the left hand side is just the coefficient of the classical Maxwellian, which should be an O(1) quantity.
Hence, as θ₀ → 0, Q_{d_v/2}(z) → 0, which means z ≪ 1 by the monotonicity of the function Q_ν. This is consistent with the fact that one gets the classical Boltzmann equation from QBE (1.1) by letting θ₀ → 0.
The quantum Euler equations (2.9) can be derived via the Chapman-Enskog expansion [3] as the leading order approximation of the quantum Boltzmann equation (1.1). By going to the next order, one can also obtain the quantum Navier-Stokes system, which differs from its classical counterpart. In particular, the viscosity coefficient and the heat conductivity depend upon both ρ and e [1].
Computing the Quantum Collision Operator Q q
In this section, we discuss the approximation of the quantum collision operator Q q . The method we use is an extension of the spectral method introduced in [12,5] for the classical collision operator.
We first write (1.2) as

Q_q(f) = Q_c(f) ± θ₀ Q̃_q(f),

where

Q_c(f) = ∫∫ B(|v − v_*|, cos θ) (f' f'_* − f f_*) dω dv_*

is the classical collision operator. The cubic terms are

Q̃_q(f) = ∫∫ B(|v − v_*|, cos θ) [ f' f'_* (f + f_*) − f f_* (f' + f'_*) ] dω dv_*.

In order to perform the Fourier transform, we periodize the function f on the domain D_L = [−L, L]^{d_v} (where the ball of radius R contained in D_L is an approximation of the support of f [14]). Using the Carleman representation [2], one can rewrite the operators in a form suitable for spectral truncation, (3.4)-(3.5) (for simplicity we only consider the 2-D Maxwellian molecules). Now we approximate f by a truncated Fourier series,

f(v) ≈ Σ_{k=−N/2}^{N/2−1} f̂_k e^{i(π/L) k·v}.

Plugging it into (3.4)-(3.5), one can get the k-th mode of the transformed Q_q. The classical part is the same as in the previous method [12]. We will mainly focus on the cubic terms. Define the kernel modes

β(l, m) = ∫∫ δ(x · y) e^{i(π/L) l·x} e^{i(π/L) m·y} dx dy.
Following [12], β(l, m) can be decomposed into a sum over M discrete angles θ_p = (π/2)(p/M) of products of functions depending on l and m separately, where M is the number of equally spaced points in [0, π/2] and the one-dimensional factors involve terms of the form sin((π/L)Rs)/(πs). With this decomposition, the k-th modes of the four cubic terms Q̃₁-Q̃₄ are evaluated as follows. For Q̃₁, the terms inside the bracket form a convolution (denoted ĝ_{k−n}(n)), which can be computed by the Fast Fourier Transform (FFT); the outside structure, however, is not a convolution, since ĝ_{k−n}(n) itself depends on n, so we compute this part directly. For Q̃₂ (3.11), both the inside and the outside structures are convolutions, and the FFT can be implemented easily. For Q̃₃ (3.12), factoring out α_p(l + m), both inside and outside are convolutions again. The last term Q̃₄ (3.13) can be evaluated similarly to Q̃₃.

3.1. Numerical Accuracy. To illustrate the accuracy of the above method, we test it on a steady state; namely, we compute Q_q(M_q) and check its max norm. In all the numerical simulations, the particles are assumed to be 2-D Maxwellian molecules. Let ρ = 1 and T = 1; from (2.10) one can adjust θ₀ to get z lying in different physical regimes. When θ₀ = 0.01 (ℏ = 0.1), z_Bose = 0.001590 and z_Fermi = 0.001593. In this situation the quantum effect is very small, and the Maxwellians for the Bose gas, the classical gas and the Fermi gas are almost the same (Fig. 1). When we increase θ₀, say θ₀ = 9 (ℏ = 3), z_Bose = 0.761263 and z_Fermi = 3.188717, and the difference between the quantum gases and the classical gas is evident (Fig. 2). In Table 1, we list the values of ‖Q_c(M_c)‖_{L∞} and ‖Q_q(M_q)‖_{L∞} computed on different meshes N = 16, 32, 64 (number of points in the v direction) with M = 4 (number of points in the angular direction θ_p; it is not necessary to use many points since M does not affect the spectral accuracy, see [12]). These results confirm the spectral accuracy of the method, although the accuracy in the quantum regime is not as good as that in the classical regime. This is because the regularity of the quantum Maxwellians becomes worse as θ₀ increases or, strictly speaking, the mesh size Δv is not small enough to capture the shape of the Maxwellians. To remedy this problem, one can add more grid points or, more effectively, shorten the computational domain. For the Bose-Einstein distribution, we also include the results computed on [−6, 6] × [−6, 6] in Table 1.

3.2. Relaxation to Equilibrium. Let us consider the space homogeneous quantum Boltzmann equation for 2-D Maxwellian molecules. As already mentioned, this equation satisfies the entropy condition, and the equilibrium states are the entropy minimizers. Hence, we first consider the quantum Boltzmann equation for a Fermi gas with an initial datum 0 ≤ f⁰ ≤ 1/θ₀ and observe the relaxation to equilibrium of the distribution function. Then, we take a Bose gas, for which the entropy is sublinear and fails to prevent concentration, consistent with the fact that condensation may occur in the long-time limit. Fermi gas. The initial datum is chosen as the sum of two Maxwellian functions, with v₁ = (2, 1). The final time of the simulation is T_end = 0.5, which is very close to the stationary state.
In the spatially homogeneous setting, Pauli's exclusion principle facilitates things because of the additional L∞ bound 0 ≤ f(t) ≤ 1/θ₀. In this case, the convergence to equilibrium in a weak sense has been shown by Lu [10]. Later Lu and Wennberg proved the strong L¹ stability [9]. However, no constructive result in this direction has ever been obtained, neither has any entropy-dissipation inequality been established.
In Fig. 3 we report the time evolution of the entropy and the fourth and sixth order moments of the distribution with respect to the velocity variable. We indeed observe the convergence to a steady state of the entropy and also of the high order moments when t → ∞. In Fig. 4 we also report the time evolution of the level sets of the distribution function f(t, v_x, v_y) obtained with N = 64 modes at different times. Initially the level sets of the initial data correspond to two spheres in the velocity space. Then, the two distributions start to mix together until the stationary state is reached, represented by a single centered sphere. It is clear that the spherical shapes of the level sets are described with great accuracy by the spectral method. Bose gas. This is an even more challenging problem since there is no convergence result, due to the lack of an a priori bound. Lu [11] has attacked this problem with the well-developed tools of the modern spatially homogeneous theory and proved that the solution (with a very low temperature) converges to equilibrium in a weak sense. In [4], the authors studied a one-dimensional model and proved existence theorems, and convergence to a Bose distribution having a singularity as time goes to infinity, because Bose condensation cannot occur in finite time.
Here we investigate the convergence to equilibrium for the space homogeneous model in 2-D, for which condensation cannot occur. We consider an initial datum centred at v₁ = (1, 1/2) with temperature T₀ = 1/4. We still observe the convergence to equilibrium and the convergence of high order moments when t → ∞ in Fig. 5.
In Fig. 6 we report the time evolution of the level sets of the distribution function f(t, v_x, v_y) obtained with N = 64 modes at different times and observe the trend to equilibrium.
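Before moving on, a brief implementation note: the convolution sums that appear in the spectral evaluation of the cubic terms above are computed with zero-padded FFTs. The minimal numpy sketch below illustrates that step and checks it against direct summation; the array names and sizes are ours, not the paper's.

```python
import numpy as np

# Evaluate the linear convolution (f*g)_k = sum_n f_{k-n} g_n of two mode
# vectors via zero-padded FFTs, as used for the "inside" convolution terms.
N = 32
rng = np.random.default_rng(1)
f_hat = rng.normal(size=N) + 1j * rng.normal(size=N)  # Fourier modes of f
g_hat = rng.normal(size=N) + 1j * rng.normal(size=N)  # weighted modes, e.g. alpha_p-scaled

# Pad to 2N so the circular convolution of the FFT equals the linear one
conv = np.fft.ifft(np.fft.fft(f_hat, 2 * N) * np.fft.fft(g_hat, 2 * N))[: 2 * N - 1]

direct = np.convolve(f_hat, g_hat)  # O(N^2) reference
assert np.allclose(conv, direct)
```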
A Scheme Efficient in the Fluid Regime
So far we have only considered spatially homogeneous quantum Boltzmann equations; what happens for spatially inhomogeneous data? Due to the natural bound 0 ≤ f(t) ≤ 1/θ₀, the Boltzmann-Fermi model seems to be well understood mathematically [17]. The situation is completely different for the Boltzmann-Bose model, since singular measures may occur [17]. We first review the scheme in [6] for the classical Boltzmann equation

∂f/∂t + v · ∇_x f = (1/ε) Q_c(f).   (4.1)

The first-order scheme reads:

(f^{n+1} − f^n)/Δt + v · ∇_x f^n = [ Q_c(f^n) − λ(M_c^n − f^n) ] / ε + λ(M_c^{n+1} − f^{n+1}) / ε,   (4.2)

where λ is some appropriate approximation of |∇Q_c| (and can be made time dependent). To solve for f^{n+1} explicitly, we need to compute M_c^{n+1} first. Since the right hand side of (4.2) is conservative, it vanishes when we take the moments (multiply by φ(v) = (1, v, ½|v|²)^T and integrate with respect to v). Then (4.2) becomes

(U^{n+1} − U^n)/Δt + ∇_x · ∫ v φ(v) f^n dv = 0,

where U = (ρ, ρu, ρe + ½ρ|u|²)^T are the conserved quantities. Once we get U^{n+1}, M_c^{n+1} is known. Now f^{n+1} in (4.2) is easy to obtain.
When generalizing the above idea to the quantum Boltzmann equation (1.1), the natural idea is to replace Q c and M c in (4.2) by Q q and M q respectively. However, as mentioned in section 2, one has to invert the nonlinear system (2.10) to get z and T . Experiments show that the iterative methods do converge when the initial guess is close to the solution (analytically, this system has a solution [1]). But how to set a good initial guess for every spatial point and every time step is not an easy task, especially when ρ and e are not continuous.
Here we propose to use a 'classical' BGK operator to penalize Q_q. Specifically, we replace the temperature T with the internal energy e in the classical Maxwellian, using the relation e = (d_v/2) T (true for classical monatomic gases), and get

M_c(v) = ρ / (2πT̄)^{d_v/2} · exp( −|v − u|² / (2T̄) ),   T̄ = 2e/d_v.

An important property of M_c is that it has the same first five moments as M_q. Now our new scheme for QBE (1.1) can be written as

(f^{n+1} − f^n)/Δt + v · ∇_x f^n = [ Q_q(f^n) − λ(M_c^n − f^n) ] / ε + λ(M_c^{n+1} − f^{n+1}) / ε.   (4.5)

Since the right hand side is still conservative, one computes M_c^{n+1} the same way as for (4.2). It is important to notice that z and T are not present at all in this new scheme; thus one does not need to invert the 2 by 2 system (2.10) during the time evolution. If they are desired variables for output, one only needs to convert between ρ, e and z, T at the final output time.
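A space-homogeneous sketch of one step of (4.5) is given below to show how the implicit penalization is inverted in closed form. The function names and grid-based helpers are placeholders of ours, and Q_q stands for any routine returning the quantum collision term on the velocity grid.

```python
import numpy as np

def penalized_step(f, Q_q, moments, maxwellian, dt, eps, lam):
    """One first-order step of scheme (4.5) in the space-homogeneous case.

    f: distribution on a velocity grid; Q_q(f): quantum collision term;
    moments(f) -> (rho, u, e); maxwellian(rho, u, e): 'classical' Maxwellian
    M_c built with T replaced by 2e/d_v. All names are illustrative.
    """
    rho, u, e = moments(f)
    Mc_n = maxwellian(rho, u, e)
    # Without transport the moments are conserved, so M_c^{n+1} = M_c^n;
    # with transport one would first update U via the moment equations.
    Mc_np1 = Mc_n
    rhs = f + (dt / eps) * (Q_q(f) - lam * (Mc_n - f)) + (dt * lam / eps) * Mc_np1
    # The implicit term lam*(M_c^{n+1} - f^{n+1})/eps is solved in closed form:
    return rhs / (1.0 + dt * lam / eps)
```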
4.1. Asymptotic Property of the New Scheme. In this subsection we show that the new scheme, when applied to the quantum BGK equation, has the property (1.6). Consider the following time discretization:

(f^{n+1} − f^n)/Δt = [ (M_q^n − f^n) − λ(M_c^n − f^n) ] / ε + λ(M_c^{n+1} − f^{n+1}) / ε.   (4.6)

Some simple mathematical manipulation on (4.6) gives a recursive relation, (4.7), for f^n − M_q^n. Assume all the functions are smooth. When λ > 1/2, one obtains |f^{n+1} − M_q^{n+1}| ≤ α |f^n − M_q^n| + O(ε + Δt) with some 0 < α < 1; the O(ε) term comes from the second term of the right hand side of (4.7), and the O(Δt) term from the third and fourth terms. Then

|f^n − M_q^n| ≤ α^n |f⁰ − M_q⁰| + O(ε + Δt).   (4.9)

Since Δt is taken bigger than ε, this implies the property (1.6). It is interesting to point out that f approaches M_q, not M_c, with (4.6).
Remark 4.1. The first order (in-time) method can be extended to second order by an Implicit-Explicit (IMEX) method (see also [6]); the resulting scheme (4.10) can be shown to have the same property (1.6) on the quantum BGK equation.
Numerical Examples
In this section, we present some numerical results of our new scheme (4.5) (a second order finite volume method with slope limiters [8] is applied to the transport part) on the 1-D shock tube problem. The initial condition is a Riemann (shock tube) datum. The particles are again assumed to be 2-D Maxwellian molecules, and we adjust θ₀ to get different initial data for both the Bose gas and the Fermi gas.
In all the regimes, besides the directly computed macroscopic quantities, we will show the fugacity z and temperature T as well. They are computed as follows. First, (2.10) with d_v = 2 leads to

Q₁(z)² / Q₂(z) = θ₀ ρ / (2π e).   (5.2)

We treat the left hand side of (5.2) as one function of z, and invert it by the secant method. Once z is obtained, T can be computed easily using, for example, the first equation of (2.10). To evaluate the quantum function Q_ν(z), the expansion (2.13) is used for the Bose-Einstein function. The Fermi-Dirac function is computed by a direct numerical integration. The approach adopted here is taken from [15] (Chapter 6.10). When approximating the collision operator Q_q, we always take M = 4, N = 32 and L = 8, except L = 6 for the Bose gas in the quantum regime.
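As an illustration of this inversion for the Bose gas, the sketch below truncates the series (2.13) for G_ν and applies a secant iteration to (5.2); the function names, truncation length and starting guesses are our own choices. With ρ = e = 1 and θ₀ = 0.01 it returns z ≈ 0.00159, matching the value quoted in section 3.1.

```python
import numpy as np

def G(nu, z, K=200):
    """Bose-Einstein function via the truncated series (2.13), valid for 0 < z < 1."""
    k = np.arange(1, K + 1)
    return np.sum(z**k / k**nu)

def solve_fugacity(rho, e, theta0, z0=1e-4, z1=0.9, tol=1e-12):
    """Secant iteration on h(z) = G_1(z)^2/G_2(z) - theta0*rho/(2*pi*e), cf. (5.2)."""
    h = lambda z: G(1, z)**2 / G(2, z) - theta0 * rho / (2 * np.pi * e)
    f0, f1 = h(z0), h(z1)
    for _ in range(100):
        if abs(z1 - z0) < tol or f1 == f0:
            break
        z0, z1 = z1, z1 - f1 * (z1 - z0) / (f1 - f0)
        f0, f1 = f1, h(z1)
    return z1

z = solve_fugacity(rho=1.0, e=1.0, theta0=0.01)
T = 1.0 * G(1, z) / G(2, z)  # from e = T * Q_2(z) / Q_1(z)
print(z, T)
```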
Hydrodynamic Regime.
We compare the results of our new scheme (4.5) with the kinetic scheme (the KFVS scheme in [7]) for the quantum Euler equations (2.9). The time step Δt is chosen by the CFL condition, independent of ε. Fig. 7 shows the behavior of a Bose gas when θ₀ = 0.01. Fig. 8 shows the behavior of a Bose gas when θ₀ = 9. The solutions for a Fermi gas at θ₀ = 0.01 are very similar to Fig. 7, so we omit them here. Fig. 9 shows the behavior of a Fermi gas when θ₀ = 9. All the results agree well in this regime, which implies that the scheme (4.5) is asymptotic preserving (when the Knudsen number ε goes to zero, the scheme becomes a fluid solver).
Kinetic Regime.
We compare the results of our new scheme (4.5) with the explicit forward Euler scheme. The time step Δt for the new scheme is still chosen by the CFL condition. When the Knudsen number ε is not very small, 10⁻¹ or 10⁻², this Δt is also sufficient for the explicit scheme. Fig. 10 shows the behavior of a Bose gas when θ₀ = 0.01. Fig. 11 shows the behavior of a Bose gas when θ₀ = 9. The solutions for a Fermi gas at θ₀ = 0.01 are very similar to Fig. 10, so we omit them here. Fig. 12 shows the behavior of a Fermi gas when θ₀ = 9. Again, all the results agree well, which means the scheme (4.5) is also reliable in the kinetic regime. To avoid the boundary effect, all the simulations in this subsection were carried out on a slightly larger spatial domain x ∈ [−0.25, 1.25].
Conclusion
A novel scheme was introduced for the quantum Boltzmann equation, starting from the scheme in [6]. The new idea here is to penalize the quantum collision operator by a 'classical' BGK operator so as to avoid the difficulty of inverting the nonlinear system ρ = ρ(z, T ), e = e(z, T ). The new scheme is uniformly stable in terms of the Knudsen number, and can capture the fluid (Euler) limit even if the small scale is not numerically resolved. We have also developed a spectral method for the quantum collision operator, following its classical counterpart [12,5].
So far we have not considered the quantum gas in the extreme case. For example, the Bose gas becomes degenerate when the fugacity z = 1. Many interesting phenomena happen in this regime. Our future work will focus on this aspect. | 2010-09-17T07:36:27.000Z | 2010-09-17T00:00:00.000 | {
"year": 2010,
"sha1": "c577e32d005d7d2d3ba5b2fe5d134aee35636d80",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c577e32d005d7d2d3ba5b2fe5d134aee35636d80",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
261184541 | pes2o/s2orc | v3-fos-license | End-to-end framework for automated collection of large multicentre radiotherapy datasets demonstrated in a Danish Breast Cancer Group cohort
Large Digital Imaging and Communications in Medicine (DICOM) datasets are key to support research and the development of machine learning technology in radiotherapy (RT). However, the tools for multi-centre data collection, curation and standardisation are not readily available. Automated batch DICOM export solutions were demonstrated for a multicentre setup. A Python solution, Collaborative DICOM analysis for RT (CORDIAL-RT), was developed for curation, standardisation, and analysis of the collected data. The setup was demonstrated in the DBCG RT-Nation study, where 86% (n = 7748) of treatments in the inclusion period were collected and quality assured, supporting the applicability of the end-to-end framework.
Introduction
Big data and data science methods have the potential to accelerate the development of radiotherapy (RT) by acting as a supplement to the traditional translational research chain [1]. To take advantage of this potential, large-scale studies must move beyond binary registration of RT or prescribed dose and fractionations only and instead include the full exposure data (images, structure sets, treatment plans and 3D dose distributions) available in the Digital Imaging and Communications in Medicine (DICOM) format [2,3].
Large DICOM datasets also play a major role in the development of machine learning (ML) technology, which is rapidly finding its way into research and the clinic. The unavailability of vendor-provided functionality for bulk data export, which is needed to provide diverse training data [4], is however a hindrance.
In a recent survey 69% of respondents reported that they were either using or planning to use ML algorithms, naming the need for larger multicentre databases among the top priorities for going forward [5].
This need can be met by prospective data collection in clinical studies; however, it can be very time consuming for large datasets [6]. A multicentre collaborative effort to implement local methods for bulk DICOM data extraction would make this process faster and enable learning from archived treatment data, but may also increase the need for data curation, as data is not assessed on an individual level.
The variability and conformality of datasets depend on the extent of cross-centre collaboration and guideline implementation [7]. This is especially true for non-protocol treatments, which represent most of the available data. To address the tasks of curation, standardisation, and analysis of DICOM files, a vendor-agnostic tool is needed. Tools with standardisation capabilities exist, but these are either single-purpose, like nomenclature standardisation [8], focused on dose analysis, such as the DVH Analytics package [9], or not open source, like the DcmCollab system [10]. While not made for explorative data curation, a system like DcmCollab, which focuses on storage, security and GDPR compliance, could however be used as a storage solution after the dataset has been curated. Though DICOM image data is traditionally stored in a Picture Archiving and Communications System (PACS), the widespread adoption of PACS in RT has been hindered by several issues [11], making an RT-specific system a more suitable choice as the final step of an end-to-end framework.
In this technical note, we present and discuss the implementation of an end-to-end framework for providing large multicentre DICOM-RT datasets. This includes implementing multicentre bulk DICOM data extraction solutions and developing a solution to handle curation, standardisation and analysis of large DICOM-RT datasets prior to permanent storage. The setup is demonstrated in a case study (Danish Breast Cancer Group (DBCG) RT Nation study) and quality assurance (QA) is performed for dose-volume histogram (DVH)-parameter extraction.
Defining a multicentre cohort
Patients can be identified using a database with treatment and patient characteristics and a central identification system, such as a social security number, or by each participating centre based on local registration of treatment and patient characteristics. To ease the subsequent data curation and analysis, it is advisable to predefine a set of inclusion criteria to limit and streamline the extent of the exported data.
Case study: DBCG RT-NATION data selection
In DBCG RT-Nation, patients were identified using the Danish Civil Registration System (CPR) identifiers [12] obtained from the DBCG database [13]. All patients who underwent surgery for early breast cancer in Denmark in 2008-2016 with an indication for loco-regional RT according to DBCG guidelines were eligible. Treatment planning data was collected for the first breast cancer RT (and the sequential boost if present). For adaptive RT, the treatment plan with the most delivered fractions was collected. If this information was missing, the first treatment plan was collected. Information on GDPR compliance can be found in the supplementary document.
Implementation of bulk DICOM data extraction
Treatment planning systems (TPS) rarely support bulk DICOM data extraction as a standard solution, but do allow for scripting, which can be used to implement such extraction. Fig. 1 displays the complete end-to-end framework with solutions for the three TPS used in DBCG RT-Nation.
Eclipse (Varian Medical Systems)
In one of the Varian centres, a pilot project was carried out, implementing an application for automated batch export of DICOM files in the Eclipse Scripting Application Programming Interface (ESAPI), based on a script made available online by Varian [14]. We facilitated a workshop for all Varian centres where the application was shared for teaching purposes and later implemented in local variations by each Varian centre.
In the Varian centres, information on the intended and actually delivered fractions of specific treatment plans was available in the ARIA database system. This information was used to find the dominant treatment plan (most fractions treated) and to filter out treatments that did not comply with the inclusion criteria, using various automated methods depending on local naming conventions and use of diagnosis codes.
Oncentra external beam (Nucletron B.V.)
The Oncentra centre had a DICOM file-based archiving system, which was organised in folders using the patient CPR, allowing easy extraction. No link between the archived plans and the number of fractions treated was implemented at the time, and the first plan was used if multiple plans were available, as all plans were planned with the full number of fractions by convention.
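For a file-based archive like this, bulk extraction keyed on CPR identifiers can be scripted in a few lines. The following minimal Python sketch is illustrative only: the archive layout (one folder per patient, named by CPR), the paths, and the list file are assumptions, not the centre's actual configuration.

import shutil
from pathlib import Path

ARCHIVE_ROOT = Path("/archive/oncentra")  # hypothetical archive location
EXPORT_ROOT = Path("/export/rt_nation")   # hypothetical export target

def export_patients(cpr_list_file: str) -> None:
    # One CPR per whitespace-separated token; each CPR names a patient folder.
    for cpr in Path(cpr_list_file).read_text().split():
        src = ARCHIVE_ROOT / cpr
        if not src.is_dir():
            print(f"no archived folder for CPR {cpr}")
            continue
        # copytree preserves the per-patient folder structure in the export
        shutil.copytree(src, EXPORT_ROOT / cpr, dirs_exist_ok=True)

export_patients("included_cprs.txt")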
Pinnacle (Philips)
The centre using Pinnacle implemented a local solution for an automated full DICOM data dump of their system. This solution was based on executing Pinnacle scripts from a Python shell script. A MATLAB script was used to select the relevant treatments.
Workload
Centres were surveyed to estimate the time spent on implementation and data extraction, which was compared between systems.
Collaborative DICOM analysis for radiotherapy (CORDIAL-RT)
After collecting and pseudonymising the DICOM files, a vendor-agnostic solution was needed to store and curate the large number of DICOM files. We developed the CORDIAL-RT solution, which consists of an SQLite database and a collection of functionalities written in Python. CORDIAL-RT enabled scaling, summing, and extraction of doses (based on the dicompyler-core Python package [15]), as well as mapping of structure names and export of DVH data to a file and DICOM data to a centralised storage solution. A brief introduction to the solution is given in the supplementary document. The source code is available on GitHub [16].
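Since CORDIAL-RT's dose handling builds on dicompyler-core, a minimal DVH extraction with that package gives a feel for the underlying mechanics. The sketch below is not CORDIAL-RT code; the file names and ROI number are placeholders.

from dicompylercore import dvhcalc

# Compute the cumulative DVH for one structure (ROI number 3 here)
# from an RTSTRUCT/RTDOSE pair on disk.
dvh = dvhcalc.get_dvh("rtstruct.dcm", "rtdose.dcm", 3)

print(dvh.name)                        # structure name from the RTSTRUCT
print(f"volume: {dvh.volume:.1f} cm3")
print(f"Dmax:   {dvh.max:.2f} Gy")
print(f"Dmean:  {dvh.mean:.2f} Gy")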
Case study: DBCG RT-Nation data curation, standardisation and QA
CORDIAL-RT was used to curate and organise DICOM files into one treatment per patient. Treatments that did not fit the inclusion criteria were removed. If multiple dose files were associated with a treatment, the system automatically summed them and saved a new dose file representing the full treatment, provided the same image-set was referenced. In the case of multiple image-sets, the dominant plan was used, and a scaling factor was added to the treatment and handled by the system. For sequential boosts, doses were summed if the same image-set was used. All relevant structure names were categorised to a common name-set as defined in the DBCG Skagen trial 1 [17], based on the AAPM TG-263 report [18]. This was done by identifying the most frequent names and using the Levenshtein distance to find similarly named structures, as sketched below. The method was demonstrated on the ipsilateral lung, which was expected to be present in all treatments.
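As a rough illustration of the name-mapping step, the sketch below maps raw structure names to a standard name-set using the Levenshtein distance. The standard names and the distance threshold are illustrative assumptions, not the mapping used in the study.

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

STANDARD_NAMES = ["Lung_L", "Lung_R", "Heart", "SpinalCord"]  # assumed subset

def standardise(raw_name, max_distance=3):
    # Return the closest standard name, or None if nothing is close enough.
    best = min(STANDARD_NAMES, key=lambda s: levenshtein(raw_name.lower(), s.lower()))
    return best if levenshtein(raw_name.lower(), best.lower()) <= max_distance else None

print(standardise("lung_r"))     # -> Lung_R
print(standardise("PTV_total"))  # -> None (left for manual review)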
CORDIAL-RT was used to QA DVH-parameter extraction. Sample testing was done on a diverse subset of treatments (n = 20), comparing 87 dose and volume parameters for various structures, using the MATLAB CERR [19] package for independent validation.
Population dose QA was performed as a sanity check by extracting the maximum treatment doses for all treatments and comparing the results to expected ranges. For a subset of treatments, the median target dose was also assessed.
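A population-level sanity check of this kind reduces to flagging treatments whose maximum dose, relative to the prescription, falls outside an expected window. The sketch below mirrors the 100-120% window reported in the results; the input data structure is an assumption.

def dose_qa(max_doses_gy, prescribed_gy, low=1.00, high=1.20):
    # Flag treatment IDs whose relative maximum dose is outside [low, high].
    flagged = []
    for treatment_id, dmax in max_doses_gy.items():
        rel = dmax / prescribed_gy[treatment_id]
        if not (low <= rel <= high):
            flagged.append(f"{treatment_id}: relative max dose {rel:.0%}")
    return flagged

print(dose_qa({"t001": 52.1, "t002": 61.5}, {"t001": 50.0, "t002": 50.0}))
# -> ['t002: relative max dose 123%']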
Implementation of bulk DICOM data extraction
Bulk DICOM-RT data extraction solutions were implemented for all centres. An estimated 6-7 workdays were spent developing the pilot solution for the Varian setup. For the other four Varian centres, the estimated time spent was: 1-3 days implementing the automatic solution, 2-3 days developing and executing code for selecting and curating treatment data, and 1-2 min per patient for executing the automated export. For the Oncentra centre, a few hours were spent implementing the solution, three days on data curation, and less than a minute on copying data for each patient. The Pinnacle centre could not estimate the time spent on this specific project, as it was done as part of a larger effort.
Case study: DBCG RT-Nation end-to-end demonstration
From the DBCG database, 9100 patients were identified. In total, DICOM data (~1.2 million DICOM files) for 8028 treatments (91%) was collected. In the screening process before the data export, 246 treatments did not match the inclusion criteria and 826 eligible treatments could not be collected (Fig. 2). About half of the uncollected eligible treatments (n = 453) were from 2008 and non-retrievable due to loss of access to data storage. During the curation and standardisation process, 219 additional treatments were found not to fit the inclusion criteria. Furthermore, 334 treatments were either incomplete or inconclusive and could not be processed. In total, 7448 treatments (86%) were processed and included in the dose QA. From 2009 onwards, this proportion was 90%.
In the sample test, all volume differences between CORDIAL-RT and MATLAB CERR were <1% or <1 cm³. All dose differences were <2% or <0.5 Gy (supplementary Table 1). The population dose QA showed that 9% of treatments had a maximum relative dose above 120% (supplementary Fig. 1). Of these treatments, 5% had a median target dose above 110% and 0.8% had a median target dose above 120% (supplementary Fig. 2). No treatments had a maximum relative dose below 100%. We identified 158 different names associated with the ipsilateral lung, one of which was present in all but one treatment. As a proof of concept, 200 treatments were successfully exported to the DcmCollab system.
Discussion
We demonstrated the feasibility of a national end-to-end framework for collecting large DICOM-RT datasets, exemplified in a curated national dataset of 7448 node-positive breast cancer patients treated in 2008-2016. In 2009-2016, 90% of all loco-regional breast cancer RT treatments in Denmark were successfully collected and processed.
The 10% missing and inconclusive data from 2009 and later was caused by several factors, e.g., centres not being able to identify all treatments automatically, some treatments not being exported with all the needed files, and CORDIAL-RT not being able to process treatments with different structure-sets for primary and sequential boost plans. Despite the missing data, we were able to collect a dataset that is among the largest in radiotherapy containing full DICOM data. In comparison, the recently published CANTO-RT study from France included 3976 breast cancer patients [6].
The semi-automatic method used in CORDIAL-RT for categorising | 2023-08-27T15:07:17.259Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "657bae63970c8b847ea5dd755803be716ec78ee5",
"oa_license": "CCBYNCND",
"oa_url": "http://www.phiro.science/article/S2405631623000763/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "89a1c4258d4140e363540eb4996950288159572d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
247825653 | pes2o/s2orc | v3-fos-license | Natterin-Induced Neutrophilia Is Dependent on cGAS/STING Activation via Type I IFN Signaling Pathway
Natterin is a potent pro-inflammatory fish molecule, inducing local and systemic IL-1β/IL-1R1-dependent neutrophilia mediated by non-canonical NLRP6 and NLRC4 inflammasome activation in mice, independent of NLRP3. In this work, we investigated whether Natterin activates mitochondrial damage, resulting in self-DNA leaks into the cytosol, and whether the DNA sensor cGAS and the STING pathway participate in triggering the innate immune response. Employing a peritonitis mouse model, we found that deficiency of tlr2/tlr4, myd88 and trif results in decreased neutrophil influx to the peritoneal cavities of mice, indicating that, in addition to MyD88, TRIF contributes to the neutrophilia triggered by TLR4 engagement by Natterin. Next, we demonstrated that gpr91 deficiency in mice abolished neutrophil recruitment after Natterin injection, whereas mice pre-treated with 2-deoxy-d-glucose, which blocks glycolysis, presented infiltration similar to that of WT Natterin-injected mice. In addition, we observed that, compared with the WT Natterin-injected mice, DPI- and cyclosporin A-treated mice had a lower number of neutrophils in the peritoneal exudate. The levels of dsDNA in the supernatant of the peritoneal exudate, and of processed IL-33 in the supernatant of the peritoneal exudate or the cytoplasmic supernatant of the peritoneal cell lysate, of WT Natterin-injected mice were severalfold higher than those of the control mice. The recruitment of neutrophils to the peritoneal cavity 2 h post-Natterin injection was intensely impaired in ifnar KO mice and partially in il-28r KO mice, but not in ifnγr KO mice. Finally, using cgas KO, sting KO, or irf3 KO mice, we found that the recruitment of neutrophils to the peritoneal cavities was virtually abolished in response to Natterin. These findings reveal cytosolic DNA sensors as critical regulators of Natterin-induced neutrophilia.
Introduction
Natterin proteins were first revealed in the venom of the medically significant Brazilian toadfish, Thalassophryne nattereri (VTn), in five orthologs named Natterin (1-4 and -P) [1]. They were identified as toxins since they are responsible for the main damage effects of the VTn envenomation, such as local edema and excruciating pain. Natterin modulates stress levels in the microvasculature, with venous stasis and ischemia that evolves into necrosis [2,3].
We recently performed an extensive screening using available genome databases across a wide range of species and identified 331 species displaying 859 natterin or natterin-like genes [4]. Structurally, all Natterin-like proteins share a similar architecture, with a variable membrane-binding domain in the N-terminal region and a conserved aerolysin-like module [5] in the C-terminal region; the latter contains the AGIP (Ala-Gly-Ile-Pro) family's signature domain [4].
These proteins containing the Natterin domain are distributed throughout all kingdoms of life, including plants, fungi, and sessile marine animals with primitive anatomical structure and organization [4]. However, no homologs have been described in prokaryotes, protists, amphibians, or mammals so far. Interestingly, although fish represent the majority of species that contain Natterin-like proteins (109 species with 598 sequences), only four species are venomous and present a venom apparatus, namely, Plotosus canius, Plotosus lineatus, Thalassophryne amazonica, and Thalassophryne nattereri [5,6]. The presence of a large number of Natterin-like sequences in widely divergent non-venomous species that originated at least 400 million years ago points to the evolutionary conservation of the aerolysin module [5] across the Natterin group, and also indicates an important adaptive value, consistent with a plurality of conserved functions, including a role in the innate immune defense system, rather than a role solely as a toxin.
The founding Natterin members are potent pro-inflammatory molecules, and a large number of cells may sense and respond to them. In vivo studies have shown that Natterin induces local and systemic neutrophilic inflammation in mice dependent on signals derived from IL-33/ST2 and IL-1β/IL-1R1, as well as IL-1α. Interestingly, the Natterin-dependent neutrophilic inflammation was mediated by the activation of both caspase-1 and caspase-11 by the non-canonical NLRP6 (NOD-like receptor family pyrin domain containing 6) and NLRC4 (NLR family CARD domain containing 4) adaptors through ASC (apoptosis-associated speck-like protein) interaction of the inflammasome complex with gasdermin D activation, independent of NLRP3 (NOD-, LRR-, and pyrin domain-containing protein 3) [7].
Our data add to previously published studies showing NLRP3 as the only member of the inflammasome family implicated in the sensing of several aerolysin-like pore-forming toxins from several species, and emphasize that the NLRP6/NLRC4-dependent, neutrophil-mediated response may be part of an innate immune mechanism underlying responses to fish aerolysins.
Increasing evidence suggests that the stimulator of interferon genes protein (STING) is a critical signaling molecule in immunity and tissue inflammation. Cyclic GMP-AMP (cGAMP) synthase (cGAS) serves as a cytosolic sensor of DNA, and it activates STING to trigger a signaling cascade leading to the production of type I interferons (IFNs) [8]. In addition to pathogen-derived DNA and self-DNA from the nucleus, DNA leaked into the cytosol from damaged mitochondria (mtDNA) activates the cGAS-STING pathway [9]. Furthermore, Swanson et al. [10] demonstrated for the first time that the second messenger, cGAMP, not only activates type I IFNs but also activates the inflammasome pathway, highlighting the positive cross-talk between the inflammasome and cGAS/STING in innate immunity.
Our data demonstrated that Natterin induces neutrophilic inflammation mediated by the activation of the inflammasome complex and that the associated ischemic/necrotic injury could generate the release of danger-associated molecular patterns (DAMPs). However, the role of the cytosolic DNA-sensing pathway in neutrophilic inflammation induced by Natterin is still unclear. In this work, we investigated whether Natterin-induced inflammation activates mitochondrial damage, resulting in self-DNA leaks into the cytosol, and whether the DNA sensor cGAS and the STING pathway participate in triggering the innate immune response.
Natterin Induces Signals through TLR4 and MyD88/TRIF Adaptors
The production of inflammatory cytokines that governs the trafficking of leukocytes to organs across the vascular barrier of endothelial cells (ECs) results from the activation of NFκB, the major outcome of TLR signaling. First, we examined whether neutrophil infiltration was mediated by the engagement of pattern-recognition receptors (PRRs) by Natterin. In Figure 1A, we observed that BL6 mice deficient in the tlr2 and tlr4 genes presented a drastic reduction (99 ± 0.1%) of neutrophil recruitment to the peritoneal cavity 2 h post-injection, indicating that Natterin engages either TLR2 or TLR4, which induces chemoattractant production for neutrophil recruitment.
Figure 1. Natterin induces neutrophilic inflammation dependent on PRRs and GPR91 sensors. Natterin (1 µg diluted in PBS) was injected intraperitoneally (i.p.) in non-treated WT mice (WT_Natterin) or mice deficient in tlr2/tlr4, myd88 or trif (A) or gpr91 KO mice (B) (KO_Natterin groups). As a negative control, mice were only injected i.p. with PBS (WT_PBS). An independent group of WT mice was pre-treated 1 h before Natterin injection with an i.p. injection of 2-DG at 10 mg/kg (WT 2-DG_Natterin) (C). Two hours after injection, mice were killed and the peritoneal cavities were washed to obtain exudates. Peritoneal exudate cells were harvested and the numbers of macrophages (large leukocytes with a blue-grey ground-glass cytoplasm and an irregularly shaped nucleus with vacuoles) and neutrophils (with 3-5 nuclear lobes and fine granules within the cytoplasm) were evaluated in cytospin slides stained with a Diff-Quick staining kit. Examples of representative photomicrographs are shown in (D). Each bar represents the mean ± SEM of 3-5 animals/group. * p < 0.05 compared with the WT_PBS group and # p < 0.05 compared with the WT_Natterin group.
TRIF (TIR-domain-containing adapter-inducing interferon-β, encoded by Ticam1) is an adaptor for TLR3 and TLR4; MyD88 is an adaptor for all TLRs except TLR3 and is also involved in TLR-independent signals activated by IL-1R. We therefore tested the involvement of both adaptors in TLR signaling after Natterin stimulation. We observed that the recruitment of neutrophils to the peritoneal cavity 2 h post Natterin injection was significantly decreased, by 90% in myd88 KO and by 71% in trif KO mice (Figure 1A,D).
GPR91 Succinate Sensor Drives Neutrophilic Inflammation
Intracellular molecules present in the cytoplasm in the context of major cellular stress can also be detected by intracellular sensors of the innate immune system, either directly or indirectly, and trigger a pro-inflammatory immune response through the formation of the inflammasome. The metabolite succinate is a universal metabolic signature of ischemic/hypoxic conditions [11]. SUCNR1/GPR91 is a G protein-coupled receptor (GPCR) cell-surface sensor for extracellular succinate released and accumulated under hypoxia and oxidative stress [12], and it synergizes with TLRs, inducing reactive oxygen species (ROS) release [13].
Since the response caused by Natterin is characterized by ischemic and necrotic injury, we hypothesized that mitochondrial dysfunction, with leakage of the intracellular messenger succinate, is involved in the neutrophilic mobilization to the peritoneal cavity of Natterin-injected mice. Then, using gpr91-deficient mice, we examined whether endogenous succinate accumulation acted as an inflammatory factor triggering neutrophilic infiltration. We found that gpr91 deficiency led to a strong reduction (82 ± 0.8%) of neutrophil recruitment to the peritoneal cavities after Natterin injection in KO mice compared to Natterin-injected WT mice (Figure 1B,D).
Metabolomics studies have demonstrated that succinate transported from the mitochondria to the cytosol leads to hypoxia-inducible factor (HIF)-1α stabilization and a shift in metabolic activity [14]. When WT mice were pre-treated with 2-deoxy-d-glucose, which blocks glycolysis, and stimulated with Natterin, no change in the high number of neutrophils was observed compared with Natterin-injected mice (Figure 1C). These results suggest that the ischemic accumulation of succinate signaling via the GPR91 receptor plays a decisive role in neutrophilic inflammation, unrelated to a shift of the metabolic profile toward glycolysis.
Mitochondrial Dysfunction Is Important for Natterin-Dependent Neutrophilic Recruitment
Accumulated succinate is rapidly re-oxidized by succinate dehydrogenase, driving extensive mitochondrial ROS generation, a critical early driver of injury [15]. Next, we sought to identify whether ROS production induced by Natterin is involved in the neutrophilic infiltration. Figure 2A shows a decrease (59 ± 2%) in neutrophil infiltration into the peritoneal cavities of mice pre-treated with DPI (diphenyleneiodonium), a potent inhibitor of NADPH oxidase, which blocks mitochondrial and phagosomal ROS [16].
In Figure 2A, it can be observed that mice pre-treated with cyclosporin A [19], an inhibitor of mPTP opening via binding to mitochondrial peptidyl-prolyl cis-trans isomerase F (PPIF, also known as cyclophilin D), presented a 55% reduction in the number of neutrophils in the peritoneal exudate compared with WT Natterin-injected mice.
Self-derived dsDNA, including linear nuclear DNA and mtDNA released into the extracellular space, where it can be engulfed and sensed by endosomal or cytoplasmic nucleic acid sensors, elicits neutrophilic inflammation [20]. Interestingly, we found that the levels of dsDNA in the supernatant of the peritoneal exudate of WT Natterin-injected mice were severalfold higher (4-fold) than those of the control mice (Figure 2B). IL-33 is a nuclear-targeted cytokine abundantly expressed at mucosal barriers, which can be released from intact cells to propagate inflammation [21,22]. Interestingly, a role for the cleaved IL-33 alarmin decorating NETs in human systemic lupus erythematosus, linking neutrophil activation, type I IFN production, and end-organ inflammation, has been demonstrated [23].
Figure 2. As a negative control, mice were injected i.p. with PBS (WT_PBS). As a positive control, mice were only injected with Natterin (WT_Natterin). Two hours after injection, mice were killed and the peritoneal cavities were washed to obtain exudates. Peritoneal exudate cells were harvested and the number of neutrophils relative to the total cell number was evaluated in cytospin slides stained with the Diff-Quick staining kit. Each bar represents the mean ± SEM of 3-5 animals/group. * p < 0.05 compared with the WT_PBS group and # p < 0.05 compared with the WT_Natterin group. Concentrated supernatants of peritoneal exudates from the WT_PBS and WT_Natterin groups were analyzed for double-stranded DNA content using the Quant-iT PicoGreen dsDNA reagent (B). Proteins present in the concentrated supernatant, or cytoplasmic and nuclear proteins (C), collected 2 h after Natterin injection were analyzed using the iBind Flex Western System with goat anti-mouse IL-33 (processed form: 18 to 20 kDa), followed by the secondary antibody anti-goat IgG-HRP. β-Tubulin was used as the housekeeping protein. The immune complexes were revealed by an enhanced chemiluminescence detection system.
Next, we identified the subcellular location of IL-33 after Natterin stimulation in processed samples of neutrophil-rich peritoneal cavity exudates. Increased processed IL-33 (20 kDa) was observed in the exudate supernatant or the cytoplasmic supernatant of the peritoneal exudate (obtained after lysis of the cell pellet from neutrophil-rich Natterin-injected mice), but not in the nuclear samples (Figure 2C). Moreover, mice pre-treated with DPI or cyclosporin A continued to release processed IL-33 into the supernatant of the peritoneal exudate after Natterin stimulation, showing that the release of the processed cytokine by activated neutrophils is an event independent of mPTP opening or ROS production.
cGAS/STING/IRF3 via Type I IFN Axis Supports Natterin-Neutrophilic Inflammation
Our previous results demonstrated that caspase-1 and caspase-11 were required for the processing of pro-IL-1β, which, together with IL-1α, controls the local and systemic neutrophilic inflammation in response to Natterin [7]. Type I IFNs induce caspase-11 expression, an event that is both necessary and sufficient to promote caspase-11 autoprocessing. Yi [24] summarized and discussed the current studies exploring the activation mechanisms and regulatory roles of non-canonical inflammasomes, such as the mouse caspase-11 and human caspase-4 and caspase-5 non-canonical inflammasomes, in the inflammatory response and human diseases.
Here, we interrogated the upstream regulation of caspase-11, focusing on type I (IFN-α/β) and type III (IFN-λ) IFN signaling. Notably, the recruitment of neutrophils to the peritoneal cavity 2 h post-Natterin injection was intensely impaired in ifnar KO mice, which are deficient in the type I interferon-α/β receptor (75 ± 1%), and partially in il-28r KO mice, deficient in the IFN-λ receptor (69 ± 2%) (Figure 3A,C). In addition, using ifnγr KO mice, we confirmed that the neutrophilic infiltration following Natterin stimulation is negatively regulated by IFNγR signaling (Figure 3A). One way to activate the type I IFN signaling response [25,26] and IL-33 release [27,28] is via the cGAS/STING pathway. Therefore, we assessed the requirement of molecules downstream of the Natterin-induced STING pathway, such as the transcription factor interferon regulatory factor 3 (IRF3), for neutrophilic inflammation.
Using cgas KO, sting KO, or irf3 KO mice, we found that the recruitment of neutrophils to the peritoneal cavities was virtually abolished (95, 99 and 99 ± 0.1%, respectively) in response to Natterin (Figure 3B,C).
Figure 3. Two hours after Natterin injection, mice were killed and the peritoneal cavities were washed to obtain exudates. Peritoneal exudate cells were harvested and the number of neutrophils relative to the total cell number was evaluated in cytospin slides stained with the Diff-Quick staining kit. Examples of representative photomicrographs are shown in (C). Each bar represents the mean ± SEM of 3-5 animals/group. * p < 0.05 compared with the WT_PBS group and # p < 0.05 compared with the WT_Natterin group.
Discussion
Recently, we investigated the regulatory mechanisms controlling acute neutrophilic inflammation induced by Natterin, a family of proteins responsible for the toxic effects of the venom of Thalassophryne nattereri. We reported that Natterin induced the extracellular release of mature IL-1β and the sustained production of IL-33 by bronchial epithelial cells, which are essential signals for driving local and systemic neutrophil migration [7]. In addition, our data showed that the IL-1β-dependent neutrophilic inflammation induced by Natterin is the result of non-canonical activation of the inflammasome complex with the participation of cytosolic NLRP6/NLRC4 sensors.
In this study, we have provided evidence that STING is an important signaling molecule in IL-1β-dependent neutrophilic inflammation mediated by inflammasome activation in response to Natterin. In fact, our results indicated that Natterin leads to the release of a significant amount of DNA into the peritoneal exudate and activates cGAS, STING, and IRF3, which mediate neutrophilic inflammation. In cgas-, sting-, and irf3-deficient mice, the influx of neutrophils was alleviated.
The inflammasome and type I IFN pathways are two seminal routes by which innate immunity is activated to combat a wide variety of microbial pathogens. Mitochondrial DNA fragments serve as one of the ligands that cause STING activation, resulting in the activation of IRF3 and NF-κB and the expression of type I IFNs and other pro-inflammatory genes [29-32].
Although the Natterin-dependent mechanisms of cellular activation are still poorly understood, considerable advances have been made in recent years regarding the identification and characterization of aerolysin-mediated damage. Such studies have highlighted its underlying sequential nature, including recognition as an antigen by PRRs in immune cells [33,34], resulting in activation and production of pro-inflammatory molecules [35], and specific binding to GPI-anchored proteins at the surface of target cells, promoting pore formation and cytosolic insertion of the toxin [36]. Accordingly, pore formation triggering further potassium efflux and calcium influx may enable the secretion of cytokines, and occurs downstream of p38 mitogen-activated protein kinase (MAPK) signaling, inflammasome activation, caspase-1 processing, and activation of IL-1β secretion [36].
Here, we revealed that deficiency of tlr2/tlr4, myd88, or trif results in decreased neutrophil influx to the peritoneal cavities of mice, indicating that, in addition to MyD88, TRIF contributes to the neutrophilia triggered by TLR4 engagement by Natterin. We can attribute the importance of TRIF in Natterin-induced neutrophilia to its role as an inducer of IL-33 production and to an alternate TRIF-IRF3-axis-mediated IFN-β induction [27]. TLR9 is activated in response to DNA. However, the impact of TLR9 signaling on Natterin-induced neutrophilic inflammation remains to be determined.
Interestingly, a role for the cleaved IL-33 alarmin decorating NETs in human systemic lupus erythematosus, linking neutrophil activation, type I IFN production, and end-organ inflammation, has recently been demonstrated by Georgakis et al. [23]. Ozasa et al. [28] found that IRF3/7, which are signal transducers downstream of TBK1, are required for IL-33 release from lung fibroblasts in response to cGAMP, which functions as an allergy-prone adjuvant inducing strong type-2 immune responses to a co-inhaled allergen in the airway.
These findings fit with our model, which identified the production of IL-33 by bronchial epithelial cells and the dependence of local and systemic neutrophil migration on ST2/IL-33 signaling in response to Natterin [7].
Studies have shown that activation of the STING pathway requires TRIF, which interacts directly with STING to promote its dimerization and membrane translocation [25,26]. Previously, Yamamoto, Sato and Hemmi [37] demonstrated a prominent feature of TRIF-dependent IRF signaling in the production of type I IFN. More recently, a hemorrhagic shock model confirmed that deficiency of TLR4 or its intracellular adaptor TRIF results in decreased activation of STING's downstream mediators, TBK1 and IRF3, and in reduced expression of type I IFNs [38].
A recent study with peripheral blood from asthmatic patients revealed that increased STING expression may be associated with exacerbation of the disease [39]. Furthermore, Han et al. [40] described the accumulation of cytosolic dsDNA and cGAS-dependent cytokine production in IL-33-stimulated human bronchial cells and in mice submitted to three different allergic airway inflammation protocols, highlighting the important role of IL-33-induced cytosolic dsDNA accumulation and cGAS/STING pathway activation in asthma pathogenesis.
In our current study, we found that type I (IFN-α/β) and type III (IFN-λ) IFN signaling are important in non-canonical inflammasome-dependent neutrophilic inflammation, since mice deficient in the IFNAR or IL-28R receptors had a significant reduction in neutrophil recruitment in response to Natterin. Indeed, we found an opposite effect of type II IFN (IFN-γ) relative to type I and III IFNs in the regulation of the number of infiltrating neutrophils, corroborating findings that report negative reciprocal counter-regulation. Saikh [41] describes that MyD88 up-regulation in many viral infections is linked to a decreased antiviral type I IFN response, and that MyD88 exerts an inhibitory effect on the TRIF-mediated downstream signaling pathway of the type I IFN response.
cGAS serves as a cytosolic sensor of dsDNA, and it activates STING, leading to a type I IFN response via synthesis of the secondary messenger cGAMP [42]. Together, our data showing the requirement of cGAS for Natterin-induced neutrophilic inflammation confirm the crucial importance of the cGAS-STING-IRF3 axis as a common pathway.
There are several possible mechanisms by which mtDNA leaks into the cytosol to induce cGAS-STING signaling-mediated inflammation. First, it has been described that IL-1β signaling causes DNA damage and self-DNA release [9,43]. ROS can induce oxidative mitochondrial damage, resulting in mtDNA leaks into the cytosol [44]. Additional contributions to the leakage of self-DNA come from increased or prolonged mPTP opening in activated neutrophils, or from dead or stressed cells [20].
We observed that, compared with the WT Natterin-injected mice, gpr91 KO mice and DPI- or cyclosporin A-treated mice had lower numbers of neutrophils in the peritoneal cavity, implying that the ischemic accumulation of succinate-dependent ROS production plays a decisive role in neutrophilic inflammation. Interestingly, the DNA released in our model seems to be associated with a mechanism partially dependent on mPTP opening. Together, these data suggest that Natterin is a potent trigger of mitochondrial damage and mtDNA leakage into the cytosol, which activates the cytosolic DNA sensor cGAS and thereby STING signaling, driving IRF3-mediated secretion of type I IFNs, which synergize with IL-33 to promote neutrophilic inflammation (Figure 4). Our study indicates a sophisticated interplay between the cGAS/STING and type I IFN pathways that connects the non-canonical pathways of inflammasome activation to the regulation of neutrophilia in response to Natterin.
Figure 4. Neutrophilic inflammation induced by Natterin requires cGAS/STING/IRF3 via the type I IFN receptor. Natterin induces neutrophilic infiltration with cell activation and release of cytosolic molecules, such as DNA, succinate and ROS. cGAS/STING drives IRF3-mediated inflammation dependent on the type I IFN receptor. The activation of the STING pathway requires the TLR4/TRIF-dependent pathway, which is essential for the production of type I IFNs; these synergize with processed IL-33 to coordinate inflammation.
Our data clarify that the neutrophilic inflammation induced by Natterin, an aerolysin-like toxin, is the result of activation of cytosolic DNA sensors, pointing to the possibility of new pharmacological tools for its control.
Natterin Preparation
T. nattereri fish venom was obtained from freshly captured specimens at Mundau Lake in the state of Alagoas, Brazil, collected with a trawl net from the muddy bottom of the lake. Fish were transported to the Immunoregulation Unit of the Butantan Institute according to the Brazilian Environmental Agency (IBAMA-Instituto Brasileiro do Meio Ambiente e dos Recursos Naturais Renováveis) under license no. 16221-1. Venom was immediately extracted from the openings at the tip of the spines by applying pressure at their bases. After centrifugation, venom was pooled and stored at −80 °C before use. After that, fish were anesthetized with 2-phenoxyethanol prior to sacrifice by decapitation. The purified 35-38-kDa Natterin solution from T. nattereri fish venom was prepared from a pool of venom collected in different months of the year in Alagoas, according to Komegae et al. [33]. The venom was fractionated by cation exchange chromatography using a fast protein liquid chromatography system (FPLC-Pharmacia, Uppsala, Sweden). Immediately before chromatography, 2 mg venom was diluted in 500 µL of buffer A (20 mM Tris(hydroxymethyl)aminomethane, pH 8.3) and the solution centrifuged at 10,000× g for 5 min. The sample was applied to a Mono S HR 5/5 column equilibrated with buffer A. The retained proteins were eluted with a linear gradient of 0-2 M NaCl (sodium chloride) and collected at a flow rate of 1 mL/min. The elution profile was determined by measuring absorbance at 280 nm. Fractions 1-4 (the 5th was excluded), corresponding to the Natterins, were pooled (referred to as Natterin), dialyzed against 50 mM Tris/HCl pH 7.4, evaluated with respect to protein content, and kept at −20 °C until use. The obtained Natterin was analyzed by 12% SDS-polyacrylamide gel electrophoresis (SDS-PAGE). Endotoxin content, corresponding to a total dose <0.8 pg LPS, was evaluated with the chromogenic Limulus amoebocyte lysate assay (no. QCL-1000, Bio-Whittaker) according to the manufacturer's instructions.
Peritoneal Cell Suspension Collection
After 2 h, mice were sacrificed by isoflurane inhalation, their peritoneal cavities were washed with 2 × 2.5 mL of cold PBS, and the harvested exudates were centrifuged at 1500 rpm at 4 °C for 10 min. According to Santos et al. [45], total leukocyte counts were performed using a hemocytometer, and cytocentrifuge slides containing 100 µL of cell suspension were prepared, air dried, fixed in methanol, stained with the Diff-Quick staining set, and analyzed under an optical microscope with a 40× objective. For differential cell counts, 300 leukocytes were classified as macrophages or polymorphonuclear neutrophils and counted, based on staining and morphological characteristics, using an Axio Imager A1 light microscope (Carl Zeiss, Jena, Germany) with an AxioCam ICc1 digital camera (Carl Zeiss).
Double-Stranded DNA Content Measurement
The supernatants obtained after centrifugation of the peritoneal exudate were precipitated for protein concentration by 12 h of incubation at −20 °C with acetone. The concentrated supernatant was analyzed for double-stranded DNA content using the Quant-iT PicoGreen dsDNA reagent (no. P11495, Invitrogen, Carlsbad, CA, USA), according to Nascimento et al. [46].
Western Blot
The cell pellets collected from the peritoneal exudate were resuspended in lysis buffer (RIPA, no. 9806, Cell Signaling, supplemented with Pierce protease and phosphatase inhibitor, no. 88668, Thermo Fisher Scientific, MA, USA), kept for 30 min on ice, and sonicated (3 s × 10 amplitude) at 4 °C. The lysate was then centrifuged at 14,000 rpm at 4 °C for 15 min. The collected supernatant was precipitated in acetone to obtain the cytoplasmic proteins, and the pellets were resuspended in lysis buffer and kept for a further 30 min on ice, followed by 5 min of immersion in liquid nitrogen. After centrifugation at 14,000 rpm at 4 °C for 15 min, the collected supernatant was precipitated in acetone to obtain the nuclear proteins.
Statistical Analysis
All values were expressed as mean ± SEM. Experiments using 3 to 5 mice per group were performed independently at least twice during this study. Parametric data were evaluated using analysis of variance followed by the Bonferroni test for multiple comparisons. Non-parametric data were assessed using the Mann-Whitney test. Differences were considered statistically significant at p < 0.05 using GraphPad Prism software (GraphPad Software, v6.02, 2013, La Jolla, CA, USA). | 2022-03-31T16:30:41.284Z | 2022-03-25T00:00:00.000 | {
"year": 2022,
"sha1": "cdc8a60e14f0ffa3129a3832e6ba91d784ccbd04",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/7/3600/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0981ba1fc29e0d0f1aaaec976f40718b26b4600",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55354512 | pes2o/s2orc | v3-fos-license | VSS Degradation Kinetics in High Temperature Aerobic Digestion and Microbial Community Characteristics
Piggery wastewater is a high-strength organic wastewater with high pollutant concentrations and large discharge volumes; it causes serious environmental pollution and is difficult to treat. Piggery wastewater was treated with the autothermal thermophilic aerobic digestion process (ATAD) and its biodegradation kinetics was studied. The ATAD system heated itself automatically, and the reaction temperature rose from an ambient temperature of 20 °C to a maximum of 64 °C. Based on the Arrhenius formula, an empirical model was obtained through dimensional analysis. In the model, the removal of volatile suspended solids (VSS) was correlated with the initial VSS concentration, inlet water temperature, aeration rate, and agitation rate. In the empirical model, the apparent activation energy was 2.827 kJ⋅mol−1. The exponents for the initial VSS concentration, aeration rate, and stirring rate were 1.0587, −0.0976, and −0.1618, respectively. The correlation coefficient for the pre-exponential factor was 0.9971. The VSS removal efficiency predicted by the model was validated against an actual test, showing a maximum relative deviation of 8.82%. The sludge system showed a low diversity of microbial populations, and Bacillus plays a very important role in the reactor. The data obtained will be useful for optimizing the piggery wastewater treatment process. The new model provides good theoretical guidance with good practicality.
Introduction
With the continuous expansion of the pig industry, piggery wastewater production has continued to increase [1]. Piggery wastewater contains large amounts of nitrogen and organic matter, which pose serious threats to the environment and human health [2]. There are three main approaches to removing manure from piggeries in China, namely, urine-free manure (UFM), combined manure with urine (CMU), and soaked manure with urine (SMU). As a traditional mode, UFM collection has been widely used in China [3,4]. Numerous studies have been conducted on the treatment of piggery wastewater. For example, Zhang et al. [5] studied anaerobic co-digestion of piggery wastewater and food waste and identified the key factors governing co-digestion performance. Han et al. [2] investigated the effect of feeding strategy on the treatment of swine wastewater, showing that the feeding ratio had a more significant effect on the removal of phosphorus and nitrogen than on the removal of chemical oxygen demand (COD) in the sequencing batch reactor (SBR) system.
Pig manure wastewater generally contains high concentrations of COD, N (nitrogen), P (phosphorus), pathogenic bacteria, organic matter, and nutrients [6]. At present, incineration technology is predominant in the treatment of high concentrations of pathogenic microorganisms. However, it has many disadvantages, such as high energy consumption and complicated operation [7]. The self-heating high-temperature aerobic digestion process exploits thermophilic microbial metabolism (i.e., cell death, hydrolysis, and biosynthesis) to achieve the degradation of organic matter and the eradication of pathogens. In this process, the microorganisms use oxygen for their own oxidative decomposition, releasing heat. Two main processes operate in ATAD: the degradation of organic matter by microorganisms in the presence of active enzymes, and disintegration by extracellular enzymes. The second process has a key effect on the inactivation of pathogens and the removal of VSS [8]. ATAD is more stable than other sludge treatments (e.g., anaerobic digestion) and is not susceptible to fluctuations in operating conditions. Therefore, ATAD has gained widespread attention in the field of environmental engineering, and some ATAD processes have been successfully applied in other fields [9-11].
During aerobic digestion of sludge, the microorganisms are in the endogenous respiration phase, so the reaction rate and biomass follow a first-order reaction equation [12]. The most commonly used model is the one proposed by Adams. However, the Radall study found that suspended solids decreased during the digestion of waste activated sludge in a manner inconsistent with this, suggesting that the Adams model does not match the actual situation [13]. Gomez et al. [14,15] modeled ATAD with the ASM1 model but did not consider the effect of temperature.
In order to provide theoretical guidance for wastewater treatment and engineering design, the effects of parameters such as agitation rate, reaction temperature, aeration rate, and influent concentration on VSS removal were studied, and a laboratory-scale empirical model was established.
Wastewater Sources and Water Quality
The raw manure-free piggery wastewater was collected from a local pig farm in Harbin, China. The influent quality of the wastewater fluctuates with breeding seasonality. Compositions of the piggery wastewater and the experimental operation parameters are summarized in Table 1.
ATAD Reaction System
As shown in Figure 1, the ATAD device consists of a mixing system, an aeration system, and a thermometer. A water bath was used to maintain the reaction temperature. The main body of the reactor is wrapped with a 2 cm thick insulating layer. The effective volume of the reactor is 2.8 L.
In the experiment, a prescribed amount of piggery wastewater was added to the reactor, and the stirring speed and aeration rate were adjusted. The temperature of the reaction was measured using a thermometer and adjusted through the water bath. There is a sampling tube at the bottom of the reactor. In order to minimize the heat loss caused by the temperature difference between the environment and the reaction system, the temperature of the water bath was kept about 5 °C lower than the temperature of the reactor (except in Section 3.4). In addition, in order to keep the amount of water in the reactor constant, an equal amount of deionized water at the same temperature was added to the reactor daily. The reaction temperature was 20-60 °C, the stirring rate was 125-215 r⋅min−1, and the aeration rate was 10-30 L⋅h−1.
Analytical Methods
The effects of influent concentration, aeration rate, and stirring speed on the reaction temperature, the removal rates of TSS and VSS, and the reaction time were investigated. TSS, VSS, NH3-N, phosphorus (P), alkalinity, pH, CODCr, dissolved oxygen (DO), and temperature were measured once a day, and BOD5 was measured once every four days. These indicators were used to reflect the operation of the ATAD reactor. At the same time, we also evaluated the total bacterial count and the numbers of roundworm eggs, fecal coliforms, fecal streptococci, and Salmonella to assess the contribution of the ATAD process to sludge stabilization. Standard analytical methods were adopted.
Domestication Process
Two liters of activated aerobic sludge, taken from the Jilin Sewage Treatment Plant, were placed in a 2.5 L glass reaction vessel fitted with aeration and stirring apparatus. The jar was placed in a water bath thermostat, and an insulation layer was added around it to prevent heat loss. The temperature of the water bath was kept compatible with the system temperature (1-2 °C below it). The aeration rate was adjusted to 10-15 L⋅h−1 and the stirring speed to 180-200 r⋅min−1 [16].
The activated sludge was cultured for 7 days, and the temperature and settling ratio (SV) were measured daily. After 7 days of domestication, the aerobic sludge volume had been reduced by about one-third. The temperature could rise automatically, and the maximum temperature reached 62 °C, indicating successful domestication of the strain. By changing the temperature and sludge volume, the successfully domesticated thermophilic bacteria could be used as the inoculum for the reactor.
The Effect of Feed Concentration on VSS Removal Efficiency
Figure 2 shows the degradation results at different feed VSS concentrations in ATAD at an initial temperature of 25 °C. The feed VSS concentration varied from 7.00 to 55.00 g⋅L−1, and the effluent VSS concentrations ranged from 4.40 to 31.00 g⋅L−1 on the 17th day of reaction. The removal efficiency of VSS ranged from 37.14% to 61.00%. When the feed VSS concentration was as high as 40 g⋅L−1, the VSS removal efficiency was 52.08%, indicating that the ATAD process is applicable to biodegradation over a wide range of VSS concentrations. However, when the VSS concentration is greater than 30 g⋅L−1, the viscosity of the material increases, so that mixing is poor and oxygen transfer is inhibited. At low VSS concentrations (below 30 g⋅L−1), the self-heating is not enough to maintain a relatively high temperature and degradation rate [17,18]. The removal efficiency of Ascaris eggs in the ATAD system was 100%. When the feed concentration was high (greater than 30 g⋅L−1), the removal rates of the total flora, fecal coliforms, and Streptococcus faecalis were higher. The removal rate of fecal coliforms reached 99.95%, and the fecal coliform removal rate reached 99.99% when the sludge concentration was 40 g⋅L−1. When the sludge concentration was 30 g⋅L−1, the removal rate of S. faecalis was 99.94%. When the feed concentration was high, the organic content was large, and the heat from its oxidative decomposition effectively raised the temperature of the system, which is conducive to the inactivation of pathogens. Thus, the optimum influent VSS concentration is 30.00 g⋅L−1.
The Effect of Stirring Rates on VSS Biodegradation
Good mixing is needed for mass transfer and for distributing the reaction substrate, DO, and thermophilic bacteria; the mixing rate is therefore an important factor affecting these [1]. The feed VSS concentration was 29.20-31.50 g⋅L−1, and the reaction temperature finally reached 62 °C from room temperature. The effects of the stirring rate on VSS degradation are shown in Figure 3. At the same feed VSS concentration and aeration rate, a low mixing intensity (below 185 rpm) was not enough for full mixing, adequate endogenous respiration, complete heat release, a quick increase in the self-heating rate of the system, and a high VSS removal rate. However, when the stirring rate was increased to 215 rpm, the self-heating rate of the system declined to 4.0 °C⋅d−1 over the first 12 days, with a maximum VSS removal efficiency of 43.15%. When the stirring rate was 185 rpm, the biodegradation rate of VSS increased by 6.05% compared to that at 215 rpm. Ascaris egg removal rates reached 100%. When the stirring rate was 185 rpm, the removal rate of the total bacterial population was the highest, reaching 99.95%. Higher mixing intensity may dissipate more reaction heat and decrease the temperature of the system. Thus, 185 rpm was considered the optimal stirring rate.
The Effect of Aeration Rate on VSS Biodegradation
At a VSS concentration of 30 g⋅L−1, five different aeration rates (10, 15, 20, 25, and 30 L⋅h−1) were tested at a stirring rate of 185 rpm, running for 18 days. When the aeration rate was 10 L⋅h−1, the average temperature rise in the reactor was 2.07 °C⋅day−1, the system temperature reached 61 °C, and the removal rate of VSS reached 52%. When the aeration rate was 20 L⋅h−1, the VSS removal rate was the most efficient, reaching 50.32%, the sludge stabilization standard was achieved, and the final temperature of the system reached 63 °C. When the aeration rate was 30 L⋅h−1, the removal rate of VSS was 45.27%, which was 12.8% lower than that at 15 L⋅h−1. When the oxygen supply was less than 15 L⋅h−1, the oxygen needed for aerobic digestion by the microorganisms could not be met, resulting in poor microbial activity and slow VSS degradation. When the aeration rate was more than 20 L⋅h−1, aeration dissipated heat from the system, lowering the system temperature and thereby affecting VSS removal. Under the different aeration rates, the removal rates of Ascaris eggs were all 100%. When the aeration rate was 15 L⋅h−1, the removal rate of the total number of bacteria was the highest, reaching 99.99%. As shown in Figure 4, the removal rate of VSS was the highest at an aeration rate of 15 L⋅h−1, which is considered the best rate under the experimental conditions.
The Effect of Temperature on VSS Biodegradation
In general, the effect of temperature on reaction rate may be described by an Arrhenius-type formula. The reaction rate increases considerably with temperature owing to the exponential temperature dependence [19]. Therefore, high temperatures were generally chosen. The influent VSS was about 30 g⋅L−1, the aeration rate was 15 L⋅h−1, and the stirring rate was 185 rpm. The temperature was varied in the range of 20-60 °C. Figure 5 shows that the VSS biodegradation rate increased significantly with temperature. At 60 °C, the VSS removal rate was 61.19%, an increase of 22.48% compared with that at 20 °C and of 4.94% compared with that at 50 °C. For cost-effective and efficient removal of VSS, it was advantageous to maintain a higher temperature. The maximum temperature obtained from self-heating was 64 °C in the process.
Regression Results and Model Validation
At different temperatures, the reaction rates are shown in Figure 5. The rate constant k [= k0 exp(−Ea/RT)] in the model was obtained by the least-squares regression method. The regression covered the second day through the twelfth day. The k value was calculated from the regression equation. The results obtained are shown in Table 2.
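Formula (1) itself is garbled in this extraction. From the abstract's description (Arrhenius temperature dependence combined with power-law terms for the initial VSS concentration, aeration rate, and stirring rate), the rate law presumably has a general form along the lines of the following, where the symbols α, β, and γ for the exponents are our reconstruction rather than the authors' notation:

−d(VSS)/dt = k0 · exp(−Ea/RT) · C0^α · Q^β · N^γ, with α = 1.0587, β = −0.0976, γ = −0.1618

where C0 is the initial VSS concentration, Q the aeration rate, N the stirring rate, and T the absolute temperature.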
By taking the logarithm of the Arrhenius formula, a linear relationship between ln k and 1/T was obtained. From the slope, the apparent activation energy could be calculated, and from the intercept, the pre-exponential factor k0 (as shown in Figure 6); these are 2.827 kJ⋅mol−1 and 3.59 (g⋅L−1)−1.0587⋅(mL⋅s−1)0.0976⋅(r⋅min−1)0.1618, respectively. The correlation coefficient for k0 is 0.9971. By taking logarithms of both sides of formula (1), the exponents for the initial VSS concentration, aeration rate, and stirring rate were obtained from the data in Figures 2, 3, and 4, respectively. These exponents, shown in Figures 7-9, are 1.0587, −0.0976, and −0.1618, respectively. The reaction order with respect to VSS concentration is 1.0587 in this study, which differs from the first-order kinetics obtained by Liu et al. [25,26] for aerobic sludge digestion or biological systems [21].
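The linearised fit itself is a one-liner; a minimal sketch of the ln k versus 1/T regression is given below. The rate-constant values are placeholders (Table 2 is not reproduced here), chosen only so that the recovered Ea is of the reported order.

import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

T = np.array([293.15, 303.15, 313.15, 323.15, 333.15])  # K (20-60 C)
k = np.array([0.90, 0.94, 0.97, 1.00, 1.03])            # placeholder rate constants

# ln k = ln k0 - Ea/(R*T): slope = -Ea/R, intercept = ln k0
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R           # apparent activation energy, J mol^-1
k0 = np.exp(intercept)    # pre-exponential factor

print(f"Ea = {Ea / 1000:.3f} kJ/mol, k0 = {k0:.3f}")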
The VSS concentration at time t, VSSt, was obtained according to equation (2). The data calculated by equation (3) are compared with the averages of the experimental data in Table 3, which shows that the maximum relative deviation is 8.82%; the VSS removal efficiency can therefore be predicted by equation (3).
Microbial Community Structure Characteristics
The sludge aerobic digestion system is affected by the digestion temperature and the source of the sludge, and the population of thermophilic microorganisms is complex. Studies have shown that there are Clostridium species in addition to aerobic thermophiles in high-temperature aerobic reactors. Based on the OTU cluster analysis results, Table 4 shows the various types of bacteria and their shares.
As can be seen from Table 4, the sludge system showed a low level of microbial population diversity; a total of seven species were detected. More than 89% of the thermophilic microorganisms in the device belong to the genus Bacillus, with Bacillus stearothermophilus being the most abundant species in Bacillus (32% of isolates), followed by Bacillus stearothermophilus (22% of isolates). In addition to the aerobic thermophilic bacteria, the system contained other thermophilic microorganisms such as Schineria larvae and clostridia. The Ugwuanyi study found that only when the sludge digestion temperature is higher than 55 °C does Bacillus stearothermophilus become the dominant population. When the digestion temperature was 50 °C, Bacillus licheniformis and Bacillus coagulans became the dominant species. The types of thermophilic bacteria are affected by the sludge source and other factors. There may be great differences in the thermophilic bacteria present in the ATAD reactors of different wastewater treatment plants, but Bacillus plays a very important role in the reactor.
Conclusions
An empirical Arrhenius-type biodegradation kinetic model was obtained from the ATAD tests on piggery wastewater. The exponents for VSS concentration, aeration rate, and agitation rate, each solved for by the logarithmic differential method, are 1.0587, −0.0976, and −0.1618, respectively. In the experimental temperature range, the apparent activation energy Ea is 2.827 kJ⋅mol−1, the pre-exponential factor k0 is 3.59 (g⋅L−1)−1.0587⋅(mL⋅s−1)0.0976⋅(r⋅min−1)0.1618, and the correlation coefficient for k0 is 0.9971. The relative deviation between the data obtained from the empirical formula and the experimental data is less than 8.82%. The sludge system showed a low diversity of microbial populations; more than 89% of the thermophilic microorganisms in the device belong to the genus Bacillus. The model provides theoretical guidance for wastewater treatment and engineering design.
Table 1: Compositions of the piggery wastewater.
Figure 1: The sketch map of the ATAD experimental equipment.
Table 2: The rate constants in the empirical model.
Table 3: Relative deviation between experimental and calculated VSS concentrations. | 2018-12-09T04:34:06.708Z | 2018-04-23T00:00:00.000 | {
"year": 2018,
"sha1": "c84d5aa3764a5923aa33b2fb4d2b25795cf64294",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jchem/2018/8131820.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c84d5aa3764a5923aa33b2fb4d2b25795cf64294",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
22335250 | pes2o/s2orc | v3-fos-license | Role of Janus Kinase/Signal Transducer and Activator of Transcription and Mitogen-activated Protein Kinase Cascades in Angiotensin II- and Platelet-derived Growth Factor-induced Vascular Smooth Muscle Cell Proliferation*
In vascular smooth muscle cells, the induction of early growth response genes involves the Janus kinase (JAK)/signal transducer and activators of transcription (STAT) and the Ras/Raf-1/mitogen-activated protein kinase cascades. In the present study, we found that electroporation of antibodies against MEK1 or ERK1 abolished vascular smooth muscle cell proliferation in response to either platelet-derived growth factor or angiotensin II. However, anti-STAT1 or -STAT3 antibody electroporation abolished proliferative responses only to angiotensin II and not to platelet-derived growth factor. AG-490, a specific inhibitor of the JAK2 tyrosine kinase, prevented proliferation of vascular smooth muscle cells, complex formation between JAK2 and Raf-1, the tyrosine phosphorylation of Raf-1, and the activation of ERK1 in response to either angiotensin II or platelet-derived growth factor. However, AG-490 had no effect on angiotensin II- or platelet-derived growth factor-induced Ras/Raf-1 complex formation. Our results indicate that: 1) STAT proteins play an essential role in angiotensin II-induced vascular smooth muscle cell proliferation, 2) JAK2 plays an essential role in the tyrosine phosphorylation of Raf-1, and 3) convergent mitogenic signaling cascades involving the cytosolic kinases JAK2, MEK1, and ERK1 mediate vascular smooth muscle cell proliferation in response to both growth factor and G protein-coupled receptors.
Work in vascular smooth muscle cells (VSMC) and similar glomerular mesangial cells has shown that protein tyrosine phosphorylation plays a critical role in angiotensin II (Ang II)-mediated intracellular signaling cascades. This is true despite the fact that G protein-coupled receptors in general, and the Ang II AT1 receptor in particular, possess no intrinsic tyrosine kinase activity. It is also now recognized that Ang II can act not only as a vasoactive peptide but also as a growth factor. In particular, Ang II has been shown to stimulate proliferative and hypertrophic growth in VSMC, glomerular mesangial cells, cardiac fibroblasts, and myocytes via AT1 receptor binding (4, 7-9). Like classic growth factors (e.g. platelet-derived growth factor (PDGF) and epidermal growth factor) and some cytokines (e.g. interferons and interleukins) (4, 8-10), Ang II is also capable of stimulating a rapid increase in the mRNA levels of c-fos, an early growth response gene implicated in VSMC proliferation (4, 7, 8). However, the Ang II-stimulated intracellular signaling cascades responsible for c-fos induction, and therefore proliferation, in VSMC have not been well defined.
One candidate mitogenic signaling cascade involves the activation of the small GTP-binding protein, Ras, which is traditionally mediated via classic growth factor receptors (4). Ras activation promotes the formation of a membrane-bound complex with Raf-1 (a serine/threonine protein kinase). Subsequent tyrosine phosphorylation of Raf-1 leads to its activation and the sequential stimulation of several cytoplasmic protein kinases, collectively known as the mitogen-activated protein kinase (MAPK) pathway. This phosphorylation cascade in turn activates a set of regulatory elements leading to the stimulation of early response genes and cellular growth (4). Our laboratory (5) has previously shown that, as with classic growth factors, Ang II-induced protein tyrosine phosphorylation promotes the activation of p21ras in VSMC.
A second mitogenic cascade that is activated by many cytokine receptors (e.g. interferons and interleukins) involves the JAK (Janus kinase) family of cytoplasmic tyrosine kinases (11,12). JAK-mediated tyrosine phosphorylation of STAT (signal transducers and activators of transcription) family members promotes the translocation of these transcription factors to the nucleus, where they bind to specific DNA motifs and induce c-fos gene transcription (11-14). In VSMC, our laboratory (6) has previously shown that Ang II stimulates the tyrosine phosphorylation of JAK isoforms (JAK2 and TYK2), the tyrosine kinase activity of JAK2, and the tyrosine phosphorylation of STAT isoforms (STAT1, STAT2, and STAT3). Finally, Ang II induces the formation of a complex between JAK2 and the AT1 receptor itself.
Our present study examines the role of the JAK/STAT and Ras/Raf-1/MAPK signaling cascades in the cellular proliferation mediated by activation of the G protein-coupled AT1 receptor and classic growth factor receptors (e.g. PDGF). Ang II plays a crucial role in the regulation of systemic arterial blood pressure, cardiovascular and renal growth, and sodium homeostasis (7). Importantly, angiotensin-converting enzyme inhibitors have become a mainstay in the treatment of hypertension, congestive heart failure, cardiac hypertrophy, myocardial infarction, and chronic renal failure (4). Better definition of Ang II-mediated mitogenic signaling provides the potential for additional specific therapeutic interventions.
Cell Proliferation Assay and Coulter Counting-Proliferation was measured using the CellTiter 96 AQueous non-radioactive cell proliferation assay (Promega, Inc., Madison, WI) (16). This assay is based on the cellular conversion of the colorimetric reagent MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium salt) into soluble formazan by dehydrogenase enzymes found only in metabolically active, proliferating cells. MTS in Dulbecco's phosphate-buffered saline (pH 6.0) was mixed with the electron-coupling reagent phenazine methosulfate. The absorbance of formazan, measured at 490 nm using a 96-well enzyme-linked immunosorbent assay plate reader interfaced with a personal computer (model 3550, Bio-Rad), is directly proportional to the number of living cells in culture. To confirm the accuracy of our MTS proliferation assay, the actual increase in cell number was also directly assessed with a Coulter counter (model ZM, Coulter Corp., Hialeah, FL).
VSMC were grown to confluence in a 75-mm2 flask and detached with trypsin-EDTA (0.05% trypsin, 0.53 mmol/liter EDTA; Life Technologies Inc.). 20,000 cells were plated into 96-well plates and allowed to settle for 4 h in DMEM supplemented with 10% fetal bovine serum. Prior to experiments, cells were growth-arrested in serum-deprived DMEM for 24 h (time 0). Cells were then stimulated with 10^-7 mol/liter Ang II (Sigma) or 0.33 mmol/liter PDGF (Life Technologies Inc.). After timed ligand exposure, the phenazine methosulfate/MTS mix was added to each well (final volume, 20 µl/100 µl medium), and the plates were incubated for an additional 60 min (5% CO2 at 37 °C). A 10% SDS solution was then added to stop the reaction, and the absorbance of formazan was measured at 490 nm.
[3H]Thymidine Incorporation-VSMC were plated in 96-well plates and maintained in DMEM supplemented with 10% fetal bovine serum as described for the cell proliferation assay above. 24 and 48 h after ligand exposure, cells were pulsed with 1 mCi/ml [3H]thymidine (New England Nuclear, Boston, MA) and then harvested as trichloroacetic acid-precipitable material. Cells were washed with phosphate-buffered saline, incubated in 10% trichloroacetic acid at 4 °C, dissolved at room temperature in 1 mol/liter NaOH, and dried on filter paper. The paper was washed three times with phosphate-buffered saline, and the samples were then placed in scintillation liquid and counted on a scintillation counter (Beckman Inc., Palo Alto, CA). Data were plotted as the number of cpm/well. Each experimental data point represents duplicate wells from at least four different experiments.
Electroporation Procedure-Cells were plated in 96-well plates and growth-arrested in serum-deprived DMEM for 24 h prior to experiments. As described previously (1,5), VSMC were electroporated in 96-well plates using a Multi-Coaxial electrode (model P/N 747, BTX Inc., San Diego, CA) in Ca2+- and Mg2+-free Hanks' balanced salt solution (pH 7.4; 5 mmol/liter KCl, 0.3 mmol/liter KH2PO4, 138 mmol/liter NaCl, 4 mmol/liter NaHCO3, and 0.3 mmol/liter Na2HPO4) containing antibodies at a final concentration of 10 mg/ml. Following electroporation, cells were incubated for an additional 30 min at 37 °C (5% CO2), washed once with serum-free DMEM, and then left in serum-free DMEM prior to the experiments.
FIG. 1. Effect of anti-STAT, -ERK1, or -MEK1 antibody electroporation on Ang II- and PDGF-induced VSMC proliferation. A, VSMC were exposed to serum-free DMEM only, or to serum-free DMEM supplemented with Ang II (10^-7 mol/liter) or PDGF (0.33 mmol/liter), for timed periods prior to measuring cell proliferation. B, VSMC were electroporated with 10 mg/ml rabbit anti-MEK1 (closed symbols) or anti-ERK1 (open symbols) prior to exposure to Ang II (circles) or PDGF (squares). C, VSMC were electroporated with anti-STAT1 (open symbols) or anti-STAT3 (closed symbols) prior to exposure to Ang II (circles) or PDGF (squares). Cell proliferation is expressed as absorbance of formazan at 490 nm. Data represent the means ± S.D. for at least four experiments (each in duplicate).
(New England Biolabs, Inc.) antibodies. The latter antibody recognizes only the catalytically activated forms (phosphorylated on tyrosine residue 204) of p44 and p42 MAPK (ERK1 and ERK2, respectively) (15). Finally, proteins were visualized using horseradish peroxidase conjugated to goat anti-mouse or donkey anti-rabbit IgG and an enhanced chemiluminescence kit.
Statistical Analysis-Data are reported as the means ± S.D. for at least four experiments (each in duplicate). Statistical analysis of the raw data was performed by one-way analysis of variance followed by an appropriate post-hoc test (Bonferroni) for comparisons between groups. Data were analyzed and plotted using SigmaStat and SigmaPlot (Jandel Scientific, San Rafael, CA). A probability < 0.05 was considered significant.
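As a minimal sketch of the described statistics (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons), the following Python fragment reproduces the procedure outside SigmaStat; the absorbance values are invented for illustration.

```python
from itertools import combinations
from scipy import stats

# made-up formazan absorbance readings (490 nm) for three treatment groups
groups = {
    "control": [0.21, 0.23, 0.20, 0.22],
    "AngII":   [0.35, 0.33, 0.36, 0.34],
    "PDGF":    [0.48, 0.46, 0.49, 0.47],
}

# one-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4g}")

# Bonferroni post-hoc: pairwise t tests with alpha divided by the
# number of comparisons; corrected p < 0.05 is considered significant
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    print(f"{g1} vs {g2}: p={p:.4g}, significant={p < alpha}")
```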
RESULTS
Cellular proliferation, determined by the MTS assay (see "Materials and Methods"), was measured in VSMC after timed exposures to 10^-7 mol/liter Ang II or 0.33 mmol/liter PDGF. Both PDGF and Ang II significantly stimulated proliferation within 12 h when compared with cells that had not been exposed to either growth factor or G protein-coupled receptor ligands (Fig. 1A). PDGF-induced proliferation exceeded Ang II-induced proliferative responses.
MEK1 and ERK1 Are Required for Both Ang II- and PDGF-induced VSMC Proliferation-Physiologic cell growth and differentiation mediated by the Ras/Raf-1/MAPK cascade involves the activation of the dual-specificity MAPK kinase, MEK1, and the serine/threonine MAPK, ERK1 (4). Other investigators have recently demonstrated that both Ang II and PDGF are capable of activating ERK1 in VSMC (4). To evaluate the potential role of the MAPK cascade in VSMC proliferation, antibodies against MEK1 and ERK1 were electroporated into VSMC prior to exposure to Ang II or PDGF. VSMC proliferation in response to Ang II or PDGF was abolished in the presence of anti-MEK1 or anti-ERK1 antibodies (Fig. 1B). In serum-free negative controls or electroporation experiments with pooled rabbit IgG or sham-absorbed anti-MEK1 or anti-ERK1 antibodies, no inhibition of Ang II- or PDGF-induced VSMC proliferation was observed (data not shown).
In VSMC electroporated with mock antibody (anti-IgG), DNA synthesis measured as [3H]thymidine incorporation increased significantly within 24 h of Ang II or PDGF exposure (Fig. 2A). Also consistent with our proliferation results (Fig. 1A), [3H]thymidine incorporation was greater after PDGF than after Ang II exposure. We then tested the role of MAPK cascade components in VSMC DNA synthesis. Indeed, the electroporation of anti-MEK1 or anti-ERK1 antibodies abolished DNA synthesis in response to either Ang II or PDGF (Fig. 2A).
These results suggested that VSMC proliferation and DNA synthesis, in response to both G protein-receptor coupled (i.e. Ang II) and growth factor (i.e. PDGF) receptor ligands, involve the MAPK cascade and are dependent on the activation of MEK1 and ERK1.
STAT1 and STAT3 Are Required for Ang II- but Not PDGF-induced VSMC Proliferation-Previous work by our laboratory (6) has shown that the cytosolic tyrosine kinase JAK2 plays a critical role in Ang II-mediated signaling events, including the activation of STAT proteins. In the present study, we found that Ang II-induced proliferation was virtually abolished by the electroporation of anti-STAT1 or anti-STAT3 antibodies (Fig. 1C). In contrast, there was no statistical difference between the PDGF-induced proliferative responses observed in normal VSMC (Fig. 1A) and those in VSMC electroporated with anti-STAT1 or anti-STAT3 antibodies (Fig. 1C). The latter observation suggested that the blockage of Ang II-induced proliferation was not simply a toxic effect of the electroporated antibodies. In electroporation experiments with sham-absorbed anti-STAT1 or anti-STAT3 antibodies, no inhibitory effect on Ang II-induced VSMC proliferation was observed (data not shown).
Similarly, Ang II-induced [3H]thymidine incorporation was completely prevented in cells electroporated with either anti-STAT1 or anti-STAT3 antibodies (Fig. 2B). DNA synthesis in response to PDGF, however, was not affected by electroporation of antibodies against the STAT1 and STAT3 isoforms, whereas VSMC electroporated in the presence of a mock antibody (anti-IgG) showed the typical increase in [3H]thymidine incorporation in response to Ang II or PDGF.
JAK2 Tyrosine Kinase Activity Is Essential for Both Ang II- and PDGF-induced VSMC Proliferation-Our laboratory (6) has previously shown that in VSMC Ang II induces the rapid tyrosine phosphorylation and activation of the cytoplasmic tyrosine kinase JAK2. JAK2 activation, in turn, promotes the phosphorylation of STAT1 and STAT3 tyrosine residues. Our above results suggested that STAT1 and STAT3 are necessary and specific for VSMC proliferative responses linked to the G protein-coupled AT1 receptor but not the PDGF-β receptor. Therefore, we investigated the role of the JAK2 tyrosine kinase in VSMC proliferation. We were unsuccessful in blocking Ang II- or PDGF-induced VSMC proliferation, DNA synthesis, or autotyrosine phosphorylation of JAK2 with the electroporation of commercially available anti-JAK2 polyclonal antibodies (data not shown). Because not all antibodies block or neutralize the biologic activities of their respective antigens, we investigated the effect of AG-490, a specific JAK2 inhibitor (17,18). AG-490 belongs to the tyrphostin family of tyrosine kinase inhibitors, which inhibit protein tyrosine kinases by binding to the substrate binding site (19). Pretreatment of VSMC with 10 µM AG-490 did indeed block Ang II-induced VSMC proliferation (Fig. 3A), DNA synthesis (Fig. 3B), and the tyrosine phosphorylation of JAK2 (Fig. 4). AG-490 also blocked PDGF-induced VSMC proliferation (Fig. 3A), DNA synthesis (Fig. 3B), and JAK2 tyrosine phosphorylation (Fig. 4). We found that 16 h of pretreatment with AG-490 produced maximal inhibition of Ang II- and PDGF-induced JAK2 tyrosine phosphorylation events while still allowing recovery of VSMC proliferative responses when the AG-490 was removed from the bath. Because PDGF-induced VSMC proliferation and DNA synthesis required JAK2 activity but were unaffected by anti-STAT1 or anti-STAT3 antibody electroporation, we examined the possibility that the JAK2-dependent proliferative responses were mediated through an alternative mitogenic pathway other than the JAK/STAT cascade.
Evidence from several groups suggests that JAK2 forms a membrane complex with Ras/Raf-1 and is required for Raf-1 activation in several different nonvascular mammalian cell types (20-22). We found that JAK2 inhibition with AG-490 pretreatment blocked both Ang II- and PDGF-induced complex formation between JAK2 and Raf-1 (Fig. 5) and the tyrosine phosphorylation of Raf-1 (Fig. 6). We then examined the effect of JAK2 inhibition on Ang II- and PDGF-mediated stimulation of ERK tyrosine phosphorylation. VSMC lysates were probed with a phospho-specific MAPK antibody that recognizes only the catalytically activated forms (phosphorylated on tyrosine residue 204) of p44 and p42 MAPK (ERK1 and ERK2, respectively) (15). AG-490 blocked both ERK1 and ERK2 tyrosine phosphorylation in response to Ang II or PDGF (Fig. 7). Finally, we examined the specificity of AG-490 for blocking tyrosine phosphorylation-dependent events known to be induced by Ang II and PDGF (2,5,7,8). We found that JAK2 inhibition with AG-490 did not prevent PDGF-induced tyrosine autophosphorylation of the PDGF-β receptor itself (Fig. 8), nor did it prevent Ang II- or PDGF-induced Ras/Raf-1 complex formation (Fig. 9) or phospholipase C-γ1 tyrosine phosphorylation (data not shown). Therefore, this specific JAK2 inhibitor does not block non-JAK cytosolic tyrosine kinases (e.g. pp60c-src) or the intrinsic tyrosine kinase activity of growth factor receptors (e.g. the PDGF-β receptor). These results suggested that JAK2 plays a specific role in the tyrosine phosphorylation of Raf-1 and the activation of ERK1 and ERK2, providing a mechanism for cross-talk between two diverse mitogenic pathways, namely the JAK/STAT and MAPK cascades.

DISCUSSION

Several groups have previously shown that two mitogenic cascades, the JAK/STAT and Ras/Raf-1/MAPK, are stimulated by the AT1 receptor in VSMC (4,6,14,23,24). Both cascades link the binding of ligands to cell surface receptors with intracellular signaling elements that promote nuclear transcription events resulting in cellular growth (4). Our laboratory (6) has shown that Ang II stimulates the tyrosine phosphorylation and activation of JAK2 and subsequently the tyrosine phosphorylation of STAT isoforms in VSMC. Bhat et al. (23,24) have also demonstrated in cultured neonatal fibroblasts that Ang II induces STAT protein phosphorylation, translocation of STAT proteins into the nucleus, and initiation of early response gene transcription. Our laboratory (5) has shown that in VSMC Ang II stimulates the proto-oncogene p21ras. Activated Ras then forms a membrane-bound complex with Raf-1 (a serine/threonine protein kinase), leading to the activation of Raf-1 by tyrosine phosphorylation (4). Raf-1 then phosphorylates and activates MEK1, which in turn leads to the activation of ERK1. Because G protein-coupled receptors lack intrinsic tyrosine kinase activity, the activation of these mitogenic signaling cascades requires the recruitment of cytosolic kinases, such as pp60c-src, MEK1, and JAK2. Previous work by our laboratory (5) has shown that blocking of pp60c-src with electroporated anti-pp60c-src antibodies prevented Ang II-induced formation of the Ras/Raf-1 membrane complex in VSMC. Our present study shows that inhibition of JAK2 with AG-490 and inhibition of MEK1 with antibody electroporation prevents VSMC proliferation and DNA synthesis in response to Ang II.
Together, these observations indicate that the G protein-coupled AT 1 receptor stimulates major mitogenic signaling pathways in VSMC via tyrosine phosphorylation, in particular the JAK/STAT and the Ras/Raf-1/MAPK cascades.
FIG. 7. Representative bands corresponding to the molecular masses of ERK1 and ERK2 are shown from lysates of cells with (lower band) or without (upper band) 10 µM AG-490 pretreatment for 16 h prior to timed exposures to Ang II (10^-7 mol/liter; right) or PDGF (0.33 mmol/liter; left). Bottom, VSMC were exposed to serum-free DMEM only (circles) or serum-free DMEM supplemented with the specific JAK2 inhibitor AG-490 (10 µM) (triangles) for 16 h prior to timed exposure to Ang II (10^-7 mol/liter) (open symbols) or PDGF (0.33 mmol/liter) (closed symbols). Bands were quantitated by densitometry using a La Cie scanner interfaced with a personal computer. Each band was scanned in two dimensions, and the density was corrected for the background present in the lane. Data represent corrected densities for each time point and are expressed as arbitrary units plotted against time of Ang II or PDGF exposure (mean ± S.E.; n = 3).

In contrast to the G protein-coupled AT1 receptor, classic growth factor receptors that possess intrinsic tyrosine kinase activity (e.g. the PDGF and epidermal growth factor receptors) are thought not to require cytosolic tyrosine kinases to mediate downstream proliferative signaling events (7,8). Consistent with this premise, our laboratory (2,5) has previously demonstrated that inhibition of cytosolic pp60c-src tyrosine kinase activity prevents Ang II- but not PDGF-induced stimulation of phospholipase C-γ1 and Ras/Raf-1 complex formation in VSMC. In contrast, JAK2 inhibition with AG-490 does not block Ang II- or PDGF-induced Ras/Raf-1 complex formation or phospholipase C-γ1 tyrosine phosphorylation. In addition, AG-490 did not prevent PDGF-β receptor autophosphorylation in response to PDGF. However, in the present study we find that inhibition of the JAK2 tyrosine kinase virtually abolishes VSMC proliferation in response to PDGF. Our results indicate that JAK2 plays a crucial role in the VSMC proliferation mediated by both G protein-coupled (i.e. Ang II) and classic growth factor (i.e. PDGF) receptor ligands. Surprisingly, electroporation of VSMC with anti-STAT1 or anti-STAT3 antibodies did not prevent PDGF-induced proliferation or DNA synthesis. Therefore, the JAK2-dependent VSMC proliferation that we observe in response to PDGF is likely not mediated via the traditional JAK-induced tyrosine phosphorylation of the STAT1 or STAT3 transcription factors. Several groups have shown that growth hormone-, interferon-, and interleukin-induced activation of early growth response genes (e.g. c-myc, c-fos, and c-jun), cell proliferation, Ras/JAK2/Raf-1 complex formation, and Raf-1 kinase activity are dependent on JAK2 in several nonvascular mammalian cell types (20-22). Indeed, in the present study we find that both Ang II- and PDGF-induced JAK2/Raf-1 complex formation, Raf-1 tyrosine phosphorylation, and ERK1 and ERK2 kinase activity are dependent on JAK2 activity. Therefore, our data provide a key molecular link between the mitogenic JAK/STAT and Ras/Raf-1/MAPK cascades in VSMC.
In summary, our present study emphasizes the important role played by the JAK/STAT and Ras/Raf-1/MAPK cascades in mediating VSMC proliferation in response to both G protein-coupled AT1 receptors and classic growth factor receptors. We have shown that JAK2 activation by both G protein-coupled and growth factor receptors provides a convergent signaling element for these two diverse mitogenic cascades in VSMC. Our present and past studies (1-6) suggest that Ang II-induced VSMC proliferation requires protein tyrosine phosphorylation via JAK2, MEK1, and pp60c-src, which in turn are necessary for the activation of several diverse mitogenic factors, specifically the STAT proteins, Raf-1, ERK1, and phospholipase C-γ1. More importantly, the inhibition of these individual signaling molecules prevents VSMC proliferation. Current clinical therapeutic interventions for the prevention or regression of maladaptive cardiovascular growth (e.g. in hypertension, congestive heart failure, cardiac hypertrophy, atherosclerosis, and angioplasty injury) include the inhibition of the G protein-coupled AT1 receptor (e.g. with the AT1 receptor antagonist losartan) or of its ligand (e.g. with angiotensin-converting enzyme inhibitors) (4,7,8,25). Our better understanding of the two mitogenic signaling pathways investigated in the present study presents potential new and specific targets for future therapeutic interventions in the various cardiovascular diseases associated with VSMC proliferation.
"year": 1997,
"sha1": "ed4d32006b65121aec5b9c63ef94dfe45bb1194f",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/272/39/24684.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "cf74f78485a83b187e7f015e3d212d90af37f9d2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
224862545 | pes2o/s2orc | v3-fos-license | Evaluating the impact of CNES real-time ionospheric products on multi-GNSS single-frequency positioning using the IGS real-time service
Multi-global navigation satellite system (GNSS) real-time (RT) single-frequency (SF) positioning with a low-cost receiver has received increasing attention in recent years due to its large number of possible applications. One major challenge in single-frequency positioning is the effective mitigation of the ionospheric delays, since they are a dominant error source. Nowadays, a high-precision RT ionospheric vertical total electron content (VTEC) product is released by the Centre National d'Etudes Spatiales (CNES) through its real-time service (RTS). The effect of this product on RT single-frequency positioning needs to be investigated. In this study, we evaluate the quality of the multi-GNSS CLK93 orbit and clock products through comparison with the final precise products, and comprehensively evaluate the impact of the CNES VTEC products on multi-GNSS RT-SF-SPP (Standard Point Positioning)/PPP (Precise Point Positioning) performance. Datasets from 46 Multi-GNSS Experiment (MGEX) stations and the CLK93 corrections for 14 consecutive days in 2019 are collected and processed under different scenarios. Experimental results show that the CNES VTEC products can replace the final GIM products in single- and multi-GNSS SF-SPP with the same positioning accuracy requirements during a period of mild solar activity (Kp index less than 3). Regarding the kinematic RT-SF-PPP, the (re-)convergence can also be improved by adopting the prior CNES-VTEC constraints. Compared with the IF RT-SF-PPP with quad-constellation, the positioning accuracy of the CNES-VTEC-constrained RT-SF-PPP can be improved by about 10.30%, with average RMS reaching 17.9, 19.8 and 32.3 cm in the North, East and Up components, respectively. Compared with the final precise products of GBM, the satellite orbit accuracy of the CLK93 products is 4, 5, 12 and 16 cm for GPS, Galileo, GLONASS and BDS, respectively. As for the CLK93 satellite clocks, the RMS accuracy for GPS, Galileo, GLONASS and BDS is 0.3, 0.4, 2.5 and 1.8 ns, respectively.
Introduction
With the rapid development of multi-frequency and multi-global navigation satellite systems (GNSS), standard point positioning (SPP) and precise point positioning (PPP), as absolute positioning techniques with a stand-alone GNSS receiver, have been widely used in numerous fields such as vehicle navigation, meteorology and natural hazard monitoring (Zumberge et al., 1997; Kouba and Heroux, 2001; Zhang and Andersen, 2006; Jin et al., 2017; Wang et al., 2019b). In general, dual-frequency (DF) PPP is able to provide millimeter-level and centimeter-level positioning accuracy in static and kinematic mode, respectively (Li et al., 2011). However, the cost of geodetic multi-frequency GNSS receivers limits their commercial application, and a great number of possible real-time applications only require sub-meter-level or decimeter-level positioning accuracy. Therefore, real-time single-frequency precise point positioning (RT-SF-PPP) with a low-cost receiver has attracted great attention in the GNSS market (van Bree and Tiberius, 2012; de Bakker and Tiberius, 2017; Odolinski and Teunissen, 2017).
The biggest error source for single-frequency positioning is the ionospheric delay, as it cannot be eliminated precisely and may lead to range errors of up to 100 m in the navigation signal during solar activity (Liu and Yang, 2016). Nowadays, several broadcast ionospheric models can be employed to mitigate the ionospheric effect in RT single-frequency positioning. The Klobuchar model broadcast in the broadcast ephemeris is widely used by both GPS and GLONASS users in RT mode; it has the advantages of a simple structure and high efficiency, but the ionospheric errors can only be mitigated by about 50% (Feess and Stephens, 1987; Klobuchar, 1987). For the European Galileo, a more complex model named NeQuick has been established, which can mitigate approximately 70% of the ionospheric errors for single-frequency users (Bidaine, 2012). For the second phase of the Chinese BeiDou navigation satellite system (BDS-2), the satellites broadcast an improved Klobuchar model named CIM (COMPASS Ionospheric Model), which can correct for around 65% of the ionospheric errors (Wu et al., 2013). With the ongoing deployment of BDS-3 since the end of 2018, a BeiDou global broadcast ionospheric delay correction model (BDGIM) has been proposed, which can mitigate around 77% of the ionospheric errors on the global scale (Yuan et al., 2019). Besides, Hoque and Jakowski (2015) proposed an alternative ionospheric correction algorithm called the Neustrelitz TEC broadcast model (NTCM-BC); its performance is comparable to the NeQuick model, and it is as easy to compute as the Klobuchar model. Although the abovementioned ionospheric models support RT processing, their relatively low accuracy cannot meet the requirements of high-precision RT single-frequency users.
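The size of the problem can be illustrated with the standard first-order relation between TEC and group delay, I = 40.3·TEC/f². The sketch below, with an assumed slant TEC of 100 TECU, shows why an uncorrected ionosphere dominates the single-frequency error budget.

```python
def iono_delay_m(tec_tecu, freq_hz):
    """First-order ionospheric group delay in metres for a given slant TEC
    (1 TECU = 1e16 electrons/m^2) and carrier frequency in Hz."""
    return 40.3 * tec_tecu * 1e16 / freq_hz**2

F_L1 = 1575.42e6  # GPS L1 frequency, Hz

# an assumed 100 TECU of slant TEC maps to ~16 m of code delay on L1;
# a 50% Klobuchar-type correction would still leave ~8 m of residual error
print(iono_delay_m(100.0, F_L1))
```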
Since 1 April 2013, an open-access real-time service (RTS) has been operated by the International GNSS Service (IGS). Currently available RTS products, including satellite orbit corrections, clock corrections, code biases and phase biases as well as a vertical total electron content (VTEC) message, are formatted as state space representation (SSR) messages (RTCM Special Committee, 2016). Using these messages, RT-PPP can be conducted with GNSS single- or multi-frequency receivers anywhere in the world. The Centre National d'Etudes Spatiales (CNES) provides the VTEC products and other corrections through its two RTS streams (CLK92 and CLK93), covering GPS, GLONASS, BDS and Galileo satellites. Roma et al. (2016) showed initial results for the CNES VTEC products referenced to six different Global Ionospheric Maps (GIMs) over 15 days. Nie et al. (2019) evaluated the quality of the CNES VTEC products for 374 consecutive days and conducted RT-SF-PPP experiments using Multi-GNSS Experiment (MGEX) static data and automotive kinematic data. The results showed that the root mean square (RMS) difference of the CNES VTEC products with respect to the IGS final GIM is about 1-3 TECU, and that RT-SF-PPP using this ionosphere product can achieve sub-meter-level and meter-level positioning accuracy in the horizontal and vertical components, respectively. To improve the convergence of RT-PPP, the CNES VTEC products have been applied to undifferenced and uncombined RT-PPP as an extra constraint owing to their high precision (Liu et al., 2018). On the one hand, the majority of current contributions mainly focus on the validation and performance evaluation of RT-DF-PPP using RTS orbit and clock corrections. On the other hand, previous studies only concerned the contribution of the CNES VTEC products to single- or dual-constellation (e.g. GPS/GLONASS, GPS/Galileo) SF-PPP users. With the development of multi-GNSS and the IGS RTS, it is necessary to perform a more comprehensive performance evaluation of RT-SF-PPP with quad-constellation using the CNES VTEC products as a priori constraints. Besides, up to now, no literature has been dedicated to the impact of the CNES VTEC products on SF-SPP positioning performance. The corresponding numerical results are presented in this work.
Firstly, we briefly introduce the GPS + GLONASS + BDS + Galileo combined model for SF-SPP/PPP and RT processing strategies. Next, the quality of RT orbit, clock and VTEC messages from the CLK93 stream is investigated. Thereafter, a comprehensive analysis of the contribution of CNES VTEC products to multi-GNSS SF-SPP/PPP is performed. Finally, some findings of this paper are summarized.
Multi-GNSS positioning models
The original multi-GNSS code and phase observation equations on frequency i at a particular epoch can be expressed as (Li et al., 2015)

P^{s,Q}_{r,i} = ρ^{s,Q}_r + c(dt_r - dt^{s,Q}) + h^{s,Q} + Trop^{s,Q}_r + Iono^{s,Q}_{r,i} + d_{r,i} - d^{s,Q}_i + e^{s,Q}_{r,i}   (1)

Φ^{s,Q}_{r,i} = ρ^{s,Q}_r + c(dt_r - dt^{s,Q}) + h^{s,Q} + Trop^{s,Q}_r - Iono^{s,Q}_{r,i} + λ^{Q}_i N^{s,Q}_{r,i} + b_{r,i} - b^{s,Q}_i + n^{s,Q}_{r,i}   (2)

where the indices s, r and i represent the satellite, receiver and frequency, respectively, and the superscript Q denotes the satellite system (i.e. G for GPS, R for GLONASS, C for BDS, E for Galileo). P^{s,Q}_{r,i} and Φ^{s,Q}_{r,i} denote the observed code and phase in meters, respectively; ρ^{s,Q}_r is the geometric range between the satellite and receiver antennas in meters; c is the speed of light in meters per second; dt_r and dt^{s,Q} are the receiver and satellite clock offsets in seconds, respectively; h^{s,Q} is the satellite orbit error in meters; Trop^{s,Q}_r and Iono^{s,Q}_{r,i} are the slant tropospheric delay and slant ionospheric delay in meters, respectively; λ^{Q}_i is the carrier wavelength on frequency i in meters per cycle; N^{s,Q}_{r,i} is the integer phase ambiguity in cycles; d_{r,i} and d^{s,Q}_i are the code hardware delays of the receiver and satellite in meters, respectively; and b_{r,i} and b^{s,Q}_i are the phase hardware delays of the receiver and satellite in meters, respectively. Different from the other satellite systems, GLONASS adopts the frequency division multiple access (FDMA) technique to distinguish signals from different satellites, so the GLONASS satellite-specific frequency-dependent biases (i.e., inter-frequency biases, IFBs) x^{s,R}_{r,i}, which enter the GLONASS code equation, need to be considered. In our study, the IFBs are modeled as a linear function of the channel numbers in SF-PPP. e^{s,Q}_{r,i} and n^{s,Q}_{r,i} are the code and phase measurement noise in meters.
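A minimal sketch of Eqs. (1)-(2) as they would be evaluated numerically is given below; the satellite orbit error is folded into the geometric term, the hardware delays default to zero, and all numbers are toy values.

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def code_obs(rho, dt_r, dt_s, trop, iono, d_r=0.0, d_s=0.0):
    """Code observation of Eq. (1): P = rho + c(dt_r - dt_s) + Trop + Iono + d_r - d_s."""
    return rho + C_LIGHT * (dt_r - dt_s) + trop + iono + d_r - d_s

def phase_obs(rho, dt_r, dt_s, trop, iono, lam, amb, b_r=0.0, b_s=0.0):
    """Phase observation of Eq. (2): same terms, but the ionosphere enters with
    a minus sign and the ambiguity term lam*N is added."""
    return rho + C_LIGHT * (dt_r - dt_s) + trop - iono + lam * amb + b_r - b_s

# toy numbers: 22,000 km range, 0.1 ms receiver clock error, 2.3 m troposphere,
# 5 m slant ionosphere, GPS L1 wavelength ~0.1903 m and an arbitrary ambiguity
p1 = code_obs(2.2e7, 1e-4, 0.0, 2.3, 5.0)
l1 = phase_obs(2.2e7, 1e-4, 0.0, 2.3, 5.0, 0.1903, 1.0e6)
print(p1, l1)
```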
Single-frequency SPP
SPP plays a vital role in RT positioning, navigation and timing (PNT) services due to its simple model and high computational efficiency. In SF-SPP, the three-dimensional (3D) receiver coordinates x and the receiver clock error dt_r can be determined from the observations of at least four satellites. The satellite orbit h^{s,Q} and clock dt^{s,Q} are calculated from the broadcast ephemeris. Both the tropospheric errors Trop^{s,Q}_r and the ionospheric errors Iono^{s,Q}_{r,i} are usually corrected with external models. Besides, inter-system bias (ISB) parameters should be estimated in multi-GNSS processing, since the different constellations have different time systems (Wanninger, 2012; Torre and Caporali, 2015; Zhou et al., 2019). The GPS receiver clock is generally selected as the reference; in the rewritten GPS + GLONASS + BDS + Galileo SF-SPP model, dt̄_r is the new GPS receiver clock offset containing the receiver code hardware delay. The timing group delay (TGD) parameters can be used to correct the satellite code hardware delays. Therefore, three types of parameters are estimated in multi-GNSS SF-SPP: the receiver coordinates x, the receiver clock offset dt̄_r, and the ISBs.

2.2. Ionosphere-free single-frequency PPP

Because the ionospheric delay of one satellite has the same value but opposite sign in the code and phase observations, a linear ionosphere-free (IF) combined model named GRoup And PHase Ionospheric Correction (GRAPHIC), formed as the half-sum 0.5(P^{s,Q}_{r,1} + Φ^{s,Q}_{r,1}), is widely utilized in SF-PPP (Cai et al., 2013). Since the number of GRAPHIC observation equations derived from single-frequency code and phase measurements is half that of the traditional SF-PPP model, the code observations are additionally required to avoid rank deficiency (Montenbruck, 2003). It is worth noting that the ionospheric delay in these code observations is corrected by the CNES VTEC products from the CLK93 stream in this study. As for the tropospheric delay, most of it is corrected with an external model, and the residual errors are estimated as a random-walk noise process; in the IF multi-GNSS SF-PPP model, Mw^{s}_r is the mapping function of the zenith tropospheric wet delay, ZWD_r is the zenith tropospheric wet delay in meters, and e^{s}_{r,IF} is the noise of the IF observations in meters. The CLK93 orbit and clock products are used to fix the satellite orbits and clock offsets in RT-SF-PPP. In summary, six types of parameters are estimated in IF multi-GNSS SF-PPP (Li et al., 2020): the receiver coordinates, the receiver clock offset, the ISBs, the zenith wet delay, the GLONASS IFBs, and the float ambiguities.
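The following sketch illustrates the GRAPHIC half-sum on toy observations: the ionospheric term cancels exactly, leaving the geometry plus half the ambiguity term (and, in practice, half the code noise).

```python
def graphic(code_p, phase_l):
    """GRAPHIC combination 0.5*(P + Phi): the slant ionospheric delay, entering
    the code with + sign and the phase with - sign, cancels to first order;
    half of the (much larger) code noise is retained."""
    return 0.5 * (code_p + phase_l)

rho, iono = 2.2e7, 5.0          # non-dispersive terms and slant iono delay, m
lam, amb = 0.1903, 1.0e6        # L1 wavelength (m) and a toy float ambiguity
p = rho + iono                  # simplified code observation
l = rho - iono + lam * amb      # simplified phase observation
print(graphic(p, l) - rho)      # -> 0.5*lam*amb: the iono term has dropped out
```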
Ionosphere-constrained single-frequency PPP
In the undifferenced and uncombined SF-PPP, the ionospheric delay can be estimated as an unknown parameter. Many studies have shown that proper constraints on the ionospheric parameter can improve SF-PPP performance, especially in terms of convergence (Choy et al., 2008; Juan et al., 2012; Shi et al., 2012; Zhang et al., 2013). The processing strategies for the satellite orbits, satellite clock offsets and the residual tropospheric delay are the same as in Section 2.2, and dt̄_r is the same as in the SF-SPP model above. In the ionosphere-constrained (IC) SF-PPP, a virtual observation equation for the ionospheric delay is introduced to solve the rank-deficiency problem. The setting of the weight of the virtual ionospheric constraints follows Wang et al. (2019a), and the variance of the zenith ionospheric delay is defined as 0.25 m². Therefore, seven types of parameters are estimated in IC multi-GNSS SF-PPP (Li et al., 2020): the six parameter types of the IF model plus the slant ionospheric delays.
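How such a virtual ionospheric observation enters the estimation can be sketched as a weighted least-squares problem; the design matrix, misclosures and the 0.25 m² variance below form a deliberately tiny toy system, not the full PPP filter.

```python
import numpy as np

# unknowns x = [geometry correction, slant ionospheric delay], both in metres
A = np.array([[1.0, -1.0],   # phase-like row: geometry - iono
              [1.0,  1.0],   # code-like row:  geometry + iono
              [0.0,  1.0]])  # virtual observation of the iono parameter itself
y = np.array([-2.39, 2.41, 2.40])   # observed-minus-computed values, m
w = np.array([1 / 0.003**2,         # phase sigma 0.003 m (as in Table 1 setup)
              1 / 0.3**2,           # code sigma 0.3 m
              1 / 0.25])            # virtual iono variance 0.25 m^2

W = np.diag(w)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(x_hat)  # ~[0.01, 2.40]: the iono estimate is pulled toward the CNES value
```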
3. Data description and processing strategy

Dataset
To evaluate the impact of the CNES VTEC products on multi-GNSS SF-SPP/PPP performance, GPS, GLONASS, BDS and Galileo observation datasets collected from 46 MGEX stations are selected, covering the 14 days DOY (day of year) 117-130 of 2019. The selected stations are evenly distributed globally, as shown in Fig. 1. It should be noted that the datasets are collected in post-processing mode, whereas SF-SPP/PPP is simulated in RT mode: the real-time corrections of orbit, clock and ionospheric data from CLK93 are archived in files, later read epoch-by-epoch like a real-time "stream", and the kinematic solutions are then derived.
Processing strategy
A software package named Net_Diff was used to conduct the SF-SPP/PPP in this test. Net_Diff is software for GNSS data download, positioning and analysis, developed by the GNSS Analysis Center of the Shanghai Astronomical Observatory, Chinese Academy of Sciences, and it supports both desktop and online versions (Zhang et al., 2020). Details can be found on the website (http://202.127.29.4/shao_gnss_ac/Net_diff/Net_diff.html). For SF-SPP, both the satellite orbits and clock offsets are corrected with the broadcast ephemeris. Due to the difference in signal-in-space range error (SISRE) accuracy between the different GNSS, a proper stochastic model provided by Zhang et al. (2019b) is adopted in this processing. Concerning SF-PPP, the RT precise orbit and clock offset products are used, which are computed from the CLK93 orbit and clock corrections together with the broadcast ephemeris (Elsobeiey and Al-Harbi, 2016; Kazmierski et al., 2018a). The standard deviations of the code and phase observations for GPS and Galileo are set to 0.3 m and 0.003 m, respectively. According to the precision of the RT orbits and clocks from the CLK93 stream in Section 4.1, the standard deviation of the observations for GLONASS is set to twice that of GPS, while for BDS IGSO (Inclined Geosynchronous Orbit)/MEO (Medium Earth Orbit) satellites it is set to four times that of GPS. Note that BDS GEO (Geostationary Earth Orbit) satellites are excluded in this experiment, as the accuracy of their RTS products is too low to meet the demands of SF-PPP (Cao et al., 2018). The adopted models and processing strategies for multi-GNSS SF-SPP/PPP are presented in Table 1.
RTS orbits and clock offsets
The BNC (BKG NTRIP Client, https://igs.bkg.bund.de/ntrip/download) software was utilized to receive and decode the RTS corrections from the CLK93 stream for 14 consecutive days (from April 27, 2019, to May 10, 2019). Since RTS data interruptions are caused by loss of network connection, the mean availability of the CLK93 products for the quad-constellation was about 95.25% during the test period. When the RTS orbits are missing, the most recent IGS Ultra-rapid (IGU) orbits are used as an alternative since they have the same accuracy (El-Mowafy et al., 2017). As for the RTS clock offsets, however, the IGU predicted part does not work well in RT-PPP. Hence, missing RTS clock corrections are predicted by polynomial fitting of the recorded RTS clock corrections over a short time span (Hadas and Bosy, 2015). According to the RTCM (Radio Technical Commission for Maritime Services) SSR standard, the RTS orbit and clock corrections are combined with the broadcast ephemeris to generate the RT precise products. Details of the matching algorithm can be found in the literature (Hadas and Bosy, 2015; Kazmierski et al., 2018a; Cao et al., 2018). To evaluate the quality of the RTS orbits and clock offsets for the quad-constellation, the final precise products of GBM released by Deutsches GeoForschungsZentrum (GFZ; Deng et al., 2016) were employed as references. The orbit comparison was performed every 5 min for the radial, along-track and cross-track components. Clock offsets were compared every 30 s, matching the interval of the final high-rate clock products. Three times the standard deviation (SD) of the analyzed datasets is used as a threshold to remove outliers in this contribution (Kazmierski et al., 2018a). It is worth noting that the CLK93 stream refers to the satellite antenna phase center (APC) whereas the GBM final products refer to the satellite center of mass (CoM); thus the phase center offset (PCO) correction must be taken into account. With regard to the satellite clocks, the broadcast ephemeris (i.e., CLK93 products) exhibits a small offset for each satellite because it refers to a constellation-specific timescale, while the precise ephemeris (i.e., GBM products) applies a product-specific timescale. The difference between the two ephemerides is generally unknown but common to all satellites of a constellation. It must be excluded from the assessment of the real-time clocks by adjusting an epoch-wise average CLK93-minus-GBM clock value over all satellites of one constellation (Montenbruck et al., 2018). To avoid the influence of gross errors from individual satellites, the median of the CLK93-minus-GBM clock differences at each epoch is computed as the ensemble clock difference and removed as a systematic bias (Zhang et al., 2019b).
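A compact sketch of the two evaluation steps just described (epoch-wise median alignment of the clock differences and RMS computation after 3×SD outlier screening) might look as follows, using synthetic data.

```python
import numpy as np

def rms_screened(diff, k=3.0):
    """RMS after removing outliers beyond k standard deviations,
    as done for the CLK93-minus-GBM comparisons."""
    d = np.asarray(diff, dtype=float)
    d = d[np.abs(d - d.mean()) <= k * d.std()]
    return float(np.sqrt(np.mean(d**2)))

def align_clocks(clk_diff):
    """Remove the constellation-wide timescale offset by subtracting the
    epoch-wise median over all satellites (rows = epochs, columns = satellites)."""
    clk = np.asarray(clk_diff, dtype=float)
    return clk - np.nanmedian(clk, axis=1, keepdims=True)

# synthetic data: 4 satellites, 100 epochs, common 5 ns timescale bias plus noise
rng = np.random.default_rng(1)
clk = 5.0 + 0.3 * rng.standard_normal((100, 4))
aligned = align_clocks(clk)
print([round(rms_screened(aligned[:, s]), 2) for s in range(4)])
```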
Fig. 2 shows the satellite-specific RMS of the differences in the radial, along-track and cross-track components as well as in the clock offsets between the CLK93 products and the GBM final products. For each GNSS, the mean RMS values over all satellites are also presented in Table 2. Generally, the orbit accuracy of each system in the radial component is much better than that in the along-track and cross-track components. The quality of the GPS orbits and clock offsets is the best: the accuracy is better than 4 cm in all orbit components and 0.3 ns in the clock errors. Galileo has slightly worse accuracy than GPS, with mean RMS of the orbit and clock errors below 5 cm and 0.4 ns, respectively, which is close to the results reported by Kazmierski et al. (2018b). For GLONASS, the orbit accuracy is about 4.2, 11.6 and 7.3 cm in the radial, along-track and cross-track components, respectively, about twice worse than that of GPS. However, there is an exception for the GLONASS-K R09 satellite, whose radial RMS reaches 19.5 cm. As for the clock accuracy, the RMS of GLONASS is about 2.5 ns, which is much larger than for the other systems. The BDS satellites (C06-C14) have the worst orbit performance among all GNSS; their mean RMS in the radial, along-track and cross-track components is 7.3, 16.2 and 14.1 cm, respectively. Regarding the accuracy of the BDS clocks, the mean RMS value is 1.76 ns, much worse than for GPS and Galileo, since the number of contributing ground stations that can track the BDS satellites is insufficient.
The accuracies of the BDS satellite orbits and clocks are almost the same as those indicated by Wang et al. (2018).
Real-time VTEC products
Since the prior ionospheric constraint is applied to the STEC rather than the VTEC, the quality of the STEC (Slant Total Electron Content) derived from the CNES products is assessed here. Note that the CNES VTEC is based on a spherical harmonic function of degree and order 12. The reference STEC is derived from the post-processed GIM product of the CODE (Center for Orbit Determination in Europe) agency, because it has the highest accuracy among all ACs (Cai et al., 2017). In the test period, the solar activity and ionosphere variation were relatively mild, as the radio flux index F10.7 was no more than 80 sfu and the geomagnetic Kp index was mostly less than 3 (Fig. 3). The bias and RMS values of the CNES STEC for the 46 stations during the test are shown in Fig. 4. The bias varies from -4.46 to 2.85 TECU, and the average bias over all selected MGEX stations is -0.72 TECU. As for the RMS, the average value for all stations is 3.43 TECU. The maximum RMS of 5.9 TECU comes from the low-latitude (22.4°N) MGEX station HKSL, which is located in Southeast China, and the minimum RMS of 1.44 TECU is obtained at the high-latitude (67.9°N) MGEX station KIRO. In general, the accuracy of the CNES STEC at low-latitude stations is worse than at middle- and high-latitude stations, which accords with a common feature of ionospheric models (Rovira-Garcia et al., 2020). On the other hand, stations located in ocean areas such as KOKB and TONG show relatively large RMS values for the CNES STEC. The main reason is that the ionospheric properties cannot be accurately described by the global spherical harmonic function model over areas with sparse reference stations.
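The per-station bias and RMS statistics quoted here can be computed as sketched below; the STEC series are synthetic stand-ins for one station.

```python
import numpy as np

def stec_stats(stec_rt, stec_ref):
    """Bias and RMS (TECU) of a real-time STEC series against a reference GIM."""
    d = np.asarray(stec_rt, dtype=float) - np.asarray(stec_ref, dtype=float)
    d = d[~np.isnan(d)]
    return float(d.mean()), float(np.sqrt(np.mean(d**2)))

# synthetic one-day, 5-min-interval STEC series standing in for one station
rng = np.random.default_rng(0)
ref = 20.0 + 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 288))
rt = ref - 0.7 + 3.0 * rng.standard_normal(ref.size)  # -0.7 TECU bias, 3 TECU noise
print("bias %.2f TECU, rms %.2f TECU" % stec_stats(rt, ref))
```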
Single-frequency SPP
To investigate the performance of the CNES VTEC products in multi-GNSS SF-SPP, the high-precision global ionosphere product from the CODE agency is applied as a reference to the same positioning processing, which means that only the external ionospheric models used in SF-SPP are different. Fig. 5 depicts the SF1 epoch-wise positioning errors at station HARB on 27 April 2019 for the north (N), east (E) and up (U) components based on the ionospheric delay corrections of the CNES VTEC products (i.e. CLK93-VTEC) and CODE-GIM. It can be seen that the single- and multi-system SF-SPP results with the CLK93-VTEC and CODE-GIM products perform similarly. The vertical error of SF-SPP with the broadcast ephemeris is relatively larger than the horizontal error and can reach the meter level.

Table 2: The averaged RMS values of the CLK93 orbit errors in the radial (R), along-track (A) and cross-track (C) components as well as of the clock errors (T) in the 14-day test period.

For all single-system SF-SPP solutions, the performance of GPS-only and Galileo-only is at the same level and much better than that of the other systems, which is mainly attributed to the higher accuracy of their broadcast orbits and clocks. Although GLONASS currently has a smaller position dilution of precision (PDOP) than BDS, its positioning accuracy is still worse than that of BDS. The main reason is that the GLONASS code observations have a higher measurement noise level (Montenbruck, 2003) and the IFB is neglected in SPP. Fig. 6 shows the mean PDOP of SF1 and SF2 BDS-only SF-SPP at all selected stations; stations with a mean PDOP of more than 6 are excluded. For BDS-2, tri-frequency signals (B1/B2/B3) can be received by the 46 selected MGEX stations, while for BDS-3 only observations on B1 and B3 can be tracked. Therefore, the number of B2-tracked satellites is smaller than that of B1 for BDS-2+3 during the test period. The satellite geometry of SF1 BDS-only, with the introduction of the BDS-3 satellites, is much better than that of SF2, which results in the better positioning accuracy of the SF1 solution.

For the multi-GNSS SF-SPP, the performance of GPS + GLONASS + BDS + Galileo with the CODE-GIM correction is the best, with a positioning accuracy of the horizontal and vertical components better than 0.7 m and 1.5 m, respectively. One reason is that the PDOP is smallest with the increased number of visible satellites; another is that CODE-GIM, as a post-processed product, has higher accuracy than RT models. However, compared with the CODE-GIM correction, for single- or multi-system users the 3D positioning accuracy of SF-SPP with the CLK93-VTEC correction is only reduced by no more than 7%, mainly in the N direction. It should be noted that the errors of the broadcast orbits and clocks and the pseudorange noise used in SPP interfere with the assessment of the ionospheric models. During a period of mild solar activity, the CNES VTEC products can substitute for the final GIM products in SF-SPP considering meter-level positioning accuracy requirements. Besides, whether this finding is applicable to low-latitude areas or high-solar-activity periods needs further study.

Table 3: RMS values of SF-SPP with CLK93-VTEC and CODE-GIM based ionospheric delay correction for SF1 and SF2 (unit: m). SF1 and SF2 represent results on the first and second frequency, respectively.
Kinematic RT-SF-PPP
To compare the kinematic positioning performance of the GRAPHIC and CLK93-VTEC-constrained models, the multi-GNSS observations from station GRAZ on 5 May 2019 are selected for the test. Fig. 7 presents the positioning errors of SF1 kinematic RT-SF-PPP with the different schemes. It is obvious that the time series of the CLK93-VTEC-constrained model is smoother than that of the GRAPHIC model. By introducing the multi-GNSS observations, the improvement in positioning accuracy is mainly reflected in the vertical component, whereas the horizontal component changes little. Fig. 8 shows the RMS values of the 3D positioning errors of SF1 kinematic RT-SF-PPP with the different schemes for the 46 globally distributed MGEX stations. Note that the results of GPS + BDS are not shown in Fig. 8 because its performance is basically the same as GPS-only. In the GRAPHIC model, the 3D positioning accuracy for GPS-only exceeds 0.5 m at 32 stations, whereas for the quad-constellation the number of such stations is reduced to 14. By adopting the CLK93-VTEC-constrained model, the accuracy of almost all stations for single- or multi-system processing is improved to some extent, and 42 stations achieve a 3D positioning accuracy of less than 0.5 m for RT-SF-PPP with the quad-constellation. From the GPS-only results with the CLK93-VTEC constraint, the positioning accuracy of stations in adjacent sea areas is relatively poor, which is probably caused by the quality degradation of the CLK93-VTEC products over areas lacking observations. Table 4 summarizes the RMS values of the positioning errors of RT-SF-PPP with the different schemes. It should be noted that the statistics are computed from the converged epoch to the last epoch of each day. The horizontal and vertical positioning accuracy of the ionosphere-corrected SF-PPP model using the CNES VTEC products reported by Nie et al. (2019) is 0.7 m and 1.0 m, respectively; thus, our results are about twice better than those of Nie et al. (2019). The main reason is that the ionospheric errors in the ionosphere-corrected SF-PPP model are only partially mitigated by the CNES VTEC products, and the residual ionospheric errors still affect the position accuracy, whereas in our contribution the residual ionospheric errors are removed by estimating them as parameters or by using the GRAPHIC model. With the combination of multi-GNSS observations, the 3D positioning accuracy of GPS + GLONASS + BDS + Galileo SF2 RT-SF-PPP based on the GRAPHIC and the CLK93-VTEC-constrained models is improved by 22.46% and 18.68%, respectively, compared with GPS-only. On the other hand, compared with the GRAPHIC model, the improvement in 3D positioning accuracy of the CLK93-VTEC-constrained RT-SF-PPP with quad-constellation for SF1 and SF2 is 8.72% and 10.30%, respectively. The main reason is that the GRAPHIC combination introduces half of the code noise, which is much larger than the phase noise present in the CLK93-VTEC-constrained model (Li et al., 2020). To summarize, the CLK93-VTEC-constrained GPS + GLONASS + BDS + Galileo RT-SF-PPP has the best positioning performance, with RMS in the N, E and U components reaching 17.9, 19.8 and 32.3 cm, respectively. It should be noted that the simulated kinematic positioning using MGEX static data represents an ideal situation and is theoretically better than true real-time applications.
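For reference, the 3D RMS and the improvement percentages used throughout this section follow directly from the component RMS values; a small sketch, using the quoted N/E/U numbers.

```python
import numpy as np

def rms_3d(n, e, u):
    """3D positioning RMS from per-component error series (metres)."""
    comp = [np.sqrt(np.mean(np.asarray(x, dtype=float)**2)) for x in (n, e, u)]
    return float(np.sqrt(sum(c**2 for c in comp)))

def improvement_pct(rms_ref, rms_new):
    """Relative improvement of a new scheme over a reference scheme, in percent."""
    return 100.0 * (rms_ref - rms_new) / rms_ref

# the quoted component RMS of the constrained quad-GNSS solution gives ~0.42 m in 3D
print(np.sqrt(0.179**2 + 0.198**2 + 0.323**2))
```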
The (re-)convergence time is a key indicator of RT kinematic PPP. In order to present the (re-)convergence difference between the IF and IC models, simulated signal interruptions are introduced every 4 h by adding a new set of ambiguities for all used satellites, while all other estimated parameters are kept in the positioning filter with their covariances from the previous epoch (Shi et al., 2012). Taking the GPS + GLONASS + BDS + Galileo data from station GRAZ on 5 May 2019 as an example, the convergence performance of the two RT-SF-PPP models is compared in Fig. 9. From the position time series in the top panel of Fig. 9, the positioning errors of the GRAPHIC model after each interruption are visibly much worse and need around 20-30 min to (re-)converge. In contrast, the CLK93-VTEC-constrained model shows almost no obvious discontinuity in the whole time series. Compared with the first convergence, the re-convergence is accelerated significantly, as expected (Shi et al., 2012). The major reason is that the ionospheric effect is adequately compensated by applying proper a priori constraints in the CLK93-VTEC-constrained model. Therefore, the CLK93-VTEC-constrained kinematic RT-SF-PPP has a great advantage in (re-)convergence and is recommended.
Conclusions
Single-frequency GNSS receivers play an important role in most fields of PNT due to their low cost and high precision. With the increasing demand for RT applications, the RT-SF-SPP/PPP technique has attracted more and more attention in the GNSS market. Typically, the performance of RT single-frequency positioning is seriously affected by the low precision of existing broadcast ionospheric models such as Klobuchar. However, since the release of the CNES VTEC products through the SSR messages of CLK93, this situation has begun to improve.
In this contribution, we focus on evaluating the impact of the CNES VTEC products on multi-GNSS RT SF-SPP/PPP performance. The CLK93 corrections and the observations of 46 globally distributed MGEX stations for 14 consecutive days were processed under different positioning scenarios. Compared with the final precise products of GBM, the satellite orbit accuracy of the CLK93 products is 4, 5, 12 and 16 cm for GPS, Galileo, GLONASS and BDS, respectively. As for the CLK93 satellite clocks, the RMS accuracy for GPS, Galileo, GLONASS and BDS is 0.3, 0.4, 2.5 and 1.8 ns, respectively.
The positioning experiments strongly support the following conclusions. The single- and multi-GNSS kinematic SF-SPP with the CNES VTEC correction is comparable in positioning accuracy to solutions based on the final GIM products in a mild solar activity period, and the slight differences are mainly reflected in the N component. Statistical results indicate that the positioning accuracy of GPS + GLONASS + BDS + Galileo SF-SPP with the CNES VTEC correction is better than 1.0 m and 1.5 m in the horizontal and vertical components, respectively. Regarding the kinematic RT-SF-PPP, the (re-)convergence can be accelerated by adopting appropriate ionosphere information. Since the GRAPHIC observations are affected by the code noise, their positioning accuracy is slightly worse than that of the CNES-VTEC-constrained model. For GPS + GLONASS + BDS + Galileo users, the improvement in positioning accuracy of SF1 and SF2 CNES-VTEC-constrained kinematic RT-SF-PPP is 8.72% and 10.30%, respectively, compared with the GRAPHIC model. The best positioning accuracy of kinematic RT-SF-PPP is achieved by combining the quad-constellation observations and the CNES VTEC products, with average RMS of 17.9, 19.8 and 32.3 cm in the N, E and U components, respectively. Furthermore, the impact of the CNES real-time VTEC products on multi-GNSS single-frequency positioning under high solar activity will be evaluated in future work.
Declaration of Competing Interest
None. | 2020-10-19T18:11:30.457Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "daf33ff0d5f236f1a2edacd287895f5001a62de0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.asr.2020.09.010",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1adf8e87e0341f67aa4211969403e2235d5e68c8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17722325 | pes2o/s2orc | v3-fos-license | $B \to K\pi$ decays and the weak phase angle $\gamma$
The large branching ratios for $B \to K\pi$ decays as observed by the CLEO Collaboration indicate that penguin interactions contribute a major part to the decay rates and provide an interference between the Cabibbo-suppressed tree and penguin contributions resulting in a CP-asymmetry between the $B \to K\pi$ and its charge conjugate mode. The CP-averaged decay rates depend also on the weak phase $\gamma$ and give us a determination of this phase. In this talk, I would like to report on a recent analysis of $B \to K\pi$ decays using factorisation model with final state interaction phase shift included. We find that factorisation seems to describe qualitatively the latest CLEO data. We also obtain a relation for the branching ratios independent of the strength of the strong penguin interactions. This relation gives a central value of $0.60 \times 10^{-5}$ for ${\mathcal B}(\bar{B}^{0} \to \bar{K}^{0}\pi^{0})$, somewhat smaller than the latest CLEO measurement. We also find that a ratio obtained from the CP-averaged $B \to K\pi$ decay rates could be used to test the factorisation model and to determine the weak angle $\gamma$ with more precise data, though the latest CLEO data seem to favor $\gamma$ in the range $90^{\circ}-120^{\circ}$.
With the measurement of all four $B \to K\pi$ branching ratios, we seem to have a qualitative understanding of the $B \to K\pi$ decays. The measured CP-averaged branching ratios ($\mathcal{B}$) by the CLEO Collaboration [1] show that penguin interactions dominate the $B \to K\pi$ decays, as predicted by factorisation. The strong penguin amplitude, because of the large CKM factors, becomes much larger than the tree-level terms, which are Cabibbo-suppressed, and the non-leptonic interaction for $B \to K\pi$ is dominated by an $I=1/2$ amplitude. This is borne out by the CLEO data, which give $\mathcal{B}(B^- \to \bar{K}^0\pi^-) \simeq 2\mathcal{B}(B^- \to K^-\pi^0)$ and $\mathcal{B}(B^- \to \bar{K}^0\pi^-) \simeq \mathcal{B}(\bar{B}^0 \to K^-\pi^+)$ : $\mathcal{B}(B^+ \to K^+\pi^0) = (11.6^{+3.0+1.4}\ldots$ If the strength of the interference between the tree-level and penguin contributions is known, a determination of the weak phase $\gamma$ could in principle be done. Previous works [2,3] show that the factorisation model produces sufficient $B \to K\pi$ decay rates, in qualitative agreement with the CLEO measured values. Also, as argued in [4], for these very energetic decays, because of color transparency, factorisation should be a good approximation for $B \to K\pi$ decays if the Wilson coefficients are evaluated at a scale $\mu = O(m_b)$. In fact, recent hard-scattering calculations with perturbative QCD show that factorisation is valid up to corrections of order $\Lambda_{QCD}/m_b$ [5]. It is thus encouraging to use factorisation to analyse the $B \to K\pi$ decays, bearing in mind that there are important theoretical uncertainties in the long-distance hadronic matrix elements, as the heavy-to-light form factors for the vector current and the value of the current $s$-quark mass are currently not determined with good accuracy. In this talk, I would like to report on a recent work [6] on the $B \to K\pi$ decays as a possible way to measure the angle $\gamma$ and to see direct CP violation.
The parameters V_ub etc. are the flavor-changing charged-current couplings of the weak gauge boson W± to the quarks, as given by the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix V. V is usually defined as the unitary transformation relating the weak-interaction eigenstates of the quarks to their mass eigenstates [12]: where d, s, b and d′, s′, b′ are, respectively, the mass eigenstates and weak-interaction eigenstates of the charge Q = −1/3 quarks. Since the neutral current is not affected by the unitary transformation on the quark fields, flavor-changing neutral currents are absent at tree level, as implied by the GIM mechanism. The unitarity condition V V† = 1 gives, for the (db) elements relevant to B decays [12]. This can be represented by a triangle [12] with the three angles α, β and γ expressed in terms of the CKM matrix elements as [13]. The angle γ enters the B → Kπ decay amplitudes through the factor V_ub V*_us/(V_tb V*_ts), which can be approximated by −(|V_ub|/|V_cb|) × (|V_cd|/|V_ud|) exp(−iγ) after neglecting terms of the order O(λ^5) in the (bs) unitarity triangle, λ being the Cabibbo angle in the Wolfenstein parametrisation of the CKM quark mixing matrix. The B → Kπ decay amplitudes, expressed in terms of the I = 1/2 and I = 3/2 isospin amplitudes, are given by [2]. Here A_1 is the sum of the strong penguin A^S_1 and the I = 0 tree-level A^T_1 as well as the I = 0 electroweak penguin A^W_1 contributions to the B → Kπ I = 1/2 amplitude; similarly, B_1 is the sum of the I = 1 tree-level B^T_1 and electroweak penguin B^W_1 contributions to the I = 1/2 amplitude, and B_3 is the sum of the I = 1 tree-level B^T_3 and electroweak penguin B^W_3 contributions to the I = 3/2 amplitude. δ_1 and δ_3 are, respectively, the elastic πK → πK I = 1/2 and I = 3/2 final-state interaction (FSI) phase shifts at the B mass. The inelastic FSI contributions are also included, through the internal quark-loop contributions to the penguin operators, for which the Wilson coefficients now have an absorptive part and are given in [9,11,14]. The B → Kπ isospin amplitudes in the factorisation model are given by [2], where q = u, d for π±,0 final states, respectively. In this analysis, f_π = 133 MeV, f_K = 158 MeV, F^Bπ_0(0) = 0.33, F^BK_0(0) = 0.38 [3,15]; |V_cb| = 0.0395, |V_cd| = 0.224 and |V_ub|/|V_cb| = 0.08 [12]. The value of m_s is not known to good accuracy, but a value around (100−120) MeV, inferred from the m_K* − m_ρ, m_Ds+ − m_D+ and m_Bs0 − m_B0 mass differences [16], seems not unreasonable, and in this work we use m_s = 120 MeV. The a_j are the effective Wilson coefficients after Fierz reordering in the factorisation model and are given by [6] a_1 = 0.07, a_2 = 1.05 for the contributions from the tree-level and strong penguin operators at N_c = 3 and m_b = 5.0 GeV. The strong penguin contribution P = a_4 + a_6 Y, as obtained from Eq. (8), is enhanced by the charm-quark loop, which increases the amplitude by 30 %, as pointed out in [9]. This enhancement brings the predicted branching ratios closer to the CLEO measured values, as shown in Fig. 1, where the CP-averaged B → Kπ branching ratios obtained for γ = 70° [3] are plotted against the rescattering phase difference δ = δ_3 − δ_1.
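To make the size of the Cabibbo suppression concrete, the short Python sketch below (not from the paper) evaluates the tree-amplitude CKM prefactor −(|V_ub|/|V_cb|)(|V_cd|/|V_ud|) exp(−iγ) quoted above, using the input values given in the text; the value of |V_ud| is an assumption, since it is not quoted here.

import cmath

V_ub_over_V_cb = 0.08   # |V_ub|/|V_cb| (value quoted in the text)
V_cd = 0.224            # |V_cd| (value quoted in the text)
V_ud = 0.9745           # |V_ud| (assumed; not quoted in the text)

def tree_prefactor(gamma_deg):
    # Approximate V_ub V_us* / (V_tb V_ts*) by -(|V_ub|/|V_cb|)(|V_cd|/|V_ud|) e^{-i gamma}
    r = V_ub_over_V_cb * (V_cd / V_ud)
    return -r * cmath.exp(-1j * cmath.pi * gamma_deg / 180.0)

for gamma in (70.0, 110.0):
    z = tree_prefactor(gamma)
    print(f"gamma = {gamma:5.1f} deg: prefactor = {z:.4f}, magnitude = {abs(z):.4f}")

The magnitude (about 0.02) makes explicit why the Cabibbo-suppressed tree terms are easily overwhelmed by the strong penguin amplitude.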
For a determination of γ, two quantities obtained from the sums of the CP-averaged decay rates, Γ_B− = Γ(B− → K−π0) + Γ(B− → K̄0π−) and Γ_B0 = Γ(B̄0 → K−π+) + Γ(B̄0 → K̄0π0), which are independent of δ, could be used [6]. As the CP-averaged B → Kπ decay rates depend on γ, the computed partial rates Γ_B− and Γ_B0 lie between the upper and lower limits corresponding to cos(γ) = 1 and cos(γ) = −1, respectively. As shown in Fig. 2, where the corresponding CP-averaged branching ratios (B_B0 and B_B−) for Γ_B− and Γ_B0 are plotted against γ, the factorisation-model values with the BSW form factors [15] seem somewhat smaller than the CLEO central values, by about 10−20 %. Also, the model gives B_B− > B_B0, while the data give B_B− < B_B0 by a small amount, which could be due to a large measured B̄0 → K̄0π0 branching ratio.
Note that smaller values of the form factors could easily accommodate the latest CLEO measured values if a smaller value of m_s, e.g. in the range (80−100) MeV, is used. What one learns from this analysis is that the B → Kπ decays are penguin-dominated, that the strength of the penguin interactions obtained from perturbative QCD produces sufficient B → Kπ decay rates, and that factorisation seems to work with an accuracy better than a factor of 2, considering the large uncertainties from the form factors and possible non-factorisation terms inherent in the factorisation model, as well as the uncertainties in the penguin amplitude, which is sensitive to the current s-quark mass. Since the four B → Kπ decay rates depend on only three amplitudes, A_1, B_1 and B_3, it is possible to derive a relation between the decay rates that is independent of A_1. Thus the quantity ∆, obtained from the decay rates, is independent of the strong penguin term; it is given by the tree-level and electroweak penguin contributions. As can be seen from Fig. 2, where its values for δ = 0 are plotted against γ, ∆ is of the order O(10⁻⁶), compared with B_B− and B_B0, which are in the range (1.6−3.0) × 10⁻⁵. Thus, to this level of accuracy, we can put ∆ ≃ 0 and obtain the relation (r_b = τ_B0/τ_B−)
which can be used to test factorisation or to predict B(B̄0 → K̄0π0) in terms of the other measured branching ratios. Eq. (10) then predicts a central value B(B̄0 → K̄0π0) = 0.60 × 10⁻⁵.
respectively. Since B_K̄0π0 is not known with good accuracy at the moment, it is useful to use another quantity, defined as, which contains a negligible δ-dependent term of the order O(10⁻⁷). The quantity R, defined as, is thus essentially independent of δ and could also be used to obtain γ, as it does not suffer from large uncertainties in the form factors and in the CKM parameters. As can be seen in Fig. 3, it is not possible to deduce a value for γ with the present data, which give R = 0.80 ± 0.25, as the prediction for R lies within the experimental errors. If the experimental uncertainties could be reduced to a level of less than 10 %, we might be able to give a value for γ. Thus it is important to measure the B → Kπ branching ratios to high precision. Also shown in Fig. 3 are two other quantities, which are more sensitive to γ but involve B(B̄0 → K̄0π0), and which are given in [6]. Thus a better way to obtain γ would be to use R_1 once a precise value for B(B̄0 → K̄0π0) becomes available. The central value of 0.80 for R corresponds to γ = 110°, close to the value of (113 +25/−23)° found by the CLEO Collaboration in an analysis of all known charmless two-body B decays with the factorisation model [17].
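The inversion of the measured ratio R into a value of γ can be illustrated schematically. In the Python sketch below, the model curve R_model(γ) is a purely hypothetical placeholder, chosen only so that R = 0.80 maps to γ ≈ 110° as quoted above; the paper's actual factorisation-model curve (Fig. 3) is not reproduced here, and only the interpolation procedure is illustrative.

import numpy as np

gamma_grid = np.linspace(0.0, 180.0, 181)               # degrees
R_model = 1.0 + 0.585 * np.cos(np.radians(gamma_grid))  # HYPOTHETICAL model curve

order = np.argsort(R_model)  # np.interp needs ascending x values

def gamma_from_R(R_measured):
    # Invert the monotonic model curve by linear interpolation
    R_clipped = np.clip(R_measured, R_model.min(), R_model.max())
    return float(np.interp(R_clipped, R_model[order], gamma_grid[order]))

R_central, R_err = 0.80, 0.25   # CLEO value quoted in the text
print("gamma(central)  =", round(gamma_from_R(R_central), 1))
print("gamma(R+1sigma) =", round(gamma_from_R(R_central + R_err), 1))
print("gamma(R-1sigma) =", round(gamma_from_R(R_central - R_err), 1))

As the wide one-sigma span shows, the present ±0.25 uncertainty on R translates into a γ range of several tens of degrees, which is why sub-10 % branching-ratio measurements are needed before R becomes a useful constraint.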
It seems that the CLEO data favor a large γ, in the range 90°−120°. With a large γ, for example the central value of 110°, the predicted B → Kπ branching ratios, as shown in Fig. 4, are larger than those for γ = 70° and are closer to the data. The data also show that B− → K̄0π− and B̄0 → K−π+ are the two largest modes, with near-equal branching ratios, in qualitative agreement with factorisation. However, for γ = 70°, Fig. 1 shows that these two largest branching ratios are quite far apart, except for δ < 50°, while for γ = 110°, Fig. 4 suggests that these two branching ratios are close to each other only for δ in the range 40°−70°. With γ < 110° and some adjustment of the form factors, the current s-quark mass and the CKM parameters, it might be possible to accommodate these two largest branching ratios with δ < 50°.
The CP asymmetries, as shown in [6], for γ = 110° are in the range ±0.04 to ±0.3 for the preferred values of δ in the range (40−70)°, but could be smaller for δ < 50°. The CLEO measurements [18], however, do not show any large CP asymmetry in B → Kπ decays, though the errors are still too large to draw any conclusion at the moment.
In conclusion, factorisation with enhancement of the strong penguin contribution seems to describe the B → Kπ decays qualitatively. Further measurements will allow a more precise test of factorisation and a determination of the weak angle γ from the FSI phase-independent relations shown above.
[Figure caption fragment: the CP-averaged branching ratios for B− → K−π0, K̄0π− and B̄0 → K−π+, K̄0π0, respectively.]
I would like to thank S. Narison and the organisers of QCD00 for the warm hospitality extended to me at Montpellier. | 2014-10-01T00:00:00.000Z | 2000-09-12T00:00:00.000 | {
"year": 2000,
"sha1": "9cfa7acf391bf4b65dce89f081b3aa399136f4bc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0009142v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9cfa7acf391bf4b65dce89f081b3aa399136f4bc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
40881014 | pes2o/s2orc | v3-fos-license | The impact of intrahepatic cholestasis of pregnancy on fetal cardiac and peripheral circulation.
OBJECTIVE
The aim of this study was to evaluate changes in fetal cardiac and peripheral circulation in pregnancies complicated with intrahepatic cholestasis.
MATERIAL AND METHODS
The Doppler examination results of 22 pregnant subjects complicated with intrahepatic cholestasis of pregnancy (ICP) and 44 healthy controls were compared. The parameters of fetal cardiac circulation were pulmonary artery and aortic (Ao) peak systolic velocity (PSV), pulmonary vein (Pv), peak velocity index (PVI) and pulsatility index (PI), mitral valve (MV) and tricuspid valve (TV), early diastole (E)- and atrial contraction (A)-wave peak velocity ratio (E/A), and isthmus aortic peak systolic velocity (IAo PSV). The parameters of fetal peripheral circulation were middle cerebral artery (MCA) and umbilical artery (UA) PI, resistance index (RI), systolic/diastolic (S/D) ratio. Fetal obstetric Doppler monitoring was conducted weekly before 36 weeks and biweekly after that, and the results were compared with the normal reference values for gestational age.
RESULTS
The Doppler parameters of fetal cardiac and peripheral circulation did not significantly differ between the two groups. S/D ratio readings in the ICP group were significantly above 2 SD before 35 weeks of gestation. Women with ICP had increased risks of preterm delivery, neonatal unit admission, and meconium-stained amniotic fluid compared with those in the controls.
CONCLUSION
Fetuses of pregnant women with ICP showed no differences in the evaluation of cardiac and peripheral Doppler measurements compared with fetuses of healthy mothers. The Doppler investigation of the umbilical artery may be useful in monitoring of pregnancies complicated by early onset intrahepatic cholestasis.
Introduction
The characteristic features of intrahepatic cholestasis of pregnancy (ICP) include abnormal liver function and maternal pruritus that most frequently occur in the third trimester. Adverse fetal outcomes such as spontaneous preterm labor, fetal distress, and even intrauterine death are frequently associated with ICP (1-3). The pathophysiology and epidemiology of fetal morbidity are not well characterized. Intrauterine death may occur without prior signs, such as uteroplacental insufficiency or intrauterine growth restriction, and without significant findings at fetal autopsy (4,5). Non-specific pathology suggestive of hypoxia has been reported in placental histological samples; however, hypoxia has not been established as a primary pathophysiological process in ICP (6). Several studies have reported that fetal complications of ICP occur more commonly in pregnancies in which the mother displays elevated levels of serum bile acids (7,8). It is postulated that raised levels of fetal serum bile acids may be cardiotoxic to the fetus (9). There are currently no methods for predicting the risks to the fetus in pregnancies complicated with ICP.
An abnormal heart rate (≤100 or ≥180 bpm) is associated with an elevated risk in some studies, although cardiotocograph monitoring cannot reliably predict the risk of complications, and normal cardiotocographs have been observed within 24 h of intrauterine demise (10)(11)(12). Furthermore, fetal heart rate tracing does not correlate with disease severity (13). The results of a study using the fetal biophysical profile and obstetric Doppler examination findings were not conclusive, mainly because of the absence of fetal mortality and morbidity in that series (14). However, there is very little information about whether changes in fetal cardiac Doppler parameters are present in pregnancies complicated with ICP. The aim of this study was to evaluate whether Doppler alterations exist in the examination of fetal cardiac and peripheral circulation in pregnancies complicated with ICP and to compare these pregnancies with healthy ones.
Material and Methods
This observational study was conducted with the approval of the Ethical Committee of Aegean Maternity and Teaching Hospital, and the procedures followed were in accordance with the Declaration of Helsinki of 1975 (revised in 2008).
All study participants provided informed consent. The study was conducted in the clinic of Aegean Maternity and Teaching Hospital between January 2013 and January 2014. During the study period, there were 4037 normal vaginal and 2987 cesarean section deliveries, and 94 pregnant women with ICP were admitted to our hospital. The criteria used for diagnosing ICP were raised serum bile acid levels (total bile acid ≥10 μmol/L) and/or pruritus coinciding with liver dysfunction during the third trimester (29-40 weeks), the absence of skin lesions, and resolution of these symptoms following delivery. Elevated levels of liver alanine transaminase (ALT) and aspartate transaminase (AST) were used to confirm the diagnosis. Liver transaminase results exceeding 40 IU/L were considered abnormal (15,16). Abnormal liver function tests were followed up with viral marker testing and liver ultrasonography. The presence of biliary obstruction or gallstones on liver ultrasound, or acute infection with hepatitis A, B, or C, precluded the diagnosis of obstetric cholestasis. A medical history including pruritus in a prior pregnancy, outcomes of prior pregnancies, skin disorders, liver/gallbladder disorders, and use of oral contraceptives was obtained. For both patients and control subjects, exclusion criteria consisted of known multiple gestation, systemic lupus erythematosus, age <18 or >40 years, and fetuses with a known cardiac anomaly or arrhythmia. Additional causes of liver dysfunction, including hemolysis, elevated liver enzymes and low platelets (HELLP) syndrome, preeclampsia, primary biliary cirrhosis, acute fatty liver, and use of progesterone or any other medications, were considered criteria for exclusion from the study. The presence of confounding characteristics, including intrauterine growth retardation (fetal weight below the 10th percentile) and oligohydramnios (amniotic fluid index <50), was also considered a criterion for exclusion. Serum AST and ALT levels were measured using Roche methods on a Hitachi 917 Analyzer (Roche Diagnostics, Basel, Switzerland). Total serum bile acid levels were analyzed with an enzymatic, colorimetric method (Enzabile, Biostat Diagnostic Systems, Stockport, UK). Ursodeoxycholic acid (UDCA, 2×300 mg daily) was provided to all patients once the diagnosis of ICP was confirmed. The daily dosage of UDCA was increased up to 900 mg, or to a maximum of 1500 mg, for patients with severe symptoms. Twenty-two pregnant women with ICP met the study criteria during the study period. For each case, two healthy pregnant controls were matched for maternal age and gestational age at ultrasound (±1 week). An a priori sample size calculation was performed with an α value of 0.05 and a β value of 0.20, based on an estimated umbilical artery systolic/diastolic (UA S/D) ratio of 2.64±0.49 between 34 and 37 gestational weeks. It was estimated that a total sample size of 20 pregnant women would be required to reveal a difference of 0.5 in the UA S/D ratio between the two groups. Therefore, the number of patients evaluated during the study period was sufficient for analysis.
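As a cross-check, the a priori sample-size calculation described above can be reproduced with the textbook normal-approximation formula for comparing two independent means; the sketch below is a minimal illustration under these stated assumptions and may differ slightly from whatever software the authors used.

from scipy.stats import norm

alpha, beta = 0.05, 0.20   # two-sided alpha; beta = 1 - power (power = 0.80)
sigma, delta = 0.49, 0.5   # SD of the UA S/D ratio and the detectable difference

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(1 - beta)         # ~0.84
n_per_group = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
print(f"required n per group ~ {n_per_group:.1f}")   # ~15, rounded up to 16

With the 2:1 control-to-case matching used here, the required number of cases under the same formula is somewhat smaller, which is consistent with the 22 ICP cases enrolled.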
Sonographic examinations
A color Doppler unit (Toshiba Aplio 500, Tokyo, Japan) with a 3.5-MHz convex probe was used to perform all ultrasonographic measurements. Gestational age was determined based on fetal measurements and the date of the last menstruation. The presence of congenital heart abnormalities was excluded by ultrasonographic evaluation of fetal cardiac anatomy. The angle of the transducer beam relative to the direction of blood flow was maintained at <20° throughout Doppler ultrasonography, and the high-pass filter was set at 100 Hz. All cardiac parameters were evaluated over 3-5 consecutive cardiac cycles and stored for off-line analysis; a single investigator (S.K.) completed all measurements. All measurements were obtained in the absence of uterine contractions, fetal breathing, or other fetal movements, with the mother positioned in the left lateral recumbent position. Doppler measurement of the UA was conducted at the umbilical cord midsection. Insonation of the middle cerebral artery (MCA) occurred via the occipital or temporal bone window, identified by the circle of Willis on the axial section of the brain. The resistance index (RI), pulsatility index (PI), and S/D ratio were evaluated for both the UA and MCA. The isthmus aortic peak systolic velocity (IAo PSV) was measured from either the sagittal longitudinal aortic view or the three-vessel trachea view with an insonation angle <30°. The atrioventricular valves were imaged from the apical four-chamber view of the heart. Two diastolic peaks were used to characterize flow velocity at the tricuspid valve (TV) and mitral valve (MV), corresponding to active ventricular filling during atrial contraction (A-wave) and early ventricular filling (E-wave). The E/A ratio was calculated from the E-wave and A-wave peak velocities. The pulmonary veins (PV) were visualized by color Doppler imaging, following four-chamber cross-sectional imaging of the fetal heart and thoracic cavity. Sample volumes were placed over the PV immediately dorsal to the entrance into the left atrium. The angle of the ultrasound beam relative to the direction of blood flow was maintained at <10°, and the PI and peak velocity index (PVI) were evaluated. The study included patients admitted to the hospital with a diagnosis of ICP, and according to our hospital protocol, UDCA treatment was initiated in all patients. For this reason, Doppler sonographic assessment was performed in the ICP group after the administration of UDCA, and it was not possible to assess the effect of UDCA on the Doppler parameters. Fetal Doppler examination of the cardiac circulation was conducted once, and the results were compared with those of the control group. Fetal Doppler examination of the peripheral circulation was conducted weekly before 36 weeks and twice a week thereafter until delivery in the ICP group, and the results were compared with the reference range for gestational age (17). Thus, 72 Doppler readings were taken from the 22 patients with ICP at scheduled intervals during the study period. Figure 1 represents a flow chart of the study design.
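For reference, the indices reported here are conventionally computed from the peak systolic velocity (PSV), end-diastolic velocity (EDV), and time-averaged maximum velocity (TAMXV) of the spectral waveform; the small Python helper below (standard definitions assumed, not specific to this study; the waveform values are made up) illustrates the arithmetic.

def doppler_indices(psv, edv, tamxv):
    # RI = (PSV - EDV) / PSV; PI = (PSV - EDV) / TAMXV; S/D = PSV / EDV
    # All velocities must be in the same units (e.g. cm/s)
    return {
        "RI": (psv - edv) / psv,
        "PI": (psv - edv) / tamxv,
        "S/D": psv / edv,
    }

print(doppler_indices(psv=60.0, edv=24.0, tamxv=38.0))
# -> {'RI': 0.6, 'PI': 0.947..., 'S/D': 2.5}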
Statistical analysis
Depending on the distribution of the measured values, Student's t-test (normally distributed data) or the Mann-Whitney U-test (non-normally distributed data) was used to compare data between the fetuses of ICP mothers and the normal control group, using MedCalc Version 9.3.1 (MedCalc Inc., Mariakerke, Belgium). Normal distribution of the continuous variables was assessed with the Kolmogorov-Smirnov test. P<0.05 was considered statistically significant.
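A minimal sketch of this comparison strategy, with hypothetical data, might look as follows; note that the Shapiro-Wilk test is used below as a convenient stand-in for the authors' Kolmogorov-Smirnov normality check.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
icp = rng.normal(2.7, 0.5, size=22)       # hypothetical UA S/D values, ICP group
control = rng.normal(2.6, 0.5, size=44)   # hypothetical control values

def compare_groups(a, b, alpha=0.05):
    # Use Student's t-test if both samples look normal, otherwise Mann-Whitney U
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        name, result = "Student's t", stats.ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(a, b)
    return name, result.pvalue

name, p = compare_groups(icp, control)
print(f"{name} test: p = {p:.3f} ({'significant' if p < 0.05 else 'not significant'})")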
Results
In total, 66 pregnant women were included in this study; 22 pregnant women who had ICP comprised the ICP group and 44 women without ICP comprised the control group. The mean maternal age was 26.4±5.5 years and 27.6±4.8 years for the ICP group and control group, respectively. The median gestational age was 34.7 (29-40) weeks and 33.5 (31-38) weeks for the ICP group and control group, respectively. There were no significant differences in maternal age, gestational age, gravidity, or parity between the two groups. The mean serum AST level at diagnosis was 138.90±97.6 IU/L (range 41-477 IU/L), and the mean serum ALT level was 154.62±104.2 IU/L (range 39-498 IU/L) in the study group. The demographic characteristics of both groups are shown in Table 1.
There was no statistically significant difference in the E/A ratio for each AV valve, in the aorta and pulmonary artery peak systolic velocities, or in the IAo PSV values between the study and control groups. Additionally, there were no significant differences in pulmonary vein PI and PVI values between the study and control groups. Doppler-derived fetal cardiac measurements are shown in Table 2. There was no statistically significant difference in the UA and MCA Doppler S/D ratio, PI, and RI. Obstetric Doppler measurements are shown in Table 3. When the findings were compared with the reference values of Doppler flow velocities of the UA in a normal pregnant population, the S/D ratio readings were significantly above 2 SD before 35 weeks of gestation (Table 4). No episode of fetal asphyxia or bradycardia was observed. The overall rate of meconium passage was 27.2% (6/22) in the ICP group (p<0.01). Spontaneous preterm birth was observed in 18.1% (4/22) of the ICP mothers. The mean gestational age was 39.1 weeks in the control group and 36.4 weeks in the ICP group (p<0.01). The mean birthweight was 3460.5 g in the control group and 2987.2 g in the ICP group (p<0.01). The cesarean delivery rate was 54.5% (12/22) in the ICP group. The median Apgar score was 8 at 1 min and 9 at 5 min. None of the newborns had an Apgar score <7 at 5 min (Table 5).
[Figure 1 (flow chart of the study design): the source population was all pregnant women with ICP hospitalized during the study period (n=94); pregnant women with ICP who met the study criteria were examined with cardiac and peripheral Doppler (n=22); peripheral Doppler was conducted weekly before 36 weeks and biweekly thereafter until delivery in the ICP group (n=72 Doppler readings); UA systolic/diastolic ratios were compared with the reference range for gestational age; for each ICP case, two healthy pregnant controls were matched for maternal age and gestational age at ultrasound (±1 week).]
Discussion
In this study, we examined fetal cardiac and systemic circulation using routine echocardiographic Doppler parameters, which may add information regarding fetal circulatory dynamics in women who have ICP. Bile acids and their toxic metabolic byproducts are implicated in the fetal morbidity associated with ICP. In animal models, bile acids may exert toxic effects on the myometrium and placenta. Elevated serum taurocholate, a bile acid, within the fetus may contribute to fetal dysrhythmia and intrauterine mortality. Taurocholate may impair the propagation of cardiac conduction and disrupt synchronous contraction via interruption of the calcium dynamics of cardiomyocytes, and it may alter the function of the gap junctions (8,9). In another experimental animal study, investigators evaluated the influence of tauro-conjugated cholic acid administration on in vitro cultures of adult and neonatal rat cardiomyocytes and reported that neonatal rat cardiomyocytes are more sensitive than adult cardiomyocytes to the adverse effects of bile acids, including altered calcium dynamics, arrhythmias, and abnormal contraction (18). These data are consistent with the observation that pregnant women with ICP do not show arrhythmia or cardiotoxic effects. It is not possible to investigate the effects of bile acids on the intact human fetal heart at a cellular level. However, it is postulated that if elevated bile acids are toxic to the fetal heart, there may be alterations in cardiac circulation and cardiac Doppler measurements. In our study, we aimed to evaluate fetal cardiac function using relatively simple Doppler measurements, which do not require special training and equipment. Based on this idea, we performed Doppler analysis to assess the diastolic function of the fetal heart using the E/A ratio of both the mitral and tricuspid valves and the pulmonary vein PI and PVI. Changes observed in pulmonary vein flow velocity indicate the left atrial pressure dynamics occurring during the cardiac cycle (19). Peak flow velocities of the aorta and pulmonary artery in the pulsed Doppler pattern were evaluated for the systolic function of the heart. Additionally, to investigate the balance between the two ventricular outputs and differences in the impedance of the two vascular systems, we performed aortic isthmus Doppler. We found no difference in the fetal cardiac and peripheral Doppler parameters between the two groups. A recent study evaluated fetal echocardiographic examinations of fetuses of pregnant women who have ICP (20). The researchers reported that the left ventricular longitudinal strain, systolic strain rate, and diastolic strain rate are significantly decreased in fetuses with severe cholestasis compared with those in control fetuses. Furthermore, there was a positive correlation between fetal myocardial deformation and maternal total bile acid levels.
Maternal prognosis is excellent in ICP, but there are significant risks for the fetus. We observed that the rates of both spontaneous and iatrogenic preterm labor, meconium-stained amniotic fluid, and neonatal unit admission in the ICP group were significantly higher than those in the control group. There are limited data regarding the association between obstetric Doppler findings and abnormal fetal outcome in women who have ICP. Guerra et al. (21) concluded that there are no significant changes in any of the blood flow velocity indices determined by Doppler blood flow analysis in patients with obstetric cholestasis. Zimmermann et al. (22) determined the Pourcelot ratio in pregnancies affected by obstetric cholestasis and found Doppler to be of little value.
Informed Consent: Written informed consent was obtained from patients who participated in this study. | 2018-04-03T04:55:40.394Z | 2015-06-01T00:00:00.000 | {
"year": 2015,
"sha1": "607d4a471fea5a36e85f195f798042bc327dc5e3",
"oa_license": null,
"oa_url": "https://doi.org/10.5152/jtgga.2015.15173",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "21d9d1cda88a399494f83f53dcfeb822a22d7704",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255831764 | pes2o/s2orc | v3-fos-license | Expression of AE1/p16 promoted degradation of AE2 in gastric cancer cells
Human anion exchanger 1 and 2 (AE1 and AE2) mediate the exchange of Cl−/HCO3− across the plasma membrane and regulate intracellular pH (pHi). AE1 is specifically expressed on the surface of erythrocytes, while AE2 is widely expressed in most tissues, and is particularly abundant in parietal cells. Previous studies showed that an interaction between AE1 and p16 is a key event in gastric cancer (GC) progression, but the importance of AE2 in GC is unclear. The relationship among AE1, AE2 and p16 in GC cells was characterized by molecular and cellular experiments. AE2 expression and pHi were measured after knockdown or forced expression of AE1 or p16 in GC cells. The effect of AE2 on GC growth and the correlation of AE2 expression with differentiation and prognosis of GC were also evaluated. The effect of gastrin on AE2 expression and GC growth was investigated in cellular experiments and mouse xenograft models. p16 binds to both AE1 and AE2 simultaneously. AE1 or p16 silencing elevated AE2 expression on the plasma membrane where it plays a role in pHi regulation and GC suppression. AE2 expression was decreased in GC tissue, and these decreased levels were correlated with poor differentiation and prognosis of GC. The low AE2 protein levels are due to rapid ubiquitin-mediated degradation that was facilitated in the presence of p16. Gastrin inhibited the growth of GC cells at least partially through up-regulation of AE2 expression. AE1/p16 expression promoted AE2 degradation in GC cells. Gastrin is a potential candidate drug for targeted therapies for AE1- and p16-positive GC.
Background
Gastric cancer (GC) is the fourth most common cancer and the third most common cause of cancer-related deaths worldwide. Despite advances in GC prevention and treatment, the 5-year survival rate for this cancer remains at 20-25 % [1]. GC can be classified into intestinal (well- or moderately-differentiated) and diffuse (poorly-differentiated) types based on histopathological characteristics [2,3]. In general, intestinal GC is associated with gastritis that might progressively lead to atrophy, metaplasia, dysplasia, and finally cancer. Although gastritis may, in some cases, account for diffuse types of GC, this GC type does not arise via the above-described cascade of histological events [4][5][6].
We previously found that AE1 is expressed in the cytoplasm of GC cells and that its C-terminal 112 residues interact with the tumor suppressor p16 [17,18]. The cytoplasmic AE1/p16 complex enhanced the stability of both proteins and played a key role in GC progression, which was correlated with short survival times of GC patients [19][20][21]. In addition, siRNA-mediated suppression of AE1 significantly reduced the detection rate of GC in an H. pylori-induced animal model of GC [22]. In this study, we revealed that p16 binds not only AE1 but also AE2, and that formation of the AE1/p16 complex accounted for the enhanced degradation of AE2 in poorly differentiated GC cells. Moreover, gastrin, a major gastrointestinal hormone, could inhibit GC growth by blocking AE1/p16-promoted AE2 degradation.
Co-immunoprecipitation (co-IP) and immunoblot analysis
SGC7901 cells were lysed with radioimmunoprecipitation assay (RIPA) lysis buffer (Beyotime Institute of Biotechnology, Shanghai, China) containing fresh protease inhibitors and PMSF. The lysates were then incubated with anti-p16 antibody (BD Pharmingen) overnight at 4°C, followed by incubation with Protein G Plus/Protein A Agarose Suspension (Merck) for another 4 h at 4°C. After washing 3 times with ice-cold lysis buffer, proteins were released from the beads using SDS lysis buffer for 10 min at 95°C, resolved on 10 % SDS-PAGE gels, and analyzed by immunoblotting. The nitrocellulose membranes were blocked with 5 % skimmed milk in TBST to reduce nonspecific background. Membranes were then incubated with primary antibodies overnight at 4°C. After washing in TBST 3 times for 10 min each, membranes were incubated with secondary antibodies for 1 h at room temperature, and then washed again as before. Bound antibodies were detected using a chemiluminescence phototope-horseradish peroxidase kit according to the manufacturer's instructions (Pierce, Rockford, IL, USA).
Ubiquitination assay
SGC7901 cells were transfected with pEGFP-AE2a, HA-Ub and p16 expression plasmids as indicated in Fig. 3c. After transfection for 42 h, cells were treated with 10 μM MG132 for an additional 6 h and then harvested and lysed in RIPA lysis buffer containing fresh protease inhibitors and PMSF. Cell extracts were incubated with anti-GFP antibody overnight at 4°C, followed by incubation with the beads for another 4 h at 4°C. After separation from the beads, the proteins were resolved by 8 % SDS-PAGE gels and analyzed by immunoblotting with anti-Ub antibody to detect ubiquitination.
Cell fractionation
SGC7901 cells were transfected with AE1- or p16-targeted siRNA or shRNA plasmids separately. The cells were then harvested, and the cell fractionation experiment was carried out using a Membrane and Cytosol Protein Extraction Kit (Beyotime Biotechnology, China) according to the manufacturer's instructions.
Immunofluorescence
SGC7901 cells were seeded onto glass coverslips and allowed to adhere for 24 h. After 15 min of fixation with 4 % paraformaldehyde, cells were permeabilized with 0.2 % Triton X-100 for 10 min at room temperature, then washed with phosphate-buffered saline (PBS), and blocked with 3 % bovine serum albumin (BSA). Coverslips were treated with primary antibodies at 4°C overnight. Following 3 brief washes with PBS, the coverslips were incubated with appropriate secondary antibodies for 1 h. The samples were imaged by the Radiance 2100 Laser Scanning System (Bio-Rad, Hertfordshire, UK). DAPI was used to indicate the nuclei.
Measurement of pHi
The pHi was measured using BCECF-AM in a Synergy H4 Hybrid Multi-Mode Microplate Reader (BioTek, Winooski, VT, USA) as previously described. Briefly, the cells were grown overnight in 96-well plates (Greiner Bio-One GmbH, Frickenhausen, Germany). After washing twice with serum-free DMEM, cells were incubated in serum-free DMEM containing 1 μM BCECF-AM for 30 min at 37°C in humidified 5 % CO2. The cells were then washed with Ringer's buffer (140 mM NaCl or Na gluconate, 5 mM glucose, 5 mM potassium gluconate, 1 mM calcium gluconate, 1 mM MgSO4, 2.5 mM NaH2PO4, 25 mM NaHCO3, 10 mM HEPES, pH 7.4) three times and incubated in Ringer's buffer. BCECF-labeled cells were excited at 440 and 490 nm, and emission was measured at 530 nm using the Multi-Mode Microplate Reader. Fluorescence excitation ratios were converted to pHi values using the high-K+/nigericin method, by linear regression to a calibration curve.
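A minimal sketch of the calibration step, assuming the linear-regression conversion described above and using hypothetical calibration values, might look as follows.

import numpy as np

# Clamped pH (high-K+/nigericin) vs. measured 490/440 excitation ratio (hypothetical)
cal_pH = np.array([6.4, 6.8, 7.2, 7.6])
cal_ratio = np.array([1.10, 1.45, 1.80, 2.15])

m, c = np.polyfit(cal_ratio, cal_pH, deg=1)   # slope and intercept of pH vs. ratio

def ratio_to_pHi(ratio):
    # Convert experimental 490/440 ratios to intracellular pH
    return m * np.asarray(ratio) + c

print(ratio_to_pHi([1.60, 1.95]))   # -> approx. [6.97, 7.37]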
Estimation of cell viability
SGC7901 cells were seeded in 96-well plates and cultured overnight, and then transfected with empty vectors or AE2 expression vectors. At different time points, 10 μl of MTT (5 mg/ml) was added to each well and incubated at 37°C for 4 h. After carefully removing the supernatant from each well, an equal volume of DMSO (150 μl) was added to each well and mixed thoroughly. The absorbance from the plates was read at 490 nm with an ELISA reader.
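A small helper illustrating the usual normalization of MTT absorbances to the control wells is sketched below; the normalization convention and all numeric values are assumptions for illustration, not taken from the paper.

import numpy as np

def relative_viability(a_treated, a_control, a_blank=0.0):
    # Percent viability = (A_treated - A_blank) / (A_control - A_blank) * 100
    a_treated = np.asarray(a_treated, dtype=float)
    return (a_treated - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical 490 nm absorbances: AE2-transfected wells vs. empty-vector control
print(relative_viability([0.62, 0.58, 0.65], a_control=0.90, a_blank=0.05))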
Transwell cell migration assay
The transwell cell migration assay was performed using transwell culture inserts (12-well, Corning, NY) according to the manufacturer's instructions. SGC7901 cells were transfected with empty vectors or AE2 expression vectors for 48 h. The cells were seeded onto the upper chamber and allowed to migrate toward the lower face of the transwell culture inserts. Cells were then incubated at 37°C for 20 h. The cells inside the upper chamber were scraped off. Migrated cells on the underside of the inserts were fixed in methanol, stained with gentian violet and counted for 5 random 100× fields per well. The cell number per image was determined by using Image J software.
Immunohistochemistry
Paraffin-embedded gastric cancer samples and adjacent normal tissue samples were collected from surgical resections and endoscopic biopsies performed at Renji Hospital and Ruijin Hospital, Shanghai Jiao Tong University School of Medicine. None of the patients had undergone chemotherapy or radiotherapy before surgery. All specimens were obtained according to protocols approved by the Committees on Clinical Investigations of the respective institutions. Tumor specimens were fixed in 4 % neutralized formaldehyde and embedded in paraffin, and 4-μm sections were stained with hematoxylin and eosin (H&E). For immunohistochemistry, sections were subjected to antigen retrieval by incubation with citric acid (pH 6.0), and endogenous peroxidase was blocked by treatment with 3 % H2O2 for 15 min. After overnight incubation with primary antibodies at 4°C, tissue sections were incubated for 15 min at room temperature with an appropriate secondary antibody (MaxVision™ Kits), followed by 3,3′-diaminobenzidine staining, and then counterstained with hematoxylin. Normal rabbit IgG isotypes served as negative control antibodies.
Mouse tumor model and therapy
Four-week-old female athymic BALB/c nude mice were purchased from Shanghai Slac Laboratory Animal Co., Ltd. The principles governing the care and treatment of animals, as stated in the Guidelines for the Care and Use of Laboratory Animals formulated by the Ministry of Science and Technology of the People's Republic of China, were followed. Mouse experiments were approved by the Animal Research Committee of Shanghai Jiao Tong University. Nude mice were subcutaneously injected in the flank with 5 × 10⁶ SGC7901 cells suspended in 0.1 mL 0.9 % NaCl. Twenty xenograft nude mice were prepared to investigate the effect of gastrin on gastric cancer growth. When tumors reached a volume of 200 mm³, mice were randomized into either the control group, which received 0.9 % NaCl subcutaneously, or the therapeutic group, which received subcutaneous injections of gastrin (2 mg/kg, diluted in 0.9 % NaCl) twice daily. All mice were sacrificed after 20 days of subcutaneous injection. Tumor length and width were measured to calculate tumor volume (V) based on the following formula: V = length (mm) × width² (mm²)/2.
[Fig. 1 caption fragment: An IP experiment was performed to examine endogenous interactions between AE1, AE2 and p16 in SGC7901 cells. Whole-cell lysates were immunoprecipitated with rabbit anti-p16 antibodies, followed by detection of AE1, AE2 and p16 proteins.]
Statistical analysis
SPSS 13.0 statistical package (SPSS, Inc., Chicago, IL, USA) was used to analyze the experimental data. The data are expressed as the mean ± SD. Comparisons between two groups were performed with a t-test. The χ² test was used to analyze the rank data. Multiple comparisons were performed with one-way analysis of variance (ANOVA). Kaplan-Meier survival curves of two groups were compared by log-rank test. Statistical differences were considered significant for p < 0.05.
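A short sketch applying the tumor-volume formula quoted above, followed by the two-group t-test from the statistical analysis, is given below; all measurements are hypothetical placeholders, not the study's data.

import numpy as np
from scipy import stats

def tumor_volume(length_mm, width_mm):
    # V (mm^3) = length (mm) x width^2 (mm^2) / 2
    return np.asarray(length_mm, dtype=float) * np.asarray(width_mm, dtype=float) ** 2 / 2.0

saline = tumor_volume([14, 15, 13, 16], [9, 10, 9, 10])    # hypothetical control arm
gastrin = tumor_volume([11, 12, 10, 12], [8, 8, 7, 8])     # hypothetical treated arm

t, p = stats.ttest_ind(saline, gastrin)
print(f"mean V: saline {saline.mean():.0f} mm^3, gastrin {gastrin.mean():.0f} mm^3, p = {p:.3f}")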
p16 interacted with both AE1 and AE2
Since AE1 has 80 % sequence similarity with AE2 in the C-terminal region [7,23], and the C-terminus of AE1 can interact with p16, as we previously showed [17,19], we hypothesized that p16 could also interact with the AE2 C-terminus. To test this possibility, we performed an immunoprecipitation (IP) experiment using an anti-p16 antibody in SGC7901 cells. The results showed that the anti-p16 antibody precipitated detectable levels of both endogenous AE1 and AE2 (Fig. 1).
AE1 and p16 interaction affected AE2 expression and function
We previously reported that AE1 and p16 are largely expressed in the cytoplasm of poorly-differentiated GC SGC7901 cells, but are absent in well-differentiated GC MKN28 cells [19,20]. To further clarify the relationship among AE1, AE2 and p16, we down-regulated AE1 or p16 expression in SGC7901 cells by transfection with either AE1- or p16-targeted siRNA/shRNA plasmids. Confocal microscopy and cell fractionation experiments showed that knockdown of either AE1 or p16 enhanced AE2 trafficking to the plasma membrane and its nuclear distribution (Fig. 2a, b) and up-regulated AE2 expression (Fig. 2c, d). In contrast, overexpression of AE1 or p16 in MKN28 cells down-regulated AE2 expression (Fig. 2e, f).
Cytoplasmic AE1/p16 promoted ubiquitin-dependent degradation of AE2 in GC cells
We further speculated that the decreased levels of the AE2 protein could be due to instability potentially associated with AE1 and p16 expression. To test this possibility, AE2 levels in SGC7901 and MKN28 cells were dynamically measured after the cells were treated with cycloheximide (CHX), an inhibitor of protein biosynthesis (25 μg/ml for SGC7901 cells and 50 μg/ml for MKN28 cells), or the proteasome inhibitor MG132 (10 μM). Western blots showed that after blocking protein synthesis with CHX, the AE2 abundance in SGC7901 cells rapidly decreased compared to that in MKN28 cells (Fig. 3a). Moreover, AE2 protein levels were enriched more rapidly in SGC7901 cells than in MKN28 cells after treating the cells with MG132 (Fig. 3b). These results indicated that AE2 was unstable in poorly differentiated SGC7901 cells. To further test whether AE2 degradation was ubiquitin-dependent and affected by p16 expression, a p16 expression vector was co-expressed with HA-ubiquitin (HA-Ub) in SGC7901 cells. IP experiments showed that polyubiquitin chains were present on GFP-AE2 (Ub-AE2) in cells that overexpressed p16 and were treated with MG132 (Fig. 3c). Taken together, these results indicated that p16 enhanced ubiquitin-dependent degradation of the AE2 protein and promoted AE2 instability in GC cells.
[Fig. 3 caption: p16 enhanced ubiquitin-dependent degradation of AE2 protein. (a) AE2 expression in two GC cell lines treated with CHX (25 μg/ml for SGC7901 cells and 50 μg/ml for MKN28 cells) for the indicated times (left). The ratio of AE2 protein abundance to that of β-actin was normalized to a value of 1.0 for 0 h (right). Data are representative of experiments performed three times in triplicate. AE2 was more stable in MKN28 cells than in SGC7901 cells. (b) AE2 expression in two GC cell lines treated with 10 μM MG132 for the indicated times (left). Data are representative of experiments performed three times in triplicate. The ratio of AE2 protein abundance to that of β-actin was normalized to a value of 1.0 for 0 h (right). AE2 protein was more susceptible to degradation in SGC7901 cells than in MKN28 cells. (c) p16 enhanced ubiquitin-dependent degradation of the AE2 protein. HEK293T cells were co-transfected with vectors expressing pEGFP-AE2a, HA-Ub and p16, singly or in combination, as indicated. At 42 h after transfection, cells were treated with 10 μM MG132 for 6 h. Cell extracts were immunoprecipitated with anti-GFP antibodies to reveal polyubiquitination.]
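For illustration, a protein half-life can be estimated from such a CHX chase by fitting first-order decay to the actin-normalized signal (normalized to 1.0 at t = 0, as above); the sketch below uses hypothetical time points and band intensities.

import numpy as np

t_h = np.array([0.0, 2.0, 4.0, 8.0])       # hours after CHX addition (hypothetical)
ae2 = np.array([1.00, 0.55, 0.30, 0.10])   # normalized AE2/beta-actin signal (hypothetical)

# First-order decay A(t) = exp(-k t): fit a line to log(signal) vs. time
k = -np.polyfit(t_h, np.log(ae2), deg=1)[0]
print(f"decay constant k = {k:.3f} /h, half-life = {np.log(2) / k:.2f} h")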
AE2 suppressed GC growth by decreasing pHi
Previous studies showed that the pHi was elevated in AE1- and p16-positive SGC7901 cells and that cellular [19,24]. To explore whether the pHi was affected by AE1 or p16 expression, the pHi was measured after knockdown of AE1 or p16 expression in SGC7901 cells or after over-expression of these two proteins in AE1- and p16-negative MKN28 cells. The results indicated that AE1 or p16 knockdown in SGC7901 cells resulted in a significant reduction in pHi compared with control cells (Fig. 4a). In contrast, AE1 or p16 overexpression in MKN28 cells significantly increased the pHi (Fig. 4b). These results suggested that AE2 might play a role in inhibiting GC cell proliferation. Forced AE2 expression in SGC7901 cells confirmed that GFP-AE2 significantly decreased the pHi of the cells, and this decrease was accompanied by decreased cyclin D1 expression as well as reduced GC growth and migration (Fig. 4c-f). To further explore the role of AE2 in the inhibition of GC growth, AE2 expression in samples from 82 gastric cancer patients was detected by immunohistochemistry. AE2 was exclusively expressed in adjacent normal gastric tissues, but was significantly down-regulated in cancer tissues (Fig. 5a). Statistical analysis indicated that the expression frequency of AE2 in GC tissues was 25.6 % (21/82), significantly decreased compared with that in adjacent normal tissues (82/82) (Fig. 5b). Consistent with our molecular experiment results (Fig. 2), AE2 expression was negatively correlated with both AE1 and p16 expression in GC tissues (Fig. 5c, d). The clinicopathological analysis demonstrated that reduced AE2 expression was associated with poor differentiation of GC (Table 1). Most importantly, low levels of AE2 and high levels of AE1 and p16 were correlated with poor survival of GC patients (Fig. 5e).
Gastrin suppressed the growth of GC cells in vivo and in vitro through up-regulation of AE2 expression
We previously reported that gastrin inhibited the growth of AE1- and p16-positive GC cells [25]. To investigate a potential role for AE2 in mediating gastrin-induced GC suppression, a SGC7901 xenograft mouse model was established by subcutaneous injection of 5 × 10⁶ SGC7901 cells. After the initial tumors reached 200 mm³, animals were randomized into control and experimental groups, with the control group receiving 0.9 % NaCl and the experimental group receiving subcutaneous injections of gastrin (2 mg/kg, diluted in 0.9 % NaCl) twice daily for 20 days. Tumor suppression was observed after treatment with gastrin for 17 days (Fig. 6a) and was accompanied by increased AE2 protein levels in tumor extracts on day 20 (Fig. 6b). To confirm the role of gastrin in AE2 up-regulation, AE2 expression was detected by western blot after cells were treated with 10⁻⁷ M gastrin for 3 days. The results showed that gastrin up-regulated AE2 expression, which was accompanied by down-regulation of cyclin D1 expression (Fig. 6c) and a reduction in pHi (Fig. 6d).
Discussion
The sodium-independent Cl−/HCO3− transporters that make up the AE family, together with other ion carriers, are involved in pHi regulation [10,26,27]. Under physiological conditions, AE family proteins are generally activated by intracellular alkalosis. All AE members share three common structural domains: an N-terminal cytoplasmic domain, a transmembrane domain, and a C-terminal cytoplasmic domain, yet their molecular weights differ (AE1: ~95 kDa; AE2: ~170 kDa) [7]. The AE1 gene encodes full-length erythroid AE1 and a shorter kidney AE1 that in humans initiates at Met66 [28]. Intracellular trafficking of AE1 is mediated via interactions with other proteins. For instance, interplay between AE1 and p16 facilitates AE1 trafficking to the plasma membrane and also promotes p16 nuclear transport in AE2-positive HEK293T or K562 cells. Moreover, AE1 expression induces differentiation of the myelogenous leukemia cell line K562 [19,24]. These results suggest that in several cell types, such as renal tubular epithelial cells or hematopoietic cells, AE1 can be transported to the plasma membrane, and that AE1 and p16 play a mutually cooperative role in HEK293T and K562 cells. AE2 and p16 proteins were also normally located at the plasma membrane and in the nucleus, respectively, in these cells [17,24]. On the other hand, AE1, AE2 and p16 mRNAs could be detected under physiological conditions in gastric epithelial cells. The AE2 mRNA was translated into protein, while translation of both AE1 and p16 was normally silenced by several factors, including miR-24 and gastrin [24,25]. In contrast, under pathologic conditions such as achlorhydria and H. pylori infection, AE1 expression is significantly induced, causing a large accumulation of AE1 in the cytoplasm of gastric epithelial cells [19,22]. Such cells lack a route for AE1 membrane trafficking, leading to sequestration of AE2 in the cytoplasm, which arises in response to interactions between cytoplasmic AE1 and p16. The cytoplasmic AE2 protein may be misfolded and thus more sensitive to ubiquitin-dependent degradation [29,30]. These results suggested that AE1 and AE2 do not normally coexist in the cytoplasm of cells, although AE2 overexpression can produce protein expression rates that exceed those of degradation; such undegraded AE2 protein could coexist with AE1 in the cellular cytoplasm. AE2 is known to be widely expressed in most cell types and has a particularly high expression level in gastric parietal cells. Nevertheless, how the AE2 protein is removed from cells has been unclear. Here we demonstrated that AE2 is degraded by the ubiquitin proteasome pathway in GC cells. Aberrant AE2 expression levels could cause total or partial dysfunction in regulating pHi [7]. Several studies demonstrated that cytoplasmic pH plays crucial roles in controlling DNA synthesis, cell growth, proliferation, differentiation, oncogenesis and malignant transformation [31]. In addition, there is increasing evidence that cancer cells have a 'malignant' alkaline pHi, which is consistent with our previous findings [32]. Thus, intracellular alkalinization may be an important hallmark of GC tumor cells [33,34]. We speculated that in GC, impaired AE2 expression in turn elevates the pHi and reduces acid secretion via interactions with AE1 and p16, which could worsen achlorhydria syndromes and promote GC progression (Fig. 6e). AE1 was previously demonstrated to be an unexpected factor that is responsible for p16 cytoplasmic sequestration and is associated with both tumor progression and poor prognosis [17,18]. As such, the nuclear distribution of AE2 requires further investigation.
[Table 1 footnote: the data were partially analyzed owing to incomplete information.]
[Fig. 6 caption: Gastrin suppressed GC growth in vivo and in vitro through AE2 up-regulation. (a) Gastrin induced inhibition of tumor cell proliferation. SGC7901 xenograft-bearing nude mice were treated with 0.9 % NaCl or gastrin. Tumor growth rates were determined by calculating the percentage change in tumor volume (T) compared with the initial tumor volume (T0). *p < 0.05, compared with the control group (n = 10). (b) Gastrin increased AE2 abundance in tumor extracts. The ratio of AE2 protein abundance to that of β-actin was normalized to a value of 1.0 for the 0.9 % NaCl group. *p < 0.05, compared with the 0.9 % NaCl group (n = 10). A representative immunoblot is presented below. (c) Gastrin induced up-regulation of AE2 protein expression and down-regulation of cyclin D1 protein expression. The ratio of AE2 and cyclin D1 protein abundance to that of β-actin was normalized to a value of 1.0 for the SGC7901 group (n = 3); *p < 0.05, compared with the SGC7901 group. (d) Cellular acidification occurred in SGC7901 cells after incubation with 10⁻⁷ M gastrin for 24 h. Data are representative of experiments performed three times in triplicate; *p < 0.05, compared with untreated SGC7901 cells. (e) Model for gastrin-induced inhibition of GC through up-regulation of AE2 levels, which are decreased by AE1/p16. In GC cells, intracellular retention of AE1 and cytoplasmic sequestration of p16 lead to increased ubiquitin-dependent degradation of AE2, as well as reduced total and plasmalemmal abundance of AE2. The resulting reduction in AE2-mediated anion exchange activity may result in cellular alkalinization, leading to increased cyclin D1 expression. Gastrin up-regulates AE2 expression by blocking formation of the AE1/p16 complex. Enhanced AE2-mediated Cl−/HCO3− exchange activity acidifies GC cells, which in turn retards cell growth.]
Conclusions
Here we demonstrated that ectopic expression of AE2 together with AE1 and p16 expression is an important pathogenic factor in the development of GC, and that dysfunctional AE2 can be degraded through a ubiquitindependent pathway. Gastrin affects AE2 expression and thus could be a potential candidate drug for targeting therapy for AE1-and p16-positive GC. Our findings will bring new perspectives on future clinical treatments for GC. | 2023-01-16T14:14:52.732Z | 2016-09-05T00:00:00.000 | {
"year": 2016,
"sha1": "08d9d45e45bb7b860e68239d35102d6c4554c587",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12885-016-2751-x",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "08d9d45e45bb7b860e68239d35102d6c4554c587",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
225004867 | pes2o/s2orc | v3-fos-license | RESEARCH PAPER Ethical Development of Young University Players through Involvement in Sports
The research aims to determine the relationship between sports involvement and the ethical development (autonomy, collectivism, common good, dignity, and productivity) of young university players. Sports involvement increases young people's assets and encourages positive outcomes for young people by providing opportunities, developing positive relationships, and delivering the support needed to build their leadership strengths. Data were collected from 250 university student players through an adapted and modified questionnaire. Descriptive statistics, Pearson's correlation analysis, and linear regression analysis were employed to analyze the collected data. The findings indicated that involvement in sports was significantly associated with the ethical development of young university players. It was concluded that the sports involvement of university players may be a source of ethical development in their practical lives and can be considered a positive sign for a healthy society. Recommendations were offered for directors of sports, sports officers, and coaches, in line with administration, to improve the ethical development of student players at universities.
Introduction
Youth development is a pro-social process that engages young people with their communities, organizations, peer groups, and families in ways that are creative and constructive. According to Coakley (2011), youth development is a process that enables youngsters to meet the challenges of adolescence and development and to achieve their maximum potential. Youth development is supported through activities that help young people build social, moral, emotional, physical, and mental capacities.
Sport is considered an activity involving physical effort and skill in which an individual or team competes against another or others for entertainment, and it is among the most common organized activities in which youth engage (Tremblay et al., 2011). Similarly, physical activity can improve cardiovascular fitness, muscular strength and endurance, flexibility, and bone structure, ensuring optimal development in youth. According to Balsano, Phelps, Theokas, Lerner, & Lerner (2009), injuries, burnout, compromised skill acquisition, and the untapped talent/potential of youth athletes are on the rise.
Sports can have a positive effect on the psychological wellbeing of youth and can build confidence and reduce stress and anxiety (Fredericks & Simpkins, 2012). It is well recognized that sport affects how youths develop socially. Through sports, individuals can experience equality, independence, and empowerment. Youth sports can teach vital life skills (Maslow & Chung, 2013). Regarding mental development, there has long been a tradition that a healthy body leads to a healthy mind and that sports can support academic development in youth as well.
The relationship between youth and sport is a strong one, and historically youth have been encouraged to play sports for countless benefits. The benefits of involvement in sports may be physical, lifestyle-related, affective, social, and mental. Regarding development, physical inactivity has been recognized as a risk factor for a number of lifestyle diseases, and participating in sports benefits overall health (Zeldin, Christens, & Powers, 2013).
Sports help young people become better contributors to society. Today's youth need to develop skills for success throughout life. During their adolescent and young-adult years, youth encounter problems with behavior, relationships, instability, and a lack of direction (Catalano, Berglund, Ryan, Lonczack, & Hawkins, 2004). The current research was designed to investigate the development of youth through involvement in sports.
Sports activities are useful for developing various noteworthy skills. Adults face competition when they are applying for and keeping jobs, while youth face competition in academics and sports. Taking part in competitive team sports at this young age offers a chance to understand the healthy aspects of competition in a familiar environment. Students of all ages who participate in sports have been found to cope better with competition in other aspects of their lives (Lubans, Plotnikoff, & Lubans, 2012).
Sports activities additionally contribute to physical well-being. Moreover, physically active youth are more likely to be nutrition-conscious in their food choices than those not adequately engaged in a sport (Gavin, Catalano, David-Ferdon, Gloppen, & Markham, 2010). One fundamental reason school sports matter is that they provide activity that youth may not otherwise get; notably, this kind of activity generally occurs after school. Most commonly, these activities include organized games such as football, basketball, baseball, tennis, track and field, and soccer, but they may also include playground games and various pastimes (Durlak, Weissberg, Dymnicki, & Schellinger, 2011).
Young people took part in sports well before organized leagues existed; most games were played informally in parks or in the street. Many adults recall times when they played and competed for fun. While having fun, young people also benefited from games in two major ways, physically and mentally. Like others in their communities, parents believe that participation in youth sports is important to a child's development.
One coach observed that it is good for children to make a commitment, work their hardest, and reap the rewards of self-confidence, achievement, sportsmanship, discipline, and time management (Kegler, Young, Marshall, Bui, & Rodine, 2005). According to Choi, Harachi, Gillmore, and Catalano (2005), youth involved in sports and related activities reported a higher level of psychosocial development. Moreover, involvement in youth sports improves participants' acquisition of physical benefits as well as various social and mental benefits. A study examining participation in youth sports and its effect on the development of emotional intelligence therefore appears warranted.
Sport can contribute substantially to global, national, and local efforts to give youth a healthy start. It can help those who have not had a good start and equip young people with the skills, personal and social resources, and support needed to make key life transitions successfully.
Youth enter sports for many different reasons and from a variety of motives, ranging from wanting to join an activity their friends are involved in to parents signing them up in the hope that they will enjoy sport as they once did. Fun, skill improvement, affiliation, fitness, challenge, and achievement/status are the most consistently reported youth motives for sports participation (Jones & Perkins, 2006). Athletes find enjoyment in sport through a mastery climate, positive interactions among teammates, support from teammates and coaches, and coach acknowledgement of satisfaction with the player's performance (Fredericks & Simpkins, 2012). Motives for participation also appear to be shifting with the growing trend of starting athletes at as early an age as possible, so that they have more time to specialize in their chosen sport in the hope of increasing their likelihood of being successful and competitive (Zeldin et al., 2013).
In the context of sports, coaches, parents, and peers are the social agents who typically come to mind, with the coach as the main contributing agent. The coach is usually seen as the individual who possesses the skills to be learned, which naturally places them in the role of mentor to those they coach (Catalano et al., 2004). Instructors, in any capacity, have great influence over how the learning environment is shaped and over a child's experience within it (Lubans et al., 2012). Coaches, parents, and peers have a significant impact not only on youths' development but also on their participation in sports. Each social agent exerts a unique influence on the environment and climate of sport, which bears directly on youth development.
Although the sports domain is shaped predominantly by coaches and athletes, other factors also contribute to the environment and climate. Increasingly, that environment should incorporate family, school, and the wider community in order to build capable competitors and, more importantly, positive and healthy citizens (Gavin et al., 2010). Athletes who described their team climate as performance-oriented (ego-involving) characterized their environment as one in which teammates constantly competed against one another to be the best, mistakes were punished, and favoritism was shown to certain athletes (Durlak et al., 2011). The youth sport environment is thought to be a microcosm in which development and learning occur. By comparison, more professionally oriented sport environments are primarily focused on production (winning), commercialization, and entertainment (Kegler et al., 2005).
The following hypotheses were developed for the present research:
H1
There is a significant relationship between sports involvement and the ethical development of young players at universities.
H2
Sports involvement has a significant influence on the ethical development of young university players.
Material and Methods
A population can be defined as the aggregate, or sum, of all objects, subjects, or participants that conform to a set of requirements. The present research was descriptive in nature and employed a cross-sectional study design. The population comprised all players at two public-sector universities of Pakistan (The Islamia University of Bahawalpur and Government College University Faisalabad) who were enrolled in various disciplines and engaged in diverse games.
The researchers used an adapted and modified survey questionnaire, with the permission of the original authors (Ziółkowski, Strzałkowska, Sakłak, Zarańska, & Bonisławska, 2012), as the data-collection instrument. Reliability was established through pilot testing (Cronbach's alpha = .759), and validity was reviewed with three referees in the relevant subject. A sample can be defined as a representative component of a target population on which researchers work during their study. The total sample size was set at 250 respondents, of whom 241 responded. Simple random sampling was used to select the sample, and the data were collected personally by the researchers from the randomly selected university players. SPSS (version 25) was used to process the data, and descriptive statistics, Pearson's correlation analysis, and linear regression analysis were used to test the research hypotheses.
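To make the analysis pipeline concrete, the following minimal Python sketch reproduces the reported steps (reliability check, Pearson's correlation for H1, simple linear regression for H2). The file name and column names are hypothetical illustrations; the study itself used SPSS.

```python
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical data layout: one row per respondent, per-item columns named
# "item_*" plus summed scale scores for the two constructs.
df = pd.read_csv("survey_responses.csv")
print(f"Cronbach's alpha = {cronbach_alpha(df.filter(like='item_')):.3f}")

x = df["sports_involvement"]      # predictor scale score (assumed column)
y = df["ethical_development"]     # outcome scale score (assumed column)

# H1: Pearson's correlation between involvement and ethical development.
r, p_corr = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f} (p = {p_corr:.3f})")

# H2: simple linear regression of ethical development on involvement.
reg = stats.linregress(x, y)
print(f"slope = {reg.slope:.3f}, R^2 = {reg.rvalue**2:.3f}, p = {reg.pvalue:.3f}")
```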
Results and Discussion
Descriptive statistics (frequency, mean, standard deviation, and percentage) were used to summarize the demographic information of the respondents.
Table 1. Descriptive Statistics of Respondents' Age (n = 241)
The ages of the 241 respondents, who were enrolled in various disciplines at both universities, ranged from 18 to 28 years, with a mean of 20.82 and a standard deviation of 1.62 (Table 1).
Pearson's correlation analysis was used to determine the relationship between sports involvement and the ethical development of young university players, and linear regression analysis was performed to examine the influence of sports involvement on their ethical development. Descriptive statistics in Table 3 show the means and standard deviations of sports involvement (16.39, 4.37) and of the ethical development of young university players (19.65, 5.15). The regression model yielded R = .610 (adjusted R square = .454), a significant value, with a standard error of the estimate of 4.45 and a Durbin-Watson statistic of 1.74 (Table 4). Sports involvement accounted for a substantial proportion of the variance in the outcome variable (adjusted R square = .454) and significantly predicted the ethical development of young university players. The ANOVA results, F(1, 239) = 48.72, p = .01, were statistically significant (Table 5). The coefficients presented in Table 6 show a standardized coefficient for sports involvement of β = .387, t = 6.98, p = .01. These findings reveal that sports involvement significantly influenced the ethical development of young university players.
The findings of the present study revealed a significant (p = .01) relationship between sports involvement and the ethical development of young university players; the two were positively associated. The results thus point to a strong relationship between sports involvement and ethical development. A likely explanation is that as involvement in sports increases, young players develop moral and social values more fully among one another. Jones and Perkins (2006) concluded that sports involvement and player development were significantly associated. Yusof (2013) likewise found a significant relationship between involvement in sports and the ethical development of players. Several prior studies support the results of the present research (Catalano et al., 2004; Yusof, Chuan, & Shah, 2013; Durlak et al., 2011; Gavin et al., 2010; Maslow & Chung, 2013).
A linear regression model was estimated to determine the influence of sports involvement (the predictor) on the development of young players (the outcome variable). The findings revealed a significant influence of sports involvement on the development of young players. A plausible reason for this effect is that youth become healthier through involvement in sports and gain physical and physiological benefits. Vierimaa, Bruner, and Côté (2018) concluded that sports involvement had a significant effect on the development of players. Previous studies confirm the findings of the current research (Jones & Perkins, 2006; Lubans et al., 2012; Maslow & Chung, 2013; Yusof, 2013; Zeldin et al., 2013).
Conclusion
Involvement in sports today plays a central role in the development of young players and of other individuals as well. It was concluded that the involvement of young players in sports can be a source of social, moral, and physical growth and a positive, worthwhile sign for a healthy society. Sports participation may also contribute to the social development and self-awareness of youth. For young players, it may support socio-emotional development, identity work, character building, and moral growth, including the expression of values and sportsmanship, while fostering autonomy, collectivism, the common good, dignity, and productivity.
Recommendations
In view of the findings and conclusion, the following recommendations are proposed. The results revealed a significant effect of sports involvement on the ethical development of university student players. Directors of sports, sports officers, and coaches, together with the administration, should therefore arrange and deliver lectures on the ethical development of student players at universities. Being in close contact with university students, directors of sports should promote an environment rich in social and moral values, to reduce crime in today's society and to strengthen the ethics of university student players. All stakeholders in sports should deliberately create a game setting and atmosphere that fosters the potential benefits to youth character development rather than undermining them, so that young players can realize their potential at the national and international level. | 2020-10-19T18:12:36.782Z | 2020-09-15T00:00:00.000 | {
"year": 2020,
"sha1": "95e8bb9a2a306ee2d05aaccedf714c4a1e4b77d8",
"oa_license": "CCBY",
"oa_url": "https://pssr.org.pk/issues/v4/3/ethical-development-of-young-university-players-through-involvement-in-sports.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "bd979a52ec0d0b0ef21f53851f68fc65d597599e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
262030387 | pes2o/s2orc | v3-fos-license | Research on the Assessment and Reform of Professional Core Courses Based on Cultivating Applied Russian Talents
: In the new era, the reform of the curriculum system, classroom teaching, and teaching evaluation (course assessment) for foreign language talents has received the attention and research of many institutions and scholars. Three major platforms (university-enterprise cooperation, interaction between academia and research institutes, and university-local cooperation) should be set up for professional development. The university is also working on the construction of the specialization, the implementation of the core-course assessment reform, and the cultivation of applied Russian language talents for the construction of the Hainan Free Trade Port.
Introduction
Against the background of the new era, training programs are being reconstructed around the outcome-based education (OBE) concept, and the reform of the curriculum system, classroom teaching, and teaching evaluation has attracted the attention and research of many universities and scholars. Outcome-oriented education aims to achieve the goal of talent cultivation, so that learners attain the expected results after professional training. The core of outcome-oriented teaching design is reverse design: it takes the ultimate goal as the starting point, designs the curriculum backwards from that goal, and then carries out teaching activities. The starting point of teaching is not what the teacher wants to teach but what students need in order to achieve the final outcomes. The professional classroom emphasizes independent learning and fully mobilizes students' initiative; teaching is effective learning. Learning content, learning approaches, course assessment, and classroom teaching are reformed around the student: teaching content is selected according to the talent cultivation goal, and evaluation focuses on students' learning outcomes and performance.
Majors should build three platforms of university-enterprise cooperation, school-research interaction, and school-local cooperation, and develop application-oriented courses, an application-oriented teaching team, international professional practice bases, and an academic collaboration platform [1]. The research and demonstration of the applied talent training program adhere to the "student-centered" school philosophy: starting from the need to train students who serve society, the objectives, specifications, and requirements of talent training are determined; adhering to "output orientation", the curriculum system is constructed with the ideas of a "curriculum map" and a "matrix map", and research is fully carried out on Russian-related industry demand, university Russian majors, and Russian graduates. According to the skill requirements of the industry or job group, building courses with "competence as the core" should adhere to the principle of "continuous improvement": the requirements of graduates and employers should be fully understood, a curriculum system for quality improvement should be built, and characteristics and advantages should be highlighted [2]. When the advantages and characteristics of the school are embodied in the curriculum, students' competitiveness improves. Through in-depth industry research and surveys of graduates, related universities, and related majors, and on the basis of a comprehensive grasp and analysis of market demand, "entrepreneurial thinking" and a "global vision" are applied to identify the "minimum standard" and the "highest post standard" for professional talent training, to find the point where personnel training, student demand, and market demand combine, to analyze the core competitiveness of the major, and to determine the talent training goals and standards.
Analysis of the current situation and development of the Russian language profession in the "post-epidemic" period
The industry research focused mainly on the employment of applied Russian majors: representative enterprises were chosen, the typical tasks and work processes of several positions were investigated, typical tasks were extracted and analyzed, enterprises' expectations of graduates' knowledge, ability, and quality and their advice on talent training evaluation were gathered, and changes in the requirements of the talent training target were clarified.
Full investigation and demonstration indicate that, in the post-epidemic period, the demand for Russian talents in tourism between China and Russia (especially Russian tourists visiting Hainan), in business (economic and trade exchanges with Russian-speaking Belt and Road countries), and in translation will remain at current levels, and demand in some fields will surge. The senior-level elective courses in foreign tourism, international business, and translation are in line with the demand for Russian talents in the post-epidemic period.
The latest finding of this industry demand survey is the large demand for Russian talents in the cross-border e-commerce industry (oriented toward Russia) in the post-epidemic period. As an important way for domestic enterprises to go abroad, cross-border e-commerce has been strongly supported by national policies. Foreign trade, one of the important drivers of China's economic growth, is undergoing new business changes with the continuing spread of the global Internet: traditional foreign trade services are evolving into comprehensive cross-border e-commerce, which is becoming a key channel for China's import and export trade. In this context, the Chinese government supports the cross-border e-commerce industry through a series of policies.
In addition, enterprises regard the comprehensive quality of Russian graduates as active, diligent, and hardworking, and their main professional strengths as adaptability and willingness to work hard. Enterprises believe that the abilities Russian graduates most lack are oral communication in Russian (by telephone and face to face), working experience, and a solid grounding in basic professional knowledge of Russian. From the perspective of business work, the most important language courses are mainly spoken language. When recruiting, the professional qualification certificates sought are Russian proficiency or translation certificates. Enterprises are willing to cooperate with higher vocational colleges in building practice and training bases.
Analysis of the cultivation of talents of the same major in universities
In order to research the Russian major further and learn how typical colleges and universities cultivate applied Russian talents, their professional training schemes should be fully understood, including the setting of the applied Russian major, admissions and employment, training objectives and specifications, training mode, curriculum, schedule, teaching staff, practice, teaching facilities, teaching management and evaluation, industry-education integration, and international exchanges and cooperation.
The schools surveyed pay attention to improving comprehensive Russian-Chinese application ability and professional skills, but the matching of quality, ability, and knowledge to the needs of target job groups should be strengthened and made more targeted. Owing to the lack of genuinely double-qualified teachers, practice and training are often a mere formality, making it difficult to achieve the training objectives and specifications in real teaching. The training of Russian graduates should emphasize practical ability; the weakest abilities are mainly Russian-Chinese oral communication (by telephone and face to face) and Russian word processing (letters and e-mail).
Research and analysis of Russian language graduates
A questionnaire was designed around graduates' employment requirements and their feedback on the school's teaching. Research on Russian graduates serves to map their distribution and employment situation and the social demand for such talents, to gather feedback and advice on training objectives, curriculum, teaching effectiveness, and training formats, to understand the shortcomings in ability that students discover in actual work, to analyze the actual needs of professional positions, and to correct the objectives and content of professional talent training. Graduates are widely distributed across industries: sales enterprises account for 16.34 % and foreign trade companies for 11.83 %, with the remainder scattered across overseas customer service, civil aviation and railway services, administrative, secretarial, and human resources work, education and training, Internet/IT, and other fields. With the development of the Belt and Road, the employment of applied Russian graduates is evidently diversifying and penetrating all sectors. Among the positions held, Russian translation accounts for only 9.3 % and foreign trade sales for only 8.45 %; other positions include domestic trade sales, customer service, aviation and railway attendants, travel agents, real estate consultants, administrative assistants, accounting assistants and cashiers, and insurance sales, and some graduates start their own businesses. Of the graduates, 82.54 % believed that the ability most needing strengthening at school is Russian-Chinese oral communication (telephone and face-to-face communication), and 55.49 % cited business adaptation and practical ability (such as correspondence, negotiation, sales, and software use). As for the courses considered most important, 71.55 % of the graduates cited basic Russian, 61.41 % Russian speaking, 50.99 % Russian translation, and 42.54 % business (trade) Russian. Regarding what most needs improvement in professional teaching, 73.52 % of the graduates believed that practice and internships are insufficient. As for certificates helpful for employment, 84.51 % of the graduates cited Russian proficiency certificates (grade 4, levels 4 and 6), 58.87 % the College English Test band 4 and 6 certificates, and 41.97 % the Russian tour guide certificate.
Feasibility analysis against the National Standard and the "New Liberal Arts"
The new national standard advocates distilling the major's characteristics, repositioning the type of talent training, confirming the talent training mode, and taking service to regional economic development and the national strategy toward Russia as the talent training goal.
"Comprehensive quality and ability", "serving the national and local economy", "having social responsibility", "China's foreign exchanges", "practical ability" can be added. [3]he length of schooling can be modified to "3-6 years".A flexible credit system can be tried to encourage students to choose minor courses.The requirements of "Chinese feelings", "social responsibility" and "cooperative spirit" can be added to the quality requirements. [4]hinese feelings and social responsibility can be completed in public courses and general core education courses, and "cooperation spirit" can be completed through all curriculum systems, such as group work in small class teaching.
The knowledge requirements of "familiarity with Chinese language and culture" and "basic knowledge of the humanities, social sciences, and natural sciences" can be added.
The ability requirements can add "critical thinking ability" and "independent learning ability"; applied ability in tourism Russian, foreign-affairs public relations, and business Russian; and basic ability in English.
"Social practice" and "international communication" can be added in the practical teaching link.
The professional knowledge practice in the original talent training program can be converted into social practice, with the corresponding social surveys and other activities completed according to the requirements of the professional instructor; alternatively, a separate social practice component can be added.
The new liberal arts do not fundamentally deny the traditional liberal arts; they deepen them on the original basis and develop them further. "Liberal arts" is commonly understood along the following dimensions. First, in the college entrance examination, the liberal arts are contrasted with the sciences: the liberal arts subjects mainly include geography, history, and politics; the science subjects mainly refer to physics, chemistry, and biology; and other subjects, such as fine arts, music, and physical education, fall into the basic category of arts subjects. Second, the liberal arts correspond to the "practical sciences" and mainly refer to fundamental disciplines such as history, literature, language, and philosophy. Third, in higher education there is a group of disciplines collectively known as philosophy and the social sciences, which can simply be divided into two categories: the social sciences, mainly including management, law, economics, and education; and the humanities, mainly referring to art, literature, and philosophy. The "new liberal arts" stand alongside the "new sciences", "new engineering", "new agriculture", and "new medicine", and are in fact an abbreviation of the new philosophy and social sciences. In terms of subject classification, Russian language and literature is a "second-level discipline" under "foreign language and literature"; it is the "Russian" major under the category of "literature". | 2023-09-18T15:09:34.552Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "6e06e8de04b96a511f230855c8b8a5d57e229e26",
"oa_license": "CCBY",
"oa_url": "http://www.clausiuspress.com/assets/default/article/2023/08/29/article_1693317726.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4798c4b30b93f1f588a7b9790be77a1a15f6ae3e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
212958804 | pes2o/s2orc | v3-fos-license | Physical and biogeochemical impacts of RCP8.5 scenario in the Peru upwelling system
Abstract. The northern Humboldt Current system (NHCS or Peru upwelling system) sustains the world's largest small pelagic fishery. While a nearshore surface cooling has been observed off southern Peru in recent decades, there is still considerable debate on the impact of climate change on the regional ecosystem. This calls for more accurate regional climate projections of the 21st century, using adapted tools such as regional eddy-resolving coupled biophysical models. In this study three coarse-grid Earth system models (ESMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) are selected based on their biogeochemical biases upstream of the NHCS, and simulations for the RCP8.5 climate scenario are dynamically downscaled at ∼ 12 km resolution in the NHCS. The impact of regional climate change on temperature, coastal upwelling, nutrient content, deoxygenation, and the planktonic ecosystem is documented. We find that the downscaling approach allows us to correct major physical and biogeochemical biases of the ESMs. All regional simulations display a surface warming regardless of the coastal upwelling trends. Contrasted evolutions of the NHCS oxygen minimum zone and enhanced stratification of phytoplankton are found in the coastal region. Whereas trends of downscaled physical parameters are consistent with ESM trends, downscaled biogeochemical trends differ markedly. These results suggest that more realism of the ESM circulation, nutrient, and dissolved oxygen fields is needed in the eastern equatorial Pacific to gain robustness in the projection of regional trends in the NHCS.
Introduction
Eastern boundary upwelling systems (EBUSs) are oceanic systems where alongshore winds generate the upwelling of deep, cold, and nutrient-replete waters. This drives a high biological productivity and thriving small pelagic fisheries which are major sources of income for the adjacent countries. In particular, the Peruvian upwelling system (also known as the northern Humboldt Current system, NHCS in the following), located in the southeastern tropical Pacific, is the most productive EBUS in terms of fish catch (Chavez et al., 2008), due to its rich anchovy fishery. Moreover, the subsurface water masses in the NHCS are located in the poorly ventilated so-called "shadow zone" of the southeastern Pacific (Luyten et al., 1983). This low ventilation creates a subsurface water body with a very low oxygen concentration, the oxygen minimum zone (OMZ). The OMZ results from a balance between oxygen consumption by respiration of large amounts of organic matter exported from the highly productive surface layer and ventilation by the equatorial current system composed of eastward jets transporting relatively oxygenated waters (Czeschel et al., 2011;Montes et al., 2014). A particular aspect of the NHCS OMZ is its very low oxygen concentration (anoxia) at relatively shallow depths, which impacts the local marine ecosystem (Stramma et al., 2010;Bertrand et al., 2011).
In recent decades, public concern has risen about the impact of climate change on EBUSs. Using ship wind observations, Bakun (1990) showed that upwelling-favorable winds increased over recent decades in several EBUSs. He proposed that nearshore winds would continue to intensify due to an enhanced differential heating between land and sea, driven by a stronger greenhouse effect over land. However, this hypothesis has been challenged in the NHCS because of observation bias (e.g., Tokinaga and Xie, 2011) and poleward displacement of the South Pacific anticyclone (Belmadani et al., 2013; Rykaczewski et al., 2015). Nevertheless, in situ and satellite sea surface temperatures (SSTs) have shown a conspicuous surface coastal cooling off southern Peru (15° S) since the 1950s. This cooling, consistent with a wind increase found in the ERA40 reanalysis, suggests a possible intensification of the wind-driven upwelling (Gutierrez et al., 2011).
Recent analysis of the Coupled Model Intercomparison Project Phase 5 (CMIP5) global circulation models (GCMs) reported that the intensification of nearshore winds under scenarios of carbon dioxide concentration increase is mainly confined to the poleward portions of EBUS (Wang et al., 2015;Rykaczewski et al., 2015;Oyarzún and Brierley, 2019). However, the evolution of winds in the NHCS remains unclear (note that the NHCS stricto sensu was not included in these studies). Furthermore, the realism of IPCC GCMs is hampered by the coarse resolution of the model grids (∼ 100-200 km), which does not allow the representation of the details of coastal orography and coastline that influence the coastal wind structure.
A few downscaling studies focusing on regional wind changes in the NHCS have provided invaluable information. NHCS upwelling-favorable winds may weaken in the future, mainly during the productive austral summer season (Goubanova et al., 2011; Belmadani et al., 2014). However, only idealized extreme scenarios (preindustrial, doubling (2 × CO2), and quadrupling (4 × CO2) of carbon dioxide concentration) from a single GCM (IPSL-CM4; Marti et al., 2010) of the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) were downscaled in these studies. In line with these studies, Echevin et al. (2012) used a regional ocean circulation model (RCM) forced by statistically downscaled atmospheric winds from Goubanova et al. (2011) to downscale the NHCS ocean temperature and circulation changes under 2 × CO2 and 4 × CO2 scenarios. They found a strong warming in the surface layer, of up to ∼ +5 °C nearshore in the 4 × CO2 scenario with respect to preindustrial conditions, and an upwelling decrease during austral summer. Following the same regional modeling approach and using the downscaled winds from Belmadani et al. (2014), Oerder et al. (2015) found a year-round reduction in upwelling intensity, mitigated by an onshore geostrophic flow. The shoaling of upwelling source waters in the two scenarios suggests that upwelled waters could become less nutrient-rich and thereby reduce nearshore primary productivity (Brochier et al., 2013).
The impact of climate change on the NHCS productivity, oxygenation, and acidification has been even less investigated. Assuming the hypothesis of Bakun (1990) of increasing coastal winds, Mogollón and Calil (2018) found a moderate increase (5 %) in NHCS productivity using an RCM. However, they did not take into account the large-scale stratification changes driven by climate change, which may significantly contribute to nearshore stratification and mitigate the upwelling (Oerder et al., 2015). Following a similar approach, Franco et al. (2018) found a sustained acidification of NHCS shelf and slope waters under the Representative Concentration Pathway 8.5 scenario (RCP8.5, the so-called worst-case AR5 climate scenario corresponding to a 8.5 W m−2 heat flux driven by the greenhouse effect; e.g., van Vuuren et al., 2011), driven by changes in surface fluxes of atmospheric CO2 concentration and subsurface dissolved inorganic carbon concentrations. However, as in Mogollón and Calil (2018), the impact of climate change on NHCS surface winds, circulation, and stratification was unaccounted for in Franco et al. (2018).
In brief, previous regional modeling experiments were either obtained from (i) the downscaling of one single GCM or Earth system model (a GCM including a biogeochemical model, hereafter ESM), (ii) the analysis of relatively short time periods (e.g., 30 years in the stabilized phase of the 2 × CO 2 and 4 × CO 2 scenarios in Echevin et al., 2012;Oerder et al., 2015;Brochier et al., 2013), or (iii) simplified approaches that did not account for all physical forcings (e.g., Mogollón and Calil, 2018;Franco et al., 2018). More work is thus needed to evaluate the robustness of these findings under climate scenarios taking into account economic and population growth assumptions (e.g., RCP8.5) and over longer time periods (e.g., 100 years).
In the present work, three different ESMs are dynamically downscaled in the NHCS using a regional coupled dynamical-biogeochemical model. The studied time period is 2005-2100 under the RCP8.5 scenario. The regional trends from RCMs are compared to illustrate the diversity of regional climate change impacts. RCM trends are also contrasted with those of the ESMs in order to highlight the impact of the downscaling process. In the next section (Sect. 2) the regional model, the selection process of ESMs, and the downscaling methodology are described. Results are presented in Sect. 3: we describe the trends of key physical and biogeochemical parameters such as temperature, coastal upwelling, thermocline depth, oxygenation, nitrate, and productivity. The approach and implications of our work are discussed in Sect. 4. The conclusions and perspectives are drawn in Sect. 5.
Regional ocean model
The regional hydrodynamic model is the Regional Ocean Modeling System (ROMS) (Penven et al., 2006; Shchepetkin and McWilliams, 2009). A fourth-order centered advection scheme allows the generation of steep tracer and velocity gradients (Shchepetkin and McWilliams, 1998). For a complete description of the model numerical schemes, the reader can refer to Shchepetkin and McWilliams (2005). The model domain spans the coasts of southern Ecuador and Peru from 5° N to 22° S and from 95 to 69° W. It is close to the one used in Penven et al. (2005). The horizontal resolution is 1/9°, corresponding to ∼ 12 km. The bottom topography from SRTM30 (Becker et al., 2009) is interpolated on the grid and smoothed in order to reduce potential errors in the horizontal pressure gradient. The vertical grid has 32 sigma levels.
Wind speed, air temperature, humidity, and ROMS SST are used to compute the latent and sensible heat fluxes online using a bulk parameterization (Liu et al., 1979).
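To make the flux computation concrete, the following minimal sketch implements a constant-coefficient bulk formula. It is a simplification for illustration only: the exchange coefficients, the saturation-humidity formula, and the 0.98 salinity factor are typical assumed values, not the actual Liu et al. (1979) scheme, which uses stability-dependent transfer coefficients.

```python
import numpy as np

RHO_AIR = 1.22      # air density (kg m-3), assumed constant
CP_AIR = 1004.0     # specific heat of air (J kg-1 K-1)
LV = 2.5e6          # latent heat of vaporization (J kg-1)
CE = CH = 1.2e-3    # assumed constant exchange coefficients

def saturation_specific_humidity(sst_c):
    """Rough saturation specific humidity over seawater (kg/kg), SST in deg C."""
    es = 611.2 * np.exp(17.67 * sst_c / (sst_c + 243.5))  # Pa (Bolton formula)
    return 0.98 * 0.622 * es / 101325.0                   # 0.98: salinity effect

def bulk_fluxes(wind_speed, air_temp_c, q_air, sst_c):
    """Latent and sensible heat fluxes (W m-2, positive upward, i.e. ocean loss)."""
    q_sea = saturation_specific_humidity(sst_c)
    latent = RHO_AIR * LV * CE * wind_speed * (q_sea - q_air)
    sensible = RHO_AIR * CP_AIR * CH * wind_speed * (sst_c - air_temp_c)
    return latent, sensible

# Example: 7 m/s wind, 22 degC air, 0.012 kg/kg humidity, 20 degC SST
print(bulk_fluxes(7.0, 22.0, 0.012, 20.0))
```

Because the fluxes depend on the model's own SST, this online coupling lets the surface heat flux respond to the simulated warming rather than being prescribed.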
Biogeochemical model
ROMS is coupled to the Pelagic Interaction Scheme for Carbon and Ecosystem Studies (PISCES) biogeochemical model. PISCES simulates the marine biological productivity and the biogeochemical cycles of carbon and main nutrients (P, N, Si, Fe; Aumont et al., 2015) as well as dissolved oxygen (DO) (e.g., Resplandy et al., 2012; Espinoza-Morriberón et al., 2019). It has three nonliving compartments, which are the semi-labile dissolved organic matter, small sinking particles, and large sinking particles, and four living compartments represented by two size classes of phytoplankton (nanophytoplankton and diatoms) and two size classes of zooplankton (microzooplankton and mesozooplankton). The ROMS-PISCES coupled model has been used to study the climatological (Echevin et al., 2008), intraseasonal, and interannual variability of the surface productivity (Espinoza-Morriberón et al., 2017) and oxygenation (Espinoza-Morriberón et al., 2019) in the NHCS. Detailed parameterizations of PISCES (version 2) are reported in Aumont et al. (2015). Note that we used an earlier version of the model (PISCESv0) in this study, as PISCESv2 had not been coupled to ROMS yet at the beginning of our study. Here we describe the following parameterizations of PISCESv0: (i) diatom and nanophytoplankton growth, microzooplankton grazing and mortality, and mesozooplankton mortality depend on temperature (T) and are proportional to exp(a·T) with a = 0.064 °C−1; (ii) mesozooplankton grazing on nanophytoplankton and diatoms is proportional to exp(b·T) with b = 0.076 °C−1. These differences, in particular the larger temperature enhancement of mesozooplankton grazing with respect to phytoplankton growth, can play an important role in the context of surface warming in the NHCS. Boyd et al. (1981) measured grazing of Peruvian copepods; however, further laboratory experiments at different temperatures are needed to calibrate these rates.
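The practical consequence of the two exponents can be seen with a few lines of arithmetic: under a warming of ΔT, growth-related rates scale as exp(a·ΔT) while mesozooplankton grazing on phytoplankton scales as exp(b·ΔT), so the grazing-to-growth ratio grows as exp((b − a)·ΔT). A minimal sketch using the coefficients quoted above (the warming levels are illustrative values in the range of the SST trends reported later):

```python
import numpy as np

A = 0.064   # deg C-1, growth and microzooplankton rates (PISCESv0, as quoted)
B = 0.076   # deg C-1, mesozooplankton grazing on phytoplankton

for dT in (1.0, 2.0, 4.5):            # illustrative warming levels (deg C)
    growth_factor = np.exp(A * dT)    # relative increase in phytoplankton growth
    grazing_factor = np.exp(B * dT)   # relative increase in mesozooplankton grazing
    ratio = np.exp((B - A) * dT)      # grazing enhancement relative to growth
    print(f"dT = {dT:.1f} C: growth x{growth_factor:.3f}, "
          f"grazing x{grazing_factor:.3f}, grazing/growth x{ratio:.3f}")
```

Even for a strong warming of 4.5 °C, the grazing-to-growth ratio increases by only a few percent, but applied continuously it can tilt the balance of the planktonic ecosystem.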
Selection of the Earth system models
Three CMIP5 ESMs are selected for the regional downscaling. The selection process is based on the nutrients simulated by the ESMs and on the evaluation of biogeochemical bias. Only five ESMs (CNRM, GFDL, IPSL, CESM, and NorESM) represent the four nutrients (silicate, phosphate, nitrate, and iron) and DO required by PISCES. As different ESM versions were available, a total of eight ESMs (CNRM-CM5, GFDL-ESM2M, GFDL-ESM2G, IPSL-CM5A-MR, IPSL-CM5A-LR, IPSL-CM5B-LR, CESM1, NorESM1-ME) were compared to observations from the World Ocean Atlas (WOA2009, Fig. 1). Following Cabré et al. (2015), the ESM DO, nutrients, temperature, and salinity were averaged at 100° W between 5° N and 10° S, near the location of the western open boundary of the RCM, for the period 1980-2005 (1950-2005 for T and S). This meridional section intersects eastward jets: the equatorial undercurrent (EUC) at 0° and the off-equatorial southern subsurface countercurrents (SSCCs) at ∼ 4 and ∼ 8° S (Montes et al., 2010). These jets transport physical and biogeochemical properties to the Peru upwelling region (Montes et al., 2010, 2014; Oerder et al., 2015; Espinoza-Morriberón et al., 2017). Visual examination of the ESM temperature and salinity profiles (Fig. 1) suggests that the corresponding biases are weak in comparison with other variables. The comparison between the biases of different variables can be quantified by computing a bias normalized by the mean state, averaged between 0 and 500 m depth (0-250 m for temperature and salinity; see Table 1). The ESM normalized temperature bias is weaker than the biogeochemical biases (Table 1).
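A minimal sketch of this normalized-bias metric is given below, assuming model and observed profiles already interpolated to a common depth axis; the trapezoidal depth averaging is our assumption, since the paper does not specify the quadrature.

```python
import numpy as np

def normalized_bias(model, obs, z, zmax=500.0):
    """Depth-averaged (model - obs) bias normalized by the observed mean.

    model, obs : 1-D profiles on common depths z (m, positive downward,
                 ordered surface to depth).
    zmax       : lower bound of the averaging layer (500 m here; 250 m
                 would be used for temperature and salinity).
    """
    sel = z <= zmax
    thickness = z[sel][-1] - z[sel][0]
    bias = np.trapz(model[sel] - obs[sel], z[sel]) / thickness
    mean_obs = np.trapz(obs[sel], z[sel]) / thickness
    return bias / mean_obs
```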
All ESMs simulate an oxygen decrease with depth (Fig. 1a), but oxygen values are too low (i.e., < 10 µmol L−1) in CESM1-BGC, GFDL-ESM2M, GFDL-ESM2G, and NorESM1-ME. Slightly negative values are attained below 300 m depth for GFDL-ESM2G and CNRM-CM5. In contrast, the three IPSL model versions, which all include PISCES as a biogeochemical component, overestimate the oxygen content above ∼ 600 m depth. Note that only CESM1-BGC is able to reproduce the observed oxygen increase below 400 m depth, which corresponds to the lower limit of the OMZ.
The IPSL and CESM1-BGC silicate profiles are close to observations above ∼ 250 m depth, whereas the positive bias in GFDL-ESM2M and NorESM1-ME increases below 200 m depth. The CNRM-CM5 negative bias is moderate between 50 and 300 m depth (Fig. 1d).
To conclude, as the three IPSL ESMs and the CNRM-CM5 include the PISCES biogeochemical model also used in the regional simulations and provide reasonable nutrient bias with respect to the other ESMs (Table 1), IPSL-CM5A-MR (in which nitrate and phosphate biases are weaker than in the two other IPSL ESMs, Table 1) and CNRM-CM5 are selected. We also select GFDL-ESM2M, which represents the nitrate and phosphate profiles in the upper layers well, and whose bias did not increase at depth as in GFDL-ESM2G. CESM1-BGC also has weak biases with respect to the latter ESMs (Table 1), but some variables were not available from the archive (e.g., 10 m wind) at the beginning of this study. We thus restrict our study to the downscaling of three ESMs. The main characteristics of the selected ESM ocean models (grid spacing and biogeochemical structure) are summarized in Table 2. We refer to the ESMs as CNRM, IPSL, and GFDL in the following sections and figures.
Atmospheric forcing methodology
A bias correction is used to construct monthly forcing files (e.g., Oerder et al., 2015; note that daily files were not available for all ESMs). For each forcing variable X (i.e., X = wind velocity, air temperature, etc.), the bias-corrected variable X* is computed as follows:

X* = X_OBSclim + (X_ESM-RCP8.5 − X_ESM-hist-clim), (1)

where X_OBSclim corresponds to a monthly climatology of observed values, X_ESM-RCP8.5 corresponds to the coarse-grid ESM values for each month, and X_ESM-hist-clim corresponds to a monthly climatology of the coarse-grid ESM values during the historical period (2000-2010). This allows subtraction of the ESM mean bias, assuming that it remains identical over the historical period and over 2000-2100. This method has been used in several papers (Cambon et al., 2013; Oerder et al., 2015). The SCOW (Risien and Chelton, 2008) surface wind and COADS (Da Silva et al., 1994) downward shortwave and longwave flux and air parameter (temperature and specific humidity) climatologies were used for X_OBSclim. Note that submonthly wind variability may significantly impact surface chlorophyll in other EBUSs, such as off northern California where the wind variability is much stronger than off Peru (e.g., Gruber et al., 2006). Indeed, a previous regional modeling study in the NHCS showed a weak impact (less than 10 % difference) of daily wind stress with respect to monthly wind stress on 7-year-averaged biogeochemical variables. This suggests that using monthly winds may not significantly impact the climate trends reported in this study.
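A minimal sketch of this correction for one forcing variable, assuming monthly fields already regridded to the regional grid (time dimension first) and a series starting in January:

```python
import numpy as np

def bias_correct(obs_clim, esm_scenario, esm_hist_clim):
    """Delta-method bias correction, Eq. (1).

    obs_clim      : observed monthly climatology, shape (12, ny, nx)
    esm_scenario  : ESM monthly fields for the scenario run, shape (nt, ny, nx)
    esm_hist_clim : ESM monthly climatology over the historical period,
                    shape (12, ny, nx)
    """
    nt = esm_scenario.shape[0]
    months = np.arange(nt) % 12      # calendar month of each record (Jan start)
    # Observed climatology plus the ESM anomaly relative to its own climatology:
    return obs_clim[months] + (esm_scenario - esm_hist_clim[months])
```

By construction, the corrected forcing reproduces the observed climatology when the ESM anomaly vanishes, while retaining the ESM's simulated climate-change signal.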
Open boundary and initial conditions for physics and for biogeochemistry
As in Echevin et al. (2012) and Oerder et al. (2015), the ESM monthly sea level, temperature, salinity, and horizontal velocity at the locations of the RCM open boundaries are directly interpolated on the model grid without bias correction. Given the important bias of the ESM mean biogeochemical state (e.g., Bopp et al., 2013; Cabré et al., 2015), we apply the bias correction described in Eq. (1) to the biogeochemical fields at the open boundaries.
The three simulations are initialized as follows. Initial conditions from the ESM physical parameters of the historical simulation (2000-2010 January average) and WOA biogeochemical values (January) constitute the initial state. A 9-year spinup simulation from 1997 to 2005 is then performed to reach equilibrium. The runs are then forced by RCP8.5 conditions until 2100. State variables and biogeochemical rates (e.g., primary production) are stored every 5 d. The regional simulations are named R-IPSL, R-CNRM, and R-GFDL in the following.
Additional data sets
Two ocean reanalysis products are used to evaluate the ESM equatorial circulation and thermocline in present conditions. The SODA 2.3.4 reanalysis (Carton and Giese, 2008) over the period 1992-2000 assimilates observational data in a general circulation model with an average horizontal resolution of 0.25 • . The recently available GLORYS12V1 reanalysis over the period 1993-2017 is also used (Ferry et al., 2012). Altimeter data, in situ temperature and salinity vertical profiles, and satellite SST were jointly assimilated in GLORYS12V1 (Lellouche et al., 2018). This product is freely distributed by the Copernicus Marine Environment Monitoring Service.
Coastal indices
Time series of coastal indices characterizing the variability over the central Peru shelf for specific variables are computed. The variables are averaged in a coastal band extending from the coastline to 100 km offshore and between 7 and 13 • S.
An index of coastal upwelling, the cross-shore transport in a coastal band, is computed from the model output (Colas et al., 2008;Oerder et al., 2015;Jacox et al., 2018). The mean horizontal transport is computed each month in a coastal strip extending from 7 to 13 • S and from the coast to 100 km offshore. The transport is integrated vertically over the Ekman layer depth. The latter is diagnosed as follows: we compute the surface geostrophic current using model sea surface height, and we integrate the thermal wind relationship from the surface to the depth (equal to Ekman layer depth) at which the cross-shore current and the cross-shore geostrophic current differ by less than 10 % (see Oerder et al., 2015, for more details). The computation of this index is more straightforward than one based on model vertical velocity (Jacox et al., 2018) and leads to similar values (e.g., see Fig. 4 in Jacox et al., 2018). In contrast with coastal upwelling indices based on Ekman transport only, this index takes into account the role of the cross-shore geostrophic current which can modulate the coastal upwelling (e.g., during El Niño events; Colas et al., 2008;Espinoza-Morriberón et al., 2017).
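A simplified sketch of this transport computation for a single water column is given below; it assumes the velocity has already been rotated to the cross-shore direction and that the Ekman layer depth has been diagnosed as described above.

```python
import numpy as np

def upwelling_index(u_cross, z_w, ekman_depth):
    """Cross-shore transport (m2 s-1) integrated over the Ekman layer.

    u_cross     : cross-shore velocity per layer (nz,), positive offshore (m s-1)
    z_w         : layer interface depths (nz + 1,), negative downward and
                  ordered from bottom to surface (m)
    ekman_depth : diagnosed Ekman layer depth (m, positive)
    """
    dz = np.diff(z_w)                    # layer thicknesses (m), positive
    z_mid = 0.5 * (z_w[:-1] + z_w[1:])   # layer mid-depths (m, negative)
    in_ekman = z_mid >= -ekman_depth     # layers lying above the Ekman depth
    return np.sum(u_cross[in_ekman] * dz[in_ekman])
```

The monthly index is then obtained by averaging this transport over the coastal strip between 7 and 13° S; positive offshore transport in the Ekman layer corresponds to coastal upwelling, and the geostrophic contribution to u_cross is what allows the index to capture the onshore-flow compensation discussed above.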
Statistical methods
Only timescales longer than 5-7 years (e.g., El Niño timescales) are studied in this work. Therefore the time series are low-pass filtered using a 10-year moving average. This allows us to filter the El Niño-Southern Oscillation (ENSO) variability, which is very strong in the NHCS but not the focus of the present study. Linear trends of the time series are computed using a least-squares method. The percentage of change between 2006 and 2100 associated with the linear trends is listed in three tables (Table 3 for physical variables, Table 4 for oxygen and nitrate, and Table 5 for chlorophyll and zooplankton). Statistical significance is presented as a 90 % confidence interval, based on a bootstrap method: we compute a 10 000-member synthetic distribution derived by randomly removing data in the annual series. The confidence limits of the trends are converted into confidence limits for the percentages reported in the tables.
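The processing chain can be sketched as follows; the fraction of years retained in each synthetic member is an assumed choice, since the paper states only that data are randomly removed from the annual series.

```python
import numpy as np

rng = np.random.default_rng(0)

def moving_average(x, window=10):
    """Centered moving average; shortens the series by window - 1 points."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def linear_trend(t, x):
    """Least-squares linear trend (units of x per unit of t)."""
    return np.polyfit(t, x, 1)[0]

def bootstrap_trend_ci(t, x, n_boot=10_000, keep_frac=0.9, alpha=0.10):
    """90 % confidence interval of the trend from a synthetic distribution
    built by randomly removing annual values (keep_frac is an assumption)."""
    n_keep = int(keep_frac * len(t))
    trends = np.empty(n_boot)
    for i in range(n_boot):
        idx = np.sort(rng.choice(len(t), size=n_keep, replace=False))
        trends[i] = linear_trend(t[idx], x[idx])
    return np.quantile(trends, [alpha / 2, 1 - alpha / 2])
```

The trend itself and its confidence limits can then be converted into the percentage changes over 2006-2100 reported in the tables.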
Results
In the following sections we show that the RCM is able to represent the main characteristics of the NHCS coastal upwelling system thanks to its high spatial resolution (relative to the ESMs) and to the bias correction of the forcing. We then describe the long-term trends over the period 2006-2100 under the RCP8.5 scenario for key downscaled physical (surface and subsurface temperature, heat and momentum fluxes, upwelling) and biogeochemical parameters (oxygen and nutrient content, primary productivity, planktonic biomass) in the upwelling system but also in the equatorial band offshore of the NHCS. For selected variables we also compare the downscaled simulations and the coarse-grid ESMs. In the next section, we first characterize the downscaled physical fields.
Physical mean state and variability
Sea surface temperature spatial patterns
We first contrast the sea surface temperature (SST) patterns of the ESMs and RCMs to highlight the efficiency of the dynamical downscaling. The observed SST displays the cold water tongue along the coast and the associated cross-shore SST gradient, characteristic of coastal upwelling (Fig. 2a). The RCM correctly simulates these upwelling features (Fig. 2b). The fine representation of the coastline, shelf and slope topography, and bias-corrected alongshore winds (see Sect. 2.4) plays a role in the correct representation of the upwelling structure. The upwelling vertical structure is also well reproduced in the RCMs. Mean cross-shore temperature profiles (within 500 km from the coast and between 7 and 13° S) display the typical nearshore isotherm shoaling in the 0-100 m layer and deepening below, in good agreement with the CARS climatology (Fig. S1a-d in the Supplement).
Trends of nearshore SST
A steady warming of the surface coastal ocean is found in the three regional simulations (Fig. 3a). SST increases rapidly in R-IPSL starting in the 2020s, reaching +4.5 • C in 2100, whereas it increases from the 2030s in the other simulations, reaching +3.5 and +2 • C in R-CNRM and R-GFDL, respectively. Interestingly, decadal variability can produce decades during which the SST increase is stalled (a.k.a. "warming hiatus"), e.g., in 2035-2045 in R-CNRM and in 2040-2060 in R-GFDL. The ESM linear trends are very similar to the RCM nearshore warming trends (Fig. 3b, Table 3). Here the offset between the three ESM SST evolutions due to the different SST bias in 2005 (between 4 and 6 • C among the ESMs) has been corrected in order to better compare RCM and ESM trends. As an example, the spatial structures of the R-CNRM and CNRM SST anomalies are compared (Fig. 3c, d). The similarity between the two anomaly patterns is striking. Both display a maximum warming near the coasts and west of the Galápagos Islands where upwelling occurs.
Trends of coastal upwelling
Coastal upwelling decreases over the century in R-IPSL and R-CNRM, whereas it remains stable in R-GFDL. The upwelling is modulated by decadal variability, whose amplitude can reach 5 %-10 % of the mean value. Decadal variability may generate decades of upwelling increase (e.g., 2090-2100 in R-CNRM), masking the long-term decrease. Upwelling decadal variability is mainly forced by variations in the onshore geostrophic transport, which on average compensates for ∼ 50 % of the Ekman transport. As Ekman transport decreases over time in R-IPSL and R-CNRM, the relative contribution of the geostrophic transport increases over time. This onshore current is driven by the higher sea level in the equatorial portion of the upwelling system than in its poleward portion (Colas et al., 2008; Oerder et al., 2015). This flow is occasionally remarkably strong (e.g., in 2090 in R-CNRM, 2035-2040 and 2065 in R-GFDL), whereas the trends are weak.
Subsurface temperature anomalies
Nearshore subsurface temperature anomalies are impacted by equatorial subsurface temperature anomalies in two ways: thermocline anomalies may propagate along the equatorial and coastal wave guide (e.g., Echevin et al., 2011, 2014; Espinoza-Morriberón et al., 2017, 2018), and temperature anomalies may be transported eastward and poleward by the near-equatorial subsurface jets (Fig. 2a; Montes et al., 2010, 2011). The latter is particularly strong during eastern Pacific El Niño events (e.g., Colas et al., 2008, for the 1997-1998 event). The thermal structure of the upper layer is strongly impacted by climate change in the eastern equatorial Pacific. The depth of the 20 °C isotherm (hereafter D20) is used to characterize the thickness of the warm surface layer. It increases in all ESMs, at different rates (Fig. 6a). The deepening is roughly linear in GFDL (+5 %, Table 3) and CNRM (+26 %). In contrast, it increases nonlinearly in IPSL, first by ∼ 1.5 m per decade between 2005 and 2065 and then by ∼ 5 m per decade between 2065 and 2100. Note that D20 is shallower in the ESMs (∼ 30-40 m) than in observations (∼ 52 m in WOA) and in two ocean reanalyses (∼ 56 m in GLORYS2V1 and ∼ 58 m in SODA). A shallow thermocline is likely to be more impacted by greenhouse-induced surface warming in the model simulations than in the real ocean. D20 coastal trends in the RCMs (Fig. 6b) are roughly similar to the offshore ESM equatorial trends. The coastal deepening is moderate in R-GFDL (+12 %, Table 3). In contrast, a strong linear deepening is found in R-CNRM (+101 %). As in the equatorial region, the D20 deepening is nonlinear in R-IPSL, and the thickness of the warm surface layer more than doubles (+207 %). The RCM D20 values at the beginning of the century are within the range of estimated values from observations and reanalyses, whereas D20 is slightly too deep in the ESMs (Fig. 6c), which highlights the ability of the dynamical downscaling to reduce part of this systematic bias. The RCM trends are roughly in line with the ESM coastal trends. D20 deepening can be amplified (e.g., 207 % in R-IPSL vs. 126 % in IPSL) or mitigated (12 % in R-GFDL vs. ∼ 21 % in GFDL, Fig. 6c) depending on the model. Decadal variability from the equatorial region propagates to the coastal regions with little change.
(Fig. 6: SODA values (1992-2000) are also shown in panel a; D20 from the IMARPE climatology is marked by a dashed purple line in panel b; annual mean time series are filtered using a 10-year moving average.)
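For reference, an isotherm depth such as D20 can be diagnosed from a temperature profile by linear interpolation, as in the minimal sketch below (depths positive downward; this is a generic illustration, not the authors' code):

```python
import numpy as np

def isotherm_depth(temp, depth, target=20.0):
    """Depth (m) of the target isotherm by linear interpolation.

    temp  : temperature profile ordered from surface downward (deg C)
    depth : corresponding depths, positive downward (m)
    Returns NaN when the profile never crosses the target value.
    """
    below = np.where(temp < target)[0]
    if below.size == 0 or below[0] == 0:
        return np.nan
    k = below[0]                       # first level colder than the target
    t0, t1 = temp[k - 1], temp[k]
    z0, z1 = depth[k - 1], depth[k]
    return z0 + (t0 - target) * (z1 - z0) / (t0 - t1)
```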
We now investigate the evolution of the RCM mixed layer. The RCM surface boundary layer thickness (hbl), determined by comparing a bulk Richardson number to a critical value (K-profile parameterization, KPP;Large et al., 1994), is a good proxy of the model mixed layer (e.g., Li and Fox-Kemper, 2017). The R-GFDL mixed layer in 2006-2015 is in fairly good agreement with the mixed-layer depth (computed from temperature profiles) from the coarse 2 • × 2 • gridded climatology of de Boyer Montegut et al. (2004), whereas R-IPSL and R-CNRM values are ∼ 3 m shallower.
A shoaling of the mixed layer is found in all simulations (Fig. 7), in line with the surface heating (Fig. 4a, b) and reduced wind-driven mixing (Fig. 4c). The shoaling is slightly stronger in R-IPSL and R-GFDL than in R-CNRM, possibly due to the stronger surface warming in R-IPSL (Table 3).
The near-equatorial subsurface, coastal subsurface, and surface temperature linear trends of the RCMs and ESMs are compared in Fig. 8. Near-equatorial subsurface trends are weakest in GFDL and strongest in IPSL, which is consistent with the stronger D20 deepening in IPSL (Fig. 6a). A similar ranking from weakest (R-GFDL) to strongest warming (R-IPSL) is found for the coastal subsurface warming and coastal surface warming. The equatorial water masses are transported towards the coasts (Montes et al., 2010; Oerder et al., 2015), and the subsurface layer trends increase by 6 % in R-GFDL, 23 % in R-CNRM, and 10 % in R-IPSL with respect to the near-equatorial trends. The ESM trends are close to the RCM trends, which suggests that the nearshore subsurface warming is dominated by the eastward transport of warm near-equatorial subsurface waters in both the ESMs and RCMs. In the coastal region, the upper part of the 50-200 m subsurface water volume is upwelled into the mixed layer, where additional heat is deposited by the local atmospheric fluxes (Fig. 4a, b). The coastal SST trends increase with respect to the coastal subsurface anomalies (+17 % in R-GFDL, +37 % in R-CNRM, +44 % in R-IPSL), underlining the impact of different local heat fluxes. The amplitude of the ESM SST trend is very close (< 10 % change) to that of the RCM for R-IPSL and R-CNRM, which is consistent with the spatial patterns of SST change shown in Fig. 3c, d. Interestingly, the R-GFDL SST increase is ∼ 20 % weaker than that of GFDL.
Biogeochemical response of the NHCS under the RCP8.5 scenario
We now investigate the impacts of regional climate change on the main biogeochemical characteristics of the NHCS, namely oxygenation, nutrients, and productivity.
OMZ trends in response to the equatorial circulation
The suboxic (O2 < 5 µmol L−1; Karstensen et al., 2008) subsurface waters found in the NHCS result from a subtle balance between the eastward and poleward transport of relatively oxygenated waters from the equatorial region into the upwelling region and the ventilation due to mesoscale circulation (Thomsen et al., 2016; Espinoza-Morriberón et al.). In contrast, the eastward flow is underestimated by ∼ 50 % in R-CNRM and R-IPSL with respect to SODA, probably because of a weak EUC and/or weak SSCCs in these coarse-grid ESMs (Cabré et al., 2015). Over 2006-2100, the eastward velocity is stable (< 1 %, Fig. 9a, Table 3) in R-CNRM and decreases weakly in R-IPSL (−9 %) and in R-GFDL (−14 %). The evolution of the eastward dissolved oxygen (DO) flux at 95° W (Fig. 9b) approximately follows that of the mass flux. Due to a strong increase in equatorial DO (not shown), the DO flux uptrend is strong in R-CNRM (33 %, Table 4). This contrasts with the moderate decrease in the DO flux (∼ −5 %) in the other two simulations. Note that the eastward DO flux is ∼ 25 %-30 % stronger in R-IPSL than in R-CNRM at the beginning of the century. As the eastward flow in the 2-10° S equatorial band is stronger in R-IPSL than in R-CNRM (not shown), and the water is more oxygenated in this latitudinal band than within 2° S-2° N (e.g., Fig. 4 in Cabré et al., 2015), this results in a stronger DO eastward flux in R-IPSL than in R-CNRM.
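The eastward mass and DO fluxes at 95° W discussed above are section integrals of the form ∬ u [O2] dy dz. The sketch below shows one plausible discretization; retaining only eastward (u > 0) cells, the grid spacing, and the uniform fields are all assumptions made for illustration rather than details taken from the paper:

```python
import numpy as np

def eastward_do_flux(u, o2, dy, dz, eastward_only=True):
    """Section-integrated DO transport at a meridional section.

    u  : zonal velocity (m/s), shape (nz, ny)
    o2 : dissolved oxygen (mmol/m3), same shape
    dy, dz : constant cell widths (m)
    Returns the flux in mmol/s.
    """
    uu = np.where(u > 0, u, 0.0) if eastward_only else u
    return float(np.sum(uu * o2) * dy * dz)

nz, ny = 15, 80                     # e.g. 50-200 m by 10 m; 2 N-10 S
u = 0.1 * np.ones((nz, ny))         # uniform 10 cm/s eastward flow
o2 = 150.0 * np.ones((nz, ny))      # oxygenated equatorial water
print(eastward_do_flux(u, o2, dy=11e3, dz=10.0))  # ~ 2e9 mmol/s
```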
We now investigate the nearshore subsurface DO concentration in a box located between 150 and 300 km offshore, in order to take into account a sufficient number of coarse ESM grid points in the 100-200 m depth range. The RCM is able to represent the cross-shore structure of the OMZ with a fair degree of realism (Figs. S1-S2). The OMZ bias is weak (< 10 µmol L−1, Fig. S2) below ∼ 100 m and increases near ∼ 50-100 m, in the depth range of the oxycline-thermocline. The nearshore DO concentration in the upper part of the OMZ (between 100 and 200 m, Fig. 10a) in 2006-2015 is slightly higher in R-GFDL (∼ 20 µmol L−1) than in the observations (∼ 15-18 µmol L−1) and lower in R-IPSL (∼ 10 µmol L−1) and R-CNRM (∼ 5 µmol L−1; see also Fig. S1).
In contrast, the ESMs strongly overestimate DO in the OMZ (Fig. 10b). The eastward flux at 95° W supplies DO to the nearshore OMZ in greater proportions in R-GFDL than in R-IPSL and R-CNRM (Fig. 9b), partly explaining the discrepancies at the beginning of the century.
The nearshore trends are very different in the three regional simulations. The DO content is virtually unchanged in R-GFDL (−3 %, Table 4) and decreases slowly (−21 %) in R-IPSL, whereas it increases strongly in R-CNRM (+483 %, a ∼ 30 µmol L−1 increase). R-GFDL is also marked by a stronger multidecadal variation than the other RCMs. The trends have the same sign as those of the ESMs (Fig. 10b), but DO changes are reduced by half in the RCMs (e.g., ∼ +60 µmol L−1 in CNRM versus ∼ +30 µmol L−1 in R-CNRM, ∼ −6 µmol L−1 in IPSL versus ∼ −2.5 µmol L−1 in R-IPSL). The depth of the 0.5 mL L−1 (22 µmol L−1) DO isosurface (hereafter named "oxycline") is often used as a proxy for the OMZ upper limit (e.g., Espinoza-Morriberón et al., 2019), characterizing the vertical extent of the habitat of many living species of the coastal ecosystem (Bertrand et al., 2010, 2014). As the R-CNRM oxycline is quite deep (Fig. S2), we averaged its values over a wider coastal box (0-200 km) in Fig. 10c. The oxycline at the beginning of the century is well positioned in R-GFDL and slightly shallower than the observed oxycline in R-IPSL and R-CNRM (Fig. 10c). Between 2006 and 2100, the oxycline shoals slightly (less than 10 m) in R-GFDL and R-IPSL, whereas it deepens by more than 100 m in R-CNRM. Similar trends are found for the "upper oxycline" defined by the 1 mL L−1 isoline (not shown; see Table 4).
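The equivalence of the 0.5 mL L−1 and 22 µmol L−1 oxycline thresholds follows from the molar volume of O2 (∼ 22.4 L mol−1 under standard conditions); a two-line check:

```python
ML_PER_MOL = 22391.0  # ideal-gas molar volume of O2 in mL/mol (approx.)

def ml_per_l_to_umol_per_l(c_ml_per_l):
    return c_ml_per_l * 1e6 / ML_PER_MOL

print(round(ml_per_l_to_umol_per_l(0.5), 1))  # -> 22.3 umol/L
print(round(ml_per_l_to_umol_per_l(1.0), 1))  # -> 44.7 umol/L ("upper oxycline")
```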
Nitrate trends
We now investigate the evolution of subsurface nitrate concentrations at 95° W, the western boundary of the RCM (see red line in Fig. 2b). A decrease is found in all simulations. This is illustrated by the deepening of the 21 µmol L−1 nitrate iso-surface (Fig. 11a). The trends vary between a strong (78 % in R-CNRM) and a moderate deepening (24 % in R-IPSL and 26 % in R-GFDL, Table 4). Nitrate depletion was also found in the IPSL CMIP3 4 × CO2 scenario (Brochier et al., 2013). It is likely caused by a reduced nutrient delivery from the deep ocean to the upper layers of the ocean associated with enhanced thermal stratification, reduced vertical mixing, and an overall slowdown of the ocean circulation (e.g., Frölicher et al., 2010). Due to the stronger eastward flow in R-GFDL (Fig. 9a), the associated nitrate eastward flux is ∼ 50 % stronger than in R-IPSL and R-CNRM (Fig. 11b). The fluxes decrease in all simulations (−27 % in R-CNRM, −20 % in R-IPSL, −18 % in R-GFDL, Table 4, Fig. 11b).
Following Espinoza-Morriberón et al. (2017), the depth of the 21 µmol L−1 nitrate iso-surface (hereafter D21) in the coastal region is chosen as a proxy of the nearshore nitracline depth (Fig. 11c). In spite of the offshore nitracline deepening (Fig. 11a) and the decreasing nitrate flux (Fig. 11b), the nearshore nitracline shoals in R-GFDL (−25 %). In contrast, it deepens in R-IPSL (+32 %) and in R-CNRM (+82 %). This shows that the equatorial forcing is not always the main driver of the evolution of the nearshore nitracline depth: whereas it seems to drive nitrate depletion in R-CNRM and R-IPSL, the maintained coastal upwelling in R-GFDL (Fig. 5a) may partly compensate for this effect. It is also notable that the nitracline may shoal even though coastal upwelling does not increase (e.g., in R-GFDL, Fig. 5a). This points to potential changes in the nitrate vertical distribution, possibly due to a reduction of nitrate assimilation driven by biomass variations (see Sect. 3.3). The ESM and RCM nearshore nitracline trends are consistent for CNRM and IPSL: the nitracline deepens by 97 % (34 %) in CNRM (IPSL) and by 82 % (32 %) in R-CNRM (R-IPSL). In contrast, nitracline shoaling is strong in R-GFDL (−25 %) and negligible in GFDL (+2 %). However, note that D21 is too shallow in the RCMs (∼ 20-35 m over 2006-2015) with respect to observations (∼ 100 m in CARS) due to an overly high nitrate concentration in the subsurface layers (figure not shown). This bias was also found in previous ROMS-PISCES regional simulations of the NHCS (e.g., see Fig. 3 in Espinoza-Morriberón et al., 2017), possibly due to a lack of denitrification.
Chlorophyll and primary productivity annual variations
Regional downscaling has a strong impact on the nearshore planktonic biomass. Chlorophyll is used in the following as a proxy of total phytoplankton biomass. The surface chlorophyll concentration at the beginning of the century (Fig. 12a) agrees relatively well with the MODIS mean chlorophyll (∼ 4.25 mg Chl m−3) in R-IPSL (∼ 4.2 mg Chl m−3) and R-GFDL (∼ 4.5 mg Chl m−3), whereas it is ∼ 30 % higher in R-CNRM (∼ 5.5 mg Chl m−3). Note that MODIS and SeaWiFS satellite observations differ by ∼ 1 mg Chl m−3 due to different algorithms (O'Reilly et al., 1998; Letelier and Abbott, 1996) and different time periods (see Sect. 2.6). Moderate uptrends are found in R-GFDL (+12 %) and R-IPSL (+17 %, Table 5). The latter seems at odds with the weak nitracline deepening (< 10 m between 2006 and 2100) in R-IPSL (Fig. 11c). Strong multidecadal variability with almost no trend (2 %) is found in R-CNRM, in spite of the marked nutricline deepening (∼ 20 m, Fig. 11c).
RCMs are able to correct the ESM inability to represent the nearshore surface chlorophyll concentration (Fig. 12b). Indeed, ESM surface chlorophyll ranges between ∼ 0.6-0.7 mg Chl m−3 (GFDL) and ∼ 0.01-0.1 mg Chl m−3 (CNRM), almost an order of magnitude smaller than observed values. The ESM trends display very contrasting patterns (Fig. 12b). Surface chlorophyll concentration decreases in all cases, with negative trends between −11 % and −104 %, a behavior not simulated in the RCMs.
The different evolution of the RCM surface and total chlorophyll content implies that the vertical distribution of phytoplankton biomass is modified in the long term. The vertical and cross-shore structure of seasonal chlorophyll trends indicates that both R-GFDL and R-IPSL simulate a chlorophyll increase in the mixed layer near the coast, and a decrease below (Fig. 13a-c). Interestingly, this suggests that total biomass changes cannot be monitored using satellite measurements, as the subsurface plankton depletion cannot be observed. The seasonal trends in R-GFDL and R-IPSL are consistent with a shoaling of the mixed layer (Fig. 7), which reduces light limitation of phytoplankton growth (e.g., Echevin et al., 2008;Espinoza-Morriberón et al., 2017) and increases surface primary productivity in summer and winter. In contrast, the R-CNRM trend in the mixed layer is negative in summer. This is likely caused by the strong deepening of the nitracline in R-CNRM (Fig. 11c) and the seasonality of the wind-driven upwelling. As the upward flow is weaker in summer, the upwelling of less-rich waters into the mixed layer may trigger a nutrient limitation of phytoplankton growth. On the other hand, as the upward flow remains strong during winter, nutrient limitation does not occur. Light limitation of phytoplankton growth decreases because of the shoaling of the mixed layer, enhancing phytoplankton growth (as in the two other RCMs). Moreover, visual correlation between decadal variability of the chlorophyll content and nitracline depth in R-CNRM (e.g., the oscillations in 2070-2100 in Figs. 11c and 12c) also suggests that nitrate limitation of phytoplankton growth may play a role.
To further investigate the drivers of the surface chlorophyll trends, RCM and ESM primary productivity (PP) trends are shown in Fig. 14. RCM PP surface trends are weak (between −2 % and +7 %). In particular, the weak trend in R-IPSL (−2 %) is at odds with the surface chlorophyll increase (+17 %, Fig. 12a). In all RCMs, PP is strongly impacted by decadal variability, as a consequence of upwelling (Fig. 5a) and nitracline depth variability (Fig. 11c). These surface trends contrast with the more pronounced ESM PP trends, in particular for IPSL (−25 %) and CNRM (−113 %). However, one may question the meaning of the ESM PP trends associated with very weak (and unrealistic) ESM chlorophyll concentrations (Fig. 12b, d). The RCM depth-integrated PP trends are consistent with those of surface PP but differ from those of the ESMs, especially for R-CNRM (−7 %) and CNRM (−66 %).
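Depth-integrated PP is simply the vertical integral of the productivity profile. A minimal sketch using the trapezoidal rule is given below; the profile values and the integration depth are illustrative assumptions, not taken from the simulations:

```python
import numpy as np

def depth_integrated_pp(depth, pp):
    """Trapezoidal vertical integral: e.g. mgC/m3/d over depth (m)
    gives mgC/m2/d."""
    dz = np.diff(depth)
    return float(np.sum(0.5 * (pp[1:] + pp[:-1]) * dz))

z = np.array([0, 5, 10, 20, 30, 50, 75, 100], dtype=float)
pp = np.array([80, 75, 60, 35, 15, 5, 1, 0], dtype=float)  # illustrative
print(depth_integrated_pp(z, pp))  # -> 1737.5 mgC/m2/d
```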
Overall, the contrasting trends found in the RCMs and ESMs, even when a similar biogeochemical model is used (e.g., PISCES in IPSL and CNRM), illustrate the necessity of regionally downscaling ESM variability to reduce systematic bias and better represent the local processes impacting productivity.
Zooplankton biomass variations
The two zooplankton groups represented by RCMs are aggregated in a single group to allow a comparison with the ESMs. In contrast with surface phytoplankton, the order of magnitude of surface zooplankton biomass is comparable in ESMs and RCMs, with the exception of CNRM in which zooplankton concentrations are very weak. In addition, RCM surface zooplankton also displays a different evolution than RCM phytoplankton. First, multidecadal variability is quite strong and trends are weak. Zooplankton slightly accumulates in R-GFDL (+4 %, Fig. 15a, Table 5), in line with phytoplankton (+12 %, Fig. 12a), suggesting the possibility of a grazing increase. In contrast, surface zooplankton displays no trend in R-IPSL in spite of a marked surface phytoplankton increase (+17 %). These weak surface zooplankton trends contrast with the stronger ESM downtrends (from −15 % (GFDL) to −98 % (CNRM), Fig. 15b).
Depth-integrated zooplankton biomass decreases moderately in all RCMs, from −5 % (R-GFDL) to −15 % (R-IPSL) (Fig. 15c). The GFDL and IPSL depth-integrated zooplankton downtrends are relatively close to the RCM downtrends. CNRM stands out as atypical, with a decrease of half of its zooplankton biomass, while the decrease in R-CNRM is moderate (−11 %). The spatial structure of the trends varies significantly in the vertical and cross-shore directions (Fig. 16a-c). The accumulation of zooplankton in R-IPSL and R-CNRM near the coast is consistent with a reduction of the offshore advection due to Ekman transport (Fig. 5c). As for chlorophyll (Fig. 13), the zooplankton decrease below 10 m depth suggests that monitoring of zooplankton must be carried out both in the surface layer and below to capture long-term trends.
Summary of the main results
The dynamical downscaling of the ocean circulation and ecosystem functioning for three ESMs is performed in the NHCS for the strongly warming, so-called worst-case RCP8.5 climate scenario. The RCM simulations all show an intense warming of the surface layer within 100 km from the Peruvian coasts, reaching between +2 and +4.5 °C in 2100. We can speculate that the nearshore surface warming is closely associated with a subsurface warming in the near-equatorial region (95° W, 2° N-10° S) which propagates into the NHCS. The coastal warming is weakest when the wind-driven upwelling is maintained (e.g., in R-GFDL) and strongest when it is reduced (e.g., in R-IPSL and R-CNRM; see also Echevin et al., 2012; Oerder et al., 2015). The coastal warming found in the RCMs is close to that found in the ESMs, but surface and subsurface temperature mean biases (for the period 2006-2015) are greatly reduced in the RCMs. Biogeochemical trends from the RCMs and ESMs are compared. Two of the three RCMs display a weak decrease in the near-equatorial (95° W, 2° N-10° S) eastward oxygen flux into the NHCS, associated with a moderate slowdown of the eastward equatorial circulation and weak changes in oxygen concentrations in the equatorial region. Consequently, a relatively weak deoxygenation occurs in the nearshore region. This contrasts with the third RCM, in which the near-equatorial region becomes very oxygenated, which triggers a strong oxygenation of the OMZ.
Nutrient supply from the near-equatorial region to the NHCS decreases in all RCMs due to the progressive nitrate depletion of equatorial waters and to the decreasing eastward flux. This drives a deepening of the nearshore nitracline in two of the RCMs and a shoaling in the third RCM, in which wind-driven coastal upwelling is maintained.
Chlorophyll concentration displays contrasting coastal trends. First, in all RCMs, surface chlorophyll does not decrease, in contrast with the ESM downtrends (from −11 % to −104 %). Surface chlorophyll increases (> 10 %) in two RCMs, while the total chlorophyll biomass remains stable, indicating an enhanced vertical stratification of phytoplankton in the surface layer in 2100. Total phytoplanktonic biomass (i.e., integrated over the water column) in the coastal zone remains relatively stable in spite of a slightly decreasing primary productivity driven by a weakening upwelling (in two RCMs) and a deepening nutricline (in two RCMs). This counterintuitive evolution of surface phytoplankton could be partly driven by the reduced offshore transport (related to coastal upwelling), which allows floating organisms to accumulate in the coastal band. Reduced offshore transport may also induce a longer residence time of phytoplankton in the coastal area and hence a stronger prey availability favoring grazing and a larger zooplankton biomass. However, the total zooplankton biomass tends to decrease in all RCMs, which shows that complex nonlinear effects (e.g., temperature and predator-prey relations) drive the plankton trends. Note that the RCM zooplankton downtrends can be weaker than the ESM downtrends used to drive global fish models (e.g., Tittensor et al., 2018). In the following subsections we discuss in more detail the surface temperature trends, the near-equatorial conditions impacting the NHCS, and the impact of the downscaling on the plankton trends.
Selection of the ESMs
The choice of which ESMs to downscale has been justified on the basis of the comparison of the ESM historical simulations to climatological observations. We are aware that these evaluations do not necessarily correspond to how well a model may capture the response to future climate forcing. The "emergent constraints" approach has been offered as a relevant method for evaluating climate models (e.g., Hall et al., 2019). In this approach, a statistical relation (F) between a present-state variable (X) and a future-state variable (Y) is derived (Y = F(X)) using an ESM ensemble, regardless of ESM bias. The relation is then used to derive a future response using the best knowledge of the present state (X_obs), i.e., Y = F(X_obs) (a toy numerical illustration is given at the end of this section). Following such an approach would have been useful to select the ESM models that fit best with the relation F. However, as we are interested in several variables (thermal stratification, upwelling, productivity, OMZ), this would necessitate finding distinct emergent constraints for these variables and thus possibly selecting different ESMs for each constraint, which may be intricate. Such an approach is however promising and should be envisaged in future work.

Enhanced surface heat fluxes and coastal upwelling of offshore-warmed source waters appear to be the main drivers of the nearshore SST evolution. The strongest nearshore warming (+4.5 °C in 2100) found in R-IPSL likely results from the superposition of four effects: (i) a stronger warming of subsurface waters in the near-equatorial region subsequently transported towards the coastal region, (ii) a reduced cooling due to a decreasing coastal upwelling driven by the wind relaxation, (iii) a stable shortwave flux, and (iv) an increasing downward longwave flux due to the greenhouse effect. Moreover, IPSL-CM5 ranks among the high-sensitivity climate models of CMIP5 due to a large positive low-level cloud feedback (Brient and Bony, 2013). The weaker surface warming in R-CNRM (+3.5 °C in 2100) may be mitigated by the weaker insolation. Last, the weakest warming in R-GFDL (+2 °C in 2100) can be explained by (i) the weakest offshore subsurface temperature anomalies, (ii) the strongest wind-driven coastal upwelling (which brings deeper, colder waters to the surface layer), and (iii) the weakest greenhouse forcing. As upwelling-favorable winds are more likely to decrease than to increase in low-latitude EBUSs such as the Peruvian system (Goubanova et al., 2011; Belmadani et al., 2014; Rykaczewski et al., 2015), an upwelling reduction and a strong SST warming appear to be the most robust projection. However, a rigorous estimate of the forcing terms in the nearshore heat budget would necessitate the online computation of each term (e.g., Echevin et al., 2018). Warmer surface waters may have severe consequences for the functioning of the Humboldt current ecosystem as a whole (Doney, 2006; Doney et al., 2012). For instance, in spite of the broad temperature range of the habitat of small pelagic fish species (e.g., anchovy, sardine, or jack mackerel) (e.g., Gutierrez et al., 2008), the temperature anomalies associated with El Niño events may drive the NHCS into conditions detrimental for pelagic recruitment. Moreover, previous modeling studies based on the RCP8.5 scenario suggest that Peruvian fisheries will be impacted by the poleward migration of exploited species seeking cooler waters (e.g., Cheung et al., 2018).
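The emergent-constraint recipe Y = F(X) evaluated at X_obs, as sketched above, amounts to a cross-model regression. The toy example below is purely illustrative: the ensemble values are invented and bear no relation to actual CMIP5 output:

```python
import numpy as np

def emergent_constraint(x_models, y_models, x_obs):
    """Fit Y = a*X + b across an ESM ensemble and evaluate at the
    observed present-day value (the F(X_obs) step described above)."""
    a, b = np.polyfit(x_models, y_models, 1)
    return a * x_obs + b

# hypothetical ensemble: present-day D20 (m) vs end-of-century warming (K)
x = np.array([30.0, 35.0, 40.0, 45.0, 52.0, 60.0])
y = np.array([4.2, 3.9, 3.5, 3.1, 2.8, 2.4])
print(round(emergent_constraint(x, y, x_obs=52.0), 2))  # ~ 2.8 K
```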
Near-equatorial eastward flow and OMZ variability
Eastward EUC and SSCCs are supposed to be strong drivers of OMZ variability, as they transport relatively oxygenated equatorial waters into the OMZ (Cabré et al., 2015; Shigemitsu et al., 2017; Montes et al., 2014; Espinoza-Morriberón, 2019; Busecke et al., 2019). This is in line with our results: in all RCMs, the DO trend in the OMZ is consistent with the trend of the offshore eastward DO flux. The EUC is supposed to be mainly forced by the zonal pressure gradient across the equatorial Pacific, associated with the trade winds and the Walker circulation (hereafter WC; Stommel, 1960). However, most of the CMIP5 climate models fail to reproduce the WC intensification observed in the recent period (1980-2010) (e.g., Kociuba and Power, 2015). Furthermore, the EUC decrease in the eastern equatorial Pacific in GFDL and in IPSL (respectively a −26 % and a −22 % decrease between 2005 and 2100 for the mean velocity between 2° N and 2° S at 95° W and 50-200 m depth, figure not shown) is not consistent with the WC trends reported in Kociuba and Power (2015). Note also that EUC trends vary significantly across the equatorial Pacific (Drenkard and Karnauskas, 2014). EUC dynamics are also likely sensitive to stratification changes in the equatorial thermocline (McCreary, 1981). In brief, to our knowledge, the mechanisms driving long-term EUC variability in the eastern equatorial Pacific remain to be investigated. Long-term SSCC variability, which contributes to the NHCS trends (e.g., Montes et al., 2014), is also unknown. At basin scale, the primary SSCC (near 4-6° S at 90° W) is supposed to be forced partly by trade winds and alongshore winds in the NHCS, by mass exchange between the Pacific basin and the Indian Ocean, and by surface heating in the tropics (McCreary et al., 2002; Furue et al., 2007). The problem is that SSCCs are not resolved in CMIP5 models due to their coarse resolution (e.g., see Fig. 4 in Cabré et al., 2015). Last, the observed deoxygenation of water masses in equatorial regions is underestimated in global models (Oschlies et al., 2018). These uncertainties imply that the ventilation of the NHCS OMZ by the eastward jets may be difficult to project using CMIP5 ESMs.
In order to further investigate the impact of the ESM oxygen conditions on the RCM results, we conducted a series of sensitivity simulations (called R-GCM') using climatological, seasonally varying WOA DO concentrations at the regional model open boundaries. Boundary conditions for all the other biogeochemical variables are unchanged with respect to the reference simulations (RCM) (we are aware that this simplification introduces inconsistencies in the biogeochemical properties of the water masses, but the results are worth reporting). As expected, the eastward DO flux (Fig. 17a) now roughly follows the mass flux evolution (Fig. 9a) and decreases weakly in each simulation. The huge nearshore DO trend previously found in R-CNRM (+483 %, Fig. 10b) is now much weaker in R-CNRM' (+36 %) and of a comparable order of magnitude to the other R-GCM' simulations (Fig. 17b). Furthermore, the marked decrease in the eastward DO flux in R-GFDL' appears to drive a strong nearshore DO decrease. This confirms that strong changes in the near-equatorial eastward ventilation flux impact the OMZ, in line with previous studies (e.g., Shigemitsu et al., 2017). However, ventilation of the OMZ by this mechanism is not the only driver of oxygen variability. Indeed, nearshore deoxygenation can vary (it is slightly more intense in R-IPSL than in R-GFDL, Fig. 10a) in spite of a rather similar decrease in the near-equatorial eastward DO fluxes, possibly owing to different local physical and biogeochemical processes (and thresholds). Computing a rigorous DO budget in the coastal region is needed to investigate the local processes at stake in more detail.
Plankton trends
A stable and, in one case, increasing concentration of chlorophyll is found in the surface layer (0-5 m), in spite of a decrease in primary production (e.g., in R-CNRM and R-IPSL, Fig. 14).
Several mechanisms could contribute to partly compensate for the PP decrease.
The shoaling of the mixed layer may constrain phytoplankton vertically and increase the surface concentration. The increased temperature in the near-surface layer (0-50 m depth) induces a faster growth rate of phytoplankton cells (Eppley, 1972). Furthermore, the decrease in upwelling and offshore export (Fig. 5) may concentrate more biomass in the coastal region and contribute to the phytoplankton persistence in R-IPSL and R-CNRM. Performing a phytoplankton budget in the model would be needed to precisely estimate the relative contribution of each process; however, this is beyond the scope of the present study.
Examination of the RCM zooplankton biomass shows weak trends (0 %-4 %) in the surface layer and weak downtrends (between −5 % and −15 %) for the total biomass (Fig. 15). The R-IPSL and R-GFDL zooplankton biomass decreases faster than the phytoplankton biomass, which corresponds to a trophic attenuation of the transfer of biomass to upper levels. A similar attenuation has been found in regional simulations of the Benguela upwelling system under the IPCC-AR4 A1B scenario (corresponding to the more moderate RCP6.0 scenario; Chust et al., 2014). The RCM zooplankton trends also contrast with the ESM downtrends. These discrepancies can be attributed to local physical processes (transport and mixing associated with the mesoscale) not represented in the ESMs, but also partly to the use of an earlier version of the ecosystem model (PISCES) run with a set of biogeochemical parameters adapted for the NHCS (see Table 1 in Echevin et al., 2014). The stronger total zooplankton biomass downtrends in R-CNRM and R-IPSL suggest a strong impact of the temperature increase, possibly due to a higher zooplankton mortality in a warmer environment. However, the model's microzooplankton and mesozooplankton result from a nonlinear interplay of temperature and predation-mortality effects. Further interpretation of these trends would require dedicated sensitivity experiments and the computation of a zooplankton budget. This is beyond the scope of the present study, which aims to present an overview of the main low-trophic-level trends.
Conclusions and perspectives
Regional downscaling of three coarse-grid ESMs is performed in the NHCS over the 21st century under the so-called worst-case RCP8.5 climate scenario, using a high-resolution regional coupled biodynamical model. The downscaling procedure allows a correction of ESM biases. All regional simulations reproduce an intense warming (2-4.5 °C) of the surface layer within 100 km from the Peruvian coasts. The surface warming is strongest when the subsurface equatorial warming is strong and the wind-driven coastal upwelling weakens in the future. Downscaled trends are consistent with those obtained from the ESMs.
The biogeochemical impacts of climate change are more contrasted among the RCMs and ESMs. A slowdown of the eastward near-equatorial circulation may reduce the ventilation of the NHCS and induce a nearshore deoxygenation trend. However, the long-term variability of the oxygen content of equatorial water masses also impacts the nearshore oxygen trends. As the observed deoxygenation trends in the eastern equatorial Pacific are not well reproduced by ESMs (Stramma et al., 2012) and CMIP5 ESM systematic biases are strong in this region (Cabré et al., 2015; Oschlies et al., 2018), these shortcomings limit the predictability of downscaled oxygen trends in the NHCS. One important conclusion of our study is that reducing the biases in oxygen concentration and zonal circulation trends in the eastern equatorial Pacific Ocean is crucial to project the future evolution of the NHCS oxygen minimum zone.
Downscaled surface chlorophyll in the coastal region does not decrease, in contrast with the signal projected by the ESMs. In two RCMs, the surface chlorophyll remains high in the coastal region. We can speculate that this happens for two reasons: the enhanced thermal stratification due to the warming may alleviate light limitation and vertical dilution, and the reduction of wind-driven offshore transport may allow plankton to accumulate near the coast. These processes could partly compensate for the reduction of primary productivity due to a deeper nitracline and reduced wind-driven coastal upwelling. Downscaled zooplankton downtrends are also relatively weak (between −5 % and −15 %) but appear to strengthen when the warming is stronger. In all RCMs, downscaled plankton trends differ markedly from those simulated by ESMs, in particular in the surface layer (0-5 m), which illustrates the strong impact of the regional dynamical downscaling. This also underlines the necessity to interpret ESM biomass-based regional projections of fisheries (e.g., FISHMIP; Tittensor et al., 2018) with great caution.
As previous works point to a relaxation of upwelling-favorable wind conditions in the NHCS (e.g., Belmadani et al., 2014), dynamically downscaled wind projections, as well as more realistic large-scale dynamical and biogeochemical conditions in the near-equatorial regions, are needed to improve the robustness of our results in future studies. Furthermore, many aspects of the regional impact of climate change have not been explored, such as the interannual variability associated with ENSO in a warmer NHCS or the acidification of coastal waters. These impacts will be addressed in future studies.
Data availability. ESM data can be downloaded from the World Climate Research Programme Coupled Model Intercomparison Project 5 (CMIP5), available at https://esgf-node.llnl.gov/projects/cmip5/ (last access: 30 June 2020). The ROMS_AGRIF regional model code and ROMSTOOLS preprocessing tools can be downloaded from the ROMS-AGRIF project, available at http://www.crocoocean.org/download/roms_agrif-project/ (last access: 30 June 2020). GLORYS12V1 model output can be downloaded from the Copernicus Marine Service at https://marine.copernicus.eu/ (last access: 30 June 2020). SODA model output can be downloaded from the Asia-Pacific Data-Research Center at the International Pacific Research Center at http://apdrc.soest.hawaii.edu/data/data.php/ (last access: 30 June 2020). The CARS2009 climatology can be downloaded from the Asia-Pacific Data-Research Center at the International Pacific Research Center at http://apdrc.soest.hawaii.edu/datadoc/cars2009.php/ (last access: 30 June 2020). Access to the Instituto del Mar del Peru (IMARPE) regional climatologies is available via the IMARPE formulario descarga de datos at http://www.imarpe.gob.pe/imarpe/servicios/climatologias/ (last access: 30 June 2020). AVHRR sea surface temperature data can be obtained from the National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information at https://www.ncdc.noaa.gov/oisst/ (last access: 30 June 2020). SeaWiFS and MODIS surface chlorophyll data can be downloaded from the NASA Ocean Color Data portal at https://oceandata.sci.gsfc.nasa.gov/ (last access: 30 June 2020). Data used in this study can be obtained directly by contacting the authors.
Author contributions. VE and FC co-designed the study, participated in the analysis of the simulations and wrote the paper. MG processed the ESM data, performed the RCM simulations, produced the figures, and participated in the analysis of the simulations and the writing of the paper. DE-M and JT participated in the analysis and writing of the paper. DG co-designed the study and participated in the writing of the paper. OA provided expertise on the biogeochemical model. | 2020-01-23T09:06:28.292Z | 2020-01-20T00:00:00.000 | {
"year": 2020,
"sha1": "21143da59018bfdef89dae60e3d0046127d98d88",
"oa_license": "CCBY",
"oa_url": "https://bg.copernicus.org/articles/17/3317/2020/bg-17-3317-2020.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "36bc6f9ee05f4b5070dfc9fcdec188968a7a0e2d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
235593249 | pes2o/s2orc | v3-fos-license | Beyond the relaxation time approximation
The relaxation time approximation (RTA) is a well known method of describing the time evolution of a statistical ensemble by linking distributions of the variables of interest at different stages of their temporal evolution. We show that if all the distributions occurring in the RTA have the same functional form of a quasi-power Tsallis distribution the time evolution of which depends on the time evolution of its control parameter, nonextensivity $q(t)$, then it is more convenient to consider only the time evolution of this control parameter.
Introduction
Many problems in science involve understanding the time evolution of a statistical ensemble. Here we will focus on a system of particles described by the probability distribution f(r, p, t), which depends on position r, momentum p and time t. In general, for an evolving physical system operating irreversibly out of thermodynamical equilibrium, f(r, p, t) will differ from that of a Boltzmannian ensemble and its evolution is usually studied using the Boltzmann transport equation (BTE), the general form of which is [1]

∂f/∂t + u · ∇_r f + F · ∇_p f = C[f], (1)

where F is the external force, u the velocity and C[f] the collision term. Narrowing our interest to situations where the considered system is homogeneous (i.e. ∇_r f = 0) and when no external forces are acting (i.e. when F = 0), Eq. (1) simplifies to the form

∂f(t)/∂t = C[f(t)]. (2)

Because of the freedom in choosing the functional form of C[f(t)] it is still a very general equation that allows one to deal with a variety of situations. However, in many applications it is further simplified to a form called the relaxation time approximation (RTA), which consists in using a simple form of the collision term [1-4]:

C[f] = −(f − f_eq)/τ, (3)

where f_eq is the local equilibrium distribution and τ is the relaxation time, understood as the time taken by the non-equilibrium system to reach equilibrium. In this approximation the BTE simplifies further to

∂f/∂t = −(f − f_eq)/τ. (4)

Solving this equation for the initial conditions such that at t = 0 one has an initial (assumed) distribution, f = f_in, and at freeze-out time, t = t_f, one has a final distribution, f = f_fo (which we identify with the distribution that actually describes the spectra obtained experimentally)¹, one finds that

f_fo = f_in e^(−t_f/τ) + f_eq [1 − e^(−t_f/τ)]. (5)

The continued popularity of such an approach to the analysis of various particle production processes can be proved by the fact that recently the Boltzmann transport equation in the RTA approximation was used to analyze various observables in nucleus-nucleus collisions (in particular, to study the time evolution of temperature fluctuations in a non-equilibrium system).

¹ The statistical system produced in multiparticle production processes quickly reaches an initial distribution (pre-equilibrium state) which slowly evolves to equilibrium but becomes frozen at the freeze-out time (usually the system at freeze-out is not in thermodynamic equilibrium). The experimentally measured spectra of the produced particles reflect the state of the system at freeze-out.

a e-mail: grzegorz.wilk@ncbj.gov.pl
b e-mail: zbigniew.wlodarczyk@ujk.edu.pl
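As a consistency check on the RTA solution (5), Eq. (4) can be integrated numerically; the sketch below uses a simple explicit Euler step and arbitrary illustrative values of τ, f_in and f_eq:

```python
import numpy as np

tau, f_in, f_eq = 1.0, 5.0, 1.0
dt, T = 1e-4, 3.0

f = f_in
for _ in range(int(T / dt)):          # explicit Euler integration of (4)
    f += -dt * (f - f_eq) / tau

exact = f_in * np.exp(-T / tau) + f_eq * (1.0 - np.exp(-T / tau))
print(f, exact)                       # both ~ 1.199
```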
Beyond the RTA - time evolution of internal variable
Note however that a given non-equilibrium process, regardless of the specific dynamics, evolves the probability distribution over the system configurations. The system traverses a probability manifold, which is not a manifold of equilibrium states, because the distribution at each point on the manifold need not correspond to a BG distribution [11]. This means that for every t one has some distribution f(z, t) and all objects in the system are described by that (single) distribution. In our case this distribution should smoothly evolve with time from a power to an exponential (equilibrium) form, i.e., it should be a quasi-power-like distribution. The most common distribution of this type is the Tsallis distribution [12-14], characterized by the time-dependent parameter q = q(t) (note that for q → 1 the Tsallis distribution becomes the BG form from Eq. (6)); in the scaled variable z it reads

f(z) = (2 − q) [1 − (1 − q) z]^(1/(1−q)). (9)

To justify such a choice let us recall a unique feature of distribution (9) which distinguishes it from all other distributions used so far in this context. Namely, whereas for z → ∞ (or for z >> 1/(q − 1)) it becomes a power distribution, for z → 0 (or for z << 1/(q − 1)) it goes into an exponential distribution. Thus, for an appropriate selection of the parameter q(t), it can describe all the distributions occurring in the RTA formula: f_in(z, t) for q(t = 0) = q_in > 1 and f_eq for q(t → ∞) = 1. The effectiveness of the Tsallis distribution is best evidenced by the results of works [15-17]. In particular, as shown in [16], it nicely describes a wide range of the measured transverse momenta (0.1 < p_T < 100 GeV, which corresponds in this case to 1 < z < 700) in which the cross section spans a range of ∼ 14 orders of magnitude.
The time-dependent parameter q = q(t) (more specifically, its deviation from q = 1) represents the degree of non-extensivity or, in other words, the degree of deviation of the system from the thermalized or equilibrated system, which is usually described by the well known BG statistical mechanics. It is also a control parameter that fully defines the shape of the Tsallis distribution, in particular its evolution over time through moments such as the expected value ⟨z(t)⟩ and variance Var[z(t)]:

⟨z(t)⟩ = 1/[3 − 2q(t)], (10)

Var[z(t)] = [2 − q(t)] / {[3 − 2q(t)]² [4 − 3q(t)]}. (11)
In general, the moments ⟨z^n⟩, limited to n + 1 < 1/(q − 1), are related by the recurrence relation

⟨z^(n+1)⟩ = (n + 1) ⟨z^n⟩ / [(n + 3) − (n + 2) q]. (12)

As mentioned above, we assume that the dynamic evolution of a system over time smoothly and monotonically transforms the probability distribution f(z, t) in (z, t)-space and that f(z, t) is a Tsallis distribution fully described by the time-dependent control parameter q(t).
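The moment formulas (10)-(12) can be checked numerically for the q-exponential form assumed here in Eq. (9). The sketch below verifies the normalization and the mean for q = 1.25, where ⟨z⟩ = 1/(3 − 2q) = 2:

```python
import numpy as np
from scipy.integrate import quad

q = 1.25
f = lambda z: (2 - q) * (1 + (q - 1) * z) ** (-1 / (q - 1))

norm, _ = quad(f, 0, np.inf)                    # should be 1
mean, _ = quad(lambda z: z * f(z), 0, np.inf)   # should be 1/(3-2q)
print(norm, mean, 1 / (3 - 2 * q))              # ~ 1.0, 2.0, 2.0
```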
Now note that if we had completely formally used Tsallis distributions for all the distributions in Eq. (5) defining the RTA and compared f(z = 0) or ⟨z⟩, we would get the following relationship between the parameters q appearing there (remembering that f_eq is assumed to be a BG distribution, which is equivalent to a Tsallis distribution with q_eq = 1):

q_fo = 1 + (q_in − 1) e^(−t_f/τ). (13)

However, if from the very beginning we decide to describe the entire process using only quasi-power Tsallis distributions, the time evolution of which is given only by the time evolution of their control parameters q = q(t), which means that f(t) = f[q(t)], we should go back to Eq. (2), which now takes the form

∂f[q(t)]/∂t = F{f[q(t)]}. (14)

This equation replaces Eq. (2). The form of the function F in Eq. (14) can be deduced by taking f(t) given by the Tsallis distribution with q = q(t) and calculating df/dt; the result is Eq. (15), with the dependence of Q(z) on the variable z expressed by Eq. (16). [Figure caption: the distributions f_in and f_eq are the same as in Fig. 1 (given by the Tsallis formula (9) with q_in = 1.25 and q_eq = 1, respectively).] To go further, we need to set the time dependence of the parameter q in some way. Note that the nonextensivity parameter q(t) describes deviations of the state of a statistical system from equilibrium, and in this sense it plays the role of an internal variable as discussed in Refs. [19,20]. Therefore, following such an approach [20], we assume that the equation of the dynamics describing the control parameter q(t) has the form of a relaxation equation:

dq(t)/dt = −[q(t) − q_eq]/τ. (17)

Remembering that we always assume that q_eq = 1, the solution of (17) is

q(t) = 1 + (q_in − 1) e^(−t/τ), (18)

which coincides with Eq. (13). Fig. 2 shows the resultant schematic distributions f_fo for different t_f/τ; they all have the form of a Tsallis distribution with q = q(t = t_f) as given by Eq. (18). As one can see, the result is now different from that obtained using the RTA approximation shown in Fig. 1. Notice that the relaxation times τ = τ_f in Eq. (3) and τ = τ_q in Eq. (17) describe the relaxation of different quantities, respectively the entire distribution and its control parameter. Comparing (at the same time t) the mean values ⟨z⟩ = 1 + e^(−t/τ_f)(⟨z⟩_in − 1) evaluated from the RTA, Eq. (7), and its q version, ⟨z⟩ = 1/[3 − 2q(t)] from Eq. (9), with q(t) given by Eq. (18), we have that e^(−t/τ_f)(⟨z⟩_in − 1) = 1/[3 − 2q(t)] − 1, and the ratio of relaxation times in both approaches changes from τ_f/τ_q = 1/⟨z⟩_in for t → 0 to τ_f/τ_q = 1 for t → ∞ (cf. Fig. 3). Note that in a situation where in some isolated system we have a fixed number of particles N and a fixed total energy U, we have a constant average energy ⟨E⟩. Therefore, in such a case, the variability in time of ⟨z⟩ = ⟨E⟩/T must be caused by the appropriate variability in time of the scale parameter T (here the temperature). This means that the scale parameter in our scaled variable z = x/x_0 also changes with time, x_0 = x_0(t), where x_0 = x_0* at equilibrium (t → ∞). Because, as known from [21], fluctuations of the scale parameter x_0 are directly connected with the parameter q (with q − 1 given by the relative variance of 1/x_0), the relaxation time τ now describes the temporal evolution of the fluctuations of the scale parameter x_0 (in the scaled variable z = x/x_0).
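Equation (18) and the limiting behavior of the ratio τ_f/τ_q can also be verified numerically. In the sketch below, an effective τ_f is obtained at each time by matching the RTA mean to the q-version mean 1/[3 − 2q(t)]; the parameter values are illustrative:

```python
import numpy as np

q_in, tau_q = 1.25, 1.0
z_in = 1.0 / (3.0 - 2.0 * q_in)          # <z>_in = 2 for q_in = 1.25

for t in [0.01, 0.5, 1.0, 2.0, 5.0, 20.0]:
    q = 1.0 + (q_in - 1.0) * np.exp(-t / tau_q)      # Eq. (18)
    z_q = 1.0 / (3.0 - 2.0 * q)                      # q-version mean
    # tau_f defined by exp(-t/tau_f)(<z>_in - 1) = z_q - 1
    ratio = (t / tau_q) / np.log((z_in - 1.0) / (z_q - 1.0))
    print(t, round(q, 4), round(ratio, 3))
# the ratio tau_f/tau_q runs from ~1/<z>_in = 0.5 at small t towards 1
```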
Conclusions
A few remarks that may inspire further research in this direction could be of interest here [11]. Notice that in the language of information theory based on Shannon entropy, our time evolution Eq. (14) can be expressed in terms of the surprisal S_z = −ln[f(z)], which measures the information gained by observing the outcome z in the system; the Shannon entropy is its expectation value. Now note that for the Tsallis distribution (9) we have that S_z ∼ z and ∂S_z/∂t ∼ z, which determines the entropy rate. From [11] we know that the linear relationship between ∂S_z/∂t and z guarantees that distribution (9) saturates the time-information uncertainty bound; more precisely, the bound is saturated for the Tsallis distribution because ⟨∂S_z/∂t · z⟩ ≠ 0, whereas (for any normalized distribution) ⟨∂S_z/∂t⟩ = 0. The distance of a given distribution (in our case defined by q(t)) from the equilibrium distribution (defined by q = 1) is given by the difference of the corresponding entropies; from equations (16), (17) and (29) one obtains its explicit time dependence. In conclusion, we propose a new, modified form of the relaxation time approximation for the collision term in the Boltzmann equation (2), allowing a smooth transition to the thermalized distribution. It consists of replacing the simple form of this term, given by Eq. (3), where the relaxation time τ determines how fast the equilibrium state of the studied distribution f is reached, by Eq. (17), describing the time evolution of the most important (control) parameter of the analyzed distribution. The relaxation time τ now controls the rate of change of this parameter from some initial value to the one that corresponds to the equilibrium state. We argue that this is possible if we use for the phenomenological description of the distributions of interest the quasi-power-law Tsallis distribution given by Eq. (9), which is able to describe a given process at all stages of its time evolution. Its control parameter is the time-dependent non-extensivity parameter q(t), and the relaxation time parameter τ describes its time evolution, as shown in Eq. (17). This single quasi-power-law probability distribution (9), smoothly evolving towards thermalization, would then replace the two-component distribution given by Eq. (7) which arises from the RTA. The proposed scheme offers multiple applications in situations where one wants to study the time evolution of an ensemble but does not want to invoke kinetic theory with complicated collision integrals. | 2021-06-23T01:16:17.993Z | 2021-06-21T00:00:00.000 | {
"year": 2021,
"sha1": "48cb57322bdac181dcb7cdafbc47a87b2070ee25",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epja/s10050-021-00538-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "48cb57322bdac181dcb7cdafbc47a87b2070ee25",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
46897848 | pes2o/s2orc | v3-fos-license | The impact of comorbid impulsive/compulsive disorders in problematic Internet use
Background and aims Problematic Internet use (PIU) is commonplace but is not yet recognized as a formal mental disorder. Excessive Internet use could result from other conditions such as gambling disorder. The aim of the study was to assess the impact of impulsive–compulsive comorbidities on the presentation of PIU, defined using Young’s Diagnostic Questionnaire. Methods A total of 123 adults aged 18–29 years were recruited using media advertisements, and attended the research center for a detailed psychiatric assessment, including interviews, completion of questionnaires, and neuropsychological testing. Participants were classified into three groups: PIU with no comorbid impulsive/compulsive disorders (n = 18), PIU with one or more comorbid impulsive/compulsive disorders (n = 37), and healthy controls who did not have any mental health diagnoses (n = 67). Differences between the three groups were characterized in terms of demographic, clinical, and cognitive variables. Effect sizes for overall effects of group were also reported. Results The three groups did not significantly differ on age, gender, levels of education, nicotine consumption, or alcohol use (small effect sizes). Quality of life was significantly impaired in PIU irrespective of whether or not individuals had comorbid impulsive/compulsive disorders (large effect size). However, impaired response inhibition and decision-making were only identified in PIU with impulsive/compulsive comorbidities (medium effect sizes). Discussion and conclusions Most people with PIU will have one or more other impulsive/compulsive disorders, but PIU can occur without such comorbidities and still present with impaired quality of life. Response inhibition and decision-making appear to be disproportionately impacted in the case of PIU comorbid with other impulsive/compulsive conditions, which may account for some of the inconsistencies in the existing literature. Large scale international collaborations are required to validate PIU and further assess its clinical, cognitive, and biological sequelae.
INTRODUCTION
The Internet has gone from being a narrowly available technology in the 1980s to constituting an all-pervasive aspect of society in the present day. At least 90% of young adults use the Internet in the USA, Europe, and Asia (Durkee et al., 2012; Kuss, Griffiths, Karila, & Billieux, 2014). Its availability is rapidly increasing in other parts of the world, such as Africa. While there are positive aspects of the Internet, such as the rapid and convenient availability of information, it is recognized that some people develop a maladaptive use of the Internet, spending a large amount of time online and neglecting other areas of life. Problematic Internet use (PIU) is a putative entity not yet recognized by diagnostic classification systems, but which has received growing research and clinical attention (Bernardi & Pallanti, 2009; Weinstein & Lejoyeux, 2010). PIU has been strongly associated with mental disorders [depressive and anxiety disorders, attention-deficit hyperactivity disorder (ADHD)] (Ho et al., 2014) and impaired functioning (Derbyshire et al., 2013). Depending on the precise operational definition used, the prevalence of PIU has been estimated to be 1%-38% in young people (Durkee et al., 2012).
Whether or not PIU should be regarded as a formal mental health disorder remains contentious (Demetrovics & Griffiths, 2012; Przybylski, Weinstein, & Murayama, 2017). In part, this may reflect not only the relative newness of the associated behaviors (the Internet having been developed only in the 1980s) but also concern regarding overpathologizing human behavior, and whether to focus on one particular type of online behavior (e.g., Internet gaming disorder) (Kiraly & Demetrovics, 2017; Kuss & Billieux, 2017) or many, as would be suggested by maladaptive Internet use severity correlating with a range of online behaviors (Ioannidis et al., 2016, 2017). Another important issue is whether PIU simply reflects other underlying mental health disorders (Kuss & Billieux, 2017). For example, if a person uses the Internet excessively to gamble or shop, this may reflect gambling disorder or compulsive buying disorder, respectively, and if they use it for compulsive sexual acts, this may reflect compulsive sex behavior disorder. It has also been noted that formal mental health disorders, including ADHD, mood and anxiety disorders, and substance use disorders, commonly occur in PIU (odds ratios of about three per disorder in a meta-analysis of the literature) (Ho et al., 2014). In these cases, excessive Internet use could be a consequence or counterbalancing act due to the presence of a well-known mental illness (e.g., online social contact to counteract social phobia).
Cognitive dysfunction relating to decision-making, executive function (e.g., set-shifting difficulties), and impulse control has been implicated in the context of other behavioral addictions and thus may also be relevant for PIU (Leeman & Potenza, 2013; Smith, Mattick, Jamadar, & Iredale, 2014). However, studies of cognition in PIU are few in number and have so far yielded quite contradictory findings, even relative to those for other behavioral and substance addictions (Smith et al., 2014). There are many potential reasons for this, including the use of non-standard cognitive tests, different operationalizations of PIU, and failure to control for comorbidities, notably ADHD (Ho et al., 2014), which itself is associated with marked cognitive impairment in certain of these cognitive domains (Chamberlain et al., 2010). Similarly, existing cognitive studies did not typically rule out other underlying impulsive-compulsive disorders, such as gambling disorder, hair-pulling disorder, compulsive buying disorder, compulsive sex behavior disorder, compulsive stealing (kleptomania), or obsessive-compulsive disorder (OCD).
Therefore, the aim of this study was to compare demographic, clinical, and cognitive measures between three groups: those with PIU who did not have any potentially contributing comorbid impulsive/compulsive disorders, those with PIU who had such comorbidities, and healthy controls who did not have any mental health diagnoses. We hypothesized that PIU without impulsive/compulsive comorbidities would occupy an intermediate position between healthy controls and PIU comorbid with impulsive/compulsive disorders in terms of impaired quality of life, cognitive function, and scores on relevant trait questionnaires.
METHODS

Participants
A total of 123 individuals aged 18-29 years were recruited using media advertisements in a US city. The only inclusion criterion was gambling at least five times in the preceding year, as the study was part of a broader project examining gambling in young adults. As such, this can be seen as an enriched sample. The only exclusion criterion was an inability to understand/undertake the assessments. Participants provided written informed consent after receiving a complete description of the study, and attended the study center to complete a face-to-face structured psychiatric interview, questionnaires, and cognitive testing using a touch-screen computer.
Assessments
Validated screening tools for PIU are under-researched. We opted to use Young's Diagnostic Questionnaire (YDQ; Young, 2009) to identify PIU, because it is convenient to administer. The YDQ is an eight-item set of questions, which was derived from prior criteria for substance-use disorder and pathological gambling, but applied to maladaptive Internet use. The YDQ considers preoccupation with the Internet, escalating quantities of time spent using the Internet, repeated unsuccessful attempts to cut back, restlessness/irritability when attempting to cut back, staying online longer than intended, jeopardizing careers/scholarship/relationships, lying to others, and using the Internet to escape from life or emotional difficulties. Thus, the YDQ captures a broad range of PIU thoughts and behaviors. PIU was defined as endorsing four or more of these symptoms over the preceding 12-month period, based on the number of criteria often used for gambling disorder, from which this instrument was partly derived, but accounting for the YDQ having fewer total items. It should be noted that our definition of "PIU" identifies people with relatively more problems, rather than constituting a formal mental disorder diagnosis, because such a diagnosis (and its definition) is not yet listed in psychiatric classification systems.
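A minimal sketch of the grouping rule just described (eight binary YDQ items, with PIU defined as endorsing at least four over the past 12 months) is given below; the item wording is paraphrased and the helper names are ours:

```python
YDQ_CUTOFF = 4  # threshold used in this study (4 of 8 items)

def classify_piu(item_responses):
    """item_responses: list of 8 booleans (True = symptom endorsed
    over the preceding 12 months)."""
    assert len(item_responses) == 8
    return sum(item_responses) >= YDQ_CUTOFF

# endorsing preoccupation, escalation, failed cutbacks, restlessness:
print(classify_piu([True, True, True, True,
                    False, False, False, False]))  # -> True
```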
Presence of current psychiatric disorders was evaluated using the Mini International Neuropsychiatric Inventory (MINI; Sheehan et al., 1998), the Minnesota Impulse Disorder Inventory (MIDI; Grant, Levine, Kim, & Potenza, 2005), and the World Health Organization Adult ADHD Self-Report Scale (ASRS v1.1; Kessler et al., 2005, 2007). The MINI identifies mainstream psychiatric disorders, such as mood and anxiety disorders (including OCD), whereas the MIDI identifies impulse control disorders (including gambling disorder, compulsive sex behavior disorder, hair-pulling disorder, skin-picking disorder, kleptomania, pyromania, intermittent explosive disorder, and compulsive buying). The MINI and MIDI were completed by a trained assessor. The ASRS is a self-completed questionnaire, which yields a total score; the total score on Part A was used to determine the presence or absence of ADHD using the previously published threshold, which yields extremely high classification accuracy (Kessler et al., 2005).
Participants also completed the following questionnaires: the Quality of Life Inventory (QOLI) to measure satisfaction in multiple domains (Frisch, 1998), the Barratt Impulsiveness Questionnaire (v11) to measure impulsive personality traits (Stanford et al., 2016), and the Padua Inventory to measure obsessive-compulsive traits (Burns, Keortge, Formea, & Sternberger, 1996; Sanavio, 1988). We also collected relevant background information, including age, gender, educational level, frequency of alcohol use, and the number of cigarette packs smoked per day.
Cognitive testing was conducted using three computerized tests from the Cambridge Neuropsychological Test Automated Battery (CANTABeclipse, version 3, Cambridge Cognition Ltd., UK). Based on existing models of behavioral addictions (Clark, 2010; Dong & Potenza, 2014), we focused on inhibitory control, decision-making, and set-shifting.
Inhibitory control was measured using the Stop-Signal Task (SST; Aron, Robbins, & Poldrack, 2014; Logan, Cowan, & Davis, 1984). On the SST, a series of directional arrows were presented on the computer screen one at a time, and volunteers made quick responses depending on the direction of the arrows (left button for a left-facing arrow and right button for a right-facing arrow). On some trials, an auditory stop signal ("beep") occurred a variable time after presentation of the go cue, indicating that the volunteer should attempt to omit a response for the given trial. By dynamically modulating the time between presentation of the arrow and the stop signal, the task calculated the stop-signal reaction time, a measure of the time taken to suppress a response that would normally be made. Longer stop-signal reaction times equate to worse top-down control.
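The staircase logic behind SSRT estimation can be illustrated with a toy race-model simulation. The 50 ms step size, the mean-based SSRT estimate, and the simulated participant below are generic assumptions for illustration, not CANTAB's actual parameters:

```python
import random
random.seed(1)

TRUE_SSRT, STEP = 220.0, 50.0        # ms; illustrative values
ssd, ssds = 250.0, []
for _ in range(200):
    go_rt = random.gauss(500, 80)    # simulated go reaction time (ms)
    # race model: response inhibited if the stop process (starting at
    # the stop-signal delay, lasting SSRT) beats the go process
    stopped = ssd + TRUE_SSRT < go_rt
    ssd += STEP if stopped else -STEP  # harder after a successful stop
    ssd = max(ssd, 0.0)
    ssds.append(ssd)

mean_ssd = sum(ssds[50:]) / len(ssds[50:])   # discard burn-in trials
print(round(500 - mean_ssd))         # ~ 220, recovering TRUE_SSRT
```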
Decision-making was measured using the Cambridge Gamble Task (CGT; Rogers et al., 2003). On each trial, 10 boxes were shown, some blue and some red, with a token having been hidden behind one of these. The participant selected the color of the box they believed a token was hidden behind, and then decided how many points to gamble on having made the correct decision. The main measures of decision-making on the task were the proportion of points gambled overall, the proportion of rational decisions made (proportion of trials when the volunteer opted for the color that was in the majority), and the extent of risk adjustment (the extent to which individuals modulated the amount gambled depending on the probability of making correct choices).
Set-shifting was assessed using the Intra-Dimensional/Extra-Dimensional set-shift task (IED; Birrell & Brown, 2000; Owen, Roberts, Polkey, Sahakian, & Robbins, 1991). This paradigm is based on the Wisconsin Card Sorting Task but decomposes different aspects of rule acquisition and flexible responding over nine task stages. Volunteers choose between two stimuli presented on the computer screen on each trial, and receive feedback as to whether their choice was "right" or "wrong." Through trial and error, the volunteer attempts to learn a rule about which stimulus is correct. The computer alters this underlying rule when the current rule has been learnt by the volunteer. The main measure on the task is the total number of errors made, adjusted for stages that were failed. Where this composite measure is statistically significant for a comparison of interest, scores on individual task stages can be explored to confirm the main cognitive problems driving the overall impairment on the task.
Data analysis
The participants were grouped into three categories: those with PIU who did not have comorbid impulsive/compulsive disorders (OCD, ADHD, gambling disorder, compulsive sex behavior disorder, hair-pulling disorder, skin-picking disorder, kleptomania, pyromania, intermittent explosive disorder, or compulsive buying), those who had one or more of such comorbidities, and healthy controls who did not have any mental health diagnoses. Demographic, clinical, and cognitive characteristics of the three study groups were tabulated and compared using one-way analysis of variance (ANOVA) or χ 2 tests as appropriate. ANOVA was used for continuous variables fulfilling normality assumptions, whereas χ 2 was used for categorical variables. Effect sizes were also reported for the overall effects of group, to give an indication of possible clinical significance (η 2 for ANOVA and ϕ for χ 2 tests). By convention for η 2 , 0.01 is a small effect, 0.06 a medium effect, and 0.14 a large effect; for ϕ, 0.1 is a small effect, 0.3 a medium effect, and 0.5 a large effect. Where the main effect of group was significant, this was explored further using post-hoc pairwise t-tests or alternative tests as indicated. This being an exploratory study, statistical significance was defined as p < .05 uncorrected, two-tailed. Data were analyzed using JMP Pro version 13.
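As a sketch of this analysis pipeline on invented data (the study itself used JMP; the scipy-based code below is an illustrative reimplementation, and all group sizes and scores are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical scores for the three groups (e.g., a Barratt subscale).
groups = [rng.normal(m, 10, n) for m, n in [(60, 25), (68, 35), (55, 40)]]

# One-way ANOVA with eta-squared effect size (SS_between / SS_total).
f, p = stats.f_oneway(*groups)
pooled = np.concatenate(groups)
ss_between = sum(len(g) * (g.mean() - pooled.mean()) ** 2 for g in groups)
ss_total = ((pooled - pooled.mean()) ** 2).sum()
print(f"ANOVA: F={f:.2f}, p={p:.3f}, eta^2={ss_between / ss_total:.3f}")

# Chi-square for a categorical variable (e.g., gender), with phi
# computed as sqrt(chi2 / N); strictly, phi applies to 2x2 tables,
# and for larger tables this quantity is Cramer's V with k=2.
table = np.array([[12, 13], [20, 15], [18, 22]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, phi={np.sqrt(chi2 / table.sum()):.3f}")

# Post-hoc pairwise t-test, run only where the omnibus test is significant.
t, p = stats.ttest_ind(groups[0], groups[1])
print(f"group 1 vs 2: t={t:.2f}, p={p:.3f}")
```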
Ethics
The participants provided written informed consent after receiving a complete description of the study. This research was approved by an Institutional Review Board.
RESULTS

As can be seen in Table 1, the three groups did not significantly differ in terms of age, gender, education level, nicotine consumption, or alcohol use. The two PIU groups did not significantly differ in the number of YDQ items endorsed [PIU mean (SD), 5.2 (0.3) items; PIU with impulsive/compulsive disorders, 5.4 (0.2); t-test, t = 0.27; p = .42].
Quality of life on the QOLI (Frisch, 1998) was significantly impaired in both PIU groups compared with healthy controls, and the PIU groups did not significantly differ from each other on quality of life. Approximately 60%-65% of PIU participants had comorbid mental disorders that were not impulsive/compulsive (e.g., depression and anxiety) according to the MINI, and the two groups did not significantly differ on this rate.
For the personality questionnaires, significantly increased Padua obsessive-compulsive scores were found in both PIU groups versus the controls, whereas only the PIU cases with comorbid impulsive/compulsive disorders had significantly elevated Barratt impulsiveness scores on all three subscales (motor, non-planning, and attentional impulsiveness) compared with the controls. The PIU group with comorbid impulsive/compulsive disorders had significantly higher Barratt scores than the pure PIU cases, whereas the two PIU groups did not significantly differ from each other on Padua scores. In terms of cognitive performance, the PIU participants with comorbid impulsive/compulsive disorders showed significant stop-signal impairment and decision-making (proportion of points bet) impairment compared with the controls, whereas pure PIU cases did not significantly differ from the healthy controls. PIU with comorbid impulsive/compulsive disorders showed significant stop-signal impairment versus the other PIU group, but the two PIU groups did not differ significantly for proportion of points bet. There were no main effects of group on the other gamble task measures or on the set-shifting task.
DISCUSSION AND CONCLUSIONS
In this study, we assessed PIU in a sample recruited from a large US city through media advertisements. Our aim was to clarify whether the profile of PIU was influenced by the presence or absence of co-occurring impulsive and compulsive disorders. Several studies exploring demographic, clinical, and cognitive measures in PIU have not considered the impact of such comorbid conditions (e.g., ADHD and gambling disorder). The key finding here was that quality of life was impaired in people with PIU (large effect size), even in the absence of comorbid impulsive/compulsive disorders. Both PIU groups had mean quality of life in the low range, significantly lower than the healthy control group, whose mean quality of life was in the normal range. These results suggest that PIU can occur in the absence of such impulsive/compulsive conditions, i.e., it is not always merely a downstream consequence of them. The most common comorbid impulsive/compulsive disorders in the comorbid PIU group were gambling disorder and ADHD, although a broad spread of other conditions was also observed. It should be noted that the sample was somewhat enriched for gambling symptomatology.
The finding that the majority of PIU participants recruited in this study had one or more impulsive/compulsive disorders is in keeping with a previous meta-analysis in PIU as pertains to ADHD (Ho et al., 2014), as well as with the notion that comorbid behavioral addictions (e.g., gambling disorder) can contribute to this condition. Nonetheless, our findings argue against disregarding PIU as a condition in its own right, as a substantial proportion of cases did not have such impulsive/compulsive comorbidities and were nonetheless significantly functionally impaired, to a similar degree, with mean quality of life in the poor range. Furthermore, PIU cases without comorbid impulsive/compulsive problems had similarly elevated rates of other mental disorders (those not related to impulsivity and compulsivity) compared with the other PIU group, suggesting that the clinical associations of PIU are not benign.
Intriguingly, compared with the healthy controls, only the PIU group with comorbid impulsive/compulsive disorders had significantly impaired response inhibition (stop-signal test; medium effect size), gambled more points (CGT; medium effect size), and had elevated personality traits of impulsiveness (Barratt Questionnaire; large effect size). Some caution is needed when interpreting these findings, as the PIU group without comorbid impulsive/compulsive disorders had a smaller sample size.
These results suggest that the impaired performance on inhibitory control tasks reported in parts of the PIU literature could have stemmed, in part, from the impact of comorbid impulsive or compulsive symptoms. This may help to explain why inconsistent findings have been reported for inhibition tasks in PIU, as noted in a systematic review (Smith et al., 2014). In contrast, we found that obsessive-compulsive tendencies as measured by the Padua Inventory were elevated in both PIU groups (large effect size). It is important to note that this did not stem from OCD at a categorical level, because no one in our sample had OCD based on the MINI interview. PIU can thus be seen as relatively compulsive from a trait personality point of view, but we did not find set-shifting impairment on the IED; hence, this compulsivity does not seem to reflect generalized attentional rigidity.
Several limitations should be noted for this study. We defined PIU using the YDQ, which is a convenient short instrument derived from criteria for gambling and substance-use disorders but applied to pathological use of the Internet. We believe our choice of a score of 4 or more on the YDQ was appropriate based on parallels with the approaches used for gambling disorder. Nonetheless, there are various ways of defining PIU, and there is, to date, no consensus in the field as to which method constitutes the "gold standard" or which cut-offs are optimal. For example, Young's Internet Use Questionnaire is longer and may be more useful for assessing severity as opposed to diagnosis, and several other scales exist or are in development. The YDQ (and the related Internet Use Questionnaire) have received only limited psychometric validation since their inception and may have an inconsistent factor structure (Kiraly, Nagygyorgy, Koronczai, Griffiths, & Demetrovics, 2015). For these reasons, future work may prefer alternative scales that have received more comprehensive validation. For example, the short (6-item) version of the Problematic Internet Use Questionnaire appears to have good properties based on initial validation in a nationally representative sample of adolescents. This was a relatively small cross-sectional study; as such, causality cannot be inferred, and the study was only powered to detect medium-to-large as opposed to small group differences. For this reason, we did not correct for multiplicity. No participants had a prior diagnosis of autism based on clinical screening, but we did not include a dimensional measure of autistic spectrum disorder in our protocol. Our sample can be seen as enriched, as participants were recruited on the basis of some level of gambling over the past year; hence, findings may differ in participants recruited without this criterion in other research. Finally, it remains to be seen whether the findings generalize to other populations, such as PIU presenting in treatment settings, more severe cases, or other age groups.
In conclusion, this study found that while most people with PIU had one or more impulsive/compulsive disorders, quality of life was still impaired even in the group of PIU participants without these comorbidities (large effect size). However, there were differences in the presentation of PIU contingent on such comorbidities. PIU with comorbid impulsive/compulsive disorders was associated with more marked abnormalities in response inhibition and decision-making on the cognitive tasks used (medium effect sizes), and with elevated trait impulsiveness on the Barratt Questionnaire (large effect size), compared with the PIU group without such comorbidities. Both PIU groups showed elevations in dimensional compulsivity (Padua Inventory; large effect size). Future work should refine and arrive at a consensus regarding the definition of PIU and how best to identify it, and should further explore the impact of impulsive and compulsive comorbidities on the presentation of this prevalent, putative mental disorder.
Funding sources: This research was supported by a grant from the National Center for Responsible Gaming to Dr. JEG and by a Wellcome Trust Clinical Fellowship Grant to Dr. SRC (reference no.: 110049/Z/15/Z). Dr. KI's research is supported by Health Education East of England Higher Training Special interest sessions. This article is based on work from COST Action (CA16207), supported by COST (European Cooperation in Science and Technology). The authors would like to thank Dr. Naomi Fineberg for providing feedback on a draft version of this manuscript, undertaken as part of the COST Action Network.
Authors' contribution: Dr. JEG developed the study protocol and undertook the data collection. Dr. SRC undertook the data analysis. All authors contributed substantially to the writing and interpretation of the manuscript. SRC had access to all data from the study, both reported and unreported, and had complete freedom to direct its analysis and reporting. He affirms that there was no external editorial direction or censorship. | 2018-06-12T19:24:48.911Z | 2018-05-15T00:00:00.000 | {
"year": 2018,
"sha1": "cf9aefed409353c6c2bb672d0e089a3a6ae7c957",
"oa_license": "CCBYNC",
"oa_url": "https://akjournals.com/downloadpdf/journals/2006/7/2/article-p269.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf9aefed409353c6c2bb672d0e089a3a6ae7c957",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
225506466 | pes2o/s2orc | v3-fos-license | Fe-catalyzed three-component dicarbofunctionalization of unactivated alkenes with alkyl halides and Grignard reagents
A highly chemoselective iron-catalyzed three-component dicarbofunctionalization of unactivated olefins with alkyl halides (iodides and bromides) and sp2-hybridized Grignard reagents is reported. The reaction proceeds with fast turnover and tolerates a diverse range of sp2-hybridized nucleophiles (electron-rich and electron-deficient (hetero)aryl and alkenyl Grignard reagents), alkyl halides (tertiary alkyl iodides/bromides and perfluorinated bromides), and unactivated olefins bearing diverse functional groups including tethered alkenes, ethers, protected alcohols, aldehydes, and amines to yield the desired 1,2-alkylarylated products with high regiocontrol. Further, we demonstrate that this protocol is amenable to the synthesis of new (hetero)carbocycles including tetrahydrofurans and pyrrolidines via a three-component radical cascade cyclization/arylation that forges three new C–C bonds.
Introduction
Olefins are ubiquitous in natural products and bioactive compounds and serve as versatile commodity feedstocks. 1,2-Difunctionalization of olefins represents one of the most widely used strategies to build synthetic complexity in organic synthesis and serves as a platform to introduce concepts of chemo-, regio-, and stereoselectivity. 1 Recently, there has been a surge in the development of three-component transition metal-catalyzed difunctionalizations of olefins because of their potential to rapidly increase diversity in a single step (Scheme 1a). [2][3][4] However, selective transition metal-catalyzed three-component alkylarylation of unactivated alkenes without electronically biased substrates or directing groups is rare. 5 Moreover, despite the inherently attractive features of iron as a catalyst (Earth-abundant, less toxic, inexpensive, and environmentally benign in comparison to Pd or Ni) in pharmaceutical settings, there are no general methods for iron-catalyzed three-component 1,2-dicarbofunctionalization of olefins. [6][7][8][9][10][11][12][13] Recently, our group reported the use of strained vinyl cyclopropanes to promote a three-component Fe-catalyzed reaction leading to 1,5-alkylarylation products (Scheme 1b). 14,15 Unfortunately, despite numerous attempts, the 1,2-difunctionalization products were not observed, presumably due to the much more rapid ring-opening of the incipient alkyl radical followed by C-C bond formation. Herein, we report the first iron-catalyzed 3-component dicarbofunctionalization of unactivated alkenes with both alkyl iodides and bromides and sp 2 -hybridized Grignard nucleophiles, leading to 1,2-alkylarylation or 1,2-alkylvinylation of alkenes with broad scope and excellent regio- and chemoselectivity (Scheme 1c). Further, we applied this concept to develop a three-component radical alkylation/cyclization/arylation cascade leading to diverse (hetero)cyclic compounds. We anticipate that this report will lead to greater application of Fe as a catalyst in three-component difunctionalization of olefins.
As shown in Scheme 2, we hypothesize that alkyl halide 1 would react with Fe species A to form the alkyl radical int-1 and B. 12,13 Due to the high barrier for direct cross-coupling between sterically hindered alkyl radicals and aryl iron B, we anticipate that the tertiary radical int-1 (or a fast-reacting alkyl radical) would favor regioselective Giese addition to olefin 2 to form, in the absence of cyclopropyl groups, a transient secondary alkyl radical int-2. 16 The longer-lived (persistent) aryl iron species B can then trap the less sterically hindered secondary alkyl radical int-2, and reductive elimination from C forms the desired 1,2-dicarbofunctionalization product and D. Finally, facile transmetallation with aryl Grignard 3 restarts the catalytic cycle. 17 Recognizing that the success of the 3-component dicarbofunctionalization hinges on driving the equilibrium towards formation of int-2, presumably by favoring Giese addition over addition to aryl iron B, we initiated our studies under solvent-free conditions and at high concentrations of alkenes.
The challenge remains whether (a) we can drive the kinetics towards Giese addition to 2, (b) int-2 is sufficiently long-lived to be intercepted by the persistent iron species B, and (c) C will undergo reductive elimination to form the desired 1,2-dicarbofunctionalization product.
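As a back-of-the-envelope illustration of point (a), the branching between Giese addition and direct coupling can be modeled as two competing second-order steps; the rate constants and concentrations below are purely hypothetical, chosen only to show why neat/high-alkene conditions should favor int-2:

```python
def giese_branching(k_giese, alkene_conc, k_couple, fe_ar_conc):
    """Fraction of alkyl radicals diverted into Giese addition rather
    than direct coupling with the aryl-iron species, assuming simple
    competing second-order steps (hypothetical rate constants)."""
    r_giese = k_giese * alkene_conc
    r_couple = k_couple * fe_ar_conc
    return r_giese / (r_giese + r_couple)

# Illustrative numbers only: raising the alkene concentration
# (e.g., neat conditions) pushes selectivity toward Giese addition.
for c in (0.1, 1.0, 5.0):  # alkene concentration in M
    print(c, round(giese_branching(1e6, c, 1e7, 0.01), 3))
```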
Results and discussion
Initially, we elected to use tert-butyl iodide 1, 4-phenyl-1-butene 2, and meta-methoxyphenyl Grignard 3 as model substrates (Table 1). Gratifyingly, under our modified conditions for radical cross-coupling with vinyl cyclopropanes (i.e., using Fe(acac) 3 as a precatalyst and 1,2-bis(dicyclohexylphosphino)ethane as a ligand), 14a we observed the formation of the desired 1,2-alkylaryl product 4 in 86% yield and with complete regioselectivity on unactivated olefin 2 (Table 1, entry 1). Notably, other bisphosphine ligands commonly employed in direct Fe-catalyzed cross-coupling reactions with alkyl halides 10 significantly decrease the yield (entries 2-5). Further, the use of an iron precatalyst bearing strongly coordinating ligands inhibits the reaction (entry 6), while other precatalysts were less efficient (entries 7 and 8). Moreover, the use of THF as solvent had only a minor effect on the overall efficiency of the 3-component 1,2-dicarbofunctionalization (entry 9). Finally, we could also perform the reaction in high yield at lower catalyst loading (entry 10). Control experiments show that the Fe and the ligand are both critical for the reaction (entries 11 and 12). For full details of reaction optimization and screening conditions, see the ESI.†

With a set of optimized reaction conditions in hand, an exploration of the reaction scope and limitations of this bisphosphine iron-catalyzed 3-component dicarbofunctionalization was undertaken. As shown in Scheme 3, the reaction tolerated a wide range of electron-rich (e.g., 4, 6, 7, 9, 12, 13, 15, and 16) and electron-deficient aryl Grignard nucleophiles (e.g., 5, 8, 11, 14, and 17), forming the desired 1,2-alkylaryl products. Further, various substituent positions on the aryl nucleophiles were tolerated, including meta and para mono- and disubstituted aryl Grignard nucleophiles. Importantly, vinyl Grignard reagents are also competent nucleophilic partners, forming the regioselective 1,2-alkylvinyl product 18 in 41% yield. This represents the first example of transition metal-catalyzed 1,2-alkylvinyl functionalization of unactivated olefins. Unfortunately, sterically hindered Grignard reagents are not compatible with this transformation, presumably due to the high energy required to undergo inner-sphere reductive elimination. 11,12

Scheme 2: Proposed pathway to realize the 1,2-dicarbofunctionalization of alkenes using iron catalysis.

Next, we explored the olefin scope using tert-butyl iodide 1 and meta-methoxyphenyl Grignard 3 as dicarbofunctionalization partners (Scheme 4). In general, a wide range of unactivated olefinic partners were tolerated. Compatible partners include olefins with tethered aliphatic chains, alkenes, alkoxy groups, protected alcohols, aldehydes and amines, esters, and even pyridine and furan moieties, producing the desired products in 32-83% yield (19-34). However, alkenes bearing O- and S-heteroatoms were not compatible with this transformation (see the ESI†). Importantly, this Fe-catalyzed three-component method provides unique reactivity with dienes. In particular, we found that the method is highly chemo- and regioselective for monofunctionalization of the less substituted alkene (23-25), even at lower concentrations of alkenes (see the ESI†). To showcase the practical application of this method, we also scaled up the reaction, which formed the monofunctionalized product 22 in 83% yield (1.38 g).
Furthermore, we also found that perfluorinated n-alkyl bromides were competent partners with unactivated cyclic alkenes (35 and 36), yielding the desired products as single diastereoisomers in 53-74% yield. For an internal alkene on an aliphatic chain (37), the perfluorinated n-alkyl radical gave the desired products as a mixture of diastereomers (dr = 1.2:1; see the ESI†) in 46% yield. As shown in Scheme 5, in contrast to current state-of-the-art TM-catalyzed three-component dicarbofunctionalizations, this method tolerates a diverse range of radical precursors and operates with short reaction times and at low temperatures. Specifically, tertiary alkyl bromides also form the desired 1,2-alkylaryl products 38-50 with efficiency similar to that of alkyl iodides. These results represent the first examples of the use of alkyl bromides in a transition metal-catalyzed 3-component intermolecular 1,2-alkylarylation of unactivated olefins and complement existing methods based on reductive cross-couplings, as reported by Nevado. 5 Furthermore, other tertiary alkyl iodides/bromides are compatible with this transformation, yielding the desired products 51-55 in 31-63% yield. Finally, consistent with our hypothesis (Scheme 2), we also found that perfluorinated n-alkyl radicals (much more reactive towards Giese addition to alkenes) 18 were competent in this Fe-catalyzed three-component dicarbofunctionalization reaction, yielding the desired products 56-57 in 77-87% yield. Unfortunately, other primary and secondary alkyl halides are not compatible with this transformation owing to competing direct cross-coupling (see the ESI†).
To expand the synthetic utility of this Fe-catalyzed three-component dicarbofunctionalization, we next explored the possibility of performing a radical cascade cyclization/arylation with a series of 1,6-dienes, leading to the formation of three carbon-carbon bonds in one synthetic step (Scheme 6a). We hypothesize that regioselective Giese addition to the olefin will form the secondary alkyl radical intermediate G. If the rate of Fe-arylation is slower than the rate of ring-closure, then we should observe only the ring-closed arylated product (i.e., 58). However, if the rate of Fe-arylation of G is faster than the rate of 5-exo-trig radical cyclization, then we should observe only the uncyclized product (i.e., 59). As shown in Scheme 6a, we found that this method delivered the desired carbocycle 58 in good yield (71%). We also observed the uncyclized product 59, presumably from direct arylation of G, albeit in low yield (9%).
Notably, incorporation of heteroatoms (O or N) or addition of a diester linkage results in exclusive formation of the cyclic product. Specifically, we found the desired alkylaryl tetrahydrofuran 60, diester-substituted carbocycle 62, and pyrrolidine 64 formed in good to excellent yield (51-95%) and without formation of the uncyclized product. DFT calculations [UPBEPBE-D3/6-311+G(d,p)-CPCM(THF)//UB3LYP/6-31G(d)] using the tBu radical and 1,6-heptadiene predict a barrier of 13.2 kcal mol−1 for irreversible Giese addition leading to G, which lies 5.2 kcal mol−1 downhill in energy. In agreement with experiment, G preferentially undergoes radical cyclization leading to the cis isomer, while (irreversible) radical cyclization leading to the trans isomer is only 1.2 kcal mol−1 higher in energy. Also consistent with experiment, the rates of radical cyclization for the X=O-substituted diene are faster, and the energy difference between cis and trans radical cyclization is larger (1.7 kcal mol−1; see the ESI†). However, at this stage, we cannot rule out alternative mechanistic pathways such as olefin coordination to the metal center preceding alkyl radical addition, or 1,2-migratory insertion of the iron-aryl into the alkene. Future work on elucidating the mechanism of this transformation is ongoing and will be reported in due course. Given the prevalence of saturated heterocyclic compounds (tetrahydrofurans and pyrrolidines) in pharmaceuticals, we used an oxygen-substituted diene as a model compound to explore the reaction scope of this Fe-catalyzed three-component radical cascade cyclization/arylation (Scheme 6b). As shown in Scheme 6b, this reaction is very robust with aryl Grignard nucleophiles, forming the desired products in excellent yields with the cis-isomer as the major product (as determined by 1 H NMR and via crystal structure determination of 66; see the ESI†). The use of sterically hindered, heteroaryl, or vinyl nucleophiles was also tolerated (69-72). Moreover, other tertiary alkyl iodides as well as perfluorinated alkyl and tertiary bromides also work in this transformation, forming the radical cascade cyclization/arylation products 73-77 in 51-88% yield. Finally, the method is regioselective for addition to conjugated 1,3-dienes to form 1,4-alkylaryl products 78-79 in good yield (up to 11:1 E:Z; Scheme 6c).
Conclusions
In summary, we have developed a three-component 1,2-alkylarylation of unactivated olefins using a bisphosphine iron catalyst. Further, we demonstrated that this protocol can forge three carbon-carbon bonds in one synthetic step, leading to a diverse set of carbo- and heterocyclic compounds. We expect that this method will be adopted by the pharmaceutical community for the synthesis of bioactive products, fine chemicals, and late-stage diversification of promising leads. Although this method is currently limited by the use of a large excess of olefin, preliminary experiments show that the use of activated alkenes could circumvent the need for excess alkene, and this will be reported in due course. Future work is ongoing to elucidate the mechanism of this transformation using computational, experimental, and spectroscopic tools. We are actively pursuing other three-component Fe-catalyzed reactions with other π-acceptors, nucleophiles, and electrophiles, including asymmetric variants, and will report in due course.
Conflicts of interest
There are no conflicts to declare. | 2020-07-30T02:04:29.454Z | 2020-07-24T00:00:00.000 | {
"year": 2020,
"sha1": "3664c170d757a7d74738927336d93331981f9234",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/sc/d0sc02127j",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e07d161d676dd76dcabb8e870baa23d2e7fc6e10",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
233426033 | pes2o/s2orc | v3-fos-license | Inosine in Biology and Disease
The nucleoside inosine plays an important role in purine biosynthesis, gene translation, and modulation of the fate of RNAs. The editing of adenosine to inosine is a widespread post-transcriptional modification in transfer RNAs (tRNAs) and messenger RNAs (mRNAs). At the wobble position of tRNA anticodons, inosine profoundly modifies codon recognition, while in mRNA, inosines can modify the sequence of the translated polypeptide or modulate the stability, localization, and splicing of transcripts. Inosine is also found in non-coding and exogenous RNAs, where it plays key structural and functional roles. In addition, molecular inosine is an important secondary metabolite in purine metabolism that also acts as a molecular messenger in cell signaling pathways. Here, we review the functional roles of inosine in biology and their connections to human health.
Introduction
Inosine was one of the first nucleobase modifications discovered in nucleic acids, having been identified in 1965 as a component of the first sequenced transfer RNA (tRNA), tRNA Ala [1]. Inosine is a purine nucleoside formed by hypoxanthine (IUPAC name: 1,7-dihydropurin-6-one; molecular formula: C 5 H 4 N 4 O) linked through its N9 nitrogen to the C1′ carbon of ribose (Figure 1A).
It has been proposed that life on Earth developed either at submarine vents in the deep ocean [2] or in warm little ponds [3], around 3.7 billion years ago [4]. A pre-existing environment containing N 2 , CO 2 , SO 2 , H 2 O, and traces of H 2 and CO [5] possibly served as a source for the chemical synthesis of nucleobases. Experimental UV irradiation of icy mixtures of these molecules can form compounds such as 4(3H)-pyrimidone (a precursor of uracil), 4-aminopyrimidine (a precursor of cytosine), and 4-pyrimidinemethanol [6].
The RNA world hypothesis [7] posits that protocells relied on the physico-chemical properties of RNA for catalysis, replication, and selective evolution [8]. However, the actual base composition of RNAs in the RNA world is unknown. Beyond the four major nucleosides (adenosine (A), uridine (U), guanosine (G), and cytidine (C)), extant RNAs typically contain a significant number of noncanonical nucleosides like inosine [9,10], which may have been important for the control of primordial ribozyme activities. Recent discoveries regarding the ability of inosine to improve the fidelity and efficiency of nonenzymatic RNA replication [11] support the possibility that inosine was an important component of early nucleic acids [12,13].
Extant non-canonical bases are generated post-transcriptionally by modification enzymes, a process referred to as RNA editing, and they play structural and functional roles that depend on both the nature and the position of the modified base. Deamination of adenosines by specific RNA deaminases is the major biological mechanism for inosine generation, through a reaction that converts the 6-aminopurine ring of adenosine to a 6-oxopurine ring (Figure 1B). In extant organisms, molecular inosine serves as a key intermediate in purine metabolism and is a widespread component of various nucleic acids. In RNAs, inosine plays two major functional roles. Inosine at the wobble position (I 34 ) of tRNAs allows the translation of C-, A-, and U-ended codons. This expands the repertoire of triplets that the modified tRNA can recognize and, in doing so, profoundly modifies the balance between codon usage and tRNA abundance in the organisms where the modification is abundant.
In mRNAs, on the other hand, inosine changes the informational content of transcripts, and it can modify the three-dimensional structure of double-stranded regions, thus influencing interactions with RNA-binding proteins. Inosine is interpreted as guanosine by the splicing and translation machineries, affecting transcript localization, splicing, and translation accuracy. Its combined effect upon mRNA and tRNA functions makes inosine a major modulator of translational efficiency and accuracy that contributes to proteome diversity among species.
Here we review the distribution of inosine among extant organisms, and its known biological functions, including its role as an additional regulatory layer for translation, and the links of these functions to human disease.
Detection and Quantification of Inosine
Since the discovery of inosine by means of laborious purifications of specific RNA species, followed by selective RNA degradation and chromatographic studies [1], a number of techniques now exist for mapping inosine modifications. All these strategies have their strengths and limitations, and their preferential use is dependent on the RNA species of interest and the biological/biochemical question that needs to be addressed. Molecular inosine can be readily detected and quantified using standard biochemical methods that mostly rely on conversion of inosine into hypoxanthine. Detection of inosine within RNA species, on the other hand, is more challenging and will be the focus of this section.
Chromatography-Based Methods
Chromatography is still used today to detect and quantify inosines. It is frequently used when working with in vitro-derived samples (e.g., synthetic or in vitro-transcribed RNAs bearing inosine modifications). The RNA of interest is usually radiolabeled, digested to single nucleotides, and resolved by thin-layer chromatography [14]. This is a semiquantitative and cost-effective method but cannot be used in a high-throughput manner and does not give information on the location of the modified residue.
To study inosine modifications in in vivo-derived samples (e.g., inosine-containing RNAs derived from cellular extracts), liquid chromatography coupled with mass spectrometry (LC-MS/MS) can be used [15]. This is a highly quantitative non-radioactive method, but it is also low throughput, requires previous purification (in large amounts) of the RNA species of interest, does not give positional information about the modification, and necessitates expensive specialized equipment.
Reverse Transcription (RT)-Based Methods
Several methods for inosine detection and quantification are based on reverse transcription (RT) of RNAs followed by PCR amplification. Inosine is structurally a guanosine analogue (Figure 1A) that reverse transcriptases read as G instead of the A from which it derives. This property can be exploited to detect and quantify inosine by calculating the A-to-G mismatch proportion within PCR products (amplicons) while also determining the position of the modifications. A simple, fast, semi-quantitative, and cost-effective method to characterize these amplicons is restriction fragment length polymorphism (RFLP), which can be used when the A-to-I(G) conversion creates or abolishes a restriction enzyme recognition site [16][17][18]. This method allows the evaluation of multiple samples at once but is low throughput in terms of the number of A-to-I edited sites that can be studied.
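As a minimal sketch of this quantification step (toy aligned sequences rather than real sequencing data; production pipelines operate on alignments and apply quality filters):

```python
def editing_levels(reference, reads):
    """Per-site A-to-I editing estimated as the fraction of reads with G
    at positions where the reference has A. Reads are assumed to be
    pre-aligned (same length and coordinates as the reference)."""
    levels = {}
    for i, ref_base in enumerate(reference):
        if ref_base != "A":
            continue  # only reference adenosines can be A-to-I edited
        calls = [read[i] for read in reads if read[i] != "N"]
        if calls:
            levels[i] = calls.count("G") / len(calls)
    return levels

ref = "TTACAGGA"
reads = ["TTACGGGA", "TTACAGGA", "TTACGGGA", "TTACGGGA"]
print(editing_levels(ref, reads))  # {2: 0.0, 4: 0.75, 7: 0.0}
```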
RT-PCR products can also be sequenced. This can be done by standard Sanger sequencing when only inosines at specific sites and on particular RNA species are evaluated [17,18], which is a semi-quantitative and inexpensive approach. Most frequently, however, high-throughput RNA sequencing (RNA-seq) is used instead. This is a powerful and highly quantitative technique that allows the identification of multiple inosine sites in a given sample [19][20][21]. However, the method is expensive and requires good knowledge of analytical computational tools.
Sequencing errors, or A-to-G genomic mutations, may lead to false-positive inosine assignments. To validate whether an A-to-G mutated site is indeed an A-to-I edited site, inosine chemical erasing (ICE)-Seq has been developed [22]. In this method, total RNA is treated with acrylonitrile prior to RNA-seq. This compound cyanoethylates inosines, and the resulting N1-cyanoethylinosines block RT. By comparing RNA-seq data obtained from the same sample with and without acrylonitrile treatments, inosine sites can be unequivocally detected. This method, however, cannot detect sites with 100% A-to-I editing or multiple inosine modifications located in close range.
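The comparison at the heart of ICE-Seq can be sketched as a simple set operation over candidate sites (coordinates below are invented; a real analysis would also model coverage and editing-level changes rather than binary calls):

```python
def ice_validate(sites_untreated, sites_treated):
    """Toy ICE-Seq comparison. Candidate A-to-G sites whose G signal
    disappears after acrylonitrile treatment (cyanoethylated inosine
    blocks reverse transcription) are validated as inosine; sites that
    persist are likely genomic A-to-G variants or sequencing errors."""
    untreated, treated = set(sites_untreated), set(sites_treated)
    return {"validated_inosine": untreated - treated,
            "likely_variant_or_error": untreated & treated}

calls_minus_acn = {101, 250, 307}  # A-to-G calls without treatment
calls_plus_acn = {250}             # calls surviving the treatment
print(ice_validate(calls_minus_acn, calls_plus_acn))
```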
Other Methods
Specific RNases can be used to cleave inosine-containing RNAs, and the digested RNA can then be resolved by gel electrophoresis. These methods are low throughput and not fully quantitative, but they are simple, inexpensive, and particularly useful when inosine cannot be readily detected by RT-based methods (e.g., in certain tRNA species) [23].
For example, RNase T1 is an enzyme that cleaves both guanosine and inosine. It is possible to treat inosine-containing RNA with glyoxal/borate to protect guanosines (but not inosines) from cleavage by RNase T1. In this manner, only inosine-containing sites will be cleaved and can be readily detected [24,25]. Alternatively, endonuclease V (EndoV) specifically cleaves single-stranded RNA at inosine sites, generating fragments that can be detected by Northern blotting [26,27]. EndoV has also been used to develop splinted ligation-based inosine detection (SL-ID). In this method, RNA is treated with EndoV and the resulting (inosine-containing) cleavage products are captured by specific bridge oligonucleotides and splint-ligated to a radiolabeled ligation oligonucleotide, prior to the analysis of the reaction products by gel electrophoresis and autoradiography [23].
More recently, developments in Nanopore technologies are enabling the detection and quantification of inosine in native RNAs by high-throughput sequencing, without the need for RT [28].
Molecular Inosine in Metabolism and Signaling
Purine nucleotides act as sources of energy, cofactors for metabolic enzymes, and signaling molecules. Accordingly, molecular inosine is a central intermediate in purine biosynthetic and degradation pathways (Figure 2), while also playing an important role in neuronal signaling. The de novo purine synthetic pathway involves 10 enzymes that sequentially construct purines on the ribose moiety from phosphoribosyl pyrophosphate (PRPP) [29]. Inosine monophosphate (IMP) is the first purine product of this pathway. Highly proliferating cells such as tumor cells adopt an energy-intensive de novo biosynthetic pathway to build IMP. The metabolic enzymes of the de novo synthetic pathway are overexpressed in various cancers [30][31][32][33], and the tumor microenvironment is rich in purine nucleotides [34]. Enzymes involved in folic acid metabolism, such as dihydrofolate reductase (DHFR), play an essential and limiting role in de novo purine biosynthesis. As a result, inhibitors of the de novo purine synthetic pathway, such as antifolates, serve as chemotherapy agents against various cancers [35].
The salvage pathway is a purine anabolic pathway that shares enzymes with the de novo purine synthetic pathway and recycles IMP to replenish the levels of adenosine and guanosine nucleotides. Inosine monophosphate dehydrogenase (IMPDH) and hypoxanthine phosphoribosyltransferase (HPRT) are the key enzymes of the purine salvage pathway. IMPDH converts IMP to xanthine monophosphate (XMP), an immediate precursor of guanosine monophosphate (GMP). The expression of IMPDH is enriched in human leukemic cells and various other cancers [36,37]. Targeting IMPDH is a potential therapeutic strategy for leukemia [38]. Similarly, targeting HPRT with substrate analogs such as 6-mercaptopurine is effective against various cancers and autoimmune diseases [39,40].
In the purine degradation pathway, inosine produced from adenosine is converted by purine nucleoside phosphorylase (PNP) to hypoxanthine, which is further degraded to uric acid [41]. Enhancing the purine degradation pathways is another strategy to reduce the pool of purines of rapidly proliferating cells [42].
Human inosine triphosphatase (ITPase) is a ubiquitously expressed enzyme that hydrolyzes inosine triphosphate (ITP/dITP) to inosine monophosphate (IMP/dIMP) [43]. Functional loss of ITPase can lead to the incorporation of inosines into RNAs and DNA. ITPase-null mouse embryonic cells show enriched inosine content in RNAs but not in DNA [44], from which it is presumably removed by DNA repair mechanisms. In humans, recessive ITPase mutations are implicated in pediatric encephalopathies characterized by lack of development, seizures, cardiac abnormalities, and cataracts [45].
In purinergic signaling, nucleotides mediate neurotransmission by serving as signaling molecules for purine and pyrimidine receptor families [46]. Adenosine acts as a neurotransmitter in both the peripheral and central nervous systems [47], and inosine exerts effects similar to those of adenosine, activating A1, A2A, and A3 adenosine receptors [48]. Inosine administration is neuroprotective in rats with spinal cord injury, possibly through its free-radical-scavenging metabolite, urate [49]. By functioning as an intracellular signaling molecule, inosine also acts as an antidepressant in mice [50], promotes axonal outgrowth, and improves behavioral outcomes after stroke [51,52].
Oral administration of inosine has been explored in clinical trials for neurological conditions such as Parkinson's disease (PD). Inosine administration elevates urate levels in serum and cerebrospinal fluid (CSF), thus affording neuroprotection through radical scavenging [53][54][55]. In PD patients, inosine administration may slow the progression of disability, but a phase 3 trial was terminated prematurely because the anticipated efficacy was not met [56]. Inosine pranobex (IP), an inosine derivative, is known for its immunomodulatory and antiviral properties [57] and is being explored for the treatment of COVID-19 in elderly patients, owing to the enhancing effects of IP on lymphocyte proliferation, cytokine production, and natural killer cell cytotoxicity [58].
Inosine in tRNA
tRNAs are the translators of the genetic code during protein synthesis and are crucial to the efficiency and fidelity of translation [59]. tRNAs fold into a cloverleaf secondary structure and adopt an L-shaped architecture [60] where the nucleobases at positions 34, 35, and 36 form the anticodon that recognizes complementary codon triplets in mRNA. The nucleobases at position 34 do not strictly adhere to Watson-Crick rules when paired with the third base of codons (wobble pairing) in the ribosome.
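The wobble rules can be made explicit with a short sketch: deamination of A 34 to I 34 lets a single anticodon read codons ending in U, C, or A. The pairing table encodes standard Watson-Crick pairs plus the inosine wobble set; the tRNA examples are illustrative:

```python
# Watson-Crick complements plus the inosine wobble set (I pairs with U, C, A).
PAIRS = {"A": {"U"}, "U": {"A"}, "G": {"C"}, "C": {"G"}, "I": {"U", "C", "A"}}

def decoded_codons(anticodon):
    """List the codons read by a tRNA anticodon (both written 5'->3').
    The first anticodon base is position 34, which pairs with the third
    (wobble) position of the codon; pairing is antiparallel."""
    pos34, pos35, pos36 = anticodon
    return sorted(c1 + c2 + c3
                  for c1 in PAIRS[pos36]   # codon position 1 pairs with 36
                  for c2 in PAIRS[pos35]   # codon position 2 pairs with 35
                  for c3 in PAIRS[pos34])  # codon position 3 pairs with 34

print(decoded_codons("AGC"))  # unmodified tRNA-Ala(AGC): ['GCU']
print(decoded_codons("IGC"))  # after A34 deamination:    ['GCA', 'GCC', 'GCU']
```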
In a functionally equivalent reaction, uridines at position 34 of tRNAs can be modified by tRNA-specific uridine methyltransferases to form xo 5 U 34 . This modification enables tRNAs to base-pair with A-, G-, or U-ended codons. The nature of the preferred modification at position 34 of tRNAs is a distinguishing feature of archaeal, bacterial, and eukaryotic organisms, and the expansion of I 34 in eukaryotes was an important influence in the establishment of eukaryotic tRNA gene populations and overall genomic codon usage [68]. Archaea lack both A 34 - and U 34 -tRNA base modifications, whereas extensive U 34 methylation is a prominent characteristic of bacterial tRNAs [68]. In bacteria, the formation of I 34 in tRNAs is catalyzed by a homodimeric enzyme: tRNA-specific adenosine deaminase (TadA) [14]. Most bacteria have a single A 34 substrate for TadA: tRNA Arg with the anticodon ACG. However, several bacterial species express more than one A 34 -tRNA. For instance, Oenococcus oeni (Firmicutes) contains four A 34 -tRNAs cognate for Arg, Leu, Thr, and Ser. Interestingly, in this species I 34 has been detected in tRNAs cognate for Arg and Leu but not in the A 34 -tRNAs for Thr and Ser, indicating that the expansion of tRNA substrates modified by tRNA deaminases likely started with the emergence of unmodified A 34 -tRNA substrates [65]. TadA is an essential enzyme in Escherichia coli, a fact attributed to the importance of I 34 -tRNA Arg in translation [14]. In agreement with this, bacterial species that lack A 34 -tRNA genes also lack TadA [65,[69][70][71]. In these species, other tRNA isoacceptors compensate for the lack of A 34 -tRNA genes. A second function of TadA-dependent inosine deamination in E. coli is discussed later in this manuscript [72].
In eukaryotes, the situation regarding I 34 -tRNAs is more complex than in bacteria, because the eukaryotic adenosine deaminase acting on tRNAs (ADAT) deaminates A 34 in multiple tRNAs (seven tRNAs in some fungi and plants, and eight tRNAs in most well-characterized species) [73]. The emergence of eukaryotic ADAT was accompanied by a dramatic genomic enrichment in A 34 -tRNA genes [68,[73][74][75].
Eukaryotic ADATs are heterodimeric enzymes that evolved from the duplication of a bacterial tadA gene. The catalytic subunit is known as ADAT2, while its tRNA-binding partner is named ADAT3. A conserved proton-shuttling glutamate, which is essential for the catalytic activity of ADAT2, was lost during the evolution of ADAT3, rendering this subunit catalytically inactive [76,77]. A recent study on the crystal structure of ADAT2/3 from Saccharomyces cerevisiae suggests that the positively charged residues at the N-terminal region of ADAT3 may play a role in substrate recognition [78].
It is unclear how ADATs evolved to expand their substrate specificity to multiple tRNAs. In Trypanosoma brucei, in addition to A 34 modification in tRNAs, ADAT2/3 carries out C-to-U editing in single-stranded DNA [79], and the same enzyme is necessary for the C-to-U editing of tRNA Thr [80]. In this species, substrate recognition by ADAT requires a KR domain containing stretches of Arg and Lys at the C-terminus of ADAT2 [81].
In fungi, reduced levels of I 34 -tRNAs arrest the cell cycle of Schizosaccharomyces pombe, and deletion of the enzyme is lethal in S. cerevisiae [82]. Inosine modification in tRNA Arg of Arabidopsis thaliana chloroplasts improves the efficiency of translation of the organelle's genome [83]. ADAT activity is, in fact, essential in all tested eukaryotic species [21,66,79,82], which is to be expected given that most eukaryotic genomes lack genes coding for several G 34 -tRNAs. Therefore, A-to-I editing is required to compensate for the lack of G 34 -tRNAs otherwise needed to decode C-ended codons [74]. In bacteria, on the other hand, G 34 -tRNAs are abundant because prokaryotes have not adopted I 34 as a general solution for the translation of C-ended codons (Figure 4). Interestingly, G 34 -tRNAs were shown to be toxic to eukaryotic cells, as they are prone to induce miscoding in the context of eukaryotic translation systems [84].

Figure 4 (caption): tRNA Leu AAG is also found in a few prokaryotes such as O. oeni [65]. In eukaryotes, diverse A 34 -tRNAs serve as substrates for heterodimeric ADAT2/3, while the population of G 34 -tRNAs is limited. The expansion of A 34 -tRNA diversity co-evolved with multisubstrate specificity in ADATs. (Anticodons are boxed, and the corresponding amino acids are one-letter-abbreviated.)
Codon composition and RNA structure are important factors that influence translation rates, and the clustering of rare codons (codons with few copies of cognate tRNAs) in regions of mRNAs limits the rate of translation [85,86]. Thus, the translation of genes rich in ADAT-sensitive codons (codons translated by I 34 -tRNAs, corresponding to the amino acids T, A, P, S, L, I, V, and R, hereinafter TAPSLIVR) might benefit from the increased decoding capacity of inosine-modified tRNAs [64,87,88]. In agreement with this prediction, self-renewing embryonic stem cells, which express a large number of genes enriched in ADAT-sensitive codons, display enhanced ADAT2 levels [89].
In general, eukaryotic proteomes are highly enriched in protein sequences with ADAT-sensitive amino acid stretches when compared to bacterial proteomes (~4-fold enrichment) [90], and the codon composition of the transcripts coding for TAPSLIVR-rich proteins is biased in favor of I 34 -tRNA dependent codons (~70% enrichment) [90]. Thus, eukaryotes (that preferentially use I 34 -tRNAs for decoding TAPSLIVR, Figure 4) display different proteome composition in terms of proteins rich in TAPSLIVR amino acids as compared to bacterial species, and their transcripts are also enriched in codons that require this modification [65,90,91]. We have proposed that inosine at position 34 of tRNAs represents a eukaryote-specific evolutionary trait selected because it contributes to proteome complexity expansion [92].
Interestingly, I 34 -tRNAs are prone to internal cleavage by endonuclease V, a highly conserved ribonuclease that cleaves inosine-modified tRNAs at their anticodons [26]. Stress conditions such as oxidation and starvation can trigger the cleavage of tRNAs at their anticodon loops, and the resulting fragments play a number of regulatory roles that are as yet largely unexplored [93][94][95].
Inosine in mRNA
Bass and Weintraub first identified A-to-I editing in Xenopus laevis double-stranded mRNAs [109]. Since then, more than 36,000 non-repetitive A-to-I editing sites (excluding Alu repeats) have been predicted in the human genome [110].
In eukaryotes, inosines in mRNAs are generated through the activity of adenosine deaminases acting on RNAs (ADARs) [111], which are widely conserved across the eukaryotic kingdom. There are three vertebrate ADAR enzymes (ADAR1, ADAR2, and ADAR3), of which ADAR3 is apparently catalytically inactive [112]. Although ADAR-mediated RNA editing is the main mechanism for inosine introduction in mRNAs, RNA polymerase can occasionally introduce inosines on elongating transcripts [113].
Structurally, inosine alters the stability of double stranded RNA (dsRNA) in a manner that depends on the nucleotide it pairs with. For instance, the I-U base pair is less stable than A-U, whereas the I-C pairs are more stable than A-C pairs [114]. The effects of inosine modification on mRNA structure and function also depend on its position on the mRNA (i.e., untranslated regions (UTRs), introns, and coding regions).
During translation, tRNAs recognize inosines in the coding regions of mRNAs as guanosines. Thus, the modification of adenosine to inosine in mRNA has the potential to generate substitutions in the protein sequence. Inosine editing in a coding region was first reported in the mRNA for subunit 2 of AMPA glutamate receptors (GluR-B) [115]. ADAR-mediated editing of a CAG codon (Gln) to CIG (read as CGG, Arg) in these transcripts modulates calcium permeability [115]. Similar ADAR-editable sites are present in serotonin receptors [116], the squid potassium channel [117], Xenopus basic fibroblast growth factor [118], sodium channels in Drosophila [119], and various other proteins of physiological importance [120].
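A toy translation routine illustrates the recoding logic (the codon table below is deliberately truncated to the codons used in this example; a full implementation would use the complete standard genetic code):

```python
# Deliberately truncated codon table covering only this example;
# a real implementation would use the full standard genetic code.
CODONS = {"CAG": "Q", "CGG": "R"}

def translate_with_inosine(mrna):
    """Translate an inosine-containing mRNA: the ribosome reads I as G,
    so inosines are substituted with G before codon lookup."""
    as_read = mrna.replace("I", "G")
    return "".join(CODONS[as_read[i:i + 3]]
                   for i in range(0, len(as_read), 3))

print(translate_with_inosine("CAG"))  # unedited GluR-B Q/R-site codon -> 'Q'
print(translate_with_inosine("CIG"))  # A-to-I edited codon            -> 'R'
```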
In addition to protein recoding, in vitro hypermodification with inosine in the coding region of mRNA may lead to ribosome stalling and truncation of peptides. In particular, an INI codon (a codon with inosine as the first and third bases) results in peptide truncation at levels of 60-80%, compared with codons containing a single inosine [121]. In yeast, inosine-mediated synonymous codon changes do not result in protein recoding, but they may affect RNA stability and translation efficiency, depending on tRNA availability and codon usage for the modified codon [122]; the same was observed in mouse oocytes [123,124].
A-to-I editing of GluR-B mRNA is required for brain function, and mice with depleted modification levels present with severe seizures and premature death [125,126]. Remarkably, altered A-to-I editing in the pre-mRNA transcripts of serotonin 2C receptors was reported in suicide victims with a history of depression [127].
In eukaryotic transcriptomes, A-to-I editing is widespread in noncoding regions of transcripts. Inosines in untranslated regions (UTRs) and introns modulate the stability, localization, and integrity of the transcripts (Figure 6) [128]. For example, inosines within the 3′ UTR may result in nuclear retention of the mRNA [129,130]. This was first discovered in mice, where A-to-I editing regulates the nuclear retention of an 8 kb poly(A)+ RNA [129]. However, subsequent analyses have reported cytosolic distribution of multiple mRNAs with hyperedited 3′ UTRs in Caenorhabditis elegans and Homo sapiens [131], indicating that nuclear retention is not always a consequence of 3′ UTR inosines.
A-to-I editing in introns of pre-mRNAs can modulate their splicing because inosines are recognized as guanosines, thus creating or removing alternative splice sites. The modification of splice sites by inosine can lead to the translation of alternative reading frames, a phenomenon first observed in transcripts of the Trypanosoma mitochondrial CoxII gene [132]. In the ADAR2 mRNA itself, an intronic inosine modification generates a highly conserved alternative 3′ splice site that results in the addition of 47 nucleotides to the mature transcript, shifting the reading frame and reducing ADAR2 protein levels [133]. Interestingly, this effect also modulates A-to-I editing levels in pre-mRNA, as higher spliceosome activity limits A-to-I editing by restricting the spatial access of the editing enzymes to the transcript [134].
In C. elegans, deletions of the ADAR gene affect vulva development and chemotaxis [135], while ADAR-deleted Drosophila mutants exhibit paralysis, uncoordinated locomotion, and tremors caused by the depletion of inosine at 25 sites in transcripts coding for three different ion channels [16]. In mice, ADAR1 inactivation results in an embryonic-lethal phenotype, which is a consequence of the activation of interferon- and dsRNA-sensing pathways [136], liver disintegration [137], aberrant hematopoiesis, and increased apoptosis [138,139].
In humans, the amount of A-to-I editing in mRNAs is strongly tissue-dependent (higher in the brain and thymus, and lower in transformed cells) [140]. The contribution of A-to-I editing to the translation of proteins of oncogenic importance is being explored in cancer research for its diagnostic and therapeutic potential [141,142]. For instance, progression from healthy tissue to gastric tumor is associated with enhanced editing at ADAR1-specific sites and downregulation of editing at ADAR2-specific sites [143], pointing to inosine modifiers as potential biomarkers for gastric cancer [144]. Moreover, increases in ADAR1 activity through gene amplification enhance lung tumorigenesis [145], while loss of ADAR1 function allows tumor cells to overcome resistance to immunotherapy by removing the checkpoint that restrains the dsRNA-mediated immune response pathway [146]. Point mutations in ADAR1 are observed in patients with genetic disorders such as Aicardi-Goutières syndrome (AGS), characterized by encephalopathies mediated by an aberrant immune response [147]. Mutated ADAR1 is also associated with an autosomal dominant condition known as dyschromatosis symmetrica hereditaria (DSH), a phenotype with varied hyper- and hypopigmentation of the skin [148].
Though inosine modification of mRNAs is mostly confined to eukaryotes, a recent study identified inosine in the transcripts of hok-like genes in prokaryotes [72]. Interestingly, E. coli TadA recognizes a hairpin structure in the coding region of hok transcripts (hokB, hokC, hokD, hokE) that resembles the anticodon stem-loop of tRNA Arg . The editing event in hokB recodes a TAC codon (Tyr) to TIC (TGC, Cys), and this HokB-Cys29 variant is more toxic to E. coli than the protein from unedited transcripts [72]. Levels of HokB-Cys29 increase with rising cell density [72] in a mechanism thought to mediate programmed cell death and antibiotic tolerance in bacteria [149,150].
In summary, regions of mRNAs that form secondary structures containing an editable sequence can be subjected to A-to-I editing by ADARs. Inosine is interpreted as guanosine by the molecular machineries acting on mRNAs. Inosine modifications in the coding regions of mRNAs lead to amino acid substitutions in the protein sequence, whereas modifications in noncoding regions modulate the stability, splicing, and transport of mRNAs.
Inosine in MicroRNAs
MicroRNAs (miRNAs) are short, single-stranded non-coding RNAs that attenuate translation via RNA interference (RNAi). RNAi is the process of posttranscriptional gene silencing through the action of the RNA-induced silencing complex (RISC), which involves the pairing of complementary regions of miRNAs (the seed regions, between positions 2 and 8) with the target transcripts. Interactions with miRNAs mark mRNAs for translational repression or degradation.
Primary miRNA (pri-miRNA) transcripts are first cleaved by the ribonuclease Drosha to produce pre-miRNAs, which are further processed by Dicer to generate mature miRNAs [151,152]. Pri- and pre-miRNAs that form secondary hairpin-like structures are targets for editing by ADARs [153]. Interestingly, ADAR1 forms a complex with Dicer that promotes the processing of miRNAs [154]. More than 130 A-to-I editing sites have been identified in miRNAs [155], and these modifications reduce miRNA function by impairing their ability to form RNA duplexes with target mRNAs.
A-to-I editing in miRNAs was first identified in miR-376, a repressor of phosphoribosyl pyrophosphate synthetase 1 (PRPS1) translation. ADAR-mediated editing of miR-376 RNA clusters perturbs their function, and ADAR2-null mice show increased PRPS1 levels [153]. The extent of editing varies with species and tissue type. Among the miR-376 RNA clusters, 41% of miR-376a1-5p and 92% of miR-368-3p are edited in the human medulla oblongata, whereas in mice, 56% of miR-376c-3p and 54% of miR-376a1-5p are edited in the cortex and kidney, respectively [153]. Adenosines within UAG motifs located in the secondary structures of miRNAs serve as targets for ADARs in a tissue-dependent manner [156].
Translation regulation via A-to-I editing of miRNAs has profound effects on tumor progression and metastasis. A-to-I-edited miR-200b promotes tumor progression, as its ability to repress the ZEB1/ZEB2 transcription factors is modulated [157]. On the other hand, ADAR1-mediated A-to-I editing of miR-376a impairs the translational repression of the glioblastoma tumor suppressor RAP2A. Strikingly, A-to-I editing enables an isoform of miR-376a to target the autocrine motility factor receptor (AMFR) in glioblastoma cells [158]. AMFR, an internalizing surface receptor, is not a target of unedited miR-376a, and its upregulation is correlated with advanced stages of several cancers [159]. Therefore, inosines in the seed regions of miRNAs either attenuate their interaction with target mRNAs or enable them to acquire new target transcripts, with consequences that depend on the function of the target mRNA.
Inosine in Viral RNAs
Adenosines in viral RNAs can be modified to inosines by hosts' deaminases upon infection, a process initially identified in samples of human brains infected by the measles virus [160]. Along with A-to-G transitions (a hallmark of inosine modifications), U-to-C conversions were also enriched in the reverse-transcribed cDNA of viral matrix genes from the same samples [160,161]. ADAR1-mediated A-to-I hyperediting weakens the pathogenesis of lymphocytic choriomeningitis virus (LCMV), resulting in nonfunctional viral glycoproteins [162]. In contrast, inosine hypermodification of viral transcripts represses the immune response by masking the transcripts from Mda5, a cytoplasmic sensor that regulates the synthesis of interferons and other inflammatory proteins [136,163].
Inosine also plays an interesting role in viral hepatitis. Co-infection of hepatitis δ virus (HDV) with hepatitis B virus (HBV) increases the risk of severe liver damage in hepatitis patients [164]. The subviral pathogen HDV encodes only one protein, namely the hepatitis delta antigen (HDAg), in two isoforms. The shorter isoform (HDAg-S) assists in replication, whereas the longer isoform (HDAg-L) inhibits replication and promotes viral assembly [165,166]. An A-to-I editing event in HDAg transcripts converts the amber stop codon UAG to UIG (read as UGG, Trp) in 40% to 60% of transcripts, leading to enrichment of the longer isoform HDAg-L [167]. The predominance of HDAg-L diminishes the virulence of the infection, holding viral titers in check by shifting the balance between replication and viral assembly.
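A toy sketch of this recoding logic follows: translating a hypothetical ORF with the amber codon intact gives the short product, while the edited codon (UIG, decoded as UGG/Trp) extends translation. The sequence and the minimal codon table are invented for the example.

```python
# Toy illustration of UAG -> UIG (read as UGG, Trp) recoding in an HDAg-like ORF.
# The transcript is invented; only the codons it uses are included in the table.

CODONS = {"AUG": "M", "GAU": "D", "UAG": "*", "UGG": "W", "GCU": "A", "UAA": "*"}

def translate(rna: str) -> str:
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODONS[rna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

orf = "AUGGAUUAGGCUUAA"                 # ...UAG... amber stop mid-ORF
edited = orf.replace("UAG", "UGG", 1)   # A-to-I at the stop codon, inosine read as G

print(translate(orf))     # 'MD'   -> short isoform (HDAg-S analog)
print(translate(edited))  # 'MDWA' -> extended isoform (HDAg-L analog)
```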
Inosine in Mobile Elements
Retrotransposons (class I elements or retroposons) are mobile insertion sequences with the capacity to integrate themselves into different parts of the genome via an RNA intermediate. Short interspersed elements (SINEs), roughly 300 bases in length, are one of three major subclasses of retroposons [168] and constitute up to 11% of the human genome, with over 1 million copies of Alu elements typically found in each genome. Alu elements are derived from 7SL RNA and emerged around 65 million years ago during early primate evolution [169]. Alu elements are abundant in UTRs and intronic regions of mRNA, and their genomic re-integration generates new exons and plays a major role in species evolution [170]. Inosine modifications in intronic Alu elements modify splicing sites and generate new exonic sequences. For instance, A-to-I editing led to the inclusion of a primate-specific Alu exon in the human nuclear prelamin-A recognition factor by altering a splicing site in its RNA intermediate [171].
A-to-I editing in Alu elements was first discovered in the early 2000s [114]. More than 100 million Alu RNA-editing sites can be detected in human genes [172]. Integration of an Alu element in the opposite orientation to another Alu element a short distance away results in the formation of a loop-like structure that serves as a substrate for ADARs [173]. The introduction of inosine at these sites disrupts base-pairing patterns and destabilizes the secondary structure, with gene-dependent effects.
For example, the interaction of two inverted Alu elements flanking an exon may result in the formation of circRNAs by a mechanism called backsplicing (Figure 6). The formation of circRNA structures affects the splicing and nuclear export of transcripts. A-to-I editing suppresses the formation of complete circRNAs and promotes the standard splicing and export of the modified mRNAs [174].
RNA editing at Alu elements embedded in the 3′ UTR of dihydrofolate reductase (DHFR) helps the transcript escape miRNA-mediated silencing [175]. On the other hand, ADAR interactions with the Alu site may modify the properties of the transcript. For example, the association of ADAR1 with the inverted Alu elements at the 3′ UTRs of the proto-oncogenes XIAP2 and MDM2 suppresses their apoptotic inhibitory functions [130].
Inosine in Ribosomal RNAs
The presence of inosine in ribosomal RNA (rRNA) is not well documented. 2′-O-methylinosine was first identified in the rRNA of Crithidia fasciculata, but the role of this nucleoside in ribosomal structure and function is unclear [176]. Four decades later, transcriptome analyses of Diplonema papillatum mitochondria identified inosine in the mt-SSU rRNA, where it is proposed to destabilize the rRNA structure [177]; however, the function of these inosines and the identity of the enzymes editing the rRNA remain unclear.
Inosine in DNA
Deoxyinosines are observed in DNA, where they are introduced by various independent mechanisms. On the one hand, nitrosative compounds released by macrophages, or exposure to exogenous agents such as nitrous anhydride, can deaminate adenosines to inosines in DNA [178]. Adenosines in DNA strands that form DNA/RNA hybrids are also edited by ADARs [179]. Alternatively, the nuclear accumulation of dITP due to the loss of functional ITPase can lead to the misincorporation of deoxyinosines into newly synthesized DNA [113,180]. These events can lead to point mutations in DNA, as deoxyinosine preferentially pairs with cytosine rather than thymine [181].
DNA repair mechanisms remove deoxyinosine by base excision repair (BER) or alternative excision repair (AER). In BER, alkyl-adenine DNA glycosylase cleaves the N-glycosidic bond between hypoxanthine and the sugar moiety and releases the modified base from DNA. Then, AP lyase seals this apurinic site with adenine, using information from the complementary strand [182]. In AER, endonuclease V creates a nick by hydrolyzing the second phosphodiester bond in the 3′ direction from deoxyinosine. A 3′-5′ exonuclease then cleaves the nucleotides at the nicked site [183]. The segmental gap created by this excision is filled in by DNA polymerase with the help of the complementary strand [184].
Interestingly, targeted inosine modifications in DNA have significant implications for gene editing. The bacterial tRNA-specific inosine modifier, TadA, can be synthetically fused to catalytically impaired CRISPR-Cas9, which can be programmed to modify selected adenosines to inosines in DNA [185]. Such inosines are recognized as guanosines by polymerases, converting the A·T base pair to G·C. This inosine modification machinery can be exploited in gene therapy to correct disease-causing point mutations.
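A minimal sketch of this base-editing logic follows; the editing-window coordinates and the target sequence are hypothetical placeholders, and real adenine base editors show sequence- and position-dependent efficiencies.

```python
# Toy model of an adenine base editor (TadA-dCas9 style): A -> G within an
# assumed editing window of the protospacer. All sequences are hypothetical.

def base_edit(dna: str, protospacer_start: int, window=(3, 8)) -> str:
    """Convert A to G at window positions (1-based within the protospacer)."""
    lo = protospacer_start + window[0] - 1
    hi = protospacer_start + window[1]
    return dna[:lo] + dna[lo:hi].replace("A", "G") + dna[hi:]

site = "TTGACAGTAGCCTGAAACCT"  # hypothetical locus; protospacer starts at index 4
print(base_edit(site, 4))      # adenosines in the window now read as G

# After replication, each edited A.T pair is fixed as G.C -- the basis for
# correcting disease-causing point mutations by design.
```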
Concluding Remarks
Since its discovery in 1965 in yeast tRNA Ala , inosine has emerged as a universal and widespread component of nucleic acids with a heterogeneous set of functions and activities. Inosine is present in a range of RNA molecules, where it modulates the efficiency and accuracy of translation, as well as several other biological activities. Moreover, inosine is an important intermediary in purine biosynthetic pathways and a secondary metabolite of purine degradation. Because of its biochemical similarity to adenine, molecular inosine plays a number of physiological roles, such as acting as a neuroprotective purine analog during purinergic signaling.
Inosine modifications in anticodons of tRNAs expand their decoding capacity by their multi-base-pairing chemistry and improve the efficiency of translation. In eukaryotes, tRNAs with inosine at position 34, and cognate for several amino acids, compensate for the absence of tRNA isoacceptors with G 34 to decode C-ended codons. The relevance of this function is reflected in the neurological disorders caused in humans by mutations in ADAT, the enzyme catalyzing the I 34 modification.
By mimicking guanosine, and depending on their localization, inosines in mRNAs modulate translation accuracy, splicing, and nuclear export. Defective A-to-I editing in the GluR-B receptor leads to motor neuron death in sporadic amyotrophic lateral sclerosis (ALS), and altered inosine modification levels in the transcripts of serotonin-2C receptors are associated with neuropsychiatric disorders. The physiological importance of inosine, and of the specific proteins whose synthesis is regulated by this modification, turn ADARs (the enzymes responsible for inosine modifications in mRNAs) into promising pharmacological targets.
In miRNAs, A-to-I editing can either impair the ability to repress target translation or expand the repertoire of target transcripts. The ability of inosine modifications to modulate miRNA function in highly proliferating cells, including the silencing of oncogenes and tumor suppressors in various cancers, highlights the potential of adenosine deaminases as targets for chemotherapy. Indeed, suppression of ADAR activity sensitizes tumor cells and virally infected cells to the immune response.
In addition to translation regulation, inosine perturbs immunomodulatory RNA-sensing pathways through the destabilization of the secondary structures of Alu elements. The activity of inosine in retrotransposons possibly contributes to species evolution through its impact upon splicing sites, and the resulting generation of transcripts with alternative exon arrangements.
With advances in RNA-seq and data processing, the landscape of inosine's influence upon genomes, transcriptomes, and proteomes will become clearer, and its impact upon human health will be better understood. As with so many other modified bases, we are only scratching the surface of inosine's physiological significance.
Data Availability Statement:
No new data were generated or analyzed in this study. Data sharing was not applicable. | 2021-04-29T05:22:54.467Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "56a6ae813a0aecc8082a0d99957364b8b9096ab0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/12/4/600/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56a6ae813a0aecc8082a0d99957364b8b9096ab0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4844575 | pes2o/s2orc | v3-fos-license | Photonic band structures of periodic arrays of pores in a metallic host: tight-binding beyond the quasistatic approximation
We have calculated the photonic band structures of metallic inverse opals and of periodic linear chains of spherical pores in a metallic host, below a plasma frequency ω_p. In both cases, we use a tight-binding approximation, assuming a Drude dielectric function for the metallic component, but without making the quasistatic approximation. The tight-binding modes are linear combinations of the single-cavity transverse magnetic (TM) modes. For the inverse-opal structures, the lowest modes are analogous to those constructed from the three degenerate atomic p-states in fcc crystals. For the linear chains, in the limit of small spheres compared to a wavelength, the results are the "inverse" of the dispersion relation for metal spheres in an insulating host, as calculated by Brongersma et al. [Phys. Rev. B 62, R16356 (2000)]. Because the electromagnetic fields of these modes decay exponentially in the metal, there are no radiative losses, in contrast to the case of arrays of metallic spheres in air. We suggest that this tight-binding approach to photonic band structures of such metallic inverse materials may be a useful approach for studying photonic crystals containing metallic components, even beyond the quasistatic approximation.
I. INTRODUCTION
The photonic band structures of composite materials have been studied extensively. Such band structures are defined by the relation between frequency ω and Bloch vector k in media in which the dielectric constant is a periodic function of position. A major reason for such interest is the possibility of producing photonic band gaps, i.e., frequency regions, extending through all of k-space, where electromagnetic waves cannot propagate through the medium. Such media have many potentially valuable applications, including possible use as filters and in films with rejection-wavelength tuning. [1] In systems with a complete photonic band gap, the spontaneous emission of atoms with level splitting within the gap can be strongly suppressed. [2] Since light cannot travel through photonic band gap materials (it is Bragg diffracted backwards), one application is complete control over wasteful spontaneous emission in unwanted directions when a device, such as a laser, is embedded inside a 3D photonic crystal. [3] 2D photonic crystals can be used as optical microcavities, microresonators, [4] waveguides, [5] lasers, [6] or fibers, [7] while 1D photonic crystals can be used as Bragg gratings or optical switches. [8] The photonic band structure of a range of materials has been studied using a plane wave expansion method. Typically, the method converges easily when the dielectric function is everywhere real, but more slowly, or not at all, when the dielectric function has a negative real part, as occurs when one component is metallic. For example, McGurn et al. [9] used this method to calculate the photonic band structure of a square lattice of metal cylinders in two dimensions (2D) and of an fcc lattice of metal spheres embedded in vacuum in 3D. They found that the method converged well when the filling fraction f (i.e., the volume fraction of metal spheres or cylinders) satisfied f ≤ 0.1%.
Kuzmiak et al. [10] used the same method to calculate the photonic band structures of 2D metal cylinders in a square or triangular lattice in vacuum. For low f and ω > ω_p, the calculated photonic band structures are just slightly perturbed versions of the dispersion curves for electromagnetic waves in vacuum. However, for H-polarized waves (magnetic field H parallel to the cylinders), they obtained many nearly flat bands for ω < ω_p; these bands were found to converge very slowly with increasing numbers of plane waves. They later extended this work to systems with dissipation. [11] To describe dispersive and absorptive materials, they used a complex, position-dependent form of the dielectric function. They also introduced a standard linearization technique to solve the resulting nonlinear eigenvalue problem.
Zabel et al. [12] extended the plane wave method to treat periodic composites with anisotropic dielectric functions. In particular, they studied the photonic band structures of a periodic array of anisotropic dielectric spheres embedded in air. They found that the anisotropy split degenerate bands, and narrowed or even closed the band gaps. Much further work on anisotropic photonic materials has been carried out since this paper (see, e.g., Ref. [2]).
A different type of periodic metal-insulator composite is a periodic arrangement of metallic spheres in an insulating host. Brongersma et al. [13] studied the dispersion relation for coupled plasmon modes in such a linear chain of equally spaced metal nanoparticles, using a near-field electromagnetic (EM) interaction between the particles in the dipole limit. They also studied the transport of EM energy around the corners and through tee junctions of the nanoparticle chain-array.
Park and Stroud [14] also studied the surface-plasmon dispersion relations for a chain of metallic nanoparticles in an isotropic medium. They used a generalized tight-binding calculation, including all multipoles. This approach is more exact than the previous point-dipole calculation [13] in the quasistatic limit, but still leaves out non-quasistatic effects associated with radiative damping (i.e., effects associated with the non-vanishing of ∇ × E, where E is the electric field). They calculated the lowest bands as well as many higher bands and compared their results with those in Ref. [13].
Weber and Ford [15] have shown that all calculations within the quasistatic approximation omit important interactions between transverse plasmon waves and free photon modes, even if the interparticle separation is small compared to the wavelength of light. Thus, most quasistatic calculations need to have certain corrections included at particular values of the wave vector.
Recently, Gaillot et al. [16] have studied the photonic band structures of another type of structure, a so-called inverse opal structure. This structure is an fcc lattice of void spheres in a host of another material. Such a structure can be prepared, e.g., starting from an opal structure made of spheres of a convenient substance, infiltrating it with another material, then dissolving away the spheres. In the work of Ref. [16], the photonic band structure of Si inverse opal was calculated as a function of the infiltrated volume fraction f of air voids using three-dimensional finite difference time domain (3D FDTD) method. It was found that for certain values of f , a complete band gap opens up between the eighth and ninth bands.
In the present work, first we study the photonic band structure of an inverse opal structure, such as that investigated in Ref. [16], but instead of dielectric materials such as Si, we consider metals as the infiltrated materials. Thus, the material we study is also the inverse of the fcc array of metal spheres studied by McGurn et al. [9] Such metallic inverse opal structures have recently become of great interest, because it has been found that Pb inverse opals exhibit superconductivity. [17] These workers have studied the response of these materials to an applied magnetic field, and have found a highly non-monotonic fractional flux penetration into the Pb spheres as a function of the applied field.
As a second example, we study the photonic band structure of a linear chain of nanopores in a metallic medium. This is an inverse structure of a linear chain of metallic nanospheres, of which the dispersion relation is given in Ref. [13]. As anticipated, we get a kind of "inverse image" of the dispersion relation found by Ref. [13] in our system.
For both types of structures, our primary method for studying the photonic band structures below the plasma frequency ω p is a tight-binding approximation which is valid even in the non-quasistatic regime. Because the analogs of the tight-binding atomic states decay exponentially in the metallic host medium, the resulting tight-binding waves do not lose energy radiatively, as do the corresponding waves along one-dimensional chains of metallic nanoparticles in air. Furthermore, because the modes are expanded in "atomic" states rather than plane waves, there is no convergence problem as there can be in the plane wave case.
The remainder of this paper is organized as follows. In Section II, we first present the formalism for calculating the transverse magnetic (TM) and transverse electric (TE) modes of a single spherical cavity in a metallic host. We then describe the method for calculating the photonic band structures of metallic inverse opals and of linear chains of nanopores in a metallic host, using a simple tight-binding approach for ω < ω p . In Section III, we give the numerical results for the TM and TE modes of a single cavity and those of the tight-binding method for the metal inverse opals and the linear chain of nanopores. Section IV presents a summary and discussion.
II. FORMALISM
In this section, we present a summary of the equations determining the band structure of a photonic crystal containing a metallic component with Drude dielectric function ε(ω) = 1 − ω_p²/ω² and an insulating component of dielectric constant unity. The insulating component is assumed to be present in the form of identical spherical cavities of radius R. We first write down the equations for the TM and TE modes of a spherical cavity in a Drude metal. Then, we present a tight-binding method for ω < ω_p.
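As a numerical companion to these definitions, the sketch below evaluates the Drude dielectric function and the wavenumbers k = ω/c and k′ = √(ω_p² − ω²)/c that govern propagation in the void and decay in the metal for ω < ω_p; the parameter values are illustrative.

```python
# Drude dielectric function and the wavenumbers used in the text
# (scaled units: frequencies in units of omega_p, c = 1).
import numpy as np

def eps_drude(w, wp=1.0):
    """Drude dielectric function eps(w) = 1 - wp^2 / w^2 (lossless limit)."""
    return 1.0 - (wp / w) ** 2

def wavenumbers(w, wp=1.0, c=1.0):
    """k inside the void and the decay constant k' in the metal for w < wp."""
    k = w / c
    kprime = np.sqrt(wp**2 - w**2) / c   # fields decay as exp(-k' r) in the metal
    return k, kprime

w = 0.5  # a frequency below the plasma frequency
print(eps_drude(w))        # negative: the metal excludes propagating waves
print(wavenumbers(w))
```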
A. Spherical Cavity
As a preliminary to calculating the photonic band structure, we first discuss the modes of a single spherical cavity in a Drude metal host. We begin with the TM modes of the cavity, then the TE modes.
TM Modes
It is convenient to describe the modes of the embedded cavity in terms of the B field. To that end, we combine the two homogeneous Maxwell equations to obtain a single equation for B, ∇ × [(1/ε(x, ω)) ∇ × B] = (ω²/c²) B. Here, we have expressed the position- and frequency-dependent dielectric function ε(x, ω) through 1/ε(x, ω) = θ(x)/(1 − ω_p²/ω²) + 1 − θ(x), where the step function θ(x) = 1 inside the metallic region and θ(x) = 0 elsewhere. Multiplying this equation by ω² − ω_p² and simplifying, we obtain a single wave equation valid in both regions. Thus, inside the spherical void we have ∇²B + (ω²/c²) B = 0, while inside the metal ∇²B + [(ω² − ω_p²)/c²] B = 0. For a spherical void within a metallic host, it is convenient to solve these equations in spherical coordinates. It is readily found that, for the TM modes, the nonvanishing components of B and E are B_φ, E_r, and E_θ [18]; inside the void they are generated by the radial function u_ℓ(r) = A_ℓ r j_ℓ(kr), where k = ω/c, j_ℓ and n_ℓ are spherical Bessel functions, and the subscripts φ, r, and θ denote components of the corresponding fields in spherical coordinates. Likewise, the solutions of Eqs. (6) and (2) within the metal are generated by u_ℓ(r) = C_ℓ r j_ℓ(k′r) + D_ℓ r n_ℓ(k′r), where k′ = √(ω² − ω_p²)/c. The requirements that the normal displacement D and the tangential E be continuous at r = R give the two conditions that u_ℓ(r) and (1/ε) ∂u_ℓ(r)/∂r are continuous there, where R is the radius of the spherical cavity. Since the fields at the center of the void sphere must be finite, only the j_ℓ(kr) solution is retained inside the void, and we normalize the solution so that the coefficient A_ℓ = 1. The coefficients C_ℓ and D_ℓ can then be determined from these boundary conditions, Eqs. (9) and (10).

The results for ω < ω_p can be obtained by making the substitution k′ → ik′, with k′ real. In this case, the radial component of Eq. (6) takes the form of the modified Bessel equation, Eq. (14), whose solutions are the modified spherical Bessel functions i_ℓ and k_ℓ (note that this k_ℓ is different from the wave vectors k and k′). The radial function within the metal then becomes u_ℓ(r) = C_ℓ r i_ℓ(k′r) + D_ℓ r k_ℓ(k′r), where now k′ = √(ω_p² − ω²)/c, and Eqs. (10), (12), and (13) are transformed accordingly.

It is of interest to consider the specific case of a spherical cavity in an infinite medium. In this case, C_ℓ = 0 because i_ℓ(x) diverges at large x. Continuity of u_ℓ at R then gives, from Eq. (20), D_ℓ = j_ℓ(kR)/k_ℓ(k′R), and Eq. (21) becomes the eigenvalue condition [Eq. (22)]

ε(ω) {d[r j_ℓ(kr)]/dr}|_{r=R} / j_ℓ(kR) = {d[r k_ℓ(k′r)]/dr}|_{r=R} / k_ℓ(k′R).

We can readily obtain the asymptotic forms of the solutions when kR ≪ 1 and k′R ≪ 1, using j_ℓ(x) ≈ x^ℓ/(2ℓ + 1)!! and k_ℓ(x) ∝ x^−(ℓ+1). In this limit, Eq. (22), after some algebra, reduces to simply ε(ω) = −ℓ/(ℓ + 1), i.e., ω_ℓ = ω_p √[(ℓ + 1)/(2ℓ + 1)]. Since ℓ ≥ 1, the largest value, ω = √(2/3) ω_p, occurs at ℓ = 1, and the limiting value for large ℓ is ω = ω_p/√2.
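The quasistatic limit derived above is easy to tabulate; the short sketch below evaluates ω_ℓ = ω_p √[(ℓ + 1)/(2ℓ + 1)] for the lowest TM void modes.

```python
# TM void-mode frequencies in the quasistatic limit (kR << 1, k'R << 1):
# eps(w) = -l/(l+1) with the Drude form gives w_l = wp*sqrt((l+1)/(2l+1)).
import math

def w_cavity(l, wp=1.0):
    return wp * math.sqrt((l + 1) / (2 * l + 1))

for l in range(1, 6):
    print(l, round(w_cavity(l), 4))
# l = 1 gives sqrt(2/3) ~ 0.8165*wp; as l -> infinity, w_l -> wp/sqrt(2) ~ 0.7071*wp
```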
TE Modes
For the TE mode, inside the spherical void we again have ∇²B + (ω²/c²) B = 0, whereas inside the metal we have ∇²B + [(ω² − ω_p²)/c²] B = 0. We now use these equations to calculate the nonvanishing components E_φ, B_r, and B_θ. From Eq. (25), the solution inside the void is generated by a radial function proportional to r j_ℓ(kr), where k = ω/c; the solutions of Eq. (26) inside the metal are generated by a combination of r j_ℓ(k′r) and r n_ℓ(k′r), where k′ = √(ω² − ω_p²)/c.
Since normal B and tangential H should be continuous on the boundaries, we obtain the continuity of the radial function at r = R, as in the TM case, together with the continuity of its radial derivative, Eq. (30); note that, unlike the TM case, no factor of 1/ε enters here. Since the fields must be finite at the center of the void sphere, only the j_ℓ(kr) solution is retained inside, and we again take its coefficient to be unity; Eq. (30) then fixes the remaining coefficients. The corresponding equations for ω < ω_p can again be obtained by the transformation k′ → ik′. The TE modes for ω < ω_p, written using the modified spherical Bessel functions i_ℓ(x) and k_ℓ(x), satisfy Eqs. (14), (15), (30), and (17); the only change is in Eq. (18). In an infinite medium, these conditions reduce to the single eigenvalue condition, Eq. (35). If we consider the asymptotic forms of the solutions when kR ≪ 1 and k′R ≪ 1, as we did for the TM modes, Eq. (35) simplifies to 2ℓ + 1 = 0, which gives ℓ = −1/2. Since ℓ must be a positive integer, we see that there are no eigenvalues for TE modes in the limit kR ≪ 1 and k′R ≪ 1.
B. Tight-Binding Approach to Modes for ω < ωp
We now turn from describing the single-cavity modes to a discussion of the band structure for a periodic array of such cavities. In conventional periodic solids, the tight-binding method is very useful in treating narrow bands.
In what follows, we suggest an analogous tight-binding approach for the lowest set of TM modes in a periodic lattice of spherical cavities in a metallic host, in the frequency range ω < ω_p. We apply the resulting method first to an fcc lattice of pores, and then to a linear chain of spherical pores in a metallic host.
Even though these are TM modes, it is convenient to describe them now in terms of their electric fields. We denote the electric field of the λth mode by E_λ(x). This field satisfies O E_λ(x) = (ω_λ²/c²) E_λ(x), Eq. (37), where O is the "Hamiltonian" of this system. Since O is a Hermitian operator, the eigenstates corresponding to unequal eigenvalues ω_λ²/c² and ω_μ²/c² are orthogonal and may be chosen to be orthonormal. (The orthogonality may also be proved directly by integration by parts.) The orthonormality relation is ∫ E_λ*(x) · E_μ(x) d³x = δ_λμ, Eq. (38). Since E_λ(x) is real for ω < ω_p, the complex conjugation is, in fact, unnecessary. Section II A 1 gives the equations determining the electric and magnetic fields of the isolated TM modes of a spherical cavity. The lowest set corresponds to ℓ = 1, and there are three of these. For a spherical cavity, all three are degenerate, i.e., all three have the same eigenfrequencies. Even though the three modes have equal frequencies, one can always choose an orthonormal set, with electric fields E_1, E_2, and E_3 satisfying the orthonormality relation in Eq. (38).
In order to obtain the tight-binding band structure built from these three modes, we need to calculate matrix elements of the form

M_{α,β}(R) = ∫ E_α*(x) O E_β(x − R) d³x,  (39)

corresponding to two single-cavity modes associated with different cavities centered at the origin and at R. Here, O is the "Hamiltonian" of the system as defined implicitly in Eq. (37). Next, we introduce normalized Bloch states associated with the three ℓ = 1 single-cavity modes. In order to do this, we first make the usual tight-binding assumption that the "atomic" states corresponding to different cavities are orthogonal:

∫ E_α*(x) · E_β(x − R) d³x = δ_αβ δ_{R,0}.  (40)

This orthogonality of states on different cavities is reasonable since the fields fall off exponentially with separation. The orthonormal Bloch states then take the form

E_{kα}(x) = (1/√N) Σ_R e^{ik·R} E_α(x − R),  (41)

where k is a Bloch vector and the R's are the Bravais lattice vectors. In writing Eq. (41), we have assumed that there are N identical spherical cavities, and that the Bloch states satisfy the usual periodic boundary conditions of Born-von Karman type. We also introduce the elements of the "Hamiltonian" matrix

M_{α,β}(k) = Σ_R e^{ik·R} M_{α,β}(R).  (42)

We can then obtain the frequencies ω(k) by diagonalizing this 3 × 3 matrix: ω²(k)/c² = ω_at²/c² + m_i(k), where the m_i(k) (i = 1, 2, 3) are the eigenvalues of M_{α,β}(k) and ω_at is the eigenvalue of a single-cavity mode. The solutions to these equations give the three p-bands for a periodic lattice of cavities in a metallic host. This procedure is analogous to the well-known procedure for obtaining tight-binding bands from three degenerate p-bands in the electronic structure of conventional solids (see, for example, Ref. [19]). We briefly comment on the connection between this approach and that used by earlier workers. [13,14] In that work, the authors treat wave propagation along a chain of metallic nanoparticles. They use the tight-binding approximation, as we do, but in the quasistatic approximation, in which one assumes that ∇ × E = 0. This approximation is reasonable when both the particle radii and the interparticle separations are small compared to a wavelength, but is not accurate in other circumstances. Furthermore, even in the small-particle and small-separation regime, this approximation still fails to account for the radiation which occurs at certain wave numbers and frequencies. The present approach generalizes this tight-binding method to (a) three dimensions as well as one; (b) pore modes instead of small-particle modes; and, most importantly, (c) larger pores and larger interparticle separations, via extension beyond the quasistatic approximation.
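To make the diagonalization step concrete, the sketch below assembles M_{α,β}(k) for the twelve fcc nearest neighbors and extracts the three p-band frequencies. The radial and angular dependence of M_{α,β}(R) is a hypothetical stand-in (exponential decay with a p-orbital-like anisotropy), not the integral of Eq. (49); only the Bloch-sum bookkeeping of Eqs. (41) and (42) is being illustrated.

```python
# Sketch of the 3x3 tight-binding diagonalization over fcc nearest neighbors.
# M_ab(R) is a hypothetical stand-in; only the bookkeeping is from the text.
import numpy as np

def M_atomic(R, kprime=2.0, m0=1e-4):
    """Hypothetical overlap matrix element M_ab(R) between p-like modes."""
    r = np.linalg.norm(R)
    n = R / r
    return m0 * np.exp(-kprime * r) * (3.0 * np.outer(n, n) - 0.5 * np.eye(3))

def bands(k, neighbors, w_at=0.1296):
    """w(k), in units of 2*pi*c/d, from w(k)^2 = w_at^2 + eigenvalues of M(k)."""
    Mk = sum(np.exp(1j * np.dot(k, R)) * M_atomic(R) for R in neighbors)
    m = np.linalg.eigvalsh((Mk + Mk.conj().T) / 2.0)  # enforce Hermiticity
    return np.sqrt(w_at**2 + m)

# 12 nearest neighbors of the fcc lattice, cube edge d = 1
nn = [0.5 * np.array(v) for v in
      [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0),
       (1, 0, 1), (1, 0, -1), (-1, 0, 1), (-1, 0, -1),
       (0, 1, 1), (0, 1, -1), (0, -1, 1), (0, -1, -1)]]

for k in (np.zeros(3), 2 * np.pi * np.array([1.0, 0.0, 0.0])):  # Gamma and X
    print(np.round(bands(k, nn), 5))
```

Even with this crude M_{α,β}(R), the three bands come out degenerate at Γ and two remain degenerate along Γ-X, which are the symmetry features noted in the numerical results below.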
Next, we discuss the numerical evaluation of the required matrix elements, Eq. (39). The relevant electric fields are given in this paper, but in spherical coordinates; it is not difficult to convert these into Cartesian coordinates. The operator O is just a little trickier. We first separate out the single-cavity operator O_R for the cavity centered at R; since E_β is an eigenstate of O_R with eigenvalue ω_at²/c², and since we are assuming that the overlap integral between "atomic" electric field states centered on different sites vanishes, the term involving O_R does not contribute to the matrix element M_{α,β}, which is therefore given by the remaining, perturbing part of O alone, Eq. (45). We can also write this perturbation as a sum over cavities, Eq. (46), in terms of a step function which is unity inside the cavity centered at R′ and zero otherwise. A reasonable approximation to Eq. (46) is to include just R′ = 0; in this case we finally obtain Eq. (48), where the integral runs just over the cavity centered at the origin. As a further approximation, we can replace E_β(x − R) by the value of this function at the origin of that integral, i.e., E_β(−R). This field can then be taken outside the integral, and we are left with Eq. (49), where once again the integral runs over the cavity centered at the origin.

Next, we calculate the quantities needed to evaluate this matrix element. In order to use the tight-binding approach we must normalize the individual eigenstates E_α, so we begin by obtaining this normalization. For ℓ = 1, the u_ℓ(r)'s are r times spherical Bessel functions. Inside the cavity, we write the field components using P_1(cos θ) = cos θ and P_1^1(cos θ) = −sin θ, introducing a normalization constant C_1, to be determined below [Eqs. (50) and (51)]; for r > R, the corresponding expressions, Eqs. (52) and (53), involve the modified spherical Bessel function k_1 and a coefficient D_1. We will need the integrals of the Cartesian components of the field over the volume of the sphere centered at the origin. Let us assume we are considering the z mode, i.e., the one for which θ refers to the angle from the z axis. Then the symmetry of the problem shows that only the z component of the electric field has a nonzero integral. Also,

E_{z,in}(r, θ) = E_{r,in} cos θ − E_{θ,in} sin θ.  (54)

Thus, after a little algebra, we find the integral of this field over the volume of the cavity, Eq. (55). Next, we work out the coefficient D_1. It is determined by the boundary conditions at r = R, namely that D_r and E_θ be continuous there. These two conditions determine not only the value of D_1 but also the allowed frequency, which is given by Eq. (22). Finally, we need the normalization constant C_1. We choose it so that the integral of the square of the electric field for a single-cavity mode is normalized to unity, Eqs. (57)-(60). Therefore, we can now write out an explicit expression for the matrix element M_{α,β}(R) given in Eq. (49). For the αth mode, the integral of E_α over the volume of a cavity is a vector in the αth direction. To evaluate Eq. (49), we need the component of the αth mode in the βth direction at a position R. Let us first consider the z mode (α = z). We can use Eqs. (52) and (53), together with the relation cos θ = z/r and the analogous expressions for the other direction cosines, to rewrite this field in Cartesian coordinates. We just substitute these expressions back into Eqs. (52) and (53) to get the Cartesian components of the field for a mode parallel to the z axis. For the mode parallel to the x axis, we just permute the coordinates cyclically: z → x, x → y, and y → z.
Similarly, for the y modes, we make the permutation (x, y, z) → (z, x, y).
Using these results, we should be able to compute all the elements in the tight-binding matrix and hence obtain the band structure for the photonic p-bands in the tight-binding approximation, in either one or three dimensions.
III. NUMERICAL RESULTS
For the inverse opals we arbitrarily assume a lattice constant d = 500√2 nm and a void sphere radius R = 150 nm, as in Fig. 1(a). This choice is the same as that of Ref. [17], where the Pb inverse opal has this lattice constant. Since the volume of the primitive unit cell is v_c = d³/4, this corresponds to a void volume fraction f = 0.160. For the linear chain of nanopores (see below), this d is the separation between two nanopores and R is the radius of a nanopore, as in Fig. 1(b).
The metallic dielectric functions we assume for the inverse opals and the linear chain of nanopores are of the usual Drude form, ε(ω) = 1 − ω_p²/ω², where ω_p is the plasma frequency of the conduction electrons: ε(ω) < 0 when ω < ω_p, while ε(ω) > 0 when ω > ω_p. Our calculations are thus carried out assuming that the Drude relaxation time τ → ∞. For a metal in its normal state, ω_p² = 4πne²/m, where n is the conduction electron density and m is the electron mass. Note that with this choice of dielectric function, the entire band structure can be expressed in scaled form. That is, the scaled frequency ωd/c is a function only of the scaled wave vector kd, and the band structures are parameterized by the two constants ω_p d/c and f for the case of inverse opals.
Since we are considering void spheres in inverse opals and linear chains of nanopores, it is of interest to consider electromagnetic wave modes in a single cavity, which can be viewed as a single "atom" of the void lattice. We show only results for ω < ω_p, since these are the results most relevant to possible narrow-band photonic states in the inverse opal structure. Our results for ω < ω_p for an isolated spherical cavity in an infinite medium, and in the limit kR ≪ 1 and k′R ≪ 1, are given in Table I. These two inequalities are reasonable for our inverse-opal system parameters, d = 500√2 nm, R = 150 nm, and ω_p d/c = 1. The (modified) spherical Bessel functions in Eq. (22) lie extremely close to the ω axis for ℓ > 5, so it is difficult to obtain eigenfrequencies for ℓ > 5 for the isolated spherical cavity; however, the eigenfrequencies continue to exist even for ℓ > 5 when kR ≪ 1 and k′R ≪ 1. The solutions to Eq. (35) do not exist for ω < ω_p with ω_p d/c = 1, consistent with the absence of TE-mode eigenvalues for ω < ω_p when kR ≪ 1 and k′R ≪ 1.
Assuming ω_p d/c = 1 and using ω_at d/(2πc) = 0.1296 for ℓ = 1 in an infinite medium, we obtain the tight-binding results in Fig. 2. This figure shows three separate bands in the X-U-L region and the X-W-K region, as expected for p-bands. The bandwidth is relatively small, since M_{α,β}(R)d² ∼ 0.001, consistent with the general relation between the bandwidth and the overlap integral. [19] All three bands are degenerate at k = 0 (the Γ point). In addition, there is a double degeneracy when k is directed along either a cube axis (Γ-X) or a cube body diagonal (Γ-L), the higher (concave upward) bands being degenerate in both cases. The lower two bands have a band gap at the U point, and these bands cross at the W point.
Next, we turn to the band structure of a periodic linear chain of spherical nanopores in a Drude metal host. For this linear chain, the Bravais lattice vectors are R = d(0, 0, ±n), where n labels the nth nearest neighbor, d is the separation between two nanopores, and we assume that the chain is directed along the z axis. We can calculate the tight-binding band structure including as many sets of neighbors ±n as we wish. To compare our results with those in Ref. [13], we first use their parameters, R = 25 nm and d = 75 nm, together with their overlap parameter ω_1 = 1.4 × 10^15 rad/s. These combine to give ω_p d/c = 0.35. The "atomic" frequency is found by solving Eq. (22) and gives ω_at d/(2πc) = 0.0454 for ℓ = 1 in an infinite medium. Our resulting tight-binding dispersion relations are shown in Fig. 3 with only nearest neighbors included. Note that our frequencies are given in units of 2πc/d, while the results of Ref. [13] are not scaled. Our results are exactly the inverse images of theirs; that is, we would get their curves (to within a constant of proportionality) if we reflected our curves through the horizontal line of the atomic level. The transverse (T) branches are twofold degenerate, as are theirs, while the longitudinal (L) branch is nondegenerate. Our eigenfrequency for a single cavity, ω_at, corresponds to their resonance frequency ω_0. As we increase the number of nearest neighbors (nn's) included, the separation between the L and T branches increases at the zone center but decreases at the zone boundary, as shown in Fig. 4; the same trend is seen in Fig. 1 of Ref. [13]. The sum also converges quickly, so there is little difference between the dispersion relation including through the next-nearest neighbors and that including through the 5th nearest neighbors.
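A minimal sketch of such a chain calculation follows, using ω²(k) = ω_at² + Σ_n 2M_n cos(nkd) with assumed exponentially decaying overlaps of opposite sign for the L and T branches; the amplitudes and decay constant are placeholders, not values computed from Eq. (49).

```python
# Sketch of L and T bands for a 1D chain of pores, from
# w(k)^2 = w_at^2 + sum_n 2*M_n*cos(n*k*d), with hypothetical overlaps M_n.
import numpy as np

def chain_bands(kd, w_at=0.0454, m1=2e-4, kprime_d=0.35, nmax=5):
    """Return (w_L, w_T) in units of 2*pi*c/d for scaled wavevector kd."""
    wL2, wT2 = w_at**2, w_at**2
    for n in range(1, nmax + 1):
        decay = np.exp(-kprime_d * (n - 1))            # assumed neighbor falloff
        wL2 += 2 * (+2 * m1) * decay * np.cos(n * kd)  # L: sign/weight assumed
        wT2 += 2 * (-1 * m1) * decay * np.cos(n * kd)  # T: opposite sign assumed
    return np.sqrt(wL2), np.sqrt(wT2)

for kd in np.linspace(0, np.pi, 5):
    wL, wT = chain_bands(kd)
    print(f"kd = {kd:4.2f}  w_L = {wL:.4f}  w_T = {wT:.4f}")
# With opposite-sign L and T overlaps and nearest neighbors only, the two
# branches cross near kd = pi/2, as in the figures discussed below.
```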
We have carried out similar calculations using other values of the parameter ω_p d/c, namely 1.0, 2.0, and 5.0. Such calculations are possible here because our calculations are non-quasistatic, so that the overlap integral between neighboring spheres falls off exponentially with separation. The results are given in Figs. 5, 7, and 9, respectively. The corresponding results including more overlap integrals are shown in Figs. 6, 8, and 10, respectively. It is also striking that, as ω_p d/c increases in going from Fig. 3 to Figs. 5, 7, and 9, the ratio r_LT of the width of the L band to that of the T band steadily decreases. In Fig. 3, r_LT > 1; in Fig. 9, r_LT < 1; while in Fig. 7 (for which ω_p d/c = 2.0), r_LT ∼ 1.
One could also say that, except for an overall scale factor, Fig. 9 looks like an inverted image of Fig. 3 about the horizontal line of ω_at. The dispersion relations for the intermediate value ω_p d/c = 2.0 have nearly perfect symmetry about the horizontal line of ω_at (the T branches are nearly reflections of the L branch about this line), as in Fig. 7. For the nn case, the T and L bands cross at ±π/(2d), as can be seen in Figs. 3, 5, 7, and 9. When further neighbors are included, they cross at values smaller than |π/(2d)|, as can be seen in Figs. 6, 8, and 10, but the crossing points get closer to ±π/(2d) as ω_p increases. Also, the effects of including further neighbors become smaller as ω_p increases; they are smallest at ω_p d/c = 5.0, as can be seen in Fig. 10. Next we consider values of R/d other than 1/3, while keeping the same value of ω_1 = 1.4 × 10^15 rad/s (i.e., ω_p d/c = 0.35). For a smaller R/d = 0.25, the variation of the band energies with k becomes smaller, as seen in Fig. 11, than it is in Fig. 3, but the crossing points between the L and T branches still occur at ±π/(2d). This behavior becomes clearer when the results for several values of R/d are plotted together, as in Fig. 12. As R/d increases, the variation of the band energies with k, and the separation between the L and T branches at both the zone center and the zone boundary, increase, but the L and T branches still cross at ±π/(2d). If we include more neighbors, up to fifth nearest neighbors, but consider only up to R/d = 0.4, we get the dispersion relations shown in Fig. 13. These show the same trends as in Fig. 12, except that the band crossing points occur at values of |k| slightly less than |π/(2d)|. Furthermore, the separation between the L and T bands increases slightly at k = 0, but decreases slightly at k = ±π/d.
IV. DISCUSSION
In this work we have calculated the photonic band structures of metal inverse opals and of a linear chain of spherical voids in a metallic host for frequencies below ω_p, using a tight-binding approximation with ℓ = 1. In both cases, we include only the ℓ = 1 "atomic" states of the voids. As a possible point of comparison, we have also computed the same band structures using the asymptotic forms of the spherical and modified spherical Bessel functions for small void radius; in this asymptotic region, there are only TM modes. The results for the linear chain of voids can be considered as the "inversions" of those in Ref. [13], in the sense discussed earlier: if we reflect our L and T branches with respect to the atomic energy level, we obtain their L and T modes. Although we did not discuss this approach, we did attempt to use the plane wave expansion method to calculate the band structure for the inverse opals, similarly to Refs. [9] and [10]. Just as found in those papers, the photonic bands for modes below ω_p depend on the number of plane waves included in the expansion and on the type of field, B or E, used in the expansion. Furthermore, this plane wave expansion method gives a large number of flat bands below ω_p, which are difficult to interpret physically. Because of this problem, and because of the apparent non-convergence of this approach with the number of plane waves, we do not present these results here. By contrast, the band structures above ω_p, when calculated using the plane wave expansion, varied smoothly with k and converged well with the number of plane waves included.
In calculating the tight-binding band structure for the linear chain (and the inverse opal structure), one should, in principle, include all the neighbors. But in practice, for the linear chain, it is sufficient to include only up to the fifth nearest-neighbors. This calculation is easily carried out, since the matrix in Eq. (39) is already diagonal in x, y, z and the sum converges quickly. In fact, even the inclusion of neighbors beyond the first two sets changes the band structure very little. The smallness of the further neighbor effect is particularly apparent when ω p d/c = 5.0, as in Fig. 10.
Although we studied the 3D and 1D lattices, we have not investigated a 2D lattice of spherical pores in a metallic host. For the 2D case, the matrix element M α,β (R) can again be readily calculated using our tight-binding approximation. It is expected to have some nonzero offdiagonal elements in addition to the diagonal elements. Thus band structures somewhat similar to the 3D band structures shown in Fig. 2 are also expected in the 2D case.
In the quasistatic case, for metal grains in air, when R/d is greater than about 0.4, it becomes important to include more than just ℓ = 1 as in Ref. [14]. Inclusion of such higher ℓ's might be rather difficult in the present dynamical case, though it would be possible in the quasistatic limit for 1D chains of spherical nanopores.
In the present work, we have considered only the case of one cavity per primitive cell and ℓ = 1. It would be of great interest to consider multiple cavities per unit cell. Of course, in this case, the dimension of the matrix in Eq. (39) will increase and the Bloch states in Eq. (41) will acquire an additional index. It should be straightforward to extend the present work to such a case, which would make an interesting subject for future work.
The surface effect of the metal nanopores on the eigenstates becomes prominent as the radius of a pore (R) or the ratio of radius to center-to-center separation (R/d) increases. Shockley surface states form on metal surfaces depending on metal-specific parameters such as the Fermi wavevector k_F and the Fermi energy ε_F. These surface states will interact with the eigenstates, resulting in a change of the eigenvalues. However, we think these effects are negligible, especially in the quasistatic limit, since in the asymptotic limit these oscillatory interactions take the form [20]

E^asym_pair(a) ∝ ε_F sin(2 k_F a + 2Θ)/(k_F a)²,

where a is the interatomic separation, Θ is the effective interaction phase shift, k_F the Fermi wavevector of the isotropic surface state, and ε_F the Fermi energy. The proportionality constant reflects the consequences of scattering into bulk states. [20] The very slow a⁻² decay of these interactions allows them to play a role at large separations, but the overall magnitude is small due to the sinusoidal term.
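For orientation, the sketch below evaluates this oscillatory pair interaction; the prefactor, phase shift, and Fermi parameters are arbitrary illustrative values.

```python
# Asymptotic surface-state-mediated pair interaction,
# ~ eF * sin(2*kF*a + 2*theta) / (kF*a)^2. All parameter values are placeholders.
import numpy as np

def e_pair(a, kF=1.0, theta=0.3, eF=1.0, prefactor=0.1):
    """Oscillatory pair energy (arbitrary units) at separation a."""
    x = kF * a
    return prefactor * eF * np.sin(2 * x + 2 * theta) / x**2

a = np.linspace(2, 20, 7)
print(np.round(e_pair(a), 5))
# The envelope decays only as a**-2, but the sine keeps the magnitude small
# at any given separation, consistent with the discussion above.
```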
In summary, we have described a tight-binding method for calculating the photonic band structure of a periodic composite of spherical pores in a metallic host, and have applied it to both 1D and 3D systems. The method is fully dynamical, and is not limited to very small pores. The method does not have the convergence problems found when the magnetic or electric field is expanded in plane waves. Furthermore, there are no radiation losses to consider, unlike the complementary case of small metal particles in an insulating host, because the fields associated with these modes outside the pores are exponentially decaying. Thus, this method may be useful for a variety of periodic metal-insulator composites. It would be of interest to compare these calculations to experiments on such materials. | 2011-11-10T12:29:20.000Z | 2011-11-10T00:00:00.000 | {
"year": 2013,
"sha1": "a743a172cb6ae979d00932eb8811261e9ccab5e4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.21.019834",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a743a172cb6ae979d00932eb8811261e9ccab5e4",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science",
"Medicine"
]
} |
13758277 | pes2o/s2orc | v3-fos-license | Human Meibum Age, Lipid–Lipid Interactions and Lipid Saturation in Meibum from Infants
Tear stability decreases with increasing age and the same signs of instability are exacerbated with dry eye. Meibum lipid compositional changes with age provide insights into the biomolecules responsible for tear film instability. Meibum was collected from 69 normal donors ranging in age from 0.6 to 68 years of age. Infrared spectroscopy was used to measure meibum lipid phase transition parameters. Nuclear magnetic resonance spectroscopy was used to measure lipid saturation. Increasing human meibum lipid hydrocarbon chain unsaturation with age was related to a decrease in hydrocarbon chain order, cooperativity, and in the phase transition temperature. The change in these parameters was most dramatic between 1 and 20 years of age. Meibum was catalytically saturated to determine the effect of saturation on meibum lipid phase transition parameters. Hydrocarbon chain saturation was directly related to lipid order, phase transition temperature, cooperativity, changes in enthalpy and entropy, and could account for the changes in the lipid phase transition parameters observed with age. Unsaturation could contribute to decreased tear film stability with age.
Introduction
Tear lipids, mostly from the Meibomian gland with a minor amount from sebaceous glands [1], may be important for tear stability [2-4]. Changes in tear film lipid composition with age could give us insights into lipid compositional-functional relationships in dry eye. For instance, the signs of dry eye, such as decreased break-up time and increased blink rate, are exacerbations of the same signs observed with aging [1-15]. The spontaneous blink rate of adults is as much as 20 times per minute, much higher than that of infants, who blink less than once a minute [15]. The spontaneous blink rate is related to the tear break-up time, which is as high as 35 s in infants, decreases to 8-16 s in adults, and is even lower (about 5 s) in adults with Meibomian gland dysfunction [16-22].
This project is an extension of previous nuclear magnetic resonance (NMR) [23,24] and Fourier transform infrared (FTIR) [1,24-28] spectral studies relating age to meibum composition, structure, and function. In the current study, using deuterated chloroform as a solvent rather than the deuterated cyclohexane used previously [23,24], the double-bond resonance assigned to cholesterol was resolved and quantified separately from the double-bond resonance associated with hydrocarbons using a 700 MHz NMR spectrometer, which is more powerful than the 500 MHz NMR spectrometer used previously [23,24]. Furthermore, catalytic hydrogenation was used to examine the relationship between hydrocarbon chain order and the level of saturation. This was an improvement over the previous study, in which only native meibum was compared with meibum that was 100% saturated [25], a level that is not physiological. This study provides insights into how the increase in meibum lipid unsaturation can be related to the observed decrease in tear film stability with age.
1H-NMR Spectroscopy
Average proton NMR (1H-NMR) spectra of human meibum, acquired with a 700 MHz spectrometer, were typical of meibum (Figure 1). Band assignments were made based on previous 1H and 13C NMR studies [23,29]. The largest resonance in this region was observed at 5.32 ppm, with a shoulder at 5.35 ppm, assigned to the protons of the cis =CH moieties of hydrocarbon chains and to the proton attached to carbon #6 of cholesteryl esters, respectively. The resonance at 4.6 ppm is from cholesteryl esters and the resonance at 4.0 ppm is from wax esters (Figure 1). The resonances near 5.1 ppm have been assigned to squalene [29]. The total level of double bonds from the cholesteryl and wax =CH resonances (5.32 and 5.35 ppm), relative to the sum of the wax (4.0 ppm) and cholesteryl ester (4.6 ppm) resonances, increased significantly (p = 0.03), from 1.0 ± 0.1 in infants to 1.4 ± 0.1 in children. Unsaturation of human meibum increased with age, and the relative level of hydrocarbon cis =CH unsaturation in infants was significantly lower (p < 0.0001) compared with that of adults (Figure 2a). Samples above 20 years of age were grouped together based on the developmental Tanner stage and changes in blink rate, free fatty acids, and meibum lipid phase transition parameters (see Discussion). [Figure 2 legend, in part: data from Reference [27]; (■) Reference [1]; (▲) this study; (▼) Reference [26]. (▬) Curve fit to the data using the hyperbolic decay equation f = y0 + (a × b)/(b + x). All donors were normal and did not have signs or symptoms of dry eye. Data are average ± the standard error of the mean.]
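The quantification above amounts to a simple ratio of integrals; a sketch with illustrative integral values reproducing the reported indices is given below.

```python
# Unsaturation index from 1H-NMR integrals: olefinic =CH resonances
# (5.32 + 5.35 ppm) relative to the ester markers (4.0 ppm wax,
# 4.6 ppm cholesteryl ester). The integral values are illustrative.

def unsaturation_index(i_532, i_535, i_40, i_46):
    return (i_532 + i_535) / (i_40 + i_46)

infant = unsaturation_index(0.62, 0.38, 0.55, 0.45)   # ~1.0, as reported
child = unsaturation_index(0.90, 0.50, 0.55, 0.45)    # ~1.4, as reported
print(round(infant, 2), round(child, 2))
```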
Infrared Spectroscopy
Infrared spectroscopy was used to study lipid-lipid interactions and composition. The CH2 stretching and bending bands are predominant in the infrared spectra of lipids because of the large number of CH2 groups in their hydrocarbon chains. The CH stretching region of meibum is composed of five major bands (Figure 3) [26]. Note that the catalytically hydrogenated sample has no =CH stretching band (Figure 3b). In this study, we used the frequency of the symmetric CH2 stretching band near 2850 cm⁻¹ (ṽ_sym) to estimate the trans-to-gauche rotamer content of the hydrocarbon chains. The ṽ_sym increased with an increase in temperature and in the number of gauche rotamers, concurrent with a decrease in intensity (Figure 4) [26,30]. The peak height of the CH2 symmetric stretching band at 9 °C was approximately 0.23 absorbance units. The absolute intensity of the CH stretching region decreased by about 20% with an increase in temperature from 9 to 65 °C, which was attributed partially to a 50% decrease in the CH2 symmetric stretching band [30]. A sigmoidal equation was used to fit and quantify the lipid phase transitions [26]. Two of the fitted parameters, the minimum and maximum ṽ_sym, correspond to the most ordered and most disordered states of the hydrocarbon chains, respectively. Another fitted parameter was the phase transition temperature, the temperature at which half of the lipid molecules have undergone the phase change. The fourth fitted parameter was the relative cooperativity of the phase transition, which describes how the order of a lipid influences that of neighboring lipids; broad phase transitions have a relatively smaller absolute value of the cooperativity. Lipid phase transition parameters for the pool of human meibum used in the saturation study are listed in Table 1. Lipid order was measured close to the surface temperature of the human eye, 33.4 °C, by extrapolating ṽ_sym at 33.4 °C from the fit of the phase transition and then converting ṽ_sym to the percentage of trans rotamers [26]. The lipid order measured in this study, 31 ± 2% trans rotamers, reinforced the correlation between a decrease in lipid order and increasing age (Figure 2b; r = 0.963, p < 0.01).
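This fitting-and-conversion pipeline can be sketched as follows; the sigmoid below is an assumed stand-in for Equation (1) of the Methods, the data points are synthetic, and the percent trans conversion interpolates linearly between the ordered (2848.00 cm⁻¹) and disordered (2855.36 cm⁻¹) limits quoted later in this section.

```python
# Fit a sigmoid to the CH2 symmetric stretch frequency vs temperature and
# convert the order at 33.4 degC to % trans rotamers. Synthetic data;
# the sigmoid form is an assumed stand-in for Equation (1).
import numpy as np
from scipy.optimize import curve_fit

V_MIN, V_MAX = 2848.00, 2855.36  # fully ordered / disordered limits (cm^-1)

def sigmoid(T, v_lo, v_hi, Tm, coop):
    """Hill-like phase transition: v_sym rises from v_lo to v_hi around Tm."""
    return v_lo + (v_hi - v_lo) / (1.0 + np.exp(coop * (Tm - T)))

def pct_trans(v_sym):
    return 100.0 * (V_MAX - v_sym) / (V_MAX - V_MIN)

np.random.seed(0)
T = np.linspace(9, 65, 15)
v_obs = sigmoid(T, 2850.2, 2854.3, 33.0, 0.25) + np.random.normal(0, 0.02, T.size)
p, _ = curve_fit(sigmoid, T, v_obs, p0=[2850, 2854, 35, 0.2])
print("Tm =", round(p[2], 1), "degC; order at 33.4 degC =",
      round(pct_trans(sigmoid(33.4, *p)), 1), "% trans")
```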
Meibum lipid was catalytically saturated and the lipid phase transition parameters were measured. Lipid order at 33.4 °C increased significantly (p < 0.0001), from 39 ± 3% to 82 ± 1%, between 0% and 25% saturation (Figure 5a). Above 25% saturation, lipid order reached a maximum.
The lipid phase transition temperature for meibum lipids increased significantly (p < 0.01, r = 0.963) with saturation, as expected, from about 30 to 51 °C (Figure 5b). The changes in enthalpy (∆H) (Figure 5c) and entropy (∆S) associated with the phase transition also increased with saturation. The Arrhenius plots used to calculate the ∆H and ∆S values from the lipid phase transitions were linear, with correlation coefficients greater than 0.998 (Figure 6). For comparison of the phase transition parameters of catalytically saturated meibum with the age-related changes, we refitted the phase transition curves from previous studies and recalculated the percent trans rotamers, because in previous publications [1,25-27] the equation used to curve fit the phase transitions was a general equation for sigmoidal curves. Equation (1), used in the current study, is more physiologically relevant, as it is related to the "Hill" equation used to measure enzyme kinetics. Another reason to recalculate the previously measured phase transitions is that the minimum and maximum ṽ_sym used in the older studies were less accurate. In studies before our 2007 study [26], the maximum ṽ_sym of 2854.5 cm⁻¹ was estimated from phosphatidylcholine in CHCl3. In this study, we used a maximum ṽ_sym of 2855.36 cm⁻¹ calculated from an isomeric distribution of hexanes [26]. In addition, in previous studies, the minimum ṽ_sym of 2849 cm⁻¹ was estimated from dipalmitoylphosphatidylcholine at −20 °C. In this study, we used a minimum ṽ_sym of 2848.00 cm⁻¹ calculated from distearoylphosphatidylcholine at −50 °C [26]. Data using the parameters in Ref. [26] are plotted in Figures 2a and 7c,d,f. [Figure 7 legend, in part: curve fits use f = y0 + (a × b)/(b + x); (e) data from Reference [54]. All donors were normal and did not have signs or symptoms of dry eye. Data are average ± the standard error of the mean.]
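The thermodynamic step can be sketched as a van't Hoff-type analysis: treating the order-to-disorder conversion as a two-state equilibrium, ∆H and ∆S follow from a linear fit of ln K versus 1/T. The two-state assumption and the synthetic values below are illustrative only.

```python
# Extract dH and dS from the linearity of ln K vs 1/T for a two-state
# order/disorder equilibrium, K = f_disordered / f_ordered. Synthetic data.
import numpy as np

R = 8.314  # J/(mol K)

T = np.linspace(300, 320, 9)                      # K, spanning the transition
dH_true, dS_true = 150e3, 490.0                   # illustrative values
lnK = -dH_true / (R * T) + dS_true / R            # van't Hoff line

slope, intercept = np.polyfit(1.0 / T, lnK, 1)    # linear fit; r ~ 1 here
dH, dS = -slope * R, intercept * R
print(f"dH = {dH/1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```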
Discussion
A major finding of this study is that human meibum lipid hydrocarbon chain unsaturation increases with age, in agreement with previous FTIR [27] and NMR [23] spectroscopic studies. However, a greater number of samples, 69, were measured in the current study, and in previous studies a 500 MHz NMR spectrometer was used, so the =CH resonance of cholesterol was not resolved from the hydrocarbon cis =CH resonance. The contribution of the =CH resonance of cholesterol was significant, 20% of the total intensity of the =CH resonances. In the current study, we used a 700 MHz NMR spectrometer that allowed the resolution of the two resonances, thus circumventing this shortfall. In the current study, unsaturation was related to the amount of wax and cholesteryl esters. This is more meaningful and accurate than relating saturation to the intensity of all the resonances, as in the previous study. Furthermore, in the previous study [24], the major resonance in the NMR spectra, at 1.39 ppm, was from protonated h-cyclohexane, a contaminant of the d-cyclohexane, and was incorrectly assigned to the meibum lipid CH2 resonance. In this study, we used CDCl3 to circumvent this issue.
Increasing human meibum hydrocarbon chain unsaturation with age (Figure 2a) was related to hydrocarbon chain order (fluidity, Figure 7c) and to significant decreases in cooperativity (Figure 7f, r = 0.940, p < 0.01) and the phase transition temperature (Figure 7d, r = 0.982, p < 0.01). The change in these parameters was most dramatic between 1 and 20 years of age. The significant decrease in the phase transition parameters between 1 and 20 years of age can be explained by the observation that the phase transition temperature is linearly related to meibum lipid order [54]. The largest decline in the meibum phase transition temperature, and hence the largest decline in lipid order, occurred between 1 and 20 years of age (Figures 2b and 7d). The change in blink rate with age (Figure 7a) was closely related to the increase in hydrocarbon chain disorder (Figure 7c) and the decreases in the plasma levels of free fatty acids (Figure 7b), the phase transition temperature (Figure 7d), and cooperativity (Figure 7f). Correlation does not establish causation, but it is interesting that the breaks in the curves in Figure 7 occur around 20 years of age, at Tanner stage V, an adult level of development [55]. It is reasonable to speculate that endocrine changes with adolescence could be responsible for the observed break in the curves, since the metabolism of lipids is under hormonal control.
Our catalytic saturation study showed that meibum hydrocarbon chain saturation was directly related to lipid order, phase transition temperature, cooperativity, ∆H, and ∆S. Saturated hydrocarbon chains contain more trans rotamers and pack much more tightly than unsaturated hydrocarbon chains, which contain bends introduced by cis C=C bonds. Because the tightly packed saturated chains form stronger van der Waals interactions, more enthalpy is required to break these interactions; thus, the ∆H of the lipid phase transition is greater for saturated hydrocarbon chains than for unsaturated chains. A 40% increase in saturation from adult meibum to infant meibum (Figure 2a) would be expected to raise the phase transition temperature of adult meibum from 28 to 40 °C, similar to the observed increase from 28 to 36 °C (Figure 7d). Our catalytic hydrogenation study also showed that the saturation-driven increase in the phase transition temperature (Figure 5b) could account for an increase in lipid hydrocarbon chain order (Figure 5a) from about 30% in adults to about 80% for infants, somewhat more than the 60% order observed for infants (Figure 2b). Other factors such as hydrocarbon chain branching and hydroxyl groups could contribute to disordering meibum [34], whereas protein (Figure 7e) could contribute to the ordering of meibum [32,56]. Saturation correlated with the phase transition temperature of pure and native membranes and may contribute to lipid order more than phospholipid, wax, or cholesteryl ester content, or hydrocarbon chain length or branching [54].
The lipid phase transition temperature and cooperativity measured by FTIR in this work were reasonably close to those measured in our previous FTIR study and those of others using different techniques (Table 1), especially considering that the age, race, and gender of the samples were not exact matches. The value we obtained for the ∆H of the meibum lipid phase transition is much larger than that reported using differential scanning calorimetry (DSC, Table 1). The reason for this difference may be technical, or it may be that DSC measures the total ∆H of the phase transition, which includes the ∆H of hydrocarbon and interface interactions, whereas the ∆H reported in the current study is the ∆H for the transition of a mole of trans rotamers to a mole of gauche rotamers. There may be about eight trans rotamers per hydrocarbon chain. From the maximum and minimum infrared ṽsym of the phase transition, we calculate that 72% of the rotamers are trans in the ordered "gel phase" at low temperature and 18% of the rotamers are trans in the disordered "liquid crystal phase" at higher temperature. Therefore, we estimate that DSC measures the ∆H for only 53% of the total isomers, the ones that undergo a trans to gauche change. Because the hydrocarbon chains are not completely ordered (solid) below the phase transition temperature and not completely disordered (liquid) above the phase transition temperature, the transition is called a gel to liquid crystalline phase transition and not a melting. Meibum compositional differences in hydrocarbon chain saturation can account for meibum structural differences with age, as suggested in the current study. Lipid saturation [56], order, and phase transition temperature [54] are higher in donors with Meibomian gland dysfunction compared with normal adults. Intuitively, meibum should be ordered enough to flow out of the Meibomian glands and fluid enough to spread on the surface of the tears. The relationships between meibum lipid structure and tear film function are less clear with dry eye than they are with age. The hydrocarbon chain order and phase transition temperature of meibum from donors with dry eye and unstable tears are 49% trans and 28 °C, respectively, comparable to those of donors younger than 10 years old with extremely stable tears, 50% to 60% trans and 35 °C, respectively. Therefore, other factors in addition to meibum lipid structure, such as elevated levels of protein [32,56], cooperative unit size [33], loss of squalene [29], inflammation [57], sebum [1], interactions between meibum and moieties in tears [58,59,60], differences between the lipid composition of tears and meibum [1,26,58,60], and aqueous deficiency, could all contribute to functional derangements with dry eye. Future studies focused on the role of meibum structure in tear film function are needed.
The infrared spectroscopic parameters discussed above are relevant to bulk meibum in the Meibomian gland and on the surface of the eyelid. The change in structural order of meibum with age could also be related to the structural order of meibum on the surface of tears, since most (94%) of the lipid on the tear film surface is not in contact with the aqueous interface. However, when we used Langmuir trough technology to measure how saturation influenced the surface properties of meibum [25], we compared native meibum with meibum that was 100% saturated, a level that is not physiological. We have since completed a study comparing the rheology of meibum at physiological saturation levels for comparison with the composition, structure, and functional data from the current study. We may speculate, from the greater order and elasticity of saturated meibum as observed for infants, and from the higher maximum surface pressure observed in pressure-area curves of saturated meibum compared with native meibum, that more saturated meibum films could be more stable, especially under the high shear stress of a blink [25].
Materials
Silver chloride windows for infrared spectroscopy were obtained from Crystran Limited, Poole, UK. Platinum (IV) oxide was obtained from the Sigma Chemical Company (St. Louis, MO, USA).
Diagnosis of Normal Status
Normal status was assigned when the patient's Meibomian gland orifices showed no evidence of keratinization or plugging with turbid or thickened secretions and no dilated blood vessels were observed on the eyelid margin. Normal donors did not recall having dry eye symptoms. Written informed consent was obtained from all donors and protocols and procedures were approved by the University of Louisville Institutional Review Board # 11.0319, August 2016. All procedures were in accord with the Declaration of Helsinki.
Collection and Extraction of Lipid from Meibum
Meibum lipid was expressed from the eyelids [61] and was collected with a platinum spatula with attention to avoiding scraping of the eyelid margin. Donors had no signs or symptoms of dry eye. Expressed meibum was dissolved in 1.5 mL CDCl3. The samples were pooled for catalytic hydrogenation.
Catalytic Hydrogenation
Half the pooled meibum was decanted to be catalytically hydrogenated. Saturated meibum was prepared as we did for sphingomyelin [26,62,63]. Platinum (IV) oxide (7.4 mg) was used as a catalyst to reduce the samples with hydrogen at room temperature and atmospheric pressure for approximately 4 h with stirring. Centrifugation was used to separate the catalyst from the solution. Catalytically saturated samples were quantitatively mixed with sample that was not catalytically saturated to provide mixtures containing 1%, 2%, 3%, 4%, 5%, 10%, 25%, 50%, and 67% of catalytically saturated meibum.
Saturation Analysis Using 1H-NMR
On the day of NMR measurement, the sample was sonicated under an atmosphere of argon gas in an ultrasonic bath (Branson 1510, Branson Ultrasonics, Danbury, CT, USA) for 10 min and placed into an NMR tube for spectral measurement. Meibum-CDCl3 samples were transferred from the microvial to the NMR tube using a glass pipet. Spectral data were acquired using a Varian VNMRS 700 MHz NMR spectrometer (Varian, Lexington, MA, USA) equipped with a 5-mm 1H{13C/15N} 13C-enhanced pulse-field gradient cold probe (Palo Alto, CA, USA). Spectra were acquired with a minimum of 250 scans, a 45° pulse width, and a relaxation delay of 1.000 s. All spectra were obtained at 25 °C. Spectra were processed and integration of spectral bands was performed with GRAMS/386 software (Galactic Industries, Salem, NH, USA).
To quantify the relative level of cis hydrocarbon =CH (5.32 ppm) bonds, the intensity of the =CH resonance from cholesteryl esters (5.35 ppm) was subtracted from the total area of the 5.32 and 5.35 ppm resonances then divided by the sum of the resonances from cholesteryl and wax esters at 4.6 and 4.1 ppm, respectively.
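The quantification just described amounts to a simple intensity ratio; the sketch below encodes it, with hypothetical integral values rather than data from this study.

```python
# Sketch of the cis =CH quantification: subtract the cholesteryl ester =CH
# intensity (5.35 ppm) from the combined 5.32/5.35 ppm area, then normalize
# by the cholesteryl ester (4.6 ppm) and wax ester (4.1 ppm) resonances.
def relative_unsaturation(area_532_535, area_ce_535, area_ce_46, area_we_41):
    """Cis =CH intensity relative to wax + cholesteryl ester resonances."""
    hydrocarbon_ch = area_532_535 - area_ce_535  # remove cholesteryl =CH share
    return hydrocarbon_ch / (area_ce_46 + area_we_41)

print(relative_unsaturation(1.25, 0.25, 0.40, 0.60))  # placeholder areas -> 1.0
```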
Measurement of Lipid Phase Transitions Using FTIR Spectroscopy
Lipid phase transitions were measured as described previously [25]. About 500 µL of sample in CDCl3 was applied to an AgCl infrared window. The solvent was evaporated under a stream of argon gas and the window was placed in a lyophilizer for 4 h to remove all traces of solvent. A Fourier transform infrared spectrometer (Nicolet 5000 Magna Series; Thermo Fisher Scientific, Inc., Waltham, MA, USA) was used to measure the infrared spectra of the lipid on the AgCl window. The window was placed in a temperature-controlled infrared cell. The sample temperature was adjusted by an insulated water coil connected to a circulating water bath (model R-134A; Neslab Instruments, Newton, MA, USA) surrounding the cell. A thermistor touching the sample cell window was used to measure the sample temperature. The sample was cooled or heated at a rate of 1 °C/15 min. Temperatures were maintained within ±0.01 °C. Exactly 100 interferograms were recorded and averaged. Spectral resolution was set to 1.0 cm−1. Infrared data analysis was then performed (GRAMS/386 software; Galactic Industries, Salem, NH, USA). ṽsym was used to estimate the content of trans and gauche rotamers in the hydrocarbon chains. The OH-CH stretching region of the spectra was baselined between 3500 and 2700 cm−1. ṽsym was calculated from the center of mass of the CH symmetric stretching band by integrating the top 10% of the intensity of the band. The baseline for integrating the top 10% of the intensity of the band was parallel to the OH-CH region baseline. The change in ṽsym versus temperature was used to characterize lipid phase transitions as described previously [25]. Since rotamers are in either trans or gauche conformations, phase transitions were fit to a two-state sigmoidal equation using Sigma Plot 10 software (Systat Software, Inc., Chicago, IL, USA):

ṽsym = (ṽsym)minimum + ((ṽsym)maximum − (ṽsym)minimum)/(1 + (temperature/Tc)^hillslope)    (1)

where ṽsym is the frequency of the symmetric CH2 stretching band near 2850 cm−1 and Tc is the phase transition temperature.
ṽsym at 33.4 °C was extrapolated from the fit of the phase transition and then converted to lipid order, the percentage of trans rotamers [25]. ∆H and ∆S were calculated from the slopes of Arrhenius plots [25].
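For readers wishing to reproduce this workflow, the two-state fit of Equation (1) and the extrapolation to 33.4 °C can be sketched with scipy as below. The synthetic data, noise level, and starting guesses are our assumptions for illustration, not values from the study.

```python
# Minimal sketch: fit Equation (1) to synthetic v_sym(T) data, then evaluate
# the fitted curve at the eye-surface temperature (33.4 C).
import numpy as np
from scipy.optimize import curve_fit

def eq1(T, v_min, v_max, Tc, hillslope):
    """Equation (1): two-state sigmoid; a negative hillslope gives an
    increasing v_sym with temperature, as observed."""
    return v_min + (v_max - v_min) / (1.0 + (T / Tc) ** hillslope)

T = np.arange(9.0, 66.0, 3.0)                        # deg C
v = eq1(T, 2849.5, 2853.5, 30.0, -12.0)              # hypothetical "data"
v = v + np.random.default_rng(0).normal(0.0, 0.02, T.size)

p0 = [2849.0, 2854.0, 30.0, -10.0]                   # initial guesses
(v_min, v_max, Tc, hill), _ = curve_fit(eq1, T, v, p0=p0)

v_334 = eq1(33.4, v_min, v_max, Tc, hill)            # extrapolated v_sym
print(f"Tc = {Tc:.1f} C, v_sym(33.4 C) = {v_334:.2f} cm^-1")
```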
Statistics
Curves were fit using Sigma Plot 10 software (Systat Software, Inc., Chicago, IL, USA), and the confidence levels, p, were obtained from a critical value table of the Pearson product-moment correlation coefficient. A value of p < 0.05 was considered statistically significant. Error bars are the standard error of the mean.
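The same correlation statistics can also be computed directly rather than from a critical-value table; a minimal sketch with placeholder data (not the study's measurements):

```python
# Sketch: Pearson r and its two-sided p-value with scipy.
from scipy.stats import pearsonr

ages = [1, 5, 10, 20, 40, 68]            # hypothetical donor ages (years)
order = [62, 55, 50, 38, 33, 31]         # hypothetical % trans rotamers
r, p = pearsonr(ages, order)
print(f"r = {r:.3f}, p = {p:.4f}")       # significant if p < 0.05
```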
Conclusions
Hydrocarbon chain saturation was directly related to lipid order, phase transition temperature, cooperativity, changes in enthalpy (∆H) and entropy (∆S) and could account for the changes in the lipid phase transition parameters observed with age. Unsaturation could contribute to decreased tear film stability with age. | 2017-10-04T00:36:26.145Z | 2017-08-28T00:00:00.000 | {
"year": 2017,
"sha1": "adc5ed74f6c17ffc561909e03dcf00e6631008e9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/18/9/1862/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "adc5ed74f6c17ffc561909e03dcf00e6631008e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
134622820 | pes2o/s2orc | v3-fos-license | Shelf life estimation of rujak cingur instant sauce using accelerated shelf life testing (ASLT) method
Rujak cingur is a traditional Indonesian cuisine. The process required to produce Rujak cingur sauce is time consuming, and manufacturing an instant Rujak cingur sauce is an innovation to solve this problem. However, there is no information about the shelf life of this sauce to guarantee its quality and safety. Therefore, it is necessary to estimate the shelf life of Rujak cingur instant sauce. The experimental method employed was Accelerated Shelf Life Testing (ASLT). The temperatures used in this experiment were 30°C, 34°C, and 37°C. Parameters observed during storage were water content, Aw, fat content, FFAs, peroxide value, TBA value, and Total Plate Count. ANOVA of the sensory analysis, conducted by the spectrum method, shows that the storage time of Rujak cingur instant sauce affected the salty taste, savoury taste, and rancid aroma of the sauce. The shelf life of Rujak cingur instant sauce was estimated using the Arrhenius equation y = −10482x + 31.864. The shelf life at 25°C was 89 days based on the limit of total microorganism contamination in the product and 113 days based on panellist rejection.
Introduction
Rujak cingur is one of Indonesia's specialties, made of a mixture of ingredients such as peanuts, shrimp paste, brown sugar, cayenne pepper, Batu banana, salt, tamarind, and mineral water. Serving Rujak cingur takes around 10-15 minutes. This preparation time is considered inefficient, because the Rujak cingur sauce must be prepared by a traditional pounding method [1].
The demand for a more convenient traditional food product with a longer shelf life could be addressed by an instant Rujak cingur sauce. Research related to the formulation of Rujak cingur instant sauce has been carried out by Karunia [2] and Sakinah [1]. Previously, other research had also been carried out to select methods and types of packaging as well as the addition of appropriate preservatives to optimize the storage capability of Rujak cingur instant sauce [3]. However, the shelf life of Rujak cingur instant sauce had not been determined. Therefore, this study aimed to estimate the shelf life of Rujak cingur instant sauce by using the Accelerated Shelf Life Testing (ASLT) method with the Arrhenius approach, which uses elevated temperatures to accelerate product degradation [4,5].
Materials
The ingredients used in the manufacture of Rujak cingur instant sauce were peanuts, shrimp paste, brown sugar, Batu banana, cayenne pepper, tamarind, salt, sodium benzoate, and multilayer packaging, as obtained from traditional markets in Indonesia. The materials used for analysis include acetic acid, chloroform, potassium iodide, Na2S2O3, starch solution, HCl, TBA reagent, ethanol, phenolphthalein (PP) indicator, NaOH, PCA media, distilled water, petroleum ether, commercial citrus scent, acetic acid, vanilla aroma, caramel aroma, refined sugar, citric acid, refined salt (NaCl), pure caffeine, refined MSG (monosodium glutamate), and mineral water.
Determination on characteristics of rujak cingur instant sauce quality
A 25 g portion of Rujak cingur instant sauce was packed and then stored at 37°C. The characteristics of the sauce were assessed from regular observations every 7 days, starting from day 0, until the product was rejected by the panellists. Upon rejection, the final analyses comprised: moisture content by the vacuum method [6], TBA value by the spectrophotometric method [7], and Total Plate Count [8].
Determination on shelf life of rujak cingur instant sauce
Packed 25 g samples of Rujak cingur instant sauce were grouped into three sets and stored at 30°C, 34°C, and 37°C. Observations were made periodically every 7 days at all three storage temperatures. Observations comprised sensory testing by 12 trained panellists and tests on the quality characteristics of the sauce, including moisture content by the vacuum method [6], TBA value by the spectrophotometric method [7], and Total Plate Count [8]. The shelf life was then determined using the Arrhenius method.
Determination on characteristics of rujak cingur instant sauce quality
To determine the characteristics used in estimating the shelf life of Rujak cingur instant sauce, it is necessary to analyze the parameters which affect quality degradation at the beginning and end of storage. Observations were carried out periodically every 7 days by the 12 trained panellists through product acceptance tests, from day 0 of storage until more than 50% of the panellists rejected the Rujak cingur instant sauce.
The rejection of Rujak cingur instant sauce by the panellists on the 28th day was used as the limit for determining the final quality characteristics (At). The characteristics of Rujak cingur instant sauce can be seen in Table 1.
Thio barbituric acid (TBA) value
The data plot of the TBA value of Rujak cingur instant sauce under the three storage temperature conditions follows first order. The greater the value of k for a parameter, the faster the rate of change in that parameter [9]. The Arrhenius equation is obtained by plotting the relationship between 1/T and ln k from the zero-order linear regression in a linear regression equation. The linear regression gives y (ln k) = −32295(1/T) + 104.42 with a coefficient of determination (R²) of 0.9083, and the activation energy of the change in the TBA value was 64138.8 cal/mol.
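The Arrhenius regression described above can be reproduced with an ordinary least-squares line through (1/T, ln k); a minimal sketch follows, with placeholder rate constants rather than the study's values.

```python
# Sketch: fit ln k against 1/T and recover the activation energy Ea = -slope * R.
import numpy as np

R_CAL = 1.987                                 # gas constant, cal/(mol K)
T = np.array([303.15, 307.15, 310.15])        # 30, 34, 37 C in kelvin
k = np.array([0.010, 0.018, 0.027])           # hypothetical rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R_CAL
print(f"ln k = {slope:.0f}(1/T) + {intercept:.2f}; Ea = {Ea:.0f} cal/mol")
```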
Water activities
Zero-order and first-order linear regression equations were fitted to the water activity of Rujak cingur instant sauce during storage at 30°C, 34°C, and 37°C. The linear regression of the Arrhenius plot based on the water activity parameter was y (ln k) = −101574(1/T) + 328.68 with a coefficient of determination (R²) of 0.8244. This R² value indicates that the temperature, or storage time, affected the water activity parameter, as the value of R² was close to 1 [10]. The activation energy required to change the water activity parameter of Rujak cingur instant sauce was 201726.70 cal/mol.
Determination of shelf life
There are several criteria for selecting the quality parameters most appropriate for determining the shelf life of the product [11]: 1) the quality parameter decreasing most during storage, as indicated by the largest absolute coefficient or the greatest coefficient of determination (R²); 2) the quality parameter most sensitive to temperature changes, as seen from the slope of the Arrhenius equation or from the lowest activation energy (Ea); and 3) if more than one quality parameter meets the criteria, the quality parameter giving the shorter shelf life is selected.
The parameter with the lowest activation energy is Total Plate Count, at 20816.30 cal/mol. The Total Plate Count parameter also has the highest coefficient of determination (R²), 0.9160. Therefore, Total Plate Count was used as the parameter for determining the shelf life of Rujak cingur instant sauce.
This study also refers to the food safety standard for the limit of microorganism contamination, 5 × 10⁵ (Total Plate Count), in SNI 7388:2009 09.2.4. Based on this maximum allowed limit of microorganism contamination, the study was stopped on the 21st day of storage at 37°C, when the total contamination of microorganisms reached 2.2 × 10⁶. The shelf life of Rujak cingur instant sauce was calculated using the Arrhenius equation for the Total Plate Count parameter with the microbial contamination limit as the basis for rejection, as presented in Table 2. In the shelf life study, there was no rejection of Rujak cingur instant sauce by the panellists by the 21st day. Therefore, the organoleptic analysis was continued until 50% of the panellists decided that the characteristics of the Rujak cingur instant sauce were unacceptable. Panellist rejection occurred on the 28th day; at that point, the study to estimate the shelf life of Rujak cingur instant sauce was halted. The estimation of the shelf life of Rujak cingur instant sauce based on panellist rejection is presented in Table 3. Table 3 indicates that the shelf lives of Rujak cingur instant sauce at 25°C and 30°C were 113 days and 63 days, respectively. This means that lower storage temperatures prolong the shelf life of Rujak cingur instant sauce. An increase in storage temperature leads to a greater reaction speed, as indicated by a steeper slope of the line and a faster rate of decline in quality [12].
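For illustration, the shelf-life arithmetic from the fitted Total Plate Count model can be sketched as follows. The initial count N0 below is a placeholder; the paper's 89-day figure at 25°C corresponds to its own measured initial count.

```python
# Sketch: shelf life from the Arrhenius fit ln k = -10482(1/T) + 31.864,
# assuming first-order microbial growth up to the SNI limit of 5e5.
import math

def k_at(T_kelvin: float) -> float:
    """Rate constant (per day) from the fitted Arrhenius equation."""
    return math.exp(-10482.0 / T_kelvin + 31.864)

def shelf_life_days(N0: float, N_limit: float = 5e5, T_celsius: float = 25.0) -> float:
    """Days for the count to grow from N0 to N_limit at the given temperature."""
    k = k_at(T_celsius + 273.15)
    return math.log(N_limit / N0) / k

print(f"{shelf_life_days(N0=2.0e4):.0f} days at 25 C")  # placeholder N0
```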
Conclusions
Three determinants of the decrease in the quality of Rujak cingur instant sauce are the water activity parameter, the TBA value, and the Total Plate Count. The R² values and activation energies of these three parameters are 0.8244 and 201726.70 cal/mol; 0.9083 and 64138.80 cal/mol; and 0.9160 and 20816.30 cal/mol, respectively. Based on the highest R² value and the lowest activation energy, the quality of Rujak cingur instant sauce is assessed from the Total Plate Count parameter. The shelf life of Rujak cingur instant sauce based on the Arrhenius equation at the maximum limit of total microorganism contamination is 89 days at 25°C. However, based on panellist rejection, the shelf life is 113 days at 25°C.
"year": 2019,
"sha1": "ca18cc89408127e1a3a972a462f79afb88b118f7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/230/1/012015",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "41cbb0437fc91b8cef54dcd6da94c1f7bbdd7dff",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
1275699 | pes2o/s2orc | v3-fos-license | The topology of the monodromy map of the second order ODE
We consider the following question: given $A \in SL(2,R)$, which potentials $q$ for the second order Sturm-Liouville problem have $A$ as its Floquet multiplier? More precisely, define the monodromy map $\mu$ taking a potential $q \in L^2([0,2\pi])$ to $\mu(q) = \tilde\Phi(2\pi)$, the lift to the universal cover $G = \widetilde{SL(2,R)}$ of $SL(2,R)$ of the fundamental matrix map $\Phi: [0,2\pi] \to SL(2,R)$, \[ \Phi(0) = I, \quad \Phi'(t) = \begin{pmatrix} 0 & 1 \\ q(t) & 0 \end{pmatrix} \Phi(t). \] Let $H$ be the real infinite dimensional separable Hilbert space: we present an explicit diffeomorphism $\Psi: G_0 \times H \to H^0([0,2\pi])$ such that the composition $\mu \circ \Psi$ is the projection on the first coordinate. The key ingredient is the correspondence between potentials $q$ and the image in the plane of the first row of $\Phi$, parametrized by polar coordinates, which we call the Kepler transform. As an application among others, let $C_1 \subset L^2([0,2\pi])$ be the set of potentials $q$ for which the equation $-u'' + qu = 0$ admits a nonzero periodic solution: $C_1$ is diffeomorphic to the disjoint union of a hyperplane and cartesian products of the usual cone in $R^3$ with $H$.
Introduction
For a given potential q ∈ H^0([0, 2π]) = L^2([0, 2π]), consider the homogeneous equation

−v′′(t) + q(t)v(t) = 0, t ∈ [0, 2π]. (∗)

The fundamental matrix Φ : [0, 2π] → SL(2, R) is defined by Φ(0) = I and Φ′(t) = A(t)Φ(t), where A(t) is the 2 × 2 matrix with rows (0, 1) and (q(t), 0), and evaluation at t = 2π obtains the Floquet multiplier Φ(2π) ∈ SL(2, R). We study the geometry of the set of potentials q with a given Floquet multiplier: it turns out that this set has countably many connected components, and in order to describe them it is useful to consider the lifted version of these objects to a covering map of SL(2, R).
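Concretely, the Floquet multiplier can be approximated numerically. The following is a minimal sketch (our own illustration, not part of the paper's constructions) integrating the fundamental matrix with scipy and reading off Φ(2π); it computes only the matrix in SL(2, R), not its lift to the universal cover G.

```python
# Sketch: numerical monodromy. Integrate Phi'(t) = A(t) Phi(t) with
# A(t) = [[0, 1], [q(t), 0]] and Phi(0) = I, then evaluate at t = 2*pi.
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(q, rtol=1e-10, atol=1e-12):
    """Floquet multiplier Phi(2*pi) for a potential q(t)."""
    def rhs(t, y):
        Phi = y.reshape(2, 2)
        A = np.array([[0.0, 1.0], [q(t), 0.0]])
        return (A @ Phi).ravel()
    sol = solve_ivp(rhs, (0.0, 2 * np.pi), np.eye(2).ravel(), rtol=rtol, atol=atol)
    return sol.y[:, -1].reshape(2, 2)

M = monodromy(lambda t: np.cos(t))   # example potential q(t) = cos t
print(M)
print(np.linalg.det(M))              # det = 1 up to tolerance (tr A = 0)
```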
Thus, the set of potentials q with given monodromy g ∈ G 0 is parametrized by Ψ(g, h), h ∈ H, and is therefore a (topological) subspace of codimension 3. This theorem will be extended to other function spaces (H p (S 1 ) and H p ([0, 2π]) for p ≥ 0) in theorem 3.
Up to differentiability class (to be detailed in section 4), these constructions define bijections between the following three sets: (a) P, the set of potentials q; (b) F, the set of fundamental curves v; (c) K, the set of orbits (θ_M, ρ). Luckily, monodromy is easy to handle in K: two potentials have the same monodromy if and only if their orbits have the same θ_M, ρ(θ_M), and ρ′(θ_M). The level sets of µ are thus parametrized by the set of positive ρ's with prescribed behavior at endpoints and integral equal to 2π.
We then proceed to apply theorem 3 to the theory of periodic Sturm-Liouville operators. Let C ⊂ H^0([0, 2π]) be the set of potentials q for which equation (∗) admits a periodic nontrivial solution v. It is easy to see that q ∈ C if and only if tr µ(q) = 2, thus reducing the study of C to the study of the set of matrices in G_0 with trace equal to 2. The upshot is the following: let Σ_0 ⊂ R³ be the plane z = 0 and, for n > 0, let Σ_n ⊂ R³ be a copy of the usual cone; write Σ for the union of the Σ_n.

Theorem 2 There is a diffeomorphism between (R³, Σ) × H and (H^0([0, 2π]), C).
The images of the vertices of the cones in Σ × H form a countable union of topological subspaces of codimension 3, the set of potentials q for which all solutions of equation ( * ) are periodic.
Standard oscillation theory is incorporated in the following geometric property, stated in theorem 5. Consider a straight line in H 0 ([0, 2π]) of the form q 0 + sq + , s ∈ R, where q + is almost everywhere strictly positive. This line meets the image of Σ 0 × H exactly once and the intersection is transversal. Also, for each n > 0, the line meets the image of Σ n ×H either exactly twice (transversally, once in each leaf) or once at the image of a vertex.
As an application, we describe the critical set of the nonlinear periodic Sturm-Liouville operator with quadratic nonlinearity. Let p ≥ 2 and F : H^p(S¹) → H^{p−2}(S¹) be given by F(u) = −u′′ + u²/2; the pair (H^p(S¹), C), where C is the critical set of F, is diffeomorphic to (G_0, T_2 ∩ G_0) × H (corollary 6.1). This result should be contrasted to those obtained in [7] and [1] for a nonlinear Sturm-Liouville operator with Dirichlet boundary conditions and convex nonlinearity. In [2], the authors characterized the critical set under the weaker, generic hypothesis on the nonlinearity: the components of the critical set are topological hyperplanes. Analogous results for the periodic case, the original motivation for this paper, will be discussed in a forthcoming paper ([3]).
The counterpart in the third order case to the set of vertices of C is the set of potentials for which all solutions are periodic. Using monodromy arguments ([9]), this set is shown to be homeomorphic to the set of closed locally convex curves in S² with a prescribed basepoint, a very complicated space with nontrivial homology in every even dimension ([8]).
The problem of characterizing potentials having 0 in the spectrum is clearly related to the description of isospectral classes of potentials, as accomplished in [10], [6] and [5]. However, we do not think our results are corollaries of these powerful techniques.
Back to the linear Sturm-Liouville problem, we proceed to consider more general boundary conditions. For a 2 × 4 real matrix U, we say a solution v of equation (∗) satisfies U-boundary conditions if U (v(0), v′(0), v(2π), v′(2π))ᵀ = 0. We are again interested in the geometry and topology of C, the set of potentials q for which equation (∗) admits a nontrivial solution satisfying U-boundary conditions. This again can be reduced to the study of certain algebraically defined subsets of G_0.
In section 2 we present the relevant geometric facts about G, the universal cover of SL(2, R), and in section 3 we do the same for SL±(2, R), the group of real 2 × 2 matrices with determinant ±1. In section 4 we present the monodromy map µ and the Kepler transform, which is then used in section 5 to prove theorem 3, a more general version of theorem 1 above. In section 6 we study the periodic Sturm-Liouville problem, proving theorems 4 and 5, improved versions of theorem 2. Finally, in section 7, we study the Sturm-Liouville problem with more general boundary conditions. The last two authors received the support of CNPq, CAPES and FAPERJ (Brazil). The second author acknowledges the hospitality of The Mathematics Department of The Ohio State University during the winter quarter of 2004.
We are interested in the stratification of G into conjugacy classes. The center Z(G) of G is formed by the elements of the form ι^n, n ∈ Z, where ι = φ_C(π, 0, 0): we have Π(ι^n) = (−1)^n I. From the connectivity of G, conjugacy classes are contained in connected components of level sets T_c = tr⁻¹({c}) of the trace function tr : G → R. We systematically abuse notation by writing tr g instead of tr(Πg). For any matrix A ∈ SL(2, R), A ≠ ±I, the centralizer {B | AB = BA} is a Lie group of dimension 1 and, since G is a covering of SL(2, R), the same holds for the centralizer of any g ∈ G, g ≠ ι^n, n ∈ Z. Thus, the conjugacy class of any such g is a 2-dimensional manifold.
A straightforward computation yields tr φ_C(α, r cos η, r sin η) = 2 cos α cosh r. The sets φ_C⁻¹(T_c) are obtained by rotating figure 1 around the horizontal axis (r = 0). The figure indicates the level curves for c ∈ Z, solid for c > 0, thicker for c = 0 and dotted for c < 0. The V-shaped curves correspond to c = ±2. Notice that T_0 is the countable union of planes α = kπ + π/2 in Cartan coordinates. The sign of the trace is determined by cos α. Defining A_n = φ_C((nπ − π/2, nπ + π/2) × R²), the regions bounded by the thick vertical lines in the figure, the sign of the trace is constant, equal to (−1)^n, in each open set A_n. Since A_n = ι^n A_0, it suffices to study the trace function in A_0. From the picture, level sets T_c look like cones or hyperboloids. To make this precise, one defines a real analytic diffeomorphism φ_X : R³ → A_0 for which tr(φ_X(x, y, z)) = 2 exp(−x² + y² + z²).
For c > 0, T_c ∩ A_0 is the surface −x² + y² + z² = log(c/2). For 0 < c < 2 this is a hyperboloid with two connected components, diffeomorphic to the disjoint union of two planes: in this case, the set T_c is a disjoint union of countably many surfaces, each diffeomorphic to a plane. For c = 2, T_c ∩ A_0 is a cone which, except for one point, the vertex, is a submanifold. We call the cone ⊲⊳. Thus, for c = 2, T_c is a disjoint union of countably many copies of ⊲⊳, one in each A_{2n}. The connected component of T_2 containing I is the image under the exponential map of the cone of nilpotent matrices in the Lie algebra of G (naturally identified with sl(2, R)). The cases c < −2, c = −2 and −2 < c < 0 are similar, with the components now lying in A_{2n+1}.
Summing up, for each c ≠ ±2, the connected components of T_c are conjugacy classes in G. The vertices of the cones in T_{±2} are precisely the ι^n: each vertex is a conjugacy class by itself. A cone minus the vertex consists of two leaves, each of them diffeomorphic to S¹ × R: each leaf of a cone is a conjugacy class. Let T_2^0 ⊂ T_2 be the connected component containing the origin. The two leaves of the cone T_2^0 meet at the vertex I and consist of lifted matrices with both eigenvalues equal to 1. Thus, g ∈ T_2^0 − {I} projects to I + N ∈ SL(2, R), N a nonzero nilpotent matrix. Define sgn(g) to be sgn(det(Nv, v)), v ∉ ker N; this sign is well defined and may be used as a label for the leaf.
Consider now the left Iwasawa decomposition
and the open nested half-spaces G_θ = φ_L((θ, +∞) × (0, +∞) × R) ⊂ G. The set G_θ consists of the elements g ∈ G for which the variation in argument from e₂ to ge₂ is smaller than −θ (the variation in argument is computed along a path γ). Recall that ⌊x⌋ is the only integer in the interval (x − 1, x]. Along the proof of the proposition, we will give geometric descriptions of the five pairs in the statement. Proof: Since φ_L is a diffeomorphism, the boundaries ∂G_θ are smooth (topological) hyperplanes. The surface ∂G_0 consists of (lifts of) lower triangular matrices with positive diagonal entries. Clearly, for g ∈ ∂G_0, tr g ≥ 2, and on the curve of lower triangular matrices with diagonal (1, 1) we have tr g = 2. This implies that the surface ∂G_0 is tangent to the cone T_2^0. For g ∈ T_2^0, except for the curve of tangency, sgn(g) coincides with the sign of θ: indeed, the sign sgn(g) is also the sign of the variation of argument from gv to v if v is not an eigenvector of g. Thus, the positive leaf of T_2^0 is contained in the closure of G_0 and the negative one is disjoint from G_0. The intersection of T_2^0 with G_0 is therefore the positive leaf minus a closed half-line: it is thus diffeomorphic to a plane. Figure 2 shows the set G_0, together with the cones T_{±2}, in two kinds of representations. The drawing on the left is an attempt to give a 3d perspective view of T_2^0 and ∂G_0 as a cone and a tangent plane. The drawing on the right is far more schematic: the connected components of T_2 and T_{−2} are shown as big Xs, the parts contained in G_0 drawn in solid lines and the others in dotted lines; ∂G_0 is represented by a thick line.
The connected component of T_2 ∩ G_0 contained in A_0, the solid half-line starting at the thick line in figure 2, is (diffeomorphic to) a plane, while the other components, one in each A_{2k}, k > 0, are (diffeomorphic to) cones with horizontal axis. Similarly, as we shall soon prove, the connected component of T_4 ∩ G_0 in A_0, drawn as a branch of a fake hyperbola in figure 2, is a plane, while the other components, drawn as complete fake hyperbolas, are cylinders (i.e., diffeomorphic to S¹ × R). The components of T_{−4} ∩ G_0 are cylinders and those of T_0 ∩ G_0, drawn as vertical lines, are planes.
Let B_n ⊂ G be G_{nπ} − Ḡ_{(n+1)π} = φ_L((nπ, (n + 1)π) × (0, +∞) × R). Thus, the sets B_n are open and disjoint and, together with the sets A_n, form an open cover of G with A_n ∩ B_{n′} ≠ ∅ if and only if n = n′ or n = n′ + 1. The map φ_X provided a normal form for the trace on A_n. For B_n instead, consider a diffeomorphism φ_Y for which tr(φ_Y(θ, ρ, c)) = c. Thus, (B_n, T_c ∩ B_n) is diffeomorphic to the pair (R³, {z = c}), and so is (B_n ∩ G_θ, T_c ∩ B_n ∩ G_θ) assuming nπ < θ < (n + 1)π.
Consider now arbitrary values of θ and c. We may assume θ ∈ [0, π) by multiplying everything in sight by an appropriate element ι n of the center of G, an operation which, up to sign, preserves traces. Set ǫ > 0, ǫ < π − θ. The diffeomorphism φ Y yields a diffeomorphism between the regions G 0 − G θ+ǫ and G θ − G θ+ǫ , coinciding with the identity near their common boundary and preserving trace. We therefore have a diffeomorphism between the pairs (G 0 , T c ∩ G 0 ) and (G θ , T c ∩ G θ ), which, together with the geometric descriptions in figure 2, completes the proof.
The automorphism r lifts to r̃ : G → G, also an automorphism of order 2. Use r̃ to define G± as the semidirect product G ⋊ (Z/(2)). More concretely, set G± to be the disjoint union of G and R̃G = {R̃g, g ∈ G}, the product being defined by gR̃ = R̃r̃(g), and Π± : G± → SL±(2, R) is a homomorphism extending Π : G → SL(2, R) with Π±(R̃) = R. Clearly, G± has two connected components, G+ = G and G−, each homeomorphic to R³, and the projection Π± is a universal cover on each connected component.
The Schur decomposition induces a diffeomorphism φ_S : R × (0, +∞) × R → G− with φ_S(0, 1, 0) = R̃. As before, we consider the level sets of the trace in G−; their geometry is much simpler than that of their positive counterparts.
We now construct smooth natural bijections between the following three sets: P, the set of potentials q; F, the set of fundamental curves v; and K, the set of orbits (θ_M, ρ). The condition det Φ = 1 is translated as v ∧ v′ = 1: in particular, the argument θ of v always has positive derivative. We call v the fundamental curve associated with the potential q: the map from P to F takes q to v.
This map is indeed a continuous bijection: if
Since v is continuous and nonzero, v′′ is a multiple of v, i.e., v′′ = qv, and it is straightforward to check that the potential q lies in H^p([0, 2π]) with v being its associated fundamental curve.
The fact that these bijections preserve smoothness class is left to the reader. We call this bijection between F and K the Kepler transform.
The restrictions of these bijections to the periodic case work well but, for p > 0, we still have to describe the image in F and K of H^p(S¹) ⊂ H^p([0, 2π]) = P. More precisely, we translate the conditions q^(j)(0) = q^(j)(2π), 0 ≤ j < p, in terms of the functions v and ρ. For v, we clearly must have v^(j)(2π) = v^(j)(0)µ(q), 2 ≤ j < p + 2. For ρ, the conditions become far more complicated. From equations 7 and 8, the conditions q(0) = q(2π) and q′(0) = q′(2π) become conditions on ρ in which b_0 and b_1 are smooth functions. More generally, formulae for higher derivatives of q yield a translation from q^(j)(0) = q^(j)(2π) to a condition in which b_j is a rather complicated expression. Summing up, there exist smooth maps B_p encoding these boundary conditions. Proposition 4.1 For any p ≥ 0, the image of µ on H^p(S¹) coincides with its image on H^p([0, 2π]). Clearly, for any q ∈ H^p, the argument θ is strictly increasing; apply the Kepler transform to the pair (θ_M, ρ) to obtain a potential with the prescribed monodromy. Minor adjustments at the boundary points may be performed to guarantee that the resulting potential lies in H^p(S¹).
Global geometry of the monodromy map
We are ready to prove the first main result of this paper. Geometrically, the theorem states that level sets of the monodromy map are, after a smooth change of variables, parallel affine subspaces of codimension 3. The claim holds for the restriction of the monodromy to H p ([0, 2π]) and to H p (S 1 ), p ≥ 0. Let H be the real separable infinite dimensional Hilbert space. The subscript [0, 2π] or S 1 for the diffeomorphisms Ψ p will be omitted whenever it is clear from the context. The proof yields an explicit construction of the maps Ψ p .
We first choose a base point Ψ_0(g_0, 0). There exists a unique polynomial P_0 = P_{θ_0,ρ_0,ν_0} of degree 4 or less satisfying the required boundary and integral conditions; the exponential is used to guarantee the positivity of the function ρ = exp ∘ P_0. Indeed, from Lagrange interpolation there exists a unique polynomial P_1 of degree at most 3 satisfying the boundary conditions; thus, a polynomial P of degree at most 4 satisfies the boundary conditions if and only if P is of the form P(θ) = P_1(θ) + cθ²(θ_M − θ)². The integral in the fifth condition is now a continuous strictly increasing function of c ranging from 0 to +∞ as c varies in R: there exists therefore a unique value of c for which P_0 = P satisfies both the boundary and integral conditions. Set Ψ_0(g_0, 0) to be the potential associated to the orbit (θ_M, exp ∘ P_0). Now let H ⊂ H²([0, 1]) be the closed subspace of functions r satisfying the appropriate boundary conditions, and define Ψ_0(g_0, r) to be the potential with orbit (θ_M, ρ), the parameter c being again uniquely chosen so that ρ satisfies the integral condition.
The nonperiodic case for p > 0 is similar. We now consider the periodic case for p > 0. Take a and r as in equation 9; the values of a_j and b_j will indicate the j-th derivative of ρ at 0 and θ_M, respectively. We claim that there exists a unique polynomial P of degree at most 2p + 4 such that the required conditions hold: this follows from a monotonicity argument analogous to that used to construct P_0 in the case p = 0. Finally, define Ψ_p(g_0, (a, r)) to be the potential corresponding to the resulting orbit, where c is again the unique constant for which the integral condition holds; Ψ_p is a diffeomorphism with all the required properties.
Periodic Sturm-Liouville operators
For p ∈ Z, p ≥ 0, and q ∈ H^p(S¹) we consider the operator L = L_p(q) : H^{p+2}(S¹) → H^p(S¹), Lv = −v′′ + qv. It is easy to verify that L is a Fredholm operator of index 0 with kernel of dimension at most 2. In particular, the spectrum σ(L) is given by σ(L) = {λ | dim ker(L − λI) > 0} and we call dim ker(L − λI) the multiplicity of the eigenvalue λ. For p = 0 this operator is self-adjoint and it follows that for all p ≥ 0 the spectrum of L consists only of real eigenvalues with multiplicity (geometric equal to algebraic) at most 2. We are interested in the geometry of the triple (C_0, C_1, C_2). Recall that Z(G) = {ι^k, k ∈ Z}, the center of G, is the set of vertices of the cones in T_{±2} (see figure 1). The diffeomorphism Ψ^p_{S¹} is the one constructed in theorem 3.
In particular, the set C 1 of potentials q ∈ H p (S 1 ) with 0 in the spectrum is a disjoint union of a (topological) hyperplane Ψ p S 1 ((T 0 2 ∩ G 0 ) × H) and countably many cones Ψ p S 1 ((T 2 ∩ A n ) × H), n > 0. Recall that each cone has two sheets, meeting at a vertex, a topological subspace of codimension 3.
Let q_+ ∈ H^p(S¹) be an almost everywhere strictly positive function and, for q_0 ∈ H^p(S¹), consider the parametrized straight line q_0 − sq_+, s ∈ R. Standard oscillation theory implies the existence of a sequence of continuous functions s_n such that 0 is the n-th eigenvalue of the potential q_0 − s_n(q_0)q_+. In particular, 0 is the (simple) ground state eigenvalue of q_0 − s_0(q_0)q_+.
Combining these two points of view, we have the following result.
Theorem 5 Each straight line q_0 − sq_+, q_0, q_+ ∈ H^p(S¹), q_+ strictly positive a.e., meets the hyperplane and each sheet of a cone in C_1 exactly once. More precisely, the 2n − 1 and 2n-th eigenvalues of q_0 coincide if and only if the line q_0 + s, s ∈ R, passes through the vertex of Ψ^p_{S¹}((T_2 ∩ A_n) × H). Also, the set of potentials q for which 0 is the double eigenvalue in positions 2n − 1, 2n is a (topological) subspace of codimension 3.
As a final application, we describe the critical set of the nonlinear periodic Sturm-Liouville operator with quadratic nonlinearity. Corollary 6.1 Let p ≥ 2 and F : H p (S 1 ) → H p−2 (S 1 ) be given by F (u) = −u ′′ + u 2 /2. Let C ⊂ H p (S 1 ) be the critical set of F . Then the pair (H p (S 1 ), C) is diffeomorphic to (G 0 , T 2 ∩ G 0 ) × H.
A standard regularity argument shows that for u ∈ H p ⊂ H p−2 , ker L p (u) = ker L p−2 (u) ⊂ H p+2 (S 1 ) and therefore C = {u ∈ H p (S 1 ) | L p (u) : H p+2 → H p has nontrivial kernel} which is C 1 in the notation of theorem 4, completing the proof.
Other boundary conditions
The results above extend appropriately to other boundary conditions. For a real 2 × 4 matrix U of rank 2, let H²_U([0, 2π]) ⊂ H²([0, 2π]) be the space of functions v satisfying U-boundary conditions: U (v(0), v′(0), v(2π), v′(2π))ᵀ = 0. In particular, H²_{(I −I)}([0, 2π]) = H²(S¹) and H²_{(−I −I)}([0, 2π]) is the space of antiperiodic functions, where I is the 2 × 2 identity matrix. We shall not discuss higher orders of differentiability in this setting.
Oscillation theory for A ∈ SL(2, R) works as in the periodic case: the straight lines q_0 − sq_+ meet the ground hyperplane in C_1 (if it exists) exactly once and each cone in C_1 twice, unless the straight line goes through a vertex. It is not clear how oscillation theory fits in for matrices A with determinant −1. For instance, for A = diag(1, −1), q_0 = 0 and q_+ = 1, the whole line q_0 − sq_+ is contained in C_1: all functions q ∈ H^0([0, 2π]) satisfying q(2π − t) = q(t) belong to C_1. | 2014-10-01T00:00:00.000Z | 2005-07-06T00:00:00.000 | {
"year": 2005,
"sha1": "71500abd4c1aabbc7458f2342f5dd6b65d0b8e32",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jde.2005.11.009",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8e59e4978e0c6d81ea70b1f538e88c49e55c3abe",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
233283025 | pes2o/s2orc | v3-fos-license | Urban planning elements affect thermal environment from solar radiation in subtropics
Urban planning elements are crucial factors affecting the absorption and storage of heat from solar radiation in ground surfaces. This study examined the influence of heat storage capacity and urban planning methods on environmental hotspots. We examined the campus of National Taipei University of Technology (NTUT), including urban planning elements such as building structures, road pavement materials, water bodies, and vegetation. The study was divided into measurement and simulation. The results indicate that: (1) the thermal properties of urban planning elements exert significant influence on the heat storage of urban sites; (2) the arrangement of building clusters and the mutual exchange of energy among them cause stagnant wind fields and heat storage; and (3) the evapotranspiration from water bodies and grass creates high moist enthalpy, which is difficult to dissipate.
Introduction
Urbanization has caused natural surfaces to be replaced by human-made materials, thereby increasing anthropogenic waste heat and energy consumption, altering ambient radiant heat, air humidity, and atmospheric fluids, and affecting microclimates at the mesoscale [1]. Most of the land cover in urban areas consists of impervious artificial surfaces, which upset the energy balance at the Earth's surface [2,3,4]. During the day, urban surfaces absorb radiant heat from the sun and then slowly release the accumulated heat after sunset, affecting the local microclimate [5,6]. The cement and concrete in buildings and the asphalt mixtures in road pavement all have high solar heat absorption and storage capacities [7,8], which allows them to store large amounts of heat during the day and then release it back into the atmosphere during the night [6,9,10]. Mitigating climate change and rising ambient temperatures is a major climate issue in many countries.
Rough surfaces are the main factor in urban heat storage, so urban planning and design must take into consideration changes in ambient energy as well as the influence of the thermal properties and heat capacity of pavement materials. A wide variety of design methods and strategies must be employed to cope with climate change. Most past studies focused on analyzing and comparing various surface materials or investigated the influence of surface temperatures on ambient heat flux. Little research has been conducted into the influence of surface thermal properties on the heat budget of urban environments and the key factors of heat storage.
This study examined the influence of urban planning elements on the ambient heat storage of specific sites. We measured physical factors of a campus environment at the site scale and input the measurements into a numerical simulation model.
Theories and Methodology
We chose National Taipei University of Technology (NTUT) during summer as the site of this study. The campus of NTUT comprises varying urban planning elements and has the diversity of a small city. We therefore chose it as our simulation target and applied the typical meteorological year 3 (TMY3) data of Taipei to our simulations. The measurements obtained from the site were used for heat storage calculations and converted into the background conditions and data needed for CFD simulations. Integrating simulation software with measurement data increased the accuracy of the simulation results.
Theories associated with urban thermal environments
Urban climates are the climate characteristics that differ from those of surrounding natural areas due to urban development. Urbanization alters the heat balance and changes the regional climate into an urban climate. The heights and shapes of building structures, the orientations of streets and buildings, and the properties of surface materials in urban areas all exert a certain degree of influence on urban climate. This study investigated the mutual influence and connections between urban climate conditions, namely solar radiation and the fluid wind field, and the surrounding urban planning elements.

Urban energy transfer

The balance of energy between the atmosphere and urban environments involves shortwave radiation from the sun, longwave radiation reflected off rough surfaces, thermal energy released by different ground surfaces, the sensible heat exchanged between fluids and the ground, evapotranspiration heat from plants or the ground, and anthropogenic heat. Oke [1] developed an equation of energy balance between the air and ground structures in urban environments:

Q* + QF = QH + QE + ΔQS + ΔQA    (1)

Q*: net radiant flux, which is the shortwave and longwave radiation absorbed by urban ground surfaces
QF: anthropogenic heat flux
QH: sensible heat flux
QE: latent heat flux
ΔQS: heat stored by ground surfaces, buildings, and aboveground objects
ΔQA: convection heat

This study mainly examines the heat stored by ground surfaces, buildings, and aboveground objects (ΔQS) and includes the other influencing factors of ambient energy for reference and discussion; a numerical sketch of this balance follows below. As our scope is limited to the thermal energy reactions generated by urban planning elements, we did not conduct an in-depth discussion of anthropogenic heat.

Solar radiation

Givon [11] pointed out that the amounts of solar radiation received by urban and suburban areas are basically the same, but the longwave radiation released by the ground and the dense buildings in urban areas is absorbed and reflected numerous times by surrounding surfaces, which results in significant differences between urban and suburban areas in longwave radiation near the ground surface. Urban ground surfaces exchange heat with the air near the ground surface by reflecting shortwave solar radiation and engage in convective heat transfer by releasing longwave radiation.

Urban wind field

The roughness of different urban areas affects ground surface resistance, airflow scale and intensity, and wind speed conditions within the site environment [12]. When air flows through areas with rough surfaces, the roughness influences the boundary layer at the ground surface, which in turn changes the average speed, pressure, and thickness of winds and impacts the turbulent flow field. The influence of ground surface roughness on the wind field generates different gradient wind speeds in different areas.

Evapotranspiration

Evapotranspiration is an important factor in microclimate balance and is closely related to climate background conditions, soil water content, ground surface vegetation, and plant species and characteristics.
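Returning to Equation (1), the storage term ΔQS can be obtained by closing the balance with the remaining fluxes. The following is a minimal sketch with illustrative flux values, not measurements from this study.

```python
# Sketch: closing the urban surface energy balance of Equation (1)
# for the storage term. All flux values are in W/m^2.
def storage_flux(Q_star, Q_F, Q_H, Q_E, dQ_A=0.0):
    """dQ_S = Q* + Q_F - Q_H - Q_E - dQ_A (residual of Equation (1))."""
    return Q_star + Q_F - Q_H - Q_E - dQ_A

# Placeholder midday fluxes for a paved urban site.
print(storage_flux(Q_star=520.0, Q_F=30.0, Q_H=180.0, Q_E=90.0))  # -> 280.0 W/m^2
```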
We referred to the Penman-Monteith equation to estimate the evapotranspiration from water bodies and vegetation.
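The study's exact formulation and coefficients are not reproduced here; as a reference point, the widely used FAO-56 daily form of the Penman-Monteith equation can be sketched as follows, with all input values being placeholders.

```python
# Sketch: FAO-56 daily Penman-Monteith reference evapotranspiration.
# Inputs: Rn, G in MJ m^-2 day^-1; T in deg C; u2 in m/s (wind at 2 m);
# es, ea in kPa (saturation and actual vapour pressure);
# delta, gamma in kPa per deg C (slope of vapour curve, psychrometric constant).
def penman_monteith_fao56(Rn, G, T, u2, es, ea, delta, gamma):
    """Reference ET0 in mm/day."""
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Placeholder values roughly representative of a humid subtropical summer day.
et0 = penman_monteith_fao56(14.0, 0.5, 30.0, 2.0, 4.24, 2.97, 0.246, 0.0674)
print(f"ET0 = {et0:.1f} mm/day")
```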
Thermal properties of urban planning elements
The thermal properties of urban planning elements exert significant influence on the thermal behavior of pavement materials and building environments. The primary factors of their heat storage capacity include the thermal conductivity coefficient (k), density (ρ), and specific heat (c). These thermal properties are fixed values for a given substance. The thermal conductivity coefficient (k) indicates the capacity of a material to conduct or transmit heat and influences heat conduction speeds at high and low temperatures (Table 1). Pavement materials with low thermal conductivity will be heated at the surface, while those with high thermal conductivity will transmit the heat to other pavement layers [11]. Gui [13] noted that reducing the thermal conductivity coefficient of pavement surfaces can lower the heat flux of pavements under solar radiation and at high air temperatures, which will lower road surface temperatures and temperatures near the ground surface. Specific heat (c), also known as specific heat capacity, is the amount of energy that a unit mass of a substance requires to rise 1 °C in temperature. Depending on the circumstances, it can be further divided into specific heat capacity at constant pressure (cp), specific heat capacity at constant volume (cv), and specific heat capacity at saturation, with unit kJ/(kg·K). In contrast, thermal resistance (R) refers to the capacity of a substance to resist the transmission of heat and is the reciprocal of k.
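Two derived quantities conveniently summarize how k, ρ, and c combine to govern heat storage: the thermal diffusivity α = k/(ρc) and the thermal effusivity e = √(kρc). A minimal sketch follows, using approximate handbook-style values for asphalt concrete rather than the study's Table 1 data.

```python
# Sketch: derived heat-storage indicators from k (W/m K), rho (kg/m^3),
# and c (J/kg K).
import math

def diffusivity(k, rho, c):
    """Thermal diffusivity alpha = k / (rho * c), in m^2/s."""
    return k / (rho * c)

def effusivity(k, rho, c):
    """Thermal effusivity e = sqrt(k * rho * c), in J/(m^2 K s^0.5)."""
    return math.sqrt(k * rho * c)

k, rho, c = 1.2, 2300.0, 920.0  # approximate values for asphalt concrete
print(f"alpha = {diffusivity(k, rho, c):.2e} m^2/s, e = {effusivity(k, rho, c):.0f}")
```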
Types of urban planning elements
Urban planning elements are crucial factors of urban climate. Complex and diverse structure surfaces can create different microclimate characteristics in an environment and exert a major impact on humans [14]. The thermal properties of surface materials, the shapes and configurations of buildings, and the distances between roads and buildings in urban environments, together with elements such as asphalt roads, brick pavements in plazas, vegetation, and park plazas, all change the local microclimate. Guan [15] examined the thermal properties of surface materials and the influence of the ambient temperatures surrounding asphalt, concrete, brick, and grassy surfaces on microclimate. Assessing the influence of various surface materials on the microclimate of near-ground environments, Qin [3] concluded that the interactions among road structures, material properties, and environmental factors are crucial to understanding the influence of urban ground surfaces on the heat island effect.
Xi [16] observed that urban planning elements can create different outdoor thermal environments and that common elements on campuses, such as building structures, plaza spaces, grass, soil, and vegetation, can be used to design subtropical urban environments. This study examined the most representative planning elements in urban areas, including building structures, asphalt roads, brick pavements in plazas, natural grass and soil, trees and vegetation, and water bodies.
Experiment Environment and Simulation Model
The NTUT campus in Da'an District of Taipei City, which covers 14.78 ha, served as the study area. The campus contains concrete buildings, asphalt roads and pavement, plazas and paths paved with bricks, pervious green spaces with vegetation, a small ecological pool, and a waterscape connected to a water course that runs outside the campus. With these various urban space elements, the campus environment resembles a miniature city.
Contents and background of experiment measurements
For the background of this experiment, we chose two different sets of weather conditions common during summer. We measured solar irradiance and cloud cover over four time periods between 9:00 and 17:00 on June 5, 2017, which was a cloudy day, and on June 25, 2017, which was a sunny day. We chose three measurement locations with a high degree of pedestrian and vehicle traffic and one on a building rooftop: A, B, C, and D. These locations included various surface textures, such as grass, brick pavement, concrete, and asphalt, which are the most representative material surfaces in urban environments (Fig 1).
Fig. 1. Locations of measurement points
We used the following instruments to collect data. For global solar irradiance, we used an LP PYRA 03 pyranometer. To measure air temperature and humidity at different elevations, we used HOBO temperature/relative-humidity data loggers. The comprehensive heat-index parameters (air temperature TA, globe temperature TG, and relative humidity RH) were measured using a handheld wet-bulb globe thermometer. Heat flux sensors were used to measure the heat flux of surface materials. We used thermocouple wires to measure surface temperatures and then used CR1000 and CR800 data loggers to monitor and record the data, as shown in Table 2.
Settings of numerical model
We constructed a model of the campus using Rhino and then used computational fluid dynamics (CFD) software to simulate and analyze steady-state wind field characteristics under extreme weather conditions in the summer.
For the environmental parameter settings, we used the hot and humid summer environment in northern Taiwan for our background values and referred to the values of June 25, 2017 from the Taipei City weather station for our parameter settings. The actual measurements served as the input values for solar irradiance, temperature, humidity, and heat flux. The time intervals were calculated in hours, so there were 24 intervals in a day. Assuming that heat conduction remained constant during the selected time periods, the mean water-vapor flux and latent heat of evaporation of each time period were calculated, with the environmental measurements taken at 12:00 noon as the input parameters of the CFD simulations. The wind field settings were based on the mean wind speed on that day as reported by the weather station. Turbulence in the wind field analysis was modeled with the standard k-ε closure.
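For reference, the standard k-ε closure solves two transport equations, for the turbulent kinetic energy k and its dissipation rate ε. The form below, with the usual Launder-Spalding constants, is a common incompressible statement and is assumed rather than quoted from the CFD package used in the study:

$$\frac{\partial k}{\partial t} + u_j \frac{\partial k}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon$$

$$\frac{\partial \varepsilon}{\partial t} + u_j \frac{\partial \varepsilon}{\partial x_j} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k} P_k - C_{2\varepsilon}\frac{\varepsilon^2}{k}$$

with eddy viscosity $\nu_t = C_\mu k^2/\varepsilon$ and the standard constants $C_\mu = 0.09$, $C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$.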
Measurement Results on Site
With a subtropical summer climate as the background, we conducted mobile measurements on campus from 9:00 to 17:00 and obtained two sets of data, one from a sunny day and the other from a cloudy day. This enabled us to examine the influence of various urban planning elements on the heat budget of the site and understand their relationships with their surrounding environments. We used solar irradiance, surface heat flux and temperature, and relative temperature and humidity to conduct a preliminary comparison of the influences of the urban planning elements on the thermal environment of the site.
Influence of urban planning elements on physical parameters of environment
Based on our results, we can characterize the thermal behavior of various road surface materials and their influence on heat in the surrounding environment. We arrived at the following conclusions: as the measured data show, the temperatures on these 2 days were similar, yet the surface temperatures on the sunny day (June 25) were clearly higher than those on the cloudy day (June 5).
Among the thermal properties of the materials, thermal resistance R, the thermal conductivity coefficient k, the heat storage coefficient S, and heat storage capacity impact the surrounding air temperature and the thermal energy release of road surface materials. Asphalt has high thermal conductivity, which causes it to store large amounts of heat during the day and release large amounts of heat during the night. It is thus one of the primary factors degrading thermal comfort in surrounding environments. Improvements can be made to the heat storage capacity of asphalt concrete road surfaces by adjusting the thermal conductivity coefficient and specific heat capacity of the material.
During some time periods, the surface temperatures of brick pavement may be close to or higher than those of asphalt concrete because the thermal inertia and heat storage coefficient of brick pavement are second only to those of asphalt and because brick pavement also accumulates heat. However, brick pavement has a lower absorption rate and slower evapotranspiration, so air temperatures above brick pavement are lower than those above asphalt. As a result, brick pavement and grass can reduce the amount of heat above road surfaces and thereby improve the thermal environment of surrounding areas (Fig 2). Grass can absorb substantial amounts of heat but does not store all of it internally, which helps lower temperatures. Although it is inevitable that urban environments will have more road surfaces, permeable or natural pavement materials are recommended to reduce the impact of road surfaces on urban climate. Such pavement materials conduct heat from the surface layer to the soil underneath, reduce surface temperatures as well as the amount of heat stored, and mitigate the influence of solar radiation on urban thermal environments.
Our psychrometric chart and enthalpy diagram reveal that the total enthalpy of asphalt concrete is higher than that of other road surface materials. Although grass presented lower temperatures, it exhibited the highest moist enthalpy value. This shows that a mutually influencing relationship exists between the enthalpy of the surrounding air and the heat storage capacity of surface materials (Fig 3, 4).
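The moist enthalpy referred to here can be estimated with the standard psychrometric relation for the specific enthalpy of humid air per kilogram of dry air. This is a textbook formula given for reference and is assumed rather than quoted from the paper:

$$h = 1.006\,T + w\,(2501 + 1.86\,T) \quad [\mathrm{kJ/kg\ dry\ air}]$$

where T is the dry-bulb temperature in °C and w the humidity ratio in kg of water vapour per kg of dry air. The second term carries the latent heat of the water vapour, which is why humid air over grass can have a higher enthalpy than warmer but drier air over brick.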
Analysis and Discussion of Heat Storage in Site Simulations
We used measurement analysis and CFD simulation software to investigate the site environment further.
Influence of thermal properties of urban planning elements on physical environment and space
The studied site was divided into two zones, i and ii. Zone i consisted mainly of grass, trees, and pavement. Around noon, temperatures dropped around the trees, the cool air among the tree canopies diffused outward, and temperatures rose with increasing distance from the trees. Zone ii contained, in addition to pavement, the nearby buildings, which had absorbed large amounts of solar heat and exhibited high temperatures. The building masses stagnated the wind field, so the cool air from the trees could not exert its full effect (Fig 5, 6). The results of this study indicate a close relationship between heat and material type that generates environmental hotspots. These hotspots change depending on the weather; some move, others intensify, and still others display a reduced capacity to store heat. This makes hotspots a difficult phenomenon to predict.
Finally, the study results indicate that greater building density will result in a greater heating effect, such that the cooling effect of trees and vegetation will be outweighed. Thus, in urban planning and design, the density of heat-accumulating bodies should be decreased so as to reduce the amount of heat accumulated between buildings. Green spaces should be designed to include good ventilation so that the cool air from the trees and vegetation can effectively regulate the microclimate.
Influence of wind on heat storage in urban environments
Airflows can be found near the ground throughout urban environments, and the factors that influence the wind environment are varied and complex. Airflow has a wide scope of influence. In recent years, many researchers and organizations have examined the issue of using wind to improve urban environments. For instance, wind can enhance ground cooling efficiency, convective wind can diffuse exhaust fumes, and convection and heat exchange can alter effective air temperature and regulate climates and environments. Many issues involving urban waste heat or thermal comfort have been effectively improved. This study further discovered that while wind direction and speed may resolve existing environmental hotspots, they can also form new hotspots. The simulations show that increasing wind speed does diffuse heat into the surroundings, which lowers temperatures in the microclimate (Fig 7). In the hot and humid conditions of subtropical urban environments, the evapotranspiration associated with parks and green spaces has a profound impact on thermal comfort. Areas with high moist enthalpy make people feel uncomfortable and also generate environmental hotspots unlike those caused by sensible heat (Fig 8).
The results of this study show that grassy surfaces fall within the uncomfortable zone (Fig 9). While the air temperatures above brick pavement were higher than those above grass, the air above brick pavement was less humid, thereby placing brick pavement closer to the comfort zone in the psychrometric chart, with a small portion within the comfort zone. Thus, in extreme weather conditions of high temperatures and high humidity, the air above grassy surfaces is actually less comfortable than that above brick pavement. We therefore recommend brick pavement in place of some of the grassy surfaces in areas with high humidity and no wind so as to prevent the latent heat released by evapotranspiration from creating environmental hotspots.
Assessment of hotspot distribution in site environment
This study measured three influencing factors of heat storage to analyze and assess hotspots on campus.
Hotspot A is located in a space between two buildings. Because the space is very narrow, the airflow there is poor, which prevents the heat in the air from dissipating. The ambient temperature was thus around 38.5 °C.
Hotspot B is located in a corner between buildings, where there is a variety of plants and multiple layers of vegetation. Evapotranspiration releases moisture into the air, and the area is in a wind shadow, so the heat cannot dissipate.
Hotspot C is surrounded by concrete building structures. When wind hits the surrounding structures, a wake forms on the leeward side. However, when too many wakes collide in the air, the airflows are blocked, which causes the wind to slow down. Added to the heat from the roofs, the temperature here increases to around 37.5 °C.
Hotspot D is located at the south entrance of the campus. The building on the east side leaves this location almost windless at ground level. Airflow carries heat off the roof, and because the art center is a three-story building, the wind sweeps downward and rebounds, returning the roof heat to ground level. The dual heating effects of the two buildings made this location an environmental hotspot. In the afternoon, as the direction of the sun changes, the temperature at the hotspot gradually drops (Fig. 10).
Conclusion
In hot and humid climates, the thermal properties of urban planning elements have significant influence on the heat storage of urban environments. Hotspots form and change with irradiation duration and the wind field. These should be taken into consideration in the design of urban spaces.
Convection and heat exchange in wind fields can lower the overall temperature of an environment and reduce ground surface temperatures. They can mitigate the impact of road surfaces on the heat storage of surrounding areas both during the day and during the night.
In extreme weather conditions of high temperatures and high humidity, evapotranspiration from water bodies and grass creates high moist enthalpy, and it is difficult for the enthalpy in the air to dissipate. This means vegetation between closely-spaced buildings and green spaces with low wind speed are likely to become environmental hotspots. | 2021-04-17T14:56:12.288Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "111bdbd6af6700b652501dc713027686b4b27088",
"oa_license": null,
"oa_url": "https://doi.org/10.12720/sgce.8.6.763-772",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "111bdbd6af6700b652501dc713027686b4b27088",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
253961077 | pes2o/s2orc | v3-fos-license | High-Risk Non-Small Cell Lung Cancer Treated With Active Scanning Proton Beam Radiation Therapy and Immunotherapy
Purpose Non-small cell lung cancer (NSCLC) is a deadly malignancy that is frequently diagnosed in patients with significant medical comorbidities. When delivering local and regional therapy, an exceedingly narrow therapeutic window is encountered, which often precludes patients from receiving aggressive curative therapy. Radiation therapy advances including particle therapy have been employed in an effort to expand this therapeutic window. Here we report outcomes with the use of proton therapy with curative intent and immunotherapy to treat patients diagnosed with high-risk NSCLC. Methods and Materials Patients were determined to be high risk if they had severe underlying cardiopulmonary dysfunction, history of prior thoracic radiation therapy, and/or large volume or unfavorable location of disease (eg, bilateral hilar involvement, supraclavicular involvement). As such, patients were determined to be ineligible for conventional x-ray–based radiation therapy and were treated with pencil beam scanning proton beam therapy (PBS-PBT). Patients who demonstrated excess respiratory motion (ie, greater than 1 cm in any dimension noted on the 4-dimensional computed tomography simulation scan) were deemed to be ineligible for PBT. Toxicity was reported using the Common Terminology Criteria for Adverse Events (CTCAE), version 5.0. Overall survival and progression-free survival were calculated using the Kaplan-Meier method. Results A total of 29 patients with high-risk NSCLC diagnoses were treated with PBS-PBT. The majority (55%) of patients were defined as high risk due to severe cardiopulmonary dysfunction. Most commonly, patients were treated definitively to a total dose of 6000 cGy (relative biological effectiveness) in 30 fractions with concurrent chemotherapy. Overall, there were a total of 6 acute grade 3 toxicities observed in our cohort. Acute high-grade toxicities included esophagitis (n = 4, 14%), dyspnea (n = 1, 3.5%), and cough (n = 1, 3.5%). No patients developed grade 4 or higher toxicity. The majority of patients went on to receive immunotherapy, and high-grade pneumonitis was rare. Two-year progression-free and overall survival was estimated to be 51% and 67%, respectively. COVID-19 was confirmed or suspected to be responsible for 2 patient deaths during the follow-up period. Conclusions Radical PBS-PBT treatment delivered in a cohort of patients with high-risk lung cancer with immunotherapy is feasible with careful multidisciplinary evaluation and rigorous follow-up.
Introduction
The lung cancer mortality rate has declined substantially in recent years owing in large part to improved treatment options. 1 Despite these advances, lung cancer continues to be the leading cause of cancer-related death in the United States, making up approximately 25% of all cancer fatalities. 2 A unique challenge in the treatment of non-small cell lung cancer (NSCLC) is the narrow therapeutic window in high-risk patients. Delivering aggressive concurrent chemoradiation therapy in patients who on average are quite elderly and have significant cardiopulmonary dysfunction is challenging. 3,4 The fragility of such patients is most notably manifested by the survival detriment observed with dose escalation in Radiation Therapy Oncology Group (RTOG) trial 0617, which serves as a cautionary tale. 5 Nevertheless, RTOG 0617 demonstrated several critical factors implicit in modern radiation therapy management of NSCLC. First, lung dose, specifically V20 Gy, is significantly associated with severe pulmonary toxicity. 6 Second, heart dose is correlated with overall survival (OS). 6 Third, intensity modulated radiation therapy (IMRT) can improve upon the aforementioned dose-volume histogram parameters and optimize clinical outcomes. 6 Hence, it is postulated that advanced proton therapy techniques may translate to further clinical improvements.
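As a concrete illustration of the dose-volume parameter cited above, lung V20 Gy is the percentage of lung volume receiving at least 20 Gy. The minimal sketch below computes it from a voxel-wise dose array; the dose values are synthetic and not from this study:

```python
# Minimal sketch: computing lung V20 Gy from a voxel-wise dose array.
# The dose grid here is synthetic; real plans would export it from the TPS.
import numpy as np

rng = np.random.default_rng(1)
lung_dose_gy = rng.gamma(shape=2.0, scale=6.0, size=100_000)  # synthetic voxel doses

v20 = 100.0 * np.mean(lung_dose_gy >= 20.0)  # % of lung volume receiving >= 20 Gy
mean_lung_dose = lung_dose_gy.mean()
print(f"V20 = {v20:.1f}%, mean lung dose = {mean_lung_dose:.1f} Gy")
```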
Proton beam therapy (PBT) has shown the ability to reduce cardiopulmonary radiation exposure compared with IMRT in numerous dosimetric studies. [7][8][9][10][11][12] Clinical results have been reported by multiple institutional series and have explored oncologic and toxicity outcomes of concurrent PBT chemoradiation for locally advanced NSCLC. [13][14][15][16][17] However, the majority of publications to date have used passive scatter PBT as opposed to pencil beam scanning (PBS) systems, and to our knowledge none have reported treatment planning using Monte Carlo algorithms exclusively. Moreover, with improvements in systemic therapy, particularly immunotherapy, in the localized and metastatic setting, the therapeutic landscape of NSCLC has dramatically changed. [18][19][20] The majority of patients with NSCLC receiving chemoradiation will now go on to receive immunotherapy, either as consolidation or upon disease progression. With the addition of immunotherapy, there are concerns regarding an increased risk of overlapping side effects, particularly pneumonitis, in this patient population. However, little data exists in this space for those receiving PBT. 21 Despite the significant aforementioned advances in the management of NSCLC in the modern era, these improvements can be reduced by the effects of a pandemic. The cardiopulmonary frailty of patients with highrisk NSCLC, especially while undergoing immunosuppressive or immunostimulatory therapy, places them in arguably the highest-risk COVID-19 category, which has been demonstrated in a recent meta-analysis. 22 In this article, we review the outcomes of patients with diagnoses of high-risk NSCLC treated with PBS-PBT followed by immunotherapy during the COVID-19 pandemic.
Patient eligibility
This single institutional review of consecutive patients treated for NSCLC was approved by the local institutional review board (2017-0695). All patients were evaluated by a multidisciplinary thoracic oncology team which included radiation oncology, interventional pulmonology, medical oncology, and thoracic surgery. All patients underwent diagnostic tests including computed tomography (CT) scan, positron emission tomography (PET)/CT scan, magnetic resonance imaging or CT scan of the brain, and pulmonary function tests. All patients underwent bronchoscopy and endobronchial ultrasound for biopsy of the primary mass and lymph node sampling. Patients were staged using the American Joint Committee on Cancer eighth edition staging system. Patients with implanted cardiac devices were not PBT candidates based on institutional practice. Patients were determined to be high risk if they had severe underlying cardiopulmonary dysfunction, history of prior thoracic radiation therapy, and/or large volume or unfavorable location of disease (eg, bilateral hilar involvement, supraclavicular involvement).
Simulation and contouring
All patients underwent CT-based radiation treatment planning simulation with accompanied 4-dimensional computed tomography (4D-CT) for assessment of respiratory motion (GE LightSpeed RT16). Respiratory motion management in the form of abdominal compression was used in cases of excess motion, which was assessed at the time of simulation. A contrast CT scan was also obtained at the time of simulation and fused with the primary simulation CT scan. Diagnostic imaging including CT or PET/CT was fused with the simulation CT scan to assist in target volume delineation. Patients who demonstrated excess respiratory motion (ie, greater than 1 cm in any dimension noted on the 4D-CT simulation scan) were deemed to be ineligible for PBT. Target volume contours were generated using previously defined definitions from the RTOG 1308 protocol. 23 Elective nodal radiation was not incorporated for any definitive treatment. Organs at risk (OARs) were contoured and included lungs, heart, esophagus, spinal cord, brachial plexus, proximal bronchial tree, and skin (3 mm).
Treatment planning and delivery
Dose calculations and planning optimization were performed on the average phase of the simulation 4D-CT. Proton plans were generated using RayStation version 8A (RaySearch Laboratories, Stockholm, Sweden). Beam angles were created to optimize target volume coverage, mitigate dose degradation due to motion or geometric changes, and minimize exposure of normal structures (Fig. 1). Single-field optimization was used for all PBT plans. All plans were optimized using a Monte Carlo dose calculation algorithm, which has rarely been used in prior publications on PBT for lung cancer. Apertures were created using the Adaptive Aperture multileaf collimator system (Mevion Medical Systems, Littleton, MA).
Planning overrides were used for artifacts created by fiducial markers, if present. Quality assurance 4D-CT scans were obtained at regular intervals, typically every 1 to 2 weeks during treatment (Supplementary Figure E1). PBT replans were performed on the 4D-CT scan average phase to ensure intrinsic anatomic changes during treatment did not significantly alter target coverage or OAR dose constraints. Replans were performed if target coverage or doses to OARs deviated from institutional standards. All patients were treated with standard fractionation. Patients were set up using orthogonal kV imaging with gross setup to bony anatomy and subsequent final adjustment based on the bronchopulmonary tree with or without fiducial marker adjustment.
Follow-up
Patients were seen for weekly on-treatment visits, and acute toxicity was defined as that occurring within 90 days of treatment completion. Late toxicity was defined as that occurring greater than 90 days after radiation therapy completion. Toxicity was reported using the Common Terminology Criteria for Adverse Events, version 5.0. All toxicities were graded by the attending radiation oncologist. Patients were typically followed using serial CT scans and multidisciplinary clinical examination at 3-month intervals for the first 2 years and every 6 to 12 months thereafter.
Statistical analysis
The Kaplan-Meier method was used to calculate OS and progression-free survival (PFS). All patients were included for the acute toxicity analysis. Patients who did not progress during treatment and were not lost to follow-up were included in our OS and PFS analysis. OS was defined as the time from the end of treatment to death from any cause. PFS was defined as the time from the end of treatment to disease progression or death from any cause. Median follow-up was defined as time from the end of treatment to last clinical follow-up or death. Local recurrence was defined as any new or progressing disease within the radiation treatment field per Response Evaluation Criteria in Solid Tumors, version 1.1. Regional recurrence was defined as disease in the adjacent mediastinum or ipsilateral lobe(s) outside of the radiation field. Distant recurrence was defined as any recurrence not meeting the local or regional recurrence definition. All statistical analysis was performed using SPSS, version 24 (IBM, Armonk, NY).

Figure 1. Seventy-two-year-old female patient with a diagnosis of non-small cell lung cancer of the left lower lobe, squamous cell carcinoma histology, clinical stage T3 N0 M0, stage IIB. She was deemed medically inoperable due to significant pulmonary dysfunction (FEV1 of 37% and diffusing capacity of the lungs for carbon monoxide 31%) and was treated with proton beam therapy to a total dose of 6000 cGy in 30 fractions with concurrent chemotherapy. Color-wash dose distribution is demonstrated for (a) the 3-dimensional conformal radiation therapy comparative plan, (b) the intensity modulated radiation therapy comparative plan, and (c) the proton beam therapy plan using the Monte Carlo algorithm.
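As a sketch of the survival analysis described above, the snippet below fits a Kaplan-Meier estimator with the open-source lifelines package; the study itself used SPSS, and the follow-up data here are hypothetical:

```python
# Hedged sketch: Kaplan-Meier OS estimation as described in the text.
# Uses the `lifelines` package (the paper used SPSS); data are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "months": [3.1, 8.4, 12.0, 17.4, 24.0, 26.5],  # months from end of treatment
    "died":   [1,   0,   1,    0,    0,    1],     # 1 = death from any cause, 0 = censored
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["months"], event_observed=df["died"], label="OS")
print(kmf.survival_function_)   # stepwise survival estimate over follow-up time
print(kmf.predict(24.0))        # e.g., the estimated 2-year OS probability
```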
COVID-19 analysis
We define the beginning of the COVID-19 pandemic as March 1, 2020. All patients included in the COVID-19 portion of the analysis of the present study were either treated during the pandemic or seen in follow-up thereafter. We reviewed the following COVID-19 data for this patient cohort: infection rate, severity of infection, and death rate (confirmed and suspected). We also reviewed the vaccination status of surviving patients.
Patient and tumor characteristics
A total of 29 patients with high-risk NSCLC were consecutively treated from 2018 to 2020 with thoracic PBS-PBT. The median age of the cohort was 70 years (range, 49-86 years). A significant proportion of patients (24%) required supplemental oxygen before treatment due to severe baseline pulmonary disease. The most common diagnosed histology was adenocarcinoma (n = 14). Over half of the cohort had diagnoses of unresectable, locally advanced NSCLC stage IIIB-C. Most patients were considered high risk due to severe cardiopulmonary dysfunction (n = 16, 55%) or tumor location or size (n = 13, 45%). A notable proportion were considered high risk due to prior thoracic radiation therapy (n = 7, 24%), all for metachronous nonrelated malignancies. Of note, the median dose of the previous radiation therapy course was 6000 cGy (range, 3000-7380 cGy). The median volume of disease as measured by planning target volume was 471 cc (range, 45-1286 cc). Table 1 illustrates patient and tumor characteristics (dosimetric data can be found in Supplementary Table E1). Figure 1 illustrates comparative radiation therapy plans for a patient treated in our cohort with underlying severe pulmonary disease.
The vast majority of patients were treated with definitive intent (n = 26, 90%). Patients were treated to a median total dose of 6000 cGy (relative biological effectiveness) in 30 fractions (4500-6600 cGy relative biological effectiveness). The majority of patients (n = 25, 86%) received chemotherapy, with 84% receiving it concurrently. During treatment, 28% of patients (n = 8) had geometric target volume changes that significantly altered coverage and/or OAR dose constraints necessitating a PBT replan. Nearly all of these patients (n = 7) received concurrent chemotherapy and were evenly distributed between adenocarcinoma (n = 3) and squamous cell carcinoma (n = 4) histology. Interestingly, none of the patients who required a replan went on to develop local or regional disease recurrence, perhaps reflective of the rapid treatment response identified during treatment. Table 2 lists specific treatment characteristics. Supplementary Figure E1 demonstrates radiation therapy changes seen during radiation treatment (ie, 3-week Quality Assurance-CT scan) as well as 2 years following treatment completion.
Immunotherapy characteristics
The majority of eligible patients (20 of 21) went on to receive immunotherapy either for consolidation or upon disease progression. The most common immunotherapy used was durvalumab (n = 13). Ineligibility for immunotherapy was documented for the following reasons: (1) radiation delivered without radical intent (n = 3), (2) contraindications due to systemic autoimmune diseases (n = 2), (3) early-stage disease (n = 2), (4) targeted therapy used (n = 1), and (5) rapid disease progression (n = 1). Grade 2 or higher pneumonitis was identified in a total of 7 patients, 2 of whom were found to have grade 3 toxicity. Grade 2 or higher pneumonitis occurred at a median of 3.75 months following completion of radiation. Of these cases, 3 were attributed to radiation, 2 were attributed to immunotherapy, and 2 had an unclear etiology (ie, immunotherapy vs radiation). Of note, immunotherapy-related grade 3 thyroiditis and grade 3 colitis was identified in 2 additional patients.
Late PBT toxicity
A total of 7 high-grade (grade 3+) toxicities were observed in 5 patients. Nearly all of these toxicities were pulmonary and had the following distribution: pneumonitis (n = 2), pleural effusion (n = 2), lung infection (n = 1), dyspnea (n = 1), and esophageal stricture (n = 1). No grade 4 or higher late toxicities were observed. High-grade pneumonitis was attributed to immunotherapy in 1 case and had an unclear etiology (ie, immunotherapy vs radiation) in the other. The late grade 3 esophageal stenosis occurred in a patient who previously underwent a course of definitive thoracic irradiation, highlighting the risk of late normal tissue toxicity with reirradiation. The most commonly observed low-grade (≤grade 2) late toxicities were cough (n = 9, 35%), fatigue (n = 9, 35%), and chest wall pain (n = 8, 30%). Low-grade acute toxicities demonstrated a clear improvement over time with fatigue, esophagitis, and radiation dermatitis dissipating with longer follow-up. Of note, 3 patients were lost to follow-up shortly after completion of radiation treatment and were excluded from late toxicity and survival analysis. Late toxicity information is illustrated in Table 4.
Oncologic outcomes
With a median follow-up of 17.36 months, median OS and PFS have not been reached. The 1- and 2-year estimated PFS was 60% and 51%, respectively (Fig. 2A). The 1- and 2-year estimated OS was 76% and 67%, respectively (Fig. 2B). Notably, progression of disease was typically observed within the first 6 months, and for those patients who remained disease free, control appeared to be durable with extended follow-up. The predominant pattern of failure was distant progression, with only 1 case of regional recurrence identified. A total of 10 patients
COVID-19 effect
A total of 24 patients were included in our COVID-19 analysis. Of these, only 2 were found to have polymerase chain reaction−documented COVID-19 infections. For those who were found to have COVID-19 infections, 1 patient required hospitalization and subsequently died of their infection, and the other patient recovered quickly. In addition, due to difficulty with respect to follow-up and availability of polymerase chain reaction testing during the initial phase of the pandemic, 1 additional patient died at an outside hospital with a suspected COVID-19 infection but was never tested. Of the remaining 22 patients, 6 individuals died before the availability of the COVID-19 vaccine. A total of 16 patients were alive at the time of last follow-up with only 9 being vaccinated.
Discussion
The present article reports clinical outcomes for a cohort of patients with high-risk lung cancer at the intersection of novel advanced active scanning PBT in concert with immunotherapy delivered during the COVID-19 pandemic. The high-risk nature of our cohort reflects a more generalizable patient population that is often not reported upon in clinical trials. In the present study, over half of the cohort had diagnoses of unresectable stage IIIB-C disease. Furthermore, 24% of the patients required supplemental oxygen at baseline, 24% had prior thoracic irradiation, and over half carried a diagnosis of severe cardiopulmonary dysfunction. Older patients with severe cardiopulmonary disease typically represent the rule rather than the exception in the average lung cancer clinical encounter. Moreover, prior publications not surprisingly demonstrate that the risk of radiation-related toxicity can escalate as age and medical comorbidities increase. 24,25 As a consequence, these patients may be exquisitely sensitive to low doses of radiation to thoracic organs. 25 As such, many practitioners often recommend against aggressive definitive intent locoregional therapy in effort to avoid potential harm to the high-risk patient. Nevertheless, locoregional progression of lung cancer is strongly associated with morbidity and decreased quality of life and is a leading cause of lung cancer−related death. 26 Taken as a whole, it is critical to widen the therapeutic ratio in this patient population and a theoretical method of doing so is improved radiation technique such as the use of PBS-PBT.
Fundamentally, it is the physical dose superiority afforded by the Bragg peak that makes PBT an attractive radiation option particularly when minimization of integral radiation dose exposure is critical. The use of PBT in the treatment of locally advanced NSCLC has been well reported in the literature and has prompted the randomized control trial, RTOG 1308, comparing PBT with IMRT. [15][16][17] The vast majority of PBT lung cancer literature uses older passive scatter technology, whereas we describe the clinical results of modern PBS delivery in concert with Monte Carlo-based planning, which will likely become standard for thoracic PBT in the near future. 27 Despite improvements in conformality with PBS-PBT, it is critical to monitor tumor response during treatment, particularly in heterogeneous tissue such as the lung, to avoid target dose degradation and OAR overdosage. This is demonstrated by the fact that 28% of our cohort had geometric changes that required PBT replans during treatment. Ultimately, the comparative effectiveness of PBT versus x-ray−based therapy will be determined by randomized trials with particular attention placed on PBT toxicity mitigation, which is all the more important for a high-risk cohort such as that described in this article. Without question, the most meaningful therapeutic advance in lung cancer in the last several decades has been the development of immunotherapy. In cases of locally advanced NSCLC following curative treatment, the predominant pattern of failure has historically been distant, and the use of effective immunotherapy has yielded dramatic PFS and OS improvements. However, concerns regarding overlapping toxicities with radiation therapy, specifically pneumonitis, persist and appear to be higher than was initially reported in the PACIFIC trial. 18,19,28 In the present study we identified limited severe radiation therapy− and immunotherapy-related pneumonitis (n = 2), with 5 additional patients with diagnoses of low-grade pneumonitis. Moreover, despite underlying comorbidities, the majority of patients who were eligible for immunotherapy went on to receive it after upfront radiation. It would appear with careful multidisciplinary evaluation and close follow-up, curative radical PBS-PBT in concert with immunotherapy in a high-risk cohort is feasible with manageable toxicity.
In 2020, patients with lung cancer simultaneously faced the deadliest cancer in America and the deadliest pandemic in modern history, often while immunosuppressed from antineoplastic treatment and handicapped by underlying medical comorbidities. As the COVID-19 pandemic initially flared, intense management decisions were made on the fly as our understanding of the virus evolved. 29,30 With little effective treatment identified early in the pandemic, oncologists sometimes faced the decision of minimizing viral exposure or offering curative lung cancer treatment. 31 As one of the first countries hit with COVID-19, Italy reported the consequences of the infection on radiation therapy with a 17% reduction in radiation treatments, but despite this drop, nearly half of patients who received a diagnosis of COVID-19 continued radiation therapy without interruption. 32 In contrast, in data reported from the initial epicenter in the United States, New York City, the severity of COVID-19 infection in patients with lung cancer was more grim, with a 62% hospitalization rate and a 25% mortality rate in consecutive patients treated from March 12, 2020, to May 6, 2020, at Memorial Sloan Kettering Cancer Center. 33 Although cancer-specific factors did not seem to affect the severity of infection, patient-specific factors such as smoking and pulmonary disease dramatically increased the risk of COVID-19 severity. Such comorbid factors placed the high-risk patient population of the present study at a profound risk during the pandemic with 62% of patients having a greater than 30-pack-year smoking history and nearly 25% on pretreatment supplemental oxygen. In the present study, COVID-19 was responsible for 1 confirmed and 1 suspected death. Fortunately, the significant clinical impact seen at Memorial Sloan Kettering Cancer Center during the peak in New York City was not observed in the present cohort in Washington, DC.
Limitations of the present study include its retrospective nature, limited patient numbers, and heterogeneous cohort. It is difficult to remark on the oncologic outcomes of the present study relative to previously published literature given the heterogeneity of our patient population and lack of similar publications for high-risk patients. 34 Certainly the high-risk nature of this group poses significant limitations on life expectancy. Nevertheless, we included a wide range of lung cancer stages some of which would be expected to achieve long-term disease control. Thus, direct comparison to previously published PBT literature 13 or modern radiation therapy followed by consolidative immunotherapy 18 is challenging. 17,19 Moreover, the occurrence of the COVID-19 pandemic as a competing cause of mortality makes interpretation even more nebulous.
Conclusion
The present article reports a cohort of patients with high-risk lung cancer at the juncture of novel, advanced, active scanning PBT in concert with immunotherapy. Modern PBS-PBT with Monte Carlo-based planning was delivered for curative intent. Close monitoring of tumor changes was required as 28% of cases required a PBT replan during treatment. Despite their high-risk status, the vast majority of patients went on to receive immunotherapy and only 2 cases of severe pneumonitis were identified. A total of 6 acute grade 3 toxicities were observed, most commonly esophagitis. Seven severe late toxicities were identified, most commonly pulmonary in origin. Infection with COVID-19 was confirmed or suspected to be responsible for 2 patient deaths during the follow-up period. Two-year PFS and OS was estimated as 51% and 67%, respectively. Radical PBT treatment delivered in curative fashion in a cohort of patients with high-risk lung cancer appears to be feasible with careful multidisciplinary evaluation with rigorous follow-up. | 2022-11-26T17:06:14.935Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "890ea73eb260d59b08e33399dc7d9c5a6da6d1f0",
"oa_license": "CCBYNCND",
"oa_url": "http://www.advancesradonc.org/article/S2452109422002317/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0793ed9f847c2a4a7db790286d7c39543350598",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213570570 | pes2o/s2orc | v3-fos-license | Dendroindication of ecoclimatic condition in forest remediation area within Northern Steppe of Ukraine
We analyzed ring width, latewood width and earlywood width of Pinus sylvestris trees under normal and flood conditions in Dnipropetrovsk region, within the Northern Steppe of Ukraine. Precipitation from February to August appears to be the most stable climatic factor influencing Scots pine growth rate and causing the difference between maximum and minimum ring width in normal conditions. Meteorological conditions were mainly associated with whole-ring and earlywood width values, and less associated with latewood width values. We assessed the effect of climatic signals on tree-ring growth in living and dead trees under normal and flood conditions using correlation and response-function analyses. Average annual temperatures affected tree growth negatively in normal conditions and tree increment positively in flood conditions. Annual precipitation correlated positively with the ring width and earlywood width series in normal conditions, but negatively with these series in flood conditions.
Introduction
Forest ecosystems are currently exposed to a wide range of natural and anthropogenic disturbances caused by global warming and climate changes. These disturbances create a real hazard not only to the state of forests and their beneficial functions, but also to human society as a whole (Lindner et al., 2010). In this work, the effect of environmental changes has been identified by the growth response of trees. Vegetation cover in general and woody plants in particular are among the first to respond to negative changes in the air and underground environment. Fire, wind, flooding and drought act as the main abiotic negative factors affecting forest ecosystems. Flooding tolerance was evaluated in terms of tree growth response, level of injury sustained and survival (Kozlowski, 1997) in relation to flooding characteristics and used to express capacity to survive in anoxic conditions (Hook, 1984).
As defined by the same author, mechanisms of "flooding tolerance" include the associated anatomical and physiological adaptations (Kozlowski, 1984; Armstrong et al., 1994; Glenz et al., 2006; Brygadyrenko, 2015, 2016). The process of tree survival under flood conditions primarily depends on their ability to control their metabolism, reach available energy resources, obtain basic gene material, synthesize macromolecules and their ability to protect themselves against post-anoxic injuries. Processes of morphological and physiological adaptations of trees that grow in flood conditions have been studied in numerous works of European scientists (Chirkova & Gutman, 1972; Hook, 1984; Hughes et al., 1997).
Lack of oxygen caused by flooding is accompanied by accumulation of toxic metabolites and carbon dioxide; it inhibits the formation of new roots and branches and the development of existing roots and mycorrhizae (Ewing, 1996; Kozlowski, 1997). As a result, it disturbs the vital metabolic processes of the plant body and inhibits photosynthesis, with a proportional decrease in productivity and a decrease in leaf mass. Tree species are not physiologically adapted to such conditions and could die as a result of the prolonged anoxic environment produced by changes in the channel pattern in response to a flood.
Location of coal mines near river basins and enclosed water bodies causes their ecological deformation and leads to the death of forest stands because of flooding; as a result, anthropogenic landscapes with unpredictable development prospects emerge, a situation especially typical of the western part of the Donetsk coal basin, Dnipropetrovsk region (Pakhomov et al., 2008). Tree-ring data provide rare opportunities to understand the ecological dynamics of plant communities (Brienen et al., 2006). Analyses of annual growth rings can be used as a source of indirect data to recognize harmful effects on the environment and can indicate the main directions for environmental improvement (Badeau et al., 1996; Borgaonkar et al., 2009). The influence of the environment on forest growth has been examined in several studies, many of which have focused on understanding the relation between radial increment and meteorological factors such as amount of precipitation and air temperature (Cleaveland et al., 2003; Villanueva et al., 2005). Most studies were based on analyses of total ring-width series when detecting plant growth responses to environmental changes. This approach has identified a climatic signal covering a period of several months, including the growing season and the preceding months (Fritts, 1991; Speer, 2010).
In addition, analysis of radial increment in the earlywood and latewood parts of the total ring width helps in understanding seasonal climate variations and their influence on the formation of biomass production (Villanueva-Díaz et al., 2007; Torbenson et al., 2016). Usually earlywood develops at the beginning of the growing season, whereas latewood forms at the end of summer or in early fall (Fritts, 2001; Vaganov et al., 2006; Griffin et al., 2013; Carlón-Allende et al., 2018). Latewood has higher density and is usually darker than earlywood. Latewood percentage is one of the most widely used wood quality characteristics (Larson et al., 2001; Kretschmann et al., 2007). The proportion of latewood has a strong impact on wood specific gravity in conifers (Zobel & Jett, 1995; Jayawickrama et al., 2011).
Scots pine (Pinus sylvestris L.) is a tree species very common in forests throughout Europe, and particularly within Ukraine, and it has therefore been widely used as timber. Based on cluster analysis and expert knowledge, Glenz et al. (2006) classified 65 Central European tree and shrub species into 5 classes by their flooding tolerance. According to this classification, Scots pine falls into the second-lowest class, i.e., flood-intolerant trees. The purpose of our research was to study the changes in whole-ring width, latewood width and earlywood width chronologies of P. sylvestris trees under flooding conditions in the Western Donbass, Dnipropetrovsk region.
Materials and methods
Study area. The study presented was performed in the steppe zone of Ukraine in Western Donbass, Pavlograd district. Data on amount of precipitation and temperature were obtained from the Pavlogradskaya Meteorological Station as historical data for 32 years. This meteorological station is situated in Dnipropetrovsk region (48°05' N, 35°08' E, 91 m a.s.l.). Dnipropetrovsk region is an administrative division in the central part of Ukraine. It is located within the middle and lower stream of the Dnieper River.
The subject of the study was Scots pine trees selected from the temporary sample plots laid out in 1961. Sampling was done at two sites, the control (forest reclamation area with no flooding) and the experimental (zone with mining activity in the areas of forest reclamation). Forest reclamation sites were situated on the right valley wall of the Samara River within a mine zone 5 km from Pavlograd city in the northeast of Dnipropetrovsk region. The control sampling area has flat terrain, an altitude of 112 m a.s.l., a northeast aspect, and a slope of 10%. The flooded sampling area also has flat terrain, with an altitude of 70 m a.s.l., a northeast aspect, and a slope of 0%.
Climatic data and definition of climate-growth relationships. The climate of the region studied is moderately continental, with mild winters having a small amount of snow and frequent thaws (average January temperature -5 °C) and hot, dry summers with frequent rainstorms and strong southern winds (average July temperature +22 °C). The average annual air temperature is 8.1 °C, the temperature of the soil surface is 10 °C. Duration of the period with temperatures above +10 °C is 178 days, and the frost-free period is 187-228 days. Most precipitation falls in the warm period, and the annual quantity of precipitation averages 446 mm. Depth of snow cover reaches 10-15 cm. The period with a stable snow cover continues 3 months and lasts from about December 27 to March 4. Among the negative climatic phenomena there are thaws, windy frosts, dry winds and dust storms.
Monthly averaged temperatures (°C) and monthly sums of precipitation (mm) were used as explanatory variables. The data were used for making a regional series of annual total precipitation and annual temperatures for the period from 1961 to 1991. Mean annual precipitation for the reference period is 506.0 mm and annual temperature averages 8.5, 6.7 and 10.3 °C for mean, minimum and maximum temperature respectively. The number of rainy days averages 161.
The sum of positive average monthly air temperatures was taken as the characteristic of heat supply, and of negative temperatures as cooling. Water availability in the warm and cold parts of the year at the weather station was calculated in exact accordance with the value of the average monthly air temperatures.
We used the method of residual mass curves, which allows determination of the directions of long-term changes in climate elements. Primary processing of the data involved calculating precipitation per month in the cold (XI-III) and warm (IV-X) parts of the year. In deviation calculations we used the average data over the entire weather sequence for one or another part of the year. Precipitation in the warm part of the year was characterized by a significant difference in the amplitudes of its fluctuations compared with the precipitation fluctuations in the cold part of the year. Long-term data on temperature and rainfall observations within the territory studied were applied in a detailed analysis of the relation between increment and climatic factors. The interval of months for which the analysis was conducted covered the period from April of the previous growing season to July of the current one. The "previous" and "current" seasons were used only in relation to the season or year for which the climate-increment comparison was performed in the correlation analysis. According to our observations, xylogenesis of P. sylvestris in the Pavlograd area begins in April and May with cambium activation and the start of earlywood formation at about the same time as swelling of buds begins; it continues during latewood formation in June and July, whereas vegetation may continue to the end of October. Thus, the full dendroclimatic year of P. sylvestris in the study area extends from April of the previous year to October of the current one, and also includes the rest period from November to March.
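A minimal sketch of the residual mass curve method mentioned above: the curve is the running sum of deviations of the modular coefficients K_i = x_i/x̄ from unity, so rising segments mark wet (or warm) phases and falling segments dry (or cool) phases. The annual totals below are hypothetical:

```python
# Minimal sketch of a residual mass (cumulative deviation) curve for precipitation.
import numpy as np

precip = np.array([506, 434, 676, 581, 368, 447, 520], dtype=float)  # hypothetical mm/yr
k = precip / precip.mean()          # modular coefficients K_i = x_i / x_mean
residual_mass = np.cumsum(k - 1.0)  # running sum of deviations; slope sign marks wet/dry phases
print(residual_mass)
```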
The relationship between annual variations in P. sylvestris chronologies and monthly climatic-hydrological variables was established using correlation analysis. The time interval covered the period 1961 to 1991 for the calendar, hydrological, dendrological, and vegetative periods from April of the previous year to July of the current year. The hydrological window covered the period from October of the previous season to September of the current season. For these climatic-hydrological variables, correlation coefficients with increment and their standard errors were calculated and tested for significance with Student's t test.
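A minimal sketch of the correlation analysis just described, computing Pearson's r between an increment series and a monthly climate variable together with its standard error and Student's t statistic; the series are simulated, not the study's data:

```python
# Minimal sketch (simulated data) of the climate-growth correlation analysis:
# Pearson r between a ring-width series and a monthly climate variable,
# with the equivalent Student's t statistic and standard error of r.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ring_width = rng.normal(1.0, 0.3, size=31)                       # 1961-1991, mm (simulated)
may_precip = 40 + 30 * ring_width + rng.normal(0, 15, size=31)   # simulated May rainfall, mm

r, p = stats.pearsonr(ring_width, may_precip)  # correlation and two-sided p-value
n = len(ring_width)
t = r * np.sqrt((n - 2) / (1 - r**2))          # equivalent t statistic, df = n - 2
se = np.sqrt((1 - r**2) / (n - 2))             # standard error of r
print(f"r = {r:.2f}, t = {t:.2f}, p = {p:.3f}, SE = {se:.2f}")
```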
To study temperature and humidity effects on tree growth, we used groups of years characterized by a uniform response of the stands studied. These were termed years of negative and positive anomalies, associated respectively with inhibition or improvement of woody plant increment values.
Field sampling and data collection. Ten living trees from the control sampling area and ten from the flooded sampling area with straight stems were selected as the sample trees. Table 1 presents the properties of the sample trees from the control and flood sampling areas. Scots pine trees in the normal condition had a diameter at breast height of 21.4 ± 3.6 cm (mean ± standard deviation), height of 14.2 ± 1.1 m. Sample trees in flood condition were comparable, with a diameter at breast height of 18.3 ± 2.4 cm, height of 11.5 ± 0.9 m.
Collecting of samples in the stands was carried out by obtaining cross sections of trees at 1.3 m height. The samples of wood from the trunk were cut from its peripheral part on the south side. After selecting model trees, we set the places for boring on the trunk, and the number of cores from one tree was determined after extraction of the first core. The borer was set and introduced perpendicularly into the trunk as near to the root collar as possible, so that the loss of annual layers was minimal and the obtained series of measurements approximated the true age of the model trees as closely as possible.
The sampled trees, aged 32 years, were bored at a right angle to the slope direction so as to avoid reaction wood. Sampling consisted of collecting tree cores with a Pressler increment borer. For each model tree selected, two cores were taken and measured. We recorded the average measurements for trees: total height (H) and diameter at breast height with bark (DBH). Core samples were taken 1.3 m above the root collar. The longest radius, obtained at the line of greatest increment, was established by measuring the distance from the trunk core to the periphery. The cuts were made towards the end of the base on both sides of this line at a distance of 1.5-2.0 cm from the end of the trunk. As a result, the sample took a trihedral shape, which made it possible to detect false rings and clarify the boundaries of the seasonal parts within each annual layer. Preparation and core measurements were carried out in tubular racks. The core inserted into the rack was shaved until the annual layer edges appeared clearly, after which preliminary marking was carried out. Immediately after sampling, the cores were placed in individual paper containers to avoid damage and wood deformation during field research and transportation. In order to reach a target moisture content of 12% prior to testing, all specimens were conditioned in a climate chamber at a temperature of 20 °C and a relative humidity of 65% until constant specimen weights were achieved.
Further work with the samples was carried out in the laboratory. For measurement, the cross sections were smoothed with a sharp knife along the intended measurement lines, after the layers had been divided into decades (control samples).
Individual chronological series of radial increment were calculated for each model tree by cross-dating the data series from all its samples and cores. Extreme or minimal increment values (typical years) were used to check the data accuracy.
The earlywood (EW) and latewood (LW) width for each ring was measured along two radial files (upper and lower portions of the image) and averaged. Earlywood and latewood widths within the investigated annual rings were defined according to such qualitative aspects as darkening. Earlywood is light-coloured compared with latewood (Larson, 1969). For a clearer appearance of the seasonal part edges, the surface of the sample was wetted with glycerine during measurement. Growth rings are visible because of the difference in texture between the latewood (usually comprised of relatively small and thicker-walled cells) and the earlywood of the subsequent year (with relatively large and thin-walled cells). The EW and LW widths were measured to the nearest 0.01 mm using MBS-1 and MBS-9 microscopes with an ocular micrometer scale. The annual ring widths (RW) were calculated as the sum of the EW and LW widths.
Measurement errors and quality of dating were verified with the software Statistica (version 12.6, USA, 2015). All samples were checked for conformity to a normal distribution.
Results
During the period researched, mean annual ring width (RW), as well as mean increment of earlywood (EW) and latewood (LW), was greater in the pine trees growing on the control site, with a corresponding increase of 5%, 3% and 9% compared to the experimental variant. To assess the impact of flooding on the change in RW values of Scots pine trees, we first compiled air temperature and precipitation series from the Pavlograd Weather Station for the hydrological year, from the previous October to September of the following year (months X-XII and I-IX).
Over the period of ecoclimatic observations of hydrothermal dynamics, we identified an irregular distribution of precipitation, mainly in the spring, summer and autumn periods. A lack of moisture is more typical of spring and autumn. Unstable weather associated with increased anticyclonic activity is characteristic of the cold season. In the area surveyed, the wet period begins in late October and ends in March-April; a second wet period is observed in June-July.
An average annual air temperature of 10.1 °C and precipitation of 676 mm serve as the positive extreme, producing the maximum effect on the formation of Scots pine increment, whereas the lowest effect on wood increment was recorded at an average annual temperature of 7.3 °C and precipitation of 434 mm. To assess the influence of climatic factors on the annual radial growth of Scots pine, we adopted data on each dendrological year from the second half of the previous vegetation season to the first half of the current one, a time period that characterizes all the features of growth performance in the studied region. Precipitation in the cold part of the year (November-March) was very effective for preliminary prediction of increment. Regardless of stand location, in the spring-summer period (April-June) the improvement in increment was usually associated with rainfall in May and June. Meteorological conditions in the period from July to September determine the increment values of the subsequent growing season. Of great practical importance is the question of whether a change in meteorological conditions in only one of these periods can lead to increment anomalies. In this regard, we analyzed the dependence of increment on meteorological conditions in years between the peaks of highs and lows within the ascending and descending sections of the curve characterizing the long-term course of changes in the increment of pine stands. Meteorological elements were summarized by groups of years, followed by correlation analysis (Table 2). Note: t-3 = three years before the increment year, t-2 = two years before, t-1 = one year before, t = current year; asterisks indicate statistically significant periods: * P < 0.05, ** P < 0.01.
The data processing allows us to characterize the importance and role of meteorological elements in the following periods of pine stand development: reproductive development (the formation of specialized plant organs); the cumulative state (the summation of environmental factors influencing growth); and assimilation development (the formation of the assimilation apparatus and the annual ring). In the survey area, the optimum air temperature during the assimilation period, when the maximum annual ring increment occurs, averages 17.0 °C; an average temperature of 18.9 °C is favourable in the reproductive period. In years with a maximum annual ring increment, humidification values are 1.4-1.5 times higher than in years with a minimum increment. The mean optimal amount of precipitation is 138.3 mm in the generative period, 186.5 mm in the cumulative period, and 131.1 mm in the assimilation period. Given the clear response of forest stands to May rainfall, we consider its average effective value to be 71.4 mm. Figure 1 shows a dendroclimatogram of the distribution of temperature and precipitation by month in the years with the highest and lowest increment of Scots pine under normal environmental conditions. The greatest differences between the years of maximum and minimum radial increment were observed in the amount of precipitation from February to June, with a significant decrease in July and August. At the same time, only slight differences in average monthly air temperature were registered from June to September, ranging from 21 to 25 °C. A very different situation was observed in the cold season, in December, January and February: during negative anomalies of Scots pine increment, deeper cooling of the territory occurred with less precipitation in January and February, and the average February temperatures differed by almost 7 °C. These preliminary results on the influence of climatic factors on increment formation served as the basis for comparing radial pine increment after adding flooding as an additional impact factor in the research system.
The analysis of the data obtained allowed us to trace the weather changes and long-term increment trends in the control and flooded areas using living specimens of Scots pine (Fig. 2, 3). There was clearly a high degree of consistency between the two curves characterizing the same reaction in the trees of the control site and of the flooded area of mine workings; however, the response amplitudes differed significantly. Maximum increment values for both RW and EW were registered at an average annual temperature of about 9 °C, with the maximum in the plants of the flooded area. In general, we did not establish a definite pattern of influence of average annual temperature on the whole-ring and earlywood increment values. The curves constructed for these components fluctuate abruptly, although in most cases the absolute increment values were higher in the control variant. Much less variation was noted for LW in both the control and experimental groups. Under flood conditions, the pine trees have been situated in a zone of greater soil moisture deficit since planting, which determined their clear long-term response to fluctuations in precipitation. Years with precipitation sums of 368, 434, 447 and 581 mm were the most unfavourable for radial trunk increment. As the data show, these years were characterized by low values, and the additional deterioration of soil water supply in the area of mine workings led to weak increment formation. Figure 3 shows the influence of precipitation on changes in Scots pine radial increment. As with the temperature regime, hydrological conditions did not elicit a clearly expressed growth response of the plants regardless of their habitat characteristics. The maximum effect on RW and EW formation was observed in plants of the experimental group, despite a low precipitation amount of 368 mm. For the same group, a peak of high RW and EW values was also observed in a year with 676 mm of precipitation. Alongside this maximum effect, flood conditions also caused an entirely opposite response, with minimum RW and EW formed at 437 mm of precipitation per year. In the control group, as in the flooded variant, RW and EW values changed in a jump-like pattern, without depending directly on the annual amount of precipitation; however, the curves showing the dependence of increment on hydrological conditions varied less than in the group under flood conditions. In contrast, the radial increment values of LW in the two groups fluctuated far less in the annual dynamics than RW, with a correspondingly smaller share of precipitation influence. LW values ranged from 0.2 to 2.3 mm and were slightly higher in the control group than in the experimental group.
In general, the mean RW values were higher by 17% in the control group of plants compared with the trees growing in the flooded area, while LW and EW were higher by 30.5% and 11.5%, respectively (Table 3). The pair correlations between increment parameters and climatic factors are presented in Table 4. For example, the relationship between pine radial increment and air temperature under flooding conditions was characterized by non-significant correlation coefficients in the range 0.22-0.31.
In contrast, the width of the annual ring had an inverse correlation with temperature under normal conditions, though not exceeding -0.28. A positive but non-significant correlation (r = 0.271) between the increment values of early and late wood was recorded for Scots pine plants growing in the control variant; a similar pair correlation (r = 0.267) between these values was also noted for plants of the experimental group.
Discussion
The results obtained once again underscore the importance of the winter temperature regime for coniferous trees, which has a significant impact on the formation of radial increment in the following growing season. Dendroclimatological research using Scots pine tree-ring widths has been conducted, e.g., by Cedro (2001), Vitas (2004), Zunde et al. (2008) and Lindholm et al. (2010).
Our observations showed that radial growth of the pine trunk under the investigated conditions usually starts in the first or second decade of May, driven by air temperature conditions. The radial increment investigated in this study finished in the first or second decade of August, when the average monthly temperature was 21 °C and precipitation was 51 mm. According to several authors, the start of radial trunk increment usually coincides with the start of needle growth, and it finishes in the first or second decade of August (Vaganov & Kachaev, 1992; Zabuga & Zabuga, 2003; Nikolaeva & Savchuk, 2008). The development (differentiation, growth by stretching, maturation) of early tracheids covers the period from the third decade of May to the beginning of August, and of late tracheids from the beginning of June almost to the end of September (Antonova, 1999). According to García-Suárez (2009), more favourable climatic conditions lead to a longer growing season for pines, followed by a weakening of the influence of summer temperature. Studies of the relationship between winter climatic factors and tree growth have shown that it is either negative or absent (Krasnobaeva & Mityashkina, 2006; Nikolaeva & Savchuk, 2008).
Habitat conditions clearly brought about the observed differences in the trends of Scots pine increment values depending on average annual air temperature. According to Zabuga & Zabuga (2003), the magnitude of radial increment variation and the proportion of external factors influencing it may reflect the strategy of plant growth processes and the specificity of the lateral meristem response of pine trees. In this case, a relatively high proportion of the increment's direct response to environmental factors was determined by the supply of apical growth with photosynthetic products, primarily the reserves of assimilates created during the previous autumn and used for shoot growth in the current year. Optimal soil moisture creates conditions favourable for photosynthesis, guaranteeing sufficient tissue hydration and a high level of transpiration (Scherbatyuk et al., 1990; Suvorova et al., 2005); in such years the greatest radial increment of coniferous xylem is formed (Vaganov & Kachaev, 1992). Under constant soil flooding, the intensity of photosynthesis is inhibited, which leads to reduced ring width formation. This process was recorded in our studies and is consistent with the results of Arbellay et al. (2012a, 2012b) and Ballesteros-Canovas et al. (2010a), who note that in coniferous trees the main indicators are decreased ring widths along with a significant reduction in earlywood tracheid size.
The contribution of latewood to the total annual increment is 27.9% in the control group of plants, whereas under flooding it is lower, amounting to 23.3%. This is inconsistent with data obtained for deciduous species by a number of authors, who note a greater contribution of latewood width to ring width compared with earlywood width (Phipps, 1982; Tardif, 1996; Lebourgeois et al., 2004).
A similar inverse correlation with the temperature regime and a direct correlation with annual precipitation, as in our work, were established by Zabuga & Zabuga (2006) for plants growing under normal conditions in the forest-steppe of Prebaikalia, Russia. Researchers studying coniferous species in the dryland communities of southern Russia (Nikolaeva et al., 2006) also noted a positive correlation between overall ring width and precipitation of the growing season (or of individual months of vegetation), while, as in our results, the correlation with temperature was negative (Glebov & Litvinenko, 1976). On the other hand, Ferrio et al. (2015) investigated two pines, one of which was P. sylvestris L., and found that spring precipitation had a strong negative effect on earlywood, while temperature was positively correlated. The authors related this effect to the suggestion that the spring effect is mainly connected with enrichment effects at the leaf level, which in turn are associated with the tight stomatal regulation of pines (Ballesteros et al., 2010). Similar to our data, a positive correlation between the latewood width of one year and the earlywood width of the following year has been recorded in a number of publications (Phipps, 1982; Tardif, 1996; Lebourgeois et al., 2004).
Conclusions
The analysis of Scots pine increment revealed the characteristics of fluctuations caused by annual temperature and precipitation sums. The greatest differences between the years of highest and lowest Scots pine increment were observed in the amount of precipitation falling from February to June. Increment values of pine under flood conditions were characterized by a larger amplitude of fluctuations in RW and EW than in LW, compared with the trees growing under normal conditions. Average RW, LW and EW values of plants growing under normal conditions were higher by 17.0%, 30.5% and 11.5%, respectively, than in the trees growing in the flooded area. The pair correlation between the studied increment parameters and temperature was closer than that with average annual precipitation. The annual ring width under flooding conditions correlated directly with the average temperature of the growth period, whereas under normal growth conditions the correlation was negative. | 2020-02-06T09:09:12.391Z | 2019-11-03T00:00:00.000 | {
"year": 2019,
"sha1": "7172320d96a108e1ab9ea543b36e2144b5be4c98",
"oa_license": "CCBY",
"oa_url": "https://medicine.dp.ua/index.php/med/article/download/567/586",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f8115826833b56c58ac43787533d66f3f2c89598",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
225691789 | pes2o/s2orc | v3-fos-license | DIFFERENTIATE FACTORS OF PREGNANT WOMEN WITH CHRONIC ENERGY DEFICIENCY OCCURRENCE IN BAJULMATI VILLAGE,
ABSTRACT
INTRODUCTION
Essential public health efforts include health promotion, environmental health, maternal and child health, family planning, nutrition services, and the prevention, control and treatment of diseases. Among these six essential efforts, one issue to be addressed is the prevention and control of disease. Chronic Energy Deficiency (CED) is a lack of energy intake that persists over a prolonged period. Anthropometrically, CED can be established if MUAC < 23.5 cm or BMI < 18.5 kg/m2. Indonesia is a country rich in natural resources, yet there are many cases of chronic energy deficiency (CED), caused by an imbalance in nutrient intake that can lead to imperfect physical and mental development. 1 According to the 2007 Riskesdas results, East Java is one of 10 provinces in Indonesia with a CED prevalence among women of childbearing age above the national prevalence (13.6%). The 2013 Riskesdas results showed that the prevalence of women of childbearing age (15-49 years) who were pregnant and at risk of CED was 29.8% in East Java, compared with 24.2% nationally. This shows that the prevalence of pregnant women with CED in East Java is still higher than the national level. 2 The prevalence of CED risk in pregnant women (15-49 years) is 24.2%, with the highest prevalence found in adolescents (15-19 years) at 38.5%, compared with 30.1% in the older group (20-24 years). The proportion of pregnant women with an energy adequacy level below 70% of the recommended energy adequacy rate is slightly higher in rural areas than in urban areas (52.9% versus 51.5%), while the proportion of pregnant women with a protein adequacy level below 80% of the recommended protein adequacy rate is also higher in rural areas (55.7% versus 49.6%). 2 Chronic energy deficiency is one cause of high risk in pregnancy. During pregnancy, energy needs increase, requiring approximately an additional 80,000 calories over about 280 days. 3 Poor dietary patterns and inadequate portions are common causes of chronic energy deficiency. 4 Chronic energy deficiency can cause problems for pregnant women and the fetuses they carry. To identify pregnant women at risk of chronic energy deficiency (CED), measurement of mid-upper arm circumference (MUAC) and body mass index (BMI) can be used: if a pregnant woman has a MUAC of less than 23.5 cm and/or a BMI of less than 18.5 kg/m2, she has reached the risk threshold for CED. 5
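The anthropometric screening rule stated above can be expressed as a small function; this is only an illustrative sketch of the published cut-offs (MUAC < 23.5 cm or BMI < 18.5 kg/m2), with hypothetical parameter names and example values.

```python
def at_risk_of_ced(muac_cm: float, weight_kg: float, height_m: float) -> bool:
    """Flag a pregnant woman as at risk of CED by the MUAC or BMI cut-off."""
    bmi = weight_kg / height_m ** 2   # body mass index, kg/m^2
    return muac_cm < 23.5 or bmi < 18.5

# Example: MUAC below the 23.5 cm threshold -> at risk
print(at_risk_of_ced(muac_cm=22.8, weight_kg=45.0, height_m=1.55))  # True
```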
MATERIALS AND METHODS
This is an observational analytic study with a cross-sectional design. The research aims to identify the factors that differentiate the incidence of CED among pregnant women in Bajulmati Village, Wongsorejo District, Banyuwangi Regency. Primary data were collected directly by the researchers using a questionnaire designed to identify the factors differentiating the incidence of CED in pregnant women. Data processing was carried out in several stages. The first stage, editing, involved checking the questionnaire and observation sheets for completeness so that any discrepancies could be corrected immediately by the researcher. The second stage, coding, involved assigning a code or number to each questionnaire to facilitate tabulation and analysis. The third stage, entry, involved entering the data from the questionnaires into SPSS software version 17.0. Differences between groups were analyzed with the chi-square test and Fisher's exact test.
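The study ran these tests in SPSS 17.0; a minimal equivalent in Python with SciPy is sketched below. The 2x2 table (rows: family income groups; columns: CED / non-CED) is invented for illustration and is not the study's data.

```python
from scipy import stats

table = [[6, 1],   # low income: 6 CED, 1 non-CED (hypothetical counts)
         [2, 6]]   # adequate income: 2 CED, 6 non-CED

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)   # preferred with small cell counts
print(f"chi-square p = {p_chi2:.3f}; Fisher exact p = {p_fisher:.3f}")
```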
The population comprised pregnant women in Bajulmati Village, Wongsorejo District, Banyuwangi Regency. Total sampling was used, yielding 15 pregnant women. The target sample was pregnant women in Bajulmati Village. Samples were selected on the basis of inclusion criteria, i.e., characteristics that make a subject eligible for the study. The inclusion criteria were: pregnant women, willing to be respondents, and able to communicate actively. The exclusion criteria were: unable to read, write, or hear.
The instruments used in this study were structured questionnaires, measuring tapes, body scales, and stationery. Nutritional status of pregnant women was assessed with a measuring tape for MUAC and maternal height and with scales for maternal weight. The age of pregnant women, maternal occupation, and family income were each measured by one question in the questionnaire; the history of previous pregnancies was measured by two questions; the history of chronic illness by one question; and the knowledge of pregnant women about nutrition by 12 questions, with a Cronbach's alpha of 0.621.
Among the factors differentiating CED incidence in pregnant women in Bajulmati Village, Wongsorejo Subdistrict, Banyuwangi Regency in 2019, the dependent variable was CED in pregnant women, and the independent variables were the age of the pregnant woman, maternal occupation, family income, previous pregnancy history, history of chronic illness, and level of knowledge about nutrition. According to Notoatmodjo (2010), validity is an index showing that a measuring instrument actually measures what it is intended to measure. 6 The instrument used in this study was a questionnaire. To obtain valid and reliable data, the questionnaire must be tested for validity and reliability. Before the questionnaire was used in the study, its validity was tested using the Pearson product-moment correlation formula: if the calculated r value is greater than the table r value, the item is valid; otherwise it is invalid. Reliability is an index showing the extent to which a measurement tool can be trusted, i.e., the extent to which results remain consistent when the same phenomenon is measured twice or more with the same instrument. Reliability was measured with computer software using the Cronbach's alpha formula; a variable is considered reliable if its Cronbach's alpha value is > 0.60. The questionnaire used in this study yielded a Cronbach's alpha of 0.621, so it was declared reliable. From the table above, it can be seen that there were no significant differences between groups based on age during pregnancy, occupation, history of chronic illness, or the level of maternal knowledge in the incidence of CED in pregnant women, as indicated by significance values > 0.05. There were significant differences between groups based on family income and previous pregnancy history in the incidence of CED in pregnant women, as indicated by significance values < 0.05.
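The reliability criterion above (Cronbach's alpha > 0.60; the study reports 0.621) can be computed from the item-response matrix as sketched here; the 15 x 12 response matrix below is simulated for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.random((15, 1))                               # per-respondent tendency
responses = (rng.random((15, 12)) < ability).astype(float)  # 15 respondents, 12 items
print(f"alpha = {cronbach_alpha(responses):.3f}")
```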
DISCUSSION
Maternal age is one of the factors that influence the nutritional status of pregnant women. In this study, there were no significant differences between groups based on the age of pregnant women in the incidence of CED. This is in accordance with research by Wijayanti (2016), which found no significant correlation between age and the incidence of CED in pregnant women, and is consistent with the theory that the best age to become pregnant is between 20 and 35 years, when the nutrition of pregnant women is expected to be better. 7 However, in studies by Triatmaja (2017) and Mulyaningrum (2009), maternal age was related to the prevalence of CED in pregnant women, because pregnant women who are young, i.e., less than 20 years old, require far more nutrients for fetal growth and for the growth of their own bodies than pregnant women of ideal age, thereby increasing the risk of CED. [8][9] Occupation is also thought to affect the occurrence of CED in pregnant women. In this study, there were no significant differences between groups based on maternal occupation in the incidence of CED. This is in accordance with research by Indriani et al. (2014), which found no significant correlation between maternal occupation and CED in pregnant women, because occupation does not directly affect the nutritional status of the mother. However, several other studies showed different results. According to Mahmudiono (2017), many working mothers experience CED, because working pregnant women have less time to prepare food, which affects the amount of food consumed and hence their nutritional status. Conversely, according to Surasih (2005), many non-working mothers, i.e., housewives, also experience CED, because they lack the time to meet their energy requirements and have little access to information owing to the demands of housework. Such mothers also need a high energy intake because of their heavy daily workload of caring for the house, children, and husband. [10][11][12] In this study, family income showed a significant difference in the incidence of CED in pregnant women. This is in accordance with research by Mahirawati (2014), which revealed a significant correlation between monthly income and the incidence of CED in pregnant women, with a higher proportion of CED among mothers from families with a monthly income below Rp 1,120,000. The higher the family income, the higher the family's purchasing power to meet household needs; the nutritional status of pregnant women then tends to be better, and the risk of CED lower, than among pregnant women of low socioeconomic status.
This study found significant differences between groups based on previous pregnancy history in the incidence of CED in pregnant women. This is consistent with research by Mahmudiono (2017) showing that first-time and young pregnant mothers tend to be at greater risk of CED because the mother's body is not yet ready to supply the energy needed for fetal growth. In addition, pregnancies that are too frequent (an interval of < 2 years) can cause malnutrition, because they deplete the body's nutritional reserves before the reproductive organs have fully recovered to their pre-pregnancy state. Such mothers may also still be breastfeeding and must meet the extra caloric needs of lactation and milk production (Handayani and Budianingrum, 2011). Another explanation is that with increasing numbers of pregnancies, a mother may become less concerned about her pregnancy because it is considered routine and already familiar, so she pays less attention to her health than during her early pregnancies (Mahmudiono, 2017). Different results were found by Wijayanti (2016), who reported no significant correlation between parity and CED in pregnant women; this is supported by Syafuruddin et al. (2018), who also found no significant correlation between the number of children and the incidence of CED. 11,13,14,15 In this study, there were no significant differences between groups based on a history of chronic illness in the incidence of CED in pregnant women. This is in accordance with research by Wijayanti (2016), which found no correlation between disease history and the incidence of CED in pregnant women; research by Hidayati (2011) likewise showed no correlation between tuberculosis or diarrheal disease and the risk of CED in pregnant women. 16 Chronic diseases of long standing may allow the mother's body to adapt to the increased energy needs, so that they do not affect the incidence of CED. Conversely, other results show a correlation between infectious diseases and chronic energy deficiency (CED): infectious diseases can act as a trigger for malnutrition through decreased appetite, impaired absorption in the digestive tract, or increased nutrient requirements caused by the disease. The association between infectious disease and poor nutrition is reciprocal, a causal relationship in both directions: infectious diseases can worsen nutritional status, and poor nutritional status can facilitate infection. Diseases commonly associated with nutritional problems include diarrhea, tuberculosis, measles and whooping cough. 17 A mother's knowledge about nutrition is thought to influence her diet in fulfilling balanced nutrition. One way to increase knowledge is through education: the higher a person's education, the greater their knowledge tends to be. Education enlightens a person, especially regarding nutrition in pregnancy. Although education is not the only determinant of knowledge, more highly educated people receive information more easily and accumulate more knowledge. Mothers with high levels of education show strong interest in finding out early which nutrients are needed while pregnant and when preparing for pregnancy.
With good knowledge, an individual will try to apply that knowledge in daily practice, such as fulfilling balanced nutrition during pregnancy. 6 Research by Suryaningsih and Trisusila (2017) found a significant correlation between the level of knowledge of pregnant women and the incidence of CED. Good knowledge about nutrition makes a person take greater account of the amount and type of food they choose to consume; those with good knowledge tend to use food more rationally, drawing on knowledge of its nutritional value. A mother's knowledge influences her decision-making and behaviour, and mothers with good nutritional knowledge are likely to provide adequate nutrition for their babies. However, in this study there were no significant differences between groups based on mothers' knowledge about nutrition in the incidence of CED. The same result was obtained in another study, which found no correlation between maternal knowledge and the incidence of CED: even when a mother's knowledge about nutrition is good, it is not necessarily applied in fulfilling nutritional needs. 17 The study in Bajulmati Village found high rates of CED among pregnant women in Bajulmati Village, Wongsorejo District, Banyuwangi Regency. Data analysis showed that the factors that significantly differentiated the incidence of CED were family income and previous pregnancy history. The community diagnosis of the causes of the problem comprised mothers' lack of knowledge regarding balanced nutrition in preparation for and during pregnancy, lack of knowledge about the health conditions that must be prepared before pregnancy, and inadequate nutrition among pregnant women.
Community therapy conducted in Bajulmati Village consisted of nutrition counseling for adolescent classes targeting junior and senior high schools, administration of folic acid and iron tablets to brides-to-be, and supplementary feeding (PMT) counseling for brides-to-be with CED.
CONCLUSION
There were no significant differences between groups based on age during pregnancy, occupation, history of chronic illness, or level of maternal knowledge in the incidence of CED in pregnant women. There were, however, significant differences between groups based on family income and previous pregnancy history in the incidence of CED in pregnant women.
An evaluation of the community therapy activities was carried out. Based on the indicators of success, all three programs were fully realized, with a success rate of 100%. | 2020-08-13T10:05:33.422Z | 2020-06-28T00:00:00.000 | {
"year": 2020,
"sha1": "d038374cb0d75458deedc451a80342275125f3dd",
"oa_license": "CCBYSA",
"oa_url": "https://e-journal.unair.ac.id/JCMPHR/article/download/20297/11179",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0889f84cf3b89f048f395005ea9f60b4831ca4fc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243812066 | pes2o/s2orc | v3-fos-license | Analysis of The Characters’ Concern of The Natural Environmental Problems in Japanase Animation Miyori no Mori Directed by Nizo Yamamoto
Currently, many literary works address environmental problems. Animation is a modern form of literary work that often discusses environmental issues; one example is Miyori no Mori, directed by Nizo Yamamoto, an animation full of criticism related to environmental issues. Using the methodology of literary ecocriticism, this study discusses the problems of the natural environment and the manifestations of the characters' concern for saving it. The research shows that the main problem in this animation is the effort of malicious humans to submerge the forest and the village where the main character lives beneath a dam. The forest near this village is the source of life for the surrounding community. Through various efforts, the main character struggles to thwart the government's plan to submerge the forest and village. The efforts made by the characters in this animation are a manifestation of their concern for protecting the natural environment from the damage caused by those who want to submerge the forest and its surroundings beneath the dam.
Introduction
A literary work is a reflection of the environment, and a place to express what occurs in nature. Nature must be treated as well as possible because it is an important part of human life: humans need nature for growth and development, and nature, in turn, needs humans to care for it [1].
So far, many people have assumed that literature is concerned only with social, psychological, religious, and educational problems, not knowing that literature can also be connected with environmental problems. Nowadays, many creators of literary works are aware of the importance of preserving and protecting the environment. In creating a literary work, they raise environmental issues laden with moral messages about the value of the environment and the importance of preserving it so that it remains sustainable.
Endraswara states in his book that concern and sensitivity for the environment must be articulated by everyone, whatever their background, including those in the field of literature; indeed, the literary field may be at the forefront of voicing concern for nature conservation [2].
The concern and sensitivity of creators towards the environment can recently be seen in the increasing number of literary works that take up environmental themes or issues. One such work is the Japanese animation Miyori no Mori by director Nizo Yamamoto, produced in 2007 [3].
The animation Miyori no Mori tells the story of the struggle of a 10-year-old elementary school girl named Miyori, who was sent by her parents in Tokyo to live in Komori village and tries to save the nearby forest. The forest that sustains Komori village, where many wild animals and guardian spirits still live, is to be submerged by a Tokyo company to build a dam. Teaming up with her new friends at elementary school and a forest guardian spirit who has chosen her to be a forest ranger, Miyori tries her best to thwart the company's plan to submerge the forest.
When talking about the preservation of the environment, the main emphasis lies on the attempt to ensure (excluding unpredictable large-scale natural catastrophes) that unlimited natural resources, such as sunlight, air or water, remain unlimited, and on the use of limited natural resources in a way that ensures those resources remain available for future generations. In most cases, the major obstacle to sustainable development is a conflict between economic and environmental considerations, in which the former usually outweigh the latter [4].
The conflict between economic and environmental considerations is clearly illustrated in this anime: people from Tokyo want to build a dam for their own gain at the expense of the surrounding environment. The animation is strongly permeated by environmental themes, and the moral message about the importance of protecting and loving the environment is felt from the beginning to the end of the story. Much criticism is conveyed regarding human selfishness in destroying the environment without regard for its ecology, its sustainability, or the creatures within it.
Animation, as one form among many literary works, features themes about nature. The relationship between nature and literature has given rise among literary critics to a concept addressing ecological problems in literature. The term ecocriticism denotes this concept of literary criticism relating to nature and the environment. According to Harsono, the term comes from the English word ecocriticism, formed from the words ecology and critic. Ecology can be defined as the scientific study of the patterns of relationships of plants, animals, and humans to one another and to their environment [5].
Endraswara explains in his book that ecocentrism, as the foundation of ecocritical theory, holds that humans and the nature from which they originate are interdependent, and that harmony between them sustains both the health of the human mind and the maintenance of nature. However, in pursuing personal survival and collective development, humans often make changes to nature, resulting in the loss of species, deterioration of natural quality, and even threats to the sustainability and harmony of human life itself [6]. What Endraswara describes is reflected in the plot of the anime Miyori no Mori: a group of people from a company try to change nature out of ambition for material gain. They attempt to destroy the forest, which is the source of life for local people and home to various animals and plants, including the golden eagle, a rare species protected by the Japanese government. These efforts, of course, meet resistance from residents, including the main character. The characters in this animation care about the natural environment around them and will not allow anyone to destroy it. In this paper, using ecocritical theory, the researchers analyze the kinds of concern the characters show for the natural environment and the efforts they make to prevent the damage that the dam construction would cause in their area.
One of the topics discussed in literary ecocriticism is environmental management. In managing the environment, humans must apply ethics: without a form of ethics (a theory of rights and responsibilities) and a theory of values, humans lack guidance and direction in dealing with problems, whether global, environmental, or otherwise [7]. Najmuddin explains that environmental ethics is human moral policy in dealing with the environment [8]. Keraf explains that environmental ethics comprises several principles, including respect for nature, an attitude of responsibility, moral response to nature, an attitude of solidarity with nature, and an attitude of compassion and care for nature [9]. These principles of environmental ethics, which form part of literary ecocriticism, serve as the basis for analyzing the characters' care for the natural environment in the animation Miyori no Mori by director Nizo Yamamoto.
Research Method
The method used in this paper is the descriptive analytical method, with an ecocritical approach focusing on the principles of environmental ethics: respect for nature, an attitude of responsibility, moral response to nature, an attitude of solidarity with nature, and an attitude of compassion and care for nature. This is qualitative research. The data source is the animation Miyori no Mori by director Nizo Yamamoto. Data analysis was carried out by investigating the environmental problems that occur in the animation and the forms of the characters' concern for the natural environment. The data were then analyzed and interpreted, presented descriptively, and conclusions were drawn.
Environmental Problems in Animation Miyori no Mori by director Nizo Yamamoto
Miyori no Mori is an animation that discusses the importance of protecting the environment so that it endures and escapes the damage done by humans who would destroy it for their own interests. Through the characters in the animation, director Nizo Yamamoto conveys a moral message about the importance of protecting and defending the natural environment from the ambitions of malicious humans, and criticizes extensively the environmental damage caused by those who exploit and destroy nature for material gain. One such criticism is a statement by the ghost guarding the springs in the forest, who satirizes the human tendency to destroy the environment in the quote below.
"This clean water. This place will soon be destroyed because it will be drowned. After all, a dam will be built. All of these areas will become dams. Why do humans have the heart to destroy important things? Humans are really stupid creatures " This statement was conveyed by the forest guardian ghost to the main character named Miyori as a form of the ghost's concern and hatred for humans who like to destroy nature. This animation is full of environmental elements. The setting of this animation is a village called Komori. Near the village, there is a forest where various kinds of living things live, both in the form of flora, fauna, and astral creatures who are guardians of the forest. This forest is also home to one of the endangered animals protected by the Japanese state, the golden eagle.
This forest is also a source of life for residents. The local people make medicines from the plants that grow there, and they make household items such as chairs and beds from its wood. Several scenes in the anime depict local people using the forest as a source of life.
The peaceful village runs into trouble when men from a city company come to investigate dam construction in the area. This unsettles the local community: if a dam is built, the forest and the area where they live will be submerged and the villagers will have to move elsewhere. Moreover, sinking the forest would damage the ecosystem, drowning the animals that live there. This is the natural environmental problem at the heart of the animation. Knowing that the forest and village are to be submerged and turned into a dam, the main character Miyori teams up with her friends and the forest guardian creatures to prevent the construction plan. Miyori refuses to see the forest, the source of life for the surrounding community, the home of its animals and plants, and its source of water, destroyed and made into a dam. One of her protests against the dam construction can be seen in the following excerpt.
"Aren't humans the enemy who will build a dam? Humans are selfish creatures. " Miyori's remarks were conveyed as a form of protest against the selfishness of humans who arbitrarily destroy nature for the benefit of certain groups. Building a dam, if it is done in a proper place and does not sacrifice the surrounding environment, is not a problem. It is a good thing because dams are very useful for irrigation, main water supply, power generation, flood control, fisheries, tourism, and water sports. In this animation, the construction of the dam will sacrifice many things, one of which will be to drown the forest, where the local people live, as well as where animals live, including one of Japan's rare animals that are protected by the state. This is a problem and raises resistance from the surrounding community to prevent the construction of the dam.
Director Nizo Yamamoto, through his anime, criticizes the selfishness of humans who destroy the natural environment for personal or group gain.
From the results of the analysis, two problems related to the natural environment are found in the animation Miyori no Mori:
1. Attempts by malicious humans to destroy the forest and its surroundings to build a dam for the benefit of certain groups; the dam would destroy the forest, which is a source of water and a habitat for animals, including an endangered species protected by the state.
2. An attempt to kill one of Japan's rare animals, the golden eagle that lives in the forest, to clear the way for the dam's construction.
To overcome these problems, the characters make efforts to save the forest and the state-protected bird from extinction. These efforts are a manifestation of the characters' concern for the environmental problems occurring around them. One such effort is working together to devise a strategy against the dam. Their first strategy is to find a golden eagle in the forest, because some laws prohibit large-scale development in forests or mountains where rare species live, as Daisuke, one of the characters, tells his friends.
"Only 500 of them are left in Japan. Here it is written that the golden eagle used to live here. Some laws prohibit large developments in forests with endangered species " Another strategy taken by the characters in this animation is to work together to get rid of the people who are going to build the dam. Miyori, as the main character in this animation, collaborates with the forest guardian spirits in a way that the forest guardians are told to show their form to people who enter the forest, to frighten and drive these people out of the forest.
Meanwhile, the village children take charge of preventing the men from escaping from the forest back to the city, and they report to the village officials and the police the behaviour of the company men who plan to kill the golden eagle so that the dam can be built.
The effort succeeds: the company men who were going to hunt and kill the golden eagle are handed over to the police, and the dam project is finally cancelled, so the forest and the surrounding area are not submerged.
The Form of The Character's Concern for Nature
Concern for nature can be manifested in several principles of environmental ethics, including respect for nature, an attitude of moral responsibility towards nature, an attitude of solidarity with nature, and an attitude of compassion and care for nature. The following is an analysis of the characters' concern for nature as depicted in the animation Miyori no Mori.
1. Respect for nature. Respect for nature, according to Keraf, is integrated into (1) the ability to respect nature, (2) the awareness that nature has value in itself, (3) the awareness that nature has the right to be respected, (4) the recognition that nature has integrity, and (5) respect for nature's right to live, grow and develop naturally in accordance with the purpose of its creation [10]. The concrete form of respect for nature shown by the characters in this anime is living in harmony with it: they do not disturb or destroy the forest's ecology by cutting down trees arbitrarily or hunting the animals that live there. The following remark by one of the figures in the animation relates to respect for nature.
"Humans decide whether forests live or die. You should be able to hear the voices of the forest creatures." This statement was conveyed by the grandmother to Miyori so that Miyori appreciates all types of creatures in the forest ranging from plants, trees, animals, and all the ecosystems that exist in the forest.
The attitude of moral responsibility towards nature
Apart from respecting nature, humans are also required to take concrete collective action to protect the environment. Humans who live on this earth bear responsibility for the preservation or destruction of nature; it is not merely an individual burden. This responsibility takes the form of reminding, prohibiting, and punishing anyone who deliberately destroys or endangers nature [11].
One form of moral responsibility shown by the characters in this animation is their attempt to save the forest from the malicious humans who want to submerge it beneath a dam for the benefit of certain groups.
The concrete forms of an attitude of moral responsibility towards nature are as follows. a. Reminding the malicious humans who intend to submerge the forest and all its contents for the sake of the dam. The following warning is given by the main character Miyori to the people who are about to flood the forest: "I know your plan. The company sent you to clear away whatever blocks the dam that is planned to be built here, right? If you love your life, leave this forest." These words are Miyori's warning to these people not to continue their intention to build a dam in the forest. b. Punishing the people who would submerge the forest and all its contents for the sake of the dam. Miyori, together with all the forest guardian spirits, punishes them by frightening them, showing their true forms to the intruders so that they abandon their efforts to flood the forest. The men see many terrifying figures in the forest, become very frightened, and run away.
The police and villagers also punished them by interrogating them after they were caught on the outskirts of the forest.
"Why are you sneaking in the forest with rifles?" why are you hunting when it's not hunting season now? Come with us to the office and explain everything to us. I won't make you bear food if you explain it honestly." After being interrogated, the bad guys were brought to a police actor for trial. It is a form of punishment for those who try to destroy nature. 3. Solidarity toward nature An attitude of solidarity towards nature is integrated into (1) an attitude of sharing what nature feels; (2) efforts to save nature, prevent humans from damaging and polluting nature and its ecosystem; and (3) efforts to align human behavior and ecosystems [12]. As explained by Keraf above, one form of natural solidarity is an effort to save and prevent humans from destroying nature and the ecosystem in it. Efforts to save nature from damage caused by humans are the main theme in this animation. The forest and the ecosystem in it will be destroyed and drowned by people from Tokyo for the sake of a company. The plan to sink this forest becomes a serious problem for the villagers because this forest is a source of life for them, so they try to save the forest from the drowning plan. Efforts to save forests are a concrete form of solidarity with nature. One of the ways that the forest can survive drowning is by finding a rare bird protected by Japan, namely the golden eagle.
The following is a quote that describes the main character Miyori's efforts to save the forest.
"Everyone knows that this forest will soon sink and become a dam, but there is still hope to stop it. Find the golden eagle. They won't be able to build a dam if there's a golden eagle here." This statement was conveyed by Miyori to her friends who also wanted to save the forest from damage. And this is a concrete manifestation of the solidarity attitude towards nature shown by the characters in this animation.
4. An attitude of love and concern for nature. Sukmawan concludes that human affection and care for nature are realized in the awareness that (1) all living things have the right to be protected, maintained, and not harmed; and (2) all living beings should be protected without expecting anything in return [13].
As Sukmawan explains, one form of compassion and concern for nature is the principle that all living things have the right to be protected, cared for, and not harmed. The main characters in this animation hold this principle: they believe that protecting and not hurting living things is everyone's duty, so when they learn that the forest, with its many living creatures, is to be submerged and made into a dam, they feel anxious and sorry for those creatures, and they make various efforts to prevent the dam's construction.
The following quotes describe an attitude of love and care for nature.
"We will kick them out again, no matter when they will come again" "Being a forest ranger is not bad". "If you are with me, i can protect this forest ".
These words are spoken by Miyori, the main character, when the forest dwellers ask her to become a forest ranger so that the forest will remain protected and safe from the disturbance of people who would destroy it.
Rahmah writes in her paper that the preservation of nature and the environment is one of the positive attitudes of the Japanese people that should be emulated by other nations around the world [14]. The concern for the natural environment shown by the characters in this animation, as explained above, reflects this positive attitude of the Japanese people towards protecting the natural environment.
Japan is one of the countries in which technology has progressed rapidly, creating things that were unthinkable in earlier human civilization. Its sophisticated technology has developed into a global commodity spread throughout the world, making the country widely known. But behind its status as a developed country, the modernization that developed there has also had negative impacts on human lives [15].
The dam construction plan depicted in this animation, for all its advanced technology, turns out to bring little benefit to the surrounding community and even to have a negative impact, because it would submerge the forest and the village where the people live. Thanks to the characters' concern for nature and their efforts to stop the construction, however, the plan is cancelled.
Conclusion
The animation Miyori no Mori is full of moral messages about the importance of protecting nature and the environment from damage, whether caused by society at large or by people who seek to profit from nature's destruction. Japan is a country that maintains the preservation of its natural environment, yet even so there are still humans who try to destroy the environment for material gain. In this animation, the director delivers a profound message about the importance of preserving nature and the environment. The efforts made by the characters to save the forest and its surroundings from destruction by the dam express the director's hope that the audience will do the same when the environment around them is threatened with damage or destruction. Meanwhile, the people from Tokyo who try to build a dam at the expense of the forest and its surroundings represent the director's indirect criticism of those who do not care about nature: do nothing that destroys nature, for punishment will come from nature itself as well as from the local community and government. The concern for the environment shown by the characters is expected to set an example that we, as humans, must have a sense of care and empathy for nature, because we live within it; we should always strive to protect and restore the environment from the damage done by irresponsible people, as depicted in this animation. | 2021-11-07T16:05:35.420Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "46c6ca435f375e32479beebd49231b41380dc412",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/93/e3sconf_icenis2021_01003.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "89a96fceca9acac28e03c2ad006a13fcdf72c497",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
59945195 | pes2o/s2orc | v3-fos-license | Blockade of glycolysis-dependent contraction by oroxylin a via inhibition of lactate dehydrogenase-a in hepatic stellate cells
Background Contraction of hepatic stellate cells (HSCs) plays an important role in the pathogenesis of liver fibrosis by regulating sinusoidal blood flow and extracellular matrix remodeling. Here, we investigated how HSC contraction was affected by the natural compound oroxylin A, and elucidated the underlying mechanism. Methods Cell contraction and glycolysis were examined in cultured human HSCs and a mouse liver fibrosis model upon oroxylin A intervention using diversified cellular and molecular assays, as well as genetic approaches. Results Oroxylin A limited HSC contraction, associated with inhibition of myosin light chain 2 phosphorylation. Oroxylin A blocked aerobic glycolysis in HSCs, evidenced by reductions in glucose uptake and consumption and in lactate production. Oroxylin A also decreased the extracellular acidification rate and inhibited the expression and activity of the glycolysis rate-limiting enzymes (hexokinase 2, phosphofructokinase 1 and pyruvate kinase type M2) in HSCs. We then identified that oroxylin A blockade of aerobic glycolysis contributed to inhibition of HSC contraction. Furthermore, oroxylin A inhibited the expression and activity of lactate dehydrogenase-A (LDH-A) in HSCs, which was required for oroxylin A blockade of glycolysis and suppression of contraction. Oral administration of oroxylin A at 40 mg/kg reduced liver injury and fibrosis, and inhibited HSC glycolysis and contraction, in mice with carbon tetrachloride-induced hepatic fibrosis. However, adenovirus-mediated overexpression of LDH-A significantly counteracted oroxylin A's effects in fibrotic mice. Conclusions Blockade of aerobic glycolysis by oroxylin A via inhibition of LDH-A reduced HSC contraction and attenuated liver fibrosis, suggesting LDH-A as a promising target for intervention in hepatic fibrosis. Electronic supplementary material The online version of this article (10.1186/s12964-019-0324-8) contains supplementary material, which is available to authorized users.
Background
Hepatic fibrosis is a compensatory repair process in response to a variety of chronic liver injuries. The current paradigm holds that hepatic stellate cells (HSCs) are key effector cells in the initiation and development of hepatic fibrosis [1]. In fibrogenic liver, the quiescent HSCs undergo transdifferentiation into myofibroblasts with high proliferative and migratory capacities, and subsequently secrete massive amounts of extracellular matrix molecules, which accumulate in the liver parenchyma and promote the pathogenesis of hepatic fibrosis [2]. The recent recognition of HSCs as liver-specific pericytes with contractile properties is a key milestone in the understanding of the biology of these cells [3]. HSCs regulate sinusoidal resistance and blood flow around sinusoids by contraction [4]. In addition, the contractile force generated by HSCs aggravates extracellular matrix remodeling during chronic liver injury [5]. Therefore, elucidating how HSC contraction is regulated may facilitate the development of therapeutic strategies for chronic liver disease.
Cell contraction involves dynamic synthesis and decomposition of actin and the formation of large cytoskeletal structures [6]. When cells contract, the myosin cross-bridge cyclically binds to actin, then dissociates as it hydrolyzes ATP, releasing energy that drives the movement of actin filaments [7]. Cell contraction is thus a highly energy-consuming process. It has been well established that a key metabolic hallmark of cancer cells is aerobic glycolysis, termed the Warburg effect [8]. Although glycolysis produces less ATP than oxidative phosphorylation does, the Warburg effect favors cell growth by rapidly providing ATP and carbon sources [8]. Glycolysis involves the rate-limiting enzymes hexokinase 2 (HK2), phosphofructokinase 1 (PFK1) and pyruvate kinase M2 (PKM2), which act successively in the conversion of glucose to pyruvate [9]. Notably, the final conversion of pyruvate to lactate is a crucial step catalyzed by lactate dehydrogenase (LDH), of which LDH-A is a major subtype [10]. High expression or activity of LDH-A allows for rapid glycolytic flux so as to meet the energy demands of rapidly proliferating cells [11]. Recent evidence suggests that activated HSCs are similar to highly proliferative cancer cells with regard to their biosynthetic and bioenergetic requirements [12]. Aerobic glycolysis is a striking metabolic phenotype of activated HSCs during liver fibrosis [12]. However, little is known about the role of aerobic glycolysis in the control of HSC contraction.
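For orientation, the LDH-catalyzed step referred to above is the textbook redox reaction (stated here for reference, not taken from this paper):

$$\text{Pyruvate} + \text{NADH} + \text{H}^{+} \;\rightleftharpoons\; \text{L-lactate} + \text{NAD}^{+}$$

By regenerating NAD+, this step allows glycolysis to keep running at a high rate, which is why high LDH-A expression or activity supports rapid glycolytic flux.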
Natural products remain an important source of drug candidates. Oroxylin A is an attractive natural compound with promising pharmacological activities. For example, oroxylin A was found to inhibit the growth and proliferation of hepatoma cells [13,14]. Oroxylin A could also reduce glucose uptake and lactate production in HepG2 cells under hypoxia [15], and inhibit glycolysis-dependent growth of human breast tumors [16]. Our previous studies demonstrated that oroxylin A reduced liver fibrosis in association with induction of HSC autophagy [17]. However, the mechanisms underlying the antifibrotic effects of oroxylin A have not been fully elucidated. Here, we investigated whether and how oroxylin A affected HSC contraction, with a focus on the association with aerobic glycolysis.
Chemicals and antibodies
Oroxylin A (HPLC purity 99.9%) was kindly provided by Professor Qinglong Guo (China Pharmaceutical University, Nanjing, China). Compounds 2-deoxy-D-glucose (2-DG) and galloflavin were purchased from Apexbio Technology (Houston, TX, USA). These reagents were dissolved in dimethylsulfoxide at the indicated concentrations for in vitro experiments. The following primary antibodies were used for Western blot analysis in the current study: antibodies against HK2, PFK1, PKM2, LDH-A, β-actin and GAPDH were obtained from Proteintech Group (Chicago, IL, USA); antibodies against p-MLC2 (Ser19), MLC2, α-SMA, fibronectin and α1(I) procollagen were obtained from Cell Signaling Technology (Danvers, MA, USA). Horseradish peroxidase-conjugated anti-mouse and anti-rabbit secondary antibodies were obtained from Proteintech Group (Chicago, IL, USA).
Cell culture and transfection
Human HSC line LX2 cells and human normal hepatocyte line LO2 cells were obtained from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). Cells were characterized using human short tandem repeat markers. Cells were cultured in Dulbecco's modified Eagle medium (Invitrogen, Grand Island, NY, USA) with 10% fetal bovine serum (Wisent Biotechnology Co., Ltd., Nanjing, China) and 1% antibiotics, and grown in a 5% CO2 humidified atmosphere at 37°C. LDH-A siRNA (sc-43893) and control siRNA (sc-37007) were obtained from Santa Cruz Biotechnology (Santa Cruz, CA, USA). The LDH-A overexpression plasmid pcDNA3.1(+)-LDH-A was constructed by Jiangsu KeyGEN Biotechnology Co. Ltd. (Nanjing, China). Transfection with LDH-A siRNA or the overexpression plasmid was performed using the Lipofectamine 2000 Transfection Reagent (Life Technologies, Grand Island, NY, USA) according to the protocols provided by the manufacturer.
Collagen gel contraction assay
Collagen gel contraction assays were performed as we previously described [18]. Percentages of original gel area were quantified using the ImageJ software (Media Cybernetics, Rockville, MD, USA). Representative views are shown.
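As a minimal sketch of the quantification step, the percent-of-original-area metric can be computed as below once gel areas have been measured (for example, in ImageJ). The area values are hypothetical, not the study's data:

```python
# Percent of original gel area from before/after area measurements.
# Area values are illustrative (arbitrary units), not from the study.

def percent_original_area(initial_area: float, final_area: float) -> float:
    """Gel area after incubation as a percentage of its starting area."""
    return 100.0 * final_area / initial_area

# A gel shrinking from 450 to 180 retains 40% of its area (strong contraction);
# a gel shrinking only to 360 retains 80% (inhibited contraction).
control = percent_original_area(450.0, 180.0)
treated = percent_original_area(450.0, 360.0)
print(f"control: {control:.1f}%  treated: {treated:.1f}%")
```

Lower percentages therefore indicate stronger contraction, so an inhibitor such as oroxylin A shifts the value upward.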
Cytoskeleton staining
Cytoskeleton was visualized using FITC-conjugated phalloidin (Beyotime Biotechnology, Haimen, China) according to our previously described methods [18]. The nuclei of cells were stained with 4′,6-diamidino-2-phenylindole (DAPI). Photographs were blindly taken at five random fields under a microscope (ZEISS Axio vert. A1, Germany). Representative views are shown.
Immunofluorescence staining
Staining of LX2 cells or mouse liver tissues was performed according to our previously described methods [19]. The nuclei of cells were stained with DAPI. Photographs were blindly taken at five random fields under a microscope (ZEISS Axio vert. A1, Germany). Representative views are shown.
Glucose uptake assay
The glucose uptake by LX2 cells was determined using a Glucose Uptake Assay Kit (Abnova, Taiwan, China) according to the manufacturer's instructions. In this assay, the glucose analog 2-DG is metabolized to 2-DG-6-phosphate, which is proportional to glucose uptake by cells. The accumulated 2-DG-6-phosphate is enzymatically coupled to generate NADPH, which is specifically monitored by a NADPH sensor. The signal is read on an absorbance microplate reader as the OD ratio at 570 to 610 nm.
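As an illustration of how such readings reduce to a relative uptake value, here is a hedged sketch; the exact background correction follows the kit protocol, and all OD values below are invented:

```python
# Illustrative reduction of plate-reader output to relative glucose uptake.
# Values are invented; consult the kit manual for the exact calculation.

def od_ratio(od570: float, od610: float) -> float:
    return od570 / od610

def relative_uptake(sample, blank, control) -> float:
    """Blank-corrected OD570/OD610 ratio as a fraction of untreated control."""
    s = od_ratio(*sample) - od_ratio(*blank)
    c = od_ratio(*control) - od_ratio(*blank)
    return s / c

# A value below 1.0 indicates reduced glucose uptake relative to control.
print(relative_uptake(sample=(0.62, 0.90), blank=(0.10, 0.88),
                      control=(0.95, 0.91)))
```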
Glucose consumption assay
The glucose consumption by LX2 cells was determined using an enzyme-linked immunosorbent assay kit (Shanghai Meilian Biology Technology Co. Ltd., Shanghai, China) for measuring glucose oxidase (GOD) activity according to the protocols provided by the manufacturer. GOD is an endogenous oxidoreductase that efficiently catalyzes the oxidation of glucose to gluconic acid. Its activity is an alternative indicator of glucose consumption [20].
Measurement of lactate levels
Lactate levels in lysates of LX2 cells or mouse liver tissues were measured using kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) following the manufacturer's instructions.
Measurement of extracellular acidification rate (ECAR)
ECAR was measured using a pH-Xtra™ Glycolysis Assay Kit from Luxcel Biosciences (Cork, Ireland) following the manufacturer's instructions and reported methods [21]. The pH-Xtra™ assay uses a pH-sensitive fluorophore which detects acidification due to glycolysis-related release of lactate.
Measurement of intracellular ATP levels
Intracellular ATP levels were determined using an ATP Assay Kit provided by Beyotime Institute of Biotechnology (Haimen, China) according to the protocols provided by the manufacturer.
Enzyme activity assay
The intracellular activities of HK2, PFK1, PKM2 and LDH-A in LX2 cells were measured using kits (Shanghai Meilian Biology Technology Co. Ltd., Shanghai, China) according to the protocols provided by the manufacturer.
Cell viability assay
The viability of LX2 cells or LO2 cells treated with 2-DG or galloflavin was evaluated using MTT assays. Briefly, the medium of treated cells was replaced with 100 μl phosphate-buffered saline containing 0.5 mg/ml MTT, and the cells were incubated at 37°C for 4 h. The formazan crystals were dissolved in 200 μl dimethylsulfoxide. The spectrophotometric absorbance at 490 nm was measured with a SPECTRAmax™ microplate spectrophotometer (Molecular Devices, Sunnyvale, CA, USA). Cell viability was expressed as a percentage of control.
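A minimal sketch of the percent-of-control calculation implied above; the absorbance readings are hypothetical, not the study's data:

```python
import statistics

# Hypothetical A490 readings from triplicate wells (not the study's data).
control_abs = [0.82, 0.79, 0.85]
treated_abs = [0.41, 0.44, 0.39]

control_mean = statistics.mean(control_abs)
viability_pct = [100.0 * a / control_mean for a in treated_abs]
print(f"viability: {statistics.mean(viability_pct):.1f}% "
      f"+/- {statistics.stdev(viability_pct):.1f}% of control")
```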
Human liver samples
Liver samples from five healthy subjects and five patients with liver fibrosis were provided by the Nanjing Hospital Affiliated to Nanjing University of Chinese Medicine (Nanjing, China). The study followed the tenets of the Declaration of Helsinki, and written informed consent was obtained from all patients after explanation of the nature and possible consequences of the study. The study protocol was approved by the Medical Ethical Committee of the Second Hospital of Nanjing.
Animal procedures and treatments
Animal experimental procedures were approved by the Institutional and Local Committee on the Care and Use of Animals of Nanjing University of Chinese Medicine, and all animals received humane care according to the National Institutes of Health (USA) guidelines. Thirty male ICR mice (8 weeks old) were obtained from Shanghai Slac Laboratory Animal Co., Ltd. (Shanghai, China). Mice were housed under standardized conditions at 20 ± 2°C room temperature, 40 ± 5% relative humidity and a 12 h light/dark cycle. A mixture of carbon tetrachloride (CCl4) and olive oil [2:3 (v/v)] was used to induce hepatic fibrosis in mice via intraperitoneal injection (0.1 ml/100 g body weight). The thirty mice were randomly divided into five groups (n = 6): (1) control, (2) model, (3) oroxylin A treatment, (4) oroxylin A treatment plus adenovirus vector, and (5) oroxylin A treatment plus LDH-A plasmid adenovirus (constructed by OBiO Technology Co. Ltd., Shanghai, China). Initially, mice in groups 4 and 5 received a single caudal vein injection of the corresponding adenovirus. Two weeks later, mice in groups 2-5 received intraperitoneal injections of CCl4 every three days for 4 weeks. Simultaneously, mice in groups 3-5 were orally given oroxylin A suspended in CMC-Na solution at 40 mg/kg once daily for 4 weeks. This dose was determined by preliminary experiments. Mice in group 1 were orally given an equal amount of CMC-Na solution once daily and injected with olive oil intraperitoneally every three days for 4 weeks, and mice in group 2 were also orally given an equal amount of CMC-Na solution once daily for 4 weeks. At the end of the experiments, all mice were anesthetized with isoflurane, followed by blood collection via the retro-orbital sinus and isolation of the liver.
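To make the dosing scheme concrete, the arithmetic for a hypothetical 30 g mouse is shown below; only the 0.1 ml/100 g injection volume and the 40 mg/kg oral dose come from the protocol above, and the body weight is illustrative:

```python
# Worked dosing arithmetic for a hypothetical 30 g mouse.
body_weight_g = 30.0

# CCl4/olive-oil mixture at 0.1 ml per 100 g body weight:
injection_volume_ml = 0.1 * body_weight_g / 100.0   # 0.03 ml per injection

# Oroxylin A at 40 mg per kg body weight, once daily:
oroxylin_a_dose_mg = 40.0 * body_weight_g / 1000.0  # 1.2 mg per day
print(injection_volume_ml, oroxylin_a_dose_mg)
```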
Measurement of hepatic hydroxyproline (Hyp)
The Hyp levels in mouse liver tissues were measured using a kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the protocols provided by the manufacturer.
Liver histopathology and collagen staining
Mouse liver tissues were fixed in 10% neutral buffered formalin and embedded in paraffin. Hematoxylin-eosin (H&E) staining was used for assessment of histopathology according to standard methods. Masson staining and Sirius Red staining were used for examination of collagen deposition according to standard methods. Photographs were blindly taken at five random fields under a microscope (ZEISS Axio vert. A1, Germany). Representative views are shown.
Immunohistochemistry
Mouse liver tissue sections were incubated with primary antibody against α-smooth muscle actin (α-SMA) for immunohistochemical evaluation using standard methods. Photographs were blindly taken at five random fields under a microscope (ZEISS Axio vert. A1, Germany). Representative views are shown.
Scanning electronic microscopy (SEM)
Sinusoidal fenestration of mouse liver was examined by SEM according to our previously reported methods [19]. Photographs were blindly taken at five random fields, and representative images are shown.
Real-time PCR
Total RNA was extracted from LX2 cells, mouse liver tissues, or human liver samples using Trizol reagent (Sigma, Saint Louis, MO, USA). Total RNA was subjected to reverse transcription to cDNA using the TransScript All-in-One First-Strand cDNA Synthesis SuperMix for qPCR (One-Step gDNA Removal) Kit provided by TransGen Biotech Co., Ltd. (Beijing, China) according to the protocols. Real-time PCR was performed using the SYBR Green Master Mix (Vazyme Biotech Co., Ltd., Nanjing, China) according to the protocol. Fold changes in the mRNA levels of target genes were calculated relative to the invariant control glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The primers (GenScript Co., Ltd., Nanjing, China) are listed in Additional file 1: Table S1.
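The paper does not spell out the fold-change calculation; assuming the widely used 2^(-ΔΔCt) method with GAPDH as the reference gene, a minimal sketch with invented Ct values would look like this:

```python
# Sketch of the 2^(-DeltaDeltaCt) relative-quantification method, assuming
# this is the scheme used; Ct values below are invented for illustration.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# A target Ct rising from 22.0 to 24.0 against a stable reference (~17.0)
# corresponds to a ~4-fold reduction in target mRNA:
print(fold_change(24.0, 17.0, 22.0, 17.0))  # 0.25
```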
Western blot assay
Whole-cell protein extracts were prepared from LX2 cells or mouse liver tissues with RIPA buffer containing protease inhibitors and phosphatase inhibitors. Protein detection and band visualization and quantification were performed as we previously described [22]. β-Actin or GAPDH was used as an invariant control for equal loading of total proteins. Representative blots are shown.
Statistical analysis
Data from at least triplicate experiments are presented as mean ± SD. One-way ANOVA was performed to analyze the data using GraphPad Prism 7 (San Diego, CA, USA). In all cases, a P value of 0.05 or lower was considered significant.
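As an illustration of this analysis pattern (mean ± SD with one-way ANOVA), here is a SciPy sketch; the study itself used GraphPad Prism 7, and all group values below are invented:

```python
# Illustrative mean +/- SD summary and one-way ANOVA with SciPy.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.95, 1.05])
low_dose = np.array([0.78, 0.74, 0.80])
high_dose = np.array([0.52, 0.49, 0.55])

for name, grp in [("control", control), ("low dose", low_dose),
                  ("high dose", high_dose)]:
    print(f"{name}: {grp.mean():.2f} +/- {grp.std(ddof=1):.2f}")

f_stat, p_value = stats.f_oneway(control, low_dose, high_dose)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => significant
```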
Oroxylin A inhibits HSC contraction
Results of collagen gel contraction assays showed that oroxylin A concentration-dependently inhibited HSC contraction, and that oroxylin A at 30 and 40 μM produced significant effects (Fig. 1a). The organization of cell contractile machinery can be manifested by cytoskeleton arrangement [23]. Cytoskeleton fluorescence staining revealed that oroxylin A reduced the formation of actin stress fibers and disturbed the microfilament skeleton in a concentration-dependent manner in HSCs (Fig. 1b). Phosphorylation of myosin light chain 2 (MLC2) is an important event during cell contraction [24]. Western blot analyses demonstrated that oroxylin A concentration-dependently decreased MLC2 phosphorylation in HSCs (Fig. 1c). Immunofluorescence analyses of MLC2 phosphorylation provided consistent results (Fig. 1d). Together, these data indicated that oroxylin A inhibited HSC contraction.
Oroxylin A blocks aerobic glycolysis, leading to inhibition of HSC contraction
We then tested the effects of oroxylin A on aerobic glycolysis of HSCs. We observed that oroxylin A decreased glucose uptake (Fig. 2a) and glucose consumption indicated by GOD activity (Fig. 2b) in a concentration-dependent fashion in HSCs. Lactate production and ECAR were also reduced by oroxylin A concentration-dependently (Fig. 2c, d). We further examined the effects of oroxylin A on the three rate-limiting enzymes HK2, PFK1 and PKM2, and observed that the mRNA and protein expression of these enzymes was downregulated by oroxylin A in HSCs (Fig. 2e, f). Meanwhile, oroxylin A decreased the intracellular activities of HK2, PFK1 and PKM2 (Fig. 2g). Additional data showed that the intracellular ATP levels were reduced by oroxylin A concentration-dependently in HSCs (Additional file 2: Figure S1). The above findings collectively revealed that the overall glycolytic flux and activity were effectively blocked by oroxylin A, cutting off the energy supply within HSCs.
We next asked whether blockade of aerobic glycolysis was associated with the reduced contractile capacity in oroxylin A-treated HSCs. The glycolysis inhibitor 2-DG was used to test the association. We used 2-DG at 5 mM for experiments based on the observation that 2-DG at this concentration suppressed HSC viability but did not affect hepatocyte viability (Additional file 3: Figure S2a, b). Collagen gel contraction assays showed that 2-DG at 5 mM, similar to oroxylin A at 40 μM, significantly suppressed HSC contraction, and their combination produced more potent inhibitory effects (Fig. 3a). Cytoskeleton fluorescence staining revealed that the microfilament skeleton was disrupted by 2-DG and its combination with oroxylin A (Fig. 3b). Examinations of MLC2 phosphorylation using Western blot analysis and immunofluorescence staining consistently showed that 2-DG at 5 mM alone, or combined with oroxylin A at 40 μM, significantly downregulated the phosphorylation levels of MLC2 in HSCs (Fig. 3c, d). Altogether, these results indicated that blockade of aerobic glycolysis by oroxylin A resulted in the suppression of HSC contraction.
Oroxylin A inhibits LDH-A in HSCs
Given that LDH-A is a central player in glycolysis and has a multifunctional role in cell biology [25], we next focused on the regulation of LDH-A by oroxylin A in HSCs. We observed that the mRNA levels of LDH-A in liver tissues from patients with hepatic fibrosis were significantly higher than those in healthy liver tissues (Additional file 4: Figure S3), strongly suggesting a role for LDH-A in the pathology of liver fibrosis. We then found that LDH-A mRNA expression was reduced by oroxylin A in a concentration-dependent manner in cultured HSCs (Fig. 4a). Oroxylin A also downregulated the protein abundance of LDH-A in HSCs, as evidenced by Western blot and immunofluorescence analyses (Fig. 4b, c). Consistently, the intracellular enzyme activity of LDH-A was decreased by oroxylin A concentration-dependently (Fig. 4d). Taken together, these results revealed that oroxylin A inhibited the expression and activity of LDH-A in HSCs.
Inhibition of LDH-A is required for oroxylin A to block aerobic glycolysis and reduce contraction in HSCs
The above results suggested LDH-A as a potential target molecule for oroxylin A in HSCs, and we subsequently attempted to confirm this hypothesis. The compound galloflavin, a selective pharmacological inhibitor of LDH-A [26], was used to test the role of LDH-A in oroxylin A blockade of aerobic glycolysis. We used galloflavin at 20 μM for experiments based on the observation that galloflavin at this concentration inhibited HSC viability but did not affect hepatocyte viability (Additional file 3: Figure S2c, d). We observed that galloflavin at 20 μM, similar to oroxylin A at 40 μM, significantly inhibited glucose uptake and consumption and reduced the production of lactate in HSCs, and that the combination of galloflavin and oroxylin A produced more potent reducing effects on these parameters (Fig. 5a-c). Further examinations of glycolysis rate-limiting enzymes showed that galloflavin at 20 μM significantly decreased the expression and activities of HK2, PFK1 and PKM2 in HSCs, and that its combination with oroxylin A resulted in more potent inhibitory effects on these enzymes (Fig. 5d-h). To confirm the results, HSCs were transfected with LDH-A siRNA to knock down LDH-A expression at both the mRNA and protein levels (Fig. 5i, j). Consistently, transfection with LDH-A siRNA alone or combined with oroxylin A treatment significantly downregulated the mRNA expression of HK2, PFK1 and PKM2 (Fig. 5k). Additionally, overexpression of LDH-A increased the expression of HK2, PFK1 and PKM2 and rescued the oroxylin A-induced reduction of these enzymes in HSCs (Additional file 5: Figure S4). Overall, genetic deficiency of LDH-A or pharmacological inhibition of LDH-A effectively diminished the glycolytic activity in HSCs, and synergistic effects could be achieved when combined with oroxylin A, suggesting that inhibition of LDH-A was required for oroxylin A to block aerobic glycolysis.
We further investigated the association between inhibition of LDH-A and suppression of contraction by oroxylin A in HSCs. As expected, collagen gel contraction assays and cytoskeleton fluorescence staining showed that galloflavin at 20 μM, similar to oroxylin A at 40 μM, significantly reduced HSC contraction, and that the combination of the two compounds produced more potent effects (Fig. 6a, b), which were confirmed by analyses of MLC2 phosphorylation by Western blot and immunofluorescence assays (Fig. 6c, d). To validate the results, siRNA-mediated knockdown of LDH-A was performed in HSCs. The obtained data showed that transfection with LDH-A siRNA, similar to oroxylin A treatment alone, apparently disrupted the microfilament skeleton, as evidenced by cytoskeleton fluorescence staining (Fig. 6e), and reduced MLC2 phosphorylation, as demonstrated by Western blot assays (Fig. 6f). Additionally, overexpression of LDH-A promoted dense arrangement of the cytoskeleton and increased MLC2 phosphorylation, and considerably abrogated oroxylin A-inhibited HSC contraction (Additional file 6: Figure S5). Collectively, these findings indicated that inhibition of LDH-A was required for oroxylin A reduction of HSC contraction.
Oroxylin A alleviates liver fibrotic injury and inhibits HSC glycolysis and contraction by targeting LDH-A in mice intoxicated with CCl4
We used the classical liver fibrosis model induced by intraperitoneal injection of CCl4 in mice to establish the in vivo relevance of the above culture-system findings. Because our recent studies have clearly demonstrated that oroxylin A has potent in vivo antifibrotic effects [17,27], we here focused on testing whether the effects of oroxylin A were dependent on regulation of LDH-A, using adenovirus-mediated overexpression of LDH-A in mice. Oroxylin A reduced the liver/body weight ratio and downregulated the serum levels of hepatocyte injury markers (ALT, AST, TBIL, and IBIL) in fibrotic mice, but these effects of oroxylin A were abolished by overexpression of LDH-A (Fig. 7a-c). Similar changes were observed in the measurements of serum levels of fibrotic markers (HA, LN, and PC-III) and hepatic Hyp contents (Fig. 7d, e). Histological assessments and collagen staining assays showed that oroxylin A amelioration of hepatic structure and collagen deposition was abolished by overexpression of LDH-A in vivo (Fig. 7f). We then examined HSC activation markers and found that oroxylin A significantly reduced the expression of α-SMA, fibronectin and α1(I) procollagen at both the mRNA and protein levels in mouse fibrotic liver, but these effects were counteracted by overexpression of LDH-A in fibrotic mice (Fig. 7f-h). Interestingly, SEM data showed that treatment with oroxylin A inhibited sinusoidal capillarization and restored the fenestrae of liver sinusoidal endothelial cells in fibrotic mice, but overexpression of LDH-A diminished oroxylin A improvement of hepatic vascular architecture during liver fibrogenesis (Fig. 7i). Altogether, these observations indicated that oroxylin A alleviated liver fibrotic injury by targeting LDH-A in mice.
We subsequently evaluated the effects of oroxylin A on HSC glycolysis and contraction in fibrotic mice. We observed that hepatic lactate levels in fibrosis model mice were significantly elevated compared with those of control mice, and that oroxylin A intervention considerably decreased hepatic lactate levels, which was abrogated by overexpression of LDH-A (Fig. 8a). Treatment with oroxylin A downregulated the mRNA and protein expression of HK2, PFK1, PKM2 and LDH-A in mouse fibrotic liver, but their reduction was remarkably rescued by overexpression of LDH-A (Fig. 8b, c). Further immunofluorescence analyses with α-SMA staining to indicate HSCs revealed that these key glycolysis rate-limiting enzymes had lower abundance in the HSCs of oroxylin A-treated fibrotic mice compared to the model group, but overexpression of LDH-A impaired the effects of oroxylin A (Fig. 8d, e). We finally examined HSC contraction, and found that MLC2 phosphorylation was significantly increased in mouse fibrotic liver but was decreased by oroxylin A treatment, whereas overexpression of LDH-A rescued oroxylin A-inhibited MLC2 phosphorylation (Fig. 9a, b). Vimentin is a major component of the cytoskeleton responsible for stabilization of cytoskeletal interactions [28], and is frequently used as a marker of cell contraction [29]. Here, immunofluorescence analysis of vimentin showed that HSC contraction was enhanced in mouse fibrotic liver but was inhibited by oroxylin A treatment; however, overexpression of LDH-A restored HSC contractile capacity in oroxylin A-treated fibrotic mice (Fig. 9b). Taken together, suppression of HSC glycolysis and contraction by oroxylin A contributed to the reduction of liver fibrosis in mice, and these effects were dependent on inhibition of LDH-A.
Discussion
HSCs are located in the space of Disse and contact closely with sinusoidal endothelial cells. The contractile phenotype of HSCs has been critically implicated in the liver's response to various injuries, and the density and coverage of HSCs in the sinusoidal lumen are found to be increased during hepatic fibrosis [5]. It is recognized that the enhanced contractility of HSCs increases the resistance in sinusoidal blood flow and aggravates hepatic sinusoidal capillarization and remodeling, leading to the development of portal hypertension, a highly lethal complication of advanced chronic liver disease [30]. Accordingly, restriction of HSC contraction represents a novel intervention strategy for liver fibrosis or cirrhosis, as well as portal hypertension. We recently reported that oroxylin A had significant antifibrotic and hepatoprotective effects in vitro and in vivo [17,31,32], and observed that oroxylin A could improve sinusoidal vascular remodeling [27]. These observations directed us to investigate whether modulation of the HSC contractile phenotype was involved in oroxylin A's effects. Interestingly, our current data uncovered the association and identified regulation of aerobic glycolysis as a linking molecular event in oroxylin A's effects.
Accumulating evidence suggests that metabolic reprogramming controls the fate and transdifferentiation of HSCs, and is a conserved response to liver injury. Induction of aerobic glycolysis, similar to the Warburg effect described in tumor cells, has been proven to be a driving force of the dramatic phenotypic alterations of HSCs during hepatic repair, including the high proliferative and fibrogenic activities [12]. This phenomenon can be explained by the fact that glycolysis produces ATP at a faster rate than oxidative phosphorylation, although it only generates two ATP molecules per molecule of glucose. Glycolysis thus is a faster and shorter pathway for energy generation used by some cells to meet the high demands of rapid proliferation [33]. This metabolic switch has important therapeutic relevance and implications for liver fibrosis. Indeed, our previous work demonstrated that the well-known natural product curcumin inhibited HSC activation and reduced hepatic fibrosis through disrupting aerobic glycolysis [34,35]. In the current work, we postulated that the contractile phenotype of HSCs could also be governed by aerobic glycolysis and that drug-induced metabolic perturbation could affect HSC contraction and related pathology in liver fibrosis. We found that oroxylin A potently inhibited HSC contraction, evidenced by interruption of cytoskeleton arrangement and reduced MLC2 phosphorylation; meanwhile, the glycolytic flux and activity were effectively blocked by oroxylin A, evidenced by reduced glucose uptake and consumption, decreased lactate production and downregulation of the three key rate-limiting enzymes. More importantly, we identified that oroxylin A blockade of aerobic glycolysis contributed to the restriction of HSC contraction. This link is readily understandable, because many components of the contraction machinery are involved in the efficient coupling of the energy source and depend on the myosin-actin interaction using ATP [36]. The energy-contraction coupling was disrupted by blockade of aerobic glycolysis and reduction of energy supply in oroxylin A-treated HSCs.
We subsequently investigated the potential upstream molecule mediating oroxylin A disruption of the energy-contraction coupling machinery. We focused on the role of LDH-A for the following reasons. (i) LDH-A was highly expressed in human fibrotic liver, implying a close association between LDH-A and hepatic fibrogenesis. (ii) LDH-A converts pyruvate, the final product of glycolysis, to lactate, shifting the use of glucose metabolites from simple energy production to acceleration of cell growth and replication, and thus LDH-A activity has been characterized as a promising target in cancer therapy by preventing cancer cells from proliferating [10]. (iii) LDH-A was newly recognized as a regulator of gene transcription via translocating into the nucleus and binding to DNA, and phosphorylation of LDH at Tyr238 has been characterized to be important for its nuclear translocation [37]. Here, we observed that oroxylin A suppressed the expression and activity of LDH-A in HSCs, and, using chemical and genetic approaches, confirmed that inhibition of LDH-A was a prerequisite for oroxylin A reduction of glycolysis-dependent HSC contraction and liver fibrosis in vitro and in vivo. These results raised an interesting question: why could modulation of LDH-A be the causative event in this context, given that LDH-A works at the final stage of the glycolysis pathway? We postulated that this could be explained by two reasons. (i) Inhibition of LDH-A by oroxylin A synergistically blocked the glycolytic flux, leading to reduced energy production and the resultant restriction of contraction. (ii) LDH-A could regulate the expression of glycolysis rate-limiting enzymes such as HK2, PFK1 and PKM2; LDH-A might act as a transcription factor or co-activator to increase the transcription of these enzymes. This speculation could be, at least partially, supported by the observation that the de novo synthesis of these enzymes was inhibited by oroxylin A, blocking each rate-limiting step of glycolysis. We understand that our results could not rule out the possibility that the expression of these enzymes was inhibited by oroxylin A directly, or indirectly by targeting other molecules, given the fact that natural products commonly have multiple targets within cells. | 2019-02-12T15:01:15.443Z | 2019-02-11T00:00:00.000 | {
"year": 2019,
"sha1": "00425a58de7e671685e7b723a36748433feed1cc",
"oa_license": "CCBY",
"oa_url": "https://biosignaling.biomedcentral.com/track/pdf/10.1186/s12964-019-0324-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4143b056eda519d9f06f44485097bfea5e2d21d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
147435690 | pes2o/s2orc | v3-fos-license | The Impact of the 5E Teaching Model on Changes in Neuroscience, Drug Addiction, and Research Methods Knowledge of Science Teachers Attending California's ARISE Professional Development Workshops
This study examined how science teachers' knowledge of research methods, neuroscience and drug addiction changed through their participation in a 5-day summer science institute. The data for this study evolved from a four-year NIH-funded science education project called Addiction Research and Investigation for Science Educators (ARISE). Findings were based on pre- and post-test evaluation data from three annual cohorts in June 2010, 2011 and 2012. Researchers found significant improvement in teacher knowledge overall and on all subscales. Teachers with lower pre-test scores showed the greatest gain in post-test scores. What made this in-service unique was that the 5E pedagogical model was used to teach the teachers and demonstrate 5E instruction in the science classroom. Through the use of the 5E teaching method, we found that teachers in our cohorts with the least skill had higher rates of gain. A strategy that has been used extensively to teach science to children, this model moves away from didactic methods of in-service pedagogy. These findings suggest that the 5E model could be an effective way to teach teachers as well as students, particularly new and/or less skilled teachers, who often tend to have high numbers of English Learner (EL) students in their classes.
Introduction
Science education research since the 1980s has focused on strategies to improve science education and develop effective school-based science education programs. Despite these efforts, there has been a decline in science education performance, especially in low-income school districts with high numbers of English Learner (EL) and minority students. In the mid-1990s, the National Science Education Standards (National Research Council [NRC], 1996) shifted science education toward more inquiry-based approaches. More recently, the implementation of the Next Generation Science Standards (NGSS) presents unique challenges for science teachers, as they are charged with fostering inquiry-based instruction through the integration of the dimensions outlined in the Framework for K-12 Science Education [Framework] (NRC, 2012). The three dimensions include: 1) science and engineering practices; 2) crosscutting concepts; and 3) core ideas in each of the science disciplines. The science and engineering practices redefine the inquiry-based science concept, as these are aligned with scientific inquiry (research methodology) and in turn help students learn, understand, and do science (Lee, Quinn, & Valdes, 2013). The science and engineering practices require students to be active participants in science inquiry by engaging in discourse about the scientific model or a science concept. In light of the NGSS and the Framework, Weinburgh, Silva, Horak Smith, Groulx, and Nettles (2014) indicate that teacher preparation programs must equip science teachers with the skills and knowledge to integrate language and science learning. The integration of language and science learning presents a more pressing challenge for science teachers of ELs. As the U.S. population is becoming more ethnically and linguistically diverse, it is crucial that science teacher preparation and professional development programs help teachers develop a science pedagogical knowledge base and pedagogical strategies that include skills and activities that engage all students, especially EL populations, in learning science.
Given the high numbers of ELs in U.S. classrooms, science teachers are pressed to serve as language teachers. Although there is some debate on whether language development can occur in the science classroom, Simich-Dudgeon and Egbert (2000) indicate that English speakers and EL students can jointly learn science through collaborative discourse about the science activities. The debate on science and language integration stems from the misunderstanding that scientific terminology presents a barrier to learning for ELs (Crowther, Tibbs, Wallstrum, Storke, & Leonis, 2011). Dong (2013) recommends that rather than focusing on word recognition, teachers integrate students' previous knowledge and language into the science concept learning process. For example, the use of the 5E model (Bybee, 1993) can aid language acquisition by fostering a classroom environment where students are able to use their own examples and explanations.
Teacher Preparation
Traditionally, state policies associated with school funding, resource allocations, and tracking leave high-poverty school districts with fewer and lower-quality books, curriculum materials, and laboratories, and less qualified and experienced teachers. The fact that the least-qualified teachers typically end up teaching the least-advantaged students is particularly problematic in low-income school districts (Gagnon & Mattingly, 2012).
Studies have found that the difference in teacher quality may represent the single most important school resource differential in academic success between minority and white children (Ferguson & Brown, 2000; Darling-Hammond & Post, 2000). The literature on science teacher quality indicates that many teachers are not prepared to teach science content and integrate inquiry-based science instruction into their education of EL students (Garet, Porter, Desimone, Birman, & Yoon, 2001; Penuel, Fishman, Yamaguchi, & Gallagher, 2007). These findings support the need to provide in-service education that is targeted to improve the science knowledge of less prepared teachers to bring them on par with their counterparts.
Other scholars have indicated that science teachers may also lack adequate preparation to address the needs of linguistically diverse students in the science classroom (Bryan & Atwater, 2002; Janzen, 2008; Lee, Hart, Cuevas, & Enders, 2004). Moreover, teacher quality is more problematic in racially diverse school districts with high levels of poverty. Gagnon and Mattingly (2012) found that schools with a high percentage of minority students are more likely to have beginning teachers. The staffing of beginning teachers in schools with high levels of poverty creates even greater academic risks for minority students, as these schools do not have the resources to support the pedagogical development of new teachers. Moreover, Miller (2011) found that teachers face greater challenges in culturally and linguistically diverse schools as they have multiple work demands, coupled with the challenge of meeting the learning needs of their diverse students. In addition, Darling-Hammond and Sykes (2003) indicated that areas of high poverty tend to have higher rates of teacher turnover.
Professional Development for Science Teachers
Since the late 1980s, one model that has been used extensively in the development of new curriculum materials and professional development experiences is the Biological Sciences Curriculum Study (BSCS) 5E Instructional Model. In spring 2006, web-based research showed that the BSCS 5E model had been used in 235,000 lesson plans, with over 97,000 posted examples of universities using the 5E model in course syllabi, over 73,000 examples of curriculum materials incorporating the 5E, over 131,000 examples of teacher education programs or resources using the 5E, and three states endorsing the model (Bybee et al., 2006). Numerous articles support the 5E model for student learning (Akar, 2005; Cardak, Dikmenli, & Saritas, 2008; Acisli, Yalcin, & Turgut, 2011; Cherry, 2011; Tuna & Kacar, 2013).
In their analysis of the data from the Teacher Activity Survey collected through the Eisenhower Professional Development Program, Garet et al. (2001) found that effective teaching practices can be fostered through professional development. Previous research has indicated that professional development interventions that target science teachers of EL students need to be focused on specific content and provide teachers with strategies on how to make the concepts accessible to ELs (Lee & Fradd, 1998; Lee, 2005; Penuel et al., 2007). Lee (2004) indicated that teachers may not have a clear idea of how to make science more accessible to ELs, but through professional development they can acquire the strategies and knowledge to do so.
Other studies have identified: 1) content-focused activities, 2) knowledge about best teaching practices for teaching science to the targeted student populations, and 3) learning how to engage students in the learning process as some of the key characteristics of effective professional development interventions (Dass, 2001; Garet et al., 2001; Penuel et al., 2007). More specifically, these studies indicated that high-quality professional development for science teachers should be content-focused, model inquiry-style pedagogies, and provide teachers with enhanced knowledge and skills to work with diverse student populations.
The Scientific and Engineering Practices and the 5E Model
The 5E model has been used for years in teaching science methods courses (Goldston, Dantzler, Day, & Webb, 2012). The 5E model consists of five phases:

1) Engagement: creates student interest in the subject by generating curiosity, raising questions, and eliciting thought and responses that uncover previous knowledge.

2) Exploration: often working in groups, students take part in activities that provide concepts and skills, help them use prior knowledge to generate new ideas, and help them explore new possibilities and increase interest in the subject.

3) Explanation: allows students to explain their understanding of a concept. Teachers may introduce a concept or skill, provide deeper understanding, and/or clarify misunderstandings.

4) Elaboration: encourages students to apply or extend their learning of a concept in new directions, provides opportunities to expand thinking and skills, and allows students to apply their understanding through additional activities.

5) Evaluation: allows for student self-assessment, allows teachers to observe student learning and look for evidence that students have changed their thinking or behavior, and evaluates for student misunderstanding. (Bybee et al., 2006; Bybee, 2009)
Use of the 5Es in science instruction can help teachers address science content as well as the scientific and engineering practices. For example, through engagement teachers can use students' prior knowledge to initiate the engagement of students in the science classroom. During the engagement phase, students can begin to develop questions or engineering problems by drawing from their previous knowledge. Furthermore, through exploration the students can also begin to ask questions and define engineering problems. Teachers can promote the development and use of models by encouraging students to elaborate on the different ways they can represent science concepts. Through the elaboration and exploration phases, students have the opportunity to plan and carry out investigations by examining the different ways they can answer scientific questions developed in the classroom and generate the evidence to test their theories. Students can also explore the different ways they can interpret and make sense of raw data. Additionally, through explanation students are encouraged to find ways to communicate their data to different audiences. Exploration and elaboration allow students to find tools within the mathematical and computational fields and encourage them to apply these tools to solve their science questions and engineering problems, using previous knowledge and skills in the reconceptualization of concepts and models. Teachers can foster constructing explanations and designing solutions by having students explain their rationale and its connections to science knowledge. The goal is for students to articulate in various forms the explanations of a phenomenon. By having students explain a phenomenon, teachers can evaluate students' understanding and learning of the scientific ideas presented. Students are required to engage in argument from evidence to defend their findings and rationale. In order to do so, students must elaborate on their thinking and procedures to provide the necessary evidence. Finally, the obtaining, evaluating, and communicating information practice evaluates the students' ability to communicate and reproduce the science and engineering concepts. This practice can be achieved through evaluation of students' performance throughout the other practices by gauging their levels of engagement, elaboration, exploration, and explanation. The use of the 5E model in the Science and Engineering Practices presents a unique opportunity for science teachers to integrate language development. Table 1 presents the Science and Engineering Practices and their alignment with the 5E Model.

Table 1. Alignment of the Science and Engineering Practices with the 5E Model

1. Asking questions and defining problems: Exploration can help students to ask questions and define problems.
2. Developing and using models: Elaborate encourages students to expand their learning and new concepts by discussing different representations of science concepts.
3. Planning and carrying out investigations: Exploration allows students to design comprehensive scientific investigations that generate data to support their hypotheses; Elaborate allows students to continue expanding their skills to become more systematic when conducting investigations.
4. Analyzing and interpreting data: Explore requires students to examine the different ways to analyze raw data and interpret it; Explanation allows students to communicate the data analyses in different forms.
5. Using mathematics and computational skills: Exploration of tools and concepts to elaborate and build knowledge across the academic disciplines.
6. Constructing explanations and designing solutions: Explanation can help students provide solutions to their science and engineering questions by articulating in various forms the causes of a phenomenon.
7. Engaging in argument from evidence: Elaborate allows students to defend their conclusions and findings based on the evidence formulated.
8. Obtaining, evaluating, and communicating information: Evaluation of students' understanding of the concepts via their explanations, elaborations, and exploration.
This study examined how science teachers' knowledge of research methods, neuroscience and drug addiction changed through their participation in a 5-day summer science institute. We address the following research questions: 1) Was the Summer Science Institute successful in increasing teacher knowledge of Neuroscience, Drug Addiction, and Research Methods?
2) Were there teacher demographic and situational variables that impacted teacher learning?
3) Were less experienced and/or qualified teachers more likely to teach in schools with high EL enrollment and did this impact their learning?
Population
The population for this study consisted of 91 science teachers who made up three consecutive cohorts of 30 to 35 teachers attending the 2010, 2011, and 2012 ARISE Summer Institutes. The population included only teachers who attended the entire institute and completed both the pre- and post-tests/surveys. All teachers voluntarily signed up to participate in the ARISE project, and the only requirement for participation was that they were teaching 7th through 12th grade science classes in public schools located in the California Central Valley.
The mean age of science teachers in the ARISE Summer Institute cohorts was 41.9 years; 62 (68.1%) were female and 29 (31.9%) were male. Regarding education, 42.3% of the participants completed some form of post-graduate education. The average number of years of post-graduate education was 5.7 years. Additional demographic information can be reviewed in Table 2. In order to better prepare science teachers in the Central Valley to improve their delivery of science instruction to students, the 5E instructional approach was introduced and modeled throughout the content delivery of the institute. This model is based on the constructivist approach to learning, whereby learners build or construct new ideas on top of previous experiences and knowledge (Enhancing Education, 2002). Each of the 5Es described above actively engages students in a series of phases that help them build their knowledge and experiences, construct meaning, and assess their understanding of new information.
This study focused on the impact of the 5E teaching model on improving participant teachers' neuroscience, drug addiction and research methods content knowledge. In order to help engage students in science and foster student-centered inquiry-based instruction, a requirement of the ARISE Institute was that teachers return to their classrooms and lead their students in drug addiction and/or neuroscience research experiments. The research process allows students to be active participants in science inquiry by engaging in discourse about their experimental design, data collection, analyses and reporting. Major topics presented included: 1) Localization of brain function, 2) General functions of specific brain areas, 3) Anatomy of the neuron, 4) Neurotransmission, 5) Mechanism of drug action and neurons, 6) Environmental, behavioral and genetic influences on addiction, and 7) Addiction as a chronic disease. Delivery of neuroscience content included information about the nervous system, structure and function. Drug addiction presentations included a discussion of addiction and information on the classes/categories of drugs, the basics of drug pharmacology, and the effects of specific drugs on the body. Presenters incorporated stories or photos to engage the learner; detailed animated PowerPoint presentations were used to explain brain functions, action potentials and effects of drugs. Participants were allowed to ask questions when needed. Hands-on exploration included sheep brain, frog and cow eye dissections, and group activities were used to demonstrate action potentials. Presenters were careful to relate new information to previous knowledge prior to elaborating on new content. Time was allowed for exploration, and classroom clickers that recorded the number of correct responses to questions were used during the instructional sessions to check for understanding before moving on to new information.
The 5E pedagogical approach was introduced to teachers as a model for teaching 7th-12th grade science lessons during the first day of the workshop and was used and modeled by UC Davis science faculty workshop presenters throughout the institute. This approach was also used to demonstrate and model how to deliver 5E instruction to EL students in a separate training session supported by two texts: "Making Science Accessible to English Learners" (Carr, Sexton, & Lagunoff, 2007) and "Building Academic Vocabulary, Teacher's Manual" (Marzano & Pickering, 2005). Thus, teachers were able to both visualize science teaching strategies and discuss these strategies with faculty workshop presenters in order to better understand and implement these practices in their own science classroom settings.
Instrumentation
The educational effectiveness of the science component of the summer institutes was measured by way of a pre-test and post-test administered at the start and conclusion of each institute. A demographic survey given at the start of each institute collected relevant information about teacher gender, race/ethnicity, educational and socio-economic background, and the percentage of EL students in the schools where they taught science.
The Neuroscience, Drug Addiction, and Research Methods test consisted of an objective-referenced test of 24 multiple-choice items and 2 positively phrased true/false items. Seventeen multiple-choice items had one correct answer and three distractors; 6 items included "All of the above", "None of the above", and "Answers a and c" as distractors and/or correct answers. One true/false question on drug addiction and another on research methods started the test, followed by multiple-choice questions of which 9 addressed drug addiction, 4 addressed research methods, and 11 addressed neuroscience. To ensure content validity, the test was developed by three university faculty members and was based on the neuroscience, drug addiction and research methods content they presented during the ARISE Summer Institutes. The faculty included two members from the Department of Neurobiology, Physiology and Behavior and one from the Department of Animal Science. Neuroscience and drug addiction content followed guidelines provided by The Brain: Understanding Neurobiology Through the Study of Addiction (January 2000). The test was field-tested by 15 science teachers not participating in the study. Distractor and item analysis measures were generated from these data, and items with low discrimination values were removed. Reliability was estimated using Cronbach's alpha. Internal consistency reliability estimates were Cronbach's alpha = 0.520 for the pre-test and 0.663 for the post-test. Table 3 presents additional reliability information for each of the cohorts.
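For reference, Cronbach's alpha is computed from an examinee-by-item score matrix as k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on invented dichotomous scores (not the study's data):

```python
# Cronbach's alpha for a respondents-by-items matrix of scored answers.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy 5-teacher x 4-item matrix (1 = correct), invented for illustration:
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1],
                   [1, 0, 1, 1]])
print(round(cronbach_alpha(scores), 3))
```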
Data Collection and Analysis
The test and survey were coded to ensure teacher confidentiality, and pre- and post-tests were matched by coded numbers. Teachers were asked to respond to the test using Scantron forms. These data were scanned and uploaded into an Excel data file for processing. The statistical package used in analyzing the data was SPSS. Counts and frequencies were tabulated for all teacher demographic variables. Only complete data from teachers taking both the pre- and post-test/surveys were used in the analyses (n = 91). For the purpose of this article, data from the three cohorts were pooled into one group for analyses. Due to the ARISE Summer Institute focus on teaching EL students, teacher demographics and school EL percentages were used to determine relationships with high or low test scores of content knowledge. Teachers were placed into different groups depending on the percentages of EL students in their classrooms. In order to determine if teachers had low or high numbers of EL students in their classrooms, a binary variable was created whereby low numbers of EL students included those with 15% EL students or fewer (n = 21) and high numbers of EL students equaled 16% or higher (n = 54). This threshold was selected to reflect the average number of EL students in schools in high-income districts reported by the State Department of Education (California Department of Education, 2014). Changes in pre- and post-test scores were analyzed using t-tests and analysis of covariance. Relationships between test scores and teacher independent and school variables were analyzed using descriptive statistics, t-tests, cross-tabulation and correlation analyses.
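A hedged sketch of the grouping and paired pre/post comparison described above, using pandas and SciPy rather than SPSS; column names and scores are illustrative:

```python
# Binary EL grouping and paired pre/post t-test, mirroring the analysis plan.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "pct_el":    [5, 30, 60, 12, 45, 80],   # invented EL percentages
    "pre_test":  [14, 10, 9, 13, 11, 8],    # invented scores
    "post_test": [16, 15, 14, 16, 15, 13],
})

# Binary EL variable: low = 15% or fewer EL students, high = 16% or more.
df["high_el"] = df["pct_el"] >= 16

t_stat, p_value = stats.ttest_rel(df["post_test"], df["pre_test"])
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
print(df.groupby("high_el")[["pre_test", "post_test"]].mean())
```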
For purposes of analysis, the test was divided into three subscales based on the content knowledge delivered and the questions subsequently developed to address neuroscience, drug addiction, and research methods knowledge.
Results
Paired t-test procedures were used to determine changes in teacher knowledge between the Neuroscience, Drug Addiction and Research Methods pre-test and post-test (n = 91). A t-value of 10.19 (p = 0.000) indicated a statistically significant increase in knowledge between pre- and post-test scores. Paired t-tests were also calculated to determine changes in teacher knowledge between pre-test and post-test on the subscales of Neuroscience, Drug Addiction, and Research Methods. Statistically significant increases in teacher knowledge were observed (p = 0.000) on all three subscales. The results of these analyses are presented in Table 4. In general, the greatest changes in scores were observed on the Neuroscience subscale, with an average change of 17.3%, followed by Drug Addiction (16%) and the Research Methods subscale with 12% gains. Statistically significant differences in paired pre- and post-test performance were observed on 6 out of 11 neuroscience items at the p = 0.05 level. Paired t-tests showed significant changes in knowledge on 7 out of 10 items of the drug addiction subscale. Paired t-tests were also used to determine changes in teachers' knowledge of the research methods process; however, of 5 items, only 1 item, a question on self-administration studies, showed a statistically significant change in knowledge (p = 0.00).
Group t-tests were performed to examine whether the number of EL students in a teacher's school influenced performance on both the pre- and post-tests. Teachers with high numbers of EL students in their schools (16% and higher) had lower scores across all subscales on the pre-test (t = 2.39, p = 0.0095) compared to teachers with low numbers of EL students in their schools. Both groups increased their test scores on the post-test; however, no differences between groups with low and high EL student enrollment were observed on the post-test (t = 1.5, p = 0.06). In addition, pre-test scores on the research methods subscale revealed significant differences between teachers with low numbers of EL students in their schools compared with teachers with high numbers of EL students (t = 2.6, p = 0.005, n = 91). However, post-test scores on the research methods subscale revealed no significant differences after the training (t = 0.85, p = 0.199). These findings suggest that content provided during the institutes helped minimize the score difference on the post-test, such that the number of EL students in teachers' classrooms did not influence their post-test scores.
Teachers born in the U.S. appeared to have an advantage on the pre-test (M = 11.50, n = 79), particularly on the drug addiction and research methods subscales, compared with teachers born outside of the U.S. (M = 9.0, n = 12) (pre-test t = 3.3, p = 0.001). However, content provided during the institutes appeared to minimize the score difference on the post-test between groups, such that there was no difference between teachers born in the U.S. (M = 15.5, n = 79) compared with teachers not born in the U.S. (M = 14.1, n = 12) (post-test t = 1.45, p = 0.14).
There were no differences between teachers with a science background (i.e., a B.S. degree or science major) (n=46) and teachers teaching science without a science major (n=41) on either the pre- or post-test scores (t=0.201, p=.84, n=87 and t=1.36, p=.17, n=87, respectively). Scores for both groups improved on the post-test; however, there were no significant differences between groups (post-test M=14.9, n=41 and M=15.8, n=46). Additional analyses of teachers' characteristics such as adult socioeconomic status, gender, age and diverse background showed no statistical differences in pre-test or post-test scores.
In order to determine whether educational differences existed between teachers with high or low EL student enrollment in our study, we used descriptive and cross-tabulation analyses to compare teachers' education levels and the percentage of EL students in their schools. Using the binary variable described above (low EL enrollment: 15% or fewer, n=21; high: 16% or higher, n=54), we found that teachers with low EL enrollment reported more years of post-secondary education (6.7 years) compared with teachers with high EL enrollment (5.5 years) (t=2.17, p=.02). These findings suggest that teachers in our study from schools with high EL enrollment had fewer years of post-secondary education than teachers from schools with a low percentage of EL students.
Discussion
In-service professional development programs such as the ARISE Summer Institutes can be instrumental in increasing teacher preparedness for delivering science content to EL students. Formal instructional settings that incorporate hands-on modeling of an evidence-based instructional approach such as Bybee's 5E model are important because they go beyond a simple description of an effective teaching approach by incorporating tangible examples of the specific pedagogical techniques along with the science instruction. Rather than providing teachers with instruction on science content and pedagogical approaches separately, the literature suggests that the most effective way to demonstrate pedagogical techniques is to do so while delivering strong science content (e.g., Santau et al., 2014).
The ARISE Summer Institutes delivered inquiry-based science instruction to in-service teachers while modeling the 5E pedagogical approach, with direct examples designed to reach culturally diverse EL populations. The literature suggests that the foundation of good science pedagogy is a deep understanding of science content (Aydin et al., 2013). This study examined how science teachers' knowledge of neuroscience, drug addiction, and research methods changed through their participation in a 5-day ARISE Institute. In addition, demographic data collected from participants were used to determine whether experience and socioeconomic factors influenced test scores. Relationships between teacher experience and school EL enrollment data were also explored in light of the current literature. Below we outline our conclusions based upon our results and suggest future research.
In general, teachers who received training during the ARISE Summer Institutes showed significant improvement in their knowledge of neuroscience, drug addiction, and research methods, with the greatest knowledge gains in the Neuroscience and Drug Addiction subscales. Teachers had the lowest knowledge gains in the Research Methods subscale. While neuroscience and drug addiction topic areas are not addressed in most 7th through 12th grade science classes, the State Science Content Standards (California Department of Education, 2013) touch on neuroscience in the physiology section of Biology/Life Sciences courses, traditionally taken by students in the 9th grade, and drug addiction is introduced into the health curriculum as early as the 2nd grade, whereby students learn the effects of alcohol, tobacco and other drugs on the human body. These health topics are expanded in the 7th and 8th grades and into the high school curriculum, where more health classes are offered. Following their participation in the Summer Institutes, teachers in our study led their students in a research project during the next academic semester. Neuroscience and drug addiction were chosen as the content areas for the Institutes because it was anticipated that teachers (and their students) would have greater interest, and therefore greater gains in knowledge, in these topic areas. Our findings indicated significant gains in knowledge in all science content sections, highlighting the success of the instructional approach used throughout the institutes (the 5E pedagogical model). Faculty presenters demonstrated numerous strategies while presenting science lessons, including creative ways to interest and engage the teachers in the subject matter; allowing them to work in groups to explore new ideas and concepts; providing activities to help deepen understanding; encouraging teachers to extend their learning in new directions; checking for understanding before moving to another topic; and asking questions to check for and correct misunderstandings.
Of the three science content areas, our participants showed the least knowledge improvement in the research methods subsection. This finding is interesting, since a basic understanding of the scientific process is widely considered a crucial foundational component of science education. The smaller improvement in understanding of research methods may be due to differences in the inherent interest of the subject matter itself: neuroscience and drug addiction can be more engaging curriculum topics, such that teachers are more motivated to learn this information and integrate it into existing curriculum. The topics of neuroscience and drug addiction were used to increase student interest in learning science, while research methods content was incorporated to help teachers actively engage their students in drug addiction research experiments following the end of the ARISE Institute.
Overall, the ARISE project sought to help teachers be more effective, not only by creating more effective learning environments for their students, but also by better preparing teachers to help their own students conduct future research projects. The literature suggests that teachers make uneven gains in their knowledge base during training, improving in one aspect of teaching more easily than in others (Henze et al., 2009; Aydin et al., 2013). The lack of teacher knowledge on how to conduct simple classroom experiments was an interesting finding in this study and is a subject that needs more exploration. Perhaps the research process is a topic that needs to be emphasized in science pre-service and in-service programs to better prepare science teachers to engage their students in classroom research activities.
Our findings support the earlier research of Gagnon and Mattingly (2012), which suggested that less qualified teachers end up teaching the least-advantaged students, particularly in low-income school districts. We found that teachers in our cohorts who taught in schools with high numbers of EL students had fewer years of post-graduate education and were more likely to have non-science majors in college.
Teachers from schools with a high percentage of EL students scored lower on the pre-test, particularly in the research methods section, compared with teachers teaching in less diverse districts. In addition, teachers who were born in a country other than the United States had lower pre-test scores than those born within the U.S. However, teachers with the lowest pre-test scores showed the most knowledge gain on the post-test, suggesting that information was presented in a format and at a knowledge level that allowed teachers with fewer years of post-graduate work and less knowledge of the subject areas to learn difficult science content over the course of a week and catch up to their higher-scoring counterparts. This finding is important because, in the initial planning of the ARISE Institute, science content presenters were concerned that the content would be too difficult for teachers new to the subject areas. Further, these findings indicate that the 5E Model and culturally nuanced learning strategies integrated into the teaching of science content can successfully impact teacher knowledge levels, such that less skilled and less prepared teachers were able to catch up with their higher-scoring counterparts on the post-test. This finding is consistent with the work of Wilson and Berne (1999), indicating that, with adequate support, professional development interventions can be successful for teachers of various backgrounds and subject knowledge levels.
Conclusion
Our purpose with the ARISE project was to catalyze teacher education by providing culturally nuanced instruction in specific science content areas while modeling the inquiry-based 5E pedagogical approach. Overall, teachers who participated in the ARISE Summer Institutes improved in their knowledge of neuroscience, drug addiction and research methods. These findings are consistent with Garet et al. (2001), who found that content-focused professional development has a positive impact on teacher learning. While our results point to the success of the 5E instruction model (engage, explore, explain, elaborate and evaluate) in enhancing science teacher training and support, they also suggest areas for improvement, such as a basic understanding of how research is conducted. This is important since there is a large literature demonstrating the importance of incorporating engaging activities, like basic classroom experiments, into instruction as a means for students to become competent with science content.
Our results are also consistent with the demographics suggested by other studies addressing the teacher and student populations in California (California Department of Education, 2012, 2014).Our participants were teachers from the California Central Valley, a region that has an increasing number of EL students.This situation of cultural diversity makes it all the more important to continue a dialog on efforts to achieve equitable education across K-12 classrooms (Lee & Fradd, 1998;Lee, 2005;Penuel et al., 2007).Teacher education that focuses on culturally nuanced learning will help to bring pedagogical strategies to these teachers with diverse student populations such that they can deliver science instruction in a way that is accessible to all.
Table 1. Science and engineering practices in the 5E model
Table 2. Characteristics of ARISE teachers in all cohorts

The findings described in this article are based on data collected under the National Institute on Drug Abuse (NIDA) Science Education Drug Abuse Partnership Awards (SEDAPA) funded project, Addiction Research and Investigation for Science Education (ARISE). An essential feature of the ARISE project was to provide professional development to science teachers working in the Central Valley, with the specific aim of improving teachers' neuroscience, drug addiction and research methods content knowledge such that they could lead neuroscience and drug addiction research projects in their classrooms. Neuroscience and drug addiction were chosen as the content areas for the Institutes because it was anticipated that teachers (and their students) would have greater interest, and therefore greater gains in knowledge, in these topic areas. A broader and more ambitious goal of ARISE was to provide a model for addressing the science education achievement gap that exists between ELs and English-speaking students attending public schools in California's Central Valley, by combining evidence-based instruction in science content with an effort to directly engage students in a drug addiction research project.
The importance of instruction in scientific methodology is evidenced by the fact that Investigation and Experimentation standards, focusing on the scientific process, were included in every grade level of the California State Science Standards (California Department of Education, 2013), starting in kindergarten, and in the Framework for K-12 Science Education [Framework] (NRC, 2012). Curriculum for the neuroscience and drug addiction content of the Summer Institute was derived from "The Brain: Understanding Neurobiology Through the Study of Addiction," an interactive curriculum for teachers and students in grades 9 through 12 (NIH, NIDA, March 2010). The ARISE Institute consisted of a five-day (8 hours/day) intensive training: four hours/day of neuroscience content material with a focus on drug addiction research, one hour/day of research methods training, and three hours/day of culturally nuanced learning and 5E Model pedagogy.
Table 4. Paired t-test analyses of changes in knowledge on the Neuroscience, Drug Addiction, and Research Methods pre-test and post-test and subscales
Evolution of binary black holes in self gravitating discs: dissecting the torques
We study the interplay between gas accretion and gravitational torques in changing a binary's orbital elements and its total angular momentum (L) budget. In particular, we analyse the physical origin of the gravitational torques (T_g) and their location within the disc. We analyse 3D SPH simulations of the evolution of initially quasi-circular massive black hole binaries (BHBs) residing in the central hollow of massive self-gravitating circumbinary discs. We use different prescriptions for the thermodynamics within the cavity and for the numerical size of the black holes to show that: (i) the BHB eccentricity growth found previously is a general result, independent of the accretion and the adopted thermodynamics; (ii) the semi-major axis decay depends both on T_g and on the interplay with the disc-binary L-transfer due to accretion; (iii) the spectral structure of T_g is predominantly caused by disc-edge overdensities and spiral arms developing in the body of the disc and, in general, does not reflect directly the period of the binary; (iv) the net T_g changes sign across the BHB corotation radius, and we quantify the relative importance of the two contributions, which appears to depend on the thermodynamical properties of the instreaming gas and which is crucial in assessing the disc-binary L-transfer; (v) the net torque manifests as a purely kinematic (non-resonant) effect, as it stems from the cavity, where the material flows in and out on highly eccentric orbits. Both accretion onto the black holes and the interaction with the gas streams inside the cavity must be taken into account to assess the fate of the BHB. Moreover, the total torque exerted by the disc affects L(BHB) by changing all the elements (mass, mass ratio, eccentricity, semi-major axis) of the BHB. Common prescriptions equating the tidal torque to semi-major axis shrinking might therefore be poor approximations for real astrophysical systems.
INTRODUCTION
In the currently favoured hierarchical framework of structure formation (White & Rees 1978), galaxies evolve through a complex sequence of merger and accretion events, and the existence of massive black holes (BHs) at their centres is nowadays a well established observational fact (see Gültekin et al. 2009, and references therein). By combining these two pieces of information, the formation of a large number of massive black hole binaries (BHBs), following galaxy mergers throughout cosmic history, is a natural consequence of the structure formation process (Begelman et al. 1980). Although this is corroborated by several observed quasar pairs at ∼100 kpc projected separation (Hennawi et al. 2006; Myers et al. 2007, 2008; Foreman et al. 2009; Shen et al. 2011), and by a few ≲ kpc-scale dual accreting BHs embedded in the same galaxy (e.g. Komossa et al. 2003; Fabbiano et al. 2011), the identification of gravitationally bound BHBs in galaxy centres remains elusive (for an up-to-date review on the candidates see Dotti et al. 2012, and references therein).
Even though observationally there is little evidence for their existence, much theoretical work has focused lately on Keplerian massive BHBs residing in galactic nuclei. One reason is that a deep understanding of the interplay between BHBs and their dense (stellar and gaseous) environment is required to predict robust signatures that may allow their identification. Additionally, it is still unclear how the gap between the two theoretically well understood stages of BHB evolution is bridged: (i) the dynamical-friction-driven stage, when the two BHs spiral in toward the centre of the merger remnant down to pc separations, and (ii) the final inspiral driven by gravitational waves (GWs), which become efficient when the two BHs are at a separation ≲ 10^-2 pc. Both dense stellar and gaseous environments have been shown to be effective in extracting the binary energy and angular momentum (see, e.g., Escala et al. 2005; Dotti et al. 2007; Cuadra et al. 2009; Khan et al. 2011; Preto et al. 2011), likely driving the system to final coalescence (an extensive discussion of the fate of sub-parsec BHBs can be found in Dotti et al. 2012). Scenarios involving cold gas are particularly appealing, not only because they might produce distinctive observational signatures, but also because cold gas dominates the baryonic content of most galaxies at redshifts higher than one, providing a natural reservoir of energy and angular momentum to drive the BHB towards coalescence.
Numerical simulations of wet galaxy mergers (e.g., Mihos & Hernquist 1996; Mayer et al. 2007) have led to the following picture for the post-merger evolution. Cold gas is funnelled to the central ≈100 pc by gravitational instabilities, where it forms a puffed-up, rotationally supported circumnuclear disc. Disc-like structures are indeed observed in ULIRGs, which are thought to be gas-rich, post-merger, star-forming galaxies (e.g., Sanders & Mirabel 1996; Downes & Solomon 1998; Davies et al. 2004a,b; Greve et al. 2009). In the models, the two nuclear BHs efficiently spiral in to sub-pc scales owing to dynamical friction against the massive circumnuclear disc (Dotti et al. 2007), eventually opening a cavity (or hollow) in the gas distribution (Goldreich & Tremaine 1980). The subsequent evolution of the system is determined by the efficiency of energy and angular momentum transfer between the BHB and its outer circumbinary disc.
The investigation of coupled disc-binary systems has a long-standing tradition in the context of planetary dynamics (Goldreich & Tremaine 1980; Lin & Papaloizou 1986; Ward 1997; Bryden et al. 1999; Lubow et al. 1999; Nelson et al. 2000), where the focus usually lies on the extreme mass-ratio situation, i.e., a star surrounded by a circumstellar disc with a planetary companion embedded in it. The comparable-mass case has also been extensively investigated in the context of binary star formation (Artymowicz & Lubow 1994; Bate & Bonnell 1997; Günther & Kley 2002; Günther et al. 2004), where a boost of activity has been triggered by the imaging of nearby young binary stars embedded in hollow circumbinary discs (Dutrey et al. 1994). More recently, the techniques adopted in these fields have been applied to the BHB case. In the last decade, several investigations were devoted to the study of comparable-mass BHBs evolving in circumbinary discs, exploiting a variety of analytical and numerical techniques (Ivanov et al. 1999; Roedig et al. 2011; Shi et al. 2011). However, only a few of them focused on the details of the dynamical disc-binary interplay. MacFadyen & Milosavljević (2008) (hereinafter MM08) made use of two-dimensional grid-based hydrodynamical simulations to study a BHB embedded in a thin α-disc (Shakura & Sunyaev 1973). They showed that the gas flowing through the cavity increases the energy and angular momentum transfer between the disc and the binary. Shi et al. (2011) (hereinafter S11) confirmed that result using full 3D magnetohydrodynamic (MHD) simulations. However, in both studies the BHB was on a fixed circular orbit, the central region of the disc (i.e. within twice the binary separation) was excised from the computational domain, and the disc self-gravity was neglected.
We employ full 3D smoothed particle hydrodynamics (SPH) simulations to study the interaction between BHBs and their surrounding discs. Our goal is to give a detailed description of the coupled disc-binary dynamics, paying particular attention to the competing effects of disc-binary gravitational torques and gas accretion onto the BHs in the evolution of the binary angular momentum budget. We simultaneously evolve the disc and the two BHs in a self-consistent way. This enables us to separately investigate the effect of gravitational torques coming from different disc regions, and to directly link different physical mechanisms (gravitational torques and accretion) to the evolution of the individual quantities describing the binary (mass, mass ratio, semi-major axis and eccentricity). Our approach allows a broader understanding of the coupled dynamics with respect to previous studies (MM08, S11), where the binary was modelled as a fixed forcing quadrupolar potential and the central region was excised. Thanks to the relatively low computational cost of the SPH implementation, we could perform different runs varying physical and numerical prescriptions that may play a relevant role in the disc-binary energy and angular momentum exchanges. Specifically, we checked the possible dependence of our results on the 'numerical size' of the two BHs (i.e., their sink radii, see next Section), and on the adopted equation of state (EoS) for the gas within the central cavity.
The paper is structured as follows: we first describe the numerical setup and initial conditions in Section 2, then we show the importance of both accretion and gravitational torques for the binary evolution in Section 3. We study in detail the origin and strength of the mutual disc-binary gravitational torques in Section 4, highlighting the influence of the thermodynamic prescription. In Section 5 we briefly discuss the temporal behaviour and the spatial geometry of the accretion, speculating on the evolution of the binary mass ratio and the individual BH spins. Finally, we compare our results to previous work in Section 6, and we discuss the implications of our findings and give our conclusions in Section 7.
SIMULATIONS
The model and numerical setup of this work are closely related to those of Roedig et al. (2011) and Cuadra et al. (2009); hence we only outline the key aspects in the following, and refer the reader to these two papers for further details.
Figure 1. Relaxed meridional density maps of the disc for the adia (top) and iso (bottom) runs; the black dots indicate the BHs, and the axes are in units of the binary semi-major axis a. Note that this is not an azimuthal average but a vertical slice of the disc.

We simulate a self-gravitating gaseous disc of mass M_d = 0.2 M around two BHs of combined mass M = M1 + M2, mass ratio q = M2/M1 = 1/3, eccentricity e and semi-major axis a, using the SPH code Gadget-2 (Springel 2005) in a modified version that includes sink particles, which model accretion onto the BHs (Cuadra et al. 2006). Moreover, the orbit of the BHB is followed very accurately by using a fixed small time-step and by directly summing the gravitational force from every other particle in the simulation (Cuadra et al. 2009). The disc, which is co-rotating with the BHs, radially extends to about 7a, contains a circumbinary cavity of radial size ∼2a, and is numerically resolved by 2 million particles. The numerical size of each BH is denoted as r_sink, the radius below which a particle is accreted: it is removed from the simulation and its momentum is added to the BH (Bate et al. 1995). The gas in the disc is allowed to cool on a time scale proportional to the local dynamical time of the disc, t_dyn = f_0^{-1} = 2π/Ω_0, where Ω_0 = (G M_0/a_0^3)^{1/2} is the initial orbital frequency of the binary. To prevent the disc from fragmenting, we force the gas to cool slowly, setting β = t_cool/t_dyn = 10 (Gammie 2001; Rice et al. 2005). We use two different treatments for the thermodynamics inside the cavity, denoted as adia and iso. In the adia runs, the gas within the cavity is allowed both to cool via the β prescription and to heat up adiabatically, as is the remainder of the disc (Cuadra et al. 2009). In the iso runs, we define a threshold radius r_cavity = 1.75a, below which the gas is treated isothermally, meaning that the internal energy per unit mass is set to u ≈ 0.14 (GM/R) (Roedig et al. 2011).
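As an illustration of this thermodynamic prescription, the β-cooling rule can be sketched in a few lines; the snippet below is a stand-alone toy in code units (G = M_0 = a_0 = 1), not the actual Gadget-2 implementation.

```python
import numpy as np

G, M0, A0 = 1.0, 1.0, 1.0
OMEGA0 = np.sqrt(G * M0 / A0**3)   # initial binary orbital frequency
T_DYN = 2.0 * np.pi / OMEGA0       # dynamical time (= P0 in these units)
BETA = 10.0                        # t_cool / t_dyn; slow cooling avoids fragmentation

def beta_cool(u, dt, t_cool=BETA * T_DYN):
    """Damp the internal energy per unit mass over one step dt.

    Exact solution of du/dt = -u / t_cool, applied particle-wise.
    """
    return u * np.exp(-dt / t_cool)

u = np.ones(1000)                  # toy internal energies for 1000 particles
u = beta_cool(u, dt=0.01 * T_DYN)
```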
As initial conditions we use a relaxed snapshot from Cuadra et al. (2009) taken at their t = 500 Ω_0^{-1} (see Roedig et al. 2011, Section 2.1). The nomenclature of the runs and the parameters used are listed in Tab. 1. Each run is evolved for ≈90 binary orbits, and we store the output in single precision ∼6 times per orbit. We also closely track accretion by storing the position and velocity of the two BHs and of each accreted particle at the time the latter crosses the sink radius. For both prescriptions (adia and iso) we show the relaxed density configurations sliced along the meridional plane in Fig. 1 (see also Figs. 8-9 for a face-on view), and the measured, time-averaged values of both the disc scale height H/r and the disc eccentricity e_disc in Fig. 2. H is taken to be the height above and below the disc orbital plane within which 70% of the mass inside the annulus (r, r + dr) is found. In their bodies (i.e. at r > 2a), the discs may be considered thin, as shown in the upper panel of Fig. 2, with a maximum thickness H/r ≈ 0.2, and quasi-circular (lower panel of Fig. 2). Generally, we can consider the physics to be resolved if the smoothing length h is smaller than the characteristic length scale: radially, the criterion h/r ≪ 1 is achieved well, with h/r ∈ (0.005, 0.2); vertically, the resolution in the main body of the disc is about h/H ∈ (0.1, 0.25). Unless otherwise stated, we will present all the relevant quantities in the natural units of the simulation by setting G = M_0 = a_0 = 1. It follows that the initial circular velocity of the binary is V_c,0 = 1 and its initial period is P_0 = 2π. We will then discuss the astrophysical implications of our findings by scaling them to fiducial astrophysical BHB systems.
Unless otherwise stated, we will present all the relevant quantities in the natural units of the simulation by setting G = M0 = a0 = 1. It follows that also the initial circular velocity of the binary is Vc,0 = 1 and its initial period P0 = 2π. We will then discuss the astrophysical implications of In each panel, the red line represents the full evolution directly taken from the simulation whereas the black line is the equivalent evolution when considering the energy and angular momentum exchanges due to accreted particles only.
our findings by scaling them to fiducial astrophysical BHB systems.
BINARY EVOLUTION: CAUSES
For each of the four runs, we measure the two primary orbital elements: eccentricity e and semi-major axis a. Their evolution is shown by the red lines in Fig. 3. In all four runs, we find ȧ(t) < 0 and ė(t) > 0. While the eccentricity evolution is largely independent of the sink radius value and the adopted EoS within the cavity, the orbital shrinking is much faster in the adia runs, in which the binary shrinks by 4-5% over 90 orbits. Conversely, in the iso runs, the two BHs get only about 1% closer. Taking a linear extrapolation of such results at face value implies binary coalescence on a timescale of ∼2000 P_0 and ∼10^4 P_0 for the adia and iso runs, respectively. Assuming a constant eccentricity growth rate, the limiting value predicted by Roedig et al. (2011) would instead be reached in less than ∼1000 P_0.
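For reference, a and e can be recovered at each snapshot from the stored BH positions and velocities with the standard two-body relations; a minimal sketch (function name and argument layout are illustrative):

```python
import numpy as np

def orbital_elements(r1, v1, m1, r2, v2, m2, G=1.0):
    """Semi-major axis and eccentricity from the binary state vector."""
    gm = G * (m1 + m2)
    r = np.asarray(r2) - np.asarray(r1)                 # relative separation
    v = np.asarray(v2) - np.asarray(v1)                 # relative velocity
    eps = 0.5 * np.dot(v, v) - gm / np.linalg.norm(r)   # specific orbital energy
    a = -gm / (2.0 * eps)                               # bound orbit: eps < 0
    h = np.cross(r, v)                                  # specific angular momentum
    e = np.sqrt(max(0.0, 1.0 + 2.0 * eps * np.dot(h, h) / gm**2))
    return a, e
```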
Gravitational torque and accretion contribution to the angular momentum budget
Being interested in the physical mechanisms driving the binary evolution, we first identify two distinct processes: (i) the gravitational torque exerted by the gas particles onto each individual BH, T_G (gravity torque); (ii) the accretion of instreaming particles crossing either BH sink radius, (dL/dt)_acc (accretion torque).
Conservation of the total angular momentum in the simulations implies

$$\dot{\mathbf{L}} = \mathbf{T}_G + \left(\frac{d\mathbf{L}}{dt}\right)_{\rm acc}, \qquad (1)$$

where L is the BHB orbital angular momentum vector. All vectors are computed with respect to a Cartesian reference frame centred on the BHB centre of mass (CoM); the binary initially lies in the x-y plane with angular momentum oriented along the positive z axis. In our SPH simulations, T_G can be computed at each snapshot by direct summation of the individual particle torques onto each BH, yielding

$$\mathbf{T}_G = \sum_{k=1}^{2} \mathbf{r}_k \times \sum_{j=1}^{N} \frac{G\, m_j M_k\, (\mathbf{r}_j - \mathbf{r}_k)}{|\mathbf{r}_j - \mathbf{r}_k|^3}, \qquad (2)$$

where j runs over all N gas particles, k identifies the two BHs, r are position vectors, and m and M denote particle and BH masses, respectively. On the other hand, (dL/dt)_acc can be computed by assuming instantaneous linear momentum conservation of each accreted particle, yielding ΔL = r_k × m_j v_j. Here r_k is the position vector of the accreting BH at the moment of swallowing the particle, and v_j is the particle velocity vector. Numerically, we can evaluate the binary angular momentum change ΔL at each snapshot by writing Eq. (1) in the form

$$\Delta \mathbf{L} = \mathbf{T}_G\, \Delta t + \sum_j \mathbf{r}_k \times m_j \mathbf{v}_j, \qquad (3)$$

where Δt is the interval between two subsequent snapshots, and the sum runs over all the accretion events occurring in this time lapse.
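A minimal sketch of how Eqs. (2) and (3) can be evaluated from a snapshot, assuming plain arrays of particle data (all array names are illustrative):

```python
import numpy as np

def gravity_torque(r_bh, m_bh, r_gas, m_gas, G=1.0):
    """Eq. (2): T_G = sum over BHs of r_k x (force of all gas on BH k)."""
    torque = np.zeros(3)
    for rk, Mk in zip(r_bh, m_bh):                        # k = 1, 2
        d = r_gas - rk                                    # (N, 3) separations
        w = G * Mk * m_gas / np.linalg.norm(d, axis=1)**3
        torque += np.cross(rk, (w[:, None] * d).sum(axis=0))
    return torque

def accretion_dL(r_k, m_j, v_j):
    """Eq. (3), second term: sum of r_k x m_j v_j over accretion events."""
    return np.cross(r_k, m_j[:, None] * v_j).sum(axis=0)
```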
We use Eq. (3) to assess the relative importance of gravitational torques and accretion in the evolution of the system. In Fig. 4, we plot the evolution of the x, y, z components of the angular momentum L as directly evaluated from the positions and velocities of the two BHs stored in the simulation outputs (red), together with the relative changes due to accretion (black dotted) and gravity torques (black dashed) according to Eq. (3). The solid black line is the sum of the latter two, which should overlap with the red line if our decomposition is sufficiently accurate and these are the only two physical processes determining the binary evolution. For comparison, in simulation units, the initial binary angular momentum is ≈0.18. The L_x and L_y components are almost constant, showing fluctuations at the 10^-4 level. The mismatch between the red and the solid black lines here arises because simulation outputs are in single precision, i.e. accurate to the fourth digit, which is the fluctuation level of the computed quantity. On the contrary, the L_z component changes at up to the 10^-2 absolute level (i.e., about 5% of the initial value), and in this case we see very good agreement between the two lines. Note that in three of the runs the overall angular momentum grows over time, even though a shrinks and e increases. This counter-intuitive result is due to the dominance of accretion in the angular momentum budget, and will be explained in § 3.2 below.
As a general rule, the accretion contribution to the binary L evolution (i.e., the accretion torque) is at least comparable to the effect of the gravitational torques. It is therefore also interesting to examine the accretion contribution to the evolution of the binary orbital elements. Having stored all the accretion events, this can be done separately by simply evolving the binary from a_0 and e_0 while adding one accreted particle after the other, imposing linear momentum conservation. This 'would-be' evolution of e(t) and a(t), accounting only for the accretion effect, is shown in black in all panels of Fig. 3. It is clear that e(t) is mainly driven by gravitational torques, with accretion playing a negligible role. This is in line with our previous findings (Roedig et al. 2011), where the evolution of the eccentricity was attributed mainly to the gravitational interaction between the binary and the overdensities excited by its quadrupolar potential in the inner rim of the circumbinary disc. Conversely, the a(t) plots highlight the importance of accretion for the semi-major axis evolution. In the iso05 case, the binary shrinking due to accretion is actually larger than the total one, meaning that the disc torques would force the binary to expand; an effect similar to that described by Lin & Papaloizou (2011) in the context of planetary migration in self-gravitating discs. Note that in the adia cases ȧ is dominated by the effect of the disc. This explains the match between such simulations and the Ivanov et al. (1999) analytical model based on angular momentum transport through the disc (Cuadra et al. 2009).
Dissection of the binary evolution into its components
So far, we have separated the relative contributions of gravitational torques and accretion to the binary angular momentum budget. We now investigate how the angular momentum change is distributed among the relevant binary quantities. As clearly shown in Fig. 4, L_x and L_y show only small fluctuations (and therefore remain small compared to L_z, since the binary initially orbits in the x-y plane). From here on, we will therefore concentrate on the dominant L_z component. The binary angular momentum is

$$L_z = \mu \sqrt{G M a (1 - e^2)}, \qquad (4)$$

where μ = M1 M2/M is the binary reduced mass. Eq. (4) can be differentiated to separate the contribution of each single relevant quantity to the angular momentum change:

$$\dot{L}_z = L_z \left( \frac{\dot{\mu}}{\mu} + \frac{\dot{M}}{2M} + \frac{\dot{a}}{2a} - \frac{e\dot{e}}{1-e^2} \right). \qquad (5)$$

By directly measuring a, e, M, μ from the simulation, we can evaluate each single term and decompose the L_z evolution accordingly. This is shown in Fig. 5 for all four runs. Note that the sum of the four contributions (solid black line in each panel) closely matches the L_z evolution directly measured from the simulation (red), validating our first-order expansion. It is worth noting that the increase of L_z is perfectly compatible with semi-major axis shrinkage and eccentricity growth. In fact, Eq. (5) can be inverted to obtain

$$\dot{a} = a \left( \frac{2 T_z}{L_z} - \frac{2\dot{\mu}}{\mu} - \frac{\dot{M}}{M} + \frac{2 e \dot{e}}{1-e^2} \right), \qquad (6)$$

where we used the fact that L̇_z ≡ T_z. Even if the total torque is positive, and the eccentricity grows, the binary can shrink because of the negative contributions of the M and μ terms. In particular, as we will see below, the higher accretion onto the secondary hole results in a large increase of μ (long-dashed line in Fig. 5), which is the main driver of the binary angular momentum growth. Eqs. (5) and (6) highlight the importance of accretion in determining the binary temporal evolution.
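As an illustration, the four terms of Eq. (5) can be evaluated from the measured time series with simple finite differences; a sketch assuming numpy arrays sampled at the snapshot times:

```python
import numpy as np

def lz_budget(t, mu, M, a, e, G=1.0):
    """Decompose dLz/dt into the mu, M, a and e terms of Eq. (5)."""
    Lz = mu * np.sqrt(G * M * a * (1.0 - e**2))
    d = lambda y: np.gradient(y, t)               # numerical time derivative
    terms = {
        "mu": Lz * d(mu) / mu,
        "M":  Lz * d(M) / (2.0 * M),
        "a":  Lz * d(a) / (2.0 * a),
        "e": -Lz * e * d(e) / (1.0 - e**2),
    }
    terms["sum"] = sum(terms.values())            # should track d(Lz)/dt
    return terms
```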
GRAVITATIONAL TORQUES
As pointed out in the previous Section, understanding the secular dynamics of BHBs in gaseous environments is closely tied to studying the interplay between long-range gravitational forces, hydrodynamics and accretion. In this Section, we focus on the torques coming from the self-gravitating gaseous disc, investigating the influence of regions at different distances from the BHB.
Time averaged torque profiles
Gravitational torques exerted by the disc onto the BHB can be directly calculated at each snapshot from Eq. (2). It is of great interest to understand where such torques originate and, given the cylindrical nature of the problem, it is natural to investigate their azimuthally averaged radial distribution. We take r to be the projected radial coordinate (in the x-y plane) from the binary CoM, and we define dT/dr to be the differential torque, integrated over the azimuthal coordinate and the disc height. The total average torque exerted by gas in the projected distance interval [a, b] from the binary CoM is

$$\langle T_{[a,b]} \rangle = \int_a^b \left\langle \frac{dT}{dr} \right\rangle dr, \qquad (7)$$

where ⟨·⟩ denotes temporal averaging over the entire simulation. In Fig. 6, we show the time-averaged differential torque acting on the binary, ⟨dT/dr⟩ (blue), together with its integral according to Eq. (7) (black). We also decompose the former into the two components acting on the primary (green) and the secondary (red) BH. All torques are null at the binary corotation radius; we therefore show the total torque by integrating from this point inwards (T_[1,0], binary region) and outwards (T_[1,∞], disc region).

In all simulations, the local average torque shows an oscillatory behaviour, with a sharp maximum at the location of the secondary BH, r ∼ 0.75, and a deep minimum in the cavity region at 1 < r < 2. In the body of the disc (r > 2), positive and negative peaks alternate, but they are shallow and almost cancel out, giving a negligible contribution to the total torque (as witnessed by the fact that the integrated torque is basically constant for r > 2).
In all simulations, the local average torque shows an oscillatory behaviour with a sharp maximum at the location of the secondary BH, r ∼ 0.75, and a deep minimum in the cavity region at 1 < r < 2. In the body of the disc (r > 2) positive and negative peaks alternate, but they are shallow and almost cancel out, giving a negligible contribu- Figure 6. Differential torques dT /dr in units of [GM 2 0 a −2 0 ] averaged over the entire simulations. In each panel we show the differential torque on the primary (green), on the secondary (red), the sum of the two (blue), and the total integrated torque (black) according to Eq. 7. This latter is integrated starting from a inwards and outwards. Notably, the inward torque is positive, whereas the outward torque is negative.
Note that the torque on the secondary BH is always larger than the torque on the primary, due to its proximity to the outer disc, which results in a stronger interaction. The general behaviour is qualitatively the same in the adia and iso runs. In the latter case, however, we find a much sharper negative peak at r ≈ 1.7a, followed by a smaller secondary negative bump at r ≈ 1.2a. This appears to be an artefact of the sudden change in the EoS of the gas inside the cavity at r = 1.75a. The net result is a smaller negative T_[1,∞], which has important consequences for the binary evolution. We see in fact that in the iso runs T_[1,0] and T_[1,∞] almost cancel out, meaning that, overall, gravitational torques do not change the binary angular momentum. Conversely, in the adia runs T_[1,0] is much smaller (in absolute value) than T_[1,∞], implying an efficient angular momentum transfer from the binary to the gas (MM08; Cuadra et al. 2009).
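In practice, the profile ⟨dT/dr⟩ can be built by binning the per-particle contributions to the z-torque in projected radius and averaging over snapshots; a minimal sketch for one snapshot:

```python
import numpy as np

def torque_profile(r_proj, tz_particle, bins):
    """Differential torque dT/dr from per-particle z-torques on the binary."""
    dT, edges = np.histogram(r_proj, bins=bins, weights=tz_particle)
    dr = np.diff(edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return dT / dr, centres, dr

# Integrated torque outside corotation (cf. T_[1,inf]):
# dTdr, r, dr = torque_profile(r_proj, tz, bins=np.linspace(0.0, 10.0, 200))
# T_out = np.sum((dTdr * dr)[r > 1.0])
```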
Spectral analysis
We now consider the torque evolution in time. We separately discuss torques coming from different disc regions by showing both the time series and their associated power spectra. We cut the spatial domain into the following radial annuli:

(i) 0 < r < 10a: the entire domain;
(ii) 0 < r < a: the 'binary region';
(iii) a < r < 1.8a: the 'cavity region';
(iv) 1.8a < r < 2.5a: the 'rim region';
(v) 2.5a < r < 10a: the 'disc region'.

The associated time series and power spectra are shown in the left and right panels of Fig. 7, respectively.
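The spectra are obtained by Fourier-transforming the (mean-subtracted) torque time series from each annulus; a minimal sketch, with frequencies expressed in units of the initial binary frequency P_0^{-1}:

```python
import numpy as np

P0 = 2.0 * np.pi                                  # initial binary period, code units

def torque_spectrum(t, torque):
    """One-sided power spectrum of an evenly sampled torque time series."""
    dt = t[1] - t[0]                              # ~P0/6 for our output cadence
    power = np.abs(np.fft.rfft(torque - np.mean(torque)))**2
    freq = np.fft.rfftfreq(len(torque), d=dt) * P0   # in units of 1/P0
    return freq, power
```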
In all the simulations, the overall torque (i) shows a clear periodic oscillation, much larger in amplitude than its average value. The power spectrum unveils several distinctive peaks, with relative amplitudes that can vary significantly between simulations. In particular, the peaks in the iso simulations are much sharper and better defined. This is because in the adia simulations the BHB shrinks significantly, resulting in a broadening of the characteristic frequencies. Moreover, as we shall see in the next section, the disc sub-structures are much better defined in the iso runs, giving rise to neater features.
In the binary region (ii) the torque is mostly coherent and positive. Because the torque strength in (ii) is regulated by the mass inflow, it shows periodicities that are related to the accretion flow: a disc component around f ≈ 0.25 P_0^{-1} (corresponding to the disc peak density at r ≈ 2.5a), the forcing frequency of the binary at f = P_0^{-1}, and the beat between these two at f ≈ 0.75 P_0^{-1} (see Section 5 and Roedig et al. 2011, § 5.1). In the cavity region (iii) the torque is negative on average but strongly oscillating. Several periodicities are detectable, the most striking being a peak at f ≈ P_0^{-1}, which appears again to be directly related to the binary period. In the rim region (iv) the torque is highly oscillating, and the strongest feature is a sharp peak at f ≈ 1.3 P_0^{-1}, whereas in the disc region (v) the only significant spectral component is at f ≈ 1.7 P_0^{-1}. As a general trend, moving from the inner region to the disc body, the torques become incoherent (i.e. they average to zero) and strongly oscillating (compare the power spectrum scales in the different panels of each plot).
Interpretation: torque origin and location in the disc
In this Section, we provide a global interpretation of the features observed both in the radial distribution of the time-averaged torques and in their temporal evolution. The arguments discussed below are supported by Figs. 8, 9 and 10 and by Tab. 2.
Origin of the positive and negative peaks
It seems natural to compare the torque radial profiles obtained in our simulations with linear perturbation theory, in which torque minima and maxima are connected to outer Lindblad resonances (OLRs). It is in fact tempting to associate the torque minimum at r ≈ 1.6a with the 2:1 OLR. We should however be careful not to push this interpretation too far since, as already pointed out by MM08 and S11, the assumptions of linear theory are not satisfied in this context. Most importantly, looking at the upper panels of Figs. 8 and 9, we notice that the region r < 2a is almost devoid of gas, and the streams are almost radial. The mass fluxes reported in Tab. 2 clearly show that, at any radius, there are always fluxes of ingoing and outgoing mass, resulting in a steady net inward flux consistent with the accretion rate onto the two BHs. A strict OLR interpretation of the torque minimum at r ≈ 1.6a would instead require particles in circular orbit at that radius, experiencing a secular effect due to the phase-coherent periodic forcing of the binary; this is not what happens within the cavity region.
to the phase-coherent periodic forcing of the binary; this is not what happens within the cavity region. The strongest 2:1 OLR is certainly responsible for the evacuation of the gas close to the binary and for the formation and maintenance of a cavity, however cannot be directly responsible for the coherent torque seen in the cavity region. This is also supported by the fact that MM08 and S11 find a minimum at the same location (r ≈ 1.6 − 1.7a) for an equal mass binary, where the 2:1 resonance is absent (because of the symmetry of the forcing potential), and the location of the strongest OLR (3:2) would be at r ≈ 1.3a. The strong neg-ative torque in the cavity region has a purely kinetic origin: material ripped off the disc edge forms well defined streams following the two BHs, which are clearly distinguishable in both the surface density plots shown in Fig. 8 and 9. The streams are responsible for the yellow tails following the two BHs at ∼ 1.5a in the torque density panels, which lead to a net negative torque. Conversely, at r ∼ > 2a, we have a well defined, almost circular disc, and the torque density peaks at r ≈ 2a and r ≈ 2.5a can be identified with the loci of the strong 3:1 and 4:1 OLRs (Artymowicz & Lubow 1994).
Our simulations also allow us to investigate the torques within the binary corotation radius at r < a, a region often excised in grid-based simulations (see MM08 and S11). Here we find strong positive torques on both BHs. This is because the infalling material approaches the BHs at super-Keplerian velocities, and bends in a horseshoe fashion, exerting a net positive torque in front of them. In fact, the maximum positive torque basically coincides with the location of the two BHs (sharp peak at r ≈ 0.75a for the secondary and a broader peak around r ≈ 0.3a for the primary, see Fig. 6). The very same effect, in the context of planetary migration, was discussed by Lin & Papaloizou (2011). 7 Note that the positive torque is related to this 'stream bending', and not directly to the small discs of gas orbiting each BH; its magnitude is in fact similar in the iso and in the adia simulations, even though in the former, the mass in the minidiscs is significantly larger as shown by the time and azimuthally averaged surface density profiles in the upper panel of Fig. 10.
Disc structure and characteristic torque frequencies
The surface density profiles in Figs. 8 and 9 highlight some clear differences between the iso and the adia runs: (i) the appearance and shape of the minidiscs; (ii) the amount of gas feeding the streams; (iii) the definition of the inner disc edge. In the iso runs, we find a large amount of gas forming minidiscs around both BHs and an almost empty cavity, except for two tenuous streams connecting the outer edge to the minidiscs. The edge of the disc is quite circular and well defined. Conversely, the gas inside the cavity being hotter in the adia runs, the streaming activity is more violent, with thick spirals that do not settle into well defined minidiscs but rather form a diluted, ∞-shaped cloud around the binary. The edge of the disc is strongly disturbed by the ripped-out streams and is usually not circular. The different streaming activity is also confirmed by the numbers in Tab. 2, where we collect the average inward and outward mass fluxes at r = 1.5a and r = a. Mass fluxes are generally larger in the adia case. Note that this does not necessarily result in larger accretion, as the outgoing fluxes are also larger in this case. Most of the material in the streams does not end up in an accretion flow, but is accelerated back to the disc (a sort of 'slingshot'), impacting on the inner edge; an effect already seen and extensively described by MM08 and S11. The amount of impacting gas is 50% larger in the adia case, possibly contributing to the destabilization of the inner edge and to the larger perturbation of the disc structure. The appearance of spirals in the disc is related to the behaviour of the Toomre Q parameter, shown in the bottom panel of Fig. 10, which quantifies the degree of self-gravity of a disc. Not surprisingly, the disc develops a spiral pattern in the region 3a ≲ r ≲ 5a where, in fact, we find that the Toomre parameter reaches its minimum value, Q ≈ 1.5 (Cuadra et al. 2009). The shape of the spiral arms varies much in time, and their definition differs from simulation to simulation. However, at any time we find spiral structures with 2, 3 or 4 arms, which remain confined between 3a < r < 5a.

Table 2. Accretion rates onto the binary and mass fluxes F, in units of M/P_0 × 10^-5, at two selected distances from the binary CoM: r_1 = a and r_2 = 1.5a. Ṁ_1 and Ṁ_2 are the average accretion rates onto M_1 and M_2 respectively, while Ṁ is the sum of the two. At both selected distances, F_in and F_out represent the ingoing and outgoing fluxes, and F_net = F_in − F_out is the net ingoing flux. All quantities have been averaged over 90 binary orbits.
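The flux bookkeeping of Tab. 2 can be sketched by splitting the particles in a thin cylindrical shell according to the sign of their radial velocity (the shell width of 0.05a below is an arbitrary choice):

```python
import numpy as np

def mass_flux(pos, vel, mass, r_shell, width=0.05):
    """Ingoing/outgoing mass fluxes through a cylindrical shell at r_shell."""
    rho = np.linalg.norm(pos[:, :2], axis=1)              # projected radius
    vr = np.einsum("ij,ij->i", vel[:, :2], pos[:, :2]) / rho
    sel = np.abs(rho - r_shell) < 0.5 * width
    flux = mass[sel] * vr[sel] / width                    # per-particle m*vr/dr
    f_in = -flux[flux < 0.0].sum()                        # inward: vr < 0
    f_out = flux[flux > 0.0].sum()
    return f_in, f_out, f_in - f_out                      # F_in, F_out, F_net
```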
The disc structures highlighted above are reflected in the torques depicted in the lower panels of Figs. 8 and 9. The total torque shows a typical quadrupolar structure, which leads to rapid oscillations averaging to zero in the disc body. Conversely, as already discussed, in the inner cavity we can appreciate the yellow tails following the two BHs at ∼1.5a, leading to a net negative torque.
The main peaks observed in the torque power spectra must therefore arise from the structures we have just described: in particular, the overdensities at r ≈ 2a in the inner rim of the disc, and the spiral structures at r ≳ 3a in the main body of the disc. The main torque frequencies must come from the interaction between the forcing binary quadrupolar potential and such overdensities propagating in the disc. To test this hypothesis, we mimic the situation by placing test particles in circular orbits at r = 2a and r = 3.5a around a circular binary. We compute the torques and show them as blue lines in Fig. 11.

Figure 11. Simple test model for the periodicity generated in the outer disc, i.e. at r > a. In the top panel we show the temporal evolution of the torque, whereas in the bottom panel we show the Fourier transform. In each panel, the blue curve is obtained by placing three point masses at distances 1.6a, 2.1a, 3.5a from a circular binary, and the red curve is the torque found in the iso10 simulation.
The natural frequency between a quadrupolar potential oscillating with frequency f1 and an overdensity orbiting at frequency f2 is the beat frequency 2(f1 − f2), and this is what we see in the Fourier spectrum. The beats between the binary and the particles at r = 2a and r = 3.5a give rise to the sharp peaks seen at f ≈ 1.35 P_0^{-1} and f ≈ 1.7 P_0^{-1}, respectively. The fact that such peaks are much sharper in the iso simulations stems from two facts: (i) the binary does not significantly shrink in the iso runs, limiting the broadening of the spectral features, and (ii) the disc features (disc edge and spiral arms) are much better defined in this case, so the torques are well localized. This can also be seen by comparing the much neater torque structure in the bottom panel of Fig. 8 with the blurry one of Fig. 9. The f ≈ P_0^{-1} peak of the blue line in Fig. 11 is obtained by placing a third particle at r = 1.6a, a separation corresponding to the maximum negative torque inside the cavity. However, Fig. 7 shows that the peak at f ≈ P_0^{-1} is stronger at r > 1.8a than in the a < r < 1.8a region, meaning that it must be mostly directly related to the periodicity at which the binary rips gas off the disc edge (i.e. the binary period), and not to a specific radius in the cavity. Finally, as already noticed, torques in the binary region are related to the mass fuelling the two BHs, and therefore show periodicities related to the binary (f = P_0^{-1}), to the inner rim of the disc (f ≈ 0.25 P_0^{-1}), and to the beat between the two (f ≈ 0.75 P_0^{-1}). Note that the latter two are much more evident in the adia runs, where the disc rim overdensities feeding the accretion streams are more pronounced.
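This beat is easy to reproduce: the toy below evaluates the z-torque of a single test mass on a circular orbit around a circular, equal-mass binary and Fourier-transforms it; for r = 2a the spectrum peaks near 2(1 − 2^{-3/2}) ≈ 1.29 P_0^{-1}, in line with the rim feature discussed above (all quantities in code units).

```python
import numpy as np

G = M = a = 1.0                                   # code units; Omega_bin = 1

def torque_z(t, r_p, m_p=1e-3):
    """z-torque on an equal-mass circular binary from one circular test mass."""
    th_b = t                                      # binary phase (Omega = 1)
    th_p = r_p**-1.5 * t                          # Kepler phase of the particle
    bh = 0.5 * a * np.array([np.cos(th_b), np.sin(th_b)])
    part = r_p * np.array([np.cos(th_p), np.sin(th_p)])
    tz = 0.0
    for rk in (bh, -bh):                          # the two BHs, masses M/2
        d = part - rk
        f = G * (0.5 * M) * m_p * d / np.linalg.norm(d)**3
        tz += rk[0] * f[1] - rk[1] * f[0]         # z-component of r_k x F
    return tz

t = np.linspace(0.0, 50 * 2 * np.pi, 4096)        # 50 binary orbits
tz = np.array([torque_z(ti, r_p=2.0) for ti in t])
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2 * np.pi   # in units of 1/P0
# np.abs(np.fft.rfft(tz - tz.mean())) peaks near 2 * (1 - 2**-1.5) ~ 1.29
```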
ACCRETION
In Tab. 2 we also show the average mass accretion rates. We recall that a particle is considered accreted when it crosses the sink radius of one of the BHs. Both its mass and its linear momentum are then added to the BH, ensuring the global conservation of both quantities. The mass accretion rate is almost independent of the sink radius in the iso simulations, where the instreaming gas settles into well defined accretion discs, progressively losing angular momentum. Conversely, in the adia case, the accretion rate scales almost linearly with the sink radius, suggesting a direct relation to the BH cross section (defined by the sink radius itself).
It is particularly interesting to compare the accretion rate to the mass flow rates. Firstly, we notice that in all simulations the accretion rate and the mass flow rates at r = a and r = 1.5a are nearly the same, meaning that the system is in a steady-state configuration. Interestingly, the mass accretion rate is ∼25% lower than the inflow rate at r = a. This means that not all the material crossing r = a is accreted, an assumption commonly adopted in grid simulations where the r < a region is excised. Part of the gas suffers a slingshot, impacting back onto the disc and further affecting the disc-binary angular momentum transfer.
In the top panel of Fig. 12 we show the temporal evolution of the accretion rate onto the primary hole, Ṁ_1, onto the secondary hole, Ṁ_2, and the sum of the two, Ṁ. As already found in Cuadra et al. (2009) and Roedig et al. (2011), Ṁ_2 is a factor ∼2 larger than Ṁ_1, due to the closer interaction between the secondary BH and the disc. The relative mass growth δM/M is therefore about six times faster for the secondary BH, implying that the binary mass ratio will tend toward q ≈ 1 if this condition persists over the entire evolution of the system. Accretion rates are strongly modulated in time; notably, Ṁ_2 shows a striking periodicity related to the binary orbital period, whereas Ṁ_1 is dominated by longer-timescale fluctuations related to the overdensities developing at the inner rim of the circumbinary disc.
The Fourier transform of Ṁ is given in the lower panel of Fig. 12. We observe the usual three distinctive peaks seen in the torques coming from the binary region, as shown by the central panel of the figure. It is clear that the accretion and inner-torque periodicities are intimately connected, both reflecting the temporal evolution of the mass inflow in the binary region (ii).

Figure 12. Accretion onto the two BHs. Top box: accretion rate as a function of time. The red line is the total accretion rate, while the blue and black lines are the accretion rates onto the secondary and the primary, respectively. Lower box: Fourier transform of the total accretion rate (lower panel) and of the torques (upper panel). In the upper panel, the green line is the total torque (highest peak normalized to 1) and the red line is the torque exerted by the material at r < a (multiplied by ten).
Finally, we can check the orientation of the angular momentum vector of the accreted material in the reference frame of the accreting BH. This quantifies the level of 'coherence' of the accretion flow, and has important consequences for the individual BH spin evolution and for the magnitude of the gravitational recoil at coalescence (Bogdanović et al. 2007; Dotti et al. 2010; Kesden et al. 2010; Volonteri et al. 2010; Lousto & Zlochower 2011; Lousto et al. 2012). At each snapshot we therefore compute the average L_z and L of the accreted particles in the accreting BH reference frame, and the angle θ = arccos(L_z/L) defining the degree of misalignment with respect to the z axis. Our results are similar to what was already found by Dotti et al. (2009), although in that study the spin evolution was studied only before the gas scouring. More specifically, an isothermal EoS leads to extremely coherent accretion flows, with fluctuations in the angular momentum direction of the minidiscs of a few degrees. In the adiabatic case, the orientation of the minidisc orbiting the secondary changes by at most ≈20 degrees, with an average excursion of 7 degrees, 50% larger than the oscillations in the primary disc. Assuming that the BH spins have efficiently aligned with the circumbinary disc before the opening of the cavity, this implies that the efficient spin-up of the two BHs will continue following cavity opening.
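A minimal sketch of this coherence measure, assuming the accreted particles' positions and velocities are already expressed in the frame of the accreting BH:

```python
import numpy as np

def misalignment_deg(r_rel, v_rel, m):
    """Angle between the net angular momentum of accreted gas and the z axis."""
    L = (m[:, None] * np.cross(r_rel, v_rel)).sum(axis=0)
    return np.degrees(np.arccos(L[2] / np.linalg.norm(L)))
```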
COMPARISON WITH PREVIOUS WORKS
The binary evolution and torque structure have been previously investigated by MM08 and S11, as described in the introduction. In particular, we can compare results regarding the average gravitational torques and the mass accretion rates. While both can be expressed in several ways, we find it practical to normalize these quantities to the disc mass, M_d, the initial binary period, P_0, and the initial binary angular momentum magnitude, L_0. In these units we compare the dimensionless average gravitational torque, Γ_1 = (⟨T_[1,∞]⟩ P_0/L_0)(M_0/M_d), and the dimensionless accretion rate, Γ_2 = Ṁ P_0/M_d. Here M_d = π Σ_p r_p^2 is the disc mass computed using the peak surface density Σ_p (note that we have a physical disc mass in our simulations, but we use this quantity to ease the comparison with MM08 and S11). In both MM08 and S11, r_p ≈ 3a, close to what we find in our discs (see Fig. 10). Before proceeding with this comparison we should mention several issues that have to be considered. Firstly, L_0 = (M_0/4)(G M_0 a_0)^{1/2} for the circular equal-mass binary simulated by MM08 and S11, whereas in our simulations with q = 1/3 we have L_0 = (3M_0/16)(G M_0 a_0)^{1/2} (assuming a circular binary). Secondly, the concept of 'accretion rate' is somewhat ill defined in grid-based simulations with an excised central region. The inner boundary of the calculation is at r = a for MM08 and r = 0.8a for S11, and in both cases whatever crosses that boundary is considered accreted. As already discussed (see Tab. 2), ≈25% of the mass crossing r = a is accelerated back to the disc edge, extracting angular momentum from the binary (an effect already seen and described extensively by MM08 and S11 for the material inflowing into the cavity but not reaching the excised region). We can therefore consider MM08 and S11 to overestimate the accretion rate by a factor of 25%. Lastly, MM08 and S11 can compute gravitational torques only for r > a, outside the binary corotation radius (S11 already pointed out the change of sign of the torques at r = a). As shown by both studies (and independently recovered by us), the gravitational torques exerted by the disc elements on the binary are negative in this region.
For the sake of the comparison we use the iso05 and the adia05 runs. In the torque computation, we find Γ_1 = −3.5 × 10^{−3} for the former and Γ_1 = −5.3 × 10^{−3} for the latter. This can be compared to −1.26 × 10^{−3} of MM08 and −1.8 × 10^{−2} of S11. Our gravity torques are a factor 3-4 larger than MM08 and a factor 4-5 smaller than S11. Concerning the accretion, we get Γ_2 = 1.1 × 10^{−3} for the iso05 run and Γ_2 = 6 × 10^{−4} for the adia05 run, to be compared to Γ_2 = 1.2 × 10^{−4} of MM08 and Γ_2 = 6 × 10^{−3} for S11. Our accretion rates are a factor 5-10 larger than MM08 and a factor 5-10 smaller than S11. The similar scaling between T_{[1,∞]} and Ṁ is not surprising since, as discussed in Section 4, the former is directly linked to the streams fuelling the BHs.
DISCUSSION AND CONCLUSION
The evolution of BHBs in circumbinary discs remains largely an open issue. Here we carried out a detailed study of the torques acting on the BHB, including the effect of the outer disc, of the mass flowing into the cavity and, for the first time, of the gas entering the BHB region and eventually accreting onto the two BHs. We paid particular attention to investigating the individual contributions of different disc regions, and we aimed at separating the effect of accretion from the effect of gravitational torques. The emerging picture is far more complex than the often-adopted simple assumption that resonant torques arising at the disc inner edge transfer angular momentum from the BHB to the disc, causing the orbital decay (e.g. Papaloizou & Pringle 1977; Goldreich & Tremaine 1980; Ivanov et al. 1999; Armitage & Natarajan 2002; Haiman et al. 2009).
Firstly, as already pointed out by a number of authors (e.g. MM08, S11), the presence of a BHB clearing a cavity in the disc does not prevent gas inflows and eventual accretion onto the two BHs. Such inflowing gas has a non-negligible role in defining the mutual BHB-disc evolution; it forms streams that follow the BHs, eventually exerting a net negative torque in the cavity region. The inflows catch up with the BHs at a super-Keplerian speed and bend in front of them, thus exerting a net positive torque inside the binary corotation radius (i.e. at r < a). We therefore conclude that the origin of the dominant gravitational torques on the binary is purely kinematic, and can be understood without directly invoking resonances of the forcing binary potential with the disc.
The strong positive gravitational torque is related to the gas inflows only, and not to the formation of minidiscs around the two BHs nor to the eventual accretion. This becomes clear by inspecting Figs. 6 and 10 and Tab. 2: although there is up to a factor of ten difference in the mass bound to the BHs in minidiscs, and a factor of three difference in the accretion rate among the simulations, the positive torque is almost the same. This is because the gravitational torque is only related to the inflowing mass bending in front of the BHs, which is of the same order in all simulations. Note that different thermodynamics lead to a different negative/positive torque balance. Although this might be an artefact of the sudden change of EoS in the iso simulations, such a balance has to be better understood in order to assess the fate of the binary.
Torques exerted by gas in the main body of the disc (r > 2a) are instead negligible for the long-term evolution of the system, since they average to zero (Fig. 6). Local torque maxima can be associated with the 3:1 and 4:1 OLRs (Artymowicz & Lubow 1994). Given the nature of the forcing potential, a quadrupolar torque pattern naturally develops (Figs. 8 and 9), causing a time-oscillating torque (Fig. 7) with characteristic frequencies related to the beat between twice the binary frequency and the characteristic orbital frequencies of inhomogeneities developing in the disc in the form of inner-edge overdensities and spiral arms (Fig. 11).
Accretion plays an important role in the binary angular momentum budget. The binary can actually gain angular momentum even if it shrinks and its eccentricity increases. This is because accretion deposits a significant amount of angular momentum in the system by increasing its mass and mass ratio (see Eq. (5)). The peculiar case of iso05 is quite interesting: here the total gravitational torque exerted by the gas is almost null (Fig. 6). However, the mutual BHB-disc torques are responsible for the eccentricity growth, meaning that, if angular momentum were transferred by gravitational torques only, the BHB would be forced to expand. This is what we see in the upper-right panel of Fig. 3, where we show that the accretion-driven shrinking is faster than the actual evolution found in the simulation: gravitational torques alone would force the BHB to expand, and accretion is responsible for the shrinkage. Although whether this happens in Nature is questionable, it highlights the complexity of the BHB-disc interplay.
We should, nonetheless, be careful in drawing any conclusion about accretion from our simulations. If we scale our system to the fiducial binary of Roedig et al. (2011) and set M_1 = 2.6 × 10^6 M_⊙ and a_0 = 0.039 pc, then the accretion rate we find corresponds to about 20-40 Ṁ_Edd for the secondary BH and 4-7 Ṁ_Edd for the primary BH. In such a situation, it is likely that most of the gas which is numerically accreted in our simulations would be expelled through outflows and winds. In this case, if the gas binds to the BHs before being expelled, it transfers its linear (and angular) momentum to the holes even if it is not accreted (Nixon et al. 2011). However, its mass does not add up to the binary, making the question of angular momentum transfer more delicate and worthy of deeper and more refined investigations.
The relatively low computational cost of our simulations allowed us to investigate different EoSs and sink radii r_sink. The numerical size of r_sink has a minor impact on the general behaviour of the system, affecting the mass bound to the BHs in minidiscs and the accretion rate in the adia case. Conversely, the adopted EoS has a major impact on the results. In the iso runs, tenuous streams leak from an almost circular disc edge and fuel two well-defined minidiscs, whereas streaming activity is more violent in the adia case, with thick spirals leaking from a deeply distorted disc edge, forming a diluted, ∞-shaped cloud around the binary. This difference in streaming activity affects both the overall structure of the outer disc and the long-term binary evolution (as mentioned above). Although the extrapolation of the results of all our four simulations would imply a fast BHB coalescence (in less than 10^7 yr, for the fiducial binary parameters considered above), the almost perfect cancellation of the net gravitational torque in the iso simulations suggests that more realistic models for gas thermodynamics will be necessary to make clear-cut statements on this issue. | 2012-08-14T16:43:44.000Z | 2012-02-27T00:00:00.000 | {
"year": 2012,
"sha1": "8978f8670eb26003f0e30018b2a733a8fdcbb195",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2012/09/aa19986-12.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8978f8670eb26003f0e30018b2a733a8fdcbb195",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
88522442 | pes2o/s2orc | v3-fos-license | Verification and Design of Resilient Closed-Loop Structured System
This paper addresses the resilience of large-scale closed-loop structured systems, in the sense of arbitrary pole placement, when subject to failure of feedback links. Given a structured system with input, output, and feedback matrices, we first aim to verify whether the closed-loop structured system is resilient to simultaneous failure of any subset of feedback links of a specified cardinality. Subsequently, we address the associated design problem, in which, given a structured system with input and output matrices, we need to design a sparsest feedback matrix that ensures the resilience of the resulting closed-loop structured system to simultaneous failure of any subset of feedback links of a specified cardinality. We first prove that the verification problem is NP-complete even for irreducible systems and that the design problem is NP-hard even for so-called structurally cyclic systems. We also show that the design problem is inapproximable to factor (1 − o(1)) log n, where n denotes the system dimension. Then we propose algorithms to solve both problems: a pseudo-polynomial algorithm to address the verification problem for irreducible systems, and a polynomial-time O(log n)-optimal approximation algorithm to solve the design problem for a special feedback structure, the so-called back-edge feedback structure.
INTRODUCTION
Complex networks and cyber-physical systems have applications in a wide variety of areas, including multi-agent networks, power networks, social communication networks, biological networks, and distribution networks [1]. Most of these networks are well represented as linear time-invariant (LTI) dynamical systems. In LTI systems with output feedback, the feedback matrix decides which output is fed back to which input and what control actions are taken. Feedback selection for decentralized control of LTI systems is a fundamental problem in control theory. Feedback selection aims at designing a feedback matrix such that the closed-loop system satisfies the arbitrary pole placement property and thus guarantees any desired closed-loop performance.
Complex networks often consist of interconnected components with spatially distributed actuators and sensors. Establishing feedback connections among spatially distributed actuators and sensors that are resilient to failures and attacks is difficult. In many complex networks, including power networks and distribution networks, some of the feedback links become dysfunctional over time due to the vulnerability of the actuation, sensing, and feedback mechanisms. Additionally, there are often targeted disruptive attacks by adversaries which tamper with the structure of the network. Since the structure of the network is endogenous in nature, these changes affect the properties of the network, and the properties affect the system's performance. In order to guarantee any desired performance of the closed-loop system, it is essential that the feedback matrix is robust/resilient to disruptive scenarios such as natural failures or attacks by skilled, intelligent, adversarial agents [2], [3].
Moreover, real-world networks have large system dimensions and complex graph patterns, and hence most of the entries of the system matrices are not known precisely. Structural analysis is a framework used to analyze the properties of LTI systems when only the sparsity patterns of the system matrices are known [4]. Structural analysis performs control-theoretic analysis of systems using the sparsity pattern, i.e., the zero/non-zero pattern, of the system matrices. The strength of structural analysis is that most structural properties, like structural controllability, structural observability, and pole placement, are 'generic' in nature [4], [5]. Hence, if the sparsity pattern of a system satisfies these properties, then 'almost all' systems with the same sparsity pattern satisfy the analogous control-theoretic properties. There are graph-theoretic conditions to verify the control-theoretic properties of a structured system. However, there are no known criteria to verify the resilience of a system or to design resilient systems.
Often, cyber-physical systems like power networks and water distribution networks undergo failure of interaction links in the system matrices due to aging and/or attacks by intelligent adversaries that tamper with the structure of the system. Verification and design of a resilient feedback matrix are critical to guarantee the desired operating condition of the system during such adversarial situations. Developing computationally efficient algorithms to verify and design resilience of complex cyber-physical systems is the key focus of this paper.
In this paper, we consider the resilience of the closed-loop structured system towards maintaining the arbitrary pole placement property and focus on two problems. Given a structured system with state, input, output, and feedback matrices, we first aim to verify whether the closed-loop system is resilient to simultaneous failure of any γ feedback links. The set of feedback links that undergo failure can be any arbitrary set of cardinality at most γ, since in real-world systems the connections that undergo attacks or failures are unknown a priori. At present, there is no computationally efficient algorithm to verify resilience of a closed-loop structured system when any subset of feedback links with cardinality bounded by a specified number can fail. The exhaustive search-based algorithm requires verifying the arbitrary pole placement property for failure of all possible combinations of feedback links of cardinality γ or less, which is an exponential number of cases. Then, we address the associated design problem, in which we need to design a sparsest feedback matrix that ensures the resilience of the closed-loop structured system to simultaneous failure of any subset of feedback links of cardinality at most γ. The key contributions of this paper are as follows:
• We prove that, given structured state, input, output, and feedback matrices, verifying resilience of the closed-loop structured system towards maintaining the arbitrary pole placement property subject to failure of any subset of feedback links whose cardinality is at most γ is NP-complete (Theorem 4.4). We prove that even for irreducible systems, verifying resilience of the closed-loop structured system subject to failure of any subset of feedback links whose cardinality is at most γ is NP-complete (Corollary 4.5).
• We prove that, given structured state, input, and output matrices, designing a sparsest feedback matrix such that the resulting closed-loop system is resilient to failure of any subset of feedback links of cardinality at most γ is NP-hard (Theorem 4.11). We also show that the design problem is inapproximable to factor (1 − o(1)) log n, where n denotes the system dimension (Theorem 4.12). We show that the NP-hardness and the inapproximability results of the design problem hold even for a widely practical subclass of systems, known as structurally cyclic systems, the class of systems in which all state nodes are spanned by a disjoint set of cycles.
• We provide a polynomial-time approximation algorithm with approximation factor O(log n) for the sparsest resilient feedback design problem (Theorem 5.4) for structurally cyclic systems with a special feedback structure, the so-called back-edge feedback structure. We show that the design problem is NP-hard and inapproximable to factor (1 − o(1)) log n for this class of systems, and hence the algorithm is an order-optimal polynomial-time approximation algorithm.
• We present polynomial-time algorithms to verify the resilience of a feedback matrix for γ = 1 and γ = 2 (Algorithms 6.1 and 6.2), and prove the correctness and complexity of these algorithms (Theorems 6.1 and 6.2). We then extend these algorithms for one-edge (γ = 1) and two-edge (γ = 2) failures to the general case, prove its correctness, and show that the complexity is pseudo-polynomial with factor γ (Theorem 6.3). Our algorithm performs much better computationally than an exhaustive search-based algorithm and is computationally efficient for small values of γ.
The organization of this paper is as follows: Section 2 presents the formal description of the feedback resilience verification problem and the sparsest resilient feedback design problem. Section 3 discusses notation, a few preliminaries, and some existing results used in the sequel. Section 4 analyzes the complexity of both problems, proving NP-completeness of the feedback resilience verification problem and NP-hardness of the sparsest resilient feedback design problem. Section 5 presents an approximation algorithm for solving the sparsest resilient feedback design problem for structurally cyclic systems with a special feedback structure. Section 6 presents a pseudo-polynomial algorithm for solving the feedback resilience verification problem for irreducible systems. Section 7 gives the final concluding remarks.
A. Related Work
Resilience or robustness of complex networks subject to structural perturbations has been of interest for a long time [6]. For instance, the robustness of structured systems towards maintaining structural controllability is addressed in [7] by characterizing the roles of nodes and links of the network. Classification of sensors based on their importance in the network for structural observability under sensor failures is done in [8]. Papers [9], [10] define indices for measuring the level of resilience of the network towards maintaining structural controllability. Robustness of a power grid towards maintaining structural controllability under γ link failures is addressed in [11].
Resilience of networks is addressed in [12] by studying the various kinds of attacks and monitoring issues that can possibly lead to malfunctioning of the network, as well as attack detection mechanisms. Paper [13] addressed the problem of placing a minimal set of additional sensors, and among such sets one of minimal cost, for structural observability. The complexity of the robust minimal controllability problem, where the goal is to determine a minimal subset of state variables to be actuated to ensure structural controllability under additional constraints, is addressed in [14]. Paper [15] considers the minimum sensor placement problem when the sensors are subject to a single sensor failure. Note that the papers discussed above ([7]-[15]) address i/o selection for resilience towards maintaining structural controllability/observability, while this paper focuses on the verification and design of a resilient feedback matrix for arbitrary pole placement of the closed-loop poles.
Optimal cost feedback selection for LTI systems is addressed in the literature for various instances (see [16], [17], [18] and references therein). However, papers [16], [17], and [18] deal with optimal design and do not consider failure or malfunctioning of the links. The computational complexity of verifying that the closed-loop system has no structurally fixed modes (SFMs) is polynomial when the feedback links are not subject to failures [19]. In this paper, we show that verification of the no-SFMs condition is NP-complete when the feedback links are subject to failure (an exhaustive search-based technique has complexity exponential in the number of states of the system). There has been some effort on the resilience of the feedback matrix. Designing minimum cost resilient actuation-sensing-communication for regular descriptor systems while ensuring selective strong structural system properties is addressed in [20]. The pairing of sensors and actuators to design a feedback pattern that is resilient to edge failures is assessed in [21]. The approach in [21] uses the notion of resilient fixed modes and gives conditions to verify the non-existence of resilient fixed modes when the subset of feedback links that can be compromised is specified. This paper, on the other hand, deals with the resilience of the feedback matrix towards maintaining arbitrary pole placement when any subset of feedback links of cardinality at most γ can fail. During an attack or failure, the subset of feedback links that are compromised is arbitrary and may not be from a specified subset. Hence, algorithms to verify the resilience of a feedback matrix and algorithms to design a resilient feedback matrix that can handle the failure of any arbitrary subset of feedback links are important. This paper addresses these problems.
PROBLEM FORMULATION
Consider an LTI dynamical system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), where the state matrix A ∈ R^{n×n}, the input matrix B ∈ R^{n×m}, and the output matrix C ∈ R^{p×n}. Here R denotes the set of real numbers. Consider structured matrices Ā ∈ {⋆, 0}^{n×n}, B̄ ∈ {⋆, 0}^{n×m}, and C̄ ∈ {⋆, 0}^{p×n}. Here, 0 denotes fixed zero entries and ⋆ denotes indeterminate free parameters. The tuple (Ā, B̄, C̄) is said to be the structured system representation of the numerical system (A, B, C) if it satisfies Eq. (1) given below.
Here, (Ā, B̄, C̄) represents a class of numerical systems that satisfy Eq. (1). Let K̄ ∈ {⋆, 0}^{m×p} denote the structured feedback matrix, where K̄_{ij} = ⋆ if the j-th output is fed to the i-th input as feedback. For the structured feedback matrix K̄, the closed-loop structured system is denoted by (Ā, B̄, C̄, K̄). The concept of fixed modes for structured systems, together with a necessary and sufficient graph-theoretic condition for checking the existence of structurally fixed modes (SFMs), is given in [19]. Let [K̄] := {K : K_{ij} = 0 if K̄_{ij} = 0}. In this paper, we use the no-SFMs criterion to ensure arbitrary pole placement [19], as the no-SFMs criterion and the ability for arbitrary pole placement are equivalent when controllers are dynamic [22, Theorem 4.3.5]. We consider two problems in this paper, specifically in the context of the resilience of the closed-loop system towards achieving the arbitrary pole placement property, which are described below.
A. Feedback Resilience Verification Problem
Consider a structured system (Ā, B̄, C̄) and a structured feedback matrix K̄ such that the closed-loop system (Ā, B̄, C̄, K̄) has no SFMs. Let I ⊂ {1, …, m} × {1, …, p} be a subset consisting of indices of entries of K̄. More precisely, I ⊂ {(i, j) : i ∈ {1, …, m} and j ∈ {1, …, p}}. Define K̄_I ∈ {⋆, 0}^{m×p}, where (K̄_I)_{ij} = 0 if (i, j) ∈ I and (K̄_I)_{ij} = K̄_{ij} otherwise (Eq. (2)); that is, K̄_I is K̄ with the feedback links indexed by I failed. Now we formulate the first problem considered in this paper. For a given closed-loop system, the feedback resilience verification problem (Problem 2.2) verifies whether the system is resilient to failure of any set of feedback links with cardinality at most γ. Note that the set of feedback links that undergo failure or attack is not pre-specified and can be any arbitrary set. There are no known results to solve this problem.
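The failure model of Eq. (2) amounts to zeroing the entries of K̄ indexed by I; a minimal sketch (illustrative names, with boolean arrays standing in for the {⋆, 0} pattern):

```python
import numpy as np

def mask_feedback(K, I):
    """Return K_I: a copy of the structured feedback pattern K with the
    entries indexed by I forced to zero (failed feedback links).

    K is a boolean m x p array (True marks a star entry); I is an iterable
    of (i, j) index pairs.
    """
    K_I = K.copy()
    for (i, j) in I:
        K_I[i, j] = False
    return K_I

K = np.array([[True, True], [False, True]])
print(mask_feedback(K, {(0, 1)}))
```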
B. Sparsest Resilient Feedback Design Problem
Now we describe the design problem associated with the resilience of the feedback matrix. The objective is to design a feedback matrix K̄ such that, for every I ⊂ {1, …, m} × {1, …, p} with |I| ≤ γ, the matrix K̄_I defined in Eq. (2) satisfies the no-SFMs criterion for (Ā, B̄, C̄, K̄_I).
Let K_γ := {K̄ ∈ {⋆, 0}^{m×p} : the structured system (Ā, B̄, C̄, K̄_I) satisfies the no-SFMs criterion for every set I ⊂ {(i, j) : i ∈ {1, …, m} and j ∈ {1, …, p}} such that |I| ≤ γ}. The set K_γ thus consists of all feedback matrices for the structured system (Ā, B̄, C̄) that satisfy the no-SFMs criterion even after the failure of any γ feedback links. Without loss of generality, assume that K̄^f, with K̄^f_{ij} = ⋆ for all i, j, lies in K_γ; otherwise, there is no feasible solution to the problem. Thus K_γ is non-empty. Our objective here is to design a sparsest feedback matrix that lies in K_γ (Problem 2.3). In the next section, we present a few notations and existing results used in the sequel.
NOTATIONS, PRELIMINARIES AND EXISTING RESULTS
For understanding the graph-theoretic condition given in [19] that characterizes the no-SFMs criterion, we define a few notations and constructions. Firstly, the state digraph, denoted by D(Ā) := (V_X, E_X), is constructed as follows: V_X = {x_1, …, x_n} and (x_j, x_i) ∈ E_X if Ā_{ij} = ⋆. The digraph D(Ā, B̄, C̄, K̄) captures the effects of states, inputs, outputs, and feedback connections in the system.
and the edge set E_S has endpoints from V_S, the same as in E_D. A maximal strongly connected subgraph is a subgraph that is strongly connected and is not properly contained in any other strongly connected subgraph.
Condition a) in Proposition 3.2 can be verified by computing the strongly connected components of D(Ā, B̄, C̄, K̄). There exists a graph-theoretic condition using the concept of information paths for checking condition b) in Proposition 3.2 in O(n^{2.5}) operations [25]. In this paper, we use the bipartite graph matching condition given in [26]. We now define bipartite graphs and then give the bipartite matching condition to verify condition b) in Proposition 3.2.
A bipartite graph is denoted by G = (V, Ṽ, E), where V and Ṽ are the two vertex sets and every edge in E has one endpoint in V and the other in Ṽ. In G, a matching M ⊆ E is a collection of edges such that no two edges have a common endpoint, and a perfect matching is a matching whose cardinality is |V|. Further, let c : E → R be a cost function. Then a minimum cost perfect matching M⋆ is a perfect matching in G such that ∑_{e∈M⋆} c(e) ≤ ∑_{e∈M̃} c(e), where M̃ is any perfect matching in G. Finding a minimum cost perfect matching in a bipartite graph has computational complexity O(|V|^{2.5}) [24].
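For intuition, minimum cost perfect matchings of this kind can be computed with the Hungarian method; the sketch below encodes missing edges with a large cost and declares a perfect matching whenever no such edge is selected (the graph and costs are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# A large cost stands in for a missing edge between the two vertex sets.
INF = 1e9
cost = np.array([
    [1.0, INF, 2.0],
    [INF, 1.0, 1.0],
    [3.0, INF, INF],
])
rows, cols = linear_sum_assignment(cost)     # Hungarian method
if cost[rows, cols].sum() < INF:             # no INF edge used => perfect matching
    print("perfect matching:", list(zip(rows, cols)),
          "cost:", cost[rows, cols].sum())
else:
    print("no perfect matching")
```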
For a closed-loop structured system (Ā, B̄, C̄, K̄), we construct a bipartite graph denoted by B(Ā, B̄, C̄, K̄). Note that, while Proposition 3.2 gives a polynomial-time graph-theoretic condition to verify the no-SFMs criterion, our objective is (i) to verify whether the structured system continues to satisfy the no-SFMs condition even after the failure of any subset of feedback links with cardinality at most γ, and (ii) to design a feedback matrix that guarantees the no-SFMs criterion even after the failure of any subset of feedback links of cardinality at most γ. In the next section, we analyze the complexity of these two problems.
COMPLEXITY RESULTS
In this section, we analyze the complexity of Problem 2.2 and Problem 2.3. We prove that Problem 2.2 is NP-complete using a known NP-complete problem, the blocker problem. We prove that Problem 2.3 is NP-hard using the minimum set multi-covering problem, a known NP-hard problem. First, we prove the NP-completeness of Problem 2.2.
A. Complexity of the Feedback Resilience Verification Problem
The NP-completeness result for the feedback resilience verification problem (Problem 2.2) is obtained by reducing a known NP-complete problem, the blocker problem, to an instance of Problem 2.2. Now we describe the blocker problem: given a bipartite graph G = (V, Ṽ, E) and an integer γ, Block(G, 1, γ) asks whether there exists a set T ⊆ E with |T| ≤ γ, called a blocker, whose removal leaves G with no perfect matching [27]. Note that Block(G, 1, γ) is NP-complete even when G has a perfect matching [27]. Thus, we have the following proposition.
Proposition 4.2. [27, Theorem 3.3] Consider a bipartite graph G that has a perfect matching. Then Block(G, 1, γ) is NP-complete.
Given a bipartite graph G = ({v_1, …, v_r}, {ṽ_1, …, ṽ_s}, E), with s ≥ r and 1 ≤ γ ≤ |E|, Algorithm 4.1 constructs a closed-loop structured system (Ā, B̄, C̄, K̄) with (s + 2) states, s inputs, and r outputs. In Step 2, we define the set E_X as follows: there exist directed edges from node x_2 to every node in the set {x_1, …, x_{s+2}}, from every node in the set {x_1, …, x_{s+2}} to node x_2, and from every node in {x_3, …, x_{s+2}} to every node in {x_3, …, x_{s−r+2}}. By this construction of Ā, D(Ā) is an irreducible graph (see Figure 3). In Step 3, we construct the edge set E_U as follows: a directed edge exists from every input in {u_1, …, u_s} to every state node in {x_{s−r+3}, …, x_{s+2}}. Thus, in the constructed B̄, no input directly actuates the states {x_1, …, x_{s−r+2}}. In Step 4, the output edge set E_Y is constructed in such a way that every state node in {x_3, …, x_{s+2}} is connected to every output node in {y_1, …, y_r}. Thus, in the constructed C̄, the states {x_1, x_2} cannot be sensed directly. In Step 5, the feedback edges are constructed in such a way that there is a feedback edge corresponding to every edge (v_i, ṽ_j) ∈ E. This completes the construction of K̄. An illustrative example demonstrating the construction of the structured system (Ā, B̄, C̄, K̄) for a given instance of the blocker problem is given in Figure 2. Next, we prove the following result.

Lemma 4.3. If G has a perfect matching, then B(Ā, B̄, C̄, K̄) has a perfect matching.

Proof. Let M_G be a perfect matching in G. Since r ≤ s, M_G saturates {v_1, …, v_r}. To prove the result, we extend the matching M_G to a perfect matching in the bipartite graph B(Ā, B̄, C̄, K̄) as follows: from the construction of the structured system (Ā, B̄, C̄, K̄), corresponding to every edge (v_i, ṽ_j) ∈ E there exists an edge (u′_j, y_i) ∈ E_K in B(Ā, B̄, C̄, K̄). Hence there exists a matching of size r, say M_1 ⊆ E_K, in B(Ā, B̄, C̄, K̄). Notice that in M_1, r vertices in V_Y and the r vertices {y′_1, …, y′_r} of B(Ā, B̄, C̄, K̄) are involved, and the remaining unmatched vertices on the left side add up to (s + 2).

Proof. If part: Here, we assume B(Ā, B̄, C̄, K̄) has a blocker T̃ ⊆ E_K, |T̃| = γ, and then prove that G has a blocker T such that |T| = γ. To the contrary, assume that there exists no blocker in G of size γ. Thus, there exists a perfect matching in G even after removing any γ edges in E. From the correspondence between E and E_K, there then exists a matching of size |M′_B| in G; further, this is a perfect matching in G as |M′_B| = r, and it extends to a perfect matching of B(Ā, B̄, C̄, K̄) after the removal of T̃. But this contradicts the assumption that T̃ is a blocker of B(Ā, B̄, C̄, K̄). Hence G has a blocker.
Using the if part and the only-if part, and from Proposition 4.2, checking whether the bipartite graph B(Ā, B̄, C̄, K̄) has a perfect matching after removing any set of feedback links of size at most γ is NP-complete. Hence, Problem 2.2 is NP-complete, which establishes Theorem 4.4.
For the state matrix Ā constructed in Algorithm 4.1, D(Ā) is irreducible: for any i, j ∈ {1, …, s + 2}, there exists a path from x_i to x_j through node x_2. A schematic diagram showing the digraph D(Ā) for the Ā given in Algorithm 4.1 is given in Figure 3. The following result is an immediate consequence of Theorem 4.4.
Corollary 4.5. Consider a closed-loop structured system (Ā, B̄, C̄, K̄). Then Problem 2.2 is NP-complete even when D(Ā) is irreducible.

Now let us consider Problem 2.2 for systems in which all state nodes are spanned by a disjoint union of cycles. In other words, condition b) in Proposition 3.2 is satisfied without using any feedback connections. This class of systems is called structurally cyclic systems [18]. There is a wide class of so-called self-damped systems, including multi-agent systems and epidemic systems, that are structurally cyclic [28].

Lemma 4.6. For structurally cyclic systems, Problem 2.2 can be solved in O(n^2) operations.

Proof. In structurally cyclic systems (systems in which all state nodes are spanned by a disjoint union of cycles), condition b) is satisfied without using any feedback edges. In order to maintain the no-SFMs criterion, the closed-loop system must maintain condition a) of Proposition 3.2 even after removing any γ feedback links. This can be verified in O(n^2) operations by finding all SCCs of D(Ā, B̄, C̄, K̄) and checking whether each SCC has at least (γ + 1) feedback links. Finding SCCs in a digraph takes O(n^2) operations [24], and hence the proof follows.
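A sketch of the check in the proof of Lemma 4.6, assuming condition a) requires every strongly connected component containing a state node to retain a feedback link after any γ failures (networkx is used for the SCC computation; all names are illustrative):

```python
import networkx as nx

def resilient_structurally_cyclic(D, state_nodes, feedback_edges, gamma):
    """For a structurally cyclic system only condition a) matters: every SCC
    of D(A,B,C,K) that contains a state node must keep a feedback link after
    any gamma failures, i.e. must contain at least gamma + 1 feedback edges.

    D: networkx DiGraph of the closed-loop digraph; state_nodes: iterable of
    state-node labels; feedback_edges: iterable of (u, v) feedback edges.
    """
    fb = set(feedback_edges)
    states = set(state_nodes)
    for scc in nx.strongly_connected_components(D):
        if scc & states:
            count = sum(1 for (u, v) in fb if u in scc and v in scc)
            if count <= gamma:
                return False
    return True
```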
Subsection 4-A concludes that Problem 2.2 is NP-complete for general structured systems and for structured systems whose state digraph D(Ā) is irreducible. However, Problem 2.2 is polynomial-time solvable when condition b) in Proposition 3.2 is satisfied without feedback (Lemma 4.6). In the next subsection, we analyze the complexity of Problem 2.3.
B. Complexity of Sparsest Resilient Feedback Design Problem
In this section, we analyze the complexity of Problem 2.3. Firstly, we claim that Problem 2.3 is NP-hard for general systems. This result is a consequence of Theorem 4.4, as Problem 2.2, the decision problem [29] corresponding to the optimization Problem 2.3, is NP-complete. From Corollary 4.5 we also infer that Problem 2.3 is NP-hard even when D(Ā) is irreducible. However, the complexity of Problem 2.3 is not straightforward for structurally cyclic systems, i.e., the class of systems whose state nodes are spanned by disjoint cycles. In this section, we show that Problem 2.3 is NP-hard for structurally cyclic systems, while Problem 2.2 is polynomial-time solvable for structurally cyclic systems (Lemma 4.6).
The NP-hardness result of Problem 2.3 for structurally cyclic systems is obtained using a reduction from the minimum set multi-covering problem (MSMC). The MSMC problem is described in Problem 4.9 for the sake of completeness. We first define a cover in Definition 4.8.
Definition 4.8. Given a set of N elements U = {1, …, N}, referred to as the universe, and a collection of r sets P = {S_1, …, S_r}, an index set J ⊆ {1, …, r} is said to be a cover of U with demand α if ∪_{j∈J} S_j = U and |{j ∈ J : i ∈ S_j}| ≥ α for every i ∈ U.
Problem 4.9 (MSMC). Given a universe U, a collection P, and a demand α, find a cover of U with demand α of minimum cardinality. A cover Ŝ that is a feasible solution to Problem 4.9 is called a multi-cover. To prove the NP-hardness of Problem 2.3, we now present a reduction of a general instance of the MSMC problem to an instance of Problem 2.3.
The pseudo-code showing the reduction of a general instance of the MSMC problem to an instance of Problem 2.3 is given in Algorithm 4.2.
Given a general instance of the MSMC problem consisting of the universe U = {1, …, N}, sets P = {S_1, …, S_r}, and constant demand α, we construct a structured system (Ā, B̄, C̄) with states x_1, …, x_N, input u_1, and outputs y_1, …, y_r (Step 1). In Ā, every diagonal entry is ⋆ (Step 2). Thus the system is structurally cyclic. Moreover, B(Ā) has a perfect matching given by the diagonal entries {(x_i, x_i) : i ∈ {1, …, N}}. The input matrix B̄ consists of a single input u_1 which is connected to every state node (Step 3). The output matrix C̄ is constructed depending on P such that an output node y_i senses all state nodes x_j that satisfy j ∈ S_i (Step 4). An illustrative example demonstrating the construction of the structured system (Ā, B̄, C̄) for a given instance of the MSMC problem is given in Figure 4. Note that the value of γ for Problem 2.2 for the constructed structured system is uniquely defined by the value of α of the corresponding MSMC problem, namely α = γ + 1. Using the structured system (Ā, B̄, C̄) constructed in Algorithm 4.2, we have the following result.

Lemma 4.10. K̄ ∈ K_γ if and only if the collection S(K̄) of sets selected under K̄ is a multi-cover of the universe U with demand α = γ + 1.

Proof. Only-if part: Here we assume K̄ ∈ K_γ and prove that S(K̄) covers each element in U at least α times, i.e., S(K̄) is a multi-cover of the universe U satisfying demand α. We prove this using contradiction. Suppose there exists an element j ∈ U that is not covered α times by S(K̄). Let S(K̄) consist of the sets S_{i_1}, …, S_{i_k}, with corresponding outputs y_{i_1}, …, y_{i_k}. From the construction (Step 4 of Algorithm 4.2), an output node y_i has incoming edges from the state nodes x_j for all j ∈ S_i. As element j appears fewer than γ + 1 times (since α = γ + 1) in the union of the sets S_{i_1}, …, S_{i_k}, the corresponding state node x_j has fewer than γ + 1 outgoing edges to outputs. Without loss of generality, assume that x_j has exactly γ outgoing edges to outputs. Notice that, as Ā is a diagonal matrix, x_j is not connected to any state node other than itself. Thus x_j lies in an SCC with γ feedback links. Then the closed-loop system will have SFMs when these γ feedback links fail. This contradicts the fact that K̄ is a solution to Problem 2.3. This completes the only-if part.

If part: Here we assume that S(K̄) is a solution to Problem 4.9 and prove that K̄ is a solution to Problem 2.3. Suppose not. From the construction, an output node y_i has incoming edges from the state nodes x_j for all j ∈ S_i. Since B(Ā) has a perfect matching (as Ā is diagonal), condition b) of Proposition 3.2 is satisfied without using any feedback edge. Thus, the assumption K̄ ∉ K_γ implies that K̄ violates condition a) in Proposition 3.2 after removing some subset of feedback links of cardinality at most γ. Without loss of generality, assume that this subset of feedback links has cardinality γ. In other words, there exists a state node x_j which does not lie in an SCC in D(Ā, B̄, C̄, K̄) with more than γ feedback links. As Ā is a diagonal matrix, state x_j is not connected to any other state node, and the feedback links corresponding to the SCC in which x_j lies are those which correspond to the outputs connected to x_j. In D(Ā, B̄, C̄, K̄), x_j has fewer than γ + 1 outputs connected to it. This implies that the corresponding element j ∈ U lies in fewer than γ + 1 sets in S(K̄). This is a contradiction to the assumption that S(K̄) is a multi-cover of the universe U with demand α, as α = γ + 1. This completes the proof.
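The construction of Steps 2-4 of Algorithm 4.2 is mechanical; a sketch, with boolean sparsity patterns and illustrative names:

```python
import numpy as np

def msmc_to_system(N, P):
    """Build structured matrices for an MSMC instance with universe
    {1, ..., N} and sets P = [S_1, ..., S_r]: A is diagonal (structurally
    cyclic), a single input actuates every state, and output y_i senses
    exactly the states x_j with j in S_i.
    """
    r = len(P)
    A = np.eye(N, dtype=bool)            # star on every diagonal entry (Step 2)
    B = np.ones((N, 1), dtype=bool)      # one input connected to all states (Step 3)
    C = np.zeros((r, N), dtype=bool)
    for i, S in enumerate(P):
        for j in S:                      # universe elements are 1-indexed
            C[i, j - 1] = True           # y_i senses x_j for j in S_i (Step 4)
    return A, B, C

A, B, C = msmc_to_system(3, [{1, 2}, {2, 3}, {1, 3}])
print(C.astype(int))
```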
Next, we establish the NP-hardness of Problem 2.3 using the reduction from the MSMC problem. We show that any instance of the MSMC problem can be reduced to an instance of Problem 2.3 such that an optimal solution to Problem 2.3 gives an optimal solution to the MSMC problem.

Theorem 4.11. (i) An optimal solution to the instance of Problem 2.3 constructed by Algorithm 4.2 gives an optimal solution to the corresponding MSMC problem; consequently, (ii) Problem 2.3 is NP-hard, even for structurally cyclic systems.

Proof. (i) Given a general instance of the MSMC problem (U, P, α), we reduce it to an instance of Problem 2.3 using Algorithm 4.2. Let K̄ be a feasible solution to Problem 2.3. Using Lemma 4.10, the sets selected under K̄, i.e., S(K̄), cover each element in U at least α times. Hence S(K̄) is a feasible solution to the set multi-covering problem. Next, we prove optimality.
Let K̄⋆ be an optimal solution to Problem 2.3. Using Lemma 4.10, S(K̄⋆) is a feasible solution to the set multi-covering problem. From Step 5 of Algorithm 4.2 we know that ‖K̄⋆‖_0 = |S(K̄⋆)|, where |D| denotes the cardinality of a set D. Using contradiction, we now prove that S(K̄⋆) is an optimal solution to the MSMC problem. Assume S(K̄⋆) is not an optimal solution. Then there exists a solution S̃ ⊆ P such that |S̃| < |S(K̄⋆)|, ∪_{S_i ∈ S̃} S_i = U, and each element in U is covered at least α times. With respect to S̃, define K̃, where K̃_{ij} = ⋆ if S_j ∈ S̃. From Lemma 4.10, K̃ is a solution to Problem 2.3. Further, |S̃| < |S(K̄⋆)| implies ‖K̃‖_0 < ‖K̄⋆‖_0. This is a contradiction as K̄⋆ is an optimal solution to Problem 2.3. This proves that S(K̄⋆) is an optimal solution to the MSMC problem.

Theorem 4.12. (i) For every ε ≥ 1, an ε-optimal solution to Problem 2.3 gives an ε-optimal solution to the MSMC problem, and (ii) Problem 2.3 is inapproximable to factor (1 − o(1)) log n.

Proof. (i) Recall that K̄⋆ is an optimal solution to Problem 2.3 and let S⋆ be an optimal solution to Problem 4.9. To prove (i), we need to show that if K̄′ ∈ K_γ and ‖K̄′‖_0 ≤ ε ‖K̄⋆‖_0, then |S(K̄′)| ≤ ε |S⋆|. Note that ‖K̄′‖_0 = |S(K̄′)| and ‖K̄⋆‖_0 = |S(K̄⋆)| (see Step 5). Thus |S(K̄′)| ≤ ε |S(K̄⋆)|. By Theorem 4.11 (i), S(K̄⋆) is an optimal solution to Problem 4.9. This implies |S(K̄⋆)| = |S⋆|. Thus |S(K̄′)| ≤ ε |S⋆|.
(ii) From Theorem 4.12 (i), for any ε ≥ 1, if there exists an ε-optimal solution to Problem 2.3, then there exists an ε-optimal solution to the MSMC problem. The MSMC problem is inapproximable to factor (1 − o(1)) log n, as the minimum set cover problem [30], [31] is a special case of the MSMC problem with α = 1. Thus Problem 2.3 is inapproximable to factor (1 − o(1)) log n. This completes the proof.
We conclude that Problem 2.3 is NP-hard and inapproximable to factor (1 − o(1)) log n for general systems as well as for structurally cyclic systems. Table I summarizes the complexity results obtained in this paper for the different cases of Problem 2.2 and Problem 2.3. In the next section, we provide an approximation algorithm to solve Problem 2.3 for a special graph topology of practical importance.
APPROXIMATION ALGORITHM FOR SPARSEST RESILIENT FEEDBACK DESIGN PROBLEM FOR BACK-EDGE FEEDBACK STRUCTURE

In this section, we propose an order-optimal, O(log n), approximation algorithm for the sparsest resilient feedback design problem for structurally cyclic systems with a special feedback structure, the so-called back-edge feedback structure. We assume that the feedback matrix satisfies the structural constraint that all feedback edges (y_j, u_i) are such that there exists a directed path from input u_i to output y_j in D(Ā, B̄, C̄). In other words, an output from a state is fed back to an input which can directly or indirectly influence the state associated with that output. We refer to a feedback structure that satisfies this constraint as a back-edge feedback structure. The back-edge feedback structure is applicable in various networks, including hierarchical networks and multi-agent systems in which the state measurement is fed back to the leader agent. The hierarchical network structure is common in real-life networks [32]. A power distribution system follows a hierarchical network structure, and finding an optimal resilient control strategy aims at designing a least-cost feedback pattern that maintains system parameters, such as voltages and frequency, at different layers of the network at specified levels even under adversarial conditions [33], [34]. There is a wide class of practically important systems called self-damped systems [28] that are structurally cyclic, for example, consensus dynamics in multi-agent systems and epidemic dynamics.
For structurally cyclic systems with a back-edge feedback structure, we propose a polynomial-time algorithm that finds an approximate solution to Problem 2.3 with an order-optimal approximation ratio. We describe below the graph topology considered in this section.
Consider a digraph D_G = (V_G, E_G) and let v_i ∈ V_G and v_j ∈ V_G be such that there exists a directed path from v_i to v_j. Then the edge (v_j, v_i), if present, is said to be a back-edge.

Assumption 1. The structured system (Ā, B̄, C̄) is structurally cyclic, i.e., all the state nodes are spanned by a disjoint union of cycles, and every feedback edge (y_j, u_i) is a back-edge, i.e., there exists a directed path from u_i to y_j in D(Ā, B̄, C̄).

A schematic diagram showing this special feedback structure is given in Figure 4. The following result is an immediate consequence of Theorem 4.11.
Corollary 5.1. Consider a structurally cyclic structured system (Ā, B̄, C̄). Let Assumption 1 hold. Then Problem 2.3 is NP-hard for structured systems with this graph topology.
Now we propose an approximation algorithm for solving Problem 2.3 for a structured system (Ā, B̄, C̄) under Assumption 1. First, we give an algorithm to reduce a general instance of Problem 2.3 for a structured system satisfying Assumption 1 to an instance of the minimum set multi-covering problem.
Given a general instance of the structured system (Ā, B̄, C̄) satisfying Assumption 1 and a constant γ, we construct an instance of the MSMC problem (U, P, α) using Algorithm 5.1. Let {x_1, …, x_n} be the state nodes, {u_1, …, u_m} be the input nodes, and {y_1, …, y_p} be the output nodes of the structured system; the universe U consists of the state nodes and the demand is α = γ + 1 (Step 5). For a possible feedback edge e_d = (y_j, u_i), the set S_d consists of those elements of the universe that correspond to state nodes of D(Ā, B̄, C̄) that lie on some directed path from u_i to y_j (Step 7). Note that there may be multiple paths from u_i to y_j, as shown in Figure 5. For a solution S′ to the MSMC problem (U, P, α), we define the feedback matrix K̄(S′) (Step 10). Here K̄(S′) consists of all those feedback edges that correspond to sets in S′ under the definition given in Step 7. An illustrative example demonstrating the construction of the MSMC problem for a given instance of a structured system (Ā, B̄, C̄) is given in Figure 5.

Lemma 5.2. S′ is a solution to the MSMC problem (U, P, α) if and only if K̄(S′) is a solution to Problem 2.3.

Proof. As the system is structurally cyclic, condition b) in Proposition 3.2 is satisfied without using any feedback edges, and only condition a) has to be satisfied.

Only-if part: Here we assume that S′ is a solution to the MSMC problem and prove that K̄(S′) is a solution to Problem 2.3. We prove this using contradiction. Assume that K̄(S′) is not a solution to Problem 2.3. Since B(Ā) has a perfect matching, condition b) of Proposition 3.2 is satisfied without using any feedback edge. Thus K̄(S′) must violate condition a) after removing some γ feedback links. Hence there exists a state node, say x_q, that lies in an SCC in D(Ā, B̄, C̄, K̄(S′)) with fewer than γ + 1 feedback links. Notice that K̄(S′) consists of the feedback edges corresponding to all the sets in S′. From the construction (Step 7), a set S_d consists of all state nodes which lie on some directed path from u_i to y_j in D(Ā, B̄, C̄). Thus, in D(Ā, B̄, C̄) there are fewer than γ + 1 different directed paths from some input node to some output node through x_q. Hence the element x_q is covered fewer than α times by the cover S′, as α = γ + 1. This is a contradiction as S′ is a solution to the MSMC problem. This completes the only-if part.

If part: Here we assume that K̄(S′) is a solution to Problem 2.3 and prove that S′ is a solution to the MSMC problem. We prove this using contradiction. Assume S′ is not a solution to Problem 4.9. Then there exists an element x_q which is covered fewer than γ + 1 times by the cover S′ (since α = γ + 1). So S′ consists of fewer than γ + 1 sets which contain x_q. By the construction of the sets S_d (Step 7), S_d consists of all state nodes that lie on some directed path in D(Ā, B̄, C̄) from input u_i to output y_j for e_d = (y_j, u_i). So in D(Ā, B̄, C̄) there are fewer than γ + 1 different directed paths from some input node to some output node through x_q. Notice that K̄(S′) consists of the feedback edges corresponding to all the sets in S′. Thus the state node x_q lies in an SCC in D(Ā, B̄, C̄, K̄(S′)) with fewer than γ + 1 feedback links. So after removing γ feedback links, condition a) of Proposition 3.2 is violated. This is a contradiction as K̄(S′) is a solution to Problem 2.3. This completes the if-part.

Theorem 5.3. Consider a structurally cyclic structured system (Ā, B̄, C̄) and a constant γ. Let Assumption 1 hold, and let (U, P, α) be the MSMC problem constructed using Algorithm 5.1. Let S⋆ be an optimal solution to the MSMC problem and K̄(S⋆) be the feedback matrix selected under S⋆.
Then, (i) K̄(S⋆) is an optimal solution to Problem 2.3, and (ii) for every ε ≥ 1, if there exists an ε-optimal solution to the MSMC problem, then there exists an ε-optimal solution to Problem 2.3, i.e., |S′| ≤ ε |S⋆| implies ‖K̄(S′)‖_0 ≤ ε ‖K̄⋆‖_0, where S⋆ is an optimal solution to the MSMC problem and K̄⋆ is an optimal solution to Problem 2.3.
Proof. (i) S⋆ is an optimal solution to Problem 4.9. By Lemma 5.2, K̄(S⋆) is a feasible solution to Problem 2.3. Now we prove the optimality of K̄(S⋆) using contradiction. Assume that K̄(S⋆) is not an optimal solution to Problem 2.3. Then there exists a solution to Problem 2.3, say K̄_1, such that ‖K̄_1‖_0 < ‖K̄(S⋆)‖_0. Consider S_1 := {S_j : S_j consists of the x_q's which lie on some directed path from u_i to y_j with (K̄_1)_{ij} = ⋆}. Here, ‖K̄_1‖_0 = |S_1|. From Lemma 5.2, S_1 is a feasible solution to the MSMC problem. As ‖K̄_1‖_0 < ‖K̄(S⋆)‖_0, we get |S_1| < |S⋆|. This is a contradiction as S⋆ is an optimal solution. Hence K̄(S⋆) is an optimal solution to Problem 2.3.
(ii) Let S⋆ be an optimal solution to the MSMC problem. Given |S′| ≤ ε |S⋆|, we need to show that ‖K̄(S′)‖_0 ≤ ε ‖K̄⋆‖_0. From Step 10 of Algorithm 5.1 we get ‖K̄(S′)‖_0 = |S′| and ‖K̄(S⋆)‖_0 = |S⋆|. Thus ‖K̄(S′)‖_0 = |S′| ≤ ε |S⋆| = ε ‖K̄(S⋆)‖_0. Moreover, from Theorem 5.3 (i), K̄(S⋆) selected under S⋆ is an optimal solution to Problem 2.3. Hence ‖K̄(S⋆)‖_0 = ‖K̄⋆‖_0 and ‖K̄(S′)‖_0 ≤ ε ‖K̄⋆‖_0. This completes the proof.
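The O(log n) guarantee of Theorem 5.4 matches the approximation ratio of the standard greedy algorithm for set multi-cover applied to the instance built by Algorithm 5.1; the sketch below is a minimal greedy multi-cover (illustrative names; each set may be chosen at most once, mirroring the fact that a feedback link is either present or absent), not necessarily the exact procedure analyzed in the paper.

```python
def greedy_multicover(universe, sets, alpha):
    """Greedy heuristic for minimum set multi-cover: repeatedly pick the
    unchosen set that covers the largest residual demand."""
    demand = {e: alpha for e in universe}
    chosen = []
    remaining = dict(enumerate(sets))
    while any(d > 0 for d in demand.values()):
        best, best_gain = None, 0
        for idx, S in remaining.items():
            gain = sum(1 for e in S if demand.get(e, 0) > 0)
            if gain > best_gain:
                best, best_gain = idx, gain
        if best is None:
            raise ValueError("instance is infeasible")
        chosen.append(best)
        for e in remaining.pop(best):
            if demand.get(e, 0) > 0:
                demand[e] -= 1
    return chosen

# Example: demand 2 over {1, 2, 3} with three pairwise sets
print(greedy_multicover({1, 2, 3}, [{1, 2}, {2, 3}, {1, 3}], alpha=2))
```

In the next section, we provide an algorithm to solve Problem 2.2 when D(Ā) is irreducible.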
ALGORITHM FOR FEEDBACK RESILIENCE VERIFICATION PROBLEM
In this section, we propose an algorithm to solve Problem 2.2 for irreducible systems. Note that Problem 2.2 is NP-complete even for irreducible systems (Corollary 4.5). The proposed algorithm is computationally efficient for smaller values of γ. Typically, while attacking cyber-physical systems, the attacker targets a few links due to resource and infrastructure constraints and to remain undetected by the system. Henceforth, the following assumption holds.

Assumption 2. For the structured system (Ā, B̄, C̄, K̄), the digraph D(Ā) is irreducible.
If D(Ā) is irreducible, then a single feedback link is enough to satisfy condition a) in Proposition 3.2. Thus the class of irreducible systems satisfies the no-SFMs criterion if the system bipartite graph B(Ā, B̄, C̄, K̄) has a perfect matching and D(Ā, B̄, C̄, K̄) has at least one feedback link present in it. Hence, Problem 2.2 for irreducible systems boils down to checking the existence of a perfect matching in B(Ā, B̄, C̄, K̄) after the failure of any γ feedback links. However, if there exists a perfect matching in B(Ā, B̄, C̄, K̄) that uses no feedback links, then condition b) in Proposition 3.2 is satisfied without using any feedback edge: in such a case, any one feedback edge is sufficient to satisfy the no-SFMs criterion and the system is resilient for any γ < |E_K|. Thus, the case where all perfect matchings in B(Ā, B̄, C̄, K̄) have at least one feedback edge is of interest and is considered here.
In this section, we discuss our approach to solving Problem 2.2. If a system is resilient to the failure of any γ feedback links, then it is resilient to the failure of any set of feedback links of cardinality less than γ. Thus, to solve Problem 2.2 it is enough to verify whether the system is resilient to the failure of any γ feedback links. Problem 2.2 is NP-complete for a general γ. As m = O(n) and p = O(n), the number of feedback links is O(n^2). So there are (n^2 choose γ) bipartite graphs possible for the removal of any γ feedback links. The exhaustive search-based technique requires checking for a perfect matching in each of these bipartite graphs, and the complexity is huge even for small γ, as n is large for complex systems. We now present an algorithm to solve Problem 2.2 with a significant saving in computations when compared to the exhaustive search-based approach. For γ = 1, the algorithm first finds a perfect matching M_0 in B(Ā, B̄, C̄, K̄) that uses the minimum number of feedback edges and sets F_0 := M_0 ∩ E_K. It then removes the edges of F_0 one at a time and checks whether B(Ā, B̄, C̄, K̄) still has a perfect matching; if a matching does not exist for some removal, result is set to False. On the other hand, if a matching exists, then the algorithm proceeds with the removal of the next edge in F_0. Finally, it returns result as the output.
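A sketch of this γ = 1 procedure, assuming square boolean adjacency and feedback masks for the system bipartite graph (scipy's Hungarian solver stands in for the min-cost matching step; names are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

INF = 1e9

def one_link_resilient(adj, feedback):
    """Verify resilience to any single feedback-link failure (gamma = 1).

    adj: square boolean adjacency matrix of the system bipartite graph
    B(A,B,C,K); feedback: boolean mask (same shape) of feedback edges E_K.
    """
    def min_fb_matching(a):
        # min-cost perfect matching: cost 1 on feedback edges, 0 otherwise
        cost = np.where(a, feedback.astype(float), INF)
        r, c = linear_sum_assignment(cost)
        return list(zip(r, c)) if cost[r, c].sum() < INF else None

    M0 = min_fb_matching(adj)
    if M0 is None:
        return False                       # no perfect matching at all
    F0 = [e for e in M0 if feedback[e]]    # F0 = M0 intersected with E_K
    for e in F0:                           # only edges of F0 need re-checking
        trial = adj.copy()
        trial[e] = False
        if min_fb_matching(trial) is None:
            return False
    return True
```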
For a given structured system, the theorem below proves that Algorithm 6.1 solves Problem 2.2 for γ = 1. Further, we also give the complexity of Algorithm 6.1.
Theorem 6.1. Algorithm 6.1 solves Problem 2.2 for γ = 1, and its complexity is O(n^{3.5}).

Proof. By the choice of M_0, we have M_0 ∩ (E_K \ F_0) = ∅. Thus, for any one-edge removal from E_K \ F_0, the system (Ā, B̄, C̄, K̄) is resilient (since the perfect matching M_0 survives). Hence it is enough to check whether there exists a perfect matching in B(Ā, B̄, C̄, K̄) for every one-edge removal from F_0. Note that |F_0| = ℓ ≥ 1, otherwise the system is always resilient. In each iteration of the for loop of Algorithm 6.1, we remove an edge from F_0 and check for the existence of a perfect matching. If a perfect matching does not exist, then the algorithm concludes that the system is not one-edge resilient. On the other hand, if there exists a perfect matching for every one-edge removal from F_0, then the algorithm concludes that the system is resilient to any one-edge removal from F_0. Hence Algorithm 6.1 solves Problem 2.2 for γ = 1.
Finding a minimum cost perfect matching in B(Ā, B̄, C̄, K̄) has complexity O(n^3) [24]. In every iteration, Algorithm 6.1 finds a perfect matching. Note that the maximum number of iterations is ℓ. Further, m = O(n) and p = O(n) together imply ℓ = O(n). Finding a perfect matching in B(Ā, B̄, C̄, K̄) has complexity O(n^{2.5}) [24]. All the other steps are of linear complexity. Hence the complexity of Algorithm 6.1 is O(n^{3.5}).
B. Algorithm and Results for γ = 2
In this subsection, we present an algorithm to solve Problem 2.2 for γ = 2 and then prove its correctness and complexity.
The pseudocode of the proposed algorithm is given in Algorithm 6.2. It takes as input a bipartite graph B_2 = (V_2, Ṽ_2, E_2) and an edge set S_2 ⊆ E_2, and outputs whether B_2 has a perfect matching after the removal of any two edges from S_2. For a given structured system, the result below proves that Algorithm 6.2 solves Problem 2.2 for γ = 2. Further, we also give the complexity of Algorithm 6.2.

Theorem 6.2. Algorithm 6.2 solves Problem 2.2 for γ = 2, and its complexity is O(n^{4.5}).

Proof. The structured system (Ā, B̄, C̄, K̄) is resilient for γ = 2 if B(Ā, B̄, C̄, K̄) has a perfect matching after the removal of any two edges from E_K. With inputs B(Ā, B̄, C̄, K̄) and E_K to Algorithm 6.2, M_2 is a perfect matching in B(Ā, B̄, C̄, K̄) and F_2 ⊆ E_K. The removal of any two feedback edges can happen in the following ways: (a) both edges from the set F_2, (b) one edge from F_2 and the other from E_K \ F_2, and (c) both edges from the set E_K \ F_2. For the system (Ā, B̄, C̄, K̄) to be resilient for γ = 2, B(Ā, B̄, C̄, K̄) must have a perfect matching in all these cases. Case (a): For B_2 = B(Ā, B̄, C̄, K̄) and S_2 = E_K, Steps 5-12 of Algorithm 6.2 check for the existence of a perfect matching in B(Ā, B̄, C̄, K̄) after the removal of every pair of edges from F_2. If there exists no perfect matching after removing some pair of edges in F_2, then the algorithm sets result_1 to False and concludes in Step 21 that the system is not resilient. Case (b): For B_2 = B(Ā, B̄, C̄, K̄) and S_2 = E_K, Steps 13-21 of Algorithm 6.2 check whether the system (Ā, B̄, C̄, K̄) is resilient to the removal of any edge f_i ∈ F_2 together with another edge from E_K \ F_2. Recall that here B^i_2 is obtained after removing the edge f_i from B(Ā, B̄, C̄, K̄). If for some B^i_2 there does not exist a perfect matching, then result_2 is set to False. Thus Algorithm 6.2 decides precisely whether the system is resilient to any two-edge removal for case (b). Case (c): As M_2 ∩ (E_K \ F_2) = ∅, the matching M_2 survives the removal of any two edges from E_K \ F_2. Hence the system is resilient to the removal of any two edges in this case, and case (c) requires no verification.
In the end, the algorithm outputs True if both result_1 and result_2 are True, and False otherwise.
C. Algorithm and Results for General γ
In this subsection, we present the key steps of the recursive algorithm to solve Problem 2.2 for general γ. The inputs to the algorithm are a bipartite graph B_γ = (V_γ, Ṽ_γ, E_γ) and an edge set S_γ ⊆ E_γ.
Consider a bipartite graph B_γ and an edge set S_γ. Find a perfect matching in B_γ, say M_γ, that consists of the minimum number of edges from S_γ (using a min-cost perfect matching algorithm with a non-zero uniform cost on the edges in S_γ and cost 0 on the other edges). Define F_γ := M_γ ∩ S_γ and let |F_γ| = ℓ_γ. The removal of γ edges from S_γ can happen in (γ + 1) ways, case (0), …, case (γ), where in case (q) we consider the removal of (γ − q) edges from F_γ and of the remaining q edges from S_γ \ F_γ.
Step 1: For case (0), check the existence of a perfect matching for the removal of every possible set of γ edges from F_γ.

Step 2: For case (1), remove (γ − 1) edges from F_γ and apply Algorithm 6.1 with B_1 as the modified bipartite graph obtained after removing the (γ − 1) edges from B_γ and with the set S_1 = S_γ \ F_γ. This is done for every possible choice of (γ − 1) edges.

Step 3: For case (2), remove (γ − 2) edges from F_γ and apply Algorithm 6.2 with B_2 as the modified bipartite graph obtained after removing the (γ − 2) edges from B_γ and with the set S_2 = S_γ \ F_γ. This is done for every possible choice of (γ − 2) edges.

Step 4: We follow similar lines for case (3) through case (γ − 1). In case (q), we remove (γ − q) edges from F_γ and apply the algorithm for γ = q with B_q as the modified bipartite graph obtained after removing the (γ − q) edges from B_γ and with the set S_q = S_γ \ F_γ. This is done for every possible choice of (γ − q) edges.

Step 5: In case (γ), all γ removed edges come from S_γ \ F_γ, so the matching M_γ survives and no verification is needed; return the result.
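A compact recursive sketch of Steps 1-5, reusing the same min-cost matching device (boolean numpy masks, illustrative names; not the paper's pseudocode):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linear_sum_assignment

INF = 1e9

def resilient(adj, S, gamma):
    """Is a perfect matching preserved under removal of any `gamma` edges of
    the removable set S?  adj: square boolean adjacency of the bipartite
    graph; S: boolean mask of removable (feedback) edges."""
    def min_S_matching(a):
        # perfect matching using the fewest edges of S (cost 1 on S, 0 else)
        cost = np.where(a, S.astype(float), INF)
        r, c = linear_sum_assignment(cost)
        return list(zip(r, c)) if cost[r, c].sum() < INF else None

    M = min_S_matching(adj)
    if M is None:
        return False                  # no perfect matching at all
    if gamma == 0:
        return True
    F = [e for e in M if S[e]]        # F = M intersected with S
    # case (0): all gamma removed edges come from F
    for drop in combinations(F, gamma):
        trial = adj.copy()
        for e in drop:
            trial[e] = False
        if min_S_matching(trial) is None:
            return False
    # cases (1)..(gamma-1): gamma-q edges from F, recurse with q and S \ F
    S_rest = S.copy()
    for e in F:
        S_rest[e] = False
    for q in range(1, gamma):
        for drop in combinations(F, gamma - q):
            trial = adj.copy()
            for e in drop:
                trial[e] = False
            if not resilient(trial, S_rest, q):
                return False
    # case (gamma): all removals lie outside F, so M itself survives
    return True
```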
For B_γ = B(Ā, B̄, C̄, K̄) and S_γ = E_K, the above steps solve Problem 2.2 for general γ. The proof of this claim and the computational complexity involved are given in Theorem 6.3.

Theorem 6.3. For B_γ = B(Ā, B̄, C̄, K̄) and S_γ = E_K, the above algorithm solves Problem 2.2 for general γ, and its time complexity is O(2^{γ−1} n^{γ+2.5}).

Proof. For B_γ = B(Ā, B̄, C̄, K̄) and S_γ = E_K, M_γ is a perfect matching in B(Ā, B̄, C̄, K̄) and F_γ ⊆ E_K. The removal of any γ edges from E_K can happen in (γ + 1) ways, case (0), …, case (γ), where in case (q) we consider the removal of (γ − q) edges from F_γ and of the remaining q edges from E_K \ F_γ. For the system to be resilient, it must have a perfect matching in all of these cases. Steps 1-5 check for the existence of a perfect matching in each of these cases and hence verify resilience of the system to the removal of any γ feedback edges correctly. Now we prove the complexity of the algorithm. Let the theorem statement be denoted by P(γ). We prove the theorem using strong induction. Base step: For γ = 1, P(1) is the complexity of Algorithm 6.1, which is O(n^{3.5}) (Theorem 6.1). Hence P(γ) is true for γ = 1. Induction step: Assume that the statement P(γ) is true for γ ∈ {1, 2, …, q}. We now prove that the statement P(γ) is true for γ = q + 1. Consider the removal of q + 1 links. This can happen in q + 2 possible cases, as shown in Steps 1-5.
In case (0), we check the existence of a perfect matching for the removal of every possible set of (q + 1) edges from F_{q+1}. As ℓ_{q+1} is O(n), there are (ℓ_{q+1} choose q+1) possible combinations, which is O(n^{q+1}), and the time complexity of checking the existence of a perfect matching for each combination is O(n^{2.5}). So the time complexity of case (0) is O(n^{q+3.5}).
In case (i), where i ∈ {1, …, q}, for the removal of every possible set of (q + 1 − i) edges from F_{q+1}, we apply the algorithm for γ = i with B_i as the modified bipartite graph obtained after removing the (q + 1 − i) edges from B_{q+1} and with the set S_i = S_{q+1} \ F_{q+1}. From the strong induction hypothesis, the time complexity of the algorithm for γ = i is O(2^{i−1} n^{i+2.5}). As ℓ_{q+1} is O(n), the number of possible combinations for removing (q + 1 − i) edges from F_{q+1} is O(n^{q+1−i}), and hence the total time complexity of case (i) is O(2^{i−1} n^{q+3.5}). Summing over all the cases, the total time complexity is O(2^q n^{q+3.5}) = O(2^{(q+1)−1} n^{(q+1)+2.5}), which proves P(q + 1).
Note that solving Problem 2.2 using our approach has complexity polynomial in n and exponential in γ, where γ ≪ n. The value of γ is typically small, as an attacker will attack the fewest number of links needed to disable the operation of the system, on account of the resource and infrastructure constraints involved in an attack.
Remark 1. For a fixed γ, the factor 2^{γ−1} does not vary with n. So the time complexity of the algorithm for general γ can be written as O(n^{γ+2.5}).
Solving Problem 2.2 using an exhaustive search-based technique involves checking the existence of a perfect matching in (n^2 choose γ) bipartite graphs. Thus the computational complexity of an exhaustive search-based technique is O(n^{2γ+2.5}) = O(n^γ · n^{γ+2.5}). Our algorithm is therefore computationally more efficient than a brute-force exhaustive search-based algorithm. When the number of links under attack is small, the computational advantage of our approach is substantial.
CONCLUSION
This paper considered the resilience of a large-scale closed-loop structured system when subjected to dysfunctional feedback connections. We discussed two problems in this paper: (i) given a structured system, input, output, and feedback matrices, verify whether the closed-loop system retains the arbitrary pole placement property even after the simultaneous failure of any subset of feedback links of cardinality at most γ (verification problem), and (ii) given a structured system and input and output matrices, design a sparsest feedback matrix such that the resulting closed-loop system retains the arbitrary pole placement property even after the simultaneous failure of any subset of feedback links of cardinality at most γ (design problem).
Firstly, we showed that the verification problem is NP-complete (Theorem 4.4). The complexity of the problem is obtained from a reduction of a known NP-complete problem, the blocker problem. We also showed that the verification problem is NP-complete even when the state digraph of the structured system is irreducible. Subsequently, we proposed an algorithm for solving Problem 2.2 for the class of irreducible structured systems. We first proposed polynomial-time algorithms to solve the resilience problem for the cases of one edge removal and two edge removals, i.e., γ = 1, 2, respectively (Algorithms 6.1, 6.2). The correctness and complexity of Algorithms 6.1 and 6.2 are also proved in the paper (Theorems 6.1, 6.2). Finally, we considered the general case, where γ is any positive integer. For the general case, we proposed a recursive algorithm which is pseudo-polynomial in the factor γ and proved its correctness (Theorem 6.3). We showed that our algorithm performs much better than an exhaustive search-based algorithm in general, and specifically for smaller values of γ.
We proved that the sparsest resilient feedback design problem is NP-hard (Theorem 4.11). The NP-hardness of the problem is obtained from a reduction of a known NP-complete problem, the minimum set multi-covering problem. We also proved that the design problem is inapproximable to factor (1 − o(1)) log n, where n denotes the system dimension (Theorem 4.12). We showed that the design problem is NP-hard for two special cases as well: when the state digraph of the structured system is irreducible, and when all the state nodes in the state digraph are spanned by disjoint cycles (structurally cyclic). We then analyzed structurally cyclic systems with a special feedback structure, the so-called back-edge feedback structure, for which the NP-hardness and the inapproximability results hold. A polynomial-time O(log n)-optimal approximation algorithm to solve the design problem for structurally cyclic systems with back-edge feedback structure is presented by reducing the design problem to the minimum set multi-covering problem (Theorem 5.4). Identifying other relevant feedback structures that admit computationally efficient solution approaches for the verification and design problems is part of future work. Improving the complexity of the pseudo-polynomial algorithm for solving the verification problem is also part of future work.
"year": 2017,
"sha1": "26d6d1048a1e388135fb1c0e37f364dcd7040d25",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b3d858bc6d21dc4a28db42673fafcca098904d76",
"s2fieldsofstudy": [
"Engineering",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
1090451 | pes2o/s2orc | v3-fos-license | Custom titanium sleeve for surgical treatment of mechanically assisted crevice corrosion in the well-fixed, noncontemporary stem in total hip arthroplasty
Adverse local tissue reaction associated with total hip replacement may occur when mechanically assisted crevice corrosion develops at metal-metal modular junctions in which at least one of the components is fabricated from cobalt-chromium alloy. Complete removal of components may be associated with significant morbidity; when components are well fixed and in acceptable position, it may be appropriate to consider modular rather than complete revision. We have diagnosed mechanically assisted crevice corrosion in total hip arthroplasty patients with noncontemporary but well-fixed femoral components and found that modular conversion to a ceramic femoral head, to remove a source of CoCr corrosion and fretting products, was only possible by having a custom titanium sleeve manufactured. Surgical implantation with a revision-style Biolox ceramic head (CeramTec, Plochingen, Germany) was then achieved.
Introduction
Serious adverse local tissue reactions (ALTRs) have become associated with metal-on-metal (MoM) joint failures, modular femoral neck components, and more recently, mechanically assisted crevice corrosion (MACC) at the taper interface of metal-on-polyethylene (MoP) total hip replacements [1][2][3]. Although some contemporary surgeries seem to have a higher prevalence of this issue [1], nonmodern implants may also fail in this manner, particularly after many years in service or after prior femoral head revision.
We present an approach to patients with MACC and noncontemporary total hip arthroplasties (THAs). When only "off-the-shelf" cobalt-chromium (Co-Cr) alloy femoral head options are available for revision of their well-fixed femoral components, we describe commissioning a custom titanium sleeve for use with an already manufactured Biolox ceramic femoral head (CeramTec, Plochingen, Germany) to remove the source of Co alloy at revision. In our experience, metal ion levels decrease postoperatively, and patients are satisfied and improved at follow-up. This is the first report, to our knowledge, that describes the use of a custom titanium sleeve for surgical treatment of MACC in conjunction with a well-fixed nonmodern stem in THA.
Surgical technique
Once revision THA for MACC is contemplated, the exact implant is researched, preferably by obtaining the implant identification stickers. The manufacturer is then contacted to confirm the availability of a revision femoral head other than one made from a Co alloy (eg, ceramic with a titanium revision sleeve, BioBall Adapter System, and ceramic head [Merete, Germany] or zirconium alloy metal substrate that transitions into a ceramic zirconium oxide outer surface [Oxinium; Smith & Nephew, Inc., Memphis, TN]).
When no other options are available, in the case of a well-fixed and well-positioned femoral component, we have requested that a custom titanium sleeve be manufactured that works in conjunction with an "off-the-shelf" revision Biolox ceramic femoral head (Biolox Option; CeramTec, Plochingen, Germany). We have requested that the femoral stem manufacturers make this product, as they have the exact specifications of the femoral trunnion. Table 1 summarizes the steps necessary to manufacture such a custom product. Note that the Federal Food, Drug, and Cosmetic Act was amended to allow a company that has made fewer than 5 such custom products in the last year to manufacture these for compassionate use without institutional review board approval. Our institutional review board, however, noted that the Food and Drug Administration recommends that physicians should follow as many of the patient protection procedures as possible (Table 2).
Of note, the surgeon should consider the exact femoral neck length that is being sought when manufacturing the custom sleeve. We choose the same length as, or a slightly longer length than, the implanted neck, if the leg length is acceptable, to ensure stability at the time of revision. Also, we have had two sleeves made per patient, in case one is inadvertently contaminated during surgery or a repeat revision is needed.
The technique itself is exactly that of the Biolox Option (CeramTec, Plochingen, Germany) [4]. The ceramic femoral head is placed on the head adapter, and pressure is applied until resistance is felt. The ceramic femoral head must be placed straight down on the sleeve. The system components are then assembled on the femoral stem; no washing or cleaning is necessary.
Case example
The patient is a 52-year-old man who reported newly onset groin and buttock pain of the left hip 18 years after total hip replacement surgery for osteonecrosis and 5 years postrevision total hip replacement for instability. A 36-mm, medium-plus Co-Cr head was used on the 6 taper of the patient's titanium fiber metal ingrowth stem (Zimmer, Inc., Warsaw, IN). Physical examination of the patient demonstrated no gait impairment, and abductor strength was found to be satisfactory. Medical history was positive for diabetes mellitus, type 1.
Radiographic examination showed no obvious evidence of osteolysis or loosening and that hip components were satisfactorily positioned. The calcar osteolysis associated with his prior surgery had not increased (Fig. 1). Laboratory tests conducted approximately 3 months after the onset of hip pain revealed serum Co (2.2 ppb; normal, <0.3 ppb) and Cr (2.4 ppb; normal, 0.0-0.9 ppb) ions in the blood. Complete blood count revealed a white blood cell count of 5600 cells/mm3 (normal, 4200-9900); C-reactive protein was found to be normal at 0.9 mg/L (normal, 0.0-8.0 mg/L), as was erythrocyte sedimentation rate at 8 mm/h (normal, 0-10 mm/h). Axial, coronal, and sagittal sequencing performed using metal artifact reduction sequence magnetic resonance imaging found no large volume of synovitis or effusion. There was a small collection of heterogeneous fluid detected in the periprosthetic soft tissues anteriorly measuring 1 × 1.8 cm, with no decompression into the trochanteric or iliopsoas bursa. No evidence was found of muscle edema, tear, or muscle detachment. Joint aspiration demonstrated very high Co and Cr levels of 322 and 598.8 ppb, respectively. Gram stain and culture showed no bacteria present. MACC of the trunnion was diagnosed. After a shared decision-making discussion, the patient opted for revision surgery of his well-fixed, well-positioned total hip replacement. After contacting Zimmer, Inc. (Warsaw, IN), it was determined that a non-Co alloy head was not available for the 6 taper of the stem, so a custom titanium sleeve for use with a Biolox Option ceramic head was commissioned (Fig. 2).
Table 1. General process for obtaining manufacture of a custom titanium sleeve to allow implantation with a revision-style Biolox ceramic head (CeramTec, Plochingen, Germany).
1. Confirm that a ceramic or Oxinium (Smith & Nephew, Inc., Memphis, TN) revision femoral head component is not currently available for the fixed femoral stem.
2. Request prosthesis trunnion specifications from the femoral stem manufacturer.
3. Confirm that the custom titanium sleeve will work with the implanted prosthesis (fixed femoral stem).
4. Obtain an assessment from a physician (orthopaedic surgeon) who is not biased, concurring with the plan to use the custom component.
5. Obtain compassionate use device documentation through the hospital or practice IRB if indicated.
6. Submit a device description including planned neck length and head size (Special Products Implant Request Form).
7. Review the manufacturing plan.
8. Forward a purchase order to the manufacturer for two devices (in case a second urgent surgery is needed or the first implant is contaminated).
9. Obtain patient consent after reviewing risks, benefits, goals, and alternatives.
10. Sign the informed risk document for surgery.
11. Proceed with surgery when the device is available, but have an appropriate backup plan for revision if the device is not suitable.
At surgery, MACC was confirmed with black discoloration and debris of the trunnion and inside of the femoral head (Fig. 3a). The trunnion was cleaned (Fig. 3b) after irrigation and debridement and polyethylene revision to treat third-body debris [5]. The sleeve was placed on the clean trunnion (Fig. 3c) until resistance was felt. The ceramic femoral head was then placed straight down on the sleeve and impacted as per the Zimmer protocol [4] (Fig. 3d). The hip was carefully reduced, and the hip capsule was reconstructed before closure. The patient recovered and is pain free with full function at 3 months postoperatively (Fig. 4). Serum Co is undetectable and serum Cr is 1.9 ppb.
Discussion
MACC has been linked to a number of adverse outcomes in THA, including mechanical deficiency of the implant and ALTRs or adverse reaction to metal debris, so-called "pseudotumor" formation, osteolysis, and muscle and tissue necrosis and deficiency [1][2][3]. Development of appropriate diagnosis and treatment guidelines for MACC is important, but until these are available, minimizing patient morbidity while removing or minimizing the intra-articular and serum Co and Cr metal ions seems prudent. This technique may be considered in select patients, where removal of a well-fixed, well-positioned stem is difficult or unwise, and the stem is noncontemporary without an option of an "off-the-shelf" non-Co alloy replacement head.
Our current definition of MACC agrees with that recently reported by the Rush University group [3]. They found that patients who present with new onset of postoperative pain, in whom the implants are well fixed and infection has been ruled out, should be evaluated with serum Co and Cr levels. Because previous work from their institution suggested that the serum Co level should be <1 ppb in a well-functioning THA with a MoP bearing [6], they used a Co level of >1 ppb as a diagnostic cutoff. We have also found that intra-articular Co and Cr levels, although not mandatory, can be helpful in confirming MACC. We have found that they are commonly elevated to 50 to 100 times or more the serum levels.
The Co-to-Cr ratio may also be helpful but is not pathognomonic. Although Plummer et al. [3] found that the abnormal serum Co level was significantly elevated above the serum Cr level (by a mean 11-to-2 ppb ratio) in MACC, less dramatic but asymmetrical ratios have been described with MoM total hip replacements [7,8]. On the other hand, the patient diagnosed with MACC presented here had a Co:Cr ratio of approximately 1, although his implant was a MoP articulation. Furthermore, Fehring et al. [9] have found that the Co:Cr ratio is not a predictive biomarker for ALTRs (they looked at MoM total hip replacements only).
The logic of using only a ceramic head with a titanium sleeve in revision settings is based on manufacturer recommendations [10], and on the fact that deformed areas of the previously used taper may create stress risers that can lead to ceramic crack initiation and propagation [11]. Although placement of a ceramic head directly on an undamaged but previously used trunnion has been successful at short-term follow-up in one study of 61 hips [11], case reports of ceramic head fracture in this situation have been published [12,13]. Conversely, we could find no reports of a fractured third-generation ceramic head used with a revision sleeve, and the fracture rate for Biolox femoral heads (CeramTec, Plochingen, Germany) manufactured after 1994 has been reported to be 0.004% [10].
A thorough discussion with the patient before accepting the risks associated with the use of a custom implant should be undertaken. The patient should understand the risk of ceramic fracture and the ramifications of such a fracture. In cases where the well-fixed stem is made from a Co alloy, MACC may theoretically continue despite removing the Co alloy femoral head. On the other hand, the technique of revising the femoral head only, even for Co-Cr stems, is successful at significantly reducing serum Co and Cr levels at an average of 2.7 years [3]. Next, there are significant challenges for the surgeon in pursuing this approach including painstaking preoperative workup and the need for multiple surgical plans if the custom implant does not fit properly or instability is noted after reconstruction.
Summary
ALTR associated with MACC occurs at metal-metal modular junctions in which at least one of the components is fabricated from Co-Cr alloy. Complete removal of THA components may be associated with significant morbidity, and it may be appropriate to consider modular rather than complete revision in select patients.
Fig. 4. Anteroposterior pelvis radiograph of the patient after revision using a custom titanium sleeve for conversion to a ceramic femoral head. | 2018-04-03T01:38:41.621Z | 2015-10-31T00:00:00.000 | {
"year": 2015,
"sha1": "62563b1bfb3f5f0bf76b0e84cefbcc1d111081d2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.artd.2015.10.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62563b1bfb3f5f0bf76b0e84cefbcc1d111081d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
218595233 | pes2o/s2orc | v3-fos-license | Exploring Interpretability in Event Extraction: Multitask Learning of a Neural Event Classifier and an Explanation Decoder
We propose an interpretable approach for event extraction that mitigates the tension between generalization and interpretability by jointly training for the two goals. Our approach uses an encoder-decoder architecture, which jointly trains a classifier for event extraction, and a rule decoder that generates syntactico-semantic rules that explain the decisions of the event classifier. We evaluate the proposed approach on three biomedical events and show that the decoder generates interpretable rules that serve as accurate explanations for the event classifier’s decisions, and, importantly, that the joint training generally improves the performance of the event classifier. Lastly, we show that our approach can be used for semi-supervised learning, and that its performance improves when trained on automatically-labeled data generated by a rule-based system.
Introduction
Interpretability is a key requirement for machine learning (ML) in many domains, e.g., legal, medical, finance. In the words of Ribeiro et al. (2016), "if users do not trust the model or a prediction, they will not use it." However, there is a tension between generalization and interpretability in deep learning, as interpretable models are often generated by "distilling" a model with good generalization, e.g., a deep learning one that relies on distributed representations, into models that are more interpretable but lose some generalization, e.g., linear models or decision trees (Craven and Shavlik, 1996; Ribeiro et al., 2016; Frosst and Hinton, 2017). Here, we argue that both generalization and interpretability are equally important. For example, in the medical space, a patient will likely reject a treatment recommended by an algorithm without an explanation. Closer to natural language processing (NLP), a statistical information extraction method that converts free text in a specific domain to structured knowledge should also provide human-understandable explanations of its extractions. This allows the subject matter expert to quality-check such output without a deep knowledge of the underlying machinery, which is a necessity in successful inter-disciplinary NLP collaborations.
In this work, we propose an interpretable approach for event extraction (EE) that mitigates the tension between generalization and interpretability through multitask learning (MTL). Our approach uses an attention-based encoder to encode the input text and given entities of interest (e.g., proteins in the biomedical domain), and a decoder that jointly trains two tasks. The first task is event classification, which identifies which event applies for a given entity (e.g., phosphorylation). The second task decodes a rule in the Odin language (Valenzuela-Escárcega et al., 2018), which explains the prediction of the classifier in a format that can be read and understood by human end users. An example of such a rule is shown in Figure 1. Importantly, both tasks share the same encoder, and are trained using a joint objective function.
Supporting earlier findings, we observe that joint training leads to performance improvements both within and across tasks. In our unique pairing of tasks, however, we are able to shed light on an opaque process by generating rules that provide an interpretable distillation of an event classifier's decisions.
The major contributions of this paper are: (1) A simple neural architecture for EE that jointly learns to extract events and explain its decisions. While here we investigate event extraction, we believe this approach is applicable to many other information extraction tasks.
(2) We extend a subset of the BioNLP 2013 GENIA event extraction (Kim et al., 2013) dataset with a set of rules designed to extract and explain three of the GENIA biomedical events: protein phosphorylation, localization, and gene expression. The result is a parallel dataset that aligns some of the GENIA event labels with rules that extract them. We release this dataset 1 for reproducibility.
Figure 1: An example of an event extraction rule in the Odin language that extracts phosphorylation events driven by a nominal trigger ("phosphorylation"). The event's sole argument or theme (the phosphorylated protein) is identified through both semantic constraints (its type must be Protein), and syntactic ones (it must be attached to the trigger through a certain syntactic dependency pattern: a prep_of followed by an optional (?) appositive (appos), followed by up to two ({,2}) other dependencies, e.g., nn). This rule would extract a Phosphorylation(PKC) event from the text ". . . which includes the phosphorylation of PKC by. . . ".
(3) We train and evaluate our approach on this dataset and demonstrate that: (a) our approach achieves reasonable event classification performance, despite the fact that it uses no syntactic or part-of-speech information; (b) it decodes explanations with high accuracy, e.g., with a BLEU overlap score between the generated rules and hand-written rules of up to 93%, and (c) most importantly, we show that MTL improves performance over the individual event classification task. To our knowledge, this is the first work that demonstrates that interpretability improves classification performance.
(4) Our approach can be easily extended to a semi-supervised setting, where we use the rules associated with the events of interest to extract additional training data with "silver" labels, i.e., where we use the rule predictions as training labels for the classifier. We show that despite the inherent noise in this process, the performance of our approach improves considerably in this semi-supervised setting.
Related Work
Interpretability in machine learning is an area of active research involving a multitude of approaches.
In this work, we focus on post-hoc interpretations that explain a model's output (Lipton, 2016). A common theme of prior research in interpretable machine learning is producing a definite decision process (e.g., a decision tree) that preserves generalization. Craven and Shavlik (1996) explored converting a trained network to a decision tree. Similarly, Frosst and Hinton (2017) trained soft binary decision trees using the predictions of a neural model. These decision trees are trained with mini-batch gradient descent, using a trained network's outputs as labels. In the same vein, Che et al. (2016) proposed a mimic learning framework, which trains gradient boosting trees to mimic the soft predictions of the original neural network. One unaddressed challenge with this direction, however, is that a decision tree's interpretability tends to decay as the tree increases in size.
Rather than converting a statistical model into an interpretable model such as a decision tree, other efforts have focused on jointly learning a statistical model with explanations for the model's output. Our work falls in this camp as well. Hendricks et al. (2016) proposed a system for image classification that generates a natural language (NL) explanation to accompany each decision. Similarly, Blunsom et al. (2018) learned NL explanations for the natural language inference (NLI) task, and Ye et al. (2018) applied this idea to crime case prediction. Inspired by such approaches, here we learn to generate declarative information extraction rules that serve to explain the predictions of an event classifier.
Approach
Our approach jointly addresses classification and interpretability through an encoder-decoder architecture, where the decoder uses MTL for event extraction (Task 1) and rule generation (Task 2). In this paper, we apply this architecture to the extraction of unary events in the biomedical domain. The two tasks are framed as follows: Task 1 (T1): Given a sentence and an entity in focus, it must identify which event applies to the entity, and what is its trigger, i.e., the verbal or nominal predicate that drives the lexicalization of the event (e.g., "phosphorylation").
Task 2 (T2): Decode a rule in the Odin language that explains the prediction of the event classifier. That is, the rule should identify the lexical constraints on the event trigger, e.g., its lemma, the semantic type expected of the argument, e.g., that it must be a Protein, and the syntactic pattern that connects the event trigger with the argument (Figure 1 shows a complete example for such a rule).
Consider this text as a walkthrough example: ". . . which includes the phosphorylation of PKC by . . . ", where PKC is the entity that is provided as input in this task. This follows the settings of the standard event extraction task of BioNLP 2013 (Kim et al., 2013). For Task 1, we train a series of binary event classifiers (one for each event type), which predict the position of the event's lexical predicate (i.e., trigger) that modifies each given entity (phosphorylation here). Drawing upon the state information from Task 1, we prime our decoder in Task 2 using a contextualized representation of the predicted event trigger to generate an information extraction rule in the Odin language that captures the same event (i.e., entity-predicate structure) identified in Task 1 (see Figure 1).
Task 1: Event Classifier
We train a binary event classifier for each event type, which must identify if the corresponding event type applies to the entity under consideration, and, if so, which token in the input sentence is the event's trigger.
The classifier uses an encoder with entity attention to encode its input. For each sentence with words w_1, . . . , w_n and a given entity z, we associate each word i with a representation x_i = [e(w_i); e(p_i); char(w_i)] that concatenates three embeddings: e(w_i) is the word embedding of token i, e(p_i) embeds the word's relative position p_i to the entity under consideration, and char(w_i) is the output of a bidirectional character-level LSTM (charLSTM) applied over w_i. e(w_i) is initialized with the pretrained embeddings of Hahn-Powell et al. (2016), built using the word2vec skip-gram model (Mikolov et al., 2013) trained on the full text of over 1 million biomedical papers taken from the PubMed Central Open Access Subset, 2 while e(p_i) and char(w_i) are initialized randomly.
The sequence of x_i's serves as input to a sentence-level bidirectional LSTM (biLSTM), whose hidden states h_i serve as input to the attention layer below.
The entity-attention layer computes a sequence of context vectors (the matrix C in the equations below), which weighs the biLSTM's hidden states by their importance to the entity z. Our attention mechanism is inspired by the transformer network (Vaswani et al., 2017). Similarly, we compute the attention function on a set of keys and values that are packed together into matrices K and V. The difference is that our approach is entity-focused in its query, so we only compute the attention on a single query vector q. Further, unlike the conventional encoder in a transformer network, we don't produce a single vector, but a sequence of vectors (the matrix C).
The attention weights are computed as a = softmax(s), where s contains the scaled dot-product scores between the query q and the keys K, and the context vectors C are obtained by weighting the values V with a. We concatenate each hidden state h_i with its context vector c_i, feed the concatenated vector to two feedforward layers with a softmax function, and use the output to predict if there is a trigger at this position. We calculate the classifier's loss using the binary log loss function.
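As a concrete illustration of this entity-focused layer, the following sketch computes a single-query attention distribution and the resulting sequence of context vectors. Using the hidden states H directly as keys and values (K = V = H) and scaled dot-product scoring are simplifying assumptions of this sketch; the paper does not spell out its projections.

import numpy as np

def entity_attention(H, q):
    # H: biLSTM hidden states, shape (n, d); q: entity query vector, shape (d,)
    d = H.shape[1]
    s = H @ q / np.sqrt(d)    # one attention score per token
    a = np.exp(s - s.max())
    a = a / a.sum()           # softmax attention weights
    C = a[:, None] * H        # a sequence of context vectors, not a single summary
    return C, a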
Task 2: Rule Decoder
Inspired by neural machine translation (Luong et al., 2015), we use another LSTM with attention as the decoder. To center rule decoding around the trigger, which must be generated first, we first feed the trigger vector from the encoder's context as the initial state of the decoder. Then, in each timestep t, we generate the attention context vector C^D_t using the current hidden state of the decoder, h^D_t, with scores s_t = h^D_t W_A (C^E)^T and attention weights

a_t = softmax(s_t), (8)

where W_A is a learned matrix of dimensions 100 × 200, and C^E are the context vectors from the previous entity-focused attention layer; C^D_t is then the a_t-weighted sum of the vectors in C^E. Note that the learned matrix W_A here is distinct from the matrices learned in the previous entity-attention layer. We feed this C^D_t vector to a single feedforward layer that is coupled with a softmax function. We predict the next word from a vocabulary extracted from the existing Odin rules used in our experiments (see the next section for details). During training, we calculate the decoder's loss using the multiclass cross-entropy loss function.
Note that the losses corresponding to these two tasks are jointly optimized. Formally, the loss function is defined as

loss = loss_c + loss_d, with
loss_c = −(t_c log y + (1 − t_c) log(1 − y)) and
loss_d = −Σ_i log p_i,

where loss_c is the cross-entropy loss of the event classifier, which relies on t_c, the target label (i.e., 1 for positive examples, 0 for negative), and y, the likelihood predicted by the model; loss_d is the cross-entropy loss of the rule decoder, where i iterates over the tokens in the rule, and p_i is the decoder's probability of the correct token at position i.
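A minimal PyTorch-style sketch of this joint objective follows; the tensor shapes and the masking of examples without a gold rule are assumptions about the implementation, which the paper does not detail.

import torch
import torch.nn.functional as F

def joint_loss(y, t_c, decoder_logits, rule_tokens, has_rule):
    # y: predicted trigger likelihoods, shape (batch,); t_c: 0/1 float targets
    loss_c = F.binary_cross_entropy(y, t_c)
    # decoder_logits: (batch, rule_len, vocab); rule_tokens: (batch, rule_len)
    per_token = F.cross_entropy(
        decoder_logits.flatten(0, 1), rule_tokens.flatten(), reduction="none"
    ).view_as(rule_tokens)
    loss_d = per_token.mean(dim=1)
    # For data points without a gold rule, the decoder loss is set to 0 (T1-only).
    loss_d = (loss_d * has_rule.float()).mean()
    return loss_c + loss_d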
Dataset
We train and evaluate on three events from the BioNLP 2013 GENIA Events extraction shared task (Kim et al., 2013): Phosphorylation (P), Localization (L), and Gene Expression (GE). To facilitate comparison with previous work, we use the standard training, development, and test partitions from the original dataset. To generate data for the rule decoder, we extend this dataset with rules from the rule-based system of Valenzuela-Escárcega et al. (2018), which reported high-precision results for Phosphorylation (92%). We manually added new rules using existing syntactic templates that cover common syntactic forms of subject-verb-object patterns, in order to cover more events. Further, because the system of Valenzuela-Escárcega et al. (2018) did not cover L and GE events, we extended it with rules for these two events. All in all, we used 32, 20, and 21 rules for P, L, and GE, respectively. Most of these rules rely on syntactic structures denoted in terms of dependency paths to extract event arguments (see Figure 1 for an example of such a rule). From these rules, we obtained a token-level vocabulary for the rule decoder. This poses an additional challenge for our decoder, which must now decode from raw text both the semantics necessary for these events and the syntactic patterns needed to match event arguments. Further, note that these rules do not have perfect recall, i.e., there are events in the data that are not covered by rules. In other words, the two tasks in our MTL framework are not perfectly aligned: there are data points which are part of the training examples of T1, but not of T2 (for those training examples, the decoder's loss is set to 0).
In addition to using these rules for explainability, we used the rule-based system to generate additional "silver" training data for these three events, by using its extractions from a collection of PubMed publications. From these papers, we extracted an additional 6592, 6321, and 2056 positive training examples for P, L, and GE, respectively. To avoid biasing the classifier toward the positive classes, we also generated 3467, 3532, and 2876 negative training examples for P, L, and GE by extracting entities assigned to other event types in the BioNLP data.
Evaluation Metrics
We used precision, recall, and F1 scores to measure the performance of the event extractor (classifier), and used the BLEU score to measure the quality of generated rules, i.e., how close they are to the corresponding gold rules that extracted the same output. Note that the BLEU score provides an incomplete evaluation of rule quality. The more complete solution would be to evaluate these rules by executing them over free text and verifying the quality of the extracted output. However, this is not a trivial process, as some of the decoded rules break the Odin syntax and are only executable after a manual cleanup process. We leave this evaluation for future work.
Table 1: Results for the three events in the BioNLP 2013 test partition. T1 and T2 indicate the two tasks in our MTL approach, i.e., the event classifier and the rule decoder, respectively. Silver indicates that the configuration used the silver data created by the rule-based system (see §4.1). BioNLP best and median indicate the best/median results during the 2013 shared task. We do not include T1 + T2 results because in this configuration we observed that there is not sufficient data to train the decoder.
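For reference, a token-level BLEU between a decoded rule and its gold counterpart can be sketched as follows; the whitespace tokenization and the smoothing choice are assumptions of this sketch, not the paper's exact setup.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def rule_bleu(gold_rule, decoded_rule):
    # Compare a decoded Odin rule against the matching hand-written rule.
    reference = [gold_rule.split()]
    hypothesis = decoded_rule.split()
    smooth = SmoothingFunction().method1  # avoids zero scores on short rules
    return sentence_bleu(reference, hypothesis, smoothing_function=smooth)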
Baseline
We compared our proposed methods with the rule-based baseline proposed by Valenzuela-Escárcega et al. (2018). They used their rule-based system to extract Phosphorylation events in the BioNLP 2013 Genia Events (GE) task data using 42 manually written rules (which we extended for our experiments; see Section 4.1). On the development partition, they reported a precision of 92.9%, a recall of 56.0%, and an F1 score of 69.9%. We also evaluated their system on the formal test partition and obtained a precision of 84.2%, a recall of 43.8%, and an F1 score of 57.6%. As mentioned in Section 4.1, we adjusted the grammar in this system to cover gene expression and localization events. The complete results for this system are listed in Table 1 as "Rule baseline."
Results and Discussion
Table 1 analyzes the performance of our approach for the three events, compared against the rule-based system described in §4.1. These results highlight several important observations: (1) T1 by itself performs generally worse than the rule baseline and the median BioNLP result. This is caused by: (a) the small size of this dataset, e.g., there are only 117 training examples for P; and (b) the fact that our approach uses no part-of-speech (POS) or syntactic information, which have been shown to be important for this BioNLP task (Kim et al., 2013). However, adding the silver data improves T1 performance considerably, e.g., 35 F1 points for Localization. This demonstrates that our approach provides a simple but effective platform for semi-supervised learning.
(2) Most importantly, jointly training for classification and explainability helps the classification task (T1) itself. As shown in the table, combining T1 and T2 generally improves F1 scores considerably, e.g., 4 F1 points for Phosphorylation and 10 for Localization. To our knowledge, this is the first NLP work to demonstrate that aiming for interpretability also helps the main task addressed. All in all, we approach the median performance in the shared task, a respectable result considering that our approach uses only raw text as input, whereas all participants in this shared task used some form of syntactic representation. Importantly, our approach considerably outperforms the rule-based method of Valenzuela-Escárcega et al. (2018), which served as the starting point of this work (see Section 4.3).
(3) The only negative results in our experiments are the GE results in the test partition, where T1 outperforms both T1 + Silver and T1 + Silver + T2. We hypothesize that this is caused by the larger training data for this event, e.g., there are 6 times more training samples for GE than P, which allows the T1 classifier to learn by itself, without the scaffolding offered by MTL and the additional (noisy) data in the silver dataset. This suggests that our approach is best suited for EE scenarios with minimal training data, an important subset of information extraction tasks.
But are the decoded rules actually interpretable? To answer this, we compared in Table 2 the decoded rules against the hand-written rules that matched in the BioNLP development partition.
Table 3: Examples of mistakes in the decoded rules. The first column shows hand-written rules, while the second shows the rules decoded by our approach from sentences where the corresponding hand-written rules matched. We highlight in the hand-written rules the tokens that were missed during decoding (false negatives) in green, and in the decoded rules we highlight the spurious tokens (false positives) in red. The first row lists a partial mistake, which does not affect the interpretability of the decoded rule, since it only misses one token that can be inferred by the human experts from context. The second row lists a partial mistake which impacts the semantics of the rule. For example, the decoder missed that the path between the trigger and the theme argument starts with an optional prep_of and appos. This rule was marked as partially correct because some simple syntactic patterns, e.g., nn, can still be correctly matched by the decoded rule. The last row lists a larger decoding error that was marked as completely incorrect by the annotator. For example, in the last decoded rule, the decoder generated an incorrect cause argument, which does not exist in the data, as well as an incorrect syntactic pattern for the theme argument, i.e., the protein being phosphorylated.
That is, we performed this analysis on the subset of the development partition, where each data point is accompanied by a matching hand-written rule. This reduced this dataset to approximately 60% of the total BioNLP development set. In particular, we analyzed 108, 82, and 296 event instances with matching rules for P, L, and GE events, respectively. The table shows that our rules have high BLEU overlap with hand-written rules, e.g., 93% for P, and, by and large, they exactly match them. We believe this is an exciting result, as it shows that our approach is able to decode directly from the raw text the declarative semantics necessary for the task, as well as the syntactic patterns that match the event arguments.
Lastly, Table 3 shows examples of typical decoding errors, ranging from partial mistakes that do not affect the interpretability of rules to complete decoding mistakes. As we mentioned above, we cannot guarantee the validity of the generated rules with our current approach. Table 3 shows that this indeed happens in our output. For example, the decoder generates a binary operator such as "!=" without the left operand (first row in the table).
Conclusions
We introduced an interpretable approach for event extraction that jointly trains an event classifier with a component that translates the classifier's decisions into interpretable extraction rules. We implemented this approach using an encoder-decoder architecture, where the decoder jointly optimizes the decoding of extraction rules and event classification. We evaluated the proposed approach on three biomedical events and demonstrated that the decoder generates interpretable rules, and that the joint training improves the performance of the event classifier. We also showed that the performance of our approach further improves when trained on automatically-labeled data generated by a rule-based system.
In the longer term, we envision a decoder with constraints, which enforces that the generated rules follow correct Odin syntax. We plan to include constraints as part of decoding to aid in rule synthesis. For example, in the Odin language, brackets must be paired to produce syntactically valid rules. This can be enforced with different strategies in the decoder, ranging from constrained greedy decoding to globally optimal solutions that could be implemented with integer linear programming. We suspect that including such validity constraints will further improve the quality of the decoded rules.
Further, we plan to use this decoder in an iterative, semi-supervised learning scenario akin to co-training (Blum and Mitchell, 1998). That is, the newly decoded, executable rules can be applied over large, unannotated texts to generate new training examples for the event classifier. | 2020-05-11T16:50:35.700Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "0ad11c577ea92f32e58f935cbb85a2a6457bca82",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/2020.acl-srw.23.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "5104a1a21cda11a7b5edd58bfb7b76ccc60d1c58",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
254964959 | pes2o/s2orc | v3-fos-license | Dynamic measuring of force-displacement-characteristics of shockmounts
Shockmounts are widely used to isolate sensitive equipment from vibrations and mechanical shock. Despite the highly dynamic nature of shock events, the force-displacement-characteristics of shockmounts provided by manufacturers are obtained from static measurements. Therefore, this paper presents a dynamic mechanical model of a setup for dynamic measurements of force-displacement-characteristics. The model is based on acceleration measurement of an inert loading mass, which displaces the shockmount when the arrangement is excited by means of a shock test machine. The influence of the shockmount's mass in the measuring setup is considered, as well as special needs for handling measurements under shear or roll loading. A method for allocating the measured force data on the displacement axis is developed. An equivalent of a hysteresis loop in a decaying force-displacement diagram is proposed. Based on exemplary measurements, error calculation and statistical analysis demonstrate the suitability of the proposed method for obtaining dynamic FDC.
Introduction
Shock isolators, in naval applications also called shockmounts (SM), are used to mount sensitive equipment of a system, for example electronic equipment on a naval vessel, in order to absorb the shock energy that is applied when the system's structure is exposed to mechanical shock.
In order to assess and improve ship safety, many studies have been conducted in different fields. In a recent overview [1], Gargano and Mouritz summarize numerous studies from various authors concerning the explosive blast resistance of different materials. Other authors focus on damage effects on the hull of the vessel. Wang et al. [2] list relevant experimental and simulation studies in this field. Little research has been done with respect to simulation of the shock wave propagation through the naval ship foundation; one example is the work of Mannacio et al. [3].
On the other hand, SM themselves are the subject of investigation in several fields. Kluczyk et al. [4] review relevant studies concerning material-dependent SM parameters and their change under environmental conditions. Grzadziela et al. investigated damping characteristics and the shock transmittance of SM [5]. New SM designs have been proposed, for example by Zhang [6] and Akram [7], and existing designs have been improved, as Prost and Abdelnour report [8].
However, in the field of measuring the force-displacement-characteristics (FDC) of SM, no recent research can be found in the literature. It is rather the case that national standards reflect proven practice. SM in the naval context are intended for use in dynamic applications with short-time changes of displacement, velocity and acceleration. Commonly, in the civil context, the reduction of vibration is the main purpose of SM. In military marine applications, however, the consideration of shock effects takes a prominent place in assessing ship safety. Exposed to mechanical shocks, the motion of SM typically changes within a few milliseconds, absorbing shock energy and protecting crew and equipment.
However, FDC are typically measured and reported by manufacturers as virtually static 1 characteristics, whereas dynamically generated FDC are not available. For naval architects evaluating ship safety applications, this could be problematic, since ship-threatening events like the underwater explosion of a sea mine are of a highly dynamic nature.
The relevant DIN standards refer to procedure 6.5.4.2 according to Ref. [9] for determining the characteristics of elastomer SM under shock loads, and to Chapter 7.6.3.1.3 of [10] for wire rope SM. Displacement of the SM specimen is driven by a spring measuring machine, and the restoring force is measured. Three displacement cycles are run; force and displacement of the third cycle are recorded at a rate of 1 min per cycle. The FDC is then calculated as the arithmetic mean of the force values of the recorded force-displacement hysteresis [11,12].
Since the FDC available from SM manufacturers are static, these characteristics do not capture the dynamic behavior of SM exposed to shock situations with high rates of change.
Manufacturers additionally often specify a dynamic stiffness, stiffening factors, the storable spring energy and the maximum shock load and displacement in order to take the dynamic behavior of shock absorbers into account when designing systems. For elastomer SM, the dynamic stiffness corresponds to the average stiffness under sinusoidal excitation recorded after at least ten cycles [10]. For wire rope SM, dynamic stiffness can be considered proportional to the static spring stiffness, with the proportionality factor corresponding to the amplitude-and velocity-dependent stiffening factor [13]. For examples see Refs. [13][14][15].
Another and more comprehensive approach of characterizing the dynamic behavior of SM is given by the concept of force surface. This approach is elaborated by a NATO working group and described in the NATO-recommendation ANEP 63. The force surface considers and evaluates force, velocity and displacement data of multiple measurements in one data set, therefore it takes into account the velocity dependence of the dynamic behavior of SM. Therefore, dynamic FDC of a SM is a subset of force surface data [16].
ANEP 63 suggests to excite the basepoint of SM under test by means of a shock test machine. It also gives advice on how to measure the restoring force: "mass 2 factoring the absolute acceleration as measured on the mass gives the force (F) across the mount" [16, p. 8].
No comment is given on dealing with hysteresis in force-deflection-measurements.
The work presented here is based on the principles of excitation and force calculation proposed in ANEP 63. Therefore, the scope of this study is the dynamic modelling and implementation of an inertia-based approach for measuring dynamic FDC.
The model implements a vertical shock testbench for the purpose of basepoint excitation of SM, and a loading mass which loads and unloads the SM specimen during and after the applied shock.
Further scientific contributions of this study are: -In naval applications the mass of SM is small compared to the mass of the mounted devices and therefore negligible. However, when the loading mass of the FDC measurement setup is only up to one order of magnitude greater than the SM mass, both masses have to be considered when calculating the restoring force from measured acceleration data. This is taken into account in the dynamic model. -Also, a method for allocating the measured displacement data on the displacement axis is developed, which, especially for wire rope shockmounts, is not intuitive. -Another issue when calculating the force is dealt with: due to structural non-symmetry, dynamic measurements with load in shear or roll direction incorporate two SM specimens simultaneously, which do not necessarily have identical dynamic properties. Data from multiple measurements has to be evaluated. -In this paper an equivalent of a hysteresis loop in a decaying force-displacement diagram is proposed, and FDC and damping are calculated.
After statistical analysis and error considerations on displacement and force calculations, dynamic FDC for typical SM are calculated from some exemplary measurements and are presented as first results. They show the suitability of the proposed method for generating dynamic FDC curves.
SM types, properties and definitions
In this study two main types of SM are regarded: Elastomer SM (ESM) and wire rope SM (WSM). Fig. 1 shows both types. Properties of considered SM can be taken from Table 1. Yield straps and other commercially available SM types are not in the scope of this study.
Naming convention in this paper: For short, the ESM are named ESM32, ESM40 and ESM55, with the numbers referring to the Shore hardness of the SM, sorted with respect to the stiffness in ascending order. WSM are named WSM175, WSM135, WSM125, where the numbers correspond to the geometrical dimension in width of the unloaded SM, sorted with respect to the stiffness in ascending order.
1 Virtually static refers to the fact that when measuring the characteristic curve, the rate of change is very much smaller than the rate of change at dynamic shock; it is almost static. For convenience, the word virtually will be left out further on.
2 A known inert mass, attached to the SM, opposite to the basepoint.
These SM were chosen due to their natural frequencies, which are clearly below the frequencies that are excited by a ship's powertrain. Furthermore, the maximum displacement is in the range of a realistic displacement in an operational environment.
The naming of the load directions of the SM is defined in Fig. 2. ESM are rotationally symmetrical; therefore only one direction orthogonal to compression and tension is defined.
Dynamic mechanical model
In order to be able to correctly record and evaluate the relevant quantities, the dynamic mechanical model and the coordinates are shown in Fig. 3. In the system, the SM is modeled with the advanced Kelvin-Voigt model [13,14], a nonlinear spring-viscous damper combination, as introduced by Clasen and Sachau [17]. The SM is loaded with an inert mass. The mass of the SM is distributed equally between the loading mass and the supporting structure. The coordinates and other quantities in Fig. 3 are listed and explained in Table 2.
Coordinates and aligning displacement axis
For correct interpretation of the ± and ∓ operators in Fig. 3 and in the following equations, read the upper operator as valid in compression mode and the lower operator as valid in tension, shear and roll mode.
The quantities ΔA and ΔB are measured by means of linear potentiometers, each attached at one of two diametrically opposite sides of the loading mass. In order to eliminate possibly existing tilting oscillations in the measured signals, both signals are averaged according to

ΔAB(t) = (ΔA(t) + ΔB(t)) / 2. (1)

In tension or compression mode, the reference length d_0 is the installation length of the SM according to its specification. In the case of a shear or roll configuration, d_0 represents the distance from the surrounding structure to an arbitrary reference point on the load side of the undeflected SM.
The displacement d then includes any deviation of the SM length d_0 + d from d_0; contributions to this deviation are listed in Table 2. The averaged sensor signal relates to the displacement via

d(t) = ΔAB(t) − (d_0 + d_off). (2)

Quantity d_off is a time-invariant offset that takes the different implementation situations of displacement sensor and SM into account. It is not quantified and will be eliminated later.
Displacement d(t) for WSM
Structural adhesion between the fibers of WSM at low displacement itself contributes to the displacement and prevents a clear distinction between displacement due to deviation in building length (d_bd), due to static load (d_stat) and due to adhesion (d_adh). Thus, only the actual SM length (d_0 + d|_{t0}) can be and is determined by a separate measurement prior to every dynamic measurement, which yields

d|_{t0} = (d_0 + d|_{t0}) − d_0. (3)

Time t_0 refers to the state before the drop table is triggered to drop.
Since d_0 is available from the manufacturer, the displacement prior to the drop is known for each dynamic measurement. Also, (2) is evaluated at t = t_0, yielding

d|_{t0} = ΔAB|_{t0} − (d_0 + d_off), (4)

where ΔAB|_{t0} is the DC value in the dynamic measurement data. Equating (3) with (4), rearranging for (d_0 + d_off) and putting this into (2) leads to the result

d(t) = (d_0 + d|_{t0}) − d_0 + ΔAB(t) − ΔAB|_{t0}. (5)

The first and second summand in (5) reduce to d|_{t0}, with

d|_{t0} = d_bd + d_stat + d_adh, (6)

whereas the third and fourth summand in (5) form the dynamic part d_dyn in the measured data.
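A minimal numerical sketch of Eqs. (1), (3) and (5) is given below; the argument names and the number of pre-drop samples used for the DC estimate are assumptions of this sketch, not values from the paper.

import numpy as np

def displacement_wsm(dA, dB, d0, length_t0, n_pre):
    # dA, dB: potentiometer signals; length_t0: separately measured (d0 + d|t0)
    dAB = 0.5 * (dA + dB)         # Eq. (1): average out tilting oscillations
    dAB_t0 = dAB[:n_pre].mean()   # DC value before the table is dropped
    d_t0 = length_t0 - d0         # static part of the displacement, Eq. (3)
    return d_t0 + (dAB - dAB_t0)  # Eq. (5): static part plus dynamic part d_dyn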
Displacement d(t) for ESM
Since no structural adhesion is present at ESM, and because no significant deviation in building length could be observed at the specimens, the quantity (d_0 + d|_{t0}) has not been determined in ESM setups. Instead, the static displacement is calculated for each individual ESM with Hooke's law, assuming a linear stiffness k_stat at small displacement: 3

d_stat = (m_P + 4 m_R + f m_S) g / k_stat. (7)

The portion 4 m_R g in this equation refers to the gravitational force of the four guiding rolls, which are attached to the loading mass in this setup. When inserting (6) into (5), with d_bd = d_adh = 0 for ESM, the displacement for ESM can be determined to

d(t) = d_stat + ΔAB(t) − ΔAB|_{t0}. (8)

Equations (5) and (8) are used to calculate the displacement from measured and known data.
Restoring force
The restoring force can be retrieved from the equilibrium condition (9), which is derived from the free body diagram at the loading mass in the middle of Fig. 3, where 4 F_V corresponds to the bearing forces of the four guiding rolls. In order to get the vertical bearing force F_V, two equilibrium conditions (10), (11) from the free body diagram (right part of Fig. 3) of the guiding roll are considered, together with the kinematic equation z̈_1 = r_R φ̈ (12) and the moment of inertia of the guiding roll, J_R = (1/2) m_R r_R^2 (13). Equations (10)-(13) yield

F_V = m_R z̈_1 + (1/2) m_R z̈_1 + m_R g, (14)

where the term m_R z̈_1 is due to translation and (1/2) m_R z̈_1 comes from rotation of one guiding roll. When putting (14) into (9), the restoring force of the (massless) spring-damper element is

F(t) = −(m_P + 4 m_R + 2 m_R + f m_S) z̈_1(t) + (m_P + 4 m_R + f m_S) g, (15)

which can be calculated with the measured acceleration z̈_1 of the loading mass and the other known quantities. The negative sign in the z̈_1 term of (15) represents the correct orientation of the SM restoring force. However, the axes of FDC are typically oriented such that force values are positive at tensile loadings, i.e., for positive displacement. This corresponds to positive z_1 in the previous equations and Fig. 3. In order to follow this convention, the negative sign is omitted when implementing (15) in software.
In (15), the portions 4 m_R + 2 m_R of the guiding rolls' mass contribute to the translational and the rotational part of the inertial force, respectively. Since in the implemented setup the mass m_P + 4 m_R is measured as one quantity, 4 m_R and 2 m_R are kept separate in (15).
Note that in (15) the second summand represents the static load, leading to the static displacement d_stat. Here it has to be considered that this static displacement is only given before and after the drop. During the freefall phase of the table, however (from release of the brake at the beginning to the point where the brake connects the table again with the testbench's base after the impact), the weight is not supported by the seismic base, and gravitation therefore leads to a measurable acceleration of the loading mass and table. Moreover, during freefall the system of loading mass and SM, which acts as a single-mass oscillator, relaxes from the static displacement with the natural frequency of the damped oscillator.
In order to consider this behavior in dynamic measurement, the implemented sensor has to be capable of measuring static 1 g accelerations.
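The force evaluation can then be sketched as below, following Eq. (15) as reconstructed above, with the sign of the acceleration term already flipped for the tension-positive FDC convention discussed in the text; the argument names and the value of g are assumptions of this sketch.

def restoring_force(acc_z1, m_p4r, m_r, m_s, f, g=9.81):
    # acc_z1: measured (statically capable) acceleration of the loading mass
    # m_p4r: jointly weighed mass m_P + 4 m_R; 2 * m_r adds the rotational
    # equivalent mass of the four guiding rolls; f * m_s is the SM fraction
    inertial = (m_p4r + 2.0 * m_r + f * m_s) * acc_z1
    static = (m_p4r + f * m_s) * g  # static load, leading to d_stat
    return inertial + static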
Mass of specimen that contributes to inertial force
In Chapter 2.2, the mass m_S of the SM is distributed with 50 % each to the loading mass and to the drop table. Since the two employed SM types have clearly different constructions and masses, a closer look at the situation is taken. A SM consists of an interface 1 with mass m_1, an interface 2 with mass m_2 and the actual spring-damper element with mass m_sd in between (Fig. 4). Both interfaces connect the spring-damper element with the application of the SM. In this study the loading mass is always connected to interface 1.
While the mass of interface 1 contributes fully to the inertial force which dynamically loads the SM, the mass of interface 2 does not. The mass of the spring-damper element contributes partially, with an equivalent mass of (1/2) m_sd. ESM differ in the chemical composition and therefore in the mass of the rubber. WSM differ in the geometric dimensions and mass of the implemented wire ropes.
Since the masses of the individual SM components are not directly accessible, a WSM and an ESM have exemplarily been disassembled, and the masses of the interfaces m_1, m_2 and the total mass of the SM have been measured.
With this information, the spring-damper mass can be calculated from (16) as the total SM mass minus both interface masses, where m_S can easily be measured for every SM.
All ESM under test have the same construction with identical geometric dimensions. The masses of the interfaces are m_1 = 0.730 kg and m_2 = 0.557 kg, respectively. Likewise, all WSM under test have the same construction with identical interface masses m_1 = m_2 = 0.01 kg.
The total mass of the SM contributing to the inertial force is then given by (17), where f denotes the relevant fraction of the SM's mass. f is calculated from equations (16) and (17), leading to (18). Evaluating (18) yields f_ESM = 0.54 for the ESM under test and f_WSM = 0.5 for the WSM. At least the latter result is, due to the symmetric construction, intuitive.
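A short sketch of Eqs. (16)-(18), using the interface masses given above. The total ESM mass m_S used in the example is illustrative, since it is not stated in the text (for WSM, f = 0.5 follows for any m_S because m_1 = m_2).

```python
def sm_mass_fraction(m_S, m_1, m_2):
    """Fraction f of the SM mass m_S contributing to the inertial force.

    Interface 1 moves with the loading mass and counts fully, interface 2
    does not count, and the spring-damper element counts with half its mass.
    """
    m_sd = m_S - m_1 - m_2       # Eq. (16): spring-damper mass
    m_eff = m_1 + 0.5 * m_sd     # Eq. (17): effective contributing mass
    return m_eff / m_S           # Eq. (18)

f_esm = sm_mass_fraction(m_S=2.16, m_1=0.730, m_2=0.557)  # ~0.54 (m_S assumed)
f_wsm = sm_mass_fraction(m_S=0.50, m_1=0.010, m_2=0.010)  # exactly 0.5
```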
Special treatment of measurements at shear or roll configuration
In the shear and roll configurations, measurements were always conducted with two SM of the same type arranged in parallel (SM_p, SM_q), with SM numbers p, q ∈ {1, 2, 3}, p ≠ q. One side of each SM is connected via adapters to the supporting structure; the other sides are jointly connected to the loading mass. An example is shown in Fig. 5. This arrangement ensures that no displacement in the compression or tension direction is present during shear or roll testing.
The stiffnesses (k_p, k_q) of both SM sum up to a common value, Eq. (19). In general, individual specimens differ in their stiffness coefficients, so that the measured force values in the shear or roll configurations cannot simply be divided by two in order to obtain the FDC of one SM. Since the displacement is equal for both employed SM, the measured force can, in analogy to equation (19), be regarded as the common restoring force to which both SM contribute individually. When conducting shear or roll measurements, all three possible combinations of SM are always considered under identical conditions. Therefore, it is possible with (20) to calculate the individual force contribution of each SM from the three known force datasets, as sketched below. This method is limited by the assumption that the stiffnesses of the involved SM do not change significantly during the measurements, for example due to temperature changes.
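A minimal sketch of this decomposition (the pairwise sums F_pq = F_p + F_q solved for the individual contributions); array names are illustrative and assume the force datasets are already interpolated onto a common displacement basis.

```python
import numpy as np

def individual_forces(F12, F13, F23):
    """Recover each SM's force from the three pairwise shear/roll
    measurements: F_pq = F_p + F_q for (p, q) in {(1,2), (1,3), (2,3)}."""
    F1 = 0.5 * (F12 + F13 - F23)
    F2 = 0.5 * (F12 + F23 - F13)
    F3 = 0.5 * (F13 + F23 - F12)
    return F1, F2, F3
```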
Data processing
In this study all data processing is done with Matlab. All measurements are listed, together with the relevant parameters, in a cross-reference file.
Data preparation
Before assembling the FDC, measured data has to be prepared: 2.7.1.1. Allocate displacement data on the d(t) axis. Displacement data is adjusted according to (5) and (8).
- (d_0 + d_|t0) as well as d_0 are read from the cross-reference file.
- ΔAB_|t0 is calculated as the mean value of the first 80 ms of data in ΔAB(t).
- d_stat is calculated from (7), where the static stiffness k_stat is retrieved from the self-measured static FDC in the region of ±10 mm around the origin.
Low pass filtering the measured datasets.
Noise in the measurement data comes not only from the sensors and the data acquisition hardware; there are also strong, higher-frequency oscillations which are induced when the drop table impacts the seismic base. Therefore, a lowpass filter is used to reduce the noise on all measured channels. By choosing an FIR filter, the group delay is constant at 24 samples over the whole frequency range. The cutoff frequency is set to 100 Hz. Fig. 6 shows the magnitude response of the Kaiser window filter.
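The paper's processing is done in Matlab; the following Python sketch shows an equivalent FIR Kaiser-window lowpass with a constant group delay of 24 samples (49 taps). The sampling rate and the Kaiser shape parameter are assumptions, as neither is stated in the text.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 10_000   # sampling rate in Hz (assumed)
NTAPS = 49    # odd tap count -> constant group delay of (49 - 1)/2 = 24 samples
BETA = 8.0    # Kaiser window shape parameter (assumed)

taps = firwin(NTAPS, cutoff=100.0, window=("kaiser", BETA), fs=FS)

def lowpass(x):
    """100 Hz lowpass with compensation of the 24-sample group delay."""
    y = lfilter(taps, 1.0, np.concatenate([x, np.zeros(NTAPS // 2)]))
    return y[NTAPS // 2:]
```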
Calculate force data. Before applying (15), the acceleration data of the loading mass z̈_1(t) is calibrated so that z̈_1|t0 = 0. This ensures that, without acceleration, only the gravitational force is present in the data set. For this, the measurement offset is calculated as the mean value of the first 80 ms of data in z̈_1(t).
After compensating for this offset, the measured acceleration data z̈_1(t) is transformed to force data F(t) with (15). Here, 1/2·m_S is substituted by f·m_S, with f according to Chapter 2.5. During the measurement campaign, the summands m_P + 4m_R were measured together as one quantity (loading mass) and recorded, together with the SM's mass m_S, in the cross-reference file.
Dynamic FDC
The static FDC is calculated from the third hysteresis loop of a forced oscillation, as described earlier. When determining the dynamic FDC with a shock testbench, however, there is only a single impact and no periodic energy input to the SM. Thus, a free oscillation of the loading mass is excited by the shock impact, and due to the damping of the SM the oscillation decays. Fig. 7 shows a force-displacement dataset of a WSM. Here, the displacement prior to the drop is d_|t0 = +6 mm. Then, during the shock impact, the inertia of the loading mass compresses the SM to d_min = −62 mm. Clockwise in the diagram, the SM releases the stored energy and drives the loading mass until the restoring force reaches zero. Again, the inertia of the loading mass causes tension of the SM up to d_max = +44 mm. From here the oscillation decreases further until it ends at d = 8 mm, a displacement 2 mm higher than before the drop, due to adhesion.
Since no hysteresis from a periodic excitation is given, dynamic FDC is calculated from the enclosing hysteresis loop of the decaying force-displacement-diagram.
Step 1: Identifying the time data corresponding to the enclosing loop and separating it into three sections. The time data d(t) and F(t) are shown in Fig. 8; the force-displacement data F(d) are plotted in Fig. 9. The enclosing loop corresponds to an interval between the beginning of the displacement and the third extremum, marked by black x in Fig. 8 (top). The first point is freely chosen as the time when the measurement trigger occurs, shortly before impact. The second point is found by simple min/max detection.
One side of the enclosing loop (section 2, marked red) corresponds to the displacement data between the first and second extrema (green x). The other side is assembled from a portion before (section 1, blue) and a portion after (section 3, yellow) this interval.
Furthermore, the enclosing loop is characterized by an intersection of sections 1 and 3 (black x in Fig. 9, top). Force and displacement data share the same time base with equally spaced samples. However, because force and displacement evolve independently over time, there is no common displacement basis for the force data of sections 1 and 3.
Therefore, in order to find the intersection of sections 1 and 3, interpolation of the force data onto a common displacement basis is necessary. Another benefit of this interpolation is easier data processing when calculating and comparing FDC from different SM and when calculating the damping from the hysteresis loop.
Step 2: Interpolation
Interpolation with respect to a unified basis d_q = [−120 mm, 120 mm] with a resolution of Δd_q = 0.1 mm (N = 2401 values) is done for all three sections. From here on, the index 'q' denotes interpolated data.
The interpolated displacement and force vectors contain N = 2401 data points as well. The value structure of each vector is [NaN block, real block, NaN block]; the NaN blocks represent the areas where the sections have no real values, see Table 3 for an example.
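A minimal Python sketch of this interpolation step (the paper uses Matlab); the section arrays d and F are assumed monotonic between extrema, which is why a simple sort suffices.

```python
import numpy as np

d_q = np.linspace(-120.0, 120.0, 2401)   # unified basis, resolution 0.1 mm

def interp_section(d, F):
    """Interpolate one section F(d) onto d_q; outside the section's range the
    result is NaN, giving the [NaN block, real block, NaN block] structure."""
    order = np.argsort(d)                # sections may run in either direction
    return np.interp(d_q, d[order], F[order], left=np.nan, right=np.nan)
```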
Step 3: Combining sections 1 and 3. In the interpolated (d_q, F_q) domain, the intersection of F_q1(d_q1) and F_q3(d_q3) is now found by searching the minimum of the absolute force difference between both sections. In fact, the result may not be unique, since there can be more than one intersection in the overlapping area of both sections.
Having found an intersection, the open ends of both sections are cut away and the remainder is combined into the new data set 1', as shown in Fig. 9 (bottom).
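A sketch of this search-and-combine step; which section is kept on which side of the intersection depends on the loading orientation and may need to be swapped.

```python
import numpy as np

def combine_sections(d_q, F_q1, F_q3):
    """Find the intersection of sections 1 and 3 on the unified basis and
    combine them into data set 1' (first intersection only)."""
    i_x = np.nanargmin(np.abs(F_q1 - F_q3))       # minimal force difference
    return np.where(d_q <= d_q[i_x], F_q1, F_q3)  # cut open ends, join at i_x
```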
Step 4: Calculating the FDC and damping
Analogous to the static FDC calculation [12], the dynamic FDC is calculated by averaging the force values of sections 1' and 2 of the hysteresis loop.
The dynamic FDC derived from the measured data after offset correction is therefore given by (23), where KF denotes an inherent offset of the measured force. KF is read out of the FDC data such that F_q(d_q = 0) = 0. Offset-correcting the force makes the dynamic FDC comparable to the manufacturer-provided static FDC.
In [18] a method for calculating the damping ratio ζ from the hysteresis of force-displacement data is described; this method is applied here. The equations given there can be summarized to (24), where ΔW is the area of the hysteresis loop and a measure of the energy dissipated during the related movement, and Δd_q = d_q,max − d_q,min and ΔF_q = F_q,max − F_q,min are the spans of the displacement and of the corresponding force values. The offset-corrected measured data F(d) − KF, together with the enclosing hysteresis, the dynamic FDC and the calculated damping (24), are shown in Fig. 10. Note that, especially for SM with strongly nonlinear behavior like WSM, the calculated damping corresponds to the specific sections 1, 2 and 3 of the measured data. An evaluation via the logarithmic decrement, for example, would yield different results, which in turn depend on the choice of periods to be evaluated (see Fig. 8).
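A Python sketch of Step 4. Since the summarized damping formula (24) is not reproduced here, the prefactor below is taken from the linear single-mass-oscillator relation ζ = 2·ΔW/(π·Δd_q·ΔF_q) and should be checked against [18]; all array names are illustrative.

```python
import numpy as np

def dynamic_fdc(d_q, F_q1p, F_q2):
    """Dynamic FDC as the average of both hysteresis branches, with the
    inherent force offset KF chosen so that F_q(d_q = 0) = 0."""
    F = 0.5 * (F_q1p + F_q2)
    KF = F[np.argmin(np.abs(d_q))]   # force value closest to d_q = 0
    return F - KF

def damping_ratio(d_q, F_q1p, F_q2):
    """Damping ratio from the enclosing loop; dW is the enclosed area."""
    m = ~np.isnan(F_q1p) & ~np.isnan(F_q2)       # common displacement range
    dW = abs(np.trapz(F_q2[m] - F_q1p[m], d_q[m]))
    dd = d_q[m].max() - d_q[m].min()
    dF = max(F_q1p[m].max(), F_q2[m].max()) - min(F_q1p[m].min(), F_q2[m].min())
    return 2.0 * dW / (np.pi * dd * dF)          # prefactor assumed, see [18]
```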
In the shear and roll configurations, the FDC calculated with (23) is characteristic of both SM together, see (21). In order to evaluate the FDC of one SM in the shear or roll configuration, (22) has to be applied afterwards.
Linear error propagation in single measurement
With linear error propagation, the maximum error Δy_max of a quantity y = f(x_j) can be estimated by means of the total differential (25), where f_xj denotes the partial first-order derivative with respect to the independent quantity x_j, and the Δx_j are the standard deviations of their mean values (if known) or estimations of the uncertainty thereof. The true value of the quantity y then lies within the interval y ± Δy_max [19]. Applying (25) to the displacement calculation of WSM (5), with the known and estimated errors of the independent quantities listed in Table 4, yields the uncertainty of the allocation of force values to the displacement axis, Eq. (26). For ESM, the uncertainty calculated from (8) becomes Eq. (27). The same procedure can be applied to the force measurement and, in consequence, to the force portion of the FDC. Equation (25) applied to F = (m_P + f·m_S + 4m_R + 2m_R)·z̈_1 ∓ (m_P + 4m_R)·g results in the maximum uncertainty, Eq. (28).
Fig. 10. Dynamic force-displacement diagram, WSM125#2, compression loading at 50 cm drop height.
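A generic numerical sketch of Eq. (25) (maximum error via the total differential), assuming central differences are acceptable for the partial derivatives; f, x and dx are illustrative names.

```python
import numpy as np

def max_error(f, x, dx, h=1e-6):
    """Maximum error by linear propagation, Eq. (25):
    dy_max = sum_j |df/dx_j| * dx_j."""
    x, dx = np.asarray(x, float), np.asarray(dx, float)
    dy = 0.0
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = h
        dfdx = (f(x + e) - f(x - e)) / (2.0 * h)   # central difference
        dy += abs(dfdx) * dx[j]
    return dy
```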
(28) is calculated for each measurement and combined into ΔF_q,max = 1/2·(ΔF_q2,max + ΔF_q1′,max), where ΔF_q2,max and ΔF_q1′,max are the calculated maximum force errors ΔF_q,max in sections 1' and 2. Fig. 11 shows an exemplary FDC (blue) calculated from measurements, together with the confidence interval (red) F_q ± ΔF_q,max. Clearly, the relative error δz̈_1 dominates the maximum error of the FDC. This results in an increasing absolute uncertainty with growing force values, whereas the influence of the other (absolute) errors vanishes at higher forces.
When calculating the error of the damping ratio, δζ = Δζ/ζ, with (25), the resulting error of the hysteresis area is in the range of 500 %, simply because the aforementioned errors ΔF_q2,max and ΔF_q1′,max sum up over every interpolated data point; this makes the result δζ unusable. For this reason, the error of the damping is not calculated here.
Arithmetic mean, standard deviation and confidence interval of measurement series
In a series of n = 10 dynamic FDC measurements with identical configuration, the arithmetic mean F̄_q(d_q) is the best estimate of the true FDC [19]. Here, attention is paid only to the force values F_q,k(d_q), with index k = 1…n.
At every i-th data point d_q,i, the arithmetic mean of the corresponding force data is calculated according to (29). Regarding the statistical errors of the measurements, with a certainty of 95 % the true value of F_q,i then lies within the confidence interval (31). For n = 10, the n-dependent, tabulated [19] parameter is t_0.95 = 2.26. The term s_i/√n represents the standard deviation of the arithmetic mean. The arithmetic mean (29), the standard deviation (30) of the arithmetic mean and the confidence intervals of the single data points (31) are assembled into the vectors F̄_q, s and F̄_q ± t_0.95·s/√n.
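A point-wise sketch of Eqs. (29)-(31) in Python (names illustrative); NaN-aware statistics are assumed because the interpolated vectors carry NaN blocks outside each section's range.

```python
import numpy as np

T095_N10 = 2.26   # tabulated Student factor for n = 10 and 95 % certainty [19]

def mean_and_ci(F_qk):
    """Point-wise mean, standard deviation of the mean and 95 % confidence
    interval for a measurement series F_qk of shape (n, N)."""
    n = F_qk.shape[0]
    F_mean = np.nanmean(F_qk, axis=0)                       # Eq. (29)
    s_mean = np.nanstd(F_qk, axis=0, ddof=1) / np.sqrt(n)   # Eq. (30)
    return F_mean, s_mean, (F_mean - T095_N10 * s_mean,
                            F_mean + T095_N10 * s_mean)     # Eq. (31)
```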
An example of F̄_q with its confidence interval is shown in Fig. 12 (bottom). As can be seen, the confidence interval here lies within the linewidth of the mean FDC, demonstrating the good repeatability of the measuring setup used.
The arithmetic mean of the damping ratio ζ and its confidence interval are calculated analogously and printed in the figure as well. As described in Section 2.6, in the shear and roll configurations the FDC of an individual SM has to be calculated from three measurements with paired SM in each measurement.
In order to obtain the arithmetic mean, the standard deviation of the arithmetic mean and the confidence interval in this case, error propagation according to Gauß has to be applied [19].
Equation (32) gives the standard deviation of y, with Δx_j = s_j/√n as the standard deviations of the x_j. Using (32) on (22) yields the standard deviations of F_1q,i, F_2q,i and F_3q,i, where the standard deviations of the mean force values s_12,i/√n, … of the combined measurements are determined in the step before. From this, the confidence interval is calculated analogously to (31), with t_0.95 = 4.3 [19]. The mean damping values, calculated as before, are printed as well.
Depending on the installation situation of the SM in the testbench, the shock excitation displaces the SM first in the compression, tension, shear or roll direction, before it relaxes and overshoots to the opposite direction. When the SM under test is displaced in the compression direction first, the loading orientation is called compression loading; an equivalent term used in this work is excitation in compression mode. Consequently, tension loading, shear loading and roll loading describe the other loading orientations.
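A sketch of this propagation step: applying Eq. (32) to the decomposition of Eq. (22), each individual force is a ±1/2 combination of the three pairwise means, so all three standard deviations coincide. The s_pq below are the standard deviations of the mean pairwise forces; names are illustrative.

```python
import numpy as np

def std_individual(s12, s13, s23):
    """Gaussian error propagation, Eq. (32), applied to Eq. (22):
    s_F1 = s_F2 = s_F3 = 0.5 * sqrt(s12^2 + s13^2 + s23^2)."""
    return 0.5 * np.sqrt(s12**2 + s13**2 + s23**2)

# Confidence interval of an individual FDC, analogous to Eq. (31):
# F1 +- 4.3 * std_individual(s12, s13, s23)   (t_0.95 = 4.3 here [19])
```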
Results, discussion and conclusion
The main scientific contributions of this work, as documented in Section 2, are the development of a suitable model for dynamic FDC measurement and its implementation. The model includes the consideration of the SM mass. The proposed evaluation method takes a detailed look at the allocation of the measured force data to the displacement axis; the uncertainty thereof is calculated with (26) and (27) to Δd_WSM,max = 3.25 mm for WSM and Δd_ESM,max = 0.65 mm for ESM. Furthermore, the method comprises the special treatment of measurements under shear and roll conditions and the substitute for the hysteresis loop in a decaying force-displacement diagram.
The results of this study are valid for inertia-based measuring setups which incorporate vertical shock test benches for excitation. For approaches using horizontal shock excitation, a modification of (15) is necessary. Setups with hydraulic excitation are not covered by these results, since the way force and displacement data are acquired is fundamentally different. Figs. 15-17 contain the measurements of ESM with three different stiffnesses and three different loading orientations, yielding usable dynamic FDC curves and damping ratios. The same applies to WSM, whose exemplary data are shown in Figs. 18-21. The displayed data show that, for all loading directions, the developed mechanical model and the proposed calculation yield reasonable FDC curves.
The good repeatability of the measurements is evidenced by the small scatter of the single measured data, which lies within the linewidth of the displayed mean FDC. This corresponds to the narrow confidence interval of the mean FDC.
Note that the displayed errors of the damping and the force confidence intervals represent, according to Section 2.8.2, only the statistical errors, which are based on the 10 single measurements (compression and tension mode) or 30 single measurements (shear and roll mode).
The measurements of WSM in compression mode show a remarkable feature (Fig. 18): a ripple in the force-deflection data at the transition from almost linear to strongly nonlinear behavior of the compression branch. This feature is always, and only, observable in compression mode, not in tension mode (Fig. 19).
In real-life use cases of WSM, the surrounding structure as well as the equipment mounted by the WSM will limit the motion of the wire rope loops. Fig. 22 shows a comparable situation, where the measuring adapter and the loading mass are designed such that the wire rope loops come into contact with them at higher compression displacements. From this point on, the stiffness of the WSM becomes strongly nonlinear under further compression. Caused by the specific design of the used WSM and the smooth surfaces of the measuring adapters, the left loops shown in the figure start to slip, releasing previously stored energy. This in turn interrupts the steadiness of the measured force-displacement curves.
FDC from all single measurements with this configuration (blue) and arithmetic mean (red). Right, bottom: arithmetic mean (blue) and confidence interval for 95 % certainty (red) of the FDC.
Fig. 20. Dynamic force-displacement data of WSM125, shear mode with drop height 35 cm. Left: data from a single measurement, involving specimens #1 and #2 of WSM125. Right, top: already averaged data from the combinations of SM #1 and #2, SM #1 and #3, and SM #2 and #3. Right, bottom: FDC for SM #1 calculated from the three averaged datasets (blue) and confidence interval for 95 % certainty (red) of the FDC.
Fig. 21. Dynamic force-displacement data of WSM125, roll mode with drop height 35 cm. Left: data from a single measurement, involving specimens #1 and #2 of WSM125. Right, top: already averaged data from the combinations of SM #1 and #2, SM #1 and #3, and SM #2 and #3. Right, bottom: FDC for SM #1 calculated from the three averaged datasets (blue) and confidence interval for 95 % certainty (red) of the FDC.
In tension mode, however, the WSM are shock-excited in the positive direction first, followed by a free oscillation with a strong decay. Therefore, not enough energy is left to compress the WSM to this specific ripple point.
This study reports the theory and implementation of the methods described in the previous sections, supported by exemplary measurements. A comprehensive study incorporating these methods, for a systematic investigation of the dynamic FDC of SM of different types and stiffnesses under various loading configurations, together with a more detailed and reasoned description of the developed measuring setup, is the subject of a further publication.
Author contribution statement
Bernhard Heinemann: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Jan Dreesen, Delf Sachau: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data.
Data availability statement
Data will be made available on request.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-12-22T16:09:41.001Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "2b9393b1ee964e92eced5c419cb86ac7bf8f2fdf",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e16743",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51d1066e6f38885367646971a5d236218cd67396",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
119608187 | pes2o/s2orc | v3-fos-license | Hyperspherical theory of anisotropic exciton
A new approach to the theory of the anisotropic exciton, based on the Fock transformation, i.e., on a stereographic projection of the momentum onto the unit 4-dimensional (4D) sphere, is developed. Hyperspherical functions are used as a basis of the perturbation theory. The binding energies, wave functions and oscillator strengths of elongated as well as flattened excitons are obtained numerically. It is shown that with an increase of the anisotropy degree the oscillator strengths are markedly redistributed between optically active and formerly inactive states, making the latter optically active. An approximate analytical solution of the anisotropic exciton problem, taking into account the angular-momentum-conserving terms, is obtained. This solution gives the binding energies of a moderately anisotropic exciton with good accuracy and provides a useful qualitative description of the energy-level evolution.
I. INTRODUCTION
The interest in the anisotropic exciton problem 2,3 has been revived by the progress in the physics of semiconductor heterostructures. In semiconductor superlattices the miniband formation causes a strong mass anisotropy. 4 In fact, the localization of carriers inside quantum wells and their tunneling through barriers can be described, in terms of the anisotropic medium approximation, as an effect of mass renormalization. The dielectric constant also becomes anisotropic if the superlattice constituent layers have different dielectric susceptibilities. Recently such a formalism has been used in the theory of excitons in short-period superlattices (see, e.g., Refs. 5,6).
The main complication of the uniaxial anisotropic exciton problem is that the Coulomb potential symmetry is broken (the spherical symmetry as well as the "hidden" one, the intrinsic property of the hydrogen-like system), so that only the angular momentum projection and the parity are conserved. As a consequence, the solution of the Schrödinger equation no longer factorizes into radial and angular parts and cannot be represented as a finite combination of standard special functions.
The anisotropic exciton problem was first studied by Kohn and Luttinger 3 (for donor states in silicon and germanium) by means of the variational approach, with allowance for the group symmetry of the particular materials. Further theoretical studies 7-17 focused on perturbative solutions of the anisotropic exciton problem. For a slightly anisotropic system, Hopfield and Thomas 7 found the first-order solution, treating the anisotropy of the kinetic energy as a perturbation 18 linear in the anisotropy parameter; the effects of a weak magnetic field were also taken into account in this approximation. For a moderate exciton anisotropy, Wheeler and Dimmock 8 used an expansion of the anisotropic potential over its asymmetric part z²/r² up to second order in the anisotropy parameter, thus calculating in part the second-order perturbation solution. This partial diagonalization was completed by Deverin, 9 who considered the diagonal elements of the exact anisotropic kinetic energy (for nondegenerate levels) as well as the transcendental solution of a secular problem for degenerate levels. The full expansion of the anisotropic potential was considered by Segal 10 , where only the spherically symmetric part of the full expansion was taken into account. Finally, Faulkner 11 performed calculations of donor energy levels by means of the Rayleigh-Ritz perturbation method containing numerous (depending on the hydrogen quantum numbers) variational parameters. Included in the radial part of the hydrogen basis functions, these variational parameters served as scaling factors depending on the anisotropy degree. In the limit of extreme anisotropy, the exciton binding energies were calculated 3,12,13 in the adiabatic approximation. Recently, an elegant model of fractional-dimensional space has been developed [see Refs. 19,20 and references therein]. It allows one to treat self-consistently the bound as well as continuum states of the hydrogen problem of noninteger dimension. However, its direct applicability to the anisotropic exciton problem is problematic. The reason is that the fractional-dimensional hydrogen problem conserves the Coulomb degeneracy of the levels (so that the binding energies depend on the principal quantum number only), whereas in reality the anisotropy lifts this degeneracy and restores it only in the 2D and 3D cases.
In spite of the long history of theoretical study, the investigation of the optical properties of the anisotropic exciton is still not complete. For example, the behavior of the exciton oscillator strengths is very important for understanding the experimental absorption spectra.
However, the evolution of the oscillator strengths of the anisotropic exciton with increasing anisotropy has not, to our knowledge, been investigated, with two exceptions: calculations for a slightly anisotropic exciton 14 and simulations of optical spectra within an isotropic exciton model. 21 One should note that neither of the approaches 14,21 is able to describe the drastic changes of the oscillator strengths (due to the level anticrossings 11 ) with increasing anisotropy reported in our paper.
In the present paper we develop 22 a perturbation approach to the uniaxial anisotropic exciton problem, based on the method of stereographic projection of the momentum space onto the unit 4D sphere proposed by Fock. 23 We use the hyperspherical harmonics, i.e., the irreducible representation of the rotation group O(4) of a 4D sphere, as a basis of the Brillouin-Wigner perturbation method. This approach has a number of advantages and clarifies the physical properties of the anisotropic exciton. (i) It allows us to utilize the additional hidden symmetry of the Coulomb potential for the expansion of the anisotropic exciton wave function. Namely, for the bound exciton states the irreducible representations of the full symmetry group O(4) constitute a complete set for such an expansion. This expansion depends explicitly on the exciton energy through scaling parameters which follow adiabatically the changes in the anisotropy. These parameters, similar to those introduced in the Rayleigh-Ritz method 11 (where they were defined by minimizing the energy functional), are exactly determined in our method. As a result, the hyperspherical functions turn out to be the most effective basis for numerical calculations. (ii) Within the Fock representation, the hydrogenic spectrum with its level series limit transforms into an equidistant one, which provides a good convergence of our method in a wide region of the anisotropy parameter. (iii) The matrix elements of the perturbation are found as analytical elementary expressions. (iv) This analytical form of the perturbation matrix elements allows us to construct a spherical approximation with an analytical solution and to sum the remaining part of the perturbation exactly in the second order. This spherical approximation, which works well in the region of moderate anisotropy, turns out to be very useful for a qualitative classification of the energy levels.
We calculate numerically the energy spectrum, excitonic wavefunctions and oscillator strengths for flattened as well as elongated excitons.
The paper is organized as follows. In Sec. II the expansion in the basis of the hyperspherical formalism is formulated and the basic equations of the perturbation method are derived.
Results and discussions are presented in Sec. III.
A. Hyperspherical formalism
The Hamiltonian of the uniaxial anisotropic exciton is given by Eq. (1). Here µ is the reduced exciton mass, ε is the semiconductor dielectric constant, and the subscripts ∥ and ⊥ refer to the quantities along and normal to the axis of symmetry (z-axis), respectively. In Eq. (1) both the kinetic and potential energies are anisotropic. However, a dilatation z → z·√(ε∥/ε⊥) makes the potential energy spherically symmetric. In the effective atomic units where ε_0 = √(ε⊥ε∥), Eq. (1) takes the form of Eq. (2). Here we introduced the perturbation parameter ǫ = γ − 1, connected to the anisotropy parameter γ (0 < γ < 1 and 1 < γ < ∞ for, respectively, flattened and elongated excitons); p̂ and p̂_z denote the dimensionless operators of the momentum and its z-projection.
We investigate the bound states with eigenenergies E_ν < 0, measured in Ry*, Eq. (2). It is convenient to introduce a momentum scale parameter p_ν (for each bound state ν) which will play the role of the adiabatic parameter in the perturbation theory. After the Fourier transform, Eq. (3) takes an integral form. Following Fock's paper, 23 we perform a stereographic projection of the 3D momentum space onto the 4D unit sphere, p/p_ν → u, where u is the 4D unit vector on the sphere and p = |p|. In the hyperspherical coordinates (α, θ, ϕ) the unit vector u takes the corresponding parametrized form. Let us introduce a new wave function with the normalization condition of Eq. (11). Then Eq. (6) takes the form of Eq. (12), where Ĥ_0 is the Hamiltonian of the unperturbed (hydrogen-like) problem and V̂ is the perturbation operator. If ǫ = 0 (or γ = 1), Eq. (12) describes the isotropic 3D exciton. As was shown by Fock, 23 the solutions of the corresponding integral equation are the hyperspherical harmonics of Eq. (16). Here C_k^m(x) are the Gegenbauer polynomials 24 and Y_lm(θ, ϕ) are the conventional spherical harmonics. The hyperspherical functions, Eq. (16), afford the irreducible representation of the full symmetry group O(4) of the hydrogen-like system. 25 Due to the properties of irreducible representations, the hyperspherical functions are orthogonal and normalized, in accordance 26 with Eq. (11). It can be shown 27 that the standard hydrogen wave function ψ_nlm(r) with a given set of quantum numbers (n, l, m) (see, e.g., Ref. 28) can be Fourier transformed into the hyperspherical function of Eq. (16).
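A small numerical sketch of the Fock projection p/p_ν → u; the sign convention of the fourth component is an assumption, since conventions differ between papers.

```python
import numpy as np

def fock_projection(p, p_nu):
    """Stereographic projection of the 3D momentum p (shape (..., 3)) onto
    the unit 4D sphere; p_nu is the momentum scale of the bound state nu."""
    p2 = np.sum(p**2, axis=-1, keepdims=True)
    u123 = 2.0 * p_nu * p / (p2 + p_nu**2)      # first three components
    u4 = (p2 - p_nu**2) / (p2 + p_nu**2)        # fourth component (sign assumed)
    return np.concatenate([u123, u4], axis=-1)  # |u| = 1 by construction
```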
B. Formulation of Brillouin-Wigner perturbation theory
We use the Brillouin-Wigner perturbation theory, i.e., the direct diagonalization of a truncated Hamiltonian matrix, in order to solve the anisotropic exciton problem in the form of Eq. (12). The set of hydrogen bound-state eigenfunctions is not complete, and the scattering states must also be taken into account. However, in the Fock representation we are able to construct a complete basis out of the set of hydrogen bound states. As was shown in Ref. 25, the scattering states are mapped onto a two-sheeted hyperboloid in a 4D space with Minkowski metric, whereas the bound states are mapped onto the unit sphere via the transformation Eq. (8). Thus, the problems of the bound and scattering states are mapped onto different subspaces, each of which has its own complete basis. The anisotropic problem is mapped onto the same subspaces through the transformation Eqs. (7)-(10) for the bound states and the corresponding procedure (with positive energies) for the scattering states. So, being interested in the bound states in the whole physical region −1 < ǫ < ∞, excluding the points ǫ = −1 (purely 2D exciton) and ǫ = ∞ (purely 1D exciton), we can use the hyperspherical harmonics, Eq. (16), as a complete set of basis functions. 29 As immediately follows from Eqs. (12) and (14), the perturbation scheme converges for |ǫ| < 1.
The eigenfunctions are expanded as in Eq. (19), with the normalizing constants defined accordingly. Then the Schrödinger equation takes the matrix form of Eq. (21), with the eigenvalues defined via Eq. (22) and the perturbation matrix V_ss′. The nonvanishing matrix elements V_ss′ are given by Eqs. (24) and (25) (see Appendix A); all other matrix elements vanish.
The perturbation method in the form of Eq. (21) is very convenient. First of all, the perturbation ǫp̂_z² is invariant with respect to rotations around the z-axis and to the transformation p → −p. Thus, each perturbed state has a definite parity and a definite magnetic quantum number m, and the perturbation problem Eq. (21) can be solved separately for each parity and each m. This also implies that the summation over s′ in Eq. (21) and thereafter means that only the hydrogen states with a given parity and magnetic quantum number have to be taken into account. The time-conjugated states ±m remain degenerate. Secondly, the precise form of the perturbation matrix V_ss′ provides more rigorous selection rules.
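Because the basis depends on the sought eigenvalue through the scale parameter p_ν, the Brillouin-Wigner scheme amounts to a self-consistent diagonalization of the truncated matrix within each (parity, m) block. The following is a generic Python sketch under that reading; build_block is a hypothetical routine assembling the truncated Hamiltonian block for a given trial energy.

```python
import numpy as np

def brillouin_wigner(build_block, E0, state=0, tol=1e-4, max_iter=200):
    """Self-consistent diagonalization sketch: rebuild the truncated
    (parity, m) block for the current trial energy, rediagonalize, and
    iterate until the tracked eigenvalue is stable to the requested
    relative precision (the paper quotes 1e-4)."""
    E = E0
    for _ in range(max_iter):
        H = build_block(E)            # Hermitian matrix for trial energy E
        w = np.linalg.eigvalsh(H)     # sorted eigenvalues of the block
        E_new = w[state]              # e.g. state=0 tracks the lowest level
        if abs(E_new - E) <= tol * abs(E_new):
            return E_new
        E = E_new
    return E
```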
The expansion (19) corresponds to a coordinate representation of the anisotropic exciton wave function. We would like to emphasize that the presented perturbation method can easily be generalized to an arbitrary integer dimension D ≥ 2, in accordance with Ref. 25, where the method of stereographic projection has been extended to higher dimensions. In particular, for D = 2 the standard spherical harmonics Y_lm(θ, ϕ) have to be used as a basis, and the operator (ǫ/2)V̂ = (ǫ/2)(1 + cos θ)cos²ϕ as the perturbation. Here cos θ = (p² − p_ν²)/(p² + p_ν²) and tan ϕ = p_y/p_x.
The problem of the anisotropic exciton scattering states can be approached analogously, using hyperspherical harmonics on a two-sheeted 4D hyperboloid as a basis for the perturbation problem. The eigenvalues should then be defined with positive energies, instead of Eq. (22).
However, the eigenvalue problem [analogous to Eq. (21)] becomes more complicated: a system of integral equations now has to be solved, because of the dependence on the continuum quantum numbers.
One should note that the method of stereographic projection can be formally generalized to the fractional-dimensional exciton problem, with the exciton binding energies coinciding with those obtained in Ref. 19. However, due to the conservation of the generalized hyperspherical symmetry (the anisotropy parameter now appearing in the role of the fractional dimensionality), the energy levels remain Coulomb degenerate, as mentioned above.
III. RESULTS AND DISCUSSIONS
Due to the symmetry properties of the uniaxial anisotropic exciton Hamiltonian, matrices with even and odd l as well as with different m can be diagonalized independently. In contrast to the variational technique, which provides only an upper bound of the binding energies, the Brillouin-Wigner perturbation method allows us to reach the necessary precision by choosing a sufficiently large matrix to be diagonalized. We perform our calculations with a relative energy precision of 10^−4. In order to provide this precision in the calculation of the ground state energy for 0.6 ≤ γ^(1/3) ≤ 2, hydrogen states with principal quantum number up to 15 and orbital quantum number up to 6 must be taken into account. The numerical procedure becomes unstable for γ → 0 and γ → ∞. This non-convergence is caused by the fact that these points, where the symmetry changes (to 2D and 1D, respectively), are peculiar for the perturbation theory: the dimension change causes a degeneration of the levels, when a very large (divergent) number of levels is mixed by the perturbation (cf. γ = 1, Fig. 3, left panel). As is clear from Fig. 2, the ground state eigenvalue dependence is almost linear in γ^(1/3) for γ ≤ 1. The ground state, which lies much lower than the excited states, almost does not interact with the latter. However, for the first excited state this interaction becomes much more significant, and its energy dependence upon γ^(1/3) deviates from the linear one (cf. the dashed line in Fig. 2). The ratio of the energy separation between the ground state and the first excited state to the exciton binding energy is shown in Fig. 4; the available data are reproduced with good accuracy (see Table 1). As compared to Faulkner, we calculate a large number of excited states (up to 100 for each parity and m considered), and we calculate the excitonic parameters in the region γ ≤ 1 as well as γ ≥ 1, thus covering all possible values of the anisotropy parameter. Note the difference between Faulkner's and our designations of the 3S and 3D_0 states. 31 When states are split off by the perturbation, we always label the state with the larger oscillator strength at γ ≈ 1 as the S-state, thus establishing, within our notations, an order reversed to that among the states with m = 0 (see also the discussion in Sec. III C and Fig. 5). Thus, at γ < 1 the 3D_0 level lies lower than 3S, contrary to the classification by Faulkner. 11 The same situation holds if we consider the higher excited states.
B. Spherical approximation
Even in the case of a small anisotropy |ǫ| ≪ 1, the exciton states are linear combinations of hydrogen states with different l. However, for small ǫ the admixture of such states becomes rather small, and accounting only for the spherically symmetric part of the perturbation proves very useful for understanding the evolution of the levels. It is important that within such a spherical approximation the anisotropic exciton problem is exactly soluble.
In this section we consider an approximate solution of the anisotropic exciton problem of the form ψ(r) = R(r)Y_lm(θ, ϕ), thus taking into account only the parts of the perturbation diagonal in l, Eqs. (24), (25), and neglecting the perturbation matrix elements mixing different spherical harmonics.
In order to neglect the l ≠ l′ matrix elements, let us replace in the Schrödinger equation, Eq. (6), the operator p̂_z² by the operator Q̂ [see also Eq. (25)]. Then, after a substitution which, in fact, corresponds to an (l, m)-dependent mass renormalization, we arrive at a symmetric (unperturbed) Schrödinger equation with the solution Eq. (35), in the units of Eq. (2) and with the use of the dilatation of z.
One can easily see from Eq. (34) that in this spherical approximation the perturbation compresses (for ǫ < 0) or dilates (for ǫ > 0) the scale of a given hydrogen wave function by the factor 1 + ǫQ_lm, which is different for different spherical harmonics. Note that the hidden hydrogen-like symmetry is broken within this spherical approximation, and the binding energies now depend on l and m. However, the spectrum Eq. (35) still has a hydrogen-like dependence on the principal quantum number n.
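A small sketch of this spherical approximation under two assumptions that the elided equations do not let us verify: that Q_lm is the diagonal angular matrix element of cos²θ in the state Y_lm, and that the mass-renormalization reading of Eq. (34) gives E_nlm = −1/[n²(1 + ǫQ_lm)] in units of Ry*.

```python
def Q_lm(l, m):
    """Diagonal matrix element <Y_lm| cos^2(theta) |Y_lm> (assumed role of
    Q_lm); for l = m = 0 this gives the isotropic value 1/3."""
    return (2 * l * (l + 1) - 1 - 2 * m**2) / ((2 * l - 1) * (2 * l + 3))

def E_spherical(n, l, m, eps):
    """Binding energy (Ry*) in the spherical approximation, Eq. (35),
    under the assumed mass-renormalization form."""
    return -1.0 / (n**2 * (1.0 + eps * Q_lm(l, m)))
```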
The redistribution of oscillator strengths between different states is due to multiple anticrossings between energy levels interacting with each other. This effect can be clearly seen in Fig. 8, where the area of a circle placed on the energy curve is proportional to the oscillator strength of a given excited state, normalized to the ground state oscillator strength. | 2012-05-24T15:25:27.000Z | 2000-08-25T00:00:00.000 | {
"year": 2012,
"sha1": "cca9ecdb182e6fd21527774aa14a78344d4cf9b5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1205.5482",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cca9ecdb182e6fd21527774aa14a78344d4cf9b5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
59606651 | pes2o/s2orc | v3-fos-license | Expression of microRNAs 16, 20a, 150 and 155 in anal squamous intraepithelial lesions from high-risk groups
Anal squamous intraepithelial lesions (ASIL) or anal intraepithelial neoplasia (AIN) are precancerous lesions. microRNAs (miRNAs) have been implicated in cervical carcinogenesis, but have never been assessed in anal precancerous lesions. Our aim was to evaluate the expression of miR-16, miR-20a, miR-150 and miR-155 in several grades of ASIL obtained from high-risk patients submitted to anal cancer screening from July 2016 to January 2017. Lesions were classified according to the Lower Anogenital Squamous Terminology (LAST) into low-grade (LSIL) and high-grade squamous intraepithelial lesions (HSIL), and according to the AIN classification into AIN1, AIN2 and AIN3. One hundred and five biopsies were obtained from 60 patients. Ten samples were negative (9.5%), 63 were LSIL (60%) and 32 were HSIL (30.5%) according to the LAST. Twenty-seven (26%) were negative for dysplasia, 46 were classified as AIN1 (44%), 14 as AIN2 (13%) and 18 as AIN3 (17%) according to the AIN classification. There was no statistically significant difference in the fold expression of miR-16, miR-20a, miR-150 and miR-155 according to either classification. Although non-significant, there was an increasing trend in the miR-155 fold expression from negative samples to HSIL, with the highest fold expression increase in both LSIL and HSIL compared to the other miRNAs.
is recommended in specific cases, such as -IN2 cases (-IN2/p16-negative considered low-grade and -IN2/p16-positive as high-grade lesions) 5 . p16 is the best-performing biomarker for classification currently available, but it is not ideal 5 , with the possibility of false positives (7% of all anal LSIL will be p16-positive) and subsequent overtreatment, or false-negative results and subsequent undertreatment (24% of AIN2 and 10% of AIN3 will be p16-negative) 8 . microRNAs (miRNAs) are noncoding RNAs, approximately 21-23 nucleotides in length, that have been studied and implicated in several types of cancers, acting as tumour suppressors or oncogenes (oncomirs) 9 . Research involving miRNAs may provide insight into HPV-related carcinogenesis and possible new biomarkers for cancer diagnosis and for the determination of prognosis and optimal therapy 9 . Several studies have implicated multiple miRNAs in key pathways linked to cervical cancer, such as cell proliferation, apoptosis, migration and invasion 10 . Many of these studies compared miRNA expression in cervical SCC with that in normal cervical mucosa 10 . There are also studies that have evaluated the expression in cervical precancerous lesions, mostly with relatively small sample sizes 10 .
Anal and cervical carcinogenesis are considered to be very similar HPV-driven processes, although important differences exist. The incidence rate is much higher for cervical cancer, and there is a lower progression rate from anal high-grade lesions to cancer 11 . There are specific high-risk groups for HPV-related anal lesions, namely HIV-positive patients, especially those who are men who have sex with men (MSM) 11,12 , solid organ transplant recipients [13][14][15] and women with a previous history of genital neoplasia [16][17][18][19] . Information on several aspects of anal carcinogenesis is still scarce, and much of our understanding and the approaches used for investigation have drawn on our knowledge of the cervix.
As far as we know, miRNAs have not previously been assessed in ASIL. The aim of this study was to evaluate the expression of miR-16, miR-20a, miR-150 and miR-155 in several histological grades of ASIL, obtained from high-risk patients. This miRNA panel was chosen based on published data related to cervical carcinogenesis, HPV infection and cell cycle influence 10 .
Results
In total, 105 biopsies were obtained from 60 patients with a mean age of 42 ± 13 years. Fifty-three patients were HIV-positive (88%), 51 patients were men (85%), all of whom were men who have sex with men, and six of the nine women included had a previous history of genital neoplasia (67%). Two patients were on immunosuppressive drugs (3%); both were women, also with a previous history of genital neoplasia. HPV 16 anal infection was detected in 28 patients (47%), HPV 18 in 18 patients (30%) and HR-HPV other than HPV 16/18 (but not excluding patients with HPV 16/18 coinfection) in 49 patients (82%), Table 1.
Of the 60 patients included, there were 26 patients (43%) in whom biopsies were performed in more than one anal/perianal area, targeting different lesions. Information on the histological classification of the samples according to the LAST and the AIN classification per patient is presented in Supplementary Tables S1 and S2, respectively. There was no statistically significant difference in the fold expression of miR-16, miR-20a, miR-150 and miR-155 between anal/perianal LSIL and HSIL according to the LAST, although an increasing trend in fold expression from negative to HSIL was seen for miR-155. The highest fold expression increase in both LSIL and HSIL samples was seen for miR-155 (Table 2). Boxplots of ΔCt values for each miRNA according to the LAST classification are shown in Fig. 1.
There was also no statistically significant difference in the fold expression of miR-16, miR-20a, miR-150 and miR-155 between AIN1, AIN2 and AIN3 according to the AIN classification. AIN2 samples showed the highest level of expression for miR-20a, miR-150 and miR-155; the largest difference was observed for miR-150 (Table 2). Boxplots of ΔCt values for each miRNA according to the AIN classification are shown in Fig. 2.
There was no statistically significant change in the expression of these miRNAs according to histological grade when the analyses were adjusted for age, anal HPV genotype, lesion location, sex, HIV positivity, smoking status or history of previous genital neoplasia (data not shown). There was also no statistically significant difference in the expression of these miRNAs according to HIV status or the presence of high-risk HPV, either when these variables were used to adjust the estimates for lesion classification or when analysed alone.
miR-16 has previously been recognized as a tumor-suppressive miRNA 10,34 , with decreased expression in several different cancers, but not in cervical cancer 10 . In cervical intraepithelial neoplasia (CIN), Wang et al. 20 found an increasing, although non-statistically significant, trend of expression in CIN3 when comparing normal (n = 38), CIN1/2 (n = 13) and CIN3 samples (n = 39). miR-20a is part of the miR-17-92 cluster, and one study showed that it was upregulated in the serum of CIN patients compared with healthy controls 35 . In a study by Wilting et al. 21 , miR-150 expression was higher in CIN2/3 samples (n = 18) vs. normal cervical samples (n = 10), although this result needs to be interpreted with caution due to the small sample size. miR-155 is a recognized oncomiR, promoting cervical cancer cell proliferation through suppression of LKB1 (a tumor suppressor in cervical cancer) 30 . Two studies 21,36 evaluating the expression in CIN2/3 samples vs. normal samples failed to show a statistically significantly higher expression in CIN2/3 samples, although there was an increasing trend in CIN2/3. The miR-155 results in cervical studies/CIN samples are similar to those of this study: when comparing normal anal samples (n = 10) and anal HSIL samples (n = 32), a higher fold expression was seen in HSIL, although there was no statistically significant difference.
Using the AIN classification, the highest fold expression of miR-20a, miR-150 and miR-155 was seen in AIN2 samples, although this was not statistically significant. In cervical studies involving these miRNAs, CIN2 samples were not analysed separately, so no previous information regarding this is available. We cannot rule out large average differences in expression between histological grades, but the observed distributions of values in individual samples clearly overlap greatly across histological grades, indicating a lack of clear-cut differentiation.
The maximum number of CIN samples that had previously been tested for any of these four miRNAs in a single study was 52 (in that case for miR-16) 20 . As far as we know, the present study included the largest number of anogenital precancerous lesions tested for miRNA expression 10 , and the histology was described according to both the AIN and the LAST classifications (in previous cervical studies only a single classification was used). The former AIN/CIN classification is still widely used, especially in Europe. An analysis of the association between several (risk) factors and the fold expression of the miRNAs according to histological grade was also conducted, to understand how these factors could possibly influence the expression.
Data from previous cervical studies provided important guidance for the choice of our miRNA panel, given the similarities between the two HPV-driven carcinogenic processes. In both cases, HPV is recognized as the major etiologic agent, there is a similar, more susceptible histological area (the squamocolumnar junction) and the same type of precancerous lesions (CIN/AIN). Most of the research in HPV-linked anogenital disease is focused on the cervix, with findings then commonly generalized to the anal canal. These generalizations are impaired by the fact that the cervix is by far the anatomical region most commonly affected by HPV-related lesions, and the progression rate of high-grade lesions in the cervix is around 1/80 per year vs. 1/377 per year in the anal canal (HIV-positive MSM in the highly active antiretroviral therapy era) 11 . Anal SCC is a largely HPV-driven disease (mainly HPV 16), involving high-risk groups, with a very low prevalence in the general population, as for HSIL/AIN3 (also associated with HPV 16) 37 . These known differences, and the expected increase in the incidence of anal SCC 3,4 , justify specific studies involving the anus. The expression of these miRNAs in CIN samples showed, in most cases, an increasing trend in more severe grades (although not statistically significant) 20,21,36 . Further studies with a large number of precancerous samples, including several grades of lesions, are important to clarify any possible association.
Table 1 (continued). Parameter / Value:
AIN classification: Negative for dysplasia, n (%): 27 (26); AIN1, n (%): 46 (44); AIN2, n (%): 14 (13); AIN3, n (%): 18 (17).
Samples location: Anal, n (%): 85 (81); Perianal, n (%): 20 (19).
A large majority of the patients included were HIV-positive MSM, because this is the population at highest risk for anal SCC and the one in which anal cancer screening has been recommended 38 . There have been several studies evaluating the involvement of cellular miRNAs during HIV infection and their potential as biomarkers in these populations [39][40][41][42] . One study showed that HIV-infected individuals with low or undetectable viral load exhibit a gene expression profile very similar to that of control or uninfected subjects 41 . Another study, analysing miRNA-150 ("anti-HIV miRNA") levels in peripheral blood mononuclear cells of HIV-positive patients, revealed that these are restored after highly active antiretroviral therapy, with no difference also shown for miRNA-16 levels according to HIV status/therapy 42 . There is no indication for providing anal cancer screening in healthy populations, so we do not have data/anal samples from low-risk controls/healthy individuals. Our HIV-positive cohort was homogeneous, with all patients well controlled on highly active antiretroviral therapy, so an effect of HIV status on the anal expression of miRNAs (i.e., a different expression in these high-risk patients relative to normal low-risk controls) seems unlikely. There are some limitations to be considered. Although the number of anal samples included was large, for some comparisons of histological grades, such as AIN2, the sample size is small. For estimates of miRNA expression in abnormal tissue relative to normal samples, the confidence intervals of the estimates were wide, meaning that large average differences between groups cannot be completely ruled out. There was a small number of samples for which p16 immunostaining was performed (n = 19), so comparisons of miRNA expression according to p16 results have not been presented. There were only two patients on pharmacological immunosuppression (both women with a previous history of genital neoplasia) and one patient with a previous history of anal SCC, so the impact of these features on miRNA expression was not analysed. These three patients, although few, fit our inclusion criteria of high-risk patients for anal SCC undergoing anal cancer screening, and so were included. There was no association between miRNA expression and HPV presence or HIV status, but the numbers of HPV- and HIV-negative patients are small. This is the first study evaluating the expression of miRNAs in ASIL. Our findings indicate that, at present, miR-16, miR-20a, miR-150 and miR-155 expression cannot be considered as biomarkers for the histological classification/differentiation of these lesions. There is also no indication that our data in the anus and the previously published data in the cervix differ largely at this level. For miR-155, although not statistically significant, there was an increasing trend in fold expression from negative samples to HSIL; the highest fold expression increase in both anal LSIL and HSIL was seen for this miRNA, and future studies might explore this further.
Methods
Study design and study population. This was a cross-sectional study, with recruitment of a sample of high-risk patients followed for anal SCC screening, from July 2016 to January 2017, in the Proctology outpatient clinic of the Gastroenterology Department of Centro Hospitalar S. João, Porto, Portugal. Both first screening visits and follow-up visits were considered. Inability to provide written consent was an exclusion criterion. Any case with a lesion suspicious for anal SCC was not considered for this study. Information regarding gender, age at sample collection, smoking history, HIV positivity, sexual orientation in men, previous history of genital neoplasia, pharmacological immunosuppression and previous anal SCC history was recorded. Informed verbal and written consent was obtained from all patients who accepted to enter the study. This study was approved by the Health Ethics Committee of Centro Hospitalar S. João and was performed in accordance with the 1964 Declaration of Helsinki and its later amendments.
High-resolution anoscopy and sample collection. All patients underwent high-resolution anoscopy, and anal/perianal biopsies were collected under high-resolution anoscopy during the routine patient assessment.
This technique was performed using a Carl Zeiss ® colposcope (Carl Zeiss, Oberkochen, Germany), with patients observed in the knee-chest position (all procedures were performed by A.A.). An anoscope was inserted, and the anal and perianal assessment was carried out under magnification with the colposcope. Initially this was done without staining, and then 5% acetic acid and Lugol's solution were used. Biopsies were performed using a mini-Tischler punch-biopsy forceps. No local anaesthesia was necessary for anal biopsies; for perianal biopsies, 1% lidocaine buffered with 8.4% sodium bicarbonate was used. Two fragments were obtained: one for histological assessment, and one frozen at −80 °C for miRNA analysis. The number of biopsies performed in each patient was determined by the number of lesions seen. When biopsies were taken from several anal/perianal areas in the same patient, they always targeted different lesion locations and were normally taken in the same procedure.
An anal cytology sample, collected as part of the regular follow-up/screening of these patients, was used for HPV genotyping (the cytology results themselves were not included for this analysis). Anal cytology was performed using a sterile polyester swab (Thermo Fisher Scientific, Waltham, Massachusetts, USA), previously moistened with water, with the patients in the knee-chest position. The swab was inserted in the distal rectum and then slowly withdrawn with rotational movements over a period of 20 seconds. Samples were placed into PreservCyt ThinPrep ® solution (Hologic UK, Crawley, UK).
Histological analysis.
Histological samples were analysed in the Pathology Department of Centro Hospitalar S. João in Porto, Portugal, by experienced Pathologists using the same protocol and with consensus discussion of all difficult or equivocal cases. For this study, two histological classifications were recorded. One was according to the AIN classification (three-tiered nomenclature), using the presence/absence and grade of dysplasia: AIN1 (mild dysplasia), AIN2 (moderate dysplasia) and AIN3 (severe dysplasia). The other classification was according to the LAST 5 two-tiered nomenclature: LSIL and HSIL. p16 immunohistochemistry was evaluated in equivocal cases as recommended by LAST guidelines, and p16-positive lesions were considered HSIL and p16-negative as LSIL 5 .
Negative biopsies obtained from the same group of high-risk patients were included in the analyses for comparison. The definition of a negative anal biopsy differs between the two classifications, and results were analysed accordingly. Anal biopsies with normal mucosa/reactive changes and non-dysplastic ASIL (including condylomas) were considered "negative for dysplasia" according to the AIN classification. According to the LAST 5 , only anal biopsies with normal mucosa/reactive changes were considered "negative", since the mere presence of a cytopathic effect of HPV (koilocytosis) and condyloma, even without dysplasia, is considered LSIL (koilocytosis and condyloma are usually non-dysplastic ASIL).
HPV genotyping. For this analysis, the remaining sample of the liquid-based anal cytology specimen collected during the normal patient assessment was used. HPV genotyping was performed using the cobas ® HPV test. The test simultaneously provides pooled results for 12 high-risk (HR) genotypes (HPV 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66 and 68) and individual results for HPV 16 and HPV 18. A negative result indicates either the absence of any HPV or the presence of only low-risk HPV infection (HPV 6 and 11). This analysis was conducted in the Pathology Department of Centro Hospitalar S. João in Porto, Portugal. The procedure was performed following the manufacturer's instructions. Initially, the DNA extraction (HPV nucleic acids and the control β-globin DNA) was carried out using the fully automated cobas x 480 instrument. The cobas z 480 analyzer was then used for real-time polymerase chain reaction (PCR) amplification of HR-HPV and β-globin DNA. The interpretation of the results was accomplished using the software provided with the cobas z 480 analyzer.
For the miRNA analysis, the tissue samples were briefly homogenized and macerated in TripleXtractor reagent (Grisp-research solutions ® ) using a syringe and a 21 G needle. For miRNA isolation, the total RNA fraction was first isolated with a chloroform solution (Merck ® ) according to the protocol of Santos et al. 42 , and for the purification we used the commercial GRS microRNA Kit (Grisp-research solutions ® ) after adjustments 43,44 .
Based on a literature search encompassing studies on miRNA normalization in cervical tissue, we selected RNU44 and RNU48 as candidate endogenous reference genes 22,45,46 . RNU48 was used as the endogenous control for data normalization since it showed a stable expression pattern, i.e., low mean threshold cycle values with small standard deviation.
The thermal conditions were 16 °C for 30 minutes, followed by 42 °C for 60 minutes and 85 °C for 10 minutes, as mentioned in Dias et al. 43
Outcomes. The primary outcome was to evaluate the expression of miR-16, miR-20a, miR-150 and miR-155 according to the histological grade of the ASIL, using both the LAST and the AIN classification. The secondary outcome was to evaluate factors that can modify the fold expression of these miRNAs, according to the histological grading.
Statistical analysis. Continuous variables were described as mean ± standard deviation and categorical variables were described as absolute and relative frequencies.
MicroRNA expression was initially quantified as delta threshold cycle values (ΔCt), defined as ΔCt = Ct(target miRNA) − Ct(RNU48). The mean Ct value of a given miRNA from normal/negative anal biopsies served as the basal level for calculating the relative level of that miRNA in each type of lesion.
Several lesions were seen in some patients (one biopsy was taken for each lesion), so analyses were conducted using random-intercept models with outcome variables on the ΔCt scale. For the primary analyses, 'negative' biopsies were treated as the reference category, and the average difference in ΔCt was estimated for each level of the LAST and AIN classifications, respectively. The model for each classification therefore provided estimates of ΔΔCt for each category of lesion relative to the reference group. The estimated ΔΔCt values and their 95% confidence intervals (CI) were transformed to obtain estimates of relative change in expression normalized to an endogenous reference (RNU48), 2^−ΔΔCt 47 , for interpretation. The 2^−ΔΔCt value for 'negative' biopsies was 1 by definition.
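To make the two analysis steps above concrete, the following is a minimal sketch in Python of a random-intercept model on the ΔCt scale with 'negative' biopsies as the reference category, followed by the 2^−ΔΔCt transformation. This is not the authors' code: the data are synthetic and the variable names (dct, grade, patient) are hypothetical; note that because 2^−x is a decreasing function, the confidence-interval bounds swap under the transformation.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example data: one row per biopsy, several biopsies per patient.
rng = np.random.default_rng(0)
rows = []
for pid in range(30):
    u = rng.normal(0, 0.5)  # per-patient random intercept
    for grade, shift in [("negative", 0.0), ("LSIL", -0.8), ("HSIL", -1.5)]:
        rows.append({"patient": pid, "grade": grade,
                     "dct": 6.0 + shift + u + rng.normal(0, 0.7)})
df = pd.DataFrame(rows)

# Random-intercept linear mixed model on the delta-Ct scale,
# with 'negative' biopsies as the reference category.
m = smf.mixedlm("dct ~ C(grade, Treatment('negative'))", df,
                groups=df["patient"]).fit()

# ddCt estimates and 95% CIs -> relative expression (2^-ddCt);
# the CI bounds swap because 2^-x is decreasing.
ci = m.conf_int()
for name in m.params.index:
    if "grade" in name:
        lo, hi = ci.loc[name]
        print(f"{name}: 2^-ddCt = {2 ** -m.params[name]:.2f} "
              f"(95% CI {2 ** -hi:.2f} to {2 ** -lo:.2f})")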
Analyses were conducted to evaluate expression of the miRNAs according to histological grade (both classifications) with adjustment for each of the following patient characteristics: age (linear effect), anal HR-HPV positivity (any of the 12 possible types detected), HPV 16 positivity, HPV 18 positivity, lesion location (anal or perianal), sex, HIV-positivity, smoking status and a history of previous genital neoplasia. These analyses were again conducted using univariable linear mixed models with a random intercept term for each patient, and included all samples. | 2019-02-07T15:42:58.901Z | 2019-02-06T00:00:00.000 | {
"year": 2019,
"sha1": "e101a9871561795fdbe1c047a7e6c38b682f96ba",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-38378-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e101a9871561795fdbe1c047a7e6c38b682f96ba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18398825 | pes2o/s2orc | v3-fos-license | Asiatic acid, a pentacyclic triterpene in Centella asiatica, attenuates glutamate-induced cognitive deficits in mice and apoptosis in SH-SY5Y cells.
AIM
To investigate whether asiatic acid (AA), a pentacyclic triterpene in Centella asiatica, exerted neuroprotective effects in vitro and in vivo, and to determine the underlying mechanisms.
METHODS
Human neuroblastoma SH-SY5Y cells were used for the in vitro study. Cell viability was determined with the MTT assay. Hoechst 33342 staining and flow cytometry were used to examine apoptosis. The mitochondrial membrane potential (MMP) and reactive oxygen species (ROS) were measured using fluorescent dyes. PGC-1α and Sirt1 levels were examined using Western blotting. Neonatal mice were given monosodium glutamate (2.5 mg/g) subcutaneously at the neck from postnatal day (PD) 7 to 13, and were orally administered AA daily for 30 d from PD 14. The learning and memory of the mice were evaluated with the Morris water maze test. HE staining was used to analyze the pyramidal layer structure in the CA1 and CA3 regions.
RESULTS
Pretreatment of SH-SY5Y cells with AA (0.1-100 nmol/L) attenuated the toxicity induced by 10 mmol/L glutamate in a concentration-dependent manner. AA (10 nmol/L) significantly decreased apoptotic cell death, reduced ROS, stabilized the MMP, and promoted the expression of PGC-1α and Sirt1. In the mouse model, oral administration of AA (100 mg/kg) significantly attenuated cognitive deficits in the Morris water maze test and restored lipid peroxidation, glutathione, and the activity of SOD in the hippocampus and cortex to control levels. AA (50 and 100 mg/kg) also attenuated neuronal damage of the pyramidal layer in the CA1 and CA3 regions.
CONCLUSION
AA attenuates glutamate-induced cognitive deficits in mice and protects SH-SY5Y cells against glutamate-induced apoptosis in vitro.
Introduction
Human neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis, are characterized by the progressive dysfunction and loss of neurons, resulting in particular neurological deficits [1] . Glutamate (Glu)-induced excitotoxicity plays an important role in the pathogenesis of these diseases [2,3] . Glu-induced neuronal death is initiated by overstimulation of N-methyl-D-aspartate (NMDA) receptors, resulting in an increase in intracellular free calcium, followed by the activation of catabolic enzymes and an intracellular cascade of cytotoxic events.
Recently, an increasing number of studies have found that mitochondria -organelles that are vitally important for controlling cell life and death -are involved in Glu-induced excitotoxicity because they possess a large capacity for calcium uptake in response to elevated intracellular calcium, ultimately resulting in mitochondrial Ca 2+ overload. Mitochondrial Ca 2+ overload may activate neuronal cell death through the release of pro-apoptotic factors and increased generation of reactive oxygen species (ROS) [4,5] . The extent to which ROS and subsequent oxidative stress may play an essential role in Glu toxicity in both acute insults, such as ischemia [6] , and chronic neurodegenerative diseases [7] has been investigated. Thus, oxidative damage, destruction of calcium homeostasis, and mitochondrial dysfunction are essentially consequences of Glu-induced excitotoxicity. Consistent with these findings, antioxidants and mitochondrial nutrients [8,9] may be promising candidates for the prevention and treatment of these diseases.
Centella asiatica has long been used in Ayurvedic medicine and traditional Chinese medicine to treat various ailments and to enhance memory. Recent findings suggest that Centella asiatica has cognition-enhancing properties through its ability to protect against oxidative stress [10,11] , reduce the extent of mitochondrial damage [12] and increase axonal regeneration and neurite elongation [13] . Asiatic acid (AA) is a pentacyclic triterpene found in Centella asiatica. Our previous studies demonstrated that AA could attenuate H 2 O 2 - or rotenone-induced neural injury due to its protection from mitochondrial membrane depolarization [14] . AA also showed protective effects against Glu- and Aβ-induced neurotoxicity [15,16] . Moreover, AA may be an effective agent for treating cerebral ischemia [17] . Therefore, AA is interesting as a candidate for potential application in the treatment of neurodegenerative diseases.
In this study, we used an in vitro model of Glu-induced excitotoxicity in SH-SY5Y cells and an in vivo dementia model of perinatal monosodium glutamate (MSG) exposure [18] to investigate the neuroprotective functions of AA and its possible mechanisms of action.
Materials
Glu, MSG, AA and MTT were purchased from Sigma (St Louis, MO, USA). Minimum Essential Medium (MEM), Nutrient Mixture Ham's F-12 (F12), nonessential amino acids and trypsin were purchased from Gibco BRL (Grand Island, NY, USA). Fetal bovine serum (FBS) was obtained from Sijiqing Biological Engineering Materials (Hangzhou, China). DCFH-DA and the BCA Protein Quantitative Analysis Kit were purchased from Beyotime (Nantong, China). Anti-β-actin primary antibody was purchased from Abcam (Cambridge, MA, USA). Anti-PGC-1α and anti-Sirt1 antibodies were purchased from Santa Cruz Biotechnology (San Diego, CA, USA), and all secondary antibodies were purchased from Boster Biological Technology (Wuhan, China). All other reagents were purchased from commercial suppliers and were of standard biochemical quality.
Culture of SH-SY5Y Cells
Human neuroblastoma SH-SY5Y cells (a gift from Dr Zun-ji KE, Institute for Nutritional Sciences, Chinese Academy of Sciences, Shanghai, China) were maintained in MEM/F12 medium, supplemented with 1% nonessential amino acids and 10% FBS, 100 U/mL penicillin and 100 U/mL streptomycin at 37 °C in 5 % CO 2 . The cells were passaged once every 3 d.
MTT assay
To determine cell viability, the MTT assay was used. SH-SY5Y cells were cultured in 96-well plates at a seeding density of 3000 cells per well. Twenty-four hours later, the cells were treated with AA for 24 h and then exposed to the same fresh medium containing 10 mmol/L Glu for 24 h. Next, 100 μL MTT (1 mg/mL) was added to each well, and the cells were incubated for 4 h at 37 °C. After incubation, dimethyl sulfoxide (DMSO, 100 μL) was added to each well to dissolve the precipitate. The absorbance was read with a microplate reader (Molecular Devices, Sunnyvale, CA, USA) at 570 nm.
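As an illustration of how such readings are typically converted to viability, the following is a minimal Python sketch (assumed, not taken from the paper): the 570 nm absorbance is blank-corrected and normalized to untreated control wells. The numbers and the blank-subtraction step are hypothetical.

import numpy as np

def viability_percent(a570_treated, a570_control, a570_blank=0.0):
    # Blank-corrected absorbance of treated wells, as a percentage of the
    # mean blank-corrected absorbance of untreated control wells.
    treated = np.asarray(a570_treated) - a570_blank
    control = np.mean(np.asarray(a570_control)) - a570_blank
    return 100.0 * treated / control

# Hypothetical triplicates: 10 mmol/L Glu-treated wells vs untreated controls.
print(viability_percent([0.42, 0.45, 0.40], [0.90, 0.88, 0.93], 0.05).round(1))
# -> approximately [43.4, 46.9, 41.0] percent of control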
Flow cytometric analysis
Briefly, following drug treatment, the cells were harvested and washed twice with ice-cold PBS. The presence of apoptotic cells that expose phosphatidylserine on their outside surface was determined by an Annexin V-FITC Apoptosis Kit (Calbiochem, San Diego, CA, USA). Annexin V-FITC (1.25 μL) was added to 500 μL of 1× Annexin V-FITC binding buffer, and the cells were incubated at room temperature for 15 min in the dark. After washing the cells with 1× binding buffer, 10 μL propidium iodide (PI) was added to the binding buffer, and the cells were analyzed with a flow cytometer (Beckman-Coulter MoFLo XDP, Fullerton, CA, USA). The percentages of apoptotic and necrotic cells were estimated for each sample.
Mitochondrial membrane potential (MMP) assay
The MMP was determined with the fluorescent dye JC-1 (Molecular Probes, Eugene, OR, USA). The cells were seeded in 24-well plates at a density of 3×10 4 cells/mL. The fluorescence intensity was measured immediately following JC-1 staining (2.5 μg/mL JC-1 at 37 °C for 30 min) with fluorescence spectrometry (Molecular Devices, Sunnyvale, CA, USA; Ex 488/Em 535 for JC-1 green and Ex 488/Em 595 for JC-1 red) and fluorescence microscopy (Nikon TE2000 inverted microscope, Tokyo, Japan).
Hoechst 33342 staining
The cells were cultured in 24-well plates at a density of 3×10 4 cells/mL. The cells were fixed in 4% paraformaldehyde for 30 min at room temperature. After staining with 10 μg/mL Hoechst 33342 for 10 min, the cells were observed under a fluorescence microscope.
Intracellular ROS determination
ROS were measured with the non-fluorescent probe DCFH-DA. The cells were incubated with DCFH-DA at 37 °C for 30 min, and the distribution of DCF fluorescence produced by 1×10 4 cells was detected with a fluorescence microscope or FACScan cytometer at an excitation wavelength of 488 nm and an emission wavelength of 535 nm.
Western blot analysis
After treatment as described for the MTT assay, 1×10 6 cells were collected and subjected to Western blot analysis. The cell proteins were extracted and quantified with a BCA Protein Quantitative Analysis Kit. After addition of the sample loading buffer, protein samples were electrophoresed on 8%-12% gels, transferred to membranes, and incubated with primary antibodies. The membrane was washed three times for 5 min each using PBST (PBS and 0.1% Tween 20). The membrane was then incubated in the appropriate HRP-conjugated secondary antibody at room temperature for 2 h. The immunoreactive protein was visualized using the chemiluminescent reagent ECL (Pierce Biotechnology, Rockford, IL, USA) according to the manufacturer's protocol.
Animals
Neonatal mice at postnatal day (PD) 7 were procured from the Comparative Medicine Center (Yangzhou University, Yangzhou, China) and housed in cages at an ambient temperature of 25 °C with 12 h light/dark cycles. Food and water were freely available.
Drug administration
The animals were randomly assigned to drug or control groups (n=10 in each group). MSG was dissolved in 0.9% sodium chloride (NaCl). In the drug groups, neonatal mice were given MSG subcutaneously (in the neck, 2.5 mg/g body weight) from PD 7 to 13, as previously described [19] . The control pups received equal volumes of 0.9% NaCl. On PD 28, the animals were weaned, and animals of the same sex that had been subjected to the same treatment were housed together. AA was suspended in 0.5% carboxymethylcellulose and administered by oral gavage. The animals were divided into four experimental groups: control, MSG, MSG+50 mg/kg AA, and MSG+100 mg/kg AA. The AA doses used in this study were chosen on the basis of previously published experiments [17] . AA was administered after MSG treatment, starting on PD 14 and daily thereafter for 30 d. The mice then underwent the Morris water maze test. All protocols described were reviewed and approved.
Morris water maze test
The animals were tested with a spatial version of the Morris water maze [20] . It consisted of a circular water tank (90 cm diameter, 50 cm height) that was partially filled with water (25±2 °C). Black ink was used to render the water opaque.
Prior to the water maze testing, all mice were habituated to the water by being allowed to swim freely without a platform present. The pool was in the center of a room containing various salient visual cues and was divided virtually into four equal quadrants, labeled N (north), S (south), E (east), and W (west). The cues remained constant throughout the testing process. An escape platform (6 cm diameter) was hidden 1 cm below the water surface in one of the four maze quadrants (the target quadrant). The platform remained in the same quadrant during the entire experiment. The training consisted of 4 trials per day for 4 d with each trial having a time limit of 60 s and with an interval between trials of approximately 60 s. Each mouse had to swim until it climbed onto the submerged platform. After climbing onto the platform, the animal remained there for 30 s before the commencement of the next trial. If the mouse failed to reach the escape platform within the maximally allowed time of 60 s, it was gently placed on the platform and allowed to remain there for 30 s, and the time to reach the platform (latency) was recorded as 60 s. On the fifth day, a spatial probe test was conducted. Each mouse was given one 60 s retention test trial in which the platform had been removed from the tank. The time spent in the target quadrant was recorded. The time spent in the target quadrant indicates the degree of memory consolidation that took place after learning.
Protein, lipid peroxidation, glutathione and superoxide dismutase assays
After completing the Morris water maze test, the animals were sacrificed and their brains were quickly removed to dissect the hippocampus and cerebral cortex. The dissected tissues were homogenized in 0.1 mol/L phosphate buffer (PB, pH 7.4). The homogenate was used to estimate the amounts of protein, lipid peroxidation, glutathione, and superoxide dismutase using kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). The homogenate was centrifuged for 30 min at 3000×g at 4 °C, and the supernatant was used for the enzyme assays. Glutathione (GSH) levels were determined using the DTNB-GSH reductase recycling method [21] . The levels of malondialdehyde (MDA), an intermediate product of lipid peroxidation, were determined with the thiobarbituric acid (TBA) reaction [22] . The protein content was measured by the method of Bradford [23] using bovine serum albumin as a standard.
Histological analysis by hematoxylin-eosin (HE) staining
The hippocampi of two mice from each group were chosen for hematoxylin-eosin (HE) staining. The mice were sacrificed and immediately transcardially perfused with 0.1 mol/L phosphate buffer, pH 7.4, followed by freshly prepared 4% paraformaldehyde in 0.1 mol/L phosphate buffer. The brains were removed and fixed in 4% paraformaldehyde in 0.1 mol/L phosphate buffer (pH 7.4) at 4 °C for more than 24 h. Coronal blocks were embedded in paraffin for staining. The hippocampi stained with HE were analyzed under a microscope at 400× magnification.
Statistical analysis
The data were expressed as the mean±SD and were analyzed using a one-way factorial analysis of variance (ANOVA). Tukey's test was then performed to compare treated samples, and differences were considered significant when P<0.05.
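As an illustration of this analysis pipeline, the following is a minimal Python sketch using synthetic data (not the study's measurements): a one-way ANOVA followed by Tukey's test at the stated significance level. The group names and values are hypothetical.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic escape latencies (s) for three hypothetical groups.
rng = np.random.default_rng(1)
groups = {"control": rng.normal(20, 3, 10),
          "MSG": rng.normal(35, 3, 10),
          "MSG+AA": rng.normal(25, 3, 10)}

# One-way ANOVA across the groups.
f, p = f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, P = {p:.4f}")

# Tukey's post hoc test for pairwise comparisons, significant at P < 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))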
AA attenuates Glu-induced toxicity in SH-SY5Y cells
The treatment of SH-SY5Y cells with Glu (8 or 10 mmol/L) for 24 h markedly reduced cell viability (Figure 1A). A concentration of 10 mmol/L Glu was chosen for our subsequent experiments. To examine the neuroprotective effects of AA, the cells were preincubated with different concentrations of AA (0.01-100 nmol/L) for 24 h, followed by exposure to 10 mmol/L Glu for 24 h. AA provided protection against Glu-induced injury, and the strongest protective effect was achieved with 10 nmol/L AA (Figure 1B).
Effects of AA on Glu-induced apoptosis
Apoptosis was assessed using Hoechst staining and flow cytometry. As shown in Figure 2A, the exposure of cells to 10 mmol/L Glu resulted in chromatin condensation but not DNA fragmentation. A similar form of chromatin condensation has recently been observed in HT22 cells and cerebellar granule neurons exposed to Glu [24,25] . Pretreatment with AA alleviated Glu-induced nuclear morphological alterations. The flow cytometry results (Figure 2C) demonstrated that stimulation with 10 mmol/L Glu produced apoptosis in 7.38% of the cells compared with 1.28% in the control group. Treatment with 10 nmol/L AA reduced the incidence of apoptosis to 2.98%.
Effects of AA on Glu-induced ROS generation and loss of MMP
Glu-induced excitotoxicity is associated with increased ROS production and depolarization of the mitochondrial membrane [26,27] . We sought to determine whether AA has the ability to modulate the mitochondrial membrane potential and levels of intracellular ROS following excitotoxic stimulation. As shown in Figure 3B, FACS analysis revealed that 10 mmol/L Glu treatment increased ROS levels compared with the control group, whereas 10 nmol/L AA pretreatment significantly reduced Glu-induced ROS generation. Representative fluorescence photomicrographs (Figure 3A) were consistent with the FACS results. To assess the effect of AA on the changes in the MMP induced by Glu, fluorescence spectrometry and fluorescence microscopy analyses were performed using JC-1 staining. Glu induced a decline in the MMP compared with the control. This decline in MMP was prevented by AA (Figures 3C, 3D). These results indicate that AA may prevent Glu-mediated neurotoxicity partially through a reduction in ROS production and a restoration of the MMP.
Effects of AA on the expression of PGC-1α and Sirt1
The silent information regulator 2 family of proteins (sirtuins) are NAD-dependent deacetylases that are believed to regulate survival and longevity [28] . Mammalian species have seven different sirtuin family members, with the closest relative of the yeast sirtuin being Sirt1. Its beneficial role in neurodegenerative diseases has been studied extensively, and it holds great potential as a therapeutic target for neurodegeneration [29] . Peroxisome proliferator-activated receptor γ coactivator 1α (PGC-1α) controls mitochondrial biogenesis and function [30] and thereby plays an important role in brain energy homeostasis and neurodegenerative diseases [31,32] . Recently, an increasing number of studies have indicated that Sirt1 can interact with and regulate the activity of PGC-1α [33,34] . Because of these findings, we examined whether AA affects the expression of Sirt1 or PGC-1α in Glu-treated cells. According to Western blot analysis (Figure 4), cells preincubated with AA showed an upregulation of Sirt1 and PGC-1α compared with the Glu-only group, and this upregulation may have prevented Glu-induced injury in these cells.
Effects of AA on MSG-induced cognitive deficits
The amount of time mice required to find the hidden platform during the acquisition phase of the water maze experiment is presented in Figure 5A. The mean latencies for all groups were similar on the first day; in the following days, controls rapidly improved to locate the hidden platform, whereas the MSG group tended to require more time than controls. The difference in latency to locate the platform between the MSG and control groups was significant by d 3 and 4, whereas improvement of the AA-treated group versus the MSG group became significant on d 4. The MSG-treated animals showed a reduced ability to find the platform, and this poorer performance was partially prevented by chronic treatment with AA. Animals administered the high dose of AA showed a better capacity to reach the platform than those administered the low dose. In the spatial probe on d 5 of the trial, in which the platform was removed and mice were given one 60 s retention test trial, the MSG group spent less time in the platform quadrant than the control group (Figure 5B). In contrast, the mice treated with AA spent a significantly longer time in this quadrant than the MSG group. The 100 mg/kg dose was associated with better memory consolidation.
Effects of AA on MSG-induced oxidative stress
It has been suggested that MSG can induce oxidative stress in the rat brain and that antioxidants may be effective at ameliorating this effect [35] . Therefore, we investigated the levels of lipid peroxidation and the activity of antioxidant enzymes in the hippocampus and cortex. The MSG-induced increases in lipid peroxidation [determined using malondialdehyde (MDA)] were attenuated by AA treatment, and GSH levels and SOD activity were restored (Table 1).
Effects of AA on MSG-induced injury of the hippocampus
Neonatal MSG treatment produces degenerative changes in the developing brain; many studies have found that the injury is attributable to the destruction of the hippocampus [36] . In HE-stained sections, neuronal damage was evident in the MSG group. The pyramidal layered structure disintegrated, and neuronal loss was found in the CA1 region. Neurons with pyknotic or shrunken nuclei were also observed in the CA3 region (Figure 6). These injuries were significantly attenuated by AA treatment.
Discussion
In the present study, we report that AA protected SH-SY5Y cells from Glu-induced injury in vitro and improved learning and memory deficits in the MSG-induced dementia animal model in vivo.
First, our data show that AA significantly protected cells from Glu excitotoxicity (Figure 1), while AA itself caused no conspicuous alterations in the growth of SH-SY5Y cells (data not shown). The neuroprotective effect of AA in the MTT assay paralleled the morphological analyses obtained with Hoechst 33342 staining and the flow cytometry assay. In addition, an important result was that 0.1-100 nmol/L AA acted against Glu toxicity; however, with increasing concentrations of AA, the protective ability decreased and toxic effects even appeared (data not shown). This is because, at high doses, AA can induce apoptosis via the activation of caspase-9 and -3 and increased intracellular free Ca 2+ [37,38] .
The overactivation of glutamate receptors has been reported to induce an excessive influx of Ca 2+ , followed by depolarization of the mitochondrial membrane and increased production of ROS [26] . Mitochondria are known to generate ROS due to mitochondrial electron flow in the respiratory chain [39] . Meanwhile, mitochondria themselves are vulnerable to ROS, and excessive ROS can induce mitochondrial damage. This interaction between mitochondrial dysfunction and ROS generation may contribute to an understanding of why AA inhibits cell death during neurotoxicity. On the basis of the above discussion, reducing intracellular ROS levels and restoring the MMP significantly affect the neuroprotective function of AA. Our data suggest that, at a concentration of 10 mmol/L, Glu increases ROS levels. Pretreatment with AA reduced ROS levels in Glu-injured cells (Figure 3). These results are consistent with those of our previous study, which showed that AA has hydroxyl radical-scavenging activity in cell-free systems [40] . The disruption of the MMP may lead to cytochrome c release and activation of caspases, which may lead to cell death. In our experiments, AA attenuated the decline of the MMP induced by Glu. Our data corroborate the results of a previous study, which reported that AA prevents the collapse of the MMP in rotenone-induced neuronal damage [14] and in the oxygen-glucose deprivation (OGD) cell culture model of ischemia [17] .
Recently, an increasing number of studies have indicated that the regulation of mitochondrial biogenesis may be beneficial for neuronal recovery and survival in neurodegenerative disorders [32,41,42] . PGC-1α has been shown to be a master regulator of mitochondrial biogenesis and cellular energy [43] , and it powerfully suppresses reactive oxygen species (ROS) in vivo [32] . Furthermore, PGC-1α knockout mice are much more sensitive to damage by oxidative stress, displaying apoptotic cell death in the dopaminergic cells of the substantia nigra and in hippocampal neurons [32] . From these findings, it seems reasonable that therapeutic agents that activate PGC-1α could successfully treat those neurodegenerative diseases in which mitochondrial dysfunction and oxidative damage play an important pathogenic role. Sirt1 -an NAD-dependent deacetylase that has been linked to longevity -interacts with and regulates the activity of PGC-1α [34,44] . It has been reported that Sirt1 can deacetylate transcription factors such as p53 and the forkhead transcription factor (FOXO) family of proteins and thereby reduce p53- and FOXO-induced apoptosis [45,46] . Thus, Sirt1 may be another agent capable of playing a therapeutic role in neurodegenerative disease [29] . Given the considerable effects that PGC-1α and Sirt1 have on neuronal function, we examined whether they mediate the neuroprotective effects of AA. A novel finding is that pretreatment with AA (0.1-10 nmol/L) prior to Glu stimulation dose-dependently increased the expression of both PGC-1α and Sirt1. This result is consistent with the finding that Sirt1 can regulate PGC-1α, as mentioned above. Thus, it is possible that the upregulation of PGC-1α and Sirt1 is responsible for the neuroprotective effects of AA. Another neuroprotective agent, resveratrol, also activates the Sirt1 pathway [47,48] . Interestingly, we also found that after exposure to Glu alone, the cells showed a slight increase in PGC-1α. This change in PGC-1α may reflect the cell's intrinsic response to stressful stimuli, but AA pretreatment increased the expression of PGC-1α much more than Glu stimulation alone. This is consistent with a previous report that NMDA can directly induce the expression of PGC-1α in neuronal cells [49] . Furthermore, the neuroprotective function of AA ameliorated MSG-induced cognitive deficits in vivo. The neonatal administration of MSG causes cognitive deficits in adult animals [50][51][52] . The Morris water maze is a commonly used method for evaluating learning and memory. MSG-induced deficits in learning and memory were revealed in the mice, and memory was enhanced by AA. The hippocampus has an important role in spatial learning and memory [53] . Damage to and morphological changes of the CA1 hippocampal structure have been described in MSG-treated animals [36,54] . Our data confirmed that the CA1 hippocampal structure underwent disintegration. We also found that the CA3 pyramidal neurons became pyknotic or shrunken, and this damage was significantly prevented by simultaneous administration of AA. However, the CA3 hippocampal neurons appear to be less damaged when analyzed in terms of tissue volume and cell numbers rather than morphology [54] . Previous studies have suggested that the oxidative stress induced by MSG might contribute to hippocampal impairment [51,55] . AA is viewed as a promising antioxidant candidate; it sufficiently abated oxidative stress in MSG-treated rodents by enhancing both GSH levels and SOD activity and by reducing MDA levels.
Therefore, it is hypothesized that the attenuation of oxidative stress, following induced hippocampal damage, is responsible for the effects of AA against MSG-induced dementia.
In conclusion, our data support the notion that AA attenuates cognitive deficits in an animal model of MSG-induced dementia. In addition, AA offered beneficial effects in Glu-induced cellular injury by suppressing oxidative stress and protecting mitochondria. Taking these in vitro and in vivo results together, and especially considering the protection of mitochondria by AA, AA and related compounds may be developed into a new therapeutic approach for preventing and/or treating neuronal damage and degenerative disorders.
"year": 2012,
"sha1": "47217750d2f06425f799026f7763edd288aece64",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/aps20123.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "47217750d2f06425f799026f7763edd288aece64",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
95346308 | pes2o/s2orc | v3-fos-license | Exploring the dose response of radiochromic dosimeters
The aim of this study was to explore the dose response of a newly developed radiochromic hydrogel dosimeter based on leuco malachite green dye in a gelatine matrix. The original dosimeter composition was first investigated in terms of dose response and dose-rate dependence. In addition, the initiating compounds producing chlorine radicals were substituted with compounds producing fluorine radicals, oxygen-centered radicals, carbon-centered radicals, and bromine radicals. Also, the surfactant was substituted by other compounds of different molecular size and charge. The original composition gave a dose response of 3.5·10⁻³ Gy⁻¹ cm⁻¹ at 6 Gy/min, with a dose-rate dependence giving a 27% increase when decreasing the dose rate to 1 Gy/min. None of the substituted initiating components contributed to an increase in dose response, while only one surfactant increased the dose response slightly.
Introduction
Radiochromic dosimeters are promising tools for three-dimensional dosimetry [1] since they i) are insensitive to oxygen, ii) offer non-scattering dose response, and iii) are well suited for optical readout [2]. In recent years, new dosimeter compositions have been proposed based on leuco malachite green dye suspended in a gelatine gel [3,4]. However, the dosimeters suffer from low dose response, dose-rate dependence, and a relatively high auto-oxidation. To assess these issues we have in this study investigated the effect on the dose response when substituting chemical components of the dosimeter. Two components were substituted, the initiator in order to change its reactivity, as well as the surfactant to change the micelle structures.
Dosimeter fabrication
The original composition used in this study is based on that proposed in [4]. It consists of 6% (w/w) gelatine, which forms the matrix of the volume dosimeter. The active component is 0.37 mM leuco malachite green (LMG) dissolved in 80 mM trichloromethane (CHCl 3 ), while 5 mM trichloroacetic acid (TCA) was added as an initiator. To dissolve the LMG and CHCl 3 in the gelatine solution, 50 mM sodium dodecyl sulfate (SDS) was added as a surfactant. Since SDS consists of a hydrophilic head region and a hydrophobic tail, these molecules form so-called micelles, making it possible to dissolve the non-polar LMG and CHCl 3 in an aqueous solution. A description of the procedure for manufacturing the dosimeter can be found in [4]. The gel was prepared in standard PMMA cuvettes (1×1×4.5 cm) and placed in a refrigerator until the next day.
Substituting initiator
The initiator, TCA, was substituted to investigate the effect of changing the reactivity of both the compound and the radicals produced. TCA was substituted with trifluoroacetic acid (TFA) to yield fluorine radicals, 2,4-pentanedione peroxide (abbreviated 2,4-peroxide) and hydrogen peroxide (H 2 O 2 ) to yield oxygen-centered radicals, 2,2'-azobis(2-methylpropionamide) dihydrochloride (abbreviated 2,2-azo) and 4,4'-azobis(4-cyanovaleric acid) (abbreviated 4,4-azo) to yield carbon-centered radicals, and HBr and CBr 4 to yield bromine radicals. NaOH was added to very acidic gel formulations, since such acidic solutions gave a thick, opaque gel with two phases. The change in pH was not observed to affect the dose response for the original version.
In addition, a batch without TCA as well as batches with both TCA and CHCl 3 substituted with CCl 4 or dimethylformamide (DMF) were made to investigate the initiating effect of TCA and CHCl 3 .
Substituting the surfactant
Since LMG is expected to reside inside the micelles due to its non-polarity, the shape of the micelles might influence the sensitivity of the dosimeter. Therefore, the surfactant SDS was substituted in order to change the micelle size. It was hypothesized that larger heads or shorter tails of the surfactant would lead to smaller micelles, increasing the surface-to-volume ratio. Three surfactants were investigated: sodium dodecylbenzenesulfonate (SDBS), sodium octyl sulfate (SOS), and dodecyltrimethylammonium bromide (DTAB). SDBS contains a larger head region since, unlike SDS, a benzene ring is present in the head region, while SOS has a shorter tail than SDS. DTAB is similar to SDS but has a cationic head, in contrast to the anionic SDS.
Irradiation
Irradiation of the dosimeters was performed with 6 MV x-rays from a linear accelerator. The dosimeters were placed at a source-to-surface distance of 94.5 cm behind 5 cm of solid water. An additional 5 cm of solid water was placed behind the dosimeters to ensure backscatter. The dose rate was 6 Gy/min, and dose sequences up to 80 Gy were given.
Read-out and data analysis
The optical densities of the dosimeters were measured with a spectrophotometer (Helios Alpha, Thermo Spectronic) at 633 nm both a few hours before irradiation and about 16 hours after irradiation. The two measurements were subtracted in order to obtain the optical response, i.e. the optical density change caused by the irradiation. The dose response was then obtained by plotting the optical response as a function of dose and fitting to a linear equation. The dosimeters were assessed in terms of dose response as well as transparency.
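As a concrete illustration of this analysis, the following is a minimal Python sketch with hypothetical readings (not the measured data): the pre-irradiation optical density is subtracted from the post-irradiation value, and the dose response is taken as the slope of a linear fit of optical response versus dose, per centimetre of optical path.

import numpy as np

# Hypothetical optical densities at 633 nm before and after irradiation.
dose_gy = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0])
od_pre = np.array([0.050, 0.052, 0.049, 0.051, 0.050, 0.053])
od_post = np.array([0.051, 0.088, 0.119, 0.192, 0.261, 0.332])

optical_response = od_post - od_pre  # OD change caused by the irradiation
slope, intercept = np.polyfit(dose_gy, optical_response, 1)

path_length_cm = 1.0  # 1x1 cm cuvette cross-section
print(f"dose response = {slope / path_length_cm:.2e} Gy^-1 cm^-1")
# -> ~3.5e-03 Gy^-1 cm^-1, of the same order as the original composition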
Results
The mean dose response of 9 batches of the original composition was found to be (3.5±0.1)·10⁻³ Gy⁻¹ cm⁻¹ at a dose rate of 6 Gy/min, which increased by 27% when the dose rate was decreased to 1 Gy/min. When TCA was omitted, the dose response was (2.78±0.05)·10⁻³ Gy⁻¹ cm⁻¹, and when both TCA and CHCl 3 were substituted by DMF or CCl 4 , dose responses of (1.7±0.2)·10⁻³ Gy⁻¹ cm⁻¹ and (1.72±0.07)·10⁻³ Gy⁻¹ cm⁻¹, respectively, were obtained. The dose responses of all other dosimeter formulations, where the initiator TCA was substituted, are summarized in table 1.
When adding TFA at the same concentration as that used for TCA in the original formulation, a 12% lower dose response was observed. When the concentration was doubled, the gel turned opaque, but the transparency was improved by adjusting the pH with NaOH. The dose response, however, decreased with increasing concentration. Similar tendencies were observed for both HBr and 4,4-azo, with decreased dose responses of 3% and 18%, respectively. CBr 4 , 2,2-azo, and both peroxides gave lower responses even at the original TCA concentration. The highest dose response (a 15% increase compared with the original formulation), and the only increase compared with the original formulation, was obtained with SDBS, which, however, resulted in opaque gels.
Discussion
In this study we have explored the dose response of a radiochromic hydrogel dosimeter using a range of different compositions, including a range of initiators as well as surfactants. Initially, using the same composition as Vandecasteele et al. [4] resulted in a mean dose response 20% lower than that reported in [4]. However, this difference is probably partly due to a difference in the dose rate used. In addition, only half the dose response was observed with CCl 4 as initiator compared to the same study.
In [3], the surfactant Triton X-100 was used instead of SDS, which resulted in a dose response similar to Vandecasteele et al. [4]. In addition, in both [4] and [5], a dosimeter based on leuco crystal violet dye instead of LMG was investigated, resulting in higher dose responses. These are, however, still lower than that of the commercially available LMG-based polyurethane dosimeter Presage™, for which a dose response of (2.2±0.3)·10⁻² Gy⁻¹ cm⁻¹ has previously been reported for photon irradiation [6]. A similar dose response for Presage™ was obtained in [7], and that study showed, in addition, a high sensitivity to irradiation temperature. The dose-response measurements from [6] and [7], as well as all measurements in this study, were performed at room temperature. When modifying the original composition by omitting the initiator TCA, a 21% lower response than the original formulation was obtained. However, the fact that a response is observed even without TCA as initiator indicates that a considerable part of the LMG reaction is not caused by TCA but by CHCl 3 and other components. Substituting both TCA and CHCl 3 with DMF or CCl 4 decreased the dose response to half that of the original version. However, adding DMF or CCl 4 to the original composition (data not shown) did not considerably change the dose response, and it therefore seems that they do not contribute. A considerable part of the dose response therefore has to be ascribed to components other than TCA and CHCl 3 .
When substituting the initiator TCA with other compounds, the general trend was that the dose response decreased when the initiator concentration was increased. At the same concentration as TCA, all initiators gave a lower dose response, although considerably closer to the original response than at higher concentrations. It therefore seems that the chemical reaction scheme does not depend on the specific initiator used. The initiator is therefore probably not the limiting factor for the chemical reactions causing the dose sensitivity.
Of the three surfactants tested, only DTAB produced transparent gels, but these dosimeters gave a very low dose response. Only the surfactant SDBS gave a slightly higher dose response than the original version, but the gels were opaque at all concentrations. Large structures must therefore be present in the gel, possibly due to large micelles or micelle aggregates. Since opaque dosimeters cannot be used for optical read-out, this was not investigated further.
Conclusion
The measurements in this study indicate that while a part of the dose response originates from the initiator TCA and from CHCl 3 , a considerable part originates from other components. In addition, it was not possible to increase the dose response by substituting these two compounds. When substituting the surfactant, a small increase was observed, and it cannot be concluded whether the micelle structures are connected with the limitation in dose response. However, given the dose-rate dependence, it seems that the dose rate and the reaction rates giving the dose response are on similar time scales.
Acknowledgement
Anastasia Bochenkova, Grethe Vestergaard Jensen and Jan Skov Pedersen, Aarhus University are gratefully acknowledged for their contribution with knowledge regarding radical and micelle chemistry. This work was supported by CIRRO -The Lundbeck Foundation Center for Interventional Research in Radiation Oncology and The Danish Council for Strategic Research. | 2019-04-05T03:33:53.328Z | 2013-06-26T00:00:00.000 | {
"year": 2013,
"sha1": "af037cc8853d56b37e63e8557ba4a9e25082b685",
"oa_license": "CCBY",
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/444/1/012036/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "093d9debb73f1a90261dde7b3610a00a17f0b337",
"s2fieldsofstudy": [
"Medicine",
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
4534580 | pes2o/s2orc | v3-fos-license | Clinical utility of foam dressings in wound management: a review
Background: The management of chronic wounds is a significant medical burden associated with large health care expenditures. Since the establishment of moist wound healing in the 1960s, several types of wound dressings have been developed. However, the evidence for effectiveness when comparing various types of wound dressings is limited. Objectives: The purpose of this review is 1) to provide a general description of the role of foam in wound therapy and 2) to evaluate the evidence for effectiveness of foam dressings compared to other frequently used products. Summary and conclusion: Foam has a significant role in the clinical management of chronic wounds and in moist wound healing. There are only a few randomized controlled trials, which in general, show no significant difference in the healing effect of different dressing types. The choice of wound dressing should therefore be based on clinical evaluation of the wound and the periwound skin.
Introduction
Research in wound pathology and healing is complex and extensive [2,3]. However, studies rigorously examining the optimal choice of wound dressing in randomized controlled trials are limited, and data from benchwork (absorption rate, total capacity, vapor transmission, etc) may not always correlate with clinical efficacy. The current clinical use of wound dressings is therefore largely guided by a constellation of consensus agreements, local preferences, and financial considerations. The economic burden of chronic wound management is massive and continues to grow with the expanding aging population in most industrialized countries. The cost of various wound dressings varies greatly. To optimize health care spending, it is important to carefully evaluate the effectiveness of different wound dressings combined with cost-benefit analysis. This review is not intended as an exhaustive, systematic review of the literature on current evidence for usage of foam products. Rather, we present an overview of the evidence in the clinical literature, thereby providing a basis for clinicians to make choices in their daily practice of chronic wound management.
Burden of chronic wounds
Chronic wounds are associated with significant health care costs [6]. In industrialized countries, the venous leg ulcer is the most frequent chronic wound, with an estimated prevalence of 0.1%-0.3% [7,8]. In approximately 25%-50% of patients, the venous leg ulcer persists for over 1 year [10,11], and two-thirds of these patients will have a recurrence within a 5-year period [8]. Chronic wounds are estimated to have a prevalence of 1% in the general population and up to 3%-5% in the senior population aged 65 years and older [13]. Estimates of the yearly cost of health care services and products for patients with chronic wounds range from approximately €1,300 to 2,600 (Sweden, 2006 expenditure estimates) [18]. Including indirect and intangible costs to the individual and society, the current estimated cost-of-illness is approximately €9,000 per year per patient in Germany [19].
Wound healing and the biological role of exudate
Wound healing is a complex process resulting in repair of a skin defect by re-epithelialization and scar formation. Wound healing is traditionally divided into three overlapping stages: inflammation, granulation, and maturation [20]. In the initial inflammation stage, the immune system is activated by release of cytokines, and inflammatory cells are recruited to the wound site. The inflammation causes increased capillary permeability and accumulation of exudate fluid in the wound bed [21]. In addition to cytokines, the exudate contains plasma components, growth factors, proteases, and protease inhibitors. In the inflammatory stage, the role of the exudate is to promote tissue debridement and clearing of infection, which in turn prepares the wound bed for re-epithelialization by formation of provisional matrix [22]. In the granulation and maturation stages, the degree of inflammation and exudate formation decreases, which allows the healing process to progress.
In chronic wounds, the inflammatory stage is commonly maintained due to an underlying pathology (eg, venous insufficiency, diabetes, or autoimmune disease) or complicating factors such as a secondary infection or formation of biofilms. Comparative proteomic analysis of wound exudate from venous leg ulcers shows that nonhealing wounds express proteins involved in inflammation and tissue destruction, while healing venous leg ulcers are characterized by expression of proteins involved in tissue formation [23]. Moreover, exudate from chronic wounds also decreases proliferation of keratinocytes, fibroblasts, and endothelial cells [24][25][26][27][28], while exudate from acute wounds stimulates proliferation [29]. These findings suggest that optimal management of the exudate plays an important role in stimulating the progression from the inflammatory stage to the granulation stage of chronic wounds.
General considerations for chronic wound management
Treating chronic wounds is dependent on proper identification and treatment of underlying causes including, complicating metabolic factors and optimizing the local wound environment.
Underlying conditions in patients with nonhealing ulcers
Patients with venous and arterial insufficiency must have appropriate diagnostic testing performed (ie, sonography including duplex scans, and arteriography, respectively) and, if indicated, relevant venous and revascularization surgical interventions. Patients with diabetic foot ulcers typically require multidisciplinary care including optimal diabetes regulation, wound therapy, and adapted footwear. Healing of pressure ulcers requires meticulous care focused on shifting the bodyweight to relieve pressure from the ulcer (and other skin areas at risk) combined with wound care. Patients with suspected immunological ulcers (eg, vasculitis and pyoderma gangrenosum) or unusual wounds require vigorous investigation at specialized centers to determine the underlying cause. These patients often require systemic immunosuppressive therapy and close clinical follow up. In addition to treating the underlying disease, it is essential to correct complicating metabolic factors such as anemia, malnutrition, vitamin and mineral deficiencies, infection, and poorly regulated blood glucose levels.
Edema
Chronic wounds are associated with edema formation either as a primary event (as seen in venous leg ulcers) or secondary to the inflammatory process. The edema may be localized to the wound and periwound area or extend well beyond the wound. The edema inhibits healing and increases the risk of eczema and secondary infection. Treating the edema is typically achieved by the application of circular compression bandages or use of individually fitted compression stockings. More complicated states of edema may benefit from sustained or intermittent pneumatic compression.
Local wound management
Optimal wound healing is based on the principle of a moist wound environment (with the exception of dry gangrene), requiring optimal control of autolysis and debridement, exudate, infection, the periwound skin, and edema. Dry and crusted wounds can be hydrated using gels and occlusive dressings retaining moisture. During the inflammatory stage, exudate production can be very high, necessitating the use of wound dressings with high absorptive capacity and frequent changes of the wound dressing. In the granulation and maturation stages, the choice of dressing and absorptive capacity is adjusted to match the normally reduced exudate production rate, allowing the wound bed to stay sufficiently hydrated. To address the common issues of infection, pain, and odor, various types of wound dressing containing silver, ibuprofen, and charcoal have been developed -all of which are commonly used.
Complications to wound management
Several complications can arise when wound management is not optimal. A mismatch or imbalance between the absorptive capacity of the foam and exudate formation may lead to drying of the wound bed. This can then lead to adherence of the foam to the wound bed, resulting in pain and trauma upon removal of the dressing.
Conversely, a relative overproduction of exudate may cause leakage of wound exudate, leading to unnecessarily frequent dressing changes, as well as trauma, irritation and eczema of the surrounding skin, infection, and foul odor. Infection or colonization with bacteria may also result in release of toxins causing irritation to the skin and wound bed. Contact allergy to modern wound dressings is very rare, with the exception of hydrocolloids (∼10%-17%) and silver-containing dressings (∼5%) [30]. Only a few cases of contact allergy to polyurethane foam have been reported. In these cases, contact allergy to chemicals used in the production of polyurethane foam was identified (eg, diphenylmethane diisocyanate, toluene diisocyanate, diaminodiphenylmethane) [31][32][33]. However, contact allergy to ingredients in other topical wound care products (eg, Balsam of Peru, lanolin, fragrance, triclocarban, and colophony) is frequent and has been reported in up to 57%-78% of patients [34,35]. Contact allergy should therefore always be considered as a possible complication if the patient develops persistent dermatitis.
Types of wound dressing
Various wound dressing products are available. These can be categorized based on composition, absorptive capacity, and specialized functionality. The most common types include petrolatum-impregnated gauze and knit viscose, with very low absorptive capacity. Polyurethane foam, silicone, hydrocolloid, hydrofiber, alginate, and advanced combination products are generally of high absorptive capacity. Also available on the market are hydrocellular and hydropolymer foams, which contain polyurethane combined with a wound contact layer of an apertured three-dimensional plastic net, and hydropolymer, respectively. Advanced products containing growth factors and bioengineered epidermal and dermal components are also available on the market. However, these advanced products still lack good evidence for effect and are expensive. They are therefore not commonly used in the general care of chronic wounds. Finally, so-called "negative pressure wound therapy" (NPWT) is a wound management technique where negative pressure is applied to the wound bed through an occluded polyurethane foam or gauze. NPWT is an active approach to exudate handling and wound closure. NPWT is common and well established in surgical wounds, while its role in the treatment of the various chronic wounds is less established [36,37]. The use of advanced foam products and NPWT is beyond the scope of this review and will therefore not be addressed further.
Foam types and indications for use
During the past 30 years, polyurethane foam has become one of the most commonly used wound dressings for exudate management in moist wound healing. Foam consists of a porous structure that is able to absorb fluids into air-filled spaces by capillary action (for detailed information on specific products, visit the respective company websites). The most commonly used foam is polyurethane. Silicone foam is less frequently used as the primary absorbent in wound dressings, but it is often applied as an adhesive wound contact layer. Foam dressings are produced with variable thickness and may be adhesive or nonadhesive. The foams are commonly supplied with a film backing, which has the purpose of providing a water- and microbial-resistant barrier to the environment. The film backings have variable permeability, affecting the capacity for water evaporation and gas exchange. Other types of wound dressings, eg, hydroactive polymers and colloids, are also commonly used in wound therapy. These non-foam materials absorb fluids by expansion, during the binding of fluids into the polymer or colloid, resulting in a gel-like substance.
The wound contact layer of foam products is particularly important because it both facilitates transport of the exudate into the foam and comes into contact with the periwound skin. Adhesion to the surrounding skin helps to keep the dressing in place and prevents exudate from traveling along the skin, thereby preventing skin irritation and leakage. However, adhesion to the skin may cause irritation, especially if the skin is fragile or if the wound dressing requires frequent changing.
Self-adhesive polyurethane foam and silicone adhesive have been shown to be the least traumatic to the stratum corneum, while acrylic adhesive (used in composite hydrocolloid and polyurethane foam) is more traumatic [38]. In general, foam closely complies with the so-called Turner criteria [39] for an ideal wound dressing, which include 1) the ability to maintain moisture at the wound bed; 2) being easy to remove and being able to protect the skin around the wound; 3) protecting against bacteria and other infectious agents; 4) maintaining temperature; 5) providing mechanical protection and cushioning, and conforming to body shape; 6) being nontoxic and nonallergenic; 7) being easy to use; and 8) being economical and having a long shelf life.
Foams are used in the management of both acute and chronic wounds, of both partial and full thickness, and with medium to heavy exudate. Foams may be used as a primary dressing, or as a secondary dressing in combination with amorphous gels applied to the wound bed to provide moisture. The hydrogels are not absorbed into the foam due to their high viscosity. As mentioned previously, foams are also used in NPWT and are commonly used in combination with compression bandaging.
It should be noted that, in general, certain antiseptics may damage the foam product. It is therefore advisable to consult the product information before use of antiseptics (eg, iodine, chlorhexidine, hypochlorite, ether, hydrogen peroxide, oxygenated water, and sodium hypochlorite).
Foam and absorption rate
The absorption of the wound exudate is a key function of foam dressings. Ideally, the absorption rate and capacity of the foam dressing should balance with the exudate production of the wound. As described above, exudate formation varies depending on the wound type and stage of healing. The optimal dressing must therefore be chosen based on the rate of exudate formation in the individual wound, in order to avoid drying-out or maceration of the periwound skin. Product information regarding the absorption rate, evaporation rate, and total capacity of the individual dressings would therefore appear to be clinically useful [40,41]. However, individual factors, such as the overlaying compression bandage and the relative size of the wound compared to the foam, may also markedly affect the performance of the foam. On the one hand, compression may decrease the absorptive capacity due to compression of the air-filled spaces in the foam. On the other hand, compression may increase the absorption rate due to better wound bed contact. Variable ratios of foam surface area relative to wound surface area will also affect the effective evaporation. Thus, the choice of which foam will most ideally match the wound is complex and depends on the underlying basic characteristics of the patient's wound. The ability of a given foam to maintain the optimal level of moisture in the wound bed should be evaluated continuously to avoid complications from a mismatch of absorptive capacity to exudate production.
Results from clinical trials
Given the heavy health care cost involved in the treatment of chronic wounds, there is an increasing interest in investigating the efficacy of various wound dressings to identify products that most efficiently result in wound healing.However, many of the investigations conducted over the past 25 years have limitations in study design, which raise general concerns about bias, limited generalizability, and external validity, and thus the significance of the results.This may in part be due to difficulties in carrying out blinded investigations in this patient group and due to the variability in etiology and presentation of chronic wounds.In the following section, results from selected randomized controlled studies will be briefly discussed.
Venous leg ulcers
The effect of foam dressings compared to other types of wound dressing in patients with chronic venous leg ulcers has recently been examined. 42 In this systematic review of the literature, 12 randomized controlled studies were deemed of sufficient design quality to be included. The overall conclusion was that there was no significant difference in the effect on healing time, proportion of ulcers healed at 12 and 16 weeks, or healing rates when comparing polyurethane foam with hydrocellular polyurethane foam, [43][44][45] hydrocapillary dressings, 46 hydrocolloid, [47][48][49][50][51] paraffin gauze, 52,53 and knit viscose. 54 Although there were no significant differences in the primary outcomes of wound healing, there appeared to be significantly better exudate handling by polyurethane foam over hydrocellular foam, resulting in fewer problems with leakage, less frequent changes of the wound dressing, and lower material cost (n=60). 44 Similar results and limitations regarding exudate handling were also reported in one study comparing hydrocolloid with foam. 49 In addition, hydrocolloid was also evaluated as more troublesome to remove and thus more time consuming compared to polyurethane foam in two of the studies. 48,50 Pain and adhesion of the dressing to the wound bed were reported more frequently when using paraffin gauze compared to polyurethane foam (n=61). 52
Diabetic foot ulcers
The diabetic foot ulcer is the result of diabetic neuropathy and may be complicated by peripheral arterial disease. Diabetic foot ulcers affect approximately 15% of all patients with diabetes at some point during the course of their disease 55,56 and approximately 1%-4% of diabetics at any given time. 57,58 For diabetic ulcers, the data supporting the choice of optimal wound dressing are very limited, 59 and there is no evidence of a more favorable outcome when using foam products compared with gauze dressing [60][61][62] and/or hydrocolloid. 63 In one study from 1993, polyurethane foam was shown to be superior to alginate wound dressing after 12 weeks of follow-up, 64 while the same effect was not observed in another study from 1994 with 8 weeks of follow-up. 65
Pressure ulcers
Pressure ulcers are ischemic wounds affecting the skin and underlying tissue. Prolonged pressure against the skin by underlying bone or cartilage causes reduced tissue perfusion, resulting in necrosis and wound formation. The primary intervention is prevention by alleviating pressure from the threatened tissue area. However, once a wound has formed, basic wound care is necessary, in addition to alleviating pressure to the area and optimization of the nutritional state. 66 With respect to pressure ulcers, there is only limited evidence to support advantages of foam over other wound dressings. One randomized controlled study has shown that foam improved healing compared to simple gauze when used for treatment of superficial pressure ulcers characterized by blisters and abrasions and only partial loss in skin thickness. 67[68][69][70] However, there is no evidence indicating whether foam is better than hydrocolloid. 71 A clinical trial is currently being undertaken to address this question. 72
Fungating wounds
Fungating wounds arise from late-stage cancer. This type of wound is characterized by heavy exudate, malodor, infection, hemorrhage, and pain. [73][74][75][76][77] No studies, to our knowledge, have investigated the effectiveness of foam compared to other wound dressings with specific regard to fungating wounds. In clinical "best practice", and without supporting evidence, it is important to choose wound dressings with sufficient absorptive capacity, ie, alginates, to avoid leakage. In addition, it may be favorable to use silver-containing products, which may help reduce odor. 78
Clinical evidence for the use of combination products
Bacterial colonization and infection are important factors that may complicate wound healing, particularly in chronic wounds. Furthermore, widespread use of systemic and topical antibiotics has led to resistant bacterial strains such as methicillin-resistant Staphylococcus aureus (MRSA), which is a noteworthy health issue worldwide. The so-called "best practice" for controlling the microbial burden in wounds is not defined. Clinically infected wounds are commonly treated with systemic antibiotics, and there is no evidence for other recommendations. 42 Colonization of wounds presents a double problem, by both potentially causing delayed healing and by representing a potential source of cross-contamination. The use of dressings, notably those containing certain antiseptic agents, can be a valuable option for controlling infection while promoting wound healing. 79
Dressings containing silver
Dressings containing silver are often used to control the polymicrobial wound bioburden, although their efficacy against aerobic, anaerobic, and antibiotic-resistant microorganisms is not well established. The use of silver-containing dressings in burn patients has been evaluated in a review, and it was concluded that silver-containing dressings and topical silver were either "no better" or "worse" than control dressings in preventing wound infection and promoting healing of burn wounds. 80 Moreover, systematic Cochrane reviews have not found evidence for the use of silver-containing wound dressings (not only foam-based but various kinds of wound dressings) in the treatment of infected or contaminated chronic wounds. 42 Despite this lack of strong evidence, many wound care clinics commonly use silver-containing dressings to treat chronically contaminated/colonized wounds.
Ibuprofen-releasing dressing
To combat the common issue of pain in wounds, a polyurethane foam product releasing ibuprofen into the wound bed was developed, showing pain reduction and no systemic absorption of ibuprofen. 81 Ibuprofen is evenly distributed throughout the dressing and is released when exposed to the moist environment at the surface of exuding wounds. 82 Ibuprofen-releasing foam was also shown to reduce pain at the donor site in skin graft patients. 83,84[85][86][87] Moreover, the quality of the pain reduction was investigated and described as clinically relevant, and the capacity to handle the exudate was not affected. 88,89 Lastly, ibuprofen-releasing polyurethane foam was studied in combination with a silver-releasing contact layer in an open study, which showed evidence of reduced wound pain and promotion of healing without compromising safety. 90
Limitations of the current literature and future goals
Currently there is no solid evidence suggesting clear superiority of any of the commonly used products. The primary outcomes in many studies are typically absolute healing time, proportion healed, or wound size reduction after a given period of observation. These results may be subject to bias, as participants who did not heal may be censored in the statistical analysis. A time-to-event analysis (ie, time to complete healing or another defined outcome) with adjustment for covariates, such as baseline wound size, may offer a more rigorous statistical model.
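As an illustration of what such an analysis might look like, the sketch below fits a covariate-adjusted Cox proportional hazards model with the Python lifelines library; the file and column names (weeks_followed, healed, foam_dressing, baseline_area_cm2) are invented for this example and do not come from any of the trials reviewed above.

```python
# Hypothetical covariate-adjusted time-to-event analysis of a wound-dressing
# trial using lifelines; all file and column names are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("trial_data.csv")  # one row per participant (hypothetical file)

# Assumed columns: weeks_followed (follow-up time), healed (1 = healed,
# 0 = censored at end of follow-up), foam_dressing (1 = foam arm,
# 0 = comparator), baseline_area_cm2 (baseline wound size covariate).
cph = CoxPHFitter()
cph.fit(
    df[["weeks_followed", "healed", "foam_dressing", "baseline_area_cm2"]],
    duration_col="weeks_followed",
    event_col="healed",
)
cph.print_summary()  # hazard ratio for foam, adjusted for baseline wound size
```

The advantage over a fixed-horizon comparison is that participants who have not healed by the end of follow-up contribute information as censored observations rather than being dropped from the analysis.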
Other common sources of bias are inadequate methods of randomization and allocation to treatment, lack of blinding of patients, personnel, and outcome assessment, incomplete data, observer and measurement bias, and, finally, selective reporting. Future studies should follow the CONSORT (Consolidated Standards of Reporting Trials) guidelines to improve the quality of randomized controlled trials. 91 Studies must also clearly report on the wound care regime and concurrent treatments such as compression. Secondary outcomes such as pain and disease-related quality of life should be reported using validated methods, and all data should be presented. Lastly, the inclusion and exclusion criteria should be carefully considered so that the results can be generalized. To date, many studies have excluded patients with clinical infection, which is very common in this patient population and thus limits the clinical applicability of the study results. Carefully designed studies are needed to investigate which wound care products can provide the optimal balance between cost and effectiveness.
Conclusion
Foam dressings are widely used in the daily management of both acute and chronic wounds of differing etiologies.
In general, the evidence supporting the use of foam products over other wound dressings is limited. In clinical practice, foam dressings are easy to use and fulfill most of the ideal criteria for a dressing used in moist wound healing. Ibuprofen-releasing and silver-containing combination foam products may be appropriate to reduce pain and bioburden, respectively. Given the general lack of solid evidence, wound care professionals ought to choose dressings based on clinical evaluation of the wound and the periwound skin.
Further research should be designed based on the CONSORT guidelines and should aim to address the efficacy and cost-effectiveness of different wound dressings to allow for evidence-based decision-making on the best wound care products. | 2018-04-02T09:43:26.947Z | 2015-02-17T00:00:00.000 | {
"year": 2015,
"sha1": "cdef85624f89fc608d85296f0c272e923314a7a5",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=23782",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "058a0dba76217ae0b0dde33b4034f7bd69ca707c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219139236 | pes2o/s2orc | v3-fos-license | Hegel, Comparative Religion and Religious Pluralism
Hegel’s Lectures on the Philosophy of Religion played an important role in the development of the concept of world religions. Writing at the time of a great wave of interest in non-European cultures in the first half of the 19th century, Hegel was among the first to realize the reality of religious pluralism. He saw that a philosophy of religion that wanted to favor Christianity must at a minimum have some story to tell about the other religions of the world. Today scholars are rightly skeptical of Hegel’s attempt to establish a hierarchy of world religions and to tell a narrative of how the one religion replaces the other in a teleological manner, with some religions occupying a higher stage of development than others. If we reject Hegel’s teleology and evolutionary view, is there anything meaningful left that we can work with? While we want to resist the idea that one religion sublates the next in Hegel’s sense, historians of religion are keen to suggest the many ways in which religious traditions have developed. In many cases religions seem to have overlapped and borrowed ideas from one another. If one focuses on these points of similarity among the world religions, a new approach to plurality presents itself. In this paper I wish to explore this approach, which has been designated as “Comparative Theology”.
Ever since the posthumous publication of Hegel's Lectures on the Philosophy of Religion in 1832 [Hegel, 1832], his approach has been both appreciated and reviled. In this article I wish to explore Hegel's contributions in connection with the issue of philosophy of religion in a pluralistic world. Does Hegel have something meaningful to add to this topic? Or can he be safely dismissed so we can move on to more recent figures who have a better understanding of religion in our multicultural and pluralistic 21st century?
Writing at the time of a great wave of interest in non-European cultures in the first half of the 19th century, Hegel was among the first to realize the reality of religious pluralism. He saw that a philosophy of religion that wanted to favor Christianity must at a minimum have some story to tell about the other religions of the world. This might bode well for the undertaking, but there are good reasons to proceed with caution, since Hegel has also been criticized as a supporter of a pro-European colonial agenda, which would of course undermine any meaningful respect for pluralism. These criticisms need to be acknowledged and taken seriously. However, we also need to recognize that Hegel's thought is not a simple, one-dimensional matter. It developed over time and has many nuances and angles that can be emphasized. Depending on which aspect one chooses to focus on, a different picture emerges. Indeed, it is not wrong to talk about many different Hegels in this sense [Kangas, 2004]. This fact has presumably played an important role in the radical split of opinion on Hegel's philosophy, which has evoked passions both positive and negative. While I do not want to dismiss or play down the criticisms, I wish to draw attention to a side of Hegel that indeed looks rather progressive and that welcomes religious pluralism.
I. Traditional Criticisms of Hegel as an Intolerant Thinker Opposed to Religious Pluralism
Initially the goal of my article might seem to be a task destined to failure at the outset, first, since Hegel has frequently been criticized as a straightforward reactionary apologist for Christianity and specifically Protestantism. These criticisms are understandable when one sees that Hegel himself states rather clearly at the outset of the work that his goal is to vindicate the truth of Christianity by restoring its key doctrines, which, he believes, in his time have been largely abandoned, even by those who claim to be defenders of the faith [Hegel, 1984-1987; Hegel, 1993-1995].
Second, Hegel's teleology or evolutionary theory seems to undermine a genuinely pluralistic approach. As is well known, in his account in the Lectures on the Philosophy of History, Hegel argues that one historical people replaces the next in the development of history. What he calls "spirit" (Geist) moves successively from China to India, Persia, Egypt, Greece and Rome and then culminates in what he refers to as the Germanic world, that is, roughly, Prussia, the German states and Northern Europe. In the Lectures on the Philosophy of Religion, he follows this same general scheme and attempts to apply it to his understanding of the history of the religions of the world. 2 Thus, the various religions represent the different peoples of the world and succeed one another in a similar way. Hegel arranges the religions of the world in a more or less rigid ascending teleological order that culminates in Christianity. 3 He carefully traces the changes in the different conceptions of the divine as they appear in the different world religions. This would seem to imply that the other religions of the world are simply flawed or inadequate and for this reason are passé or, to use his language, aufgehoben. The approach would seem to take a dismissive stance towards all of the different world religions with the exception of Christianity and thus would seem to undermine religious tolerance and an appreciation for religious pluralism.
Even more damaging than this is the fact that the reader does not have to look too hard to find certain racist or ethnocentric elements in Hegel's accounts of the non-European religions. Judged by our modern standards and sensibilities, his language is offensive when he describes, for example, Hindus or followers of the ancient Chinese state religion who venerate the divinity Tian. This has recently evoked a wealth of secondary literature, which rightly condemns this element in Hegel's thought [Tibebu, 2011; Bernasconi, 1998, 2000, 2007; Camara, 2005; Hoffheimer, 2001, 2005]. Racial prejudices of this kind would also clearly appear to undermine a sober and objective assessment of the world religions. Thus once again Hegel does not seem to be a good candidate for a spokesman of modern religious pluralism.
II. Evidence for a More Tolerant, Pluralistic Hegel
I readily acknowledge these criticisms and think that they should indeed be taken very seriously. There is, however, other evidence that suggests that Hegel is more open to religious pluralism than we might initially think. It is to this evidence that I now turn.
First, it will be noted that Hegel's account of the so-called "determinate religions", that is, the religions of the world prior to Christianity, is a profoundly rich part of his lectures [see: Labuschagne and Slootweg, 2012; Stewart, 2018]. Contemporary observers noted how seriously Hegel took the non-European religions and how he was at great pains to read everything he could about the new research being done in the different fields of what we would today call Asian Studies. His first biographer Karl Rosenkranz writes that Hegel developed "an interest for the study of the Orient", and he "cast himself into the study of oriental cultures with genuine enthusiasm and his usual persistence" [Rosenkranz, 1844, p. 378]. Moreover, Hegel seemed to have had a particular interest in ancient China. Eduard Gans, the first editor of Hegel's Lectures on the Philosophy of History, states that Hegel spent an excessive amount of time with this material. Gans uses this as a justification for cutting out a large portion of it in his edition of the work. 4 Whatever the editorial issues involved were, this is clear testimony that Hegel was at pains to learn as much as he could about ancient Chinese history and religion and was not merely doing so in a pro forma manner so that he could hasten on to his account of Christianity.

2 This creates a number of problems for him that we cannot enter into here in any detail. For example, Buddhism is not a national religion and thus cannot be geographically pinpointed to a specific people. Moreover, some ancient religions, such as Judaism, are still alive and well today and thus seem to have resisted the force of history to capitulate. It has of course also been noted that there are serious flaws in the very notion of world religions. In Hegel's time complex religious practices and belief systems were categorized under a single general name, but the reality of the phenomena is in fact considerably more complicated.
Second, when we compare Hegel's Lectures on the Philosophy of Religion with contemporaneous works in the field, we can see a striking difference. The philosophies of religion of Kant and Fichte are dedicated more or less exclusively to an understanding of Christianity. No historical account of the world religions is given. Neither Kant nor Fichte feels any particular need to make a study of another religion, and certainly not a non-European one. It is only with Hegel that the enormous amount of then-new material about Asian culture and religion is first introduced into the field at all. In this sense, Hegel, for better or worse, has clearly played a central role in the introduction of the very idea of world religions. 5 This would seem to imply that he is in fact keenly aware of the importance of pluralism in his own day. This makes sense given that this was a time when Europe was beginning to discover a number of new cultures in Africa and Asia. One can then say in this regard that he recognized the need to take seriously other religions and to try to understand their history and belief systems.
Third, this more tolerant and pluralistic Hegel seems to be confirmed by what he actually says to his students at the outset of the lectures themselves. He is attentive to the fact that some of the material that he will be presenting will strike them as odd or even offensive. So he cautions his auditors as follows: "A survey of these religions reveals what supremely marvelous and bizarre flights of fancy the nations have hit upon in their representations of the divine essence… To cast aside these religious representations and usages as superstition, error, and fraud is to take a superficial view of the matter…" [Hegel, 1984-1987; Hegel, 1993-1995, p. 107]. 6 He continues by telling them, "It is easy to say that such a religion is just senseless and irrational. What is not easy is to recognize the necessity and truth of such religious forms, their connection with reason; and seeing that is a more difficult task than declaring something to be senseless" [Hegel, 1984-1987; Hegel, 1993-1995]. 7 From this it is clear that he sees something true in the different world religions, and he encourages his students to set aside their prejudices, so that they can see it as well. This reveals a perhaps surprising side of Hegel since he appears to advocate the serious study of non-European religions and to confront polemically dismissive views that ridicule them as superstition.

4 "In the first delivery of his lectures on the philosophy of history, Hegel devoted a full third of his time to the Introduction and to China -a part of the work which was elaborated with wearisome prolixity. Although in subsequent deliveries he was less circumstantial in regard to this Empire, the editor was obliged to reduce the description to such proportions as would prevent the Chinese section from encroaching upon, and consequently prejudicing the treatment of, the other parts of the work" [Hegel, 1837, p. XVII]. See the useful reprint of Sibree's translation of this Preface in [Hoffheimer, 1995, p. 97-106, 104]. See also [Bernasconi, 2000, p. 173]. Note that the later editor Lasson attempted to restore this material: [Hegel, 1923, p. 275-342].

5 Of course, the concept of world religions is today a controversial topic since the idea of, for example, a determinate religion called "Hinduism" or "Buddhism" covering a specific set of beliefs and practices has been shown to be problematic. Thomas A. Lewis attempts to avoid this problem by arguing that Hegel's understanding of the different world religions should not be understood as connected to specific religions in history but rather as general conceptions of religious ideas. See his article: [Lewis, 2015, p. 211-231].
In the so-called "Tübingen Essay", written long before his Berlin lectures, he also criticizes religious intolerance along the same lines: …whoever finds that other people's modes of representation -heathens, as they are called -contain so much absurdity that they cause him to delight in his own higher insights, his understanding, which convinces him that he sees further than the greatest of men saw, does not comprehend the essence of religion. Someone who calls Jehovah Jupiter or Brahma and is truly pious offers his gratitude or his sacrifice in just as childlike a manner as does the true Christian [Hegel, 1984, p. 38; Hegel, 1907, p. 10].
This passage is particularly striking with its comparison to Christianity. It is not so surprising that he refers to the Roman god Jupiter, but that he also defends the Hindu Brahma bespeaks an openness to non-Western cultures. Here he strikes a considerably more modern and pluralistic tone than one might think. He seems to suggest that there is a general instinct or disposition that unites all religious people across sectarian boundaries, and that this instinct should be the object of respect.
III. The Question of Truth at Earlier Stages of Religious Development
The key question raised by Hegel's economy of the world religions concerns the precise status of the different religions that lead up to Christianity. As noted, according to one interpretation, his teleology and hierarchy would seem immediately to undercut a respectful evaluation of these other religions. If Christianity alone is true, then all other religions must be ipso facto false. However, I want to ask whether this is necessarily the case.
As is well known, Hegel often uses images of plants and organic life as analogies in order to illustrate the development of conceptual thinking. 8 The seed, the root, the stem, the leaf, the bud and the flower all belong to the same plant, although they are each very different from one another. Each of them plays its own crucial role in the development of the plant, which could not exist without all of them. The plant as a complex organic entity consists of several elements which must all be realized in the correct temporal sequence. It would be wrong to say that the truth, so to speak, is found only in one of these since all of them have an equal claim to be a necessary part of the plant as a whole.

6 See also: [Hegel, 1975, vol. 1, p. 310f.; Hegel, 1928-1941].

7 See also: "The higher need is to apprehend what it means, its positive and true [significance], its connection with what is true -in short, its rationality. After all it is human beings who have lighted upon such religions, so there must be reason in them -in everything contingent there must be a higher necessity" [Hegel, 1984-1987; Hegel, 1993-1995].

8 See, for example: "The bud disappears in the bursting-forth of the blossom, and one might say that the former is refuted by the latter; similarly, when the fruit appears, the blossom is shown in its turn as a false manifestation of the plant, and the fruit now emerges as the truth of it instead. These forms are not just distinguished from one another, they also supplant one another as mutually incompatible. Yet at the same time their fluid nature makes them moments of an organic unity in which they not only do not conflict, but in which each is as necessary as the other; and this mutual necessity alone constitutes the life of the whole" [Hegel, 1977, p. 2; Hegel, 1928-1941].
If we take seriously analogies of this kind, this would seem to imply that Hegel's teleology is not so dismissive towards the non-Christian religions as one might at first glance assume. On this view, each of the different religions prior to Christianity has a legitimate and important role to play. Each of them captures a specific truth representative of its time and culture. This is not a far-fetched interpretation. Indeed, the Spanish philosopher José Ortega y Gasset understood Hegel in precisely this way in the context of the philosophy of history. He writes, Hegel's historical philosophy has the ambition of justifying each epoch, each human stage, and avoiding the error of vulgar progressivism that considers all that is past as essential barbarity… Hegel wants to demonstrate… that what is historical is an emanation of reason; that the past has good sense; or… that universal history is not a string of foolish acts. Rather Hegel wants to demonstrate that in the gigantic sequence of history something serious has happened, something that has reality, structure and reason. And to this end he tries to show that all periods have had reason, precisely because they were different and even contradictory [Buchanan and Hoffheimer, 1995, p. 71].
This interpretation is clearly correct. For Hegel, reason appears not just at the end of the development but at every step along the way as well; the trick is to learn how to recognize it.
Hegel himself states straightforwardly that each stage of religious development possesses some truth. In the Lectures on the Philosophy of History, we read the following: "However erroneous a religion may be, it possesses truth, although in a mutilated phase. In every religion there is a divine presence, a divine relation; and a philosophy of history has to seek out the spiritual element even in the most imperfect forms" [Hegel, 1944, p. 195f.; Hegel, 1928-1941].
This then raises the question about what exactly is this truth that is found in earlier stages of religious development and how is it different from the "absolute" truth of Christianity. The idea seems to be that the human mind is fundamentally rational, and thus its products, in the multitude of forms found in human culture, also contain an element of this rationality. Although the different myths and stories of the gods and goddesses of the different religions might strike us as confusing and bizarre, there is buried in them some element of human reason that can be discerned if we can find it. These stories are a reflection of the mind of the people who created them.
Greek mythology, for example, is a product of the human mind, but this does not mean that it is fictitious or untrue. Hegel explains, [The gods] are discovered by the human spirit, not as they are in their implicitly and explicitly rational content, but in such a way that they are gods. They are made or poetically created, but they are not fictitious. To be sure, they emerge from human fantasy in contrast with what is already at hand, but they emerge as essential shapes, and the product is at the same time known as what is essential [Hegel, 1984-1987; Hegel, 1993-1995].
The point is that while the stories about the gods are not literally true in their details, they nonetheless represent something about the conceptions of the people at the time. They are a reflection of necessary ways of thinking at that specific period of history and human development.
We can find an echo of this idea at the beginning of Durkheim's The Elementary Forms of Religious Life. There he acknowledges, "Religions are thought to differ in value and rank; it is generally said that some are truer than others. The highest forms of religious thought cannot, it seems, be compared to the lowest without degrading the former to the level of the latter" [Durkheim, 2001, p. 3f]. He explains his approach as follows: "It is a basic postulate of sociology that a human institution cannot rest on error and falsehood or it could not endure. If it were not based on the nature of things, it would have met with resistance from those very things and could not have prevailed. When we approach the study of primitive religions, then, it is with the certainty that they are rooted in reality and are an expression of it" [Durkheim, 2001, p. 4]. In conclusion to this methodological discussion, he writes, "In reality, then, there are no false religions. All are true in their fashion: all respond, if in different ways, to the given conditions of human existence" [Ibid.]. In a sense this seems to be a restatement of Hegel's basic view. While Durkheim is more focused on the empirical aspect than Hegel, who is concerned with the concept of the divine, they share the idea that religion should be regarded as something essential in a specific community and that religious belief contains some essential truth that is not immediately evident.
IV. Hegel and Comparative Theology
A part of our modern struggle with religious pluralism lies in the perceived tension between one's own religious beliefs and the presence of other religious beliefs and traditions. If I am a religious person, then of course I hold dearly the key doctrines and beliefs of my religion. I take them to be absolute or foundationally true and even try to organize my life in accordance with them. This would seem to imply that I take all other beliefs to be false, especially those that contradict the teachings of my own religion. So there is a natural limit to the idea of religious tolerance, which can be found in one's own religious beliefs. I can, of course, say that other people have the right to exercise religious freedom: they are at their liberty to believe what they want and to practice their religion as they wish. But I cannot say that their beliefs are true in the same way that mine are since this would seem to undermine the absolute claim that every religion places on its believers. This dilemma is present in Hegel's philosophy of religion in the way that we have just discussed: namely, there is a tension between Christianity's claim to being the absolute truth, in contrast to the claim that the other religions are merely relative truths along the way leading up to it. So if we take away for the moment the question of Hegel's teleology, the issue is fundamentally the same.
Here by way of conclusion, I would like to suggest that this tension is based on a misperception, namely, the idea that religious beliefs are necessarily mutually exclusive and to believe the one necessarily means that one must be intolerant towards others. I take as my model the approach which Frank Clooney and others have designated "Comparative Theology" [Clooney, 2010]. This is a movement that seeks interreligious understanding by taking seriously the claims of all religious traditions and learning from the other while not dismissing the faith that one begins with. The guiding premise of Comparative Theology is that religion is a fundamental aspect of the human experience, which arises from a common human need. Therefore, it makes sense to try to find points of overlap in the beliefs and practices of different faiths. Whatever the premise, common sense seems to dictate that one try to learn from the other in any case. According to this view, there is something universal in religion as such, and thus religious truth can be found in different traditions and indeed wherever humans think, act, feel and love. (It will be noted that this is very much in line with Hegel's approach.) So this means that one can find, for example, Christian truths in Hindu or Buddhist texts and vice versa. I submit that the idea of Comparative Theology is a more satisfying way to treat religious pluralism than Hegel's teleology, but it is not necessarily incompatible with it. In fact, in the two approaches one can find both of the key elements that we mentioned above: a sense of one truth found in one's own religious tradition and that of other truths found in others.
Hegel's historical approach starts to look not so implausible if we consider that in many cases religions seem de facto to have overlapped and borrowed ideas from one another. It has long been suggested, for example, that Judaism had its origin in the ancient Egyptian religion in the monotheistic cult introduced by Pharaoh Akhenaten. Scholars have also noted the relations between Hinduism and Zoroastrianism. The historical connections between Judaism, Christianity and Islam are well documented. What do these historical connections tell us? Religious ideas rarely die out. They get appropriated and co-opted in different contexts, where they are further developed in different ways. These kinds of connections might, however, offer a possibility of religious dialogue and respect.
When we examine two different things, this always takes place under the aegis of the categories of identity and difference. The two things are similar to one another in certain respects, and they are different from one another in other respects. In the history of religion, it is the differences which are often underscored, and this has led to a long history of religious wars, persecutions and violence. However, the historical connections between the different world religions also provide a basis for a positive comparison of points of similarity.
I believe that Hegel's approach is in many ways consistent with the view of Comparative Theology, and indeed that this can afford us a fresh look at his philosophy of religion. Both Hegel and Comparative Theology teach us that interest in and respect for the history of religion or other religions does not need to undermine or compromise one's personal belief in one's own religion. Thus the perceived tension between the absolute claims of one's own religion and those of other religions is not as problematic as it might seem.
The tension that we noted with regard to religious tolerance and pluralism is just one aspect of a much more fundamental phenomenon that concerns our basic relation to the world. Every person has certain beliefs -some held more dearly than others. In our interaction with the world, we are constantly comparing our beliefs with the feedback or pushback that the world gives us. We constantly have experiences that contradict our beliefs and cause us to rethink them and modify them in different ways. This is what it means to live in the world as a sentient and thinking being. Religious beliefs are just one example of this. They form a part of our broader belief system that is constantly under evaluation. It does not make sense to reproach someone for intolerance simply because they believe something different from someone else and wish to insist on their own convictions. Indeed, this is the case all the time. The idea of religious intolerance must be something different and much stronger than this. Thus, there is nothing intolerant in believing in a specific religion. This does not in itself undermine respect for other religions or belief systems. Thus, I submit that the perceived tension between holding a fundamental or absolute belief and the pluralism of religions is not a real tension. It is a pseudo-problem. | 2020-06-01T09:09:12.742Z | 2019-12-31T00:00:00.000 | {
"year": 2019,
"sha1": "d1220d1e178f84772b52d2caa0994f609689baa3",
"oa_license": null,
"oa_url": "https://frai.iph.ras.ru/article/download/3667/2740",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "18bed0f61c9799c765e4c546aaf5d691409b1090",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
4191735 | pes2o/s2orc | v3-fos-license | Loss of Cln3 Function in the Social Amoeba Dictyostelium discoideum Causes Pleiotropic Effects That Are Rescued by Human CLN3
The neuronal ceroid lipofuscinoses (NCL) are a group of inherited, severe neurodegenerative disorders also known as Batten disease. Juvenile NCL (JNCL) is caused by recessive loss-of-function mutations in CLN3, which encodes a transmembrane protein that regulates endocytic pathway trafficking, though its primary function is not yet known. The social amoeba Dictyostelium discoideum is increasingly utilized for neurological disease research and is particularly suited for investigation of protein function in trafficking. Therefore, here we establish new overexpression and knockout Dictyostelium cell lines for JNCL research. Dictyostelium Cln3 fused to GFP localized to the contractile vacuole system and to compartments of the endocytic pathway. cln3− cells displayed increased rates of proliferation and an associated reduction in the extracellular levels and cleavage of the autocrine proliferation repressor, AprA. Mid- and late development of cln3− cells was precocious and cln3− slugs displayed increased migration. Expression of either Dictyostelium Cln3 or human CLN3 in cln3− cells suppressed the precocious development and aberrant slug migration, which were also suppressed by calcium chelation. Taken together, our results show that Cln3 is a pleiotropic protein that negatively regulates proliferation and development in Dictyostelium. This new model system, which allows for the study of Cln3 function in both single cells and a multicellular organism, together with the observation that expression of human CLN3 restores abnormalities in Dictyostelium cln3− cells, strongly supports the use of this new model for JNCL research.
Introduction
The neuronal ceroid lipofuscinoses (NCL) are a group of inherited, severe neurodegenerative disorders also known as Batten disease [1]. At the cellular level, NCL disorders characteristically display aberrant lysosomal function and an excessive accumulation of lipofuscin in neurons and other cell types [2,3]. Clinical manifestations include vision loss, seizures, the progressive loss of motor function and psychological ability, and a reduced lifespan [4]. Recent evidence also points to pathology outside of the central nervous system, more specifically the cardiac and immune systems [5][6][7][8]. North American and Northern European populations have the highest rates of incidence, however the NCL disorders have a worldwide distribution with varying incidence rates depending on the region (1:14000 to 1:100000) [9]. Currently there are no effective treatments or cure for NCL disorders.
Juvenile NCL (JNCL), the most common subtype of NCL, occurs due to recessive mutations in the CLN3 gene, with the majority of JNCL patients carrying a ~1-kb genomic deletion spanning exons 7 and 8 [10]. Indel, missense, nonsense, and splice site mutations have also been documented in JNCL patients [11,12]. In mammals, CLN3 encodes a 438 amino acid multi-pass transmembrane protein (CLN3/battenin; ceroid-lipofuscinosis, neuronal 3) that is primarily found in endosomes and lysosomes, with evidence that it may also traffic to other subcellular membranes [3,13,14]. In neurons, CLN3 may be important for events localized at the synapse [15]. Evidence from yeast and mouse models independently suggests that CLN3 may function in lysosomal pH homeostasis, endocytic trafficking, and autophagy [16][17][18][19][20]. Despite substantial research efforts using a variety of systems, the precise function of CLN3 remains unclear [21].
A new, unexplored approach to studying CLN3 function involves the use of the social amoeba Dictyostelium discoideum, which has been selected by the National Institutes of Health as a model organism for biomedical and human disease research. This genetically tractable model eukaryote is being used successfully to study the function of genes linked to neurodegenerative disorders and is particularly suited to modeling human lysosomal and trafficking diseases [22][23][24][25][26][27][28][29]. Dictyostelium is a soil microbe that undergoes an asexual life cycle composed of a growth phase, in which single cells grow and divide mitotically as they feed on bacteria, and a multicellular developmental stage that is induced upon starvation. During the early stages of Dictyostelium development, the starving population of cells secretes cAMP in a pulsatile manner, which serves to attract individual cells chemotactically to form a multicellular aggregate also referred to as a mound. After a series of morphological changes, the mound develops into a slug-like structure that is capable of both photo- and thermotaxis. When conditions are suitable, the slug, composed of predominantly two cell types (i.e., pre-stalk and pre-spore), completes the life cycle by forming a fruiting body comprised of a mass of spores supported by a stalk of dead cells. When a food source becomes available, the spores germinate, allowing the amoeba to re-start the life cycle. Thus, Dictyostelium serves as a valuable system for studying a variety of cell and developmental processes [30][31][32].
Understanding the normal function of CLN3 is a key step in designing targeted therapies for JNCL. Therefore, in this study, we have established new tools for research into CLN3 function by generating a Cln3-deficient Dictyostelium mutant by targeted homologous recombination and introducing GFP-tagged Dictyostelium Cln3 and human CLN3 into Dictyostelium cells. Assessment of the knockout and overexpression cells during growth and development strongly indicates that the function of CLN3 is conserved from Dictyostelium to human. Furthermore, our results strongly support a key role for CLN3 in regulating the endocytic pathway and calcium-dependent developmental events.
Cells and chemicals
AX3 and cln3− cells were grown and maintained at room temperature on SM agar with Klebsiella aerogenes and in HL5 medium supplemented with ampicillin (100 μg/ml) and streptomycin sulfate (300 μg/ml). cln3− cells also required blasticidin S hydrochloride (10 μg/ml), while strains carrying the extrachromosomal vector pTX-GFP required G418 (10 μg/ml) [33]. HL5, FM minimal medium, and low fluorescence HL5 were purchased from ForMedium (Hunstanton, Norfolk, UK). The QIAquick PCR Purification Kit, QIAquick Gel Extraction Kit, and QIAprep Spin Miniprep Kit were used for all PCR purifications, gel extractions, and plasmid isolations, respectively, and were all purchased from Qiagen Incorporated (Valencia, CA, USA). Restriction enzymes were purchased from New England BioLabs Incorporated (Ipswich, MA). All primers were purchased from Integrated DNA Technologies Incorporated (Coralville, IA, USA). EGTA and FITC-dextran were purchased from Sigma-Aldrich (St. Louis, MO, USA). Mouse monoclonal anti-p80 was purchased from the Developmental Studies Hybridoma Bank (University of Iowa, Iowa City, IA, USA).
Axenic growth and pinocytosis
Cells in the mid-log phase of growth (1-5×10^6 cells/ml) were diluted to 1-2×10^5 cells/ml in fresh HL5 or FM and incubated at 22°C and 150 rpm. Cell concentrations were measured every 24 hours over a 120- or 144-hour growth period with a hemocytometer. Pinocytosis assays were performed as previously described [34]. Briefly, AX3 and cln3− cells (5×10^6 cells/ml) were grown in HL5. FITC-dextran (70,000 Mr, 100 μl of a 20 mg/ml solution) was added to a 5-ml cell suspension, which was then incubated for 90 minutes at room temperature and 150 rpm. Equal volumes of cells (500 μl) were harvested at the indicated times, washed 2 times with ice-cold Sorenson's buffer (2 mM Na2HPO4, 14.6 mM KH2PO4, pH 6.0), and then lysed with 1 ml of buffer containing 50 mM Na2HPO4, pH 9.3, and 0.2% Triton-X. Lysates were placed in black 96-well plates and fluorescence was measured with a Molecular Devices SpectraMax M2 Multi-Mode Microplate Reader (excitation 470, emission 515). For axenic growth and pinocytosis assays, statistical significance was assessed in GraphPad Prism 5 (GraphPad Software Incorporated, La Jolla, CA, USA) using two-way ANOVA followed by Bonferroni post-hoc analysis. A p-value < 0.05 was considered significant (i.e., n = # of independent cell cultures; see relevant Figure legends for additional details). For experiments assessing the effect of cln3 knockout on the intra- and extracellular levels of AprA and CfaD, AX3 and cln3− cells grown axenically in HL5 (as described above) were harvested and lysed after 48 and 72 hours of growth. At each of these time points, cells from 15 ml of culture were also spun down and the conditioned media was collected and filtered through a 0.45 μm filter unit. Samples were standardized by loading volumes of conditioned media according to cell number (i.e., media from 100,000 cells). Whole cell lysates and samples of conditioned media were separated by SDS-PAGE and analyzed by western blotting.
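For readers who prefer a scriptable alternative to Prism, a minimal sketch of the same two-way ANOVA (strain × time) on the growth-curve data is shown below using Python's statsmodels; the file and column names are assumptions for illustration only.

```python
# Hypothetical re-analysis of the axenic growth curves: two-way ANOVA with
# strain and time as factors; all file/column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

growth = pd.read_csv("growth_curves.csv")  # columns: strain, hours, cells_per_ml

model = smf.ols("cells_per_ml ~ C(strain) * C(hours)", data=growth).fit()
print(anova_lm(model, typ=2))  # main effects and the strain-by-time interaction
```

Bonferroni-corrected pairwise comparisons at each time point (the post-hoc step performed in Prism) could then be added, for example with statsmodels' multipletests helper.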
Development
Development assays were performed as previously described [35]. Briefly, cells grown in HL5 were harvested in the mid-log phase of growth (1-5×10^6 cells/ml) and washed two times with ice-cold KK2 phosphate buffer (2.2 g/L KH2PO4, 0.7 g/L K2HPO4, pH 6.5). Washed cells (3×10^7 cells/ml) were deposited in four individual cell droplets (25 μl each droplet) on black, gridded, cellulose filters (0.45 μm pore size) (EMD Millipore Corporation, Billerica, MA, USA) overlaid on four Whatman #3 cellulose filters (EMD Millipore Corporation, Billerica, MA, USA) pre-soaked in KK2 buffer. Cells were maintained in the dark in a humidity chamber at room temperature. Structures were viewed and photographed at the indicated times with a Nikon SMZ800 microscope (Nikon Instruments Incorporated, Melville, NY, USA) equipped with a SPOT Insight color camera 3.2.0 (Diagnostic Instruments Incorporated, Sterling Heights, MI, USA). Images were captured with SPOT for Windows (Diagnostic Instruments Incorporated, Sterling Heights, MI, USA). For each independent experiment, developmental phenotypes were scored for each cell droplet (i.e., 4 total) and then averaged to obtain a mean value for that experiment (i.e., n = # of independent experiments; see relevant Figure legends for additional details). Statistical significance was assessed in GraphPad Prism 5 (GraphPad Software Incorporated, La Jolla, CA, USA). Data that satisfied parametric requirements were analyzed using one-way ANOVA followed by the Bonferroni multiple comparison test. Non-parametric data were analyzed using the Kruskal-Wallis test followed by the Dunn multiple comparison test. A p-value < 0.05 was considered significant. See relevant Figure legends for additional details.
Live cell imaging, fixation, and immunolocalization
Cells were viewed live in 6-well dishes containing water or low fluorescence HL5. Fixation in ultra-cold methanol (for cells probed with anti-VatM or anti-Rh50) or 4% paraformaldehyde (for cells probed with anti-p80), followed by immunolocalization, was performed as previously described [36,37]. Prior to fixation, cells were grown overnight on coverslips in low fluorescence HL5. The primary antibodies used were anti-VatM, anti-Rh50, and mouse monoclonal anti-p80, together with the appropriate fluorophore-conjugated secondary antibodies. For confocal analysis, the separate channels were imaged using sequential scanning mode and z-sections were taken with a pinhole setting of 1 airy unit (AU). Separate channel and overlay (i.e., merge) images were exported from the Leica imaging software (LAS AF), or from the Zeiss AxioVision imaging software (version 4.6.3), as .tif files and opened in Adobe Photoshop CS5 for compilation of figures. For epifluorescence images, the merge of the separate channel images was produced using ImageJ/Fiji software. If minor brightness and contrast adjustments were necessary, these were made in Photoshop uniformly for each set of images of a given co-stain combination. For western blotting, blots were probed with rabbit polyclonal anti-AprA (1:1000) [41] and rabbit polyclonal anti-CfaD (1:1000) [42]. Immunoblots were digitally scanned using a GS800 Calibrated Densitometer scanner and Quantity One software (Bio-Rad Laboratories Incorporated, Hercules, CA, USA). Identified bands were quantified with ImageJ/Fiji and levels were normalized to β-actin levels. Results were pooled from four independent experiments, each with at least two technical replicates. Statistical significance was determined using a one-sample t-test (mean, 100; two-tailed). A p-value < 0.05 was considered significant.
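The normalization and one-sample t-test described above reduce to a short computation; the sketch below uses invented band intensities purely to show the arithmetic.

```python
# Sketch of the densitometry analysis: normalize bands to beta-actin,
# express as percent of control, then test against a mean of 100.
# The intensity values here are invented for illustration.
import numpy as np
from scipy import stats

band = np.array([1850.0, 2010.0, 1720.0, 1940.0])    # band of interest, 4 expts
actin = np.array([2100.0, 2230.0, 2050.0, 2160.0])   # beta-actin loading control
control_ratio = 0.90                                 # normalized control value

percent_of_control = (band / actin) / control_ratio * 100
t_stat, p_value = stats.ttest_1samp(percent_of_control, popmean=100)
print(percent_of_control.round(1), round(t_stat, 2), round(p_value, 3))
```

A p-value below 0.05 would indicate that the normalized level differs from the 100% control baseline.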
Bioinformatic and phylogenetic analysis
Sequence alignments between Dictyostelium Cln3 and human CLN3 were performed using the dictyBase BLAST server (http://www.dictybase.org/tools/blast). For phylogenetic analyses, the amino acid sequence of Dictyostelium Cln3 was input into the NCBI BLASTp server. Amino acid sequences for significant hits corresponding to CLN3 orthologs from 20 different organisms (i.e., mammals and NIH model systems) were obtained and aligned using ClustalX version 1.83. Neighbor-Joining trees were created using ClustalX version 1.83 and PAUP version 4.0 (Sinauer Associates Incorporated Publishers, Sunderland, MA, USA) and viewed using TreeView version 1.6.6.
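For orientation, the alignment-to-tree step can be reproduced in a few lines with Biopython; the alignment file name below is hypothetical, and the BLOSUM62-based distance model is an assumption standing in for whatever settings ClustalX used.

```python
# Sketch: Neighbor-Joining tree from a protein alignment with Biopython.
# "cln3_orthologs.aln" is a hypothetical ClustalX alignment file.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("cln3_orthologs.aln", "clustal")
distances = DistanceCalculator("blosum62").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)  # Neighbor-Joining
Phylo.draw_ascii(tree)  # quick text rendering; TreeView gives nicer output
```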
Gene knockout and validation
Targeted disruption of the cln3 gene in Dictyostelium discoideum was accomplished using an approach that has been previously described [26]. Targeting arms were amplified by PCR using the Expand High-Fidelity PCR System (Roche Diagnostics Corporation, Indianapolis, IN, USA) and cloned into vector pLPBLP, which knocked out the gene of interest by homologous recombination and introduced a blasticidin resistance (bsr) cassette [43]. The 5′ targeting arm was amplified using the following primers, which incorporated KpnI and HindIII sites (underlined) to facilitate directional cloning into pLPBLP: 5′-GGTACCTCTTTATACTATATATTATACCTCCTTCTC-3′ (forward) and 5′-AAGCTTCATCTTGAAACTAAACCAAATGCAATATTTGC-3′ (reverse). The 3′ targeting arm was amplified using the following primers, which incorporated PstI and SpeI sites (underlined) to facilitate directional cloning into pLPBLP: 5′-CTGCAGAAAACAAAGATATATTCGTTGTGCACG-3′ (forward) and 5′-ACTAGTATGAAGAATCAGTTTTTGGAACCTCAGAG-3′ (reverse). AX3 cells were electroporated with 10 μg of linearized gene-targeting DNA. 96 colonies resistant to blasticidin S hydrochloride (10 μg/ml) were collected and replica-plated into a 96-well plate. Genomic DNA was extracted using the DNeasy Blood and Tissue Kit (Qiagen Incorporated, Valencia, CA, USA) and targeted gene disruption was validated by nine PCR reactions using a combination of primers (File S1, Table S1). PCR analysis identified eight positive clones that all showed a similar growth phenotype (discussed in Results). Two of these clones were further analyzed by Southern blotting. Genomic DNA from each clone was isolated and digested overnight with HindIII at 37°C, separated by agarose gel electrophoresis, and transferred to positively charged nylon membranes by capillary transfer. Blots were hybridized with a DIG-labelled probe corresponding to the entire sequence of the bsr gene using the PCR DIG Probe Synthesis Kit and the DIG High Prime DNA Labeling and Detection Starter Kit II according to the manufacturer's instructions (Roche Diagnostics Corporation, Indianapolis, IN, USA). The bsr gene was amplified from pLPBLP using the following primers: 5′-ATGGATCAATTTAACATTTCTCAAC-3′ (forward) and 5′-TTAATTTCGGGTATATTTGAGTGG-3′ (reverse). Based on the position of HindIII cut sites in the Dictyostelium genome, a single 2746 bp fragment was expected on Southern blots probed with the DIG-labelled bsr probe (www.dictybase.org). A ~2750 bp fragment was detected in both clones; however, one of the clones also contained an unexpected ~6600 bp fragment. Since this implied an unintended and possibly complex integration event, we chose to work with the clone containing the single ~2750 bp fragment. We designated this clone as the cln3 knockout strain and used these cells in all subsequent analyses.
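The expected fragment size on the Southern blot follows directly from the HindIII cut-site positions around the integrated bsr cassette; the sketch below shows that calculation with Biopython's Restriction module, using a hypothetical sequence file for the knockout locus.

```python
# Sketch: predict HindIII fragment sizes for a linear locus sequence.
# "cln3_ko_locus.fasta" is a hypothetical file containing the disrupted
# cln3 locus with the integrated bsr cassette.
from Bio import SeqIO
from Bio.Restriction import HindIII

record = SeqIO.read("cln3_ko_locus.fasta", "fasta")
cuts = sorted(HindIII.search(record.seq))          # 1-based cut positions
bounds = [0] + [c - 1 for c in cuts] + [len(record.seq)]
fragments = [end - start for start, end in zip(bounds, bounds[1:])]
print(fragments)  # the bsr probe should detect a single 2746 bp fragment
```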
Construction of GFP expression constructs and cell lines
Vector pTX-GFP, which incorporates an N-terminal GFP tag, was used to generate all GFP-fusion protein constructs [33]. Full-length Dictyostelium cln3 was amplified from cDNA using the following primers, which incorporated SacI and XhoI sites (underlined) to facilitate directional cloning into pTX-GFP: 5′-GAGCTCATGGGAAAGGATTATACATT-3′ (forward) and 5′-CTCGAGTTATGTTGAGGATGAAGAAT-3′ (reverse). Full-length human CLN3 was amplified from cDNA using the following primers, which also incorporated SacI and XhoI sites (underlined): 5′-GAACTTGAGCTCATGGGAGGCTGTG-3′ (forward) and 5′-TAATCCCTCGAGTCAGGAGAGCTGGC-3′ (reverse). To facilitate the expression of Dictyostelium GFP-Cln3 and human GFP-CLN3 at close to endogenous levels, the act15 promoter and the first 11 codons of the GFP open reading frame, which contained the initiation methionine and an amino-terminal 8x histidine tag, were removed from pTX-GFP by digesting the plasmid with SalI and KpnI. Three fragments containing DNA from the non-coding region directly upstream of cln3 were amplified from AX3 gDNA using primers cln3_up_elem_F1, cln3_up_elem_F2, cln3_up_elem_F3, and cln3_up_elem_R1 (File S1, Table S1). Forward primers incorporated SalI restriction sites and reverse primers incorporated KpnI restriction sites to facilitate directional cloning into pTX-GFP. The longest fragment (i.e., upstream element 1) spanned the entire region upstream of the cln3 start site up to the end of the preceding gene (File S1, Fig. S1). The other two fragments (i.e., upstream elements 2 and 3) spanned regions within upstream element 1 up to the cln3 start site. The three upstream elements, which also included the first 36 base pairs (12 codons) of the cln3 open reading frame, were then separately cloned into pTX-GFP upstream of and in-frame with the GFP open reading frame. All constructs were validated by agarose gel electrophoresis and DNA sequencing (CHGR Genotyping Resource, Genomics Core Facility, Massachusetts General Hospital, Boston, MA, USA). The ability of each cln3 upstream element to drive GFP expression in AX3 cells was verified by western blotting (File S1, Fig. S1). Since upstream element 1 was the strongest driver of gene expression (File S1, Fig. S1), we used this fragment of DNA, hereafter referred to as the 'cln3 upstream element', to drive gene expression in our modified version of pTX-GFP (i.e., with the act15 promoter removed).
Sequence analysis of Dictyostelium Cln3
The 438 amino acid sequence of human CLN3 was input into the dictyBase BLASTp server (http://www.dictybase.org/tools/blast). The highest match was a 421 amino acid protein (Cln3; DDB_G0291157). There were 117 exact matches (27% identical) and 197 positive matches (46% similar) within a 429 amino acid region of similarity (Fig. 1A). In comparison, the CLN3 homolog in Saccharomyces cerevisiae, Btn1p, is 38% identical and 49% similar to the human protein, while the Schizosaccharomyces pombe homolog is 32% identical and 47% similar. However, the CLN3 homologs in yeast are comparatively smaller than Dictyostelium Cln3 (408 aa and 396 aa vs. 421 aa). Residues that are myristoylated or glycosylated in human CLN3 are conserved in Dictyostelium Cln3, and a putative prenylation motif near the C-terminus of the protein (i.e., 398-CFIL-401) is present, although it does not precisely align with the prenylation motif in the human protein, which is found at the end of the protein (i.e., 435-CQLS-438) (Fig. 1A). Importantly, residues altered by point mutations (missense and nonsense) documented in JNCL patients are highly conserved in the Dictyostelium ortholog (Fig. 1A). Together, these similarities indicate that the function of the protein is likely conserved from Dictyostelium to human. A phylogenetic tree showing the relationship of Dictyostelium Cln3 to CLN3 orthologs from 20 different organisms of interest (i.e., NIH model systems and mammals) firmly places Dictyostelium Cln3 within the CLN3 family of proteins (Fig. 1B).
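The identity and similarity percentages quoted above can be recomputed from a global alignment; the sketch below shows one way with Biopython, where the FASTA file names and BLAST-like gap penalties are assumptions rather than the dictyBase server's actual settings.

```python
# Sketch: percent identity/similarity between human CLN3 and Dictyostelium
# Cln3 from a BLOSUM62-scored global alignment; file names are hypothetical.
from Bio import SeqIO, Align
from Bio.Align import substitution_matrices

human = SeqIO.read("human_CLN3.fasta", "fasta").seq
dicty = SeqIO.read("dicty_Cln3.fasta", "fasta").seq

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score, aligner.extend_gap_score = -11, -1  # assumed penalties

aln = aligner.align(human, dicty)[0]
rowA, rowB = str(aln[0]), str(aln[1])  # gapped aligned sequences
pairs = [(a, b) for a, b in zip(rowA, rowB) if a != "-" and b != "-"]
identical = sum(a == b for a, b in pairs)
similar = sum(aligner.substitution_matrix[a, b] > 0 for a, b in pairs)
print(identical, similar, len(pairs))  # counts over aligned (ungapped) columns
```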
Dictyostelium Cln3 fused to GFP localizes to the contractile vacuole network and to vesicles of the endocytic pathway

To gain insight into the function of the CLN3 ortholog in Dictyostelium, we transformed AX3 cells with a vector that expressed Dictyostelium Cln3 fused to GFP. We chose to place the GFP tag on the N-terminus since a previous study reported the mis-localization of CLN3 tagged with C-terminal GFP, presumably due to masking of the prenylation motif [44]. Protein expression was verified by western blotting, and a thorough discussion and analysis of the banding pattern is provided in the supporting information (File S1, Fig. S2). In live AX3 cells incubated in water, Dictyostelium GFP-Cln3 localized to the membranes of vacuolar-shaped structures and small cytoplasmic vesicles, to tubular-like structures within the cytoplasm, and as punctate distributions within the cytoplasm (Fig. 2A). Time-lapse video microscopy of these cells showed multiple vacuoles undergoing dynamic events of expansion and contraction (File S1, Fig. S3). In free-living amoebae and protozoa, the contractile vacuole (CV) system acts as an osmoregulatory organelle that controls the intracellular water balance by collecting and expelling excess water out of the cell. In Dictyostelium, the CV system consists of tubules and vacuoles that function to collect and expel excess water, respectively [45]. Based on our initial observations of Dictyostelium GFP-Cln3 localization in AX3 cells, we next fixed and probed cells expressing GFP-Cln3 with antibodies directed against two established Dictyostelium CV system markers, the V-ATPase membrane subunit (VatM) and the rhesus-like glycoprotein Rh50 [38,39]. VatM generates an acidic environment in several intracellular compartments and is found in both the CV and endosomal systems; however, it is enriched ~10-fold in the CV system, while Rh50 is more specific to the CV system [38,39,46,47]. GFP-Cln3 was found to strongly localize to both VatM- and Rh50-positive compartments (Fig. 2B). Interestingly, much like VatM, GFP-Cln3 localized to both small cytoplasmic vesicles and at distinct punctate distributions within the cytoplasm (Fig. 2B). GFP-Cln3 was also observed to localize as punctate clusters on the vacuolar membrane (Fig. 2B). Since localization of GFP-Cln3 was observed on the smaller, VatM-positive punctate distributions, we also assessed localization of GFP-Cln3 to p80-positive compartments. The p80 protein localizes to late endosomes during Dictyostelium growth [40]. Although GFP-Cln3 localized primarily to the vacuoles of the CV system (Fig. 2A,B), which were unstained by the p80 antibody, we did observe GFP-Cln3 localization on the membranes of a subset of small cytoplasmic vesicles that were also stained by the p80 antibody (Fig. 2B).
To further support the localization of Dictyostelium GFP-Cln3 to VatM-, Rh50-, and p80-positive subcellular compartments, we analyzed the localization of GFP-Cln3 using immunofluorescence and confocal microscopy. Across multiple z-sections of the amoeboid Dictyostelium cells, GFP-Cln3 localized to VatM-positive vesicles and punctate distributions, Rh50-positive tubules and vacuolar-shaped structures, and a subset of p80-positive vesicles (Fig. 3). Taken together, our data strongly suggest that Cln3 localizes to both the CV and endocytic systems in Dictyostelium.
cln3− cells show enhanced rates of proliferation and increased intracellular accumulation of FITC-dextran
To further study the function of Cln3 in Dictyostelium, a cln3 knockout mutant was generated by targeted homologous recombination, which deleted the entire region spanning amino acids 61-421 (Fig. 4A-C). RNA-Seq data show that expression of cln3 mRNA decreases by ~30% during the first 4 hours of development, but then increases dramatically during the next 8 hours (i.e., ~8-fold increase), with expression peaking after 12 hours of development [48]. Expression decreases slightly between 12 and 20 hours (~15% decrease), but overall remains high during the mid- to late stages of Dictyostelium development.
Since growth is a major phase of the Dictyostelium life cycle, we first assessed the effect of Cln3 deficiency on the rate of cell proliferation in axenic media. In HL5, cln3− cells proliferated at a significantly enhanced rate compared to parental AX3 cells (genotype effect, two-way ANOVA, p<0.001) (Fig. 5A). However, no significant difference was observed between the highest densities attained by both strains after 120 hours of growth (Fig. 5A). Since we were able to successfully overexpress Dictyostelium GFP-Cln3 in AX3 and cln3− cells, we next assessed the ability of GFP-Cln3 to alter the enhanced rate of proliferation of cln3− cells and the effect of GFP-Cln3 overexpression on AX3 cell proliferation. GFP-Cln3 overexpression significantly suppressed the enhanced proliferation of cln3− cells to levels observed in AX3 cells (Fig. 5A). Overexpression of GFP-Cln3 in AX3 cells had no significant effect on cell proliferation; however, these cells reached a significantly lower final density after 120 hours when compared to all other strains (Fig. 5A). Based on these results, we then assessed the growth of cln3− cells in FM minimal media to determine whether limiting available nutrients would suppress the enhanced growth rate. When grown in FM, cells of both strains proliferated at a reduced rate compared to growth in HL5 (Fig. 5A, B). We did not detect any significant differences in the growth rates of AX3 and cln3− cells during the first 96 hours of growth in FM (Fig. 5B). However, at the 120- and 144-hour time points, cln3− cells were at a significantly higher density than AX3 cells, and the genotype was found to have a significant effect on the overall growth curve, as determined by two-way ANOVA (p<0.01) (Fig. 5B).
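As a sketch of how such a genotype-by-time analysis might be set up (the paper does not describe its software), assuming growth-curve data in long format with columns strain, time, and density (all names here are assumptions of this illustration):

```python
# Minimal sketch of a two-way ANOVA (genotype x time) on growth-curve data,
# assuming a long-format table with columns: strain, time, density.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

growth = pd.read_csv("growth_curves.csv")  # hypothetical input file

# Treat strain and time as categorical factors and include their interaction.
model = smf.ols("density ~ C(strain) * C(time)", data=growth).fit()
table = anova_lm(model, typ=2)
print(table)  # the C(strain) row tests the genotype effect reported in the text
```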
Since pinocytosis is required for the growth of Dictyostelium cells in liquid media, we used a well-established assay to assess whether this process was dysregulated in cln3− cells. AX3 and cln3− cells were incubated with FITC-dextran, and the amount of intracellular fluorescence was measured at specific time intervals over a 90-minute incubation period. At the 40-minute time point, the intracellular fluorescence was relatively higher (~50%) in cln3− cells compared to AX3 cells (Fig. 5C). However, two-way ANOVA analysis of the pinocytic uptake of FITC-dextran over the entire 90-minute incubation period did not indicate a statistically significant genotype effect (p>0.05) (Fig. 5C). Although one of the pathological hallmarks of JNCL is the accumulation of lysosomal storage material in neurons and other cell types [2,3], we were unable to observe any autofluorescent material in cln3− cells during growth (unpublished data).
Cln3 deficiency negatively affects the secretion and cleavage of autocrine proliferation repressor A during growth
In an attempt to gain further insight into the possible mechanisms by which Cln3 deficiency leads to enhanced proliferation, we next investigated two secreted proteins that modulate growth in Dictyostelium by repressing cell proliferation: autocrine proliferation repressor A (AprA) and counting factor-associated protein D (CfaD) [41,42]. Whole cell lysates (i.e., intracellular) and conditioned growth media (i.e., extracellular) from AX3 and cln3− cells were analyzed for the levels of AprA and CfaD present in each sample. In whole cell lysates, anti-AprA strongly detected a 60-kDa protein and weakly detected a 55-kDa protein (Fig. 6A), consistent with the banding pattern observed in another parental strain of Dictyostelium, AX2 [41]. After 48 and 72 hours of axenic growth, the amount of the 55-kDa protein in cln3− whole cell lysates was significantly greater than the amount in AX3 cells (Fig. 6A). In contrast, there were no significant differences in levels of the 60-kDa protein (Fig. 6A). In samples of conditioned growth media, anti-AprA detected the 60-kDa and 55-kDa proteins as well as a 37-kDa protein, which had not been observed in whole cell lysates from either AX3 or cln3− cells (Fig. 6A). After 72 hours of growth, the amount of 60-kDa protein in cln3− conditioned media was significantly reduced compared to the amount present in AX3 conditioned media (Fig. 6A). After 48 and 72 hours of growth, the amount of 37-kDa protein in conditioned media from cln3− cells was also significantly reduced compared to amounts present in AX3 conditioned media (Fig. 6A). In contrast, the 55-kDa protein was present in significantly greater amounts at each time point in cln3− conditioned media (Fig. 6A).
In whole cell lysates and samples of conditioned growth media, anti-CfaD detected two proteins of molecular weights 65-kDa and 27-kDa, consistent with the predicted molecular weights of full-length CfaD and its putative cleavage product (Fig. 6B) [42]. After 48 hours of growth, there was significantly more CfaD (i.e., both the 65-kDa and 27-kDa proteins) in cln3− whole cell lysates compared to AX3 lysates (Fig. 6B). However, there was no significant difference between strains in the intracellular level of either protein after 72 hours of growth (Fig. 6B). There was no significant effect resulting from Cln3 deficiency on the amounts of full-length CfaD or its cleavage product in conditioned media after 48 and 72 hours of growth (Fig. 6B). The absence of actin and tubulin from samples of conditioned growth media verified that the samples were not contaminated with intracellular proteins (Fig. 6C). Together, these data suggest that Cln3 deficiency in Dictyostelium leads to an enhanced rate of cell proliferation that is concomitant with alterations in secretory proteins that regulate extracellular proliferation signaling.
Cln3 deficiency accelerates the formation of tipped mounds and slugs during mid-development
Given the dramatic increase in cln3 expression upon entering developmental phases of the Dictyostelium life cycle, we next sought to extend our analysis of Cln3 function to developmental processes. After 12 hours of development, 33±5% of cln3− structures had progressed to the tipped mound stage of development, compared to only 3±1% of AX3 structures (Fig. 7A,B). By 15 hours, 83±3% of cln3− multicellular structures had developed into either fingers or slugs compared to only 19±3% of AX3 structures (Fig. 7A, C). Overexpression of Dictyostelium GFP-Cln3, or expression of Dictyostelium GFP-Cln3 or human GFP-CLN3 under the control of the cln3 upstream element in cln3− cells, suppressed the precocious development of cln3− cells at both the 12- and 15-hour time points to levels that were not significantly different from AX3 (Fig. 7A-C). Thus, Cln3 deficiency leads to precocious mid-stage development of Dictyostelium and this acceleration can be returned to near-normal levels by reintroducing Dictyostelium Cln3 or human CLN3 in an N-terminal fusion with GFP.
Cln3 deficiency increases slug migration and accelerates fruiting body formation during late development
During the later stages of Dictyostelium development, a larger number of cln3− slugs were observed to migrate outside the spot of deposition compared to AX3 slugs (Fig. 8A). After 18 hours, 41±2% of cln3− slugs migrated out of the spot of deposition compared to only 16±2% of AX3 slugs (Fig. 8B). Notably, this could not be accounted for by the overall accelerated rate of development observed in cln3− cells, since a significantly higher percentage of cln3− slugs also migrated out of the spot after 21 hours compared to AX3 slugs (Fig. 8A, unpublished data). Overexpression of Dictyostelium GFP-Cln3, or expression of Dictyostelium GFP-Cln3 or human GFP-CLN3 under the control of the cln3 upstream element in cln3− cells, significantly suppressed this slug migration phenotype to levels observed for AX3 slugs (Fig. 8A,B). Interestingly, the slug migration phenotype could not be explained by a defect in phototaxis, since we observed no obvious effect of cln3 knockout on slug migration in a phototaxis assay (unpublished data).
Finally, Cln3 deficiency significantly accelerated fruiting body formation for those structures that remained in the deposition spot. After 18-21 hours of development, 86±3% of cln3− structures had developed into fruiting bodies compared to only 55±6% of AX3 structures (Fig. 8A, C). As it did for the slug migration stage, overexpression of Dictyostelium GFP-Cln3 or expression of Dictyostelium GFP-Cln3 or human GFP-CLN3 under the control of the cln3 upstream element in cln3− cells significantly suppressed the accelerated fruiting body formation to levels that were not significantly different from AX3 (Fig. 8C).
Taken together, these data strongly indicate that Cln3 deficiency causes an overall accelerated rate of development in Dictyostelium, but that development nevertheless proceeds to the fruiting body stage (Fig. 8D). The ability to rescue the precocious development of cln3− cells by introducing human CLN3 strongly supports the notion that these steps require a function that is conserved between Dictyostelium and humans.
Calcium chelation restores the timing of cln3− slug formation and suppresses the abnormal migration of cln3− slugs

Since calcium signaling has been shown to be involved in regulating a number of developmental processes in Dictyostelium [49–52], the effect of calcium chelation on the substantial acceleration of mid-developmental events in cln3− cells was assessed. AX3 and cln3− cells were deposited on filters soaked in EGTA at concentrations that have previously been shown to be effective at chelating calcium during Dictyostelium development [51,52]. The timing of slug formation and the extent of slug migration were then assessed. Interestingly, EGTA (1 mM and 2 mM) suppressed the accelerated formation of cln3− slugs and fingers after 15 hours of development, and suppressed the enhanced migration of cln3− slugs at the 18-hour time point to levels that were not significantly different from AX3 (Fig. 9A-D). EGTA had no significant effect on the accelerated formation of cln3− fruiting bodies (unpublished data).
Discussion
In this study, we have shown that Dictyostelium contains an ortholog of CLN3, loss-of-function mutations in which cause the childhood-onset neurodegenerative disorder JNCL in humans. We generated a Dictyostelium cln3 knockout mutant that was validated by PCR and Southern blotting and have provided evidence that links Cln3 function to axenic growth and multicellular development. Dictyostelium GFP-Cln3 localizes primarily to the CV system, and to a lesser extent, to compartments of the endocytic pathway. Expression of Dictyostelium GFP-Cln3 or human GFP-CLN3 in cln3− cells suppresses the aberrant proliferation, precocious development, and slug migration phenotypes observed in knockout cells. Together, our data strongly suggest that Cln3 is a negative regulator of proliferation and development in Dictyostelium. Finally, we have provided evidence linking AprA secretion and cleavage to Cln3 function during growth, and calcium signaling to Cln3 function during multicellular development.
The enhanced proliferation of cln3− cells, coupled with the observation that Dictyostelium GFP-Cln3 overexpression in AX3 cells significantly reduces the final density of stationary phase cultures, strongly supports the notion that Cln3 negatively regulates this cellular process in Dictyostelium. In Dictyostelium, extracellular liquid is ingested by macropinocytosis [53]. An increased rate of pinocytosis would conceivably allow cells to ingest nutrients required for growth at an enhanced rate. Moreover, Journet et al. [54] identified Cln3 in an analysis of the macropinocytic proteome of Dictyostelium amoebae. Our pinocytosis analysis of cln3− cells during axenic growth revealed only minor differences, suggesting further work is needed to fully elucidate the mechanisms by which Cln3 deficiency affects cell proliferation in Dictyostelium. In other systems, CLN3 has also been reported to localize to the endocytic pathway, and its deficiency impairs endocytosis in those systems [19,55–59]. Together, our results, coupled with those reported by others, indicate that further research is required to determine the precise function of CLN3 in the endocytic pathway, which may be organism- or cell-type dependent.
Based on our observations of the intra- and extracellular amounts of AprA and the fact that AprA negatively regulates cell proliferation in Dictyostelium [41], it would appear that the enhanced proliferation of cln3− cells could be at least partially explained by the lack of full-length AprA and its putative 37-kDa cleavage product in conditioned media. Since the intracellular amounts of 60-kDa AprA were not significantly different between AX3 and cln3− cells, thus excluding the possibility that aprA gene expression or translation were affected by Cln3 deficiency, our results suggest that Cln3 facilitates the secretion of AprA during growth. The detection of a 37-kDa protein by the highly specific anti-AprA antibody in conditioned media, but not whole cell lysates, suggests that AprA is cleaved extracellularly during growth. Since the amount of the 37-kDa protein was significantly reduced in cln3− cells, these results suggest that Cln3 deficiency also negatively affects the secretion of a protease required for AprA cleavage. This is supported by previous studies that have reported the proteolytic cleavage of extracellular proteins during growth and development [60–63]. Furthermore, a study describing the secreted proteome profile of growing and developing Dictyostelium cells also reports the detection of a large number of extracellular proteases in conditioned media [64]. Like AprA, CfaD is part of a ~150 kDa complex that functions extracellularly to repress cell proliferation in Dictyostelium, and chromatography and pull-down assays suggest that CfaD interacts with AprA [42]. Since increased levels of intracellular CfaD were observed in cln3− cells during the early stages of axenic growth, our results suggest that altered CfaD secretion could also explain the enhanced proliferation of cln3− cells. However, we observed no correlated decrease in the extracellular levels of CfaD over the same time period. Nevertheless, our data indicate that Cln3 facilitates the secretion of AprA, and may, to a lesser extent, also facilitate CfaD secretion. Taken together, the altered secretion of these extracellular signaling proteins could explain the enhanced proliferation of cln3− cells.
[Figure 6 caption, displaced in extraction: Immunoblots exposed for a longer period (i.e., longer exposure) are included to show the 55-kDa and 37-kDa protein bands detected by anti-AprA; note that the 37-kDa protein was detected in samples of conditioned growth media, but not in whole cell lysates. (B) Intra- and extracellular protein levels of CfaD. Data in all plots presented as mean amount of protein relative to the AX3 48-hour sample (%) ± s.e.m. (n = 4 independent experimental means, from 2 replicates in each experiment). Statistical significance was determined using a one-sample t-test (mean, 100; two-tailed) vs. the AX3 48-hour sample; *p<0.05, **p<0.01. (C) Detection of tubulin and actin in whole cell lysates (WC; lanes 1-2), but not in samples of conditioned growth media (lanes 3-6). doi:10.1371/journal.pone.0110544.g006]
During growth, Dictyostelium GFP-Cln3 localized primarily to the CV system in live and fixed cells, and to a lesser extent to the endocytic system. In Dictyostelium, the CV system is dynamic and functions in a number of cellular processes including osmoregulation, calcium storage, protein transport to the plasma membrane, and secretion [45,65,66]. Although Dictyostelium GFP-Cln3 was observed to localize to the CV system, we observed no obvious sensitivity of cln3− cells to hypo-osmotic conditions during growth in HL5 (25% HL5, 75% double-distilled water) or during starvation in double-distilled water (unpublished data). However, further analysis is required to determine if there are subtle effects of Cln3 deficiency on osmoregulation during Dictyostelium growth. In Dictyostelium, the CV and endosomal systems appear to be physically separated from each other. However, some experimental evidence also indicates that controlled intracellular transport might occur between these two systems [67–69]. The observation that GFP-Cln3 localizes to both the CV and endocytic systems in Dictyostelium is consistent with the localization of mammalian CLN3 to multiple subcellular compartments including the endocytic and lysosomal systems [13]. Notably, endogenous Cln3 has been reported within fractions of the macropinocytic pathway in Dictyostelium, consistent with our localization data presented here [3,46,47,53]. Finally, since Dictyostelium GFP-Cln3 is able to rescue growth and developmental phenotypes, we are confident that we have correctly identified the subcellular localization of Cln3 in Dictyostelium.
Phenotypes were observed in cln3− cells during mid- and late Dictyostelium development that further support Cln3 as a negative regulator in Dictyostelium. Consistent with the relatively higher expression of cln3 mRNA during mid- and late development, loss of cln3 by gene knockout significantly accelerated the formation of mid- and late developmental structures. Precocious development has been observed in a number of Dictyostelium knockout mutants. Specifically, early tipped mound formation has been reported in strains overexpressing cyclin C, cyclin-dependent kinase 8, or the G-protein alpha 5 subunit, and in knockout mutants of histidine kinase C, a metabotropic glutamate receptor-like protein, protein inhibitor of STAT, and SCAR/WAVE [70–75]. Several knockout mutants that display increased slug migration have been described, including mutants for genes important for oxysterol binding, the assembly of mitochondrial complex I, and the targeting of proteins for degradation via proteasomes [76–78]. This phenotype has also been observed in cells overexpressing histidine kinase C or in cells where calcium-binding protein 3 expression has been knocked down by RNAi [72,79]. The diversity of functions associated with these proteins, as well as those discussed above for the other developmental phenotypes in cln3− cells, highlights the importance of elucidating the signal transduction pathways underlying the function of Cln3 during Dictyostelium development.
The ability to completely restore the timing of cln3− slug formation and the enhanced slug migration through the chelation of calcium provides some mechanistic insight into the signaling pathways affected by Cln3 deficiency during these stages of the life cycle. These results are interesting given that Dictyostelium GFP-Cln3 localizes predominantly to the CV system, which has been shown to be a highly efficient store of intracellular calcium, and to be required for cAMP-induced calcium influx [65]. In addition, the primary sensor of intracellular calcium, calmodulin, is found predominantly on the membranes of the CV system [80,81]. Our results are consistent with studies in mammalian systems that have reported altered calcium homeostasis in the absence of functional CLN3, which may lead to synaptic dysfunction and neuronal apoptosis [82,83]. Furthermore, CLN3 has been shown to bind to the neuronal calcium-binding protein, calsenilin, in a calcium-dependent manner [84].
Taken together, our data strongly support Cln3 as a negative regulator of proliferation and development in Dictyostelium. Furthermore, our study indicates that cln3 knockout in Dictyostelium compromises the cell's ability to respond to extracellular and/or environmental cues. This first report of a Dictyostelium model to study NCL should spur further research using this important model organism. In addition to CLN3, Dictyostelium also possesses homologs to most of the other known NCL genes (e.g., CLN1–5, CLN7), indicating that the NCL biological pathway is likely to be conserved in this model system. The cellular processes and signaling pathways that regulate the behavior of Dictyostelium cells are remarkably similar to those observed in human cells, strengthening the argument that investigation of NCL gene function in this model organism offers something unique to the study of this devastating group of inherited neurodegenerative disorders.
[Figure 8 caption, displaced in extraction: Statistical significance in C was assessed using the Kruskal-Wallis test followed by the Dunn multiple comparison test (**p<0.01 vs. AX3). Scale bars = 1 mm. S, slug; FB, fruiting body. doi:10.1371/journal.pone.0110544.g008]
[Figure 9 caption, displaced in extraction: Statistical significance in B was assessed using the Kruskal-Wallis test followed by the Dunn multiple comparison test (*p<0.05 vs. AX3). Statistical significance in D was assessed using one-way ANOVA (p<0.0001) followed by the Bonferroni multiple comparison test (**p<0.01 vs. AX3). F, finger; S, slug. doi:10.1371/journal.pone.0110544.g009]

Supporting Information

Figure S1 Analysis of gene expression driven by endogenous cln3 upstream elements. AX3 cells were transformed with the appropriate construct (pTX-GFP; act15 promoter replaced with cln3 upstream element 1, 2, or 3) and grown in HL5. Cells were harvested and lysed. Proteins (20 μg) were separated by SDS-PAGE and analyzed by western blotting with anti-GFP, anti-tubulin (loading control), or anti-actin (loading control). Molecular weight markers (in kDa) are shown to the right of each blot. (TIF)

Figure S2 Western blot analysis of Dictyostelium strains expressing Dictyostelium GFP-Cln3 or human GFP-CLN3 under the control of the act15 promoter or cln3 upstream element 1. (A-C) AX3 and cln3− cells were transformed with the appropriate construct (gene expression driven by the act15 promoter) and grown in HL5. Cells were lysed and sample loading buffer was added to whole cell lysates, which were either loaded directly into polyacrylamide gels or heated for 5 minutes at 95°C prior to loading into gels. Proteins (20 μg) were separated by SDS-PAGE and analyzed by western blotting with anti-GFP, anti-tubulin (loading control), or anti-actin (loading control). (D) AX3 and cln3− cells were transformed with the appropriate construct (gene expression driven by cln3 upstream element 1) and grown in HL5. Cells were lysed and samples were prepared and analyzed as described above. Molecular weight markers (in kDa) are shown to the left of each blot. (TIF)

Figure S3 Video of Dictyostelium GFP-Cln3 localization in AX3 cells incubated in water. AX3 cells expressing Dictyostelium GFP-Cln3 were grown overnight in low-fluorescence HL5. Cells were washed two times with double distilled water and then resuspended in double distilled water. (MPG)

Table S1 List of primers used for cln3 knockout validation and amplification of cln3 upstream elements.
The following primers were designed to amplify gDNA from AX3 and cln3− cells to validate the knockout of the cln3 gene in the bsr-resistant clone and to amplify fragments upstream of the cln3 start site. The Dictyostelium gene denoted DDB_G0291155 lies downstream of cln3 and was amplified to confirm that insertion of the bsr cassette did not affect gene DDB_G0291155. (DOCX) File S1 Results, Discussion, and References specific to the Supplemental Table and Figures. (DOCX) | 2016-05-03T19:42:17.255Z | 2014-10-17T00:00:00.000 | {
"year": 2014,
"sha1": "7920d8e54a634c9913f20291257e54428931f517",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0110544&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52e102889972cb463fd0ab6accab23b489fef74c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
226081658 | pes2o/s2orc | v3-fos-license | Evaluation of Friesian Holstein Bulls Fertility in Lembang and Singosari Artificial Insemination Center using West Java ISIKHNAS Data
The Friesian Holstein (FH) is one of the dairy cattle breeds raised in Indonesia. It is important to increase dairy cattle production to meet the demand for milk consumption, and one way to develop the population is through artificial insemination (AI). This study aimed to evaluate FH bull fertility by calculating the percentage of first service conception rate (%FSCR) of FH bulls from the Lembang and Singosari AI Centers, using ISIKHNAS data for West Java from 2017 to 2018. After editing, the data included in this study consisted of AI records (n=141,176) and service records with pregnancy diagnosis information (n=98,120). The study showed that the FH bull semen distributed in West Java comes mostly from the Lembang and Singosari AI Centers. The fertility rate can be grouped into high fertile (HF) and low fertile (LF) levels, and the bulls can also be divided into two groups based on the number of AI services. For semen used in fewer than 1,000 AI services, Singosari had 64.29% HF and 22.22% LF bulls, while Lembang had 74.03% HF and 21.15% LF. For more than 1,000 AI services, Lembang had 61.61% HF and 35.28% LF, while Singosari had 63.11% HF and 33.78% LF. In conclusion, the high-fertile FH bulls at the Lembang and Singosari AI Centers had %FSCR values of about 53.13% to 74.03%. A more accurate assessment through genomic analysis is needed to identify biomarkers of HF bulls and thereby improve FH cattle breeding.
Introduction
FH dairy cattle and their offspring are typically kept and raised in highland areas. Dairy cattle have been present and domesticated in Indonesia since 1786. From 1891 to 1892, several breeds of dairy cattle were imported to Indonesia from Australia (Hereford, Shorthorn, Ayrshire, and Jersey) and from the Netherlands (Holstein Friesian) [1]. FH dairy cattle are bred extensively on Java Island; the FH population in Java accounts for more than 99% of the total FH dairy cattle population in Indonesia. However, the growth of the dairy cattle population in Indonesia, especially in Java, decreased by 1.14% per year over the period 2012-2016 [2]. The application of the reproductive technology of artificial insemination (AI) is one method used to increase productivity and develop the FH cattle population faster. The success of AI is determined by semen quality. Genetically, AI supports the dissemination of the high genetic quality of selected bulls, and economically it increases milk production capability [3]. The main goal of dairy cattle producers is therefore to achieve high pregnancy rates with semen from genetically superior sires capable of transmitting high milk production, and the principal goal of every AI Center is to provide customers with a product that allows them to meet these goals [4].
West Java is a province on Java Island, Indonesia, with a large FH dairy cattle population of about 116,400 head. It has a mild climate and mountainous regions more than 1,500 m above sea level, with ambient temperatures of 19-23°C and humidity of 74% [5], and is therefore suitable for FH dairy cattle rearing. For years, the mating system in this area has relied on AI performed by skilled and certified local inseminators in accordance with government policy, and the semen used for AI is mostly provided by the Lembang and Singosari AI Centers. It is widely known that reproduction is a key component in enhancing the population as well as milk production. However, efficiency must be measured periodically for evaluation purposes. This paper therefore aims to evaluate FH bull reproductive efficiency and fertility at the Lembang and Singosari AI Centers via the percentage of first service conception rate (%FSCR), using ISIKHNAS data for West Java province from 2017 to 2018, together with a discussion of related factors in West Java. The results could be beneficial for strategic planning to enhance dairy cattle fertility.
Data collection
Data on artificial insemination and pregnancy diagnoses of FH bulls were collected from Isian Sistem Informasi Kesehatan Hewan Nasional (iSIKHNAS) Provinsi Jawa Barat / the National Animal Health Information System of West Java province for the two years 2017 to 2018. The semen producers were the Lembang AI Center, Bandung, and the Singosari AI Center, Malang, Indonesia. Artificial insemination was performed in 18 regencies (Bandung, Bandung Barat, Bogor, Bekasi, Cianjur, Ciamis, Cirebon, Depok, Sumedang, Sukabumi, Garut, Indramayu, Kuningan, Majalengka, Pangandaran, Purwakarta, Subang, Tasikmalaya) and 9 cities (Kota Bandung, Kota Banjar, Kota Bekasi, Kota Bogor, Kota Cimahi, Kota Cirebon, Kota Depok, Kota Sukabumi and Kota Tasik) in West Java province. After editing, the data included in this study consisted of 141,176 AI records from January 31, 2017 to September 21, 2018, and 98,120 service records with pregnancy diagnosis information. Each insemination event had two possible outcomes: a positive pregnancy diagnosis (i.e., success) or a negative pregnancy diagnosis (i.e., failure). Obvious data errors were discarded, e.g., records without an exact pregnancy diagnosis date because more than four services were given, as well as records in which the recorded service and pregnancy diagnosis dates were inconsistent (e.g., diagnosis dated before the service). Bulls with fewer than 100 recorded services were removed to avoid bias. The total number of inseminations per bull (a minimum of 100 inseminations per bull) and the number of conceiving cows were calculated to estimate the conception rate of all bulls included in this study.
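A minimal sketch of how these editing rules could be applied in practice, assuming the iSIKHNAS export is a flat table with columns bull_id, service_date, diagnosis_date, and pregnant (the file name and all column names are assumptions of this illustration, not part of the original methods):

```python
# Minimal data-editing sketch following the rules described above,
# assuming columns: bull_id, service_date, diagnosis_date, pregnant.
import pandas as pd

records = pd.read_csv("isikhnas_ai_records.csv",
                      parse_dates=["service_date", "diagnosis_date"])

# Drop records with no usable pregnancy diagnosis date.
records = records.dropna(subset=["diagnosis_date"])

# Drop records whose service date falls after the pregnancy diagnosis date.
records = records[records["service_date"] <= records["diagnosis_date"]]

# Keep only bulls with at least 100 recorded services to avoid bias.
counts = records.groupby("bull_id")["service_date"].transform("count")
records = records[counts >= 100]

print(f"{records['bull_id'].nunique()} bulls retained after editing")
```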
The bull fertility was then classified by calculating the percentage of first service conception rate (%FSCR) according to the formula of Siddiqui et al. [8]:

%FSCR = (number of cows confirmed pregnant after the first AI service / total number of first AI services) × 100

The %FSCR of individual bulls was calculated and plotted according to the mean and the standard deviation (SD). High and low fertility of bulls was determined according to Aslam et al. [8]: bulls with %FSCR below 'mean − 1 SD' were considered low fertile, while bulls with %FSCR above 'mean + 1 SD' were considered high fertile.
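Continuing the illustrative sketch above, the %FSCR calculation and the mean ± 1 SD classification could be implemented as follows (the service_number and pregnant columns are again assumptions of this edit):

```python
# Minimal sketch of %FSCR calculation and HF/LF classification per bull,
# assuming columns: bull_id, service_number, pregnant (boolean).
firsts = records[records["service_number"] == 1]           # first services only
fscr = firsts.groupby("bull_id")["pregnant"].mean() * 100  # %FSCR per bull

mean, sd = fscr.mean(), fscr.std()

def classify(x):
    if x > mean + sd:
        return "HF"   # high fertile: above mean + 1 SD
    if x < mean - sd:
        return "LF"   # low fertile: below mean - 1 SD
    return "intermediate"

fertility = fscr.apply(classify)
print(fertility.value_counts())
```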
The service sires ranged in age from about 2 to 9 years, and two seasons were distinguished (dry season: April to September; wet season: October to March). Climate data, including temperature and humidity, were recorded every month based on the Indonesian Meteorological, Climatological, and Geophysical Agency. Bulls were fed and maintained under similar conditions.
Results and Discussion
The Friesian Holstein bulls used in this study could be classified into two fertility levels, high fertile (HF) and low fertile (LF), based on the %FSCR calculated from iSIKHNAS data for 2017-2018. The iSIKHNAS data used in this study were tabulated and selected according to their validity, following the method described above to avoid bias, and were also selected based on inseminator ability. Inseminators were chosen based on their AI service record and technical ability, demonstrated by more than 100 cows conceiving at the first artificial insemination service using the same FH bull ID. Inseminator ability must be considered to obtain the best results from an AI program. The inseminators selected for this study had therefore been trained and certified as inseminators by the government, in this case the livestock service, which licenses insemination. These results are supported by Hoesni [9], who concluded that inseminator expertise in implementing AI is one of the five critical success factors for AI. According to Ismanto [10], inseminator expertise and skill in accurate estrus detection, equipment sanitation, handling of frozen semen, correct thawing, and AI implementation determine AI success. Rivera [11] added that inseminator skill in AI implementation, such as during the estrus detection period, highly influences the pregnancy rate in cattle. AI service quality is critical for achieving a high conception rate. In the present work, the FH bulls at both AI Centers had been selected based on breeding soundness examination (BSE), and frozen semen production was standardized by the Indonesian national standard (SNI 4869-1:2017).
Although most evidence suggests the decline in FH dairy herd reproductive efficiency is primarily related to changes in the management of the female, it is logical to ask what portion of this decline can be attributed to the male. This question is particularly relevant to the genetic components of fertility that may also be expressed in measures of bull reproductive efficiency, which has become a main issue [12]. One way to measure male reproductive efficiency is the first service conception rate (FSCR), the percentage of heifers that became pregnant after the first AI service [13]. Moreover, fertility is highly influenced by management and environmental factors [14]. The %FSCR results are shown in Table 1. According to Saha [15], the normal conception value for FH dairy cattle is about 50% to 77%. The low %FSCR of low-fertile bulls could be influenced by several factors. Bull age could be one factor influencing spermatozoa fertility. The bull IDs given in Table 1 show varying years of birth; as noted above, the bulls used in this study ranged from 2 to 9 years old. In each bull ID, the first number refers to the FH breed, the next two digits refer to the bull's year of birth, and the following numbers are the bull's series number. According to [16], all semen traits were significantly affected by age. In addition, age had a significant interaction effect with season on the volume of the ejaculate and on the percentage of motile spermatozoa. Volume and spermatozoa concentration tended to increase with the age of the bull, regardless of the season or the interval between collections.
The bulls used in this study came from the Lembang and Singosari AI Centers. Each AI Center has its own environmental conditions and management system, which could influence bull reproduction and fertility. It was observed that some bulls in the HF group exhibited better performance in situations of greater challenge. Therefore, bulls whose semen shows higher fertility in certain types of AI could be utilized on a larger scale to increase reproductive rates in artificial insemination. The results of this study indicate that, even when tests show the submitted semen is adequate, variation remains in the quality and reproductive efficiency of each bull. The quality and quantity of semen with high fertility potential depend on numerous genetic and environmental factors [17]. The success of artificial insemination varies with fertility and with different environmental and management situations [18]. Seasonal variation also occurred during 2017-2018; the rainy and dry seasons affect spermatozoa production and thus could influence spermatozoa quality. Recent research has reported that seasonal variation can reduce the conception rate, because the semen of the bull utilized, or the female inseminated, is exposed and susceptible to thermal stress and adverse environmental conditions [18].
The ability of individual bulls to produce large quantities of good-quality semen is essential for satisfactory breeding and economic results of artificial insemination, so attention must be paid to the genetic factors of each bull. Genotype-by-environment (G×E) interactions must be understood if they are to be exploited to improve bull production, in this case semen production, particularly in production systems associated with large environments [19]. Each bull has its own phenotypic fertility characteristics, and these phenotypic differences can be affected by epigenetics. The term epigenetics refers to changes in the phenotype caused by mechanisms other than changes in DNA sequence. Dada [20] reintroduced the term to describe the gene action and expression that give rise to the phenotype. Epigenetic changes encompass an array of molecular modifications of DNA and can influence bull fertility, specifically semen quality.
The ability to compare estimates of service bull fertility across different AI Centers producing frozen semen, having already adjusted for systematic environmental effects, will be valuable in deciding which bulls to use. Achieving high pregnancy rates is the key to profitable dairy production systems, especially seasonal calving (and breeding) production systems. This study clearly illustrates that estimates of male fertility differ when systematic environmental as well as genetic effects are accounted for in a mixed model. The approach is also useful for evaluating technician performance while simultaneously accounting for the impact of environmental and genetic effects on performance. Further, this study provides a basis for identifying biomarkers of HF bulls through more accurate genomic analysis to improve FH cattle breeding. A more objective molecular approach is therefore needed to investigate problems related to fertility. Sufficient scientific evidence has accumulated over the years for genetic and epigenetic regulation of spermatogenesis. Epigenetic modifications such as DNA methylation cause changes in fertility gene expression without changes in DNA sequence [21]. DNA methylation is the stable covalent addition of a methyl group to cytosine in CG-enriched regions of the genome, described as CpG islands, in response to environmental cues or exposures, thereby modifying fertility gene expression [22]. Genomic analysis is therefore needed to obtain more specific and accurate information about markers of high-fertile bulls, which could be used as potential markers to select superior sires and improve FH dairy cattle breeding.
Conclusion
The FH bull semen distributed in West Java comes mostly from the Lembang and Singosari AI Centers. The fertility rate can be grouped into high fertile (HF) and low fertile (LF) levels, and the bulls can also be divided into two groups based on the number of AI services. For semen used in fewer than 1,000 AI services, Singosari had 64.29% HF and 22.22% LF bulls, while Lembang had 74.03% HF and 21.15% LF. For more than 1,000 AI services, Lembang had 61.61% HF and 35.28% LF, while Singosari had 63.11% HF and 33.78% LF. In conclusion, the high-fertile FH bulls at the Lembang and Singosari AI Centers had %FSCR values of about 53.13% to 74.03%. A more accurate assessment through genomic analysis is needed to identify biomarkers of HF bulls and thereby improve FH cattle breeding. The ability to compare estimates of service bull fertility across different AI Centers producing frozen semen, having already adjusted for systematic environmental effects, will be valuable in deciding which bulls to use.
"year": 2020,
"sha1": "36ab8ca8bb15f70a1a48871c28805399b0504ec7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/478/1/012005",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cc10bf3809422c5948f8c725ae725fb6170b6eab",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
256230904 | pes2o/s2orc | v3-fos-license | The Q163C/Q309C mutant of αMI-domain is an active variant suitable for NMR characterization
Integrin αMβ2 (Mac-1, CD11b/CD18, CR3) is an important adhesion receptor expressed on monocytes. Mac-1 is responsible for mediating cell migration, phagocytosis, degranulation as well as cell-cell fusion. It is also the most promiscuous integrin in terms of ligand specificity with over 100 ligands, most of which use the αMI-domain as their binding site. Despite the importance of αMI-domain in defining ligand interactions of Mac-1, structural studies of αMI-domain’s interactions with ligands are lacking. In particular, solution NMR studies of αMI-domain’s interaction with ligands have not been possible because the most commonly used active αMI-domain mutants (I316G and ΔK315) are not sufficiently stable and soluble to be used in solution NMR. The goal of this study is to identify an αMI-domain active mutant that’s amenable to NMR characterization. By screening known activating mutations of αMI-domain, we determined that the Q163C/Q309C mutant, which converts the αMI-domain into its active form through the formation of an intramolecular disulfide bond, can be produced with a high yield and is more stable than other active mutants. In addition, the Q163C/Q309C mutant has better NMR spectral quality than other active mutants and its affinity for ligands is comparable to other active mutants. Analysis of the Co2+-induced pseudocontact shifts in the Q163C/Q309C mutant showed the structure of the mutant is consistent with the active conformation. Finally, we show that the minor fraction of the Q163C/Q309C mutant without the disulfide bond can be removed through the use of carboxymethyl sepharose chromatography. We think the availability of this mutant for NMR study will significantly enhance structural characterizations of αMI-domain-ligand interactions.
Introduction
Integrins are heterodimeric adhesion receptors ubiquitous to all metazoans [1]. Integrins play vital roles in numerous cellular processes including cell growth, proliferation, differentiation, and migration. In the cells of the innate immune system, integrins are also responsible for the phagocytosis of opsonized targets as well as fusion with other immune cells [2,3]. Much of integrins' activities depend on their ability to bind specifically to extracellular proteins. As disulfide-constrained variants with intact hydrophobic cores, these active mutants should be more stable and more amenable to structural characterization by solution NMR. To confirm this hypothesis, we expressed and purified all three mutants and characterized them using NMR, SPR, and other biophysical techniques. Our results show that only one of the mutants (Q163C/Q309C) can spontaneously form the required disulfide bond. Although treatment of the D132C/K315C mutant with the oxidizing agent Cu 2+ /phenanthroline was reported to produce active forms of the mutant, our data indicate only ~50% of protein were disulfide bonded after the treatment while unintended modifications associated with oxidation of other amino acids also took place. In addition, expressing the Q163C/Q309C mutant in the E. coli strain OrigamiB(DE3), which contains lower levels of the reductants glutathione and thioredoxin, further improved the percentage of proteins with the internal disulfide bond from ~90% to ~95%. More importantly, the Q163C/Q309C mutant has better yield, higher thermal stability, and better NMR spectral quality. Analysis of Co 2+ -induced pseudocontact shifts (PCS) of the Q163C/Q309C mutant showed the structure of the mutant is consistent with the active conformation. SPR analysis also showed that the Q163C/Q309C mutant has a similar affinity for the ligand C3d as the I316G and ΔK315 mutants. Finally, the small fraction of protein without the intramolecular disulfide bond can be easily removed by taking advantage of the high affinity of the active α M I-domain for carboxymethyl dextran. In sum, these results show the Q163C/Q309C mutant is excellent for NMR studies. We think this mutant will be useful to researchers interested in studying the interactions of α M I-domain with its ligands.
Expression and purification of α M I-domain
The expression and purification of α M I-domain followed the procedures in Feng et al. [13].
Briefly, the open reading frame (ORF) of the wild type human α M I-domain (E131-T324) was cloned into the pHUE vector [14] using SacII and HindIII as restriction sites. Cysteine mutations (Q163C/Q309C, D132C/K315C, D294C/Q311C) were introduced into α M I-domain using the Q5 Site-Directed Mutagenesis Kit (NEB). The plasmids were transformed into either BL21(DE3) or OrigamiB(DE3) (Millipore Sigma) and the cells were grown in M9 media at 37˚C until the culture reached an OD 600 of ~0.8-1. The protein expression was induced with 0.5 mM IPTG, and cells were harvested after overnight incubation at 22˚C. Cells were lysed with sonication after a 20-minute incubation on ice in a buffer containing 20 mM sodium phosphate, pH 7, 0.5 M NaCl, 10 mM imidazole, 5% glycerol, and 1 mg/ml lysozyme. After centrifugation to remove insoluble material, the supernatant was passed through a 5-mL HisTrap column (Cytiva) and the protein was eluted with a 0.01 to 0.5 M gradient of imidazole. To separate the N-terminal His-Ubiquitin from α M I-domain, the protein was digested with the enzyme USP2 (1:50 molar ratio) overnight at room temperature in 20 mM Tris, pH 8, 100 mM NaCl. The digestion mixture was then subjected to Ni 2+ column purification again. α M I-domain in the flow through was further purified using a 120-mL Superdex 75 column equilibrated in buffer containing 20 mM HEPES, 0.3 M NaCl, pH 7.0. Finally, the protein was exchanged to 20 mM HEPES, 100 mM NaCl, pH 7.0 for thermal shift and NMR analyses. SPR analysis was carried out in 20 mM HEPES, pH 7 buffer containing 0.1 M NaCl, 1 mM MgCl 2 , and 0.05% Tween 20. 15 N, 13 C, and 2 H isotope enrichment was accomplished using 15 N-enriched NH 4 Cl, 2 H, 13 C-enriched glucose, and D 2 O. In particular, OrigamiB(DE3) cells freshly transformed with the expression plasmid for the Q163C/Q309C mutant were first grown in LB at 37˚C until an OD 600 of ~1.0. 2 mL of the culture was then pelleted gently at room temperature and used to seed 50 mL of 2 H, 13 C, 15 N-enriched minimal media containing 4 g/L of 2 H, 13 C-glucose. The culture was grown overnight at 37˚C and diluted with 450 mL of fresh 2 H, 13 C, 15 N-enriched minimal media. The large scale culture was grown to an OD 600 of ~0.8 to 1.0 and induced with 0.5 mM of IPTG. The culture was then placed in a shaker incubator at 22˚C for 18 hours before being harvested.
Expression and purification of C3d
C3d was expressed and purified according to Bajic et al. [6]. Briefly, ORF of C3d (residues 993 to 1288) was cloned into pET15b with 6XHis and TEV cleavage site at the N-terminus and expressed in E. coli BL21(DE3) using a similar procedure as α M I-domain. Specifically, after harvesting, cells were resuspended in 20 mM Tris, 200 mM NaCl, pH 8.0 in addition to 1 mg/ mL lysozyme. The supernatant after sonication was passed through a 5-ml HisTrap column (Cytiva) and the protein was eluted using a 0.01 to 0.5 M imidazole gradient. The chimera protein was digested with TEV protease with a protein-to-enzyme ratio of 50. The digested protein was subjected to a second Ni 2+ -affinity column. Flow through fractions containing C3d were combined and concentrated.
Protein thermal shift assay of α M I-domain
To measure the thermal stability of α M I-domain, 10 μg of α M I-domain in 12.5 μL of 20 mM HEPES, 0.1 M NaCl, pH 7 buffer were mixed with 2.5 μL of 8X protein thermal shift dye (Thermo Fisher Scientific), and 5 μL of 4X protein thermal shift assay buffer (Thermo Fisher Scientific). Duplicates of each sample were heated from 22˚C to 99˚C with a temperature gradient of 0.015˚C / second in a QuantStudio 3 qPCR instrument. The fluorescence of the samples was measured using an excitation wavelength of 580 nm and an emission wavelength of 623 nm.
SPR analysis of α M I-domain's interaction with C3d
SPR analysis was carried out on a BI-4500 SPR instrument (Biosensing Instrument). To carry out the measurement, 50 μM C3d was flowed at a rate of 20 μL/min over EDC/NHS-activated CM-dextran sensor until a response of ~1000 RU was observed. An increase of 1 RU corresponds to protein deposition of ~1 pg/mm 2 . After washing with 1.5 M NaCl, wild type, ΔK315, I316G, and Q163C/Q309C α M I-domain at concentrations of 0.19, 0.39, 0.78, 1.56, 3.13, 6.25, and 12.5 μM were flowed over the sensor while data were acquired. The sensor was regenerated with 1.5 M NaCl after each sample. Response curves were background corrected by subtracting the response of the reference channel with no immobilized C3d.
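As an illustration of how such background-corrected response curves are commonly analyzed (the instrument's own fitting software is not described in the text), the sketch below fits a simple 1:1 Langmuir association model with SciPy; the model form, synthetic data, and all variable names are assumptions of this edit:

```python
# Minimal 1:1 Langmuir kinetic fit for one SPR association phase,
# assuming arrays t (s) and response (RU) for a single analyte concentration C.
import numpy as np
from scipy.optimize import curve_fit

def association(t, Rmax, kon, koff, C):
    """Pseudo-first-order association: R(t) = Req * (1 - exp(-kobs * t))."""
    kobs = kon * C + koff
    Req = Rmax * kon * C / kobs
    return Req * (1.0 - np.exp(-kobs * t))

C = 6.25e-6  # analyte concentration (M), one of the tested dilutions
t = np.linspace(0, 120, 240)                  # hypothetical time axis
response = association(t, 800, 1e4, 1e-2, C)  # synthetic demo data
response += np.random.normal(0, 2, t.size)    # add measurement noise

popt, _ = curve_fit(lambda t, Rmax, kon, koff: association(t, Rmax, kon, koff, C),
                    t, response, p0=(1000, 1e4, 1e-2))
Rmax, kon, koff = popt
print(f"KD = {koff / kon:.2e} M")  # equilibrium dissociation constant
```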
NMR data collection and analysis
NMR data were acquired on a Bruker 600 MHz instrument equipped with Avance III HD console and a Prodigy probe. All NMR samples contained 0.1 to 0.3 mM 15 N-labeled α M I-domain in 20 mM HEPES, 0.1 M NaCl, pH 7.0 buffer. To study the effect of Mg 2+ and glutamate on the protein, 1 mM MgCl 2 and 10 mM sodium glutamate were also included in the sample. 15 N-edited HSQC spectra were acquired using the Bruker pulse sequence fhsqcf3gpph. Pseudocontact shifts (PCS) were measured by comparing the amide hydrogen chemical shifts of the 2 H/ 13 C/ 15 N-labeled Q163C/Q309C mutant in the Co 2+ -bound form with the Mg 2+ -bound form. PCS samples also contained 10 mM glutamate to prevent aggregation of the protein.
Backbone amide hydrogen and nitrogen assignments were obtained using information from HNCACB and HNCOCAB spectra for both Co 2+ - and Mg 2+ -bound proteins. To assign the chemical shifts of the Mg 2+ species, we first tabulated the HN, N, CA, CB chemical shifts as well as CA and CB chemical shifts of the previous amino acids for each spin system using the spectral data. The information was then used to derive possible assignments using iPINE [15]. Each assignment was then examined manually to confirm the assignment. Similarly, we tabulated the HN, N, CA, CB chemical shifts as well as CA and CB of the previous amino acids for each spin system in the Co 2+ data. Then, utilizing the fact that PCS of backbone amide H and N are similar if the atoms are not too close to the paramagnetic center, possible assignments were proposed by finding assigned signals that are shifted diagonally in the HSQC spectrum of the Mg 2+ sample. The possible assignments were further validated by comparing the CA and CB chemical shifts of the Co 2+ and Mg 2+ samples, taking into consideration that CA and CB chemical shifts may have PCS of similar magnitudes. The preliminary PCS tensor calculated using these assignments was used to predict additional assignments. NMR data were processed using NMRPipe [16] and analyzed using NMRViewJ [17]. Fitting of PCS to the structures of active and inactive α M I-domain (PDB ID 1IDO and 1JLM) was carried out using the software Paramagpy [18]. The quality factor of fitting is defined as Q = sqrt( Σ(δPCS,calc − δPCS,exp)² / Σ(δPCS,exp)² ), where δPCS,calc and δPCS,exp are the calculated and experimental pseudocontact shifts.
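A minimal sketch of this quality-factor calculation, assuming matched arrays of experimental and back-calculated PCS values (the numbers below are placeholders, not data from this study):

```python
# Minimal sketch of the PCS quality factor Q described above,
# assuming matched arrays of experimental and back-calculated PCS (ppm).
import numpy as np

pcs_exp = np.array([0.42, -0.15, 0.88, 0.05, -0.33])   # placeholder values
pcs_calc = np.array([0.40, -0.12, 0.91, 0.07, -0.30])  # placeholder values

q = np.sqrt(np.sum((pcs_calc - pcs_exp) ** 2) / np.sum(pcs_exp ** 2))
print(f"Q = {q:.3f}")  # lower Q indicates a better fit to the structure
```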
Spontaneous disulfide bond formation in mutants
Our study is motivated by a desire to study the ligand specificity of α M I-domain using solution NMR. In our hands, both the ΔK315 mutant (residues E131 to K315) and the I316G mutant showed low solubility and poor NMR spectral quality (vide infra). We attribute this to the fact that the disruption of the hydrophobic core of α M I-domain may have significantly destabilized the protein, rendering them unsuitable for solution NMR studies. Fortunately, activation of α M I-domain using constraining disulfide bonds that pull the C-terminal α7 helix away from MIDAS has also been reported. We recombinantly expressed three such mutants. These mutants are D132C/K315C [11], Q163C/Q309C [12], and D294C/Q311C [12]. Of these three mutants, D132C/K315C has been shown to have a higher affinity for ICAM1 after treatment with an oxidizing agent whereas HEK293T cells expressing the Q163C/Q309C, and D294C/ Q311C Mac-1 mutants achieved higher ligand affinity even without oxidizing treatments [12]. We recombinantly expressed all three mutants in E. coli BL21(DE3). Analysis of the purified proteins using SDS-PAGE under reducing and non-reducing conditions revealed that the formation of the intramolecular disulfide bond significantly increased the migration distance of the protein in SDS-PAGE. This provided us with a simple method to determine the fraction of proteins containing the intramolecular disulfide bond. Fig 1A shows SDS-PAGE analysis of the three mutants in the presence and absence of 1 mM DTT. It is clear that neither the D132C/K315C nor the D294C/D311C mutant was able to form a significant amount of the intramolecular disulfide bond while close to 90% of the Q163C/Q309C mutant did. In addition, both the D132C/K315C and the D294C/D311C mutants produced a detectable amount of dimer as a result of intermolecular disulfide bonding, but no dimer was observed in the Q163C/Q309C mutant.
Because it was reported that the intramolecular disulfide bond in the D132C/K315C mutant only forms after the protein is treated with 0.1 μM Cu2+/phenanthroline complex [11], we examined the efficiency of such a process. Besides 0.1 μM Cu2+/phenanthroline, we also tested the effect of 1 mM MgCl2 and 10 mM sodium glutamate on the efficiency of disulfide bond formation, since these compounds are ligands of the active αM I-domain [19] and may stabilize the active conformation. Fig 1B shows the result of the experiment. Not surprisingly, 1 mM MgCl2 and 10 mM sodium glutamate had little effect on the formation of the intramolecular disulfide bond by themselves, but when used in combination with 0.1 μM Cu2+/phenanthroline, they increased the fraction of proteins with the intramolecular disulfide bond to ~50%. However, many other minor bands, as well as a band corresponding to the dimer, were also visible. We think the minor bands may be produced by the oxidation of other amino acids by Cu2+. Interestingly, the original study of the D132C/K315C mutant noted that the protein could not be successfully crystallized [11]; this may be a consequence of the heterogeneous oxidation of the protein.
Because the intracellular environment of E. coli is reducing and not conducive to the formation of disulfide bonds, we also examined whether the engineered E. coli strain OrigamiB(DE3), which carries mutations in its glutathione reductase and thioredoxin reductase genes, can produce more protein with the intramolecular disulfide bond. Fig 1C shows the SDS-PAGE of the Q163C/Q309C mutant produced in BL21(DE3) and OrigamiB(DE3). Densitometry analysis of band intensities showed the OrigamiB(DE3) strain modestly improved the fraction of protein with the intramolecular disulfide bond from ~90% to ~95%.
Yield and thermal stability of active mutants
Compared with the I316G and ΔK315 mutants, the yield of the Q163C/Q309C mutant is significantly higher. Fig 2A shows the final Superdex 75 chromatograms of the Q163C/Q309C, I316G, and ΔK315 mutants. Based on the area under the elution peak, we estimate the yield of the Q163C/Q309C mutant was ~3.5 times that of the I316G and ΔK315 mutants (382 mAU·mL/L for the Q163C/Q309C mutant vs 90 mAU·mL/L for ΔK315 and 87 mAU·mL/L for I316G). BCA assay of the final products showed that the yield of the Q163C/Q309C mutant was ~3.8 mg/L, whereas the yield for the I316G and ΔK315 mutants was ~1.1 mg/L. To compare the stability of the active αM I-domain mutants, we carried out differential scanning fluorimetry [20]. The results show that the Q163C/Q309C mutant is significantly more stable than the I316G and ΔK315 mutants: while the Q163C/Q309C mutant has a melting temperature of ~61˚C, the melting temperatures of the I316G and ΔK315 mutants were only 48˚C and 45˚C, respectively. Interestingly, the melting temperature of the Q163C/Q309C mutant is slightly higher than that of the wild-type αM I-domain (58˚C). This indicates the disulfide bond adds significant stability to the protein structure.
SPR analysis of active α M I-domain's interaction with C3d
Although HEK293T cells expressing Mac-1 with the Q163C/Q309C mutations were shown to have a higher affinity for ligands than cells expressing wild-type Mac-1 [12], there has been no biochemical study showing that the Q163C/Q309C αM I-domain has enhanced affinity for ligands compared to the wild-type I-domain. To confirm that the Q163C/Q309C mutant indeed has enhanced affinity for ligands, we examined the mutant's interaction with C3d, the Mac-1-binding domain in complement 3 [6], using SPR. We estimated the dissociation constant (Kd) of the interaction for each active αM I-domain by fitting the equilibrium values of the response curves to a one-to-one binding model; the resulting affinities were comparable to those reported in other studies [6,7]. These results indicate the Q163C/Q309C mutant has a similar affinity for C3d as other active mutants. However, it is notable that the koff rate of ΔK315 appears to be slower than that of the other two active mutants, implying the kinetics of binding differ among the mutants (Fig 2 of S1 Fig).
Solution NMR analysis of the Q163C/Q309C mutant
To investigate whether the Q163C/Q309C mutant has better NMR spectral quality than other active mutants, we collected the 15N-edited HSQC spectra of 0.1 mM 15N-labeled ΔK315, I316G, and Q163C/Q309C mutants (Fig 4A). A comparison of the spectra showed the Q163C/Q309C mutant produced significantly higher signal intensities than the other two mutants. In addition, samples of the ΔK315 and I316G mutants showed large decreases in signal intensity after 24 hours at room temperature, whereas the Q163C/Q309C sample, under the same conditions, showed no comparable loss of signal.
To confirm that the Q163C/Q309C mutant can bind ligands using the metal-mediated mechanism, we also investigated how Mg2+ and glutamate affect the NMR spectrum of the mutant. As shown in Fig 4C, glutamate alone produced no changes in the HSQC spectrum of the mutant, while the addition of MgCl2 reduced the signal intensity without producing new signals. We attribute this to the fact that, in the presence of MgCl2, the active αM I-domain has a significantly higher affinity for carboxyl-containing molecules, and this may have led to the formation of large homo-oligomers that are not detectable by solution NMR. Consistent with this hypothesis is the fact that the presence of both glutamate and MgCl2 resulted in large chemical shift changes and significantly stronger signals. We think this is because glutamate, by acting as a competing ligand, was able to dissociate the homo-oligomers, thereby producing the spectrum of the Q163C/Q309C mutant in the ligand-chelated form.
To verify that the conformation of the Q163C/Q309C mutant is similar to that of the active αM I-domain and not the inactive αM I-domain, we measured pseudocontact shifts (PCS) induced by the paramagnetic ion Co2+. The αM I-domain naturally binds Co2+, and Co2+ does not affect the activity of the protein [21]. This allows the PCS induced by Co2+ to be used to validate the structure of the Q163C/Q309C mutant. PCS arise from the dipole-dipole interaction between a paramagnetic metal ion with an anisotropic magnetic susceptibility tensor and nearby atoms; they are both distance- and orientation-dependent. As a result, PCS have become a valuable tool in protein structure validation [22]. We assigned a subset of the backbone amide hydrogen and nitrogen chemical shifts for both the Co2+ and Mg2+ species of the Q163C/Q309C mutant in the presence of 10 mM glutamate (see Fig 3 of S1 Fig for sample spectral data). In total, 77 out of 186 non-proline residues in the Mg2+ species were assigned; of these 77 residues, 56 were also assigned in the Co2+ species. This allowed us to extract 38 backbone amide hydrogen PCS with magnitudes larger than 0.05 ppm (S1 Table). Fitting the PCS values to the active αM I-domain structure (PDB 1IDO) produced a best-fitting magnetic susceptibility tensor with a paramagnetic center less than 0.9 Å away from the position of the metal in the crystal structure (Fig 5). The agreement between experimental and predicted PCS values is also excellent, with a quality factor of 0.04 (Fig 5). However, the same set of PCS did not fit as well to the inactive structure of the αM I-domain (PDB 1JLM). In particular, the paramagnetic center of the best-fitting magnetic susceptibility tensor for the inactive structure is more than 7 Å away from the metal in the crystal structure (Fig 5), and the quality factor of the fitting was 0.16, significantly larger than that of the fit to the active structure. These results support the conclusion that the Q163C/Q309C mutant adopts the active conformation.
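For reference, the point-dipole expression for the PCS assumed in such fits (a standard result, not written out in the text; r, θ, and φ are the spherical coordinates of the nucleus in the principal frame of the susceptibility tensor, and Δχax and Δχrh are its axial and rhombic anisotropies):

\[ \delta_{\mathrm{PCS}} = \frac{1}{12\pi r^{3}}\left[\Delta\chi_{\mathrm{ax}}\,(3\cos^{2}\theta - 1) + \frac{3}{2}\,\Delta\chi_{\mathrm{rh}}\,\sin^{2}\theta\,\cos 2\varphi\right] \]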
Carboxymethyl sepharose purification of the Q163C/Q309C mutant
To provide protein of the highest purity for structural biology studies, we also devised a strategy to remove the minor population of reduced protein. The method takes advantage of the fact that the αM I-domain in the active conformation has a significantly higher affinity for carboxyl-containing amino acids such as glutamate than in the inactive conformation [19]. Based on this, we postulated that carboxymethyl (CM) sepharose chromatography resin may have a higher affinity for the active αM I-domain than for the inactive αM I-domain in the presence of MgCl2, thereby separating the two forms. Fig 6A shows the CM column chromatogram of the Q163C/Q309C mutant. The protein was loaded onto the column in 20 mM HEPES, 0.15 M NaCl, and 2 mM MgCl2, pH 7.0. The column was washed with five column volumes of the same buffer after sample application. Finally, the mobile phase was switched to 20 mM HEPES, 0.15 M NaCl, and 10 mM EDTA, pH 7.0. The chromatogram showed that a small peak emerged during sample application, while two larger peaks were seen after the application of the EDTA buffer. SDS-PAGE analysis showed that the flow-through (FT) peak contained a ~50%/50% mixture of reduced and disulfide-bonded species (Fig 6A), whereas the elution peaks contained only the disulfide-bonded form of the protein. This result is consistent with our postulate that the active form of the protein has a higher affinity for the CM resin in the presence of MgCl2.
We also studied the proteins in the two elution peaks using solution NMR. Surprisingly, the spectrum of the protein in the first elution peak resembled that of the protein in the presence of both MgCl2 and glutamate, signifying that this fraction of the protein may be chelating a carboxyl-containing ligand (Fig 6B). Protein in the second elution peak produced a spectrum identical to that of the apo Q163C/Q309C mutant (Fig 6C). The addition of EDTA changed the spectrum of the protein in elution peak 1 to that of the apo Q163C/Q309C mutant (Fig 6D), confirming that elution peak 1 contained active αM I-domain bound to ligands through divalent-cation-mediated interactions.
Discussion
Understanding the interactions between integrins and their ligands is central to understanding integrin activities. This is especially true for integrin Mac-1, whose ligands possess diverse structures and physical properties. Although the αM I-domain's interactions with ligands have been studied using X-ray crystallography, the lack of an active αM I-domain suitable for solution NMR has prevented such interactions from being characterized by solution NMR, a powerful and versatile technique that has much to offer in the way of characterizing protein-ligand interactions. In this report, we examined five active mutants for their suitability in NMR studies. We were especially interested in mutants that use an intramolecular disulfide bond to constrain the position of the C-terminal α7 helix rather than mutants that disrupt the hydrophobic core of the αM I-domain. We reasoned that such an activation mechanism should produce more stable and soluble proteins than strategies that disrupt the hydrophobic core of the protein.
Of the three disulfide-bonded mutants, only the Q163C/Q309C mutant formed the intramolecular disulfide bond spontaneously. Characterization of the protein's yield, stability, and NMR spectral quality showed it was considerably more stable than the ΔK315 and I316G mutants, the two most commonly used active variants of the αM I-domain. It also produced superior NMR spectra and had a higher yield than the other two mutants. In addition, Co2+-induced PCS data showed the conformation of the Q163C/Q309C mutant is consistent with the active αM I-domain but not the inactive αM I-domain. This result is in agreement with the SPR data indicating that the Q163C/Q309C mutant has a similar affinity for C3d as the ΔK315 and I316G mutants.
Although treatment with oxidizing agents such as Cu2+/phenanthroline has been shown to induce the formation of an intramolecular disulfide bond in the D132C/K315C mutant, in our hands the treatment also produced several other species that could have resulted from the non-specific oxidation of amino acids such as tyrosine and arginine, a possible factor in the unsuccessful crystallization of the protein after oxidation treatment [11]. It was interesting that the addition of MgCl2 and glutamate, compounds that bind through the MIDAS of the active αM I-domain, was able to significantly increase the amount of disulfide-bonded D132C/K315C mutant. This shows that stabilizing the protein in the active conformation may promote the formation of intramolecular disulfide bonds.
To improve the yield of the Q163C/Q309C mutant, we employed two approaches. First, we examined whether expressing the protein in OrigamiB(DE3), which contains lower levels of reduced glutathione and thioredoxin, increases the fraction of disulfide-bonded protein. The result showed OrigamiB(DE3) increased the amount of disulfide-bonded protein modestly, from ~90% to ~95%. Second, to remove the remaining protein with no disulfide bond, we leveraged the active αM I-domain's affinity for carboxyl-containing compounds to separate the active fraction from the inactive fraction using CM chromatography. This worked well, as the reduced protein was not retained on the column. However, a fraction of the active protein was also found in the flow-through. One explanation is that, in the presence of MgCl2, inactive αM I-domain may be bound to active αM I-domain because of the latter's high affinity for glutamate, resulting in a complex incapable of binding the resin. The formation of this inactive-active αM I-domain heterodimer may explain why an approximately equal amount of active αM I-domain was also found in the flow-through fractions. This postulate is corroborated by the observation that a glutamate from one αM I-domain chelated the divalent cation in the MIDAS of another αM I-domain in the crystal structure of the ΔK315 mutant [8]. We think it may be possible to minimize the amount of disulfide-bonded protein in the flow-through if the protein is diluted to a low concentration, to prevent the formation of homo-oligomers, before it is applied to the column. Another unexpected outcome is that some of the protein in the elution is consistent with ligand-bound forms of the αM I-domain. It is unclear what the ligand is; one possibility is that it is contaminating small CM oligosaccharides inadvertently extracted by the protein.
Conclusion
The integrin Mac-1 is involved in many aspects of leukocyte biology. Understanding the mechanisms of its ligand specificity is essential to developing targeted treatments against Mac-1. However, the lack of an active αM I-domain suitable for solution NMR has so far prevented NMR investigation of the interactions of the active αM I-domain with its ligands. This report systematically examined five known active mutants of the αM I-domain and showed that the Q163C/Q309C mutant adopts the active conformation, can be produced in higher yield, and is more stable than other commonly used active mutants of the αM I-domain. The availability of such a mutant will enable more NMR studies of the αM I-domain's interactions with ligands and reveal more insights into the mechanisms of Mac-1 activity.
"year": 2023,
"sha1": "65f349b169f72e74cf9a109cbb54c40dd066f53f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0280778&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ff4704f9889d653caa7f67016d01d87925e103a",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Optimised control using Proportional-Integral-Derivative controller tuned using internal model control
Time delays are generally unavoidable in the design of mechanical and electrical systems, among others. In both continuous and discrete schemes, the presence of delay creates undesirable effects on the system under consideration and imposes strict constraints on the achievable performance. The presence of delay also complicates the design procedure: it makes continuous systems infinite-dimensional and substantially increases the dimensionality of discrete systems. The Proportional-Integral-Derivative (PID) controller based on internal model control (IMC) is simple and robust in addressing model uncertainties and disturbances, and for a real industrial process it is less susceptible to noise than a conventionally tuned PID controller. The design results in just one tuning parameter, the time constant of the closed-loop system λ, which is the IMC filter factor. It additionally gives a good solution for processes with large time delays. The design of the PID controller based on internal model control, with approximation of the time delay using Padé and Taylor series, is presented in this paper. The first-order filter used in the design provides good set-point tracking along with disturbance rejection.
INTRODUCTION
Controller design is the most indispensable and vital part of control applications. Many types of controller architectures are available in the literature, and the nature of the controller can be either conventional or intelligent. Performance evaluation comes into play after the controller has been designed. The designed controller needs to produce the best possible outcome in spite of nonlinearities in the plant and equipment and actuator saturation [1-6].
Kravaris et al. [7] used the Smith predictor as a dead-time compensation method for linear systems represented by transfer functions. Gao et al. [8] obtained a PID controller design method based on IMC; it is attractive to industrial users because of its single adjustment parameter, which relates directly to the closed-loop performance and robustness. Morari and Zafiriou [9] and Gopi et al. [10,11] used a first-order Padé approximation of the delay element in the process model in order to realise the closed-loop controller based on the IMC principle. This closed-loop controller provides a good set-point response. Horn et al. [12] and Gopi et al. [13] confirmed that the widely published IMC-PID tuning rules provide poor load-disturbance rejection for applications where the required closed-loop dynamics are significantly faster than the open-loop dynamics. The IMC filter design is tailored to obtain low-order controllers that provide efficient disturbance rejection regardless of where the disturbances enter the closed-loop system. Lee et al. [14] derived PID parameters for general process models by expanding the feedback form of an IMC controller in the Laplace variable as a Maclaurin series. The resulting PID parameters give closed-loop results closer to the desired outcomes than those obtained with PID controllers tuned by previous methods. A number of PID and predictive controller strategies were evaluated by Syder et al. [15] to compensate for processes modeled in first-order-lag-plus-time-delay form. The compensated systems' performance and robustness were evaluated analytically (where applicable) and in simulation. The analytical tuning rules presented by Skogestad et al. [16] are as simple as possible and result in excellent closed-loop behaviour. The guiding principle was the IMC-PID tuning rules of Rivera, Morari and Skogestad, which attained wide industrial acceptance. The integral term has been modified to enhance disturbance rejection in integrating systems [17]. Mann et al. [18] described a time-domain PID analysis covering three types of FOPTD models: (a) negligible time delay, (b) small to medium delay, and (c) prolonged delay. The first part of the analysis shows that for plants with negligible time delay, the optimum PID controller is a PI controller. A new PID tuning scheme was developed for small- to medium-delay problems. The proposed tuning rule can accommodate actuator saturation and is therefore capable of selecting an optimal PID controller [19]. Chen et al. [20] presented a PID controller design method based on the direct synthesis approach and specification of the desired closed-loop transfer function for disturbances. Analytical expressions for PID controllers are obtained for several popular kinds of process models, including first-order and second-order plus time delay models and an integrator plus time delay model. Skogestad et al. [21] introduced IMC-based tuning guidelines for PID controllers that are simple and still result in excellent loop behavior. To obtain the required model form, simple analytical model-reduction rules are provided, incorporating the 'half rule' for the effective time delay. Wang et al. [22] surveyed recently developed control methods for unstable processes with time delays. The evaluation covered seven existing controller design methods with respect to their applicability, control performance and robustness.
Shamsuzzoha et al. [23] showed that the common IMC-PID tuning rules give excellent set-point tracking but sluggish disturbance rejection, which becomes severe when a process has a small time-delay-to-time-constant ratio. In that work, an optimal IMC filter structure is suggested for several representative process models to design a PID controller that yields an improved disturbance-rejection response. A closed-loop guideline is also suggested to cover a broad range of process models with different time-delay-to-time-constant ratios (θ/τ).
TUNING RULES FOR FIRST ORDER PROCESS WITH DEAD TIME
For the control system to display the desired behaviour, the controller parameters need to be adjusted; this is known as tuning. PID controllers are used in many industries. Most of these controllers were once analog, but today's controllers use digital signals and computers. The controller parameters can be determined explicitly when a mathematical model of the system is available; when a model is not accessible, they are determined experimentally. Properly chosen parameters produce the desired controller output. Controller tuning enables a process to be optimised and minimizes the error between the process variable and its set point [1, 6, 24-26].
Methods for controller tuning include trial-and-error methods and process reaction curve methods. The Ziegler-Nichols and Cohen-Coon methods are the most common classical controller tuning methods; these techniques are often used when a mathematical model of the system is not accessible. The Ziegler-Nichols technique can be used for both closed- and open-loop schemes, whereas Cohen-Coon is typically used for open-loop schemes. A closed-loop control system utilizes feedback control; in an open-loop scheme the output is not compared to the input [1, 6, 11, 24, 26]. The PID control law is given in (1).
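The control law referenced as (1) does not appear in the text; its standard ideal (non-interacting) form, consistent with the parameters Kp, Ti, and Td used later, is:

\[ u(t) = K_p\left[e(t) + \frac{1}{T_i}\int_0^t e(\tau)\,d\tau + T_d\,\frac{de(t)}{dt}\right], \]

where e(t) is the error between the set point and the process variable.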
A PID controller has three tuning parameters. If these are adjusted in an ad hoc fashion, reaching satisfactory controller performance may take a long time. Thus, in 1942 Ziegler and Nichols proposed two tuning methods [1], which have been widely used either in their original form or in modified forms [5]. The first is the Ziegler-Nichols ultimate sensitivity method; the second is the Ziegler-Nichols step response method [1, 5, 6, 24].
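For reference, the classic Ziegler-Nichols step-response (open-loop) rules for a FOPTD process with gain K, time constant τ, and delay θ are the standard textbook values (supplied here as a reference point; the paper does not tabulate them):

\[ \text{P:}\; K_p = \frac{\tau}{K\theta}; \qquad \text{PI:}\; K_p = \frac{0.9\,\tau}{K\theta},\; T_i = 3.33\,\theta; \qquad \text{PID:}\; K_p = \frac{1.2\,\tau}{K\theta},\; T_i = 2\theta,\; T_d = 0.5\,\theta \]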
The earlier tuning rules were based on experiments that forced a process into sustained oscillation. As a consequence, the system is driven to the brink of instability, and it may take some time to iteratively adjust the controller to obtain constant oscillation. The tuning rules explained below are instead based on the FOPTD model (2), Gp(s) = Kp e^(−θs)/(τp s + 1), whose unit step response has a maximum slope of Kp/τp, reached after the delay θ. These guidelines can be used for first order plus time delay processes with a unit step input.
PROPORTIONAL INTEGRAL DERIVATIVE CONTROLLER TUNING WITH INTERNAL MODEL CONTROL
Internal Model Control (IMC) is a model-based control method. It is also possible to use the IMC method as a PID controller tuning method [7, 11, 25, 27]. The method is generally applicable to systems with constant delays, but it has also been applied to systems with varying time delays. Figure 1 shows the IMC principle: the error between the process output and the model output is subtracted from the reference signal and fed into the IMC block that calculates the control signal. To calculate the IMC controller Q(s), the process model is first factored into two parts as follows: Gm(s) = Gm+(s)Gm−(s), where Gm+(s) collects the time delay and any right-half-plane zeros and Gm−(s) is the invertible part. The IMC controller Q(s) is given by (5): Q(s) = Gm−(s)^(−1) f(s), where f(s) is a low-pass filter transfer function of order n (6): f(s) = 1/(λs + 1)^n. The low-pass filter is required to obtain a causal controller. It is difficult to achieve robust tuning and fast response simultaneously, so there is a trade-off. Robustness plays a vital role in time-delay systems, and the tuning of IMC turns out to be crucial.
Recognizing the correspondence between the IMC controller Q(s) in Figure 1 and the controller of the traditional feedback loop in Figure 2 is useful when implementing the IMC controller in the classical configuration. Equation (7) gives the IMC control law in the classic control loop: C(s) = Q(s)/(1 − Gm(s)Q(s)). [Figure 2: the IMC structure rearranged as a classical closed loop [3].] To design the controller, the process delay must be approximated by a linear transfer function. The delay can be approximated by a Taylor series expansion or by a first-order Padé approximation [10], as shown below.
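In their standard first-order forms, the two delay approximations just mentioned are:

\[ e^{-\theta s} \approx 1 - \theta s \quad \text{(Taylor)}, \qquad e^{-\theta s} \approx \frac{1 - \theta s/2}{1 + \theta s/2} \quad \text{(first-order Padé)}. \]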
Without approximations, the IMC design often yields high-order controllers. Under certain assumptions, a proportional-integral (PI) control structure can be obtained from the IMC design, and tuning parameters can therefore be derived for a regular PI controller. Consider the FOPTD process model, Gp(s) = K e^(−θs)/(τs + 1).
Using the IMC design with a first-order Taylor series expansion of the delay and a first-order (n = 1) low-pass filter, the controller C(s) reduces to a PI controller [5, 10].
The PI controller parameters follow directly. When the first-order Padé delay approximation is used instead, controller C(s) becomes a PID-type controller [10].
The resulting controller form is in fact the interacting (series) form [6, 10, 11]; the commonly cited parameter values are summarized below.
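The omitted controller settings, reconstructed here from the standard IMC derivation for the FOPTD model Gp(s) = K e^(−θs)/(τs + 1) (the exact expressions in the source may differ in detail; cf. [5, 10]):

\[ \text{Taylor (PI):}\quad K_c = \frac{\tau}{K(\lambda + \theta)}, \qquad T_i = \tau \]

\[ \text{Padé (PID):}\quad K_c = \frac{\tau + \theta/2}{K(\lambda + \theta/2)}, \qquad T_i = \tau + \frac{\theta}{2}, \qquad T_d = \frac{\tau\theta}{2\tau + \theta} \]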
THE IMC BASED PID CONTROL DESIGN PROCEDURE
In the design of the IMC-based PID control system, the following steps are used [11, 24]: -First, it is necessary to determine the transfer function of the IMC controller Q(s), including a low-pass filter f(s) chosen so that Q(s) is semi-proper or contains derivative action; the numerator's order is one higher than the denominator's, which is required to find a PID controller equivalent. This is a key difference from the pure IMC procedure. A filter of the form (13), f(s) = (γs + 1)/(λs + 1)^n, is often used for integrating or unstable processes or to achieve better disturbance rejection [11, 12].
-Using the transformation C(s) = Q(s)/(1 − Gm(s)Q(s)), the equivalent standard feedback controller (14) is obtained. -Equation (14) must be rearranged into PID form and evaluated for KP, Ti, and Td. This process sometimes results in an ideal PID controller cascaded with a first-order filter with filter time constant τf.
-Simulations need to be performed for the ideal-model situation and for instances with closed-loop model mismatch. Adjust λIMC to the model error based on a trade-off between performance and robustness.
The initial values for λIMC are between 1/3 and 1/2 of the dominant time constant [24].
First order plus dead time process
The most common representation of chemical process dynamics is first-order plus dead time, so the PID-equivalent form developed here is useful for a large number of process control loops. The following steps are used for first order plus dead time processes in the IMC-based PID design. The process is given by Gp(s) = K e^(−θs)/(τs + 1). -The first-order Padé approximation of the dead time, e^(−θs) ≈ (1 − θs/2)/(1 + θs/2), is used [10]. -To make Q(s) proper, the filter f(s) is added; but to obtain a PID controller, Q(s) must be semi-proper. The derivative option is used to allow the numerator of Q(s) to be one order higher than the denominator.
When the process is first order plus dead time, the IMC-based design procedure results in a PID controller. In this development, a Padé approximation of the dead time was used, which means that the filter constant λ cannot be reduced arbitrarily. The IMC-based PID strategy will therefore have performance limitations that do not occur in the pure IMC strategy. Because of the model uncertainty introduced by the Padé approximation, Rivera et al. (1986) [28] suggest that λ > 0.8θ be used; Morari and Zafiriou (1989) suggest λ > 0.25θ for the PID-plus-lag system [9].
Integrator plus dead time process
For processes in which the time constant is dominant, the step response behavior can be approximated by an integrator plus dead time, characterized by the transfer function Gp(s) = K e^(−θs)/s [17, 29-34].
RESULTS AND ANALYSIS
The IMC-based PID controller design with time delay is implemented using MATLAB. The actual process transfer function is never known exactly, so two representations of the transfer function are used: one is regarded as the process or plant, which is never known exactly, and the other is the process model, which is known exactly. In the IMC structure, the process model is maintained in parallel with the actual process. In the ideal case, the IMC-based PID controller assumes the model is perfect and there is no disturbance, so the feedback signal is zero.
Case 1: FOPDT
The transfer function of an IMC-based PID controller for a first-order process with time delay plus a first-order disturbance is given in (37); the transfer function is taken from [25]. A first-order Padé approximation is used for the time delay, and a first-order disturbance Gd(s) (38) is considered together with the process model. A filter is added to make the controller semi-proper. The value λ = 20 is chosen, which lies in the range λ > 0.2τ; in practice, the initial value of λ should lie in the range of 1/3 to 1/5 of the process time constant. Substituting the value of λ into the IMC controller Q(s) of (21), along with (36), (37), (17), (18) and (19), and simplifying according to the procedure defined earlier to obtain the closed-loop feedback controller in PID form, the PID controller parameters are obtained, with Ti = 100.5 and Td = 0.049. The Simulink block diagrams of the IMC-based PID controller for a first-order process with time delay plus first-order disturbance are shown in Figures 3 and 4. The unit step response of the IMC-based PID controller for this process is shown in Figure 5. Figures 6 and 7 illustrate the disturbance responses of IMC-PID and IMC, and Table 1 summarizes the integral performance criteria for the FOPTD process plus first-order disturbance. From Figures 5, 6 and 7 and Table 1 it can be inferred that, in contrast to the IMC controller, the IMC-PID provides improved set-point tracking and disturbance rejection: the rise time is improved, the settling time is reduced, and recovery from disturbance is rapid. The disturbance response of the IMC-PID controller for the integrating process is shown in Figure 8; the IMC-PID provides less overshoot and fast recovery from disturbance along with good set-point tracking. Figure 9 shows the controller response to the disturbance; the control action is smooth, which enhances the life of process equipment. Table 2 summarizes the integral performance criteria, demonstrating the performance of the designed IMC-PI/PID controllers.
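A minimal Python sketch of the same design procedure (the paper used MATLAB; this analogous sketch uses the python-control package, and the plant numbers K, τ, θ are illustrative placeholders, not the garbled values from the case study):

```python
import numpy as np
import control

# Illustrative FOPTD plant: Gp(s) = K e^(-theta s) / (tau s + 1)
K, tau, theta = 1.0, 100.0, 10.0
lam = 20.0  # IMC filter time constant, chosen as in the case study

# IMC-PID settings for FOPTD with a first-order Pade delay approximation
Kc = (tau + theta / 2) / (K * (lam + theta / 2))
Ti = tau + theta / 2
Td = tau * theta / (2 * tau + theta)

# Ideal PID as a transfer function, with a small derivative filter for properness
N = 10.0
pid = control.tf([Kc * Td, Kc, Kc / Ti], [1, 0]) * control.tf([1], [Td / N, 1])

# Plant with the delay replaced by its first-order Pade approximation
num_d, den_d = control.pade(theta, 1)
plant = control.tf([K], [tau, 1]) * control.tf(num_d, den_d)

# Closed-loop unit-step response
cl = control.feedback(pid * plant, 1)
t, y = control.step_response(cl, T=np.linspace(0, 300, 1000))
print("final value (should settle near 1):", y[-1])
```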
CONCLUSION
The IMC provides a transparent framework for the design and tuning of control systems. The IMC-based PID controller design is simple, robust in handling model uncertainties and disturbances, and less noise-sensitive than a conventional PID controller for an actual industrial process. The design results in only one tuning parameter, the closed-loop time constant λ, which is the IMC filter factor. The IMC-based PID tuning parameters are then a function of the closed-loop time constant, and the selection of this time constant is directly related to the robustness (sensitivity to model error) of the closed-loop system. The IMC-based PID design procedure can be implemented using existing PID control equipment in industrial processes. It also provides a good solution for processes with significant time delays, which is often the case in real-time applications.
"year": 2020,
"sha1": "870f2b14e3bedfb5de7866b606c5a2ecf5ebdc90",
"oa_license": "CCBYSA",
"oa_url": "http://ijece.iaescore.com/index.php/IJECE/article/download/20565/13788",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cbe85d119a392b3cf36354c3360a014c39731651",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Sampling and Samples — Five Critical Issues
Otterbein's review of developments in cross-cultural sampling since 1976 (CAM 2) requires correction in five respects. The errors in his presentation affect his discussion of random sampling versus the standard samples, including both the HRAF Quality Control Sample (QCS) and the Standard Cross-Cultural Sample (SCCS). 1. INCOMMENSURATE SAMPLES. Otterbein does not favor the use of standard
samples (SCCS or QCS) but rather "replication of results" by drawing multiple samples from a large sampling universe. In his view, all major cross-cultural studies should draw a new sample. He seems unaware of the advantages of more stringent internal replications of results within large samples such as the SCCS, as in replicating correlations in different regions of the world.
For example, Burton and I (1984; White and Burton 1988) have used regional replication to great advantage in our testing of major hypotheses. In contrast, Otterbein's "incommensurate samples" approach was used in the period from 1949 to 1969, and due to the difficulty of developing new codes (or coding all variables used by previous authors), few if any of these studies actually replicated earlier findings. A more serious drawback of these studies was that their authors were rarely able to test alternative theories when they used a sample different from that of previous authors who had presented competing hypotheses.
There are valid reasons for drawing a new sample in cross-cultural research, as in the case of Otterbein's (1985) study of warfare. For warfare, as with certain other topics, the proportion of case studies which provide data on the topic is low. Consequently, a high proportion of missing data in a standard sample (SCCS or QCS) may pose a problem. Otterbein's sample design called for sampling one case for each sampling stratum, with replacement sampling for every case with inadequate data on warfare. (One could also do replacement sampling within the framework of a standard sample, of course.) Otterbein seems to assume that all major cross-cultural studies, like his study of warfare, deal with topics that are not well covered in most ethnographies. Studies of topics that are poorly treated in ethnographies do tend to call these topics to the attention of ethnographers. This may have the effect of raising ethnographic standards of reporting, but these are not the only types of "major" cross-cultural studies. Otterbein's sampling design should not be universally emulated.
Even if new samples were preferable for each new cross-cultural study (which they clearly are not), Otterbein offers a flawed design for probability sampling. The purpose of probability sampling is not simply to guard against bias in sample construction. Problems of representation (bias) in sample construction are fairly easy to correct, once biases are known, by comparing the sample with known population distributions. Even random samples sometimes need such corrections.
The purpose of properly executed probability samples is to provide, from the evidence of the sample itself, estimates of the confidence limits or standard errors of estimates of proportions, means, correlations, regression coefficients, or other statistical measures. Standard errors are crucially dependent on computation of the variances of observations within each sampling stratum. The problem with Otterbein's design, in common with that of the QCS, is that no variances can be computed within any of the sampling strata, since only one case is chosen ("randomly") within each stratum. This nullifies the advantages of probability sampling for purposes of statistical estimation, as the toy example below illustrates.
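A toy numerical illustration of this point (the data are hypothetical; any stratum with a single sampled case yields an undefined within-stratum variance):

```python
import numpy as np

# One observation per stratum, as in Otterbein's design and the QCS
strata = {"A": [0.42], "B": [0.57], "C": [0.31]}

for name, cases in strata.items():
    # The unbiased sample variance needs n >= 2; with n = 1 it is undefined
    v = np.var(cases, ddof=1)
    print(name, v)  # prints nan (with a warning) for every stratum
```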
2. LIMITATIONS OF (STRATIFIED) RANDOM SAMPLES FOR CULTURAL COMPARISONS. The proper way to do probability samples is to have either no strata (simple random samples, or SRS) or few strata relative to the number of cases (SRS within each major stratum).
However, in the case of comparative research, simple random sampling of many cases per stratum results in the selection of disproportionately many "similar" cases in certain regions where there are many societies of the same general type (e.g., Bantu Africa, Malayo-Polynesian Oceania, etc.).
Should the societies chosen in such overrepresented regions be counted as independent cases or reduced to a smaller number of effectively independent cases? This is, of course, Galton's problem. The way that it has been handled in the literature on cross-cultural sampling is to choose only one representative for each distinctive culture type, often using Murdock's classification of societies into 60 world areas (QCS) or his more elaborate classification into 186-200 cultural provinces (SCCS). Such strategies are highly efficient in two statistical senses: 1) by maximizing between-cluster heterogeneity in the sample, they are known to provide more accurate estimates of standard errors than simple random samples used without statistical estimation techniques that correct for Galton's problem; 2) they thereby provide a greater "effective" sample size for the coding effort. That is, coding all cases chosen in a simple random sample represents considerable wasted effort when the effective sample size is substantially reduced by a poor choice of sampling design: in this case, the simple or stratified random sample.
Representative samples that have higher effective sample size (QCS or SCCS) do allow the use of statistical techniques, such as randomization tests and autocorrelation, that provide valid estimates of standard errors, even for nonprobability samples. The validity of sample representation (but not its validity as a true probability sample) is enhanced when it can be assumed that the choice of one case per stratum is unbiased. The SCCS is commonly assumed to achieve this by choice of the best-described case for each stratum, while the QCS restricts the sampling frame to best-described cases and makes choices among alternates randomly. Neither approach is self-evidently superior to the other in terms of representation, nor is either one a true probability sample in terms of advantages.
"year": 1990,
"sha1": "25976f176e699c95c7b844b7f91cc9695875744d",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt6qd5c1tc/qt6qd5c1tc.pdf?t=oqrru3",
"oa_status": "GREEN",
"pdf_src": "Sage",
"pdf_hash": "32a0eef6b3799f52bfb8cb2468f28d78ddf2a079",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.