| id (string, length 3–9) | source (string, 1 class) | version (string, 1 class) | text (string, length 1.54k–298k) | added (string date, 1993-11-25 05:05:38 – 2024-09-20 15:30:25) | created (string date, 1-01-01 00:00:00 – 2024-07-31 00:00:00) | metadata (dict) |
|---|---|---|---|---|---|---|
16985670
|
pes2o/s2orc
|
v3-fos-license
|
The better the story, the bigger the serving: narrative transportation increases snacking during screen time in a randomized trial
Background Watching television and playing video games increase energy intake, likely due to distraction from satiety cues. A study comparing one hour of watching TV, playing typical video games, or playing motion-controlled video games found a difference across groups in energy intake, but the reasons for this difference are not clear. As a secondary analysis, we investigated several types of distraction to determine potential psychosocial mechanisms which may account for greater energy intake observed during sedentary screen time as compared to motion-controlled video gaming. Methods Feelings of enjoyment, engagement (mental immersion), spatial presence (the feeling of being in the game), and transportation (immersion in a narrative) were investigated in 120 young adults aged 18 – 35 (60 female). Results Only narrative transportation was associated with total caloric intake (ρ = .205, P = .025). Transportation was also higher in the TV group than in the gaming groups (P = .002) and higher in males than in females (P = .003). Transportation mediated the relationship between motion-controlled gaming (as compared to TV watching) and square root transformed energy intake (indirect effect = −1.34, 95% confidence interval −3.57, −0.13). No other distraction-related variables were associated with intake. Conclusions These results suggest that different forms of distraction may differentially affect eating behavior during screen time, and that narrative appears to be a particularly strong distractor. Future studies should further investigate the effects of narrative on eating behavior.
It is possible to become so engaged in a technologically mediated world as to feel as if one is physically present in it. This sensation is known as spatial or physical presence [10]. In addition to feelings of engagement in the world, individuals also suspend their disbelief in its physicality. For example, "jump scares" in horror or other media cause startle reflexes, even though viewers intellectually realize that the objects are not real [11]. Thus, spatial presence consists not only of attentional distraction but also a psychological feeling of being in an alternative space.
Narrative transportation, or absorption in a storyline, integrates attentional allocation with imagery and feelings related to a story [12]. When absorbed in a narrative, as opposed to non-narrative media, a loss of self-awareness is combined with mental construction of the narrative reality. Though engagement and presence in a virtual environment may be relatively passive, individuals must actively participate in imagining a storyline. Thus, narrative transportation may produce more profound distraction than engagement or even spatial presence because of the mental effort required to construct the narrative.
Results of a recent study suggested that sedentary screen time, including TV watching and typical video gaming, can produce greater energy intake than playing video games that use motion-based controls, and that this effect is unlikely to be due to lower levels of energy expenditure [13]. Motion-controlled games played with a camera-based or accelerometer controller (such as Microsoft's Kinect or Nintendo's Wiimote) require body movement to play and may reduce intake by keeping the hands busy, leaving fewer opportunities to eat than in conditions with idle hands (e.g., TV watching). These games may also be less distracting than sedentary games and TV watching. There is mixed evidence as to feelings of presence and engagement during motion-controlled video games as compared to typical video games [14][15][16][17][18], and little is known about narrative transportation. Previous studies have compared sedentary gaming to no stimulus [19,20] and to gaming while walking on a treadmill [21], but we are unaware of TV or gaming studies that have measured or compared the effects of different types of distraction.
The purpose of this secondary data analysis was to investigate psychosocial variables measured during a study comparing TV watching, typical video gaming, and motion-controlled video gaming (described above) [13]. The effects on energy intake of several different measures of distraction were studied: transportation, spatial presence, and engagement. We hypothesized that greater distraction would be associated with greater energy intake. We also hypothesized that TV watching and typical sedentary gaming would be more distracting than motion-controlled gaming. The effects of enjoyment and tendency towards immersion were also explored.
Materials and methods
Details on recruitment, participants, and protocol of the larger study have been provided previously [13]. Briefly, the PRESENCE 2 project recruited 120 young adults aged 18–35 (60 female) for a one-hour experimental protocol. Recruitment occurred through a local online mailing list and TV advertisements. Participants were stratified by gender and then randomized into one of three conditions: TV watching, typical video gaming, or motion-controlled video gaming. The TV watching condition consisted of watching commercial-free TV shows using an instant streaming service (Netflix). Participants could choose the shows they wished to watch and change shows at any time. Comedy and drama shows were the most popular. Only two shows included any kind of food-related content ("Ace of Cakes" and "No Reservations"). The typical video gaming condition consisted of playing one or more of 10 possible games on a video game console that used a standard controller (Sony PlayStation 3 using a DualShock controller). The games included a range of genres and ratings and were all rated over 75 out of 100 on a critical ratings aggregator. The PlayStation 3 was chosen because it offered a wide variety of highly ranked games that were not first-person shooter games. In previous studies, many women have displayed a strong dislike of violent first-person shooters and similar genres [22,23]. The most popular games were Street Fighter IV, LittleBigPlanet, and Call of Duty: Modern Warfare 2. The motion-controlled video game condition consisted of playing one or more of 10 possible games on a Nintendo Wii or Microsoft Xbox 360 console (the two Xbox 360 games each included their own motion-controller peripheral: Dance Dance Revolution: Universe 2 with a dance mat and Rock Band 2 with a drum set controller). To be included, the games had to include at least punching, throwing, or other similar motions. The most popular games were Wii Sports Resort, Rock Band 2, and Punch-Out!! (played using the motion-controlled configuration). Media were shown on a 58" HD TV in a small, dim room. A comfortable chair was available, with sufficient room (~6-8 feet) for standing to play games.
Participants watched TV or played video games for one hour while four types of snacks (chips, baked chips, trail mix, and chocolate candy) and four types of beverage (Coca-Cola, Diet Coke, Mountain Dew, and bottled water) were available. Caloric intake was estimated by weighing food and beverage containers before and after each session using a Tanita food scale (Arlington Heights, IL) and then converting weight to kilocalories based on each snack's published nutrition data. Baseline and trait psychosocial measures and demographic variables were measured prior to experimentation, and psychosocial outcomes were measured after the one-hour experimentation period.
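To make the intake estimate concrete, a minimal sketch of the weigh-before/weigh-after calculation is shown below; the energy densities and container weights are illustrative placeholders, not the study's published nutrition data.

```python
# Estimate kilocalorie intake from pre/post container weights.
# Energy densities (kcal per gram) are illustrative placeholders,
# not the nutrition data actually used in the study.
KCAL_PER_GRAM = {
    "chips": 5.4,
    "baked_chips": 4.4,
    "trail_mix": 4.6,
    "chocolate_candy": 4.8,
    "regular_soda": 0.4,
    "diet_soda": 0.0,
    "water": 0.0,
}

def session_kcal(weights_before, weights_after):
    """Sum kcal consumed across all snack/beverage containers (weights in grams)."""
    total = 0.0
    for item, before in weights_before.items():
        consumed_grams = before - weights_after[item]
        total += consumed_grams * KCAL_PER_GRAM[item]
    return total

# Example: one participant's containers weighed before and after the hour.
before = {"chips": 150.0, "trail_mix": 200.0, "regular_soda": 355.0}
after = {"chips": 120.0, "trail_mix": 160.0, "regular_soda": 100.0}
print(round(session_kcal(before, after)))  # total kcal for the session
```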
Measures
Immersive tendencies were measured using Witmer and Singer's 16-item Immersive Tendencies Questionnaire [24]. This measure includes items that specifically mention immersion in TV programs, movies, sports, books, games, and storylines, making it an appropriate general measure of tendencies towards all types of distraction. Items include "do you ever become so involved in a television program or book that people have problems getting your attention?" and "have you ever remained apprehensive or fearful long after watching a scary movie?" The focus and involvement subscales were used in this study (a game-specific subscale was excluded). Items from each subscale were summed from 1 -7 Likert scale responses.
Enjoyment was measured using the interest/enjoyment subscale of the Intrinsic Motivation Inventory, a 7-item measure that has been used in previous gaming studies and that has shown reliability and validity [22,25,26]. Items were altered slightly to be specific to the game or program being discussed, such as "I enjoyed playing the game very much" and "I would describe this program as very interesting." Engagement and spatial presence were measured with their respective subscales from the Temple Presence Inventory [27,28]. This measure was created from items used across various presence scales and previously has been used in similar literature [16]. The six-item engagement subscale included questions such as "to what extent did you feel mentally immersed in the experience" and "how completely were your senses engaged?" The four-item spatial presence subscale included questions such as "to what extent did you experience a sense of being there inside the environment you saw/heard?" All items on both subscales used 7-point Likert responses that were averaged for the final score (range: 1–7).
Narrative transportation was measured using Green and Brock's 12-item transportation scale, adapted to reference video games and television shows [29]. Items included "I wanted to learn how the program ended" and "I found myself thinking of ways the game could have turned out differently." Eleven general items were used with one additional item related to characters ("I had a vivid image of my character" for gaming and "I had a vivid image of the main character" for TV). Items for this measure were summed for a final transportation score (range: 12 -84).
Data analysis
All analyses were performed using PASW Statistics version 18 (SPSS, Inc., Chicago, IL). Because energy intake was not normally distributed, associations were analyzed using Spearman's rho. Analyses of covariance were used to determine differences by group assignment and gender for normal variables.
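As a quick illustration of the rank-based association test described above, here is a hedged Python sketch; the original analyses were run in PASW/SPSS, so scipy and the simulated data below are stand-ins, not the study's procedure or data.

```python
import numpy as np
from scipy.stats import spearmanr

# Simulated, right-skewed "energy intake" against a psychosocial score
# (illustrative only; the study analyzed its own measurements in PASW).
rng = np.random.default_rng(0)
score = rng.normal(50, 10, 120)                                # e.g., transportation score
intake = np.exp(0.01 * score + rng.normal(0, 0.5, 120)) * 100  # skewed kcal values

# Spearman's rho works on ranks, so the skew in intake does not violate
# the test's assumptions the way it would for Pearson's r.
rho, p = spearmanr(score, intake)
print(f"rho = {rho:.3f}, P = {p:.3f}")
```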
All psychosocial outcome measures were first calculated for each individual game played or program watched. To represent an overall score for the entire hour-long period, scores for all games/programs were averaged for an overall score for each variable. Only those games or programs used for more than five minutes were included in these analyses.
Mediation effects (also known as indirect effects) were tested where associations were found between a psychosocial outcome and square root-transformed energy intake. Preacher and Hayes' PASW macro "indirect" was used for these analyses. This macro uses bootstrapping to estimate the mediated effect and bias-corrected accelerated 95% confidence interval of the effect [30]. Bootstrapping is a simulation technique that uses the observed sample as representative of a larger population [31]. Using the observed sample, the assumed population is resampled with replacement k number of times (here, k = 5000). For each simulated sample, the product of paths a (from the independent variable to the mediator) and b (from the mediator to the dependent variable) are estimated. These products represent a distribution that approximates the sampling distribution of the indirect effect in the population. The 95% confidence interval of the indirect effect provides a method for hypothesis testing such that intervals not including zero represent a significant indirect (mediated) effect. Rather than test the significance of the path coefficients themselves, this technique directly tests the significance of the indirect effect (the a*b product). This technique is considered superior to the causal steps method, which does not directly test a mediated effect, and the Sobel test, which assumes a normal distribution of the indirect effect and has lower statistical power [32].
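The resampling logic can be illustrated with a short script; this is a simplified percentile-bootstrap sketch of the a*b indirect effect, not the Preacher and Hayes macro itself (which additionally applies the bias-corrected accelerated adjustment), and the data generated below are simulated placeholders.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, k=5000, seed=0):
    """Percentile bootstrap of the indirect (a*b) effect of x on y via mediator m."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(k)
    for i in range(k):
        idx = rng.integers(0, n, n)                  # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                 # path a: x -> m
        X = np.column_stack([np.ones(n), xs, ms])    # path b: m -> y, controlling for x
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]
        estimates[i] = a * b
    lo, hi = np.percentile(estimates, [2.5, 97.5])   # plain percentile 95% CI
    return estimates.mean(), (lo, hi)

# Simulated example (not the study's data): binary group, mediator, outcome.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 120).astype(float)
m = -6.0 * x + rng.normal(0, 8, 120)
y = 0.2 * m + rng.normal(0, 3, 120)
print(bootstrap_indirect_effect(x, m, y, k=2000))
```

An interval that excludes zero is read the same way as in the text: the indirect effect is judged statistically significant.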
Most participants used only one game/program (53 out of 120, 44%) or two (49 out of 120, 41%). The average times spent on the first and second games/programs were 41.49 and 23.84 minutes, respectively.
Immersive tendencies were not associated with energy intake (involvement, ρ = .059, P = .522; focus, ρ = .042, P = .651). Women reported higher tendency toward involvement than men (P = .003). No differences by gender were found for tendency toward focus. No group differences were found for any immersive tendency variables (P > .05).
Mediation test
The mediated effect of group assignment on energy intake via narrative transportation was investigated. Dummy variables were used to compare the motion-controlled gaming condition to the TV watching condition, with the sedentary gaming condition as a covariate. The a-path from group assignment to transportation was negative (b = −6.24, SE = 1.92), and the b-path from transportation to intake was positive (b = 0.22, SE = 0.11). The estimated indirect effect was −1.34, with a bias-corrected and accelerated 95% confidence interval ranging from −3.57 to −0.13, indicating statistical significance. We repeated the analysis with the sedentary gaming condition as the independent variable (and the TV condition as a covariate) to determine whether a mediated effect also existed when comparing motion-controlled gaming to typical sedentary gaming. This analysis did not produce a significant mediated effect (−0.21, 95% confidence interval −1.46, 0.59).
Discussion
Only one of the three types of distraction measured was associated with energy intake during screen time. Narrative transportation showed a small positive association with energy intake. Engagement and spatial presence were not associated with intake, nor were feelings of enjoyment. Transportation was found to be higher in men and in those watching TV as compared to women and those playing either type of video game. Narrative transportation mediated the effect of group assignment on energy intake, but only when comparing TV watching to motion-controlled gaming. Thus, greater narrative transportation in the TV watching condition, as compared to the motion-controlled gaming condition, contributed to greater energy intake.
Several previous studies have compared the impacts of different forms of media on energy intake. No difference was found between TV watching and listening to a detective story [5], but TV watching has produced greater energy intake when compared to listening to a symphony [33]. When a continuous TV program was compared to loops of 1.5-minute portions of the same show, the continuous program produced greater energy intake [7]. Narrative-based media may be particularly distracting and thus lead to greater energy intake than non-narrative media. Health promotion efforts currently use narrative to model healthy behaviors [34] and to persuade individuals to change their behaviors [35,36]. Narrative may be useful for increasing "self" presence or character identification, which can promote feelings of social support and belonging [37]. However, the potential of highly transporting narratives to increase positive energy balance should be considered prior to their use in future studies.
Limitations
The data analyzed here represented averages of multiple programs or games used over a one-hour period. It is unknown why participants chose to stop one program or game and begin another. The nature of the study design, with its primary intent of comparing three types of screen time while allowing participants to view/play their preferred titles within those types, precludes precise study of individual titles. Though only two of the TV shows watched were related to food, other cues may have influenced participants' desire to eat. Gender differences may have resulted from differences in the content chosen by each gender; further study of the effects of content on distraction is necessary to better understand this finding. We have provided frequencies for all television shows watched and video games played by gender in Additional file 1. These data suggest that gender differences in video game choice may have existed and thus contributed to differences in distraction. These results are preliminary and intended to provide a basis for more rigorous future investigations of different types of distraction. The types of food and beverages available may also have influenced intake. It appears that much of the kilocalorie intake across all groups came from trail mix (see Additional file 2). The reasons for this differential intake of one food are unclear but may be related to a perception that trail mix was healthier than the other options, which was expressed by several participants (data not reported).
The transportation scale was not developed for use with television or video game narratives, and the immersive tendencies questionnaire may not adequately measure the tendency to be transported specifically by a narrative, as it is general to all types of presence. Self-reported distraction, reported after the experimental period, is not an optimal method of measurement. Future studies could improve on the measures presented here by including measures of attentional focus and distraction types during the experimental period. Further validation of self-report measures would also strengthen future experiments.
Conclusions
Narrative transportation was lower for motion-controlled video games, leading to lower energy intake as compared to TV watching. Story-related immersion appears to be uniquely effective as a distraction, producing an effect when engagement and spatial presence did not. Future research is necessary to further investigate the potential of highly involving narratives to distract from bodily stimuli, which could have positive implications across a number of public health fields. Future research should also determine methods of ameliorating the potential negative effects of narrative transportation due to increased energy intake.
|
2017-04-27T17:27:32.539Z
|
2013-05-16T00:00:00.000
|
{
"year": 2013,
"sha1": "55e114a6f062a712804af1bad01eaee632522dba",
"oa_license": "CCBY",
"oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/1479-5868-10-60",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "55e114a6f062a712804af1bad01eaee632522dba",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
}
|
232342873
|
pes2o/s2orc
|
v3-fos-license
|
Improved detection of microbiological pathogens: role of partner and non-governmental organizations
Background Proper detection of disease-causing organisms is very critical in controlling the course of outbreaks and avoiding large-scale epidemics. Nonetheless, availability of resources to address these gaps has been difficult due to limited funding. This report sought to highlight the importance of in-country partners and non-governmental organizations in improving detection of microbiological organisms in Ghanaian Public Health Laboratories (PHLs). Methods/context This study was conducted between June 2018 and August 2019. The U.S. CDC engaged the Centre for Health Systems Strengthening (CfHSS) through the Association of Public Health Laboratories to design and implement strategies for strengthening three PHLs in Ghana. An assessment of the three PHLs was done using the WHO/CDS/CSR/ISR/2001.2 assessment tool. Based on findings from the assessments, partner organizations (CfHSS/APHL/CDC) serviced and procured microbiological equipment, laboratory reagents and logistics. CfHSS provided in-house mentoring and consultants to assist with capacity building in detection of epidemic-prone infectious pathogens by performing microbiological cultures and antimicrobial susceptibility tests. Results A total of 3902 samples were tested: blood (1107), urine (1742), stool (249) and cerebrospinal fluid (CSF) (804). In all, 593 pathogenic bacteria were isolated: from blood cultures (70; 11.8%), urine cultures (356; 60%), stool cultures (19; 3.2%) and CSF samples (148; 25%). The most predominant pathogens isolated from blood, urine and stool were Staphylococcus aureus (22/70; 31%), Escherichia coli (153/356; 43%) and Vibrio parahaemolyticus (5/19; 26.3%), respectively. In CSF samples, Streptococcus pneumoniae was the most frequent pathogen detected (80/148; 54.1%). New bacterial species such as Pasteurella pneumotropica, Klebsiella oxytoca, Vibrio parahaemolyticus, and Hafnia alvei were also identified with the aid of Analytical Profile Index (API) kits that were introduced as part of this implementation. Streptococcus pneumoniae and Neisseria meningitidis detections in CSF were highest during the hot dry season. Antimicrobial susceptibility testing revealed high rates of S. aureus, K. pneumoniae and E. coli resistance to gentamicin (35–55%). In urine, E. coli was highly resistant to ciprofloxacin (39.2%) and ampicillin (34%). Conclusion Detection of epidemic-prone pathogens can be greatly improved if laboratory capacity is strengthened. In-country partner organizations are encouraged to support this move to ensure accurate diagnosis of diseases and correct antimicrobial testing.
Background
Globally, infectious diseases remain a force to be reckoned with. The ability of disease-causing organisms to spread beyond national and international borders means an infectious disease threat anywhere is a threat everywhere [1]. Thus, every country has a role to play in making the world safer from epidemics by strengthening its capacity to prevent, detect in a timely manner and respond effectively to current and emerging health threats. Key health threats that could pose danger to human lives include the increasing trend of antimicrobial resistance, zoonotic diseases, biosafety and biosecurity, weak laboratory and surveillance systems and poor workforce development.
Addressing the threats of zoonosis, antimicrobial resistance, biosafety and biosecurity begins with detection of the aetiological agents involved in disease outbreaks and infections. Timely and accurate detection and reporting of infectious disease outbreaks and events are critical to controlling the course of outbreaks and avoiding large-scale epidemics. Detection of microbial pathogens also enables the performance, reporting and surveillance of antimicrobial-resistant microbial organisms.
Antimicrobial resistance is one of the biggest threats to global health [2]. According to WHO, there are 12 families of resistant bacteria which pose the greatest threat to human health, and these are termed priority pathogens [3]. These bacteria are further categorized into Priorities 1 (critical), 2 (high) and 3 (medium). Examples of priority 1 pathogens include Carbapenem-resistant Acinetobacter baumannii, Carbapenem-resistant Pseudomonas aeruginosa and Carbapenem-resistant, ESBL-producing Enterobacteriaceae. Priority 2 pathogens include Vancomycin-resistant Enterococcus faecium, Methicillin- and Vancomycin-resistant Staphylococcus aureus, Clarithromycin-resistant Helicobacter pylori, Fluoroquinolone-resistant Salmonella spp., Fluoroquinolone-resistant Campylobacter spp., and Cephalosporin-resistant, fluoroquinolone-resistant Neisseria gonorrhoeae. Other bacteria such as Penicillin-non-susceptible Streptococcus pneumoniae, Ampicillin-resistant Haemophilus influenzae, and Fluoroquinolone-resistant Shigella spp. are referred to as Priority 3 pathogens [3]. These resistant pathogens are a threat to global health security because of their potential to cause significant economic and public health problems [4,5].
One of the effective ways to maximize global health security and preparedness for infectious disease threat is to invest in Global Health and address global health security challenges [6]. Since June 2007, various countries have been putting in place efforts to strengthen their International Health Regulations (IHR) core capacities through the Global Health Security expanded activities. As a way of achieving these objectives, WHO conducted a Joint External Evaluation (JEE) of the IHR core capacities in 2017 in Ghana [7]. Among the key findings identified was the need to strengthen laboratory capacities to improve detection of epidemic prone infectious diseases, improve logistics for surveillance, standardize methods for antimicrobial resistance susceptibility testing and improve collaboration between disease surveillance officers and laboratory scientists.
However, availability of human and material resources to address these gaps has been difficult because of funding limitations. Bacterial isolation and identification from clinical specimens such as blood, stool and urine involve the use of sophisticated devices and specialized skills which are expensive [8]. Interventions from partner organizations and non-governmental organizations (NGOs) and/or corporate institutions are needed in the quest to address global health security challenges especially in resource poor settings such as Ghana.
As a way of addressing these challenges, Ghana received funding as one of the high-risk non-Ebola-affected countries to strengthen its public health infrastructure and improve detection of epidemic-prone infectious pathogens such as Salmonella, Shigella, Vibrio and diarrheagenic E. coli. The U.S. Centers for Disease Control and Prevention engaged the Centre for Health Systems Strengthening (CfHSS) through the Association of Public Health Laboratories (APHL) to design and implement strategies for strengthening Public Health Laboratories (PHLs) in Ghana. This report presents a series of activities leading to improved detection of bacterial pathogens in the laboratory.
Study setting and design
This was a multicentre, single-country retrospective study conducted between June 2018 and August 2019.
The PHLs supported were the Tamale Public Health Laboratory (TPHL), Kumasi Public Health Laboratory (KPHL) and Sekondi Public Health Laboratory (SPHL). These three laboratories are strategically located to serve the three zonal sectors of Ghana (Fig. 1). KPHL serves the Ashanti region and other southern parts of Ghana. TPHL serves the five northern regions (Savannah, North-East, Northern, Upper East and Upper West) in Ghana and SPHL serves the Western and Central regions of Ghana. All three laboratories are situated on the premises of tertiary health facilities: KPHL is located on the premises of the Ashanti regional hospital in Kumasi; TPHL is located very close to the Tamale Teaching Hospital; and SPHL is found in the same environment as the Effia-Nkwanta regional hospital. The laboratories work closely with these hospitals, and invasive procedures such as lumbar punctures are performed by trained clinicians and physician assistants at the various hospitals, with samples transported on ice packs to the PHL for laboratory testing. These laboratories are the first point of call during outbreak situations and they perform a range of diagnostic testing, but with limited capacity and infrastructure.
Fig. 1 Zonal Public Health Laboratories (black dots) selected for capacity building
Study population
Study population comprised individuals of all ages and gender who attended the various hospitals and exhibited clinical presentations of sepsis, gastroenteritis, urinary tract infection and meningitis. Blood and stool cultures were requested by clinicians from individuals with presumptive symptoms of sepsis and gastroenteritis [9]. Urine was collected from individuals who presented with symptoms such as frequent painful urination, hematuria and cloudy/foul-smelling urine for urine culture. Patients with classical case definition for meningitis were referred to trained clinicians for lumbar puncture. The clinical criteria required for lumbar puncture to be performed comprised sudden onset of fever (axillary: > 38°C), and at least two of the following clinical symptoms: neck pain, neck stiffness, photophobia, reduced level of consciousness, bulging fontanelle, and fits/partial seizures in children between 6 months and 5 years [9,10].
Quality assessment prior to start of study
Prior to initiating the programme, we conducted laboratory assessments of the three PHLs using the WHO/CDS/CSR/ISR/2001.2 assessment tool. The assessment evaluated areas including availability of microbiological equipment, scope of laboratory investigations, specimen transportation and handling, adequacy of standard operating procedures (SOPs), internal and external quality assurance and general work flow.
Based on findings from the assessments, in-house mentors and consultants were recruited by CfHSS to assist with capacity building in microbiological investigations and quality management system (QMS). Consultants and mentors assisted in review of SOPs of the laboratories, training laboratory staff on new microbiological techniques such as use of Analytical Profile Index (API), standardizing methods of media preparation and antimicrobial susceptibility testing, establishing sheep farms for blood and chocolate media preparation and training of laboratory staff and disease surveillance officers in the handling, collection and transportation of infectious samples. APHL also established an external quality assurance programme and also supported the procurement of equipment and reagents for smooth operation of the laboratories.
Ethical approval
Permission was sought from the facilities before collection of this data. All protocols related to data collection and analysis were reviewed and approved by the Ethics Review Committee (ERC) of the Ghana Health Service (GHS) (Approval number: GHS-ERC008/03/20).
Interventions
As part of the design of this capacity building programme, we focused on improving capacity in the detection of epidemic-prone infectious pathogens such as Salmonella, Shigella, Vibrio and diarrheagenic E. coli. Samples such as blood, urine, stool and cerebrospinal fluid (CSF) were given priority. Laboratory staff and disease surveillance officers were adequately trained in collection and transportation of priority specimens from the field to the laboratory under cold chain. As part of the training, SOPs related to specimen transportation, processing and pathogen detection were developed/revised for each of the laboratory sites. The QMS was also improved and staff were assigned to specific tasks in line with the objectives of the programme.
Laboratory methods for detection of priority pathogens
Blood samples collected into culture bottles (BD, Franklin Lakes, NJ, USA) were incubated in BACTEC™ 9050 (for TPHL) or BACTEC FX 40 (for SPHL and KPHL) blood culture systems (BD, Franklin Lakes, NJ, USA). Samples were incubated at 35°C for five days or until a positive signal was detected. Positive blood culture samples were plated on blood and chocolate agar (BD, Franklin Lakes, NJ, USA) and incubated overnight (18–24 h). At all PHL sites, CSF samples were cultured for detection of bacterial pathogens by plating directly on blood and chocolate agars and incubating afterwards at 35–37°C aerobically and anaerobically, respectively. Gram stain was also performed immediately on all CSF samples for prompt reporting for patient management. At TPHL, multiplex real-time polymerase chain reaction (PCR) was conducted on all CSF samples for further identification and confirmation. Samples were processed, after which a master mix comprising primers, probes and other reagents was used for simultaneous detection of Neisseria meningitidis, Haemophilus influenzae and Streptococcus pneumoniae. One microlitre (1 μl) each of sodC-forward primer, sodC-reverse primer, sodC-probe, hpd3-forward primer, hpd3-reverse primer, hpd3-probe, lytA-forward primer, lytA-reverse primer and lytA-probe was added to 1.5 μl of PCR grade water, 12.5 μl PCR quantabio and 2 μl of DNA template, resulting in a total reaction volume of 25 μl. PCR amplification was conducted using the AriaMx RT-PCR System (Agilent Technologies). Thermal cycling conditions comprised one cycle of initial denaturation at 95°C for 10 min, followed by 40 cycles of template denaturation at 95°C for 15 s and annealing at 60°C for 1 min.
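The stated per-reaction volumes can be checked and scaled with a short script; the volumes are taken from the text, whereas the scaling helper and the 10% overage are common-practice assumptions rather than part of the published protocol.

```python
# Per-reaction volumes (microlitres) for the triplex sodC/hpd3/lytA real-time
# PCR, as stated in the text. The 10% overage used when scaling a master mix
# is a common-practice assumption, not part of the published protocol.
PRIMERS_AND_PROBES = [
    "sodC-F", "sodC-R", "sodC-probe",
    "hpd3-F", "hpd3-R", "hpd3-probe",
    "lytA-F", "lytA-R", "lytA-probe",
]
PER_REACTION = {name: 1.0 for name in PRIMERS_AND_PROBES}
PER_REACTION.update({"PCR-grade water": 1.5, "PCR mix (quantabio)": 12.5})
TEMPLATE_UL = 2.0  # DNA template is added per reaction, outside the master mix

def master_mix(n_samples, overage=0.10):
    """Scale the master-mix components for n samples plus an overage."""
    factor = n_samples * (1 + overage)
    return {name: round(vol * factor, 1) for name, vol in PER_REACTION.items()}

print(sum(PER_REACTION.values()) + TEMPLATE_UL)  # 25.0 µl, matching the stated total
print(master_mix(24))                            # volumes for a 24-sample run
```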
Bacteria identification
For positive blood cultures, a single pure colony was picked from the blood agar for Gram staining. Urine cultures which yielded significant bacterial growth on CLED were selected for Gram staining by picking a single, well-isolated colony. For positive stool cultures, colonies which grew on XLD, MSA, TCBS and SMAC agars were sub-cultured onto different blood agars and incubated overnight aerobically. Single pure colonies on the respective blood agars were picked for Gram staining.
Biochemical investigations such as triple sugar iron (TSI), citrate, urease, indole and oxidase tests were performed on all Gram-negative isolates. API 20E and 20NE were also performed on presumptive enterobacteria and non-enterobacterial isolates, respectively. Other tests such as catalase, coagulase and optochin were performed on all Gram-positive bacteria to aid identification of most common Gram-positive pathogens such as Streptococcus pneumoniae and Staphylococcus aureus.
Statistical analysis
Data were collected, entered into Microsoft Excel (Microsoft Corporation, 2013), cleaned and exported to STATA version 12 (Stata Corp, College Station, Texas, USA) for analysis. Descriptive statistics were used to summarize the distributions of the various variables in tables and graphs. Differences between discrete variables were analyzed using the chi-square test (or Fisher's exact test where appropriate), and a p value less than 0.05 was considered statistically significant.
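A minimal sketch of the comparison described (chi-square with a Fisher's exact fallback for small expected counts) is shown below; the study used STATA, so scipy and the 2×2 counts here are illustrative stand-ins only.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 table: culture-positive vs. culture-negative samples
# before and after programme implementation (made-up counts, not study data).
table = [[12, 88],    # before: positive, negative
         [70, 430]]   # after:  positive, negative

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # Fisher's exact test is the usual fallback when expected counts are small.
    _, p = fisher_exact(table)

print(f"p = {p:.4f} ->", "significant" if p < 0.05 else "not significant")
```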
Characteristics of study population
A total of 3902 patients were tested in all 3 PHLs, of which 2229 (57.1%) were females. The median age of all patients was 18 years (IQR: 3–36 years). There was a general increase in the total number of samples collected and processed after the programme implementation compared to samples received before implementation across all three PHLs (Table 1). At TPHL, there was a statistically significant difference in collection of all samples except blood before and after initiation of the programme. At KPHL, collection of blood, urine and stool differed significantly, whereas at SPHL, only stool specimens differed significantly in collection before and after programme initiation.
Sampling trend
The number of blood samples collected in June 2018 (inception of the study) was the lowest (39/1107; 3.5%) recorded throughout the study period. Sample collection improved with time and remained fairly constant until the end of the study (August 2019), with an average of 74 blood samples (6.7%) collected per month.
Urine was the most collected sample (1742/3903; 44.6%) among all the specimens. The average number of urine samples collected for each month was 116 (6.6%). Generally, there was a downward trend of urine samples collected from November 2018 to March 2019 (Dry Season), with December 2018 recording the least number (77/1743; 4.4%) collected.
Total number of stool samples collected over the study period was comparatively low (249/3903; 6.4%), with an average of 17 samples collected every month. There was a sharp rise in CSF samples collected from January 2019 to March 2019 (Dry Season) (Fig. 2).
Bacterial pathogen distribution
Bacterial pathogens were detected in five hundred and ninety-three (593) out of 3902 clinical specimens. Generally, before programme initiation, there were low proportions of bacterial isolates recovered from the clinical specimens across all three PHLs (except for bacteria from CSF identified by PCR at TPHL) as shown in Table 2. Of the 593 isolates, bacterial pathogens were identified in 70 (11.8%) blood cultures, 356 (60%) urine cultures, 19 (3.2%) stool cultures and 148 (25%) CSF samples after programme implementation (Table 2).
There were more bacteria isolated from blood and urine after programme implementation compared to before at SPHL, and this difference was statistically significant. The number of bacteria isolated from urine after programme implementation was disproportionately higher than before, but the difference was not significant. After implementation, only 6 (0.1%) samples were contaminated with coagulase-negative staphylococci (CNS), and these emanated from blood (2) and urine (4).
S. aureus was commonly isolated from the blood of patients in SPHL (n = 13), followed by TPHL (n = 9), with no isolation from KPHL. E. coli causing urinary tract infection (UTI) was mostly isolated from individuals in SPHL (n = 74), followed by KPHL (n = 63) and then TPHL (n = 16). Isolation of bacteria causing meningitis was extremely high in CSF specimens of patients who attended TPHL (151/154; 98.1%). We observed that new pathogens such as Pasteurella pneumotropica, Klebsiella oxytoca, Vibrio parahaemolyticus, Enterobacter aerogenes, Hafnia alvei, Serratia odorifera and Citrobacter freundii were identified with the aid of API. Hitherto, these pathogens had never been isolated in any of the three PHLs.
Seasonal variations in prevalence of bacterial pathogens
Ghana has two seasons: dry and wet. The wet season is from April to mid-October and the dry season is from December to March. Detection of pathogens from the blood during the first four months of the study was relatively lower compared to the months afterwards (Fig. 3). There was high isolation of S. aureus and Salmonella spp. from blood samples in the dry season, whereas Klebsiella spp. were mostly isolated in the wet season. There was a steady rise in Streptococcus pneumoniae and Neisseria meningitidis detection from CSF in the dry season. Streptococcus pneumoniae isolates mostly occurred in January and co-detections of Streptococcus pneumoniae and Neisseria meningitidis occurred in March (Fig. 4). Urinary pathogens did not show any particular seasonality; however, high isolations of E. coli occurred mostly in the wet season as compared to the dry season (Fig. 4).
Effects of socio-demographics on bacterial infection
Females were more likely to contract urinary tract infection than males (Table 4; p < 0.01). Nonetheless, gender did not significantly contribute to the likelihood of having bacterial infection from blood, stool and/or CSF. Again, children less than 18 years were more prone to bacterial infection in the blood and CSF than adults (Table 4). On the other hand, urinary tract infection (UTI) in adults was significantly higher than in children.
Antimicrobial susceptibility results
Blood
Overall, gentamicin was the least effective antibiotic, with 35–55% of S. aureus, K. pneumoniae and E. coli isolates resistant to this antibiotic (Table 5). Salmonella spp. showed a high proportion of resistance to ampicillin, cefuroxime and ciprofloxacin (38.5%) but not to azithromycin, tetracycline, and cefotaxime (7.7%).
Discussion
The role microbiological laboratories play in the detection and surveillance of pathogenic bacteria is important in addressing the global health security threats posed by infectious agents. Resourcing of the PHLs with equipment and reagents, together with human resource capacity building, has enabled increased and accurate detection of bacterial pathogens from clinical specimens of blood, stool, urine and cerebrospinal fluid at three different PHLs in Ghana. It is instructive to note that prior to this programme, TPHL and KPHL, for instance, had stopped blood cultures due to unavailability of logistics and non-functional BACTEC equipment. SPHL as well was not well versed in the use of CLSI standards for isolation of bacteria. Adequate measures including provision of distilled water plants, establishment of sheep farms, training in microbiological media preparation and identification of microbial pathogens were put in place.
Training and adequate resourcing are therefore essential in strengthening the capacity and functional role of PHLs in developing countries. In sub-Saharan Africa (sSA), about 12 million people die each year [12], with the causes of death largely due to undiagnosed infectious diseases such as HIV, malaria and tuberculosis [13]. A study in Kenya found that bacterial bloodstream infection diagnosed only by blood culture accounted for 26% of deaths among children [14], which gives credence to the fact that laboratory diagnosis of bacterial infections needs to be strengthened in sub-Saharan Africa. In line with previous studies in Africa [15,16], Gram-negative bacteria predominated in our study. Infections from Gram-negative bacteria pose significant public health problems, mainly because of high resistance to antimicrobial agents [17]. Prior to implementation of the project, all three PHLs had serious challenges with regard to microbiological detection of bacterial pathogens from clinical specimens. At TPHL, identification was done solely by observing morphological characteristics of the colonies which grew on various culture media. There was an absence of biochemical testing and an automated blood culture system due to logistical and technical constraints. Both KPHL and SPHL performed very limited biochemical tests, and only SPHL performed automated blood culture analysis. Across the three PHLs, identification of suspected bacterial pathogens was performed only up to the genus level. All these limitations hindered the identification process and proper administration of antimicrobials, which might directly or indirectly contribute to the increasing rate of antimicrobial resistance globally. Procurement and maintenance of automated blood culture machines for the PHLs and training of staff on their use resulted in an increased rate of blood samples received for blood culture. Reagents necessary for biochemical tests were also procured for all the PHLs and staff were trained on the use and interpretation of tests such as triple sugar iron (TSI), citrate, indole, urease, coagulase and catalase tests. Analytical Profile Index and serotyping were also introduced to all PHLs, serving as confirmatory tests for identification of some microbial pathogens. All these led to a significant upsurge in bacterial detection (Table 3) which hitherto could easily have been overlooked and missed. ('Before' refers to the number of bacterial pathogens identified at the various PHLs before implementation of the programme; 'After' refers to the number identified after implementation.)
Staphylococcus aureus was the most prevalent bacterium found in the blood. Naber reported that S. aureus is a major cause of bacteremia and is associated with higher morbidity and mortality compared with bacteremia caused by other pathogens [18]. Again, life-threatening complications from S. aureus bloodstream infections, such as infective endocarditis and metastatic infections, can occur [19,20], and these complications place a high resource burden on health-care systems [21]. Without quality laboratory testing, detection of this medically relevant organism could not be done, and this would have a devastating clinical outcome for patients. However, in this crisis time, laboratory and healthcare infrastructures are woefully inadequate to meet the pressing needs and/or perhaps have been ignored in several areas of sSA [22]. More often than not, financial resources from funding organizations are channeled to prevention of diseases and patients' care, whereas building of laboratory capacity receives relatively little financial support [23].
E. coli and V. parahaemolyticus were commonly isolated pathogens from urine and stool cultures, respectively. Other common pathogens detected in urine included Klebsiella spp., Citrobacter spp., Staphylococcus saprophyticus and Staphylococcus aureus. UTI is among the leading causes of morbidity and mortality, and inappropriate diagnosis could lead to treatment failures and additional complications [24]. This study showed that females were more likely to contract UTI than males, consistent with findings from rural Nigeria [25] and other parts of the world [26,27]. In women, UTIs are one of the most frequent clinical bacterial infections, constituting about 25% of all infections [28]. This is mainly due to women having an extremely short urethra, very close to the anal region, where several enterobacteria are shed frequently in stool. Other factors thought to predispose women to recurrent UTIs include voiding patterns pre- and post-coitus, wiping techniques, wearing tight undergarments, and vaginal douching; however, there is no proven association [29]. Also, medical conditions such as pregnancy, diabetes mellitus and immunosuppression increase the risk of women having UTI [30]. Cases of V. parahaemolyticus infection are few globally; however, it is a common cause of bacterial gastroenteritis in Asia, especially in Japan [31]. Transmission of V. parahaemolyticus is mainly through the consumption of infected seafood, causing acute gastroenteritis [32]. The organism can also make its way into an open wound during exposure to salt water [31]. All V. parahaemolyticus cases recorded in this study were from SPHL, located in the only coastal and southernmost region in this study, where consumption of raw/undercooked seafood is high. SPHL received the most specimens because of its strategic location within the enclave of the south-western part of Ghana. As a result, clinicians could easily refer patients to the laboratory for culture analysis. Inhabitants living along the coast principally engage in numerous outdoor activities such as fish farming and swimming in the deep ocean, which predispose them to some waterborne infections, especially urinary tract infections (UTIs), and this could also account for the high proportions of urine samples collected in SPHL compared to the other PHLs.
Most detections of S. pneumoniae, H. influenzae and N. meningitidis in this study were achieved by multiplex real-time PCR. This platform was, however, only available at the TPHL, which also doubles as a reference testing centre for meningitis in Ghana. It is possible the other laboratories had low detections because they used only culture methods, which are less sensitive than multiplex PCR. PCR is a fast, sensitive and reliable technique for simultaneous detection of different molecular targets in one reaction. Training and logistical support provided by the CDC to the TPHL enabled them to utilize this molecular approach to detect common etiological agents of meningitis from CSF in Ghana. It would be important for the other PHLs to be adequately resourced with molecular testing capacities to help them appropriately detect and respond to infectious agents and outbreaks. The high number of meningitis pathogens identified at the TPHL could also be due to the geographical location and catchment populations targeted, which lie within the sSA meningitis belt. Cases of meningitis are frequently reported from the northern part of Ghana, where TPHL is situated, especially during the hot dry seasons (December–March, and August), leading to several meningitis outbreaks. Unlike CSF samples, other specimens such as urine, stool and blood did not show such a seasonal trend. It is an established fact that cerebrospinal meningitis is an infectious disease which is commonly impacted by climate, specifically hot climate [10]. The five northern regions (Upper East, Upper West, Savannah, North-East and Northern regions) are the hottest regions in Ghana and the weather worsens during the period between December and June, resulting in a number of CSM outbreaks in northern Ghana, as previously reported [10,33]. Several studies have indicated that climatic conditions characterized by dry winds, dust storms, low humidity and cold nights considerably diminish the local immunity of the pharynx, thereby increasing the risk of meningitis [34][35][36]. These climatic conditions are typically found in northern Ghana during the dry season, and this likely explains why CSF samples were largely collected during the peak dry seasons compared to the other sample types.
Almost all CSF pathogens (145/148; 98%) recorded in this study emanated from TPHL. Streptococcus pneumoniae was the most isolated pathogen from CSF, consistent with previous data from Brong Ahafo region [37], about 210 km away from Tamale. This Global Health Security pathogen is the leading cause of bacterial meningitis, with an average mortality rate of 25%, despite effective antibiotic therapy and improved intensive care facilities [38]. Bacterial meningitis outbreaks are common in countries located in the Africa's meningitis belt [39]. Rapid detection of the etiology of these outbreaks can lead to targeted public health interventions. Building and sustaining laboratory capacity in countries where meningitis outbreaks are common will be critical to ensure rapid and effective response to these outbreaks.
Global emergence and spread of antibiotic resistant strains of bacteria is still a major health problem. CfHSS provided training in the performance and interpretation of antimicrobial susceptibility tests to all the PHLs. Guidelines and protocols by the Clinical and Laboratory Standards Institute (CLSI) were made available to the PHLs and all staff were adequately trained in the use of these documents. Antimicrobial susceptibility tests from blood cultures in this study revealed high resistance (35–55%) of S. aureus, K. pneumoniae and E. coli to gentamicin. This drug is commonly used to treat a wide range of Gram-negative and some Gram-positive infections due to its antimicrobial efficacy, widespread availability and low cost [40]. However, the present report shows an increase in resistance to this vital drug, consistent with data reported by Ababneh and colleagues [41]. Escherichia coli in urine showed high resistance to drugs such as ciprofloxacin (39.2%) and ampicillin (34%). These are historically useful antibiotics for the treatment of UTIs [42]. Also, the high rate of Klebsiella spp. resistance (39.7%) to the 3rd generation cephalosporin cefotaxime is of great concern. Resistance to 3rd generation cephalosporins such as cefotaxime, ceftazidime and ceftriaxone serves as a surrogate marker for detection of extended-spectrum beta-lactamases (ESBL) [43]. There have been reported cases of an increasing trend of Klebsiella resistance to these 3rd generation cephalosporins [44,45]. Third generation cephalosporin resistance leaves clinicians with limited options for treating patients with Gram-negative infections, and as a result, relatively expensive drugs within the carbapenem class are usually considered the treatment of choice [45].
A limitation of this study was the inability to examine for the presence of ESBL phenotypes in the bacterial isolates which were resistant to the 3rd generation cephalosporins. A suggested approach would be to conduct double-disc diffusion synergy test for phenotypic confirmation of organisms possibly encoding ESBL. Another limitation was the exclusion of laboratory detection of viral and parasitic infections. This was due to restricted CDC-Ghana PHL budget for the project. Further support is therefore needed for capacity building in the detection of viruses and parasites in the three zonal PHLs.
Conclusion
Isolation and proper identification of aetiological agents in bacterial infection are of great importance. Partner organizations and NGOs are encouraged to contribute their quota to help strengthen PHLs in sub-Saharan Africa. The outcome of this report clearly indicates that with routine and effective laboratory trainings, equipment and reagent support, detection of bacterial pathogens can be greatly enhanced and the right antimicrobial therapy administered.
|
2021-03-25T13:46:48.642Z
|
2021-03-25T00:00:00.000
|
{
"year": 2021,
"sha1": "dba3beb330941953e95c92c05264765a8a8a76eb",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-021-05999-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dba3beb330941953e95c92c05264765a8a8a76eb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
214321938
|
pes2o/s2orc
|
v3-fos-license
|
The cluster-delay consensus of nonlinear multi-agent systems via impulsive control
Based on the impulsive control strategy, the cluster-delay consensus of nonlinear multi-agent systems is studied in this paper for the first time. Different from the traditional continuous control method, impulsive control acts on the systems only at discrete impulsive moments, so it has the advantages of low control costs, fast response speed and strong adaptability. In addition, under the impulsive protocol, each agent uses the state information of all its neighboring agents to update its own state at the impulse instants. Based on graph theory and Lyapunov stability theory, some sufficient consensus criteria are given. Finally, the correctness of the theoretical results is illustrated by numerical simulation.
Introduction
In current society, research on multi-agent systems (MASs) based on distributed cooperative control technology has been widely applied in many fields of social life, such as sociology [1], economics [2], and formation control of unmanned aerial vehicles or robots [3]. As a significant topic in this field, consensus means that all agents in the same MAS or cluster eventually tend to a common state. Research on consensus has already produced abundant achievements [4][5][6] and attracted more and more attention in academic circles.
In practical applications, all agents within a MAS can be divided into multiple clusters (i.e. subgroups) according to the degree of correlation among agents. In the same cluster, the degree of association or coupling strength between agents is higher, while the coupling strength between agents in different clusters is generally smaller. If each cluster within the MAS can reach its own common state in the end, then the cluster consensus of the system is said to be reached. Up to now, cluster consensus (also known as group consensus) has been studied in [7][8][9], among others.
In particular, if the common state of one cluster is chosen as the leading state, the remaining clusters' common states will eventually be consistent with the corresponding delayed versions of the leading state. In this case, as a special case of group consensus, the concept of cluster-delay consensus was developed in [10], where the authors studied the cluster-delay consensus of first-order nonlinear MASs via continuous control. Based on graph theory and Lyapunov stability theory, some sufficient criteria for consensus were given. For a first-order nonlinear MAS with delays, cluster consensus has been studied via pinning control with a periodic intermittent effect in [11]. Similarly, through designing a specific intermittent control protocol, the cluster-delay synchronization of a directed network was investigated in [12]. Considering the ways information is transmitted in different clusters, the new notion of layered intermittence was developed in [13], based on which the cluster-delay consensus of MASs with aperiodic intermittent communication has been researched. Compared with the traditional way of continuous control in [7][8][9][10][11][12][13], impulsive control has the advantages of low cost and high efficiency, and it is widely used in the research of consensus for MASs [14][15][16][17].
Motivated by the above discussions, this paper studies the cluster-delay consensus for a class of first-order nonlinear MASs via impulsive control. Compared with [10][11][12][13], this paper has the following innovations: firstly, impulsive control is used to ensure that the MASs can achieve cluster-delay consensus while reducing control costs and improving control efficiency; secondly, in the construction of the controller, agents receive not only the state information of adjacent nodes in their own cluster, but also the state information of adjacent nodes in other clusters.
The rest of this paper is as follows: In section 2, the related concepts in graph theory, the construction of models and protocols are briefly introduced. In section 3, sufficient consensus criteria are derived by mathematical theoretical derivation. In section 4, the correctness of theoretical results is illustrated by some simulations. In section 5, a brief summary of this paper is given.
Graph theory
For convenience, the structure of MASs is represented by a topology graph. If the connection between any two agents in the systems is bidirectional, then the graph is undirected. Otherwise, it is a directed graph.
Problem description and protocol construction
Consider a first-order nonlinear MAS composed of N follower agents, whose dynamics are described by system (1), in which the cluster subscript indicates the cluster to which the ith agent belongs. In [10], the dynamics of each follower agent were modeled by system (3) under a continuous protocol u_i(t). On the one hand, the continuous protocol u_i(t) cannot take effect when the state information of neighboring agents is not received continuously. On the other hand, in system (3) the update of the state x_i(t) is affected only by neighboring agents in the same cluster, so the state information of neighboring agents in other clusters is not fully utilized.
Therefore, motivated by system (3), the novel protocol (4) is designed in this paper, with two improvements: first, an impulsive control strategy is used to drive the systems to cluster-delay consensus while reducing control cost and improving control efficiency; second, each agent uses the state information of all of its neighboring agents to update its current state.
Virtual leaders are introduced into the MAS, with dynamics given by equation (5), where {t_k} is the impulse sequence, which satisfies 0 < t_1 < t_2 < ... < t_k < ... with t_k tending to infinity as k increases. System (7) can then be rewritten in the compact error form used in the analysis below.
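Since the model and protocol equations (1)-(5) are referenced here only by number, a small numerical sketch may help make the idea concrete: between impulse instants each follower evolves under its own nonlinear dynamics, and at each impulse instant its state jumps toward a combination of its neighbours' states (both inside and outside its cluster) and the appropriately delayed leader state. The dynamics f, the gains, the delays and the topology below are illustrative assumptions, not the quantities used in this paper.

```python
import numpy as np

# Illustrative setup (assumptions, not the paper's actual equations or values)
f = lambda x: 0.5 * np.sin(x)        # common nonlinear intrinsic dynamics f(x)
dt, T = 0.001, 10.0                  # integration step and horizon
n_steps = int(T / dt)
impulse_every = 50                   # an impulse instant t_k every 50 steps
tau = [0.0, 0.5]                     # delay of each cluster relative to the leader
clusters = [[0, 1, 2], [3, 4, 5]]    # two clusters of three followers
b, c = 0.4, 0.05                     # impulsive gains: own cluster / other clusters

# Leader trajectory s(t), generated by the same nonlinear dynamics
s = np.zeros(n_steps + 1)
s[0] = 1.0
for step in range(n_steps):
    s[step + 1] = s[step] + dt * f(s[step])

x = np.random.uniform(-2.0, 2.0, 6)  # follower initial states
for step in range(1, n_steps + 1):
    x = x + dt * f(x)                # free nonlinear flow between impulses
    if step % impulse_every == 0:    # impulsive update at t_k
        x_new = x.copy()
        for k, cl in enumerate(clusters):
            # delayed leader value s(t - tau_k) that this cluster should track
            delayed = s[max(step - int(tau[k] / dt), 0)]
            for i in cl:
                same = [j for j in cl if j != i]
                other = [j for j in range(6) if j not in cl]
                # jump toward own-cluster neighbours, the delayed leader,
                # and (weakly) neighbours from the other cluster
                x_new[i] += b * (np.mean(x[same]) - x[i]) \
                          + b * (delayed - x[i]) \
                          + c * (np.mean(x[other]) - x[i])
        x = x_new

# Cluster-delay consensus error: cluster k should end near s(T - tau_k)
for k, cl in enumerate(clusters):
    target = s[n_steps - int(tau[k] / dt)]
    print(f"cluster {k}: states {np.round(x[cl], 3)}, target {target:.3f}")
```

The printed values indicate how closely each cluster tracks its delayed copy of the leader trajectory; the sufficient criteria of the next section formalise when such impulsive updates guarantee convergence.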
Main Results
Under the impulsive protocol (4), several sufficient criteria are derived for the cluster-delay consensus of the first-order nonlinear MAS given by (1) and (5).
The integral of equation (9) is easily obtained, and from it the following estimate can be derived.
Equation (14) shows that for W(t) <= 0 to always hold, the matrix Q must be negative definite. Therefore, the delay consensus of MAS (5) can be achieved via impulsive control, and consequently the cluster-delay consensus of MASs (1) and (5) can be achieved via impulsive control. The correctness of protocol (4) is further illustrated by simulation in the next section.
Conclusions
For a first-order nonlinear MAS, an impulsive protocol is designed to drive the system to cluster-delay consensus. Under this protocol, the controller update of the ith agent depends on the state information of all of its neighboring agents. Based on graph theory and Lyapunov stability theory, sufficient consensus criteria are derived, and their correctness is illustrated by numerical simulation.
ASILIDAE (DIPTERA) FROM RAJASTHAN, INDIA
INTRODUCTION From 8th September to 7th October 1986 we surveyed robberflies of Rajasthan in collaboration with the Desert Regional Station, Zoological Survey of India, Jodhpur. The present paper deals with collections made during this survey, together with two specimens of Philodicus sharmai n. sp. from Gujarat held at the Desert Regional Station. The arrangement of genera follows 'A Review of the Asilidae (Diptera) from the Oriental Region' by Joseph and Parui (1983).
Legs black and yellowish-brown; fore coxa black, trochanter yellowish-brown, femur black with its apex narrowly yellowish-brown, tibia yellowish-brown and dark brown, tarsus dark brown; mid leg similar except coxa yellowish-brown; hind coxa, trochanter and femur coloured like those of mid leg, tibia basally yellowish-brown and rest black, tarsus black; vestiture and bristles pale yellow.
Wing pale yellow, but medially and along hind border hyaline; R 6 and M 1 united close to the border.
Using the key to the species of Nusa Walker from India and adjoining countries (Joseph & Parui 1987a), it keys to N. albibarbis Ricardo, from which it differs in the thorax being without stripe, the black and yellowish-brown legs, the hind border of scutellum without bristles, the abdomen predominantly yellowish-brown, and the differences in the shape of epandrium and gonocoxite. 5. Nusa setosa n. sp. (Fig. 2) A small, pilose, yellowish-brown or orange species marked with dark brown or black to varying extent, legs pale yellow or yellowish-brown with dark brown or black, and wings hyaline. Male: length 9 mm, wing 5-6 mm; female: length 8-9 mm, wing 6 mm.
Male
Head as broad as thorax, yellowish-brown and black with dense grey tomentum; mystax golden yellow, fronto-orbital setae and ocellar bristles white. Antenna with scape and pedicel pale yellow, first flagellomere basally pale yellow and remainder dark brown, scape with white setae and 1 or 2 white bristles, pedicel slightly shorter than or nearly equal to scape, first flagellomere elongate. Palpus yellowish-brown, proboscis yellowish-brown and black, both with white setae.
Thorax yellowish-brown with dark brown or black to varying extent, so much so that in some examples it is mostly dark brown or black; short white or golden yellow pollinose; pronotum with white setae; scutum in holotype with a broad mediolongitudinal, black stripe extending from anterior border to midway between transverse suture and hind border, laterally with the black spots united, in one paratype the stripe faint and in the other absent, in both paratypes lateral black spots much smaller and faint; chaetotaxy: 1 notopleural, 2 postalars, a few bristles postero-laterally; scutellum with dense pile, but in holotype comparatively less, border without bristles; katatergite with long, white, bristly setae. Haltere pale yellow, with or without dark brown on head.
Legs pale yellow or yellowish-brown with dorsal dark brown or black marking on femur, the intensity and area of dark brown or black increasing from fore to hind tibia, so much so that in the latter in some examples it occupies the whole of the upper side, apex of hind tibia also dark brown or black; vestiture and bristles white.
Wing hyaline with the veins pale yellow basally.
Abdomen yellowish-brown or orange-brown with short, white or golden yellow pilosity, holotype with faint median and lateral black markings on terga and with sparse setae, tergum 2 laterally with three bristles, terga 3 and 4, and in some examples tergum 5 also, with 1 or 2 bristles. Male genitalia (Fig. 2A) with gonocoxite bearing a row of bristles apically.
Females: Similar but with the following differences: comparatively darker specimens; scutum with the mediolongitudinal stripe and lateral spots larger and distinctly marked; haltere yellowish-brown; abdomen mostly black with much less yellowish-brown or orange-brown. Female genitalia (Fig. 2B). The male genitalia of Nusa setosa n. sp. are distinctive and this species can be readily distinguished by the presence of an apical row of bristles on the gonocoxite.
Males: Head narrower than thorax, black with dense greyish-white tomentum; mystax white, fronto-orbital bristles and ocellar bristles white, medially with white setae. Antenna black, or scape and pedicel yellowish-brown and remainder black, scape and pedicel with white bristles, scape slightly longer than half to much longer than half of pedicel, first flagellomere longer than the combined length of scape and pedicel, style about half to less than half the length of first flagellomere. Palpus and proboscis black, both with white setae.
Thorax black, with or without grey tomentum; pronotum with white or pale yellow, dense bristles and a few setae; scutum with the humeri sometimes brown, usually with a faintly black mediolongitudinal stripe extending from anterior border to beyond the transverse suture, the stripe faintly divided by a narrow grey stripe, in most cases the stripes black and indistinct to varying extent; chaetotaxy: a number of white bristles present laterally and posteriorly, of which postalars (7 or more) and dorsocentrals distinct; vestiture and bristles white; scutellum with a row of white bristles apically and sparse white setae laterally. Haltere pale yellow, yellowish-brown or yellowish-orange.
Legs black and pale yellow or yellowish-brown, coxa and trochanter black; fore femur black basally for half or more and remainder pale yellow or yellowish-brown, mid femur similar but black more extensive and occupying two-thirds or more, hind femur wholly black, fore tibia and tarsus pale yellow or yellowish-brown, mid tibia and tarsus similar but tibia apically black, hind tibia wholly black or basally pale yellow up to one-fourth and the remainder black, tarsus pale yellow with basitarsus sometimes black; fore femur with anteroventral and posteroventral rows of bristles but in one paratype the latter row absent, mid femur with anterodorsal, anteroventral, ventral (weak) and posteroventral rows of bristles, hind femur with anterior and anteroventral rows of bristles; vestiture and bristles white.
Wing anteriorly brownish and the remainder infuscated; first posterior cell closed.
Abdomen black, with or without grey tomentum, tergum 1 posterolaterally with a few white bristles, tergum 2 laterally with a few white, bristly setae, vestiture white or white and yellow. Male genitalia (Fig. 3) black with white and black setae, gonocoxite long.
Females
Similar but with the following differences: bristles on upper side of postcranium white or pale yellow; scutum with humeri in most cases yellowish-orange, in a few paratypes mediolongitudinal stripe bordered by a narrow, yellowish-orange line; ventral row of bristles on mid femur fewer in number; abdomen orange or yellowish-brown medially and black laterally, tergum 1, tergum 2 anteriorly, tergum 6 mostly, and terga 7 and 8 completely black, in some examples orange or yellowish-brown more extensive so much so that terga 2 and 7 almost wholly orange or yellowish-brown. Genitalia with tergum 9 bearing ten blunt spines. 7. Stichopogon basiti n. sp. (Fig. 4) A tiny, densely grey or greyish-yellow tomentose species with pale yellow legs with dark brown or black markings, wings with two brown spots, and abdomen with a broken, mediolongitudinal, black or dark brown stripe; lamella of female without hair tuft.
Females: Head broader than thorax, densely greyish-white or greyish-yellow tomentose; mystax white, fronto-orbital plate with white bristles and setae, ocellar setae white, postcranium with white setae, postocular bristles white and confined to above, postgena with white setae. Antennal scape and pedicel pale yellow, remainder black, pedicel bearing a few white bristles, scape very short, about one-fourth the length of pedicel, first flagellomere long, longer than one-and-a-half times the combined length of scape and pedicel. Palpus and proboscis black with sparse, white setae.
Thorax black, greyish-yellow and grey tomentose; pronotum with a median transverse row of white, bristly hairs; scutum without mediolongitudinal stripe and lateral spots in holotype but with a faint, black mediolongitudinal stripe extending from anterior border to transverse suture in paratype; chaetotaxy: 1 notopleural, 1 postalar, 1 supra-alar, 1 dorsocentral; vestiture and bristles white; scutellar disc bare, hind border with a few long, white setae and 8 white bristles; katatergite with a row of elongate bristly setae. Haltere yellowish-brown or pale yellow.
Legs pale yellow with dark brown or black marking, fore and mid femora pale yellow or pale yellow with dark infuscation medially for about one-third length, hind tibia basally pale yellow and distally black or dark brown for about half or lesser than that, tarsal segments yellowish-brown with apical 1 or 2 segments black, vestiture and bristles white.
Wing lightly infuscated with two brown spots: one at the origin of R4 and the other at the junction between discal, 2nd posterior and 3rd posterior cells.
Abdomen black or dark brown with grey and greyish-yellow tomentum, medially with a broken, black or dark brown stripe from terga 1 to 6, tergum 7 with the stripe faint, and succeeding tergum without it, tergum 7 mostly and tergum 8 wholly tomentose, hind border of terga with a narrow grey or yellowish-brown border, tergum 1 posterolaterally with a few long, white setae. Female genitalia (Fig. 4) with tergum 8 bearing a circlet of ten spines; lamella without hair tuft. Of all the known regional species of Stichopogon, S. basiti n. sp. is closest to S. ramakrishnai Joseph and Parui (1988), from which it can be separated by the presence of a marginal row of white bristles on the scutellum, the wing with only two brown spots, and the difference in the shape of the lamella. It is named in honour of Mr. Abdul Basit, our colleague in Jodhpur, who has taken some excellent photographs of insects.
Females
Head broader than thorax, black, grey tomentose, specimens with dense grey tomentum with a mediolongitudinal black line extending from ocellar triangle to occipital foramen; mystax white except in one paratype where it is pale yellow, fronto-orbital setae and ocellar bristles white, postcranium with white setae, postocular bristles white and confined to above, postgena with white setae. Antennal scape and pedicel pale yellow to yellowish-brown with sparse white setae, remainder black, scape less than half to slightly more than half of pedicel, first flagellomere long, about one-and-a-half times the length of scape and pedicel combined. Palpus and proboscis black, the latter with sparse white setae.
Thorax black with grey and greyish-yellow pollinose; pronotum with sparse white setae; scutum with a narrow mediolongitudinal stripe extending from anterior border to transverse suture, the length and breadth of which quite variable, laterally with or without 2 black spots; chaetotaxy: 1 notopleural, 1 postalar, 1 supra-alar; vestiture sparse, white, bristles white or pale yellow; scutellar disc bare, border with a few elongate setae, 2 or 4 of which may form bristly setae; katatergite with a few, elongate, white bristly setae.Haltere pale yellow in most of the cases, but in one example yellowish-brown and in the other white.
Legs black and yellowish-orange or pale yellow, coxa, trochanter and femur black, tibia yellow with black apex, the black colouration increasing from fore to hind tibiae, so much so that in the latter one-third or more is black; vestiture and bristles white.
Wing faintly infuscated.
Abdomen black, greyish-white pollinose, in the typical case as in holotype tergum 1 greyish-white pollinose with a median black mark, terga 2-6 black, on anterior border with a medially incomplete greyish-white stripe and laterally greyish-white, tergum 7 similar but hind border also greyish-white so that the black area is much reduced, tergum 8 completely greyish-white, the extent of black and greyish-white areas quite variable from specimen to specimen. Female genitalia (Fig. 5A) figured, tergum 8 with a circlet of ten spines, lamella with closely matted setae in a spindle-like tuft.
Male
Similar except the border of scutellum with 6 bristly hairs; abdomen with much less greyish-white pollinosity, so much so that terga 4-6 are almost wholly black, tergum 7 with greyish-white restricted to hind border, terga 8 and 9 almost wholly greyish-white. Male genitalia (Fig. 5B) illustrated. Female of Stichopogon biharilali n. sp. is similar to S. meridionalis Oldroyd (1948) in the spindle-shaped tuft of setae on the lamella, but is otherwise a quite distinct species with black fore and mid femora and a different shape of lamella. It is named in honour of our colleague Mr Biharilal, who helped us in several ways to make our Rajasthan collection trip successful.
Remarks:
The species was described from Orissa and subsequently reported from South India. This is the first report from Rajasthan. 17. Ommatius singhi n. sp. (Fig. 6) A very small black species with black and yellowish-brown legs, and distally infuscated wings. Male: length 4.0-5.6 mm, wing 4.0 mm; female: length 6.00 mm, wing 5.00 mm.
Males: Head black, grey tomentose; mystax white with 2-3 to the whole of the upper half with black bristles, fronto-orbital setae black and white or wholly black, ocellar bristles white or black, postocular bristles black above and white below, postcranium bare above and with a few white setae below. Antenna black, scape and pedicel with black bristles, scape subequal to pedicel, first flagellomere and pedicel nearly equal in length. Palpus and proboscis black with white setae.
Thorax black with grey tomentum; pronotum with white setae and with or without 1 or 2 pairs of pale yellow bristly setae in a transverse row; scutum with the mediolongitudinal stripe black, faintly marked, and extending from anterior border to beyond transverse suture; chaetotaxy: 1 notopleural, 1 postalar, 2 supra-alars, 1 or 2 dorsocentrals; vestiture white, bristles white with one or more pairs black; scutellum with a pair of white or black bristles on border, disc with a few white setae; katatergite with a few long, bristly white setae, meron also with a pair of similar setae. Haltere pale yellow or white.
Legs black and yellowish-brown, coxa, trochanter and femur black, tibia yellowish-brown with black apex or wholly yellowish-brown, remainder of tarsus black, hind femur with anteroventral and postero-ventral row of weak bristles, vestiture predominantly white with some black setae, bristles white and black.
Wing more than half infuscated distally, the remainder hyaline.
Abdomen club-shaped, black, tergum 1 posterolaterally with a few white bristles and setae, tergum 2 laterally with a few long, white setae, vestiture white. Male genitalia (Fig. 6) black with black setae and bristles, epandrium ventrally with a nearly rectangular projection.
Female: Similar to the males. Female genitalia black with black and white setae, sternum 8 depressed medially, shallowly concave apically, and with a pair of long bristles laterally, proctiger narrow at apex. It is typical of the genus Ommatius with black colouration and club-shaped abdomen, and can be separated from all the other known Indian species by the distinctive male genitalia. It is named in honour of Sri R. K. Singh, our colleague at Jabalpur, who collected some interesting robberflies for us. A medium-sized, black species with sparse grey tomentum, black and yellowish-brown or pale yellow legs, and distally infuscated wing. Male: length 13-16 mm, wing 9-10 mm; female: length 16-18 mm, wing 11-13 mm.
Males: Head slightly broader than thorax, black with or without grey tomentum; mystax white, fronto-orbital setae wholly white or black and white, ocellar bristles wholly black or rarely black and white, postocular bristles pale yellow or white, a few, and confined to above, postgena with white setae. Antenna black with first flagellomere yellowish-brown or dark brown to varying extent, scape and pedicel with black and white setae and bristles, pedicel more than two-thirds of to slightly shorter than scape, first flagellomere shorter to longer than the combined length of scape and pedicel. Palpus and proboscis black with white setae.
Legs black, except tibia mainly yellowish-brown or pale yellow with apically black to varying extent, fore and mid tibiae with black apex, hind tibia distally black for about half or more; vestiture white and black, bristles black. 24. Philodicus sharmai n. sp. (Fig. 8) A large black species with dense grey or greyish-yellow tomentum, black legs and light brown wings with yellowish-brown veins. Male: length 26-28 mm, wing 19 mm; female: length 28-29 mm, wing 19 mm. Males: Head narrower than thorax, black, greyish-yellow tomentose; mystax white, fronto-orbital plate with white and one or more black setae, ocellar bristles black, postocular bristles white or pale yellow with one or more of them black, postcranium with white setae; postgena with dense white setae. Antenna wholly black, or scape black and remainder dark brown or yellowish-brown, scape and pedicel with black bristles, scape about one-third longer than pedicel, first flagellomere nearly equal in length to scape. Palpus and proboscis black with white setae. Thorax black with dense greyish-yellow or grey tomentum; pronotum with a transverse row of white or pale yellow bristles, and with dense white setae; scutum with a mediolongitudinal stripe extending from anterior border to midway between transverse suture and hind border, the stripes divided by a greyish-yellow stripe, posterior to the longitudinal stripe with three longitudinal, grey marks, lateral three black spots faintly marked or distinct; chaetotaxy: 1 notopleural, 2 postalars, 2 supra-alars, 1 intra-alar, 1-5 dorsocentrals; vestiture black with some white setae laterally, bristles black; scutellar disc with white setae, border with 2, and in one paratype 3, black bristles; katatergite and meron with white or pale yellow, long, bristly setae. Haltere pale yellow, or stalk pale yellow with orange or dark brown head.
Legs black, hind femur with an anteroventral row of black bristles, vestiture white, bristles black, fore tibia and tarsus anteriorly with mat of golden yellow setae, in hind tibia and tarsus similar setae present posteriorly.
Wing light brown with cells along hind border still lighter coloured, veins yellowish-brown.
Abdomen black, grey tomentose to varying extent, in the typical case as in holotype tergum 1 completely grey, terga 2-6 anteromedially black, and laterally and posteriorly grey tomentose, terga 7 and 8 medially black and laterally grey tomentose, terga posterolaterally with a few white or pale yellow bristles which decrease in number and size on succeeding terga, tergum 2 mediolaterally with 2 additional bristles, vestiture predominantly white. Genitalia (Fig. 8A, B) black.
Females: Similar but with the following differences: chaetotaxy: 2 paratypes with 1 supernumerary seta outer to the posterior postalar; bristles on abdomen confined to terga 1 and 2, tergum 1 similar to that of male, tergum 2 with 2-4 mediolateral bristles (which in one paratype are black) but without posterolateral bristles. Genitalia (Fig. 8C) black, tergum 9 with 8 or 10 spines. Joseph & Parui (in press b): from both of these it can be recognised by the larger size together with the shape of the female genitalia. The new species is affectionately dedicated in honour of our long-time friend and colleague, Dr. R. C. Sharma, Scientist 'SE', without whose generous help it would not have been possible to undertake the Rajasthan Survey of Asilidae successfully.
Fig. 1. Lateral view of male genitalia of Nusa rajasthanensis n. sp.
Fig. 3. Lateral view of male genitalia of Stenopogon roonwali n. sp. Fig. 4. Lateral view of male genitalia of Stichopogon basiti n. sp.
It is quite similar to Stenopogon cinchonaensis Joseph and Parui (1981b), from which it differs in the white mystax, the white bristles on the hind border of the scutellum and the shape of the gonocoxite. It is named in honour of Dr. M. L. Roonwal, ex-director, Zoological Survey of India, Calcutta, who recently passed away. Stichopogon Loew 1847. Stichopogon Loew, Linn. Ent. 2: 499.
Fig. 5. Stichopogon biharilali n. sp., A, lateral view of female genitalia; B, lateral view of male genitalia.
Fig. 8. Philodicus sharmai n. sp., A, lateral view of male genitalia; B, ventral view of male genitalia; C, ventral view of female eighth sternum.
Projected Impacts of Climate Change on the Physical and Biogeochemical Environment in Southeast Asian Seas
The seas of Southeast Asia are home to some of the world's most diverse ecosystems and resources that support the livelihoods and wellbeing of millions of people. Climate change will bring temperature changes, acidification and other environmental change, with uncertain consequences for human and natural systems in the region. We present the first regional-scale projections of change in the marine environment up to the end of the 21st century. A coupled physical-biogeochemical model with a resolution of 0.1° (approximately 11 km) was used to create projections of future environmental conditions under two greenhouse gas scenarios, RCP4.5 and RCP8.5. These show a sea that is warming by 1.1-2.9°C through the 21st century, with surface pH falling by up to 0.02 and dissolved oxygen decreasing by 5 to 13 mmol m-3. Changes for different parts of the region, including four sensitive coastal sites, are presented. The changes reach all parts of the water column and many places are projected to experience conditions well outside the range seen at the start of the century. Altered species distribution and damage to coral reefs resulting from this environmental change would have consequences for biodiversity, for the livelihoods of small-scale fishers and for the food security of coastal communities across the region. Projections of this type are a key tool for communities planning how they will adapt to the challenge of climate change.
Introduction
The world's oceans are warming, acidifying and deoxygenating, leading to shifts in the geographical range of many marine species, and these changes are expected to accelerate this century (IPCC 2019).
Southeast Asia is particularly vulnerable to the effects of marine climate change: a large population lives in coastal areas (Neumann et al. 2015) and relies on marine resources and marine ecosystem services (Barange et al. 2014). In some Southeast Asian countries the ocean economy can account for 15-20% of total GDP (Ebarvia 2016). In addition, the seas of this region include many sites of high ecological value, including biodiversity hotspots such as the Coral Triangle (Veron et al. 2011;Burke et al. 2012). The impact of climate change on the Southeast Asian marine environment is therefore of major social, economic and ecological concern.
The productivity of marine fisheries is likely to be affected by climate change and the associated changes in ocean conditions including water temperature, ocean currents and coastal upwelling. Climate change also poses a risk to coral reefs, which are found across Southeast Asia and are areas of particularly high biodiversity: 76% of all coral species and 37% of coral reef fish species are found in the Coral Triangle (Burke et al. 2012). Rising temperatures and ocean acidification pose threats to coral reefs worldwide (Hoegh-Guldberg et al. 2007; Lough et al. 2018). Increasingly frequent and more extreme heatwaves cause damage to reefs through mass coral bleaching, reducing long-term sustainability. In addition, ocean acidification is altering ocean carbonate chemistry, limiting coral growth and degrading the physical structure of reefs (Burke et al. 2012; Lam et al. 2020). Coral reef degradation in Southeast Asia threatens the associated food web, jeopardizing dependent biodiversity and fisheries, and threatening regional food security, coastal protection and tourism, potentially costing the region billions of dollars in lost revenue (Burke et al. 2002, 2012; Cesar et al. 2003).
There are indications that climate change is resulting in increased intensity and frequency of typhoons and flooding in Southeast Asia (Loo et al. 2015), although considerable uncertainty remains about both historical and future trends (Ying et al. 2012; Knutson et al. 2020). Typhoons disrupt ocean ecosystems through water column mixing by strong winds and the freshening effect of heavy rain. They can also cause flooding and surging, which may damage fishing gears and cages and cause long-term damage to coral reefs (Harmelin-Vivien 1994; Latypov and Selin 2012; Safuan et al. 2020). Increased typhoon activity leads to beach erosion, causing damage to property for communities living close to the shore. All of these effects are exacerbated by sea level rise, which poses an additional threat to the large coastal population of this region (Rowley et al. 2007; Neumann et al. 2015).
Given the potential impact of climate change on key marine ecosystems in Southeast Asia and the coastal communities that depend upon them, adequately projecting the effects of climate change on Southeast Asian seas is a crucial step toward informing strategies for poverty alleviation and food security: UN Sustainable Development Goals 1 and 2. Global climate models provide a broad picture of the environmental change that may be experienced in the region (IPCC, 2019); however, they have a coarse resolution and the marine ecosystem models used are typically designed for open ocean conditions. This paper presents regionally-scaled projections of change in the physical environment and lower trophic level ecosystem of Southeast Asian seas, to the end of the 21st century. The projections were created using a model with spatial resolution 0.1° (approximately 11 km) and a well-established biogeochemical/ecosystem model suited to coastal and shelf sea environments: the European Regional Seas Ecosystem Model (ERSEM, Blackford et al. 2004; Butenschon et al. 2016). This modelling system has previously been applied to coastal regions in many parts of the world, including Southeast Asia (Holt et al. 2009; Barange et al. 2014). Eight regions were selected to sample projected change across the region (Fig. 1). Four are coastal sites of key importance for biodiversity and sustainable development: UNESCO Biosphere Reserves at Cu Lao Cham-Hoi An in Vietnam, Palawan in the Philippines and Taka Bonerate-Kepulauan Selayar in Indonesia, and Sabah coastal waters, Malaysia, which includes several marine parks. These are supplemented by four offshore sample areas, boxes A-D in Fig. 1; region-wide snapshots of change projected for the middle and the end of the 21st century under both RCPs are also presented. The next section describes the modelling system and data used; Sect. 3 presents the model outputs, first a comparison to observations for past years and then projected conditions for the rest of the 21st century; Sect. 4 discusses the implications of the projected change for people and ecosystems.
Methods
The projections were created using the Proudman Oceanographic Laboratory Coastal Ocean Modelling System (POLCOMS, Holt and James 2001) coupled to the European Regional Seas Ecosystem Model (ERSEM, Butenschon et al. 2016). Together these simulate the movement of water, energy and dissolved and suspended material through the sea and the cycling of nutrients and carbon through the marine ecosystem. POLCOMS is a three-dimensional model of physical processes, suitable for modelling both deep and shallow water. Forty depth levels were used at each point, regardless of total water depth, distributed more closely in the upper parts of the water column than at depth. ERSEM models the transfer of carbon, nitrogen, phosphorus and silicate through the lower trophic levels of the marine ecosystem. It is one of the more complex models of its type, with four phytoplankton functional types, three zooplankton and bacteria; the carbonate system is included, enabling changes in pH to be modelled. No information about future changes in river water quality was available, so concentrations of nutrients were kept constant at present-day values. Initial conditions of temperature, salinity, oxygen and nutrients were also prescribed. Observations are sparse in some parts of the region and for some variables, so model outputs were extracted to match the date and location of observations. The mean and standard deviation of the resulting datasets over several years were compared; for a free-running climate model a close agreement between specific observations and model outputs is not expected, but there should be a match between observed and modelled average values and spread. Historic trends were compared to those seen in satellite observations and reported in the literature.
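As an illustration of the colocation step just described, the sketch below extracts, for each observation, the model value nearest in space within the same month and then compares means and standard deviations of the paired data. The column names, the synthetic values and the nearest-cell matching rule are assumptions made for the example, not the study's actual workflow.

```python
import numpy as np
import pandas as pd

# obs: one row per observation; model: gridded monthly output flattened to rows
# (both tables use assumed column names and placeholder values)
obs = pd.DataFrame({
    "time": pd.to_datetime(["2005-03-15", "2005-03-20", "2006-07-02"]),
    "lat": [10.2, 7.8, 4.5], "lon": [110.1, 112.9, 118.3],
    "sst": [28.9, 29.4, 29.1],
})
model = pd.DataFrame({
    "time": pd.to_datetime(["2005-03-01", "2005-03-01", "2006-07-01"]),
    "lat": [10.15, 7.85, 4.55], "lon": [110.05, 112.95, 118.25],
    "sst": [29.2, 29.0, 29.3],
})

def colocate(obs_row, model, max_km=15.0):
    """Pick the model cell closest in space within the same month."""
    same_month = model[model.time.dt.to_period("M")
                       == obs_row.time.to_period("M")]
    if same_month.empty:
        return np.nan
    d = np.hypot(same_month.lat - obs_row.lat, same_month.lon - obs_row.lon)
    nearest = same_month.loc[d.idxmin()]
    return nearest.sst if d.min() * 111.0 <= max_km else np.nan

obs["sst_model"] = obs.apply(lambda r: colocate(r, model), axis=1)
paired = obs.dropna(subset=["sst_model"])

# Compare distributions rather than point-by-point values, as in the text
print("observed mean/sd:", paired.sst.mean(), paired.sst.std())
print("modelled mean/sd:", paired.sst_model.mean(), paired.sst_model.std())
```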
Comparison of model outputs to observations
The mean and seasonal variation in sea surface temperature is captured well by the model (Fig. 2), though modelled April-June temperatures are higher than observed in the west and July-September temperatures are lower in the north. The model matches average values of sea surface temperature from satellite-based observations to within 0.5°C in all regions and the standard deviation is also similar (online resource, Table S1), though the model under-represents the spread in Box D, Palawan and Sabah and over-represents it in Cu Lao Cham. The agreement is less close when temperature at all depths in the water column is compared, with the model tending to underestimate sub-surface temperatures, but modelled and observed mean temperatures agree to within 2°C for most regions and the standard deviation to within 1.5°C, with closer agreement for coastal areas. The model underestimates salinity in the northern part of the region, but average values are in good agreement elsewhere (online resource, Table S1). Modelled and observed standard deviations of salinity agree to within 0.1 psu except in Box A, Box B and Cu Lao Cham, which may reflect the use of climatology for river discharge rather than annually varying values: Box B, in particular, is affected by the outflow of the Mekong River.
Model-observation agreement is less close for biogeochemical variables, but the model reproduces many features of the biogeochemical environment. Mean surface chlorophyll concentration is overestimated compared to satellite values, particularly in the north, but shows broadly the same spatial and temporal patterns (Fig. 3 and Table S2, online resource). Nitrate, phosphate and oxygen observations are sparse across the region, but are available in boxes A-D and in some of the coastal sites (Fig. 4 and Table S2, online resource). Modelled nitrate values are higher than observed in the north, but the standard deviation is in agreement with observation, as is the spatial distribution of surface values. Phosphate is overestimated in the north and underestimated in the south; the spread and spatial distribution agree with observed values. Oxygen is overestimated for the region as a whole and in most of the sample areas; the spread of values is less than observed (Table S2, online resource).
The model outputs and satellite observations both show a rising trend in sea surface temperature for the period 1985-2019 (Table 1). The observed trend is smaller than modelled, though there is closer agreement for the trend in the period 2000-2019 (Fig. 7a).
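The trend comparison can be reproduced with a simple least-squares fit to annual-mean sea surface temperature; the series below are synthetic placeholders standing in for the model and satellite values summarised in Table 1.

```python
import numpy as np

years = np.arange(1985, 2020)
# Placeholder annual-mean SST series (°C); the real values come from Table 1
rng = np.random.default_rng(0)
sst_model = 28.0 + 0.020 * (years - 1985) + rng.normal(0, 0.1, years.size)
sst_obs   = 28.0 + 0.014 * (years - 1985) + rng.normal(0, 0.1, years.size)

def trend_per_decade(y, t):
    slope = np.polyfit(t, y, 1)[0]      # °C per year from a linear fit
    return 10.0 * slope

print("modelled trend : %.2f °C/decade" % trend_per_decade(sst_model, years))
print("observed trend : %.2f °C/decade" % trend_per_decade(sst_obs, years))
print("2000-2019 only : %.2f vs %.2f °C/decade" % (
    trend_per_decade(sst_model[years >= 2000], years[years >= 2000]),
    trend_per_decade(sst_obs[years >= 2000], years[years >= 2000])))
```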
Projected future conditions
The projected 21st century change in some key variables is summarised in Fig. 5. This compares monthly mean conditions for 20 years at the start, middle and end of the century (2000-2019, 2040-2059 and 2079-2098) under the moderate RCP4.5 scenario (atmospheric greenhouse gas concentrations rising until mid-century and then stabilising) and the more extreme RCP8.5 (greenhouse gases continuing to rise throughout the century). The values show the mean difference between start and mid- or end-century, shown in grey where the difference is not statistically significant (p > 0.05, t-test, n = 240; for details of the method see Kay and Butenschön, 2018). The colours show the size of the change relative to the present-day variability, defined as the difference between the minimum and maximum monthly values for 2000-2019. Statistically significant change is seen in temperature, surface salinity and bottom-level oxygen for both carbon scenarios: temperatures rising and salinity and oxygen falling, with larger changes for RCP8.5 than for RCP4.5. There is significant change in primary production and pH for RCP8.5 but not for RCP4.5. The pattern for phytoplankton and zooplankton biomass is less consistent: there is a broad trend to decreasing biomass under RCP4.5 and increasing under RCP8.5, but the changes are only significant in a few places and well within current variability everywhere. Compared to present-day variability, the bottom-level changes are larger than those for the sea surface, because bottom-level conditions are more stable and the present-day variability is low: these projections show the effects of climate change reaching all depths of the sea. In the regions around Cu Lao Cham, Sabah and Palawan the water is relatively shallow, so bottom-level conditions are more variable than for the other areas and the relative change is smaller.
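A sketch of the significance test and scaling described above (a two-sample t-test on the 240 monthly means of each 20-year period, with the mean change expressed relative to the 2000-2019 monthly range) is given below for a single grid cell; the numbers are synthetic and only the procedure is intended to match the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# 20 years x 12 months of monthly means at one grid cell (placeholder values)
start_century = 28.5 + rng.normal(0, 0.6, 240)           # 2000-2019
end_century   = 30.3 + rng.normal(0, 0.6, 240)           # 2079-2098

diff = end_century.mean() - start_century.mean()
t_stat, p_value = stats.ttest_ind(end_century, start_century)

# Present-day variability: range of monthly values over 2000-2019
variability = start_century.max() - start_century.min()
relative_change = diff / variability

significant = p_value <= 0.05
print(f"mean change {diff:+.2f} °C, p = {p_value:.3g}, "
      f"{'significant' if significant else 'not significant'}; "
      f"{relative_change:.1f}x the present-day monthly range")
```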
The rest of this section discusses the changes in sea surface temperature, primary production and pH in more detail. Sea surface temperatures are projected to rise by 1-1.5°C by mid-century and 2-3°C by the end of the century under RCP8.5, when compared to 2000-2019 (Figs. 6 and 7). Smaller increases are projected under RCP4.5: 0.5-1°C and 1-1.5°C respectively. These changes are in line with those projected by a range of CMIP5 global models, and they imply that temperatures that were average for the region in 2000-2019 will occur only in the very far north under RCP8.5 (see the contours in Fig. 6). Change under RCP4.5 is substantially smaller, with end-century conditions similar to those at mid-century under RCP8.5. The contour marking the present-day median temperature moves further poleward under RCP8.5 than under RCP4.5.
From the sampled regions, the largest temperature changes are seen in Box C, Palawan, Sabah and Taka Bonerate-Kepulauan Selayar (Fig. 7b). Palawan shows a stronger seasonal change than other areas, with temperatures rising more in April-September than in October-March. The southernmost areas, Box D and Taka Bonerate-Kepulauan Selayar, also show a seasonal change, with smaller temperature rises in May-September than the rest of the year.
Projected changes in net primary production are smaller than for sea surface temperature and in many regions are only noticeable for RCP8.5 at the end of the century (Figs. 8 and 9). Some changes in seasonality are projected: for example, Box A shows smaller increases in production in November to February than the rest of the year; Taka Bonerate-Kepulauan Selayar shows larger increases in May-September; Sabah shows slightly decreased production in November to March, except for RCP8.5 end-century when these months show the largest increases (Fig. 9b). In many cases the changes are small and are difficult to distinguish from the general variability.
Modelled changes in surface pH follow the trends in the applied atmospheric carbon dioxide levels, with pH falling until mid-century and stabilising under RCP4.5 but continuing to fall throughout the century under RCP8.5 (Fig. 10). The size of the change is similar across the sample regions, with the exception of Taka Bonerate-Kepulauan Selayar, which has static or even rising values for pH. Changes in other southern areas, Box C and Box D, are also relatively small.
Discussion
The agreement between modelled and observed values and trends for the period 1980-2018 is good enough to support use of the future projections. Agreement is better for physical variables than for biogeochemical variables, as is common in this type of modelling, but spatial and temporal trends are captured in all cases.
The projections show a sea that is, on average, warming by 1.1-2.9°C through the 21st century, with surface pH falling by up to 0.02 and dissolved oxygen decreasing by 5 to 13 mmol m-3. The changes reach all parts of the water column, and the bottom levels in particular are projected to experience conditions well outside the range seen at the start of the century. There are considerable local variations (Fig. 5), emphasising the value of using a regional rather than a global model.
Warming seas mean that some parts of the region will experience temperatures not seen in the region at present (Fig. 6). In response, some species populations may be able to move with the present-day temperature contours, seeking to maintain optimum conditions for their growth, reproduction and survival (Pinsky et al. 2013; Poloczanska et al. 2013). However, this adaptation strategy has limited success near the warmer edges of species distributions, and does not apply to all types of species. This is the case for corals, whose algal symbionts are tightly dependent on light near the sea surface, and whose range expansion to deeper water is therefore limited. There are clear consequences for fishing, in a region already experiencing significant challenges due to overcapacity and declining catches, exacerbated by poorly regulated fisheries (Pomeroy 2012; Teh et al. 2017). As populations redistribute in response to changes in habitat conditions, fishers may need to travel further to find their target species, shift to catching different species, perhaps requiring an investment in new gear, or in the worst cases be faced with declining catches and no incoming new species. This is especially a concern in tropical regions, such as Southeast Asia, where declining local diversity resulting from poleward retreat of distribution leading edges is not necessarily compensated by new species arrivals. Therefore, adapting to climate change will be a significant challenge, especially for the many small-scale fishers in the region, and good management will be essential for protecting livelihoods (Lam et al. 2020).
Increased temperature and reduced salinity, as seen in these projections, may result in an increased incidence of harmful algal blooms (HABs). The consequences of such HAB events include reduced water quality and toxin build-up in fish and shellfish, with potential subsequent impacts on human health. Climate-change-driven changes in the timing of seasons are also likely to affect aquaculture production cycles, where activities are tightly timed to sharp climatic variations between monsoon/intermonsoon periods. Such changes can be seen in the projected changes for coastal regions, notably Palawan and Taka Bonerate-Kepulauan Selayar (Fig. 7). A reduction in the predictability of seasonal cycles often leads to reduced harvests (Handisyde et al. 2006; Hamdan et al. 2015).
All the analysed coastal regions had projected end-century temperature increases close to 1.5°C under RCP4.5, enough to cause significant thermal stress leading to coral bleaching, while the 2.7°C increase seen under RCP8.5 would cause widespread loss of coral (Lough et al. 2018). The smallest temperature increases were seen at Cu Lao Cham, but at 1.5°C (RCP4.5) or 2.5°C (RCP8.5) these are still too high to prevent coral damage. Ocean acidification and the overall alteration of the ocean carbonate system resulting from rising atmospheric CO2 levels provide further stress to coral reefs by affecting the ability of reef organisms to maintain sufficient calcification rates in the face of increased dissolution rates and, in extreme cases, by preventing the deposition of carbonate minerals needed for skeleton construction when saturation levels are insufficient (Eyre et al. 2018). Our projections show surface pH decreasing, though not beyond the range currently experienced; however, any acidification will act as an additional stressor on coral reefs and make it more difficult for them to recover from bleaching events. The conditions under which coral reefs are able to recover can be complex (Graham et al. 2015) and this has not been considered in the current study.
The effect of climate change on typhoons is a key concern for coastal communities in Southeast Asia; however, changes in the frequency and intensity of storms are among the least certain climate features reported by the IPCC (IPCC 2013). Regional models can simulate stronger storms than global models, because they have higher resolution, but the uncertainty in the projections remains high. We investigated the surface wind speeds in the regional atmospheric model HadGEM2-ES-RCA4, which was used as input to the marine model reported here, for any changes in the strength, frequency or timing of storms. There was some indication of an increase in the number of days per year with strong winds, but not in maximum wind strength or in the pattern across the year. By contrast, Herrmann et al. (2020), based on a much more thorough analysis of outputs from a similar regional model (CNRM-CM5_RegCM4), found a decrease in projected wind speeds in most of the region and all seasons, except for some increase in average speeds for December to February in the north of the region. The number of tropical cyclones also decreased in all seasons. There is a clear need for more investigation of changes in storminess in this region.
This study, using a single regional-scale model driven by a single global climate model, provides useful, novel information about the potential scale of climate change effects that may occur in the marine environment at different locations across Southeast Asia. However, it does not provide any indication of the certainty of these changes, which is of key importance for decision-making. Further projections are needed, using a range of models with different climate sensitivities and alternative global emissions scenarios, to estimate the uncertainty and give confidence levels in the change projected for different variables and different locations. In a region of exceptionally high marine biodiversity, and where the sea supports the livelihoods of millions of people, such projections are a key tool for communities planning how they will adapt to the challenge of climate change.
Supplementary Files
Supplementary information associated with this preprint is provided in a separate file: SupplementaryInformationblinded.pdf.
A Kinect-Based Physiotherapy and Assessment Platform for Parkinson's Disease Patients
We report on a Kinect-based, augmented reality, real-time physiotherapy platform tailored to Parkinson's disease (PD) patients. The platform employs a Kinect sensor to extract real-time 3D skeletal data (joint information) from a patient facing the sensor (at 30 frames per second). In addition, a small collection of exercises practiced in traditional physiotherapy for PD patients has been implemented in the Unity 3D game engine. Each exercise employs linear or circular movement patterns and poses very light-weight processing demands on real-time computations. During an exercise, trainer instruction demonstrates correct execution and Kinect-provided 3D joint data are fed to the game engine and compared to exercise-specific control routines to assess proper posture and body control in real time. When an exercise is complete, performance metrics appropriate for that exercise are computed and displayed on screen to enable the attending physiotherapist to fine-tune the exercise to the abilities/needs of an individual patient as well as to provide performance feedback to the patient. The platform can operate in a physiotherapist's office and, following appropriate validation, in a home environment. Finally, exercises can be parameterized meaningfully, depending on the intended purpose (motor assessment versus plain exercise at home).
Introduction
Over six million people worldwide [1] suffer from Parkinson's disease (PD), a neurodegenerative condition that results from the damage of dopamine-producing neurons in an area of the brain known as the substantia nigra. Dopamine acts as a mediator for transferring electrical signals (messages) and helps humans retain smooth, controlled, and purposeful movement. When a large percentage of those dopamine-producing neurons are damaged, the motor symptoms of PD appear. In addition, meta-analysis of worldwide data [2] reveals and quantifies the rising prevalence of PD with age. At disease onset and in early stages, PD affects mostly motor function, while in more advanced stages one also suffers from cognitive, behavioral, and mental-related symptoms [3]. The four fundamental motor symptoms of the disease (tremor, rigidity, akinesia (or bradykinesia), and postural instability) are commonly referred to by the acronym TRAP [4].
These motor symptoms, which can be expressed in different degrees, can encumber and complicate daily activities and reduce the quality of life, especially as the disease progresses [5,6]. Finally, nonmotor symptoms of the disease include cognitive impairment, sleep disturbances, depression, anxiety, psychosis, hallucinations, pain, and fatigue.
A cure for PD has not yet been discovered. However, six categories of drugs are commonly used to control PD-related symptoms [7] and maintain body functionality at reasonable levels throughout the lifetime of the patient. The active ingredients include levodopa, dopamine agonists, MAO-B inhibitors, COMT inhibitors, anticholinergic agents, and amantadine. Significant variability of symptoms and their severity among patients during the course of the disease makes standard medication paths difficult to achieve [8]. Although levodopa is very effective at improving PD-related motor symptoms, large doses over extended periods may give rise to dyskinesia or involuntary abnormal movements, both of which further aggravate patients' walking ability and motor function. Accordingly, recent clinical practice favours agonists and postpones levodopa for later stages, when motor symptoms are not satisfactorily controlled [9]. Later/more advanced stages of the disease may require combinations of levodopa, dopamine agonists, COMT inhibitors, and MAO-B inhibitors to more effectively control symptoms [8].
In parallel to medical treatment, physiotherapy has proven highly effective in controlling and delaying PD-related symptoms [10-13] and is openly supported by a number of Parkinson clinical facilities and associations. For example, the Parkinson Society of Canada [14] provides detailed online instructions on stretching and other physical exercises. Randomized controlled trials, such as [15], support that physical exercise such as stretching, aerobics, unweighted/weighted treadmill, and strength training improves motor functionality (leg stretching, muscle strength, balance, and walking) and quality of life. The "training BIG" strategy for PD rehabilitation [16], in particular, has shown especially promising results. Training BIG advocates exercises that deploy the entire body both in seated and in standing posture (such as reaching and twisting to each side or stepping and reaching forward) and that are to be performed at maximum range of motion (maximum amplitude). A recent review [17] of relevant technology-aided rehabilitation platforms gleans a number of design principles that must characterize physiotherapy solutions for the PD population.
In this work we report on a Kinect-based, augmented reality, real-time physiotherapy platform tailored to PD patients. It is meant to augment and not replace physiotherapy sessions, and it allows a patient to exercise in front of a large TV monitor (instead of in front of a mirror) to control posture, but with added useful digital artefacts. The platform can operate in the exercise room of a physiotherapist and individual exercises can be parametrized to the abilities or physiotherapy needs of an individual patient. The ability for parametrization is very important for progressing diseases like PD, as it allows different exercises to be tailored to patients not only in the first few establishing physiotherapy sessions, but also as medium-term gains from exercise or medication are realized or even as the disease progresses. Currently a small collection of exercises based on those commonly practiced in traditional physiotherapy for PD patients has been implemented in the platform. We plan to expand the existing exercise compendium to allow physiotherapists more freedom in shaping customized exercise schedules for individual patients.
The choice to employ the Kinect sensor is of fundamental importance in the design and implementation of the platform, because it offers a unique opportunity to create a "closed-loop" system which facilitates patient monitoring during execution of an exercise and provides real-time visual feedback, such as on-screen guiding artefacts and repetition counters, to alert the patient to his/her performance. Perhaps more important to clinical motor assessment is the ability of the system to quantify patient mobility and dexterity on a per-exercise basis using exercise-specific performance metrics. Admittedly, such quantitative "kinesiological imprints" can be affected by various factors, such as time of day, tiredness of the patient, effectiveness of administered drugs, and on/off times. However, more meaningful and statistically sound results over a period of a few days of using the platform can be collected by controlling those variables that can be controlled: for example, exercising early in the day and at the same time after taking medication. As a result, customized, daily exercise schedules afford the possibility to collect a time series of performance data that can be usefully correlated with, for example, detailed medication history records and disease progress.
Indeed, as PD progresses over a number of years (typically around 15 after initial diagnosis), patients may show inconsistent response to dopaminergic medications, leading to shorter periods of adequately controlled symptoms (on times), more extended periods where the medication is not working sufficiently well (off times), and possibly erratic "wearing-off" transitions from on to off times. By this time, many patients exhibit more severe motor symptoms and their quality of life is seriously affected. Following an assessment of their status (motor symptoms, response to medication, on and off times, and wearing-off periods), the attending neurologist may indicate the alternative path of undergoing Deep Brain Stimulation (DBS) surgery [18]. DBS intervention may also be indicated for younger PD patients with more active lifestyles and work schedules if they suffer from drug-resistant tremor [19]. This minimally invasive and reversible procedure entails (a) preoperative imaging to determine the best access path to the intended target, (b) surgically implanting and deploying a multielectrode probe in strategically selected areas of the brain such as the subthalamic nucleus (STN), and (c) connecting the probe via an extension wire to a small battery-powered neurostimulator (which is later implanted at a comfortable place under the skin). The neurostimulator regulates the signals sent to the leads via a programmable computer chip and can be parametrized and tested to effectively block brain signals that cause PD symptoms. The system remains with the patient and requires a battery change every few years. Postoperative assessment includes quantifying the response of the patient to different stimulation patterns, which varies among patients due to, for example, neurobiological state and actual placement of the probe leads. By providing quantitative performance metrics on the pre-op and post-op motor abilities of a patient, our physiotherapy platform could quantify the effectiveness of each programmed DBS stimulation pattern on patient mobility. It would then be possible to select those stimulation patterns that prove most effective for the given patient.
Platform Specification.
The physiotherapy platform combines a number of key hardware and software technologies to provide the desired functionalities. Specifically, a Microsoft Kinect v1 sensor supplies real-time 2D (RGB camera) and depth (IR depth camera) streams. These streams are processed by MS Kinect SDK v1.8 functions to (a) identify a patient in front of the sensor, (b) extract that person's skeleton as a hierarchy of nodes with 3D location data, and (c) update that skeleton in every frame to track the patient. The full skeletal model appears in Figure 1. For every frame where a player is visible and tracked, joint information includes joint's position in 3D space as well as a tag with two possible values: "tracked" for clearly visible joints or "inferred" for joints that are not clearly visible (e.g., occluded by another body part) but their position can be calculated from other (tracked) joints.
The logic for each exercise has been coded in the C# programming language in the Unity 3D v4.6 game engine (which works well with Kinect v1 and the MS Kinect SDK v1.8). However, while the Kinect SDK libraries are based on Microsoft's .NET 4 framework, Unity's Mono framework is based on an older .NET framework version, and a number of functions cannot be called in the same manner in the two frameworks. As a result, the Kinect SDK library cannot be directly accessed from within the Unity MonoDevelop IDE. As expected, a number of custom middleware solutions have appeared to alleviate that problem, that is, to expose Kinect SDK functionality inside MonoDevelop. We opted to adopt Rumen Filkov's robust KinectWrapper, which was easily incorporated in our Unity project via Unity's Asset Store. The KinectWrapper is essentially a customized C# script that exposes Kinect SDK (Kinect10.dll) functionality inside MonoDevelop. An additional C# script called KinectManager includes functions to read data from the Kinect sensor to build a skeleton.
In addition, synchronized RGB (camera) and depth map stream data have been combined, as shown, for example, in Figure 3(C) for the first exercise, to create an experience very similar to working out in front of a mirror (a large-screen TV is much more effective than a computer monitor in that respect). The overlaid AR artefacts are a straightforward result obtained by projecting the selected joints of interest from 3D onto the 2D vertical plane corresponding to the image of the RGB camera.
Implemented Exercises Tailored to Parkinson's Disease.
Five representative exercises have been adopted from those commonly found in physiotherapy exercise curricula for PD patients, some of which can be executed from a standing position and others from a seated position to show the capabilities of the platform. The exercise menu appears in Figure 2 and includes the following five exercises: (1) circles for extended arms, (2) squats, (3) elbow lifts, (4) broomstick circles, and (5) leg extensions/kicks. The following requirements for the selection of these exercises were used.
(a) It must be possible for PD patients with mild to moderate symptoms (stages 1 through 3 on the Hoehn and Yahr [20] scale, i.e., without severe postural instability/motor impairment) to perform the exercises reasonably well. (b) For the entire duration of an exercise, the patient's posture must stay within the capabilities of the Kinect sensor. Practically, this means that the Kinect sensor (in reality its supporting SDK) must at all times be able to track the patient's body and successfully extract that patient's skeletal model for the full range of movements required for the exercise (e.g., a limb should not occlude another limb).
Each exercise employs either linear or circular movement patterns that pose very low processing demands on real-time computations. In addition, Kinect-provided 3D joint data are fed in real time to the game engine and compared against control routines relevant to the exercise being executed, to assess proper posture and body control for the entire duration of the current repetition. Visual feedback is provided via AR artefacts, which show how the skeleton is tracking the patient, and via repetition counters. When an exercise is complete, performance metrics appropriate for that exercise are computed and displayed on screen (a) to enable the attending physiotherapist to fine-tune the exercise to the abilities/needs of an individual patient and (b) to provide performance feedback to the patient. The exercises implemented in the current version of the platform and the performance metrics that are produced (and can be stored to establish a sequence of historical data for offline analysis) are discussed directly below.
Exercise 1. Facing the Kinect sensor, the patient assumes a relaxed standing stance with feet spaced apart at about shoulder width and with both arms extended laterally and in parallel to the transverse axis, as shown in the first snapshot of Figure 3(B). Then, from that stance, he/she has to complete the required number of cyclic movements of the wrists in which both extended arms move in unison, as shown in the sequence of snapshots in Figure 3(B). The default number of required repetitions for each exercise is 10. During each such cyclic movement, the arms must remain extended laterally while the wrists describe circles on imaginary planes that are parallel to the sagittal plane. Game code relevant to this exercise checks for correct execution as follows: (i) Ideally, each arm must remain fully extended laterally for the duration of the exercise. Deviations from a fully extended arm pattern are calculated from the 3D coordinates of the (detected) shoulder, elbow, and wrist joints for that arm, so that the shoulder-elbow-wrist opening angle is computed in real time.
(ii) The motion pattern of each wrist projected to the sagittal plane is checked for circularity, meaning it must follow a superior-anterior-inferior-posterior-superior sequence (or, alternatively, a superior-posterior-inferior-anterior-and-back-to-superior sequence). This check is meant to count only circular patterns and not linear patterns, such as the wrist moving vertically, horizontally, or even along a diagonal.
A repetition is considered successful if it passes both tests described directly above, in which case an appropriate on-screen counter (one for the left arm and one for the right arm) is incremented by one, as shown in Figure 3(E). On the other hand, a repetition (for the left or right arm) is considered failed if the corresponding wrist describes at least half a circle but does not complete that circle, in which event the corresponding "failure" counter is incremented. When the success counter for an arm reaches the required number of repetitions, the success and failure counters corresponding to that arm stop incrementing. The exercise is considered complete when the success counters for both arms have reached the required number of repetitions, at which point the following performance metrics are shown on screen, separately for each arm: (a) the total number of failed repetitions and (b) a circularity metric equal to the ratio of the average superior-to-inferior distance to the average anterior-to-posterior distance. Clearly, for a perfectly executed exercise, both failure counts are 0 and circularity = 1.
These metrics lead to direct interpretation (a key design requirement for this collection of exercises). For example, large departures of both failure counts from zero may mean that the patient has not understood the exercise or that the exercise is too hard for him/her. Alternatively, consistently disparate values of the left and right failure counts (e.g., one close to zero while the other is significantly higher) may reveal a measurable differential in mobility control between the left and right sides. Finally, circularity metric values that deviate significantly from 1 show that the patient favors vertical or horizontal elliptical patterns for that arm. It is then up to the physiotherapist to parameterize the exercise depending on the priorities set forth for a given patient as well as the capabilities of that patient. To quantitatively assess the motor function of a patient in the context of the present exercise (but also for any other exercise in the current compendium), one would explore the parameter space to "push" the patient near the limits of his/her abilities and obtain more valid results over a period of sessions. That would also be the suggested approach to establish as accurate a baseline as possible of the patient's motor abilities in the period before Deep Brain Stimulation (DBS) surgery, but also in the following months to assess the effectiveness of each stimulation pattern. On the other hand, parameterization of the platform for home-based use should probably aim at encouraging patients to exercise more by posing less stringent demands than in the above situations, lest they become discouraged and cease to exercise.
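As an illustration of the two per-repetition checks and of the circularity metric described above, a minimal Unity-style C# sketch is given below. The class and method names are our own, not the platform's code, and joint positions are assumed to be already available as Vector3 values in metres.

using System.Collections.Generic;
using UnityEngine;

// Illustrative helpers for Exercise 1 (not the platform's actual implementation).
public static class Exercise1Checks
{
    // Check (i): opening angle at the elbow, in degrees.
    // Values close to 180 correspond to a fully extended arm.
    public static float ArmOpeningAngle(Vector3 shoulder, Vector3 elbow, Vector3 wrist)
    {
        return Vector3.Angle(shoulder - elbow, wrist - elbow);
    }

    // Circularity of one repetition: vertical (superior-inferior) extent of the recorded
    // wrist path divided by its depth (anterior-posterior) extent; the value is 1 for a
    // circle traced in a plane parallel to the sagittal plane.
    public static float Circularity(List<Vector3> wristPath)
    {
        float minY = float.MaxValue, maxY = float.MinValue;
        float minZ = float.MaxValue, maxZ = float.MinValue;
        for (int i = 0; i < wristPath.Count; i++)
        {
            Vector3 p = wristPath[i];
            minY = Mathf.Min(minY, p.y);
            maxY = Mathf.Max(maxY, p.y);
            minZ = Mathf.Min(minZ, p.z);
            maxZ = Mathf.Max(maxZ, p.z);
        }
        float depthExtent = maxZ - minZ;
        if (depthExtent < 1e-4f)
        {
            return 0f;   // degenerate (purely vertical or horizontal) pattern
        }
        return (maxY - minY) / depthExtent;
    }
}

In the platform, the circularity values of the successful repetitions of each arm would then be averaged to yield the reported per-arm metric.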
Exercise 2.
Facing the Kinect sensor and in a relaxed standing stance (as shown in Figure 4), the patient extends both arms fully to the front (i.e., parallel to the sagittal axis). Starting from that stance, he/she has to complete the required number of squats. The depth of a squat is computed as the maximal distance travelled vertically by the mid-hip joint. Game code relevant to this exercise checks for correct execution by counting only squats that are sufficiently deep, that is, squats whose depth exceeds a minimum value set as an exercise-specific parameter to define the difficulty of the exercise. The default value of this minimum depth is one sixth of the sum of the left and right thigh lengths (see Figure 1 for the definitions of the thigh lengths), which corresponds to an average level of difficulty. For each successful squat an appropriate on-screen counter is incremented by one, while each failed squat increments a "failed rep" counter. The exercise completes when the success counter reaches the required number of repetitions, at which point the following two performance metrics are computed and shown on screen: (a) the number of failed squats and (b) the average squat depth as a percentage of the minimum required depth.
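The squat-depth test can be sketched in the same illustrative style (class and member names are our own assumptions, not the platform's code); the resulting per-repetition depths feed directly into the two metrics discussed next.

using UnityEngine;

// Illustrative squat-depth tracker for Exercise 2 (not the platform's actual code).
public class SquatDepthCheck
{
    private float startHipHeight;
    private float lowestHipHeight;

    // Call at the start of a repetition with the current mid-hip position.
    public void BeginRepetition(Vector3 hipCenter)
    {
        startHipHeight = hipCenter.y;
        lowestHipHeight = hipCenter.y;
    }

    // Call once per frame while the repetition is in progress.
    public void OnFrame(Vector3 hipCenter)
    {
        lowestHipHeight = Mathf.Min(lowestHipHeight, hipCenter.y);
    }

    // Depth reached in this repetition: vertical travel of the mid-hip joint.
    public float Depth()
    {
        return startHipHeight - lowestHipHeight;
    }

    // A squat is successful when its depth exceeds the minimum required depth,
    // whose platform default is one sixth of the sum of the two thigh lengths.
    public bool IsSuccessful(float minDepth)
    {
        return Depth() > minDepth;
    }
}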
These metrics can be used by the physiotherapist to fine-tune the exercise (number of repetitions and squat depth) to the individual patient. For example, a performance result of zero failed squats and an average squat depth close to or even greater than 100% (overachieving) implies that the exercise is too easy for that patient and could possibly be made harder by increasing the minimum required depth. Alternatively, a number of failed squats that is large, or comparable to the number of required repetitions, combined with an average squat depth very close to 100% may imply that the patient has trouble performing squats this deep and that the exercise must be made easier by decreasing the minimum required depth.
Figure 3: (a) Exercise 1 is an arm stretching/strengthening exercise executed from a standing position. The included screenshot is a snapshot of the screen in its most informative mode. Several regions of information are identified as follows. Region "A" is the number id of the exercise being executed. Region "B" shows four key snapshots of a trainer performing the exercise. Region "C" is an insert of Kinect's RGB camera view into the platform, depicting an actor performing the exercise in real time in front of the sensor. The red dots superimposed on the actor are skeleton joints whose 3D coordinates are being calculated in real time and in the current frame are projected on the 2D frontal vertical plane as guiding digital artefacts. Region "D" lists three groups of variables used to minimize the effect of random joint positional errors arising from the hardware and to detect macroscopic movements accurately. Region "E" shows the number of detected repetitions for each arm (in the screenshot, the actor has completed seven repetitions for each arm and is working on the next repetition). Regions "F" and "G" show debugging information that is useful to the developers to effectively fine-tune the parameters in region "D". Finally, the menu button in region "H" takes us back to the main menu shown in Figure 2. It is worth mentioning that the regions visible to the patient in normal (nondebugging) operation are A, B, C, E, and H. (b) A snapshot of an actor performing Exercise 1 in front of a 25-inch monitor at a distance of approximately 2.5 m from the Kinect sensor (placed to the left of the monitor). In actual deployment, a much larger 55-58-inch monitor would be preferred.
Exercise 3. Facing the Kinect sensor, the patient assumes a standing stance with both arms relaxed and to the side, as shown in the first trainer snapshot in Figure 5. From that stance, he/she has to complete repetitions of slowly and purposefully lifting both upper arms in unison to a horizontal position (so that the upper arms end up almost parallel to the transverse axis at approximately shoulder height), followed by a controlled reverse movement downwards to the relaxed state. During execution of this exercise it is important to maintain (a) proper control (e.g., not to let the forearms drop under their own weight) and (b) proper posture by keeping the forearms as vertical as possible during the entire range of motion. Game code relevant to this exercise checks for correct execution in the following sense. (i) For each repetition, the forearms must be kept as close to vertical as possible during the entire range of motion; this posture requirement can be somewhat relaxed to allow patients that cannot attain and/or retain the required posture to still practice.
(ii) For each repetition, the vertical distances that must be travelled by the left and right elbows, respectively, must be close to the length of the corresponding forearm (the arm-length variables defined in Figure 1) in order to ensure a full range of motion. The ratios of the required elbow travel to the arm length define the difficulty of the exercise. For PD patients, ratio values close to 1 may be hard to attain and tiring to maintain over many repetitions, whereas ratio values close to, say, 0.7 pose more realistic expectations. In any case, the difficulty of the exercise can be set separately for each arm by the attending physiotherapist on a per-patient basis.
A repetition is considered successful if it passes both tests described directly above, in which case an appropriate on-screen counter (one for the left arm and one for the right arm) is incremented by one. Otherwise the repetition (for the left or right arm) is considered failed and the corresponding "failure" counter is incremented accordingly. When the success counter for an arm reaches the required number of repetitions, the success and failure counters corresponding to that arm stop incrementing. The exercise is considered complete when the success counters for both arms have reached the required number of repetitions, at which point the following performance metrics are shown on screen, separately for each arm: (a) the number of failed repetitions and (b) the average maximum elbow height over all successful repetitions, expressed as a percentage of the upper arm length. A perfectly executed exercise should yield zero failed repetitions and an average elbow height that is close to or exceeds the value corresponding to the difficulty level set by the attending physiotherapist. These metrics can be used as follows. Large departures of both failure counts from zero may mean that the exercise is too hard for that patient, in which case the attending physiotherapist may choose to ease the difficulty level and/or decrease the number of required repetitions to complete the exercise. Alternatively, consistently disparate values of the left and right failure counts (e.g., one close to zero while the other is significantly higher) reveal a measurable differential in mobility control between the left and right sides.
Exercise 4. Facing the Kinect sensor, the seated patient holds a light stick such as a broomstick (which helps coordinate the movements of the left and right arms) with both hands at the initial upright stance, where the wrist joints are located slightly higher than the shoulder joints (as shown in the first trainer snapshot in Figure 6). Then, from that stance, he/she has to complete the required number of cyclic movements of the wrists in which both arms move in unison and in phase with each other. During each such cyclic movement, the wrists describe circles on imaginary planes that are parallel to the sagittal plane. Game code relevant to this exercise checks for correct execution, in the sense that both wrists must follow a superior-anterior-inferior-posterior-superior sequence or, alternatively, both wrists must follow a superior-posterior-inferior-anterior-superior sequence. This check helps avoid linear patterns, such as vertical or horizontal wrist joint movements. The counters and metrics for this exercise are identical to those in Exercise 1.
Figure 7: Exercise 5. A leg stretching/strengthening exercise (leg extensions or "kicks") executed from a seated position. The actor shown on the right has completed 7 repetitions for the right leg and his 8th repetition for the left leg. All other on-screen information is documented in the caption of Figure 3.
Exercise 5. In this final exercise, the patient is seated facing the Kinect sensor, with feet securely planted on the ground and holding the seat of the chair with both palms for additional support (as shown in the first trainer snapshot in Figure 7). The exercise calls for the completion of the required number of controlled full extensions for each leg, in any order. Game code relevant to this exercise checks for correct execution, so that a repetition is considered successful if an ankle joint is lifted to a height above a threshold value whose convenient default is 0.75 times the average of the left and right shin lengths (the shin lengths are defined in Figure 1), which corresponds to an average level of difficulty. For each successful extension an appropriate on-screen counter (one for the left leg and one for the right leg) is incremented by one, as shown in Figure 7. On the other hand, each failed repetition (for the left or right leg) increments the corresponding "failure" counter. When the success counter for a leg reaches the required number of repetitions, the success and failure counters corresponding to that leg stop incrementing. Finally, the exercise is considered complete when the success counters for both legs have reached the required number of repetitions, at which point the following performance metrics are shown on screen, separately for each leg: (a) the total number of failed repetitions and (b) the average maximum ankle height over all successful repetitions for the leg in question, expressed as a percentage of the shin length. A perfectly executed exercise should yield zero failed repetitions and an average maximum ankle height that is close to or exceeds the value corresponding to the difficulty level set by the attending physiotherapist. Parameterization of this exercise is the same as in Exercise 1.
Discussion and Future Work
Patients with neurological disorders are known to benefit from physical practice, which improves mobility and functional independence through increased muscular strength, flexibility, and balance control. The present work reports on a Kinect-based, augmented reality, real-time assessment physiotherapy platform tailored to Parkinson's disease (PD) patients with mild to moderate symptoms (stages 1 through 3 in the Hoehn and Yahr [20] scale, i.e., without severe postural instability and motor impairment).
Main platform characteristics are as follows.
(a) The platform offers a persuasive augmented reality experience (by using a large TV monitor instead of a computer screen, patients are afforded the experience of working out in front of a mirror) and one that is enriched with important digital artefacts (relevant skeleton joints are overlaid on the actual image of the patient for the duration of an exercise) and feedback information (e.g., repetition counters).
(b) The platform is adaptable to the abilities/exercise needs of an individual patient: each exercise is parametrized to a difficulty that can be fine-tuned and tailored to each patient separately. Adaptability is important in the first few establishing physiotherapy sessions and also as medium-term gains from exercise are realized or even as the disease progresses.
(c) The platform has a sufficiently small footprint to operate in the office of a physiotherapist, as it requires only the following hardware components: (i) an entry-level laptop such as an Intel Core i3 based machine with an entry-level graphics card to run the software (around 400 USD), which is largely a direct outcome of our design decisions to use the Unity game engine and to employ light-weight processes for each exercise, (ii) the Kinect sensor (around 100 USD), and (iii) a large flat-panel TV monitor (or projector) that should be placed so that patients can comfortably have a full view of themselves in both standing and seated positions at a distance of approximately 2-3 meters from the TV/Kinect sensor location (350-400 USD for a 55-58-inch TV monitor). Even though at a distance of 2-3 meters a more modest 42-inch TV set may seem adequate, we do feel that a larger 55-58-inch set would be more satisfying, at least to more elderly patients. Finally, for a home setting, the hardware acquisition cost could almost be cut in half, considering that most living rooms are already equipped with a large TV set.
In the immediate future and in collaboration with physiotherapists we plan to validate the platform with PD patients to address safety issues and fine-tune parameters related to exercise posture and pace. At the same time, we are actively augmenting the platform with exercises among those most commonly practiced in traditional PD physiotherapy, to more effectively enrich customized physiotherapy schedules on a per-patient basis. The next step would be to seek funding and partners to make the platform available to a large base of physiotherapists and also to PD patients who are willing to run it in their homes. In addition to affording a daily customized exercise schedule to a PD patient, it will then be possible to collect a time series of performance data that can be usefully correlated with, for example, detailed medication history records and disease progress.
Donut and dynamic polarization effects in proton channeling through carbon nanotubes
We investigate the angular and spatial distributions of protons of the energy of 0.223 MeV after channeling through an (11, 9) single-wall carbon nanotube of the length of 0.2 µm. The proton incident angle is varied between 0 and 10 mrad, being close to the critical angle for channeling. We show that, as the proton incident angle increases and approaches the critical angle for channeling, a ring-like structure is developed in the angular distribution - the donut effect. We demonstrate that it is the rainbow effect. When the proton incident angle is between zero and a half of the critical angle for channeling, the image force affects considerably the number and positions of the maxima of the angular and spatial distributions. However, when the proton incident angle is close to the critical angle for channeling, its influence on the angular and spatial distributions is reduced strongly. We demonstrate that the increase of the proton incident angle can lead to a significant rearrangement of the propagating protons within the nanotube. This effect may be used to locate atomic impurities in nanotubes as well as for creating nanosized proton beams to be used in materials science, biology and medicine.
Introduction
While the progress in theoretical modeling and computer simulation of ion channeling through carbon nanotubes has reached a mature level, as reviewed in Refs. [1]-[10], the experimental advancement in this area is still in its infancy. Since the issues of ordering, straightening and holding nanotubes are probably the most challenging tasks in the experimental realization of ion channeling through them, it is not a surprise that the best results in performing these tasks are expected when the nanotubes are grown in a dielectric medium. For example, the first experimental data on ion channeling through nanotubes, which were reported by Zhu et al. [11], were obtained with He+ ions and an array of well-ordered multi-wall nanotubes grown in a porous anodic aluminum oxide (Al2O3) membrane. The authors performed and compared the results of direct measurements of the yield of ions transmitted through the bare Al2O3 sample and the Al2O3 sample with nanotubes.
On the other hand, the first experimental results on electron channeling through carbon nanotubes were reported by Chai et al. [12]. The authors studied the transport of electrons of the energy of 300 keV through the aligned multi-wall nanotubes of the lengths of 0.7-3.0 µm embedded in the carbon fiber coatings. The misalignment of the nanotubes was up to 1°. Besides, Berdinsky et al. [13] succeeded in growing the single-wall carbon nanotubes (SWCNTs) in the ion tracks etched in the SiO2 layers on a Si substrate, offering an interesting possibility for the experimental realization of ion channeling through nanotubes in a wide range of ion energies.
Regarding the theoretical modeling and computer simulation of ion channeling through carbon nanotubes, we note that the effect of dynamic polarization of the nanotube atoms valence electrons by the ion is not usually taken into account [1]- [14] since its influence at very low and very high energies, of the orders of 1 keV and 1 GeV, respectively, is negligible. However, it is expected that at medium energies, of the order of 1 MeV, this effect contributes significantly to the ion energy loss and gives rise to an additional force acting on the ions, called the image force [15,16], as it has been demonstrated in the computer simulation of the angular distributions of protons channeled through the SWCNTs in vacuum [17]. The importance of the image force has also been emphasized in the related area of ion transmission through cylindrical channels in metals [18]- [23] and on ions and molecules moving over supported graphene [24,25].
When the ion channeling dynamics at very low and very high energies is concerned, the material surrounding the carbon nanotubes serves predominantly as their passive container. However, the ions moving at medium energies induce the strong dynamic polarization of both the nanotube atoms valence electrons and the surrounding material, which in turn gives rise to a sizeable image force [26,27]. In these two studies, the image force was calculated by a two-dimensional (2D) hydrodynamic model of the nanotube atoms valence electrons while the surrounding material was described by a frequency dependent dielectric function. On the other hand, the image force has recently been shown to influence significantly the rainbow effect in proton channeling through the short SWCNTs [28] and double-wall nanotubes in vacuum [29] as well as through the short SWCNTs in the dielectric media [30,31]. We think that it is important to improve our understanding of the role of the image force in the rainbow effect with nanotubes because, in analogy with the case of surface ion channeling [32]- [34], the measurements of this effect can give precise information on both the atomic configuration and interaction potentials within nanotubes, which have not yet been explored completely.
However, in ion channeling experiments, the always present questions are the ones of ion beam divergence and misalignment. So, it is important to study the influence of the effect of dynamic polarization of carbon nanotubes when the initial ion velocity is not parallel to the nanotube axis. Therefore, in this paper, we continue our investigation of the image force with the case in which the ion incident angle is not zero. Specifically, we analyze the angular and spatial distributions of protons of the velocity of 3 a.u. channeled through the straight (11,9) SWCNTs of the length of 0.2 µm in vacuum. The proton incident angle is varied between 0 and 10 mrad, being close to the critical angle for channeling. This proton velocity is chosen because the dynamic polarization effect is the strongest in the range about it. The consideration is limited to the case of a nanotube in vacuum because the presence of a dielectric medium around it would introduce only a slight modifying factor in the results of calculation [30,31].
It is well known that, for the ion incident angles close to the critical angle for channeling, the donut effect develops in the angular distributions of channeled ions. The effect was measured with the Si and Ge crystals [35]- [37], and explained independently afterwards by the theory of crystal rainbows [38,39]. That theory was formulated as a proper theory of ion channeling through thin crystals [40], and has been applied subsequently to ion channeling through short carbon nanotubes [41]- [44]. It must be noted that the donut effect has also been observed in a computer simulation of ion propagation through nanotubes [45]. However, the authors did not connect the obtained results to the rainbow effect. We explore here the donut effect in the angular and spatial distributions of protons channeled through a (11,9) SWCNT in the presence of the image force.
Regarding the angular and spatial distributions of channeled protons to be presented in this study, corresponding to the case in which the proton incident angle is not zero, we note that the proton equations of motion in the transverse position plane that are solved to generate them are 2D. This means that the case we explore is truly 2D, unlike the cases treated in our previous studies of the image force in carbon nanotubes, which were in fact one-dimensional (1D) [28]- [31].
The atomic units will be used throughout the paper unless explicitly stated otherwise.
Theory
We adopt the right Cartesian coordinate system with the z axis coinciding with the nanotube axis, the origin in the entrance plane of the nanotube, and the x and y axes being the vertical and horizontal axes, respectively. The initial proton velocity, v, is taken to lie in the yz plane and to make an angle ϕ with the z axis, ϕ being the proton incident angle. The length of the nanotube, L, is assumed to be large enough to allow us to ignore the influence of the nanotube edges on the image force and, at the same time, small enough to neglect the energy losses of channeled protons.
We assume that the interaction between the proton and the nanotube atoms can be treated classically using the Doyle-Turner expression [46] averaged axially [47] and azimuthally [45]. This interaction is repulsive and of short-range character. The resulting repulsive interaction potential in the proton channeling through the nanotube is given by Eq. (1); in it, Z_1 = 1 and Z_2 = 6 are the atomic numbers of the hydrogen and carbon atoms, respectively, a is the nanotube radius, l is the nanotube atoms bond length, r = (x² + y²)^{1/2} is the distance between the proton and the nanotube axis, I_0 is the modified Bessel function of the first kind and zeroth order, and a_j = {0.115, 0.188, 0.072, 0.020} and b_j = {0.547, 0.989, 1.982, 5.656} are the fitting parameters (in atomic units) [46].
The dynamic polarization of the nanotube by the proton is treated via a 2D hydrodynamic model of the nanotube atoms valence electrons, based on a jellium-like description of the ion cores making the nanotube wall [15]-[26]. This model includes the axial and azimuthal averaging similar to that applied in obtaining the corresponding repulsive interaction potential, given by Eq. (1). It finally gives the interaction potential between the proton and its image, U_im(r, t), which is stationary in the coordinate system moving with the proton and depends on its velocity. This interaction is attractive and of the long-range character. The details of derivation of the expression for U_im(r, t) are given elsewhere [15]-[30]. Consequently, the total interaction potential U(r, t) in the proton channeling through the nanotube, given by Eq. (2), is the sum of the repulsive potential of Eq. (1) and the image potential U_im(r, t). The proton equations of motion we solve, Eqs. (5) and (6), are m ẍ(t) = −∂U/∂x and m ÿ(t) = −∂U/∂y, where m is the proton mass. They are subject to the initial conditions for the transverse components of the proton velocity, ẋ(t = 0) = 0 and ẏ(t = 0) = v sin ϕ. The longitudinal proton motion is treated as uniform, with the initial condition for the longitudinal component of the proton velocity ż(t = 0) = v cos ϕ ≈ v. As a result, the longitudinal component of the proton position is z(t) = vt. Equations (5) and (6) are solved numerically. The angular and spatial distributions of transmitted protons are generated using a Monte Carlo computer simulation code. The components of the proton impact parameter, x_0 and y_0, are chosen randomly from a uniform distribution within the cross-sectional area of the nanotube in its entrance plane. With l = 0.144 nm [48], we obtain that a = 0.689 nm. If the proton impact parameter falls inside the annular interval [a − a_sc, a], where a_sc = [9π²/(128 Z_2)]^{1/3} a_0 is the screening radius and a_0 the Bohr radius, the proton is treated as if it were backscattered and is disregarded. The initial number of protons is about 1 000 000.
The components of the proton scattering angle, Θ_x and Θ_y, are obtained via the expressions Θ_x = V_x/v and Θ_y = V_y/v, where V_x and V_y are the final transverse components of the proton velocity, which are obtained, together with the final transverse components of the proton position, X and Y, as the solutions of Eqs. (5) and (6). The proton channeling through the nanotube can be analyzed via the mapping of the impact parameter plane, the x_0y_0 plane, to the scattering angle plane, the Θ_xΘ_y plane [40]. The corresponding total interaction potential, given by Eq. (2), is axially symmetric. This means that, if the initial proton velocities were parallel to the nanotube axis, this mapping would be 1D. However, the initial proton velocities are not parallel to the nanotube axis, and the mapping is 2D. Since the proton scattering angle is small, its differential transmission cross section is given by σ(x_0, y_0) = 1/|J_Θ(x_0, y_0)|, where J_Θ = (∂Θ_x/∂x_0)(∂Θ_y/∂y_0) − (∂Θ_x/∂y_0)(∂Θ_y/∂x_0) is the Jacobian of the mapping. Thus, equation J_Θ = 0 determines the lines in the impact parameter plane along which the proton differential transmission cross section is singular. The images of these lines in the scattering angle plane are the rainbow lines in this plane [40]. We can analyze in a similar way the mapping of the impact parameter plane (the x_0y_0 plane), which is the entrance plane of the nanotube and the initial transverse position plane, to the exit plane of the nanotube, or the final transverse position plane, the XY plane. The Jacobian of this mapping is J_R = (∂X/∂x_0)(∂Y/∂y_0) − (∂X/∂y_0)(∂Y/∂x_0). The rainbow lines in the final transverse position plane are the images of the lines in the impact parameter plane determined by equation J_R = 0.
Results and discussion
Let us now analyze the angular and spatial distributions of protons channeled in the (11,9) SWCNT of the length of 0.2 µm. In all the cases to be studied, the initial proton velocity will be v = 3 a.u., corresponding to the initial proton energy of 0.223 MeV, while the incident proton angle, ϕ, will be varied between 0 and 10 mrad. The maximal proton incident angle will be close to the critical angle for channeling, ψ c , being about 11 mrad. The analysis will also include the typical proton trajectories through the nanotube in the proton phase space.
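As a quick orientation (a nonrelativistic back-of-the-envelope conversion of ours, not a value quoted in the paper), the stated energy follows from the kinetic-energy relation E = (1/2) m v²: with the proton mass m ≈ 1.8 × 10³ a.u. and v = 3 a.u., this gives E ≈ 8.2 × 10³ hartree ≈ 0.22 MeV, in line with the quoted 0.223 MeV.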
In Figs. 1-6 we shall display the evolution of the angular distribution of channeled protons with the increase of ϕ. In particular, we shall analyze the development of a ring-like structure in the angular distribution under the influence of the image force.
The scatter plot shown in Fig. 1 represents the angular distribution of channeled protons for ϕ = 0 with the image force included. The corresponding angular distribution without the inclusion of the image force contains in its central part only a maximum at the origin. This means that the non-monotonic character of the central part of the angular distribution with the image force included is due to the effect of dynamic polarization. This was discussed in one of our previous papers [28]. We presented in it the distributions of channeled protons along the Θy axis (in the scattering angle plane) with and without the image force included, which were in fact (for ϕ = 0) the radial yields of channeled protons. The conclusion of the discussion was that the maxima of the radial yield, appearing when the image force was included, were due to the rainbow effect.
Figure 3. The distribution along the Θy axis of protons channeled in the (11,9) SWCNT with the image force taken into account (the solid curve) and without it (the dashed curve) when the proton incident angle ϕ = 6 mrad. The proton velocity is v = 3 a.u., the nanotube radius a = 0.689 nm, and the nanotube length L = 0.2 µm. The former curve corresponds to the angular distribution shown in Fig. 2.
Figure 5. The distribution along the Θy axis of protons channeled in the (11,9) SWCNT with the image force taken into account (the solid curve) and without it (the dashed curve) when the proton incident angle ϕ = 10 mrad. The proton velocity is v = 3 a.u., the nanotube radius a = 0.689 nm, and the nanotube length L = 0.2 µm. The former curve corresponds to the angular distribution shown in Fig. 4.
Fig. 2 gives the angular distribution of channeled protons for ϕ = 6 mrad with the image force included. One can notice easily about a half of a ring-like structure, with an exceptionally high yield of channeled protons. This is the precursor of the effect known as the donut effect, which is connected to the misalignment of the proton beam and the nanotube axis. In addition, the angular distribution contains several intricately shaped regions with lower yields of channeled protons. We show in Fig. 3 the corresponding distribution of channeled protons along the Θy axis with and without the image force included. The sharp maximum of this distribution, appearing at -6.0 mrad, is due to the donut effect. It is evident that the image force makes this maximum weaker. On the other hand, the origin of the broad maximum of the distribution, located at -5.1 mrad, is solely the image force. Thus, we can conclude that for the median values of ϕ, between 0 and about ψc/2, the image force still plays a significant role in generating the angular distribution.
Figure 6. The rainbow lines in the scattering angle plane for the protons channeled in the (11,9) SWCNT with the inclusion of the image force when the proton incident angle ϕ = 10 mrad. The proton velocity is v = 3 a.u., the nanotube radius a = 0.689 nm, and the nanotube length L = 0.2 µm.
Figure 9. The distribution along the Y axis of protons channeled in the (11,9) SWCNT with the image force taken into account (the solid curve) and without it (the dashed curve) when the proton incident angle ϕ = 10 mrad. The proton velocity is v = 3 a.u., the nanotube radius a = 0.689 nm, and the nanotube length L = 0.2 µm. The former curve corresponds to the spatial distribution given in Fig. 8.
We show in Fig. 4 the angular distribution of channeled protons for ϕ = 10 mrad with the image force included. One can see clearly the whole ring-like structure, with an exceptionally high yield of channeled protons. This is the fully developed donut effect. As it has already been said, the corresponding value of ϕ is close to the value of ψc. In addition, as in Fig. 2, the angular distribution contains several intricately shaped regions with lower but distinctly graded yields of channeled protons, with very clear boundaries between them. Fig. 5 gives the corresponding distribution of channeled protons along the Θy axis with and without the image force included. It is evident that, when ϕ is close to ψc, the role of the effect of dynamic polarization in generating the angular distribution is almost negligible. Fig. 6 shows the corresponding rainbow lines in the scattering angle plane with the dynamic polarization effect taken into account. These lines clearly demonstrate that the non-uniformity of the angular distribution, including the donut effect, is due to the rainbow effect. In Figs. 7-10 we shall display the evolution of the spatial distribution of channeled protons with the increase of ϕ, which goes on in parallel with the evolution of the angular distribution displayed in Figs. 1-6.
The scatter plot given in Fig. 7 represents the spatial distribution of channeled protons for ϕ = 0 with the image force included. This spatial distribution and the corresponding spatial distribution without the image force included were analyzed in one of our previous papers [31]. We presented in it the distributions of channeled protons along the y axis with and without the image force included, which were in fact (for ϕ = 0) the radial yields of channeled protons. It was demonstrated that the maxima of the radial yields, present in both spatial distributions, were the rainbow maxima. We also concluded that the dynamic polarization effect caused shifts of the maxima of the spatial distribution generated with the effect not taken into account, as well as the appearance of additional maxima. The spatial distribution of channeled protons for ϕ = 10 mrad with the effect of dynamic polarization taken into account is presented in Fig. 8. When this spatial distribution is compared to the spatial distribution for ϕ = 0, it is evident that the maximal change of ϕ induces a significant rearrangement of the protons in the final transverse position plane. Almost all the protons are displaced to the left half of the nanotube. However, as in the cases of the angular distributions for ϕ = 6 and 10 mrad, the spatial distribution also contains several intricately shaped regions with lower but distinctly graded yields of channeled protons, with very clear boundaries between them. Fig. 9 gives the corresponding distribution of channeled protons along the Y axis, in the final transverse position plane, with and without the image force included. For this value of ϕ, the strongest maximum of the spatial distribution lies at -3.6 a.u. instead of at the origin, as it does for ϕ = 0. One can also conclude that, when ϕ is close to ψc, the role of the effect of dynamic polarization in generating the spatial distribution is small but noticeable. The effect makes the strongest maximum of the spatial distribution weaker and induces a rightward shift of the second strongest maximum. Fig. 10 shows the corresponding rainbow lines in the final transverse position plane with the image force taken into account. As in the case of the angular distribution for this value of ϕ, these lines clearly demonstrate that the non-uniformity of the spatial distribution is to be attributed to the rainbow effect. In Figs. 11-14 we shall display the typical proton trajectories through the nanotube in the proton phase space, complementing the results displayed in Figs. 1-10.
We show in Fig. 11 the y component of the proton scattering angle (Θ y ) as a function of the z component of its position within the nanotube with the effect of dynamic polarization included when ϕ = 0 for x 0 = 0 and y 0 = ±2, ±6 and ±10 a.u. Looking at the angular distribution shown in Fig. 1, we see that the channeled protons with the impact parameters close to the nanotube axis, i.e., for y 0 = ±2 a.u., and to the nanotube wall, i.e., for y 0 = ±10 a.u., contribute to the part of the angular distribution close to the origin. The channeled protons with the impact parameters comparable to a/2, i.e., for y 0 = ±6 a.u., give rise to the rainbow maxima lying at about 2 mrad. Fig. 12 gives the dependence of the y component of the proton position on the z component of its position within the nanotube with the image force included when ϕ = 0 for the same values of the components of the proton impact parameter as in Fig. 11. One can see that the channeled protons with y 0 = ±2 a.u. give rise to the part of the spatial distribution close to the origin. The channeled protons with y 0 = ±6 and ±10 a.u. contribute to the peripheral part of the spatial distribution.
We give in Fig. 13 the y component of the proton scattering angle (Θ y ) as a function of the z component of its position with the image force taken into account when ϕ = 10 mrad for the same values of the components of the proton impact parameter as in Fig. 11. It is easy to conclude that the channeled protons with y 0 = 2, 6 and 10 a.u. contribute to the right part of the donut. The channeled protons with y 0 = -2, -6 and -10 a.u. give rise to the most intense part of the donut, being its farthest left part. Fig. 14 shows the dependence of the y component of the proton position on the z component of its position with the image force included when ϕ = 10 mrad for the same values of the components of the proton impact parameter as in Fig. 11. It is evident that all the propagating protons in question end up in the left half of the nanotube, after being reflected from the right part of the nanotube wall.
An additional result of our computer simulations is related to the influence of the image force on ψ_c. We followed the change of the total yield of channeled protons with the increase of ϕ, and found that with the image force taken into account ψ_c = 10.6 mrad. When the image force is not taken into account ψ_c = 11.9 mrad. This increase of ψ_c is attributed to the increase of the total interaction potential in question, given by Eq. (2), when its attractive component, originating in the interaction of the proton and its image, is not taken into account. This conclusion is justified via the relation ψ_c = (U_sc/E)^{1/2}, where U_sc is the total interaction potential at the distance from the nanotube wall equal to the screening radius and E is the initial proton energy [49].
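As a rough consistency check (our own back-of-the-envelope estimate, not a figure from the paper), inverting this relation with the values above gives U_sc ≈ E ψ_c² ≈ 0.223 MeV × (10.6 × 10⁻³)² ≈ 25 eV with the image force included, and about 32 eV for ψ_c = 11.9 mrad without it; the attractive image interaction thus lowers the effective potential near the wall by several electronvolts.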
Concluding remarks
We have presented the first theoretical investigation of the angular and spatial distributions of ions channeled through a nanotube for different proton incidence angles with the effect of dynamic polarization of the nanotube included. The ions are protons of the velocity of v = 3 a.u. and the nanotube is an (11,9) SWCNT of the length of L = 0.2 µm. The proton incident angle, ϕ, is varied between 0 and 10 mrad, being close to the critical angle for channeling, ψ c . We have noticed a slight increase of ψ c when the image force is not taken into account. We have observed a ring-like structure developing in the angular distribution of channeled protons with ϕ increasing and approaching ψ c . The effect has been recognized as the donut effect, being in fact the rainbow effect. If ϕ is between 0 and about ψ c /2, the image force plays a significant role in generating the angular and spatial distributions, including the rainbow maxima. However, if ϕ is close to ψ c , the contribution of the image force to the angular and spatial distributions, including the donut effect, is minor.
The analysis of the generated spatial distributions of channeled protons has shown that the increase of ϕ can give rise to a significant rearrangement of the propagating protons within the nanotube. For example, for ϕ = 10 mrad, the proton beam is displaced from the nanotube axis toward the nanotube wall, leaving the region around the axis practically empty. It is clear that such a rearrangement of the propagating protons may be used to locate various atomic impurities in the nanotube, using secondary processes like backward Coulomb scattering and nuclear reactions. In addition, the presence of the rainbow maxima in the spatial distributions can be used to determine the positions of the impurities very precisely. One can also think about directing such a nanosized proton beam at a material to be modified with it, or at a biological or medical sample.
Scientometric Implosion that Leads to Explosion: Case Study of Armenian Journals
Abstract Purpose The purpose of this study is to introduce a new concept and term, scientometric implosion, into scientometric discourse and research, and to test the idea on the example of the Armenian journals. The article argues that the existence of a compressed scientific area in the country puts pressure on the journals, and after some time this pressure makes one or several journals explode, that is, break out of the limited national scientific area and move to the international arena. As soon as one of the local journals breaks through this compressed space and appears at an international level, a further explosion happens, which makes the other journals follow the same path. Design/methodology/approach Our research is based on three international scientific databases, WoS, Scopus, and RISC CC, from which we have retrieved information about the Armenian journals indexed there and the citations received by those journals, and on one national database, the Armenian Science Citation Index. The Armenian Journal Impact Factor (ArmJIF) was calculated for the local Armenian journals based on the general impact factor formula. Journals were classified according to Glänzel and Schubert (2003). Findings Our results show that the science policy developed by the scientific authorities of Armenia and the introduction of ArmJIF have made the Armenian journals comply with international standards and have resulted in some local journals breaking out of the national scientific territory and being indexed in the international scientific databases of RISC, Scopus, and WoS. Apart from complying with technical requirements, the journals have started publishing articles also in foreign languages. Although nearly half of the local journals are in the fields of social sciences and humanities, only one journal from that field is indexed in international scientific databases. Research limitations One limitation of the study is that it was performed on the example of only one state; another is that more time needs to pass before the results can be firmly evaluated. However, the introduction of the concept can inspire other similar case studies. Practical implications The new term and the relevant model offered in the article can be used in practice for the development of national journals. Originality/value The article proposes a new term and a concept in scientometrics.
Introduction
While attempting to evaluate the academic impact of scientific journals, the journal impact factor (JIF) has become the most commonly used measure for journals. Over the last several decades, the interest in this metric and its role in research evaluation has progressively increased within the academic community and beyond.
The aim of the first studies on the quality of scientific journals was to filter out the most important and influential periodicals in a given scientific field for purchase by the libraries of particularly small American colleges, to stimulate their development (Gross & Gross, 1927). The idea, which was not initially aimed at research evaluation, was later developed into one of the most widely used tools, the journal impact factor, with its apparent strengths and shortcomings. What we nowadays understand as the impact factor was presented by Eugene Garfield in 1955, which led to the publication of the Science Citation Index (SCI) in 1961 (Garfield, 2006). In order to accelerate the selection of journals for the SCI, Garfield and Irving Sher introduced the journal impact factor by re-sorting the author citation index into a journal citation index (Garfield, 2006). The first sample ranking of journals by impact factor appeared in 1969, which was followed by the later annual publication of the Journal Citation Reports (JCR). Among the criticisms of this system were the geographic (mainly US-based) and language (mainly English-language) bias of the journals used as a source for research (Aksnes & Siversten, 2019). Although the Web of Science is trying to address this shortcoming by including regional and national indexes (e.g. the China Knowledge Resource Integrated Database, the Korean Journal Database, the SciELO Citation Index, and the Russian Science Citation Index), there is still a considerable underrepresentation of national journals and journals in national languages in the system.
Meanwhile, there was and still is criticism concerning the journal impact factor; however, as Hoeffel (1998) rightly notes, "IF is not a perfect tool to measure the quality of articles, but there is nothing better..." This continues to be true, and despite its shortcomings the Journal Citation Reports and its Journal Impact Factor greatly affect the evaluation of research, journals, and scientists around the world (Archambault & Larivière, 2009).
The issue of the evaluation of scientific journals has become important also in the Republic of Armenia, though only quite some time after the country regained its independence from the USSR in 1991. Integration into the international scientific community as an independent unit became one of the first challenges for the young republic. The development and further preservation of high-quality Armenian academic journals was conceived as one of the steps towards this goal. However, until 2010 there were only two tools used to evaluate Armenian academic journals: (a) international indexing platforms (which were then not so popular and were not perceived as prestigious) and (b) the list of recommended journals developed by the Supreme Certifying Commission (a Soviet-era special state agency granting academic degrees; it is still operating and publishes a list of so-called acceptable journals). The latter was the main source for the "evaluation" of scientific journals for quite a long time. The Commission had a list of journals formed without any hierarchy or ranking. The only difference among the journals was that some were acceptable for publishing articles only when seeking the degree of Candidate of Sciences, while others also when pursuing the Doctor of Sciences degree (the higher of the two). Nearly all Armenian journals published at the time were included in the list, while the criteria for inclusion, if any, were unclear. Thus, the national journals very soon stagnated.
The situation changed in the first decade of the 2000s. One of the aims of the establishment of the Center for Scientific Information Analysis and Monitoring (CSIAM) was to raise the quality of the Armenian journals. Since its creation in 2010, CSIAM has been disseminating international standards and requirements for national journals, trying to form the necessary basis for their further inclusion in international scientific databases (ISD). CSIAM has introduced the third tool, ArmJIF, for a more objective evaluation of national journals. Moreover, the Committee of Science of Armenia has recently adopted a policy of ranking national journals (apart from the High Attestation Committee) and pushing some prominent Armenian journals to be further included in ISD. All these steps were directed at the internationalization of Armenian science.
(For more on the Supreme Certifying Commission, see www.bok.am. The Center for Scientific Information Analysis and Monitoring was established in January 2010 on the initiative of the State Committee of Science and the National Academy of Sciences of the Republic of Armenia (NAS RA). The Center functions within the Institute for Informatics and Automation Problems of NAS RA. Among the main activities of the Center are: studying and monitoring science in Armenia; calculating impact factors for the journals published in Armenia; importing and developing scientometrics as a separate science sub-field in Armenia; and developing the Armenian Science Citation Index. ArmJIF is calculated for the Armenian journals indexed in the ASCI database; journal issues missing from the database are obtained externally.)
Thus, when the journals are only local and their availability and influence are limited to their state, they implode, and as soon as the national journals reach internationalization, a scientific explosion happens. The definition of implosion is used in the natural and social sciences, implying compression of a territory (Dvoryadkina & Kaibicheva, 2017). As such, when an object is subjected to endogenous and exogenous pressure, it "explodes." Projecting this phenomenon onto scientometrics, we argue that the existence of a compressed space in the country narrows the interest in the journals published in the local languages. Because of the limited audience, the journals do not develop and are unable to reach international levels. After fulfilling the technical requirements, citations in ISD are the second prerequisite for national journals to be included there. Thus, as soon as one of the local journals breaks through the compressed space, an explosion happens. This also affects the dynamics of citations of these journals. When the explosion happens, the other journals acknowledge the prospects of development and follow the same path. This results in an internationalization of science. Such processes are being observed in the countries of Asia and the former Soviet states. The aim of this article is to introduce the new term scientometric implosion into the scientific discourse and to test the idea on the example of the Armenian journals.
This article is an extended version of the conference thesis by the authors: Sargsyan Sh., Mirzoyan A., Blaginin V., "Scientometric implosion of Armenian journals," 17th International Conference on Scientometrics and Informetrics, ISSI 2019 Proceedings, pp. 2642-2643. Here, a more detailed analysis of the Armenian journals indexed in national and international scientific databases and of their citations is presented. The article also gives some forecasts concerning the possible behavior of Armenian journals and their inclusion in ISD, and argues for the possibility of using the term scientometric implosion in scientometric discourse.
Methodology
The present work is based on data obtained from four databases: Clarivate Analytics' Web of Science (WoS Core Collection and Emerging Sources Citation Index), Elsevier's Scopus, the Russian Index of Scientific Citation Core Collection (RISC CC) developed by the Russian Scientific Electronic Library (e-Library), and the Armenian Science Citation Index (ASCI), which is being developed by CSIAM. From the first three databases we retrieved the journals affiliated with Armenia; from the ASCI we retrieved all journals indexed in a given year. The time window used for citations received by the Armenian journals from WoS and RISC is 2013-2017.
ArmJIF is calculated using the same methodology as the Journal Impact Factor and is currently computed for all Armenian journals indexed in the ASCI. In the future, however, a Core Collection of journals will be identified.
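For reference, the standard two-year Journal Impact Factor on which ArmJIF is modeled can be written as follows; the specific citation window and the restriction of the citing pool to ASCI-indexed journals are assumptions made here for illustration:

$$
\mathrm{ArmJIF}_{J,Y}=\frac{C_{J}(Y;\,Y-1)+C_{J}(Y;\,Y-2)}{P_{J}(Y-1)+P_{J}(Y-2)}
$$

where $C_{J}(Y;\,y)$ is the number of citations received in year $Y$ (from journals indexed in the ASCI) by the items that journal $J$ published in year $y$, and $P_{J}(y)$ is the number of citable items published by journal $J$ in year $y$.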
The journals are classified into 15 subject fields according to Glänzel and Schubert (2003).
Results
Inclusion of the Armenian journals in the national scientific database (ASCI) was the first step towards the standardization and then the internationalization of the Armenian journals. Currently there are about 120 scholarly journals in Armenia, but only about 100 of them are included in the ASCI database, for reasons connected predominantly with technical requirements: the absence of an archive, irregular periodicity, a large share of non-scientific articles, etc. ArmJIF is calculated for all journals indexed in the ASCI whose issues are available for three consecutive years. The number of journals for which ArmJIF has been calculated has varied from 50 to 99. According to Figure 1, the number of journals in the ArmJIF database increased for the first six consecutive years and then began to fall. This was connected with the policy of CSIAM: at first the approach was to include as many journals as possible (depending on the availability of the journal issues necessary for ArmJIF calculation); later, only those journals that met minimum technical requirements, among them online archives for the three necessary years, were included. Although the ASCI does not yet have a so-called Core Collection, one is planned, as making this distinction will further increase the quality of the Armenian journals.
Half of the Armenian journals indexed in the ASCI are in the fields of the Social Sciences and Humanities, while the other half is distributed among the remaining 12 subject categories (Figure 2). Meanwhile, the journals from Armenia indexed in ISD are predominantly from the natural sciences, and only one of them, Wisdom, is in the field of Arts and Humanities. At the moment there are six journals from Armenia indexed in WoS, Scopus, and RISC CC. It should also be noted that the same six journals appear in nearly all three databases (Table 1), with RISC CC almost always being the first, followed by Scopus (or simultaneously) and then WoS, which implies that inclusion in the RISC CC is a sort of first step for those journals. The present study has revealed that after the introduction of journal metrics in Armenia, the great majority of Armenian journals considerably improved their technical characteristics (Table 2). Additionally, the new requirements of the science management authorities also advanced the peer review process by making it mandatory for the Armenian journals. Other improvements are related to language (many journals have started to publish in English as well, or at least to provide English abstracts), to the editorial boards of the journals (foreign scholars are now included in the boards), and to the introduction of international requirements related to publishing ethics. This allows the national journals to undergo an implosion that will later transform into an explosion, bringing some of them into ISD and thus to a wider scientific community. As a final step of our research, we studied citations to the Armenian journals not indexed in any ISD in order to see the reaction of the international scientific community to the changes in the national journals. For that purpose, we took the six Armenian journals already indexed in the studied ISD and another six Armenian journals not indexed in those ISD (Table 3). The six journals not yet indexed have in several cases received even more citations than those already indexed. According to our prognosis, and in support of our thesis of the scientometric explosion, these journals have every chance to break out of the national territory and become internationalized.
Discussion and conclusion
When Armenia regained its independence in 1991 after the dissolution of the USSR, there was a scarcity of publications from Armenia in ISD; moreover, there was no knowledge of, and consequently no interest in, them. Before independence, the national journals of the Soviet states indexed in the ISI database were affiliated with the Soviet Union, so although there were a very few (2-3, depending on the period) Armenian journals in the ISI, they were attributed to the Soviet Union. In addition, the great majority of national publications were not visible to the international scientific community, making Armenian science isolated and out of the scope of any bibliometric analysis. There was a serious scientific enclosure in the area, and the journals neither attained the international level nor even pursued that goal. The situation did not change for the next two decades. In order to revive the national science and bring the Armenian journals up to world standards, CSIAM initiated the establishment of a national scientific database to collect bibliometric data on national journals and the citations they receive, and also started calculating impact factors for the local journals. The new requirements considerably affected the quality of the Armenian journals.
Nowadays only 5% of Armenian academic journals (those published in English) are represented in ISD, in particular WoS, Scopus, and RISC CC. According to the obtained data, the first journal from Armenia to appear in ISD was Astrophysics, well before the introduction of journal technical requirements and ArmJIF into the scientific community of Armenia. The same is true for two other journals, the Journal of Contemporary Physics and the Journal of Contemporary Mathematical Analysis. Their inclusion can also be explained by the fact that these scientific directions are traditionally quite strong in Armenia (Gzoyan et al., 2015) and that the management of these journals is internationally active.
Interestingly, in nearly all studied cases the first ISD in which the Armenian journals appear is RISC CC, which is understandable given the historical relations between Armenia and Russia. Today nearly all Armenian journals have full or partial representation in the RISC, a result of the activities of CSIAM and its collaboration with the Russian partner. This process continues: more and more Armenian journals place their full content in RISC, enlarging their accessibility to the Russian readership and paving a path to other ISD.
The number of local journals in the ASCI is also gradually and steadily increasing, and at some point a Core Collection of the Armenian journals will be formed, which will further tighten the requirements for academic journals and create more competition. The application of ArmJIF in the assessment of Armenian scholarly journals has contributed to the rise of competition, the maintenance of publishing ethics, the online access of journals, the trilingual usage of bibliographic data, etc. While in 2009 only 33% of the local journals had a webpage, now nearly all journals meet this requirement, and the number of journals with full information and an archive (PDFs of all articles) has reached nearly 100%. Another important development is connected with the shift of languages: more articles have started to appear in foreign languages, Russian as the second spoken language in Armenia and English as the lingua franca of contemporary science. Generally, however, there is a clear shift towards a trilingual model.
These local developments in the national science market and the compressed territory resulted in the explosion and integration of some Armenian journals into ISD, as they started seeking international recognition and readership. We further studied international citations to the Armenian journals in the WoS database in order to assess the chances of other local journals being included in ISD. For that purpose, we identified the six such journals which had received the most citations in WoS. The conclusion is that those journals have all the prerequisites, and also the intention, to break out of the national boundaries and appear in international ISD. The steady inclusion of Armenian journals in RISC is one of the sure indicators of the scientific explosion. This, as well as the latest inclusion of two Armenian journals in WoS and Scopus, will definitely lead to the inclusion of more Armenian journals in ISD.
Management of tendon haemangiosarcoma in a Bactrian camel (Camelus bactrianus) – a case report
An 18-year-old intact female Bactrian camel (Camelus bactrianus) was suffering from lameness due to a mass on the right dorsal metacarpal region that caused acute swelling and local skin necrosis. Histological examination and immunohistochemistry of biopsy material from the mass revealed a haemangiosarcoma of the extensor tendons. Three weeks after the biopsy, the tumour had enlarged to 6 cm in diameter and the animal became disabled. The tumour and its associated tendon were resected and the tendon's edges were bridged with a synthetic polytape graft. The camel was fully weight-bearing after the surgery. Two weeks later, the graft was removed due to widespread necrosis. Since the wound was positive for Corynebacterium sp., Acinetobacter iwoffii, Micrococcus sp., Escherichia coli, and Staphylococcus sp., the post-operative antibiotic treatment was prolonged for 28 days. Four months later, the wound had healed with daily irrigation and bandaging and the camel walked normally. Nine months after diagnosis, the camel suddenly died without any clinical signs. Metastases of the haemangiosarcoma were found in the liver, lungs, kidneys, brain, meninges, and mediastinum. Exsanguination due to rupture of a liver metastasis was determined as the cause of death. Haemangiosarcoma is a malignant neoplasm that arises from endothelial cells of blood vessels and tends to be very aggressive. To the author's knowledge, this is the first case report of a metastasizing haemangiosarcoma arising from the lateral extensor tendon in a Bactrian camel.

Camelids, polytape graft, metastases, neoplasia, tendon surgery

There are only a few reports on neoplasia in Old World camelids compared to llamas or alpacas (Effron et al. 1977; Valentine and Martin 2007; Molenaar et al. 2009). In Bactrian camels (Camelus bactrianus), a metastasizing gastric adenocarcinoma, histiocytic sarcoma, meningioma, and a vaginal leiomyoma have been reported to date (Molenaar et al. 2009). In the Arabian dromedary (Camelus dromedarius), an ovarian teratoma, bronchoalveolar adenocarcinoma, lymphatic leukaemia, renal cell carcinoma, ulnar osteosarcoma, pulmonary leiomyoma, squamous cell carcinoma, fibroma, lipoma, and fibromyxosarcoma have been described (Molenaar et al. 2009; Fahd and Yasmin 2013; Gamal and Shawky 2013). To our knowledge, neoplasia involving the tendons has not yet been reported in camelids.
Materials and Methods
An 18-year-old female Bactrian camel from ZOO Ljubljana was examined due to the lameness of the right thoracic limb. The animal suffered from a solid movable circular mass (measuring 4 cm in diameter) without ulcerations on the right dorsal metacarpal region. The camel was put under general anaesthesia during incisional biopsy of the lesion. It was determined that the mass was in contact with the extensor tendon and seemed to be growing around the tendon. Biopsy material was fixed in 10% buffered formalin, embedded in paraffin, sectioned at 4 µm, and stained with haematoxylin and eosin. Histology examination revealed a poorly demarcated, unencapsulated neoplastic mass, composed of infiltrative, pleomorphic, spindle, ovoid or oval shaped, anisocytotic neoplastic cells with prominent nucleoli, which formed irregular blood-filled clefts, channels and cavities. Bi- or even multinucleated cells were seen. Eighteen mitotic figures were counted per 10 high power fields. Large, multifocal areas of haemorrhage and necrosis with neutrophil infiltration were scattered throughout the tumour. Immunohistochemical assay was done using the mouse monoclonal anti-von Willebrand factor antibody (DAKO, Denmark) for the immunolabelling of antigen.
The sections of the specimen were incubated with primary antibodies for one hour at room temperature in a humid chamber. Peroxidase activity was stopped with Peroxidase-Blocking Solution (DAKO, Denmark) for 30 min at room temperature. Subsequently, a visualization kit (DAKO REAL™ EnVision™ Detection System Peroxidase/DAB+, Rabbit/Mouse, Denmark) was used according to the manufacturer's instructions. Slides were counterstained with Mayer's haematoxylin and mounted. Positive (normal endothelium) and negative (incubation with antibody diluent (DAKO, Denmark) without the primary antibody) controls were included. Neoplastic endothelial cells were diffusely and strongly positive for von Willebrand factor (Plate VI, Fig. 1). Based on the histopathological and immunohistochemical findings, the tumour was diagnosed as haemangiosarcoma.
Results
Three weeks after the biopsy, the mass had enlarged to 6 cm in diameter and the animal became disabled; it was therefore put under general anaesthesia again. Pre-operative blood analysis revealed normal haematological and biochemical values. The tumour was dissected from the surrounding tissue together with an 8 cm section of the tendon (Plate VI, Fig. 2). The edges of the tendon were bridged with one doubled synthetic graft of Neoligaments Polytape 40 × 800 (Xiros, United Kingdom). The polytape was fixed to the tendon stumps with a modified Kessler suture under appropriate tension to preserve the original tendon length. Bacteriological examination of a sample from the surgical site was negative. Despite the loss of surrounding skin due to necrosis, the surgical wound was closed and the graft was entirely covered with skin. A soft bandage was applied to the lower leg. Perioperative and postoperative antibiotic prophylaxis was initiated with enrofloxacin (5 mg/kg i.m., once daily = s.i.d.; Enroxil 100 mg/ml, Krka, Slovenia). Results of post-anaesthesia haematology and biochemistry were unremarkable. On the sixth day after surgery, the camel was able to walk short distances without limping. The wound was positive for Corynebacterium sp., Acinetobacter iwoffii, Micrococcus sp., Escherichia coli, and Staphylococcus sp., which is why the post-operative antibiotic treatment (enrofloxacin) was prolonged for a further 28 days. Three weeks after surgery, the synthetic polytape graft was the only remaining bridging structure, and there were no means to retain the graft in spite of daily irrigation and debridement (Plate VII, Fig. 3). Because the tendon stumps necrotized, the graft was removed one month after surgery. The wound then started to heal with daily irrigation and bandaging. Four months later, the wound had healed completely and the camel was able to walk normally. However, the animal suddenly died nine months after the first examination and was immediately submitted for necropsy.
The camel was in good body condition, but anaemic. Twelve litres of blood and nine litres of clotted blood were found in the abdominal cavity. Numerous reddish-black, rounded, smooth, well-demarcated, bulging, firm neoplastic nodules up to 6 cm in diameter were scattered throughout the liver parenchyma, lungs, kidneys, brain, meninges, and mediastinum. One of the neoplastic nodules in the quadrate hepatic lobe, 4 cm in diameter, was ruptured. All heart chambers were severely dilated. No gross evidence of tumour recurrence was observed at the tumour excision site. Samples of hepatic, pulmonary, renal, mediastinal, and cerebral nodules, in addition to heart, spleen, pancreas, stomach, intestine, spinal medulla, and soft tissues from the location of tumour excision, were collected for histological analysis. The metastatic nodules were poorly demarcated and unencapsulated, and were composed of cells similar to those seen in the primary tumour of the lateral toe extensor tendon. In contrast to the primary tumour, the mitotic index was higher (62 mitoses counted per 10 high power fields). These tumours were diagnosed as metastatic haemangiosarcoma of the liver, lungs, kidneys, mediastinum, brain, and meninges. No neoplastic changes were observed microscopically in the other examined organs and tissues or at the surgical site.
Discussion
Haemangiosarcoma is a malignant neoplasm that arises from endothelial cells of blood vessels (Robinson and Robinson 2016). It is an aggressive vascular tumour with a high malignant potential (Ravi and Patel 2013). It has mostly been described in dogs, involving the spleen, right atrium, skin, subcutis, and liver (Withrow et al. 2013), but there has been only one report of haemangiosarcoma in the gastrocnemius tendon of a dog (Waren and Summers 2007). To our knowledge, the only reported case of haemangiosarcoma in camelids was an intraosseous haemangiosarcoma in a three-month-old llama that metastasized into the lungs and lymph nodes (Hamir et al. 1997).
This kind of tumour tends to show very aggressive biological behaviour, commonly with rapid and widespread metastases (Withrow et al. 2013), and therefore requires multimodality care for optimal outcomes. The efficiency of preoperative radiotherapy or chemotherapy in increasing the survival rate and preventing metastases in animals remains unknown. These tumours tend to be multifocal, and systemic therapy in the neoadjuvant setting is strongly recommended even for a localized disease. Effective cytotoxic chemotherapy is available for metastatic angiosarcomas, but its durability is limited (Ravi and Patel 2013).
In the current case report, we presume that the primary tumour location was in the extensor tendon, since haemangiosarcoma metastasizing to a tendon is less likely and the survival time as well as the number and size of the metastatic masses were small relative to the duration of the period after tumour excision. Surgical reconstruction with a graft was indicated, because severe walking difficulties and a toe drop due to the extensor tendon involvement were expected. Post-operative bracing is not possible in camelids, and biological reconstruction was expected to fail. The synthetic graft allowed immediate full weight-bearing of the affected limb. Complications during the wound healing process were probably initiated by marginal skin necrosis, as the skin had previously been compromised by the tumour, and by a secondary infection. Primary incompatibility of the synthetic graft is less probable, since this graft has been used in human medicine for over two decades in ligament and tendon reconstructions (Zaffagnini et al. 2008; Chen et al. 2009). Nevertheless, at the end of the treatment, the camel was able to walk normally even without the reconstructed tendon. Therefore, it seems that loss of the lateral toe extensor tendon can be sufficiently compensated for by other muscle groups, at least in camels living in a zoo setting. Clinical signs are usually related to the site of origin of the primary tumour or to the presence of metastases, spontaneous tumour rupture, coagulopathies, or cardiac arrhythmias (Nelson et al. 2005). Nevertheless, the animal did not show any clinical signs (until a few hours prior to death), despite the presence of widespread metastases in several organs. In dogs with splenic or cardiac haemangiosarcoma, episodes of collapse are common as a result of ventricular arrhythmias (Effron et al. 1977; Nelson et al. 2005). The prognosis of cases where metastases are present is poor without systemic chemotherapy, whereas in the absence of metastases it can even be excellent (Withrow et al. 2013). To the author's knowledge, this is the first case report of a metastasizing haemangiosarcoma arising from the lateral extensor tendon in a Bactrian camel that was successfully addressed surgically. Surgical removal of tumours on tendons and subsequent reconstruction with a synthetic graft seems to be a promising solution that needs to be further investigated.
Fig. 3. Synthetic polytape graft in the necrotic wound after one month of treatment in a Bactrian camel.
Are Foreigners at Disadvantage in a Global Labor Market?
We find evidence that being a foreigner decreases the chances of surviving (i.e., keeping the license) in the first season on the PGA TOUR. This phenomenon does not affect all foreigners equally: it is present amongst the non-elite group (those playing the second-tier tour), but we find no evidence of it amongst the elite group (those playing the first-tier tour). We discover that the international experience acquired by foreigners in other circuits prior to their arrival on the PGA TOUR mitigates this disadvantage. Not keeping the card has hazardous financial consequences for both the golfer and the corporations whose products he endorses.
2016), partly because their performance is published in recognized rankings and their ability is seen as highly portable by hiring banks (Groysberg et al., 2008). Another labor market in which there is high cross-country mobility is that of professional sports. For instance, there is evidence of large international migration in professional football (Berlinschi et al., 2013; De Luca et al., 2015; Kleven et al., 2013), to the extent that some European teams are composed solely of migrant players. The ability of these players is seen as highly transferable across countries, and football clubs hire them with the expectation of raising the performance of the team (Royuela & Gasquez, 2019). In such labor markets, characterized by the high mobility of talent across borders, are foreigners at a disadvantage relative to their domestic counterparts?
At the firm level, there is ample evidence that companies operating in a market overseas incur additional costs that local firms would not, and thus exhibit lower survival rates than their domestic counterparts (for a review, see Denk et al., 2012). However, at the individual level, researchers have only started to explore whether foreigners are at a disadvantage vis-à-vis locals when competing in the host market. Fang et al. (2013) find that migrants underperform relative to natives when competing for employment in the Canadian labor market. They argue that this result is driven by foreigners' lack of familiarity with the host labor market (i.e., which method to use for an effective job search) and by discrimination due to a lack of legitimacy (i.e., employers cannot assess foreigners' schooling credentials as easily as those of local candidates). Mata and Alves (2018) find that the rate of survival amongst foreign entrepreneurs in Portugal is lower than that of comparable native entrepreneurs, a result explained by foreign entrepreneurs' lack of familiarity with local business practices and by discrimination due to a lack of legitimacy (i.e., clients cannot adequately assess whether foreign entrepreneurs' skills suit the local market). Yet despite these empirical advancements in various national contexts, it remains unclear whether foreigners' disadvantage vis-à-vis locals would persist in a labor market in which there are no immigration restrictions (i.e., individuals qualify strictly based on performance criteria), where an unbiased measure of individuals' ability is publicly available, and where discrimination from the market's governing body is non-existent.
The purpose of this study is to provide such a conservative test by examining foreign newcomers and whether they are at a greater disadvantage relative to local newcomers in the global labor market of professional sports. This is a novel research question that contributes to a growing body of research that examines the consequences of athlete migration. There are at least two important research streams within this literature. First, there is a series of studies providing evidence that athlete migration has allowed sports organizations to enhance their performance. At the national level, there is ample evidence indicating that an increase in the number of foreigners in a domestic league generates improvement in the performance of the national team (Alvarez et al., 2011; Milanovic, 2005), that national teams benefit from having athletes playing abroad in stronger leagues (Allan & Moffat, 2014; Gelade & Dobson, 2007), and that the migration of players to foreign clubs improves the national team performance of their countries of origin (Berlinschi et al., 2013; Lago-Penas et al., 2019). At the club level, evidence shows that clubs from countries with regulations that are more permissive in terms of migration display better results in the world rankings (Royuela & Gasquez, 2019), and that migration has allowed top clubs to strengthen their position in international competitions (Binder & Findlay, 2012). Overall, this stream of research has allowed us to gain a better understanding of the positive performance outcomes of athlete migration in team sports, but there is still limited understanding of the performance downsides of athlete migration, particularly in individual sports. In this respect, our examination of whether foreign individuals are at a competitive disadvantage when compared to locals in the host market is a distinct question that has not hitherto been examined.
The second research stream that examines the consequences of athlete migration focuses on the acculturation challenges of foreign athletes. Specifically, these studies examine how migrant athletes must navigate through and adapt to unfamiliar norms and practices in the hosting country (Schinke et al., 2013). This research stream describes how athletes may go through phases of psychological burden and social isolation (Ryba et al., 2016; Schinke et al., 2016), and the mechanisms that can help the migrant to adapt to the new host environment (Ryba et al., 2015, 2016). Overall, this literature has allowed for an understanding of the social and psychological challenges that migrant athletes endure, yet thus far there has been less emphasis on quantifying the impact of those challenges on athletes' performance. Our study contributes to this research stream precisely by providing a conservative test of whether foreign newcomers are at a disadvantage vis-à-vis local newcomers in a sports labor market with high cross-country mobility and experienced agents. We focus on the PGA TOUR, which is a high-skill and high-stakes labor market located in the US (Hickman & Metz, 2015). Similar to other labor markets where workers earn money by selling their skills to employers, golfers in the PGA TOUR earn money by participating in tournaments (roughly 40 tournaments per season; Rinehart, 2009). The PGA TOUR has the most prestigious reputation among golf circuits, and playing the PGA TOUR season is the aspiration that many golfers vie for, among other reasons because of the unparalleled tournament purses, the possibility of signing endorsement deals, and the chance to prove their abilities against the best players in the world. Non-US players represent over one third of the PGA TOUR players and come from over 30 different countries.
There are four attributes of the PGA TOUR that contribute to the conservativeness of our test.
First, individuals cannot be discriminated against based on immigration considerations (e.g., nationality quotas) because they qualify to play on the PGA TOUR based on their athletic performance. Second, the fact that individuals go through a rigorous qualification process ensures that players can cope with competitive pressure. That is, contrary to low-ability workers, who are largely influenced by their peers (Mas & Moretti, 2009), Guryan et al. (2009) showed evidence that PGA TOUR golfers' performance is less subject to detrimental effects from peer pressure. Third, foreigners in the PGA TOUR cannot be discriminated against by the press, fans, or sponsors based on a biased perception of their ability, because in professional golf there is an objective and publicly available measure of ability to which all players are compared regardless of citizenship: a player's rank in the Official World Golf Ranking (OWGR). And fourth, the PGA TOUR, which depends financially on international sponsors and international broadcasting deals, positions itself as an inclusive professional sports circuit.

The main results of our paper are as follows. First, even though the PGA TOUR poses no immigration restrictions and enjoys great diversity in terms of players' nationalities, we still find that foreign newcomers (vis-à-vis local newcomers) suffer a lower probability of keeping their card (i.e., the license that gives the right to play on the PGA TOUR) at the end of their first PGA TOUR season. However, this disadvantage does not affect all foreigners equally: it is present amongst the non-elite group (those playing the second-tier tour), but we find no evidence amongst the elite group (those playing the first-tier tour). Second, we find that amongst the non-elite group of individuals (second-tier tour), foreign newcomers' probability of survival is positively moderated by their prior experience in other international golf circuits. More specifically, the intensity of foreigners' international experience (i.e., distance travelled per season) and the degree of competitiveness of the international golf circuits in which they built their experience contribute to mitigating foreign newcomers' disadvantage vis-à-vis local newcomers.
The remainder of the paper is structured as follows: Section 2 explains how the PGA TOUR is organized, the additional costs that foreign newcomers face but local newcomers do not, and how foreign newcomers' prior international experience may help them mitigate that disadvantage. Section 3 describes the methods and our empirical strategy. Section 4 provides our primary results and the series of robustness checks we have conducted. Section 5 discusses our results and conclusions.
Liability of Foreignness in the PGA TOUR
The PGA TOUR is the organizer of the first-tier and the second-tier tours. Both tours feature skilful professionals, but the first-tier tour features the highest-ability players, while the second-tier tour is a developmental tour that features emerging players or players who lost their first-tier tour card and are trying to regain it. In the first-tier and the second-tier tours, the season consists of dozens of tournaments (one per week). First-tier and second-tier tour tournaments are independent from one another and are held in different venues, every week moving to a new location across the United States. In the first-tier tour, players compete for large stakes: the average prize per tournament in 2019 revolved around US$6 million, and players' endorsement deals go from $250,000 to millions per season. In the second-tier tour, players compete for more modest stakes: the average prize per tournament in 2019 revolved around US$600,000, and players' endorsement deals go up to $50,000 per season. As a result, second-tier players are on the fringe between making a wealthy living (if they promote to the first-tier tour) and struggling financially if they do not promote; in 2019, only half of the second-tier tour players earned enough prize money to cover their own travelling expenses and tournament fees. As such, PGA TOUR players see the second-tier tour as a proving ground on their path to the first-tier tour.
Both local and foreign players must qualify for the first-tier or second-tier tours through the Qualifying Tournament. Players who qualify for the first-tier (second-tier) tour earn a card that gives them the right to enter the first-tier (second-tier) tour tournaments for the upcoming season. Yet, players are confronted with the risk of losing that card at the end of their first season. The criterion to keep the card is the player's rank in the cumulative prize money ranking at the end of the season. In the first-tier tour, every season there are 250 players who have a card that gives them the right to play in first-tier tour tournaments. Of those 250 players, the bottom 125, as ranked by cumulative prize money at the end of the season, lose their card and exit the first-tier tour (i.e., are demoted to the second-tier tour). Similarly, every season there are 220 players who have a second-tier tour card. Of those 220 players, the bottom 100, as ranked by cumulative prize money at the end of the season, lose their card and exit the second-tier tour. Player turnover is high in both tours.

We hypothesize that even in a labor market in which there are (1) no immigration restrictions, (2) where individuals have gone through a rigorous qualification process guaranteeing that players can cope with competitive pressure and the attendant social influences of high-stakes competition, and (3) where individuals cannot be discriminated against based on a biased perception of ability, foreign newcomers face additional costs that local newcomers do not incur. In line with Fang et al. (2013) and Mata & Alves (2018), who argue that foreign newcomers face unfamiliarity and discrimination hazards that local newcomers do not face, we contend that foreign golfers who move to the US to play in the PGA TOUR face hardships stemming from unfamiliarity with the host competitive environment and lack of support, which may have potentially adverse consequences for their performance. We describe these hazards next.
The PGA TOUR both chooses and conditions its courses using specific guidelines, such as very long courses, deep rough (i.e., an area outside the fairway that features thicker grass to penalize imprecise shots), and fast greens (i.e., the area where the hole is located). Foreign newcomers may be less familiar with the courses than local newcomers, since the latter may have had more opportunities to play on these courses because they are open to the public the rest of the year, outside the PGA TOUR season. As a result, foreign newcomers may incur higher costs when adapting to these new playing conditions (Feinstein, 2011). Moreover, the difficulty of adapting to these unfamiliar conditions may be particularly detrimental for foreigners who do not master the English language, since they are limited in their ability to socialize and therefore to access valuable information about the specificities of courses. Also, a poor command of the language may add extra pressure each time a foreigner has to interact with the press (Crouse, 2016), thus hampering his popularity among American fans (Diaz, 2017).

Furthermore, foreign newcomers in the PGA TOUR may not receive the support that local players do. Home-field advantage is a well-documented phenomenon in sports (Garicano et al., 2005). It refers not only to playing on one's home court (i.e., stadium), but also to playing within one's own geographical territory (Monks & Husch, 2009), such as Americans playing in the US. Indeed, fans' cheering is a growing tendency in the PGA TOUR (Crouse, 2013), which has applied the stadium concept to golf courses, with large grandstands constructed along the course that can fit up to 600,000 spectators. In this respect, some foreign players in the PGA TOUR, when asked by journalists, have been vocal about the lack of support they receive from local fans (Golfing World, 2018). This lack of support for foreigners may be particularly detrimental when reflecting upon the realities of daily touring life: with many weeks per season on the road, loneliness is a frequent symptom of life on tour (Noer, 2012). This phenomenon may be aggravated by some local journalists playing upon nationalistic themes and fueling discrimination by using the term "invasion" when referring to the arrival of foreign players to the PGA TOUR (e.g., Burke, 2017; Figueroa, 2001).
Hypothesis 1: Foreign newcomers in the PGA TOUR exhibit lower rates of survival than local newcomers.
Although foreign golfers may be disadvantaged vis-à-vis natives due to unfamiliarity with the host environment, we argue that this disadvantage may be partly overcome through their international experience. Indeed, empirical evidence shows that agents who have been exposed to experiences overseas are better prepared to overcome the unfamiliarity of a new foreign destination (Mudambi & Zahra, 2007). This is because individuals are likely to compare unfamiliar circumstances with prior international experiences in order to identify valid courses of action (Delios & Henisz, 2000; Delios & Beamish, 2001). Accordingly, foreign golfers who have built an intense international experience in the other international golf circuits (the Asian, Australian, Canadian, European, Japanese, South African, and Latin American tours), and who are therefore used to adjusting their game to varying playing conditions, may be better equipped to adapt to the unfamiliar conditions of the PGA TOUR.
Moreover, after years of competing internationally, foreign newcomers may find, in other players whom they met during their years of international experience and who are now part of the PGA TOUR, a buffer against adversity (Wacker, 2017). As explained in Rosaforte (2012), these pre-existing ties may help foreign newcomers to overcome the lack of support they receive from local fans and to mitigate the loneliness associated with touring life. Indeed, previous empirical research has shown that social ties can be an important source of support for foreigners facing uncertain work environments (Manev & Stevenson, 2001) or work settings in which foreigners are underrepresented vis-à-vis natives (Mollica et al., 2003).
Hypothesis 2: Foreign newcomers' survival rate in the PGA TOUR is positively moderated by their prior international experience.
Data
We exploit three rich databases that make it possible to address our research question. First, we use the ShotLink® database, which allows us to examine the performance of 776 newcomers in the PGA TOUR between 2002 and 2016, as well as to trace their prior athletic trajectory in any given international golf tour since 1996. This allows for the creation of precise measures of international experience. Second, we created a database from the PGA TOUR's Media Guides (2002-2016), which contains rich biographic features of the players, such as their age or the college they attended. Third, we used the Official World Golf Ranking (OWGR) database, which provides an accurate measure of each player's ability every week during the period under study.
Sample
Our sample only includes newcomers in order to isolate the phenomenon under study from the effects of having players with varying degrees of experience (i.e., liability of newness) on the PGA TOUR (Mudambi & Zahra, 2007). In our sample, a player is considered a newcomer from the moment he earns a card for the first time on either tour, and until he loses his card for the first time. Therefore, a first-tier tour newcomer will only be considered as such until the end of the season when he loses his first-tier tour card. Similarly, a second-tier tour newcomer will be considered as such until the end of the season he loses his second-tier tour card; or, if he is promoted to the first-tier tour while he is still a newcomer, he remains as such until the end of the season when he loses his first-tier tour card (section 3.5.5 explains how the econometric model accounts for this possibility).
The 15-year period under study starts in 2002 and ends in 2016. Prior to 2002 we do not have access to players' biographical information. During the period 2002-2016 on the first-tier tour there were 131 newcomers, 74 of whom were foreigners of 22 different nationalities. First-tier newcomers are on average 28 years old, turned professional six years prior to arrival on the PGA TOUR and have 2.8 seasons of international experience. The average OWGR at the time of entry is 230 (i.e., the lower the ranking, the better the player is), and 52% of these professionals attended university in the US. Within the period 2002-2016 on the second-tier tour there were 645 newcomers, 187 of whom were foreigners of 30 different nationalities. Second-tier newcomers are on average 27 years old, turned professional four years prior to their arrival on the PGA TOUR, and have 0.5 seasons of international experience. The average OWGR at the time of entry is 579, and 79% of them have attended university in the US. These statistics indicate that entering the PGA TOUR through the first-tier tour is the most frequent path for more talented, internationally experienced golfers. In contrast, entering through the second-tier tour is the most frequent path for golfers with less international experience, many of whom come from college golf in the US.
Dependent variable
Binary variable that reflects whether the player survives by keeping his card at the end of the season. For every season on the first-tier tour, there are 250 players who receive a card that gives them the right to play in first-tier tour tournaments; a player maintains his card if he finishes among the top 125 players in the cumulative tournament prize money ranking by the end of the season. For every season on the second-tier tour, there are 220 players who have a card that gives them the right to play in second-tier tour tournaments; a player keeps his card if he finishes among the top 100 players in the cumulative tournament prize money ranking by the end of the season. The few cases in which a player did not keep his card because of a voluntary exit or the need for a medical leave were identified and excluded from the sample.
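As an illustration of how the survival indicator can be derived from season-end prize-money ranks, the sketch below uses a hypothetical player-season table; the column names and example values are our own and do not correspond to the actual ShotLink or Media Guide fields.

```python
import pandas as pd

# Hypothetical player-season table; column names and values are illustrative only.
seasons = pd.DataFrame({
    "player":     ["A", "B", "C", "D"],
    "tour":       ["first", "first", "second", "second"],
    "money_rank": [30, 180, 95, 140],  # season-end cumulative prize-money rank
})

# Card-retention cut-offs described in the text:
# top 125 on the first-tier tour, top 100 on the second-tier tour.
CUTOFF = {"first": 125, "second": 100}

# survived = 1 if the player keeps his card at the end of the season.
seasons["survived"] = [
    int(rank <= CUTOFF[tour])
    for tour, rank in zip(seasons["tour"], seasons["money_rank"])
]
print(seasons)
```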
There are three reasons why we chose survival at the end of the season as our dependent variable. First, survival is the most common measure used to assess the liability of foreignness in the literature (e.g., Kronborg & Thomsen, 2009; Mata & Freitas, 2012; Mata & Alves, 2018). Second, in golf, not surviving at the end of the season (i.e., not keeping the card and being demoted to a lower-order circuit) is unequivocally a measure of failure because it has hazardous financial consequences for players and the corporations whose products they endorse (Feinstein, 2011). Third, a season-long measure of performance (as opposed to focusing on tournament-level performance) is the appropriate level of analysis. On the one hand, it is the cumulative effect of many tournaments throughout the season that, overall, puts foreigners at a disadvantage relative to locals, which makes the season the appropriate unit of analysis. On the other hand, it allows us to level out tournament-specific factors that could influence the outcomes of a golf tournament, such as the average strength of the players entering the tournament (Guryan et al., 2009), the presence of superstars (Brown, 2011), or the level of monetary and non-monetary tournament incentives (Kali et al., 2018).
Main explanatory variables
Foreigner. Binary variable that indicates whether the player is local or a foreigner. For the few cases in which foreigners have acquired American citizenship prior to arrival on the PGA TOUR, we consider them to be local players.
International experience. In golf, a well-established measure of international experience is the average distance per season that a golfer has travelled to play in tournaments outside of his home country (Murray, 2017). Such a measure, which reflects the amount of experience that an agent accumulates per unit of time, has been referred to in the international business literature as the intensity of international experience (Clarke et al., 2013). Our calculations of international experience intensity start in 1996 (the first season with available records) and end in 2015, which is the last season that a 2016 PGA TOUR newcomer could have played abroad. The international golf tours comprised therein are the most relevant ones according to the OWGR (the Asian, Australian, Canadian, European, Japanese, South African, and Latin American tours), and only the tournaments played outside a player's home country are included in our calculations. After identifying the coordinates (i.e., latitude and longitude) of each international tournament in which the newcomer participated, as well as the coordinates of the player's residence (updated every season), we computed the distance between locations using the Haversine formula, which is commonly used to calculate the distance between points on the surface of a sphere. When calculating the geographic distance between tournaments, we considered two possible scenarios. The first scenario occurs when golfers enter tournaments that take place on consecutive weeks; in that case players generally fly directly from one tournament location to the next, because there are only three days between two tournaments, which are most often dedicated to practice rounds. The second scenario occurs when golfers enter tournaments that do not take place on contiguous weeks; in that case players generally fly back to their residence before going to the next tournament.
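The distance computation described above can be sketched as follows; the function name, the illustrative coordinates, and the simple out-and-back accumulation for non-consecutive weeks are our own, and the 6371 km Earth radius is the usual spherical approximation.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; a common approximation

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Illustrative season itinerary: a home base plus two tournaments played on
# non-consecutive weeks, so the player returns home in between (scenario 2).
home = (40.18, 44.51)                       # hypothetical residence (lat, lon)
events = [(35.68, 139.69), (-33.87, 151.21)]  # hypothetical tournament venues

season_distance = 0.0
for lat, lon in events:
    season_distance += 2 * haversine_km(home[0], home[1], lat, lon)  # out and back
print(round(season_distance, 1), "km travelled")
```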
Controls
Age. Prior studies in sports show that seasoned athletes are better at handling strain than inexperienced athletes (Hickman & Metz, 2015; Kali et al., 2018). Thus, it may be that experienced players have an advantage when confronted with a demanding situation, such as having to adapt to a new country. On the other hand, studies in psychology have found that youth is an important determinant of acculturation (Yoon et al., 2013); more specifically, individuals who migrate to the US at a younger age go through an easier adaptation process (Ghorpade et al., 2004). In order to account for the possible effect of age on a player's performance, we control for the player's age at the time of arrival on the PGA TOUR.
College. College golf in the US, under the umbrella organization of the National Collegiate Athletic Association, provides student athletes opportunities to travel and compete in collegiate contests. It may be that players who attended a US university will be more acquainted with both touring golf and local culture, thus facilitating their adaptation to the PGA TOUR. Indeed, out of the total number of PGA TOUR newcomers attending college golf in the US, the percentage of Americans is nearly 75%, so one could argue that a hypothetical worse performance (i.e., lower survival rates) of foreign newcomers could be due to the fact that many of them did not attend US college golf. We control for this by adding a binary variable that indicates whether the player studied in a college golf program in the US prior to their arrival on the PGA TOUR.
Country's golf popularity. Golf popularity varies by country. A well-known measure of a country's golf popularity is the total number of golf courses divided by the total population of the country (Royal & Ancient, 2019). Countries where golf is popular, like the US, exhibit a better record of survival in the PGA TOUR than countries where golf is not popular, like India. Accordingly, one could argue that a hypothetically worse performance of foreign newcomers in the PGA TOUR could be due to the fact that foreigners come mainly from countries where golf is not as popular as in the US. Adding a country's golf popularity as a control accounts for the possibility that foreign golfers underperform because they come from countries with a smaller pool of talent, rather than due to unfamiliarity with the host competitive environment or lack of support.
Official world golf ranking (OWGR). Although all players on the first-tier and second-tier tour have exceptional golf skills, they vary in terms of ability. We control for the ability of each player at the beginning of the season by their position in the Official World Golf Ranking (OWGR). Published records of the OWGR range from 1 (i.e., top of the ranking) to 1300 (i.e., bottom of the ranking). We transformed the OWGR into a categorical variable because some second-tier tour players are not registered in the OWGR at their time of entry (i.e., their rank does not fall within the top 1300 OWGR), which would imply losing those observations. The OWGR categories for first-tier tour players are: 1 (ranks 1-10); 2 (ranks 11-50); 3 (ranks 51-100); 4 (ranks >101). The OWGR categories for second-tier tour players are: 1 (ranks <300); 2 (ranks 301-400); 3 (ranks 401-600); 4 (ranks >601). Note that the models not only account for a player's OWGR at their time of arrival to the PGA TOUR, but also at the beginning of each of the subsequent seasons.
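A minimal sketch of the binning of OWGR ranks into the second-tier categories described above; the rank values are made up, and the decision to place unranked players (outside the top 1300) in the lowest category is our assumption about how those observations are retained.

```python
import numpy as np
import pandas as pd

def owgr_category_second_tier(rank):
    """Second-tier OWGR categories from the text; NaN = not in the top 1300,
    assumed here to fall into the lowest category so the observation is kept."""
    if rank is None or np.isnan(rank):
        return 4
    if rank <= 300:
        return 1
    if rank <= 400:
        return 2
    if rank <= 600:
        return 3
    return 4

owgr_at_entry = pd.Series([8.0, 260.0, 350.0, 550.0, 900.0, np.nan])  # hypothetical ranks
print(owgr_at_entry.apply(owgr_category_second_tier).tolist())
```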
Eligibility ranking. On the first-tier and second-tier tours there are 250 and 220 cards, respectively. However, not all players who want to enter a given tournament can do so, because tournaments on the first-tier and second-tier tours cannot exceed 150 players. The PGA TOUR regulates this through a set of rules, known as the eligibility ranking, which determines the order of preference to enter a tournament. Thus, it may be that players with a bad eligibility ranking (i.e., the lowest entry priority) have lower chances of surviving (i.e., keeping their card at the end of the season) because they cannot choose the tournaments they will enter. In order to account for this possibility, we control for players' eligibility ranking at the beginning of each season. Following the eligibility ranking categories of the PGA TOUR, the eligibility categories for first-tier tour players range from 1 (i.e., highest entry priority) to 3 (i.e., lowest entry priority), while for second-tier players they range from 1 (i.e., highest entry priority) to 4 (i.e., lowest entry priority).
Promotion (included only in the second-tier tour model). The models account for the fact that second-tier tour players may be promoted to the first-tier tour during the period in which they are still considered newcomers. If a second-tier player is promoted to the first-tier tour, this may affect his probability of keeping the card at the end of the season: since the first-tier tour is more competitive, a player who is promoted from the second-tier to the first-tier tour may see his probability of keeping the card diminished.
Model
Our dependent variable indicates whether a player keeps his card at the end of a season. Accordingly, we model the probability of survival (i.e., keeping the card) at the end of the season. While some players lose their card within the time window that we observed, others keep it throughout. We thus have multiple observations for the same player, and for some players we never observe card loss. While the data could be interpreted as a censored time to failure, which is often analyzed with Cox regression (Cox, 1972), we are here in a special case where events occur at discrete times: players retain or lose their cards only at the end of each season. In this case, an appropriate modelling approach proposed by Cox (1972) is a logistic regression applied to a dataset in which each season for each player is a separate line (Allison, 2010). It should be noted that within this framework covariates are allowed to change through time, as long as they are constant within a season. For instance, the eligibility ranking of a player varies from season to season, so his survival during each season takes into consideration his eligibility for that season. Other variables, such as age at entry, remain constant throughout a player's career, so the same value is repeated on multiple lines. Because we use a logistic regression, the exponentiated coefficients (odds ratios) are interpreted as multiplicative effects on the odds of keeping the card.
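A sketch of the discrete-time survival specification described above, written with statsmodels on simulated person-season rows; the variable names, the simulated data, and the particular covariate set are illustrative and are not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-season data frame: one row per player per season, with the
# season-end survival indicator and season-level covariates (names are made up).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "survived":   rng.integers(0, 2, n),
    "foreigner":  rng.integers(0, 2, n),
    "intl_exp":   rng.gamma(2.0, 5000.0, n),  # distance travelled per season (km)
    "age":        rng.normal(27, 3, n),
    "us_college": rng.integers(0, 2, n),
})

# Discrete-time survival model: a logit on the pooled player-season rows,
# with a foreigner x international-experience interaction (Hypothesis 2).
model = smf.logit(
    "survived ~ foreigner * intl_exp + age + us_college", data=df
).fit(disp=False)

# Exponentiated coefficients are odds ratios on the odds of keeping the card.
print(np.exp(model.params))
```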
Results
Tables 1 and 2 provide means, standard deviations, and correlations among the variables. We computed the variance inflation factors and detected no signs of multicollinearity.
The results of the first-tier tour (Models 1 and 2), presented on the left-hand side of Table 3, indicate that there is no evidence that being a foreigner on the first-tier tour has a significant impact on the probability of keeping a card (Hypothesis 1). The results of the second-tier tour models (Models 3 and 4), presented on the right-hand side of Table 3, indicate that being a foreigner on the second-tier tour has a negative and significant impact on the probability of keeping the card at the end of the season, thus supporting the idea that foreign newcomers are at a competitive disadvantage relative to local newcomers on the second-tier tour (Hypothesis 1). This effect is sizeable: if the player is a foreigner, and with controls at their mean values, the odds ratio is exp(−0.38) = 0.68. Table 3 also provides support for Hypothesis 2, indicating that the liability of foreignness in the second-tier tour is moderated by international experience intensity (i.e., distance travelled by a player). The moderating effect of international experience intensity is shown graphically in Figure 1. This figure indicates that the greater the distance travelled per season by foreign newcomers prior to their arrival on the PGA TOUR, the greater their probability of keeping the card, as stated in Hypothesis 2.
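To unpack the odds-ratio statement above, a small worked example is given below; the baseline survival probability used here is an arbitrary illustration, not a quantity reported in the paper.

```python
# Illustration of the odds-ratio interpretation. The baseline survival
# probability (0.50) is a hypothetical example, not a figure from the paper.
import numpy as np

odds_ratio = np.exp(-0.38)                 # ~0.68
p_local = 0.50                             # hypothetical survival probability, local newcomer
odds_local = p_local / (1 - p_local)
odds_foreign = odds_ratio * odds_local     # foreigner's odds are multiplied by 0.68
p_foreign = odds_foreign / (1 + odds_foreign)
print(round(odds_ratio, 2), round(p_foreign, 2))   # 0.68, 0.41
```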
Robustness Test: Alternative Measures of International Experience
Here we are attempting to verify whether the moderating effect of international experience on the ILOF is sensitive to the measure we used. Prior studies indicate that, besides intensity (i.e., distance travelled per season), both the scope and the length of international experience should be considered (Clarke et al., 2013). Scope refers to the geographical diversity of an agent's international experience, and it has been operationalized as the number of foreign countries where the agent has gained international experience (Le & Kroll, 2017). In our study, we measure scope as the number of countries in which golfers have competed professionally before their arrival at the PGA TOUR. Length refers to the duration of an agent's international experience, and it has been operationalized as the number of years that an agent has engaged in international activities (Brouthers et al., 2009). In our study, we measure length as the number of seasons a player has had a card in a professional golf circuit outside his home country. In addition to the intensity, scope, and length of international experience, past research shows that agents who forged their experience in demanding competitive contexts are better prepared for international competition (Sakakibara & Porter, 2001). Exposure to strong competition prevents agents from being complacent (Miller & Parkhe, 2002). Thus, one could argue that foreign players who build their experience in highly competitive international tours will be better equipped to overcome ILOF. We tested whether the moderating effect of international experience is robust to the inclusion of a competitiveness measure of international experience. The competitiveness of the international experience was calculated by averaging the competitiveness of the tournaments entered by the foreigner prior to arrival on the PGA TOUR. Each tournament's competitiveness was determined using the formula provided by the OWGR. 10 The results for the second-tier tour are presented in Models 6 through 8 of Table 4. First, we find no evidence indicating that the number of countries in which players have developed their international experience or the length of time accumulated gathering international experience has a positive moderating effect on foreigners' probability of keeping the card in the PGA TOUR. A possible interpretation of this is that, since qualifying to play in the PGA TOUR is the ultimate goal that many US and non-US golfers vie for, the fact that it takes a non-US player longer to qualify for the PGA TOUR may indicate a lack of adaptability to new competitive environments. Second, we find evidence that the competitiveness of the international contests in which second-tier tour foreigners participated prior to their arrival to the PGA TOUR has a positive moderating effect on foreigners' probability of keeping the card. Figure 2 shows graphically the moderating effect of the competitiveness of international experience. Overall, these results indicate that second-tier tour players who have forged their experience in highly competitive contests and have gained that international experience intensively (i.e., long distance travelled per season) are better equipped to compete on the PGA TOUR.
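As a rough illustration of how the four experience measures discussed above (intensity, scope, length, and competitiveness) could be assembled from tournament-level records, a hedged sketch follows; all column names are hypothetical and the tournament_strength field merely stands in for the OWGR-based competitiveness formula referenced in footnote 10.

```python
# Hedged sketch of computing intensity, scope, length, and competitiveness of
# pre-PGA TOUR international experience from a hypothetical tournament-level
# table. Column names (player_id, season, country, distance_km,
# tournament_strength) are illustrative, not the authors' data dictionary.
import pandas as pd

t = pd.read_csv("pre_pgatour_tournaments.csv")   # one row per player-tournament

measures = t.groupby("player_id").agg(
    length=("season", "nunique"),                     # seasons competed abroad
    scope=("country", "nunique"),                     # distinct countries competed in
    competitiveness=("tournament_strength", "mean"),  # mean tournament strength
    total_distance=("distance_km", "sum"),
)
# Intensity: distance travelled per season prior to PGA TOUR arrival.
measures["intensity"] = measures["total_distance"] / measures["length"]
print(measures.head())
```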
Discussion
In a labor market like the PGA TOUR, where there are no immigration restrictions (i.e., individuals qualify strictly based on performance criteria), where an unbiased measure of individuals' ability is publicly available, and where discrimination from the market's governing body is non-existent, we still find that foreign newcomers are at a competitive disadvantage vis-à-vis local newcomers in the second-tier tour. This is the case even though it is in the best interest of the PGA TOUR that foreigners thrive upon their arrival in order to attract international corporate sponsors and international broadcasting deals. Foreign newcomer's disadvantage not only has potentially deleterious financial implications for the PGA TOUR, but also for the foreign golfers themselves and the global corporations whose products they endorse (Knittel & Stango, 2014). There are two plausible interpretations as to why first-tier tour players would not suffer from liability of foreignness. The first explanation has to do with the fact that foreigners who qualified directly into the first-tier tour (through the Qualifying Tournament) may not only possess superior golfing skills, but also a superior ability to adjust to unfamiliarity hazards, such as the specificities of the PGA TOUR course design. Furthermore, they may also be particularly resilient to social pressures in the workplace (Guryan et al., 2009), such as the lack of home-field advantage (i.e., fan support). The second plausible explanation is related to status of first-tier tour foreigners at the time of their arrival to the PGA TOUR. After achieving a direct qualification for the first-tier tour, these players may be perceived by other players as extraordinarily skillful and, consequently, enjoy high status. It has been shown that high-status individuals receive more interpersonal help than low-status individuals (Van der Vegt et al., 2006). Thus, it may be that foreign newcomers in the first-tier tour enjoy a privileged position (vis-à-vis low-status foreigners) from which they can access resources (e.g., information, support) that mitigate the liability of foreignness.
Two limitations of this study are worth mentioning. First, one could admittedly raise the concern that results found in the labor market of sports may not be perfectly transposable to other labor markets. Contrary to what happens in individual sports, where ability can be assessed frequently and with precision thanks to publicly available rankings, there are markets where this may not be feasible. In the absence of commonly accepted mechanisms to identify an individual's ability, employers (even in global labor markets) may select contenders based on criteria that are not strictly performance-based (Lee et al., 2015); for instance, employers may favor nationalities that enjoy a high level of prestige in a specific profession, such as French chefs in the high-end restaurant market (Rao et al., 2003).
The second caveat relates to the fact that, although we argue that unfamiliarity and lack of support are two factors that put foreign newcomers at disadvantage vis-à-vis local newcomers (Hypothesis 1), our data does not allow us to identify which of the two factors is more influential in driving our results. Similarly, we cannot conclude whether or not the moderating effect of international experience on ILOF (Hypothesis 2) is operating primarily via the reduction of unfamiliarity hazards (i.e., by comparing new challenges with previously experienced situations in order to identify valid courses of action) or via the mitigation of lack of support (i.e., by relying on the players they met through their international experience route). The use of mixed methods (Williams & Shepherd, 2017) may be particularly helpful to address this issue in forthcoming studies.
In a more speculative vein, one could wonder whether there is a difference in the supply of local versus foreign talent feeding the PGA TOUR, so that if the pool of local talent is bigger and of better quality, it negatively impacts the likelihood of a foreigner's survival. In our paper, we already control for two factors that potentially influence the supply of foreign and local talent differently: First, we account for the popularity of golf in the country of origin (which we use to proxy each country's pool of talent); second, we account for college golf attendance in the US (which facilitates the transition to professional touring life and in which Americans represent 75% of total students). A third, more general factor that could influence the supply of local versus foreign talent differently is the cross-country variation in labor market dynamism. That is, the difference in labor market opportunities (i.e., outside professional golf) between the US and the rest of the world, which could influence the supply of athletes, so that the size and quality of the pool of local talent (i.e., US players) versus foreign talent (i.e., non-US players) feeding the PGA TOUR differs. This is an interesting yet complex question that requires exploring the relationship between labor market dynamism (or lack of) and self-employment choices, such as those that golfers take when they decide to become professional touring players. Although a detailed exploration of how cross-country differences in labor market dynamism may influence the supply of athletic talent is beyond the scope of our study, it is a question that is worth exploring.
Future research may also examine the mechanisms through which foreign athletes could overcome the liability of foreignness. Social network methods may be a particularly fruitful path to explore this issue. According to social network theory, individuals form networks and ties in order to access information and social support. Future research may use social network methods, for instance, to explore how the density of the network that an individual has developed prior to host-market arrival, or how central his same-nationality peers are in the host-market network, could play a role in his ability to mitigate unfamiliarity hazards. More research is needed into these issues.
Polar Switching and Cybotactic Nematic Ordering in 1,3,4-Thiadiazole-Based Short-Core Hockey Stick-Shaped Fluorescent Liquid Crystals
We report here the synthesis and thermotropic properties of novel short-core hockey stick-shaped liquid crystalline molecules based on the 1,3,4-thiadiazole core. Polar switching behavior is observed in the cybotactic nematic and smectic mesophases for the bent-core thiadiazole derivatives. The presence of the lateral methoxy moiety in the outer phenyl ring of the four-ring molecules facilitates the formation of spontaneous ordering in the nematic phase observed via X-ray diffraction measurements. Anomalous temperature dependence of spontaneous polarization on cooling is explained by the possible antiferroelectric packing of the molecules that require higher electric field for switching. The compounds exhibited a strong absorption band at ∼356 nm and a blue emission band at ∼445 nm with a good quantum yield of φf ∼0.39. The mega Stokes shift is observed and depends on the nature of the solvent.
■ INTRODUCTION
Nematic phase is the most technologically important and least ordered mesophase and the backbone of the multibillion-dollar display industry which serves for the upliftment of the status of human living during the last two decades. Modulation in this nematic phase, like the formation of twist-bend heliconical structure or macroscopic biaxiality, is the thrust area of contemporary liquid crystal (LC) research due to their promising potentiality for use in ultrafast switching devices. The available theory that correlates the reduced symmetry in biaxial nematics (N B ) or spontaneous chiral symmetry breaking in twist-bend nematics (N TB ) and modulated molecular structures with relevant features of molecular interactions is not yet completely resolved. It has been well predicted that the introduction of bent curvature in molecular architecture can lead to stabilize these modulations via formation of biaxial ordering 1 or by inducing twist-bend helicity in the nematic phase. 2 Hence, a good number of bentcore molecules have been synthesized that exhibited nematic ordering having a rich variety of complexity of molecular structures regarding the bending unit (bent angle and polarity), linking moieties between two aromatic rings (direction and polarity), substitutions at the outer and/or core unit (position, size, and polarity), terminal chains (nature and length), and so forth. 3 Unfortunately, till today, there is no unambiguous claim of biaxial nematogens that can revolutionize the modern LC-based switching device technology. The biaxiality in the nematic ordering arise due to the formation of macroscopic-sized clusters by the intermolecular coupling and preorganization of the constituent molecules 4 and is known as "Holy-Grail of LC science". 5 The incorporation of heterocyclic rings such as 1,3-oxazole, 1,3,4-oxadiazole, 1,2,4-oxadiazole, triazole, thiazole, 1,3,4thiadiazole, and so forth, instead of usual 1,3-disubstituted phenyl or naphthalene moiety, as a core unit in bent-core LCs produces new multifunctional materials due to the presence of the heteroatoms which provides a reduced symmetry, strong lateral and/or longitudinal dipole, and donor−acceptor interaction within the molecule that change polarity, polarizability, and geometry of the molecule which in turn affects the self-assembly process of the mesophase, transition temperature, electronic behavior, dielectric biaxiality, and other mesomorphic properties. Of these heterocyclic bentcore units, the use of the 2,5-disubstituted 1,3,4-thiadiazole unit in bent-core compounds exhibiting the nematic phase is rare and limited. 6 The majority of the 2,5-disubstituted 1,3,4thiadiazole-based mesogens reported in the literature are either rod-shaped or rod bent-shaped molecules, exhibiting conventional nematic and smectic phases at the higher temperature. 7 Very recently, we have reported new four-ring hockey-stick LCs of 1,3,4-oxadiazole 8 and 1,3,4-thiadiazole bent-core units. 9 It is observed that most of the 1,3,4-oxadiazole and 1,3,4thiadiazole-derived compounds exhibited the nematic phase. It is interesting to note that the terminal chain of the 1,3,4thiadiazole-based hockey stick-shaped molecule drastically influences the mesophase behavior. The lower chain length exhibited the nematic phase, whereas the higher chain length displayed the SmA phase. Further, the isotropic temperature of the 1,3,4-thiadiazole-based molecules are quite high (∼300− 350°C). 
The high isotropic temperature of the 2,5-disubstituted-1,3,4-thiadiazole compounds restricts the characterization of the phase and limits the application of the materials in display devices. The transition temperature and the phase behavior of heterocyclic-based bent-core mesogens are sensitive to structural modification, in particular, lateral substitution or functionalization of the molecule. 10 Therefore, to understand the phase behavior, we have designed and synthesized a new series of hockey stick-shaped molecules containing the 2,5-diphenyl-1,3,4-thiadiazole derivative, possessing a lateral methoxy moiety in the terminal phenyl ring of the long arm of the molecule. The compounds are shown to exhibit an ordered pattern in the nematic phase observed via X-ray diffraction (XRD) measurements. Most importantly, polar switching is observed in the lower part of the nematic phase and in the smectic phase of these thiadiazole-based compounds. A blue light emission band at ∼445 nm with a good quantum yield of φ f ∼0.39 was also observed in these materials.
■ RESULTS AND DISCUSSION
Design and Synthesis. New 1,3,4-thiadiazole-derived hockey stick-shaped short-core molecules, possessing a lateral methoxy moiety at the terminal phenyl ring of the long arm of the molecular framework and an imine linkage, have been designed. The molecule is an unsymmetrical bent-core molecule possessing two arms of different lengths. One arm contains two phenyl rings and is considered the longer arm of the molecule, carrying a 4-n-alkyloxy chain of different lengths (4-n-butyloxy, 4-n-octyloxy, 4-n-dodecyloxy, or 4-n-octadecyloxy), while the other arm, bearing a 4-n-butyloxy chain and one phenyl ring, is considered the shorter arm of the molecule. The methoxy group is introduced at the terminal phenyl ring of the elongated arm of the molecule. The detailed synthesis of the compounds is represented in Scheme 1 and was carried out via the following procedure, as elaborated in the Experimental Section. The intermediate compound, 2-(4′-nitrophenyl)-5-(4″-n-butyloxy)phenyl)-1,3,4-thiadiazole (1), was synthesized by reaction of 4-nitrobenzoic acid-N′-(4′-n-butyloxybenzoyl)hydrazide with Lawesson's reagent in dry toluene under a nitrogen atmosphere. Further, the nitro group of compound (1) was reduced using stannous chloride to produce 2-(4′-aminophenyl)-5-(4″-butyloxy)phenyl)-1,3,4-thiadiazole (2). 3-Methoxy-4-n-alkyloxy-benzaldehydes (3) were synthesized by Williamson etherification of 3-methoxy-4-hydroxybenzaldehyde (vanillin) with n-alkyl bromides. Schiff base condensation of 3-methoxy-4-n-alkyloxy-benzaldehydes (3) with 2-(4′-aminophenyl)-5-(4″-butyloxy)phenyl)-1,3,4-thiadiazole (2) was used to obtain the hockey stick-shaped molecules containing 1,3,4-thiadiazole (CV-nT). The high-resolution mass spectrometry (HRMS) and elemental analysis of the synthesized compounds were consistent with the targeted molecular formula, which in turn confirmed the purity of the compounds (see Supporting Information, Figure S4). The characteristic Fourier transform IR (FT-IR) spectra of the compounds (CV-nT) are presented in Supporting Information, Figure S1. The FT-IR spectra of the CV-nT compounds showed the typical peak of the thiadiazole ring at ∼1070 cm −1 .
Mesophase Behavior. Polarizing Optical Microscope and Differential Scanning Calorimetry Study. The phase transition temperatures, associated enthalpy, and entropy of the synthesized new thiadiazole-based hockey stick-shaped molecules CV-nT (n = 4, 8, 12, and 18), obtained from differential scanning calorimetry (DSC) at a scan rate of 5°C min −1 in the second heating and cooling scans, are summarized in Table 1. The mesomorphic behavior of the new hockey stick-shaped molecules was investigated under polarizing optical microscopy (POM) with crossed polarizers. All the new compounds CV-nT (n = 4, 8, 12, and 18) exhibited the nematic phase with underlying smectic C phases. The compound CV-4T on heating melts at 132.4°C to a focal conic texture, which on further heating results in the observation of the schlieren texture at 153.8°C and finally becomes an isotropic liquid at 272.8°C. On slow cooling of compound CV-4T from the isotropic liquid, the droplet texture of the nematic phase appeared at 269.0°C. The nematic droplets coalesce to form the schlieren texture and immediately transform to the homeotropic texture. On further cooling, a weakly birefringent greenish-colored homogeneous optical texture grows from the background of the homeotropic texture at 225.6°C, named the N Cyb phase.
On further cooling, an arc-like texture appeared from the weakly birefringent optical texture and finally transformed into a focal conic texture at 145.8°C. The optical textures of CV-4T are presented in Supporting Information, Figure S5a. The compound CV-8T on heating melts at 97.9°C to the focal conic texture, which on further heating transforms to the schlieren/droplet texture at 175.9°C and finally becomes an isotropic liquid at 234.4°C. On slow cooling of the sample from the isotropic liquid, the droplet texture of the N phase appeared at 233.4°C (see Figure 1a). The droplet texture coalesces to form the schlieren texture, with the appearance of a secondary schlieren texture at 232.0°C (Figure 1b). The formation of the secondary schlieren texture is attributed to nonsingular domain walls which nucleate during the surface anchoring transition in a uniaxial nematic phase, as reported in bent-core 1,3,4-oxadiazole-based molecules. 11 On further cooling, a homeotropic domain appeared in the texture, and finally the texture becomes the weakly birefringent greenish texture at 190.0°C (see Figure 1c). A batonnet-like texture developed from the greenish texture as a distinct transition (Figure 1d) at 177.0°C, and finally the batonnets coalesce to form a focal conic-like texture (Figure 1e) at 170.0°C. On further cooling, the broken focal conic texture appeared and transformed to schlieren textures at 130.5°C (Figure 1f) and finally crystallized below 100.0°C. Similar phase behavior was observed for the long-chain compounds (CV-12T and CV-18T) (see Supporting Information, Figure S5b,c).
Further, the thermal behavior of all the new hockey stickshaped LC compounds was also examined with DSC on both second heating and second cooling at the rate of 5°C min −1 under the nitrogen atmosphere. Phase transition temperatures obtained from DSC are agreed well with microscopy observations (see Table 1). From Table 1, it is further noted that the thermal stability of the synthesized compounds increases with increase in the chain length (4-n-alkyloxy), and their decomposition temperature is ∼150°C above the isotropic temperature, indicating high thermal stability of the compounds which is suited for the physical measurement without the decomposition of the sample. The thermogravimetric analysis (TGA) and differential thermal analysis (DTA) thermogram of the mesogenic compounds (CV-nT) are presented in Supporting Information, Figure S7. As revealed in the representative DSC thermogram in Figure 2, the compound CV-8T exhibits the enantiotropic nematic phase over a wide temperature range with four endothermic phase transitions, namely, crystal to smectic C (Cr−SmC), smectic C to N Cyb (SmC−N Cyb ), N Cyb to N, and nematic to isotropic (N−Iso). The measurable isotropic to nematic transitions are associated with a low enthalpy value (0.59 kJ mol −1 ) and low entropy value (1.17 J mol −1 K −1 ). Further, on careful insights into the nematic transition, two noticeable transitions are observed having vanishingly small enthalpy and were detected during N−N Cyb phase transition, indicating that the two phases are closely related in structure (see inset of Figure 2). It is noted that the N−N Cyb transition temperature window is quite narrow (∼2°C). Therefore, the N−N Cyb transition enthalpy change could not be detected. The lower value of the enthalpy and entropy change during the transition indicated the small change in the ordering of the molecules in the phase. Moreover, hardly any measurable enthalpy change values associated for the N Cyb −N transitions and N Cyb −SmC transition are found which can be compared to Iso−N transition, indicating a cybotactic cluster formation in these nematic phases, that is, layer fragments already exist in the nematic phase and are clubbed to infinite layers at the phase transition. 12 These N−N Cyb transitions were observed in all the other compounds. However, because of the narrow temperature window and the enthalpy change, it is difficult to calculate their associated enthalpy change during the transition. The DSC thermogram of the other compounds is presented in Supporting Information, Figure S6.
XRD Study. In order to understand the characteristics of the observed phase structures exhibited by the compounds, a detailed X-ray investigation was carried out via small- and wide-angle XRD (SAXS/WAXS) measurements. The samples were filled into thin capillaries (0.5 mm) and aligned via repeated heating and cooling. All the compounds exhibited nematic and smectic phase structures at higher and lower temperatures, respectively.
As a representative case, the diffractogram of compound CV-8T at 200°C comprised one small-angle peak with a d-value of 34.27 Å and a broad peak at wide angle, which corresponds to the liquid-like order of the alkyl chains (see Figure 3).
Additionally, a small peak was observed in the wide-angle region with a d-value of 3.54 Å, which signifies the presence of core−core correlation between the molecules. On cooling the sample, the small-angle peak gradually became sharper. At 120°C, the observed d-value of 28.40 Å, which is lower than the molecular length (L) of ∼34 Å (d/L ratio of 0.83), signifies the presence of tilt ordering in the smectic layers. Additionally, in the smectic phase, a second-order reflection peak with a d-value of 14.15 Å was observed.
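As a quick consistency check on the tilt implied by this layer contraction, one can assume the simple rigid-rod relation d = L cos θ (an assumption for illustration; the text does not specify the model used) and estimate the tilt angle from the quoted values:

```python
# Rough tilt-angle estimate from the layer-spacing contraction, assuming the
# rigid-rod relation d = L*cos(theta); d and L are the values quoted in the text.
import math

d = 28.40   # smectic layer spacing at 120 °C, in Å
L = 34.0    # approximate molecular length, in Å
theta = math.degrees(math.acos(d / L))
print(f"estimated tilt angle ≈ {theta:.0f} degrees")   # ≈ 33 degrees
```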
As a representative case, detailed XRD analysis of the compounds CV-8T and CV-4T has been carried out to explain the ordering phenomena in the nematic phase. Both compounds exhibited the nematic (N) phase at higher temperature and the smectic (Sm) phase at lower temperature. In order to understand the behavior of the N and Sm phases, azimuthal and radial plots of the first small-angle peak have been calculated. The small-angle peak of the N phase is found to be azimuthally bifurcated, and the bifurcation becomes more prominent with decreasing temperature in the nematic phase. The 2D XRD images of the cybotactic nematic ordering of CV-8T and CV-4T at different temperatures are presented in Figure 4 and Supporting Information, Figures S8 and S9. However, the bifurcated peak transforms to a single peak in the smectic phase. In the nematic phase of compound CV-8T, the azimuthal plot of the small-angle peak displays its bifurcated nature (Figure 5a). However, the first small-angle peak of the smectic phase is azimuthally single (Figure 5c). The angular separation of the bifurcated peak is found to increase from ∼60° to ∼75° with decreasing temperature, and the respective average angular full width at half-maximum (fwhm) (average of the bifurcated peak) is found to decrease linearly from ∼51° to ∼26° in the nematic phase and suddenly decreases to ∼14° in the smectic phase (Figure 6a). The observation of the bifurcated peak confirms two different preferential orientations in the nematic phase, which is due to self-arrangement of the molecules and could be attributed to the cybotactic nature of the nematic phase. Further, the radial plot exhibits a single peak in the nematic as well as in the smectic phase (Figure 5b,d). The measured d-spacing and correlation length are shown in Figure 6b. The d-spacing decreases slowly from ∼35 to 34 Å in the nematic phase and suddenly to 30 Å in the smectic phase with decreasing temperature. However, the correlation length increases slowly in the nematic phase (from ∼40 to ∼55 Å) and then increases suddenly to 140 Å in the smectic phase. On the other hand, in the nematic phase of compound CV-4T, the azimuthal plot of the small-angle peak also displays its bifurcated nature (Figure 7a), and the first small-angle peak of the smectic phase is also azimuthally single (Figure 7c). However, the angular separation of the bifurcated peak is found to increase from ∼61° to ∼76° and then decrease to ∼74° with decreasing temperature, and the respective average angular fwhm (average of the bifurcated peak) is found to decrease from ∼62° to ∼44° and then increase to ∼48° in the nematic phase and then suddenly decrease to ∼37° in the smectic phase (Figure 8a). Further, the radial plot exhibits a single peak in the nematic as well as in the smectic phase (Figure 7b,d). The measured d-spacing and correlation length are shown in Figure 8b. The d-spacing decreases slowly from ∼30 to 27 Å in the nematic phase and then to 25 Å in the smectic phase with decreasing temperature.
However, the correlation length in the nematic phase increases slowly (from ∼31 to ∼50 Å) and then increases suddenly to 124 Å in the smectic phase.
Electric Field Studies. The possibility of polar switching was examined for one representative sample, CV-8T. To the best of our knowledge, no measurement of the spontaneous polarization of thiadiazole-based short bent-core LCs has been reported. Under an applied triangular wave voltage, only a feeble and broad current peak (peak A) is observed per half cycle in the isotropic phase and on the high-temperature side of the nematic phase down to 195°C (Figure 9a). As the temperature is lowered, another small peak appears to the right of peak A and is designated peak B. Peak B gradually becomes a prominent single peak on cooling (below 170°C), while peak A remains almost the same (Figure 9a−d). Based on the temperature dependence, the current peaks A and B can be interpreted as follows: as peak A is almost temperature-independent and present in all the mesophases including the isotropic phase, it can be designated as an ionic peak. However, peak B is strongly temperature-dependent and completely vanishes far below the isotropic−nematic transition. Thus, the possibility of its ionic origin is ruled out, and it is considered a polarization current peak. The intensity of current peak B increases on cooling down to 160°C. As the sample is further cooled, the intensity of the current peak starts decreasing. A similar decrease of polar peak intensity on cooling was observed earlier in oxadiazole- 13 and resorcinol-based 14 bent-core molecules in the smectic phase near the Sm−Cr transition. At ∼115°C, the SmC−SmC A transition occurs, and a small additional peak can be seen to overlap with peak A; it is designated A′ (Figure 9e). Spontaneous polarization (P S ) is measured by calculating the area under peak B (Figure 9f). P S varies from ∼76 nC/cm 2 at 195°C to a maximum value of ∼155 nC/cm 2 at 160°C. Below this point, P S decreases continuously as the Sm−Cr transition is approached and takes the lowest value of ∼82 nC/cm 2 . The decreasing trend of polarization on cooling can be attributed to stronger antiferroelectric packing of the molecules, which requires a higher electric field for switching. 15 Next, we investigated the optical switching behavior in different mesophases before and after applying the electric field (Figure 10) under the polarizing microscope. No abrupt change of texture is observed in the nematic phase after applying the field (Figure 10a,b). However, in the smectic phases, textural changes can be noticed, although the color remains almost the same (Figure 10c−f). The chiral domains become well defined after applying the electric field, but the extinction regions, which are parallel to either of the polarizers, do not move. As confirmed by XRD, the smectic phases are tilted, and hence the polarization reversals of the molecules take place collectively around the long molecular axes, 16 leaving the orientation of the director unchanged.
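The paragraph above obtains P S from the area under the polarization current peak; the following minimal sketch illustrates that reduction, assuming the standard relation P S = (1/2A)∫I dt over the reversal peak, with a placeholder current trace and electrode area rather than the paper's data.

```python
# Minimal sketch of extracting spontaneous polarization from a switching-current
# trace, assuming Ps = (1/(2*A)) * integral of the polarization current over the
# reversal peak. The arrays and the electrode area are placeholders, and the
# ionic background (peak A) is assumed to have been subtracted already.
import numpy as np

t = np.linspace(0.0, 5e-3, 2001)                      # time window around peak B, in s
i_peak = 1e-6 * np.exp(-((t - 2.5e-3) / 5e-4) ** 2)   # placeholder polarization current, in A

A_electrode = 0.5e-4                  # active electrode area, in m^2 (placeholder)
charge = np.trapz(i_peak, t)          # integrated charge under the peak, in C
Ps = charge / (2.0 * A_electrode)     # spontaneous polarization, in C/m^2
print(f"Ps ≈ {Ps * 1e5:.2f} nC/cm^2")  # 1 C/m^2 = 1e5 nC/cm^2
```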
Dielectric Studies. Compound CV-8T is investigated by dielectric spectroscopy in the temperature range of 250−100°C. The temperature-dependent complex dielectric permittivity is obtained in a planar cell, and the dielectric data are fitted with the Havriliak−Negami fitting function. 17 The obtained dielectric data were fitted with the extended Havriliak−Negami function (detailed notations of the equation are described in Supporting Information, page S19), and the imaginary part of the permittivity was analyzed using

ε*(ω) = −iσ 0 /(ε 0 ω s ) + Σ k Δε k /[1 + (iωτ k ) α ] β + ε ∞ (1)

where Δε k is the dielectric strength and τ k is the relaxation time of each individual process k involved in dielectric relaxation, ε 0 is the vacuum permittivity (8.854 pF/m), σ 0 is the conduction parameter, ω is the angular frequency, and ε ∞ is the high-frequency limit of the permittivity. The exponents α and β are empirical fit parameters, which describe symmetric and non-symmetric broadening, respectively, of the relaxation peaks. The first term on the right-hand side of eq 1 describes the motion of free charge carriers in the sample. The exponent s of the angular frequency determines the nonlinearity of the dc conductivity arising from charge accumulation at the interfacial layers. In the case of Ohmic behavior (s = 1), σ 0 is the Ohmic conductivity of the smectic material.
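For illustration, a minimal sketch of fitting the dielectric loss to eq 1 is given below for a single relaxation process plus the conductivity term; the frequency grid, loss values, and starting guesses are placeholders, and the single-process simplification is an assumption (the measured spectra of CV-8T resolve two processes, P 1 and P 2).

```python
# Minimal sketch of fitting the dielectric loss (imaginary part of eq 1) with one
# Havriliak-Negami process plus a dc-conductivity term, using scipy.
# Frequencies, loss values, and starting guesses are placeholders, not data from
# the paper.
import numpy as np
from scipy.optimize import curve_fit

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def hn_loss(f, d_eps, tau, alpha, beta, sigma0, s):
    """Dielectric loss eps'' for one HN process plus a conductivity term."""
    w = 2.0 * np.pi * f
    hn = d_eps / (1.0 + (1j * w * tau) ** alpha) ** beta
    # eps'' is minus the imaginary part of the HN term, plus the conductivity term
    return -hn.imag + sigma0 / (EPS0 * w ** s)

# Placeholder "measured" spectrum (replace with real loss data)
f = np.logspace(1, 7, 200)
eps_loss_measured = hn_loss(f, 30.0, 1e-4, 0.9, 0.8, 1e-9, 1.0)

p0 = [20.0, 1e-4, 0.9, 0.9, 1e-9, 1.0]   # starting guesses for the fit
popt, _ = curve_fit(hn_loss, f, eps_loss_measured, p0=p0, maxfev=20000)
print(dict(zip(["d_eps", "tau", "alpha", "beta", "sigma0", "s"], popt)))
```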
The dielectric spectra of compound CV-8T exhibited typically two distinct relaxations: one at low-frequency region (∼70 Hz to 4 kHz), denoted as peak P 1 , and the other at highfrequency region (∼100 kHz to 6 MHz), designated as P 2 (Figure 11a). The origin of P 2 can be identified as rotation of the molecules along their short axes, while the low-frequency peak P 1 can be attributed to collective motion of the bent-core molecules. Such low-frequency collective modes have been observed earlier in oxadiazole-based compounds 13 and four ring bent-core LCs. 18 The dielectric strength of peak P 1 (δε 1 ) shows strong temperature dependence, and the transition from the N−N Cyb phase and N Cyb −SmC phase can be distinctly identified (Figure 11b). δε 1 increases on cooling in the nematic phase but tend to decrease near N−N Cyb transition, attaining minima and again rises sharply in the N Cyb phase. In the SmC phase, δε 1 has the highest value of ∼50 but decreases slightly on cooling. In the SmC A phase, the dielectric strength decreases rapidly owing to strong antiferroelectric ordering among the molecules. The relaxation frequency (f R1 ) is the highest at high-temperature nematic phase (∼4 MHz) and continuously decreases up to 66 Hz at 100°C due to increase in viscosity of the medium which impedes the collective motion of the molecules. The dielectric strength of P 2 increases, and relaxation frequency decreases monotonously on cooling (Figure 11c).
Photophysical Studies. In Dichloromethane. The photophysical characteristics of the hockey stick-shaped molecules were examined via UV−visible absorption and emission spectroscopy in dichloromethane (DCM) solution. The absorption and emission spectra of CV-4T, a representative hockey stick-shaped compound in dilute DCM (c = 1 × 10 −5 M), are presented in Figure 12a. An absorption maximum at 358 nm having high molar extinction coefficient (ε max = 59 500 M −1 cm −1 ) can be attributed because of the spin-allowed π−π* transition of the π-conjugated aromatic system involving the phenylthiadiazole framework. 9 The optical energy band gap (E g ) of CV-4T as estimated from the onset of the absorption maxima was found to be 3.03 eV. This small band gap of the compounds qualifies as the prospective contender for application in organic light-emitting diodes and organic semiconductors. The compound (CV-4T) in diluted solution displayed an emission band in a region of 375−550 nm with maximum emission intensity in the violet region at 430 nm.
The emission peak appears at ∼430 nm with a Stokes shift of about 72 nm (4677 cm −1 ). This large Stokes shift arises from the push−pull organization in the molecule, with two strong electron-donating 4-n-alkoxy moieties and an electron-deficient 1,3,4-thiadiazole moiety. 19 The large Stokes shift value reflects the structural relaxation of the excited molecule and significant changes in molecular conformation upon excitation. 20 The solution absorption spectra of the other hockey stick-shaped compounds show an almost identical absorption band λ abs at ∼358 nm. Indeed, no considerable difference in absorption properties was observed with variation of the terminal aliphatic chains.
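The wavenumber value quoted above follows from converting the absorption and emission maxima to reciprocal centimeters, Δν = 1/λ abs − 1/λ em ; a quick numerical check:

```python
# Quick check of the Stokes shift in wavenumbers, using the absorption and
# emission maxima quoted in the text (wavelengths converted from nm to cm).
lambda_abs_nm = 358.0
lambda_em_nm = 430.0

stokes_cm1 = 1.0 / (lambda_abs_nm * 1e-7) - 1.0 / (lambda_em_nm * 1e-7)
print(f"Stokes shift ≈ {stokes_cm1:.0f} cm^-1")   # ≈ 4677 cm^-1
```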
Solvent Effect. The absorption spectra of CV-4T in solvents of different polarity are shown in Figure 12b. The relevant data from absorption spectroscopy are presented in Table 2, where the solvents are listed in order of increasing polarity. It was noted that the absorption spectra are fairly independent of the solvent polarity, which clearly indicates that the dipole moments of the ground state and the corresponding Franck−Condon excited state are similar. Interestingly, the emission maxima are considerably red shifted with increasing solvent dipole moment (see Figure 12c). This may be explained by the fact that, upon excitation with UV light, the thiadiazole compounds are excited to a higher vibrational level of the first excited singlet state, and the additional vibrational energy is quickly dissipated into neighboring solvent molecules, gradually relaxing to the lowest vibrational energy level. The neighboring solvent molecules help stabilize and further lower the energy level of the excited state by solvent relaxation around the excited fluorophores. This reduction in the energy separation between the ground and excited states results in a red shift of the fluorescence emission. Increasing the solvent dipole moment yields a consistently greater reduction in the energy level of the excited state, while reducing the solvent polarity decreases the solvent effect on the excited-state energy level. Moreover, solvent relaxation effects in the fluorescence can have a dramatic consequence on the magnitude of the Stokes shift. With increasing solvent dipole moment, the emission of the compound (CV-4T) shifted from the violet (∼423 nm) to the blue region (∼452 nm), and the Stokes shift value increases on increasing the solvent polarity (see Figure 12d and Table 2). In the highly polar solvent acetonitrile the compound exhibits a large Stokes shift of 97 nm (6045 cm −1 ), whereas in the other, less polar solvents the Stokes shift is ∼71−89 nm (∼4768−5618 cm −1 ). Similar behavior is reported in the donor−π−acceptor stilbene molecule. 21 Therefore, such compounds having a high Stokes shift value in polar solvents are potential candidates for use in fluorescent sensors. 22 The large Stokes shift and the solvatochromic effect in the excited state indicate that these molecules exhibit charge separation, namely, an intramolecular charge transfer character in the excited state (ICT state). 23 Furthermore, fluorescence quantum yields (φ f ) of the hockey stick-shaped molecule in the solution state were determined following standard procedures, with quinine sulfate in degassed 0.1 M H 2 SO 4 as a reference standard (φ = 0.54). 24 Interestingly, the compound CV-4T exhibits a moderately high quantum yield (φ f ≈ 0.39) in tetrahydrofuran as compared to the other solvents (see Table 2).
Concentration-Dependent Emission Spectra. The fluorescence spectra of CV-4T at different concentrations in toluene showed a small red shift (∼8 nm) with increasing concentration of the compound, as depicted in Figure 12e. This red shift is due to the development of aggregated species as the monomer concentration increases, suggesting the formation of aggregates from the monomer and an increasing population of J-aggregates. 25 Similarly, the red shift is also observed in DCM. Similar behavior was not observed in polar solvents such as tetrahydrofuran and acetonitrile.
Interestingly, it was observed that in polar solvents the intensity of the peak decreases with increasing concentration, as presented in Figure 12f. At concentrations larger than 1 × 10 −5 M, self-absorption starts to decrease the intensity of the emission peak, but the spectral shape is otherwise unaffected up to 1 × 10 −4 M, signifying that only intrinsic intramolecular emission occurs. 26
■ CONCLUSIONS
A new series of 1,3,4-thiadiazole-based short-core hockey stick-shaped molecules possessing a lateral methoxy group has been designed and synthesized. The compounds exhibited a wide-range enantiotropic nematic phase with underlying tilted smectic ordering. The molecules in the nematic phase are arranged in such a fashion as to exhibit a four-spot pattern in the 2D XRD images, indicating the presence of cybotactic ordering. Polar switching was observed in the low-temperature nematic region and the smectic phases of CV-8T, and it turns out to be an antiferroelectric organization near crystallization. This is an unusual phenomenon of polar ordering in the nematic and smectic phases of a thiadiazole-based hockey stick-shaped molecule. The compounds showed a strong absorption band at ∼356 nm and a blue emission band at ∼445 nm, with a decent quantum yield of ∼0.39 in tetrahydrofuran as compared with other solvents. A mega Stokes shift is observed, and the Stokes shift value increases on increasing the polarity of the solvent.
Measles-Associated CNS Complications: A Review
Measles-associated CNS complications comprise: (1) primary measles encephalitis (PME), (2) acute postinfectious measles encephalomyelitis (APME), (3) measles inclusion body encephalitis (MIBE), and (4) subacute sclerosing panencephalitis (SSPE). The neuropathogenesis, host immune status, and clinical settings differ, but all involve brain-virus and immune interactions that lead to severe morbidity and mortality, as discussed ahead.
Introduction
Measles is a highly contagious infection caused by the measles virus, which belongs to the Paramyxoviridae family. The virus enters the human body (its sole reservoir) via the respiratory system in air droplets. It targets macrophages and dendritic cells in the lungs which express SLAM receptors. 1 These cells then migrate to lymph nodes and transmit the infection to lymphocytes expressing SLAM receptors, causing viremia. At a later stage, the infected cells transmit the infection to nectin-4-expressing epithelial cells in the respiratory tract, from where virions are shed in mucus and spread droplet infection through coughing. 2 During the acute phase of measles infection, the patient develops symptoms such as fever, cough, coryza, conjunctivitis, and nasal congestion, followed by a morbilliform rash and Koplik spots over the buccal mucosa (in 70% of cases). 3,4 Measles virus infection also causes an immunocompromised state in the host, making the host susceptible to secondary infections. 5 Beyond this, measles infection can lead to several complications including diarrhea, otitis media, pneumonia, CNS infections and sequelae, blindness, and hearing loss. 4,6 The morbidity and mortality related to measles are higher in developing countries owing to undernutrition, large populations, and inaccessibility of health care and vaccination. 7 This review is undertaken to highlight CNS measles infections and the associated morbidity.
Measles virus affects the CNS either during active infection or after the infection has become inactive. 6 The CNS complications are: (1) primary measles encephalitis (PME), (2) acute postinfectious measles encephalomyelitis (APME), (3) measles inclusion body encephalitis (MIBE), and (4) subacute sclerosing panencephalitis (SSPE). The neuropathogenesis, host immune status, and clinical settings differ, but all involve brain-virus and immune interactions that lead to severe morbidity and mortality, as discussed ahead. The global incidence of measles has declined significantly with immunization: from 2000 to 2017, an 83% decline in measles infection was observed. In 2017, around 173,330 measles cases were reported worldwide, and approximately 110,000 people died of it, the majority of whom were in Asian and African countries. 8,9 Unfortunately, a rise in the number of measles cases in developing countries and frequent outbreaks in industrialized countries have been reported in the past 3 years. A 300% rise in measles cases was observed in many developed countries (United States and France) during 2019. 9,10 Decreased immunization due to vaccination hesitancy is considered the most important factor leading to this reemergence. 11 Measles infection leading to CNS complications is rare, but often detrimental. Neurological complications of measles are reported to occur in around 4 per 1,000 measles cases, of which one per 1,000 had encephalitis and behavioral changes each, and two per 1,000 had motor disturbances. 12 The four different types of measles-associated encephalitis have been reported to have different epidemiological profiles (►Table 1). PME and APME occur in approximately 1 to 3 per 1,000 measles-infected patients. MIBE is rare, confined to immunocompromised hosts, and may be considered an opportunistic infection. 13 The reported incidence of SSPE is 1 in 10,000 to 20,000 measles cases; it generally occurs after a long latent period following measles infection and is associated with 100% mortality. 13
How Does Measles Virus Enter the Brain?
The entry of measles virus into the brain remains unclear; however, different models have shown different mechanisms for measles-related CNS infection: 1. Via receptors: The SLAM receptor is the main receptor for morbillivirus infection, including both canine distemper virus (CDV) and measles virus. This receptor is expressed on dendritic cells, thymocytes, lymphocytes, and macrophages in humans. In CDV-infected dogs, SLAM receptor expression increases in the epithelium of many organs such as the lungs and the gastrointestinal and urinary tracts; however, these detected SLAM-positive cells are immune or inflammatory cells. Brain cells are negative for SLAM expression, yet various brain cells are infected with CDV, suggesting a role for other receptors in viral spread within the CNS. 14,15 Nectin-4 receptor expression has been reported in various CNS cells (Purkinje cells, neurons, ependymal cells, and choroid plexus cells) in dogs, suggesting it contributes to CDV neuropathogenicity, while it is hardly detected in human brain cells. Astrocytes in the dog brain were found to be infected by CDV, leading to demyelination, although they do not express nectin-4. These findings further suggest the role of an unknown receptor in the neuropathogenicity of CDV.
Measles-Associated Different CNS Complications; Associated Morbidity and Mortality
There has been considerable controversy regarding the mechanism of measles encephalitis: is it due to direct CNS infection, or is it immune mediated? It has been suggested that early CNS symptoms (within a week of measles infection) can only be explained by direct neuroinvasion of the virus, and the late symptoms by an autoimmune mechanism. 21,22 Some investigators failed to demonstrate viral proteins or RNA in the brain of measles encephalitis patients, 23 while others recovered measles virus from the brain parenchyma and CSF of such patients. 24,25 The clinical profile and characteristics of measles virus in each disease are discussed below and summarized in ►Table 3.
Primary Measles Encephalitis (PME)
The measles virus directly infects the CNS during acute measles infection in a previously healthy child, leading to measles encephalitis. 26 The virus replicates in brain cells and injures neurons, which further causes lymphocytic infiltration of the brain parenchyma, meninges, and CSF; hence, infectious measles virus can be detected in brain cells and CSF. 26 PME occurs in immunocompetent, unvaccinated or partially vaccinated measles-infected patients (more commonly in children than adults) with a frequency of 1 to 3 per 1,000 cases. 21 The symptoms of encephalitis generally develop during the exanthema phase or within a week of the measles prodrome. 27 The child presents with fever, headache, irritability, encephalopathy (altered mental status), seizures, involuntary movements or motor deficits (hemiplegia/paraplegia), and coma. The child may have features of raised intracranial tension (ICT) due to brain edema. 21 The long-term neurological sequelae include hemiplegia or paraplegia, intellectual disability, recurrent seizures, and deafness. The diagnosis of PME is mostly clinical. CSF examination shows marked lymphocytic pleocytosis and mildly elevated protein.
Neuroimaging shows edema and/or focal signal changes in the white matter, putamen, caudate nucleus, and thalamus. Viral RNA can be detected in CSF via real-time PCR. 21 The treatment of PME is mainly supportive, including continuous vitals monitoring, anticonvulsants, measures for raised ICT (mannitol or hypertonic saline), antipyretics, and fluid and electrolyte management. 28 Ribavirin has shown anti-measles properties in vitro and has been given in complicated measles cases via intravenous or aerosol routes. 29 However, it has not been approved for measles encephalitis by the U.S. FDA. 30 Controlled trials need to be done to prove the efficacy of ribavirin in measles. Mortality is observed in around 10 to 15% of patients and long-term neurological sequelae in 25% of patients. 31,32
Acute Post Measles Encephalitis
It is also referred to as acute measles encephalitis (AME), acute demyelinating encephalomyelitis (ADEM), postinfectious encephalitis (PIE), or acute disseminated encephalomyelitis. The encephalitis is immune mediated, unlike PME, which is due to direct viral invasion. Molecular mimicry has been suggested as the mechanism for the development of APME. 26 Circulating antibodies react with the myelin basic protein of oligodendrocytes, causing inflammation and dysfunction in the CNS. APME causes lesions in both grey and white matter, leading to perivenular inflammation and demyelination. 33 Immunoglobulin titers in CSF do not increase relative to those in serum, suggesting little antibody synthesis within the CSF. 21,33 Myelin basic protein concentrations are increased in CSF, and nearly 50% of patients show lymphocyte proliferative responses to myelin basic protein. 27 It is primarily an immune-mediated demyelinating disease; however, the role of myelin-reactive antibodies is still unclear. No rise in myelin-reactive antibodies was detected by either ELISA or RIA in patients with APME, 34 while in animal models a rise in myelin-reactive antibodies, along with pathology similar to APME, was observed after myelin injection. 35 Recently, conformation-sensitive myelin-reactive antibodies have been detected in some patients with ADEM. 36 Infectious measles virus is not isolated from the brain or CSF of APME patients because the disease pathology is postinfectious. 23 Just as EAE is a good model to study the autoimmune mechanism in multiple sclerosis (MS), the autoimmune response induced by measles virus against MBP explains myelin-reactive antibodies in APME. In a recent EAE model, autoreactive T cells in EAE and MS are induced by peripheral immunization with an adjuvant-emulsified antigen and by an unknown pathogen, respectively. These T cells then recognize their antigens on APCs in the spleen and activate inflammation in the CNS. The tissue debris is then drained from the CNS via CSF to the cervical and lumbar lymph nodes and spleen, where it leads to the generation of new autoreactive T cells, further exacerbating the autoimmune reaction. 36,37 Encephalitis in Lewis rats following measles infection was found to be associated with a cell-mediated T-cell response against MBP. Further, it was suggested that CNS susceptibility to an autoimmune T-cell reaction increases following CNS infection with measles virus. 38 APME occurs in 1 per 1,000 measles infections, which makes measles the most common cause of postinfectious ADEM. The highest number of cases occurs in children 5 years and above, with symptom onset after resolution of the rash, even weeks or months later; 12,21 rarely, it can also predate the rash. The signs and symptoms include abrupt onset of fever, encephalitis (headache, seizures, altered sensorium, raised ICT, and multifocal neurological signs), myelitis (back pain, bladder and bowel dysfunction, and hyporeflexia), ataxia, optic neuritis, and cranial nerve involvement. It has been reported that APME relapses in one-third of patients and that these patients are at increased risk of developing MS. 39,40 The preliminary diagnosis is based on history and clinical examination and is confirmed with the aid of lab findings: presence of serum IgM/IgG anti-measles antibodies (not in CSF), MRI findings (multifocal hyperintensities in the brain and spinal cord on T2 and FLAIR images, brain edema, and demyelination), 4 and mild to moderate CSF pleocytosis with elevated protein.
Unlike in PME, measles virus RNA cannot be detected in the CSF or brain cells of APME patients. 41,42 The treatment is based on the mechanism of the disease, which is considered immune mediated and postinfectious. Hence, the goal of treatment is to temper the immune response rather than to use antivirals. Corticosteroids (intravenous followed by oral), intravenous immunoglobulin (IVIG), and plasmapheresis, along with supportive measures, are the recommended treatment options. 21,39 The prognosis is better than in PME: some patients show full recovery, while others showed permanent neurological sequelae along with attention and behavioral issues when evaluated more than 3 years after the episode. 40,41 Mortality is 5% in children and 25% in adults.
Even with so many differences in pathology, it is sometimes difficult to differentiate whether the patient has PME or APME, because the symptoms of both can occur soon after measles. Both factors may contribute to the clinical picture, including acute viral infection as well as an ongoing inflammatory response. However, if brain imaging shows more edema, the patient is treated with steroids. 33 Difference between APME and other ADEM: There are some variations in the ADEM phenotype caused by measles versus other organisms (usually viruses). The clinical course of APME is more rapid and severe than that of other types of ADEM. Cerebellar ataxia most commonly occurs in varicella patients and has a better prognosis. The incidence of varicella- and rubella-associated ADEM is reported to be much lower than that of APME, i.e., 1/10,000 and 1/20,000 cases, respectively. ADEM following acute pharyngitis (group A beta-hemolytic streptococcal infection) has been reported with prominent extrapyramidal and behavioral symptoms. 42 APME prevention with measles vaccination: The incidence of APME or ADEM after live measles vaccination falls to one to two cases per million vaccinations, which is significantly lower than among the unvaccinated population (►Table 1). It makes up only 5% of cases among all measles-associated ADEM. Also, the clinical phenotype is less severe, and recovery is better than for PIE occurring after primary measles infection. 42,43 However, it has also been suggested that a single dose of measles vaccination is not effective in preventing measles and APME. In a study from Vietnam, APME after measles infection was reported in 15 patients (aged 20 to 24 years) who had received a single dose of measles vaccination in infancy. IgG antibody titers were raised in all vaccinated patients, and the avidity showed a decreasing trend with increasing age. 44 Again, this emphasizes the need for a second dose of vaccination.
Measles Inclusion Body Encephalitis
It is also known as immunosuppressive measles encephalitis and subacute measles encephalitis. MIBE is rare and occurs only in immunocompromised children and adults of any age, with symptom onset within days to months after measles infection or vaccination. Multiple reports from Texas have described MIBE in a total of 33 patients (mean age 6 years) with various immunodeficiency conditions. 28 A report of a measles outbreak in South Africa (2009 to 2010) described eight HIV-positive patients (median age 28 years) with MIBE, of whom six died and only two survived. 45-47 The onset of MIBE can occur days to months (within a year) after measles infection or measles vaccination. 21 T-cell function is impaired in immunocompromised patients; hence, the typical morbilliform rash (exanthem) seen in immunocompetent patients is not observed in these patients, although some have shown a mild rash without any Koplik spots or other clinical symptoms of primary measles infection. Patients presented with altered sensorium and seizures. Focal seizures along with Todd's paralysis are the most common types of seizures reported. Epilepsia partialis continua, focal motor deficits (hemiplegia, hemiparesis), aphasia, dysarthria, dysphagia, ataxia, and visual problems can also occur. 21 Headache, vomiting, emotional lability, and autonomic dysfunction have been reported in a few cases. 28 It is a rapidly progressive encephalitis leading to coma and death in the majority of cases.
The neuropathological findings show inclusion bodies in neurons and glial cells and focal necrosis without inflammation. The initial CSF picture is either normal or may show mildly elevated protein and pleocytosis. However, a four-fold rise in measles antibody in CSF from baseline is observed as the disease progresses. MRI of the brain is usually normal but may show edema, enlarged ventricles, and atrophy. 48 In the absence of definitive evidence of measles infection, the diagnosis can be confirmed on brain biopsy, by detecting measles virus RNA using reverse transcription polymerase chain reaction 47 or by detecting measles hemagglutinin and matrix protein via immunohistochemistry. 47 The brain biopsy tissue of patients shows intracytoplasmic or intranuclear inclusion bodies, hence the name inclusion body encephalitis. 45 The treatment of MIBE mainly comprises supportive measures, although a few cases have shown some improvement in symptoms and imaging with ribavirin antiviral therapy, while interferon-alpha has not shown any efficacy. 28,47 The prognosis of MIBE is poor, causing mortality in 75% of cases; the rest are left with neurological sequelae.
Characteristics of MIBE Virus
The measles virus isolated from brain cells of MIBE patients has shown many mutations in the intracytoplasmic domain of the F protein. The L454W mutation in the HRC domain has been reported previously and leads to a highly unstable F protein with hyperfusogenicity and thermal lability. Also, measles virus with this mutation does not require H binding to enter brain cells. 49 The emergence of virus with the L454W-mutated F protein under the selective pressure of fusion inhibitors raised the question of whether this neuropathogenic measles virus can be found outside the CNS and spread via the natural route. 50 A recent study in a mouse model has shown respiratory epithelial infection with this measles strain, suggesting the possibility of infecting a new host. 51 On the contrary, sequence analysis of measles virus from 4 MIBE patients has shown similarity with the epidemic virus (genotype B3), unlike the typical hypermutation of the matrix and fusion genes previously reported. 52 They showed N, M, F, and H gene mutations in unique patterns. Mutation rates in the brain were similar to those of the epidemic virus, although these mutations were mostly non-synonymous. The function of the nucleoprotein gene of the measles virus remains the same, as this protein helps the virus move from cell to cell in the brain. 49 Similar mutations of the N gene have been reported in SSPE and MIBE patients, suggesting similarity between the two diseases, except for the rapid development of MIBE in immunocompromised patients. 30
MIBE Association and Prevention with Measles Vaccination
MIBE following vaccination has been documented in a few cases with ALL and CD8 deficiency, implicated to be caused by the vaccine strain, with fatal outcomes. 53,54 Immunosuppressed patients are at higher risk of developing MIBE, but vaccination should not be avoided in all such patients. Children with lymphoblastic leukemia (in remission) and patients after allogeneic bone marrow transplantation have been successfully vaccinated for measles without any adverse events. Measles vaccine can be safely given to asymptomatic HIV patients and can be considered in symptomatic HIV patients who are not immunosuppressed. However, it should not be given to severely immunocompromised HIV patients. 48,55,56 Also, the vaccination schedule should be updated before starting immunosuppressive treatments. The reduction in measles-associated morbidity and mortality following vaccination outweighs the rare risk of serious disease following measles vaccination in immunocompromised patients. 57 Post-exposure prophylaxis (within 6 days) with polyvalent human immunoglobulin should be considered in all immunocompromised patients, even if they have been vaccinated. 58 Gaps in vaccination coverage and non-administration of the second dose of vaccine are the two factors contributing to measles infection and its CNS complications in immunocompromised patients.
Subacute Sclerosing Panencephalitis
Subacute sclerosing panencephalitis (SSPE) is a slowly progressive panencephalitis caused by persistence of the measles virus, which mutates and becomes neurovirulent. The estimated global incidence of SSPE is 4 to 11 cases per 100,000 measles cases, but fewer cases are being reported. The incidence is much higher (approximately 18/100,000) if measles infection occurs in early childhood. A higher incidence (28 cases per 100,000 measles cases) has been reported from developing countries like India and Pakistan. New cases of SSPE in developed countries now correlate with measles outbreaks. In a recent report from the United States, the incidence of SSPE was higher (1/609) if measles infection occurred in the first year of life, decreasing to 1/1,367 for children under 5 years. The risk of developing SSPE is 16 times higher if measles occurs in infancy as compared to 5 years of age or later. 59,60 A latent period of 1 to 15 years following primary measles infection has been defined; 61-63 only recently have a few cases with a shorter latency period been reported without any evidence of congenital measles. This implicates the need for a high index of suspicion for SSPE cases with a shorter latency period and presentation at an early age. 64,65 SSPE is often diagnosed late because of the non-specific symptoms at onset, including behavioral problems such as inattention, forgetfulness, temper tantrums, and decline in scholastic performance. It is much later that motor dysfunction and intellectual dysfunction become apparent. Patients typically have myoclonic jerks (typically periodic, generalized, and stereotyped), dyskinesia, and ataxia. Later, expressive speech decreases progressively along with difficulty in walking and tone abnormalities, finally leading to a vegetative state. 21 In nearly half of SSPE patients, ocular manifestations have been reported, the most common being necrotizing chorioretinitis. However, macular edema/degeneration, optic neuritis/atrophy, papillitis/papilledema, and cortical blindness can also occur. 66 The neuropathological findings in SSPE depend on the course of the disease. During early disease, edema is the predominant finding, followed by oxidative damage of DNA and RNA in infected cells along with lipid peroxidation in areas of demyelination. Perivascular infiltration of inflammatory cells and demyelination in cortical and subcortical areas are found in the acute phase, followed later by neuronal loss. The inflammation starts in the posterior brain, with involvement of the medial thalamus and deep structures, followed by anterior brain involvement. 67 CSF examination is generally normal but may sometimes show mildly elevated protein. The gold standard for diagnosing SSPE is raised IgM and IgG anti-measles antibodies in CSF and serum, with higher titers in CSF. 68,69 EEG generally shows a characteristic burst-suppression pattern, which deteriorates to diffuse slow waves as the disease progresses. 65 MRI of the brain can vary from decreased grey matter volume to hyperintensities in the cerebral cortex, periventricular white matter, and brainstem, leading to cortical atrophy and enlarged ventricles in the last stage. 70
Supportive care and symptomatic management are the mainstays of treatment for SSPE, including anticonvulsants and spasticity-reducing drugs. Trials with a few drugs, such as isoprinosine and interferon-alpha, have shown some benefit in terms of slower progression, temporary stabilization, and prolonged survival. While ribavirin and immunoglobulin have shown little effect, mesenchymal stem cell treatment has demonstrated no benefit in SSPE patients. 71,72

Atypical Presentation of SSPE

There are a few reported cases of SSPE largely or exclusively involving the brainstem and presenting with symptoms such as cerebellar ataxia, blindness, and choreoathetoid movements of the extremities. The neuroimaging of these patients showed involvement of the pons, middle cerebellar peduncles, midbrain, substantia nigra, and inferior colliculus, while other areas were still spared. 73-75 Rarely, SSPE may present as an acute fulminant disease with a rapid course leading to death within 6 months. It is difficult to differentiate acute fulminant SSPE from ADEM and to diagnose it early. 76 It has also been reported during pregnancy and postpartum, with the shortest reported time from diagnosis to death being 19 days. 77
Characteristics of SSPE Virus
A variety of genetic mutations have been reported in measles virus isolated from the brain tissue of SSPE patients. It has been suggested that these genetic mutations occur only after the virus enters the brain. The characteristic clustered mutation is biased hypermutation, leading to uracil-to-cytosine transitions in the M gene. Another study reported mutation of 2% of nucleotides in SSPE virus, which led to changes in 35% of amino acids. 78 Defects in the M protein help the virus replicate and persist in neuronal cells while evading neutralizing antibodies. Genetic mutation in the F gene leads to a hyperfusogenic F protein, which enables the measles virus to spread between neurons. 18,79,80 Many mutations have been reported (►Table 2) that facilitate cell-to-cell fusion without SLAM or nectin-4 receptors. Similarly, many mutations reported in the H protein help the virus spread among neurons (►Table 2).
SSPE Prevention with Measles Vaccination
Since the introduction of measles vaccination, the number of SSPE cases has fallen drastically, but in developing countries like India, owing to lower vaccination coverage, the number of SSPE cases is still high. None of the studies or epidemiological surveys has shown SSPE due to the vaccine-strain measles virus. 81 It is suggested that SSPE patients who do not have a history of prior measles infection might have had subclinical or undiagnosed measles in early childhood. In a child with SSPE who has received measles vaccine, wild measles infection is presumed to have occurred before vaccination. 82 The re-emergence of SSPE cases in developed countries coincides with measles outbreaks. In a study from California, higher rates of SSPE were reported in unvaccinated children, mainly those who acquired infection during the first year of life. 59
Association of Measles Vaccine Virus with CNS-Related Measles Complications
There have been concerns regarding the measles vaccine virus causing measles-related CNS complications. A few cases of encephalitis in healthy individuals, 83-85 a couple of MIBE case reports in immunocompromised patients (ALL and HIV), 48,54,55 and some cases of SSPE 86 have been attributed to vaccination in individuals with no history of prior clinical measles infection. However, the possibility of wild-type measles virus infection cannot be ruled out in these cases.
Some authors have implicated the vaccine virus in acute encephalitis in many cases solely on the basis of a temporal association (onset of symptoms within 6 to 15 days of vaccination). 83,84 In one of these cases, measles virus was isolated from CSF on the ninth day after vaccination. 83 On the basis of infectivity titer, tissue culture sensitivity, and plaquing, it was indicated that the isolate was a vaccine-like virus; however, genetic sequencing was not performed. Epidemiological data demonstrate that the rate of acute encephalitis within 15 to 30 days of measles vaccination is comparable to the expected background encephalitis rate. 85 However, a clustering of cases on days 8 and 9 post vaccination suggested a possible causal relationship. 84 To date, there have been no reports of genetic sequences characteristic of measles vaccine strains in cases of acute encephalitis.
Amongst reported cases of vaccine-associated MIBE, a vaccine-like virus was isolated from the brain tissue of a 21-month-old boy who had a primary immunodeficiency. The nucleotide sequence of the isolated virus was identical to that of the Moraten and Schwarz vaccine strains in the nucleoprotein and fusion gene regions, while the fusion gene differed from that of wild-type genotype A virus, implicating the vaccine virus as the cause of MIBE. However, the authors also suggested that such an adverse event is very rare and that the report should not lead to changes in current immunization practice. 48 According to the report of the GACVS (Global Advisory Committee on Vaccine Safety) meeting 2005, the available epidemiological and measles virus genotyping data do not suggest that the measles vaccine virus can cause SSPE. Furthermore, the measles vaccine can neither trigger/accelerate the course of SSPE in an unvaccinated individual nor lead to the development of SSPE in a person who had a benign persistent wild measles infection at the time of vaccination. Also, the available evidence points to natural subclinical measles infection as the cause of SSPE in vaccinated individuals who had no previous history of measles infection. 87 In a report on 81 children with confirmed SSPE, 17 children did not report a past history of measles infection or vaccination. This supports the phenomenon of subclinical measles leading to SSPE. 88 To date, no genetic similarity of the defective measles virus in the brain tissue of SSPE cases has been shown with the attenuated measles vaccine virus; hence, there is no evidence to believe that the measles vaccine may cause SSPE. 89
Summary
Measles is a vaccine-preventable disease, yet infection rates remain high in developing countries, leading to significant morbidity and mortality. The neurological complications of measles infection can occur within days, months, or years. The varied presentation and pathology of these complications sometimes pose difficulties in diagnosis and management. Many mutations have been detected in various genes of the measles virus that are implicated in the persistence and tropism of the virus in the brain. Newer therapeutic options are based on these mutations and are currently under research. Timely diagnosis and supportive treatment remain the first line of management. Adequate vaccination of the population with two doses of measles vaccine is the only preventive measure and should be undertaken.
Table 1
Incidence of measles-related CNS complications after natural measles infection and after vaccination. 9,21
Cross-Border E-Commerce Logistics Transportation Alternative Selection: A Multiattribute Decision-Making Approach
The cross-border e-commerce logistics and transportation system is an important link in the cross-border e-commerce supply chain. How to choose a fast, safe, reliable, and low-cost logistics transportation mode is an urgent problem to be solved in cross-border e-commerce logistics. In view of this, this paper first analyzes the system characteristics of cross-border e-commerce logistics transportation systems and analyzes the factors affecting the choice of transportation mode from the perspective of reliability. Then, based on the ELECTRE method, a multiattribute decision-making method for cross-border e-commerce logistics transportation mode selection is proposed. Finally, through a data example, it is verified that the model constructed in this paper and the proposed multiattribute decision-making method can effectively help cross-border e-commerce choose a logistics transportation mode according to the goods.
Introduction
The rapid development of cross-border e-commerce has not only changed the pattern of China's import and export trade but also rapidly improved the development of the national economy. In the process of cross-border e-commerce trade, a safe and reliable cross-border e-commerce logistics and transportation system is an important link to ensure the smooth progress of cross-border e-commerce trade [1,2]. From 2016 to 2019, China's manufacturing industry continued to upgrade, domestic brands poured into overseas markets, B2C cross-border e-commerce flourished, and China's foreign trade industry also experienced structural improvement, with a significant increase in growth rate and a compound growth rate of more than 9%. The booming cross-border e-commerce industry places ever-increasing demands on its logistics and transportation modes. Reliability, controllability, security, stability, low cost, informatization, and intelligence have become important requirements of cross-border logistics and transportation systems [3]. The rapid development of cross-border e-commerce has promoted China's economic transformation, employment, and consumption. At the same time, there are many areas that need to be improved [4-6]. (1) The service capacity of international express enterprises struggles to meet the development needs of cross-border e-commerce. The transaction volume of cross-border e-commerce is increasing year by year, but the service capacity of the corresponding logistics enterprises is lacking. For example, at present, international express is the most used mode in cross-border logistics distribution, but relying only on international express leads to cargo backlogs, overflowing warehouses, slow distribution, and a low service level. (2) The infrastructure of the logistics transportation system is not perfect. Across the whole logistics transportation system, the basic facilities of warehousing, transportation, distribution, and other links are not well developed.
There are gaps in the connection between various modes of transportation, and the system is not organized scientifically. (3) The connection between the various links of cross-border logistics lacks professionalism. International logistics costs are high, the operation is difficult, the business modules are complex, and the quantity and variety of goods are large. Therefore, the number of international logistics enterprises engaged in whole-process cross-border logistics service is small, so it is difficult to control each logistics link. (4) Cross-border e-commerce logistics costs are high. Compared with domestic logistics, cross-border e-commerce requires higher costs in warehousing, distribution, customs declaration, and other links. To ensure the same logistics service level, cross-border logistics enterprises need to pay more. If there are returns and exchanges in the logistics chain, the cost is even higher. (5) The accuracy and timeliness of logistics information are low. Due to the lack of professionalism in the connection between the various links of cross-border e-commerce, the timeliness and accuracy of information between links are low. This deficiency also makes users distrust logistics and transportation enterprises. The above deficiencies of the cross-border logistics and transportation system not only hinder the development of cross-border logistics and transportation enterprises but also make it difficult for users to choose logistics and transportation methods.
To sum up, with the development of cross-border e-commerce, the problems of cross-border e-commerce logistics have become increasingly prominent. The reliability, controllability, security, stability, low cost, informatization, and intelligence of cross-border e-commerce logistics transportation are hot topics in the development of cross-border e-commerce logistics. However, how to choose an economical and reasonable logistics service provider is an urgent problem to be solved in the development of cross-border e-commerce logistics. The remainder of this paper is organized as follows: In Section 2, the related works are presented in detail. In Section 3, the cross-border e-commerce logistics and transportation system is analyzed. In Section 4, the cross-border e-commerce logistics transportation mode selection method based on ELECTRE is proposed. An example is given in Section 5. Finally, some conclusions are drawn in Section 6.
Related Works
At present, the research mainly focuses on the logistics system of cross-border e-commerce, the development of cross-border e-commerce, the optimization of the cross-border e-commerce supply chain, the selection of cross-border e-commerce logistics methods, etc.
In terms of cross-border e-commerce research, Gomez et al. investigated the importance of distance to physical online trade and studied the positive role of policymakers in cross-border e-commerce choice in the EU digital single market [7]. Gomez et al. analyzed and compared the differences in influencing factors and challenges between online and offline cross-border trade [8]. Jiao et al. analyzed the e-commerce logistics system and its direct and indirect impact on its operation [9]. In order to improve the service level and competitiveness of cross-border e-commerce, Luo et al. proposed to promote the construction of overseas warehouses and border warehouses and reasonably select the cross-border logistics mode [10]. Cheng et al. put forward countermeasures to speed up the development of cross-border e-commerce in Fujian from the aspects of guidance and support of local governments, cultivating cross-border e-commerce industrial chain, promoting third-party cross-border payment, promoting the construction of credit system, and strengthening precision marketing [11].
In terms of cross-border e-commerce supply chain optimization, Deng et al. put forward supply chain management coordination and optimization measures such as integrating China's supply chain procurement system, building cross-border export high-quality e-commerce, strengthening logistics management, improving response speed, relying on big data application, and improving supply chain management capacity in view of the problems of few commodity types, poor product quality, and upstream and downstream information asymmetry in cross-border export e-commerce [12]. Kaplan et al. proposed the concept of demand price elasticity, including production planning and scheduling, inventory management, transportation delay, transportation cost, and transportation restriction [13]. Ding et al. proposed a cross-border two-way logistics network survivability method in the e-commerce supply chain, which improves the security of cross-border two-way logistics networks in the e-commerce supply chain [14]. Liu et al. analyzed the influencing factors of the cross-border e-commerce supply chain and constructed the CBSCR influencing factor system based on the ternary theory of supply chain elasticity to ensure the safe operation of the cross-border e-commerce supply chain [15]. Godichaud et al. proposed a supply chain model based on simulation and multiobjective optimization to optimize the control strategy of a multilevel return supply chain [16]. Sampat et al. proposed the optimization formula of a multiproduct supply chain network. These formulas use a general graphical representation to capture the dependencies between any number of products, technologies, and transportation routes [17].
In terms of supply chain partner selection evaluation system, Chen et al. proposed a fuzzy decision method for supplier selection in supply chain system [18]. Ernst et al. introduced a conceptual framework for evaluating different supply chain structures in the context of modularization and postponement [19]. Tsai et al. proposed a fuzzy objective programming method, which integrates activity-based costing and performance evaluation into the value chain structure to realize the optimal selection and process allocation of multinational logistics suppliers [20]. Lin et al. studied the reliability of a complex supply chain system [21]. Demand forecasting is an important aspect of supply chain enterprise planning. Sanders et al. predicted the cross-border e-commerce logistics demand after considering the expected anomalies during the planning period [22]. Based on the satisfaction of different stakeholders, Miao et al. applied the double-sided matching method to the export cross-border e-commerce environment, to better match overseas demand and domestic suppliers [23]. Yi et al. studied consumers' willingness to use cross-border e-commerce, collected global consumer data from the perspective of psychological distance and commitment trust, and analyzed relevant factors affecting online consumers' intention [24].
It can be seen from the above that most studies only examine the overall development of the cross-border e-commerce industry and the cross-border e-commerce logistics system, analyze unreasonable phenomena in the transportation, storage, and distribution links of the logistics system, and optimize the cross-border e-commerce supply chain. However, the existing research still has the following gaps: (1) there is no discussion of the connection between, and cost control of, the various links in the logistics system; (2) the cross-border e-commerce logistics and transportation systems of different products in different regions of China are not distinguished; (3) there is no in-depth research on supply chain management under the online and offline multichannel mode; (4) the influencing factors in the cross-border e-commerce logistics system have not been identified; (5) it remains open how to choose a safe, reliable, and low-cost cross-border e-commerce logistics transportation mode according to these influencing factors.
In view of this, starting from the current characteristics of cross-border e-commerce logistics, this paper analyzes the advantages and disadvantages of the current cross-border e-commerce logistics transportation modes and analyzes the main and secondary influencing factors in the cross-border e-commerce logistics system. Then, based on multiattribute decision-making theory, a logistics transportation decision-making method based on ELECTRE theory is proposed.
Logistics Model of Cross-Border E-Commerce
With the development of cross-border e-commerce, the logistics modes are becoming more and more diversified. At present, the logistics modes of cross-border e-commerce mainly include third-party logistics, logistics alliances, overseas warehouses, goods collection logistics, and bonded area logistics. Third-party logistics (3PL), also known as outsourcing logistics or contract logistics, refers to a professional logistics company with substantive assets that provides logistics-related services to other companies and can offer relatively complete services. A logistics alliance is an organizational form between independent enterprises and market transactions: a relatively stable, long-term contractual relationship between logistics demanders, that is, production and manufacturing enterprises and commercial circulation enterprises, and logistics enterprises, driven by the needs of their own development. An overseas warehouse refers to storage facilities established overseas: domestic enterprises transport goods to the target market countries in bulk, establish warehouses, store the goods locally, and then respond immediately to local sales orders by sorting, packaging, and distributing directly from the local warehouse. Goods collection logistics refers to the practice of consolidating scattered, small quantities of goods for transportation and distribution. Bonded area logistics refers to warehousing, distribution, transportation, circulation processing, loading and unloading, logistics information, scheme design, and other related businesses carried out in areas under customs supervision, including bonded areas, bonded logistics locations, and customs-supervised warehouses. The characteristics of these modes are shown in Table 1.
Cross-Border E-Commerce Logistics and Transportation System
A logistics system is an organic whole with specific functions in a certain time and space, composed of the materials to be transported and several mutually constraining dynamic elements, including relevant equipment, transportation tools, storage equipment, personnel, and communication. It is an organic aggregate composed of two or more logistics functional units for the purpose of completing logistics services. In cross-border e-commerce logistics, transportation is the central link, closely connecting manufacturers, middlemen, and buyers. However, different types of sellers choose different logistics channels. No matter which channel is chosen, the purpose is to provide excellent service to buyers. The process of logistics mode selection is affected by many aspects, such as technology, economy, policy, and consumer preference. It is a multiattribute decision-making process.
Logistics Transportation Mode Selection Factors
Because various modes of transportation and means of transportation have their own characteristics, and the requirements of goods with different characteristics for transportation are different, it is difficult and unrealistic to formulate a standard for selecting modes of transportation. However, according to the overall goal of logistics and transportation, it is still possible to determine a basic principle.
Generally speaking, the choice of cross-border e-commerce logistics transportation mode is affected by many factors, such as the type of transported goods, transportation volume, transportation distance, transportation time, transportation cost, transportation safety, and logistics service level. Of course, these factors are not independent of each other but are closely connected and mutually determined.
(1) Commodity performance characteristics. This is an important factor affecting an enterprise's choice of means of transportation. Generally speaking, bulk goods such as grain and coal are suitable for waterway transportation; fresh products such as fruits, vegetables, and flowers are suitable for air transportation; and pipeline transportation is suitable for oil and natural gas. (2) Transportation speed and distance. The speed of transportation and the transportation distance determine the length of the goods' transportation time. Goods in transit, like inventory goods, tie up capital. Generally speaking, commodities with large volume, low value, and long haul distance are suitable for waterway or railway transportation; commodities in small batches, of high value, and with long haul distance are suitable for air transportation; and road transportation is suitable for small batches over short distances.
(3) Consistency of transportation. This is the consistency between the time required to perform a specific transportation task across several shipments and the original time, or the time required for the previous n shipments. It is a reflection of transportation reliability. If a given transportation service takes two days the first time and six days the second time, this unexpected change will cause serious logistics problems for production enterprises. (4) Transportation costs. These are the operating costs, management costs, taxes, etc. incurred in the transportation process. Operating costs include, for example, vehicle fuel taxes, depreciation, maintenance fees, and insurance premiums, while management costs mainly include labor costs (such as transportation personnel wages, benefits, bonuses, allowances, and subsidies). (5) Transportation safety. This means the safe arrival of goods in the hands of consumers without damage, loss, or shortage. (6) Logistics service level. This is used to evaluate the customer satisfaction of logistics enterprises in the process of customer service. It is generally reflected in the quality of the goods received by customers and the timeliness of transportation. Sometimes, the service attitude of the logistics senders and receivers is also evaluated.
Multiattribute Decision Analysis of Cross-Border E-Commerce Transportation Market
The core of multiattribute decision-making is to find a satisfactory scheme based on the comprehensive analysis of multiple attributes of multiple candidate schemes. ELECTRE is an elimination-and-choice method based on concordance and discordance tests. The method is simple and convenient, which makes it widely used. The flow chart of the ELECTRE decision-making method is shown in Figure 1. Therefore, this paper proposes a logistics transportation mode decision-making method based on ELECTRE theory.
ELECTRE Theoretical Basis
Suppose the decision-making scheme set is C = (c_1, c_2, ..., c_n), where each scheme has the same attributes. Assume that the attribute set is F = (f_1, f_2, ..., f_m) and that the original decision data matrix is A = (a_ij)_{n×m}, where a_ij is the value of scheme c_i on attribute f_j. The level priority consistency index matrix CM(c_i, c_j), which represents the degree to which scheme c_i is consistent with being at least as good as scheme c_j, is computed attribute by attribute from the indifference threshold α_k and the strict preference threshold β_k of each attribute. At the same time, the level priority validity index is denoted sm(c_i, c_j). The value of sm(c_i, c_j) ranges from 0 to 1 and indicates the relative merit of the schemes: the greater the value of sm(c_i, c_j), the greater the degree to which c_i is better than c_j, and the smaller the value of sm(c_i, c_j), the lesser the degree to which c_i is better than c_j.
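The explicit expression for the consistency index did not survive extraction here. As a hedged reconstruction, the standard per-attribute (partial) concordance index used in threshold-based ELECTRE variants, written in the notation above with indifference threshold α_k and strict preference threshold β_k, has the following form; whether the paper used exactly this piecewise definition, and whether CM aggregates the partial indices with attribute weights, are assumptions.

```latex
% Standard partial concordance index (assumed form; the paper's own equation is not reproduced)
c_k(c_i, c_j) =
\begin{cases}
1, & a_{ik} + \alpha_k \ge a_{jk},\\[2pt]
0, & a_{ik} + \beta_k \le a_{jk},\\[2pt]
\dfrac{a_{ik} + \beta_k - a_{jk}}{\beta_k - \alpha_k}, & \text{otherwise,}
\end{cases}
\qquad
CM(c_i, c_j) = \frac{\sum_k w_k \, c_k(c_i, c_j)}{\sum_k w_k}.
```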
Decision-Making Method of Cross-Border E-Commerce Logistics Transportation Scheme Based on ELECTRE Theory
The ELECTRE method was first proposed by Benayoun et al. in 1966. The main concept of ELECTRE is to handle the outranking relations between schemes, using the criteria as the basis of evaluation, and to establish dominance relations between schemes so as to eliminate poor schemes. The advantage of the ELECTRE method is that it is easy for the decision-maker to understand and master, and the specific decision-making calculation process can be programmed.
For this multiattribute decision-making problem, many scholars have proposed improved ELECTRE methods, such as ELECTRE I, ELECTRE II, and ELECTRE III, and applied them to multiattribute decision-making problems. These methods were essentially proposed to solve more complex decision-making problems in which the decision data are precise. The ELECTRE method used in this paper is the most classical one, proposed by Benayoun. The choice of cross-border e-commerce logistics transportation mode is affected by many factors, such as the type of transported goods, transportation volume, transportation distance, transportation time, transportation cost, transportation safety, and logistics service level. In this paper, a questionnaire survey was conducted among the participants in each link of the cross-border e-commerce supply chain. The survey results show that 99.72% of respondents regard transportation time, transportation cost, and transportation reliability as the main factors affecting the selection of a transportation scheme. Therefore, in the decision-making process for cross-border e-commerce logistics transportation schemes, this paper takes transportation time, transportation cost, and transportation reliability as the attributes for transportation scheme selection.
Step 1. Firstly, the decision attributes are quantified using the qualitative grade quantization table. To ensure the accuracy of decision attribute quantification, this paper uses the expert scoring method to score different attributes from 0 to 10 and then collects the quantitative values given by all experts for a decision attribute of a transportation scheme. However, because different experts have different rating standards, this paper introduces membership functions to quantify the expert scores. The fuzzy set of each decision attribute is divided into five fuzzy values (bad, slightly bad, medium, slightly good, and good). This paper selects the Gaussian function as the membership function of the various fuzzy sets. Through the membership function, we obtain the membership values of an expert score in the different fuzzy sets. In the scoring process, we assume that the scores of all experts in this field have the same weight. Therefore, we can compute the weighted average of the membership values of this attribute in the different fuzzy sets. Then, the center of gravity method is used to obtain the quantitative value of a decision attribute for the transportation scheme. The decision attribute matrix of all transportation schemes can then be expressed as X = (x_ij), where x_ij represents the quantitative value of attribute j for transportation scheme i.
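The Gaussian membership function itself is not reproduced in the text above, so the following is only a minimal sketch of the described quantification pipeline (expert scores, Gaussian memberships, equal-weight averaging, center-of-gravity defuzzification). The fuzzy-set centers and the spread are assumed values for illustration, not parameters taken from the paper.

```python
import numpy as np

# Assumed centers of the five fuzzy sets (bad ... good) on the 0-10 scoring
# scale and an assumed common Gaussian spread; the paper does not state them.
CENTERS = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
SIGMA = 1.5

def memberships(score: float) -> np.ndarray:
    """Gaussian membership of one expert score in each of the five fuzzy sets."""
    return np.exp(-((score - CENTERS) ** 2) / (2 * SIGMA ** 2))

def quantify(expert_scores) -> float:
    """Equal-weight averaging over experts, then center-of-gravity defuzzification."""
    mu = np.mean([memberships(s) for s in expert_scores], axis=0)
    return float(np.dot(mu, CENTERS) / mu.sum())

# Example: five hypothetical expert scores for "transportation reliability".
print(round(quantify([6, 7, 7, 8, 6]), 2))
```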
Step 2. Among the decision attributes, some are more important than others, and conversely, some are less important. Therefore, based on the survey, we analyze the attributes of the transportation scheme, such as item type, transportation volume, transportation distance, transportation time, transportation cost, transportation safety, and logistics service level, and use the entropy information method to determine the weight values of the different attributes.
After determining the weight values, the decision attribute matrix is weighted, that is, v_ij = w_j · x_ij.
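Equations (5) and (6) for the entropy and the weights are referenced later but not reproduced in this excerpt, so the sketch below uses the standard entropy-weight formulation; treat the exact normalization as an assumption. The example matrix is illustrative and is not the paper's Table 2 data.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Standard entropy-weight method for an n_schemes x m_attributes matrix
    of strictly positive, benefit-oriented attribute values."""
    n, _ = X.shape
    P = X / X.sum(axis=0)                          # proportions p_ij per attribute
    H = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy H_j of each attribute
    d = 1.0 - H                                    # degree of diversification
    return d / d.sum()                             # normalized weights w_j

# Illustrative quantified attribute values (rows: schemes, columns: attributes).
X = np.array([[6.8, 3.2, 5.5],
              [7.1, 4.0, 6.2],
              [5.9, 6.5, 7.8]])
w = entropy_weights(X)
V = X * w                                          # weighted decision matrix v_ij
print(w.round(3))
```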
Step 3. The ELECTRE method uses the concept of the outranking (not-inferior) relation: thresholds are determined, schemes are compared pairwise, and the concordant and discordant relations are formed, from which the concordance and discordance matrices are constructed. Therefore, after obtaining the weighted decision attribute matrix and comparing schemes pairwise, we form a consistent (concordant) set C_kl and a non-consistent (discordant) set D_kl, expressed as C_kl = {j | v_kj ≥ v_lj, j = 1, 2, ..., n} and D_kl = {j | v_kj < v_lj, j = 1, 2, ..., n}. Through the intersection of the dominance relations derived from C_kl and D_kl, the judgment matrix is obtained, and the optimal scheme is identified.
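A minimal sketch of this pairwise screening step in the classical ELECTRE-I style is given below. The concordance/discordance aggregation and the use of mean values as cut-off thresholds are assumptions; the paper only states that a judgment matrix is formed from the intersection of the two dominance relations.

```python
import numpy as np

def electre_judgment(V: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Pairwise ELECTRE-I style screening on a weighted decision matrix V.
    Returns a Boolean judgment matrix J[k, l] = True if scheme k dominates scheme l."""
    n, _ = V.shape
    C = np.zeros((n, n))                      # concordance indices
    D = np.zeros((n, n))                      # discordance indices
    span = np.ptp(V, axis=0)
    span = np.where(span > 0, span, 1.0)      # guard against a zero attribute range
    for k in range(n):
        for l in range(n):
            if k == l:
                continue
            concordant = V[k] >= V[l]         # attribute index set C_kl
            C[k, l] = w[concordant].sum()
            diff = (V[l] - V[k]) / span       # normalized shortfalls of k vs. l
            D[k, l] = diff[~concordant].max() if (~concordant).any() else 0.0
    # Assumed thresholds: mean off-diagonal concordance and discordance values.
    c_bar = C.sum() / (n * (n - 1))
    d_bar = D.sum() / (n * (n - 1))
    return (C >= c_bar) & (D <= d_bar)

# 'V' and 'w' as produced by the entropy-weighting sketch above.
```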
Example Analysis
Suppose a merchant mainly deals in fresh agricultural products, with an average package weight of about 2.5 kg. The options include China Post (International), EMS, UPS, special line logistics, and SF International. For the different transportation modes, this paper considers three attribute indexes: transportation time, transportation cost, and transportation reliability. The fuzzy sets of the three attribute indexes are divided into (long, slightly long, medium, slightly short, short), (high, slightly high, medium, slightly low, low), and (low, slightly low, medium, slightly high, high). In this paper, the Gaussian function is used as the membership function of the various fuzzy sets, and finally, the quantitative value of each decision attribute of a transportation scheme is obtained using the center of gravity method. The attribute values of the schemes are shown in Table 2. Simulation environment: Windows 10, Intel Xeon CPU E3, 32 GB RAM. Simulation platform: MATLAB R2020b. The entropy of the attribute matrix H_j is calculated according to equation (5), and then the weights w_j are calculated according to equation (6).
The consistent sets C_ij and inconsistent sets D_ij are then computed for each pair of schemes. From the consistent and inconsistent sets, the concordance and contradiction (discordance) dominance matrices are constructed, and their intersection yields the judgment matrix shown in Table 3.
The selection can be guided by Table 3. Firstly, the schemes that have no advantages, that is, China Post and EMS, are eliminated, and UPS, special line logistics, and SF International logistics are retained; then, UPS and special line logistics, which have only one advantage each, are eliminated, leaving only SF International logistics. The conclusion in this case is that SF International logistics is recommended.
The experimental results show that SF International logistics has obvious advantages in transportation time, transportation cost, transportation safety, and logistics service level for fixed cargo types and transportation volume.
Conclusion
This paper analyzes the impact of transportation price and transportation timeliness on the reliability of cross-border e-commerce logistics transportation modes by using a structural equation model. Then, from the perspective of reliability, the subjective and objective factors affecting the choice of transportation mode are analyzed, the main process of a multiattribute decision-making model based on the ELECTRE method is constructed, and an actual case is worked through to verify the operability of the model. By comparison with the actual logistics situation, the feasibility of the model is verified. The multiattribute decision-making method proposed in this paper can also help cross-border e-commerce sellers choose appropriate cross-border e-commerce logistics and transportation products for their goods.
Data Availability
The data supporting the conclusions of the article are shown in the relevant figures and tables in the article.
Conflicts of Interest
The author declares that there are no conflicts of interest.

Table 3: Dominant matrix of contradiction set integration.
Printed Primary Battery in a Rolled-Up Form Factor
In battery systems, there are several established form factors targeting mass-market applications, like the D, C, AA, and AAA series, lithium round cells, and coin cells. Besides these standardized batteries, in printed electronics there are several approaches to realizing flat batteries from different material systems, covering both primary and secondary battery types. For a dedicated application in agriculture, a sensor system requires a degradable primary battery. In this paper, the development of a dedicated zinc-carbon battery is described, supplying the sensor application with 4.5 Vnom. The battery has a length of 170 mm and an outer diameter of 23 mm, while the inner core is left open for the antenna system of the application. The active area is up to 161 cm². The design and manufacturing aspects are described. The rolled-up battery system is fully charged after manufacturing and ready to operate. It may remain inside the degradable sensor system after use in the field.
Introduction
Printed batteries, especially environmentally friendly systems based on the well-known zinc-carbon material system, have been under consideration for more than a decade. While the standardized D, C, and AA batteries are rigid and batch-processed, the advantages of a printed battery are, e.g., flatness, bendability, thin form factors, form variability, and scalability in voltage and capacity, to highlight just some of them and the reason for performing research on this battery type [1-7]. Typical applications are sensor systems and advertisement. In this paper, the focus is on a zinc-carbon battery application. For a broader overview of the different material types of printed battery technology see, e.g., [8-13]. This approach is completely different from rechargeable and 3D microbatteries that aim at a footprint of less than 1 cm² [14].
Common to state-of-the-art printed batteries is a flat battery design that may be bent [8]. Typical current densities of printed zinc-carbon batteries are in the range of <1 mA/cm² up to 5 mA/cm² of active area. Additional area is required for the encapsulation of the aqueous electrolyte enabling the chemical reaction inside the battery. Material setups, layouts, and applications are described in, e.g., [8]. The benefit of this primary battery is that it is fully charged after manufacturing. It can be scaled up to multiples of 1.5 Vnom in operating voltage. The energy content depends on the area. The currents that can be driven by the battery are determined by its internal resistance.
The intended application of this newly developed battery is described in the section Sensor System Application for Agriculture. To our knowledge, there has been no publication on a printed battery that is rolled up for an application with an open inner core. Also, the approach of using mainly paper rather than polymer film or so-called coffee-bag material is innovative for achieving a higher content of degradable materials.
Sensor System Application for Agriculture
Global agriculture is undergoing a significant shift due to increasing food demand from a growing population, projected to reach 8 billion by 2025 and 9.6 billion by 2050, necessitating a 70% increase in food production by 2050 [15]. However, natural resources like arable land and water for irrigation are limited and further strained by climate change. Intensive farming methods reliant on fertilizers and pesticides exacerbate ecosystem degradation.
Smart farming, employing sensor technology, data processing, and telematics, has emerged as a solution to boost crop yield, reduce pollution, and attract skilled labor. Sensor technology in arable farming must meet stringent criteria, including affordability (10 to 25 EUR/ha), wireless data transmission up to 200-300 m to a base station, and addressing farmers' expectations of higher yields, harvest uniformity, and reduced costs. In addition, the sensor technology should not pollute the environment in the event of damage. Key sensor measurements encompass soil moisture for efficient irrigation, soil nitrate levels for optimal fertilization, and leaf wetness/temperature for timely fungicide application to combat infections like phytophthora in crops. Besides functionality, sensor disposal without environmental harm is a growing concern; ideally, the sensors decompose during plowing. These sensor advancements aim to enhance agricultural productivity sustainably amidst resource constraints and environmental challenges.
The EU project "PLANtAR" (https://plantar-project.eu/ (accessed on 13 May 2024)) aims to develop such cost-efficient, miniaturized, networked, and partly biodegradable monitoring electronics [16].Figure 1 shows one of the sensors developed in the project, consisting of a miniaturized electronic module with a single-chip radio system, sensors for measuring temperature, soil tension, and nitrate [17].The sensors and the electronic module are fabricated from materials that are biodegradable or inert with a minimum amount of metal and ceramic and allow for remaining them on the field when harvesting.The device is powered by a biodegradable zinc-manganese dioxide battery.For wireless communication, a printed antenna on paper is used.A gateway collects the data transmitted by the distributed sensor devices and brings it to the internet to a central server running an expert system.
Battery Configuration
The basic idea of realizing a printed flat zinc-carbon battery is depicted in Figure 2a. There is a layer-wise stacking of the required components: silver grid (optional), current collector, anode, electrolyte + separator, and cathode. A functional description can be found in [18,19]. The advantage of this printing approach is that the series connection of batteries can also be easily adapted by adjusting the layout. In Figure 2b, a series connection of three cells is shown, resulting in a total battery voltage of 4.5 Vnom. For this battery, just one side of two different substrates is used.
Following the basic approach in Figure 2, the layout can be slightly modified by printing all layers on just one substrate, using the front and back sides (see Figure 3a). The corresponding anode and cathode layers overlap adequately when the substrate is rolled up (see Figure 3b).
In the experiments, two different layouts have been used. In Figure 4a,b, the layout of three batteries is shown; they are arranged side by side in the winding direction ("crosswise"). This means that the sealing between the three cells is perpendicular to the winding direction. Each battery has an active area of 55 cm². The second layout is shown in Figure 4c,d. Each battery has an active area of 57 cm². The main difference in layout is that there is no longer any sealing between batteries across the winding direction ("lengthwise"). Instead, all sealings between the single battery cells run in the winding direction. The effect of this difference is described in the section Results. Photographs of the different layers of the lengthwise battery are shown in Figure 5.
The reaction scheme of zinc, zinc chloride, and manganese dioxide is given in Equation (1). Environmental aspects have been discussed for different battery systems in [20]. Ref. [21] discussed sulfur dioxide leaching of spent zinc-carbon battery scrap. This was caused by the steel encapsulation, especially of alkaline battery cells. In the printed batteries, neither NaOH nor steel is present.
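Equation (1) itself is not reproduced in this excerpt. For orientation, the overall discharge reaction commonly cited for the zinc chloride (zinc-carbon) system is given below; whether the paper states exactly this form is an assumption.

```latex
% Commonly cited overall reaction of the zinc chloride cell (assumed form of Equation (1))
\mathrm{Zn} + 2\,\mathrm{MnO_2} + \mathrm{ZnCl_2} + 2\,\mathrm{H_2O}
  \longrightarrow 2\,\mathrm{MnOOH} + 2\,\mathrm{Zn(OH)Cl}
```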
Experimental Setup
For both layouts (see Figure 4), a set of three or four screens is manufactured for printing the silver (optional), carbon, zinc, and manganese dioxide layers. A screen-printing machine (EKRA E1 XL, IBE SMT Equipment, LLC, Magnolia, TX, USA) is used to print three or four layers: 1. a silver grid (DuPont PV410, DuPont, Wilmington, DE, USA) to lower the internal resistance of the battery (optional); 2. a carbon layer (Henkel Electrodag, Henkel AG & Co. KGaA, Düsseldorf, Germany) to cover the silver and prevent any chemical reaction of it with the battery cell; 3. a zinc layer as the anode of the battery; 4. a manganese dioxide layer as the cathode of the battery. After each printing step, the ink is fully dried in a convection oven (3D Micromac microDRY, 3D-Micromac AG, Chemnitz, Germany; 110 °C, 10 min) before applying the subsequent layer.
For the discharge of the fully manufactured battery, self-developed circuitry is used. The circuit has a stand-by current of 15 µA, which is about 1/667 of the 10 mA (20 ms) current load for wireless data transmission. The firmware of the electronic circuitry is modified so that the wireless data transmission, which causes the highest current load for the battery, is repeated nearly once a second (every 1.0064 s) instead of at the application frequency of once every 30 min. The voltage is monitored during the discharge by a potentiostat (BioLogic VMP 3, BioLogic, Seyssinet-Pariset, France). This setup was chosen instead of also controlling the discharge with the potentiostat because, inside that instrument, mechanical relays are used to switch the discharge ON/OFF; performing this thousands of times would decrease the lifetime of the device significantly.
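As a quick plausibility check of this accelerated duty cycle, the sketch below computes the average current drawn per cycle from the values stated above (10 mA for 20 ms, 15 µA standby, 1.0064 s period); the rectangular transmit pulse is an assumption, so the result is only an estimate.

```python
# Average current of one accelerated-test cycle, using only values stated in the text.
PERIOD_S = 1.0064             # repetition period of the wireless transmission
TX_S, TX_A = 0.020, 10e-3     # 20 ms transmit pulse at 10 mA (assumed rectangular)
STANDBY_A = 15e-6             # 15 uA stand-by current for the rest of the cycle

avg_a = (TX_A * TX_S + STANDBY_A * (PERIOD_S - TX_S)) / PERIOD_S
print(f"average current per cycle: {avg_a * 1e3:.3f} mA")   # about 0.21 mA
```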
Materials and Assembly Process
The focus of this paper is the roll-up of a flat material system encapsulating an aqueous layer. To replicate and build on the published results, only the substrate and the encapsulation materials and methods are required. Material systems used for the production of printed batteries are hardly described in the literature. Because the ingredients and additives of the inks used constitute background knowledge of each actor in this field, we cannot disclose our recipes in this paper.
The substrate used for the experiments is Felix Schoeller "P_E:SMART paper type 1" (Felix Schoeller GmbH & Co. KG, Osnabrück, Germany). It consists mainly of raw paper covered by a resin coating that prevents the dry-out of the aqueous electrolyte. Therefore, only a small fraction of this material will remain as a very slowly degrading residue. By changing the substrate from 150 µm PET to P_E:SMART paper, the weight of the polymer encapsulation could be reduced by more than 80%.
Having printed and dried the layers of silver (optional), carbon, zinc, and manganese dioxide, the electrolyte (gelled aqueous zinc chloride) and the separator (to avoid any short circuit between the anode and cathode inside the battery) as well as the encapsulation are missing to finalize and functionalize the primary battery system.
For the manual sealing of the cells, a 680 µm spacer of 3M 467MP 200MP, coated with a glue layer on each side, is employed. Alternatively, UV-curable glue (KIWOPRINT-UV 94, KIWO, Wiesloch, Germany) was screen-printed on the flat substrate. The assembly process with the UV-curable glue was not successful, so these activities are not reported further in this paper.
After fixing the sealant, a pre-cut porous paper was placed on the active battery areas and the electrolyte was applied by a syringe dispensing on that area, too.
For rolling up the battery layers, a rod was used to clamp one end of the substrate, enabling an open inner core after the procedure and the removal of the rod.
Results
In this section, the observations made while building the rolled-up batteries and the electrical performance data are given. The information is subdivided into three subsections: Sections 3.1-3.3.
Battery Manufacturing
When winding up the crosswise battery setup, the pressure in this process caused the electrolyte within every battery cell to be pushed in the winding direction.This results in a wetting of the sealing bar between single cells.When the glue layer becomes wetted by the electrolyte, it loses its encapsulation capabilities.In consequence, the electrolyte of two cells is not separated as intended but connected.The result is a short circuit between the two battery cells by the electrolyte.
After many tries, it was concluded that the crosswise battery setup generates such severe problems with respect to encapsulation that a new approach to lengthwise battery layout was developed.
Electrical Performance
To have an idea of the battery performance, the non-winded battery was manufactured simply by the encapsulation of two flat substrates, which had already been prepared for the roll-up experiments. Using two of them, a battery of three cells in between them could be realized. In Figure 6a, the discharge setup with the external electronics is shown.

In the discharge diagram recorded by the potentiostat shown in Figure 6b, two zones can be differentiated: up to cycle 78,000, the voltage of the battery fluctuates depending on the load level (15 µA vs. 10 mA) between an upper (4.2-3.7 V) and a lower level (2.4-1.9 V). In the higher voltage level, there is low power consumption of the electronics. During wireless data transmission, the current demand is highest, resulting in a decreased voltage level due to the internal resistance of the battery. At about 78,000 cycles (i.e., 21.8 h), there is a significant drop in the voltage level from 3.7 V to 2.2 V. This drop indicates that one of the battery cells has reached its end of operation.

With >70,000 discharge cycles, the battery stays within the requirement of the application, which is defined as 20,000 cycles minimum.
Battery Manufacturing
When winding up the lengthwise battery setup, the pressure in this process caused the electrolyte within every battery cell to be pushed in the winding direction, as in the crosswise setup. However, the movement of the electrolyte can be controlled much better than in the first approach. By exact dosing of the electrolyte during the winding up of the battery, two issues can be solved: 1. no wetting of the sealing strips between the single battery cells; 2. no wetting of the sealing strips perpendicular to the winding direction at the closing end of the batteries. With this setup, it is possible to manufacture rolled-up batteries without internal short circuits.
Electrical Performance
To have an idea of the battery performance, the non-winded battery was manufactured simply by the encapsulation of two flat substrates, which had been prepared already for the roll-up experiments.Using two of them, a battery of three cells in between them could be realized.
In the discharge diagram recorded by the potentiostat shown in Figure 7, two zones can be differentiated: from 0 to 90,000 cycles, the voltage fluctuated between an upper (4.3-4.0 V) and a lower level (2.9-2.0 V). At the higher voltage level, there is low power consumption of the electronics. During wireless data transmission, the current demand is the highest, resulting in a decreased voltage level due to the internal resistance of the battery. The electronics demand a minimum voltage level of 2.5 V. Therefore, the reliable operation is only in the range of 0 to 44,000 cycles. For the intended operation, the internal voltage drop of the battery should be lower. This can be realized by lowering the resistance of the current collector with a silver grid underneath. This modification is described in Section 3.3.
At 90,000 cycles, this battery also shows a significant drop in the voltage level from 3.9 V to 2.5 V. This drop similarly indicates that one of the battery cells has reached its end of operation.

With >44,000 discharge cycles, the battery stays basically within the requirement of the application, which is defined as 20,000 cycles minimum.
Battery Manufacturing
The battery manufacturing is carried out similarly to the description given in Section 3.2.1.The only difference is that the layer stack has a silver grid underneath, also shown in Figure 5.
Electrical Performance
In this case, the battery performance was determined with a rolled-up battery, as shown in Figure 8a. The measurement results are given in Figure 8b. Due to less electrolyte inside the battery compared with the flat assembly of Section 3.2, the overall discharge cycles decrease from 44,000 to 34,000. This still matches the requirement of 20,000 cycles minimum.

In the discharge diagram recorded by the potentiostat shown in Figure 8b, two zones can be differentiated: from 0 to 34,000 cycles, the voltage fluctuates between an upper (4.3-3.3 V) and a lower level (4.1-2.5 V). In the higher voltage level, there is low power consumption of the electronics. During wireless data transmission, the current demand is highest, resulting in a decreased voltage level due to the internal resistance of the battery. The electronics demand a minimum voltage level of 2.5 V. Therefore, the reliable operation is only in the range of 0 to 34,000 cycles.
At 38,000 cycles, this battery also shows a significant drop in the voltage level from 3.1 V down to 2.3 V.This drop similarly indicates that one of the battery cells has reached its end of operation.
With >34,000 discharge cycles, the battery stays basically within the requirement of the application, which is defined as 20,000 cycles minimum.
Because of this accelerated discharge setup, it is expected that a battery driving the application with its less frequent discharge pulses will last longer, i.e., enable more cycles, because there is time for the battery to recover by ion reorganization. Therefore, the lifetime of the battery will be sufficient for the application.
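As a rough illustration of this point, the sketch below converts the reliable cycle counts reported in Section 3 into operating time at the application pulse rate of one transmission every 30 min. It is only an estimate based on the numbers quoted in the text; it assumes linear scaling and ignores self-discharge and the recovery effect just mentioned.

```python
# Rough extrapolation from the accelerated test to the application pulse rate.
# Cycle counts are the ">X cycles" values reported in Section 3; the linear
# scaling itself is a simplifying assumption.

cycles_reached = {
    "flat crosswise (Section 3.1)": 70_000,
    "flat lengthwise (Section 3.2)": 44_000,
    "rolled-up with Ag grid (Section 3.3)": 34_000,
}
requirement = 20_000          # minimum cycles demanded by the application
hours_per_pulse = 0.5         # one wireless transmission every 30 min

for name, cycles in cycles_reached.items():
    days = cycles * hours_per_pulse / 24
    print(f"{name}: {cycles} cycles -> ~{days:.0f} days at application rate "
          f"({cycles / requirement:.1f}x the requirement)")
```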
Discussion
The research and development goal, to supply a rolled-up battery with an open inner core as a power supply for a sensor system for a dedicated agriculture application, was successfully reached. The main challenges, such as the layout and the leakage of the battery's encapsulation, were solved.
Comparing a printed battery in the flat and the rolled-up form factor, the flat version has a higher capacity than the rolled-up version. The main reason was found to be the lower amount of electrolyte stored inside the rolled-up battery, which is necessary for the chemical reaction. There, the amount of electrolyte is limited by the sealant thickness of 100 µm and the paper separator in between; the resulting volume was 500 µL. In the flat form factor, there is no issue of bulging the paper substrate of the battery by a too-large electrolyte volume, and the electrolyte volume was about 2 mL in each cell. In the rolled-up battery, this is not possible because the layers lie on top of each other in every ply.
The main challenge for rolling up any flat substrate is the tension caused inside any stacked material system due to different bending radii on the inner and outer sides of the substrate.When employing a relatively stiff encapsulation material, these limitations become even more obvious by small crinkles, causing the leakage of electrolyte.This is happening preferably in the glue layer that is weaker than any paper or polymer film layer.
For the application in agriculture, there are currently two drawbacks with respect to the biodegradability of the battery described in this article:
1. The chemical system requires H2O in the electrolyte for the chemical reaction. Therefore, the electrolyte needs a hermetic sealing, which must withstand the aqueous electrolyte. This is realized by a polymer coating of the paper and a polymer frame with glue layers for encapsulation. These polymers must not be water-degradable and thus will remain in the soil for a long time.
2. The resistance of the carbon electron carrier itself is too high. To deliver the required current for driving the electronics, an additional silver layer is required to lower the overall resistance of the current-conducting layer. Silver is also non-degradable. Furthermore, silver ions are sometimes used to inhibit the growth of bioorganic cells, for example as dopants in sports clothing. A review discussing silver in soil is given in [22].
Conclusions
In this paper, three important steps for designing and building a rolled-up printed primary battery are selected and described. For the first time, printed zinc-carbon batteries have been scaled not only with respect to voltage and capacity but also in three-dimensional shape. With this achievement, a primary battery printed on a paper substrate was rolled up to fulfill the application's demand for an open inner core.
Figure 1. Partly biodegradable sensor developed in EU project "PLANtAR". (a) Structure of the sensor with all components for a wireless smart sensor system; (b) technology demonstrator of partially compostable sensors for agriculture.
Figure 2. Scheme of a printed zinc-carbon battery setup: (a) stack setup of a single cell; (b) series connection of 3 single cells realized by printing layout.
Figure 3. Scheme of a printed zinc-carbon battery setup: (a) stack setup of a 4.5 V nom battery consisting of three cells; (b) beginning of roll-up, including electrolyte and separator layer; (c) rolled-up 4.5 V nom battery from Figure 3a, pictorial schematic.
Figure 4. Scheme of two different 4.5 V nom rolled-up battery layouts: (a,b) crosswise battery cells perpendicular to winding direction. (a) Shows the front side layout, while (b) shows the overlap when rolling up. (c,d) Lengthwise battery cells parallel to winding direction; (c) shows the front side layout, while (d) shows the overlap when rolling up.
Figure 5. Photographs of printed layers (from left to right: silver (optional), carbon, zinc, and manganese dioxide) on flat paper substrate for the lengthwise, three-cell 4.5 V nom battery setup.
Figure 6. Discharge setup of the crosswise battery in flat shape without roll-up. (a) Photo of the setup. The discharge electronics are connected via black wires. The potentiostat is connected via the black clamp and red clamp; (b) voltage diagram for 20 h of battery discharge.
Figure 8. Lengthwise rolled-up three-cell 4.5 V nom battery with silver conductor grid: (a) discharge setup with electronics; (b) voltage diagram of discharge.
Wu and Xie, 2006. Volume 7, Issue 9, Article R85. Open Access.
Neuronal gene expression control

Using comparative sequence analysis, a network among REST, CREB and brain-related miRNAs is proposed to mediate neuronal gene expression.

Abstract

Background: Two distinct classes of regulators have been implicated in regulating neuronal gene expression and mediating neuronal identity: transcription factors such as REST/NRSF (RE1 silencing transcription factor) and CREB (cAMP response element-binding protein), and microRNAs (miRNAs). How these two classes of regulators act together to mediate neuronal gene expression is unclear.
elucidating the role of these regulators in neural development and function.
The transcriptional repressor REST (RE1 silencing transcription factor, also called neuron-restrictive silencer factor or NRSF) plays a fundamental role in regulating neuronal gene expression and promoting neuronal fate [1,2]. REST contains a zinc-finger DNA-binding domain and two repressor domains interacting with corepressors CoREST and mSin3a. The corepressors additionally recruit the methyl DNA-binding protein MeCP2, histone deacetylases (HDAC), and other silencing machinery, which alter the conformation of chromatin resulting in a compact and inactive state [3][4][5][6]. REST is known to target many neuronal genes, and is pivotal in restricting their expression exclusively in neuronal tissues by repressing their expression in cells outside the nervous system. Recent work also points to REST as a key regulator in the transition from embryonic stem cells to neural progenitors and from neural progenitors to neurons [7]. The role of REST in nervous system development is intriguingly manifested by its expression, which is lower in neural stem/progenitor cells than in pluripotent stem cells, and becomes minimal in postmitotic neurons [7]. The expression of REST is shown to be regulated by retinoic acid; however, other forms of regulatory mechanisms are unknown.
Another important class of regulators implicated in neuronal gene expression control and neuronal fate determination is the microRNA (miRNA) [8][9][10]. MiRNAs are an abundant class of endogenous approximately 22-nucleotide RNAs that repress gene expression post-transcriptionally. Hundreds of miRNAs have been identified in almost all metazoans including worm, fly, and mammals, and are believed to regulate thousands of genes by virtue of base pairing to 3' untranslated regions (3'UTRs) of the messages. Many of the characterized miRNAs are involved in developmental regulation, including the timing and neuronal asymmetry in worm; growth control and apoptosis in fly; brain morphogenesis in zebrafish; and hematopoetic and adipocyte differentiation, cardiomyocyte development, and dendritic spine development in mammals [8,11,12]. Based on data from a recent survey [13], we note that the human genome contains about 326 miRNA genes, many of which are highly or specifically expressed in neural tissues [14]. The function of the brain-related miRNAs and the mechanisms underlying their transcriptional control are beginning to emerge [12,[15][16][17].
In addition to REST and miRNAs, many other classes of regulators might also be involved in controlling neuronal gene expression. This control could be carried out through a variety of mechanisms, such as changing chromatin state, affecting mRNA stability and transport, and post-translational modifications. Here we focus specifically on regulation through REST and miRNAs.
To gain a better understanding of how REST and miRNAs regulate neuronal gene expression, we took the initial step of producing a reliable list of genes targeted by REST and several brain-related miRNAs using computational approaches. A list of these target genes should be informative in unraveling the function of these regulators. Moreover, we anticipate that a global picture of the target genes may provide a clue as to how REST and miRNAs act together to coordinate neuronal gene expression programs and promote neuronal identity.
REST represses target genes by binding to an approximately 21-nucleotide binding site known as NRSE (neuron-restrictive silencer element, also called RE1), which is present in the regulatory regions of target genes. Previously, several genome-wide analyses of NRSE sites have been carried out [6,18,19]. These analyses used pattern-matching algorithms to search for sequences matching a consensus derived from known REST binding sites. The most recent work identified 1,892 sites in the human genome [19]. However, there are several factors limiting the utilities of the pattern-matching algorithms. Most notably, transcriptional factors can bind with variable affinities to sequences that are allowed to vary at certain positions. Consequently, methods based on consensus sequence matching are likely to miss target sites with weaker binding affinities. Indeed, it has been noted that both L1CAM and SNAP25 genes contain an experimentally validated NRSE site that diverges from the NRSE consensus [19], and was not identified in the previous analyses. In addition, even sequences perfectly matching the NRSE consensus could occur purely by chance, and therefore do not necessarily imply that they are functional. Given the vast size of the human genome, random matches could significantly add to the false positive rate of a prediction. For example, in the most recent analysis, it was estimated that 41% of the 1,892 predicted sites occur purely by chance, and likely represent false positives [19].
We have developed a method to systematically identify candidate NRSE sites in the human genome without these two main limitations of the previous methods. To address the first limitation, we utilized a profile-based approach, which computes the overall binding affinity of a site to REST without requiring strict matching of each base to the NRSE consensus. To reduce false positives, we rely on comparative sequence analysis to identify only sites that are conserved in orthologous human, mouse, rat and dog regions [20][21][22][23].
MiRNAs repress gene expression by base-pairing to the messages of protein-coding genes for translational repression or message degradation. The pairing of miRNA seeds (nucleotides 2 to 7 of the miRNAs) to messages is necessary and appears sufficient for miRNA regulation [24][25][26]. This enables the prediction of miRNA targets by searching for evolutionarily conserved 7-nucleotide matches to miRNA seeds in the 3'UTRs of the protein-coding genes [21,[27][28][29][30]. We have generated a list of predicted target genes for several brainrelated miRNAs by searching for seed-matches perfectly conserved in mammalian 3'UTRs.
Additionally, we have sought to understand the mechanisms controlling the expression of brain-related miRNAs. To this end, we have used comparative analysis to identify sequence motifs that are enriched and conserved in the regulatory regions of these miRNAs across several mammals.
Identification of 895 NRSE sites in human with a false positive rate of 3.4%
First, we curated from the literature a list of experimentally validated NRSE sites in the human genome [18,19], including 38 sites with site lengths of 21 nucleotides (see supplementary table 1 in Additional data file 1). Based on the 38 known sites, we derived a profile (also called a position weight matrix) on the distribution of different nucleotides at each position of NRSE. The profile shows an uneven contribution to the binding of the REST protein from each of the 21 positions (Figure 1a). Positions 2 to 9 and 12 to 17, which will be referred to as 'core positions' of the NRSE, are much less variable than the remaining positions.
Next we examined the conservation properties of the known NRSE sites. To carry this out, we extracted orthologous regions of these sites in three other fully sequenced mammalian genomes (mouse, rat and dog) [31][32][33][34], and generated an alignment for each site in the four species (see supplementary table 1 in Additional data file 1). The alignment data show that the NRSE sites are highly conserved across the mammalian lineages: out of the 38 reference sites, only one cannot be detected in other mammals. We further examined the conservation of NRSE by counting the number of bases mutated in other species from the aligned human site at each of its 21 positions. Similar to the profile, conservation levels at different NRSE positions are highly non-uniform (Figure 1b). However, the conservation levels at different positions are remarkably well correlated with the NRSE profile: highly constrained positions show much stronger conservation in orthologous species than those with higher variability. The core positions are highly constrained and permit few mutations. Among the 37 aligned sites, all core positions contain fewer than two mutations and no insertions or deletions in any of the other species when compared with a human site. By contrast, in a random control, only 0.47 out of the 38 sites are expected to be called conserved with the same criteria. Therefore, the functional NRSE sites demonstrate a 78-fold increase of evolutionary conservation, suggesting the usefulness of evolutionary conservation as an efficient tool for detecting NRSE sites.
We then used the profile to search the entire human genome for sites that are better described by the profile than other background models. For each candidate 21-nucleotide window in the genome, we calculated a log-odds score quantifying how well the site fits to the NRSE profile (see Materials and methods). The overall distribution of the log-odds scores computed over the regulatory regions of all protein-coding genes in humans is shown in Figure 1c, which follows a normal distribution (mean = -37; standard deviation (SD) = 10). We were interested in sites with scores significantly higher than the bulk of the overall distribution: over the entire human genome, we identified 171,152 sites with log-odds scores above 5 (corresponding to 4.2 SDs away from the mean). The next step was to examine orthologous sequences of these sites in other mammals and filter the list to 1,498 sites based on two criteria: (a) the log-odds scores at the orthologous sites of mouse, rat and dog are also greater than 5, and (b) the number of bases mutated from the corresponding human sequence at the core positions is fewer than two in any of the orthologous sites. The criterion (b) is based on the conservation properties of the known NRSE sites described above.
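A minimal sketch of this filtering step is shown below (our own illustrative re-implementation in Python, not the authors' code). Here `log_odds` stands for the profile score defined in Materials and methods, and the species set, score threshold and core-position criteria follow the text; the explicit length check for insertions/deletions is our reading of the conservation properties described above.

```python
# Sketch of the conservation filter for candidate NRSE sites (illustrative only).
# 'aligned_site' maps each species to the 21-nt sequence aligned to a candidate
# human site; 'log_odds' is the profile score from Materials and methods.

CORE_POSITIONS = list(range(1, 9)) + list(range(11, 17))   # 0-based: positions 2-9 and 12-17

def core_mutations(human_site, ortholog_site):
    """Number of core positions where the ortholog differs from the human site."""
    return sum(human_site[i] != ortholog_site[i] for i in CORE_POSITIONS)

def passes_filter(aligned_site, log_odds, min_score=5.0, max_core_mut=1):
    human = aligned_site["human"]
    if log_odds(human) <= min_score:
        return False
    for species in ("mouse", "rat", "dog"):
        ortholog = aligned_site[species]
        if len(ortholog) != len(human):          # insertions/deletions not allowed
            return False
        if log_odds(ortholog) <= min_score:
            return False
        if core_mutations(human, ortholog) > max_core_mut:
            return False
    return True
```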
Figure 1. NRSE profile and distribution of log-odds score.

We then estimated the number of sites that could be discovered purely by chance. For this purpose, we generated a cohort of control profiles with the same base composition and the same information contents as those of the NRSE profile, and searched the instances of the control profiles using the same procedure. Only 328 sites were found for the control profiles, suggesting that approximately 78% of the 1,498 sites are likely to be bona fide NRSE sites. To balance the need for an even smaller rate of false positives, we further identified 895 sites with log-odds scores above 10 in all aligned species. Only 30 sites are expected by chance, suggesting a false positive rate of 3.4%. The distribution on the log-odds scores of these sites falls distinctly to the far right of the bulk of the background distribution (Figure 1c). These sites are distributed across all chromosomes of the human genome and include 37 out of the 38 known NRSE sites that we have curated.
Next we identified the nearest protein-coding genes located around each of the 895 candidate NRSE sites. Over 60% of these genes have NRSE sites within 20 kb of their transcriptional starts (Supplementary figure 1 in Additional data file 1), while a few NRSE sites are located more than 150 kb away from genes, suggesting the possibility of long-range interactions. To study the properties of these genes further, we generated a list of 566 genes that contain at least one NRSE site within 100 kb of their transcriptional start sites (see supplementary website [35]). Interestingly, 75 (13.2%) of the genes contain more than one NRSE site in their regulatory regions. For instance, NSF (N-ethylmaleimide-sensitive factor) contains as many as four NRSE sites in its regulatory region in a segment of sequence of less than 100 base pairs; another gene NPAS4 (neuronal PAS domain protein 4) contains three NRSE sites spread over a region of 3 kb.
If the predicted genes are bona fide REST targets, we would expect that the expression of these genes should inversely correlate with the expression of REST. To test this, we examined the expression of these genes and REST across a battery of mouse tissues in a dataset generated previously [36]. The tissue gene expression dataset contains 409 of the predicted target genes. It confirms that REST is expressed at low levels in brain-related tissues, and at much higher levels in non-neuronal tissues (Figure 2a). In contrast to the expression profile of REST, most of the predicted REST target genes are specifically expressed in brain-related tissues (Figure 2b). We calculated the correlation coefficient between REST and each of the predicted target genes: the mean correlation coefficient for the genes shown in Figure 2b is -0.21, which is much lower (P value = 2.2e-16) than what is expected by chance (Figure 2c). Using a stringent threshold (see Materials and methods), we screened out 188 genes (46% of all 409 genes, 5.4-fold enrichment) that demonstrate specific expression in brain-related tissues. A list of these genes and their expression profiles across different tissues is shown in Additional data file 1, supplementary figure 2.
We then examined the functional annotation of all 566 predicted REST target genes. Specifically, we were aiming to test if these target genes are enriched in any of the functional categories specified in gene ontology. Based on an annotation provided in [37], we found that the gene set is highly enriched with genes implicated in nervous system development and function (Figure 3). For example, 51 genes (5.2-fold enrichment, P value = 1.3e-22) encode ion channel activity, and 28 genes (7.3-fold enrichment, P value = 6.6e-17) are involved in synaptic functions. Interestingly, the list also contains a large number of genes (60, 4.4-fold enrichment and P value = 2.1e-22) implicated in nervous system development; 15 genes are involved in neuronal differentiation, which include a set of important transcription factors such as NeuroD1, NeuroD2, NeuroD4, LMX1A, SOX2 and DLX6.
However, we also observed some genes that do not seem to encode obvious neural-specific functions. This is consistent with what we observed when examining gene expression patterns for these genes ( Figure 2b): a significant portion of them show specific expression in non-neuronal tissues such as brown fat, pancreas, spleen and thyroid ( Figure 2b). Interestingly, in most of the tissues the expression of REST is also low (Figure 2a), consistent with the role of REST as a transcriptional repressor. The extent to which REST contributes to the function of other cell types is unclear. A recent study identified REST as a tumor suppressor gene in epithelia cells [38]. Together with our findings, this may suggest that REST could potentially regulate a set of genes not necessarily specific to neuronal functions. Alternatively, the observed expression of some REST target genes in non-neuronal tissues might be due to other confounding factors, such as the heterogeneous cell population in these tissues, added levels of regulation caused by transcriptional regulators which themselves are targeted by REST, and the potential regulation by miRNAs, which we will discuss in more detail later.
Figure 2. Gene expression patterns of predicted REST targets in 61 mouse tissues.
Thus, using a profile constructed from 38 known NRSE sites and requiring evolutionary conservation in other mammalian species, we have identified 895 sites in the human genome with an estimated false positive rate of 3.4%. We have identified protein-coding genes near these elements, and found that most of these genes are expressed specifically in neuronal tissues.
Brain-related miRNAs in the vicinity of the NRSE sites
We noticed that there is a set of miRNAs that are located in close proximity to the predicted 895 NRSE sites in the human genome (Table 1). This includes 10 miRNA genes that are located within 25 kb of at least one NRSE site, where no protein-coding genes can be found nearby. Three of the miRNAs, miR-124a, miR-9 and miR-132, have further experimental support for targeting by REST, as demonstrated in a chromatin immunoprecipitation analysis by Conaco et al. [39]. Additionally, we discovered that miR-29a, miR-29b and miR-135b are also located in the vicinity of the NRSE sites. All these 10 miRNA genes are located in intergenic regions, and are transcribed with their own promoters. We also found that there is a set of miRNA genes likely regulated by REST indirectly through the promoters of protein-coding genes that host these miRNAs. These miRNA genes are located in the introns of protein-coding genes, which themselves are predicted REST targets. It is known that miRNAs located inside protein-coding genes are often cotranscribed with the host, and spliced out only after transcription. The set of miRNAs include miR-153 within PTPRN, miR-346 within glutamate receptor GRID1, and miR-218 within SLIT3.
Overall, we identified 16 miRNA genes that are potentially regulated by REST (Table 1) directly or indirectly through their protein-coding hosts. Interestingly, most of these miRNAs are expressed in the brain, and some of them show brain-specific/enriched expression patterns. In a recent survey of several miRNA expression-profiling studies, Cao et al. generated a list of 34 miRNAs that demonstrate brain-specific/enriched expression in at least one study [14]. The 16 miRNA genes we identified correspond to 13 unique miRNA mature products. Out of the 13 miRNAs, eight (62%) are contained in the list of 34 brain-specific/enriched miRNAs summarized by Cao et al., which is about sixfold enrichment when compared with what is expected by chance (34 out of all 319 miRNAs, 10.6%). Among the six miRNAs not included in the list of 34 brain-related miRNAs, mir-29 has been demonstrated to show dynamic expression patterns during brain development, and is strongly expressed in glial cells during neural cell specification [14,40]; mir-346, mir-95 and mir-455 are contained in the introns of (and share the same strand as) their protein-coding hosts, which themselves are specifically expressed in brain-related tissues (supplementary figure 5 in Additional data file 1). It is unclear why these miRNAs and their host genes appear to demonstrate different expression patterns.
In summary, this suggests that similar to neuronal genes, a set of brain-related miRNAs are likely under the control of REST as well. REST might play an important role in repressing the expression of these miRNAs in cells outside the nervous system.
Identification of target genes for each of the brain-related miRNAs
MiRNAs have been suggested to regulate the expression of thousands of genes. Our next step was to seek to identify genes that are targeted by the set of brain-related miRNAs mentioned above. We used an approach similar to previous analyses [21,27], and identified candidate targets by searching for conserved matches of the miRNA seeds (nucleotides 2 to 7 of the miRNA) in the 3'UTRs of the protein-coding genes. To reduce the rate of false positives, we required the seed to be conserved not only in eutherian mammals as used in the previous analysis, but also in marsupials. For this purpose, we first generated an aligned 3'UTR database in the orthologous regions of the human, mouse, rat, dog and opossum genomes (HMRDO). Then we searched the aligned 3'UTRs for conserved 7-nucleotide sequences that could form a perfect Watson-Crick pairing to each of the miRNA seeds. This effort led to hundreds of predicted targets for the brain-related miRNAs, including 315 targets for miR-124a, 273 targets for miR-9, and 80 targets for miR-132. The complete list of predicted target genes for each of the brain-related miRNAs can be viewed at the supplementary website [35].

Figure 3. Enriched functional categories for predicted REST target genes. Each row represents one function category, and shows the observed number of REST target genes contained in that category and the number of genes expected purely by chance.
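As an illustration of the seed-match search described above, here is a small Python sketch (our own, not the authors' pipeline). It uses the 7-mer complementary to miRNA nucleotides 2-8, one common convention, which may differ slightly from the exact definition used in the paper; the miR-124a sequence shown is taken from miRBase and should be treated as an assumption.

```python
# Illustrative seed-match search in aligned 3'UTRs (not the authors' code).

COMPLEMENT = str.maketrans("ACGU", "UGCA")

def seed_match(mirna):
    """Reverse complement of miRNA positions 2-8, written as DNA for searching 3'UTRs."""
    seed = mirna[1:8]                                    # nucleotides 2-8 (0-based slice)
    return seed.translate(COMPLEMENT)[::-1].replace("U", "T")

def conserved_targets(mirna, aligned_utrs):
    """Genes whose aligned 3'UTRs all contain the seed match (human, mouse, rat, dog, opossum)."""
    site = seed_match(mirna)
    return [gene for gene, utr_by_species in aligned_utrs.items()
            if all(site in utr.upper() for utr in utr_by_species.values())]

# Example: miR-124a (mature sequence as listed in miRBase; an assumption here)
print(seed_match("UAAGGCACGCGGUGAAUGCC"))   # -> GTGCCTT
```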
We examined the expression of the predicted target genes in different mouse tissues. The expression profile of the predicted target genes for each of the miRNAs across different tissues is shown in the supplementary website [35]. Interestingly, we noticed that the brain-related miRNAs target many genes that are highly transcribed in neural tissues (supplementary figure 3 in Additional data file 1). For instance, among 191 genes targeted by mir-124a that have been profiled across different tissues, 45 (23.6%) are specifically expressed in brain-related tissues, which is 2.8-fold enrichment of that which would be expected by chance (8.54%). The enrichment also holds true for mir-9 in that 25.8% of its target genes show brain-specific expression (threefold enrichment). The coexistence of the predicted target genes and the miRNAs in the same tissues suggests that the brain-related miRNAs are likely involved in extensive regulation of a large number of neuronal genes.
As to REST itself, our initial analysis did not identify any miRNA that could bind to its 3'UTR. However, a closer examination indicates that the REST gene harbors a much longer 3'UTR transcript, not annotated by any gene prediction programs (Additional data file 1, supplementary figure 4). This longer 3'UTR is supported by three pieces of evidence: 1) multiple ESTs detected in this region; 2) high levels of conservation across all mammalian species, and even chicken; and 3) a perfectly conserved poly-adenylation site (AATAAA) in all mammals at the end of the new transcript.
Based on the new 3'UTR transcript, we performed the target prediction again and discovered that REST itself is also targeted by several brain-related miRNAs including miR-9, miR-29a, and miR-153. Together with the discovery of regulation by REST on these miRNAs, this suggests the existence of an extensive double feedback loop between the REST complex and the brain-related miRNAs.
We notice that the 3'UTR of the REST also harbors predicted target sites for several miRNAs that do not seem to have obvious neuronal-specific functions. Out of the seven unique target sites (conserved in HMRDO), three sites are not contained in the list of 34 brain-specific/enriched miRNAs curated by Cao et al. [14], including one site targeted by mir-93 family, one site targeted by mir-25 family, and one site targeted by mir-377. Both mir-93 and mir-25 are enriched in non-neuronal tissues such as spleen and thymus [41]. This seems to reinforce the observation of expression patterns for the predicted protein-coding targets of REST, where we also noticed a set of target genes specifically expressed in non-neuronal tissues ( Figure 2). We speculate that REST might be involved in the regulation of genes outside the nervous systems.
cAMP response element binding protein (CREB) is a potential positive regulator of the brain-related miRNAs
Next we sought to understand the regulatory machinery controlling the expression of the set of brain-related miRNAs. Besides the negative regulation by REST, we are particularly interested in factors that positively regulate the expression of these miRNAs. Given the scarcity of data on the regulation of miRNA in general, we decided to take an unbiased approach to look for short sequence motifs enriched in the regulatory regions of these miRNAs.
Since few primary transcripts of the miRNA genes are available, we decided to examine a relatively big region (from 10 kb upstream to 5 kb downstream) around each of the miRNAs. On the other hand, using big regions significantly increases the difficulty of detecting any enriched motifs. We therefore resorted to comparative sequence analysis again, by searching only for sequence motifs present in aligned regions of the four mammals. For this purpose, we generated a list of all 7-nucleotide motifs, and for each motif we counted the number of conserved and total instances in those regions, and computed a score quantifying the enrichment of the conserved instances (see Materials and methods). The analysis yielded 35 motifs that are significantly enriched in these regions with a P value less than 10^-6 (Table 2). The top motif is GACGTCA, which is a consensus cAMP response element (CRE) recognized by CREB, a basic leucine zipper transcription factor. We repeated the motif discovery using 6-mer and 8-mer motifs, and consistently identified the CRE element as the most significant motif. For the ten miRNA genes (Table 1) predicted to be directly regulated by REST, we found nine containing a conserved CRE site nearby. This set of miRNAs includes miR-124a, miR-9, miR-29a/29b, and miR-132 (Table 3, Figure 4). Although this association is purely computational, a recent study demonstrated experimentally that one of these miRNAs, miR-132, is regulated by CREB and is involved in regulating neuronal morphogenesis [42].
In addition to CREB, we also identified several other potential regulators such as E47, SMAD3, POU3F2, and MYOD. For instance, besides REST and CREB, miR-9-3 is predicted to be regulated by SMAD3, OCT1, and POU3F2 (Figure 5a), and miR-132 is predicted to be regulated by MYOD and MEF2 (Figure 5b). Interestingly, a recent study shows that MEF2 and MYOD control the expression of another miRNA, miR-1, and play an important role in regulating cardiomyocyte differentiation [11]. As well as being expressed in muscle tissues, MEF2 is also highly expressed in brain, where it plays an important role in controlling postsynaptic differentiation and in suppressing excitatory synapse number [43]. It would be interesting to examine whether miRNAs are involved in such processes via the regulation by MEF2.
Thus, we have identified several transcription factors that potentially regulate the expression of the brain-related miRNAs, with CREB being the top candidate. It is likely that the expression of the brain-related miRNAs is under rigorous control of these regulators during different developmental stages and in different cell types.
Discussion
Comparative sequence analysis is a powerful and general tool for detecting functional elements, because these elements are often under strong selective pressure to be preserved, and therefore stand out from neutrally evolving sequences by displaying a greater degree of conservation across related species. In this work, we have relied on comparative genomics to study the regulation of neuronal gene expression, and have identified functional elements for three distinct classes of regulators including REST, CREB, and miRNAs.

Table 2. Enriched motifs in the regulatory regions of brain-related miRNAs. * Transcription factors from the Transfac database. † Known consensus in the Transfac database that is similar to the 7-mer. ‡ Measures the similarity between the 7-mer and the Transfac factor consensus; the score ranges from 0 to 1, with 1 for two identical consensus sequences.
We identified 895 NRSE sites conserved in human, mouse, rat and dog with an estimated false positive rate of 3.4%. The number is significantly lower than 41%, which is the estimated false positive rate in the previous analysis by Bruce et al. [19], where across-species conservation criteria were not considered. Moreover, we used a profile-based approach, and were able to identify sites deviating from the NRSE consensus. For instance, we successfully identified two experimentally validated sites in L1CAM and SNAP25 that deviate from the NRSE consensus and were missed in previous analyses.
A set of the predicted sites is located in close proximity to a set of brain-related miRNA genes. This suggests that similar to the regulation of neuronal genes, many brain-specific miRNAs are likely to be repressed by REST in non-neuronal tissues. To help better understand the function of these miRNAs, we have generated a list of predicted target genes for each of the miRNAs. The predicted targets include many genes that are specifically expressed in neural tissues, suggesting the potentially extensive regulation by the miRNAs on these genes.
We discovered that the REST corepressor complex itself is targeted by multiple brain-related miRNAs (Figure 4). Together with the repressive role of REST on these miRNAs, the analysis points to the existence of a double-negative feedback loop between the transcription factor REST and brain-related miRNAs in mediating neuronal gene expression. The double-negative feedback loop is used widely in engineering as a robust mechanism for maintaining the stability of a dynamic system. A two-component system with mutual inhibitions often results in a bistable system in which only one component is active at the resting state, and the active component can be stabilized against noisy perturbations by negative feedbacks. We speculate that the nervous system may utilize this mechanism in restricting the expression of neuronal genes exclusively to neuronal tissues. It has been reported that REST is actively transcribed in neural progenitors during neurogenesis [7]. Moreover, there are also reports showing that mRNA of REST is present in mature hippocampal neurons, and the mRNA level can be elevated following epileptic insults [44]. If these transcripts are all translated into REST proteins, a large number of neuronal genes will be repressed, most likely undesirably. However, little REST protein can be detected in neural progenitors, and to what extent the REST protein is expressed in mature hippocampal neurons is unclear. Previously, the proteasome-dependent pathway was suggested to be involved in the post-translational degradation of the REST protein [7]. We suggest that the set of miRNAs targeting REST might be an additional mechanism ensuring the removal of REST products in neuronal tissues.
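To illustrate the bistability argument above, here is a toy simulation of a generic mutual-repression motif (our own construction with arbitrary parameters, not a model from the paper). Depending on the initial condition, the system settles into either a REST-high/miRNA-low state or a miRNA-high/REST-low state.

```python
# Toy model of a double-negative feedback loop (mutual repression), illustrating
# bistability. This is a generic textbook motif with arbitrary parameters, not a
# fitted model of REST/miRNA regulation.

def simulate(rest0, mirna0, k=2.0, n=4, decay=1.0, dt=0.01, steps=20_000):
    r, m = rest0, mirna0
    for _ in range(steps):
        dr = k / (1 + m ** n) - decay * r   # REST production repressed by the miRNAs
        dm = k / (1 + r ** n) - decay * m   # miRNA production repressed by REST
        r, m = r + dt * dr, m + dt * dm
    return round(r, 3), round(m, 3)

print(simulate(rest0=1.0, mirna0=0.0))   # settles in a REST-high / miRNA-low state
print(simulate(rest0=0.0, mirna0=1.0))   # settles in a miRNA-high / REST-low state
```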
We have used gene expression data measured across different tissues to examine the expression patterns of REST, its target genes and the brain-related miRNAs. However, there are several confounding factors that might limit the utility of such expression data. First, the tissues typically contain heterogeneous cell types. For instance, the brain tissues are always a mixture of neurons and glial cells. If a gene is expressed differentially in different cell types, its expression measured at tissue level may become hard to interpret. Second, the expression data may be further confounded by many secondary effects. For example, transcriptional regulators controlled by REST may themselves lead to expression changes for a large number of genes. Indeed, many of the predicted REST targets are transcription factors, such as NeuroD1, NeuroD2 and NeuroD4, involved in neural differentiation, and several LIM homeobox proteins such as LHX2, LHX3 and LHX5. The measured expression levels are likely a combined effect of several levels of regulation. Third, because of the added levels of regulation by miRNAs, RNA measurement of a gene may not reflect its true expression levels. As we mentioned above, it has been observed that REST is transcribed in neural progenitor cells, but little REST protein can be detected. Examining protein expression data is certainly more desirable. However, at present we have few high-quality large-scale protein expression data available. Such data might gradually become available in the future with recent technical developments.

In addition to REST, which is a regulator repressing the set of brain-related miRNAs, we are also interested in identifying the factors positively regulating those miRNAs. We have undertaken an unbiased approach of searching for conserved and enriched short motifs in the regulatory regions of these miRNAs, and have identified CREB as the top candidate regulator. CREB is an important transcription factor regulating a wide range of neuronal functions including neuronal survival, neuronal proliferation and differentiation, process growth, and synaptic plasticity [45,46]. CREB can be activated via phosphorylation by multiple extracellular stimuli such as neurotrophins, cytokines, and calcium, as well as a variety of cellular stresses. The discovery of regulation of multiple miRNAs by CREB indicates that these miRNAs are potentially expressed in an activity-dependent manner. It would be interesting to examine whether these miRNAs play a role in regulating synapse development and plasticity.
Conclusion
We have identified 895 putative NRSE sites conserved in the human, mouse, rat and dog genomes. A subset of these NRSE sites is present in the vicinity of several brain-related miRNAs, suggesting the transcriptional repression of these miRNAs by REST. We have also found that the brain-related miRNAs are enriched with CRE elements in their promoter regions, implicating the role of CREB in the positive regulation of these miRNAs. Altogether, the comparative sequence analysis points to an intricate network of transcription activators and repressors acting together with miRNAs in coordinating neuronal gene expression and promoting neuronal identity.
Multiple sequence alignment among human, mouse, rat and dog
We used the whole-genome mammalian alignments generated by the UCSC genome browser [47]. From the whole-genome alignment, we then extracted regions of interest. For instance, we generated the aligned NRSE sequences based on the genome coordinates of NRSE sites in human. Similarly, we constructed the aligned 3'UTR database using the coordinates of 3'UTRs of all protein-coding genes. For 3'UTRs, we used five-way alignments (human, mouse, rat, dog and opossum). The annotation of genes and their 3'UTRs is from the collection of known genes deposited in the UCSC genome browser.
Constructing the NRSE profile and calculation of log-odds score
The NRSE profile was constructed from 38 known NRSE sites, each with a site length of 21 nucleotides. We used the 38 sites to compute the frequency of different nucleotides at each position, and generated a position weight matrix representation $P$ of the profile, where $p_{ij}$ represents the probability of nucleotide $j$ at position $i$. The information content of the profile at position $i$ is defined as $IC_i = 2 + \sum_j p_{ij} \log_2(p_{ij})$. For any candidate 21-nucleotide sequence, we then calculated a log-odds score to evaluate how well the sequence matched the NRSE profile. The log-odds score is defined as $LO = \sum_i \log_2\bigl(p_{i,j(i)} / b_{j(i)}\bigr)$, where $j(i)$ is the nucleotide at position $i$ of the sequence, and $b_j$ represents the probability of observing nucleotide $j$ in a background model. The log-odds score computes the log ratio of two likelihoods, one that the site is generated by the NRSE profile, and the other that the site is generated by a neutral background model. In the neutral background model, we assume each nucleotide is generated independently according to a given nucleotide composition. We estimated the nucleotide composition based on sequences extracted from regulatory regions (5 kb upstream) of all known genes for each of the species separately.
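A minimal Python sketch of this profile construction and scoring follows (an illustrative re-implementation; the pseudocount handling is our own assumption and is not described in the text).

```python
# Illustrative PWM construction and log-odds scoring for 21-nt NRSE sites.
import math
from collections import Counter

BASES = "ACGT"

def build_profile(sites, pseudocount=0.5):
    """Position weight matrix p[i][base] from a list of aligned 21-nt sites."""
    length = len(sites[0])
    profile = []
    for i in range(length):
        counts = Counter(site[i] for site in sites)
        total = len(sites) + 4 * pseudocount
        profile.append({b: (counts[b] + pseudocount) / total for b in BASES})
    return profile

def log_odds(seq, profile, background):
    """LO = sum_i log2( p_{i, seq[i]} / b_{seq[i]} )."""
    return sum(math.log2(profile[i][base] / background[base]) for i, base in enumerate(seq))

def information_content(column):
    """IC_i = 2 + sum_j p_ij * log2(p_ij) for one profile column."""
    return 2 + sum(p * math.log2(p) for p in column.values() if p > 0)
```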
Analysis of gene expression across different tissues
We used the microarray gene expression data published previously by Su et al. [36], which profiled expression patterns of genes across 61 mouse tissues. We postprocessed the dataset and removed any probe with a mean expression level across different tissues of less than 100, and an SD less than 50. For genes containing multiple probes in the array, we used values averaged over different probes to represent the expression level for that gene. In total, 13,743 genes were used for further analysis. For each of the genes, we then normalized their expression values across different tissues such that the mean expression across different tissues was zero and the SD was 1. Based on the normalized values, we then screened out genes with expression values higher than 0.35 in at least one of the brain-related tissues. A total number of 1,174 genes was identified, and we refer to the gene set as the brain-related genes.
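The screening just described can be sketched as follows (illustrative only). `expr` is assumed to be a pandas DataFrame of probe-level expression values with the 61 tissues as columns; whether the mean and SD filters were applied jointly or separately is our reading of the text.

```python
# Sketch of the tissue-expression screening for brain-related genes.
import pandas as pd

def brain_related_genes(expr: pd.DataFrame, brain_tissues, threshold=0.35):
    # Drop probes with low mean or low variability across tissues (thresholds from the text).
    expr = expr[(expr.mean(axis=1) >= 100) & (expr.std(axis=1) >= 50)]
    # Average probes mapping to the same gene (index assumed to hold gene identifiers).
    expr = expr.groupby(expr.index).mean()
    # Normalize each gene to zero mean and unit SD across tissues.
    z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)
    # Keep genes exceeding the threshold in at least one brain-related tissue.
    return z.index[(z[brain_tissues] > threshold).any(axis=1)].tolist()
```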
Identification of regulatory motifs for brain-related miRNAs
First we generated a multiple sequence alignment between human, mouse, rat and dog for the region from 10 kb upstream to 5 kb downstream of each miRNA. We then searched for the occurrence of all 7-mers in the aligned regions. For each 7-mer, we counted the number of total instances ($N$) in human, and the number of instances ($K$) perfectly conserved in the aligned regions of mouse, rat and dog. We then calculated a Z-score defined as $Z = (K - Np_0)/\sqrt{Np_0(1-p_0)}$, where $p_0$ is the background conservation rate. The Z-score measures how many standard deviations the number of conserved instances lies away from what is expected by chance, assuming a binomial model on whether a site is conserved. The Z-score quantifies the enrichment of conserved motifs in the aligned regions. To achieve a significant Z-score, a 7-mer must be highly conserved and occur at high frequency.
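Here is a small Python sketch of this conserved-motif scan (an illustrative re-implementation; the background conservation rate used in the paper is not restated in this text, so the value in the code is only a placeholder).

```python
# Conserved k-mer enrichment scan over aligned miRNA regulatory regions.
import math
from collections import defaultdict

def motif_zscores(regions, k=7, p0=0.05):
    """Z = (K - N*p0) / sqrt(N*p0*(1-p0)) for every k-mer found in the human sequences.
    'regions' is a list of dicts mapping species -> aligned sequence; p0 is a placeholder."""
    total = defaultdict(int)       # N: occurrences in human
    conserved = defaultdict(int)   # K: occurrences conserved in mouse, rat and dog
    for region in regions:
        human = region["human"]
        for i in range(len(human) - k + 1):
            kmer = human[i:i + k]
            if "-" in kmer:
                continue
            total[kmer] += 1
            if all(region[sp][i:i + k] == kmer for sp in ("mouse", "rat", "dog")):
                conserved[kmer] += 1
    return {kmer: (conserved[kmer] - n * p0) / math.sqrt(n * p0 * (1 - p0))
            for kmer, n in total.items()}
```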
Additional data files
Supporting figures and tables are available with the online version of this article in Additional data file 1. The identified NRSE sites, the miRNA target genes and other materials mentioned in the article can be viewed at a supplementary website [35].
Additional data file 1: Supporting figures and tables (a PDF containing supporting figures and tables).
Reliability analysis of semicoherent systems through their lattice polynomial descriptions
A semicoherent system can be described by its structure function or, equivalently, by a lattice polynomial function expressing the system lifetime in terms of the component lifetimes. In this paper we point out the parallelism between the two descriptions and use the natural connection of lattice polynomial functions and relevant random events to collect exact formulas for the system reliability. We also discuss the equivalence between calculating the reliability of semicoherent systems and calculating the distribution function of a lattice polynomial function of random variables.
Introduction
Consider a semicoherent system made up of nonrepairable components. Such a system can be described by its structure function, which expresses at any time the state of the system in terms of the states of its components. Equivalently, the system can be described by a lattice polynomial (l.p.) function which expresses the system lifetime in terms of the component lifetimes.
In this paper, we point out the formal parallelism between both descriptions, we collect exact formulas for the system reliability, and we show that calculating the reliability of semicoherent systems is equivalent to calculating the distribution function of an l.p. function of random variables.
We also consider the more general case where there are collective upper bounds on lifetimes of certain subsets of components imposed by external conditions (such as physical properties of the assembly) or even collective lower bounds imposed for instance by backup blocks with constant lifetimes. In terms of lifetimes, such systems can be described by weighted lattice polynomial (w.l.p.) functions. In terms of state variables, we will see that a "weighted version" of the structure functions is required.
This paper is organized as follows. In §2 we discuss the parallelism between the description of semicoherent systems by structure functions and by the corresponding l.p. functions. In particular, in §2.3, Theorem 2 uses the natural connection between lattice polynomial functions and relevant random events to establish a centrally important relation between the lifetimes of the system and its components. In §3 we provide exact formulas for the system reliability in the case of independent arguments and in general. In turn, those formulas make it possible to give exact expressions for reliability parameters such as the mean time-to-failure of the system. In §4 we generalize our results by considering lower and upper bounds on lifetimes of certain components. Finally, in §5 we examine how our results can supply exact formulas for the distribution and moments of w.l.p. functions of random variables.
Structure function and l.p. function
In this section we recall the main concepts and results related to structure functions of semicoherent systems. We also point out the parallelism between the description of a system by its structure function and the description of this system by an l.p. function of the component lifetimes.
Structure function
Consider a system consisting of n components that are interconnected. The state of a component i ∈ [n] can be represented by a Boolean variable x_i, defined to be 1 if component i is functioning and 0 if it is in a failed state. For simplicity, we also introduce the state vector x = (x_1, . . . , x_n). The state of the system is described from the component states through a Boolean function φ : {0, 1}^n → {0, 1}, called the structure function of the system, defined as φ(x) = 1 if the system is functioning and φ(x) = 0 if the system is in a failed state.
As a Boolean function, the structure function φ can also be regarded as a set function v : 2^[n] → {0, 1}. The correspondence is straightforward: v(A) is the value of φ at the characteristic vector of the subset A ⊆ [n]. We shall henceforth make this identification and often write φ_v(x) instead of φ(x). Clearly, the structure function φ_v is nondecreasing and nonconstant if and only if its underlying set function v is nondecreasing and nonconstant. We also observe that, being a Boolean function, the function φ_v has a unique expression as a multilinear polynomial in n variables (see for instance Hammer and Rudeanu [7]), whose coefficients are encoded by a set function m_v : 2^[n] → Z, the Möbius transform of v. Another concept that we shall often use in this paper is the dual of the set function v, that is, the set function v* : 2^[n] → {0, 1} defined by v*(A) = 1 − v([n] \ A). By extending formally the structure function φ_v to [0, 1]^n by linear interpolation, we define the multilinear extension of φ_v (a concept introduced in game theory by Owen [13]), that is, the multilinear polynomial function from [0, 1]^n to [0, 1] that agrees with φ_v on {0, 1}^n. Now, by combining the concepts of Möbius transform, dual set function, and even the "coproduct" operation ∐, defined by ∐_i x_i = 1 − Π_i (1 − x_i), we can easily derive various useful forms of the structure function. Each of these forms is a polynomial expression of the function φ_v and hence, when formally regarded as a function from [0, 1]^n to [0, 1], it identifies with the corresponding multilinear extension; see also Grabisch et al. [5]. Table 1 summarizes the best known forms of the structure function and its multilinear extension.
L.p. function
For any event E, let Ind(E) represent the indicator random variable that gives 1 if E occurs and 0 otherwise. For any i ∈ [n], we denote by T_i the random time-to-failure of component i and by X_i(t) = Ind(T_i > t) the random state at time t ≥ 0 of component i. For simplicity, we introduce the random time-to-failure vector T = (T_1, . . . , T_n) and the random state vector X(t) = (X_1(t), . . . , X_n(t)) at time t ≥ 0. We also denote by T_S the random time-to-failure of the system and by X_S(t) = Ind(T_S > t) the random state at time t ≥ 0 of the system.
The structure function φ v clearly induces a functional relationship between the variables T 1 , . . . , T n and the variable T S . As we will see in Theorem 2, T S is always an l.p. function of the variables T 1 , . . . , T n . Just as for the structure function, this l.p. function provides a complete description of the structure of the system. [6, §I.4]. Let L ⊆ R denote a totally ordered bounded lattice whose lattice operations ∧ and ∨ are respectively the minimum and maximum operations. Denote also by a and b the bottom and top elements of L. Definition 1. The class of lattice polynomial (l.p.) functions from L n to L is defined as follows: (i) For any k ∈ [n], the projection t → t k is an l.p. function from L n to L.
(ii) If p and q are l.p. functions from L n to L, then p ∧ q and p ∨ q are l.p. functions from L n to L.
(iii) Every l.p. function from L n to L is constructed by finitely many applications of the rules (i) and (ii).
Clearly, any l.p. function p : L n → L is nondecreasing and nonconstant. Furthermore, it was proved (see for instance Birkhoff [1,§II.5]) that such a function can be expressed in disjunctive and conjunctive normal forms, that is, there always exist nonconstant set functions w d : 2 [n] → {a, b} and w c : 2 [n] → {a, b}, with w d (∅) = a and w c (∅) = b, such that Clearly, the set functions w d and w c that disjunctively and conjunctively define the polynomial function p(t) in (4) are not unique. However, it can be shown [8] that, from among all the possible set functions that disjunctively define p(t), only one is nondecreasing. Similarly, from among all the possible set functions that conjunctively define p(t), only one is nonincreasing. These special set functions are given by [n]\A ).
The l.p. function disjunctively defined by a given nondecreasing set function w : 2 [n] → {a, b} will henceforth be denoted p w . We then have where w * is the dual of w, defined as
System descriptions
The following theorem points out the one-to-one correspondence between the structure function and the l.p. function that expresses T S in terms of the variables T 1 , . . . , T n . As lifetimes are [0, ∞]-valued, we shall henceforth assume without loss of generality that L = [0, ∞], that is, a = 0 and b = ∞. We also make use of the transformation γ as defined in (5).
Theorem 2. Consider a system whose structure function φ_v : {0, 1}^n → {0, 1} is nondecreasing and nonconstant. Then we have T_S = p_w(T) (Equation (6)), where w = γ ∘ v. Conversely, any system fulfilling (6) for some l.p. function p_w : L^n → L has the nondecreasing and nonconstant structure function φ_v with v = γ^{-1} ∘ w. Proof. The proof relies mainly on the distributive property of the indicator function Ind(·) with respect to disjunction and conjunction, namely Ind(E ∨ E′) = Ind(E) ∨ Ind(E′) and Ind(E ∧ E′) = Ind(E) ∧ Ind(E′) for any events E and E′. Thus, for any t ≥ 0 we have Ind(p_w(T) > t) = φ_v(X(t)). Hence, we have T_S = p_w(T) if and only if X_S(t) = φ_v(X(t)) for all t ≥ 0, which completes the proof.
Remark 3. Since φ v is a Boolean function, we can always replace in its expression each product Π and coproduct ∐ with the minimum ∧ and the maximum ∨, respectively. Thus, Theorem 2 essentially states that φ v is also an l.p. function that has just the same max-min form as p w but applied to binary arguments. More precisely, φ v is similar to p w in the sense We observe that many properties of the structure functions can be derived straightforwardly from the properties of the corresponding l.p. functions. Let us examine some of them (see for instance Rausand and Høyland [15, §3.11]): 1. Boundary conditions. From the idempotency of p (that is, p(t, . . . , t) = t for all t ∈ L), we immediately retrieve the idempotency of φ, that is, the boundary conditions φ(0) = 0 and φ(1) = 1.
2. Internality. The internality property of p, namely ⋀_i t_i ≤ p(t) ≤ ⋁_i t_i, corresponds to the following internality property of φ: ⋀_i x_i ≤ φ(x) ≤ ⋁_i x_i. Note that, in both cases, internality results immediately from increasing monotonicity and idempotency.
3. Pivotal decomposition. Consider the median-based decomposition formula [8], which holds for any l.p. function, where the ternary median function returns the middle one of its three arguments and where (a_i, t) (resp. (b_i, t)) represents the vector t whose ith coordinate has been replaced with a (resp. b). From this formula we derive the corresponding property of the structure function, and hence we retrieve the pivotal decomposition of the structure function, namely φ_v(x) = x_i φ_v(1_i, x) + (1 − x_i) φ_v(0_i, x).
4. Minimal path and cut sets. The disjunctive and conjunctive representations (9) of the l.p. function p_w having a minimal number of terms (see Marichal [8, Proposition 8]) are in one-to-one correspondence with the representations of the structure function by minimal paths and cuts (a computational sketch is given after this list). Recall that a path set P ⊆ [n] is a set of components which by functioning ensures that the system is functioning. Similarly, a cut set K ⊆ [n] is a set of components which by failing causes the system to fail. It is known that if P_1, . . . , P_r are the minimal path sets and K_1, . . . , K_s are the minimal cut sets, then φ_v(x) = ⋁_{j=1}^{r} ⋀_{i∈P_j} x_i = ⋀_{j=1}^{s} ⋁_{i∈K_j} x_i. The corresponding formulas for the l.p. function, p_w(t) = ⋁_{j=1}^{r} ⋀_{i∈P_j} t_i = ⋀_{j=1}^{s} ⋁_{i∈K_j} t_i, are exactly the "minimal" representations (9) of p_w.
5. Extra component connected in series or parallel. Any l.p. function p : L^n → L trivially fulfills the functional equations p(u ∧ t_1, . . . , u ∧ t_n) = u ∧ p(t_1, . . . , t_n) and p(u ∨ t_1, . . . , u ∨ t_n) = u ∨ p(t_1, . . . , t_n) for arbitrary u ∈ L. These equations mean that connecting in series (resp. in parallel) any extra component to the system amounts to connecting that component in series (resp. in parallel) to each component of the system. The corresponding equations for the structure function are clear: φ(y x_1, . . . , y x_n) = y φ(x_1, . . . , x_n) and φ(y ∐ x_1, . . . , y ∐ x_n) = y ∐ φ(x_1, . . . , x_n), for arbitrary y ∈ {0, 1}.
6. Dual structure. Recall that the dual structure function of a structure function φ_v is defined as φ_v^D(x) = 1 − φ_v(1 − x). From this definition, we derive immediately φ_v^D = φ_{v*}, and hence from (1) we immediately retrieve the dual form of φ_v (i.e., the second expression in Table 1). Using the dual set function w* of w, as defined in (5), we see that the corresponding l.p. function is the dual of p_w, namely p_w^D = p_{w*}.
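As a small computational illustration of the minimal path/cut representations mentioned in item 4 above, the sketch below evaluates the system lifetime both as a max-min over minimal path sets and as a min-max over minimal cut sets; the 2-out-of-3 structure and the component lifetimes are assumptions made for the example.

```python
def lifetime_from_paths(lifetimes, minimal_paths):
    """T_S = max over minimal path sets P of (min_{i in P} T_i)."""
    return max(min(lifetimes[i] for i in path) for path in minimal_paths)

def lifetime_from_cuts(lifetimes, minimal_cuts):
    """T_S = min over minimal cut sets K of (max_{i in K} T_i)."""
    return min(max(lifetimes[i] for i in cut) for cut in minimal_cuts)

# Hypothetical 2-out-of-3 system: the minimal path sets are all 2-subsets,
# and for this structure the minimal cut sets happen to be the same 2-subsets.
paths = [{0, 1}, {0, 2}, {1, 2}]
cuts = [{0, 1}, {0, 2}, {1, 2}]
T = [2.5, 1.0, 4.0]   # component lifetimes (arbitrary units)
print(lifetime_from_paths(T, paths), lifetime_from_cuts(T, cuts))  # both 2.5
```

Both representations return the same lifetime, as the correspondence in item 4 requires.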
Exact reliability calculation
The reliability function of component i is defined, for any t ≥ 0, by R_i(t) = Pr(T_i > t), that is, the probability that component i does not fail in the time interval [0, t]. Similarly, for any t ≥ 0, the system reliability function is R_S(t) = Pr(T_S > t), that is, the probability that the system does not fail in the time interval [0, t].
In this section we present the main known formulas for the system reliability function in the general case of dependent failures and in the special case of independent failures. We also provide some additional useful formulas.
Dependent failures
Dukhovny [2] found simple and concise formulas for the system reliability function in case of generally dependent variables T 1 , . . . , T n . We present them in the following theorem and we provide a shorter proof.
Proof. By (1), we have which proves (10). Formula (11) can be proved similarly by using the dual form of φ v (i.e., the second expression in Table 1).
Consider the joint distribution function and the joint survival function, defined respectively as F(t) = Pr(T_i ≤ t_i ∀i ∈ [n]) and R(t) = Pr(T_i > t_i ∀i ∈ [n]).
By using the same argument as in the proof of Theorem 4, we obtain two further equivalent expressions of R S (t).
Proof. By (2), we have the first expression. Similarly, using the dual Möbius form of φ_v (i.e., the fourth expression in Table 1), we obtain the second one, and for the last formula we use the fact that Σ_{A⊆[n]} m_{v*}(A) = φ_{v*}(1) = 1.
It is noteworthy that Theorem 5 immediately provides concise expressions for the mean time-to-failure of the system, namely Theorem 5 may suggest that the complete knowledge of the joint survival (or joint distribution) function is needed for the calculation of the system reliability function. Actually, as Theorem 4 shows, all the needed information is encoded in the distribution of the indicator vector X(t). In turn, the distribution of X(t) can be easily expressed (see Dukhovny [2] and Dukhovny and Marichal [3]) in terms of the joint probability generating function of X(t), which is defined by As it is well known, the joint probability generating function has the advantage of being an expectation and yields not only the probabilities alone but also all kinds of moments via derivatives.
By definition, G(z, t) is a multilinear polynomial in z_1, . . . , z_n, which can be rewritten as a sum over subsets A ⊆ [n] of terms involving the values G(e^{0,1}_A, t). Moreover, we can easily show [2,3] that G(e^{0,1}_A, t) = F(e^{t,b}_A). On the other hand, (14) yields a further expression for G(e^{0,1}_A, t). Combining this latter formula with (10) enables us to express the system reliability function in terms of G(z, t).
Independent failures
In the case when T 1 , . . . , T n are independent, which implies that the indicator variables X 1 (t), . . . , X n (t) are independent for all t 0, from (12) we obtain the well-known formula Combining (3) and (15), we immediately retrieve the following classical formula (see for instance Rausand and Høyland [15,§4.5]) and so both R S (t) and MTTF S can be expressed in different forms, according to the expressions of φ v chosen in Table 1. For instance, using the primal Möbius form of φ v , we obtain
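For independent components the system reliability equals the multilinear extension of the structure function evaluated at the component reliabilities, which can be checked by exhaustive enumeration of component states. The sketch below does this for a small assumed series-parallel structure and compares it with the closed form; it is an illustration of the classical formula, not code from the paper.

```python
from itertools import product

def system_reliability(structure, component_reliabilities):
    """R_S(t) = sum over states x of phi(x) * prod_i r_i^{x_i} (1 - r_i)^{1 - x_i},
    i.e. the multilinear extension of the structure function evaluated at the
    component reliabilities (independent components)."""
    total = 0.0
    for x in product((0, 1), repeat=len(component_reliabilities)):
        p = 1.0
        for xi, ri in zip(x, component_reliabilities):
            p *= ri if xi else (1.0 - ri)
        total += structure(x) * p
    return total

# Hypothetical series-parallel system: component 0 in series with (1 or 2).
phi = lambda x: x[0] * (1 - (1 - x[1]) * (1 - x[2]))
r = [0.9, 0.8, 0.7]
print(system_reliability(phi, r))            # exhaustive multilinear extension
print(r[0] * (1 - (1 - r[1]) * (1 - r[2])))  # closed form, should agree
```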
Some examples
1. Series structure. If all the components are connected in series, we have φ_v(x) = Π_i x_i and p_w(t) = ⋀_i t_i. In this case, w(A) = b if and only if A = [n]. We can show that any l.p. function p_w fulfilling the corresponding functional equation is of the form p_w(t) = ⋀_{i∈B} t_i for some subset B ⊆ [n]. It then corresponds to a serially connected segment of components.
The reliability of a series structure with n elements is given by R_S(t) = Pr(⋀_{i=1}^{n} T_i > t). Using Theorems 4 and 5, we also have R_S(t) = Pr(X(t) = 1) = R(e^{0,t}_{[n]}) = Pr(T_1 > t, . . . , T_n > t).
2. Parallel structure. If all the components are wired in parallel, we have φ_v(x) = ∐_i x_i and p_w(t) = ⋁_i t_i. In this case, w(A) = b if and only if A ≠ ∅. Similarly to the series structures, we can show that any l.p. function p_w fulfilling the corresponding functional equation is of the form p_w(t) = ⋁_{i∈B} t_i for some subset B ⊆ [n]. It then corresponds to a subsystem of parallel components.
The reliability of a parallel structure with n elements is given by R_S(t) = Pr(⋁_{i=1}^{n} T_i > t). Using Theorems 4 and 5, we also have R_S(t) = 1 − Pr(X(t) = 0) = 1 − Pr(T_1 ≤ t, . . . , T_n ≤ t).
3. k-out-of-n structure, for some k ∈ [n]. By definition, a k-out-of-n structure is characterized by the structure function φ_v(x) = 1 if and only if Σ_i x_i ≥ k. It is then easy to show that p_w(t) = f_{n−k+1}(t), where, for any k ∈ [n], f_k : L^n → L is the kth order statistic function (see for instance Ovchinnikov [12]). We recall [9, §5.5] that the n order statistic functions are exactly those l.p. functions that are symmetric in their variables. It follows immediately that a structure is of k-out-of-n type for some k ∈ [n] if and only if its system lifetime is a symmetric function (namely f_{n−k+1}) of the component lifetimes. In this case, w(A) = b if and only if |A| ≥ k, which means that the system is functioning if at least k components are functioning. Clearly, the minimal representation (8) is such that u_w(A) = b if and only if |A| = k. The reliability of a k-out-of-n structure is given by R_S(t) = Pr(f_{n−k+1}(T) > t). Using Theorem 4, we also have R_S(t) = Pr(|X(t)| ≥ k), and we can show [2,3] that Pr(|X(t)| = j) = [x^j] G(x 1, t), the coefficient of x^j in the nth degree polynomial G(x 1, t). On the other hand, combining (13) and (19) gives an expression in terms of the sums Σ_{|A|=j} R(e^{0,t}_A).
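The order-statistic description of k-out-of-n systems lends itself to a quick numerical check. The sketch below compares a Monte Carlo estimate of Pr(f_{n−k+1}(T) > t) with the binomial survival formula for i.i.d. components; the exponential rate, time point, and system size are assumptions made for the example.

```python
import math
import random

def k_out_of_n_lifetime(lifetimes, k):
    """T_S = f_{n-k+1}(T): the (n-k+1)-th smallest component lifetime."""
    return sorted(lifetimes)[len(lifetimes) - k]

def k_out_of_n_reliability_iid(k, n, r):
    """Pr(at least k of n i.i.d. components survive), each with reliability r."""
    return sum(math.comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

# Monte Carlo check for a 2-out-of-3 system with rate-1 exponential lifetimes.
random.seed(0)
t, lam, n, k, trials = 0.5, 1.0, 3, 2, 200_000
hits = sum(
    k_out_of_n_lifetime([random.expovariate(lam) for _ in range(n)], k) > t
    for _ in range(trials)
)
print(hits / trials, k_out_of_n_reliability_iid(k, n, math.exp(-lam * t)))
```

The two numbers should agree up to Monte Carlo error, reflecting that the system survives past t exactly when at least k components do.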
Example 6. When R_i(t) = e^{−λ_i t} (i = 1, . . . , n), it is convenient to calculate MTTF_S by using formula (16). Indeed, in that case, setting λ_A = Σ_{i∈A} λ_i, we simply obtain a closed-form expression (see [10]). Assuming further that the structure is of k-out-of-n type, by (19) we immediately obtain the corresponding formula.
Systems with lower and upper bounds on lifetimes
Consider now a more general system in which we allow upper and/or lower bounds on lifetimes of certain subsets of components. As shown by Dukhovny and Marichal [3], the structure of such a system can be modelled by means of a w.l.p. function, which is an l.p. function constructed from both variables and constants.
Definition 7. The class of weighted lattice polynomial (w.l.p.) functions from L n to L is defined as follows: (i) For any k ∈ [n] and any c ∈ L, the projection t → t k and the constant function t → c are w.l.p. functions from L n to L.
(ii) If p and q are w.l.p. functions from L n to L, then p ∧ q and p ∨ q are w.l.p. functions from L n to L.
(iii) Every w.l.p. function from L n to L is constructed by finitely many applications of the rules (i) and (ii).
It was proved [4] that any w.l.p. function p : L n → L can be expressed in disjunctive and conjunctive normal forms, that is, there exist set functions w d : 2 [n] → L and w c : Moreover, it can be shown [8] that, from among all the possible set functions w d that disjunctively define p(t), only one is nondecreasing. Similarly, from among all the possible set functions w c that conjunctively define p(t), only one is nonincreasing. These special set functions are given by [n]\A ).
The w.l.p. function defined by a given nondecreasing set function w : 2 [n] → L will henceforth be denoted p w . The following theorem, which generalizes Theorem 2 to w.l.p. functions, shows that the system is no longer characterized by a single structure function but by a one-parameter family of structure functions.
Theorem 8. With any system fulfilling T_S = p_w(T_1, . . . , T_n) for some w.l.p. function p_w : L^n → L there is associated a unique family of nondecreasing and nonconstant structure functions {φ_{v_t} : t ≥ 0}. Proof. We follow the same reasoning as in the proof of Theorem 2. For any t ≥ 0 we have Ind(p_w(T) > t) = φ_{v_t}(X(t)). Hence, we have T_S = p_w(T) if and only if X_S(t) = φ_{v_t}(X(t)) for all t ≥ 0, which completes the proof.
Remark 9. According to Theorem 8, when modelling systems with collective bounds, it seems much more convenient to use w.l.p. functions rather than families of structure functions.
The properties of the family of structure functions can be derived from the properties of the corresponding w.l.p. function. Let us examine some of them: 1. Boundary conditions. We have φ vt (0) = v t (∅) = Ind(w(∅) > t) and φ vt (1) = v t ([n]) = Ind(w([n]) > t).
2. Pivotal decomposition. The median-based decomposition formula (7), which also holds for any w.l.p. function, leads again to the pivotal decomposition of each structure function φ vt :
Exact reliability formulas
Regarding the reliability calculation, Dukhovny and Marichal [3] established the following result, which is a direct generalization of Theorem 4: Theorem 11. We have Similarly, a direct generalization of Theorem 5 is stated in the following theorem: [n]\A ).
As far as the mean time-to-failure of the system is concerned, from (23) and (24) we immediately obtain When the variables T 1 , . . . , T n are independent, from (22) we immediately retrieve the formula (see Marichal [11]): Considering the family {φ vt : t 0}, where φ vt is the multilinear extension of φ vt , we then observe that and φ vt can be chosen from among the forms given in Table 1, where each v should be replaced with v t . Also, from (25) and (26) we immediately derive 1. Weighted minimum. A weighted minimum function is a w.l.p. function p w : L n → L whose underlying set function w : 2 [n] → L fulfills Such a function fulfills equation (17) and is of the form (see [9, §5.2]) It then corresponds to a series structure with a lower bound on the lifetime of each component. By using (24), we can easily show that and, in case of independence (see (27)), 2. Weighted maximum. A weighted maximum function is a w.l.p. function p w : L n → L whose underlying set function w : 2 [n] → L fulfills Such a function fulfills equation (18) and is of the form (see [9, §5.2]) It then corresponds to a parallel structure with an upper bound on the lifetime of each component. By using (23), it is also straightforward to show that and, in case of independence (see (27)), 3. Symmetric w.l.p. function. We can generalize the k-out-of-n type structures simply by considering symmetric w.l.p. functions p w : L n → L. The underlying set functions are cardinality based, i.e., such that w(A) = w(B) whenever |A| = |B|. If we define the function w : {0, 1, . . . , n} → L by w(A) = w(|A|), we can easily show [3] that any symmetric w.l.p. function can always be put in the form where f n−k+1 is the order statistic function defining the k-out-of-n structure (see §3.3). Moreover, we can show that where k(t) := min{k, n + 1 : w(k) > t}, which generalizes (20). For more details, see Dukhovny and Marichal [3].
Distribution functions of w.l.p. functions
The articles [2,10,11] on which this paper is partly based were motivated by the exact computation of the distribution functions and the moments of l.p. functions and w.l.p. functions of random variables. In this final section we point out the fact that calculating the distribution function of an arbitrary w.l.p. function amounts to calculating the reliability function of a semicoherent system with possible lower and upper bounds on component lifetimes.
Let L ⊆ R be a totally ordered lattice, let p w : L n → L be a w.l.p., and let T 1 , . . . , T n be L-valued random variables.
The distribution function of the random variable p_w(T) is defined as F_{p_w}(t) = Pr(p_w(T) ≤ t) (t ∈ L).
Clearly, this function fulfills the identity F_{p_w}(t) = 1 − R_S(t), where R_S(t) is the reliability function of the coherent system described by the w.l.p. function p_w. Using formulas (22)-(24), we then immediately obtain the corresponding formulas for F_{p_w}(t), where v_t(A) = Ind(w(A) > t) and v*_t(A) = 1 − Ind(w([n] \ A) > t). When the arguments T_1, . . . , T_n are independent, each T_i having distribution function F_i(t), we obtain (see Marichal [11]) F_{p_w}(t) = Σ_{A⊆[n]} m_{v*_t}(A) Π_{i∈A} F_i(t).
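The identity F_{p_w}(t) = 1 − R_S(t) can be checked numerically for a concrete w.l.p. function. The sketch below uses a weighted minimum (a series structure with a lower bound on each component lifetime) with assumed exponential lifetimes and bounds; both the closed form and the function names are ours, derived for this example only.

```python
import math
import random

def wlp_weighted_min(t, bounds):
    """p_w(t) = min_i max(t_i, c_i): a series structure with a lower bound c_i
    (e.g. a backup with constant lifetime) on each component."""
    return min(max(ti, ci) for ti, ci in zip(t, bounds))

def empirical_cdf(samples, t):
    return sum(s <= t for s in samples) / len(samples)

random.seed(1)
rates, bounds, t = [1.0, 0.5], [0.2, 0.0], 0.7
samples = [
    wlp_weighted_min([random.expovariate(l) for l in rates], bounds)
    for _ in range(200_000)
]
# For this particular w.l.p.: the system survives past t iff every term
# max(T_i, c_i) exceeds t, i.e. T_i > t whenever c_i <= t.
survive = math.prod(math.exp(-l * t) if c <= t else 1.0 for l, c in zip(rates, bounds))
print(empirical_cdf(samples, t), 1.0 - survive)
```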
Conclusion
We have discussed the formal parallelism between two representations of semicoherent systems: structure functions and l.p. functions. Their languages are shown to be equivalent in many ways. The l.p. language is demonstrated to have significant advantages. One is the natural generalization to w.l.p. functions and corresponding systems with bounded subsystem lifetimes. The other is the fact that, due to the distributive property of the indicator function Ind(·) with respect to lattice operations (see proofs of Theorems 2 and 8), the l.p. description is a very natural tool to connect the system's structure to the lattice of typical reliability events involving the component lifetimes, and to connect the system's purpose, as encoded in the l.p. function, to the system's equipment, as expressed in the joint distribution of the units' lifetimes.
Treatment persistence with a fixed-dose combination of tadalafil (5 mg) and tamsulosin (0.4 mg) and reasons for early discontinuation in patients with benign prostatic hyperplasia and erectile dysfunction
Purpose The primary aim of this study was to assess treatment persistence with a fixed-dose combination (FDC) of tadalafil (5 mg) and tamsulosin (0.4 mg). This study also evaluated the reasons for early treatment discontinuation. Materials and Methods This retrospective observational study included patients with benign prostatic hyperplasia and erectile dysfunction who started an FDC treatment of tadalafil (5 mg) and tamsulosin (0.4 mg) between July 2017 and February 2018. Treatment persistence and reasons for early discontinuation were evaluated during the first 6 months. The cumulative discontinuation rate and differences in various parameters were assessed using Kaplan–Meier analysis and the log-rank test, respectively. Factors related to persistence were analyzed using a Cox proportional hazard model. Results Overall, 97 patients were included in the study. The cumulative persistence rate at 30, 90, and 180 days was 88.7%, 66.0%, and 54.6%, respectively. The cumulative persistence over 6 months differed significantly according to the administration of FDC therapy (log-rank p=0.005) and age (log-rank p=0.024). Younger patients (odds ratio, 2.049; p=0.021) and treatment-naive patients (odds ratio, 2.461; p=0.006) were more likely to discontinue therapy within 6 months. The common reasons for discontinuing therapy were side effects (63.6%) and perceived poor efficacy (22.7%). Conclusions Side effects were reported to be the main reason for treatment discontinuation. Thus, to improve compliance for a once-daily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg), it is recommended to select patients who show adaptation to a combination of α-blockers and phosphodiesterase type 5 inhibitors prior to FDC treatment.
INTRODUCTION
Erectile dysfunction (ED) and lower urinary tract symptoms (LUTS) secondary to benign prostatic hyperplasia (BPH) often occur in older males [1]. The association between ED and LUTS has been previously shown in various large-scale community-based studies [2][3][4][5]. Approximately 70% males with LUTS/BPH have concurrent ED [6]. As the population ages, the number of males with all the three conditions is increasing, and these conditions can have a profound impact on quality of life [7].
While α-blockers and phosphodiesterase type 5 (PDE5) inhibitors are effective in alleviating LUTS and ED, respectively, polypharmacy (which is most commonly defined as taking ≥5 medications) is considered for patients with a high prevalence of medical comorbidities (e.g., hypertension, diabetes mellitus, and metabolic syndrome) that need multidrug therapy [8]. Polypharmacy can increase adverse drug reactions and medication noncompliance, especially in the elderly [9]. To improve compliance, a fixed-dose combination (FDC) medication that shows efficacy and safety needs to be developed.
A recent randomized controlled trial investigating the efficacy and safety of a once-daily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) showed that the efficacy of the FDC was superior to that of tadalafil (5 mg) monotherapy for LUTS/BPH treatment and similar to that of tadalafil (5 mg) monotherapy for ED treatment; no clinically significant safety issues were observed [10]. As a result, it was approved for use in Korea by the Korean Food and Drug Administration and released under the tradename "Gugutams 0.4/5 mg®" in 2016 [11].
Single-tablet administration is expected to improve treatment compliance; however, adherence to a once-daily single-tablet FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) has never been investigated. Furthermore, there are no published data regarding treatment patterns in males receiving an FDC of tadalafil (5 mg) and tamsulosin (0.4 mg), especially in real-world practice settings. Thus, the primary aim of this study was to assess treatment persistence with a once-daily single-tablet FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) in males with LUTS/BPH and ED over a 6-month period. The secondary aim was to identify the reasons for treatment discontinuation over a follow-up period of 6 months.
Study design and subjects
This was a retrospective, observational, and single-center study (Korea University Guro Hospital, Seoul) of males with LUTS/BPH and ED who received a prescription for a oncedaily single-tablet FDC of tadalafil (5 mg) and tamsulosin (0.4 mg). Adults aged ≥18 years who were first prescribed the target drug between July 2017 and February 2018 were eligible for inclusion. This period was based on the availability of an FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) and the need for at least 6 months of patient follow-up. The main exclusion criteria were as follows: episodic treatment, less than 6 months of treatment, and/or insufficient data for analysis. The first prescription date for an FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) was defined as the index date. One hundred and thirteen patients received their first prescription of an FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) during the study period. Of them, 16 patients were excluded because the follow-up period was less than 6 months. Therefore, only 97 patients were included in this study.
The patients' medical records were reviewed for demographic information, previous and concomitant medications, and symptom questionnaires, including International Prostate Symptom Score (IPSS) and International Index of Erectile Function-5 (IIEF-5) before the index date. The study protocol was reviewed and approved by the Institutional Review Board (IRB) of Korea University Guro Hospital (approval number: 2019GR0221). The need for informed consent was waived by the IRB based on the retrospective nature of the study.
Study outcomes
The primary outcome measures were treatment persistence and factors associated with treatment persistence with an FDC of tadalafil (5 mg) and tamsulosin (0.4 mg). Treatment persistence was measured from the index date to the discontinuation date of the target drug. Data on the mean time to discontinuation and the persistence rate at 6 months were obtained. Age, polypharmacy, baseline symptom severity, and previous treatment status were considered factors associated with treatment persistence. Previous treatment status was subdivided into a treatment-naive group, defined as having no experience with coadministration of α-blockers and PDE5 inhibitors before the index date, and a treatmentexperienced group, defined as having experience with α-blockers and PDE5 inhibitors before the index date. The secondary outcome was the identification of the reasons for discontinuation. Causes of treatment discontinuation were classified as follows: side effects, inconvenience of daily administration, perceived poor efficacy, patient preference (i.e., seeking other treatment options), and drug cost.
Statistical analysis
The baseline characteristics of patients were analyzed descriptively. Categorical variables were reported as number and percentages, while continuous variables were reported as mean±standard deviation (range). Kaplan-Meier curves were used to present the cumulative discontinuation rate over 6 months. Differences in treatment persistence according to the demographics and clinical variables were assessed using a log-rank test. The bivariate Cox proportional hazard model was used to identify factors associated with treatment discontinuation. Factors associated with the dependent variable at a value of p<0.05 were included in the multivariate logistic regression model. The causes of discontinuation in the cohort were analyzed descriptively. All analyses were performed using IBM SPSS Statistics ver. 22.0 (IBM Co., Armonk, NY, USA). All p-values <0.05 were considered significant.
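For readers unfamiliar with the product-limit estimator used here, the sketch below computes a Kaplan-Meier persistence curve from treatment durations and discontinuation flags. The follow-up data in the example are invented for illustration and are not the study cohort; a real analysis would use validated software such as the SPSS procedures cited above.

```python
def kaplan_meier(durations, events):
    """Product-limit estimate of the survival (persistence) curve.

    durations: time on treatment (days); events: 1 = discontinued, 0 = censored.
    Returns (time, estimated persistence) pairs at each discontinuation time."""
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    at_risk, surv, curve = len(durations), 1.0, []
    i = 0
    while i < len(order):
        t = durations[order[i]]
        d = n_t = 0
        while i < len(order) and durations[order[i]] == t:
            d += events[order[i]]    # discontinuations at time t
            n_t += 1                 # subjects leaving the risk set at time t
            i += 1
        if d:
            surv *= (1 - d / at_risk)
            curve.append((t, surv))
        at_risk -= n_t
    return curve

# Hypothetical follow-up data (days, discontinuation flag).
durations = [30, 45, 90, 90, 120, 180, 180, 180]
events    = [ 1,  1,  1,  0,   1,   0,   0,   0]
print(kaplan_meier(durations, events))
```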
DISCUSSION
As males age, ED symptoms and LUTS tend to increase in severity concurrently [12]. Both LUTS and ED worsen with aging; hence, long-term treatment is required. Therefore, as with other chronic diseases, persistence and compliance with LUTS and ED medications are important for improving the patient's symptoms and quality of life [13,14]. Although the use of an FDC is expected to improve treatment persistence and compliance, this has never been investigated in a real-world practice setting. This study is the first retrospective, longitudinal observational study to evaluate treatment persistence with a once-daily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg). In this study, we found that 45.4% of patients discontinued the FDC medication within 6 months. This finding was similar to those of previous Korean studies evaluating treatments for other urologic conditions, including LUTS/BPH and overactive bladder [15][16][17]. However, this finding was much lower than that in previous Western studies evaluating treatment persistence with once-daily tadalafil for ED [18]. Unfortunately, no study has evaluated treatment persistence with once-daily tadalafil in Korea. Interestingly, the discontinuation rate in our study was much higher than that in a previous phase 3 clinical trial of an FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) [10]. The lower discontinuation rate in the clinical trial was likely due to increased patient motivation and cooperation with prescribed recommendations in the trial setting [10]. Moreover, in phase 3 clinical trials, patients are given their medication without cost and are
intensively observed during the study period [19]. This may enhance the persistence rate. In contrast, in real-world clinical practice, patients pay for their medication, which might lead to higher expectations and increased vulnerability to side effects with their prescription. These factors allow more enlightening clinical studies to be carried out in real-world practice settings; therefore, actual treatment persistence rates may be more accurate.
The most common reason for treatment discontinuation in this study was adverse drug side effects. Although most side effects were self-limiting, they were directly related to the patient's satisfaction with treatment. The most common undesirable effect was ejaculatory dysfunction, a well-known side effect of selective α-blockers [20]. This has previously been shown to be the main factor related to low satisfac-tion with combination treatment using α-blockers and PDE5 inhibitors [21]. However, other adverse events related to tadalafil may have less impact on treatment persistence, based on previous studies with once-daily tadalafil that reported discontinuation rates due to the side effect as low as 1% to 6% [22][23][24]. Therefore, it can be inferred that ejaculatory dysfunction was a major cause for discontinuation of a once-daily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg). Interestingly, none of the 9 patients who discontinued due to ejaculatory dysfunction received silodosin or tamsulosin (0.4 mg) before taking a once-daily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg).
In this study, patients who were treatment-naive for α-blockers and PDE5 inhibitors were more likely to discontinue treatment early. This is understandable considering that side effects were the main cause for withdrawal. It is also noteworthy that the persistence rates were high in the treatment-experienced group, who had previous coadministration of α-blockers and PDE5 inhibitors. The high persistence rate may be because treatment-experienced patients probably did not switch to an FDC when they had side effects. Another factor associated with treatment persistence was age. Patients aged <65 years were more likely to discontinue treatment at 6 months. Although the reasons for this finding are unclear, we inferred that younger patients were more vulnerable to sexual side effects related to α-blockers.
Other potential factors associated with treatment persistence, including polypharmacy and subjective symptom severity, did not affect persistence rates; however, it is difficult to draw firm conclusions regarding their true effects. In this study, the second most common reason for treatment discontinuation was the lack of response to treatment, regardless of symptom severity; treatment persistence and symptom severity were not correlated. The degree of symptom improvement had a more significant effect on treatment persistence than symptom severity itself.
This study has several limitations. First, patients who were excluded due to no recorded follow-up examinations could not be analyzed; it is unknown if they were receiving treatment from another hospital or discontinued treatment. Therefore, the persistence rate might be an overestimation. This is, however, an inevitable limitation due to the retrospective study design. Second, this study did not consider probable confounding factors of treatment persistence related to physicians. The doctor-patient relationship is well known to affect a patient's compliance with treatment [25][26][27]. Therefore, further studies will need to evaluate whether satisfaction with the doctor affects treatment persistence. Third, the sample for this study was small and from one center. However, it has been less than 3 years since the drug was approved and prescribed. Moreover, this study was meaningful as an initial investigation of treatment persistence with a once-daily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) and its related factors. A large scale, prospective trial considering these limitations should be performed to further investigate factors influencing FDC treatment persistence.
CONCLUSIONS
In this study, 54.6% patients with LUTS and ED continued a once-daily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg) for 6 months. Patients who had experience with coadministration of α-blockers and PDE5 inhibitors before the FDC prescription were likely to continue their treatment. The most common reason for discontinuation was adverse drug side effects. To improve patient compliance for a oncedaily FDC of tadalafil (5 mg) and tamsulosin (0.4 mg), it is recommended to select patients who show adaptation to a combination of α-blockers and PDE5 inhibitors prior to FDC treatment.
The Poincaré-Hopf Theorem for relative braid classes
Braid Floer homology is an invariant of proper relative braid classes. Closed integral curves of 1-periodic Hamiltonian vector fields on the 2-disc may be regarded as braids. If the braid Floer homology of the associated proper relative braid classes is non-trivial, then additional closed integral curves of the Hamiltonian equations are forced via a Morse-type theory. In this article we show that certain information contained in the braid Floer homology - the Euler-Floer characteristic - also forces closed integral curves and periodic points of arbitrary vector fields and diffeomorphisms, and leads to a Poincaré-Hopf type theorem. The Euler-Floer characteristic of any proper relative braid class can be computed via a finite cube complex that serves as a model for the given braid class. The results in this paper are restricted to the 2-disc, but can be extended to two-dimensional surfaces (with or without boundary).
[Figure: A 2-periodic and a 1-periodic closed integral curve represented as a 3-strand braid (left). A closed integral curve of minimal period yields a periodic point of the associated time-1 map (middle). A relative braid with the skeleton (black) and one free strand (red) (right).]
The strands x_i are integral curves of X and therefore cannot intersect. Multiple closed integral curves of various periods yield a multi-strand braid.
Let y be a geometric braid consisting of closed integral curves of X, which will be referred to as a skeleton. The strands y i (t), i = 1, · · · , m satisfy the periodicity condition y(0) = y(1) as point sets, i.e. y i (0) = y σ(i) (1) for some permutation σ ∈ S m . Let x = x 1 (t), · · · x n (t) be a geometric braid such that the 'union' x rel y := x 1 (t), · · · x n (t), y 1 (t), · · · , y m (t) is again a geometric braid, i.e. the strands in x do not intersect the strands in y. The pair x rel y is called a relative braid. Two relative braids x rel y and x rel y are equivalent if there exists a homotopy of relative braids connecting x rel y to x rel y . The equivalence class is denoted by [x rel y] and is called a relative braid class. The set of relative braids x rel y ∈ [x rel y], keeping y fixed, is denoted by [x ] rel y and is called a braid class fiber. A relative braid class [x rel y] is proper if components x c ⊂ x cannot be deformed onto (i) the boundary ∂D 2 , (ii) itself, 2 or other components x c ⊂ x, or (iii) components in y c ⊂ y, see [12] for details. In this paper we are concerned with relative braids x rel y for which x has only one strand, i.e. x(t + 1) = x(t). The central question is: given a skeleton y of integral curves, does there exist an integral curve x in the relative braid class [x rel y]. The theory also applies if x has more than one strand. To a braid y one can assign an integer Cross(y) which counts the number of crossings (with sign) of strands in the standard planar projection. In the case of a relative braid x rel y the number Cross(x rel y) is an invariant of the relative braid class [x rel y].
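The crossing number of a pair of 1-periodic strands can be computed numerically from sampled curves: for two disjoint closed strands in the disc, the total signed crossing number in a generic planar projection equals twice the winding number of their difference vector. The sketch below uses this identification; the sampled strands are illustrative and the function names are ours, not notation from the paper.

```python
import math

def winding_number(xs, ys):
    """Winding number of the closed planar curve t -> (xs[t], ys[t]) around 0."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        x0, y0 = xs[i], ys[i]
        x1, y1 = xs[(i + 1) % n], ys[(i + 1) % n]
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total / (2 * math.pi))

def cross_two_strands(strand_a, strand_b):
    """Signed crossing number of two disjoint 1-periodic strands in the disc,
    computed as twice the winding number of their difference vector."""
    dx = [a[0] - b[0] for a, b in zip(strand_a, strand_b)]
    dy = [a[1] - b[1] for a, b in zip(strand_a, strand_b)]
    return 2 * winding_number(dx, dy)

# Two sample strands (hypothetical): strand_b winds once around strand_a.
N = 200
strand_a = [(0.0, 0.0) for _ in range(N)]
strand_b = [(0.5 * math.cos(2 * math.pi * k / N),
             0.5 * math.sin(2 * math.pi * k / N)) for k in range(N)]
print(cross_two_strands(strand_a, strand_b))   # expected: 2
```

For a multi-strand braid, Cross(y) would be obtained by summing such pairwise contributions, with the same sampling caveats.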
A brief summary of Braid Floer homology
In [12] a monotonicity lemma is proven, which states that along solutions u(s,t) of the nonlinear Cauchy-Riemann equations, the number Cross(u(s, ·) rel y) is non-increasing (the jumps correspond to 'singular braids', i.e. 'braids' for which intersections occur). As a consequence an isolation property for proper relative braid classes exists: the set of bounded solutions of the Cauchy-Riemann equations in a proper braid class fiber [x] rel y, denoted by M([x] rel y; H), is compact and isolated with respect to the topology of uniform convergence on compact subsets of R^2. These facts provide all the ingredients to use Floer's approach towards Morse theory for the Hamiltonian action [7]. For generic Hamiltonians which satisfy (i) and (ii) above and for which y is a skeleton, the critical points of the Hamiltonian action in [x] rel y generate the braid Floer homology HB_*([x rel y]). In [12] braid Floer homology is used as a Morse-type theory for closed integral curves forced by a skeleton y. The Floer theory applies to the Hamilton action and therefore to Hamiltonian vector fields X_H. It also provides a forcing theory for periodic points of area-preserving diffeomorphisms. Braid Floer homology cannot be applied to arbitrary vector fields X. The objective of this paper is twofold: (i) we extract an invariant from HB_*([x rel y]) - the Euler-Floer characteristic - that applies to arbitrary vector fields X, and non-triviality of the Euler-Floer characteristic yields forcing of closed integral curves, and (ii) we develop an algorithm to compute the Euler-Floer characteristic.
The remainder of the introduction deals with formulating the main theorems and gives an extensive outline of the steps required to prove the main results.
The Euler-Floer characteristic and the Poincaré-Hopf Formula
The Euler-Floer characteristic of HB_*([x rel y]) is defined as the alternating sum χ(x rel y) = Σ_k (−1)^k dim HB_k([x rel y]). A 1-periodic function x ∈ C^1(R/Z) is an isolated closed integral curve of X if there exists an ε > 0 such that x is the only solution of the differential equation in B_ε(x) ⊂ C^1(R/Z). For isolated, and in particular non-degenerate, closed integral curves we can define an index as follows. The map η → F_Θ(η) = d/dt − R(t; η) defines a path in Fred_0(C^1, C^0). Denote by Σ ⊂ Fred_0(C^1, C^0) the set of non-invertible operators and by Σ_1 ⊂ Σ the non-invertible operators with a 1-dimensional kernel. If the end points of F_Θ are invertible one can choose the path η → R(t; η) such that F_Θ(η) intersects Σ in Σ_1 and all intersections are transverse. Define γ as the number of intersections of F_Θ(η) with Σ_1; the index ι(x) is then defined in terms of γ, see Equation (3). This definition is independent of the choice of Θ, see Section 7. In Section 7 we also extend the definition of the index to isolated closed integral curves, see Equation (32).
Theorem 1 (Poincaré-Hopf Formula) Let y be a skeleton of closed integral curves of a vector field X ∈ X(D^2 × R/Z) and let [x rel y] be a proper relative braid class. Suppose that all 1-periodic closed integral curves of X are isolated. Then, summing over the closed integral curves x_0 rel y in the fiber [x] rel y, it holds that Σ_{x_0} ι(x_0 rel y) = χ(x rel y). The index formula can be used to obtain existence results for closed integral curves in proper relative braid classes.
Theorem 2 Let y be a skeleton of closed integral curves of a vector field X ∈ X(D^2 × R/Z) and let [x rel y] be a proper relative braid class. If χ(x rel y) ≠ 0, then there exist closed integral curves x_0 rel y in the fiber [x] rel y. Remark 1 In this paper we do not address the question whether the closed integral curves x rel y are non-constant, i.e. are not equilibrium points. However, closed integral curves in different relative braid classes correspond to different periodic points! By considering relative braid classes where x consists of more than one strand one can study non-constant closed integral curves. Braid Floer homology for relative braids with x consisting of n strands is defined in [12]. The ideas in this paper extend to relative braid classes with multi-strand braids x. In Section 11 we give an example of a multi-strand x in x rel y and explain how this yields the existence of non-trivial closed integral curves, which also provides detailed information about the linking of solutions.
Discretization and computability
The second part of the paper deals with the computability of the Euler-Floer characteristic. This is obtained through a finite dimensional model. The latter is constructed in three steps: (a) compose x rel y with ℓ ≥ 0 full twists ∆^2, such that (x rel y) · ∆^{2ℓ} is isotopic to a positive braid x^+ rel y^+; (b) relative braids x^+ rel y^+ are isotopic to Legendrian braids x_L rel y_L on R^2, i.e. braids which have the form x_L = (q_t, q) and y_L = (Q_t, Q), where q = π_2 x and Q = π_2 y, and π_2 is the projection onto the q-coordinate; (c) discretize q and Q = {Q^j} to q_D = {q_i}, with q_i = q(i/d), i = 0, . . . , d, and to Q_D = {Q^j_i} respectively, and consider the piecewise linear interpolations connecting the anchor points q_i and Q^j_i for i = 0, . . . , d, see Figure 2. A discretization q_D rel Q_D is admissible if the linear interpolation is isotopic to q rel Q. All such discretizations form the discrete relative braid class [q_D rel Q_D], for which each fiber is a finite cube complex, cf. [8]. Remark 2 If the number of discretization points is not large enough, then the discretizations may not be admissible and therefore may not capture the topology of the braid class. See [8] and Section 10.1 for more details.
For d > 0 large enough there exists an admissible discretization q D rel Q D for any Legendrian representative x L rel y L ∈ [x rel y] and thus an associated discrete relative braid class [q D rel Q D ]. In [8] an invariant for discrete braid classes was introduced.
Consider a fiber of the discrete relative braid class [q_D rel Q_D]; it is a cube complex with a finite number of connected components, whose closures are denoted by N_j. The faces of the hypercubes N_j can be co-oriented in the direction of decreasing the number of crossings in q_D rel Q_D, and we define N_j^- as the closure of the set of faces with outward pointing co-orientation. Figure 3 below explains the sets N_j and N_j^- for the example in Figure 2. The sets N_j^- are called exit sets. The invariant for a fiber is given by the Conley index of the pairs (N_j, N_j^-). This discrete braid invariant is well-defined for any d > 0 for which there exist admissible discretizations and is independent of both the particular fiber and the discretization size d. For the associated Euler characteristic we therefore write χ(q_D rel Q_D). The Euler characteristic of the braid Floer homology χ(x rel y) can be related to the Euler characteristic of the associated discrete braid class.
Theorem 3 Let [x rel y] be a proper relative braid class and let ℓ ≥ 0 be an integer such that (x rel y) · ∆^{2ℓ} is isotopic to a positive braid x^+ rel y^+. Let q_D rel Q_D be an admissible discretization, for some d > 0, of a Legendrian representative x_L rel y_L ∈ [x^+ rel y^+]. Then χ(x rel y) = χ(q_D rel Q*_D), where Q*_D is an augmentation of Q_D obtained by adding the constant strands ±1 to Q_D.
Outline of the paper
The first part of the paper, Sections 2 through 7, is concerned with proving Theorems 1 and 2. The second part of the paper, Sections 8 through 11, is concerned with proving Theorem 3, which deals with computing the Euler-Floer characteristic and applying it to specific braid classes. We will now give an outline of the main steps to prove Theorems 1, 2 and 3 and in which sections these steps are proved.
Degree Homotopy. For a skeleton y for X we can find a Hamiltonian vector field X_H, and thus y is a skeleton for the homotopy X_α = (1 − α)X + αX_H, for all α ∈ [0, 1].
The Euler-Floer characteristic. In [12] it is proved that generically the Euler-Floer characteristic is given by a signed count of the 1-periodic closed integral curves, see Proposition 7. Combining all the steps we obtain an expression of the Leray-Schauder degree deg_LS(Φ, Ω, 0) in terms of the Euler-Floer characteristic.
The index. In Section 7 we introduce the index ι(x) given in Equation (3) for closed integral curves x, which does not depend on the choices we used to define Φ(x).
The index relates directly to the Leray-Schauder degree. We use the index ι(x) to formulate the Poincaré-Hopf Formula and prove Theorems 1 and 2.
Legendrian braids. In order to compute the Euler-Floer characteristic we represent braid classes via Legendrian braids of the form x = (q_t, q), where q is a 1-periodic function, see Section 8. Legendrian braids can be realized via mechanical Hamiltonian systems for which the Conley-Zehnder index equals the Morse index of q, see Lemma 13. In Equation (40) we express the Euler-Floer characteristic as χ(x rel y) = Σ_q (−1)^{γ(q)}, where γ(q) is the Morse index of a critical point.
Discrete braid classes. The final step toward proving Theorem 3 uses yet another representation of braids. Via the method of broken geodesics, Legendrian braids are discretized and represented via parabolic recurrence relations of conservative type, see [8]. For such braid classes the Conley index of proper relative braid classes is well-defined, see Section 10. In Lemma 15 we show that the Morse index of the discretized critical points is equal to the Morse index γ(q). The theory in [8] then completes the computation of the Euler-Floer characteristic via the discrete braid invariant.
Closed integral curves
In this section we rephrase the existence of closed integral curves in terms of zeroes of an appropriate mapping on C^0(R/Z). Let X ∈ X(D^2 × R/Z); then closed integral curves of X of period 1 satisfy the differential equation (5). Consider the unbounded operator L_µ on C^0(R/Z). The operator is invertible for µ ≠ 2πk, k ∈ Z, and its inverse L_µ^{-1} is compact, which allows us to rewrite the equation as Φ(x) = x − K(x) = 0, where K is a (non-linear) compact operator on C^0(R/Z). Since X is a smooth vector field, the mapping Φ is a smooth mapping on C^0(R/Z).
Proposition 1 A function x ∈ C 0 (R/Z), with |x(t)| ≤ 1 for all t, is a solution of Φ(x) = 0 if and only if x ∈ C 1 (R/Z) and x satisfies Equation (5).
Note that the zero set Φ −1 (0) does not depend on the parameter µ. In order to apply the Leray-Schauder degree theory we consider appropriate bounded, open subsets Ω ⊂ C 0 (R/Z), which have the property that Φ −1 (0) ∩ ∂Ω = ∅. Let y be a skeleton for X consisting of closed integral curves and consider a proper relative braid class [x rel y]. Due to properness of [x rel y] all fibers [x] rel y are isolating neighborhoods for the Cauchy-Riemann equations, cf. [12], and in particular for Equation (5). Therefore, we consider Ω = [x] rel y.
Proposition 2 Let [x rel y] be a proper relative braid class and let Ω = [x] rel y be the fiber given by y. Then, there exists an 0 < r < 1 such that Let x n ∈ Φ −1 (0) ∩ Ω and assume that such an 0 < r < 1 does not exist. Then, by the compactness of Φ −1 (0) ∩ Ω, there is a subsequence x n k → x such that one, or both of the following two possibilities hold: (i) |x(t 0 )| = 1 for some t 0 . By the uniqueness of solutions of Equation (5) and the invariance of the boundary ∂D 2 (X(x,t) is tangent to the boundary), |x(t)| = 1 for all t, which is impossible since [x] rel y is proper; (ii) x(t 0 ) = y j (t 0 ) for some t 0 and some j. As before, by the uniqueness of solutions of Equation (5), then x(t) = y j (t) for all t, which again contradicts the fact that [x] rel y is proper.
By Proposition 2 the Leray-Schauder degree deg LS (Φ, Ω, 0) is well-defined. Consider the Hamiltonian vector field where H(x,t) is a smooth Hamiltonian satisfying (i)-(ii) in Section 1.1, and therefore, X H ∈ X (D 2 × R/Z). Assume that y is a skeleton for X H . Such Hamiltonians can always be constructed, see [12], and the set of Hamiltonians meeting these requirements is denote by H (y). Associate with the vector field X H we write Since y is a skeleton for both X and X H , it is also a skeleton for the linear homotopy X α = (1 − α)X + αX H , α ∈ [0, 1]. Associated with the homotopy X α of vector fields we define the homotopy Proposition 2 applies for all α ∈ [0, 1], i.e. by compactness there exists a uniform 0 < r < 1 such that for all t ∈ R, for all j and for all By the homotopy invariance of the Leray-Schauder degree we have where Φ 0 = Φ and Φ 1 = Φ H . Note that the zeroes of Φ H correspond to critical point of the functional and are denoted by Crit A H ([x] rel y). The Braid Floer homology groups HB * [x rel y] , defined [12], provide information about Φ −1 In the next section we examine spectral properties of the solutions of Φ −1 α (0) ∩ Ω in order to compute deg LS (Φ H , Ω, 0) and thus deg LS (Φ, Ω, 0).
Remark 3
There is obviously more room for choosing appropriate operators L µ and therefore functions Φ. In Section 7 this issue will be discussed in more detail.
The Leray-Schauder degree and parity
The Leray-Schauder degree of an isolated zero is defined by restricting Φ to a small ball around the zero. If x is a non-degenerate zero, then it is an isolated zero and the degree can be determined from spectral information.
The Leray-Schauder degree
which will be referred to as the Morse index of x, or alternatively the Morse index of the linearized operator D_xΦ(x).
The functions Φ_α(x) = x − K_α(x) are of the form 'identity + compact' and Proposition 3 can be applied to non-degenerate zeroes of Φ_α(x) = 0. If we choose the Hamiltonian 'generically', then the zeroes of Φ_H are non-degenerate. By compactness there are only finitely many zeroes in a fiber Ω = [x] rel y.
The following criteria for non-degeneracy are equivalent: Proof A function ψ satisfies D_x K_H(x)ψ = ψ if and only if Bψ = 0, which shows the equivalence between (i) and (ii). The equivalence between (ii) and (iii) is proved in [12].
The generic choice of H follows from Proposition 7.1 in [12] based on criterion (iii). Hamiltonians for which the zeroes of Φ H are non-degenerate are denoted by H reg (y). Note that no genericity is needed for α ∈ [0, 1)! For the Leray-Schauder degree this yields The goal is to determine the Leray-Schauder degree deg LS (Φ, Ω, 0) from information contained in the Braid Floer homology groups HB * ([x rel y]). In order to do so we examen the Hamiltonian case. In the Hamiltonian case the linearized operator which is a bounded operator on C 0 (R/Z). The operator A extends to a bounded operator on L 2 (R/Z). Consider a path η → A(η), η ∈ I = [0, 1], given by where η → S(t; η) a smooth path of t-dependent symmetric matrices with the ends satisfying where θ = 2πk, for some k ∈ Z and D 2 x H(x(t),t) is the Hessian of H at a critical point in Crit A H ([x] rel y). The path of η → A(η) is a path bounded linear Fredholm operators on L 2 (R/Z) of Fredholm index 0, which are compact perturbations of the identity and whose ends are invertible. (10) is a smooth path of bounded linear Fredholm operators in H s (R/Z) of index 0, with invertible ends.
Proof By the smoothness of S(t; η) we have that ‖S(t; η)x‖_{H^m} ≤ C‖x‖_{H^m}, for any x ∈ H^m(R/Z) and any m ∈ N ∪ {0}. By interpolation the same holds for all x ∈ H^s(R/Z), and the claim follows from the mapping properties of L_µ^{-1}. In the Hamiltonian case the Conley-Zehnder indices can be related to the spectral flow of self-adjoint operators. However, the paths A(η) are not self-adjoint, and thus spectral flow cannot be used in general. We therefore need a cruder tool to link the Morse indices β_H(x) to the Conley-Zehnder indices. Parity is such an invariant for paths of Fredholm operators and is related to spectral flow.
Parity of paths of linear Fredholm operators
Let η → Λ(η) be a smooth path of bounded linear Fredholm operators of index 0 on a Hilbert space H . A crossing η 0 ∈ I is a number for which the operator Λ(η 0 ) is not invertible. A crossing is simple if dim ker Λ(η 0 ) = 1. A path η → Λ(η) between invertible ends can always be perturbed to have only simple crossings. Such paths are called generic. Following [4, 3, 5, 6], we define the parity of a generic path η → Λ(η) in terms of the number of crossings (see the reconstruction sketched after this paragraph), where cross(Λ(η), I) = #{η 0 ∈ I : ker Λ(η 0 ) ≠ {0}}. The parity is a homotopy invariant with values in Z 2 . In [4, 3, 5, 6] an alternative characterization of parity is given via the Leray-Schauder degree. For any Fredholm path η → Λ(η) there exists a path η → M(η), called a parametrix, such that η → M(η)Λ(η) is of the form 'identity + compact'. For parity this gives an expression in terms of deg LS (M(η)Λ(η), H , 0) for η = 0, 1, and the expression is independent of the choice of parametrix. The latter extends the above definition to arbitrary paths with invertible ends. For a list of properties of parity see [4, 3, 5, 6].
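The displayed formulas in the definition above were lost in extraction. The following is a plausible reconstruction of the standard definitions from [4, 3, 5, 6]; it is stated here as an assumption about what the original displays contained, not as a verbatim quotation:

```latex
% Parity of a generic path of index-0 Fredholm operators (reconstruction):
\mathrm{parity}\big(\Lambda(\eta), I\big) \;=\; (-1)^{\mathrm{cross}(\Lambda(\eta), I)},
\qquad
\mathrm{cross}\big(\Lambda(\eta), I\big) \;=\; \#\{\eta_0 \in I : \ker \Lambda(\eta_0) \neq \{0\}\}.
% Characterization via the Leray--Schauder degree and a parametrix M(eta):
\mathrm{parity}\big(\Lambda(\eta), I\big) \;=\; \deg_{LS}\big(M(0)\Lambda(0), \mathcal{H}, 0\big)\cdot \deg_{LS}\big(M(1)\Lambda(1), \mathcal{H}, 0\big).
```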
Proposition 4 Let η → A(η) be a path of bounded linear Fredholm operators on H s (R/Z) defined by (10). Then the parity of the path is determined by the difference of the Morse indices at the endpoints (see the sketch below), where β A(0) and β A(1) are the Morse indices of A(0) and A(1) respectively.
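The displayed formula of Proposition 4 is likewise missing; a natural reconstruction, consistent with how the proposition is used below, is (again an assumption, not a quotation):

```latex
% Conjectured statement of Proposition 4 (reconstruction):
\mathrm{parity}\big(A(\eta), I\big) \;=\; (-1)^{\,\beta_{A(0)} - \beta_{A(1)}}.
```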
If x ∈ Φ −1 H (0) ∩ Ω is a non-degenerate zero, then its local degree can be expressed in terms of the parity of A(η).
In the next subsection we establish yet another path of operators such that we can link the local degree to the Conley-Zehnder indices of the critical points.
Parity and spectral flow
The spectral flow for paths of self-adjoint operators is a more refined invariant than parity. Using the Fourier expansion x = ∑ k∈Z e 2πJkt x k we define the self-adjoint operators P µ . For µ > 0 and µ ≠ 2πk, k ∈ Z, the operators P µ are isomorphisms on H s (R/Z), for all s ≥ 0. Consider the path η → C(η) = P µ A(η), which is a path of operators of Fredholm index 0. The constant path η → M µ (η) = P −1 µ is a parametrix for η → C(η) (see [5,6]) and since M µ C(η) = A(η), the parity of C(η) is given by parity(C(η), I) = parity(A(η), I).
Proof From the functional calculus we derive the explicit symbols n µ (k) = 2π|k| + µ and p µ (k) = (2πk + µ)/(2π|k| + µ). For s = 1/2 the corresponding estimates hold. For a path η → Λ(η) of self-adjoint operators on a Hilbert space H , which is continuously differentiable in the (strong) operator topology, we define the crossing operator. A path with invertible ends can always be chosen to be generic by a small perturbation. At a simple crossing η 0 , there exists a C 1 -curve λ(η), for η near η 0 , and λ(η) is an eigenvalue of Λ(η), with λ(η 0 ) = 0 and λ (η 0 ) ≠ 0, see [10,11]. The spectral flow for a generic path is defined by counting the signed eigenvalue crossings. For a simple crossing η 0 the crossing operator is simply multiplication by λ (η 0 ), where ψ(η 0 ) is normalized in H . The spectral flow is defined for any continuously differentiable path η → Λ(η) with invertible ends. From the theory in [6] there is a connection between the spectral flow of Λ(η) and its parity (see the reconstruction sketched after this paragraph), which in view of Equation (11) follows since cross(Λ(η), I) = specflow(Λ(η), I) mod 2 in the generic case. The path η → C(η) defined in (15) is a continuously differentiable path of operators on H = H 1/2 (R/Z) with invertible ends, and therefore both parity and spectral flow are well-defined. If we combine Equations (13) and (16) with Equation (20) we obtain an expression for the local degree of a non-degenerate zero in terms of the spectral flow of C(η). In the next section we link the spectral flow of C(η) to the Conley-Zehnder indices of non-degenerate zeroes and therefore to the Euler-Floer characteristic.
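The displayed relation between spectral flow and parity (Equation (20)) is missing from the extracted text; the following reconstruction is an assumption consistent with the mod-2 statement above:

```latex
% Assumed form of Eq. (20):
\mathrm{parity}\big(\Lambda(\eta), I\big) \;=\; (-1)^{\,\mathrm{specflow}(\Lambda(\eta), I)}.
```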
The Conley-Zehnder index
For a non-degenerate 1-periodic solution x(t) of the Hamilton equations the Conley-Zehnder index can be defined as follows. The linearized flow Ψ is given by the linearized Hamilton equations. By Lemma 1(iii), a 1-periodic solution is non-degenerate if Ψ(1) has no eigenvalues equal to 1. The Conley-Zehnder index is defined using the symplectic path Ψ(t). Following [11], consider the crossing form Γ(Ψ,t), defined for vectors ξ ∈ ker(Ψ(t) − Id). A crossing t 0 > 0 is defined by det(Ψ(t 0 ) − Id) = 0. A crossing is regular if the crossing form is non-singular. A path t → Ψ(t) is regular if all crossings are regular. Any path can be approximated by a regular path with the same endpoints and which is homotopic to the initial path, see [10] for details. For a regular path t → Ψ(t) the Conley-Zehnder index is given by the sum of the signatures of the crossing forms (see the sketch after the proof below). For the path η → B(η), at simple crossings η 0 the derivative of the crossing eigenvalue is obtained as before, after normalizing φ(η 0 ) in L 2 (R/Z). Proposition 6 Let η → B(η), η ∈ I, as defined above, be a generic path of unbounded self-adjoint operators with invertible endpoints, and let η → Ψ(η;t) be the associated path of symplectic matrices. Then the spectral flow of B(η) equals the difference of the Conley-Zehnder indices of the endpoints, where µ CZ B(0) = µ CZ (Ψ(t; 0)), µ CZ B(1) = µ CZ (Ψ(t; 1)).
Proof The expression for the spectral flow follows from [11] and [12].
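For reference, the crossing-form characterization of the Conley-Zehnder index used above can be written out as follows. This is a standard formulation along the lines of [10, 11]; the precise sign conventions of the original displays are an assumption here:

```latex
% Crossing form and Conley--Zehnder index of a regular symplectic path (standard conventions assumed):
\Gamma(\Psi, t)\,\xi \;=\; \langle \xi,\, S(t)\,\xi \rangle, \qquad \xi \in \ker\big(\Psi(t) - \mathrm{Id}\big),
\qquad \text{where } \dot{\Psi}(t) = J S(t) \Psi(t),
\\
\mu_{CZ}(\Psi) \;=\; \tfrac{1}{2}\,\mathrm{sgn}\,\Gamma(\Psi, 0) \;+\; \sum_{t_0 > 0} \mathrm{sgn}\,\Gamma(\Psi, t_0).
```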
In the case η = 0, the Conley-Zehnder index µ CZ B(0) can be computed explicitly.
The Euler-Floer characteristic, cf. (1), is given by χ(x rel y) = ∑ k∈Z (−1) k dim HB k ([x rel y]), which is also the Euler characteristic of HB * ([x] rel y). In [12] the following analogue of the Poincaré-Hopf formula is proved.
Since the Conley-Zehnder index is linked to spectral flow we obtain: Proposition 8 For a proper braid class fiber [x] rel y and a generic Hamiltonian H ∈ H reg (y), the degree deg LS (Φ H , Ω, 0) can be expressed in terms of the Conley-Zehnder indices of the zeroes of Φ H in Ω. Proof By Proposition 6 and Lemma 5 the spectral flow satisfies the required identity, which implies the stated formula and completes the proof.
In order to prove that the Euler-Floer characteristic χ(x rel y) and deg LS (Φ H , Ω, 0) are related we need to investigate the relation between the spectral flows of B(η) and C(η).
6 The spectral flows are the same
In order to show that the spectral flows are the same we use the fact that the paths η → C(η) and η → B(η) for a non-degenerate zero x i ∈ Φ −1 H (0) ∩ Ω are chosen to have only simple crossings for their crossing operators, i.e. zero eigenvalues are simple. In this case the spectral flows are determined by the signs of the derivatives of the eigenvalues at the crossings. For η → B(η) the expression is given by Equation (26), and from Equation (19) a similar expression for η → C(η) can be derived. Lemma 7 For all µ > 0, with µ ≠ 2πk, k ∈ Z, the signs of the crossing eigenvalue derivatives for B(η) and C(η) coincide at every crossing η 0 .
Lemma 7 implies that for any non-degenerate zero x the spectral flows of B(η; x) and C(η; x) coincide, where B(η; x) and C(η; x) are the above described paths associated with x. Therefore the degree identity follows, which completes the proof.
The proof of Theorems 1 and 2
We start with the proof of Theorem 2. Since HB * ([x rel y]) is an invariant of the proper braid class [x rel y] it does not depend on a particular fiber [x] rel y. Recall the homotopy invariance of the Leray-Schauder degree as expressed in Equation (7): deg LS (Φ, Ω, 0) = deg LS (Φ H , Ω, 0). By Proposition 9 this degree is determined by χ(x rel y); if χ(x rel y) ≠ 0, then it follows that Φ −1 (0) ∩ Ω ≠ ∅. Therefore there exists a closed integral curve in any relative braid class fiber [x] rel y whenever χ(x rel y) ≠ 0. This completes the proof of Theorem 2. The remainder of this section is devoted to proving the Poincaré-Hopf Formula in Theorem 1 for closed integral curves in proper relative braid fibers. The mapping E is a smooth (nonlinear) Fredholm mapping of index 0. Let M ∈ GL(C 0 ,C 1 ) be an isomorphism such that ME (x) is of the form ME (x) = x − K M (x). Let x ∈ C 1 (R/Z) be a non-degenerate zero of E and recall, using the theory in [4, 3, 5, 6], that the index ι(x) defined in (3) can be expressed in terms of β M (Θ), where Θ ∈ M 2×2 (R), with σ(Θ) ∩ 2πkiR = ∅, k ∈ Z, and β M (Θ) is the Morse index of Id − K M (0).
Lemma 8
The index ι(x) for a non-degenerate zero of E is well-defined, i.e. independent of the choices of M ∈ GL(C 0 ,C 1 ) and Θ ∈ M 2×2 (R).
Lemma 8 shows that the index of a non-degenerate zero of E is well-defined. We now show that the same holds for isolated zeroes.
Lemma 9
The index ι(x) for an isolated zero of E is well-defined, and for a fixed choice of M and Θ the index is given by the local Leray-Schauder degree on B ε (x), where ε > 0 is small enough such that x is the only zero of E in B ε (x).
Proof By the Sard-Smale Theorem one can choose an arbitrarily small h ∈ C 0 (R/Z), with ‖h‖ C 0 < ε, such that h is a regular value of E and E −1 (h) ∩ B ε (x) consists of finitely many non-degenerate zeroes x h . Set Ẽ(x) = E (x) − h and define the index accordingly. We now show that ι(x) is well-defined. Choose a fixed parametrix M (for E ) and a fixed Θ ∈ M 2×2 (R), and let Φ M = M E ; then the resulting degrees agree, which proves the lemma.
Theorem 1 now follows from the Leray-Schauder degree. Suppose all zeroes of E in Ω = [x] rel y are isolated; then Lemma 9 implies that the sum of the indices ι(x) equals the Leray-Schauder degree, which yields the Poincaré-Hopf formula.
Legendrian braids
In this section we prove Theorem 3 and show that the Euler-Floer characteristic can be determined via a discrete topological invariant.
Hyperbolic Hamiltonians on R 2
Consider Hamiltonians of the form H(x,t) = 1 2 p 2 − 1 2 q 2 + h(q,t), where h satisfies the following hypotheses (h1)-(h3). Since h is smooth we can rewrite the Hamilton equations as a second-order equation for q. Let x(t) be a 1-periodic solution of the Hamilton equations and suppose there exists an interval I = [t 0 ,t 1 ] ⊂ [0, 1] such that |q(t)| > R on int(I) and |q(t)| = R on ∂I. The function q| I satisfies the equation q tt − q = 0, and such solutions do not exist. Indeed, if q| I ≥ R, then q t (t 0 ) ≥ 0 and q t (t 1 ) ≤ 0, and thus 0 ≥ q t | ∂I = ∫ I q tt = ∫ I q ≥ R|I| > 0, a contradiction. The same holds for q| I ≤ −R. We conclude that |q(t)| < R, for all t ∈ R/Z.
We now use the a priori q-estimate in combination with Equation (35) and Hypothesis (h3). Multiplying Equation (35) by q and integrating over [0, 1] implies that ∫ 0 1 q t 2 ≤ C(R). The L 2 -norm of the right hand side in (35) can be estimated using the L ∞ estimate on q and the L 2 -estimate on q t , which yields ∫ 0 1 q tt 2 ≤ C(R). Combining these estimates we have that ‖q‖ H 2 (R/Z) ≤ C(R) and thus |q t (t)| ≤ C(R), for all t ∈ R/Z. From the Hamilton equations it follows that |p(t)| ≤ |q t (t)| + C, which proves the lemma. The same estimates hold uniformly along the linear homotopy of Hamiltonians (Lemma 11). Proof The a priori H 2 -estimates in Lemma 10 hold with uniform constants with respect to α ∈ [0, 1]. This then proves the lemma.
Braids on R 2 and Legendrian braids
In Section 1 we defined braid classes as path components of closed loops in LC n (D 2 ), denoted by [x]. If we consider closed loops in C n (R 2 ), then the braid classes will be denoted by [x] R 2 . The same notation applies to relative braid classes [x rel y] R 2 . A relative braid class is proper if components x c ⊂ x cannot be deformed onto (i) themselves or other components x c′ ⊂ x, or (ii) components y c ⊂ y. A fiber [x] R 2 rel y is not bounded! In order to compute the Euler-Floer characteristic of [x rel y] we assume without loss of generality that x rel y is a positive representative. If not, we compose x rel y with a sufficient number of positive full twists such that the resulting braid is positive, i.e. has only positive crossings, see [12] for more details. The Euler-Floer characteristic remains unchanged. We denote a positive representative x + rel y + again by x rel y.
Define an augmented skeleton y * by adding the constant strands y − (t) = (0, −1) and y + (t) = (0, 1). For proper braid classes it holds that [x rel y] = [x rel y * ]. For notational simplicity we denote the augmented skeleton again by y. We also choose the representative x rel y with the additional property that π 2 x rel π 2 y is a relative braid diagram, i.e. there are no tangencies between the strands, where π 2 is the projection onto the q-coordinate. We denote the projection by q rel Q, where q = π 2 x and Q = π 2 y. Special braids on R 2 can be constructed from (smooth) positive braids. Define x L = (q t , q) and y L = (Q t , Q), where the subscript t denotes differentiation with respect to t. These are called Legendrian braids with respect to θ = pdt − dq.
Lemma 12 For a positive braid x rel y with only transverse, positive crossings, the braids x L rel y L and x rel y are isotopic as braids on R 2 . Moreover, if x L rel y L and x′ L rel y′ L are isotopic Legendrian braids, then they are isotopic via a Legendrian isotopy.
Proof By assumption x rel y is a representative for which the braid diagram q rel Q has only positive transverse crossings. Due to the transversality of intersections the associated Legendrian braid x L rel y L is a braid in [x rel y] R 2 . Consider the homotopy ζ j (t, τ) = τ p j (t) + (1 − τ) q j t , for every strand q j . At q-intersections, i.e. times t 0 such that q j (t 0 ) = q j′ (t 0 ) for some j ≠ j′, it holds that p j (t 0 ) − p j′ (t 0 ) and q j t (t 0 ) − q j′ t (t 0 ) are non-zero and have the same sign, since all crossings in x rel y are positive! Therefore ζ j (t 0 , τ) ≠ ζ j′ (t 0 , τ) for any intersection t 0 and any τ ∈ [0, 1], which shows that x rel y and x L rel y L are isotopic. Since isotopic Legendrian braids x L rel y L and x′ L rel y′ L have only positive crossings, a smooth Legendrian isotopy exists.
The associated equivalence class of Legendrian braid diagrams is denoted by [q rel Q] and its fibers by [q] rel Q.
Mechanical Hamiltonian systems
Legendrian braids can be described via Hamiltonians of mechanical type, i.e. H(x,t) = 1 2 p 2 + V (q,t). Due to this special form we can also use the Lagrangian formalism for such systems. In the next subsection we investigate the relation between the Conley-Zehnder index and the Lagrangian Morse index of closed integral curves.
The Lagrangian Morse index
A mechanical system is defined by the Euler-Lagrange equations of the Lagrangian density L(q,t) = 1 2 q 2 t − V (q,t). The linearization at a critical point q(t) of the Lagrangian action is given by an unbounded operator. For a mechanical system the Hamiltonian is given by H(x,t) = 1 2 p 2 + V (q,t). As such, the Conley-Zehnder index of a critical point q can be defined as the Conley-Zehnder index of x = (q t , q), see also [1] and [2]. The Morse index γ(q) of a critical point is defined as the number of negative eigenvalues of the linearized operator. Lemma 13 Let q be a critical point of the mechanical Lagrangian action, then the associated Conley-Zehnder index µ CZ (x) is well-defined, and µ CZ (x) = γ(q).
The Poincaré-Hopf Formula and the Morse index
Legendrian braids can be described with Lagrangian systems and Hamiltonians of the form H L (x,t) = 1 2 p 2 − 1 2 q 2 + g(q,t). On the potential functions g we impose Hypotheses (g1)-(g2). In order to have a straightforward construction of a mechanical Lagrangian we may consider a special representation of y. The Euler-Floer characteristic χ(x rel y) does not depend on the choice of the fiber [x] rel y and therefore also not on the skeleton y. We assume that y has linear crossings in y L . Let t = t 0 be a crossing and let I(t 0 ) be the set of labels defined by: i, j ∈ I(t 0 ) if i ≠ j and Q i (t 0 ) = Q j (t 0 ). A crossing at t = t 0 is linear if Q i t (t) = constant, ∀i ∈ I(t 0 ) and ∀t ∈ (−ε + t 0 , ε + t 0 ), for some ε = ε(t 0 ) > 0. Every skeleton Q with transverse crossings is isotopic to a skeleton with linear crossings via a small local deformation at the crossings. For Legendrian braids x L rel y L ∈ [x rel y] R 2 with linear crossings the following result holds: Lemma 14 Let y L be a Legendrian skeleton with linear crossings. Then there exists a Hamiltonian of the form H L (x,t) = 1 2 p 2 − 1 2 q 2 + g(q,t), with g satisfying Hypotheses (g1)-(g2), and R > 0 sufficiently large, such that y L is a skeleton for X H L (x,t).
Proof Due to the linear crossings in y L we can follow the construction in [12]. For each strand Q i we define the potentials g i (q,t) = −Q i tt (t) q. By construction Q i is a solution of the equation Q i tt = −g i q (Q i ,t). Now choose small tubular neighborhoods of the strands Q i and cut-off functions ω i that are equal to 1 near Q i and are supported in the tubular neighborhoods. If the tubular neighborhoods are narrow enough, then supp(ω i g i ) ∩ supp(ω j g j ) = ∅, for all i ≠ j, due to the fact that at crossings the functions g i in question are zero. This implies that all strands Q i satisfy one and the same differential equation, whose potential is compactly supported. The latter follows from the fact that for the constant strands Q i = ±1 the potentials g i vanish. Let R > 1 and define the modified potentials g̃ i (with m = #Q), which yields smooth functions g̃ i on R × R/Z. Now define g as the sum of these contributions. By construction supp(g) ⊂ [−R, R] × R/Z, for some R > 1, and the strands Q i all satisfy the Euler-Lagrange equations Q i tt = Q i − g q (Q i ,t), which completes the proof.
The Hamiltonian H L given by Lemma 14 gives rise to a Lagrangian system with the Lagrangian action given by (38). The braid class [q] rel Q is bounded due to the special strands ±1, and all free strands q satisfy −1 ≤ q(t) ≤ 1. Therefore the set of critical points of L in [q] rel Q is a compact set. The critical points of L in [q] rel Q are in one-to-one correspondence with the zeroes of Φ H L in the set Ω R 2 = [x L ] R 2 rel y L , which implies that Φ H L is a proper mapping on Ω R 2 . From Lemma 10 we derive that the zeroes of Φ H L are contained in a ball in R 2 of radius R > 1, and thus Φ −1 H L (0) ∩ Ω R 2 ⊂ B R (0) ⊂ C 1 (R/Z). Therefore the Leray-Schauder degree is well-defined, and in the generic case Lemma 13 and Equations (21), (27) and (30) yield an expression for the degree in terms of the Lagrangian Morse indices. We are now in a position to use a homotopy argument. We can scale y to a braid ρy such that the rescaled Legendrian braid ρy L is supported in D 2 . By Lemma 12, y is isotopic to y L and scaling defines an isotopy between y L and ρy L . Denote the isotopy from y to ρy L by y α . By Proposition 9 we obtain that the Leray-Schauder degrees for both skeletons y and ρy L coincide, where Ω ρ = [ρx L ] rel ρy L ⊂ [x rel y] and H ρ ∈ H (ρy L ). Now extend H ρ to R 2 × R/Z such that Hypotheses (h1)-(h3) are satisfied for some R > 1. We denote the Hamiltonian again by H ρ . By construction all zeroes of Φ H ρ in [ρx L ] rel ρy L are supported in D 2 and therefore the zeroes of Φ H ρ in [ρx L ] R 2 rel ρy L are also supported in D 2 . Indeed, any zero intersects D 2 , since the braid class is proper, and since ∂D 2 is invariant for the Hamiltonian vector field, a zero is either inside or outside D 2 . Combining these facts implies that a zero lies inside D 2 . This yields the same degree on Ω ρ,R 2 = [ρx L ] R 2 rel ρy L . For the next homotopy we keep the skeleton ρy L fixed as well as the domain Ω ρ,R 2 . Consider the linear homotopy of Hamiltonians between H ρ and H ρ,L , where H ρ,L (t, x) = 1 2 p 2 − 1 2 q 2 + g ρ (q,t) is given by Lemma 14. This defines an admissible homotopy since ρy L is a skeleton for all α ∈ [0, 1]. The uniform estimates are obtained, as before, by Lemma 11, which allows application of the homotopy invariance of the Leray-Schauder degree. Finally, we scale ρy L to y L via y α,L = (1−α)ρy L + αy L and we consider the homotopy between H L and H ρ,L , where g(q,t; α) is found by applying Lemma 14 to y α,L . The uniform estimates from Lemma 11 allow us to apply the Leray-Schauder degree once more. Combining the equalities for the various Leray-Schauder degrees with (39) yields the desired identity.
10 The proof of Theorem 3
The Lagrangian setting introduced in the previous section allows for another simplification via finite dimensional systems.
Discretized braid classes
The Lagrangian problem (38) can be treated by using a variation on the method of broken geodesics. If we choose 1/d > 0 sufficiently small, the action integral over [τ i , τ i+1 ], with τ i = i/d and boundary values q(τ i ) = q i and q(τ i+1 ) = q i+1 , has a unique minimizer, whose action we denote by S i (q i , q i+1 ). Moreover, if 1/d is small, then the minimizers are non-degenerate and S i is a smooth function of q i and q i+1 . Critical points q of L with |q(t)| ≤ 1 correspond to sequences q D = (q 0 , · · · , q d ), with q 0 = q d , which are critical points of the discrete action W (q D ) = ∑ i S i (q i , q i+1 ). A concatenation # i q i of minimizers q i is continuous and is an element of the function space H 1 (R/Z), and is referred to as a broken geodesic. The set of broken geodesics # i q i is denoted by E(q D ), and standard arguments using the non-degeneracy of the minimizers yield the following commuting diagram. In the diagram # i is regarded as a mapping q D → # i q i , where the minimizers q i are determined by q D . The tangent space to E(q D ) at a broken geodesic # i q i is identified in the natural way, and # i q i + T # i q i E(q D ) is the tangent hyperplane at # i q i . For H 1 (R/Z) we have the decomposition H 1 (R/Z) = T # i q i E(q D ) ⊕ E for any broken geodesic # i q i ∈ E(q D ), where E = {η ∈ H 1 (R/Z) | η(τ i ) = 0, ∀i}. To be more specific, the decomposition is orthogonal with respect to the quadratic form D 2 L . Indeed, let η ∈ E and ψ ∈ T # i q i E(q D ); then a direct computation shows that the cross term vanishes, which establishes the orthogonality. By construction the minimizers q i are non-degenerate and therefore D 2 L | E is positive definite. This implies that the Morse index of a (stationary) broken geodesic is determined by D 2 L | T # i q i E(q D ) . By the commuting diagram for W this implies that the Morse index is given by the quadratic form D 2 W (q D ). We have now proved the following lemma that relates the Morse index of critical points of the discrete action W to the Morse index of the 'full' action L .
Lemma 15 Let q be a critical point of L and q D the corresponding critical point of W , then the Morse indices are the same i.e. γ(q) = γ(q D ).
For a 1-periodic function q(t) we define the discretization mapping D d : q → q D = (q(τ 0 ), · · · , q(τ d )), and q D is called the discretization of q. The linear interpolation through the points q D reconstructs a piecewise linear 1-periodic function. For a relative braid diagram q rel Q, let q D rel Q D be its discretization, where Q D is obtained by applying D d to every strand in Q. A discretization q D rel Q D is admissible if q D rel Q D is homotopic to q rel Q, i.e. q D rel Q D ∈ [q rel Q]. Define the discrete relative braid class [q D rel Q D ] as the set of 'discrete relative braids' q′ D rel Q′ D such that q′ D rel Q′ D ∈ [q rel Q]. The associated fibers are denoted by [q D ] rel Q D . It follows from [8], Proposition 27, that [q D rel Q D ] is guaranteed to be connected when d > #{crossings in q rel Q}, i.e. for any two discrete relative braids q D rel Q D and q′ D rel Q′ D there exists a homotopy between them through discrete relative braids. Note that fibers are not necessarily connected! For a braid class [q rel Q] the associated discrete braid class [q D rel Q D ] may already be connected for a smaller choice of d.
We showed above that if 1/d > 0 is sufficiently small, then the critical points of L , with |q| ≤ 1, are in one-to-one correspondence with the critical points of W , and their Morse indices coincide by Lemma 15. Moreover, if 1/d > 0 is small enough, then for all critical points of L in [q] rel Q, the associated discretizations are admissible and [q D rel Q D ] is a connected set. The discretizations of the critical points of L in [q] rel Q are critical points of W in the discrete braid class fiber [q D ] rel Q D . Now combine the index identity with (40), which yields a relation between the degree and the discrete Morse indices. The invariant χ(q D rel Q D ) is well-defined for any d > 0 for which there exist admissible discretizations and is independent of both the fiber and the discretization size. From [8] we have, for any Morse function W on a proper braid class fiber, that the signed count of its critical points equals the Euler characteristic of the associated topological pair (see the sketch after this paragraph). The latter can be computed for any admissible discretization and is an invariant of [q rel Q]. Combining Equations (44) and (45) gives the identification of the Euler-Floer characteristic with the discrete invariant. In this section we assumed without loss of generality that x rel y is augmented, and since the Euler-Floer characteristic is a braid class invariant, an admissible discretization is constructed for an appropriate augmented, Legendrian representative x L rel y L . Summarizing: since χ(q D rel Q * D ) is the same for any admissible discretization, the Euler-Floer characteristic can be computed using any admissible discretization, which proves Theorem 3.
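The displayed identities referred to above (around Equations (44)-(45)) did not survive extraction. The following sketch records, as an assumption about their content, the relations the argument appears to use; here [q D ] − rel Q D denotes the exit set and γ the discrete Morse index:

```latex
% Assumed content of the missing displays:
\chi\big(q_D \,\mathrm{rel}\, Q_D\big)
  \;=\; \sum_{q_D \in \mathrm{Crit}(W)} (-1)^{\gamma(q_D)}
  \;=\; \chi\Big(\,[q_D]\,\mathrm{rel}\,Q_D,\; [q_D]^{-}\,\mathrm{rel}\,Q_D\,\Big),
\qquad
\chi\big(x \,\mathrm{rel}\, y\big) \;=\; \chi\big(q_D \,\mathrm{rel}\, Q_D\big).
```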
Remark 5
The invariant χ(q D rel Q D ) is a true Euler characteristic of a topological pair. To be more precise, χ(q D rel Q D ) = χ([q D ] rel Q D , [q D ] − rel Q D ), where [q D ] − rel Q D is the exit set as described above. A similar characterization does not a priori exist for [x] rel y. Firstly, it is more complicated to designate the equivalent of an exit set [x] − rel y for [x] rel y, and secondly it is not straightforward to develop a (co)homology theory that is able to provide meaningful information about the topological pair ([x] rel y, [x] − rel y). This problem is circumvented by considering Hamiltonian systems and carrying out Floer's approach towards Morse theory (see [7]), using the isolation property of [x] rel y. The fact that the Euler characteristic of Floer homology is related to the Euler characteristic of a topological pair indicates that Floer homology is a good substitute for a suitable (co)homology theory.
Examples
We will illustrate by means of two examples that the Euler-Floer characteristic is computable and can be used to find closed integral curves of vector fields on the 2-disc. Figure 4[left] shows the braid diagram q rel Q of a positive relative braid x rel y. The discretization q D rel Q D , with d = 2, is shown in Figure 4[right]. The chosen discretization is admissible and defines the relative braid class [q D rel Q D ]. There are five strands, one free and four fixed. We denote the points on the free strand by q D = (q 0 , q 1 ) and on the skeleton by Q D = {Q 1 , · · · , Q 4 }, with Q i = (Q i 0 , Q i 1 ), i = 1, · · · , 4. In Figure 5[left] the braid class fiber [q D ] rel Q D is depicted. The coordinate q 0 is allowed to move between Q 3 0 and Q 2 0 , and q 1 remains in the same braid class if it varies between Q 1 1 and Q 4 1 . For the values q 0 = Q 3 0 and q 0 = Q 2 0 the relative braid becomes singular, and if q 0 crosses these values two intersections are created. If q 1 crosses the values Q 1 1 or Q 4 1 two intersections are destroyed. This provides the desired co-orientation, see Figure 5, and yields a non-zero Euler-Floer characteristic. From Theorem 2 we derive that any vector field for which y is a skeleton has at least one closed integral curve x 0 rel y ∈ [x] rel y. Theorem 2 also implies that any orientation preserving diffeomorphism f on the 2-disc which fixes the set of four points A 4 , and whose mapping class [ f ; A 4 ] is represented by the braid y, has an additional fixed point.
Example
The theory can also be used to find additional closed integral curves by concatenating the skeleton y. As in the previous example y is given by Figure 4. Glue ℓ copies of the skeleton y to obtain its ℓ-fold concatenation and reparametrize time by t → ℓ·t. Denote the rescaled ℓ-fold concatenation of y by # ℓ y. Choose d = 2ℓ and discretize # ℓ y as in the previous example. (Fig. 6 shows a discretization of a braid class with a 5-fold concatenation of the skeleton y; the number of odd anchor points in middle position is µ = 3.)
For a given braid class [x rel # ℓ y], Figure 6 shows a discretized representative q D rel # ℓ Q D , which is admissible. For the skeleton # ℓ Q D we can construct 3 ℓ − 2 proper relative braid classes in the following way: the even anchor points of the free strand q D are always in the middle position, and for the odd anchor points we have 3 possible choices: bottom, middle, top (2 of the resulting braids are not proper). We now compute the Conley index of the 3 ℓ − 2 different proper discrete relative braid classes and show that the Euler-Floer characteristic is non-trivial for these relative braid classes.
The configuration space N = cl([q D ] rel # ℓ Q D ) in this case is given by a cartesian product of 2ℓ closed intervals, and is therefore a 2ℓ-dimensional hypercube. We now proceed by determining the exit set N − . As in the previous example the co-orientation is found via a union of faces with an outward pointing co-orientation. Due to the simple product structure of N, the set N − is determined by the odd anchor points in the middle position. Denote the number of middle positions at odd anchor points by µ. In this way N − consists of pairs of opposite faces at the odd anchor points in middle position, see Figure 6. Therefore the Euler characteristic of the pair (N, N − ) is determined by µ (see the sketch below). Let X(x,t) be a vector field for which y is a skeleton of closed integral curves; then # ℓ y is a skeleton for the vector field X ℓ (x,t) := ℓX(x, ℓt). From Theorem 2 we derive that there exists a closed integral curve in each of the 3 ℓ − 2 proper relative classes described above. For the original vector field X this yields 3 ℓ − 2 distinct closed integral curves. Using the arguments in [13] one can find a compact invariant set for X with positive topological entropy, which proves that the associated flow is 'chaotic' whenever y is a skeleton of given closed integral curves.
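The displayed computation of the Euler characteristic of the pair (N, N −) was lost in extraction. The following reconstruction is an assumption, but it is consistent with the multi-strand example below, where the value (−1)² = 1 appears:

```latex
% Assumed form of the missing display: a 2l-cube whose exit set consists of mu pairs of opposite faces
\chi\big(q_D \,\mathrm{rel}\, \#_\ell Q_D\big) \;=\; \chi\big(N, N^{-}\big) \;=\; (-1)^{\mu},
```

where µ is the number of odd anchor points of the free strand in the middle position, so that each of these proper classes has a non-zero Euler-Floer characteristic.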
Example
So far we have not addressed the question whether the closed integral curves x rel y are non-trivial, i.e. not equilibrium points of X. The theory can also be extended in order to find non-trivial closed integral curves. This paper restricts to relative braids where x consists of just one strand. Braid Floer homology for relative braids with x consisting of n strands is defined in [12]. To illustrate the importance of multi-strand braids we consider the discrete braid class in Figure 7. The braid class depicted in Figure 7[right] is discussed in the previous example and the Euler-Floer characteristic is equal to 1. By considering all translates of x on the circle R/Z, we obtain the braids in Figure 7[left]. The latter braid class is proper and encodes extra information about q D relative to Q D . The braid class fiber is a 6-dimensional cube with the same Conley index as the braid class in Figure 7[right]. Therefore, χ(q D rel Q D ) = (−1) 2 = 1.
As in the 1-strand case, the discrete Euler characteristic can be used to compute the associated Euler-Floer characteristic of x rel y, and χ(x rel y) = 1. The skeleton y thus forces solutions x rel y of the above described type. The additional information we obtain this way is that for these braid classes [x rel y] the associated closed integral curves for X cannot be constant and therefore represent non-trivial closed integral curves.
Infrared and 13 C NMR spectral studies of some aryl hydrazides: Assessment of substituent effects
A series of aryl hydrazides has been synthesized. The purities of these hydrazides were analyzed by physical constants and spectral data. The assigned spectral group frequencies were correlated with Hammett substituent constants and Swain-Lupton parameters using single and multi-linear regression analysis. From the results of the statistical analysis, the effect of substituents on the spectral group frequencies has been discussed.
Thirunarayanan and Sekar have studied the effect of substituents by Hammett spectral correlations in benzofuranyl flavonols and pyrazoline derivatives [16,17]. In view of the above, there is no report available on the study of the effect of substituents in aryl hydrazides. Therefore the authors have undertaken the study of the effect of substituents on substituted benzohydrazides by infrared and 13 C NMR spectroscopy.
1. General
The IR spectra (KBr, 4000-400 cm -1 ) of all compounds have been recorded on an AVATAR-300 Fourier transform spectrophotometer. The 13 C NMR spectra were recorded on a Bruker AV400 NMR spectrometer operating at 100 MHz in CDCl3 solvent, using TMS as internal standard.
Synthesis of substituted benzohydrazides
All substituted benzohydrazides have been synthesized and their purities analyzed by the literature method [4]. The general structure of the synthesized benzohydrazides is shown in Fig. 1.
RESULTS AND DISCUSSION
In the present study, the authors have investigated the effect of substituents on the assigned spectral frequencies using the Hammett equation with Hammett substituent constants and Swain-Lupton [18] constants by single and multi-linear regression analysis.
1. IR spectral study
In the infrared spectral study, the Hammett equation is applied to predict the effect of substituents on the carbonyl and NH stretches with Hammett substituent constants. In this correlation, the Hammett equation was taken in the form ν = ρσ + ν o , where ν is the frequency of the substituted system, ρ is the reaction constant (slope), σ is the substituent constant and ν o is the frequency of the parent member of the series. The assigned carbonyl and NH stretches (ν, cm -1 ) of all substituted benzohydrazides are presented in Table 1. The results of the statistical analysis [9][10][11][12][13][14][15][16][17] are shown in Table 2. From Table 2, the correlations of the νCO and NH stretches (ν, cm -1 ) of the hydrazide derivatives produced satisfactory correlation coefficients. Among these correlations the σ constant gave slightly better r values; the other constants gave more or less equal r values. Similarly, satisfactory correlations were produced in the multi-regression analysis with Swain-Lupton [18] parameters (an illustrative regression sketch is given after this paragraph). The generated multi-regression equations are shown in (2)-(5).
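As an illustration of the single- and multi-linear regression procedure described above, the short Python sketch below fits ν = ρσ + ν₀ and ν = ν₀ + f·F + r·R by least squares. The substituent constants and frequencies used here are placeholder values for a few common substituents, not the measured data of Tables 1 and 2.

```python
import numpy as np

# Placeholder Hammett sigma constants and Swain-Lupton F, R values (illustrative only)
sigma = np.array([0.00, -0.17, 0.23, 0.78, -0.27])   # H, CH3, Cl, NO2, OCH3
F     = np.array([0.00,  0.01, 0.42, 0.65,  0.29])
R     = np.array([0.00, -0.18, -0.19, 0.13, -0.56])
nu    = np.array([1660., 1656., 1665., 1673., 1652.])  # hypothetical C=O stretches, cm^-1

# Single-parameter Hammett correlation: nu = rho*sigma + nu0
rho, nu0 = np.polyfit(sigma, nu, 1)
r = np.corrcoef(sigma, nu)[0, 1]
print(f"single: nu = {rho:.1f}*sigma + {nu0:.1f}  (r = {r:.3f})")

# Multi-linear correlation with Swain-Lupton parameters: nu = nu0 + f*F + r*R
X = np.column_stack([np.ones_like(F), F, R])
coef, *_ = np.linalg.lstsq(X, nu, rcond=None)
print("multi:  nu0, f, r =", np.round(coef, 2))
```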
2. 13 C NMR Spectral study
In nuclear magnetic resonance spectra, the 1 H or the 13 C chemical shifts (δ, ppm) depend on the electronic environment of the nuclei concerned. These chemical shifts of hydrazide have been correlated with reactivity parameters. Thus the Hammett equation was used in the form as shown in (6). Log δ = Log δ 0 + ρσ … (6)
where δ 0 is the chemical shift of the corresponding parent compound. The CO and C ipso carbon chemical shifts (δ, ppm) of the hydrazides have been assigned and correlated with Hammett substituent constants using single and multi-linear regression analysis [9][10][11][12][13][14][15][16][17]. The results of the statistical analysis are shown in Table 2. From Table 2, the correlations of the CO carbon chemical shifts (δ, ppm) of the hydrazides with the Hammett σ, σ + , σ R constants and R parameters were satisfactory. The Hammett σ I constant and F parameter failed in correlation. This is due to the inability to predict the effect of substituents on the carbonyl chemical shifts and is associated with the resonance-conjugative structure shown in Figure 2. (Table 2 notation: r = correlation coefficient; I = intercept; ρ = slope; s = standard deviation; n = number of correlated derivatives.) The correlation of the C ipso carbon chemical shifts (δ, ppm) of the hydrazides was satisfactory with the Hammett substituent constants [9][10][11][12][13][14][15][16][17] and the F and R parameters [18]. Among these correlations, the Hammett σ I constants gave slightly better r values. The other correlations are more or less uniform. The multi-linear correlations of the CO and C ipso carbon chemical shifts (δ, ppm) of the hydrazides produced satisfactory correlation coefficients.
CONCLUSIONS
A series of aryl hydrazides has been synthesized and their infrared and 13 C NMR spectra recorded. The assigned spectral group frequencies were correlated with Hammett substituent constants and Swain-Lupton parameters using single and multi-linear regression analysis. In the infrared spectral correlations, all regressions gave satisfactory r values. In the 13 C NMR spectral correlations, the carbonyl carbon chemical shifts correlated satisfactorily with the Hammett σ, σ + , σ R constants and R parameters. The correlation of the C ipso carbon chemical shifts (δ, ppm) of the hydrazides was satisfactory with the Hammett substituent constants and the F and R parameters. In the multi-regression analysis all spectral data of the hydrazides gave satisfactory correlations.
Objective and subjective acoustics measurement of audience seating areas in a medium size auditorium
Several considerations of the acoustical quality of a multi-purpose hall should be taken into account, and therefore the audience must be fully aware of the acoustical consequences in deciding their seating positions. In this paper, objective measurement and subjective assessment in the medium-capacity Driyarkara multi-purpose hall have been investigated. Room impulse responses (RIR) were taken for several monaural parameter distributions, including G, EDT, and C80, at 16 positions referring to ISO 3382-1:2009. The signals were convolved with anechoic sound of musical instruments to reproduce acoustic stimuli for subjects to listen to. The stimuli data were reduced to 8 measurement positions, which indicated a noticeable difference. These were used to reproduce binaural sounds for further subjective assessment. The distribution of binaural parameters was observed at 250 Hz, 1 kHz, and 4 kHz to obtain the JND (just noticeable difference). The results reveal that this hall contains 3 different acoustic zones with the ranges of acoustic parameters of G: -8.91 dB – 1.07 dB, EDT: 1.12 s – 1.52 s, C50: -0.11 dB – 5.05 dB, C80: -3.70 dB – 2.03 dB, D50: 31.00 – 60.71, Ts: 62.75 ms – 106.50 ms, IACC: 0.43 – 0.82, Treble Ratio: 0.67 – 0.96 and Bass Ratio: 0.81 – 2.80. Furthermore, this result correlates with the JND values for G: 2.26 dB, Ts: 17.5 ms, Treble Ratio: 0.2, Bass Ratio: 0.03 and D50: 16.67.
Introduction
Auditoriums have many functions; among the most familiar are meetings and performances. As a performing space, an auditorium should have a volume per seat of minimum 6.2 m 3 and maximum 10.8 m 3 [1]. A music performance space or concert hall emphasizes the distribution of sound throughout the entire hall. Commonly, a closed performance space produces low background noise. Suyatno et al [2] studied the acoustic parameters for gamelan performances in Pendopo Mangkunegaran Surakarta. The measured background noise in the main hall, the pendopo terrace and the area outside the pendopo was 54 dB, 62 dB and 71 dB, respectively. The pavilion, which has a pyramid-shaped roof, greatly affects the sound distribution inside the main hall. The largest reverberation time inside the pendopo, at the center of the main hall, is 2.2 s. This is influenced by the distribution of reflected sound from the geometry of the roof and its material.
The same phenomenon occurs in an atrium that has a large capacity with solid roof material. These architectural elements create acoustic discomfort. The results of computer simulations show that micro-perforated panels can improve the noise level and reverberation time [3], where the reverberation time decreased by around 1 to 2 s. Research by Marc Aretz and Raf Orlowski [4] reported that changes in the absorption area change not only the value of the reverberation time but also the value of G. A concert hall will produce a good sound distribution when the reverberation time is between 1.6 s and 2 s [5]. Reverberation time also affects the conductor and the orchestra players in their ability to hear the clarity of the tone of each phrase of a musical composition [5].
Measurement techniques and just noticeable difference (JND) values are provided in ISO 3382-1:2009 [6]. A study reported a JND value for reverberation time that is larger than the value in ISO 3382-1, namely 24.5 with a standard deviation of 6.9% [7]. The distribution of acoustic parameters can be presented by area mapping using data at points that are sufficient to show the differences. Taeko Akama et al. [8] suggested that good mapping is obtained not by a larger number of measurement points but by a sufficient number of measurement points. This ensures the best visualization in presenting the distribution of the acoustic parameters. To get enough measurement points, Taeko Akama also suggested performing measurements with different numbers of points. Another study, reported by Mike Barron [9], found that the value of G (strength) is not constant. As the distance between the sound source and the receiver increases, the G value decreases; the farther from the sound source, the more the G value decreases.
To evaluate the quality of a performance space, subjective evaluation is also needed besides acoustic parameter measurements. Acoustic stimuli help respondents identify their favorite seats better than visual stimuli only [10]. The combination of these two stimuli produces a more suitable assessment. Angelo Farina reported indices for the subjective evaluation of concert halls and opera houses [11]. The indices are those with a regression value of more than 0.3 in every experiment comparing objective parameters and subjective indices. The case studied in this research is the Driyarkara Auditorium in Yogyakarta, Indonesia. The Driyarkara Auditorium was designed specifically for speech and music purposes without an electrical sound system. With a capacity of more than 1000 audience members, this auditorium is required to fulfill the acoustic needs of all audience seats.
Research Methodology
The Driyarkara Auditorium, Yogyakarta, is a multi-purpose hall used for graduation ceremonies and also rented as a concert hall. It is a medium size auditorium with a capacity of 974 seats on the first floor and 126 seats on the second floor. The walls are covered by diffuser panels and absorbing material and are arranged in a zig-zag pattern. In front of the stage, a large diffuser panel is installed to scatter the sound. This condition affects the audience seating area by providing sound reflections coming from many surfaces.
The equipment used in this research comprises a balloon with 30 cm diameter as the sound source, an omnidirectional BSWA MPA416 microphone as the receiver, a soundcard, a laptop and headphones. Preliminary research was done to obtain measurement positions that represent the acoustic profile of the auditorium. Objective measurement was done by recording at 16 seats (A1, A8, B15, B22, G1, G9, G18, M26, S1, S8, S18, and S26; see Figures 1 and 2). The recorded room impulse response (RIR) was then convolved with a dry guitar sound, following the steps of auralization. The auralization signals were then used as the acoustic stimuli for the subjective evaluation. In the subjective evaluation, a comparison of 2 sounds in three directions (vertical, horizontal and diagonal) was performed. Every comparison consists of three questions: (1) Which sound is louder? (2) Which sound is clearer? (3) Which sound is more reverberant? The answer options are (a) sound 1; (b) sound 2; (c) no difference perceived. The result of the objective measurement is the percentage difference of three parameters (G, RT, and C80). The result of the subjective evaluation is compared with the parameters obtained from the objective measurements for positions with significant differences.
Objective Measurement
Binaural recording was done at 8 seats (C5, C16, H7, H18, M8, M22, R9, and R22) based on the preliminary result. Audacity was used as the software for recording the RIR, each recording 7 seconds long. The heights of the source (balloon) and the receiving microphone were 1.5 m and 1.2 m, respectively. Microphones mounted on a person's head were used to obtain binaural recordings. The recorded data were saved in .wav format. Before generating the acoustic parameters, the recording file had to be trimmed to delete the early portions of the signal in order to minimize errors in the energy decay curve. In addition, "trimming" was done to reduce the discrepancies between the reverberation time values obtained from measurements using a dodecahedron loudspeaker and a balloon as the sound sources. The acoustic parameters analyzed are G (strength), EDT (early decay time), C80, D50, C50, Ts, IACC, Treble Ratio and Bass Ratio. The room impulse responses (RIR) at the 8 seats were convolved with a dry guitar sound using GratisVolver from CATT-Acoustic. The results of the convolution became the acoustic stimuli for the subjective evaluation (an illustrative processing sketch is given below).
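To illustrate the kind of processing described above, the Python sketch below convolves an RIR with an anechoic (dry) signal and computes C80, D50 and Ts from the impulse response according to the ISO 3382-1 definitions. The file names and the crude direct-sound detection are placeholders for illustration, not the actual measurement files or processing chain of this study.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, rir = wavfile.read("rir_seat_C5.wav")      # hypothetical RIR recording
_, dry = wavfile.read("dry_guitar.wav")        # hypothetical anechoic guitar signal
rir = rir.astype(float)
dry = dry.astype(float)

# Auralization: convolve the dry instrument sound with the measured RIR
stimulus = fftconvolve(dry, rir)
stimulus /= np.max(np.abs(stimulus))           # normalize before playback

# Energy-based parameters from the squared impulse response
onset = np.argmax(np.abs(rir))                 # crude estimate of the direct-sound arrival
h2 = rir[onset:] ** 2
n80 = int(0.080 * fs)                          # 80 ms after the direct sound
n50 = int(0.050 * fs)                          # 50 ms after the direct sound

C80 = 10 * np.log10(np.sum(h2[:n80]) / np.sum(h2[n80:]))        # clarity, dB
D50 = 100 * np.sum(h2[:n50]) / np.sum(h2)                        # definition, %
Ts = 1000 * np.sum(np.arange(len(h2)) / fs * h2) / np.sum(h2)    # centre time, ms

print(f"C80 = {C80:.2f} dB, D50 = {D50:.1f} %, Ts = {Ts:.1f} ms")
```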
Subjective Evaluation
Subjective evaluation was done using a questionnaire with a semantic scale rating from -3 to +3, with zero as the neutral value. Respondents listened to the 8 acoustic stimuli through headphones and gave their assessment based on subjective parameters, which represent the acoustic parameters of the objective measurement. The subjective parameters are pleasant-unpleasant, round-sharp, soft-hard, diffuse-localizable, detached-enveloping, dry-reverberant, treble boosted-treble reduced, bass boosted-bass reduced, and quiet-loud. Respondents were selected based on several criteria: students who have taken acoustics courses, musicians, or those who have attended an indoor live concert.
Results and discussion
The results of the objective measurement are shown in Table 1.
The rating results show that respondents tended to give assessments between -2 and +2. This suggests that the rating scale can actually be reduced to a 5-point scale, from -2 to +2.
Just Noticeable Difference
The objective measurement results in Table 1 were converted to the subjective scale and compared with the subjective evaluation. This comparison yielded the percentage of respondents who gave the same perception as the objective measurement rating. The yellow color represents the maximum percentage of the ratings from respondents in the subjective evaluation. The blue color marks the result of converting the objective measurement. Meanwhile, the green color shows agreement between the objective measurement and the subjective evaluation.
Acoustic Zoning
Regression analysis was used to obtain the parameters which have the highest correlation. The distribution of these parameters is used to predict the location of the acoustically most comfortable seats. The regression analysis shows that the values for G, EDT, Ts, C80, D50, C50, IACC, Treble Ratio and Bass Ratio are 0.42, 0.32, 0.24, 0.06, 0.08, 0.26, 0.23, 0.06 and 0.18, respectively. From this result, only G and EDT have regression values above 0.30. The distributions of G and EDT at low (250 Hz), mid (1000 Hz), and high (4000 Hz) frequencies were analyzed to obtain the acoustic zoning (see Figure 4). The result indicates that the middle seats have the highest listening quality. Hypothetically, these middle seats receive the most sound reflections from all surfaces, which supports the listening quality. Meanwhile, seats in the front rows have fair listening quality, and seats far away from the stage have poor listening quality.
Conclusion
Using the objective measurement, we obtained the ranges of the acoustic parameters G, EDT, C50, C80, D50, Ts, IACC, Treble Ratio, and Bass Ratio: -8.91 to 1.07 dB, 1.12 to 1.52 s, -0.11 to 5.05 dB, -3.70 to 2.03 dB, 31.00 to 60.71, 62.75 to 106.50 ms, 0.43 to 0.82, 0.67 to 0.96, and 0.81 to 2.80, respectively. In addition, based on the regression results for G (0.42) and EDT (0.32), we obtained three acoustic zones (see Figure 4) of different listening quality, where the middle seats are the best, the front seats are intermediate, and the seats farthest from the stage have the worst listening quality.
Self-Noise of the MET Angular Motion Seismic Sensors
Interest in angular motion seismic sensors is generated by an expectation that direct measurement of the rotations associated with seismic signals would allow obtaining more detailed and accurate information from them. Due to the low intensity of seismic signals, the self-noise of the sensors is one of the most crucial parameters characterizing their performance. In seismic applications the molecular-electronic transfer (MET) technology is considered one of the most promising technologies for rotation measurements. In this research we have developed a noise model for the MET angular sensors. The experimental part of the research, which fully agrees with the theoretical data, includes the instrument self-noise measurement in quiet locations. Based on the modelling we have revealed the directions of further research to improve the MET angular sensors' performance.
Introduction
Unlike traditional geophones, angular motion seismic sensors are not sensitive to translational vertical or horizontal motions and generate an output signal only in the presence of ground or structure rotations. Interest in angular motion seismic sensors is stimulated by an expectation that direct measurements of the rotations associated with seismic signals would allow estimating more precisely the response of structures to seismic input, providing more accurate measurement of the spatial distribution of the seismic field, separating modes of seismic waves based on their polarization, and determining site effects [1][2][3].
For seismic applications, the angular motion sensors should be capable of better than 0.1 rad/sec resolution and display low sensitivity to linear motion. Taking into account that angular seismic sensors could be largely used in oil and gas seismic exploration, their compactness, low cost, and low power consumption are essential requirements. Nowadays, among a variety of technologies, the molecular-electronic transfer (MET) technology is likely the only one to offer a reasonably priced commercial product of the required performance [4]. The sensors based on this technology are also known as electrochemical angular motion sensors. Although the MET sensors appeared to be useful for many applications, some of them require significantly better performance than that currently achieved.
In this paper, we concentrate our efforts on the analysis of the MET angular sensors' self-noise. Improvement of the noise characteristics of MET angular sensors is not possible without understanding the physical mechanisms responsible for the self-noise generation. We have experimentally investigated the MET angular sensor self-noise in the range 1-150 Hz, which covers the frequencies most significant for seismic exploration. Additional experiments and analysis of the possible noise sources allowed defining the processes responsible for the sensor self-noise at different frequencies. The noise model has been developed and compared with the experimental data. Finally, methods for the self-noise improvement are suggested.
Instruments.
The critical part of the angular motion sensors is the transducer. The mechanical configuration of the angular motion transducer, based on the MET technology, is presented in Figure 1. The transducer consists of a toroidal channel filled with a highly concentrated iodide-iodine water-based electrolyte. An expansion volume compensates for thermal expansion of the liquid. The sensitive cell placed across the channel converts liquid motion inside the channel into the electrical response. For the commercial sensors the sensitive cell consists of four mesh electrodes sandwiched together with three porous dielectric ceramic spacers [5]. Alternative configurations for sensitive cells have been reported recently [6][7][8][9]. External electrodes (anodes) are connected to a positive potential relative to the internal ones (cathodes). The operating principle is based on the sensitivity of the active ion distribution, and of the currents passing through the electrodes, to the electrolyte motion. Commonly the differential cathodic current is used as the output signal of the MET transducer. The electrodes of the cell are connected to the signal conditioning electronic board, which converts the differential output current from the cell into voltage and shapes the response in the specified operational frequency range.
In our experiments we used the METR-11 (1-150 Hz operational frequency range) sensor manufactured by Rsensors, LLC (http://www.r-sensors.ru/). The instrument used in the tests comprises the ceramic transducer with the electrodes inside, filled with an electrolyte, and the electronic board. The parts are held together by an external case (see the photo in Figure 2). The manufacturer-specified scale factor is K = 50 V/(rad/sec).
The transducer toroid has an external diameter of 50 mm, and the square cross-section of the toroidal channel is 6 × 6 mm in size. The electrodes are made from platinum mesh with cells of 170 × 170 μm and a wire diameter of 45 μm. The dielectric spacers are ∼120 μm thick with 80 round through-holes, each 300 μm in diameter.
The block diagram of the electronic signal conditioning board is shown in Figure 3. The first stage (marked as "stage 1" in Figure 3) is designated for transforming the transducer output current to voltage and for scale factor temperature compensation. The second stage ("stage 2") is responsible for high frequency correction and for additional frequency-dependent temperature compensation. The third stage ("stage 3") contains second-order Butterworth high-pass and low-pass filters with cut-off frequencies of 1 Hz and 150 Hz, correspondingly.
For further analysis we need to know the instrument transfer function, which can be presented as a product of the MET transducer transfer function W trans with the transfer functions of the three stages of the electronic circuitry, W 1st , W 2nd and W 3rd ; this product is Equation (1). The MET transducer transfer function W trans converts rotation rate into the differential cathodic current I diff . According to the data given in [10], W trans behaves differently depending on the frequency range. It grows approximately proportionally to frequency from 0 to approximately 0.1 Hz, then does not depend on frequency up to ∼10 Hz, and goes down as ∼1/f at higher frequencies.
The electronics transfer functions can be calculated analytically or modeled using standard electronic design software. The first stage converts the differential cathodic current I diff into the first-stage output voltage R fb I diff . Here R fb is the equivalent resistance in the feedback of the first-stage operational amplifier. In practice, as can be seen in Figure 3, R fb is made of several temperature-dependent and fixed resistors, connected in series and in parallel, thus compensating the temperature variations of the transducer sensitivity.
The transfer functions of the second and third stages are shaped to achieve a flat response |W inst | ≈ K = 50 V/(rad/sec) of the instrument in the operating frequency range, with low and high cut-off frequencies at 1 and 150 Hz, correspondingly. The calculated frequency behavior of the product |W 2nd W 3rd | is shown in Figure 4 (an illustrative sketch of the stage-3 band shaping is given below).
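As a rough illustration of the band shaping performed by the third stage, the Python sketch below builds second-order Butterworth high-pass (1 Hz) and low-pass (150 Hz) sections and evaluates the magnitude of their combined response. The filter order and cut-off values are taken from the description above; everything else (the frequency sampling and the assumption of unity pass-band gain) is illustrative only.

```python
import numpy as np
from scipy import signal

# Second-order Butterworth sections, analog prototypes (cut-offs given in rad/s)
b_hp, a_hp = signal.butter(2, 2 * np.pi * 1.0, btype="highpass", analog=True)
b_lp, a_lp = signal.butter(2, 2 * np.pi * 150.0, btype="lowpass", analog=True)

f = np.logspace(-1, 3, 400)            # 0.1 Hz ... 1000 Hz
w = 2 * np.pi * f
_, h_hp = signal.freqs(b_hp, a_hp, worN=w)
_, h_lp = signal.freqs(b_lp, a_lp, worN=w)

h_stage3 = h_hp * h_lp                 # combined band-pass shaping of stage 3
for fi in (0.5, 1.0, 10.0, 150.0, 500.0):
    mag = np.interp(fi, f, np.abs(h_stage3))
    print(f"{fi:7.1f} Hz : |W_3rd| = {mag:.3f}")
```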
Experiments and Data Analysis.
In the first set of our experiments we measured the instrument self-noise. For that purpose we placed two METR-11 sensors in the basement on a solid concrete foundation with the sensitivity axis directed vertically upward. The recording was made by a 24-bit digitizer LTR-24 (http://www.lcard.ru/) over the quietest nighttime period. Different sampling rates were tried and the results were compared with the aim of determining an optimal sampling rate at which there is no noise transfer from high frequencies to the frequency range of interest as a result of aliasing.
For each sensor the data processing includes windowing of the recorded signal, with each window 128 seconds in length, calculation of the signal spectrum for each window, and averaging over all of the windows (an illustrative sketch of this processing follows below). The output spectrum has been converted to equivalent angular rate units using the known instrument scale factor K. The resulting curves for one of the tested sensors are presented in Figure 5. By comparison of the curves obtained at different sampling rates we can observe that in the 5-100 Hz frequency range the averaged spectrum at a sampling rate of 400 sps is significantly higher than the one found at 4000 sps. This effect should be attributed to aliasing. In the other experiments we used only the data obtained at 4000 sps.
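The following Python sketch illustrates this processing chain: split the record into 128-second windows, average the spectra, and convert the result from volts to equivalent angular rate with the scale factor K = 50 V/(rad/s). The input file name and the use of Welch averaging with a Hann window are assumptions for illustration, not details stated in the paper.

```python
import numpy as np
from scipy.signal import welch

K = 50.0                                     # instrument scale factor, V/(rad/s)
fs = 4000                                    # sampling rate, samples per second
record = np.load("metr11_night_record.npy")  # hypothetical voltage record, volts

# Averaged power spectral density over 128-second windows (Welch's method)
f, psd_v = welch(record, fs=fs, window="hann", nperseg=128 * fs)

# Convert from V^2/Hz to an equivalent angular-rate amplitude spectral density, (rad/s)/sqrt(Hz)
asd_rate = np.sqrt(psd_v) / K

band = (f >= 1.0) & (f <= 150.0)
print("median self-noise in the 1-150 Hz band: %.2e (rad/s)/sqrt(Hz)" % np.median(asd_rate[band]))
```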
The correlation function was calculated for the signals recorded by the two METR-11 sensors. The correlation is high (up to 0.9) at frequencies corresponding to several peaks in the spectrum observed in the range 40-200 Hz. These peaks should be associated with real seismic signals of artificial nature and for that reason cannot be considered part of the sensors' self-noise. Beyond these peaks the correlation is less than 0.2 and we associate the recorded signals with the instrument self-noise. The solid smooth line in Figure 5 is an approximation of the self-noise curve when only the parts of the signal corresponding to the frequency ranges with low correlation are used for analysis. The resulting self-noise frequency behavior, as presented by the solid smooth line, is discussed below.
In the next set of experiments we measured the electronics self-noise. The transducer was replaced by a constant resistor modelling its electrical impedance. We used 10 Ohm and 100 Ohm resistors. For convenience of comparison with the results presented in Figure 5, the electronic noise has been converted from output voltage to the equivalent angular rate. The resulting noise curves are presented in Figure 6. At frequencies close to 1 Hz the self-noise of the electronics practically does not depend on the resistor value and is ∼6 times lower than the sensor self-noise presented by the solid line in Figure 5. At higher frequencies a 10 times smaller resistor results in significantly higher (almost 10 times) electronics self-noise. At high frequencies the electronics self-noise can therefore be either higher or lower than the measured sensor self-noise, depending on the transducer impedance. So, at low frequencies the transducer is the major source of the self-noise, while at higher frequencies the situation depends on the input impedance value and can be the opposite, with the electronic noise reaching and even exceeding the self-noise produced in the transducer.
Theoretical Model.
Several processes responsible for the self-noise of MET motion sensors have been described in the literature [4, 11-13]. Based on the experimental data presented above we propose the hypothesis that, for the frequency range of interest, the major contributors to the angular sensor self-noise are the convective self-noise investigated in [12] and the self-noise of the signal conditioning electronics. So the total noise can be presented by the following equation:

⟨Ω²⟩ = ⟨Ω²⟩_conv + ⟨Ω²⟩_electronics. (4)

As observed in the experiments presented in [11], the PSD of the convective noise in the transducer output current at frequencies above 1 Hz has approximately ∼1/f behavior. Taking into account, according to [10], that in this range the transducer transfer function W_trans is frequency independent, we obtain the following formula for the equivalent angular-rate PSD:

⟨Ω²⟩_conv = A/f. (5)

Little is known about the dependence of the parameter A on the geometry of the signal converting MET cell, and here it is considered as a fitting parameter of the model.
For the electronic self-noise we suppose that most of the noise is generated in the first stage and results from the voltage noise of the first-stage operational amplifier. With this simplification the electronic self-noise in the instrument operational frequency range is given by formula (6). Here ⟨U²⟩_amp is the voltage noise power spectral density of the first-stage operational amplifier and Z_in is the first-stage input impedance. The values of A in (5) and Z_in in (6) are not known a priori and are considered in the following analysis as fitting parameters.
Using (1) and taking into account that |W_1st| = R_fb, formula (6) can also be presented in the form of formula (7).
Results and Discussions
Let us substitute the following values into (6): R_in = Z_in = 40 Ohm; R_fb = 1 kOhm; K = 50 V/(rad/sec); √⟨U²⟩_amp ≈ 20 nV/√Hz [14]. The spectrum of the electronic self-noise is presented in Figure 7 as a solid black line. The experimental behavior of the electronic noise is in good agreement with the theoretical calculations over the whole range of interest. Some discrepancy is observed at lower frequencies, which is probably the result of the simplifications made when deriving formula (6). Nevertheless, as follows from the experimental data, this difference is not influential, since the low-frequency electronic noise is only a minor contributor to the total noise of the instrument.
Finally, let us put A = 5.5 · 10^7 (rad/sec)² in (5) and substitute ⟨Ω²⟩_conv and ⟨Ω²⟩_electronics from (5) and (6) into formula (4). The resultant spectrum is presented in Figure 8 as a solid black line. As discussed above, the peaks on the experimental curve result from real seismic signals and should not be considered part of the instrument self-noise. Taking this into account, we can conclude that the theoretical (black line) and experimental (blue line) curves are in good agreement.
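A sketch of how the two-term model can be evaluated and compared with the measured spectrum is shown below. The 1/f form of the convective term and the fitted value quoted above are taken from the text; the electronics term is passed in as a measured (or assumed) array, because its closed form is not reproduced here, and the variable measured_electronics_psd is a hypothetical placeholder for the curve of Figure 6.

import numpy as np

def total_self_noise_psd(freq_hz, a_conv, electronics_psd):
    # Formula (4): convective term a_conv / f plus the electronics term,
    # both expressed as equivalent angular-rate PSD.
    f = np.asarray(freq_hz, dtype=float)
    return a_conv / f + electronics_psd

# freq = np.logspace(0, np.log10(150), 200)   # 1 ... 150 Hz
# model = total_self_noise_psd(freq, a_conv=5.5e7,
#                              electronics_psd=measured_electronics_psd)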
Conclusions
The model of the MET angular motion sensor self-noise based on the assumption that two physical mechanisms are responsible for the instrument self-noise agrees with the experimental data. These two sources are the hydrodynamic convection (first term in (4)) and the amplifier self-noise (second term in (4)). The noise produced by the convection is dominant at low frequencies and then falls off as ∼1/f, while the electronic noise prevails at high frequencies.
The self-noise at low frequencies could be reduced by modification of the transducer. According to an earlier analysis [12], this kind of noise can be decreased by using a sensitive MET cell geometry characterized by a lower Rayleigh number. At high frequencies, as follows from formula (7), the improvement could be achieved by using less noisy operational amplifiers in the first stage. The transducer improvements should be directed toward higher sensitivity |W_trans| and higher output impedance |Z_in|.
Figure 1: The MET angular sensor mechanical configuration.
Figure 2: View of the angular seismic sensor used in the experiments.
Figure 3: Block diagram of the signal conditioning board.
Figure 5: Spectra of the METR-11 sensor output signal. The data were recorded at quiet nighttime. Different colors correspond to different sampling rates. Red: 400 sps; blue: 4000 sps.
Figure 6: Self-noise of the signal conditioning electronics with different input resistors. Red line: R_in = 10 Ohm; blue line: R_in = 100 Ohm.
Figure 7: Modelling of the electronic board self-noise (blue curve: experiment, red curve: theoretical approximation).
Global Mass Society
Economic globalization has resulted in corporations, unaccountable to states, making key decisions within an otherwise anarchic world order, rendering normal democratic functioning almost impossible. Global gridlock has resulted from the same issues that plague democracies today. Although transnational civil society has tried to achieve a degree of democratic global governance, the result mostly has been to reinforce the global power structure.
The aim of the present chapter is to identify where the entities of global civil society seek to democratize how decisions are made, especially within major global institutions.
The reason for the need for global democracy is that global harm occurs daily. Nonelite masses and the planetary ecosystem remain largely unprotected. Some entities within global civil society protest against existing and future dangers, demanding action. But no single world government now establishes policies and takes remedial action. And major problems of the global economy, environment, and violence are not being satisfactorily addressed, due to what Thomas Hale, David Held, and Kevin Young have identified in their book Gridlock: Why Global Cooperation Is Failing When We Need It Most (2013): even though many problems are almost too complex to be tackled, the authors blame entrenched global interests that will not relinquish their dominant roles to work cooperatively on behalf of the global masses. For example, the UN Security Council is deadlocked on most matters of global security, trade negotiations stopped with the Doha Round, and global warming imperils the planet almost unchecked.
Nevertheless, what has emerged from the shadows is something known as "global governance" 1 in which various public and private entities are cooperating to regularize interaction through voluntary consensus standard-setting (Murphy 2014: 217), with the prospect that low-level collaboration may rise to higher levels. They provide government-like services and regulations for planetary problems, some of which respond to the needs of the global masses (Weiss 2009: 257; cf. Lederer and Müller 2005; Cabrera 2011). Global governance operates by gaining transnational acceptance of norms and rules (Rosenau 1992: 4). Lacking a confederation or hierarchical federation of nation-states, the civil society within global governance may be the only hope to achieve global democracy (Archibugi, Archibugi, Marchetti 2012).
Clearly, the concept of "global governance" is an alternative to the "end of history" narrative that envisaged the United States pursuing hegemonic leadership in establishing a world of democracies (Fukuyama 1992). European scholars have been the most fervent students of global governance, while some American scholars still imagine that the world leadership of the United States is "indispensable" (e.g., Nye 1990).
The term "global governance" arose in 1992, when UN Secretary-General Boutros Boutros-Ghali supported Ingvar Carlsson and Shridath Ramphal in forming the Commission on Global Governance. 2 Three years later, the commission issued Our Global Neighborhood, a report that urged increased formal and informal cooperative arrangements rather than giving more power to the UN or promoting world federalism. That same year the journal Global Governance began publication.
Global governance operates within various issue-areas in the form of "regimes." For example, there are environmental, human rights, and many other global regimes wherein efforts to forge agreement on common norms now take place. Thus, global governance has gained increasing recognition to describe multiple regimes of global cooperation throughout the world (Avant, Finnemore, Sell 2010: 6; Johnson and Tallberg 2010). James Rosenau and Ernest Czempiel launched the academic concept of global governance in their Governance Without Government (1992). 3 One comprehensive definition of "global governance" is as follows: the complex of formal and informal institutions, mechanisms, relationships, and processes between and among states, markets, citizens and organizations, both inter- and non-governmental, through which collective interests on the global plane are articulated. Duties, obligations, and privileges are established, and differences are mediated through educated professionals. (Weiss and Thakur 2010) Although intergovernmental organizations (IGOs) tend to be the lead members of many regimes of global governance, international nongovernmental organizations (INGOs) constitute the global civil society that has been vital in global rule-making.
To analyze the problem of global mass society within global governance, the same categories used in the two previous chapters will be applied to identify efforts that may enable the needs and will of ordinary people to prevail. The place to start is to identify members of the global order.
Members of Global Society
The masses throughout the world have long been members of global society. They have been controlled and occasionally assisted by several types of global institutions. Global society today consists of "multilayered networks of variously aligned transnational forces" (Falk 1999: 102).
Empires
One unit of global society has been the empire, a regional body. There have been regional mass societies across the planet for millennia, with decisions made by regional elites far from the masses. Formerly, the elites were rulers of such empires as Persia and Rome, which sought regional domains of the world that their armies could control and occupy (Bozeman 1994). City-states tried to avoid being incorporated into empires but ultimately were drawn into imperial domains. After Spain sought gold in far-off lands, financing the voyages of Christopher Columbus from 1492 to 1502, his Viceroy of the Indies title signaled the beginning of the Spanish empire. England, France, Holland, and Portugal soon sought to pepper the world with imperial conquests. In 1494, Lisbon increased the volume of global trade astronomically as slaves began to be extracted from Africa and sold to imperial powers to provide labor in the New World.
As the successor to the Roman Empire, the Holy Roman Empire was initially a useful entity for the Catholic Church to aggregate prototypic European states, but it was not an empire-nor, for that matter, was it holy or Roman. One of the imperatives of the United Nations after World War II was decolonization, and success came during the next three decades. A few remnants of empires remain wherever peoples are governed from abroad-undemocratically, in many cases.
Nation-States
The Peace of Westphalia, with 140 imperial states and 27 interest groups negotiating from 1643 to 1648, established the nation-state as the basic unit of global society. The result was to divide the globe into state members in Europe and nonstates outside Europe awaiting recognition from Europe. The most prominent states involved in negotiations were the Dutch Republic, France, Holy Roman Empire, Holy See, Sweden, Switzerland, and Venice. Most of those outside Europe that were not recognized became colonies, subject to the "right of conquest" (cf. Brenner 2016). Within the nation-state system, major powers tend to dominate, and only democracies have the interests of the people in mind.
Nonstate Entities
Some members of the global system have never been allowed membership in the nation-state system-notably, indigenous and nomadic peoples. Terrorist groups, sometimes considered as pirates, also qualify as nonstate members of global society, lacking recognition by the nation-state system. Viking pirates roamed about, settling in cities such as Dublin, and in due course constituted such states as Iceland and Ireland (Denemark 2017).
The Islamic State of Iraq and Syria (ISIS) once operated as if it were a state, occupying territories for a time after seizing them militarily. Western propaganda identified such groups as "terrorists" rather than conducting negotiations with the groups regarding their grievances; they were never recognized as members of the nation-state system (Beyer 2010).
Some Somali groups have been policing the 200-mile exclusive economic zone of their country against intruders from elsewhere (Grell-Brisk 2017). But those charged with piracy seize people and property on board vessels inside or outside of that zone.
Several groups with stable territories are also not accepted as nation-states today. For example, the Palestinian Authority operates as a state but is not recognized sufficiently to be treated as a normal member of the nation-state system. Other de facto states vying for acceptance include Abkhazia, Adzharskaya, Artsakh, Kosovo, Republic of China (Taiwan), Sahrawi, Somaliland, South Ossetia, Transnistria, the Turkish Republic of Northern Cyprus, and several others (Florea 2017). Paradoxically, Cambodia was represented in the UN by the Khmer Rouge, which occupied a portion of the Thai border from 1979 to 1991, because Western countries refused to recognize the government in Phnom Penh on the pretext that Vietnam exercised sovereignty over the country (Haas 1991). The masses in nonstate entities are at the mercy of the major powers.
Transnational Corporations
In 1602, the first transnational business was formed: the Dutch East India Company. TNCs, sometimes called multinational corporations, today number in the tens of thousands (Detomasi 2006: 226). Several have economic resources exceeding those of small or even medium-sized nation-states. Insofar as TNCs are in charge of the world economy (Mikler 2017), they have "dramatically altered the balance of bargaining power between states and firms" (Detomasi 2006: 227). Because such giant corporations are more interested in marketing goods produced within a variety of nation-states, they are often inattentive to labor conditions, especially the dangers of sweatshops (Stolle and Micheletti 2013: 17, ch. 6).
Today, many countries have privatized functions that were formerly performed by government, so people-oriented regulations have been relaxed, thereby empowering businesses. The process began in the Reagan-Thatcher era but accelerated after the end of the Cold War (Avant, Finnemore, Sell 2010: 5-6). TNCs, thus, operate with little democratic accountability. Some establish private transnational regimes that merely standardize industry practices, as discussed below, but others form cartels that limit competition. Someday, the out-of-control TNCs might be controlled by democratic global governance. But not today.
International Conferences
Regardless of sponsorship, an international conference is a global event that may have wide ramifications. From 1850 to 1913, more than one hundred international congresses were held (Keane 2009: 771). Among multilateral summits with developments in important issue-areas were the Geneva conferences of 1863 and 1864, and the Hague peace conferences of 1899, 1907, and 1929, all of which dealt with war crimes. The initial Hague conferences have been identified as providing the first verbal evidence of the concept of international society (Roshchin 2017: 190). Peace activists were actively engaged at the Geneva and Hague conferences.
Those attending the second Hague conference agreed that the same body should continue to meet at eight-year intervals. When World War I broke out, however, governments failed to organize such a body for 1915. Instead, Dutch suffragette Aletta Jacobs convened a meeting of nongovernmental organization (NGO) representatives and peace activists that year. Known as the International Congress of Women, the meeting was attended by 1,200 persons from twelve countries. The goal was to end the war and formulate principles for the future international political world. Many of their principles were later incorporated into Woodrow Wilson's Fourteen Points.
Conferences also set up new intergovernmental organizations, including the International Telegraph Union (1865), the Universal Postal Union (1874), the International Sanitary Bureau (1902), and the International Office of Public Health (1907). The latter two were merged into the League of Nations Health Organization.
Other international bodies during the nineteenth century met to discuss such issues as fishing zones, the opium trade, and submarine cables. Then came the Versailles Conference of 1919, which redrew some national boundaries and adopted provisions for the League of Nations, an organization discussed below.
The nine-power Washington Naval Conference of 1921-1922 was an arms control conference. The most important contemporary conference, held in San Francisco during 1945, drew up the United Nations Charter.
Although governments have been officially represented in international conferences, once again the major powers are dominant in many such bodies.
The number of international conferences has mushroomed, thanks to jet travel beginning in the 1960s. Many have been sponsored by the United Nations, which in turn has spawned new intergovernmental organizations.
Intergovernmental Organizations
Today, the nation-state system seems increasingly passé, as global decisions are made by entities that transcend states. Although there were some precursors, such as the Hanseatic League, the first IGO with multicountry membership was the International Telegraph Union, which was established with the objective of standardizing elements of the newly invented telegraph.
The League of Nations and the United Nations are the most famous IGOs with universal membership, though the United States did not join the League. There are many other IGOs, including the International Criminal Police Organization (Interpol) and the International Atomic Energy Agency. Regional organizations are also global actors, including the African Union, the Association of South-East Asian Nations, the League of Arab States, the Organization for Security and Co-operation in Europe, the Organization of American States, and the Pacific Forum.
Most IGO decisions require agreement among nation-state member countries, some of which may block action. A few IGOs, however, are supranational-that is, they have the power to make decisions that nation-states cannot veto. The European Union, the International Criminal Court, and the World Trade Organization have supranational power.
The International Monetary Fund (IMF) often makes demands on states when their lack of capital to pay back loans leaves them no alternative but to accept IMF direction as under a supranational authority. Governments may have to comply, but their people suffer the most. Many INGOs have been formed for purely cultural/religious or economic purposes. An example of the latter is the International Grains Council, which seeks to lower costs within the global economy (Kindleberger 1983; Spruyt 2001) and thereby to increase predictability in regard to the quality and quantity of goods exchanged (Prakash and Potoski 2006; Busch 2011; Büthe and Mattli 2011).
Nongovernmental Organizations
Epistemic Communities
Nonelite experts within professional associations constitute what are called "epistemic communities," which can forge unity of opinion on such technical matters as global warming (cf. P. Haas 1990; Toke 1999; Meijerink 2005; Pak 2013). Such communities have the following unifying characteristics: ideological consensus, shared causal beliefs derived from empirical analysis, shared notions of how to validate research findings, and agreement on practices that will advance human welfare. In other words, they consist of experts.
One type of epistemic community is the transnational advocacy network (TAN), which is an increasingly vital aspect of the quest for democratic global governance (De Mars 2005; Tarrow 2005). TANs identify problems needing attention and carry out campaigns demanding that the issues raised become a part of INGO agendas (Keck and Sikkink 1998; Carpenter 2007). For example, Henri Dunant first formed a TAN and then convened the Geneva Conference of 1864 on war crimes. All three Hague conferences were attended by government representatives as well as TAN epistemic communities, which had the expertise to formulate the texts of the resulting conference agreements. Transformational decisions are more likely when an epistemic community of experts also engages in advocacy.

Transnational Regimes
IGOs and INGOs cooperate in what are known as "regimes" within specific issue-areas, usually as partnerships. When IGOs encounter conflict between nation-states, INGO-led regimes might take their place.
Private Transnational Networks
The first attempt to establish a regime was in 1603, when Britain tried to stop trade in tobacco, considered a dangerous substance, but the effort was a failure. The anti-alcohol regime founded by the Brussels General Act of 1889-1890 and the anti-opium trade effort sponsored by the United States in 1908 were also failures. The latter folded into operations of the League of Nations and later the UN's focus on the drug trade. A separate War on Drugs by the United States began in 1969 as a regime involving Colombia and México. In most cases, governments have tried to stop the operations of nongovernmental drug organizations run by mafias.
The first successful global regime, which sought to abolish slavery, was formed by NGOs, starting with the Society for the Abolition of the Slave Trade, founded in 1787 by Thomas Clarkson and Granville Sharp. Because of interference in world trade, London in 1856 also sought to establish a global regime to combat piracy (but not privateering, a government contract to attack merchant ships during wartime).
Also in 1856, Édouard Ducpétiaux organized the first international conference on conditions of labor at Brussels; thanks to Karl Marx, the movement bloomed as the First International in 1866. Two "internationals" followed in 1889 and 1919. All three sought to establish a global labor regime but were limited by opposition to their ideological convictions. More successful has been the International Labor Organization, an IGO that was formed to establish a regime for humane conditions for workers composed of representatives from businesses, governments, and labor organizations.
Starting with the International Telegraph Union (ITU) in 1865, what were called "public international unions" established regimes consisting of nation-states seeking to regularize interaction for mutual benefit. The ITU was initially an IGO as well as a regime.
Several more extensive regimes exist in specific issue-areas. After the journal International Organization launched the discovery of international regimes in 1974 (cf. Keohane and Nye 1974), special issues followed, with several essays on oceans (Spring 1977) and food (Summer 1978). Later, individual essays appeared on regimes relating to security (Henderson 1982;Jervis 1982), the environment (Young 1989), and human rights (Moravcsik 2000).
In some cases, the form of regime cooperation is organizational, whether regional or international, but regime analysis also involves searching for informal methods of cooperation within global governance (Young 1989). Principled norm convergence relevant to an issue-area has been identified as the key; the mere articulation of principles has been insufficient (Coleman and Gabler 2002;Conca, Wu, Mei 2006). Regimes built from INGO and TAN origins are more likely to carry out the wishes of the global masses.
Implications
In short, there are many types of entities that act legitimately or otherwise as members of global society, but there is no central place to make decisions with global impact on behalf of the global masses (cf. Carver 2011; Carver and Bartelson 2011). Amid such complexity and confusion, the question remains whether the desires and needs of the peoples around the world are being addressed or ignored, and efforts involving INGOs and TANs are the most likely to increase the level of global democracy. But such efforts must confront those who are really in charge: the global elites.
Global Power Structure
Who are the global elites? During the imperial era, the elites were regional, not global. Before the early nation-state era, leaders of cities and provinces met together in the Diet of the Holy Roman Empire from 777. The first regional governance for Europe came in 1643 to 1648, when governments adopted the two treaties known as the Peace of Westphalia. The next major multistate conclave was the Congress of Vienna in 1814 and 1815, when leaders of the four countries that had defeated Napoléon Bonaparte (Austria, Britain, Prussia, and Russia) met with the post-Napoleonic French leader and agreed to suppress nationalist efforts to break away from empires within Europe. Calling themselves the Concert of Europe, they also redrew the map of Europe by shifting territories around, declared that unilateral violation of a treaty was the equivalent of a war crime, opposed slavery, and recognized the rights of Polish minorities in Austria, Prussia, and Russia (rather than an independent Poland within those territories). But democratic developments in Britain and France ultimately ended that era of elite global governance by 1823 (Mitzen 2013).
Following the Crimean War, the Paris Conference of 1856 was attended by the same countries as at Vienna with the addition of the Ottoman Empire and independent Sardinia. The aim was to restore prewar boundaries, recognize the integrity of the Ottoman state and the autonomy of other states that later became Romania and Serbia, protect Christians in the Ottoman Empire, and demilitarize the Black Sea. Another European summit, held in Berlin in 1878 and 1884-1885, decided which countries in Europe would colonize the remaining parts of Africa, of course without consulting Africans.
Perhaps the most famous elite summit of the twentieth century was the Munich Conference of 1938, attended by government heads of Britain, France, Germany, and Italy. Neville Chamberlain, whose country was not prepared for a war with Germany, thought he could mitigate conflict by allowing Adolf Hitler's Germany to occupy a German-speaking part of democratic Czechoslovakia on the basis of the Versailles principle of self-determination of peoples. But five months later, the Wehrmacht swallowed up the entire country.
During World War II, elite summits regarding the conduct of the war and the postwar world were held, the most famous at Potsdam, Tehran, and Yalta. The aim, hardly democratic, was for Britain, the Soviet Union, and the United States to design plans for the postwar world.
Since the dawn of the jet age, bilateral and multilateral summit conferences have flourished. During the Cold War, leaders of the two superpowers often trampled on the needs of the world masses. Although the Kennedy-Khrushchëv summit of 1961 was unproductive, the Reagan-Gorbachev summit of 1986 reached an important agreement on limiting nuclear weapons.
Today, the United States government has been accused of being the covert and overt "ruler of the world" (Chomsky 2016) while alternatively praised as making sacrifices to provide global leadership but reluctant to be too hegemonic (Nye 1990; Ikenberry 2007, 2017; Kroenig 2017; Miscik 2017). As the lone superpower, Washington wanted to prevail in Afghanistan, Iraq, Vietnam, and elsewhere, but has learned the limits to such an ambition while still dominating the IMF and the World Bank. Today, with the abdication of global leadership under the presidency of Donald Trump, Germany has emerged as a key player among nation-states.
Recently, leaders of China and Russia have been throwing their weight around, preventing the establishment of a global security regime (cf. K. Roth 2016; Z. Roth 2016). They have sought to form an alternative elite summit, known as the BRICS (Brazil, Russia, India, China, South Africa). BRICS countries formed the Shanghai-based New Development Bank in 2014 as a rival to the World Bank. China's launching of the Beijing-based Asian Infrastructure Investment Bank (AIIB) in 2016 was a significant effort to establish a rival to the Asian Development Bank and the World Bank. Britain and forty other countries joined the AIIB as founding members, and the IMF is cooperating (BBC 2015), but Washington has refused to join. The Chinese yuan has become a rival to the dollar as the primary international currency (Holmes 2015).
There is also a global elite consisting of certain powerful persons, who are described as follows: "They set agendas, they establish boundaries and limits for action, they certify, they offer salvation, they guarantee contracts, and they provide order and security" (Hall and Biersteker 2002: 4-5).
One enumeration of "they" consists of executives in TNCs, INGOs, transnational religious movements, mafias, and mercenary armies (ibid.: 4). Within major corporations there is a distinct "transnational managerial class," which hops from one TNC to another (Cox and Sinclair 1996: 111). For Leslie Sklair (2001), global power is held by four types of persons: TNC executives, pro-globalization national bureaucrats and politicians, pro-globalization experts, and executives of media and merchants who sell their products globally. Andreas Paulus (2005) also identifies international lawyers as among the global elites. Yet another formulation is that transnational elites consist of TNC executives, top international civil servants, international judges, transnational media executives, and high-priced international lawyers (Kauppi and Madsen 2014; Sending 2014).
The Club of Rome, established in 1968, is an example of a group of elites who met together to decide the fate of the world. Consisting of economists, educators, industrialists, and national and international civil servants, they produced the famous book The Limits to Growth (Meadows et al. 1972). The main impact was the decision to form the UN Environmental Program.
According to Didier Bigo (2016), national civil servants who attend international conferences form transnational networks that constitute "guilds," a term that may be applied to transnational elites as well. As such, they constitute "double agents," loyal to their own countries but also loyal to transnational professional standards that they share in common (Dezalay and Garth 2002, 2011).
Today, those with the most power to control the global system-the elite "superclass"-often operate in the shadows with a level of unaccountability because of their invisibility (Tsingou 2014: 341). Attendees at Bilderberg and Davos conferences, for example, include development economists and investors who mingle with government leaders and others, presumably to establish priorities (Sklair 2001; Huntington 2005; Rothkopf 2008; Kauppi and Madsen 2014; Tsingou 2014; Easterly 2015).
The ultrarich are largely unaccountable, as they accumulate capital, gaining a higher rate of return on investments than the rise in the standard of living of the masses (Piketty 2014). As Mancur Olson (1982) once argued, a few hundred super-rich individuals can get what they want more effectively than millions of masses scattered around the world.
Some billionaires have used their wealth to finance humanitarian projects but not to undo the power structure. George Soros's Open Society Foundations have provided funds for attorneys to aid the Roma people and sex workers, to support gay marriage, and for other efforts to empower civil society in more than one hundred countries (Koppell 2010: 245; Open Society Foundations 2017). However, other billionaires have plowed money into the economies of dictators, enabling them to exploit their people (Easterly 2015).
Global economic health has been the province of a select group of countries without INGO input: In 1975, the G-6 (Britain, France, Germany, Italy, Japan, and the United States) assumed that role, though the forum expanded to the G-7 (adding Canada) in 1976. Although the group became the G-8 in 1999 when Russia joined, the group reverted to the G-7 in 2014 after Moscow was suspended for annexing Crimea. (Russia formally withdrew in 2016.) The European Union has been represented but has not been counted as a "G." In response to the economic crisis in several Asian countries during 1997/98, a group of about ten countries and several INGOs met to design safeguards (Jönsson 2011). In 1999, central bank governors founded the G-20, adding middle powers to the G-8 countries. The G-20 has also included government heads since 2008.
In 2008, when the world financial crisis began, the invisible corporate globalizers were unmasked for their unaccountability. A network analysis of the "new global rules" of the superclass was begun but stopped short of full disclosure (Büthe and Mattli 2011; cf. Slaughter 2006; Maoz 2011).
Such visible globalizers as the IMF and the World Bank, with power over nation-states, also qualify for elite status (Woods 2006). Major Western powers making decisions about exchange rates, pollution, tariffs, and other matters usually ignore the will of the people, a classic case of mass society politics. They do so to maintain their economic dominance over developing countries that are major economic competitors, producing manufactured goods at lower prices with government subsidies and other deviations from the norms of Western capitalism (Halabi 2004: 34).
Global elites today are mostly unchecked, accounting for inequality, both within countries and by dividing the industrial North from the developing South (cf. Hurrell and Wood 1999; Michael 2005; Nooruddin and Simmons 2009). Daniel Cohen (2006), however, has argued that the poor around the world have failed to experience the prosperity enjoyed by the supercapitalists because of neglect more than exploitation (cf. Choi, Murphy, Caro 2004; Dowlah 2004). Poorer countries are not being accommodated by richer governments (Held and Rogers 2013: 6).
Superelites may be difficult to identify, but global nonelites are not. Larger countries treat atoll-based island republics in the South Pacific as inconsequential while polluting the air, causing the sea level to rise, almost inundating the low-lying countries (Davenport 2015). Indigenous peoples are trapped inside larger countries and often treated as if they lost all rights when they were conquered (Keal 2003). Ethnic minorities and women also suffer dependent status in many parts of the world. Consumers and small businesses are at the mercy of TNCs and governmental rent-seeking regulations.
Terrorists are reacting to global political inequality. Within Arabic-speaking countries, the gap between authoritarian rulers and the ruled without effective intermediate institutions accounts not only for the Arab Spring of 2010-2012 but also for the rise of some international terrorist organizations whose leaders have correctly reasoned that despised authoritarian regimes in the Middle East are financed by Western powers (Lipschutz 2008; Brenner 2016). Terrorists then seek to assert their power by inviting nonelites to express frustration over unmet demands to governmental elites, even though their aims and methods are illegitimate under international law.
In Western European countries, inequality has not brought the masses together but instead has fed the terrorist narrative within poor urban Muslim communities (Roy 2015) and the populist narrative in industrial countries (Bicha 1976; Gates 2000; Tomasky 2014; Müller 2016). Democracies have not been able to provide employment for the poor and declining middle class, fueling xenophobic policies. Discriminatory policies have in turn stimulated jihadism (Kepel 2017). When arrested for economic crimes, prisoners are introduced to jihadist and xenophobic networks in prison.
In other words, there is a global crisis of legitimacy. Helmut Breitmeier (2008: 204) suggests that legitimacy will be withheld until global governance actually works for ordinary persons, but that depends on cooperation between institutional structures, nation-states, and participation by global civil society. Legitimacy within global governance is fundamentally a matter of meeting the expectations of the global masses, which are identified next.
Global Public Opinion
Despite divergent cultural trends, public opinion throughout the world demonstrates several trends, as measured by the Pew Global Research Attitudes Project. The greatest dangers perceived around the globe are environmental problems and the spread of nuclear weapons. Environmental catastrophe is the danger most cited by Latin Americans. Africans are most concerned about AIDS and other infectious diseases. Inequality is on top among Europeans. Middle Easterners most fear ethno-religious hatred (Pew 2014b; Carle 2015).
Some 78 percent of respondents in a global survey from 2014 agreed with the statement "In the future, renewable energy sources will be able to fully replace fossil fuels." Regarding economic inequality, all agree that the wealth gap has increased (Simmons 2013), and most agree that the economic system favors the wealthy (Pew 2013). Those in developing countries express the highest levels of economic anxiety.
In addition, Pew found that the strongest supporters of foreign investment and trade are people in emerging markets (Pew 2014a). Less than half of the people in developed countries favor globalization; developing countries are also skeptical (Hu and Spence 2017: 55-56). Whereas people around the world are very unhappy about their own country's economies, most global publics say that their personal finances are in better shape than their government's (Pew 2013).
In short, the people seek solutions to problems about security (economic, environmental, ethnic, religious). Yet the masses believe they are at the mercy of the rich, who in turn employ the well-known divide-andconquer strategy by focusing the attention of the masses on noneconomic matters, such as divisive identity politics (Kuran 2017; Scheidel 2017).
Lone voices of prominent advocacy scholars often speak on behalf of world public opinion. For example, political scientist Richard Falk (1999: 2) took the lead in decrying the neoliberal Washington Consensus that unleashed what he called "predatory globalization" and which he equated with "liberalization, privatization, minimizing economic regulation, rolling back welfare, reducing expenditures on public goods, tightening fiscal discipline, favoring freer flows of capital, strict control of organized labor, tax reductions, and unrestricted currency repatriation." Falk has therefore called upon nation-states and transnational social movements to fight the forces of TNCs in the global marketplace.
The idea that the world might have democratic global governance-or even democratic decision-making in IGOs-is something that was unfathomable to Robert Dahl (1999) because of the powerlessness of global masses in relation to the global or IGO power structure. For the problems identified by the global masses to be satisfactorily addressed, what is needed is a new way of thinking about global issues: development of a democratic global culture.
Global Cultures
Instead of a culture of global democracy, much of world history has involved attempts to impose a dominant culture: The Mongol, Persian, and Roman imperial cultures tried to gain widespread acceptance but ultimately failed (Brenner 2016). Today, efforts to make American culture the standard for the world are bitterly resented in many quarters. Nevertheless, global society needs a common language (Howard 2004), which clearly is English. But the content of a global culture is in dispute.
Two nonimperial cultural strains professing universalistic applicability began nearly two millennia ago. One is the quest to establish a caliphate that would unite all Muslims under the culture embodied within sharia law, though the vague term "sharia" is mentioned only once in the Quran (Wills 2017). The Abbasid dynasty centered in Baghdad was one effort, lasting from 750 to 1258. The Fatimid dynasty in Cairo ruled from 969 to 1171, making important contributions in the arts, philosophy, and science; eighteen caliphs were allowed to provide religious education in Cairo from 1260 to 1517, when the last caliph was taken to Istanbul, where the sultan claimed to be the heir to the Abbasids (Kennedy 2016). Nevertheless, the split between Shiite, Sunni, and other versions of Islam frustrates those seeking consensus to form a new caliphate.
Christianity, as interpreted by the Catholic Church in Rome, has been more successful in establishing a global culture. Although the pope today primarily has religious authority over the conduct of parishioners around the world, the church also has had secular influence, legitimating the Crusades and establishing the Augustinian concept of "just war." In 697, several Irish priests proclaimed the Cáin Adomnáin, authorizing the death penalty for anyone killing a woman in wartime as well as other penalties for slaying clerics, clerical students, and peasants on clerical land. In 989, six French bishops at the Synod of Charroux declared the Peace of God (Pax Dei), advancing the law of warfare to prescribe immunity for children, clergy, merchants, peasants, and women. In 1026, the Truce of God (Treuga Dei) banned war on Sundays; the doctrine was later expanded to cover all religious holidays, including periods of Lent and Fridays. The Second Lateran Council in 1139 issued Canon 29, banning the crossbow in war. The concept of just war was further developed by Thomas Aquinas in the thirteenth century. But Protestantism contested the content of Catholic Christianity.
When Portugal and Spain made rival claims over South American territory, popes issued papal decrees (bulls). The first, in 1494, gave more territory to Spain, whereupon Portugal renegotiated with the Vatican, and a papal line gave Brazil to Portugal while most of the rest of South America went to Spain. The pope, a global elite, played a legitimating role for the two empires while totally neglecting the interests of the indigenous peoples, many of whom were subsequently slaughtered.
The Peace of Westphalia of 1648 tried to replace imperial cultures with a set of norms embodied in what became the culture of international law, as especially promoted by Hugo Grotius (1609, 1625). Initially, nationstates were to respect the sovereignty of every other country, including the principle of noninterference in internal affairs: Countries were expected to exercise power only inside their borders unless their citizens were mistreated abroad. International law then grew as international custom was incorporated along with new provisions in international treaties. 4 Nevertheless, governments violated the norm of noninterference by going to war. A new culture arose, known as the balance of power-that no single country should be allowed to dominate Europe or the globe. Enforcement responsibility was left to major powers. The new global culture, thus, consisted of both international law and the obligation of major powers to avoid imperialistic takeovers by practicing balance-of-power realpolitik, otherwise known as "realism." During the nineteenth century, new principles emerged and were encoded into the culture of international law: Slavery was gradually abolished. The excesses of warfare were banned by international treaties adopted in Geneva and The Hague. The twentieth century brought more norms-those embodied in the Covenant of the League of Nations, the Geneva Conventions, and the United Nations Charter. Principles of civil, political, economic, and human rights were recognized in treaties during the late 1960s.
Although often attributed to Immanuel Kant (1795), a culture of cosmopolitanism was encouraged after World War II as a way to suppress nationalism (E. Haas 1958), the culprit in many wars. Decolonization after World War II finally allowed the nation-state international organizations of the planet to triumph over the global imperialist vision. Nevertheless, economic neocolonialization then began in the newly independent countries, which had been shaped to follow the same type of top-down rule and to be economically dependent on the former "mother" country (Hardt and Negri 2000).
But there was also a Cold War, in which three cultural strains emerged-a First World preferring capitalism, a Second World advocating socialism, and neutralism within the Third World countries seeking to develop their own economies and polities without outside interference and, for a time, hoping unsuccessfully to gain approval for a New International Economic Order.
The end of the Cold War suddenly produced a single global society with a lone superpower. The United States then flirted with establishing a "New World Order" based on a self-congratulatory "bound to lead" imperative (Nye 1990) that was premised on military hegemony-a Cold War goal, not a new governing philosophy. The exceptionalism of the United States even resulted in more than 200 types of war crimes being committed in Afghanistan, Guantánamo Bay, and Iraq (Haas 2009), so Washington lost considerable moral authority.
Meanwhile, the dominance of TNCs in the world economy has definitely established a culture of consumerism on three levels (Hardt and Negri 2000): (1) World communications are the province of the global media and the Internet. (2) Neoliberalism is now the economic dogma.
(3) And films, sports, and even clothing offer a consumerist perspective that enables the corporate power structure to dominate the globe. For example, the World Trade Organization (WTO) was founded in 1994 as a supranational organization to install and police a "neoliberal order" that would provide an arena for settling trade disputes and a forum for the gradual reduction of barriers to trade.
But another cultural perspective has emerged among opponents of globalization, who rely on two basic theses-anti-Westernization and subalternism (cf. Day 2005; Cohen 2006). Anti-Westernization inspired the "clash of civilizations" thesis (Huntington 1996). Subalternism is the view that the global masses need to rebel against the deleterious effects of the global class struggle to shift the power structure toward more equality; they count on activist forms of global civil society (Kenny 2003: 120-29). But there may be a middle ground among those who want to tame globalization. Whereas the focus of international politics during the Cold War was almost exclusively military, what instead arose after the Soviet Union collapsed were advocates of the global norms of democracy, environmental conservation, human rights, peaceful international relations, and prosperity through interdependent transnational capitalism-a communitarian culture of cosmopolitan democracy based on global civil society, with a rule of law at the global level based on cooperation instead of military coercion (Haas 2014b: 172-73, 2014c; cf. Krasner 1982, 1983; Fukuyama 1992; Ikenberry 2017). Europe had been gradually developing cosmopolitan democratic standards for a common market that extended into social matters, governed by the supranational institutions of the European Union.
Advocates of various cultures, however, cannot succeed until they achieve people-oriented legitimacy and the power to enforce norms. Marie-Claude Smouts (2003: 213) has pointed out that new norms take time to be established, yet the concept of "global good," especially the idea that the planetary environment is the "world heritage of mankind," has gained traction because INGOs have appealed over the heads of states to the global masses. Cosmopolitan democratic humanistic appeals in the areas of the environment and human rights appear to be creating a new global cosmopolitan culture that has swept across the globe, challenging self-interested TNCs and governments to live up to higher ideals.
However, the world power structure has been changing in centrifugal ways. The arrival of thousands of Iraqi refugees into Europe has dampened cosmopolitan thinking, and the European Union has lost Britain (Krastev 2014). Russian revanchism, including efforts to support extreme right-wing European leaders, has succeeded in producing inward-looking ultranationalism (Shekhovtsov 2017). President Donald Trump has abdicated the global leadership of the United States by denigrating IGOs and renouncing international agreements. The Islamic world is undergoing conflict between competing sectarian views that have been elevated to matters of national power. Internal conflicts plague Africa, some involving jihadist groups. Within Asia, conflict over North Korea has intensified, almost to the point of a third world war. Only Latin America seems relatively free of international conflicts, though some domestic problems are associated with globalization. In short, the idea of global democracy seems far-fetched nowadays. Nevertheless, incredible developments have been occurring under the radar of power politics.
As the current global cultural dissensus continues, global democracy will only exist when institutions of civil society intervene on behalf of the people to solve problems of globalization. In democratic countries, a crucial element is the ability of the media to inform the public so that intelligent decisions are made by government. The global media, therefore, are examined next.
Global Civil Society: Global Media
Many of the same observations about the media in the previous chapter apply globally. The media in any country play a crucial role in providing information to the public, though they also disseminate culture, usually subliminally. Whereas mass publics throughout most of history have relied on local and national news sources, globalization has given rise to a universal availability of information. But the major media outlets today have been accused of being "missionaries of global capitalism" (Herman and McChesney 1997), giving priority to corporate perspectives because media ownership is by transnational holding companies that operate more productive nonmedia businesses (Dencik 2012).
Among print media, the International Herald Tribune, the New York Times, and the Wall Street Journal are available worldwide, especially in major hotel chains. Most newspapers, however, are now a shadow of their former selves, while the public relies more on the Internet, and local news sources are more respected than print media emanating from the United States (Bird 2010).
Television commands more worldwide attention than newspapers, providing "mediated worldliness" (Thompson 1995). World travelers often rely on CNN, which began in 1980 as a relatively unbiased news channel, though some hotels only provide Fox News because their owners want to expose guests to a conservative perspective. News sources from China and Japan are also available worldwide. Regional television is also available in Arabic and Russian. With a large satellite dish, television stations from almost any source can be received.
Commercial global television networks rarely try to expose problems of global mass society, since they are owned by global elites. Nevertheless, television does an excellent job of covering environmental disasters, which can serve to encourage elites and nonelites to contribute funds desperately needed for relief.
Global television gives a slanted view of reality. Although some scholars believe that the modern media serves to disseminate cosmopolitan democracy, there is little evidence that the world public has bought that slant (Norris and Inglehart 2009). The media seldom report on the work of the United Nations until natural disasters, refugee surges, and scandals occur. Media advertisements on TV are effective in enticing consumers to buy whatever is displayed before them (Comor 2001: 402).
American films and fictional television programming spread a hedonistic and individualistic culture, to the dismay of many peoples. The effect is to whet the appetite for affluence.
In most countries, free access to the commercialized Internet offers a wealth of information, though omnipresent ads help to condition consumerism (ibid.: 401), and fake news can slip by the inattentive user. News provided by Internet service providers (America Online, Google, Yahoo, etc.) reaches a global audience, as does Facebook. Blogs and not-for-profit websites tend to be biased toward the views of their authors.
Social media, based on the Internet, plays an important role in transnational communication. The use of social media has facilitated mobilization of the masses, as in the Arab Spring.
The Internet provides unparalleled opportunities for states to spy on their own citizens as well as on foreign friends and enemies. Domestic laws provide limits in some Western countries, but China has the capacity to block its citizens' access to Internet sites. Russia plants fake news abroad to sway public opinion in the direction that the Kremlin prefers. Today is the age of the cyberwar, yet cyberwar crimes on the Internet have not been targeted for international prosecution.
The International Telecommunications Union, with nation-states as members, has capitulated to the global media. In 2003 and 2005, the United Nations organized the World Summit on the Information Society to identify and cope with problems of the global media. Unfortunately, little progress was achieved (Hintz 2009: 276). Efforts to establish a global Independent Media service (Coyer 2005) did not progress past 2013.
Perhaps the richest source of helpful news for the global masses is INGO websites (such as Global Justice Now and Oxfam), which process information sources and then disseminate what they find important to their committed members and to anyone else in the public (Kavada 2005). INGOs also seek publicity for their causes in the print and electronic media, not only to gain new members but to prompt action from global elites, governments, and IGOs. They sometimes organize "stunts," most notably the Seattle protest of the World Bank in 1999 (Coyer 2005). Those who use the Internet for humanitarian and political purposes tend to be highly educated and wealthy (Comor 2001: 401). Since TNCs have a stranglehold on the global media, INGOs and the Internet serve as the intermediate institutions for the global masses to counterbalance the consumerist narrative that maintains corporate global dominance.
The global media spreads information and therefore may serve to identity problems and report on solutions. But problem solving requires cooperative measures, including the mobilization of institutions. Accordingly, the next part of the chapter identifies the formal (IGOs), informal (INGOs), and semi-formal (public-private cooperation) components of global civil society that contribute to global governance.
Informal Global Civil Society: INGOs
Aside from the media, the agents of global civil society include transnational social movements and networks, INGOs, organized indigenous peoples and cultural groups, and prominent citizens (Pasha and Blaney 1998: 418; Kenny 2003: 121). The web of INGOs could ideally mediate between the people and the diffuse global power structure (Lipschutz and Mayer 1996). But do they?
The creation of global civil society has been difficult, resisted by elites (Colás 2002; Tsutsui and Wotipka 2004). Globalization of information, especially through social media, has equipped civil society organizations with the resources to be more influential. Thus, there is now an antinomy between "global concentration" by TNCs and "global pluralism" involving mass-based global civil society (cf. Koppell 2010).
For INGOs to be effective, they have relied on issue entrepreneurs or vanguards (Milner 2005: 207). Prime movers, such as anti-slavery advocate Granville Sharp, first define problems and then seek to attract the attention of prominent members to devote resources for campaigns to establish new norms and principles.
Three types of INGOs have been identified (Teegan and Doh 2003): (1) Stakeholder INGOs exercise a certain amount of power. Nature Conservancy, for example, buys land to prevent development. (2) Staketakers carry out campaigns to delegitimize the "enemy." Accordingly, Global Witness has exposed corporate abuse since being formed in 1993; Amnesty International is more famous in the human rights regime.
The key triumph of the INGO approach to global democratization was the release of Nelson Mandela from prison in 1990 and the abandonment of apartheid in South Africa during 1994 after at least a decade of pressure from many sources, including university students demanding disinvestment by their boards of regents. Organized opposition to apartheid has become a model for future efforts.
INGOs can definitely change conditions at the micro level. According to Richard Falk (1999: 134), globalization-from-below started with the Rio Conference on the Environment and Development in 1992, involving technical experts from nation-states along with considerable input from INGOs. Similar conferences have been sponsored by the UN's Economic and Social Council on several subjects. A new democratic consensus appears to be emerging at world summits, thanks to the participation of INGOs.
During the twenty-first century, bottom-up global civil society has emerged on a scale previously unknown, including a critical mass of more than ten million NGOs and nearly ten thousand INGOs that are prepared to effect fundamental change in global governance (Walker and Thompson).
Several successful methods have been used by INGO campaigns on behalf of nonelites around the world. They include verbal activities, publicity, boycotts, buycotts, protests, direct action, and even direct enforcement. Verbal campaigns have been launched to influence corporations, governments, and IGOs. The anti-slavery movement primarily utilized petitions and speeches (Jennings 1997) and even a novel by Harriet Beecher Stowe (1852). More recently, verbal campaigns have evidently encouraged developing countries to block further rule-making by the WTO, which has not advanced beyond the Doha negotiations (Koppell 2010). Advocacy scientists, similarly, have been prominent in the campaign against global warming (Grundmann 2011).
Publicity can shame corporations and other members of the global superclass. Transparency International, which was set up a decade before the UN Convention Against Corruption of 2003, shames elites by exposing corruption (Kauppi and Madsen 2014), though some observers claim that the most effective anti-corruption campaigns involve investigative journalists, watchful users of social media, and mobilized local NGOs.
Buycotts increase the effectiveness of boycotts. While consumers boycott some products, they purchase those that meet minimum standards, such as products labeled "non-GMO." Buycotts under the Fair Trade Movement are discussed below.
Protest demonstrations are a fifth method. A demonstration in Paris during 1848 spread throughout Europe. The International Workingmen's Association, which began in 1864, mobilized strikes by workers. The International Alliance of Women, formed in 1902, not only brought together national movements engaged in public demonstrations but also various forms of civil disobedience; the result was that women's suffrage became a reality. In each case, the protest was for positive change. Negative demonstrations tend to be short-term expressions of opinion unless part of a concerted strategy.
Direct action can be taken on behalf of the forgotten and neglected. Save the Children International Union, founded in 1920, uses donations to provide food and shelter for poor children around the world.
Direct enforcement is a final method used by some INGOs (Doh and Teegan 2003; Eilstrup-Sangiovanni and Bondaroff 2014). The most prominent example is the International Olympic Committee (IOC). Although the Olympic Games of ancient Greece drew athletes and spectators from all over the Greek world, rules were established to ensure that competition was fair. In 1894, the IOC was founded to bring the competition back for a gradually urbanizing world of leisure, with men (and later women) decreasingly tied down on farms. The IOC's founder, Baron Pierre de Coubertin, selected members of the committee with the power to determine the host city, which games would be played, and which countries could send athletes, and to set the standards that athletes had to meet. The first games were held in 1896.
But after Olympics host Montréal suffered a deficit of roughly $1 billion in 1976, the IOC changed from a nonprofit INGO into a money-making corporation of endorsement and sponsorship deals, including sales of broadcasting rights. A bribery scandal over the selection of the host city for the 2002 Winter Games resulted in lawsuits and resignations. Big changes to the Charter came soon afterward: An ethics commission is now in place, officers with limited terms are now elected by representatives from Olympic Committees in each country (not governments), sessions are open to the public, and finances are published. Democracy, thus, arrived in at least one arena of international sports (Keane 2009: 703-05). Currently, the IOC recognizes more than seventy sport INGOs, some of which have their own events outside the IOC. In 2009, the UN General Assembly allowed the IOC to attend and speak during regular sessions.
INGOs may claim to speak for the masses, but their voices are not necessarily heard by the masses (Kenny 2003: 127). The "iron law of oligarchy" is inevitable in INGOs, which must raise funds to survive. Although INGOs with a mass base, such as Human Rights Watch, operate autonomously (Berkovitch and Gordon 2008: 884 n.2), other INGOs are dependent on rich donors, who may contribute only on a project-by-project basis, following priorities encouraged by the governments in which the donors are located. In some cases, governments subcontract to domestic NGOs to perform tasks authorized by domestic legislation. Because NGOs and INGOs have to compete for contracts, the result is decreased solidarity (Powell and Friedkin 1987;Cooley and Ron 2002). INGOs, thus, have not yet fully tamed global elites, but they have made enormous strides in articulating the needs of the masses to the global elites.
Formal Agents of Global Governance: IGOs
Insofar as global civil society operates as the democratizing force of globalization, a major test is whether IGOs primarily listen to their members, the nation-states, or are influenced by input from INGOs and the global masses. Increasingly, the latter is the case.
Historically, most IGOs have been established by the major industrial powers, who have fashioned structures for their benefit, rarely consulting developing countries or minor powers (Gruber 2005), and even excluding them from membership (Donno, Metzger, Russett 2015). Because IGOs have member governments which seek to advance their own interests, they should not be expected to work for the benefit of the global masses. Indeed, some IGOs have perfected the ability to manipulate less developed countries into compliance with rules of global governance set by developed countries (Halabi 2004: 33-35).
IGOs perform two main functions-providing a forum for discussion and providing services for members. The following sections focus on the most prominent IGOs to determine whether they listen to the voices of the global masses, as transmitted by INGOs.
League of Nations Set up primarily as a forum to discuss the peaceful resolution of international conflicts, the Assembly and the Council of the League clearly failed to prevent World War II. The requirement that resolutions must be unanimously adopted was a source of gridlock. The Disarmament Commission held two conferences (in 1922 and 1932) but failed to provide a security regime.
Although defeated countries customarily found their territories carved up by victorious countries after war with little regard to the wishes of the people, the League adopted two major innovations based on the principle of self-determination: First, the plebiscite allowed some peoples in Europe to vote to determine which of two countries they wanted to be governed by. Second, former German colonies in Africa and the Pacific islands were reassigned through the Mandates Commission to Australia, Belgium, Britain, France, Japan, New Zealand, and South Africa as "sacred trusts" (Louis 1967: 7), though little effort was subsequently undertaken to prepare the people for eventual independence.
The League was most successful in providing services, which later rolled over to the United Nations. The Health Organization became the World Health Organization in 1948. The League's International Commission on Intellectual Cooperation joined an INGO, the International Bureau of Education, to form the UN Educational, Scientific, and Cultural Organization (UNESCO) in 1946 with a goal of promoting "education for all" (Mundy 2010). The work of the League's Commission for Refugees is now carried out by the UN High Commissioner for Refugees (UNHCR). The treaty-based Permanent Central Opium Board was folded into the UN Office on Drugs and Crime (UNODC) and was a precursor to the International Narcotics Control Board, which supervises several UN treaties that have tried to set up a global anti-drug regime. The League's Slavery Commission was continued by the UN Working Group on Slavery as a body under the UN Economic and Social Council.
In 1919, the League also oversaw creation of the independent International Labor Organization (ILO), which continues as a UN Specialized Agency. Codes of conduct have been drawn up over the years to create a labor-management regime, and many countries have adopted laws to conform to ILO's codes.
League service-oriented bodies, often placing experts in control of the agenda, involved INGOs in their work. As a result, INGOs were given official recognition in the UN Charter.
United Nations Founded in 1945, the UN has been more useful in providing services for the global masses than in providing forums. More than a dozen UN agencies have "contact points" with INGOs.
UN Security Council (UNSC)
The UNSC, consisting of five permanent members (Britain, China, France, Russia, and the United States), which have veto power, and ten members elected from major world regions, on paper offers the promise of serving as the major locus of the global security regime. Instead, the UNSC has been divided between rivals, such that decisions mirror world anarchy unless consensus develops regarding how to treat smaller countries. The body has authorized military force to stop aggression-in the case of North Korea's attack on South Korea (1950), which occurred while the Soviet Union was boycotting the UNSC, and Iraq's attack on Kuwait (1990); in both cases, coalitions of countries were organized by the United States. More limited UNSC-authorized measures have included naval restrictions to enforce sanctions (in Iraq, the former Yugoslavia, Haïti, North Korea, and Sierra Leone). The use of "all necessary means" or "all necessary measures" by multinational forces has been authorized in several cases-Albania, Bosnia, Congo, East Timor, Haïti, Iraq, Liberia, Rwanda, Somalia, Zaïre, and the self-proclaimed ISIS. The early impetus for action came from member countries, even though global civil society increasingly pressured the five UNSC permanent members.
Peacekeeping has been an alternative method for UNSC action to provide global security. Blue-helmeted UN troops recruited from various countries, sent to areas of conflict, have been successful in some cases, but UN peacekeeping often fails because the major powers provide neither sufficient funds nor troops to assure success (Guéhenno 2015). Otherwise, UNSC operates as an elite body, paying minimal attention to victims of aggression despite the power to do so.
UNSC also has the power through resolutions to be a creator of new international laws. But despite more than 600 resolutions over the years, the impact on international law has been negligible (Deplano 2015).
UN General Assembly (UNGA)
With 193 member countries, the UNGA provides a forum in which minor powers might articulate interests vis-à-vis major powers by passing resolutions by majority vote. A notable success was pressure to gain independence for African colonies. Regular condemnations of Israel for occupation of territories on the West Bank of the Jordan River led to recognition in 2012 of the State of Palestine as a nonmember Observer Country with a seat in the General Assembly. The annual debate in September is an opportunity for countries to state their priorities, though they are often stated in an obscure legalese (Johnstone 2005) and thus are overshadowed by policy exhortations by the major powers.
UN Secretary-General
The top official of the UN Secretariat, the Secretary-General, mainly attends to the bureaucratic problems of the agency. Nevertheless, Secretaries-General have often sought to be peacemakers on behalf of innocent people who are victims of war and other calamities.
Several service agencies located within the Secretariat coordinate operations in the field by holding meetings of IGOs, INGOs, and NGOs. The Office for the Coordination of Humanitarian Affairs, for example, coordinated relief from the tsunami that hit twelve countries surrounding the Indian Ocean in 2004 (Weiss and Thakur 2010: 24).
The Economic and Social Affairs Department within the UN Secretariat has held world summits, starting with the World Summit for Children of 1990, to crystallize global opinion. Subsequently, nearly fifty summits have been organized on such matters as basic human needs, human rights, and sustainable development. The most famous is the Millennium Summit of 2000, which drew up goals to be achieved by all countries with special emphasis on the least developed. In addition to UN member countries, INGOs and sometimes TNCs have attended the UN-sponsored summits.
UN Specialized Agencies
Extending the work begun under the League of Nations, many humanitarian organizations do extraordinary work on behalf of distressed individuals, such as providing food for those experiencing famine. However, most such agencies are starved for funds beyond UN dues and thus have their assignments written by major power paymasters. Most Specialized Agencies use INGO personnel as consultants (e.g., Liese 2010). In specific projects, they coordinate the work of all INGO and NGO civil society organizations involved in particular countries (Tussie and Riggirozzi 2001).
The UN Children's Fund (the current title of UNICEF) was originally set up to aid child victims of World War II and later was assigned the role of delivering UNESCO's educational programs to children. UNICEF still focuses on the right of children to food, health, and shelter, including aiding vulnerable pregnant women.
TNCs dominate at least two UN agencies: The International Civil Aviation Organization (ICAO) deals with airline companies that respond to market forces, though the International Passenger Association tries to monitor ICAO on behalf of consumers (Koppell 2010: 234). The International Maritime Organization allows about four dozen consultative organizations, of which only nine are civil society INGOs (ibid.: 243).
The World Health Organization (WHO) provides an excellent example of how a Specialized Agency can respond to global civil society demands. One success is the worldwide eradication of smallpox. However, if a new disease suddenly spreads in Africa (Ebola) or Asia (SARS), WHO often awaits requests for action from member governments, and then sets a priority on controlling the disease and takes whatever action is within the organization's budget until more funds are raised for the emergency. But nation-states often hide the onset of an epidemic (Sparrow 2016: 27) until spillover is detected in the First World. For example, the Zika virus infected slightly more than one thousand persons in late 2015 but was noticed by WHO only after a few persons in the United States contracted the disease. Meanwhile, cholera continues to affect 3 million people and kill about 100,000 annually. WHO's emphasis in 2015 was on Zika, not cholera. Malaria affects and kills even more, but the organization continues to respond to elite member countries more than nonelite countries, frustrating the effort to build a global health regime (Waldman 2007).
Indigenous Peoples and the UN
In 2011, the UN Indigenous Peoples Partnership (UNIPP) was established, recognizing about 5,000 peoples in at least seventy countries, constituting 5 percent of the world population. UNIPP works with several UN agencies to advise countries about rights in the Declaration, including the ILO's convention.
The UN organized the World Conference on Indigenous Peoples in 2014, but UN member countries brought along indigenous peoples from their own countries rather than having native peoples alone in attendance, thereby keeping them subordinate during discussions (Morris 2014). Outside the UN, the Unrepresented Nations and Peoples Organization has since 1991 been more effective in articulating views of indigenous peoples, who are most likely to be harmed by efforts to clear land and cut down trees where they live.
Implications Those who salute the UN in the areas of human rights, security, and socioeconomic development characterize developments as an "unfinished journey." Aside from services to individuals in need around the world, the UN has hardly assisted the global masses in seeking justice from global elites (Falk 1999: 102; Weiss and Thakur 2010).
International Monetary Fund Founded at Bretton Woods, New Hampshire, in 1944, the IMF was set up to provide capital for countries with cash-flow shortfalls to pay off loans, which was an expected consequence of the need for funds to assist Europe after the devastation of World War II. The IMF thereby eases debt payment, but subjects recipient countries to stringent conditions. Because there is no well-financed alternative in the world, IMF plays a coercive role, disrupting the domestic affairs of countries seeking bailouts.
Restrictions usually involve a cutback in local government spending, which creates unemployment and shortages in government services. Other forced reforms include devaluing exchange rates, privatization, loosening employment security and related laws, and lowering trade barriers (Griesgraber 2008). Although the United States protected its infant industries in the late nineteenth century with tariffs on imports, the IMF does not allow developing countries falling into debt to follow that example-just the opposite.
Egypt, for example, was once pressured to produce cash crops in order to obtain international currency to pay back loans. But that meant a local shortage of food, which had to be imported from abroad (Halabi 2004: 44). As a result, a grassroots movement arose among peasants to demand food sovereignty, resulting in a ten-year dialog within the UN Human Rights Council until a draft declaration on the rights of peasants emerged in 2015 (Dunford 2015).
Another IMF requirement is for loan-defaulting countries to spend less on higher education and more on primary education. The basis for such a priority is a study finding that there is a 26 percent return on investment in the latter and only 13 percent from colleges and universities (Landell-Mills, Agarwala, Please 1989: 77), but that study is contradicted by another one (Caffentzis 2000: 5). In short, the IMF wields the power to destroy the intellectual capabilities of poor nations (Kamola 2013), leaving positive investment to the World Bank.
The most prominent backlash has been in the form of "IMF riots," the consequence of imposed austerity that has affected more than fifty countries (ibid.: 43; Wood 2013). The IMF tries to propagandize countries with the neoliberal theory of economic growth and often uses developing countries that have bought the line to persuade those that are more reluctant (Halabi 2004: 39). The theory supports economies that coddle the rich, crush the poor, and cremate the middle class, ignoring the UN's Millennium Development Goals (Gutner 2010).
Having imposed onerous repayment schemes on so many countries in debt, the IMF is now said to focus more on gaining external support. An International Monetary and Financial Committee has been set up to permit more debate among member countries.
World Bank Group Also formed at Bretton Woods, the International Bank for Reconstruction and Development (IBRD) was set up to provide initial loans to countries short on capital but with a high probability of paying off loans due to their industrial workforce capability. However, the Marshall Plan in 1948 made the bank irrelevant for Europe, since the United States provided the capital for initial European reconstruction after World War II.
With the establishment of the International Development Association (IDA) in 1960, the two IGOs were called the World Bank. The addition of three later IGOs-the International Finance Corporation (IFC), the Multilateral Investment Guarantee Agency (MIGA), and the International Center for the Settlement of Investment Disputes (ICSID)-constituted the World Bank Group. IBRD provides government loans, IDA offers technical assistance, IFC provides loans to the private sector, and the functions of MIGA and ICSID are obvious from their titles.
Since the 1960s, the World Bank has professed the goal of relieving world inequality. But funding for contracts to build infrastructure in developing countries inevitably goes to TNCs in developed countries, and recipient countries are notoriously selected for reasons of political favoritism (Woods 2006). Bids come from TNCs that will benefit from the new infrastructure by lowering transportation costs for their businesses abroad. Meanwhile, as a former World Bank executive has revealed, recipient governments often treat projects as "cash cows," enabling corruption (Berkman 2008).
The World Bank's Civil Society Policy Forum (formerly the Nongovernmental Organization Committee), which professes to strengthen ties with domestic civil societies (Tussie and Riggirozzi 2001), has adopted very few reforms. Women's movements, however, have refocused projects on the role of women, who are regarded as more reliable recipients of aid (O'Brien et al. 2000: ch. 2).
The World Bank has encouraged private funding to universities. But the capital has been used primarily for postcolonial restructuring of universities in Africa to foster economic development, downplaying the humanities and social sciences as "unfriendly" to economic development (Jaycox 1991: 5;Olukoshi and Zeleza 2004: 2).
Structural adjustments required by the World Bank have not been favorable to human rights (Abouharb and Cingranelli 2006). Not notable for listening to the needs of the global masses, the bank nevertheless established and has funded two bodies to handle dispute settlement, as described below.
Organization for Economic Cooperation and Development (OECD) The Marshall Plan was an American commitment of funds for European recovery from the ruins of World War II premised on the formation of a European IGO, which was to design how the funds would be used. That body, the Organization for European Economic Cooperation (OEEC), was no longer needed after recovery was completed. Accordingly, OEEC cooperation gradually morphed into what later became the European Union. But in 1961, the United States wanted to join OEEC, so the organization was transformed into the OECD, which is now composed mostly of European governments along with nine other industrial countries (Australia, Canada, Chile, Iceland, Mexico, New Zealand, South Korea, Turkey, and the United States). But the scope of interest is global.
OECD is primarily interested in promoting economic growth throughout the world. A major focus is the protection of shareholders through procedures of accountability and transparency, promoted to member and nonmember countries via five Regional Corporate Governance Roundtables (Detomasi 2006: 241). Recommendations are provided in the form of technical assistance on how to run and supervise businesses. Another function is to assist in tax collection, including the identification of tax havens, where corporations hide their profits. Corporate INGOs play a role in the roundtables.
World Trade Organization
The General Agreement on Tariffs and Trade (GATT), formed in 1947, was superseded by the WTO in 1995 to end world economic warfare, once and for all. WTO aims to create a world of trade flowing freely-that is, without trade barriers erected by states. Accordingly, WTO treaty provisions permit states to file trade disputes against one another when such barriers are perceived to create an unfair advantage. During WTO negotiations, major powers favored the interests of businesses over those of consumers and workers; the latter were not present when rules were adopted (Colgan and Keohane 2017: 40). Lowering trade barriers hurts less developed countries the most, which have been relatively marginalized (Dowlah 2004).
Environmental and labor INGOs have complained about WTO's overconcentration on commercial aspects of trade (O'Brien et al. 2000: chs. 3, 4). If, as proposed, WTO provisions were amended to focus on global environmental issues (Vifell 2010), the world polity would benefit, as bilateral dispute settlement would then accumulate jurisprudence beyond trade.
WTO operates a dispute resolution system, which is described below. If a country is successful in complaining against another country's trade practices, the latter is directed to stop what is unacceptable. An order of compliance may be resented, but WTO has the power to authorize all member countries to boycott a noncompliant country. That threat, rarely exercised, has usually been effective.
International Judicial Institutions Legal concepts, such as "just war," have been articulated in concrete terms over the past 1,400 years but never enforced globally. What is brand new in human history are international tribunals that constitute a global legal regime. World tribunals tend to be approached by smaller countries to resolve conflicts peacefully with larger countries.
Most international tribunals use international law as a basis for resolving disputes between governments. Besides custom, international law emerges from four sources-principles adopted at conferences, intergovernmental negotiations resulting in treaties, institutional practices, and decisions of tribunals (cf. Woodward 2010). Most world courts are restricted to the application of treaty provisions but lack enforcement powers. All offer some possibility to obtain a modicum of justice, particularly when small countries are frustrated by larger countries, provided that the latter are compliant. What is important is that the decisions involve a deliberative process that results in reasoned decisions, building a coherent jurisprudence (Kuyper and Squatrito 2017). To build global democracy, courts must allow people to have access when decisions are being rendered.
International Court of Justice (ICJ)
The successor to the Permanent Court of International Justice, formed in 1920, ICJ began in 1946 as a major UN institution. Although most cases involve only two countries in dispute, ICJ seeks to cumulate international jurisprudence with comprehensive rulings. Political questions cannot be taken up by ICJ, so when India shot down a military aircraft inside Pakistan, the court ruled in 1999 that the case was not justiciable but instead should be resolved by the UN Security Council. The distinction between political and nonpolitical questions, however, can be thin, since New Zealand addressed the court in 1973 regarding the legality of French nuclear weapons testing in the Pacific Ocean, whereupon France announced a moratorium, and ICJ ruled the case moot. Nevertheless, ICJ can render an Advisory Opinion when a country seeks a ruling but chooses not to sue.
About half of ICJ cases involve boundary disputes. In 1997, the court decided counterclaims involving Hungary and Slovakia regarding treaty obligations involving the Danube, a river considered as a "shared resource." Hungary tried to withdraw from the treaty, something that the court ruled was not possible in view of the wider development of environmental international law.
The court acts on behalf of ordinary people in cases involving human rights, which constitute one-fourth of the cases, mostly dealing with the treatment of nationals of one country who live in another country. For example, in 2004 México sued the United States regarding fifty-one Mexican nationals on death row in ten states who had been denied the consular notification and access guaranteed by the Vienna Convention on Consular Relations of 1963. ICJ ruled in favor of México, recommending a reconsideration of the sentences and payment of reparations to the Mexican government. In 2005, the United States withdrew from the Optional Protocol to the Convention. In short, a smaller country only obtained verbal justice.
The court has also handled war crimes as a successor to the Nuremberg Trials after World War II. In 1984, after a case was filed by Nicaragua, ICJ ordered the United States to stop all military and paramilitary activities against Nicaragua. Two years later, Nicaragua sued Costa Rica and Honduras for serving as staging areas for American-backed rebels who sought to overthrow the regime in Managua. But in 1990 both cases were withdrawn in the context of a peace settlement.
The use of or threat to use nuclear weapons was found to be generally contrary to international law in a 1996 Advisory Opinion, responding to requests from WHO and UNGA. Matters of war crimes, however, now can be handled by the International Criminal Court.
International Criminal Court (ICC)
Among international courts, only ICC has the power to enforce judicial rulings because individuals charged with violations of international criminal and humanitarian law can be arrested with the assistance of Interpol, incarcerated while on trial, and imprisoned in a member country if found guilty. Beginning in 2002, the court has tried fewer than twenty persons, lacking the resources for more activity. Only four persons have been convicted out of more than 1,500 complaints against those recommended for prosecution. Prosecutable offenses include genocide and war crimes.
International Tribunal for the Law of the Sea (ITLOS) Created under the 1982 UN Convention on the Law of the Sea to resolve disputes regarding the designation of 200-mile "exclusive economic zones" projecting from maritime borders, ITLOS was set up primarily to clarify borders, which can be difficult to determine with exactitude. Nevertheless, the Philippines filed a case against China for actions within its zone with the Permanent Court of Arbitration, not ITLOS.
World Trade Organization Appellate Body (WTO AB)
The WTO has a Dispute Settlement Body, which serves an arbitral function when one country is accused by another of violating provisions of the WTO's agreements. A panel is then constituted to review the case and make a nonbinding determination. In 1995, the WTO AB was established to hear appeals, applying a more rigorous legal analysis. More than 200 cases have been considered by the seven-member body. If a country is still found in violation but does not accept the ruling, sanctions can be authorized by WTO's main decision-making organ, the General Council. Sanctions have been authorized in a few cases, but targeted countries usually reverse the policies ruled illegal.
Implications International litigation is "slow, costly, inefficient, and inaccessible to the public" (Wai 2005: 252). Often the cases are about ambiguous matters within treaties, opening the opportunity for some norm contestation (ibid.;Berkovitch and Gordon 2008: 893-94, 898-99). The adversarial process may not assist in improving cooperation between the countries involved. With precedent written in the past primarily by Anglo-European legal scholars, the rulings are not always welcomed in other parts of the world. In any case, jurisprudence develops far more imperceptibly than acceptance of the basic principles set forth in the treaties themselves. The greatest success has been in developing a jurisprudence on global environmental law (Baber and Bartlett 2009).
Judicial settlements have considerable legitimacy and can help to mediate between smaller and larger countries, though the latter may ignore the rulings. When a country yields to another based on a ruling, compliance is more likely if the losing country can cite the world court decision as a face-saving justification. INGOs gain legitimacy and recognition when they aid victors of court decisions.
In addition to courts with universal jurisdiction, there are twenty regional courts (Kuyper and Squatrito 2017). The European Court of Human Rights (ECHR), founded in 1959 as an organ of the Council of Europe (CoE), handles litigation involving violation of CoE treaties. ECHR has heard thousands of cases of individuals who were denied justice in their home countries, mostly regarding failure to provide speedy trials. Due to ECHR rulings, several state practices have changed (Haas 2014b: ch. 12).
The existence of regional courts offers an opportunity to build a global jurisprudence that is not Eurocentric. As cases are decided by courts from the Andean Tribunal of Justice to the Western African Economic and Monetary Union Court of Justice, legal principles can be articulated for specific situations but have worldwide relevance. In contrast with international courts, some regional courts allow litigants to be individuals.
Meanwhile, national court decisions can have international implications by generating new legal principles that are picked up by other countries. The first "climate change" lawsuit in which a court ruled that a government was responsible for reducing carbon emissions, Urgenda Foundation v The Netherlands (2015), relied on the commonly used "hazardous negligence" tort principle and evidently inspired similar rulings in domestic courts in Pakistan, Perú, and two states of the United States (Estrin 2016). Ordinary people, in other words, have enlisted their governments to take action on their behalf-and won. The right to live in a healthy environment has not yet been established under international law but may come if more such cases are decided.
International Arbitration Institutions For millennia, international commercial agreements have presupposed honest arrangements between exporters, traders, and importers. Disputes have been inevitable and resolved through negotiation, sometimes involving third parties. Arbitration has a long history, as guild courts have existed since the time of the city-states of Genoa and Venice (Lehmkuhl 2011). Most industrial states have arbitration bodies, public and private.
Modern international arbitration was formally instituted in 1794, when Britain and the United States signed a treaty that included a provision for arbitration of commercial disputes. The Jay Treaty served as a model for future trade agreements as the United States entered the global market.
Today, international tribunals for arbitration exist to resolve disputes by applying only specific legal principles to which the parties agree beforehand. Arbitration can involve corporations as well as governments. Arbitration begins adversarially but can rely on negotiated solutions. Arbitration does not build jurisprudence but can facilitate voluntary compliance in a way that judicial courts cannot.
Permanent Court of Arbitration (PCA)
Established at The Hague in 1899, PCA was primarily designed for trade disputes. Most cases involve claims of contract violations, though treaty violations are also handled. PCA cases have increased in the twenty-first century because UN Security Council vetoes have resulted in decreased reliance on the UN for peacekeeping.
In a few recent cases, the small country of Timor-Leste (East Timor) has been able to resolve disputes with Australia. Russia, however, refused to allow PCA action proposed by the Netherlands in 2014 after Russians boarded a Dutch ship and detained crew members, claiming that the vessel was inside Russian territorial waters in the Arctic. China refused in 2016 to accept the opinion of the court regarding successful claims of the Philippines within the latter's exclusive economic zone (200 miles from its territory), but subsequently Beijing offered conciliatory gestures toward all affected countries in the Association of South-East Asian Nations (Pubantz 2017).
PCA can accept nonstate litigants. In 2011, a representative of the Hawaiian Kingdom was accorded recognition in the quest to have the court rule that the United States had illegally annexed the Hawaiian Islands in 1898 (Haas 2016: 257). Hawai'i was then a sovereign state recognized around the world. However, until the United States government agrees to be a party to the case, no arbitration can move forward.
International Center for the Settlement of Investment Disputes (ICSID)
The World Bank formed ICSID in 1965 as an independent IGO to provide arbitration of disputes over terms of investment agreements involving private businesses. Arbitration is handled on a case-by-case basis in accordance with rules in the ICSID treaty (Reed, Paulsson, Blackaby 2010). Parties must include at least one government and a national of another country. Most cases are suits brought by businesses in one country against governments of other countries.
World Trade Organization Dispute Settlement Body
Although dispute settlement was offered under GATT, the process was inadequate, so the WTO established a formal procedure for one government to object to the trade practices of another. Panels are created for each dispute. More than 500 disputes have been handled by WTO thus far, mostly involving the largest trading countries, often suing one another. Panel recommendations can be appealed to the WTO Appellate Body, as described above.
In 2002, President George W. Bush imposed tariffs on imported steel to fulfill a campaign promise to the American steel industry, though he doubtless knew that a complaint would be filed in response. The following year WTO ruled that the tariffs were illegal. The European Union then threatened retaliatory tariffs on a range of goods, whereupon Washington removed the tariffs in late 2003. Foreign steel then entered the American market, resulting in steel factory shutdowns. President Trump has demonstrated interest in playing the same game.
There is a vaguely worded loophole: WTO member countries can take measures "necessary for the protection of … essential security interests … taken in time of war or other emergency in international relations" (Article XXI of the GATT Agreement of 1994, which is still considered operative). Trump's declaration of an emergency for the steel industry will doubtless impress few.
Although WTO is often assumed to be part of the global power structure, weaker countries have used the body to achieve justice. Panamá, for example, has filed several complaints about the trade practices of nearby Colombia. Argentina initiated a complaint against the European Union in 2012. Several smaller countries have filed complaints against the United States (Antigua and Barbuda, Argentina, Canada, Chile, Colombia, Costa Rica, Ecuador, New Zealand, Norway, Pakistan, Philippines, Thailand-but most have been from México). Most disputes tend to be product-specific with little overall impact in transforming world trade to reverse world income inequality. The protocol to file complaints is so complicated that less developed countries have difficulty doing so, and technical assistance to improve their capabilities is inadequate (Kim 2008: 680).
States do not always comply with WTO dispute rulings, and organizing retaliatory boycotts is difficult. Because the rules are vague, most disputes involve bargaining (Shaffer 2005). As a result, trade jurisprudence has advanced only incrementally (Zangl 2008). WTO's supranational power exists in theory but not yet fully in practice.
World Bank Group Inspection Panel
Characterized as a remarkable advancement in international law (Clark 2003), the Inspection Panel was created in 1993 to provide a way for ordinary people to protest specific environmental and human rights concerns related to IBRD and IDA projects. Although some researchers have found that the response has been insufficient, cases involving indigenous groups are usually ruled in their favor. However, in the case of Uganda's Basoga tribe, which objected to construction of a dam because of the spiritual significance of the land involved, the tribe was paid off to quell its opposition (Ziai 2017).
The Inspection Panel carries out both compliance reviews, to determine whether a project might deviate from project design, and complaint processing. Most cases relate to infrastructure projects.
World Bank Group Compliance Adviser/Ombudsman (CAO)
With the same mandate, processes, and effectiveness as the Inspection Panel, CAO was established in 1999 to handle complaints involving the IFC and MIGA components of the World Bank Group. Together the two World Bank Group bodies have processed some 250 cases in regard to sixty countries (Graham et al. 2017). Similar to the Inspection Panel, the people have prevailed in about half of the disputes, notably when an NGO or INGO presents the case on their behalf (ibid.).
Conclusion There are regional counterparts to the international arbitral bodies. Some regional banks, such as the Asian Development Bank, have adopted the World Bank Inspection Panel reform (Bradlow and Fourie 2011). A similar body, the North American Commission for Environmental Cooperation, is based on a side agreement of the North American Free Trade Agreement (von Moltke and Mann 2001; Hale 2011a).
The UN Commission on International Trade Law (UNCITRAL), set up by the UNGA in 1966, adopted a set of standard rules for arbitration in 1976 and later for conciliation. The Asian African Legal Consultative Organization, an intergovernmental organization with about forty member countries in Asia and Africa, agreed in 1977 to establish regional centers so that members would not have to endure the cost of flying to the arbitral bodies in Europe and could either use UNCITRAL rules or proceed ad hoc (Haas 1989a: 57-59). In 1978, Kuala Lumpur was the first to agree. Cairo followed in 1979, Lagos in 1989, and Tehran in 1997. The regional centers not only provide panels of arbitrators but also seek to enforce rulings.
Regional IGOs IGOs with global membership have been designed, financed, staffed, and otherwise dominated by major powers of the North Atlantic. Minor powers outside that region have felt neglected and have established regional IGOs.
The most famous regional organizations are in Europe, notably the Council of Europe, the European Union, the North Atlantic Treaty Organization, and the Organization for Security and Cooperation in Europe. The powers of the European Union became so extensive, with little input from ordinary Europeans, that the Brexit campaign drew upon mass society imaginary in 2016 and won.
Regional organizations began to emerge in Africa and Asia during the 1960s in order to focus on their own needs as well as to build consensus before going to IGO forums (Haas 1989a, b, 2013a, 2014b). The origins of the current Organization of American States can be traced to a conference in 1826, when Simón Bolívar urged South American countries to unite against European colonial control. A similar goal, asking European countries to grant independence, was responsible for the formation of the League of Arab States in 1944. Most African countries were colonies until the 1960s, and they formed the Organization of African Unity (now the African Union) not only to hasten independence for the continent but also to pressure South Africa to end apartheid. Although six colonial powers tried to coordinate their efforts with the Pacific islands by forming the South Pacific Commission in 1947, colonialism did not fade in the region until the 1970s. A rival body of five independent countries formed the South Pacific Forum in 1971 (now the Pacific Islands Forum), along with Australia and New Zealand, not only to secure independence for the rest of the South Pacific but also to obtain the wherewithal to become economically independent.
Within Asia, Britain proposed the Colombo Plan in 1950 as a parallel of Europe's Marshall Plan to provide aid to its former South Asian colonies of India, Pakistan, and Sri Lanka. When the organization officially began in 1951, five non-Asian members and three Indochinese colonies joined the three from South Asia (Haas 1989a). The Colombo Plan never obtained the necessary capital to flourish, however, and no pan-Asian organization has ever emerged. Instead, exasperation over American intervention in Vietnam united countries to form the Association of South-East Asian Nations (ASEAN) in 1967, not only to form a bloc in UN forums but also to gain resources for joint economic development. Eventually, ASEAN formed cooperative arrangements with East Asian and European countries while providing a model for the establishment of the South Asian Association for Regional Cooperation in 1985 (Dash 2008).
Beginning around 1970, the UN became concerned that regional IGOs might eclipse the New York-based organization. Scholars were assigned to go to each region to report on developments (Andemicael 1979). As a result, the UN became more favorable toward regional bodies, even supporting the formation of some intergovernmental IGOs that focused on bringing technological improvements to widely traded agricultural commodities (Haas 1989a: ch. 11). Global and regional IGOs have played complementary roles ever since, and the smaller countries of the world have benefited from building peaceful relations within regions and subregions.
In some cases, regional IGOs are limited to specific countries by geography. For example, an agreement for intergovernmental cooperation in matters of navigation on the Rhine River was adopted in 1815. About one hundred similar agreements now exist at the bilateral and regional levels (Rahner 1998;Conca, Wu, Mei 2006;Zawahiri, Dinar, Mitchell 2011). International law has developed principles to cover disputes between riparian countries (McIntyre 2016), thanks to the UN International Law Commission and the Watercourses Convention of 1997. Although the Convention lacks the ratifications required to go into effect, the regional bodies remain. Governments guard their sovereign land, river, and sea boundaries. Meanwhile, boundary disputes can be referred to global judicial bodies.
Conclusion Superpowers and major powers attend to their own interests in IGOs, and middle powers often line up behind them. Although one scholar has found twenty-three partial environmental regimes (Breitmeier 2008), proposals for an intergovernmental World Environmental Organization, which would consolidate mini-regimes into a single IGO structure, have been advanced by the European Union and several INGOs but opposed by Britain and the United States (Esty 2007; Evans 2012). Meanwhile, China and Russia seek major changes in IGOs to overcome Western dominance in economic-oriented IGOs (Magnus 2013).
Although INGOs play positive roles in delivering services to the needy alongside IGOs, they have much less impact on behalf of the global masses within IGO forums. INGO influence is much greater in private-public regimes.
Private-Public Regimes of Global Governance
IGOs cannot serve as the home for all possible global regimes for two reasons: Some IGOs work at cross purposes with other IGOs, and the existing IGO structure leaves important gaps in coverage. Therefore, there is a need to identify regimes for all the issue-areas of global governance that involve both IGOs and INGOs-characterized herein as "semiformal" regimes.
Oran Young (1994), the most persistent regime analyst, has identified three types of regimes based on how they are formed-imposed regimes, negotiated regimes, and self-generating or spontaneous regimes. The three types nearly fit the trichotomy of Andreas Hasenclever, Peter Mayer, and Volker Rittberger (1997)-realist (hegemonic), neoliberal (transaction cost negotiations), and knowledge-based (converging expectation) regimes (cf. Oye 1986;cf. Dimitrov et al. 2007;Zawahiri, Dinar, Mitchell 2011). Any particular regime may have two or all three origins, though the negotiated, neoliberal regimes are the focus of the present section.
Many regimes start within IGOs and remain there. Any regime seeking legitimacy today must include INGOs, so humanitarian IGOs and others regularly use the expertise of INGO representatives, finding them to be effective in policy formulation, implementation, and enforcement-especially within the human rights regime (Tallberg et al. 2013: 236-37).
Public-private regime cooperation is more likely to address the problems of the global masses than IGOs alone because of the greater role of INGOs. IGOs need to learn that without the input of INGOs, especially those with expertise, they will lack effectiveness in dealing with urgent problems.
In several cases, negotiations for regimes have started but failed-cases of regime negotiation gridlock (Dimitrov et al. 2007). For example, the lack of scientific knowledge has been faulted for the failure to develop a regime regarding reef survival. Some problem areas are simply too difficult to tackle, and the outcome of cooperation might be so uncertain that efforts will backfire (Miles et al. 2002). In addition, domestic politics in developed countries has blocked regimes concerning the global economy in regard to corporate takeovers and Internet privacy. Efforts to establish regimes to stop the sale of small arms and the proliferation of tactical nuclear weapons have run into the stone wall of geopolitics. The role of INGOs has been overridden by major powers who resist global governance.
Nevertheless, public-private regimes exist in several issue-areas. A complete analysis would focus on dozens of regimes to derive generalizations in such areas as arms control, banking, energy, health, intellectual property, and travel. To give a flavor of how regimes operate, three are identified below-the counterterrorism, human rights, and financial-monetary regimes; all three are micro-regimes, restricted to a single issue-area. The UN Global Compact, in contrast, seeks to bring together several microregimes. The aim of the short discussion is to illustrate the struggle between global elites and INGOs that seek to represent the global masses.
Counterterrorism Regime Since September 11, 2001, the United States has sought to create a counterterrorism regime, coordinating intelligence and military operations with other countries. The effort was partly legitimated when the UN Security Council in 2009 authorized a Contact Group on Piracy off the Somali Coast as an umbrella to coordinate the actions of some eighty countries and six IGOs to combat piracy (Percy 2016). The Contact Group has five working groups to strengthen diplomatic, financial, legal, naval, and self-defense components. Cooperation in the Contact Group with labor groups and the maritime industry elevates the effort to a private-public regime. Although the UN Security Council authorized the use of violence against ISIS in 2015, implementation was left to individual governments without coordination. Efforts to restrict funding to terrorist groups have also been pursued as part of the regime. However, terrorists obtain funds from a variety of untraceable sources (Neumann 2017). In short, the counterterrorism regime is only a proposal, with little consensus to move forward.
Financial-Monetary Regime
Trade between two countries is complicated when they use their national currencies, so there has long been a desire for a standard form of payment in the world. Gold arose in Asia at least two millennia ago, but that gave countries with strong military forces an edge to win wars in order to capture gold. The Byzantine gold Solidus was the commonly accepted currency from 330 to 1453 in Europe and the Mediterranean (Lopez 1951). Afterward, Europe chose silver as the standard. In 1717, Britain chose the gold standard, which spread due to extensive trade but was not legally adopted in Germany and the United States until 1873 (Andrei 2011: 146-47). Two world wars and the Great Depression caused havoc over the gold standard.
The IMF was formed in part to construct a global financial-monetary regime. The agreement at Bretton Woods was that the dollar of the United States would be the reserve currency for the world. All countries were then to peg the exchange rate of their currencies to the dollar, which in turn was based on gold held by the United States government (Conway 2015; Buzdugan and Payne 2016). The IMF was charged with the responsibility to handle emergency indebtedness in the global currency.
Europe objected to making the dollar the international standard, however, because it gave undue advantage to the American economy. By 1971, France demanded gold when wine sales produced a large American trade deficit, though the subtext of the demand was protest over the American intervention in Vietnam.
Although the agreement at Bretton Woods to establish the IMF and the World Bank was supposed to form a firm global monetary regime, the system collapsed when President Richard Nixon decided to take the United States off the gold standard in 1971, as gold was flowing from the United States to Europe (Eichengreen 2011). Exchange rates then "floated," and the Bretton Woods monetary regime lacked a replacement. Nevertheless, the IMF continued as if unaffected, while increasingly relying on a "basket of currencies." A conference convened by Washington in 1973 tried to find an alternative to Bretton Woods. Then France hosted a summit conference in 1975, the first "G" summit, with the avowed aim of coordinating economic policies among Britain, France, Germany, Italy, Japan, and the United States-thus known as the G-6. As noted above, Canada and Russia joined later, though Moscow was booted out in 2014 and withdrew in 2016.
Central bank governors and finance ministers consult together in the G summits. Global governance in matters of exchange rates, interest rates, and similar issues is supposed to occur, though statements released at the end of the summits reveal very little about joint decision-making.
Civil society protests around the meeting sites have sought to place various matters on the agenda. At the G-8 meeting in 1998, some 60,000 protesters campaigned on behalf of Jubilee 2000, which urged the First World to establish a clean slate for the twenty-first century by forgiving all debt to developing countries. The movement eventually gained supporters in forty countries as well as among some famous musicians. Britain and the United States expressed vague support, but the movement lost momentum after the year 2000 (Gready 2004).
After the financial crisis of 1997, when Asian governments were unable to pay their international debts, a separate forum, known as the G-20, was formed, as noted above. The aim was to have a wider group of countries with substantial economies to manage future financial crises, and the countries contributed about one trillion dollars to the IMF. With the establishment of twelve working groups on such sectors as agriculture, employment, and the environment, the door opened to INGO input in shaping decisions. Today, there is a B-20 of 706 corporate members from thirty-nine countries, a C-20 of 450 civil society groups from sixty countries, an L-20 of labor organizations, a T-20 composed of think tanks, and a Y-20 of youth organizations (Martens 2017). As a result, G-20 decisions are informed by INGO inputs.
Four disgruntled members of the G-20, not included in the G-7, met in 1999 to offer funds for the IMF on condition that they would be given additional voting power (Brazil, Russia, India, and China). When they were turned down, they revoked their offer and formed the BRIC community, which became BRICS when South Africa joined in 2010. BRICS now invites several other countries as observers in their annual forum. BRICS countries put previously pledged funds in the New Development Bank, which is an alternative to the IMF and the World Bank. BRICS meetings are attended primarily by political leaders, in contrast with the G summits.
Roger Loewenstein (2015: 69) claims that the regime is a façade because the world "will continue to have turmoil over trade and unstable currencies because that is what most nations want." As David Detomasi (2006) has demonstrated, heads of major corporations collude in the financial realm, and the Great Recession was one result. Humanitarian-oriented INGOs might convey the needs of the masses to global financial institutions, but the financial-monetary regime has little interest in the people around the world. The current INGO strategy is to try to open the door just a little in order to have some input, hoping that the response will be positive and the door will open wider.
Human Rights Regime Several IGOs and INGOs are focused on the goal of universal respect for human rights, challenging nation-states to live up to norms found in human rights treaties. Relevant IGOs range from the UN Office of the High Commissioner for Human Rights to the Human Rights Committee (one of eight treaty-based organizations sponsored by the UN) and the International Criminal Court (Haas 2014b: chs. 9-10). Human rights INGOs, such as Amnesty International, seek to promote compliance so that the global masses will benefit from protection of their rights. Both IGOs and particularly INGOs seek to shame violators with the aid of media coverage (cf. Murdie and Davis 2012; Peksen, Peterson, Drury 2014), resulting in a decline in investment and a lessening of repression within the countries that have been shamed (Franklin 2008; Barry, Clay, Flynn 2013).
After countries sign and ratify human rights treaties, they tend to comply with provisions and show improvements (Simmons 2009). Although INGOs often take credit for exerting effective pressure, the dynamics of domestic party politics also drive compliance: one political party criticizes another for human rights violations, wins an election, ratifies a human rights treaty, and compliance then increases. Among the areas of recent success are rights accorded to migrant workers (Soysal 1994), women (Ramirez, Soysal, Shanahan 1997;Berkovitch 1999), and gays and lesbians (Frank and McEneaney 1999).
One focus of the human rights regime is war crimes. Following the use of torture by the United States in Afghanistan, Iraq, and Guantánamo, and more than 200 other war crimes committed with impunity (Haas 2009), there has been a backsliding in compliance as other countries and terrorist groups realize there is little accountability for war crimes (Haas 2010: ch. 6).
However, appearances may deceive. Most INGO activity focuses on civil and political rights; economic and social rights are downplayed. The reason is that donors to flourishing INGOs are likely to be corporate foundations, which are attracted to projects that advance the rule of law in developing countries by ensuring that investment and trade are secured without corruption.
Yet economic inequality springs from denials of civil and political rights. INGOs stressing economic and social rights have fewer economic resources, tend to engage in advocacy and documentation of violations rather than court cases, and engage in direct aid projects (Berkovitch and Gordon 2008: 894, 897). In short, the human rights regime has made an auspicious dent in the behavior of elites in the global economy.
UN Global Compact (UNGC)
Operating as a macro-regime with the aim of promoting environmentally and socially responsible business practices, the UN Global Compact began in 2000 as a forum for discussion and a network for communication between national and local governments, businesses, labor organizations, and civil society organizations. UNGC asks corporations to report on how they uphold various human rights-specifically, the right of collective bargaining, ending forced and child labor, nondiscrimination in employment, ending corruption, and environmental preservation. Participants today include about 10,000 members from more than 170 countries and 7,000 corporations, though 3123 corporations have been expelled for not submitting reports.
Critics accuse the forum of being a talkathon that permits corporate "bluewashing"-that is, corporations can water down the effectiveness of international agreements by submitting reports with no apparent substance. For example, Survival International objected that Ayoreo Indians in Paraguay were never contacted before the Brazilian ranching company Yaguarete Porá felled trees and cleared land, even after being fined by the Paraguay government for illegally clearing the Ayoreo's forests while concealing evidence of the presence of Ayoreo residents in the forests (Cheeseman 2012).
Clearly, UNGC overlaps with the work of the International Labor Organization. Regimes are not entirely tidy constructions.
Implications Private-public regimes promise to bring some stability to global problems because they establish mutual expectations based on common norms. When they involve both IGOs and INGOs, the interests of the global masses are more likely to be considered. Private-public regimes are more effective in relieving problems faced by ordinary people around the world when they articulate norms and enforce them (Young 1999;Coleman and Gabler 2002;Conca, Wu, Mei 2006). Violations of conduct codes can be deterred either through sanctions or the need to receive the rewards of cooperation (Keohane and Martin 1995;Dimitrov 2003;Ritter 2010).
Some scholars, known as cognitivists, stress that regimes thrive because there are real problems that need attention and because considerable learning takes place on how to improve the payoffs during the interactions of the regime negotiators (Wettestad and Andresen 1994;Hasenclever, Mayer, Rittberger 1997). If positive interaction takes place within regimes, the process of developing and implementing codes of conduct will be enhanced. Continual interaction ensures that critical information about the behavior of each participant in a regime will be transparently known to all others. Participants need to be problem solvers with experience in promoting cooperation between diverse interests (Miles et al. 2002).
But the crucial question in terms of global mass society is which interests are dominant in shaping the new codes of conduct within private-public regimes. The answer seems to be that the codes of successful regimes-those with a high level of norm compliance-are shaped by major powers (Breitmeier, Young, Zürn 2007;Breitmeier, Underdal, Young 2011). The reason is that every regime requires resources that minor powers lack, and INGOs are beggars (Mearsheimer 1994/95;Dowlah 2004;Halabi 2004;Sharman 2011;cf. Strange 1983: 342).
However, various studies have found conflict among the institutions involved in private-public regimes, providing mixed messages that often undermine regime legitimacy. Effectiveness is a function of the differential "payoff structure," which ultimately is assessed through political more than economic considerations (Oye 1986;Young 1999;Lipson 2004;cf. Berkovitch and Gordon 2008).
Nevertheless, private-public regimes are making important contributions to global governance. Compared with the struggles within private-public regimes, more attention to the global masses may be found within IGO-led regimes because the latter serve the most vulnerable of the world's citizens, for example those who suffer from health problems and refugees awaiting resettlement. For those who want to avoid the vagaries of politics, there is another way to build regimes-by the private sector alone.
Private Global Governance
Global regimes emerge entirely in the private sector because governments and IGOs are so preoccupied with issues of political legitimacy and survival that everyday problems of TNCs and the global masses seem less immediate (Hale and Roger 2014;Abbott et al. 2015). The public can have input into the rule-making by attending corporate functions to demand certain standards, but what is more likely is that they will join labor INGOs to seek better working conditions and environmental INGOs to press for environmental sustainability.
Claire Cutler (2002) identifies six types of private international regimes: (1) informal industry norms and practices, such as when European banks only sold Eurobonds to blue-chip companies; (2) coordination service firms, such as how stock exchanges impose requirements before a company can be traded; (3) production alliances, as when a company puts the same label on a product made by several companies; (4) cartels, particularly in the maritime transport industry; (5) business associations, such as the International Business Brokers Association; and (6) private regimes, most notably nongovernmental dispute settlement arbitration. A seventh type should be added-consumer-oriented private regimes of global governance. In the discussion below the first four will be identified as corporate global governance, followed by a section on the latter.
Corporate Global Governance TNCs seek to lower transaction costs in the global economy, thereby increasing profits and lowering prices for everyone. Accordingly, transnational private organizations have been created from "clubs"-working groups consisting of industry representatives, which design practices and rules that have gained wide acceptance in the private sector (Koppell 2010: 241; Prakash and Potoski 2010). The process of developing standards within the "clubs" is identified as "a vast network of committees, subcommittees, and working groups that serve as focal points for the negotiation of individual standards" (Dimitrov et al. 2007: 427). The role of the global masses and even INGOs is nonexistent in most corporate global governance (Koppell 2010: 242).
For example, the International Organization for Standardization establishes rules for products and processes and even allocates a code number to every island on the planet. The International Accounting Standards Board seeks to standardize accounting practices around the world. The International Container Bureau standardizes shipment containers to simplify how shipments are loaded and unloaded from merchant vessels. However, the World Standards Cooperation, which is not limited by sector, is a club of corporations that have established standards for food safety and social responsibility (Prakash and Potoski 2010: 74 n.4).
Moody's and Standard & Poor's, private firms in New York, estimate investment risk by applying a rating system, and thereby direct capital toward some countries and away from others while also affecting interest rates on loans (Sinclair 1994;Halabi 2004: 45). Both companies have power without accountability. Yet even though they were discredited by the financial crisis of 2008/09, they continue to operate without competition.
But something more sinister is going on-a hidden agenda. During the 1980s, the writings of André Gunder Frank (1967, 1969; cf. Easterly 2015) and others dwelled on how TNCs bought up local businesses in Third World countries and then shut them down to dominate the local market while neocolonially extracting resources and decapitalizing the economy. But now TNCs accomplish the same goal of driving out the competition by refusing to purchase products from developing countries if they fail to comply with "global standards," which never take problems of developing country companies into account (Garcia-Johnson 2000; Arnould, Plastina, Ball 2009).
There are several reasons why aspiring businesses in developing countries are being treated so ruthlessly, in addition to the fact that they lack the capital-though not necessarily the will-for compliance (Dimitrov et al. 2007: 427). Democratic developing countries seeking greater prosperity are coerced into abiding by "global standards" because the First World has the investment capital they need (Li and Resnick 2003;Halabi 2004). Noncompliance with "global standards" in authoritarian developing countries has also been attributed to lack of a strong civil society to counter corrupt governments (Drezner and Lu 2009; Berliner and Prakash 2014; Prakash and Potoski 2014)-and the corruption comes from payoffs by TNCs. Susan Strange (1983: 342) long ago identified such efforts as a strategy of economic domination by the United States and its many TNCs (cf. Dowlah 2004;Halabi 2004;Friedrichs 2005). Even within developed countries, small businesses are destined to fail because they lack the resources to conform to "global standards." In the matter of corporate acquisitions and mergers, the International Competition Network has emerged from eighty-four national and transnational agencies. But efforts to forge a regime regarding international competition have been blocked by the United States (Dimitrov et al. 2007: 238-40;cf. Detomasi 2006).
Corporate global governance is a classic case of global mass society: TNCs and related INGOs ignore the adverse impact on the people of the world, instead relying on a top-down narrative. In short, the global market has a wide range of standardized rules that have been developed by corporations and associated business-oriented INGOs within particular industries without inputs from governments or the global masses. Although consumers sometimes benefit by paying less at cash registers and over the Internet due to standardization, another result is that workers are trapped in sweatshops, the environment is endangered, and TNCs aggregate profits and exacerbate global inequality.
Consumer Global Governance: Fair Trade Movement
TNCs trade in a market where some consumers insist on environmentally friendly standards and oppose exploitative labor conditions. Accordingly, market-based regimes have arisen to attract consumers and to circumvent corporate global governance. What have emerged are alternative trade organizations (ATOs), the most famous of which are associated with the Fair Trade Movement. A survey in 2000 found that about 30 percent of Western consumers avoid purchases if they believe that producers have harmed animals, used sweatshops, or contributed to pollution. Thanks to the movement, they can do so. The discussion below focuses on such cooperative efforts in agriculture, clothing, and forestry.
Agriculture The origins of the Fair Trade Movement can be traced to Eduard Douwes Dekker's pseudonymous novel Max Havelaar, or the Coffee Auctions of the Dutch Trading Company (1860), which decried the conditions of workers on coffee plantations in European colonies. Before imperialism intruded, the workers had been communal farmers within agriculturally self-sufficient villages, but afterward they were paid at starvation-level wages. The novel not only inspired anti-colonialism but also spawned the idea of ATOs.
After World War II, churches in Europe and North America began to purchase handicrafts from refugees through such organizations as the Mennonite Central Committee. In 1965, Oxfam set up an ATO to negotiate purchases of goods from primary producers for department stores and similar consumer companies. Then in 1968, the Whole Earth Catalog featured handicraft items so that buyers could contact sellers directly. In 1969, the first WorldShop opened in the Netherlands, with the items purchased from the catalog on display for purchase. WorldShops then spread throughout Western Europe. As the price of coffee plummeted due to more plantations being set up in former colonies, the Havelaar Foundation was started in 1988 to issue "Fair Trade" labels for cans of coffee that met living wage standards.
In 1989, the International Federation of Alternative Trade was formed as an alliance of ATOs, with headquarters in England. Now known as the World Fair Trade Organization (WFTO), members include export marketing companies, importers, national and regional fair trade networks, producer cooperatives and associations, retailers, and support organizations. In 1994, American and Canadian ATOs joined WFTO, which then became the Fair Trade Federation.
For ATOs to be successful, there was a need to put visible labels on products. Accordingly, the Fairtrade Labeling Organization (FLO) was founded in 1997 at Bonn. In 2002, FLO began to label cocoa, coffee, and tea, which had become available at Starbucks from the year 2000. Other fair trade products today include fresh fruits, fruit juices, herbs, honey, rice, sports balls, sugar, vanilla, and even handicraft baskets from Rwanda.
To gain a "fair trade" label, a product must meet several standards. Farmers must receive a living wage, with women paid the same as men. Workers must operate in safe working conditions and have the right to join unions. ATO trade is direct, not involving "middlemen." Producers must be free to invest profits, some of which go to product improvement and scholarships. And production must be ecofriendly, made without harmful chemicals or genetically modified organisms.
FLO split into two entities in 2009. FLO International is a nonprofit that develops standards and licenses ATOs, encouraging producers to process their products before shipment, such as by roasting and packaging products so that they can undersell products processed in developed countries. The profit-making FLO-CERT certifies and monitors producer organizations in more than fifty developed and developing countries. As a result, 1.5 million primary producers around the world receive at least $1 billion of additional income each year.
Clean Clothes Campaign Founded in 1989, the Clean Clothes Campaign aims to ensure decent labor conditions in the garment industry and to avoid child labor (Pruett 2005). Businesses and unions in fifteen European countries have developed more than forty framework agreements between international union federations and TNCs. The campaign has outreach to more than 250 INGOs and NGOs throughout the world.
Forestry
The global environmental regime has many components. Focusing just on the forestry aspects, the International Tropical Timber Organization serves the interests of exporters and importers and refuses to allow input from INGOs (Smouts 2003: 215), thereby allowing ferocious logging practices to continue with impunity.
Then in 1993, the Forest Stewardship Council (FSC) established a certification program that determines which trees can be cut without jeopardizing forest sustainability (Cashore, Auld, Newsom 2004; Gupta and Mason 2014). An Alternative Trading Organization, FSC was set up through negotiations between logging companies and INGOs at the World Summit on Sustainable Development in Johannesburg in 2002 (cf. IIED 1999). FSC's principal decision-making body is a general assembly of some 600 individual and organizational members, though there are three "chambers" dealing with economic, environmental, and social interests and two subchambers in each for input from developing and developed countries, albeit weighted toward the latter, which are more numerous (Dingwerth 2007, 2008). Although South Africa now subcontracts forest surveillance to FSC (Pattberg 2006: 590), few FSC-certified forests are located in developing countries (Dingwerth 2008: 619). Today, Home Depot, Ikea, Lowe's, and more than 300 other businesses sell only FSC-certified products (Domask 2003;Biermann and Pattberg 2010). The movement has even encouraged the World Bank to uphold FSC standards and inspired an agreement to maintain sustainable global forests between the International Federation of Building and Wood Workers and Ikea (Dingwerth 2008: 611, 618). FSC also agreed to a partnership with World Wildlife Fund, now the World Wide Fund for Nature (Smouts 2003: 216).
Implications Perspicacious consumers are now in a position to undermine reprehensible practices that take place in the global economy. But not all consumers have the knowledge or can pay the extra amounts that are charged for Fair Trade products. Such efforts to divert consumers may merely reinforce the dominance of the transnational corporate structure, according to Ronnie Lipschutz (2005).
In addition, less developed countries have been largely outside private global governance regimes (Ronit and Schneider 1999: 246), mostly because they are economically outclassed by developed countries and lack the expertise in technical areas necessary to comply with strict standards. Nevertheless, they are increasingly trying to play a role in ATOs and the Fair Trade Movement (Dingwerth 2008).
Conclusion Global mass society continues, with no clear solution to such global problems as economic inequality, environmental fragility, and massive human rights violations. The main problem is not gridlock but instead the failure of global civil society to penetrate the global economic power structure and to discredit the culture of consumerism. Exponents of democratic global governance are nevertheless encouraged by some developments leading toward stakeholder democracy (Tallberg et al. 2013: 257), but progress has been limited.
Many problems were anticipated four decades ago. On May 1, 1974, a proposal for a New International Economic Order (NIEO) was presented to the UN General Assembly. A New International Information and Communications Order was also proposed as a prerequisite to NIEO (MacBride 1980). The NIEO proposal would have given greater voice to developing countries in the construction of an economic regime among capitalist countries (Bhagwati 1977;Murphy 1984). Before the end of 1974, the General Assembly adopted a Charter of Economic Rights and Duties of States, which called for the redistribution of wealth and political power as well as the promotion of global justice, assigning "duties" to developed countries and "rights" to developing countries. But all three initiatives, which recognized the existence of a global mass society, were stillborn, strongly opposed during the Cold War by the United States.
Today, despite the end of the Cold War, TNCs still block global reforms (Mearsheimer 1994/95), resulting in the marginalization of the developing world (Dowlah 2004;Halabi 2004). Even smaller developed countries are among those trapped by the consequences of out-of-control dominance by TNCs in the global economy.
Barriers to global democracy include the failure of cosmopolitanism to outweigh consumerism, media that cover rather than question global dysfunction, the weakness of people-oriented pressure groups, intergovernmental organizations that await funding before action, and TNC global governance regimes that ignore the consequences of their rules on small businesses, minor and developing countries, and of course the people ultimately affected.
Many scholars hold out the hope that democratic global governance is the answer (Goodhart and Taninchev 2011), but anti-globalization is now evidenced by the rise of nationalist movements led by radical leaders who promise to fix the problems even though they cannot, and instead seek to hold onto power while paying more attention to scapegoating than to global democratization, environmental sustainability, and human rights (Marchetti 2008a, b;Kirchick 2017;Mishra 2017;Peer 2017).
Many supporters of global governance are aware that they are celebrating how Western power has shaped a world that has long neglected countries in Africa, Asia, and elsewhere, where most of the world's population lives. China, the world's largest economy, will ultimately play a larger role (Rachman 2017), though it seems doubtful that either Beijing or the BRICS will serve as an intermediary between the West and smaller countries. The global mass society that the West has created through economic and military domination is not receding.
Nevertheless, heroic inroads are now being made, as nongovernmental organizations are increasingly allowed a voice in the deliberations of intergovernmental bodies. Some but not all regimes consisting of partnerships between private and public entities are providing more global democracy, especially in regard to the environment. Alternative trade organizations compete with TNC dominance in world trade.
Although world federalism is a utopian plan to end global mass society, practical alternatives are not gaining support. Those challenges are addressed in the final chapter.
Real-Time Estimation of Population Exposure to PM2.5 Using Mobile- and Station-Based Big Data
Extremely high fine particulate matter (PM2.5) concentration has been a topic of special concern in recent years because of its important and sensitive relation with health risks. However, many previous PM2.5 exposure assessments have practical limitations, due to the assumption that population distribution or air pollution levels are spatially stationary and temporally constant and that people move within regions of generally the same air quality throughout a day or other time periods. To deal with this challenge, we propose a novel method to achieve the real-time estimation of population exposure to PM2.5 in China by integrating mobile-phone locating-request (MPL) big data and station-based PM2.5 observations. Nationwide experiments show that the proposed method can yield the estimation of population exposure to PM2.5 concentrations and cumulative inhaled PM2.5 masses with a 3-h updating frequency. Compared with the census-based method, it introduced the dynamics of population distribution into the exposure estimation, thereby providing an improved way to better assess the population exposure to PM2.5 at different temporal scales. Additionally, the proposed method and dataset can be easily extended to estimate other ambient pollutant exposures such as PM10, O3, SO2, and NO2, and may hold potential utility in supporting environmental exposure assessment and related policy-driven environmental actions.
Introduction
Air pollutants, especially fine particulate matter such as PM 2.5 (particles with an aerodynamic diameter less than 2.5 µm), have been the focus of increasing public concern because of their strong relation with health risks [1,2]. Numerous epidemiologic studies have established robust associations between long-term exposure to PM 2.5 and premature mortality associated with various health conditions-such as heart disease, cardiovascular and respiratory diseases, and lung cancer-that substantially reduce life expectancy [2][3][4][5][6][7]. With the unprecedented economic development and urbanization over the past three decades, severe and widespread PM 2.5 pollution has been one of the biggest health threats in China [8,9]. The Ministry of Environmental Protection reported that only eight of the 74 monitored cities met China's ambient air quality standards (annual mean: 35 µg/m 3 ; and 24-h mean: 75 µg/m 3 ) in 2014 [10], and the number of cities was only three in 2013 [11].
people move within regions of generally the same air quality throughout a day or other time periods. Thus, real-time estimation of population exposure to PM 2.5 concentrations is urgently needed, both for instant or short-term assessments (e.g., hourly or short-term PM 2.5 concentrations are more relevant to vulnerable population groups than daily or monthly averages [14]) and for cumulative exposure effects (the aggregation of short-term assessments is more robust than the monthly or annual average).
To address these ubiquitous challenges, more information on human space-time location is required. Some previous studies have tried to use survey data, such as travel questionnaires and personal GPS or smart-sensor-based devices [14,31,32], to delineate how an individual moves through the city in daily life. For example, Lu and Fang [32] used a GPS-equipped portable air sensor to measure air pollutant intake in individuals' immediate surroundings along their space-time movement trajectories in Houston, Texas. However, the high expense and limited samples within local areas of such approaches restrict data availability. The alternative approach is to use mathematical models to simulate population mobility patterns, such as the gravity model [33] and the radiation model [34]. Such methods allow us to draw more quantitative conclusions from a larger population size, but their results are only valid for situations with similar initial parameters in the simulation process [29]. Recently, Park and Kwan [14] simulated 80 possible daily movement trajectories based on daily trip distribution data from the Congestion Management Program Report to reflect the actual commuting tendencies of Los Angeles County (USA) residents, and estimated exposure risks by considering the interactions between air pollution and individuals' locations. However, such studies are still constrained to limited spatial and temporal scales. With the rapid growth of the mobile internet, especially the location-based services of smartphone applications (apps), it has become possible to access direct spatiotemporal records of human activities [35,36]. Additionally, the high correlation between mobile-phone locating-request records and the spatiotemporal characteristics of human activities has been revealed by many studies [37][38][39]. A growing number of studies have started to use mobile phone data in the field of environmental exposure assessment [29,30,40]. For example, Dewulf et al. [29] collected mobile phone data of approximately five million mobile users in Belgium to calculate daily exposure to NO 2 . Gariazzo et al. [30] conducted a dynamic city-wide air pollution (NO 2 , O 3 , and PM 2.5 ) exposure assessment by using time-resolved population distributions derived from mobile phone traffic data and modelled air pollutant concentrations. Yu et al. [40] combined cell phone location data from 9886 SIM-card IDs in Shenzhen, China to assess the misclassification errors in air pollution exposure estimation. Although all these pioneering studies highlight the promising advantages of incorporating population dynamics in estimating air pollution exposure, the available datasets are still limited in sample size and spatiotemporal scale due to the cost and time of collecting fine-resolution data, data privacy and confidentiality issues, and computational complexities [41].
To investigate the nationwide PM 2.5 concentration risks for the population in China, spatially explicit and temporally continuous studies are needed to detect hotspots, estimate vulnerability, and assess population exposure at finer temporal scales. In this paper, we propose a novel approach to achieve the real-time estimation of population exposure to PM 2.5 by integrating mobile-phone locating-request (MPL) big data and station-based PM 2.5 observations. Compared with previous studies regarding ambient pollution exposure assessments, it has the following highlights. First, the proposed method introduces the dynamics of population distribution into the nationwide exposure estimation, thereby providing an improved way to better assess the actual exposure risk to PM 2.5 at different temporal scales. Second, to the best of our knowledge, this is the first time that real-time estimation of nationwide population exposure to PM 2.5 has been provided at the pixel level (~1.2 km) in China. Third, the proposed method and dataset can be easily extended to estimate other ambient pollutant exposures such as PM 10 , O 3 , SO 2 , and NO 2 , and may hold potential utility in supporting environmental exposure assessments and related policy-driven environmental actions.
Ground-Station PM 2.5 Measurements
Hourly ground-station PM 2.5 measurements from 1 March to 31 March 2016 were collected from the official website of the China Environmental Monitoring Center (http://113.108.142.147:20035/emcpublish/). According to the Chinese National Ambient Air Quality Standard (CNAAQS), the station-based PM 2.5 data in China were obtained using the tapered element oscillating microbalance method (TEOM) or the beta-attenuation method, combined with periodic calibration. In this study, we used a total of 1465 monitoring stations (Figure 1) that have been established in all provinces for monitoring ambient air quality.
Ground-Station Meteorological Measurements
Ground-station meteorological variables, including air temperature (AT), surface wind speed (WS), and horizontal visibility (VIS), were obtained from the Global Telecommunication System (GTS) established by the World Meteorological Organization (https://rda.ucar.edu/datasets/ds461.0/). In this study, the 3-h measurements (from 2:00 a.m. to 11:00 p.m. local time) from 411 stations in China and 128 stations within the 0.01-degree buffer zones around the boundary of China (Figure 1) were collected from 1 March to 31 March 2016.
Mobile Phone Locating-Request Big Data
By retrieving real-time locating requests from mobile phone users' activities in apps, the mobile phone locating-request (MPL) data were used in this study to monitor human movement. The MPL data are from the Tencent big data platform in China, which is one of the largest Internet service providers both nationwide and worldwide. All of the MPL data are produced by active smartphone users using apps that have been enabled to report real-time locations from the mobile devices. Due to the widespread usage of Tencent apps (e.g., WeChat, QQ, Tencent Map, etc.) and their location-based services, the daily locating records reached 36 billion from more than 450 million users globally in 2016 [42]. Thus, the MPL big data can serve as an indicator to characterize human activities and population distribution at a fine spatiotemporal scale. The Tencent MPL dataset used in this study was collected from 1 March to 31 March 2016 via the application program interface (API) from the Tencent big data platform (http://heat.qq.com). The original Tencent MPL dataset was recorded by aggregating the real-time locations of active app users every five minutes within a mesh grid at a spatial resolution of 30 arc-second (~1.2 km). All information regarding users' identities and privacy was removed from this publicly available dataset.
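The 5-min-to-3-h aggregation described above can be illustrated with a short sketch. The column names (grid_id, timestamp, requests) and the toy records are hypothetical stand-ins for the Tencent data, whose actual schema is not described here, and the windows are simply floored to multiples of three hours rather than aligned to the paper's 2:00 a.m. starting point.

```python
import pandas as pd

# Hypothetical 5-min locating-request records: one row per ~1.2 km grid cell
# and 5-min interval, giving the number of requests observed in that interval.
records = pd.DataFrame({
    "grid_id": [101, 101, 101, 102, 102],
    "timestamp": pd.to_datetime([
        "2016-03-01 10:05", "2016-03-01 10:10", "2016-03-01 12:40",
        "2016-03-01 10:05", "2016-03-01 13:15",
    ]),
    "requests": [12, 7, 20, 3, 9],
})

# Collapse the 5-min records into 3-h windows so that the MPL counts line up
# with the 3-h PM2.5 estimates used later in the paper.
records["window"] = records["timestamp"].dt.floor("3H")
mpl_3h = (records.groupby(["grid_id", "window"], as_index=False)["requests"].sum()
                 .rename(columns={"requests": "locating_requests"}))
print(mpl_3h)
```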
Population Census Data
The latest city-level population census of China in 2014, obtained from the national scientific data sharing platform for population and health (http://www.ncmi.cn/), was used in this study. This dataset was established and maintained by the infectious disease network reporting system, and it was derived from the population census released by the State Statistics Bureau. It has collected complete population census data, including permanent and registered residents, at the county level by gender and age group since 2004.
Estimation of Spatiotemporal Continuous PM 2.5 Concentrations
Due to the difference in geographic locations between PM 2.5 monitoring stations and meteorological stations, all datasets were processed to be consistent in spatial and temporal domains. The meteorological variables were first interpolated by the ordinary Kriging method [43] to obtain data covering the entire study area with a spatial resolution of 30 arc-second (~1.2 km). To mitigate interpolation biases, we averaged all meteorological observations within a 30 arc-second search radius around each PM 2.5 monitoring station, and then assigned the result to the corresponding PM 2.5 monitoring station. In addition, the widely used Geographically Weighted Regression (GWR) model [44] with adaptive Gaussian bandwidth was adopted to build the statistical relationship between meteorological variables and PM 2.5 concentrations. Specifically, we grouped all variables within a month into 8 time points (i.e., 2:00 a.m., 5:00 a.m., . . . , 11:00 p.m.), and then developed 8 GWR models, one for each time point, as follows:

PM 2.5,i,t = β 0,i,t + β 1,i,t × VIS i,t + β 2,i,t × AT i,t + β 3,i,t × WS i,t (1)

where PM 2.5,i,t denotes the PM 2.5 concentration at location i at time t; VIS i,t , AT i,t , and WS i,t denote the visibility (m), air temperature (°C), and surface wind speed (m/s), respectively, at location i at time t; and β 0,i,t , β 1,i,t , β 2,i,t , and β 3,i,t are the corresponding regression coefficients at location i at time t. A 10-fold validation analysis [45] was adopted to evaluate the modeling performance by comparing the estimated and measured PM 2.5 concentrations (details can be found in Supplementary Materials). With the iterative cross validations, the optimal coefficients at each time point were retrieved and then used to estimate gridded PM 2.5 concentrations for the entire study area at a spatial resolution of 30 arc-second (~1.2 km).
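The local regression behind Equation (1) can be sketched as a Gaussian-kernel weighted least-squares fit at each grid cell. The fixed bandwidth, the station coordinates, and the meteorological values below are illustrative assumptions only; the paper uses an adaptive bandwidth and the real station networks, so this is a minimal sketch of the idea rather than the study's implementation.

```python
import numpy as np

def gwr_predict(target_xy, target_x, station_xy, station_X, station_y, bandwidth):
    """Estimate PM2.5 at one grid cell from nearby stations, Equation (1)-style.

    target_xy  : (2,) coordinates of the grid cell
    target_x   : (3,) interpolated [VIS, AT, WS] at the grid cell
    station_xy : (n, 2) station coordinates
    station_X  : (n, 3) [VIS, AT, WS] observed at the PM2.5 stations
    station_y  : (n,) PM2.5 observations at the stations
    """
    d = np.linalg.norm(station_xy - target_xy, axis=1)         # distances to stations
    w = np.exp(-0.5 * (d / bandwidth) ** 2)                    # Gaussian kernel weights
    X = np.column_stack([np.ones(len(station_X)), station_X])  # add intercept column
    XtW = X.T * w                                              # X^T diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ station_y)           # local coefficients beta_0..beta_3
    return float(np.concatenate(([1.0], target_x)) @ beta)     # local prediction

# Illustrative stations: [VIS (m), AT (degC), WS (m/s)] and observed PM2.5 (ug/m3).
station_xy = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.0], [1.2, 1.1], [0.6, 0.5]])
station_X = np.array([[8000, 12.0, 2.1], [5000, 15.0, 1.0], [12000, 9.0, 3.2],
                      [6000, 14.0, 2.6], [9000, 10.5, 1.4]])
station_y = np.array([60.0, 95.0, 35.0, 80.0, 55.0])

pm25 = gwr_predict(np.array([0.4, 0.6]), np.array([9000, 11.5, 2.5]),
                   station_xy, station_X, station_y, bandwidth=0.8)
print(round(pm25, 1))
```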
Estimation of Real-Time Population Distribution by Integrating MPL and Census Data
The mobile phone locating-request (MPL) data can serve as an indicator to delineate the spatiotemporal pattern of population distribution; however, the MPL data do not represent the actual population sizes. In this study, we first aggregated the 5-min MPL data into 3-h MPL data, making its temporal resolution consistent with that of the estimated PM 2.5 concentrations, calculated the pixel-based population density using the MPL data, and then applied the MPL-based population density map to downscale the census data. Consequently, we can obtain the 3-h pixel-based population approximations. Given the differences in physical environment and socio-economic development across various areas of China, downscaling the census data with the MPL data at the national scale would undoubtedly result in the underestimation of population in under- and less-developed areas and the overestimation of population in developed areas. To solve this problem, we decided to estimate real-time population distribution by integrating MPL and census data at the city level. The 3-h MPL map was used to redistribute the census data for each city by Equations (2) and (3), under the assumption that inter-city mobility will not dramatically influence the total population of a city within a short time window. Finally, we could obtain the 3-h pixel-based population approximation for each city, and then conducted image mosaicking to produce the 3-h national-scale population distribution map of China.
W i,j = p i,j / (p 1,j + p 2,j + . . . + p n,j) (2)

Pop i,j = W i,j × TR (3)

where p i,j is the number of locating requests within the i-th pixel at hour j, n is the total number of pixels within a city, W i,j is the weight for redistributing population, and TR is the total population of the city from the census data. Pop i,j denotes the population approximation in the i-th pixel at hour j.
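Equations (2) and (3) amount to sharing each city's census total across its pixels in proportion to the pixels' locating-request counts. The sketch below assumes a simple table layout (city, pixel_id, requests) and made-up census totals; it is not the study's code.

```python
import pandas as pd

# Hypothetical 3-h MPL counts per pixel, tagged with the city each pixel belongs to.
pixels = pd.DataFrame({
    "city": ["A", "A", "A", "B", "B"],
    "pixel_id": [1, 2, 3, 4, 5],
    "requests": [120, 60, 20, 300, 100],
})
census_total = {"A": 500_000, "B": 2_000_000}   # TR in Equation (3), per city

# Equation (2): each pixel's share of its city's locating requests in this window.
pixels["weight"] = pixels["requests"] / pixels.groupby("city")["requests"].transform("sum")
# Equation (3): redistribute the city census total by that share.
pixels["population"] = pixels["weight"] * pixels["city"].map(census_total)
print(pixels[["city", "pixel_id", "population"]])
```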
Real-Time Estimation of Population Exposure to PM 2.5
Since the levels of PM 2.5 concentration and population distribution vary spatially and temporally, here we adopted the population-weighted metric (Equation (4)) to estimate the real-time exposure risks to PM 2.5 concentrations, which is likely to be more representative of population exposure to PM 2.5 across different temporal scales [46]:

PWP = ∑(pop i × pm i ) / ∑ pop i , i = 1, . . . , N (4)

where pop i and pm i denote the population and PM 2.5 concentration level in the i-th pixel, and N is the total number of pixels within the corresponding administrative unit. PWP is the population-weighted PM 2.5 concentration level for the targeted administrative unit. With the PM 2.5 concentrations and population distribution estimated in previous sections, we could integrate them based on Equation (4) to provide the estimation of population exposure to PM 2.5 with a 3-h updating frequency, thereby being able to track the real-time dynamics of exposure risks by considering the spatiotemporal variation of PM 2.5 concentration and population distribution.
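Equation (4) is a weighted average, so a direct sketch is short. The pixel populations and concentrations below are illustrative only.

```python
import numpy as np

def population_weighted_pm25(population, pm25):
    """Equation (4): population-weighted PM2.5 over the pixels of one administrative unit."""
    population = np.asarray(population, dtype=float)
    pm25 = np.asarray(pm25, dtype=float)
    return float(np.sum(population * pm25) / np.sum(population))

# Illustrative 3-h snapshot for one city: pixel populations and PM2.5 levels (ug/m3).
pop = [12_000, 3_000, 45_000, 800]
pm = [80.0, 55.0, 110.0, 40.0]
print(round(population_weighted_pm25(pop, pm), 1))  # pulled toward the densely populated pixel
```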
Estimation of Cumulative Inhaled PM 2.5

PM 2.5 causes acute and chronic adverse effects on human health mainly by means of inhalation exposure. To our understanding, deriving estimates of cumulative inhaled PM 2.5 masses is one of the most important prerequisites for accurately modeling the relationship between PM 2.5 exposure and human health [47][48][49]. Thus, we proposed to incorporate human respiratory volume and the spatiotemporal variation of PM 2.5 concentration and population density to present a better estimation of cumulative inhaled PM 2.5 (Equation (5)), where p i and h i denote the population and the inhaled volume of air for the i-th age group, N is the total number of age groups, t denotes the time (hours in this study), m(t) denotes the PM 2.5 concentration level at time t, T is the target temporal period, d i is the percentage of the outdoor population, and α is the outdoor-indoor ratio of PM 2.5 concentration.
However, recent studies of the outdoor-indoor ratio of PM 2.5 concentrations are all limited to local scales for the purpose of experimental tests [50], as it is difficult to acquire valid observations of this ratio on a large scale. More importantly, the outdoor-indoor ratio is influenced by several factors such as geographic location, building structure, and living habits. In addition, the inhaled volume of air differs not only across age groups but also with physical activity, gender, and body size, and all of these factors would affect the inhaled value [51,52]. Thus, we had to simplify the ideal model in Equation (5) to make it suitable for nationwide estimates of cumulative inhaled PM 2.5 masses, by neglecting the difference between outdoor and indoor PM 2.5 concentration exposure and the differences in inhaled air volume among age groups, genders, and other related factors. In this way, we can directly obtain the estimation of cumulative inhaled PM 2.5 masses with the simplified model (Equation (6)), where InPM 2.5 denotes the cumulative inhaled PM 2.5 mass from the simplified model, and h denotes the empirical inhaled volume of air. A measurement conducted by Adams [51] based on 200 individuals showed that the hourly average volume of air breathed by adults when sitting or resting ranged from 0.42 to 0.63 m 3 (i.e., 10.08 to 15.12 m 3 /day), while the volumes for walking were from 1.20 to 1.44 m 3 /h, and for running from 3.10 to 3.48 m 3 /h. Thus, the average inhaled volume of air for an individual is assumed to be 15 m 3 /day in this study [52].
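Because the simplified equation itself is not reproduced above, the sketch below only assumes that it multiplies each 3-h concentration a person experiences by the air volume breathed in that window (15 m3/day spread evenly over the day) and sums the products. This is one plausible reading of the description, not the paper's verbatim formula.

```python
HOURS_PER_STEP = 3                    # updating frequency of the exposure estimates
BREATHING_RATE_M3_PER_DAY = 15.0      # empirical inhaled air volume h assumed above

def cumulative_inhaled_pm25(concentrations_ug_m3):
    """Sum of (3-h concentration x air volume breathed in that window), in micrograms,
    for one person, ignoring indoor/outdoor differences and age groups."""
    volume_per_step = BREATHING_RATE_M3_PER_DAY * HOURS_PER_STEP / 24.0  # m3 per 3-h window
    return sum(c * volume_per_step for c in concentrations_ug_m3)

# One day of 3-h PM2.5 concentrations (ug/m3) experienced by a person (illustrative).
day = [45, 40, 60, 110, 95, 80, 70, 55]
print(round(cumulative_inhaled_pm25(day), 1), "ug inhaled over 24 h")
```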
Comparison of Exposure Assessments from the MPL-Based and Census-Based Methods
In order to investigate whether the improvement of incorporating dynamic population distributions does make a difference in the exposure assessment, we intuitively compared the MPL-based and census-based calculations of cumulative inhaled PM 2.5 masses and population-weighted PM 2.5 exposure concentrations in China's 359 cities across different temporal scales (i.e., 3-h, 1-day, 1-week, and 1-month). For each city, the population from the census data was directly used in the census-based method, while the redistributed population dynamics was used in the MPL-based method.
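One way to express this comparison is as a percentage bias of the census-based assessment relative to the MPL-based one, per city and temporal scale. The figures below are made up for illustration and are not results from the study.

```python
def relative_bias_percent(mpl_based, census_based):
    """Over- (+) or under- (-) estimation of the census-based assessment
    relative to the MPL-based one."""
    return 100.0 * (census_based - mpl_based) / mpl_based

# Hypothetical population-weighted PM2.5 (ug/m3) for one city at several scales.
comparisons = {
    "3-h": (92.0, 70.5),
    "1-day": (85.0, 71.0),
    "1-week": (78.0, 72.5),
    "1-month": (74.0, 72.0),
}
for scale, (mpl, census) in comparisons.items():
    print(f"{scale}: census-based differs from MPL-based by "
          f"{relative_bias_percent(mpl, census):+.1f}%")
```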
Different Facets of Population Exposure to PM 2.5
The spatiotemporal integration of PM 2.5 concentration and population density was used to produce thematic information that document different facets of population exposure to PM 2.5 . Figure 2 shows an extracted example from the time-series analysis of population exposure to PM 2.5 in China. Figure 2a shows the real-time nationwide estimation of population distribution (11:00 a.m.) on 1 March 2016, which is derived by integrating MPL and census data at a city-level scale in Section 2.6. The intensity represents the specific population number in each gridded pixel with stretched colors from blue to red denoting varied population size. Figure 2b shows the real-time nationwide estimation of PM 2.5 concentrations (11:00 a.m.), which is derived from incorporating ground-station PM 2.5 measurements and meteorological variables based on GWR models in Section 2.5. Figure 2c shows the nationwide estimation of 24-h cumulative inhaled PM 2.5 masses. Figure 2d shows the estimation of 24-h cumulative inhaled PM 2.5 masses based on the census data.
Temporal Dynamics of Population Exposure to PM 2.5
In the form of Figure 2a
Comparison of Exposure Assessment Methods
From visual inspection of Figure 2c,d, it can be seen that the MPL-based method yields gridded cumulative inhaled PM 2.5 masses, whereas the census-based assessments are only based on administrative units (cities in this study), which shows that the MPL-based method improves the spatial resolution of the basic cells in exposure assessments from administrative units to gridded pixels. In addition, by comparing the cumulative inhaled PM 2.5 masses and population-weighted PM 2.5 exposure concentrations in China's 359 cities across different temporal scales, the results in Figure 4 show that without introducing the dynamics of population distribution into the exposure assessment, the maximum biases (over- or under-estimation) of cumulative inhaled PM 2.5 mass reach over 100% across different temporal scales. Meanwhile, the maximum biases of population-weighted PM 2.
Discussion
Compared with previous methods for air pollution exposure assessment, the proposed method in this study properly considered the spatiotemporal variability of both population distribution and PM 2.5 concentration levels, thereby contributing to a better exposure assessment. The relative advantage of our method may be due to the following strengths. First, the spatiotemporal variability of PM 2.5 concentrations and population distribution is incorporated in the air pollution exposure assessments. Given that the level of PM 2.5 concentrations is continuously changing over space and time and human beings are also mobile across spatiotemporal scales [14], both of these dynamic characteristics and their interactions at finer spatiotemporal scales should be well considered to estimate population exposure risks. However, many previous studies used census data with the assumption that people are non-mobile or move within regions of generally the same air quality throughout a day or other time periods, thus leading to considerable biases in actual air pollution exposure assessments. In reality, people in different areas experience different levels of PM 2.5 concentrations across different temporal scales. In order to characterize the interaction between population dynamics and PM 2.5 concentrations, here we used the mobile-phone locating-request (MPL) big data to quantify the dynamics of population distribution. By integrating the MPL and census data, we then derived real-time pixel-based population dynamics at the nationwide scale. Combining this nationwide population dynamic information and surface-based PM 2.5 concentrations simultaneously is of great importance for assessing the actual population exposure to PM 2.5 at different temporal scales. Second, the characterized dynamics of PM 2.5 concentrations and population in the proposed method keep a consistent spatiotemporal scale. The MPL data used in this study were initially retrieved at a 5-min updating temporal resolution from the Tencent big data platform. We further aggregated the 5-min updating MPL data into 3-h synthetic data, making it temporally comparable to the updating frequency of the nationwide surface-based PM 2.5 concentrations. Meanwhile, the PM 2.5 concentrations are also produced at a 30 arc-second (~1.2 km) spatial resolution, which is the same as that of the MPL data. These efforts contribute much to achieving near real-time (3-h) estimates of national population exposure to PM 2.5 at the pixel-based level in China. Third, the presented model incorporated human respiratory volume and the spatiotemporal variation of PM 2.5 concentration and population density to estimate cumulative inhaled PM 2.5 masses. This will contribute to advancing the quantitative modelling of the relationship between PM 2.5 exposure, health risks, and life expectancy.
Besides PM 2.5 , the ground monitoring stations are usually coupled with sensors measuring other air pollutants such as PM 10 , SO 2 , NO 2 , and O 3 . With a similar framework integrating mobile phone big data and air pollutant concentrations, the proposed method can also be customized to estimate population exposure risks to these ambient pollutants in China. Compared with the census-based method, the MPL-based method can yield near real-time estimations of population exposure to ambient pollutants. That is, we can achieve the estimation of air pollution exposure risks at any specific location and time on a large scale by combining the spatiotemporal variability of population distribution and air pollutant concentrations. By aggregating the short-term exposure assessments into longer temporal scales, we can also derive more robust and reliable estimations related to the chronic effects of air pollutants. Additionally, the proposed framework can also be applied to estimate the real-time number of people exposed to poor air quality by updating the population distribution and air pollutant concentrations.
Meanwhile, some potential concerns regarding the implementation of the proposed method should be pointed out. First, in order to redistribute the census data to derive real-time population dynamics using the MPL data, we assume that the total population of each administrative unit (359 cities in this study) is constant, since inter-city mobility (the trade-off of inflow and outflow population) will not dramatically influence the total population of a city within a short time window. Thus, human movements and migrations across administrative units are neglected in this study. Second, volunteer-produced geospatial big data, such as the MPL records in this study, tend to leave out some population groups because children, the elderly, and the poor are less-frequent active users. Nevertheless, such data can still quantify actual population distribution patterns well [35,37,38] because of the massive volumes of data records. Taking the MPL records in China on 1 March 2016 as an example, the total number of locating-request records reaches 1.71 billion. By aggregating all MPL records from 1 March to 31 March 2016, the total number of locating-request records is approximately 60 billion, thereby providing a robust measurement of population dynamics. Third, although the nationwide PM 2.5 concentrations used in this study are estimated by incorporating the meteorological variables and ground-based PM 2.5 measurements with the GWR models, the spatial interpolations still limit the estimation accuracy in areas without sufficient inputs of station-based variables. As a result, even where there is much greater spatial variation in the population data, there will be relatively less spatial variation in PM 2.5 concentrations, which may lead to no significant impacts on the exposure assessments. However, from the comparison of exposure assessments between the MPL-based and the census-based methods, we can still identify considerable differences. Thus, if we can further improve the estimation of PM 2.5 concentrations, such as by developing a spatial-temporal integrated method combining satellite-based and station-based observations guided by the diurnal change pattern of PM 2.5 concentrations, land cover/use types, landscape topography, and related meteorological variables, the combination of the mobile phone big data and the improved air pollutant concentrations will contribute to a more reliable exposure assessment. Finally, the simplified model, which does not consider the outdoor-indoor ratio of PM 2.5 concentrations and the differences in inhaled air volume among population groups, may bias the assessment of actual cumulative inhaled PM 2.5 masses. As the Tencent-based MPL dataset was recorded by aggregating the real-time locations of active app users within a mesh grid at a spatial resolution of 30 arc-second (~1.2 km), without differentiating individuals' moving trajectories or population groups, it was impractical to apply empirical parameters in the exposure assessment at a nationwide scale, since the outdoor-indoor ratio of PM 2.5 concentrations is influenced by several factors such as geographic location, building materials, and living habits. Similarly, the gridded MPL data, which do not track individuals' trajectories, also prevented us from considering commuting patterns or choices of transport mode.
However, the MPL dataset is currently the accessible data source with the best spatial resolution and real-time updating of population distribution. Meanwhile, the estimates in the experimental test reflect a trade-off between over- and under-estimation of the cumulative inhaled PM2.5 mass. On the one hand, these estimates are upper bounds, since we do not consider the time people spend in indoor environments or commuting in vehicles. On the other hand, the cumulative inhaled PM2.5 mass could be even higher, because we use a constant value representing a low inhaled air volume for an adult without considering factors such as physical activity, gender, and body size [51]. Thus, the over- and under-estimates partly balance each other out, providing a general assessment of cumulative inhaled PM2.5 mass at large scales.
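The cumulative inhaled mass that this trade-off refers to can be written generically as follows (the notation here is illustrative, not taken from the original text):

\[ D = \sum_{t=1}^{T} C_t \cdot \mathrm{IR} \cdot \Delta t \]

where C_t is the PM2.5 concentration at the location occupied during interval t, IR is the constant adult inhalation rate assumed in the experimental test, and Δt is the 3-h updating interval. Ignoring indoor or in-vehicle time inflates the C_t terms, while the low constant IR deflates the product, which is the balance described above.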
Conclusions
This study combined mobile phone big data and station-based PM2.5 measurements to achieve real-time estimation of population exposure to PM2.5 in China. The results showed that the proposed method can quantify the dynamics of real-time population distribution and estimate population exposure to PM2.5 concentrations and cumulative inhaled PM2.5 mass at a 3-h updating frequency. The study provides a novel framework for environmental exposure assessment that considers the spatiotemporal variability of both population distribution and PM2.5 concentrations and can be customized to estimate exposure risks for other ambient pollutants. These findings and methods may be useful in supporting environmental exposure assessment and related policy-driven environmental actions.
TCTP as a therapeutic target in melanoma treatment
Background: Translationally controlled tumour protein (TCTP) is an antiapoptotic protein highly conserved through phylogeny. Translationally controlled tumour protein overexpression was detected in several tumour types. Silencing TCTP was shown to induce tumour reversion. There is a reciprocal repression between TCTP and P53. Sertraline interacts with TCTP and decreases its cellular levels. Methods: We evaluate the role of TCTP in melanoma using sertraline and siRNA. Cell viability, migration, and clonogenicity were assessed in human and murine melanoma cells in vitro. Sertraline was evaluated in a murine melanoma model and was compared with dacarbazine, a major chemotherapeutic agent used in melanoma treatment. Results: Inhibition of TCTP levels decreases melanoma cell viability, migration, clonogenicity, and in vivo tumour growth. Human melanoma cells treated with sertraline show diminished migration properties and capacity to form colonies. Sertraline was effective in inhibiting tumour growth in a murine melanoma model; its effect was stronger when compared with dacarbazine. Conclusions: Altogether, these results indicate that sertraline could be effective against melanoma and TCTP can be a target for melanoma therapy.
Translationally controlled tumour protein (TCTP), initially named Q23, P21, and P23 according to its molecular mass, is involved in regulating fundamental cellular processes and is overexpressed in several tumour types (Chung et al, 2000; Amson et al, 2013). Since its characterisation, TCTP has been related to tumourigenesis and cancer progression (Koziol and Gurdon, 2012). Numerous studies show that TCTP levels in tumours are higher than those in the corresponding normal tissues, including prostate, renal, breast, and lung cancers (Amson et al, 2013; Acunzo et al, 2014; Ambrosio et al, 2015; Rocca et al, 2015). These observations point to TCTP's critical role in tumourigenesis and highlight its putative role as a therapeutic target in several cancers. Silencing TCTP was shown to induce tumour reversion, a process overriding the malignant process at the molecular level (Tuynder et al, 2002, 2004; Telerman and Amson, 2009). The decrease in TCTP levels was related to inhibition of tumour growth and the loss of tumour features (high levels of cell proliferation and migration) (Amson et al, 2013; Acunzo et al, 2014). Sertraline was first investigated in the context of TCTP and tumours because of the similarity of its structure to antihistaminic compounds; because TCTP acts as a histamine-releasing factor, the hypothesis that inhibitors of the histaminic pathway could be effective against tumour cells was evaluated. Several articles had already shown that sertraline inhibits tumour growth in vivo (Tuynder et al, 2004). Epidemiologic studies have shown a protective effect and a decreased risk of tumour development (in breast, colorectal, and lung cancers) among users of high doses of selective serotonin reuptake inhibitors, including sertraline (Xu et al, 2006; Coogan et al, 2009; Toh et al, 2009; Wernli et al, 2009). There is a negative feedback loop between TCTP and P53: translationally controlled tumour protein promotes P53 degradation, inhibits MDM2 auto-ubiquitination, and promotes MDM2-mediated ubiquitination and degradation of P53, while P53 directly represses TCTP transcription (Amson et al, 2012). In this context, sertraline binds directly to TCTP (Amson et al, 2013). The effects of decreasing TCTP in melanoma were analysed here using both sertraline and siRNA. In vivo analysis was performed using a C57BL/6 mouse model and compared with the alkylating agent dacarbazine (DTIC). Although DTIC is a long-established and standard treatment for metastatic melanoma, its efficiency is low (Pretto and Neri, 2013). The results reported here provide a basis for the evaluation of TCTP targeting in melanoma and suggest sertraline as a potential drug.
MATERIALS AND METHODS
Cell culture and animals. Human melanoma cell lines and murine melanoma cells (B16-F1 and B16-F10) were obtained from the ATCC (American Type Culture Collection, Manassas, VA, USA). Murine cells were cultured in DMEM medium and human cells were maintained in RPMI 1640, both supplemented with 10% (v/v) fetal bovine serum (FBS) (Cultilab, Campinas, Brazil) and 40 μg ml⁻¹ gentamicin, in humidified 5% CO2-95% air at 37 °C. C57BL/6 mice (female, 8-12 weeks old) were provided by the Central Animal House of the Pontifical Catholic University of Paraná, Brazil and received a standard laboratory diet (Purina). All procedures used in this study were approved by the Institutional Ethics Committee of the Federal University of Paraná (no. 730).
Small interfering RNA. The siRNAs against tpt1/TCTP were synthesised by Ambion (Life Technologies, Carlsbad, CA, USA). Sense siRNA, 5′-AGCACAUCCUUGCUAAUUUTT-3′; antisense siRNA, 5′-AAAUUAGCAAGGAUGUGCUTA-3′. All procedures were performed under RNase-free conditions, using RNase-free water. Approximately 10⁵ B16-F10 cells were transfected with a final concentration of 50 nM siRNA duplexes using Lipofectamine reagent (Invitrogen, Carlsbad, CA, USA). After 24, 48, and 72 h of transfection, cells were collected and used for cell viability, migration, and proliferation assays, RT-PCR analysis, and western blot analysis. The siRNAs used herein were carefully evaluated for the main characteristics associated with highly active siRNAs: moderate-to-low (33.3%) guanine-cytosine content, lack of internal secondary structure within the siRNA (high-ΔG, i.e., unfavoured, secondary structures), low stability of binding interactions at the 5′ terminus of the guide siRNA strand, a uridine residue at position 10 of the sense strand, lack of immunostimulatory sequences within the siRNA, and lack of secondary structure at the target site (Reynolds et al, 2004). A BLASTn search of potential siRNAs and a Smith-Waterman dynamic programming sequence alignment were performed: BLASTn analysis showed that the confidence parameter (E-value) for annealing between the siRNA and the target sequence is 250-fold higher than that of the second predicted hypothetical sequence; furthermore, TCTP is the only known protein sequence with 100% identity.
Protein extract and western blot. After treatment for 24, 48, and 72 h with sertraline or siRNA, cultured cell pellets were homogenised and lysed in ice-cold lysis buffer (20 mM Tris-HCl, pH 7.6, 50 mM KCl, 400 mM NaCl, 1 mM EDTA, 0.2 mM phenylmethylsulphonyl fluoride, 2 μg ml⁻¹ aprotinin, 2 μg ml⁻¹ leupeptin, 1 mM dithiothreitol, 1% Triton X-100, and 20% glycerol). The lysates were cleared by centrifugation at 20,000 g for 30 min at 4 °C; the supernatants were collected and aliquoted. Protein concentrations were determined using the MicroBCA Assay (Thermo Scientific, Waltham, MA, USA). An aliquot of the cell or tissue lysate (50 μg protein per lane for cellular extracts and 100 μg protein per lane for tumour extracts) was separated on a 15% SDS-PAGE gel, and the proteins were then transferred onto a nitrocellulose membrane. Primary antibodies against TCTP, GAPDH, and P53 (Santa Cruz Biotechnology, Santa Cruz, CA, USA) were used according to the manufacturer's instructions. The membranes were further incubated with HRP-linked anti-rabbit IgG and HRP-linked anti-mouse IgG (1:5000). The protein-antibody complexes were detected using a chemiluminescent substrate according to the manufacturer's instructions, and the emitted light was captured on X-ray film or using an Amersham Imager 600 (Little Chalfont, UK). The intensity of each band was analysed using the 'histogram analyses' tool in the ImageJ Analysis Software (Schneider et al, 2012) to confirm reductions or increases (data not shown).
Cell viability assays. Human melanoma cells (MeWo and A2058) and murine melanoma cells (B16-F10) were plated (5 × 10³ cells per well) for 16 h on 96-well plates and grown in medium containing FBS. The medium was then replaced by a serum-free one. After 16 h, this was replaced with medium containing 10% FBS plus sertraline at different concentrations (0.01, 0.1, and 1 mM) in pentaplicate. Controls consisted of the respective medium alone and in the presence of 550 mM DMSO, which was the sertraline solvent. After 24, 48, and 72 h, the viability of cells in each well was determined using Cell Titer-Glo luminescent assay reagent, following the manufacturer's instructions (Promega, Madison, WI, USA). Luminescence was quantified using a Tecan Infinite X-100 reader (Männedorf, Switzerland). Experiments were performed in pentaplicate and repeated two times. Alternatively, murine melanoma B16-F1 and B16-F10 cells were plated (5 × 10⁴ cells) in 24-well plates and transfected with tpt1/TCTP siRNA (50 nM) or a negative siRNA control. After 24, 48, and 72 h, cells were stained with Trypan blue and counted in a haemocytometer. Trypan blue experiments were performed in pentaplicate and repeated three times.
Cell migration assays. Migration of the human melanoma cell lines was measured in a two-dimensional 'scratch' assay. To repress proliferation, cells were treated with mitomycin C (10 μg ml⁻¹) for 2 h before the cell monolayer was scratched with a pipette tip. Cells were then incubated with sertraline (0.01, 0.1, and 1 mM) for 24 h. Scratches were monitored using the Olympus CellR Live Cell Imaging System (Tokyo, Japan) with an IX81 motorised inverted microscope (Tokyo, Japan) and a Hamamatsu camera (Tokyo, Japan), fitted with a climate chamber. Images were acquired using the Olympus excellence RT software (Olympus, Hamburg, Germany). Relative migratory capacity was determined by calculating the percentage of the cell-free area. Experiments were performed in pentaplicate and repeated twice. Alternatively, migration assays were performed using uncoated cell culture inserts with 8-μm pores (Corning Life Sciences, Corning, NY, USA) according to the manufacturer's instructions. Briefly, B16-F10 cells were transfected with siRNA against tpt1/TCTP or with a negative control for 24, 48, and 72 h. Cells were harvested and resuspended in serum-free medium at a density of 5 × 10⁴ cells per well. B16-F1, B16-F10, B16-F10 tpt1 siRNA, and B16-F10 control siRNA cells were seeded into the upper chamber. Lower chambers were filled with medium containing 10% FBS as a chemoattractant. After 6 h, cells were fixed and permeabilised with methanol and stained with 0.5% crystal violet/20% methanol. Non-migrating cells on the upper surface of the filter were removed with a cotton swab. The number of migratory cells was measured by counting at ×100 magnification under a microscope. Experiments were performed in triplicate and repeated three times.
Clonogenic assay (anchorage-independent cell transformation assay). The soft agar method was used to evaluate the colony formation capacity of melanoma cells. Human and murine melanoma cells (5 × 10³ cells per well) were treated with sertraline (0.1, 1, and 5 mM) in media containing 1.5 ml of 0.5% agar (DMEM or RPMI 1640) supplemented with 10% (v/v) FBS and 40 μg ml⁻¹ gentamicin. Culture dishes were maintained at 37 °C in a 5% CO2 incubator for 15 days. Cell colonies were stained with 0.005% crystal violet and counted under a light microscope. Experiments were performed in duplicate and repeated three times.
Cell proliferation assays. Murine melanoma cells (10⁴ cells per well) were plated for 16 h on 96-well plates in medium containing FBS. The medium was then replaced by a serum-free one. After 16 h, the serum-free medium was replaced with medium containing 10% FBS and the cells were transfected with siRNA against tpt1/TCTP or with a negative control (control siRNA). After 24, 48, and 72 h, the number of cells in each well was determined using the crystal violet method (Borges et al, 2013). Experiments were carried out in pentaplicate and repeated three times.
In vivo tumour growth. C57BL/6 mice were subcutaneously injected with B16-F10 cells (5 × 10⁵ cells per animal), and a solid tumour developed at the injection site. Intraperitoneal treatments started 5 days after injection of the cells. Mice were treated with a daily dose of sertraline (10 mg kg⁻¹, in 100 μl). The control groups received 100 μl of DTIC solution (60 mg kg⁻¹) or the respective vehicle under the experimental conditions described for the treated groups: water for DTIC and/or aqueous solution containing DMSO (2.4 nM) for sertraline. All mice were kept under observation for 17 days (12 days of treatment) and then killed using a combination (1:1) of xylazine hydrochloride/ketamine hydrochloride (10%) in 50 μl. Tumours were excised, photographed, and their weights were determined. Tumour tissues were collected for total protein extraction and histochemical analysis. Experiments were performed using groups of five animals per condition and repeated two times.
RESULTS
Effects of sertraline on human melanoma cell lines. Sertraline treatment was analysed in the human melanoma cell lines MeWo and A2058, which were further assessed for protein expression levels and biological effects. Figure 1A shows TCTP and P53 levels after incubation with sertraline: a decrease in TCTP protein level and an increase in P53 were observed. Sertraline decreased the viability of MeWo melanoma cells in a time- and concentration-dependent manner (Figure 1B). In the soft agar assay, the number of colonies formed in the presence of sertraline was reduced by ~50% (1 mM) and by 65% (5 mM) when compared with the control (DMSO) (Figure 1C). The effects of sertraline on the reduction of the migration phenotype were also highly significant (Figure 1D). A scratch assay was performed in the presence of different concentrations of sertraline and cell migration was analysed each hour for 12 h (Figure 1D, graph). Even low doses (0.01 and 0.1 mM) significantly inhibited melanoma cell migration.
The same set of experiments was performed on the A2058 human melanoma cell line, which was also very sensitive to sertraline treatment (Figure 2). Translationally controlled tumour protein levels were decreased and P53 levels were increased after sertraline treatment (Figure 2A). Cell viability was significantly decreased after sertraline treatment (Figure 2B). Sertraline also caused a marked decrease in the number of colonies formed in soft agar (Figure 2C); the highest concentration (5 mM) inhibited colony formation by ~80%. Figure 2D shows that melanoma cell migration was also strongly affected by sertraline treatment at all tested concentrations (0.01, 0.1, and 1 mM). Taken together, these results show that, in MeWo and A2058 cells, sertraline decreases TCTP and inhibits cell viability, colony formation, and cell migration.
High TCTP levels in B16 (B16-F1, B16-F10) murine melanoma cell lines. B16 melanoma cell lines are very interesting models for tumour biology studies. Compared with B16-F1, which has low metastatic potential, B16-F10 cell line displays a higher metastatic capacity (Fidler, 1973;Morris et al, 2015). Translationally controlled tumour protein levels are markedly higher in B16-F10 cells ( Figure 3A). A q-PCR analysis revealed that expression of TCTP mRNA in B16-F10 cells was 2.3-fold higher than that in B16-F1 cells ( Figure 3B). A knockdown (siRNA) of TCTP was performed and the transfectants were analysed by qRT-PCR after 24, 48, and 72 h ( Figure 3C). A decrease of 50-70% in the amount of TCTP mRNA was observed and confirmed by the protein levels detected in the western blot assays ( Figure 3D).
Effects of TCTP inhibition on B16-F10 cell viability, proliferation, and migration. The inhibition of TCTP by siRNA in the B16 cells slightly affected the viability and proliferation of these cells. A decrease of 15% and 25% in viability was observed after, respectively, 24 and 72 h of transfection (Figure 4A). Proliferation was reduced by 20% and 40% (Figure 4B). When the migration of these cells was evaluated in transwell assays, a substantial decrease in the migration potential of the TCTP-silenced cells was observed (Figure 4C). The number of cells that reached the other side of the well membrane was 53% and 67% smaller when TCTP was inhibited, after 48 and 72 h of transfection respectively. The migration of TCTP knockdown cells was lower than that observed for B16-F1 cells (Figure 4C).
In vitro effects of sertraline on B16-F10 cells. The effect of sertraline on the downregulation of TCTP was initially assessed by western blot analysis. B16-F10 melanoma cells were treated with sertraline at different concentrations (0.01, 0.1, and 1 mM), and TCTP levels were evaluated after 24, 48, and 72 h (Figure 5A). The observed decrease in TCTP levels caused by sertraline is time and concentration dependent: sertraline at a concentration of 1 mM markedly reduces intracellular TCTP levels, and even the lowest dose (0.01 mM) was able to significantly decrease the amount of TCTP after 72 h of treatment. Figure 5B shows the results of TCTP relative expression by qRT-PCR in B16-F10 cells treated with sertraline; TCTP mRNA levels were diminished by sertraline in a time- and concentration-dependent manner. Viability was assessed by measuring metabolically active cells after 24, 48, and 72 h of treatment (Figure 5C). Even low concentrations of sertraline triggered a significant decrease in viability after 48 and 72 h. The capacity of murine melanoma cells to form colonies in a semisolid medium was evaluated in the presence of sertraline (Figure 5D). Sertraline reduced the number of colonies to 76% (0.1 mM), 48% (1 mM), and 32% (5 mM) of the control. Interestingly, 5 mM sertraline made B16-F10 cells even less clonogenic than B16-F1 cells.
In vivo effects of sertraline treatment in the C57BL/6/B16-F10 mouse model. The in vivo antitumour activity of sertraline was evaluated using C57BL/6 mice inoculated subcutaneously with B16-F10 melanoma cells (Figure 6). Five days after the cells were injected, daily intraperitoneal sertraline/DTIC treatment was started; at this point, a very small but palpable tumour could be observed. Animals were treated for 12 days with DTIC (60 mg kg⁻¹) and/or sertraline (10 mg kg⁻¹); control animals were treated with the drug solvents (DMSO and/or water). At the end of the experiment, tumours were excised and their weights were determined (Figures 6A and B). Sertraline (10 mg kg⁻¹) inhibited tumour growth by 84.4%, and when animals were treated with both sertraline and DTIC, tumour growth was reduced by 88%. A stronger antitumour effect of sertraline could be observed when compared with DTIC treatment alone (47% tumour growth inhibition). However, when the suppression of tumour growth by DTIC plus sertraline is compared with sertraline alone, there is no increase in antitumour activity. Translationally controlled tumour protein content in these tumours was assessed by western blot analysis, and a marked decrease could be observed; as expected, P53 levels were increased in tumours from animals treated with sertraline (Figure 6C). Collected tumours were also evaluated for TCTP, Ki67, caspase-3, and P53 by immunohistochemistry (Figure 6D). In line with the western blot results, tumours from animals treated with sertraline presented lower levels of TCTP and higher levels of P53 protein. Collectively, these results indicate that B16-F10 cells show a decrease in TCTP levels and in malignant status, both in vitro and in vivo, when treated with sertraline.
DISCUSSION
Melanoma is the deadliest of skin cancers, caused by the transformation of melanocytes that accumulate genetic alterations, leading to abnormal proliferation and dissemination (Bastian, 2014). The incidence of melanoma is rising all over the world and an effective treatment for advanced cases is still to be determined. Despite improvements, metastatic melanoma (stage IV) is still associated with a poor prognosis and a median survival of 6-12 months (Schadendorf et al, 2015). Melanoma tumours and cell lines overexpress TCTP (Tuynder et al, 2004;Baylot et al, 2012;Sade et al, 2012). Translationally controlled tumour protein is involved in fundamental cellular processes, such as cell cycle, proliferation, and apoptosis (Amson et al, 2013;Thebault et al, 2016).
Tumour reversion involves changes in the expression of a set of genes, including TCTP (Tuynder et al, 2002). Accumulating data on TCTP and tumour reversion suggest that such a reprogramming could be clinically interesting and a therapeutic target for cancer treatment, overcoming regular drawbacks of standard procedures such as resistance by clone selection (Tuynder et al, 2002, 2004; Telerman and Amson, 2009; Amson et al, 2013; Powers and Pollack, 2016).
The relevance of TCTP for tumour reversion in melanoma was shown by Tuynder et al (2004) in the WM-266-4, WM-115, SK-MEL-28, and Hs852T cell lines and in human tumours. Using an oncolytic virus (H1 parvovirus), revertant cells were isolated from these melanoma cells; these revertant clones presented decreased TCTP levels and reduced tumourigenicity in vivo. Translationally controlled tumour protein has also been implicated in the development of chemoresistance in melanoma (Sinha et al, 2000). MeWo cells induced to drug resistance presented higher TCTP levels than their sensitive counterparts: MeWo cell lines chemoresistant to cisplatin, vindesine, etoposide, or fotemustine showed marked TCTP overexpression compared with the parental MeWo cell line.
Both MeWo and A2058 cells showed reduced levels of TCTP when treated with sertraline. This decrease in TCTP was accompanied by an increase in P53 levels (Figures 1 and 2). The decrease in the migration phenotype and also in the capacity of forming colonies caused by sertraline was striking in both human cell lines.
We took B16 melanoma as a model to study TCTP. From B16 cells, cell lines with different phenotypes were established: B16-F10, which is highly malignant and metastatic, and B16-F1, which is less proliferative and has a lower metastatic capacity (Fidler, 1973; Morris et al, 2015). When TCTP levels were assessed in these cell lines, a marked difference was observed. The smaller content of TCTP protein in B16-F1 was consistent with its low TCTP mRNA levels: in comparison with B16-F10, B16-F1 presented half of the TCTP protein and mRNA content (Figure 3). The results suggest that knockdown of TCTP correlated with a decrease in the malignant features of B16-F10 cells (Figure 4). These results are similar to those obtained with WM266 cells, in which decreased TCTP levels result in decreased tumour development (Tuynder et al, 2004). Most striking is the reduced migration capacity of F10 melanoma cells silenced for TCTP, which is even lower than that of B16-F1 (Figure 4C). Recently, the knockdown of TCTP was shown to reduce the capacity of B16-F10 cells to form pulmonary metastases in vivo (Bae et al, 2015). Sertraline (even at the lowest dose, 0.01 mM) was very effective in reducing TCTP levels (Figure 5). Sertraline was also effective in reducing the capacity of melanoma cells to form colonies in soft agar. The results concerning the reduction of migration and clonogenic capacity are particularly interesting in the context of melanoma, as this tumour type is highly metastatic and difficult to treat when disseminated (Shain and Bastian, 2016).
[Displaced legend for Figure 5: TCTP protein (western blot, GAPDH loading control) and relative mRNA expression (qRT-PCR, DDCt method with GAPDH as endogenous standard), Cell Titer-Glo viability, and soft agar clonogenic assays of B16-F10 cells treated with sertraline (0.01-5 mM) for 24-72 h versus DMSO controls; statistics by one- or two-way ANOVA with Tukey's post hoc test.]
Results of sertraline treatment using the in vivo model are very promising (Figure 6). Although new treatment strategies are being developed against melanoma (mainly targeting specific mutations, with others based on immune therapy), DTIC is still the main chemotherapy drug. However, sertraline at 10 mg kg⁻¹ triggered a marked inhibition of tumour growth, which was better than that obtained with DTIC (60 mg kg⁻¹). We observed a decrease in TCTP and an increase in P53 levels in the tumours of the animals treated with sertraline (Figure 6C). Dacarbazine also triggered an increase in P53 levels, because of its action on DNA. These results point to a complex reprogramming of tumour cells by sertraline, suggesting that its mechanism of action goes beyond reactivation of P53. Sertraline also induced a decrease in Ki67 levels and an increase in caspase-3 levels (Figure 6D). Translationally controlled tumour protein had already been described as a cell survival protein that modulates apoptosis (Amson et al, 2012; Acunzo et al, 2014; Thebault et al, 2016). The reciprocal repression between TCTP and P53 is particularly interesting in the context of melanoma because of the prevalence of wild-type p53 in this tumour. Although the p53 tumour suppressor gene is rarely mutated in melanoma, its functional attenuation is needed for tumour development, and reactivation of p53 has already been proposed as an alternative therapeutic strategy for melanoma, in combination with other approaches (Jochemsen, 2014; Lu et al, 2014).
The underlying mechanism of sertraline is based on its direct interaction with TCTP. Translationally controlled tumour protein's function on the autoubiquitination of MDM2 is hereby impaired, resulting in the increase of P53 levels (Amson et al, 2012).
Our results suggest that the effects of sertraline are related not only to P53 activation and induction of apoptosis but also to a more complex pathway, which probably enables cells to suppress tumour features. When sertraline treatment of human melanoma cell lines was evaluated, we chose a p53 wild-type (MeWo) and a p53 mutant (A2058) cell line (TP53 website: http://p53.fr) (Figures 5 and 6). Sertraline effects were quite similar in both cell lines; migration and clonogenicity were markedly reduced. Translationally controlled tumour protein levels were decreased and P53 levels increased by sertraline, even in the p53 mutant cell line (A2058). As TCTP is a multifunctional protein with several partners, involved in crucial cell pathways, the decrease in TCTP levels probably influences molecules other than P53, leading to a complex alteration of cell behaviour and to tumour reversion. However, a specific and detailed mechanism remains elusive.
[Displaced legend for Figure 6: in vivo effect of sertraline in the murine melanoma model. C57BL/6 mice were injected subcutaneously with 5 × 10⁵ B16-F10 cells and treated daily (intraperitoneally) from day 5 with DTIC (60 mg kg⁻¹) and/or sertraline (10 mg kg⁻¹); negative controls received water and/or DMSO. (A) Representative images of tumours excised after 17 days. (B) Tumour weights, analysed by t-test (three independent experiments, n = 5; ***P<0.001, ****P<0.0001). (C) Western blot of TCTP, P53, and GAPDH in tumour samples. (D) Immunohistochemistry of TCTP, Ki67, caspase-3, and P53 in B16-F10 tumour sections (primary antibodies 1:1000), detected with FITC-conjugated (P53, caspase-3) or Alexa Fluor 594-conjugated (TCTP, Ki67) secondary antibodies (1:250) and merged with a DAPI channel; scale bars shown at right.]
It is important to highlight TCTP as a target for melanoma therapy in the context of cellular reprogramming and tumour reversion, as the decrease in TCTP levels leads to the loss of tumour features such as migration and clonogenicity, which are strongly associated with metastatic disease, as well as to reduced tumour growth per se.
The synergistic antimicrobial effects of novel bombinin and bombinin H peptides from the skin secretion of Bombina orientalis
Bombinin and bombinin H are two antimicrobial peptide (AMP) families initially discovered in the skin secretion of Bombina that share the same biosynthetic precursor-encoding cDNAs but have different structures and physicochemical properties. Insight into their possible relationship led us to investigate their anti-infective activities in combination. In this work, we report the molecular cloning and functional characterization of two novel AMPs belonging to the bombinin and bombinin H families from secretions of Bombina orientalis. Their mature peptides (BHL-bombinin and bombinin HL), encoded by a single ORF, were chemically synthesized along with an analogue peptide in which L-leucine was replaced by D-leucine at the second position from the N-terminus (bombinin HD). CD analysis revealed that all of them displayed well-defined α-helical structures in membrane-mimicking environments. Furthermore, BHL-bombinin displayed broad-spectrum bactericidal activity against a wide range of microorganisms, while bombinin H exhibited only a mild bacteriostatic effect on the Gram-positive bacterium Staphylococcus aureus. The combination of BHL-bombinin with either bombinin HL or bombinin HD showed synergistic inhibition of S. aureus (fractional inhibitory concentration index (FICI): 0.375). A synergistic effect was also observed between bombinin H and ampicillin, which was further systematically evaluated and confirmed by in vitro time-killing investigations. Haemolysis and cytotoxicity examinations showed that the synergy was highly selective, with low cytotoxicity of the three peptides towards mammalian cells. Taken together, the discovery of a potent synergistic effect between AMPs from a single biosynthetic precursor, combined with superior functional selectivity, provides a promising strategy to combat multidrug-resistant pathogens in clinical therapy.
Introduction
Bombinin, one of the typical cationic antimicrobial peptides (AMPs), was first isolated from the skin secretion of the yellow-bellied toad Bombina variegata [1]. Nucleotide sequence analysis of bombinin-related peptides revealed the existence of a class of structurally distinct peptides, which were named bombinin H [2,3]. Importantly, a subtle and inconspicuous single d-amino acid (d-alloisoleucine or d-leucine) at the second position from the N-terminus of bombinin H, a consequence of post-translational modification, was observed. This type of modification may contribute to the versatile antimicrobial mechanisms of frog skin peptides and may be beneficial in the prevention of bacterial resistance [4][5][6]. However, since the initial discovery of bombinin, bombinin H, and the d-isoform of bombinin H, research has focused on the antimicrobial properties of the individual peptides rather than on their synergistic potency. The combined effects of bombinin peptides with conventional antibiotics, and their antimicrobial selectivity towards pathogens, have very rarely been reported [7].
Here, we report the structural and functional characterization of two novel, linear, cationic, α-helical AMPs, initially identified in a single ORF from the skin secretion of Bombina orientalis. These peptides belong to the bombinin and bombinin H families. The potent synergistic relationship between the novel bombinin and bombinin H peptides highlights the significance of the combined use of AMPs in the treatment of infections caused by drug-resistant bacteria and continues to provide researchers with novel approaches for prospective innovation in clinical studies.
Specimen preparation and secretion harvesting
Specimens of the oriental fire-bellied toad B. orientalis were obtained from a commercial supplier and raised in a specially designed vivarium until maturation, over a period of 4 months. The skin secretions were collected and lyophilized as previously described [8]. Sampling of skin secretion was performed by Mei Zhou under U.K. Animal (Scientific Procedures) Act 1986, project license PPL 2694, issued by the Department of Health, Social Services and Public Safety, Northern Ireland. Procedures had been vetted by the IACUC of Queen's University, Belfast, and approved on 1 March 2011.
Molecular cloning of novel bombinin and bombinin H precursor encoding cDNA from the skin secretion derived cDNA library
A 5-mg lyophilized secretion of B. orientalis was dissolved in 1 ml of mRNA protection buffer, and the polyadenylated mRNA was obtained using magnetic oligo-dT beads following the instructions of the manufacturer (Dynal Biotech, Wirral, U.K.) and subsequently reverse transcribed. The cDNA was subjected to a 3′-RACE PCR procedure to obtain the full-length prepro-bombinin and prepro-bombinin H nucleotide sequences using a SMART-RACE kit (Clontech, Oxford, U.K.) as described by the manufacturer. For the 3′-RACE reaction, a nested universal primer (NUP) (supplied with the kit) and a degenerate sense primer were designed and used as previously reported [9,10]. The 3′-RACE reactions were performed as previously described [11].
Identification and structural analysis of deduced mature peptides in the skin secretions
Another 5 mg of lyophilized secretion was dissolved in 1.0 ml of 0.05/99.95 (v/v) trifluoroacetic acid (TFA)/water and clarified by centrifugation. The rp-HPLC system was fitted with analytical columns (Phenomenex C-5, 0.46 × 25 cm, and Phenomenex C-18, 250 × 10 mm), eluting with a linear gradient formed from TFA/dd water (0.05/99.95, v/v) to TFA/dd water/acetonitrile (0.05/19.95/80.0, v/v/v) over 240 min at 1 ml/min. Fractions were collected automatically at 1-min intervals and effluent absorbance was continuously monitored at λ 214 nm and λ 280 nm. Each reverse-phase HPLC fraction was analysed by MALDI-TOF MS on a linear TOF Voyager DE mass spectrometer (Perseptive Biosystems, MA, U.S.A.) in positive detection mode using α-cyano-4-hydroxycinnamic acid as the matrix. Fractions containing peptides with molecular masses coincident with the mature peptides predicted from 'shotgun' cloning were infused into an LCQ Fleet™ ion-trap electrospray mass spectrometer for analysis (Thermo Quest, San Jose, CA, U.S.A.).
Peptides synthesis and purification
The two novel bombinin peptides identified and one single-residue d-isomer analogue were synthesized on a Tribute® peptide synthesizer (Protein Technologies, Inc., Tucson, U.S.A.) using solid-phase Fmoc chemistry and amide resin. Their molecular masses were analysed and confirmed by MALDI-TOF MS. The synthetic replicates were then purified by rp-HPLC to obtain peptides of high purity.
CD spectroscopy
CD spectra between 190 and 250 nm were recorded on a Jasco J-815 CD spectrometer (Jasco, Essex, U.K.). The machine units of ellipticity (millidegrees, the raw data from the instrument) were converted to mean residue molar ellipticity, with n denoting the number of peptide bonds. The spectra were recorded at 100 nm/min in ammonium acetate (10 mM) buffer or trifluoroethanol (TFE) (50%) solution. CD measurements were performed at 20 °C with a 1-mm path-length cuvette. An average of three scans was collected and automatically analysed for each peptide. The final predicted percentage of secondary structure was calculated using the K2D3 CD spectra web server [12].
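The conversion referred to above is assumed here to take its standard form (the exact constants used by the authors are not given in the text):

\[ [\theta] = \frac{\theta_{\mathrm{obs}}}{10 \cdot c \cdot l \cdot n} \]

where θ_obs is the raw ellipticity in millidegrees, c is the molar peptide concentration, l is the path length in cm (0.1 cm here), and n is the number of peptide bonds.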
Antimicrobial activity and minimal biofilm eradication concentration assays
The minimal inhibitory concentrations (MICs) of the synthetic replicates of the AMPs were determined using quality control strains: the Gram-positive bacterium Staphylococcus aureus (NCTC 10788), the Gram-negative bacteria Escherichia coli (NCTC 10418) and Pseudomonas aeruginosa (ATCC 27853), the yeast Candida albicans (NCPF 1467), and methicillin-resistant S. aureus (MRSA) (ATCC 12493). The reference strains were initially incubated in Mueller-Hinton broth (MHB) for 16-20 h; the bacterial cultures were then diluted to 1 × 10⁶ cfu/ml and the yeast culture to 5 × 10⁵ cfu/ml. The samples were added to obtain final concentrations from 1 to 512 mg/l. After 24-h incubation, the OD of each well was measured at 550 nm. The MIC was defined as the minimal concentration of peptide giving an OD identical to that of the negative controls [13]. After the MIC assays, 10 μl of the medium from each well was inoculated onto Mueller-Hinton agar (MHA) plates. After 24-h incubation, the minimum bactericidal concentrations (MBCs) and the minimum fungicidal concentrations (MFCs) were obtained, defined as the lowest concentration of peptide from which no colonies could subsequently be grown.
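The MIC read-out described above amounts to scanning the dilution series for the lowest concentration whose OD matches the negative control; a minimal Python sketch (with invented OD values and an assumed noise tolerance) is:

neg_control_od = 0.05
readings = {                         # peptide concentration (mg/l) -> OD550 after 24 h
    512: 0.05, 256: 0.05, 128: 0.06, 64: 0.05, 32: 0.05,
    16: 0.31, 8: 0.52, 4: 0.61, 2: 0.66, 1: 0.70,
}

tolerance = 0.01                     # small allowance for instrument noise (assumption)
inhibitory = [c for c, od in readings.items() if od <= neg_control_od + tolerance]
mic = min(inhibitory) if inhibitory else None
print(f"MIC = {mic} mg/l")           # lowest concentration with growth fully inhibited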
The minimal biofilm eradication concentrations (MBECs) of the synthetic peptides were determined against S. aureus following the standard method given in the manufacturer's instructions (Innovotech, U.K.). The MBEC™ P&G assay plate, with a specialized peg architecture designed for biofilm formation, was used for antibiofilm susceptibility tests. Inoculation and subculturing were performed as described above. The inoculum plate was prepared by transferring 200 μl of inoculum to the 96-well plate and kept in a 150-rpm moist orbital incubator for 72 h at 37 °C. The lid with pegs of the inoculum plate was then rinsed twice with PBS, and seven replicates of two-fold serial dilutions of the peptides (1-512 mg/l), along with the positive/negative controls, were added to the corresponding wells. After incubation at 37 °C for 24 h, the recovery plate was prepared by adding 200 μl of recovery medium (MHB/neutralizing agents 20/0.5 (v/v)) to each well. The lid from the inoculum plate was rinsed and then placed on the recovery plate. After sonication for 30 min, the recovery plate was measured at 550 nm. The MBEC was determined as the lowest concentration with no detectable microbial growth. Melittin (Sigma-Aldrich, U.K.), first isolated from honeybee (Apis mellifera) venom, was used as a positive control for comparison [14,15]. Both the antimicrobial and the biofilm eradication assays were independently performed three times.
Kinetic time-killing assays
The kinetic time-killing assays were performed with different concentrations of the peptides alone or in combination with agents predicted to be synergistic by checkerboard titration. The concentration series of peptides, alone or with the synergistic counterpart, were added to 1.5-ml microcentrifuge tubes, which were then inoculated with a log-phase culture of the test organism as described in the section above. During the incubation, a 50-μl sample was removed from each culture tube at 0, 5, 10, 20, 30, 60, and 120 min for single peptides or at 0, 0.5, 1, 3, 6, and 24 h for synergistic pairs. After serial dilution with PBS, 50 μl of the diluted samples was inoculated on MHA plates and incubated at 37 °C for 24 h for colony counts. A synergistic effect was defined as a decrease of at least 2 log10 cfu/ml in bacterial counts compared with the effect of the most active single constituent [16]. Curves were constructed by plotting log10 cfu/ml against time.
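The ≥2 log10 cfu/ml criterion can be checked directly from the colony counts; a minimal sketch with invented counts:

import math

cfu_best_single = 5.0e6     # cfu/ml at 24 h with the most active single constituent
cfu_combination = 2.0e3     # cfu/ml at 24 h with the combination

log_reduction = math.log10(cfu_best_single) - math.log10(cfu_combination)
is_synergistic = log_reduction >= 2
print(f"reduction = {log_reduction:.2f} log10 cfu/ml; synergy = {is_synergistic}")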
Haemolysis assay
Defibrinated horse erythrocytes (TCS Biosciences Ltd, Buckingham, U.K.) were washed repeatedly with sterile PBS to produce a 4% (v/v) suspension of red blood cells in PBS. A range of concentrations of the synthetic peptides (1-512 mg/l) was incubated with red blood cell suspension samples (200 μl) at 37 °C for 120 min. After incubation, each sample was centrifuged to assess the lysis of the red blood cells, and OD measurements of the supernatants were recorded at 550 nm. Negative controls consisted of a 2% (v/v) suspension with an equal volume of PBS (0% haemolysis), while positive controls employed a 2% (v/v) suspension with 2% (v/v) of the non-ionic detergent Triton X-100 (Sigma-Aldrich, U.K.) in PBS. HC50 was defined as the peptide concentration that caused 50% haemolysis.
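The percentage haemolysis underlying the HC50 determination is assumed to follow the conventional normalization against the PBS and Triton X-100 controls:

\[ \%\,\mathrm{haemolysis} = \frac{A_{\mathrm{sample}} - A_{\mathrm{neg}}}{A_{\mathrm{pos}} - A_{\mathrm{neg}}} \times 100 \]

where A denotes the OD550 of the supernatant for the peptide-treated sample, the PBS negative control, and the Triton X-100 positive control, respectively.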
Cytotoxicity testing
The cytotoxicity of the synthetic peptides towards mammalian cells was examined using human microvascular endothelial cells (HMEC-1), which were cultured in MCDB 131 medium (Gibco, U.K.) supplemented with 10% FBS, 10 mM l-glutamine, 10 ng/ml EGF, and 1% penicillin-streptomycin; 5 × 10³ cells/well were seeded into 96-well plates. After 24-h incubation at 37 °C with 5% CO2, a 12-h serum-free starvation was performed, and peptides at concentrations of 10⁻⁹-10⁻⁴ M were added for a 24-h treatment prior to incubation with 10 μl of MTT (5 mg/ml in PBS) for 4 h; the growth medium was then removed and 100 μl of DMSO was added to dissolve the formazan crystals. The absorbance was measured at 570 nm. Negative and positive control treatments were carried out with culture medium and 1% Triton X-100, respectively. Data were analysed by t test and by one-way ANOVA with Bonferroni's post test using GraphPad Prism (version 5.01); a P-value less than 0.05 was considered significant.
Evaluation of combination effects of antimicrobial AMPs
A 2D checkerboard with two-fold dilutions of each AMP was used to examine the combination effects against S. aureus. Dissolved samples of each peptide or antibiotic agent were diluted from 4× MIC to 1/16× MIC. The dilution series of component A was added along the rows of a 96-well plate, while the columns were filled with the diluted component B. Growth control wells containing only microorganism and medium, and sterility control wells with only MHB medium, were included. After the addition of a log-phase bacterial inoculum at 1 × 10⁶ cfu/ml, plates were incubated at 37 °C for 24 h and the absorbance at 550 nm was then measured. The combination effects were evaluated by calculating the fractional inhibitory concentration index (FICI) of each combination. After the combination ratio of the two tested compounds was confirmed, lower concentration pairs were selected to determine the FICI with more accuracy. The profile of the combination was interpreted as synergistic for FICI ≤0.5, additive for 0.5 < FICI ≤4.0, and antagonistic for FICI >4.0 [17,18].
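The FICI used above is assumed to take its standard two-agent form, consistent with the interpretation thresholds quoted:

\[ \mathrm{FICI} = \frac{\mathrm{MIC}_{A}^{\mathrm{combination}}}{\mathrm{MIC}_{A}^{\mathrm{alone}}} + \frac{\mathrm{MIC}_{B}^{\mathrm{combination}}}{\mathrm{MIC}_{B}^{\mathrm{alone}}} \]

where the "combination" MICs are those of components A and B when tested together in the checkerboard and the "alone" MICs are those measured for each agent individually.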
For assessing the synergistic activity of bombinin and bombinin H against the growth of mammalian cell lines, both CalcuSyn software [19] and Jin's formula [20] were employed. Combination index (CI) plots were generated using CalcuSyn software; a value of CI <1 represents synergy. Jin's formula is Q = Ea+b / (Ea + Eb − Ea × Eb), where Q is the combination index, Ea+b represents the cell proliferation inhibition rate of the two AMPs combined, and Ea and Eb represent the inhibition rates of the individual peptides. A result of Q > 1.15 indicates synergy and 0.85 < Q < 1.15 indicates an additive effect [19].
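As a worked check of Jin's formula, the inhibition rates reported in the Results (43.93% and 9.63% for the single peptides, 52.21% for the combination) reproduce the stated Q of about 1.06; a minimal Python sketch:

def jin_q(e_a: float, e_b: float, e_ab: float) -> float:
    """Jin's formula: Q = E(a+b) / (Ea + Eb - Ea*Eb), with rates as fractions."""
    return e_ab / (e_a + e_b - e_a * e_b)

q = jin_q(e_a=0.4393, e_b=0.0963, e_ab=0.5221)    # values taken from the Results section
if q > 1.15:
    verdict = "synergy"
elif q >= 0.85:
    verdict = "additive"
else:
    verdict = "antagonism"
print(f"Q = {q:.2f} ({verdict})")                  # prints Q = 1.06 (additive)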
Molecular cloning of skin secretion precursor cDNA encoding bombinin and bombinin H
The full-length biosynthetic precursor-encoding cDNAs were cloned from the skin secretion-derived cDNA library of B. orientalis. The nucleotide sequence of the full ORF of the cloned precursor transcript and its translated sequence are shown in Figure 1; the precursor contains 139 residues and encodes a novel bombinin (BHL-bombinin) and a novel bombinin H (bombinin HL).
The nucleotide sequence of the cDNA encoding BHL-bombinin and bombinin HL precursor from the skin secretion of Bombina orientalis, has been deposited in the EMBL Nucleotide Sequence Database under the accession code: LT615078.
The sequences of the two novel AMPs were subjected to BLAST analysis through the NCBI online portal, and the resulting primary structures were compared in Figure 2. BHL-bombinin and bombinin HL, which appear as tandem mature peptides in the biosynthetic precursor in Figure 1, showed 96 and 82% sequence identity, respectively, with other bombinins identified from the Bombinatoridae. The main sequence differences lie in the last two or three residues at the C-terminus, where BHL-bombinin lacks -Ala-Asn and bombinin HL is truncated by -Lys-Lys-Ile-, with a typical valine residue at the 12th position from the N-terminus (Figure 2). [Legend for Figure 1: the putative signal peptide is double-underlined, the mature peptide is single-underlined for bombinin and dash-underlined for bombinin H, and the stop codon is indicated by an asterisk.]
Identification and structure characterization of novel bombinin and bombinin H by rp-HPLC and MS/MS fragmentation
HPLC fractions with molecular masses coincident with predictions from molecular cloning for BHL-bombinin and bombinin HL were identified (Figure 3) following detection by the ion-trap of the LCQ Fleet mass spectrometer with further testing by MS/MS fragmentation sequencing of doubly charged ions derived from frog skin secretions (Figures 4 and 5).
CD spectra and bioinformatic analysis
The secondary structures of the synthetic replicates of the AMPs were investigated in 10 mM ammonium acetate (pH 7.0, mimicking an aqueous environment) and in 50% TFE (mimicking the hydrophobic environment of the microbial membrane) by CD spectroscopy. As shown in Supplementary Figure S2, all the peptides displayed random coil conformations in the aqueous environment. However, the spectra of the peptides were characteristic of α-helical conformations in the presence of 50% TFE, as indicated by the presence of two negative dichroic bands at approximately 208 and 222 nm. Calculation with the K2D3 web server revealed that the helical content in 50% TFE solution is 87.59% for BHL-bombinin and 77.73% for bombinin HD and bombinin HL.
The physicochemical parameters of the novel AMPs are listed in Table 1; they not only provide evidence of the possible interactions between the peptides and bacterial membranes but also give more information on their synergistic mechanisms. The molecular masses of the synthetic peptides were determined by MALDI-TOF (Supplementary Figure S1). Physicochemical parameters including charge, hydrophobic moment (μH), and hydrophobicity were determined using the Heliquest server [21]; μH was determined by Eisenberg's scale with a full window in the Heliquest server. BHL-bombinin is more cationic but much less hydrophobic than bombinin HL, highlighting the marked structural and physicochemical differences between the two co-encoded AMPs. These findings suggest that the two peptides may play distinct roles in the interaction with microorganisms and exert potent antibacterial activity synergistically.
Antimicrobial and haemolytic activities
The antimicrobial effects of the synthetic AMPs on the growth of the tested microorganisms, and the biofilm eradication effects on S. aureus, are illustrated in Table 2 and Supplementary Figure S3. Interestingly, BHL-bombinin displayed potent inhibitory effects (MIC: 4-16 mg/l) towards MRSA and biofilm. By contrast, the MIC values of bombinin HL and bombinin HD against S. aureus were 256 and 128 mg/l, respectively, with no detectable MBCs, making them significantly less effective than BHL-bombinin. The selectivity indices (SIs), which represent the degree of antibacterial selectivity, are shown in Table 2; a higher SI value reflects better selectivity towards microbial over mammalian membranes [22]. As indicated, BHL-bombinin had a higher SI than bombinin HL and bombinin HD, in agreement with previous studies showing that a high level of hydrophobicity may decrease the antimicrobial selectivity of α-helical peptides [23]. Additionally, compared with the melittin peptide, all the AMPs investigated in the present study exhibited 32-128-times higher SI values, which emphasizes that amphibian-derived AMPs are potential research targets for therapeutic alternatives to current antibiotics. The time-killing curves demonstrated the faster cell-killing effect of BHL-bombinin compared with ampicillin, while the kill rates of bombinin HL and bombinin HD were relatively low (Supplementary Figure S4). The combined administration of BHL-bombinin with either bombinin HL or bombinin HD revealed a synergistic antimicrobial effect against S. aureus (FICI: 0.375). In addition, BHL-bombinin showed an additive effect with the classic antibiotic ampicillin (FICI: 0.75), while the novel bombinin H, in either the d- or the l-isoform, displayed synergistic activity with this β-lactam (FICI: 0.5). The results are summarized in Table 3. The synergistic effects were further confirmed by the outcomes of the time-killing assays. Figure 6a,b shows that the isolated S. aureus counts decreased by 6.16 (±0.76) log10 cfu/ml at 24 h when incubated with BHL-bombinin (0.75 mg/l) and bombinin HL (48 mg/l), compared with the single-peptide effect; the corresponding value was 5.83 (±0.67) log10 for the combined effect of BHL-bombinin (0.75 mg/l) and bombinin HD (24 mg/l). As shown in Figure 6c,d, synergistic effects were also observed when bombinin HL (64 mg/l) or bombinin HD (32 mg/l) was co-administered with ampicillin (0.016 mg/l), which produced decreases of 7.51 (±0.97) and 6.57 (±0.77) log10 cfu/ml at 24 h, respectively.
Cytotoxicity assessment of novel bombinin, bombinin H and their synergistic effect on HMECs
The antiproliferative effects obtained from the MTT cell viability assays of each peptide on HMEC-1 cells are represented in Figure 7a,b, and their IC50 values were calculated. All the AMPs tested in the present study exhibited low cytotoxicity against HMEC-1, with cell viabilities exceeding 90% up to a concentration of 10⁻⁵ M. For BHL-bombinin at its MICs (1.6-26.2 μM), 83.5-100.0% of HMEC-1 cells remained viable. Bombinin HL and bombinin HD displayed relatively lower selectivity and higher cytotoxicity towards HMEC-1 than BHL-bombinin. To identify possible synergistic cytotoxicity between BHL-bombinin and bombinin HL or bombinin HD, the cells were cultured with combinations of the two peptides at different doses but in a constant ratio (BHL-bombinin to bombinin HL or bombinin HD: 5-10 μM, 10-20 μM, and 20-40 μM, respectively) for 24 h (Figure 7c,d). The combination of 20 μM BHL-bombinin with 40 μM bombinin HL inhibited cell growth by 52.21%, compared with mono-administration of BHL-bombinin (43.93%) or bombinin HL (9.63%), indicating an additive effect (CI = 1.03; Q = 1.06). The values for the combination of BHL-bombinin and bombinin HD were CI = 0.98 and Q = 1.10. The results revealed that the synergistic relationship was abolished, with 0.85 < Q < 1.15 and CI ≥ 1, with regard to cytotoxicity towards normal mammalian cells (Figure 7e,f).
Discussion
In contrast with the well-studied bioactive peptides from the amphibian families Pipidae, Hylidae, Ranidae, and Pseudidae, skin secretions from Bombina species remain to be fully investigated and may provide valuable leads for drug development. The best-known constituent identified from Bombina skin secretions is bombesin, which led to the subsequent identification of the mammalian homologues, the neuropeptides gastrin-releasing peptide (GRP) and neuromedin B (NMB) [24]. Among all the molecules secreted by Bombina species, no counterparts of the novel BHL-bombinin and bombinin HL, which are encoded by a single-coding-region precursor, have been identified in other amphibian genera or in mammals [25].
The present study describes the molecular cloning, primary structure identification, chemical synthesis, and bioactivity examination of two tandem-encoded novel bombinin peptides. Since a d-isomer exists at the second position of the sequence in some bombinin H-type molecules, the analogue bombinin HD was designed by substituting the l-leucine at that position. CD studies revealed that the helical content of BHL-bombinin was only approximately 10% higher than that of bombinin HL and bombinin HD, while the hydrophobicity of BHL-bombinin is significantly lower than that of bombinin HL and bombinin HD. Therefore, all the AMPs tested in the present study were found to adopt an amphipathic α-helical conformation in a membrane-mimetic environment, a feature that is essential for allowing AMPs to exert their bioactivities [26]. However, due to the diversity of their primary structures and physicochemical parameters, the functional mechanisms that they employ can be significantly different.
Synthetic BHL-bombinin was found to possess potent antimicrobial activity against S. aureus and C. albicans, but relatively lower activity against E. coli and P. aeruginosa. The MBCs for all four tested microorganisms were approximately equal to, or up to four-fold above, their respective MICs. Clinically, biofilm formation and conventional antibiotic-resistant MRSA strains are two major contributors to the antibiotic crisis, and BHL-bombinin showed potent effects in eliminating S. aureus biofilm and inhibiting the growth of the isolated MRSA. However, wild-type bombinin HL and the analogue bombinin HD were only moderately effective against S. aureus, with no detectable MBC. The antimicrobial properties of the peptides reported in the present study were further compared against melittin, a well-studied bee venom-derived AMP [15]. BHL-bombinin exhibited antimicrobial activity similar to that of melittin but weaker haemolytic activity, indicating better selectivity. Both bombinin H peptides possessed mild antibacterial activity but higher antimicrobial selectivity compared with melittin. Furthermore, BHL-bombinin and bombinin HL are tandem-encoded in a single ORF, which prompted us to speculate that their combined effect might be vital for frogs to survive in pathogen-rich environments. Of note, the combination of novel components with conventional antibiotics has also been proven a promising solution to amplify the potency of antibiotics; a good example is co-amoxiclav, in which the combined use of clavulanic acid enormously enhances amoxicillin potency [27]. As expected, the combination of BHL-bombinin with either bombinin HL or bombinin HD showed synergistic inhibition of S. aureus (FICI: 0.375). Moreover, BHL-bombinin showed an additive effect with the classic antibiotic ampicillin (FICI: 0.75), while bombinin HL and HD displayed synergistic activity with this β-lactam (FICI: 0.5). The results were further confirmed by time-killing assays, in which BHL-bombinin exerted a higher bactericidal rate than ampicillin, whereas the killing rates of bombinin HL and bombinin HD were lower. The mechanism underlying the positive outcomes of combining peptides with conventional antibiotics (ampicillin in the present study) appears to be complex. The FICI employed in the present study is the best-known and most basic measure of synergy, evaluating the inhibitory effect of paired agents against the sum of their effects alone; we calculated FICIs according to this protocol as a preliminary investigation of the synergistic relationships [17,18]. To address the more complicated natural environment, the detailed concentration- and structure-dependent relationship between the peptides requires a further and more systematic mechanistic evaluation. Based on the physicochemical parameters and the combination results for BHL-bombinin and bombinin HL, BHL-bombinin may have a direct and selective membrane-permeabilizing activity that increases the uptake of other antibacterial agents, which then interfere with intracellular targets, or enhances the effect of highly hydrophobic molecules such as bombinin HL. On the other hand, either bombinin HL or bombinin HD may cause degradation of the peptidoglycan by triggering the activity of bacterial murein hydrolases, which can enhance the activity of the β-lactams [28][29][30].
Safety evaluation via haemolytic assay demonstrated relatively lower SIs for both bombinin HL and bombinin HD on horse erythrocytes compared with BHL-bombinin; consistently, when HMEC-1 cells were treated with the peptides at their MICs in an MTT-based viability assessment, bombinin HL and HD exhibited higher cytotoxicity. The conventional view is that increased hydrophobicity of AMPs is associated with higher antimicrobial activity; in contrast, highly hydrophobic peptides are also associated with stronger self-assembly, which can result in the formation of dimers or oligomers. This spatial character may in turn decrease their ability to pass through the target cell wall and bacterial membrane [31]. The high toxicity of BHL-bombinin in the present study might therefore be mainly due to its innately high hydrophobicity. Additionally, the abolition of the combined effect of BHL-bombinin and bombinin H against HMEC-1 revealed their high functional selectivity. The application of combined antimicrobial agents, either with AMPs or with conventional antibiotics, is a prospective strategy to improve clinical therapy for infections caused by multidrug-resistant pathogens and to decrease side effects [32].
Conclusion
In this project, the novel BHL-bombinin, bombinin HL and the analogue bombinin HD are reported from the less-studied frog species B. orientalis. They showed comparable antimicrobial properties individually, and enhanced synergy and selectivity in combination; these inherent and robust characteristics hold significant potential to help alleviate the current antibiotics crisis.
Author contribution
J.X., Y.W. and T.C. conceived and designed the experiments. J.X. performed the experiments. J.X., Y.W. and M.Z. analysed the data. C.S. and L.W. contributed reagents/materials/analysis tools. J.X. and Y.W. wrote the paper. Y.W. and T.C. edited the paper.
Activities of daily living and its influencing factors for older people with type 2 diabetes mellitus in urban communities of Fuzhou, China
Background Type 2 diabetes mellitus (T2DM) is an independent risk factor for functional limitations among the older population. The predicted increase in T2DM cases combined with the ongoing rapidly aging population may further burden the already overloaded healthcare system and aggravate the loss of economic self-sufficiency. This study aimed to investigate the activities of daily living (ADL) of older people with T2DM and their influencing factors, and to provide implications for the development and improvement of community nursing services in the context of a rapidly aging population in China. Methods From March 2019 to June 2020, we conducted a cross-sectional questionnaire survey among older T2DM patients in Fuzhou, using a multi-stage cluster sampling approach. Functional status was measured by the Lawton ADL scale. The Stata "nptrend" test was used to examine the trend of ordinal variables on ADL. Non-conditional logistic regression was used to identify factors affecting ADL limitations. Results A total of 2,016 questionnaires were received, with a response rate of 96%; 12.4% of participants suffered from varying degrees of functional impairment. ADL limitations increased with age, and more comorbidities were associated with a greater risk of developing functional limitations in ADLs. The following sub-groups were more likely to suffer from ADL impairment: those aged 70 years and over (OR = 1.99, 95%CI 1.77–2.56), living in an aged care house or with spouse/children (OR = 2.31, 95%CI 1.25–4.26), low monthly income (OR = 1.49, 95%CI 1.28–1.64), without health insurance (OR = 1.82, 95%CI 1.40–2.40), tight family expenses (OR = 1.95, 95%CI 1.42–2.69), having stroke (OR = 6.70, 95%CI 2.22–20.23) or malignant tumor (OR = 4.45, 95%CI 1.27–15.53), irregular eating habit (OR = 2.55, 95%CI 2.23–2.92), smoking (OR = 1.40, 95%CI 1.22–1.60), sedentary lifestyle (OR = 2.04, 95%CI 1.46–2.85), lack of physical exercise (OR = 1.35, 95%CI 1.19–1.53), sleeping difficulty (OR = 1.25, 95%CI 1.10–1.42), and lack of family support (OR = 1.19, 95%CI 1.10–1.29). Conclusion Older adults (≥70 years) with T2DM had a high prevalence of functional limitations across a range of daily living tasks, which not only affect individual quality of life but also place a heavy burden on families, the health services system, and society as a whole. The identified factors associated with ADL limitations may provide useful information for targeted nursing practice and health promotion.
Introduction
Type 2 diabetes mellitus (T2DM) is a serious public health concern. According to the latest 10th Edition IDF (International Diabetes Federation) Report (1), about 537 million people were living with diabetes (over 90% being T2DM) worldwide in 2021, and ∼6.7 million people died from it or its complications in that year. Driven by a complex interplay of multifarious factors such as the rapidly aging population, increasingly sedentary lifestyles, and abrupt changes in traditional dietary habits (2), T2DM has become one of the fastest-growing global health emergencies of this century, with the projected prevalence rate reaching 7,079 individuals per 100,000 by 2030 (3). China has the largest numbers of both current (accounting for about one-quarter of global cases) and projected T2DM cases, partly due to its large population size (1). From 1990 to 2016, the all-age morbidity and mortality rates in China dramatically increased by 78.4 and 63.5% (4), respectively, presenting a huge healthcare and economic burden on society.
T2DM is a prevalent chronic health condition more frequently affecting people aged 65 and over. Results of the 2017 national epidemiological survey showed that the prevalence of diabetes was 30.2% in people aged ≥60 years and the prediabetes prevalence rate reached 47.7%, although the proportion of undiagnosed cases was estimated to be 51.7% (1). Older T2DM patients were found to have an accelerated decline in leg lean mass, muscle strength and functional capacity when compared with normoglycemic control groups (5). Evidence has shown that T2DM is an independent risk factor for functional limitations among the older population (6), impairing activities of daily living (ADL) in about 60% of diabetic people aged >65 years in the USA, compared with only 34% of the same age group without T2DM (7), and especially among older Mexican Americans with T2DM (8). Moreover, T2DM patients were two to three times more likely to suffer from disability than their counterparts (9), and also utilized healthcare services more frequently. Therefore, the predicted increase in T2DM cases combined with the ongoing rapidly aging population may further burden the already overloaded healthcare system and aggravate the loss of economic self-sufficiency.
Effective diabetes self-management is critical for maintaining health and preventing the occurrence of further diabetes-related complications such as diabetic ketoacidosis, hypoglycemia, cardiovascular diseases, retinopathy, nephropathy, vascular neuropathy, and foot complications (6). However, self-management can be especially challenging for elderly people with T2DM, as they are more likely to suffer from functional limitations and develop geriatric syndromes than those without diabetes (8,9). Results of a survey including 1,691 individuals sampled from 5 provinces of China showed that T2DM patients, especially those living in a low socioeconomic status, were moderately satisfied with urban community health services (10), indicating room to improve community-level diabetes care services in the aspects of healthcare service quality, health promotion, health insurance, and the essential drug system (10,11). Currently, few studies in China have investigated the extent to which older adults suffer from functional limitations due to T2DM and its related complications. This study aims to investigate older T2DM patients' activities of daily living (ADL) and identify their influencing factors. The findings can provide useful evidence for the provision of targeted community healthcare services for older people with T2DM to improve their quality of life.
Study design and participant recruitment
Located on the southeast coast of China, Fuzhou is the capital city of Fujian Province, with a population of around 8 million in 2020. In line with the national trend, Fuzhou has been an aging society since 2000 (12). The aging of Fuzhou's population is still ongoing at a rapid pace, and the proportion of people aged >60 years increased from 12.1% in 2011 to 19.1% in 2020 (13). In terms of T2DM, the age-standardized prevalence rate (12.3%) of T2DM in Fuzhou was slightly higher than the national average level (11.2%) in 2017, and similarly for the age-standardized mortality rate. From March 2019 to June 2020, we conducted a cross-sectional questionnaire survey among T2DM patients in Fuzhou to investigate their ADLs. T2DM patients were approached with the support of local community health service centers when these centers carried out the NBPHSP services for T2DM patients. In this study, T2DM patients were recruited through a multi-stage random cluster sampling process. Firstly, two of the five urban districts in Fuzhou were randomly selected by drawing lots (the names of the five districts were put into a bowl and two names were randomly chosen). The two sampled districts (Taijiang District and Gulou District) have 22 community health service centers. Secondly, we assigned a unique number from 1 to 22 to each of the 22 community health service centers; 11 of them were then randomly selected as our study sites through an online random number generator (https://epitools.ausvet.com.au/randomnumbers). The inclusion criterion was T2DM patients aged ≥60 years. Those who could not answer the survey questions because of health issues (e.g., dementia or/and mental disorders) were excluded. Participation was completely voluntary and no incentives were offered. Informed consent was obtained from individual participants. The study was approved by the Medical Ethics Committee of Fujian Health College.
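A minimal sketch of the two random draws described above is given below (Python). The district names other than the two actually sampled are placeholders, and the study itself used lots and an online generator rather than this code.

```python
import random

random.seed(2019)  # reproducibility only; not part of the original procedure

# Stage 1: draw 2 of the 5 urban districts (names other than Gulou/Taijiang are illustrative).
districts = ["Gulou", "Taijiang", "District C", "District D", "District E"]
sampled_districts = random.sample(districts, k=2)

# Stage 2: the 22 community health service centers are numbered 1-22; draw 11 of them.
sampled_centers = sorted(random.sample(range(1, 23), k=11))

print(sampled_districts, sampled_centers)
```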
Questionnaire design
The questionnaire consists of two parts. The first section requested the following demographic information: age, gender, education level, marital status, household income, comorbidities, family support, and individual living habits (e.g., smoking, drinking, sleeping, physical exercise, eating pattern). The second section is the widely used Lawton ADL scale (validated Chinese version), which measures two important domains of functioning in older people with T2DM: the physical self-maintenance scale (PSMS) and instrumental activities of daily living (IADL) (18). PSMS rates the self-care abilities necessary for living in the community, in areas such as toileting, feeding, dressing, bathing, and locomotion. In contrast, IADL covers a more complex set of behaviors required for independent living, including the following eight areas: telephoning, shopping, food preparation, housekeeping, laundering, use of transportation, use of medicine, and financial behavior. Each item of PSMS and IADL was measured on a 4-point Likert scale: "do it completely by yourself," "a little difficult to do it independently," "do it with assistance," and "must be done by others," assigned 1-4 points, respectively.
PSMS therefore has a summary score from 6 to 24 and IADL a summary score from 8 to 32; the higher the score, the greater the person's functional limitation. ADL consists of PSMS and IADL, with a summary score from 14 to 56. The severity of ADL was classified into three levels: normal (14 points), somewhat impaired (15-21 points), and severely impaired (≥22 points). PSMS and IADL were defined as "impaired" if the summary scores exceeded 6 and 8, respectively, and ADL was defined as "impaired" if the summary score exceeded 14. The scores of PSMS, IADL, and ADL were summarized as mean ± standard deviation. After a pilot survey, the questionnaire was revised to ensure all questions were clear and understandable. The questionnaire was also reviewed by relevant experts. All investigators received unified training to ensure the survey was carried out consistently. Participants filled out the questionnaire by themselves with the support/assistance of an on-site investigator in the local community health service center. Completed questionnaires were checked and collected by the investigators on the spot.
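To make the scoring rules concrete, the sketch below (Python) sums hypothetical item ratings into the three scores and applies the cut-offs used in this study; the item counts (six PSMS items, eight IADL items) follow from the score ranges given above and the ratings are illustrative only.

```python
def adl_summary(psms_ratings, iadl_ratings):
    """Each item is rated 1 ('completely by yourself') to 4 ('must be done by others')."""
    psms = sum(psms_ratings)          # physical self-maintenance scale, range 6-24
    iadl = sum(iadl_ratings)          # instrumental activities of daily living, range 8-32
    adl = psms + iadl                 # total ADL score, range 14-56

    if adl == 14:
        severity = "normal"
    elif adl <= 21:
        severity = "somewhat impaired"
    else:
        severity = "severely impaired"

    return {"PSMS": psms, "PSMS impaired": psms > 6,
            "IADL": iadl, "IADL impaired": iadl > 8,
            "ADL": adl, "ADL severity": severity}

# Example: one IADL task needs assistance (rated 3); everything else is independent.
print(adl_summary([1, 1, 1, 1, 1, 1], [1, 1, 3, 1, 1, 1, 1, 1]))
```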
Statistical analysis
Data entry was facilitated using EpiData 3.1 software (EpiData Association, Odense M, Denmark). The demographic characteristics of ADL were descriptively analyzed. The scores of functional impairments were summarized as mean ± standard deviation. Kruskal-Wallis H test and Mann-Whitney U test were conducted as the first step to identifying factors associated with ADL. Stata "nptrend" test was used to examine the trend of ordinal variables on ADL (19). Then, we put statistically significant factors into a non-conditional logistic regression model to identify the factors influencing ADL (inclusion criterion α = 0.05, elimination criterion α = 0.10). Stata 16.0 was used to perform all statistical analyses. Results were considered statistically significant at a P < 0.05.
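A minimal sketch of the multivariate step is shown below, assuming the cleaned survey data were exported to a flat file with a binary "adl_impaired" outcome and the predictors retained after the univariate screening. The variable names, the file name, and the use of Python/statsmodels (rather than Stata) are illustrative assumptions, not the study's actual workflow.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("adl_survey.csv")  # hypothetical export of the EpiData entry file

model = smf.logit(
    "adl_impaired ~ C(age_group) + C(living_condition) + C(income_level)"
    " + C(health_insurance) + stroke + malignant_tumor + smoking"
    " + sedentary + physical_exercise + sleeping_difficulty + family_support",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, analogous to Table 6.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "95% CI lower", "95% CI upper"]
print(or_table.round(2))
```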
Results
A total of 2,016 questionnaires were received, with a response rate of 96%, including 995 participants recruited from Taijiang District and 1,021 participants from the Gulou District. As shown in Table 1, the average ADL self-care ability score was 14.91 ± 3.38 points. 12.4% of participants suffered from varying degrees of functional impairment, of which 8.2% were mild and 4.2% were severe. The average points for PSMS and IADL were 6.20 ± 1.10 and 8.71 ± 2.49, respectively. Accordingly, the percentages of participants with impaired function for PSMS and IADL were 5.2 and 12.0%, respectively.
Differences in the functional capability of participants with T2DM by demographic characteristics are summarized in Table 2. We found ADL functions were significantly affected by the following demographic factors: age (H = 52.13, P < 0.01), gender (Z = −2.53, P = 0.01), marital status (Z = −4.19, P < 0.01), monthly income (H = 68.31, P < 0.01), and health insurance status (H = 14.41, P < 0.01). Moreover, with increasing age, ADL functions demonstrated a decreasing trend (Z = 12.06, P < 0.001). Conversely, education level (Z = −2.4, P = 0.017) and financial status (Z = −3.99, P < 0.001) showed an increasing trend with ADL functions. Table 3 summarizes the differences in ADL functions by individual behaviors; factors including irregular eating habits, smoking, a sedentary lifestyle, lack of physical exercise, and sleeping difficulty were significantly associated with ADL functions. Table 4 shows the differences in ADL functions of older people with T2DM by family characteristics. We found ADL functions were significantly affected by living conditions (H = 28.20, P < 0.001), family support (H = 19.33, P < 0.001), and family income (H = 31.51, P < 0.001). Moreover, results of trend analysis suggest that more family support (Z = −6.07, P < 0.001) and better economic status (Z = −3.99, P < 0.001) were associated with fewer ADL function restrictions.
As shown in Table 5, among the 2,016 participants with T2DM, 82.7% lived with other chronic diseases. Moreover, the results of trend analysis suggest that those with more existing chronic diseases had more ADL function restrictions (Z = 3.38, P = 0.001). In addition to T2DM, we found the following two chronic diseases may compromise older people's ADL functions: stroke (Z = −3.40, P < 0.001) and malignant tumors (Z = −2.63, P = 0.01). Table 6 summarizes the results of the multivariate unconditional logistic regression analysis to identify factors affecting ADL functions. We found the following sub-groups were more likely to suffer from ADL impairment: those aged 70 years and over (OR = 1.99, 95%CI 1.77–2.56), living in an aged care house or with spouse/children (OR = 2.31, 95%CI 1.25–4.26), low monthly income (OR = 1.49, 95%CI 1.28–1.64), without health insurance (OR = 1.82, 95%CI 1.40–2.40), tight family expenses (OR = 1.95, 95%CI 1.42–2.69), having stroke (OR = 6.70, 95%CI 2.22–20.23) or malignant tumor (OR = 4.45, 95%CI 1.27–15.53), irregular eating habit (OR = 2.55, 95%CI 2.23–2.92), smoking (OR = 1.40, 95%CI 1.22–1.60), sedentary lifestyle (OR = 2.04, 95%CI 1.46–2.85), lack of physical exercise (OR = 1.35, 95%CI 1.19–1.53), sleeping difficulty (OR = 1.25, 95%CI 1.10–1.42), and lack of family support (OR = 1.19, 95%CI 1.10–1.29).
Discussion
Physical disability is a major socioeconomic and public health issue, as it not only diminishes the quality of life of those affected but also may result in a greater increase in healthcare services utilization such as physician visits and hospitalizations (4,20). Diabetes is associated with functional disability through mechanisms such as decreased cardiopulmonary reserve, inflammatory or sarcopenic processes, extremes of blood glucose, muscle catabolism, cognitive impairment, and inflexible treatment regimens (5,(21)(22)(23). The presence of diabetes and associated complications can lead to a significant decline in physical functioning, especially among older patients (8,24). Currently, limited studies in China have investigated to what extent older T2DM patients' activities of daily living are affected and what the influencing factors are. Limitations in ADL and IADL have been widely used as indicators to assess disability in basic life activities among the population over 65 years old (25,26). In this study, we found that an unhealthy lifestyle, suffering from stroke or malignant tumor, and sleeping difficulty may increase the risk of ADL limitations among older T2DM patients. We also found that household composition was associated with physical limitations in ADLs. Participants living alone performed ADLs much better than those who lived in aged care homes or with a spouse or/and children, probably because those with severe ADL impairment lacked self-care capability and had to live with others. These findings may provide useful information for the development of nursing practice and the improvement of effective health management for older T2DM patients. Currently, most published epidemiological research supports that diabetes is associated with ADL limitations (27). According to a cohort study from China, the risk of ADL impairment was increased by 102% (HR = 2.02, 95%CI 1.29-3.17) for T2DM patients aged 65-74 years, compared to those without T2DM in the same age group (28). Nevertheless, inconsistency still exists. Results of a multi-country study showed that diabetes was not associated with ADL limitations in China after controlling for confounding factors such as socioeconomic status, but significant associations were found in Mexico, Barbados, Brazil, Chile, Cuba, and Uruguay (29).
In this study, we found the prevalence of functional limitations (ADL) among the older T2DM patients in Fuzhou was 12.4%. It is much lower than the national average ADL impairment rate (32.3%) according to the survey data from China Health and Retirement Longitudinal Study (CHARLS) (30,31). The differences in the prevalence of ADL disability among T2DM patients across studies have been reported by international literature as well (8,29), which may be due to the varied criteria used to define functional limitations. Moreover, differences in socioeconomic and healthcare services levels (e.g., early diagnosis, medical treatment, and rehabilitation) across regions/cities may also contribute to the disparity. Another possible explanation is that the CHARLS survey data were collected between 2015 and 2016, and evidence has shown that the incidence of ADL disability among the Chinese older adult population with T2DM had a declining trend over time (26), mainly due to the considerable improvements in living standard, biological environment, and healthcare services.
We found ADL limitations increased with age, as older people were more likely to experience T2DM-related comorbidities (32). Moreover, our results showed that more comorbidities were associated with a greater risk of developing ADL limitations. It is in line with previous studies (8,26). As to gender differences in ADL limitations among T2DM patients, there is no consistency. Most previous literature suggests that older female T2DM patients usually reported more ADL functional limitations and physical disability than their male counterparts (8,26,28), although women generally utilized healthcare services more often than men. The greater prevalence and severity of arthritis and musculoskeletal disease among older women may partly explain the difference (33).
Another explanation is that women were more likely to report or over-report their ill health and disability than men (34). However, we found males reported more ADL limitations than their female counterparts, probably because males were older than females in this study.
Our results also indicate that those living with a low socioeconomic status were at higher risk of developing functional limitations in ADLs. This is consistent with previous studies (27). Moreover, lower socioeconomic status in older age seems to predict ADL limitations more than socioeconomic status at a younger age (27). In recent years, some social security programs have been launched or reformed by the government to provide better welfare to the older population, especially in health. The coverage of basic pension insurance expanded to about one billion people in 2020 (35). Currently, there are three categories of government-funded health insurance programs, namely urban employee medical insurance, urban resident medical insurance, and new rural cooperative medical insurance, which aim to improve the accessibility of medical treatment. However, social medical insurance schemes in China adopt the "payment-before-reimbursement" principle: the insured are required to pay the medical expenses in advance when seeking medical treatment, and a certain proportion of the expenses is then reimbursed after treatment. A large amount of prepayment may become one of the reasons restricting low-income groups from seeking timely medical treatment (36), which may potentially increase the risk of ADL limitations due to lack of health care access. To reduce the healthcare burden for those with serious chronic diseases, in recent years the reimbursement cap for more than 20 chronic diseases including diabetes has been increased to 140,000 Yuan per year, compared to 6,000 Yuan for general diseases in outpatient clinics (37). Targeted supportive policies for these vulnerable subgroups are helpful for maintaining T2DM patients' ADL functions.
There are several limitations to this study. First, some older T2DM patients with severe functional limitations, such as mobility problems or being bedridden, may not have visited the community service center during the study period and are possibly under-represented. This may lead to the ADL impairment rate being underestimated. Second, evidence has shown that older patients in poorer health are more likely to participate in health services related research (38). Patients who gave explicit written consent may therefore mischaracterize the health status of the larger population. In this study, we did not count how many people were excluded for not meeting the inclusion criteria or for refusing to participate, and it is unclear how those who refused differ from those who agreed to participate. Therefore, the results should be generalized with caution because of potential selection bias. Third, caution should also be exercised when extending the results to rural communities. Lastly, the duration of participants' disease may be associated with functional limitations in ADLs; however, we did not take it into account in the analysis because the data were unavailable.
Conclusion
The growing number of older T2DM patients coupled with a rapidly aging population continues to be a major public health concern in China. Older adults with T2DM, especially those aged ≥70 years, had a high prevalence of functional limitations across a range of daily living tasks, which not only affects individual quality of life but also presents a huge burden on the family, the health services system, and the whole society. The identified factors associated with ADL limitations may provide useful information for targeted nursing practice and health promotion.
Data availability statement
The data collected during the current study is not publicly available as the ethics approval only allows for members of the research team access. Upon reasonable request and with permission of the ethics committee, access can be granted. Any queries should be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by Medical Ethics Committee of Fujian Health College. The patients/participants provided their written informed consent to participate in this study.
Author contributions
J-HJ, JX, L-NJ, and H-LZ conceived the study. J-HJ, DL, and H-LZ designed the questionnaire. J-HJ, DL, L-NJ, YC, YY, and BZ did the field work and collected the data. J-HJ, DL, YC, YY, and BZ entered and cleaned the data. J-HJ, JX, and DL analyzed the data. J-HJ drafted the manuscript. JX, DL, L-NJ, YC, YY, CW, BL, RX, and H-LZ reviewed and edited the manuscript. All authors read and approved the final manuscript.
Funding
The study was funded by the Medical Innovation Project of Fujian Province (2018-CXB-16) and the Applied Technology Collaborative Innovation Research Project of Fujian Health College (2019-5-1).
Deconvolution well test analysis applied to a long-term data set of the Waiwera geothermal reservoir (New Zealand)
The geothermal reservoir at Waiwera has been subject to active exploitation for a long time. It is located below the village on the Northern Island of New Zealand and has been used commercially since 1863. The continuous production of geothermal water, to supply hotels and spas, had a negative impact on the reservoir. So far, the physical relation between abstraction rates and water level change of the hydrogeological system is only fairly understood. The aim of this work was to link the influence of rates to the measured data to derive reservoir properties. For this purpose, the daily abstraction history was investigated by means of a variable production rate well test analysis. For the analysis, a modified deconvolution algorithm was implemented. The algorithm derives the reservoir response function by solving a least square problem with the unique feature of imposing only implicit constraints on the solution space. To further investigate the theoretical performance of the algorithm a simulation with synthetic data was conducted for three possible reservoir scenarios. Results throughout all years indicate radial flow during middle-time behaviour and a leaky flow boundary during late-time behaviour. For middle-time behaviour, the findings agree very well with prior results of a pumping test. For the future, a more extensive investigation of different flow conditions under different parametrisations should be conducted.
Introduction
Waiwera is a small east-coast town in the northern part of the Auckland Region in New Zealand. Its hot water springs have been used for centuries and became increasingly popular owing to their recreational value. Over the decades, many pools were constructed, including a larger commercial spa in the centre (ARWB, 1980). In the 1960s the extensive use of hot water led to such a decline in water level that artesian conditions ceased. Since then hot water could only be produced by pumping. During the 1970s the number of bores and the abstraction rates further increased. At the same time, the water level continued to decline and the reservoir started to show signs of intruding seawater. Because the reservoir was at risk of irreversible damage, the Auckland Regional Water Board introduced a management plan for Waiwera in the 1980s (ARWB, 1980). The plan imposed restrictions on the abstraction rates by means of a minimum water level to be maintained. This water level is still measured in an observation bore adjacent to the sea side. In 2018 the central spa, which until then had been the main user of the geothermal water, closed down. The closure was due to economic reasons and the need for renovation of the pools, and is supposed to be only temporary. As a consequence, the water level recovered over the following years and the initial problem of overexploitation became obsolete. Data retrieved with unmanned aircraft systems and coupled thermal infrared cameras show a renewed activity of the hot springs on the beachfront of Waiwera (Präg et al., 2020).
Until 2018 the main objective was to find a maximum abstraction rate which still retains a sufficient water level in the reservoir. For this purpose, a multivariable regression analysis was conducted by Chapman (1998) and later by Kühn and Schöne (2017). Both regression models relate production rate readings to water level measurements and were used to predict the water level based on preceding rates. Although such statistical models have the advantage of being easy to implement, their applicability is limited to a certain constellation of bores (Kühn and Altmannsberger, 2016). In addition, the models cannot be used to understand reservoir properties. For this purpose, a hydrogeological model was developed by Kühn and Stöfen (2005) which considered the three-dimensional, fully coupled reactive flow behaviour in the reservoir. Besides the hydrostatic data, chemical and thermal measurements were also incorporated, making it by far the most comprehensive model of the reservoir. The aim of the presented work was to re-examine some reservoir properties by looking again at the relation between abstraction rates and water level measurements. Since the exploitation of the Waiwera geothermal reservoir can be seen as a long-term pumping test with varying rates, such an evaluation is equivalent to an ordinary non-equilibrium well test analysis. Besides its simplicity, the method has the advantage of serving both purposes: describing reservoir properties and providing the best possible prediction model for water level changes based on rates.
For the implementation of such a well test analysis, a novel deconvolution algorithm was used which has already found wide acceptance in the oil and gas industry. Here we tested the general applicability of the approach for Waiwera. For evaluation purposes, we compared the results with an "expected" model as well as with the outcome of a steady-state pumping test from 1979. The expected model is solely based on the hydrogeological setting at the Waiwera location.
Location and hydrogeology
The geological unit that makes up the reservoir is a compacted sandstone interlayered by siltstones. Owing to its depositional history, the rock comprises bathyal features such as Bouma sequences, channel-like depositions, as well as strong irregularities in bed thickness. All of these cause the original reservoir to be heterogeneous. Furthermore, the rock is highly fractured and larger faults cut through the reservoir. Undeformed beds dip towards the west with angles of up to 10°. For the reservoir rock, a matrix permeability range of 0.06 to 11.1 mD was found. In addition, a pumping test from 1979 determined a transmissivity of 320 m² d⁻¹. Because of that, at least vertical fluid flow is assumed to be fracture-driven. The entire reservoir has a thickness of roughly 400 m and is confined by metamorphic greywacke at its basement. On top lies a unit of unconsolidated alluvium with a thickness of roughly 13 m. A schematic cross-section of the reservoir is shown in Fig. 1. The hot water likely enters the reservoir from the greywacke through a fault. Orientation and extent of such a fault or a fault system can only be approximated using the apparent temperature distribution. The western reservoir boundary represents a cold freshwater aquifer. The eastern boundary is marked by the seaside with cold marine water. From both neighbouring systems flux occurs into the geothermal aquifer. The magnitude of flow depends on the hydraulic head gradients and thereby on the hydraulic heads along the reservoir margins. The resultant mixing of geothermal, fresh and seawater leads to changes in water salinity and temperature. The clay-rich fluvial sediments on top of the reservoir act as an impermeable seal and confine the aquifer.
Production rates and water level data
For the water level data, an hourly and a daily averaged time series from the observation well no. 74 (Fig. 1) were available. The data cover a period of almost 40 years from 1982 to 2019. The data set is not fully continuous and shows gaps ranging from a few days up to several months. Gaps of no more than 3 d were interpolated linearly.
The water level readings were corrected first for the atmospheric pressure load, because the aquifer is confined. For this purpose, atmospheric pressure data from the two nearest available stations were used (NIWA: National Institute of Water and Atmospheric Research climate database, https://cliflo.niwa.co.nz/, last access: 7 November 2021). Station "Whenuapai Aero" is 28.5 km away from the centre of Waiwera and was used to cover the time range from the beginning of the water level measurements in the 1980s until the year 2010. Station "1340" is 31.6 km away and covered the remaining time until today. For each station, a linear regression between the daily atmospheric pressure change P_atmo and the daily water level change was conducted. The slope of the regression line is the barometric efficiency B, which for "Whenuapai Aero" and "1340" was 0.59 and 0.52, respectively. The corrected water level p_cor is then calculated from the actual water level reading p_old, the barometric efficiency B, the pressure change P_atmo and the specific weight of water γ following Eq. (1). The production rate data were available as time series for the two main wells, no. 31 and no. 80 (Fig. 1). The data are continuous without any gaps. For the work presented here, the sum of both rates was used. The analysed range of the production data starts in December 2005 and ends in June 2017. From prior studies it is known that the Kaikoura Earthquake in 2016 induced significant water level changes in the reservoir and maybe even changed its properties (Kühn and Schöne, 2018). Therefore, the analysed time range was further confined to one day before the earthquake (13 November 2016) as the last day.
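Equation (1) itself did not survive the text extraction. Given the quantities defined above, a standard form of the barometric correction for a confined aquifer, offered here as a hedged reconstruction rather than the authors' verbatim equation, is

p_cor = p_old + B · P_atmo / γ    (1)

where P_atmo is expressed as a pressure change so that division by the specific weight γ converts it into an equivalent water column.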
Implementation, bootstrapping and synthetic data simulation
The implementation of the algorithm strictly follows the description of the variable projection algorithm in the original paper of Von Schroeter et al. (2004). It is the standard algorithm for separable least squares problems and requires the solution of two parts in each iteration. One is based on the mathematical QR decomposition and one on Singular Value Decomposition (SVD). The advantage of the scheme is its applicability to large data sets. In the following we only describe the adaptations we made for the presented study: within the variable projection algorithm, both the linear and the non-linear sub-problems are solved using the singular value decomposition; because the rate and the water level data are both given with a daily resolution, the total least squares (TLS) system turns out to be underdetermined when incorporating the estimation of true rates. Therefore, only the water level error and the measure of curvature are part of the TLS, not the rate error.
From these two points, the new TLS is formulated in terms of the convolution matrix C, the derivative matrix D, the naturally unaffected water level p_0 and the column vectors γ and k. The regularisation parameter λ is specified with an additional, adjustable exponent a. This is done to vary the initial regularisation parameter λ_def, which is defined as a function of the number of hydraulic head data points m, in such a way that the resulting response curve becomes smooth enough to be interpretable, as mentioned by Von Schroeter et al. (2013). Because smoothness is a subjective criterion, the adjustment of the initial regularisation parameter requires careful investigation. For this, the result was investigated with different exponent values ranging from high (close to zero) to low (strongly negative). For values that are too high the response function is stiff and suppresses features, while for values that are too low it achieves an excessive level of freedom and shows high-frequency features with no physical meaning. The optimal choice of smoothness lies in between and was identified by gradually decreasing the exponent a until a response function is generated in which the dominant features still exist and are interpretable, but arise from the maximum possible freedom, i.e. the lowest possible exponent.
To increase the reliability of the result and to also derive the statistical properties of the response function, seen as a dependent random variable, the algorithm was subjected to a bootstrapping method. In each of the 1000 iterations, a fortnightly time period was randomly sampled from the entire time range. Even though the initial regularisation parameter λ_def as well as the initial guess of the response function are estimated individually for each run, the following parameters remained constant and were predefined as follows: the first and best guess of the naturally unaffected water level p_0 was the mean water level between January and September 2019, which comes closest to the end of the build-up curve after the main spa in Waiwera closed down; the total number of nodes was set to 36; according to Von Schroeter et al. (2004) the number of nodes is arbitrary, within the constraint that more nodes increase the resolution while also putting the TLS problem at a higher risk of being underdetermined; here, 36 nodes ensure a resolution which is still equal to that of one day at the end of a fortnightly period; an underdetermined TLS problem could not be detected even for a much higher number of nodes, since the adjustment of the exponent a always seemed to compensate for it; the first node was set to one day owing to the resolution of the time series.
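A structural sketch of this bootstrapping loop is given below (Python). The deconvolution solver itself (the modified variable projection algorithm of von Schroeter et al., 2004) is represented only by a placeholder, and all names and sampling details are illustrative rather than the authors' implementation.

```python
import numpy as np

def deconvolve(rates, levels, n_nodes=36, first_node_days=1.0):
    """Placeholder for the modified deconvolution; returns one response curve z
    evaluated at the n_nodes node positions."""
    return np.zeros(n_nodes)  # the actual QR/SVD variable projection solver goes here

def bootstrap_response(rates, levels, n_runs=1000, window_days=14, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = []
    for _ in range(n_runs):
        start = rng.integers(0, len(levels) - window_days)   # random fortnightly window
        sl = slice(start, start + window_days)
        samples.append(deconvolve(rates[sl], levels[sl]))    # runs without a solution are skipped in practice
    samples = np.asarray(samples)
    median = np.median(samples, axis=0)                      # representative response curve
    mad = np.median(np.abs(samples - median), axis=0)        # statistical dispersion (MAD) per node
    return median, mad
```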
To relate true response functions to the corresponding functions found by the algorithm, three simulations with synthetic data were conducted. In doing so, the actual physical flow behaviour may be linked to the bootstrapping results from an empirical point of view. This extends the purely mathematical evaluation which, as will be seen later, turns out not to be always feasible. For the three simulations, three different scenarios were assumed. The first and the second refer to radial flow during the first day followed by a Leaky Flow Boundary (LFB) or a Constant Head Boundary (CHB), respectively. We regard these two scenarios as the most likely ones for the reservoir. Their parameterisation is mainly based on the results of the pumping test from 1979 (ARWB, 1980). The only exceptions are the leakage factor of the LFB and the distance ratio of the CHB; these parameters could of course not be deduced from the pumping test and were therefore chosen to best fit the bootstrapping results while still lying in a physically reasonable value range. The third scenario represents the assumption of the pumping test itself and assumes an instant steady-state (ISS) condition as expressed by the Thiem solution. The parameters from the pumping test are applied in this case as well.
The explicit formulation for all three scenarios is as follows:
- One day of radial flow, followed by a Leaky Flow Boundary (LFB): for radial flow the response function to the power of e is constant and equal to 1/(4πT); the resistance function G is parameterised with the transmissivity of T = 320 m² d⁻¹ determined from the pumping test. For a leaky flow boundary the drawdown is calculated after Walton's method described by Kruseman et al. (1990), under the valid assumption of a negligible aquifer storativity S. From this equation, the response curve to the power of e can be derived; it decays with increasing time at a rate controlled by T, S and the leakage factor. The function is parametrised with the theoretical storativity suggested in the pumping test, S = 2 × 10⁻³. The leakage factor β is arbitrarily set to 200, which translates into a comparably steep gradient in the response function.
- One day of radial flow, followed by a Constant Head Boundary (CHB): the linear constant head boundary is described by Stallman's method described by Kruseman et al. (1990), with the distance ratio r_Ratio given by the distance to the imaginary well r_im over the distance to the observation well r_74, i.e. r_Ratio = r_im / r_74. The response function to the power of e then follows from this drawdown expression.
- Instant Steady State (ISS) case within the first day, as inferred from the pumping test: here the response function to the power of e is a Dirac delta function scaled by the same factor as the Thiem equation used for pumping tests. The function is parametrised with the transmissivity from the pumping test.
Further, r_74 is the distance between the observation well no. 74 and the production wells, which is estimated to be 140 m. r_nat is the distance to a position where the water level is assumed to be unaffected by water production. From the drawdown distribution during the pumping test this distance was estimated to be 240 m.
The overall creation of synthetic data works as follows: the true response function is defined based on the scenario of the flow behaviour to be simulated; random production rates are created following a normal distribution with the same first and second moments as the measured rates; the water level is calculated by a forward convolution; and the production rate data are perturbed with a given error level. This error level corresponds to 10 % of the standard deviation of the measured production rate data, which is a rather high estimate compared to other error levels.
The production rate data and the water level are then fed into the algorithm and the resulting response function can be compared with the true response function (Grabow and Kühn, 2021).
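To make the forward-convolution step concrete, the following sketch (Python) generates one synthetic rate and water level series in the manner described above, simplified to an infinite-acting radial-flow response with daily rate steps; the function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unit_rate_drawdown(t_days, T=320.0):
    """True unit-rate drawdown for infinite-acting radial flow (one of the scenarios);
    the constant offset is irrelevant for the superposition of rate increments."""
    return np.log(t_days) / (4.0 * np.pi * T)

def make_synthetic(n_days, q_mean, q_std, p0, error_frac=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng(1)
    q = rng.normal(q_mean, q_std, n_days)        # random rates with the moments of the measured rates
    t = np.arange(1, n_days + 1, dtype=float)
    dq = np.diff(q, prepend=0.0)                 # daily rate increments (step changes)
    g = unit_rate_drawdown(t)
    # forward convolution by superposition of rate steps in time
    drawdown = np.array([np.sum(dq[: i + 1] * g[i::-1]) for i in range(n_days)])
    p = p0 - drawdown                            # synthetic water level
    q_noisy = q + rng.normal(0.0, error_frac * q_std, n_days)  # perturbed rate data
    return q_noisy, p
```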
Results of the well test analysis
The results of the bootstrapping algorithm and the three synthetic data simulations are shown in the corresponding columns of Fig. 2. Each row refers to a different exponent a.
In each plot the y axis has the unit of the response function while the x axis displays the nodes.
Owing to the occurrence of outliers the representative response curve is derived from the median of all response curves and is shown in black. For the same reason, the median absolute deviation (MAD) of all response curves is used as the representative quantity of statistical dispersion. It is depicted as a blue area which expands below and above the median response function for a given MAD value. Further, for the LFB and CHB scenarios the true response curve is shown in red.
For the evaluation and the subsequent discussion, the naming conventions of early, middle and late times in accordance with Gringarten (1985) are used. Early times correspond to characteristic flow close to the well and are not considered in this study, while middle times are equivalent to the processes during the first day. Anything later, where flow boundaries become apparent, is referred to as late times.
Response functions
For the bootstrapping with exponents of a = −2 to a = −4, the median curve is almost horizontal, with first-node values of −14.4 to −12.3, respectively. For lower exponents down to a = −10 the curves develop a characteristic shape with a sharp decline at the beginning and a linear increase towards the end. At the very end, we again observe a sharp drop.
For the exponent of −10 this shape leads to a global minimum at times of roughly t = e^0.5 = 1.6 d and to a less distinct local maximum at times of t = e^2.5 = 12.2 d. The intersection with the y axis rises with the evolution of the shape with decreasing exponents. For the exponent of −10 the first node has a value of −8.7. For even smaller exponents the shape reverses and the depression slowly disappears until the curve is almost horizontal again. However, the sharp drop at the end remains accompanied by increasingly large, high-frequency fluctuations that superpose the overall shape for later times.
The MAD remains relatively small and constant throughout the whole time range for larger exponents. With the development of the mentioned characteristic shape at a = −4 and below, it clearly increases across the entire investigated time frame, except for the very first node, where it remains small irrespective of the applied exponent. For exponents smaller than a = −12 the MAD decreases again, except for very late times, where the fluctuations of the median response curve are observed.
Synthetic data simulation
For the CHB scenario the median response curve shows the best fit compared to the true response function with high exponents already for a = −2. With decreasing exponents, the median response starts deviating especially for later times while at the beginning it remains almost unchanged. The deviation mainly originates from the fluctuations. It is observed as well that the MAD increases with decreasing exponents, especially towards the late times where the fluctuations occur.
For the LFB scenario the median response function starts again with a horizontal line which aligns more and more with the true response function between the start and roughly t = e^1.5 = 4.5 d as the exponents decrease from −4 to −8. The values for very late times remain almost constant, which leads to an upward trend after 4.5 d and the development of a global minimum. Notably, the value of the first node is always underestimated by the algorithm. With smaller exponents down to a = −12 the global minimum decreases further and at the same time the curves experience larger fluctuations which start comparatively early. With even lower exponents the median response function changes its shape and develops back to a horizontal line with fluctuations at the end. The MAD generally increases with decreasing exponents down to a = −12. This is especially observed during the middle period, whereas for very late times the increase is only slight. For early times, however, it remains more or less constant. When the median response function becomes a horizontal line again along the x axis of ln(t), the MAD decreases significantly and only increases for very late times when fluctuations are present.
The median response function of the ISS scenario shows a similar development as the one of the LFB scenario with decreasing exponents. That is, the curve is an almost horizontal line for higher exponents and then develops more and more a minimum down to a = −6. With even lower exponents down to a = −12 this global minimum becomes more pronounced and its position moves further towards earlier times. This is accompanied by a strong increase of the fluctuations at later times. With exponents smaller than −12, the curves turn into horizontal lines again with fluctuations similar to the ones for the CHB and LFB scenarios. The MAD is generally quite high except for very high and very small exponents. In all cases it is high right from the start until later times for high exponents down to a = −12. Like for the two other scenarios the MAD decreases slightly for very late times. For exponents below a = −12 the MAD decreases in general but remains high for the late times where the fluctuations are observed.
Mathematical evaluation
We regard the exponent of −10 as the most suitable for further interpretation of the response functions for the investigated system, for three reasons. First, it satisfies the requirement, coming from higher exponents, of corresponding to the lowest regularisation parameter which still yields a response function that is interpretable and features the shape that is seen as valid. A valid shape is considered to be the development of the global minimum, simply by exclusion of the other two shapes, i.e. the almost horizontal lines for very high and low exponents, which in comparison with the simulated scenarios are regarded as shapes of artificial origin. Second, it corresponds to one of the lowest regularisation parameters which still ensures a stable parameterisation of the algorithm, for which 998 out of 1000 samples yielded a solution. Third, among these regularisation parameters, it yields a response function with the lowest MAD at the beginning of the curve. Because only the evaluation of middle times allows the comparison with the results from the pumping test, low variability at the beginning is especially desirable.
So far only the mathematical evaluation of middle times is considered meaningful. For later times, after one day, the MAD becomes too high to regard the median response curve as a representative outcome. For this reason, only the first node is evaluated, which also means that, in contrast to usual well test evaluations, the flow behaviour cannot be inferred from the shape of the response function; only the value of the first node itself may give an indication of it. So far, only the assumption of radial flow during the first day led to a transmissivity value which also comes close to the findings of the pumping test: under this assumption e^τ1 = 1/(4πT), so the node τ1 = −8.69 equates to a transmissivity of roughly 474 m² d⁻¹. A MAD of 0.99 translates into an asymmetric confidence interval for T, with T = 1281 m² d⁻¹ as the upper boundary and T = 176 m² d⁻¹ as the lower boundary.
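The quoted transmissivity and its bounds can be reproduced directly from the first node; the short check below (Python) applies the radial-flow relation e^τ1 = 1/(4πT) together with the ±MAD range (small differences from the quoted values are due to rounding).

```python
import numpy as np

tau1, mad = -8.69, 0.99
T = np.exp(-tau1) / (4.0 * np.pi)                 # roughly 474 m^2/d, as quoted above
T_upper = np.exp(-(tau1 - mad)) / (4.0 * np.pi)   # upper boundary, roughly 1280 m^2/d
T_lower = np.exp(-(tau1 + mad)) / (4.0 * np.pi)   # lower boundary, roughly 176 m^2/d
print(round(T), round(T_lower), round(T_upper))
```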
Comparison with simulated data cases
For high exponents, the algorithm yields an almost horizontal curve for all three scenarios (Fig. 2). This is reasonable because the initial guess of the function is a horizontal line and the regularisation parameter at this stage is high. Therefore, any deviation from the initial curve, which inevitably disturbs smoothness, i.e. increases the second derivative, is penalised to a great fraction in the objective function. The optimum is then found close to that of a horizontal line. The fact that the true response function of the CHB scenario is estimated quite well may therefore be explained by its smaller negative slope which comes closer to a horizontal line than the true response curve of the LFB. Even though the shape of the LFB and the ISS scenario cannot be estimated at this stage, the value of the first node which corresponds to middle times is estimated well for all three scenarios. For the results of the bootstrapping method this means that the middle time behaviour may also be regarded as valid.
The phenomenon that for very low exponents again a horizontal line happens to be the median response curve for all three scenarios can be explained by the considerably lower number of successful samples. On the one hand, this translates into less curves that can be considered for evaluation which decreases the MAD. On the other hand, it creates a bias of the solution spectrum since the curves from a successful run satisfy certain properties. The reason why a solution cannot be found is not because the algorithm did not converge but rather because the response functions achieved values that are too low to be computationally handled. Since in addition fewer solutions are found right above these limiting values the median depicts the left-over majority of curves which are solutions of the horizontal line close to the initial guess.
Considering the development of the median response function over the course of decreasing exponents for the LFB scenario, it seems that the good fit until day 4.5 for a = −6 is purely accidental. Besides the good fit for middle times, the curve goes down and up again arbitrarily, with a high MAD and therefore a broad spectrum of other solutions that differ completely from the one represented by the median curve.
A similarly arbitrary development can be seen for the ISS scenario after the first day. Because here a mathematical relation between water level and rate data does not exist after the first day, the fluctuations as well as the high MAD must be regarded as the algorithm's response to such circumstances. Therefore, the fluctuations as well as the MAD in the LFB scenario may also be attributed to a lack of an obvious connection between water level and production rate data. This can be the case because the true response curve for the LFB reaches down to particularly low values after a short time, and a decrease of a response function value equates to an exponential decrease of the function with which the rate series is convolved. In other words, even though mathematically the influence of production rates still exists, below some value the superposed rate error outweighs this influence and no reliable relation can be found by the algorithm. Because the results of the bootstrapping method show a similarly large MAD, it is likely that the true response function of the reservoir also reaches down to very low values. This leads to the assumption that either an LFB with a comparably low leakage factor or an instant equilibrium as in the ISS scenario is present. Considering the history of the reservoir, during which excessive exploitation led to a steady decline over decades, as well as the build-up curve which extended over nearly two years, the latter is regarded as unrealistic. Together with the mathematical findings for middle times, radial flow behaviour within the first day followed by leaky flow behaviour for later times is seen as the most plausible result based on the current findings.
Errors and uncertainties
The biggest source of error comes from the assumption that bore holes no. 31 and no. 80 are the only production wells. In fact, many other bores exist (Kühn and Stöfen, 2005) from which water is also produced on a regular basis. This simplification must be accepted owing to the lack of other data.
Another error might arise from taking the sum of both rates and thus treating the system as having effectively only one bore. However, this error likely affects only the early-time behaviour, which cannot be resolved at a daily resolution. To account for both wells individually, the program should be extended to a multi-well deconvolution problem, as has been done by Cumming et al. (2014). For now, the error due to summation is still smaller than the error that would result from selecting only one well and neglecting the other.
Furthermore, apart from the barometric effect, other influences on the hydraulic head were neglected. It must be acknowledged that the hydraulic head values used in this analysis do not relate to production rates alone. Other effects might include the loading of the overlying freshwater aquifer, variations in groundwater recharge and varying boundary conditions, especially the tides on the seaward side.
A conceptual error arises from deconvolution itself, which assumes a linear system obeying the principle of superposition in time. According to Kruseman et al. (1990), this condition is often not met for larger fractures. However, given the high density of fractures here, this case can probably be excluded. Observations from the pumping test showed a spatially homogeneous response during pumping and thus support this assumption.
All these different errors add up to a perturbation that is difficult for the algorithm to distinguish from the actual convolution. This is especially apparent at later times, where the response function is low and so is its contribution to water level changes. With the method applied in this work, the uncertainties are too high to allow anything more than speculation about the type of boundary condition, let alone its parametrisation.
Conclusions
We conclude that the current implementation of a variable-rate well test analysis is applicable to the daily-averaged time series in Waiwera. This holds for the middle-time behaviour, for which the well test analysis yields the same model parameter as the pumping test. The result for the late-time behaviour, however, can only be interpreted by comparison with synthetic data. The outcome indicates very low values of the true response function right after the first day. Since an instant equilibrium of the reservoir is incompatible with the observations of the past, only a leaky flow boundary with a low leakage factor can be regarded as appropriate. It needs to be taken into account that the original method we adapted in the present study was developed for the interpretation of standard well tests. For such set-ups it is a very powerful tool that is applicable to many hydrogeological settings. However, in situations in which reservoirs react to changing constraints, the deconvolution reaches its limits. For the "long term pumping test with varying rates" tested here, further development is required.
In the future, a more extensive investigation of different flow conditions under different parametrisations should be conducted. Only in this way can the statistical dispersion of the outcome be linked quantitatively to response functions. To further improve data quality, the influence of other environmental factors on the water level should be investigated more extensively. In particular, the influence of precipitation and the tides requires more analysis.
To overcome the inherent limitation of the deconvolution algorithm implemented here, spectral methods could be tested. This completely different approach would solve the deconvolution in the Laplace/Fourier space and therefore simplify the problem to a pointwise product between two functions.
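As a rough illustration of that idea (a generic sketch, not part of the present implementation; the function name, the damping parameter and the pre-processing assumptions are ours), the convolution theorem turns the time-domain convolution of the rate series with the response function into a pointwise product of their transforms, so that a naive spectral deconvolution reduces to a pointwise division, typically stabilised by a small damping term:

```python
import numpy as np

def spectral_deconvolution(water_level, rates, eps=1e-3):
    """Naive FFT-based deconvolution sketch.

    water_level : observed water-level series (zero-mean, evenly sampled)
    rates       : production-rate series of the same length
    eps         : Tikhonov-style damping to avoid division by near-zero spectra
    Returns an estimate of the impulse-response function in the time domain.
    """
    W = np.fft.rfft(water_level)
    Q = np.fft.rfft(rates)
    # Convolution theorem: W = Q * G  =>  G = W * conj(Q) / (|Q|^2 + eps)
    G = W * np.conj(Q) / (np.abs(Q) ** 2 + eps)
    return np.fft.irfft(G, n=len(water_level))
```

In practice, the choice of damping and the pre-processing of the series (detrending, tapering, handling of gaps) would largely determine the quality of such an estimate, which is why this route would require systematic testing of its own.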
Appendix A: List of symbols and nomenclature
In order to avoid conversion factors in equations, all quantities appearing in them are assumed to be either dimensionless or to have matching units.
Post-translational environmental switch of RadA activity by extein–intein interactions in protein splicing
Post-translational control based on an environmentally sensitive intervening intein sequence is described. Inteins are invasive genetic elements that self-splice at the protein level from the flanking host protein, the exteins. Here we show in Escherichia coli and in vitro that splicing of the RadA intein located in the ATPase domain of the hyperthermophilic archaeon Pyrococcus horikoshii is strongly regulated by the native exteins, which lock the intein in an inactive state. High temperature or solution conditions can unlock the intein for full activity, as can remote extein point mutations. Notably, this splicing trap occurs through interactions between distant residues in the native exteins and the intein, in three-dimensional space. The exteins might thereby serve as an environmental sensor, releasing the intein for full activity only at optimal growth conditions for the native organism, while sparing ATP consumption under conditions of cold-shock. This partnership between the intein and its exteins, which implies coevolution of the parasitic intein and its host protein, may provide a novel means of post-translational control.
INTRODUCTION
The existence of inteins, protein-splicing elements, at the active site of critical proteins suggests their regulatory potential (1). Inteins are mobile genetic elements that invade protein-coding genes at the DNA level, whereas at the protein level they catalyze self-removal from the host protein, the exteins. Although this process of protein splicing occurs spontaneously, a role for an extein-intein partnership in controlling activity of the host protein, suspected on the basis of enigmatic intein distribution, is largely unexplored.
Although inteins exist in proteins with diverse functions, proteins involved in DNA metabolism, such as polymerases, helicases, recombinases, topoisomerases and ribonucleotide reductases are the most common hosts for inteins (1). Adenosine triphosphatase (ATPase) domains are particularly common intein insertion points that are found in several classes of proteins including recombinases and helicases. Invasion of intein DNA often occurs into regions encoding conserved protein domains that are critical to protein function, such as a catalytic center, a ligand binding site or an interaction surface. This intein localization may be explained by the specificity of the intein's mobility apparatus, the homing endonuclease domain, for conserved sequences (2). An alternative explanation may be that inteins become more readily fixed in the population by occupying conserved sites (3,4). Alternatively, we proposed that the presence of inteins in some conserved motifs might be explained by an adaptive, regulatory role of inteins (1,5).
Protein splicing is a naturally occurring posttranslational autoprocessing event, where the intein performs a series of autocatalytic peptide bond rearrangements. The mechanism of splicing for canonical inteins starts with two transesterification steps catalyzed by the first residue of the intein (Cys or Ser) and the first residue of the C-extein (Cys, Ser or Thr) (for review see (6)(7)(8)). A resulting branched intermediate, with the N-extein and the intein connected to the C-extein, is resolved by cyclization of the intein's C-terminal conserved Asn, with release of the intein. Finally, the thioester bond connecting the ligated exteins is converted to a peptide bond, leaving a scarless protein. Several conserved amino acid residues within the intein, including His residues, modulate activity of the catalytic residues. Although extein residues flanking the intein can also influence the rate of splicing, this extein effect was assumed to be limited to amino acid residues immediately proximal to the intein (9)(10)(11) and there is no known role for remote extein residues.
Conditional protein splicing (CPS) is consistent with the hypothesis that some inteins act as post-translational regulators of gene expression (1). CPS occurs in the presence of a particular environmental trigger, such as redox stress, temperature or pH (5,(12)(13)(14)(15)(16)(17)(18). The existence of environmentally sensitive inteins hints at intein adaptation to the intracellular niche by development of a post-translational regulatory response. In these models an intein is thought to act as an environmental sensor and the role of exteins has been largely ignored (5,(12)(13)(14)(15)(16)(17)(18). We are prompted to ask if exteins play a more substantive role and if three-dimensional (3D) extein-intein interactions might act in intein regulation, affecting intein splicing and thereby extein function, as a novel form of post-translational control.
To probe this hypothesis, we chose the intein-containing RadA protein from the hyperthermophilic archaeon Pyrococcus horikoshii (Pho RadA) for the following reasons. First, the intein is in the adenosine triphosphate (ATP) binding site (P-loop) of the RadA ATPase domain, which is the most common site for intein occupancy (1). Second, the RadA protein belongs to the conserved recombinase superfamily composed of bacterial RecA, archaeal RadA or Rad51, and eukaryal Rad51, all of which share a structurally conserved core ATPase domain with P-loop (19). Third, in contrast to strong transcriptional regulation of bacterial and eukaryal recombinase expression (20,21), hyperthermophilic RadA proteins were proposed to be regulated mostly post-translationally (22). Fourth, the RadA intein splices robustly in the context of different non-native exteins (23). Finally, the Pho RadA intein is a mini-intein of known structure, lacking the endonuclease domain, which simplifies RadA precursor modeling (24). In this work we discovered that the intein functions in a partnership with its native exteins in 3D space to regulate splicing in an environment-dependent manner, being responsive to temperature and solution conditions. Thus, protein splicing provides a thermal switch allowing full activity of the RadA protein only at the elevated temperature corresponding to the growth temperature of its native host. Thereby, superimposed on thermally regulated recombinase activity (25)(26)(27)(28)(29)(30), protein splicing may impose an additional level of posttranslational control, sparing ATP consumption at suboptimal temperatures and converting the inactive RadA zymogen to active RadA protein at the optimal growth temperature of the host.
Computational analysis
The amino acid sequences used in this study were obtained from the Protein database at the National Center for Biotechnology Information (NCBI; www.ncbi.nlm.nih.gov) and the InterPro database (www.ebi.ac.uk/interpro/). Iterative and complementary searches were performed to identify and obtain sequences for the RadA/RecA inteins. We used the NCBI Protein database and key-word search queries of the following composition: ('Archaea'[Organism] AND RadA[All Fields] AND intein[All Fields]) or ('Bacteria'[Organism] AND RecA[All Fields] AND intein[All Fields]), to search for RadA proteins with annotated inteins in archaea or all RecA proteins with annotated inteins in bacteria. Next, we performed a complementary search in the InterPro database, which allows access to annotated proteins within a given protein family simultaneously. The family 'DNA recombination/repair protein RadA' (InterPro accession number: IPR011938) was represented by 291 proteins from archaea and eukaryota. By filtering species based on RadA domain organization, also through the InterPro database, the list of species with RadA proteins containing an intein-like Hint domain N-terminal (InterPro: IPR003587), Hint domain C-terminal (InterPro: IPR003586), and/or Homing endonuclease (InterPro: IPR027434) was obtained, and compared with results of the key-word search from the NCBI Protein database. The family 'DNA recombination and repair protein RecA' (InterPro: IPR013765) contained 22,046 proteins from bacteria. Using the same domain-identification strategy as for archaea, the bacterial RecA proteins with inteins were identified and compared. Finally, we performed a series of iterative blastp and tblastn analyses (both with default parameters) using the identified inteins as queries. This last step was especially useful in cases when genomes were not annotated or assembled. The identified inteins are listed in Table 1. Multiple protein alignments were performed by ClustalW (31) and edited manually. Phylogenetic analysis was performed using the Neighbor-Joining method in the MEGA5 program (32).
Bacterial strains and growth conditions
All strains and plasmids used in this study are listed in Table 2. Escherichia coli DH5α, BL21 Star (DE3) and ArcticExpress (DE3) strains were grown in Luria Broth (LB) medium with aeration. Where appropriate, the media contained ampicillin (100 µg/ml) or chloramphenicol (25 µg/ml). Electroporation of E. coli was performed with a Gene Pulser apparatus (BioRad). Transformants were recovered in SOC media (0.5% yeast extract, 2% tryptone, 10 mM NaCl, 2.5 mM KCl, 10 mM MgCl2, 10 mM MgSO4 and 20 mM glucose) for 1 h at 37 °C with aeration.
Plasmid methodology, enzymes and oligonucleotides
Plasmid DNA was isolated and purified using a QIAprep Spin Miniprep Kit (Qiagen). The enzyme digests and polymerase chain reaction (PCR) fragments were visualized by electrophoresis in 0.7% (w/v) agarose gels stained with ethidium bromide. DNA fragments were purified from agarose gels with QIAquick Gel Extraction Kit (Qiagen). Restriction endonucleases and T4 DNA ligase were purchased from New England Biolabs (NEB) and used as described in manufacturer protocols. The list of oligonucleotides used in this study is in Table 3. The sequences of all fragments generated by PCR were verified.
Construction of plasmids
Construction of the plasmid pMIG-RadAi, carrying the Pho RadA intein gene in foreign exteins, was accomplished by inserting the RadA intein sequence with short native exteins (N-ext: EVFGEFGS and C-ext: TQLAHTLAVM) into ClaI and SphI sites between sequences coding for maltose binding protein (MBP) and green fluorescent protein (GFP) in a pACYCDuet-1 plasmid backbone ( Table 2). The RadA intein fragment was generated by PCR with primer pair IDT3519/IDT3520 from P. horikoshii genomic DNA (a generous gift from K. Mills) using ExTaq DNA polymerase (TaKaRa). The plasmid carrying the full-length radA gene, pFL-RadAi, expressing the Pho RadA intein in native exteins, was constructed by amplifying the radA from P. horikoshii genomic DNA by PCR using primer pair IDT3184/IDT3185 and inserting the resulting fragment into BamHI and XhoI sites of the expression vector pET45b. Plasmid pRadA, expressing the inteinless version of the RadA protein, was constructed by inverse PCR using primer pair IDT3962/IDT3964 and pFL-RadAi as a template. The CloneAmp HiFi PCR PreMix (Clontech) was used to ensure accuracy and efficiency of the amplification. The In-Fusion HD Cloning Plus kit (Clontech) was used to seal the ends of the amplified plasmid.
Splicing assays
To test the temperature effect on splicing in vitro, the Ni-NTA-purified FL-RadAi precursor protein and the extein mutant precursors were incubated at different temperatures (25-85 °C with 10 °C increments) for 15 min, 30 min, 1 h, or 2 h and then analyzed on a 12% SDS-PAGE gel with Coomassie blue staining. To test solvent/solute effects, aliquots of the Ni-NTA-purified FL-RadAi precursor protein in a 96-well plate were supplemented with 96 reagents (1/20 dilution) from the Solubility and Stability Screen (Hampton Research), incubated overnight at 35 or 25 °C and analyzed by 12% SDS-PAGE. Those compounds that were found to facilitate intein splicing at 25 °C were incubated with purified FL-RadAi precursor at 25 °C for 30 min and then analyzed by 12% SDS-PAGE. To calculate the progression of intein splicing, protein bands on Coomassie blue-stained gels were quantified using ImageJ (www.imagej.nih.gov/ij/). The splicing efficiencies were calculated as LE/(LE + PRE) × 100, where LE represents the amount of ligated exteins and PRE the amount of precursor. Because the intensity of Coomassie blue-stained bands depends on protein size, the signal from the LE band was adjusted to accommodate the size difference between the ligated exteins and the precursor.
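A minimal sketch of this efficiency calculation is given below; the size-correction factor shown is one plausible reading of the stated adjustment, and the function name, variable names and example intensities are our assumptions rather than the authors' script.

```python
def splicing_efficiency(le_intensity, pre_intensity, le_size_kda, pre_size_kda):
    """Per-lane splicing efficiency from Coomassie band intensities.

    Coomassie staining scales roughly with protein mass, so the ligated-extein
    (LE) signal is rescaled by the precursor/LE size ratio before computing
    LE / (LE + PRE) * 100.
    """
    le_adjusted = le_intensity * (pre_size_kda / le_size_kda)
    return 100.0 * le_adjusted / (le_adjusted + pre_intensity)

# Example with the band sizes given in the text (62 kDa precursor, 42 kDa LE)
print(splicing_efficiency(le_intensity=1500, pre_intensity=1500,
                          le_size_kda=42, pre_size_kda=62))
```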
Circular dichroism measurements
Far-UV circular dichroism (CD) spectra were measured in a 0.1 cm temperature-controlled quartz cell using a Jasco J-720 spectropolarimeter (Tokyo, Japan) at various temperatures. Bandwidths of 1 nm and scan speed of 100 nm/min were utilized. Three spectra accumulations were averaged. The absorbance of the sample in the CD cell did not exceed 1.5, which was well within the recommended absorbance range (<3). The photomultiplier voltage was recorded in each run and did not exceed 600 V, which was also within the recommended range.
ATPase activity of RadA
The inteinless RadA protein and the splicing inactive FL-RadAi-AA protein precursor were Ni-NTA-purified as described above and exchanged into assay buffer (25 mM Tris pH 7.5, 10 mM MgCl2) pre-treated with PiBind resin (Innova Biosciences) to remove contaminating inorganic phosphate. ATPase activities of RadA and FL-RadAi-AA (2 µM) were analyzed using the High Throughput Colorimetric ATPase Assay kit (Innova Biosciences) following the manufacturer instructions. Enzymatic reactions were performed at different temperatures (25-85 °C with 10 °C increments) for 30 min. Different concentrations of the substrate, ATP, were added in the presence of 0.2 µg/ml of M13mp18 ssDNA (NEB). To eliminate the impact of temperature-dependent non-enzymatic ATP hydrolysis, protein-free controls were used for each data point. All experiments were performed in triplicate.
RadA precursor modeling
Homology models for the RadA extein and intein were generated separately based on the closest templates for the extein PDB ID: 2ZUB (35) and for the intein PDB ID: 4E2T (24). The initial sequence alignment required for homology modeling was performed using ClustalW (31) (http://www.ebi.ac.uk/Tools/msa/clustalw2/) and was manually corrected for proper alignment of conserved intein sequences and correcting for artificial mutations by also taking into consideration a structural alignment performed using Phyre (36). Homology models were generated using MODELLER (37), with the DOPE (38) and GA341 (39) energy functions, and the best scoring model was chosen for further optimization. The intein homology model was manually positioned such that its ends were as close as possible to its connection with the extein homology model using the molecular visualization program Visual Molecular Dynamic (VMD) (40). The two loops connecting the intein to the N-extein and the C-extein were then remodeled using the loop prediction program Loopy (41) to get an initial continuous precursor structural model. This model showed the first residue of the intein and the first residue of the C-extein to be far apart, which is not a feasible conformation for the splicing reaction. Therefore, they were brought closer using the following protocol, which used restrained energy minimizations and molecular dynamics (MD) simulations. A harmonic distance restraint was applied between the centers-of-mass of the two separated residues with a force constant of 10 kcal/mol/Å 2 . The distance minimum of the restraint was gradually reduced to 6Å in 2Å decrements to achieve the optimized precursor model. At each distance minimum, the structure was optimized in vacuo using 500 steps of Steepest Descent (SD) minimization (42), followed by 100 steps of Adopted Basis Newton Raphson (ABNR) minimization (43) and then by Langevin dynamics for 2 ps with a friction coefficient of 60 ps at a temperature of 200 K, followed by another round of 500 steps of SD minimization and 100 steps of ABNR minimization. These minimizations and MD simulations were performed using the program CHARMM (44,45) and the CHARMM22 protein force field (46).
The optimized precursor models were analyzed for vacuum interaction energies between catalytically involved intein amino acid residues (C153, H245, H312, H323 and N324) and all other residues in the precursor. This was done by looping over all the residues and calculating the extein sidechain:intein sidechain and extein sidechain:intein backbone components of these interaction energies to get an understanding of the approximate influence of each extein amino acid sidechain on the intein catalytic residues.
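Illustratively, the screening step amounts to flagging every extein residue whose sidechain or backbone interaction with any of the catalytic intein residues exceeds 1 kcal/mol in absolute value. The sketch below is a toy version under assumed data structures (the energies themselves would come from the CHARMM analysis described above):

```python
# Hypothetical screening sketch: the dict layout and function name are
# assumptions, not the authors' code.
CATALYTIC = ["C153", "H245", "H312", "H323", "N324"]

def significant_contacts(energies, threshold=1.0):
    """energies: dict mapping (extein_residue, intein_residue, component)
    to an interaction energy in kcal/mol, where component is
    'sidechain' or 'backbone'. Returns the flagged extein residues."""
    hits = {}
    for (ext_res, int_res, component), e in energies.items():
        if int_res in CATALYTIC and abs(e) > threshold:
            hits.setdefault(ext_res, []).append((int_res, component, e))
    return hits
```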
RadA model assessment
A few different Model Quality Assessment Programs were used to assess the RadA precursor wild-type and mutant models with the scores shown in Supplementary Table S1. The Swiss-Model Qmean server (http://swissmodel.expasy. org/qmean/cgi/index.cgi) was used to obtain the Qmean and Qmean derived Z-score (47). QMEAN score is a global score for each model that scores its reliability from values ranging between 0 and 1, with higher values indicative of greater accuracy. The associated Z-score relates this QMEAN score of each model to the scores of a nonredundant set of high-resolution X-ray structures of similar size with ideal values being close to 0 (i.e. showing least deviation from the X-ray set scores). ProSA-Web Z-scores (48) were obtained using the ProSA-Web server (https: //prosa.services.came.sbg.ac.at/prosa.php). These Z-scores are mostly within the range −7 to −12 for X-ray structures of proteins of length close to 500 residues. The Molprobity score (49) of each model was obtained using the Joint Center for Structural Genomics' Quality Control Check version 3.1 (http://smb.slac.stanford.edu/jcsg/QC/). The Molprobity score reflects the X-ray resolution at which its individual scores would be the expected values such that lower values indicated better models. The Protein Structure Validation Server (PSVS1.5) (50) was used to obtain Z-scores for Verify3D (51) and Procheck G-factor phi/psi (52) assessments. These Z-scores are derived from comparison of the Verify3D and Procheck G-factor phi/psi scores of each model to the scores of a set of 252 X-ray crystal structures with resolution ≤1.80 Ang., R-factor ≤0.25 and Rfree ≤0.28. As with other Z-scores, values closer to 0 indicate models that better agree with the high quality X-ray structure dataset.
Diverse intein insertion sites in functional domains of the RadA/RecA proteins
To characterize the diversity and distribution of inteins in the archaeal RadA and bacterial RecA proteins, we performed computational mining ( Figure 1 and Table 1). In addition to 72 inteins reported in the intein database In-Base (53), we identified 26 new inteins. An interesting aspect of intein distribution is the exceptionally wide span of organisms containing RadA/RecA inteins, given only a few other examples of inteins present in orthologous genes in both archaea and bacteria (1). There are five distinct intein insertion points in RadA/RecA proteins, a-e, including two newly identified ones (54,55). The RadA/RecA inteins vary greatly between archaeal and bacterial species, based on sequence similarity and their insertion points in RadA/RecA proteins, suggestive of independent events of intein invasion ( Figure 1 and Table 1).
The intein insertion points are clustered in important functional domains, at the monomer-monomer interaction interface (sites a, b and d) and the ATP-binding site (sites c and e) ( Figure 1B). Interestingly, insertion point c exists in both archaeal RadA and bacterial RecA proteins (designated c1 and c2, respectively). This insertion point is located in the conserved active site P-loop of the ATP-binding domain ( Figure 1B), which is a hot-spot for inteins in different ATPase-containing proteins in various archaeal and bacterial species ( Figure 1A) (1). Although, the comparative analysis demonstrated a high degree of RadA and RecA extein sequence correspondence, there is little similarity between the RadA and RecA inteins at position c, suggesting independent invasion at the identical insertion point in archaea and bacteria ( Figure 1C). Given that this site appears to have been repeatedly targeted by different inteins in two domains of life, the small, well-characterized intein in the Pho RadA protein was selected as the focus of the present study.
Native exteins inhibit splicing as post-translational thermal switch of RadA activity
To test a potential relationship between intein catalysis and the nature of the extein, we made two precursors with the Pho RadA intein with foreign and native exteins. The construct with foreign exteins contains MBP and GFP flanking the intein, to form the MPB-Intein-GFP fusion (MIG-RadAi) (Figure 2A). The MIG-RadAi construct contains short native exteins (8-10 amino acid residues) as extein residues at the splice junctions can modulate splicing by affecting the electronic properties of intein active site (9)(10)(11). The construct with the native exteins contains the fulllength RadA protein (FL-RadAi) with the intein at its Ploop ( Figures 1B and 2B). Splicing of the RadA intein from both precursors was examined.
MIG-RadAi was overexpressed in E. coli and the cellular lysates were used to analyze the extent of MIG-RadAi processing in vivo. The protein bands of the MIG-RadAi precursor and the MG (MBP-GFP) splice product were visualized by GFP fluorescence in an SDS-PAGE gel, without boiling the extracts (Figure 2A). The MIG-RadAi precursor processed effectively in vivo upon expression at 15 and 25 °C for 3 h, with splicing in the 59-81% range. In vitro, the MIG-RadAi precursor recovered after induction at 15 °C was 94% spliced within 1 h at 25 °C (Figure 2A). These experiments indicate high splicing activity of the RadA intein from foreign exteins, consistent with a previous report (23).
In contrast to the MIG-RadAi precursor, the FL-RadAi precursor demonstrated compromised splicing activity in vivo (Figure 2B), with no detectable splicing in vitro at 25 °C over several days (data not shown). Interestingly, catalytic activity of the RadA intein in the context of native exteins was strongly dependent on temperature, showing an increase in splicing at 55 °C and reaching a maximum at 75-85 °C (Figure 2B; plotted in Figure 3B). The different activity of the RadA intein in the context of its native and foreign exteins suggests that the native exteins modulate performance of the intein in a temperature-dependent manner. Moreover, given that the amino acid residues immediately flanking the intein are identical in MIG-RadAi and FL-RadAi, remote residues must be mediating the regulatory extein effect, likely via extein-intein interactions that occur in 3D space.
Figure 2 caption (beginning truncated): ... (Table 2) and the recombinant protein was overexpressed in BL21 Star (DE3). The MIG-RadAi precursor (left) consists of the RadA intein (I; red) with short native exteins fused to MBP (dark gray) and GFP (green). The precursor (94 kDa) and the ligated exteins (LE; 74 kDa) were visualized by in-gel GFP fluorescence. More than 50% of precursor was spliced in vivo at 15 °C and >80% was spliced at 20 °C. The MIG-RadAi precursor recovered from a 15 °C induction spliced efficiently (94% splicing) in vitro at 25 °C within 1 h. (B) RadA intein splicing in the native exteins is inefficient at low temperatures, but efficient at high temperature. In the FL-RadAi precursor (left) the RadA intein (red) is flanked by its native exteins (N-Ext and C-Ext; gray). RadA intein splicing of the Ni-NTA purified FL-RadAi precursor was visualized in a Coomassie blue-stained gel. Accumulation of the spliced intein (I; 20 kDa) and the ligated exteins (LE; 42 kDa) and disappearance of the FL-RadAi precursor (62 kDa) were observed at high temperatures. A plot of the data from Figure 2B is shown in Figure 3B.
Extein-imposed inhibition of splicing can be modulated by solution environment
If thermal regulation of the RadA intein is mediated by contacts between the exteins and the intein, agents other than temperature might disrupt these extein-intein interactions to facilitate splicing. Denaturants, detergents and solvents are known to activate some enzymes by relaxing the rigidity of interactions at the active site (56). To test whether this might be the case, we subjected FL-RadAi to a Hampton Solubility and Stability Screen (see 'Materials and Methods' section), which includes a panel of 96 compounds that modulate protein solubility and stability. We determined that several compounds had modest effects (data not shown). Although 1.25% 1-butyl-3-methylimidazolium chloride, a water-miscible ionic liquid (57), and 0.5% SDS, a detergent, each had little effect on splicing of FL-RadAi upon incubation at 25 °C for 30 min, the two reagents together resulted in a sharp increase in splicing, to 90% (Figure 2C). Doubling the concentration of the ionic liquid allowed splicing to go almost to completion. We hypothesized that the ionic liquid in combination with the detergent acts by reducing the interactions between the native exteins and the intein. Thereby the solution environment is capable of releasing intein inhibition in the FL-RadAi precursor at low temperature, indicating that zymogen activation by splicing is also sensitive to non-thermal cues.
Thermal responses of structure and activities of FL-RadAi and RadA
To test whether the FL-RadAi precursor might undergo a secondary structure transition that could allow protein splicing, folding of the FL-RadAi precursor at different temperatures was characterized by CD spectroscopy. To eliminate changes that might reflect rearrangements during protein splicing, a catalytically inactive FL-RadAi-AA precursor was utilized, where AA designates C153A and N324A mutations at the N- and C-termini of the intein (Figure 1B). The inteinless RadA protein was used as a control to show secondary structure transitions in the ligated exteins, and RadA protein plus RadA intein in a 1:1 mixture (RadA + Intein) was used to mimic FL-RadAi in composition, alongside the RadA intein alone. Far-UV spectra of the proteins showed temperature-dependent changes in the 25-85 °C interval for the extein-containing species, FL-RadAi-AA, RadA and RadA + Intein. However, there were minimal changes associated with the RadA intein spectra. CD spectra of the samples containing the RadA exteins showed common temperature-dependent secondary structure rearrangements, similar to those reported for the Pyrobaculum islandium RadA (25). Given that the exteins have predominantly α-helical content (35) and the intein comprises mainly β strands (24), we followed changes in ellipticity around the middle of the spectrum in the 217-223 nm interval, which covers characteristic wavelengths for α-helices (222 nm) and β-sheets (218 nm) (Figure 3A and B, top panel). We observed an increase in slope at 55 °C for both the FL-RadAi-AA precursor and the RadA protein, albeit less pronounced for RadA (Figure 3B). This similarity suggests that temperature-dependent rearrangements in the FL-RadAi precursor are related to its extein component.
Considering the temperature dependence of Pho RadA intein splicing, it is interesting that the catalytic activity of hyperthermophilic RadA/Rad51 proteins is also regulated by temperature (25)(26)(27)(28). We therefore tested the temperature dependence of the ATPase activity of the Pho RadA protein. Similar to other RecA-like proteins, the inteinless Pho RadA shows ssDNA-dependent ATPase activity (25) (Figure 3B, bottom). The rate of ATP hydrolysis by RadA rises above temperatures of 65 °C. Unlike RadA, the splicing inactive FL-RadAi-AA precursor showed no catalytic activity at elevated temperatures (Figure 3C), consistent with the assumption that intein splicing is required for functional activity of the host protein.
Figure 3 caption: Temperature-dependent structure transition and ATPase activity of the Pho RadA protein and the precursor.
To further analyze the effect of temperature on the rate of ATP hydrolysis by the RadA protein, an Arrhenius plot was generated by measuring ATPase activity at temperatures in the 55 to 80 °C interval (Figure 3B, bottom, inset; Figure 3D). Pho RadA exhibits biphasic ATPase activity and an Arrhenius plot with a breakpoint at 76 °C, with two activation energies of 12.7 and 22.7 kJ/mol (Figure 3D). This breakpoint in activation energy is 20 °C higher than the increase in splicing activity. The breakpoint may therefore correspond to a local rearrangement of amino acid residues at the ATPase active site, similar to P. islandium RadA (25), rather than to global conformational changes. Given that P. horikoshii lives in the temperature range of 70-103 °C, the two distinct catalytic modes of RadA below and above the 76 °C breakpoint may reflect the state of the protein in vivo (25).
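For illustration only (a generic sketch rather than the authors' analysis code; the example temperatures and rates are invented), activation energies on either side of such a breakpoint can be estimated by fitting ln(rate) against 1/T separately in each regime, the slope giving −Ea/R:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(temps_celsius, rates):
    """Fit ln(rate) = ln(A) - Ea/(R*T) and return Ea in kJ/mol."""
    T = np.asarray(temps_celsius, dtype=float) + 273.15
    slope, _ = np.polyfit(1.0 / T, np.log(rates), 1)
    return -slope * R / 1000.0

# Hypothetical rates below and above a 76 degC breakpoint
ea_low = activation_energy([55, 60, 65, 70, 75], [1.0, 1.1, 1.2, 1.3, 1.4])
ea_high = activation_energy([76, 78, 80], [1.45, 1.6, 1.8])
```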
FL-RadAi precursor models suggest inhibitory extein-intein interactions
Considering the innate ability of the RadA intein to splice at low temperatures (23) (Figure 2A) and its temperaturedependent splicing from the native precursor, together with the similarity in FL-RadAi-AA and RadA temperaturedependent structure transitions ( Figure 3A and B, top panel), we suggest that the temperature sensitivity in the FL-RadAi precursor is attributable to its extein component. We also speculate that intein splicing is blocked by interactions with extein residues and that these interactions can be disrupted via conformational changes in the exteins induced by temperature or by the solvent environment, thereby releasing the intein from a locked, inactive state. We therefore wished to gain insight into these inhibitory interactions.
In the absence of structural data on the full-length inteincontaining protein precursor, the Pho FL-RadAi precursor ( Figure 4A) was modeled using the available structures of the Pho RadA intein (24) and the S. solfataricus RadA protein (35) (Figure 4B). Two assumptions were made in generating the model: first, the individual intein and extein fragments maintain their overall 3D structures in the precursor, consistent with the minimal differences in the CD spectra of the intact FL-RadAi precursor and its compositional mimic of RadA + Intein ( Figure 3B); and second, the covalent connectivity between the RadA intein and extein sequences imposes 3D constraints that are sufficient for a reasonable prediction of the intein and extein regions with respect to one another. A representative structure of the FL-RadAi precursor illustrates multiple features pertinent to splicing ( Figure 4B). First, a large amount of the internal structure can be maintained for both the RadA exteins and the intein in the precursor without steric clashes. Second, the RadA intein and the exteins share a large interaction surface and the intein is in direct contact with the C-terminal extein, which constitutes the main part of the ATPase core domain of the RadA protein. Finally, the catalytic residues of the in-tein are located on the extein-intein interface ( Figure 4C), revealing the possibility for 3D extein-intein interactions affecting intein catalysis. Conserved residues of the intein C153, H245, H312, H323 and N324 are oriented toward the RadA exteins ( Figure 4C). C153 corresponds to the first residue of intein (C1 in Figure 4A), which initiates splicing, whereas N324 is the terminal Asn of intein which cyclizes to release the intein from the C-terminal extein (6)(7)(8). The His residues are in conserved sequence blocks B, F and G, respectively ( Figure 4A), and they modulate the activity of the catalytic residues of the intein.
The proximity of the intein active site to the extein predicts which particular extein sidechains could affect splicing. For this analysis, the intein residues C153, H245, H312, H323 and N324 were used. Ten optimized precursor structures were generated to analyze the sensitivity of these interaction energies to model variations. Identification of exteinintein contacts was based on calculation of absolute values of interaction energies between each extein sidechain and the sidechains or the backbone of the individual intein catalytic residues. The extein-intein interactions with values >1 kcal/mol (positive or negative) were considered significant and shown in Figure 4D. Such interactions were observed with groups of extein residues that are remote from the splice sites in the precursor sequence, in helix 1, loop 1 and loop 2 ( Figure 4A), but close to catalytic residues of the intein in 3D space in the FL-RadAi precursor structure model ( Figure 4C). These residues belong to highly electrostatic secondary structure elements of the ATPase domain of RadA: helix 1 (D352-K367), loop 1 (R496-R503) and loop 2 (E524-D529) ( Figure 4A, C and D). Helix 1 residues interact with sidechains and backbones of H312, H323 and N324; especially strong interactions were identified between R358 and R361 and the sidechain of N324. Loop 1 residues K497, K498 and R503 interact with the sidechain of H245 and R503 has additional interactions with the sidechain of H323 and the backbone of N324. Loop 2 residues interact only with the sidechain of H323. Interestingly, most of the identified 3D extein-intein interactions involve residues H312, H323 and N324, responsible for C-terminal cleavage activity of the intein, while only weak interactions were observed with H245 and none with C153, the residues involved in N-terminal activity of intein. The strongest interactions identified in FL-RadAi were taken into account in extein mutant design to probe the basis of thermoregulation of protein splicing.
Design of extein mutants to establish interactions that control protein splicing
To test whether identified 3D interactions between the extein residues and the catalytic intein residues are responsible for the observed temperature dependence of RadA intein splicing from its native exteins, specific extein residues of the FL-RadAi precursor were mutated to alanine (Figure 5A and B). These mutations can be classified into three categories: extein-intein interaction mutants designed based on the interaction energies, RadA functional mutants based on residues involved in ATPase activity of RadA and control mutants based on the distance from the intein active site in the models. Twelve mutants were generated, as justified in Table 4 and illustrated in Figure 5A and B.
Among the extein-intein interaction mutants, the residues from the electrostatic helix 1, which show the strongest interactions with the intein, were given the most attention ( Figure 4D, marked with * and Figure 5C). Mutant 1 (M1) has R358, E360, R361, R363 and E364 residues of helix 1 changed to alanine; M2 has mutations in the residues R358 and R361 that have strong interactions with intein residue N324 ( Figure 5C); M3 has mutations in residues E360, R363 and E364 that interact with H312 ( Figure 5C); M4-M7 have single point mutations in R358, E360, R361 and E364 ( Figure 5 and Table 2). Additionally, R503 from loop 1 interacts with H245, H323 and N324 residues of the intein and was mutated to alanine in M8 ( Figure 5C).
Since the intein insertion is in the ATPase active site of the RadA protein, we also tested the effect of mutating residues that are involved in the ATPase activity of RadA. The effect of mutations in highly conserved residues E354 and Q465 that are proposed to coordinate the nucleophilic water molecule (58) were tested in mutants M9 and M10, respectively. The effect of mutation in K152, the residue directly involved in ATP hydrolysis, could not be studied here as K152 is also the terminal residue of the N-extein, a site that perturbs intein splicing (9,10). Interestingly, the ATP basestacking residue, R361 (58), came up in our computational screen as having one of the strongest interactions with the intein and is mutated in M1, M2 and M6. As controls, M11 and M12 have mutations in charged residues that are not expected to be in proximity to the catalytic intein residues. This range of mutants will probe the veracity of the intein-extein structure model and the basis of the environmental control of protein splicing.
Splicing of extein mutants supports three-dimensional model for extein-intein interactions
To probe extein-intein interactions, the intein splicing activity of the FL-RadAi extein mutants and of the isogenic wild-type FL-RadAi precursor (WT) was analyzed after overexpression and Ni-NTA purification. The single and multiple point mutants M6 and M1, respectively, have folding and temperature-dependent secondary structure transitions similar to those observed in WT FL-RadAi, as confirmed by CD analysis (data not shown). Modulation of intein splicing by the extein mutations was analyzed in vivo and in vitro (Figure 5D). The extent of in vivo splicing activity is reflected by the amount of spliced precursor at 0 min, corresponding to ∼20% for WT during induction at 46 °C (Figure 5D). Mutants M1, M2, M3 and M5 showed an increased level of in vivo splicing, exceeding 30%.
In vitro analysis of intein splicing was performed by incubation of the samples at elevated temperatures. The RadA intein spliced efficiently from all samples at 75 • C (data not shown). In contrast, at 55 • C, while WT splicing increased from 20 to 30% after 30 min and to >40% after 120 min, the extein mutations were found to produce dramatic effects on intein splicing at 55 • C, especially in the case of the extein-intein interaction mutants ( Figure 5D, cf M1-M8 with WT). M1, with multiple electrostatic residues in helix 1 mutated to alanine, showed the strongest phenotype, with 85 and 98% of precursor spliced after 30 min and 120 min of incubation, respectively. Although less dramatic, other multiple mutants, M2 and M3, and single point mutants M5-M7 also showed increased splicing activity compared to WT ( Figure 5D). Interestingly, M8, which is a loop 1 extein mutant, has inhibited splicing activity compared to WT, opposite to all helix 1 mutants. M6, with mutation of the R361 residue involved in ATP binding, was identified computationally by virtue of extein-intein interactions, and has the strongest splicing activation phenotypes among the single mutants. Other functional mutants in the ATPase domain, which were not identified by computational screen, such as M9 and M10 have only small deviations in splicing compared to WT. The control mutants, M11 and M12, showed negligible modulation of intein splicing, which is consistent with the charged residues in these mutants not interacting directly with the intein catalytic residues.
Mutants M1, M6 and M8 were selected for more detailed characterization of the kinetics of splicing at 55 °C (Figure 6A). Consistent with previous data, M1 had a significant amount of splicing in vivo, and spliced in vitro at least three times faster than WT, resulting in 98% splicing after 2 h at 55 °C. Also in corroboration with previous data, the single point mutants M6 (helix 1) and M8 (loop 1) spliced >80% and <20% after 2 h at 55 °C, resulting in 2.7-fold acceleration and 5-fold inhibition of intein splicing, respectively.
To further investigate the temperature dependence of M1, this mutant and WT were overexpressed at 12 °C to prevent M1 splicing in vivo. With recovery of more unspliced precursors (Figure 6B, T = 0), characterization of splicing was performed in vitro at 55, 45 and 35 °C (Figure 6B). At 55 °C, M1 spliced 10 times faster than WT, with almost complete precursor conversion after 120 min. At 45 °C, M1 spliced 19 times faster than WT, with >85% of precursor spliced within 120 min, while WT had virtually no activity. Even at 35 °C, with extremely low levels of activity, M1 spliced at five times the rate of WT.
Together these data show unequivocally that the RadA exteins can modulate intein splicing. Significant stimulation and inhibition of protein splicing were observed even with single point mutations in extein sequences remote from the intein in primary sequence, but predicted to be close to the intein catalytic center in 3D space. The most dramatic effects were found in the predicted extein-intein interaction mutants. These results support our 3D precursor model for extein-intein interactions that regulate splicing, while providing the basis for a post-translational environmental switch of RadA activity.
DISCUSSION
This work reports the discovery that splicing of the Pho RadA intein, located in the P-loop of the ATP-binding domain of the hyperthermophilic RadA protein, is regulated by its native exteins in a manner dependent upon temperature and solution environment. The extein effect on the Pho RadA intein stands out from previous cases of exteinderived modulation of splicing for three reasons. First, regulation is observed only in the native exteins. Second, exteinintein interactions occur via residues that are remote in primary sequence but proximal in 3D space. Finally, exteins serve as an environmental sensor to control splicing, providing a new form of post-translational control. Thereby the exteins impose a lock on splicing, which can be viewed as a cold-shock response that is released at high temperature, such that RadA is most active under the optimal growth conditions of the native organism (Figure 7).
P-loop is a hot spot for intein invasion
Recently, detailed characterization of genome organization and gene expression has resulted in paradigm shifts from viewing mobile elements as purely selfish and parasitic entities to considering their dynamic role in the evolution of species (59). Increased frequencies of intein invasion of particular types of proteins and especially of common sites in different proteins may inform us of potential benefits to the organism of such intein localization. Our attention was drawn to a previously reported hot-spot for intein invasion found in the conserved motif in a phosphate-binding loop of the ATPase domain, called the P-loop. Seven of 16 of the most common proteins with inteins have insertions in the P-loop (1). The attractiveness of the P-loop for intein insertions is poorly understood. It is possible, that the bias toward insertion into the P-loop arose from its sequence conservation and specificity of the homing endonucleases from different inteins (2). Another possibility is that intein invasion is quasi-random and inteins are retained in P-loops for some adaptive advantage to the organism (1), including their partnership with exteins. Given the diverse ATPases invaded by inteins, we propose below that this arrangement might allow modulation of ATP consumption under various conditions of stress.
Among the intein-containing proteins, the RecA family stands out as a favorite niche. Strikingly, the newly identified bacterial RecA intein, the first intein reported for E. coli, is at precisely the same insertion point as an archaeal RadA counterpart, in the P-loop of the ATP-binding domain ( Figure 1). Comparative analysis of the RadA and RecA extein and intein sequences suggested independent invasions at this insertion point in archaea and bacteria, given the conserved nature of the exteins containing the disparate inteins ( Figure 1) (1). In the present study we show that the activity of the RadA intein is regulated by its native exteins, suggesting that the intein insertion in the P-loop of the ATPbinding domain of some proteins can be of functional importance. Although we cannot rule out the possibility that the extein-intein partnership is a secondary adaptation, it is clearly valuable to view the intein as part of a complex sys-Nucleic Acids Research, 2015, Vol. 43, No. 13 6645 tem and to consider the nature of its host protein, the intein insertion site, the host species and its environment.
Post-translational regulation of RadA by superimposed mechanisms
RecA/RadA/Rad51 orthologs in bacteria, archaea and eukarya are ATPases that facilitate DNA strand exchange during homologous recombination, repair of double-strand DNA breaks and restart of stalled replication forks (19). In striking contrast to bacterial and eukaryotic proteins, the archaeal RadA gene is not induced by DNA damage caused by γ and UV irradiation and heat shock, suggesting that it might be constitutively expressed (60)(61)(62). It was proposed that the RadA protein might be regulated post-translationally (22).
Although the cellular environment in E. coli and P. horikoshii is different, it is widely accepted that hyperthermophilic proteins expressed in E. coli retain their native folding and biochemical properties (56). Considering the observed temperature dependence of RadA intein splicing, it is important to recognize that ATPase activity of the RadA protein itself is strongly regulated by temperature ( Figure 3) (25)(26)(27)(28). Most hyperthermophilic enzymes have optimal activity within the range of growth temperatures of the host organism, typically 70-125 • C. Thermal control of hyperthermophilic enzymes is often related to their higher rigidity at moderate temperatures and, rarely, with temperature induced conformational changes (56). An Arrhenius plot, which measures the effect of temperature on reaction rate, is linear for the majority of hyperthermophilic proteins, suggesting a uniform functional conformation with changes in temperature (56). However, a biphasic Arrhenius plot for some hyperthermophilic proteins (63-67) suggests functionally significant conformational changes (56), such as reported for RadA of some hyperthermophiles. These RadA proteins exhibit two catalytic modes above 70 • C related to rearrangement of hydrophobic amino acid residues near the ATPase active site (25,68). Pho RadA has similar temperature dependence of ATPase activity including the increase of ATPase activity at elevated temperatures (Figure 3B) and the breakpoint in Arrhenius plot at 76 • C (Figure 3D). Considering that below the optimal growth temperature archaea shut down replication and recombination, the existence of two catalytic modes of RadA was proposed to represent in vivo temperature dependent regulation of RadA function (25,68). The presence of the intein in Pho RadA and the independent control of splicing from ATPase activity by temperature and solution environment represent superimposed regulatory mechanisms on ATPase function that may safeguard against futile ATP consumption or recombination at unphysiologically low temperatures.
Native exteins as sensors that modulate intein splicing
This work shows that splicing of the RadA intein in the context of its native exteins is temperature dependent. In contrast, several hyperthermophilic inteins show thermally sensitive splicing in foreign exteins (5,(12)(13)(14)(16)(17)(18), a phenomenon that may be related to high intein rigidity at suboptimal temperature. Indeed, structural characterization of the hyperthermophilic temperature-dependent Pyrococcus abyssi PolII intein revealed that this intein has a significantly more rigid structure than that found in mesophilic inteins (69). In sharp contrast to these inteins with innate temperature dependence, the Pho RadA intein can readily splice from foreign exteins even at low temperature ( Figure 2A) (23), showing that RadA intein splicing per se is insensitive to temperature. However, splicing of the Pho RadA intein from its native precursor has pronounced temperature dependence, suggestive of a direct involvement of the exteins in the thermal properties of RadA intein splicing.
Regulation of RadA intein splicing is related to the temperature dependent changes in the secondary structure of the RadA exteins ( Figure 3A and B). Splicing begins at 55 • C, coincident with secondary structure rearrangements of RadA exteins of the FL-RadAi precursor. Splicing accelerates with increased temperature and plateaus above 75 • C. This leveling off is possibly due to rearrangements in the ATPase active site of RadA responsible for the Pho RadA transition between the two catalytic modes ( Figure 3D). Such modulation of FL-RadAi splicing observed only in the native extein context suggests that the RadA exteins can serve as a sensor that regulates intein activity through temperature-and solution-sensitive extein-intein interactions.
Extein-intein partnership in three-dimensional space for post-translational control of RadA activity
To understand the nature of the interactions involved in the extein-dependent modulation of intein activity, atomic details of the FL-RadAi precursor are needed. In the absence of a full extein-intein precursor structure, it was possible to derive a model using the independently determined structures of the intein and the extein, as the prediction is reduced to mostly the relative orientation of the two structures and the limited extein distortion to accommodate the intein insertion. This approach is legitimized by the similarity in secondary structure rearrangements of FL-RadAi precursor and RadA + Intein ( Figure 3A and B). We generated 10 Pho RadA precursor models and identified extein sidechains that could potentially affect splicing through their electrostatic and van der Waal's interactions with the intein catalytic residues, leading to a mutagenesis study that supported the model, as discussed further below. Interestingly, when we performed similar modeling for the newly identified bacterial E. coli RecA precursor, with intein insertion in its P-loop, we found that precursor models with similar extein-intein orientations as Pho RadA are not feasible due to steric clashes between the E. coli RecA intein and its C-extein (data not shown). Orientation of the disparate inteins within the highly conserved ATPase domains could be different for the archaeal hyperthermophilic RadA precursor and the bacterial mesophilic RecA precursors, suggestive of coevolution of intein and extein within each precursor, and precursor-specific functional adaptations.
Mutational analysis strongly supported the modeled interactions between the exteins and the intein. Reduced electrostatic interactions between helix 1 and the intein active site in M1, allow splicing in the temperature interval of 35-55 • C ( Figure 6B), suggesting that these helix 1-intein active site interactions are the primary source of the inhibition of splicing above 55 • C. Intein splicing has a profound sensitivity toward its exteins as even single point mutations, such as R361A (M6) and R503A (M8), cause strong stimulation and inhibition of protein splicing, respectively. Interestingly, all helix 1 mutations facilitate intein splicing, whereas the loop 1 mutation leads to complete inhibition of splicing, suggesting that different secondary structure elements of the RadA exteins may have divergent effects on intein activity. Such differential modulation of intein splicing by the exteins suggests the potential for extein-intein interaction to toggle intein activity in response to different stimuli.
The conserved residue R361 from helix 1, directly involved in ATP base stacking (58), has the strongest predicted interaction with intein catalytic residues and the strongest phenotype among the single point mutants. Importantly, the interactions between R361 and the intein may affect activity of both the intein and the exteins, by affecting splicing catalysis on one hand and preventing ATP binding to R361 in the functionally inactive FL-RadAi precursor on the other. A similar form of thermal modulation of a hyperthermophilic enzyme favoring activity at high temperatures was reported for the Pho acylphosphatase, where formation of a salt bridge with catalytic residues inhibits protein activity at low temperature and the removal of these interactions by mutagenesis releases the inhibition resulting in enzymatic activity of the protein at low temperature (70).
In addition to temperature and mutations, the inhibitory extein-intein interactions within FL-RadAi could be released in a defined solution environment. This environment was formed by the combination of the detergent SDS and the ionic liquid 1-butyl-3-methylimidazolium chloride, which together allow FL-RadAi splicing at low temperature. Similarly, detergents, denaturants and solvents can activate some enzymes by relaxing the rigidity of interactions at the active site (56). However, ATPase activity of the RadA protein at low temperature is not stimulated by the solution composition mentioned above (data not shown), suggesting independent modulation of splicing activity of the RadA precursor and ATPase activity of the RadA protein. Thus, the extein-intein interactions are sensitive to the nature of extein amino-acid side chains and environmental conditions, including temperature and solution composition, and allow independent regulation of intein splicing and ATPase activity per se. It would also be interesting to probe the effect of pressure on these interactions given that P. horikoshii is hyperbaric.
In conclusion, this work demonstrates a partnership between the RadA intein and the exteins in the P. horikoshii RadA precursor. This interaction between the two serves as a sensor for temperature and solution composition, modulating intein splicing. We propose that the intein serves as a transducer that permits extein function in response to environmentally induced interruption of extein-intein interactions. During this modulation, the intein transforms the inactive exteins into the active RadA protein, providing a novel mechanism of post-translational regulation of RadA function.
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online.
Assessing the neurocognitive correlates of resting brain entropy
The human brain exhibits large-scale spontaneous fluctuations that account for most of its total energy metabolism. Independent of any overt function, this immense ongoing activity likely creates or maintains a latent functional brain reserve that facilitates normal brain function. An important property of spontaneous brain activity is its long-range temporal coherence, which can be characterized by resting state fMRI-based brain entropy (BEN) mapping, a relatively new method that has gained increasing research interest. The purpose of this study was to leverage the large resting state fMRI and behavioral data publicly available from the Human Connectome Project to address three important but still open questions: the temporal stability of rsfMRI-derived BEN; the relationship of resting BEN to the latent functional reserve; and the associations of resting BEN with neurocognition. Our results showed that rsfMRI-derived BEN was highly stable across time; that resting BEN in the default mode network (DMN) and executive control network (ECN) was related to brain reserve, showing a negative correlation with education years; and that lower DMN/ECN BEN corresponded to higher fluid intelligence and better task performance. These results suggest that resting BEN is a temporally stable brain trait, and that BEN in DMN/ECN may provide a means to measure the latent functional reserve that bestows better brain functionality and may be enhanced by education.
Introduction
The human brain is a dynamic system with large-scale ongoing fluctuations. Understanding these fluctuations is essential to understanding the individual differences of brain function, functional anatomy, and the pathologies associated with neuropsychiatric conditions. Both theoretical models and neuroscience experiments have demonstrated a characteristic self-organized criticality of normal brain activity [1,2]. A crucial aspect of this criticality is the emergence of long range temporal correlations (LRTC), which have been shown to be fundamental to high-order brain functions such as memory, attention, perception, and coordination. Loss of temporal coherence may cause inter-neuronal and inter-regional dysconnections. Restoring these dysconnections and the related dysfunctions may require a restoration of LRTC of brain activity. In fact, a recent study has shown that enhancing brain activity coherence improved memory in older people [24]. Over the past decades fMRI, especially resting state fMRI, has been used predominantly to elucidate the potential importance of LRTC by focusing on slow fluctuations in fMRI time series and the intrinsic spatial modes that they define, for example, the default mode [25]. Rather than assessing only the slow fluctuations, we have proposed a method [26] to directly map whole-brain LRTC using a nonparametric entropy metric, the Sample Entropy [27,28]. This metric is based upon the entropy of measured hemodynamic states and considers dependency over time through temporal embedding and long-range similarity matching. In other words, this use of entropy reflects the statistical dependencies or order implicit in itinerant dynamics, expressed over extended periods of time. Brain entropy (BEN) mapping results based on resting state fMRI (rsfMRI) have been shown to be unrelated to regional perfusion and other rsfMRI measures in most parts of the brain cortex [29]. Resting BEN is reproducible across time and sensitive to various brain diseases and to focal neuromodulations [26,30-32]. While these data clearly demonstrate the potential of BEN, a direct measure of LRTC, as a unique brain signature for studying brain diseases or normal brain conditions, its neuro-mechanism remains unclear, as does how stable this relatively new brain measure is across time.
The purpose of this study was to address the above questions using the large rsfMRI and behavioral data from the human connectome project (HCP) [33]. It has been proposed that the role of resting state brain activity is to facilitate overt brain functions [25,34-36]. Although it is unclear how such facilitation works, this energy metabolism-costly process (resting state activity accounts for most brain energy metabolism [25,37]) may actually generate or maintain a brain functionality reserve, or equivalently a collection of various functional brain states.
Constantly shuffling among these latent states may act as a priming condition used to respond to upcoming familiar or novel events. Given the aforementioned important role of LRTC in brain activity, we hypothesized that LRTC of resting state brain activity as measured by BEN is related to the latent brain reserve, with lower BEN (meaning greater LRTC) indicating a bigger or stronger reserve. Because the latent reserve and resting state activity are both non-specific to any particular cognitive function, they should correlate with general cognitive capabilities and many functionalities. To test the potential role of resting BEN as an index of the latent functional reserve, we examined the correlations between resting BEN and education and fluid intelligence, which are associated with general functional capability and intelligence.
Education is a major indicator of cognitive reserve [38], a concept used to explain individual differences in brain resilience to neuropathology in Alzheimer's Disease. For young healthy individuals, education is known to be strongly correlated with general intelligence [39], suggesting it as a sensitive index for assessing the latent brain functionality reserve. Fluid intelligence is the capability for solving newly encountered problems for which learned and specialized skills provide little benefit [40]. Given the fact that fluid intelligence and the latent functional reserve are two general properties of the brain, it is reasonable to expect that they are correlated. For the same reason, we hypothesized that the latent reserve as measured by resting BEN is correlated with various functional task performances, showing non-specificity to a particular functional domain.
Resting BEN has been shown to be replicable across two different acquisition times [26], but the rsfMRI time series used were short (<130 timepoints), making it impossible to assess the temporal variations of BEN. By contrast, the HCP rsfMRI data have 1200 timepoints, providing sufficient data for assessing the dynamic information of BEN. We hypothesized that BEN is a stable brain trait presenting small variations across time.
In addition to the above questions, we re-examined the age effects of BEN that have been reported before using small samples [29,41].
Materials and Methods
rsfMRI data, demographic data, and behavior data from 862 healthy young subjects (age 22-37 yrs, male/female = 398/464) were downloaded from the HCP. Each subject had four resting scans acquired with the same multi-band sequence [42] but with different readout directions: readout was from left to right (LR) for the 1st and 3rd scans and right to left (RL) for the other two scans. The pre-processed rsfMRI data in the Montreal Neurological Institute (MNI) brain atlas space were downloaded from the HCP and were smoothed with a Gaussian filter with full-width-at-half-maximum = 6 mm to suppress the residual inter-subject brain structural differences after brain normalization and the artifacts introduced into the rsfMRI data by brain normalization. BEN mapping was performed with the BEN mapping toolbox (BENtbx) using the default settings [26]. To cope with the huge computation required to calculate BEN for the 4 x 862 long rsfMRI scans (each with 1200 timepoints), we implemented the BEN mapping algorithm in C++ using CUDA (the parallel computing programming platform created by Nvidia Inc). Four graphics processing unit (GPU) video cards were used to further accelerate the process. The entropy value was calculated as the Sample Entropy, the "logarithmic likelihood" that small sections of the data (within a window of length m) that "match" other sections will still "match" them if the section window length increases by 1 (see Fig. 1B). A "match" is defined by a threshold of r times the standard deviation of the entire time series. In this study, the window length m was set to three and the cut-off threshold r to 0.6 [26].
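As an illustration of the metric, here is a minimal NumPy sketch of a simplified Sample Entropy computation along the lines described above; it is not the BENtbx implementation, and the function name and vectorization strategy are ours, but the window length m = 3 and threshold r = 0.6 match the settings stated in the text.

```python
import numpy as np

def sample_entropy(x, m=3, r=0.6):
    """Simplified SampEn: -ln(A/B), where B counts pairs of length-m
    segments that match and A counts pairs of length-(m+1) segments
    that match; a 'match' is a Chebyshev distance below r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(w):
        emb = np.lib.stride_tricks.sliding_window_view(x, w)
        count = 0
        for i in range(len(emb) - 1):
            # Chebyshev distance from segment i to all later segments
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += int(np.sum(d < tol))
        return count

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```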
Mean BEN maps of the first LR and the first RL scans and of the second LR and the second RL scans were calculated for the following analyses. Age, sex, and education associations of resting BEN were assessed with simple regression using SPM (https://www.fil.ion.ucl.ac.uk/spm/). Associations of BEN with fluid intelligence (measured by the Penn Matrix Test [43]) and with functional task performance were similarly examined, but with age and sex included as nuisance covariates. Task performance was measured by the accuracy of button selection during the on-magnet fMRI-based working memory, language, and relational tasks [44]. The voxelwise significance threshold was set at p < 0.05, and multiple comparison correction was performed with family-wise error correction [45].
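The association analyses were run in SPM; purely as a sketch of the underlying per-voxel model, the following hypothetical helper fits an ordinary least squares regression of BEN on a predictor with nuisance covariates, assuming statsmodels is available.

```python
import numpy as np
import statsmodels.api as sm

def voxel_association(ben, predictor, covariates=()):
    """OLS of per-subject BEN values at one voxel on a predictor
    (e.g. fluid intelligence) with nuisance covariates (e.g. age, sex)."""
    X = sm.add_constant(np.column_stack([predictor, *covariates]))
    fit = sm.OLS(ben, X).fit()
    return fit.params[1], fit.pvalues[1]  # slope and p-value of the predictor
```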
To assess the temporal stability of BEN, we implemented a sliding-window based BEN mapping algorithm. Fig. 1 provides a schematic view of this new algorithm. Similar to the static BEN mapping, entropy calculation in dynamic BEN mapping is performed at each voxel separately. A time window with a length of L timepoints (L = 8 in Fig. 1) sliding from the beginning to the end of the original time series is used to extract a set of temporally overlapping sub-series, with one sub-series at each sliding position (Fig. 1A). For each sub-series, the regular SampEn calculation (Fig. 1B, 1C) is applied to get the entropy value at the corresponding sliding window position. "m" in Fig. 1B indicates the window length for the SampEn calculation. Fig. 1B.1 illustrates the process of finding the total number of matches among all possible embedding vectors (the temporal signal segments extracted by the smaller sliding window of length m). Fig. 1B.2 is a repetition of Fig. 1B.1 but with the embedding vector length increased by 1. The final SampEn value is then the natural logarithm of the ratio between the total number of matches at window length m and at window length m + 1 (Fig. 1C).
The length of the entire time series was 1200. Because BEN mapping using rsfMRI data with a length from 120 to 200 has been shown to provide reliable results, we chose 300 as the sliding window length to get reliable transient BEN estimates from each 300-timepoint rsfMRI sub-series.
Successive sub-series were gapped by 9 timepoints to reduce the total number of sub-series and thus the total computation burden. This gap was empirically set to 9 timepoints so that the interval was 9 TRs = 6.48 s, roughly the length of one hemodynamic response function cycle. Similar to the static BEN mapping mentioned above, we implemented the dynamic BEN mapping algorithm in C++ and the CUDA programming environment. GPU computing was used for finding the number of matched vectors for many voxels simultaneously. The number of voxels to be processed simultaneously was determined based on the available computation resources in the GPU card. Four Nvidia 1080Ti GPU cards were used.
After dynamic BEN mapping, each subject had a BEN image series. The mean, standard deviation (STD), and coefficient of variance (CV) of this BEN image series were calculated. For each of them, the average across the first LR and RL scans and across the second LR and RL scans was calculated. Statistical analyses similar to those mentioned above were performed to assess the potential associations of these maps with age, sex, and cognition.
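A companion sketch of the sliding-window scheme, reusing the sample_entropy helper above; the window length of 300, the gap of 9 timepoints, and the mean/STD/CV summaries follow the description in the text.

```python
def dynamic_ben(x, win=300, gap=9, m=3, r=0.6):
    """Sliding-window BEN series for one voxel's time course, plus the
    mean, STD, and coefficient of variance (CV = STD / mean) summaries."""
    starts = range(0, len(x) - win + 1, gap)
    ben = np.array([sample_entropy(x[s:s + win], m, r) for s in starts])
    return ben, ben.mean(), ben.std(), ben.std() / ben.mean()
```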
Results
The GPU-based implementation of BENtbx was 10-fold faster than the original version, but it still took roughly 10 days to calculate the static BEN maps (using all 1200 timepoints) for all 1023 subjects (only 862 had all 4 rsfMRI scans). The dynamic BEN mapping took about 40 days. Fig. 2 shows the mean BEN maps (2A-2D), mean STD maps (2E, 2F), and mean CV maps (2G, 2H) of the two sessions (each session containing an LR and an RL scan) of all 862 subjects.
The mean BEN maps from the static BEN mapping (shown in Fig. 2A and 2B) were very similar to those from the dynamic BEN maps (Fig. 2C and 2D), although the intensity differed due to the significant difference in time series length (1200 for the static BEN mapping vs 300 for the dynamic BEN mapping). Gray matter (GM) showed lower BEN than white matter (WM), and regions in the default mode network (DMN) had lower BEN than the rest of the brain; both findings are consistent with our previous study. Dynamic BEN showed inhomogeneous fluctuations across the brain, with higher fluctuations in WM, visual cortex, and motor cortex (Fig. 2E, 2F). Relative to the mean BEN, dynamic BEN showed very high temporal stability as measured by the CV (<0.032 in the whole brain; Fig. 2G, 2H; data with CV < 1 are often considered to have low variation).
We then assessed the effects of age and sex on BEN and its variations. We also examined the associations between cognition and BEN as well as its variations. The results from the BEN maps calculated from the entire 1200 timepoints and then averaged across the LR and RL scans were highly similar to those from the mean BEN maps of the dynamic BEN image series. Therefore, the results shown below were based on the static BEN mapping results. Also, because the results based on the mean of the first LR and the first RL rsfMRI scans and those based on the mean of the second LR and the second RL scans were very similar, we chose to show the results based on the mean BEN of the first LR and RL scans only.
Fig. 3 shows the association maps of BEN to age, sex, education years, and fluid intelligence.
Resting BEN was significantly correlated with age (Fig. 3A) in the prefrontal executive control network (ECN, consisting of the lateral prefrontal cortex, the posterior parietal cortex, the frontal eye fields, and part of the dorso-medial prefrontal cortex) and in the frontal-temporal-parietal DMN. Women showed higher BEN than men in the visual cortex, motor area, and part of the precuneus (Fig. 3B). Longer education was associated with decreased BEN in ECN and DMN (Fig. 3C). BEN in DMN and in part of ECN was negatively correlated with performance during working memory (Fig. 4A), language (Fig. 4B), and relational tasks (Fig. 4C), i.e. lower BEN corresponded to better performance. Temporal STD of BEN showed no significant age or sex effects and no significant correlations with education years, fluid intelligence, or task performance.
Discussion
In this study, we assessed the long-range temporal coherence of resting state brain activity using a large data set. Long-range coherence was measured by BEN. A sliding window-based dynamic BEN mapping method was implemented to examine the temporal fluctuations of BEN.
Our data showed that BEN was stable across the entire acquisition time, with only minor temporal variations, and these temporal variations did not show any significant correlation with age, sex, education, or neurocognitive measures. To understand the potential neuro-cognitive mechanism of resting BEN, we assessed the associations of BEN with biological and behavioral measures and found that BEN in DMN and ECN increases with age but decreases with years of education; that women had higher BEN than men in cortical areas; and that BEN in DMN and ECN was negatively correlated with fluid intelligence and with task performance for all of the assessed cognitive tasks.
The high temporal stability of resting BEN across many different timepoints is consistent with the high test-retest reproducibility of BEN shown in our previous study [26], further supporting BEN as a reliable brain metric. Our finding of a strong positive correlation between age and BEN in DMN and ECN is consistent with the results reported in [46] and provides additional evidence for the physical law-based brain entropy hypothesis, which states that brain entropy tends to increase with time in the normal adult brain due to progressive tissue aging and deterioration [47-51]. While this unfortunate entropy increase may seem unavoidable, our data also showed that longer education was correlated with lower resting BEN, suggesting a plausible way of reducing resting BEN through extended learning. Our previous study showed that beneficial focal stimulations via transcranial magnetic stimulation can reduce local BEN [52]. The education effects on resting BEN revealed in this paper further demonstrate that resting BEN is modifiable, which is of particular interest for future brain disease studies.
The finding of females having higher BEN than males was consistent with [41] and might be due to hormonal effects [53]. The sex difference of BEN in the visual and motor cortex may reflect the sex differences in visual and motion processing previously reported in [54].
Our BEN vs education and cognition association analyses unanimously highlighted DMN and ECN, which is consistent with the well-known phenomenon that DMN and ECN (also called the task positive network [55]) are two major brain circuits that are active both during task performance [55,56] and at rest [57,58]. Different from previous studies, our results suggest that both DMN and ECN are related to neurocognition through their resting BEN, independently of age, sex, and education, though all three factors did show significant effects on DMN and ECN BEN. In terms of long-range temporal coherence, DMN and ECN may actually represent a unified neural circuit underlying the general intelligence and general functionality of the brain.
Resting state brain activity has been postulated to be involved in maintaining and facilitating brain functions such as language, social interaction, and memory [25,59-61]. Our data directly support those postulations through the negative correlations between resting BEN and the task performance for three different functional tasks. Moreover, our data for the first time linked regional resting activity as measured by BEN to general intelligence as reflected by fluid intelligence and education years.
Regarding the correlation between resting brain activity and task activations, several fMRI studies have reported that brain activation during functional task performance can be predicted reliably from resting state fMRI based on regional inter-voxel correlations [62,63], the amplitude of low frequency fluctuations [64], and inter-regional functional connectivity [65-69]. Our study differs from these by assessing brain-behavior associations rather than a brain vs brain relationship. Several groups have reported correlations between resting state functional connectivity and task behavior or cognition [63,70-72].
Conclusion
In conclusion, the long rsfMRI time series from a large cohort of healthy subjects in the HCP proved that BEN is a temporally stable brain activity measure. Our data suggest that BEN in DMN/ECN can be used as a measure of a latent functional reserve that can be enhanced by education and may support better brain function.
Legends

Fig. 1. A scheme of the sliding window-based dynamic entropy calculation.

Fig. 3. The age, sex, and education effects on resting BEN as well as the associations of resting BEN with fluid intelligence.

Fig. 4. Resting BEN was negatively associated with task performance levels for A) working memory, B) language, and C) relational tasks.
Stochastic gradient methods for unconstrained optimization
This paper presents an overview of gradient-based methods for the minimization of noisy functions. It is assumed that the objective function is either observed with error terms of stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
Introduction
Stochastic optimization problems appear in all areas of engineering, physical and social sciences. Typical applications are model fitting, parameter estimation, experimental design, performance evaluation, etc. The models we are considering here can be written in the form $\min_{x \in \Omega} f(x)$, where $f : R^n \to R$ is either observed with noise or is defined as a mathematical expectation. In fact, the objective function depends on a vector of random variables $\xi$ from some probability space that might be known or unknown, depending on the application. Thus the exact evaluation of $f(x)$ is impossible and it is necessary to use simulation to estimate the objective function value. The feasible set $\Omega$ can be defined by constraints of different types - simple box constraints, deterministic constraints, chance constraints, constraints in the form of mathematical expectation, etc. In this paper we consider only the case $\Omega = R^n$. More precisely, we consider only two types of stochastic problems. The first type are problems with a random objective function, $\min_x F(x, \xi)$, where $\xi$ represents the noise (or randomness) and $x$ is the decision variable. Such models appear when the decision has to be taken before the full information about the problem parameters is known. The lack of information in that case is represented by the random vector $\xi$. The second type are problems with the mathematical expectation as the objective function, $\min_x E(F(x, \xi))$. Although the noise is technically removed in the latter problem, it is rather hard to solve, as the expectation is hard or even impossible to state analytically even if the distribution of $\xi$ is known. Methods for solving stochastic optimization problems combine ideas from numerical optimization and statistics. Thus the class of popular methods includes simulation-based methods, direct methods for stochastic search, annealing type algorithms, genetic algorithms, methods of reinforced learning, statistical methods and many others, [34], [7]. Among all of them we restrict our attention here to two methods typically used in simulation-based optimization: Stochastic Approximation, SA, and Sample Average Approximation, SAA.
Stochastic Approximation methods were introduced in the seminal paper of Robbins and Monro [30] and remain a popular choice for solving stochastic optimization problems. They rely mainly on noisy gradient evaluations and depend heavily on the choice of the steplength sequence. The choice of this sequence is the subject of many research efforts, as are other techniques for accelerating the convergence of SA methods. Sample Average Approximation methods can be seen as an alternative to SA methods. In this approach a sample from the underlying distribution is used to construct a deterministic sample average problem which can be solved by optimization methods. However, the sample used for the SAA approximation very often needs to be large, and a naive application of standard nonlinear optimization techniques is not feasible. Therefore there has been extensive research in variable sample size methods that reduce the cost of SAA.
Both SA and SAA methods are considered here in the framework of gradient-related optimization (gradient methods, subgradient methods, second order quasi-Newton methods) as well as in the derivative-free framework. This survey deals largely with gradient methods for stochastic optimization; an interested reader can consult Spall [34] for an overview of other methods. This paper is organized as follows. In Section 2 we discuss the SA method and its modifications. Section 3 contains results for the SAA methods and unconstrained problems with the mathematical expectation objective function. Two important deterministic problems that are rather similar to the SAA problem are discussed in Section 4, together with methods for obtaining their solution that rely on stochastic gradients. Finally, some conclusions and research perspectives are presented in Section 5.
Stochastic Approximation Methods
There have been countless applications of the Stochastic Approximation (SA) method since the work of Robbins and Monro [30]. In this section we give an overview of its main properties and some of its generalizations. The problem we consider is $\min_{x \in R^n} f(x)$, assuming that only noisy measurements of the function $f(x)$ and of its gradient, denoted $\hat g(x)$, are available. Let us start by considering the SA algorithm for solving systems of nonlinear equations as it was defined originally in [30]. The convergence theory presented here relies on the imposition of statistical conditions on the objective function and the noise. Convergence analysis can be conducted through differential equations as well; see [34] and [25] for further references.
Consider the system of nonlinear equations $g(x) = 0$ (4), with $g(x)$ being the gradient of $f(x)$. Suppose that only measurements with noise that depends on the iteration as well as on the decision variable $x$ are available, $\hat g_k(x) = g(x) + \xi_k(x)$ (5). Then the SA iteration is defined by $x_{k+1} = x_k - a_k \hat g_k(x_k)$ (6). The sequence of step sizes $\{a_k\}_{k \in N}$ is also called the gain sequence and it has a dominant influence on the convergence. Let $\{x_k\}$ be a sequence generated by an SA method. Denote by $F_k$ the $\sigma$-algebra generated by $x_0, x_1, \ldots, x_k$. If the problem has a unique solution $x^*$, the set of assumptions that ensures the convergence of an SA method is the following. S1. The gain sequence satisfies $a_k > 0$, $\lim_{k \to \infty} a_k = 0$, $\sum_{k=0}^{\infty} a_k = \infty$ and $\sum_{k=0}^{\infty} a_k^2 < \infty$. S2. For some symmetric, positive definite matrix $B$ and for every $\eta \in (0, 1)$, $\inf_{\eta < \|x - x^*\| < 1/\eta} (x - x^*)^T B g(x) > 0$. S3. For all $x$ and $k$, $E(\xi_k(x)) = 0$. S4. There exists a constant $c > 0$ such that for all $x$ and $k$, $\|g(x)\|^2 + E(\|\xi_k(x)\|^2) \le c(1 + \|x\|^2)$. The first assumption, which implies that the step sizes converge to zero, is standard in stochastic algorithms, see [34]. The second condition, $\sum_{k=0}^{\infty} a_k = \infty$, is imposed in order to avoid inefficiently small step sizes. On the other hand, the summability condition on $a_k^2$ ensures stability. Its role is to decrease the influence of the noise when the iterates come into a region around the solution. An example of a sequence that satisfies the first assumption is $a_k = a/(k+1)^{\alpha}$ (7), where $\alpha \in (0.5, 1]$ and $a$ is some positive constant. The condition of zero mean is also standard in stochastic optimization. Its implication is that $\hat g_k(x)$ is an unbiased estimator of $g(x)$. Notice that under assumption S3, the condition in S4 is equivalent to $E(\|\hat g_k(x)\|^2) \le c(1 + \|x\|^2)$. Therefore, the mean of $\|\hat g_k(x)\|^2$ cannot grow faster than a quadratic function of $x$. Under these assumptions, the almost sure convergence of the SA algorithm can be established. The convergence in mean square, i.e. $E(\|x_k - x^*\|^2) \to 0$ as $k \to \infty$, was proved in [30], and the theorem below states a stronger result, the almost sure convergence.
Theorem 2.1. [34] Consider the SA algorithm defined by (6). Suppose that assumptions S1-S4 hold and that $x^*$ is the unique solution of the system (4). Then $x_k$ converges almost surely to $x^*$.
A closely related and more general result is proved in Bertsekas, Tsitsiklis [6], where the gradient-related method of the form $x_{k+1} = x_k + a_k (s_k + \xi_k)$ (8) is considered. Here $\xi_k$ is either a stochastic or deterministic error, $a_k$ is a sequence of diminishing step sizes that satisfies assumption S1, and $s_k$ is a descent direction. In this context, the direction $s_k$ is not necessarily the gradient but it is gradient-related. The convergence is stated in the following theorem.
Theorem 2.2. [6] Let $\{x_k\}$ be a sequence generated by (8), where $s_k$ is a descent direction. Assume that S1 and S3-S4 hold and that there exist positive scalars $c_1$ and $c_2$ such that $c_1 \|\nabla f(x_k)\|^2 \le -\nabla f(x_k)^T s_k$ and $\|s_k\| \le c_2 (1 + \|\nabla f(x_k)\|)$. Then either $f(x_k) \to -\infty$ or $f(x_k)$ converges to a finite value and $\lim_{k \to \infty} \nabla f(x_k) = 0$. Furthermore, every limit point of $\{x_k\}$ is a stationary point of $f$.
The gain sequence is the key element of the SA method. It has impact on stability as well as on the convergence rate. Under some regularity conditions, Fabian [13], the asymptotic normality of $x_k$ is obtained. More precisely, $k^{\alpha/2}(x_k - x^*) \to_d N(0, \Sigma)$ as $k \to \infty$, where $\to_d$ denotes convergence in distribution, $\alpha$ refers to (7) and $\Sigma$ is the covariance matrix that depends on the gain sequence and on the Hessian of $f$. Therefore, the iterate $x_k$ approximately has the normal distribution $N(x^*, k^{-\alpha}\Sigma)$ for large $k$. Due to assumption S1, the maximal convergence rate is obtained for $\alpha = 1$. However, this reasoning is based on the asymptotic result. Since the algorithms are finite in practice, it is often desirable to set $\alpha < 1$ because $\alpha = 1$ yields smaller steps. Moreover, if we want to minimize $\|\Sigma\|$, the ideal sequence would be $a_k = \frac{1}{k+1} H(x^*)^{-1}$, where $H(x)$ denotes the Hessian matrix of $f$, Benveniste et al. [5]. Even though this result is purely theoretical, sometimes the Hessian at $x^*$ can be approximated by $H(x_k)$, and in that way one can enhance the rate of convergence. Two main drawbacks of the SA method are the slow convergence and the fact that the convergence theory applies only if the solution of (4) is unique, i.e. only if $f$ has a unique minimizer. Thus several generalizations were developed to address these two issues. One can easily see from (7) that the gain coefficients increase with the increase of $a$. On the other hand, a large $a$ might have a negative influence on stability. Therefore several generalizations of the gain coefficients are considered in the literature. One possibility, Spall [35], is to introduce the so-called stability constant $A > 0$, and obtain $a_k = a/(k+1+A)^{\alpha}$. Now the values of $a$ and $A$ can be chosen together to ensure effective practical performance of the algorithm, allowing for a larger $a$ and thus producing larger step sizes in later iterations, when the effect of $A$ is small, and avoiding instability in early iterations. The empirically recommended value of $A$ is at most 10% of the number of iterations allowed or expected during the optimization process; for more details see [35].
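As a concrete illustration, here is a minimal sketch of the SA recursion (6) with the generalized gain sequence $a/(k+1+A)^{\alpha}$; the test function and noise model in the usage example are ours.

```python
import numpy as np

def sa_minimize(noisy_grad, x0, a=1.0, A=0.0, alpha=0.75, iters=10_000):
    """Robbins-Monro SA: x_{k+1} = x_k - a_k * g_hat(x_k) with the gain
    a_k = a / (k + 1 + A)^alpha; A = 0 recovers the basic sequence (7)."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        x = x - a / (k + 1 + A) ** alpha * noisy_grad(x)
    return x

# Example: minimize f(x) = ||x||^2 / 2 from noisy gradients g(x) = x + noise.
rng = np.random.default_rng(0)
x_final = sa_minimize(lambda x: x + 0.1 * rng.standard_normal(x.shape),
                      x0=np.ones(3))
```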
Several generalizations of the SA method are based on adaptive step sizes that try to adjust the step size at each iteration to the progress achieved in the previous iterations. The first attempt of this kind was made in Kesten [20] for one dimensional problems. The main idea is to monitor the changes in the sign of $x_{k+1} - x_k$. If the sign of the difference between two consecutive iterations starts to change frequently, we are probably in the domain of noise and therefore small steps are needed to avoid oscillations. This idea is generalized in Delyon, Juditsky [11] for multidimensional problems. The gain coefficients there are driven by the number of sign changes of $\hat g_{k+1}^T \hat g_k$, counted with the identification function $I$ defined as $I(t) = 1$ if $t < 0$ and $I(t) = 0$ if $t \ge 0$; the step size decreases as this count grows. The method accelerates the convergence of SA and its almost sure convergence is proved under the standard assumptions. The idea of sign changes is further developed in Xu, Dai [38]. It is shown that the sequence $\{s_k/k\}$ stands for the change frequency of the sign of $\hat g_{k+1}^T \hat g_k$ in some sense. The assumption in [38] is that the noise $\xi_k$ is state independent. The theoretical analysis shows that in that case $s_k/k$ converges to $P(\xi_1^T \xi_2 < 0)$ in the mean square sense. Based on that result, a switching algorithm that uses a switching parameter $t_k$ derived from this frequency estimate is proposed. The gain coefficients are then defined to decay with exponent $\alpha$ before the switching time and with exponent $\beta$ afterwards, where $0.5 \le \alpha < \beta \le 1$, $v$ is a small positive constant entering the definition of $t_k$, and $a$, $A$ are the constants from assumption S1. To prove the convergence of the switching algorithm (9)-(10), one additional assumption is introduced in [38].
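The sign-change idea can be sketched as follows; this is a Kesten-style illustration rather than the exact scheme of [11] or [38].

```python
def sa_sign_change(noisy_grad, x0, a=1.0, alpha=0.75, iters=10_000):
    """SA whose gain shrinks only when consecutive gradient estimates
    oppose each other (a sign change suggests noise now dominates)."""
    x = np.asarray(x0, dtype=float)
    g_prev, s = None, 0
    for _ in range(iters):
        g = noisy_grad(x)
        if g_prev is not None and float(g @ g_prev) < 0:
            s += 1                 # identification function: I(t) = 1 for t < 0
        x = x - a / (s + 1) ** alpha * g
        g_prev = g
    return x
```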
S5. $G(x) = g(x) - x$ is a weighted maximum norm pseudo-contraction operator. That is, for all $x \in R^n$ there exist a positive vector $w$ and some $\beta \in [0, 1)$ such that $\|G(x) - x^*\|_w \le \beta \|x - x^*\|_w$, where the weighted maximum norm $\|\cdot\|_w$ is defined as $\|x\|_w = \max\{|x(i)| w(i)^{-1}, i = 1, \ldots, n\}$, and $x(i)$ and $w(i)$ are the $i$-th components of $x$ and $w$, respectively.

Theorem 2.3. [38] Suppose that assumptions S1-S2 and S5 hold. Then for $\{x_k\}$ generated through (9)-(10) we have $x_k \to x^*$ as $k \to \infty$ with probability 1.
If the objective function is given in the form of a mathematical expectation, the adaptive step length sequence can be determined as proposed in Yousefian et al. [36]. For the problem $\min_{x \in D} E(F(x, \xi))$ (11), the following assumptions are stated. S6. The function $F(\cdot, \xi)$ is convex on a closed and convex set $D \subset R^n$ for every $\xi \in \Omega$, and the expected value $E(F(x, \xi))$ is finite for every $x \in D$.
S7. The errors $\xi_k$ in the noisy gradient $\hat g_k$ are such that, for some $\mu > 0$, $E(\|\xi_k\|^2 \mid F_k) \le \mu^2$ almost surely for all $k$. A self-adaptive scheme is then based on the minimization of an error bound, and the convergence result is as follows.
Theorem 2.4. [36] Let assumptions S6 and S7 hold. Let the function $f$ be differentiable over the set $D$ with a Lipschitz continuous gradient, and assume that the optimal set of problem (11) is nonempty. Assume that the step size sequence $\{a_k\}$ is generated through the self-adaptive scheme $a_k = a_{k-1}(1 - c\, a_{k-1})$, where $c > 0$ is a scalar and the initial step size is such that $0 < a_0 < 1/c$. Then the sequence $\{x_k\}$ converges almost surely to a random point that belongs to the optimal set.
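The recursion can be tabulated directly; a tiny sketch, assuming the scheme is exactly the recursion reconstructed above.

```python
def self_adaptive_steps(a0, c, n):
    """Steps from a_k = a_{k-1} * (1 - c * a_{k-1}); with 0 < a0 < 1/c the
    sequence stays positive and decreases monotonically toward zero."""
    steps = [a0]
    for _ in range(n - 1):
        steps.append(steps[-1] * (1 - c * steps[-1]))
    return steps
```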
An important choice for the gain sequence is a constant sequence. Although such sequences do not satisfy assumption S1 and almost sure convergence to the solution cannot be obtained, it can be shown that a constant step size can conduct the iterations to a region that contains the solution. This result initiated the development of a cascading steplength SA scheme in [36], where a fixed step size is used until some neighborhood of the solution is reached. After that, in order to come closer to the solution, the step size is decreased and again a fixed step size is used until the ring around the solution is sufficiently tightened. That way, the sequence of iterates is guided towards the solution.
A hybrid method which combines the SA gain coefficients and the step sizes obtained from an inexact Armijo line search, under the assumptions valid for the SA method, is considered in Krejić et al. [24]. The method takes the advantages of both approaches: the safe convergence of the SA method, and the fast progress obtained by the line search when the current iterate is far away from the solution (where the SA steps would be unnecessarily small). The step size is defined according to the following rule. For a given $C > 0$ and a sequence $\{a_k\}$ that satisfies assumption S1, the step size $\alpha_k$ is defined as a safeguarded combination of the gain $a_k$ and the Armijo step $\beta_k$ (equation (12) in [24]), where $\beta_k$ is obtained from the Armijo inequality for the noisy function values. After that the new iteration is obtained as $x_{k+1} = x_k - \alpha_k \hat g_k(x_k)$ (13). The existence of $C$ such that the gain coefficient (12) is well defined, as well as the convergence of the sequence generated by (13), is proved in [24] under one additional assumption.
S8. The observation noise is bounded: there exists a positive constant $M$ such that $\|\xi_k(x)\| \le M$ a.s. for all $k$ and $x$.
Theorem 2.5. [24] Assume that assumptions S1-S4 and S8 hold, the gradient $g$ is Lipschitz continuous with constant $L$, and the Hessian matrix $H(x^*)$ exists and is nonsingular. Let $\{x_k\}$ be an infinite sequence generated by (13). Then $x_k \to x^*$ a.s.
Many important issues regarding the convergence of SA methods have not been mentioned so far. One effective possibility to speed up the convergence is to apply averaging to the sequence of gradient estimations $\hat g(x_k)$, as suggested in Andradottir [1]. It is shown that the rate of convergence can be significantly better than the rate of SA if two conditionally independent gradient estimations are generated and the new iteration is obtained using a scaled linear combination of the two gradients with the gain coefficient. More details on this procedure are available in [1]. Let us also mention a robust SA scheme that determines an optimal constant step length based on the minimization of the theoretical error for a pre-specified number of steps [27].
We have assumed in the above discussion that noisy gradient values are available. This is the case, for example, if the analytical expression of $F$ in (11) is available; then, under certain assumptions, we can interchange the expectation and the derivative, and thus a sample average approximation of the gradient can be calculated. It is important to be able to use a sample gradient estimation with a relatively modest sample size, as the calculation of the sample gradient is in general expensive for large samples. However, it is safe to claim that the analytical expression for the gradient is not available in many cases, and thus the only input data we have are (possibly noisy) function values. Thus gradient approximation with finite differences appears to be a natural choice in many applications. The first method of this kind is due to Kiefer, Wolfowitz [21]. Many generalizations and extensions were later considered in the literature; see Fu [15] for example. Among the many methods of this kind, the Simultaneous Perturbation method is particularly efficient, as it uses only two function values to obtain a good gradient approximation; see [35] for implementation details.
The questions of stopping criteria, global convergence, search directions which are not gradient related, and other important questions are beyond the scope of this paper. An interested reader might look at Spall [34] and Shapiro et al. [32] for guidance on these issues and the relevant literature.
Sample Average Approximation
Sample Average Approximation (SAA) is a widely used technique for approaching problems of the form $\min_x f(x)$ with $f(x) = E(F(x, \xi))$ (14). The basic idea is to approximate the objective function $f(x)$ with the sample mean $\hat f_N(x) = \frac{1}{N} \sum_{i=1}^{N} F(x, \xi_i)$ (15), where $N$ is the size of a sample represented by i.i.d. random vectors $\xi_1, \ldots, \xi_N$. Under standard assumptions, such as the finite variance of $F(x, \xi)$, the (strong) Law of Large Numbers implies that $\hat f_N(x)$ converges to $f(x)$ almost surely. Moreover, if $F(x, \xi)$ is dominated by an integrable function, then the uniform almost sure convergence of $\hat f_N(x)$ on compact subsets of $R^n$ is obtained. Within the SAA framework the original problem (14) is replaced by the approximate problem $\min_x \hat f_N(x)$ (16), and thus the key question is the relationship between their respective solutions as $N$ tends to infinity. Denote by $X^*$ the set of optimal solutions of problem (14) and let $f^*$ be the optimal value of the objective function. Furthermore, denote by $X^*_N$ and $\hat f^*_N$ the set of optimal solutions and the corresponding optimal value, respectively, of problem (16). Then the following result holds. Theorem 3.1. [32] Suppose that there exists a compact set $C \subset R^n$ such that $X^*$ is nonempty and $X^* \subset C$. Assume that the function $f$ is finite valued and continuous on $C$ and that $\hat f_N$ converges to $f$ almost surely, uniformly on $C$. Also, suppose that for $N$ large enough the set $X^*_N$ is nonempty and $X^*_N \subset C$. Then $\hat f^*_N \to f^*$ and the distance between the sets $X^*_N$ and $X^*$ tends to zero almost surely as $N \to \infty$.
Let $\hat x_N$ be an approximate solution of problem (14). Clearly $\hat f_N(\hat x_N)$ can be calculated for a given sample. The Central Limit Theorem can be used to obtain an error bound $c_N(\hat x_N)$ such that $f(\hat x_N) \le \hat f_N(\hat x_N) + c_N(\hat x_N)$ holds with some high probability $\delta \in (0, 1)$. For example, using the sample variance $\hat \sigma^2_N(x)$, one obtains the bound $\varepsilon^N_\delta(x) = z_\delta\, \hat \sigma_N(x)/\sqrt{N}$ (17), with $z_\delta$ being the corresponding quantile of the standard normal distribution. The error bound is directly proportional to the variance of the estimator $Var(\hat f_N(\hat x_N))$. Therefore, in order to provide a tight bound, one can consider techniques for reducing the variance such as quasi-Monte Carlo or Latin hypercube sampling [32]. However, these techniques tend to deteriorate the i.i.d. assumption. This issue is addressed further on in this section. The gap $g(\hat x_N) = f(\hat x_N) - f(x^*)$, where $x^*$ is a solution of the original problem, can be estimated as well. Clearly $g(\hat x_N) \ge 0$. To obtain an upper bound, suppose that $M$ independent samples of size $N$ are available, yielding the optimal values $\hat f^{*,m}_N$, $m = 1, \ldots, M$, of the corresponding SAA problems. Then an upper bound estimator for the gap is obtained by combining $\hat f_{N'}(\hat x_N)$, where $N'$ is some large enough sample size, with the average of the $\hat f^{*,m}_N$ and a confidence term involving $t_{M-1,\delta}$, the quantile of Student's distribution with $M - 1$ degrees of freedom. It should be mentioned that sample size bounds which guarantee that the solutions of an approximate problem are nearly optimal for the true problem with some high probability are in general too conservative for practical applications. For further references on this topic see [32], for instance.
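A small sketch of the two estimators used throughout this section, the sample mean (15) and the confidence-interval half-width (17); the function names are ours, and SciPy supplies the normal quantile.

```python
import numpy as np
from scipy.stats import norm

def saa_objective(F, x, xis):
    """SAA estimate f_N(x): the mean of F(x, xi) over the sample."""
    return float(np.mean([F(x, xi) for xi in xis]))

def lack_of_precision(F, x, xis, delta=0.95):
    """eps_delta^N(x) = z * sigma_hat / sqrt(N), the half-width of a
    delta-level confidence interval for f(x) around f_N(x)."""
    vals = np.array([F(x, xi) for xi in xis])
    z = norm.ppf(0.5 + delta / 2)        # two-sided quantile
    return z * vals.std(ddof=1) / np.sqrt(len(vals))
```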
Recall that the almost sure convergence of $\hat f_N(x)$ towards $f(x)$ is achieved if the sample is i.i.d., under standard assumptions. However, if the sample is not i.i.d., the almost sure convergence of $\hat f_N$ is achievable only if the sample size $N$ which defines the SAA problem grows at a certain rate, Homem-de-Mello [19]. The analysis presented in [19] allows for biased estimators $\hat f_N(x)$, provided $\hat f_N(x)$ is at least asymptotically unbiased. Let us first assume that the sample $\xi^k_1, \ldots, \xi^k_{N_k}$ generated at iteration $k$ is independent of the sample at the previous iteration for every $k$. The following assumptions are needed. R1. For each $x$, there exists $M(x) > 0$ such that $\sup_{i,k} |F(x, \xi^k_i)| \le M(x)$ with probability 1. R2. For each $x$, we have that $\lim_{k \to \infty} E(\hat f_{N_k}(x)) = f(x)$. Theorem 3.2. [19] Suppose that assumptions R1-R2 hold and that the sample size sequence $\{N_k\}$ satisfies $\sum_{k=1}^{\infty} \alpha^{N_k} < \infty$ for all $\alpha \in (0, 1)$. Then $\hat f_{N_k}(x)$ converges to $f(x)$ almost surely.
For example, $N_k \ge \sqrt{k}$ satisfies the previously stated summability condition. The rate of convergence is also addressed in [19], i.e. error bounds for $|\hat f_{N_k}(x) - f(x)|$ are developed. In the case where $N_k \ge c_1 k^{\rho}$ for some $c_1 > 0$ and $\rho > 2$, it can be proved under some additional assumptions that an explicit error bound on this difference holds almost surely for every $k$ sufficiently large; if the sample is cumulative, a corresponding error bound (18) with some positive constant $C$ is obtained (see [19] for the exact expressions).
The above analysis provides a justification for the SAA approximation as well as guidance for choosing $N$ in (16). From now on we concentrate on gradient methods for solving (16). Several papers exploit ideas from deterministic optimization. Generally speaking, we are interested in solving the SAA problem for some finite, possibly very large $N$, as well as in obtaining asymptotic results, i.e. results that cover the case $N \to \infty$, even if in practical applications one deals with a finite value of $N$. A naive application of an optimization solver to (16) is very often prohibitively costly if $N$ is large, due to the cost of calculating $\hat f_N(x)$ and its gradient. Thus there is a vast literature dealing with variable sample schemes for solving (16).
Two main approaches can be distinguished. In the first approach the objective function $\hat f_N$ is replaced with $\hat f_{N_k}(x)$ at each iteration $k$, and the iterative procedure is essentially a two step procedure of the following form. Given the current approximation $x_k$ and the sample size $N_k$, one has to find $s_k$ such that the value of $\hat f_{N_k}(x_k + s_k)$ is decreased. After that we set $x_{k+1} = x_k + s_k$ and choose a new sample size $N_{k+1}$. The key ingredient of this procedure is the choice of $N_{k+1}$. The schedule of sample sizes $\{N_k\}$ should be defined in such a way that either $N_k \to \infty$ or $N_k = N_{max}$ is eventually reached for a finite $N_{max}$ that ensures a sufficiently good approximation. The second approach, often called the diagonalization scheme or the surface response method, is again a two step procedure. It consists of a sequence of SAA problems with different sample sizes that are approximately solved. So for the current $x_k$ and $N_k$, the problem (16) with $N = N_k$ is approximately solved (within an inner loop) for $\hat x_{N_k}$, starting with $x_k$ as the initial approximation. After that we set $x_{k+1} = \hat x_{N_k}$ and choose the new sample size $N_{k+1}$. Two important points in this procedure are the choice of $N_{k+1}$ and the precision in solving each of the optimization problems $\min \hat f_{N_k}$.
Let us now look into algorithms of the first kind. Keeping in mind that $\min \hat f_{N_k}$ is just an approximation of the original problem and that the cost of each iteration depends on $N_k$, it is rather intuitive to start the optimization procedure with smaller samples and gradually increase the sample size $N_k$ as the solution is approached. Thus the most common schedule would be an increasing sequence $N_0, N_1, \ldots$. The convergence theory for this kind of reasoning is introduced in Wardi [40], where an Armijo type line search method is combined with the SAA approach. In order to solve the problem of type (14), the iterative sequence is generated as $x_{k+1} = x_k - \alpha_k \nabla \hat f_{N_k}(x_k)$ (19), where $N_k$ is the sample size used at iteration $k$ and $\alpha_k$ is the largest number in $(0, 1]$ satisfying the Armijo inequality $\hat f_{N_k}(x_k - \alpha_k \nabla \hat f_{N_k}(x_k)) \le \hat f_{N_k}(x_k) - \eta\, \alpha_k \|\nabla \hat f_{N_k}(x_k)\|^2$. The method is convergent with zero upper density [40], assuming that $N_k \to \infty$. More precisely, the following statement is proved. Theorem 3.3. [40] Assume that the function $f$ is given by (14) and that $F$ is twice continuously differentiable on $R^n$ for every $\xi$. Furthermore, assume that for every compact set $D \subset R^n$ there exists $K > 0$ bounding the values and the first and second derivatives of $F(\cdot, \xi)$ for every $x \in D$ and every $\xi$. If $N_k \to \infty$, then the sequence $\{x_k\}$ given by (19) converges with zero upper density on compact sets.
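A runnable sketch of this first-kind scheme, assuming an increasing schedule supplied by the caller; the backtracking loop approximates "the largest α in (0, 1]" from below.

```python
def variable_sample_gd(F, gradF, x0, sampler, schedule, eta=1e-4):
    """One Armijo step per iteration on the sample average built from
    N_k fresh draws, as in the Wardi-type scheme (19)."""
    x = np.asarray(x0, dtype=float)
    for N_k in schedule:                       # e.g. N_k increasing to infinity
        xis = sampler(N_k)
        f = lambda y: np.mean([F(y, xi) for xi in xis])
        g = np.mean([gradF(x, xi) for xi in xis], axis=0)
        alpha = 1.0
        while alpha > 1e-12 and f(x - alpha * g) > f(x) - eta * alpha * (g @ g):
            alpha /= 2                         # backtrack until Armijo holds
        x = x - alpha * g
    return x
```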
An extension of the above work is presented in Yan, Mukai [37], where an adaptive precision is proposed, i.e. the sequence $\{N_k\}_{k \in N}$ is not determined in advance as in [40] but is adapted during the iterative procedure. Nevertheless, the sample size has to satisfy $N_k \to \infty$. The convergence result is slightly stronger, as convergence with probability 1 is proved under a set of appropriate assumptions. A more general result that applies to both gradient and subgradient methods is obtained in Shapiro, Wardi [33], where convergence with probability 1 is proved for sample average gradient and subgradient methods, assuming that the sample size tends to infinity.
In practical applications, the sample size is finite. So let us now suppose that $N_{max}$ is a sample size which makes $\hat f_{N_{max}}$ a good approximation of the original objective function. Very often the sample is generated at the beginning of the process, which justifies considering the SAA objective function as deterministic. In this case one wishes again to decrease the cost of the optimization process by decreasing the number of function and gradient evaluations. Let us now look closer at the possible schedules of $N_k$. Clearly the sample size should be equal to $N_{max}$ at the final stages of the optimization procedure, to ensure that problem (16) with $N = N_{max}$ is solved. Thus one can consider even some heuristic schedule [14] to generate a non-decreasing sequence $\{N_k\}$ which eventually becomes stationary with $N_k = N_{max}$ for $k$ large enough. For example, a simple way to define such a sequence could be to increase $N_k$ by a fixed number every $K$ iterations.
The problem of scheduling can be approached from a different perspective in the following manner. Instead of constantly increasing the sample size, one could monitor the progress in decreasing the (approximate) objective function and choose the next sample size according to that progress. One algorithm of this kind is presented in Deng, Ferris [12], where the Bayes risk is used to decide the scheduling sequence within a trust region method. Another class of results in the framework of trust region methods is presented in Bastin [2] and Bastin et al. [3], [4]. The key point of the approach considered in [2,3,4] is that the sample sizes might oscillate during the iterative process, i.e. $\{N_k\}$ is not necessarily non-decreasing at the initial stages of the iterative process. Eventually $N_k = N_{max}$ is reached and (16) is solved, but very often at a smaller cost compared with an increasing scheduling. The efficiency of this approach comes from the balance between the precision of the objective function approximation $\hat f_{N_k}$ and the progress towards the solution. The same idea is further developed for line search methods in Krejić, Krklec [22] as follows.
Let us assume that the gradient $\nabla F$ is available and that the search direction $p_k$ satisfies $p_k^T \nabla \hat f_{N_k}(x_k) < 0$. The Armijo rule with $\eta \in (0, 1)$ is applied to find $\alpha_k$ such that $\hat f_{N_k}(x_k + \alpha_k p_k) \le \hat f_{N_k}(x_k) + \eta\, \alpha_k\, p_k^T \nabla \hat f_{N_k}(x_k)$. The sample size is updated as follows. First, the candidate sample size $N^+_k$ is determined by comparing the measure of decrease in the objective function, $dm_k = -\alpha_k\, p_k^T \nabla \hat f_{N_k}(x_k)$, with the so-called lack of precision $\varepsilon^N_\delta(x)$ defined by (17). The main idea is to find the sample size $N^+_k$ such that $dm_k \approx \varepsilon^{N^+_k}_\delta(x_k)$. The reasoning behind this idea is the following. If the decrease measure $dm_k$ is greater than the lack of precision, the current approximation is probably far away from the solution. In that case, there is no need to impose high precision and therefore the sample size is decreased if possible. The candidate sample size does not exceed $N_{max}$, but there is also a lower bound, i.e. $N^{min}_k \le N^+_k \le N_{max}$.
This lower bound $N^{min}_k$ is increased only if $N_{k+1} > N_k$ and there is not enough progress concerning the function $\hat f_{N_{k+1}}$. After finding the candidate sample size, a safeguard check is performed in order to prohibit a decrease of the sample size which might be unproductive. More precisely, if $N^+_k < N_k$, a parameter $\rho_k$ is calculated which compares the decrease achieved on $\hat f_{N^+_k}$ with the decrease achieved on $\hat f_{N_k}$. If $\rho_k$ is relatively small, then it is presumed that these two model functions are too different and thus there is no gain in decreasing the sample size, so $N_{k+1} = N_k$. In all other cases, the decrease is accepted and $N_{k+1} = N^+_k$. The convergence analysis relies on the following important result, which states that after some finite number of iterations the objective function becomes $\hat f_{N_{max}}$ and (16) is eventually solved. Theorem 3.4. [22] Suppose that $F(\cdot, \xi)$ is continuously differentiable and bounded from below for every $\xi$. Furthermore, suppose that there exists a positive constant $\kappa$ such that the lack of precision satisfies $\varepsilon^{N_k}_\delta(x_k) \ge \kappa$ for every $k$. Then there exists $q \in N$ such that $N_k = N_{max}$ for every $k \ge q$.
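A toy sketch of the balancing rule $dm_k \approx \varepsilon^N_\delta(x_k)$; eps_of_N is a hypothetical callable returning the lack of precision for a given sample size, and the linear scan is only illustrative.

```python
def candidate_sample_size(dm_k, eps_of_N, N_min, N_max):
    """Pick N_k^+ whose lack of precision is closest to the decrease
    measure: a large decrease tolerates a smaller (cheaper) sample,
    a small decrease demands a larger, more precise one."""
    return min(range(N_min, N_max + 1), key=lambda N: abs(eps_of_N(N) - dm_k))
```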
Let us now present some results for the second type of methods, the so-called diagonalization methods described above. One possibility for determining the sample sizes in the sequence of optimization problems to be solved is presented in Royset [31], where an optimality function is used to determine when to switch to a larger sample size. The optimality function is defined by a mapping $\theta : R^n \to (-\infty, 0]$ which, under standard conditions, satisfies $\theta(x) = 0$ if and only if $x$ is a solution in some sense. For the unconstrained problem, $\theta(x) = -\frac{1}{2}\|\nabla f(x)\|^2$ and its SAA approximation is given by $\theta_N(x) = -\frac{1}{2}\|\nabla \hat f_N(x)\|^2$. Under a set of standard assumptions, almost sure convergence of $\theta_N(x)$ towards $\theta(x)$ is established, together with asymptotic normality. Denote by $\hat x_{N_k}$ the iterate obtained after a finite number of iterations of an algorithm applied to the SAA problem with sample size $N_k$, with $\hat x_{N_{k-1}}$ as the initial point. The point $\hat x_{N_k}$ is an approximate solution of (16) with $N = N_k$, and it is assumed that the optimization algorithm used to determine that point is successful in the following sense. R3. For any $N_k$, every accumulation point $\hat x_{N_k}$ of the sequence generated by the optimization method for solving $\min_x \hat f_{N_k}(x)$ satisfies $\theta_{N_k}(\hat x_{N_k}) = 0$ almost surely. The algorithm proposed in [31] increases the sample size when $\theta_{N_k}(\hat x_{N_k}) \ge -\delta_1 \Delta(N_k)$, where $\delta_1$ is some positive constant and $\Delta$ is a function that maps $N$ into $(0, \infty)$ and satisfies $\lim_{N \to \infty} \Delta(N) = 0$. The sample size is assumed to be strictly increasing and unbounded, but the exact dynamics of the increase is not specified. The convergence of the algorithm is proved under one additional assumption. R4. On any given set $S \subset R^n$, the function $F(\cdot, \xi)$ is continuously differentiable, and $F(\cdot, \xi)$ and $\nabla_x F(\cdot, \xi)$ are dominated by an integrable function.
Theorem 3.5. [31] Suppose that assumptions R3-R4 are satisfied and that the sequence of iterates generated by the algorithm proposed in [31] is bounded. Then every accumulation point $\hat x$ of that sequence satisfies $\theta(\hat x) = 0$ almost surely.
The relation between the sample size and the error tolerance for each of the optimization problems solved within the diagonalization methods is considered in Pasupathy [28]. The error tolerance here is a small number $\varepsilon_k$ which almost surely satisfies $\|\hat x_{N_k} - x^*_{N_k}\| \le \varepsilon_k$, where $\hat x_{N_k}$ and $x^*_{N_k}$ represent the approximate and the true (unique) solution of the corresponding SAA problem, respectively. A measure of effectiveness $q_k$ is defined by relating the attained accuracy to $W_k$, the number of simulation calls needed to obtain the approximate solution $\hat x_{N_k}$. Since almost sure convergence is analyzed, it is assumed that $N_k \to \infty$ and $\varepsilon_k \to 0$. It is proved that the measure of effectiveness is bounded in a stochastic sense if the following three conditions hold. R5. If the numerical procedure used to solve the SAA problems exhibits linear convergence, we assume that $\liminf_{k \to \infty} \varepsilon_k N_{k-1} > 0$.
R6. If the numerical procedure used to solve the SAA problems exhibits polynomial convergence of order $p > 1$, we assume an analogous condition relating $\ln(1/\varepsilon_k)$ to the previous sample size; the precise statement of this condition, together with a third condition R7 on the sampling effort, is given in [28]. If any of the above conditions is violated, then $q_k$ tends to infinity in probability. The key point of the analysis in [28] is that the error tolerance should not be decreased faster than the sample size is increased. The dynamics of the change depends on the convergence rate of the numerical procedure used to solve the SAA problems. Moreover, the mean squared error analysis implies the choice of $\varepsilon_k$ and $N_k$ such that $0 < \limsup_{k \to \infty} \varepsilon_k^2 N_k < \infty$ (21). In order to further specify the choice of the optimal sequence of sample sizes, the following theorem is stated. Theorem 3.6. [28] Suppose that (21) holds together with assumptions R5-R7. If the numerical procedure used to solve the SAA problems exhibits linear convergence, then $\limsup_{k \to \infty} N_k/N_{k-1} < \infty$. If the numerical procedure used to solve the SAA problems exhibits polynomial convergence of order $p > 1$, then $\limsup_{k \to \infty} N_k/N^p_{k-1} < \infty$. More specific recommendations are given for linear, sublinear and polynomial rates in [28]. For example, if the applied algorithm is linearly convergent, then linear growth of the sample size is recommended, i.e. one can set $N_{k+1} = 1.1 N_k$ for example. Also, in that case, exponential or polynomial growth of order $p > 1$ is not recommended. However, if a polynomial rate of convergence of order $p > 1$ is achieved, then we can set $N_{k+1} = N_k^{1.1}$ or $N_{k+1} = e^{N_k^{1.1}}$, for instance. Furthermore, it is implied that the error tolerance sequence should be of the form $K/\sqrt{N_k}$, where $K$ is some positive constant. The diagonalization methods are defined for a finite $N$ as well. One possibility is presented in Polak, Royset [29], where the focus is on a finite sample size $N$ although almost sure convergence is addressed. The idea is to approximately solve a sequence of SAA problems with $N = N_k$, $k = 1, \ldots, s$, applying $n_k$ iterations at every stage $k$. The sample size is nondecreasing and the sample is assumed to be cumulative. The method consists of three phases. The first phase provides estimates of relevant parameters such as the sample variance. In the second phase, the scheduling sequence is obtained. Finally, the sequence of SAA problems is solved in the last phase.
An additional optimization problem is formulated and solved in the second phase in order to find the number $s$ of SAA problems to be solved, the sample sizes $N_k$, $k = 1, \ldots, s$, and the numbers of iterations $n_k$ applied to solve the corresponding SAA problems. The objective function of this additional problem is the overall cost $\sum_{k=1}^{s} n_k w(N_k)$, where $w(N)$ is the estimated cost of one iteration of the algorithm applied to the function $\hat f_N$; for example, $w(N) = N$. The constraint for this problem is motivated by the stopping criterion $f(x) - f^* \le \varepsilon (f(x_0) - f^*)$, where $f^*$ is the optimal value of the objective function. More precisely, the cost-to-go is defined as $e_k = f(x^k_{n_k}) - f^*$, where $x^k_{n_k}$ is the last iterate at stage $k$. Furthermore, an upper bound estimate for $e_s$ is determined as follows. Let $\Delta(N)$ be a function defined as in Royset [31]. One may use a bound like (18), but it is usually too conservative for practical implementations. Therefore, $\Delta(N)$ is estimated with a confidence interval bound of the form (17), where the variance is estimated in the first stage. The following bound is derived: $e_s \le e_0\, \theta^{l_0(s)} + 4 \sum_{k=1}^{s} \Delta(N_k)\, \theta^{l_k(s)}$, where $l_k(s)$ represents the remaining number of iterations after stage $k$ and $\theta$ defines the rate of convergence of the deterministic method applied to the SAA problems. The initial cost-to-go $e_0 = f(x^1_0) - f^*$ is also estimated in the first phase. Finally, the efficient strategy is obtained as the solution of the problem of minimizing the overall cost $\sum_{k=1}^{s} n_k w(N_k)$ subject to the above bound on $e_s$ not exceeding the prescribed tolerance. In order to prove the asymptotic result, the following assumption regarding the optimization method used at each stage is imposed. R9. The numerical procedure used to solve the SAA problems almost surely exhibits a linear rate of convergence with parameter $\theta \in (0, 1)$. Theorem 3.7. [29] Suppose that assumptions R8-R9 hold and that the sample size sequence tends to infinity. Then $\lim_{s \to \infty} e_s = 0$ almost surely.
Applications to deterministic problems
A number of important deterministic problems can be written in the form

min_x f(x) = (1/N) ∑_{i=1}^{N} f_i(x),        (22)

where the f_i(x) are given functions and N is a large integer. For example, least squares and maximum likelihood problems are of this form. The objective function in (22) and its gradient are generally expensive to compute if N is large. On the other hand, for a given sample realization ξ_1, ..., ξ_N and f_i(x) = F(x, ξ_i), the SAA problems discussed in Section 3 coincide with (22). Therefore, the SAA methods that deal with finite N can be used for solving the deterministic problems specified in (22). The main idea of this approach is to use the same reasoning as in the variable sample schemes to decrease the cost of calculating the objective function and its gradient, i.e. to approximate the function and the gradient with f̂_{N_k} and ∇f̂_{N_k}. One application of a variable sample method to the data fitting problem is presented in Krejić, Krklec Jerinkić [23]. In this section we consider two important problems, data fitting and machine learning, and methods for their solution that use a stochastic gradient approximation in the sense of the approximate gradient explained above.
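As an illustration of this idea, the sketch below approximates the objective and gradient of a least-squares problem of the form (22) using only a subsample of size N_k. The specific problem (linear least squares) and the sampling scheme are illustrative assumptions, not the particular method of [23].

```python
import numpy as np

def subsampled_value_and_grad(w, A, b, idx):
    """Approximate f and grad f of the least-squares objective
       f(w) = (1/N) * sum_i 0.5 * (a_i^T w - b_i)^2
    using only the rows listed in idx (a sample of size N_k)."""
    A_k, b_k = A[idx], b[idx]
    r = A_k @ w - b_k                    # residuals on the subsample
    f_k = 0.5 * np.mean(r ** 2)          # \hat f_{N_k}(w)
    g_k = A_k.T @ r / len(idx)           # \nabla \hat f_{N_k}(w)
    return f_k, g_k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, d = 10000, 5
    A = rng.normal(size=(N, d))
    w_true = rng.normal(size=d)
    b = A @ w_true + 0.01 * rng.normal(size=N)

    w = np.zeros(d)
    N_k = 200                              # current (small) sample size
    idx = rng.choice(N, size=N_k, replace=False)
    f_k, g_k = subsampled_value_and_grad(w, A, b, idx)
    print(f"subsampled objective: {f_k:.4f}, gradient norm: {np.linalg.norm(g_k):.4f}")
```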
The data fitting problem of the form (22) is considered in Friedlander, Schmidt [14]. The problem is solved by a quasi-Newton method, but a gradient approximation of the SAA type is used. Let the gradient estimate at the current iteration k be given as g_k = ∇f(x_k) + e_k, where e_k is the error term. The following assumptions are stated.

P1. The functions f_1, ..., f_N are continuously differentiable and the function f is strongly convex with parameter µ. Also, the gradient ∇f is Lipschitz continuous with parameter L.

P2. There are constants β_1 ≥ 0 and β_2 ≥ 1 such that ‖∇f_i(x)‖² ≤ β_1 + β_2 ‖∇f(x)‖² for all x and i = 1, ..., N.
The algorithm can be considered as an increasing sample size method where the sample size is bounded by N. The main issue in [14] is the rate of convergence, and the convergence analysis is carried out under the assumption of a constant step size. Two approaches are considered: deterministic and stochastic sampling. Deterministic sampling assumes that if the sample size is N_k, then the gradients to be evaluated, ∇f_i(x_k), i = 1, ..., N_k, are determined in advance; for example, the first N_k functions may be used to obtain the gradient approximation ∇f̂_{N_k}(x_k) = (1/N_k) ∑_{i=1}^{N_k} ∇f_i(x_k). On the other hand, stochastic sampling assumes that the gradients ∇f_i(x), i = 1, ..., N_k, to be evaluated are chosen randomly. We state the relevant results considering the R-linear rate of convergence. In the case of the deterministic gradient, q-linear convergence is also attained, but under stronger conditions on the increase of the sample size.

Theorem 4.1. [14] Suppose that the assumptions P1-P2 hold and that (N − N_k)/N = O(γ^{k/2}) for some γ ∈ (0, 1). Then for any ε > 0 and σ = max{γ, 1 − µ/L} + ε, in the deterministic case the optimality gap decreases R-linearly, i.e. f(x_k) − f* = O(σ^k) for every k.

Machine learning applications, which usually involve a large number of training points, can also be viewed as problems of the form (22). Methods for solving such problems are the subject of Byrd et al. [8] and Byrd et al. [9]. The main idea in [8] is to create methods which use second order derivative information but with a cost comparable to the steepest descent method. The focus is on using cheap Hessian approximations ∇²f̂_{S_k}(x_k), where S_k is the number of training points used for the Hessian, i.e. the Hessian-related sample size at iteration k. More precisely, a matrix-free conjugate gradient method is applied in order to obtain the search direction p_k as an approximate solution of the system ∇²f̂_{S_k}(x_k) p = −∇f̂_{N_k}(x_k). Here, N_k is the sample size related to the gradient and the function approximation. This can be considered as an inexact Newton method. In the relevant examples, p_k is guaranteed to be a descent direction and therefore the Armijo line search is applied. The proposed method (named S-Newton) does not specify the dynamics of changing the sample sizes. It only requires that the variable sample strategy is used and S_k < N_k. The analysis is conducted for the full gradient case.
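To make the S-Newton construction concrete, the sketch below forms the search direction by approximately solving the subsampled-Hessian Newton system with conjugate gradients; the quadratic test objective, the sampling scheme and the CG tolerance are illustrative assumptions rather than details fixed by [8].

```python
import numpy as np

def hess_vec(A_S, v):
    """Hessian-vector product for the least-squares objective
    f(w) = (1/N) sum_i 0.5*(a_i^T w - b_i)^2, estimated on subsample A_S."""
    return A_S.T @ (A_S @ v) / A_S.shape[0]

def cg(hv, rhs, tol=1e-6, max_iter=50):
    """Matrix-free conjugate gradient for hv(p) = rhs."""
    p = np.zeros_like(rhs)
    r = rhs - hv(p)
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hd = hv(d)
        alpha = rs / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, d = 5000, 10
    A = rng.normal(size=(N, d))
    b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
    w = np.zeros(d)

    N_k, S_k = 1000, 100                                   # gradient and Hessian sample sizes, S_k < N_k
    grad_idx = rng.choice(N, N_k, replace=False)
    hess_idx = rng.choice(grad_idx, S_k, replace=False)    # Hessian sample nested in the gradient sample

    g = A[grad_idx].T @ (A[grad_idx] @ w - b[grad_idx]) / N_k
    p = cg(lambda v: hess_vec(A[hess_idx], v), -g)         # Newton-CG search direction
    print("descent check <g, p> < 0:", g @ p < 0)
```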
Theorem 4.2. [8] Suppose that the function f̂_N is twice continuously differentiable and uniformly convex and that there exists a constant γ > 0 such that x^T ∇²f̂_{S_k}(x_k) x ≥ γ‖x‖² for every k and x. Then the sequence generated by the S-Newton method with N_k = N satisfies lim_{k→∞} ‖∇f̂_N(x_k)‖ = 0.
The same result can be obtained for the so-called SLM method, which uses a matrix-free limited memory BFGS method. In that case, the conjugate gradient method is used to obtain the search direction, where the sub-sampled Hessian approximation ∇²f̂_{S_k}(x_k) is used for the initial matrix-vector product at every iteration. The line search uses the Wolfe conditions for choosing a suitable step size.
The dynamics of increasing the sample size in the machine learning problem is addressed in [9]. The main idea is to estimate the sample size which makes the search direction p_k a descent direction for the objective function f̂_N without evaluating the true gradient ∇f̂_N(x). The approximation of the negative gradient, p_k = −∇f̂_{N_k}(x_k), is a descent direction if, for some θ ∈ [0, 1], the following inequality holds

‖∇f̂_{N_k}(x_k) − ∇f̂_N(x_k)‖ ≤ θ ‖∇f̂_{N_k}(x_k)‖.        (23)

Since E[‖∇f̂_N(x_k) − ∇f̂_{N_k}(x_k)‖²] = ‖Var(∇f̂_{N_k}(x_k))‖_1 and N is large, inequality (23) is approximated by

‖σ̂²_{N_k}(x_k)‖_1 / N_k ≤ θ² ‖∇f̂_{N_k}(x_k)‖²,        (24)

where σ̂²_{N_k} is the sample variance related to the chosen sample of size N_k. The algorithm for the sample size schedule proposed in [9] can be described as follows. After finding a step size α_k such that f̂_{N_k}(x_k + α_k p_k) < f̂_{N_k}(x_k) and setting x_{k+1} = x_k + α_k p_k, a new sample of the same size N_k is chosen. If inequality (24) holds for the new sample, the sample size remains unchanged, i.e. N_{k+1} = N_k. Otherwise, the sample is augmented, and the new sample size is chosen large enough that (24) is satisfied, roughly N_{k+1} ≈ ‖σ̂²_{N_k}(x_k)‖_1 / (θ² ‖∇f̂_{N_k}(x_k)‖²) (the exact rule is given in [9]).
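The following sketch implements this variance test for a finite-sum least-squares problem: it checks the approximated descent condition (24) on a fresh sample and, when the test fails, enlarges the sample. The resizing rule, the fixed-step update and the test problem are illustrative assumptions in the spirit of [9], not its exact algorithm.

```python
import numpy as np

def per_example_grads(w, A, b, idx):
    """Per-example gradients of f_i(w) = 0.5*(a_i^T w - b_i)^2 on the subsample idx."""
    A_k, b_k = A[idx], b[idx]
    return A_k * (A_k @ w - b_k)[:, None]          # shape (N_k, d)

def sample_size_test(w, A, b, idx, theta=0.5):
    """Return (test_passed, suggested_sample_size) based on the variance test (24)."""
    G = per_example_grads(w, A, b, idx)
    g = G.mean(axis=0)                             # subsampled gradient
    var_sum = G.var(axis=0, ddof=1).sum()          # ||sample variance||_1
    lhs = var_sum / len(idx)
    rhs = (theta ** 2) * (g @ g)
    passed = lhs <= rhs
    # assumed resizing rule: smallest size that would satisfy the test
    suggested = int(np.ceil(var_sum / max(rhs, 1e-12)))
    return passed, max(suggested, len(idx))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N, d = 20000, 8
    A = rng.normal(size=(N, d))
    b = A @ rng.normal(size=d) + 0.5 * rng.normal(size=N)

    w, N_k = np.zeros(d), 100
    for it in range(5):
        idx = rng.choice(N, N_k, replace=False)
        g = per_example_grads(w, A, b, idx).mean(axis=0)
        w -= 0.1 * g                               # simple fixed-step update (line search omitted)
        ok, new_size = sample_size_test(w, A, b, rng.choice(N, N_k, replace=False))
        if not ok:
            N_k = min(new_size, N)                 # augment the sample
        print(f"iter {it}: N_k = {N_k}, ||g|| = {np.linalg.norm(g):.4f}")
```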
In order to conduct the complexity analysis, a constant step size is considered and the q-linear convergence rate is analyzed.

Theorem 4.3. [9] Suppose that the function f̂_N is twice continuously differentiable and x* is a solution of the problem (22) with f̂_N(x*) = 0. Furthermore, assume that there are constants 0 < λ < L such that λ‖h‖² ≤ h^T ∇²f̂_N(x) h ≤ L‖h‖² for all x and h. Let the sequence of iterates be generated as described above with a suitable constant step size; then the sequence converges q-linearly (the precise statement and constants are given in [9]).

The schedule {N_k} for the gradient estimations is extended to the second order approximations to define a Newton-type method, the S-Newton method defined in [8]. This method uses the updating of N_k described above, while the Hessian-related sample size S_k follows the dynamics of N_k. More precisely, S_k = R·N_k, where R is some positive number substantially smaller than 1. Also, the sample used for the Hessian approximation is assumed to be a subset of the sample used for the gradient and the function approximations. The stopping criterion for the conjugate gradient method used for obtaining the search direction is more complex than in [8] since it is related to the sample size. The Wolfe conditions are imposed to obtain a suitable step size.
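A minimal sketch of how the two sample sizes can be coupled, assuming the gradient sample is updated by the variance test above and the Hessian sample is drawn as a nested subset with S_k = R·N_k (R = 0.1 here is an arbitrary illustrative choice):

```python
import numpy as np

def coupled_hessian_size(N_k, R=0.1):
    """Hessian-related sample size follows the gradient sample size: S_k = ceil(R * N_k)."""
    return max(1, int(np.ceil(R * N_k)))

def draw_coupled_samples(rng, N, N_k, R=0.1):
    """Draw the gradient sample and a Hessian sample nested inside it."""
    grad_idx = rng.choice(N, size=min(N_k, N), replace=False)
    S_k = coupled_hessian_size(len(grad_idx), R)
    hess_idx = rng.choice(grad_idx, size=S_k, replace=False)
    return grad_idx, hess_idx

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    for N_k in [100, 220, 484]:              # e.g. a growing gradient sample
        g_idx, h_idx = draw_coupled_samples(rng, N=10000, N_k=N_k)
        print(f"N_k = {len(g_idx):4d} -> S_k = {len(h_idx):3d}")
```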
Conclusions
In this survey we considered unconstrained problems with a stochastic or expectation objective function. We focused our attention on two specific classes of gradient-related methods, Stochastic Approximation and Sample Average Approximation; many other important approaches are left out for the sake of brevity. An interested reader should consult [32, 34] for initial guidance into stochastic optimization problems. Several natural extensions are easily incorporated into the framework considered in this paper, for example search directions with second order information, which usually yield faster convergence but also require additional cost [8, 9, 10]. In order to decrease the linear algebra costs, one can consider preconditioners, although their construction might be a nontrivial issue due to the presence of the random variable. On the other hand, given that there are many problems yielding only input-output information, an interesting approach within the SA and SAA frameworks is based on zero order information [17]. Constrained problems are always of great interest, and some recent research on penalty methods within the SA methodology is presented in [39]. Projection and filter methods with variable sample size might be a valuable topic of future research. Among others, chance constrained problems are especially challenging. For all the considered methods, deriving complexity bounds would be of great interest from a practical point of view.
Missing call bias in high-throughput genotyping
Background The advent of high-throughput and cost-effective genotyping platforms made genome-wide association (GWA) studies a reality. While the primary focus has been on reducing genotyping error, the problems associated with missing calls have been largely overlooked. Results To probe the effect of missing calls on GWAs, we demonstrated experimentally the prevalence and severity of the problem of missing call bias (MCB) in four genotyping technologies (Affymetrix 500 K SNP array, SNPstream, TaqMan, and Illumina Beadlab). Subsequently, we showed theoretically that MCB leads to biased conclusions in the subsequent analyses, including estimation of allele/genotype frequencies, the assessment of HWE, and association tests under various modes of inheritance. We showed that MCB usually leads to power loss in association tests, and that this power change is greater than what would result from an equivalent unbiased reduction of sample size. We also compared the bias in allele frequency estimation and in association tests introduced by MCB with that introduced by genotyping errors. Our results illustrated that in most cases the bias can be greatly reduced by increasing the call-rate at the cost of the genotyping error rate. Conclusion The commonly used 'no-call' procedure for observations of borderline quality should be modified. If the objective is to minimize the bias, the cut-off for call-rate and that for genotyping error rate should be properly coupled in GWA. We suggest that the ongoing QC cut-off for call-rate should be increased, while the cut-off for genotyping error rate can be relaxed appropriately.
Background
Driven by the common disease-common variant (CDCV) hypothesis [1], genome-wide association (GWA) studies have demonstrated their power in the identification of genetic variants underlying diseases [2][3][4][5]. The completion of the human genome sequence [6,7] and the International HapMap Project [8][9][10], as well as the advent of highly efficient and affordable genotyping technologies, made GWA within reach. The Phase II HapMap contains more than 4.3 million common SNPs, and the coverage is estimated to capture 94% of common variation in CEU and CHB+JPT and 81% in YRI with r² ≥ 0.8 [9]. Several high-throughput and cost-effective genotyping technologies are currently in use: the TaqMan assay [11] and GeneChip array [12] (based on hybridization with allele-specific probes), the SNPstream system [13] and GoldenGate assay [14] (based on single nucleotide primer extension), the Invader assay [15] (based on enzymatic cleavage), and SNiPer [16] (based on oligonucleotide ligation). Though different reaction mechanisms are employed in the different methods, fluorescence detection is widely used in the process of specific allele detection.
To deal with the abundant genotype data produced by various genotyping platforms, quality control (QC) to ensure the accuracy of allele calls becomes a critical issue. When genotyping errors occur, their effects on linkage analysis [17][18][19], LD measures [20,21], tagging SNP selection [22] and the subsequent association tests [22][23][24] have been widely and carefully investigated. Various strategies for detecting genotyping errors or removing their effects on analyses, especially on linkage analysis, have been proposed [25][26][27][28]. In addition to genotyping errors, missing calls seem to be abundant in high-throughput genotyping. For example, in the Phase I HapMap data, less than 20% of the data that failed to pass QC were due to genotyping error (>1 duplicate inconsistency or >1 Mendelian error), while more than 65% of the markers showed missing data in over 20% of individuals [9]. The presence of missing calls was even more prominent in the Phase II HapMap data [8]. However, the effect of missing calls on the subsequent analyses has been largely ignored.
Strong emphasis on the accuracy of allele calls, and the technical success achieved in that respect, has left the effect of missing calls largely overlooked. It was suggested that a 'no-call' procedure should be adopted, in which observations of borderline quality are removed from allele calls in order to keep the genotyping error rate as low as possible [29]. This 'no-call' principle has become common practice in genotyping procedures. However, it should be noted that the validity of the 'no-call' principle relies on an implicit assumption that genotype frequencies in no-call individuals are equal to those in the population. Under this hypothesis, missing data from the no-call procedure simply lead to a power loss due to a decreased sample size, and do not affect the estimation of allele frequencies at all. In this report, we started with a close examination of the validity of the 'no-call' principle by regenotyping those individuals whose genotypes could not be unequivocally determined and therefore would otherwise have been discarded. The objectives of this report are (1) to demonstrate experimentally how widely and seriously the problem of missing call bias (MCB) exists, (2) to investigate theoretically the effects of MCB on the subsequent analyses, especially on association studies, and (3) to provide suggestions on dealing with observations of borderline quality and to re-evaluate the current QC standards, by comparing the effects of MCB and genotyping errors on allele frequency estimation and association studies.
Results
There are two major causes of missing calls. One is poor quality of DNA samples, which often fail to be amplified and to generate fluorescence signal intensities strong enough over the background. The other arises when an observation, i.e., a read-out of fluorescence signals, cannot be assigned unequivocally to any of the genotype clusters and is therefore subjected to the 'no-call' procedure. In this report, we mainly focus on the missing calls due to the failure to be assigned to any genotype cluster.
Nature of no-calls: results of sequencing
To evaluate the nature of no-calls in practice, four widely-used high-throughput genotyping platforms were included in this study: the GenomeLab™ SNPstream Genotyping System (Beckman Coulter, Los Angeles), the BeadLab SNP Genotyping System (Illumina, San Diego), TaqMan® SNP Genotyping Assays (ABI, Foster City) and the GeneChip® Human Mapping 500 K Array Set (Affymetrix, Santa Clara). Eight SNPs were selected and subjected to regenotyping of equivocal observations (no-calls) through sequencing. The criteria for the selection of SNPs and samples for sequencing are presented in Methods.
The genotype distribution of the observed data at each locus, produced by the respective genotyping technology, was compared with that of the no-calls obtained by sequencing (Table 1). Statistically significant differences were observed for SNPstream, Illumina and GeneChip 500 K, indicating that MCB indeed exists in widely-used genotyping technologies and would lead to a biased estimation of allele/genotype frequencies. The genotype-specific call-rates c_i (i = AA, Aa, aa) were calculated; most were above 0.95, but they could be as low as 0.75 for GeneChip 500 K.
In the subsequent sections, in order to explore the effects introduced by MCB, we proposed a model to investigate the nature of no-calls. Typically an equivocal observation occurs as follows (Fig. 1). For the data points that lie between the cluster of homozygotes of minor alleles (AA) and the cluster of heterozygotes (Aa), the real genotypes could be either homozygotes of minor alleles (Scenario I) or heterozygotes (Scenario II). For those that lie between the cluster of homozygotes of major alleles (aa) and the cluster of heterozygotes (Aa), the real genotypes could be either heterozygotes (Scenario III) or homozygotes of major alleles (Scenario IV). When the observations that cannot be called unequivocally are discarded, Scenario II is equivalent to Scenario III. To facilitate discussion, we assumed no-calls only happen in a specific genotype with the genotype-specific call-rate c (0 ≤ c ≤ 1).
Effect of MCB on type-I error rate for HWE
Hardy-Weinberg Equilibrium has been repeatedly recommended as a measure for QC in the context of genetic association studies [30]. In the following, we show that MCB is one of the causes of departure from HWE.
The type-I error rate for departure from HWE is inflated in the presence of MCB, and the inflation grows as the genotype-specific call-rate c decreases (Fig. 2). When MCB happens in homozygotes (Scenario I and Scenario IV), it leads to a departure from HWE because of excess heterozygotes. The inflation is similar for AA and aa for a given c. When MCB happens in heterozygotes (Scenario II & III), excess homozygotes result in the departure from HWE. The inflation is more pronounced for MCB in heterozygotes than in homozygotes. For example, at the significance level of 0.05 and for a population (N = 500) under HWE, the type-I error rate is 0.055–0.228 in Scenario I and Scenario IV, while it can be 0.076–0.654 in Scenario II & III under different MAFs in the presence of MCB (c = 0.80). However, HWE still holds, as expected, when data are missing equally and unbiasedly across the genotypes (Unbiased Missing, UBM).
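A small simulation in the spirit of this analysis is sketched below: genotypes are drawn for a population under HWE, calls are dropped from one genotype class with call-rate c, and the type-I error of a chi-square HWE test is estimated. The sample size, MAF and number of replicates are illustrative choices, and a standard 1-df chi-square HWE test is used rather than the exact procedure of the paper.

```python
import numpy as np
from scipy.stats import chi2

def hwe_chisq_p(n_AA, n_Aa, n_aa):
    """One-degree-of-freedom chi-square test for Hardy-Weinberg equilibrium."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)              # minor allele frequency estimate
    exp = np.array([n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2])
    obs = np.array([n_AA, n_Aa, n_aa], dtype=float)
    stat = np.sum((obs - exp) ** 2 / exp)
    return 1 - chi2.cdf(stat, df=1)

def type1_error(maf=0.25, n=500, c=0.8, missing_genotype=1, reps=2000, alpha=0.05, seed=0):
    """Estimate HWE type-I error when genotype class `missing_genotype`
    (0=AA, 1=Aa, 2=aa) is called with probability c and the others with probability 1."""
    rng = np.random.default_rng(seed)
    freqs = [maf**2, 2 * maf * (1 - maf), (1 - maf)**2]
    rejections = 0
    for _ in range(reps):
        counts = rng.multinomial(n, freqs)
        counts[missing_genotype] = rng.binomial(counts[missing_genotype], c)
        rejections += hwe_chisq_p(*counts) < alpha
    return rejections / reps

if __name__ == "__main__":
    print("MCB in heterozygotes:", type1_error(missing_genotype=1))
    print("MCB in minor homozygotes:", type1_error(missing_genotype=0))
```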
Effect of MCB on allele frequency estimation
The accuracy of allele/genotype frequency estimation is of special importance since many analyses, such as association studies, haplotype inference, and inference of population structure, rely on it. UBM does not affect allele/genotype frequency estimation, although it reduces the sample size. However, MCB does. When the missing bias occurs in Scenario I or in Scenario II & III, the MAF is underestimated; when it occurs in Scenario IV, the MAF is overestimated, and this change is larger than in the former two scenarios (Fig. 2).
Effect of MCB on association studies
The development of high-throughput genotyping technologies has made association studies widely used for the identification of disease loci underlying complex traits. We now examine the effect of MCB on association using various disease models and statistical tests.

Power is of special importance in association studies [31,32]. To investigate the effect of MCB on the power of association studies, MCB was introduced into the disease models (see Methods). Here, the sample size is 500 for both the case and control groups, and the MCB is identical for both groups.

It has been commonly assumed that missing calls would lead to power loss due to a decreased sample size, which only holds in the absence of MCB. In the presence of MCB, the power can be affected by both the sample size and the biased estimation of allele and genotype frequencies (see Fig. 2 and Additional files 1 & 2). The change in power caused by MCB is usually larger than that caused by UBM. For example, the power loss by UBM is always less than 5% for a locus with MAF = 0.25 under various disease models (power ≈ 80% in the genotypic χ² test) when c = 0.80, while the change can be around or even more than 10% when disturbed by MCB (Table 2).

For the χ² test based on genotype frequencies, MCB always leads to power loss in all scenarios under the different disease models compared with the null (in the absence of missingness) (see Fig. 2 and Additional file 2). But for the χ² test based on allele frequencies, power can even be gained in some scenarios because of the biased estimation of allele frequency (see Fig. 2 and Additional file 1). The genotypic χ² test appears to be more robust to changes in power in association studies than the allelic χ² test in the presence of MCB (see Fig. 2, Additional files 1 & 2, and Table 2).

In addition, though the minor allele A in the current settings of the disease models is susceptible to the disease (in the overdominant disease model, Aa is susceptible), the conclusions drawn above also hold when A is protective (data not shown). Moreover, in the disease model (h_AA = 0.01, h_Aa = 0.01, and h_aa = 0.01), the type-I error rate under MCB remains 0.05, indicating that MCB does not inflate the false positive rate in association studies, under the assumption that the extent of missingness is identical in cases and controls.
Tradeoff between MCB and genotyping errors
In the previous section, we showed that MCB is common in the current genotyping technologies and could seriously affect the subsequent analyses and lead to false conclusions. The key issue is how to deal with those equivocal observations which are responsible for MCB. Two alternative options are available. The first option is to discard the observations of borderline quality using the 'no-call' procedure, which may lead to MCB. The second option is to assign these observations to one of the genotypes at the cost of increasing genotyping errors. Here, we compare the overall outcome (allele frequency estimation and power of association studies) of these two options and try to offer guidelines for different scenarios to minimize the biases caused by the equivocal observations. In addition, we evaluate the overall call-rate and genotyping error rate of the two options respectively and intend to re-examine the current QC standards.

Figure 1. The sketch of genotype calling. The points shown in 'x' represent no-calls due to the failure to be assigned unequivocally to any genotype cluster. When the data points lie between the cluster of homozygotes of minor alleles (AA) and that of heterozygotes (Aa), the real calls could be AA or Aa, corresponding to Scenario I and Scenario II. When the data points lie between the cluster of heterozygotes (Aa) and that of homozygotes of major alleles (aa), the real calls could be Aa or aa, corresponding to Scenario III and Scenario IV.

Figure 2. Effects of MCB on HWE, MAF estimation and association studies under the multiplicative disease model.

To facilitate the presentation, the genotype-specific call-rate c was set to 0.80, a moderate MCB as shown previously. For the second option, we assumed all of the equivocal observations are called, with the proportion of accurate calls among these equivocal ones denoted by conf. Both MCB and genotyping errors can lead to inaccurate estimation of allele/genotype frequencies and in turn to distorted association. When the 'no-call' procedure is applied to observations of borderline quality, we showed earlier that the biased estimation is dictated only by the MAF and c. When the equivocal observations are called, the bias in allele frequency estimation depends on conf in addition to the MAF and c. In particular, the bias in allele frequency estimation, reflected by the change in the MAF estimate, increases with decreasing conf (see Additional file 3). For fixed MAF and c, the biased estimations caused by MCB and by genotyping errors are comparable. The bias introduced by MCB is fixed, whereas the bias caused by genotyping errors changes monotonically with conf. Interestingly, when conf is large enough (indicated by the solid line in Fig. 3B), the biased estimation of MAF caused by genotyping errors is smaller than that caused by MCB; therefore, it would be more beneficial to call the equivocal observations in this case (grey area above the line in Fig. 3B). However, when conf is below the solid line indicated in Fig. 3B, the biased estimation of MAF caused by genotyping errors is larger and the 'no-call' procedure is recommended (area below the line in Fig. 3B). It should be noted that the bias in allele frequency estimation caused by MCB in Scenario I is the greatest, larger than that caused by genotyping errors even with the highest error rate (conf = 0). Therefore, it is suggested that the 'no-call' principle should not be applied in Scenario I if the objective is to minimize the bias in MAF estimation (Fig. 3).
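A small numeric sketch of this comparison is given below: for a locus under HWE it computes the MAF estimate obtained when equivocal observations in one genotype class are either dropped ('no-call', giving MCB) or force-called with accuracy conf. The mapping of miscalls to the neighbouring heterozygote cluster is an illustrative assumption rather than the exact model of Table 3.

```python
def maf_estimates(maf=0.25, c=0.8, conf=0.9, equivocal="aa"):
    """Compare MAF estimates when equivocal observations in one genotype class
    are dropped ('no-call', giving MCB) versus force-called with accuracy conf
    (miscalls assumed to go to the neighbouring heterozygote cluster)."""
    p_AA, p_Aa, p_aa = maf**2, 2*maf*(1-maf), (1-maf)**2
    freqs = {"AA": p_AA, "Aa": p_Aa, "aa": p_aa}

    # Option 1: no-call -> genotype-specific call rate c for the equivocal class
    obs = dict(freqs)
    obs[equivocal] *= c
    total = sum(obs.values())
    maf_nocall = (obs["AA"] + 0.5 * obs["Aa"]) / total

    # Option 2: force-call -> a fraction (1-c) of the class is equivocal; conf of those
    # are called correctly, the rest miscalled as heterozygotes (assumed nearest cluster)
    obs = dict(freqs)
    equiv = (1 - c) * freqs[equivocal]
    obs[equivocal] = freqs[equivocal] - (1 - conf) * equiv
    obs["Aa"] += (1 - conf) * equiv
    maf_call = obs["AA"] + 0.5 * obs["Aa"]          # total mass still sums to 1

    return maf_nocall, maf_call

if __name__ == "__main__":
    truth = 0.25
    nocall, call = maf_estimates(maf=truth, c=0.8, conf=0.9, equivocal="aa")
    print(f"true MAF {truth:.3f} | no-call estimate {nocall:.3f} | force-call estimate {call:.3f}")
```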
In the following section, we explore the performance of association studies affected by MCB and by genotyping errors. Here, MCB (c = 0.80) and genotyping errors (c = 0.80, 0 ≤ conf ≤ 1) are assumed to be identical for the case and control groups. MCB and genotyping errors were introduced into the disease models with various modes of inheritance relationship (Table 2), respectively, according to Methods. The power affected by MCB was discussed previously. The power affected by genotyping errors is shown in Additional file 3. For the χ² test based on genotype frequencies, genotyping errors may have no effect on association in some scenarios. Though the power affected by genotyping errors is complicated across different scenarios and disease models, the power either does not change or changes monotonically with conf given the scenario and disease model (see Additional file 3). Therefore, similar to the previous analysis of allele frequency estimation, a threshold of conf (indicated by the solid lines in Fig. 4B and Additional files 4B & 5B) is expected as well. If conf is below the threshold, the power loss caused by genotyping errors is larger than that by MCB; therefore, the 'no-call' procedure should be taken (area below the line in Fig. 4B and Additional files 4B & 5B). Otherwise, it is better to call the observations of borderline quality at the cost of genotyping errors (grey area above the line in Fig. 4B and Additional files 4B & 5B). For instance, for a locus with MAF = 0.35, when the 'no-call' procedure is taken for the equivocal observations occurring in Scenario I (c = 0.80), the overall call-rate can still reach 97.6%. The power is 87.7% under MCB compared with 92.1% for the null in the multiplicative disease model for the allelic χ² test. However, if these equivocal observations are called, even if they are completely misclassified (conf = 0), the power with genotyping errors is at least 88.1%. This indicates that, in order to reduce the power loss caused by the equivocal observations, it would be more beneficial to call the equivocal observations with a genotyping error rate of 2.5% than to apply 'no-call' with an overall call-rate of 97.5% (Fig. 4).
As shown above, the commonly-used 'no-call' principle for observations of borderline quality is not always the best choice. Weighing the influences on the performance of association studies and allele frequency estimation, it is preferable to force the calling of these equivocal observations, even though the calls can be erroneous, when they lie between the cluster of homozygotes of minor alleles and that of heterozygotes (i.e. in Scenario I & Scenario II). When the equivocal observations lie between the cluster of homozygotes of major alleles and the cluster of heterozygotes (i.e., in Scenario III & Scenario IV), the loss of power introduced by MCB is more pronounced than that introduced by genotyping errors as long as these equivocal observations can be accurately called; but when the calling accuracy cannot be guaranteed (conf is small), the power loss is affected more by the genotyping errors, and it may be better to invoke the 'no-call' procedure. In addition, with different disease models, different decisions for dealing with these equivocal observations may be made. A program called QC-Tradeoff is available online (http://humpopgenfudan.cn/en/resource/download.html) to suggest whether the 'no-call' procedure should be conducted so as to minimize the biases caused by the equivocal observations.
In the above analyses, for the models of genotyping errors, we assumed all of the equivocal observations were called, to facilitate the discussion. Here, we extended this to a more general model in which a proportion α of the equivocal observations are called, under the disease models in Table 2. Fig. 5 and Additional file 6 illustrate the power of association tests in the presence of MCB and genotyping errors, together with the corresponding overall call-rate and genotyping error rate. An interesting finding is that the influence on power caused by the equivocal observations always changes monotonically with α from 0 (the model of MCB) to 1 (the model of genotyping errors). This indicates that, in order to minimize the biases caused by the equivocal observations in association studies, the appropriate procedure is either to apply 'no-call' (resulting in MCB) or to call all of the equivocal observations (accepting genotyping errors), as discussed above.
Given the knowledge of how the bias in allele/genotype frequency estimation and in association studies relates to the magnitude of MCB and genotyping errors, it is therefore possible to develop a strategy to minimize the bias by choosing proper cut-offs for the call-rate and the genotyping error rate (see Discussion).

Figure 3. Effects of MCB and genotyping errors on MAF estimation.

Figure 4. Effects of MCB and genotyping errors on association studies under the multiplicative disease model. A) illustrates the overall call-rate for loci with different MAFs in the presence of MCB (c = 0.8). B) illustrates the threshold of conf by a solid line. If the equivocal observations can be called accurately with a confidence above the conf threshold, it is preferable to call those equivocal observations at the cost of genotyping errors to minimize the power loss (grey area above the line). Otherwise, the 'no-call' procedure is beneficial, which results in MCB (area below the line). C) illustrates the genotyping error rate when the equivocal observations are called at the conf threshold mentioned above. The panels correspond to Scenario I, Scenario II, Scenario III and Scenario IV from left to right.
Discussion
The advent of high-throughput genotyping technologies has led to an exciting era of genome-wide associations. Genotype data of good quality are imperative in ensuring the credibility of a study. Missing calls in high-throughput genotyping have long been ignored in genetic studies. In this study, we demonstrated experimentally the prevalence and severity of the problem of missing calls, especially MCB, in the current genotyping technologies.
We also showed theoretically how MCB could lead to biased conclusions in the subsequent analyses, including estimation of allele/genotype frequencies and association tests. MCB leads to power loss in most cases, and such loss may lead to false negative conclusions. Compared with the allelic χ² test, the genotypic χ² test is more robust to MCB. Various modes of inheritance (dominant, recessive, overdominant, additive and multiplicative) were considered in our study. We also showed that when the missing bias happens in the genotype whose contribution to the disease differs most, regardless of whether it is susceptible or protective, it affects the power of association studies most.
In this study, we investigated the bias of association in the presence of both MCB and genotyping errors, and demonstrated that they contributed to the bias differently. This result is of special importance in determining the cut-offs used for QC in the current practice of GWA. The question is whether the current QC standards are optimal. If the objective is to minimize the bias in allele/genotype frequency estimation and in association tests, the cut-off for call-rate and that for genotyping error rate should be properly coupled in GWA. This leads to a re-examination of the existing QC standards for both call rate and genotyping error rate that are widely used in various association studies.
A commonly used QC standard for call rate is 80% and 95% or above for the first screening and fine mapping, respectively, in GWAs (e.g. Easton et al. [3]; Hunter et al. [4]). Although we demonstrated that the bias in allele frequency estimation and in association study discussed in Results (Fig. 3 &4, and Additional file 4 &5) is not negligible when 'no-call' procedure is applied, their call-rates are all above 80% and even can be above 95%. It suggests that the existing cut-offs are not sufficiently stringent to filter out the loci which may suffer from MCB.
A genotyping error rate < 1% is considered acceptable [3][4][5][33]. This is an extremely stringent cut-off in the presence of equivocal observations, given that such a stringent cut-off would force the 'no-call' principle to be invoked without leading to a reduction of bias. Our results indicated that in most cases the bias can be greatly reduced by increasing the call-rate at the cost of the genotyping error rate, i.e., < 5% (Fig. 3 & 4, and Additional files 4 & 5). Therefore, we suggest that the ongoing QC cut-off for call-rate should be increased, while the cut-off for genotyping error rate can be relaxed appropriately.
A program called QC-Tradeoff is available online to provide a conf threshold. If the threshold is high, it is conservative to take the 'no-call' procedure to reduce the power loss in association studies introduced by the equivocal observations; otherwise, the equivocal observations could be called even though genotyping errors may occur. Moreover, we showed that the missing calls can usually be reduced substantially with reasonable accuracy using the current technologies, indicating that the value of conf in practice is usually high enough. By adjusting the relevant parameters implemented in the calling software provided by the genotyping platforms (such as Illumina, TaqMan and GeneChip 500 K), one can trade off higher call rates against genotyping accuracy. For example, we adjusted the quality value of TaqMan from 0.95 (default) to 0.80 to illustrate the change in calling at the two loci rs10109984 and rs11226. After the change of the quality value, the overall call-rate increased substantially. In particular, when the quality value was 0.95, 30 and 29 calls were not made for the loci rs10109984 and rs11226, respectively. When 0.8 was chosen as the quality value, only 5 were not called at rs10109984 and 7 at rs11226. Subsequently, the number of genotype discordances between the genotyping results and the sequencing results was 8 (corresponding to conf = 0.73) and 0 (corresponding to conf = 1.0) for the loci rs10109984 and rs11226, respectively. Furthermore, our results also suggested that MCB does not inflate the type-I error rate of association studies. But it should be noted that this conclusion holds under the assumption that the extent of missingness is the same in cases and controls. Sometimes differential bias between cases and controls is unavoidable, e.g., due to different sourcing of samples. In this case, the effects of MCB and genotyping errors could be more complicated. Clayton et al. [34] showed that case-control differential bias and calling inaccuracies can lead to differential misclassification and consequently to increased false-positive rates. Plagnol et al. [35] from the same lab found that case-control bias associated with missing data can increase the false-positive rate as well and recommended using 'fuzzy' calls to deal with uncertain genotypes that would otherwise be labeled as missing.
Conclusion
Missing calls in high-throughput genotyping have long been ignored in genetic studies. However, we have illustrated that the problem of missing call bias exists widely, and sometimes seriously, in prevalent high-throughput genotyping technologies. Missing call bias can lead to biased conclusions in subsequent analyses, including allele/genotype frequency estimation and association studies. The commonly used 'no-call' procedure is not always the best option for observations of borderline quality. Our results indicated that in most cases the bias can be greatly reduced by increasing the call-rate at the cost of the genotyping error rate. Therefore, the existing QC standards should be modified so that the cut-off for call-rate and that for genotyping error rate are properly coupled in GWA. A program called QC-Tradeoff is available online to suggest whether to call or no-call the equivocal observations in the situation the user faces, so as to minimize the influence on power in association studies, and to illustrate the acceptable QC standard for the corresponding case.
Regenotyping for no-calls
Two SNPs were selected for each platform (GeneChip 500 K, SNPstream, Illumina and TaqMan) from a large number of loci which had been genotyped by the respective technology in our laboratory. The SNPs were selected using the following criteria: 1) the genotype calls were made using the software provided by the vendors of the respective technology; 2) the call-rate at each locus is around or above the average call-rate for the same platform; 3) the minor allele frequency (MAF) of the observed data is above 0.15.
For SNPstream, Illumina and TaqMan, the calling algorithms are based on various methods, including the GetGenos/QCReview program for SNPstream, the GeneCall software for Illumina and the SDS software for TaqMan. Generally, a baseline is set to distinguish the background signals from the informative ones, and clustering and calling procedures are then conducted on the informative signals. The samples that show signals above the baseline but cannot be called unequivocally were selected for further analysis on the respective genotyping platforms. For GeneChip 500 K, genotype data were called by the software GTYPE. It introduces a dynamic model-based algorithm [36] suited to the situation in which many different SNPs are examined in a few individuals. Because this algorithm differs from the others, we instead collected all the missing samples for sequencing.
Overall, rs6743724 (call-rate: 96.5%) and rs699512 (98.6%) for SNPstream, rs2277632 (98.9%) and rs1457043 (98.9%) for Illumina, rs10109984 (95.8%) and rs11226 (96.6%) for TaqMan, and rs1192885 (94.2%) and rs6855202 (95.1%) for GeneChip 500 K were selected. The genotypes for the missing data were generated by sequencing the DNA segment containing the polymorphic locus. In the 2 × 3 contingency table of genotypes in the 'observed' data produced by the genotyping platforms (Obs.) and the 'missing' data produced by sequencing (Seq.), Fisher's exact test was used to examine whether there is a difference in the genotypic distribution between them (Missing Call Bias, MCB). The genotype-specific call-rate was calculated as well, for AA, Aa and aa respectively.
Models for MCB and genotyping errors
In the presence of no-calls, let p_{G_i} denote the frequency of the genotype G_i (i = AA, Aa, aa) in the population, and let c denote the genotype-specific call-rate of the genotype subject to no-calls. On the other hand, an equivocal data point can instead be assigned to a genotype, which could lead to a genotyping error. Assume all the equivocal data points are called, and let conf denote the proportion of genotypes that are accurately called among these equivocal ones. The genotyping error rate is then (1 − conf)(1 − c)p_G, where G denotes the genotype of the equivocal data points.
The observed genotype frequencies in the models were listed in Table 3.
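The overall call-rate and genotyping error rate quoted in the Results (e.g. 97.6% and 2.5% for a Scenario I locus with MAF = 0.35 and c = 0.80) follow directly from these definitions; a minimal sketch, assuming HWE genotype frequencies:

```python
def scenario_rates(maf, c, conf, genotype="AA"):
    """Overall call-rate under the MCB ('no-call') model and genotyping error
    rate under the force-call model, when class `genotype` has genotype-specific
    call-rate c and equivocal calls have accuracy conf."""
    p = {"AA": maf**2, "Aa": 2 * maf * (1 - maf), "aa": (1 - maf)**2}[genotype]
    overall_call_rate = 1 - (1 - c) * p                  # no-call option
    genotyping_error_rate = (1 - conf) * (1 - c) * p     # force-call option
    return overall_call_rate, genotyping_error_rate

if __name__ == "__main__":
    # Scenario I example from the text: MAF = 0.35, c = 0.80, worst-case conf = 0
    call_rate, err_rate = scenario_rates(maf=0.35, c=0.80, conf=0.0, genotype="AA")
    print(f"overall call-rate: {call_rate:.2%}")         # ~97.6% as quoted in the text
    print(f"genotyping error rate: {err_rate:.2%}")      # ~2.5% as quoted in the text
```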
Effects on allele frequency estimation and type-I error rate for HWE
Let π denote the frequency of the minor allele A in a population, where 0 ≤ π ≤ 0.5. Suppose HWE holds in the population; the genotype frequencies are then p_AA = π², p_Aa = 2π(1 − π) and p_aa = (1 − π)², respectively.
However, in the presence of MCB or genotyping errors, the estimation of π will be affected. Here, we present the results as the difference (π_obs − π) to reflect the change in the allele frequency estimate, where π_obs is the estimated frequency of the minor allele A in the presence of MCB or genotyping errors according to (1).

Effects on power in association studies

The genotype frequencies in case and control groups can be calculated according to Table 3 and formulas (1)~(3), respectively. The power was calculated using a non-central χ² distribution following Gordon et al. [23]. The power calculation was conducted in R. To compare the power affected by MCB with that affected by genotyping errors at different levels, we seek a conf threshold such that, if the equivocal observations are called with accuracy above the conf threshold, the power loss caused by genotyping errors is smaller than that caused by MCB, and vice versa.
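The sketch below illustrates this style of power calculation for a 2×3 genotypic test: a non-centrality parameter is computed from the case/control genotype frequencies and the power is read off a non-central χ² distribution. It follows the general approach attributed to Gordon et al. [23] only in spirit; the exact formulas of the paper (and the entries of Table 3) are not reproduced, and the example frequencies are arbitrary.

```python
import numpy as np
from scipy.stats import ncx2, chi2

def genotypic_power(p_case, p_control, n_case, n_control, alpha=0.05):
    """Approximate power of the 2x3 genotypic chi-square test (2 df) given
    expected genotype frequencies in cases and controls."""
    p_case, p_control = np.asarray(p_case, float), np.asarray(p_control, float)
    n = n_case + n_control
    pooled = (n_case * p_case + n_control * p_control) / n
    # non-centrality parameter of the chi-square statistic under the alternative
    ncp = n_case * np.sum((p_case - pooled) ** 2 / pooled) \
        + n_control * np.sum((p_control - pooled) ** 2 / pooled)
    crit = chi2.ppf(1 - alpha, df=2)
    return 1 - ncx2.cdf(crit, df=2, nc=ncp)

if __name__ == "__main__":
    # illustrative genotype frequencies (AA, Aa, aa) in cases and controls
    cases    = [0.10, 0.42, 0.48]
    controls = [0.0625, 0.375, 0.5625]     # HWE with MAF = 0.25
    print(f"power ~ {genotypic_power(cases, controls, 500, 500):.3f}")
```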
Here, the same extent of MCB, UBM or genotyping errors was assumed for both cases and controls. Various modes of inheritance were considered, including dominant, recessive, overdominant, additive and multiplicative relationships (Table 2). The power in these disease models is approximately 80% at the significance level of 0.05 using the genotypic χ² test when the MAF is 0.25 and the sample size is 500 for cases and controls, respectively.
Additional File 3 Figure S3.
Intermittent Superior Vena Cava Syndrome Secondary to Malignant Pericardial Mesothelioma
Malignant pericardial sarcomatoid mesothelioma is an exceedingly rare tumor, accounting for 0.8% of all cases of mesothelioma. Superior vena cava syndrome (SVCS) occurs due to a partial obstruction or compression of the superior vena cava, which hinders blood outflow from the upper body. It can be caused by an intrinsic factor such as thrombosis, or by an extrinsic factor such as tumors. Clinical presentation includes edema of the face and upper limbs, plethora, dyspnea, dysphagia, stridor and cough. We report a case of a 56-year-old female, a known case of hypertension on angiotensin-converting enzyme inhibitors (ACEIs), who presented to the emergency department with intermittent facial swelling and dyspnea. Imaging and pathology reports confirmed the diagnosis of intermittent SVCS secondary to pericardial sarcomatoid mesothelioma with pericardial effusion. What makes our case unique is that both the etiology and the presenting complaint are rare entities, as most SVCS cases are continuously symptomatic throughout the disease course and are usually caused by lung cancer or lymphoma.
Introduction
The superior vena cava is a large vein that drains blood from the upper trunk, head and neck. Superior vena cava syndrome (SVCS) occurs due to an obstruction of the vein and is characterized by cyanosis, plethora and distention of subcutaneous vessels, in addition to edema of the arms, head and neck. The resulting edema may jeopardize the airways, causing dyspnea, stridor and cough. The obstruction can be due to intrinsic etiologies such as a thrombus or a long-term indwelling catheter. It can also be due to extrinsic pathologies such as tumors or, on rare occasions, pericardial effusion. It can be a life-threatening condition as it may compromise cardiac output and result in shock [1]. We report a case of intermittent SVCS caused by pericardial sarcomatoid mesothelioma with pericardial effusion. Both the presentation and the etiology are rare entities. We reviewed the literature and found two cases similar to ours with respect to the presentation, and two others with regard to the etiology.
Case Presentation
A 56-year-old female, a known case of hypertension and dyslipidemia for more than 10 years on amlodipine and valsartan, presented to the emergency department of a tertiary hospital with intermittent facial swelling and exertional dyspnea of one week's duration. Regarding severity, she described her face as a balloon in the morning after sleeping eight hours or more, with only minimal swelling around the eyes remaining by the end of the day; the swelling was not associated with change in color, headache, hoarseness or dysphagia. The patient reported weight loss of around 10 kg over a two-month period. Furthermore, her dyspnea was not associated with chest pain, palpitation, orthopnea, dizziness, syncope, cough or fever. The patient denied any history of trauma, allergies, urinary or gastrointestinal symptoms. She reported multiple emergency visits for the same complaint; the patient was investigated, reassured and discharged at each visit. Upon examination, she was conscious, alert, oriented to time, place and person, not in pain but in mild respiratory distress. She was not pale, cyanosed or jaundiced, but there was mild swelling around the eyes and lips. Her vital signs were as follows: temperature 37.0 °C, pulse 101/min, respiratory rate 24, blood pressure 146/86 mmHg, oxygen saturation 96% on room air. Chest, abdomen, and upper and lower limb examinations were unremarkable. All her labs were unremarkable except for a microcytic hypochromic anemia; the labs were as follows: complete blood count (CBC) showed white blood cells (WBC) 5.3 k/ul (normal 4-11 k/ul), hemoglobin 8. The patient was admitted, and on her fifth day of admission she underwent video-assisted thoracoscopic surgery (VATS) for biopsy of the soft matted lesion found on the CT scan. At first, 200 cc of hemorrhagic pericardial fluid was drained. The surgeon then found a mass within the pericardium compressing the left atrium. Upon retracting the mass, the left pulmonary artery was injured; the injury extended to the main pulmonary artery, which resulted in massive bleeding. Therefore, the procedure was converted to an open thoracotomy. The patient became hypotensive and developed cardiac arrest; open cardiac massage was performed with suction and compressions. The tear was clamped, bleeding was controlled, and the patient was resuscitated. A superficial wedge biopsy was then taken. Shortly after, the patient re-bled from the same source, developed hypotensive shock, and suffered a second cardiac arrest. Unfortunately, she was pronounced dead at the end of the procedure. The histopathology report was positive for pan-CK, CK7 and calretinin, and focally weakly positive for D2-40 and CD56. Ki-67 was high, at 70%. This is highly suggestive of a sarcomatoid mesothelioma.
Discussion
Malignant pericardial sarcomatoid mesothelioma is an exceedingly rare tumor, accounting for 0.8% of all cases of mesothelioma. Most mesothelioma histopathological types are either epithelioid or biphasic, and they usually occur in the pleura [2]. Mesothelioma can also arise from the lining of other structures such as the peritoneum, the tunica vaginalis of the testis and, to a lesser extent, the pericardium. The diagnosis is made by biopsy, but more commonly at autopsy [3].
SVCS occurs due to a partial obstruction or compression of the superior vena cava, which hinders blood outflow from the upper body. It is characterized by cyanosis, plethora, distention of subcutaneous vessels, and edema of the arms, head and neck. The resulting edema may jeopardize the airways, causing dyspnea, stridor and cough. The obstruction can be due to intrinsic etiologies such as a thrombus or a long-term indwelling catheter, while extrinsic compression can be caused by tumors or, very rarely, a pericardial effusion. Small cell lung cancer and non-Hodgkin lymphoma are the most common culprits. It can be a life-threatening condition as it may compromise cardiac output and result in shock [3,4]. Intermittent facial swelling is an uncommon presenting complaint, and it can be caused by a variety of etiologies, the most common of which is angioedema. Nonetheless, more serious etiologies such as SVCS should be ruled out [5].
A PubMed (US National Library of Medicine, Bethesda, MD, USA) search using medical subject headings (MeSH) "intermittent facial swelling", "superior vena cava syndrome" and "pericardial sarcomatoid mesothelioma" resulted in four cases that have similarities to our case.
The first case was reported in 2014 and described an 84-year-old man who was a known case of atrial fibrillation, coronary artery bypass graft, chronic bronchitis and dementia, and an ex-smoker (60 pack-years) who had worked as a miner. He presented with intermittent face and right arm swelling for eight weeks, which was worse in the morning and improved as the day went on. He also reported shortness of breath and right wrist pain. At first, a diagnosis of angioedema was made. However, after further investigations, a final diagnosis of SVCS secondary to metastatic bronchogenic carcinoma was established [5].

The second case, from 1992, involved a 48-year-old male who was a known case of Hodgkin's lymphoma in the neck which had metastasized to the chest, abdominal and pelvic lymph nodes. A Hickman catheter had been inserted via the left subclavian vein to gain vascular access for administration of chemotherapy, with the catheter tip extending into the SVC. Several months later, he presented with intermittent tightness of the neck and distension of the veins on the forehead, especially when bending down. Further investigations revealed an intermittent SVCS caused by a thrombus at the end of the catheter acting as a ball valve [6].

The third case, from 2009, concerned a 60-year-old woman who presented with a repeated dry cough, exertional dyspnea and shortness of breath due to a large pericardial effusion and bilateral pleural effusions. There was no exposure to asbestos, tuberculosis or smoke. Shortly after her presentation, the patient developed superior vena cava compression along with pericardial constriction. Partial pericardiectomy revealed a large tumor which was then attributed to pericardial mesothelioma [7].

The last case was published in 2015 and described a 70-year-old woman, status post right lower lobectomy and chemotherapy for adenocarcinoma six years prior to presentation, who presented with a history of facial swelling for three weeks. A CT scan revealed a loculated pericardial effusion compressing the SVC. Pericardiocentesis rapidly relieved her symptoms, and fluid analysis later confirmed a malignant effusion [8].

In comparison with the above cases, we found several similarities to our case. In terms of presentation, the first two cases both presented with symptoms of intermittent SVCS due to different etiologies. The third case is similar to ours in that both were caused by the same tumor, mesothelioma. The last case was included to explain why we considered pericardial effusion a contributing factor to the patient's clinical manifestations.
In our patient, we first considered angioedema as the leading differential diagnosis, as the patient was using ACEIs. Nevertheless, this is a diagnosis of exclusion; thus, other diagnoses must be ruled out first, including SVCS. Based on the radiological and histopathological reports, we concluded that the most likely cause of her symptoms was pericardial mesothelioma with malignant pericardial effusion.

In terms of management, SVCS management is etiology based; therefore, the malignant pericardial mesothelioma will be addressed. The disease carries an extremely poor prognosis, with a median survival of six months. Standardized treatment guidelines are yet to be established. However, current practice implies that patients with an early stage of the disease or a loculated one might benefit from surgical therapy, yet most patients are diagnosed at an advanced stage due to the absence of symptoms early on. Systemic chemotherapy is indicated for those with advanced or unresectable disease. That said, the sarcomatoid histopathological subtype of mesothelioma has shown a very poor response to chemotherapy. Additionally, pericardiocentesis is commonly performed to alleviate the onset of heart failure. On those grounds, further improvement in the therapeutic modalities used for malignant pericardial mesothelioma is necessary [3].
Conclusions
In summary, SVCS can, on rare occasions, present with intermittent symptoms. It can represent a fatal diagnosis; hence, it should not be taken lightly and must be ruled out before considering other diagnoses. Furthermore, mesothelioma can be a rare cause of this unusual presentation.
Additional Information Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
When Blood Is Thicker Than Water: A Case of Acute Pancreatitis Secondary to Familial Hypertriglyceridemia
Hypertriglyceridemia is one of the major causes of acute pancreatitis in addition to gallstones and alcohol use. These etiologies are often associated with underlying comorbidities. Acute pancreatitis secondary to hypertriglyceridemia is associated with an increase in clinical severity and further complications. We present a case of a 56-year-old man with a past medical history of hypertension, diabetes mellitus, and familial hypertriglyceridemia who was diagnosed with acute pancreatitis secondary to hypertriglyceridemia. The patient presented with 9/10 pressure across the abdomen radiating to the sternum. Labs revealed elevated triglyceride count > 8000 mg/dL and cholesterol > 705 mg/dL. Abdominal CT showed fat stranding along the anterior aspect of the pancreatic head. The patient was managed with IV fluids, nil per os (NPO), and statin management for hypertriglyceridemia. Seven days later, triglycerides decreased to 658 mg/dL, and abdominal pain resolved. This case highlights an unusual presentation of acute pancreatitis and demonstrates the importance of understanding the spectrum of etiologies for this condition.
Introduction
Hypertriglyceridemia is one of the major causes of acute pancreatitis, accounting for up to 10% of all cases [1]. It typically occurs in patients with dyslipidemia in the presence of a secondary condition, such as inadequately controlled diabetes, excess alcohol consumption, or medication use. It is also associated with greater clinical severity and rate of complications compared to other etiologies of acute pancreatitis [1]. Familial hypertriglyceridemia is characterized by an increase in very low-density lipoprotein (VLDL) particles and follows an autosomal dominant inheritance pattern [2]. Here, we discuss a patient case of acute pancreatitis secondary to familial hypertriglyceridemia. This article was previously presented as a meeting abstract and poster at the 2023 PA-ACP Eastern Region Conference on October 21, 2023.
Case Presentation
A 56-year-old male with hypertension, uncontrolled diabetes mellitus (A1C: 12.6%), and familial hypertriglyceridemia presented to the emergency department with a chief concern of abdominal pain for two days. The pain was described as 9/10 pressure radiating across the abdomen to the sternum. The patient had stopped taking his prescribed medications one year prior, including amlodipine 10 mg daily, atorvastatin 20 mg daily, fish oil 1000 mg twice per day (BID), cholecalciferol 50 mcg daily, vitamin B12 500 mcg daily, lisinopril 40 mg daily, and metformin 1000 mg BID.

Physical exam was unremarkable aside from tenderness to palpation diffusely throughout the abdomen. Blood drawn for labs had a white hue and was noted to be "strongly lipemic." Labs revealed an elevated triglyceride count > 8000 mg/dL, cholesterol > 705 mg/dL, and a lactic acid of 5.0 mmol/L. Glucose was elevated to 403 mg/dL, sodium decreased to 114 mEq/L, and lipase was elevated to 298 U/L. Liver function tests (LFTs) were within normal limits. Abdominal CT showed fat stranding along the anterior aspect of the pancreatic head (Figure 1), confirming the diagnosis of acute pancreatitis secondary to familial hypertriglyceridemia based on the Atlanta classification [3]. The revised Atlanta classification requires that two or more of the following must be met to diagnose acute pancreatitis: (1) abdominal pain (i.e., right upper quadrant pain) suggestive of pancreatitis; (2) serum amylase or lipase levels on labs greater than three times the upper limit of normal; and/or (3) characteristic imaging findings on CT, often described as pancreatic inflammation or fat stranding on the pancreas. The patient was managed with an insulin drip until triglycerides were < 500 mg/dL, IV fluids to slowly correct sodium, bowel rest with nil per os (NPO) initially with a low-fat diet restarted as tolerated, and restarting antilipid management for hypertriglyceridemia. Medications at discharge included atorvastatin 40 mg daily, fenofibrate 145 mg daily, fish oil 2000 mg BID for medium-chain fatty acids, lisinopril 10 mg daily, basal insulin, and metformin 500 mg BID.
Discussion
Acute pancreatitis is a disease that has many different etiologies, the most common of which include alcohol and gallstones, followed by hypertriglyceridemia-mediated disease. In gallstone pancreatitis, the most common cause of acute pancreatitis in the Western world, blockage of the pancreatic duct by a biliary stone leads to inflammation of the pancreas [4]. Alcohol-induced pancreatitis is less well understood, as ethanol itself does not directly cause pancreatitis. Rather, it sensitizes the pancreas to injury by factors such as a high-lipid diet, cigarette smoke, and infectious agents [5]. While acute disease is common, it can often progress to chronic pancreatitis.
The proposed mechanism for hypertriglyceridemia-induced pancreatitis suggests that high levels of lipids increase plasma viscosity, resulting in ischemia and inflammation of pancreatic tissue [1]. Pancreatitis secondary to hypertriglyceridemia typically occurs in those with genetic lipid disorders, most commonly types I, IV, or V, as these result in higher levels of lipids and triglyceride-rich chylomicrons (chylomicronemia) [1]. In type I, chylomicron metabolism is predominantly affected, resulting in chylomicronemia [1]. Patients with type IV, familial combined hyperlipidemia, present with elevated VLDL levels, whereas those with type V are characterized by elevated VLDL and chylomicrons due to gene alterations that reduce catabolism [1]. Types IV and V are both more prevalent than type I and are more affected by environmental risk factors, such as obesity, alcoholism, and diet [1].
In the setting of hypertriglyceridemia, pseudohyponatremia should be suspected, as the increased mass of the non-aqueous lipid components of serum can dilute the aqueous component of serum, subsequently reducing the measured plasma sodium concentration [6]. A proper workup is necessary to determine the etiology of pancreatitis. Liver function tests, lipid panels, and pancreatic enzymes are important laboratory values to obtain. Right upper quadrant ultrasound is an important step to identify obstructive causes if indicated. Magnetic resonance cholangiopancreatography (MRCP) and abdominal CT may also aid in the workup. All patients should be managed with aggressive IV fluids, early enteral feeding advanced as tolerated, and pain medication, with further management to control the underlying etiology.
Management of familial hypertriglyceridemia will help in preventing the recurrence of this disease. These treatments are primarily focused on reducing levels of triglycerides, managing comorbid conditions, and lifestyle modifications such as diet and exercise [7]. Statins are often the first-line pharmacotherapy choice. Fibrates such as fenofibrate and gemfibrozil may also be used for the management of hypertriglyceridemia [8]. Fibrates have been shown to reduce triglyceride levels by up to 50%, attributing their placement in the
FIGURE 1: Axial computed tomography scan showing fat stranding along the pancreatic head.
Exclusive Supermask Subnetwork Training for Continual Learning
Continual Learning (CL) methods focus on accumulating knowledge over time while avoiding catastrophic forgetting. Recently, Wortsman et al. (2020) proposed a CL method, SupSup, which uses a randomly initialized, fixed base network (model) and finds a supermask for each new task that selectively keeps or removes each weight to produce a subnetwork. They prevent forgetting as the network weights are not being updated. Although there is no forgetting, the performance of SupSup is sub-optimal because fixed weights restrict its representational power. Furthermore, there is no accumulation or transfer of knowledge inside the model when new tasks are learned. Hence, we propose ExSSNeT (Exclusive Supermask SubNEtwork Training), that performs exclusive and non-overlapping subnetwork weight training. This avoids conflicting updates to the shared weights by subsequent tasks to improve performance while still preventing forgetting. Furthermore, we propose a novel KNN-based Knowledge Transfer (KKT) module that utilizes previously acquired knowledge to learn new tasks better and faster. We demonstrate that ExSSNeT outperforms strong previous methods on both NLP and Vision domains while preventing forgetting. Moreover, ExSSNeT is particularly advantageous for sparse masks that activate 2-10% of the model parameters, resulting in an average improvement of 8.3% over SupSup. Furthermore, ExSSNeT scales to a large number of tasks (100). Our code is available at https://github.com/prateeky2806/exessnet.
Introduction
Artificial intelligence aims to develop agents that can learn to accomplish a set of tasks. Continual Learning (CL) (Ring, 1998; Thrun, 1998) is crucial for this, but when a model is sequentially trained on different tasks with different data distributions, it can lose its ability to perform well on previous tasks, a phenomenon known as catastrophic forgetting (CF) (McCloskey and Cohen, 1989; Zhao and Schmidhuber, 1996; Thrun, 1998). This is caused by the lack of access to data from previous tasks, as well as by conflicting updates to shared model parameters when sequentially learning multiple tasks, which is called parameter interference (McCloskey and Cohen, 1989).
Recently, some CL methods avoid parameter interference by taking inspiration from the Lottery Ticket Hypothesis (Frankle and Carbin, 2018) and Supermasks (Zhou et al., 2019) to exploit the expressive power of sparse subnetworks. Given that there is a combinatorial number of sparse subnetworks inside a network, Zhou et al. (2019) noted that even within randomly weighted neural networks, there exist certain subnetworks, known as supermasks, that achieve good performance. A supermask is a sparse binary mask that selectively keeps or removes each connection in a fixed and randomly initialized network to produce a subnetwork with good performance on a given task. We call this subnetwork the supermask subnetwork, shown in Figure 1 highlighted with red weights. Building upon this idea, Wortsman et al. (2020) proposed a CL method, SupSup, which initializes a network with fixed and random weights and then learns a different supermask for each new task. This allows them to prevent catastrophic forgetting (CF) as there is no parameter interference (because the model weights are fixed).
Although SupSup (Wortsman et al., 2020) prevents CF, there are some problems with using supermasks for CL: (1) fixed random model weights in SupSup limit the supermask subnetwork's representational power, resulting in sub-optimal performance; (2) when learning a task, there is no mechanism for transferring learned knowledge from previous tasks to better learn the current task. Moreover, the model does not accumulate knowledge over time as the weights are not being updated.
Figure 1: EXSSNET diagram. We start with random weights W^(0). For task 1, we first learn a supermask M_1 (the corresponding subnetwork is marked in red, column 2 row 1) and then train the weights corresponding to M_1, resulting in weights W^(1) (bold red lines, column 1 row 2). For task 2, we learn the mask M_2 over the fixed weights W^(1). If mask M_2 overlaps with M_1 (marked by bold dashed green lines in column 3 row 1), then only the non-overlapping weights (solid green lines) of the task 2 subnetwork are updated (as shown by the bold solid green lines, column 3 row 2). These already trained weights (bold lines) are not updated by any subsequent task. Finally, for task 3, we learn the mask M_3 (blue lines) and update the solid blue weights.
To overcome the aforementioned issues, we propose our method, EXSSNET (Exclusive Supermask SubNEtwork Training), pronounced as 'excess-net', which first learns a mask for a task and then selectively trains a subset of weights from the supermask subnetwork. We train the weights of this subnetwork via exclusion, which avoids updating parameters of the current subnetwork that have already been updated by any of the previous tasks. Figure 1 illustrates how EXSSNET also helps us prevent forgetting. Training the supermask subnetwork's weights increases its representational power and allows EXSSNET to encode task-specific knowledge inside the subnetwork (see Figure 2). This solves the first problem and allows EXSSNET to perform comparably to a fully trained network on individual tasks; and when learning multiple tasks, the exclusive subnetwork training improves the performance of each task while still preventing forgetting (see Figure 3).
To address the second problem of knowledge transfer, we propose a k-nearest-neighbors-based knowledge transfer (KKT) module that is able to utilize relevant information from the previously learned tasks to improve performance on new tasks while learning them faster. Our KKT module uses KNN classification to select a subnetwork from the previously learned tasks that has better-than-random predictive power for the current task and uses it as a starting point to learn the new task.
Next, we show our method's advantage by experimenting with both natural language and vision tasks. For natural language, we evaluate on WebNLP classification tasks (de Masson d'Autume et al., 2019) and GLUE benchmark tasks (Wang et al., 2018), whereas for vision, we evaluate on SplitMNIST (Zenke et al., 2017), SplitCIFAR100 (De Lange and Tuytelaars, 2021), and SplitTinyImageNet (Buzzega et al., 2020) datasets. We show that for both language and vision domains, EXSSNET outperforms multiple strong and recent continual learning methods based on replay, regularization, distillation, and parameter isolation. For the vision domain, EXSSNET outperforms the strongest baseline by 4.8% and 1.4% on the SplitCIFAR and SplitTinyImageNet datasets respectively, while surpassing the multitask model and bridging the gap to training individual models for each task. In addition, for GLUE datasets, EXSSNET is 2% better than the strongest baseline methods and surpasses the performance of multitask learning that uses all the data at once. Moreover, EXSSNET obtains an average improvement of 8.3% over SupSup for sparse masks with 2-10% of the model parameters and scales to a large number of tasks (100). Furthermore, EXSSNET with the KKT module learns new tasks in as few as 30 epochs compared to 100 epochs without it, while achieving 3.2% higher accuracy on the SplitCIFAR100 dataset. In summary, our contributions are listed below:
• We propose a simple and novel method to improve mask learning by combining it with exclusive subnetwork weight training to improve CL performance while preventing CF.
• We propose a KNN-based knowledge transfer (KKT) module that dynamically identifies previous tasks to transfer knowledge from, to learn new tasks better and faster.
• Extensive experiments on NLP and vision tasks show that EXSSNET outperforms strong baselines and is comparable to the multitask model for NLP tasks while surpassing it for vision tasks. Moreover, EXSSNET works well for sparse masks and scales to a large number of tasks.
Motivation
Using sparsity for CL is an effective technique to learn multiple tasks, i.e., by encoding them in different subnetworks inside a single model. SupSup (Wortsman et al., 2020) is an instantiation of this idea that initializes the network weights randomly and then learns a separate supermask for each task (see Figure 7). It prevents CF because the weights of the network are fixed and never updated. However, this leads to a crucial problem, as discussed below.
Problem 1 - Sub-Optimal Performance of Supermasks: Although fixed network weights in SupSup prevent CF, they also restrict the representational capacity, leading to worse performance compared to a fully trained network. In Figure 2, we report the test accuracy with respect to the fraction of network parameters selected by the mask, i.e., the mask density, for an underlying ResNet18 model on a single 100-way classification task on the CIFAR100 dataset. In Figure 3, we report the average test accuracy versus the fraction of overlapping parameters between the masks of different tasks, i.e., the sparse overlap (see Equation 2), for five different 20-way classification tasks from the SplitCIFAR100 dataset with a ResNet18 model. We observe that SSNET outperforms SupSup for lower sparse overlap, but as the sparse overlap increases, the performance declines because the supermask subnetworks for different tasks have more overlapping (common) weights (bold dashed lines in Figure 1). This leads to higher parameter interference, resulting in increased forgetting, which suppresses the gain from subnetwork weight training.
Our final proposal, EXSSNET, resolves both of these problems by selectively training a subset of the weights in the supermask subnetwork to prevent parameter interference. When learning multiple tasks, this prevents CF, resulting in strictly better performance than SupSup (Figure 3) while having the representational power to bridge the gap with fully trained models (Figure 2).
Method
As shown in Figure 1, when learning a new task t_i, EXSSNET follows three steps: (1) we learn a supermask M_i for the task; (2) we use all the previous tasks' masks M_1, ..., M_{i-1} to create a free parameter mask M_i^free, which finds the parameters selected by the mask M_i that were not selected by any of the previous masks; (3) we update the weights corresponding to the mask M_i^free, as this avoids parameter interference. Next, we formally describe all the steps of our method EXSSNET (Exclusive Supermask SubNEtwork Training) for a multi-layer perceptron (MLP).
Notation: During training, we can treat each layer l of an MLP network separately. An intermediate layer l has n_l nodes denoted by V^(l) = {v_1, ..., v_{n_l}}. For a node v in layer l, let I_v denote its input and Z_v = σ(I_v) denote its output, where σ(.) is the activation function. Given this notation, I_v can be written as I_v = Σ_{u∈V^(l-1)} w_uv Z_u, where w_uv is the network weight connecting node u to node v. The complete network weights for the MLP are denoted by W. When training task t_i, we have access to the supermasks from all previous tasks {M_j}_{j=1}^{i-1} and the model weights W^(i-1) obtained after learning task t_{i-1}.
Learning the Supermask: Following Wortsman et al. (2020), we use the algorithm of Ramanujan et al. (2019) to learn a supermask M_i for the current task t_i. The supermask M_i is learned with respect to the underlying model weights W^(i-1), and the mask selects a fraction of weights that lead to good performance on the task without training the weights. To achieve this, we learn a score s_uv for each weight w_uv, and once trained, these scores are thresholded to obtain the mask. Here, the input to a node v is I_v = Σ_{u∈V^(l-1)} w_uv Z_u m_uv, where m_uv = h(s_uv) is the binary mask value and h(.) is a function which outputs 1 for the top-k% of the scores in the layer, with k being the mask density.
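To make the thresholding step concrete, a minimal sketch of extracting a binary mask from per-weight scores follows; the tensor shapes, names, and the straight-through comment are illustrative assumptions, not code from the ExSSNeT release.

```python
import torch

def topk_mask(scores: torch.Tensor, density: float) -> torch.Tensor:
    """h(.): keep the top-`density` fraction of scores in the layer as a binary mask."""
    k = max(1, int(density * scores.numel()))
    threshold = torch.topk(scores.flatten(), k).values.min()
    return (scores >= threshold).float()

# Fixed random weights for one layer and learnable per-weight scores.
weights = torch.randn(256, 128)                      # W^(i-1), never trained in this step
scores = torch.randn(256, 128, requires_grad=True)   # s_uv
mask = topk_mask(scores.detach(), density=0.10)      # 10% mask density

# During mask training the binarisation is bypassed for gradients with a
# straight-through estimator, e.g. mask_ste = mask + scores - scores.detach()
masked_weights = weights * mask                      # subnetwork used in the forward pass
```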
Next, we use a straight-through gradient estimator (Bengio et al., 2013) and iterate over the current task's data samples to update the scores for the corresponding supermask M_i.
Finding Exclusive Mask Parameters: Given a learned mask M_i, we use all the previous tasks' masks M_1, ..., M_{i-1} to create a free parameter mask M_i^free, which finds the parameters selected by the mask M_i that were not selected by any of the previous masks. We do this by (1) creating a new mask M_{1:i-1} containing all the parameters already updated by any of the previous tasks, obtained by taking the union of all the previous masks {M_j}_{j=1}^{i-1} via a logical OR operation, and (2) obtaining the mask M_i^free by intersecting the network parameters not used by any previous task, given by the negation of the mask M_{1:i-1}, with the current task mask M_i via a logical AND operation. Next, we use this mask M_i^free for the exclusive supermask subnetwork weight training.
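The exclusive-parameter bookkeeping described above reduces to two Boolean operations over binary masks. A small sketch, with names of our own choosing, also shows how the sparse overlap discussed later can be read off the same quantities:

```python
import torch
from typing import List

def free_mask(current_mask: torch.Tensor, previous_masks: List[torch.Tensor]) -> torch.Tensor:
    """M_i^free: parameters selected by M_i that no previous task has used."""
    if not previous_masks:
        return current_mask.clone()
    used = torch.zeros_like(current_mask, dtype=torch.bool)
    for m in previous_masks:                     # M_{1:i-1} = logical OR over previous masks
        used |= m.bool()
    return (current_mask.bool() & ~used).float()  # M_i AND NOT M_{1:i-1}

def sparse_overlap(current_mask: torch.Tensor, previous_masks: List[torch.Tensor]) -> float:
    """Fraction of M_i's parameters already updated by earlier tasks."""
    free = free_mask(current_mask, previous_masks)
    return float(1.0 - free.sum() / current_mask.sum().clamp(min=1))
```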
Exclusive Supermask Subnetwork Weight Training: For training the subnetwork parameters for task t_i given the free parameter mask M_i^free, we perform the forward pass using the full task subnetwork, model(x, W^(i-1) ⊙ M_i), where ⊙ is the elementwise multiplication, while masking the gradients with M_i^free during the backward pass. Hence, we use all the connections in M_i during the forward pass of training, but during the backward pass, only the parameters in M_i^free are updated because the gradient value is 0 for all the weights w_uv with m_uv^free = 0. During inference on task t_i we use the mask M_i. In contrast, SSNET uses the task mask M_i both during training and inference, as model(x, W^(i-1) ⊙ M_i), which updates all the parameters in the mask, including the parameters that were already updated by previous tasks, and hence results in CF. Therefore, in cases where the sparse overlap is high, EXSSNET is preferred over SSNET. To summarize, EXSSNET circumvents the CF issue of SSNET while benefiting from subnetwork training to improve overall performance, as shown in Figure 3.
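One way to realize the "train only the free weights" rule is to zero out gradients outside M_i^free during the backward pass. The following PyTorch sketch uses a gradient hook for this; the hook-based mechanism and the toy masks are our illustration, not necessarily how the released implementation does it:

```python
import torch
import torch.nn as nn

layer = nn.Linear(128, 64, bias=False)
task_mask = (torch.rand_like(layer.weight) < 0.1).float()        # M_i (illustrative)
free = task_mask * (torch.rand_like(task_mask) > 0.3).float()    # stand-in for M_i^free

# Forward pass uses the full task subnetwork W ⊙ M_i ...
def forward(x):
    return nn.functional.linear(x, layer.weight * task_mask)

# ... while the backward pass zeroes gradients outside M_i^free,
# so weights trained by earlier tasks are never overwritten.
layer.weight.register_hook(lambda g: g * free)

out = forward(torch.randn(8, 128)).sum()
out.backward()
assert torch.all(layer.weight.grad[free == 0] == 0)
```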
KKT: Knn-Based Knowledge Transfer
When learning multiple tasks, it is a desired property to transfer information learned by the previous tasks to achieve better performance on new tasks and to learn them faster (Biesialska et al., 2020). Hence, we propose a K-Nearest Neighbours (KNN) based knowledge transfer (KKT) module that uses KNN classification to dynamically find the most relevant previous task (Veniat et al., 2021) to initialize the supermask for the current task. To be more specific, before learning the mask M_i for the current task t_i, we randomly sample a small fraction of data from task t_i and split it into a train and test set. Next, we use the trained subnetworks of each previous task t_1, ..., t_{i-1} to obtain features on this sampled data. Then we learn i - 1 independent KNN-classification models using these features. Then we evaluate these i - 1 models on the sampled test set to obtain accuracy scores which denote the predictive power of the features from each previous task for the current task. Finally, we select the previous task with the highest accuracy on the current task. If this accuracy is better than random, then we use its mask to initialize the current task's supermask. This enables EXSSNET to transfer information from the previous task to learn new tasks better and faster. We note that the KKT module is not limited to SupSup and can be applied to a broader category of CL methods that introduce additional parameters for new tasks.
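The selection step of the KKT module can be sketched with an off-the-shelf KNN classifier; the feature arrays, function name, and train/test split below are placeholders for whatever backbone and subnetworks are actually in use:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def pick_init_mask(feats_per_prev_task, labels, masks, n_classes, k=10):
    """feats_per_prev_task[j]: features of the sampled current-task data extracted
    with previous task j's subnetwork, shape [n_samples, d]; labels: np.ndarray."""
    idx = np.arange(len(labels))
    tr, te = train_test_split(idx, test_size=0.3, stratify=labels, random_state=0)
    best_acc, best_mask = 1.0 / n_classes, None          # "better than random" bar
    for feats, mask in zip(feats_per_prev_task, masks):
        knn = KNeighborsClassifier(n_neighbors=k).fit(feats[tr], labels[tr])
        acc = knn.score(feats[te], labels[te])
        if acc > best_acc:
            best_acc, best_mask = acc, mask
    return best_mask   # None -> fall back to a fresh random mask initialization
```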
Metrics: We follow Chaudhry et al. (2018) and evaluate our model after learning task t on all the tasks, denoted by T. This gives us an accuracy matrix A ∈ R^{N×N}, where a_{i,j} represents the classification accuracy on task j after learning task i. We want the model to perform well on all the tasks it has learned. This is measured by the average accuracy, A(T) = (1/N) Σ_{k=1}^{N} a_{N,k}, where N is the number of tasks. Next, we want the model to retain performance on the previous tasks when learning multiple tasks. This is measured by the forgetting metric (Lopez-Paz and Ranzato, 2017), F = (1/(N-1)) Σ_{t=1}^{N-1} (max_{i∈{1,...,N-1}} a_{i,t} - a_{N,t}), i.e., the average difference between the maximum accuracy obtained for task t and its final accuracy. Higher accuracy and lower forgetting are desired.
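Both metrics follow directly from the accuracy matrix A; a short sketch consistent with the definitions above (the zero-indexed array layout is an assumption of this illustration):

```python
import numpy as np

def average_accuracy(A: np.ndarray) -> float:
    """A[i, j] = accuracy on task j after learning task i (0-indexed here)."""
    N = A.shape[0]
    return float(A[N - 1].mean())          # (1/N) * sum_k a_{N,k}

def forgetting(A: np.ndarray) -> float:
    """Average drop from each task's best accuracy to its final accuracy."""
    N = A.shape[0]
    drops = [A[:N - 1, t].max() - A[N - 1, t] for t in range(N - 1)]
    return float(np.mean(drops))
```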
Sparse Overlap to Quantify Parameter Interference: Next, we propose sparse overlap, a measure to quantify parameter interference for a task i, i.e., the fraction of the parameters in mask M_i that are already updated by some previous task. For a formal definition refer to Appendix A.1.
Previous Methods and Baselines: For both vision and language (VL) tasks, we compare with: (VL.1) Naive Training (Yogatama et al., 2019), where all model parameters are sequentially trained/finetuned for each task; (VL.2) Experience Replay (ER) (de Masson d'Autume et al., 2019), where we replay previous tasks' examples when we train new tasks; (VL.3) Multitask Learning (Crawshaw, 2020), where all the tasks are used jointly to train the model; (VL.4) Individual Models, where we train a separate model for each task (this is considered an upper bound for CL); and (VL.5) SupSup (Wortsman et al., 2020). For natural language (L), we further compare with: (L.6) Regularization (Huang et al., 2021), where, along with the Replay method, we regularize the hidden states of the BERT classifier with an L2 loss term; and three Adapter-BERT (Houlsby et al., 2019) based methods. For vision (V), we further compare with: (V.6) Online EWC (Schwarz et al., 2018); (V.7) Synaptic Intelligence (SI) (Zenke et al., 2017); one knowledge distillation method, (V.8) Learning without Forgetting (LwF) (Li and Hoiem, 2017); three additional experience replay methods, (V.9) AGEM (Chaudhry et al., 2018), (V.10) Dark Experience Replay (DER) (Buzzega et al., 2020), and (V.11) DER++ (Buzzega et al., 2020); and a parameter isolation method, (V.12) CGATE (Abati et al., 2020).
The average sparse overlap of EXSSNET is 19.4% across all three datasets, implying that there is a lot more capacity in the model. See appendix Table 11 for the sparse overlap of other methods and Appendix A.4.1 for the best-performing methods' results on the ImageNet dataset. Note that past methods require tricks like local adaptation in MBPA++ and experience replay in AGEM, DER, LAMOL, and ER. In contrast, EXSSNET is simple and does not require replay.
Q2. Can the KKT Knowledge Transfer Module Share Knowledge Effectively?
In Table 3, we show that adding the KKT module to EXSSNET, SSNET, and SupSup improves performance on vision benchmarks. The experimental setting here is similar to Table 2. We observe across all methods and datasets that the KKT module improves average test accuracy. Specifically, for the Split-CIFAR100 dataset, the KKT module results in 5.0% and 3.2% improvements for SupSup and EXSSNET respectively, while for Split-TinyImageNet, EXSSNET + KKT outperforms the individual models. We observe a performance decline for SSNET when using KKT because KKT promotes sharing of parameters across tasks, which can lead to worse performance for SSNET. Furthermore, EXSSNET + KKT outperforms all other methods on both the Split-CIFAR100 and Split-TinyImageNet datasets. For EXSSNET + KKT, the average sparse overlap is 49.6% across all three datasets (see appendix Table 11). These results suggest that combining weight training with the KKT module leads to further improvements.
Q3. Can the KKT Knowledge Transfer Module Improve the Learning Speed of Subsequent Tasks?
Next, we show that the KKT module enables us to learn new tasks faster. To demonstrate this, in Figure 4 we plot the running mean of the validation accuracy vs. epochs for different tasks from the Split-CIFAR100 experiment in Table 3. We show curves for EXSSNET with and without the KKT module and omit the first task, as both methods are identical for Task 1 because there is no previous task to transfer knowledge from. For all the subsequent tasks (Tasks 2, 3, 4, 5), we observe that: (1) EXSSNET + KKT starts off with a much better initial performance compared to EXSSNET; (2) given a fixed number of epochs for training, EXSSNET + KKT always learns the task better because it has better accuracy at all epochs; and (3) EXSSNET + KKT can achieve similar performance to EXSSNET in many fewer epochs, as shown by the green horizontal arrows. This clearly illustrates that using the KKT knowledge-transfer module not only helps to learn the tasks better (see Table 3) but also to learn them faster. For an efficiency and robustness analysis of the KKT module, please refer to Appendix A.4.2.
Additional Results and Analysis
Q4. Effect of Mask Density on Performance: Next, we show the advantage of using EXSSNET when the mask density is low. In Figure 5, we show the average accuracy for the Split-CIFAR100 dataset as a function of mask density. We observe that EXSSNET obtains 7.9%, 18.4%, 8.4%, and 4.7% improvements over SupSup for mask density values of 0.02, 0.04, 0.06, and 0.08, respectively. This is an appealing property, as tasks select fewer parameters, which inherently reduces sparse overlap, allowing EXSSNET to learn a large number of tasks.
Q5. Can EXSSNET Learn a Large Number of Tasks? SupSup showed that it can scale to a large number of tasks. Next, we perform experiments to learn 100 tasks created by splitting the TinyImageNet dataset. In Table 4, we show that this property is preserved by EXSSNET while resulting in a performance improvement over SupSup. We note that as the number of tasks increases, the sparse overlap between the masks also increases, resulting in fewer trainable model weights. In the extreme case where there are no free weights, EXSSNET by design reduces to SupSup because there will be no weight training. Moreover, if we use larger models there are more free parameters, leading to even more improvement over SupSup.
Q6. Effect of Token Embedding Initialization for NLP: For our language experiments, we use a pretrained BERT model (Devlin et al., 2019) to obtain the initial token representations. We perform ablations on the token embedding initialization to understand its impact on CL methods. In Table 5, we present results on the S2 task-order sequence of the sampled version of the WebNLP dataset (see Section 4.1, Datasets). We initialize the token representations using FastText (Bojanowski et al., 2016), GloVe (Pennington et al., 2014), and BERT embeddings. From Table 5, we observe that: (1) the performance gap between EXSSNET and SupSup increases from 0.8% → 7.3% and 0.8% → 8.5% when moving from BERT to GloVe and FastText initializations respectively, which implies that it is even more beneficial to use EXSSNET in the absence of good initial representations; and (2) the performance trend EXSSNET > SSNET > SupSup is consistent across initializations.
Related Work
Regularization-based methods estimate the importance of model components and add importance regularization terms to the loss function. Zenke et al. (2017) regularize based on the distance of weights from their initialization, whereas Kirkpatrick et al. (2017b) and Schwarz et al. (2018) use an approximation of the Fisher information matrix (Pascanu and Bengio, 2013) to regularize the parameters. In NLP, Han et al. (2020) and Wang et al. (2019) use regularization to constrain the relevant information from the huge amount of knowledge inside large language models (LLMs). Huang et al. (2021) first identify hidden spaces that need to be updated versus retained via information disentanglement (Fu et al., 2017; Li et al., 2020) and then regularize these hidden spaces separately.
Replay-based methods maintain a small memory buffer of data samples (De Lange et al., 2019; Yan et al., 2022) or their relevant proxies (Rebuffi et al., 2017) from the previous tasks and retrain on them later to prevent CF. Chaudhry et al. (2018) use the buffer during optimization to constrain parameter gradients. Shin et al. (2017) and Kemker and Kanan (2018) use a generative model to sample and replay pseudo-data during training, whereas Rebuffi et al. (2017) replay distilled knowledge from past tasks. de Masson d'Autume et al. (2019) employ episodic memory along with local adaptation, whereas Sun et al. (2019) train a language model to generate pseudo-samples for replay.
Architecture-based methods can be divided into two categories: (1) methods that add new modules over time (Li et al., 2019; Veniat et al., 2021; Douillard et al., 2022); and (2) methods that isolate the network's parameters for different tasks (Kirkpatrick et al., 2017a; Fernando et al., 2017; Mallya and Lazebnik, 2018). Rusu et al. (2016) introduce a new network for each task, while Schwarz et al. (2018) distill the new network after each task into the original one. Recent prompt-learning-based CL models for vision (Wang et al., 2022a,b) assume access to a pre-trained model to learn a set of prompts that can potentially be shared across tasks to perform CL; this is orthogonal to our method, which trains from scratch. Mallya and Lazebnik (2018) allocate parameters to specific tasks and then train them in isolation, which limits the number of tasks that can be learned. In contrast, Mallya et al. (2018) use a frozen pretrained model and learn a new mask for each task, but a pretrained model is crucial for their method's good performance. Wortsman et al. (2020) remove the pretrained model dependence and learn a mask for each task over a fixed randomly initialized network. EXSSNET avoids the shortcomings of Mallya and Lazebnik (2018) and Mallya et al. (2018) and performs supermask subnetwork training to increase the representational capacity compared to Wortsman et al. (2020), while performing knowledge transfer and avoiding CF.
Conclusion
We introduced a novel Continual Learning method, EXSSNET (Exclusive Supermask SubNetwork Training), that delivers enhanced performance by utilizing exclusive, non-overlapping subnetwork weight training, overcoming the representational limitations of the prior SupSup method. Through the avoidance of conflicting weight updates, EXSSNET not only improves performance but also eliminates forgetting, striking a delicate balance. Moreover, the inclusion of the Knowledge Transfer (KKT) module propels the learning process, utilizing previously acquired knowledge to expedite and enhance the learning of new tasks. The efficacy of EXSSNET is substantiated by its superior performance in both NLP and Vision domains, its particular proficiency for sparse masks, and its scalability up to a hundred tasks.
Limitations
Firstly, we note that as the density of the mask increases, the performance improvement over the SupSup method begins to decrease. This is due to the fact that denser subnetworks result in higher levels of sparse overlap, leaving fewer free parameters for new tasks to update. However, it is worth noting that even in situations where mask densities are higher, all model weights are still trained by some task, improving performance on those tasks and making our proposed method an upper bound on the performance of SupSup. Additionally, the model size and capacity can be increased to counterbalance the effect of higher mask density. Moreover, in general, a sparse mask is preferred for most applications due to its efficiency.
Secondly, we have focused on the task-incremental setting of continual learning for two main reasons: (1) in the domain of natural language processing, task identities are typically easy to obtain, and popular methods such as prompting and adaptors assume access to task identities; and (2) the primary focus of our work is to improve the performance of supermasks for continual learning and to develop a more effective mechanism for reusing learned knowledge, which is orthogonal to the question of whether task identities are provided during test time.
Moreover, it is worth noting that, similar to the SupSup method, our proposed method can also be extended to situations where task identities are not provided during inference. The SupSup paper presents a method for doing this by minimizing entropy to select the best mask during inference, and this can also be directly applied to our proposed method, ExSSNeT, when task identities are not provided during inference. This is orthogonal to the main questions of our study; however, we perform some experiments on Class Incremental Learning in Appendix A.4.3.
A.3 Experimental setup and hyperparameters
Unless otherwise specified, we obtain supermasks with a mask density of 0.1. In our CNN models, we use non-affine batch normalization to avoid storing their means and variance parameters for all tasks (Wortsman et al., 2020). Similar to Wortsman et al. (2020), bias terms in our model are 0 and we randomly initialize the model parameters using the signed kaiming constant (Ramanujan et al., 2019). We use the Adam optimizer (Kingma and Ba, 2014) along with cosine decay (Loshchilov and Hutter, 2016) and conduct our experiments on GPUs with 12GB of memory. We used approximately 6 days of GPU runtime. For our main experiments, we run three independent runs for each experiment and report the averages for all the metrics and experiments. For natural language tasks, unless specified otherwise, we initialize the token embeddings for our methods using a frozen BERT-base-uncased (Devlin et al., 2018) model's representations via Huggingface (Wolf et al., 2020). We use a static CNN model from Kim (2014) as our text classifier over the BERT representations. The model employs 1D convolutions along with Tanh activation. The total model parameters are ∼110M. Following Sun et al. (2019) and Huang et al. (2021), we evaluate our model on various task sequences as provided in Appendix Table 6, while limiting the maximum number of tokens to 256. Following Wortsman et al. (2020), we use LeNet (Lecun et al., 1998) for the SplitMNIST dataset, a ResNet-18 model with fewer channels (Wortsman et al., 2020) for the Split-CIFAR100 dataset, and a ResNet50 model (He et al., 2016) for the TinyImageNet dataset. Unless specified, we randomly split all the vision datasets to obtain five tasks with disjoint classes. We use the codebase of DER (Buzzega et al., 2020) to obtain the vision baselines. In all our experiments, all methods perform an equal number of epochs over the datasets. We use the hyperparameters from Wortsman et al. (2020) for our vision experiments.
For the ablation experiment on natural language data, following Huang et al. (2021), we use a sampled version of the WebNLP datasets due to limited resources. The reduced dataset contains 2000 training and validation examples from each output class. The test set is the same as in the main experiments. The dataset statistics are summarized in Table 7. For the WebNLP datasets, we tune the learning rate on the validation set across the values {0.01, 0.001, 0.0001}; for the GLUE datasets we use the default learning rate of the BERT model. For our vision experiments, we use the default learning rate for the dataset provided in the original implementations. For the TinyImageNet, SplitCIFAR100, and SplitMNIST datasets, we run for 30, 100, and 30 epochs respectively. We store 0.1% of our vision datasets for replay, while for our language experiments we use 0.01% of the data because of the large number of datasets available for them.
Table 11: We report the average sparse overlap for all method and dataset combinations reported in Table 3.
A.4.1 Results on Imagenet Dataset
In this experiment, we take the ImageNet dataset (Deng et al., 2009) with 1000 classes and divide it into 10 tasks where each task is a 100-way classification problem. In Table 8, we report the results for ExSSNeT and the strongest vision baseline method, SupSup. We omit other methods due to resource constraints. We observe a strong improvement of 6.7% of EXSSNET over SupSup, indicating that the improvements of our method hold for large-scale datasets as well.
A.4.2 Efficiency and Robustness of the KKT Module
First, the corresponding runtime is 173 minutes, which is a very small difference. Second, there are two main hyperparameters in the KKT module: (1) k, for taking the majority vote of the top-k neighbors, and (2) the total number of batches used from the current task in this learning and prediction process. We present additional results on the SplitCIFAR100 dataset when changing these hyperparameters one at a time.
In Table 9, we use 10 batches for KKT with a batch size of 64, resulting in 640 samples from the current task being used for estimation. We report the performance of EXSSNET when varying k. From this table, we observe that the performance increases with k and then starts to decrease, but in general most values of k work well.
Next, in Table 10, we fix k = 10 and vary the number of batches used for KKT with a batch size of 64, and report the performance of EXSSNET. We observe that as the number of batches used for finding the best mask increases, the prediction accuracy increases because of better mask selection. Moreover, as few as 5-10 batches work reasonably well in terms of average accuracy.
From both of these experiments, we can observe that the KKT module is fairly robust to different values of these hyperparameters, but carefully selecting these hyperparameters can lead to slight improvements.
A.4.3 Class Incremental Learning
We performed Class Incremental Learning experiments on the TinyImageNet dataset (10 tasks, 20 classes each) and used the One-Shot algorithm from SupSup (Wortsman et al., 2020) to select the mask for inference. Please refer to Section 3.3 and Equation 4 of the SupSup paper (Wortsman et al., 2020) for details. From Table 12, we observe that EXSSNET outperforms all baseline methods that do not use Experience Replay by at least 2.75%. Moreover, even without the need for a replay buffer, EXSSNET outperforms most ER-based methods and is comparable to DER.
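For reference, a schematic of mask selection by output entropy in the spirit of the One-Shot rule referenced above; this simplified variant scores each mask separately, whereas the original algorithm differentiates the entropy with respect to mask-mixing coefficients, and forward_with_mask is a placeholder for a forward pass through the corresponding subnetwork:

```python
import torch
import torch.nn.functional as F

def one_shot_select(x, masks, forward_with_mask):
    """Pick the task mask that yields the most confident (lowest-entropy) prediction."""
    best_mask, best_entropy = None, float("inf")
    for mask in masks:
        probs = F.softmax(forward_with_mask(x, mask), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
        if entropy.item() < best_entropy:
            best_entropy, best_mask = entropy.item(), mask
    return best_mask
```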
A.4.4 Sparse Overlap Numbers
In Table 11, we report the sparse overlap numbers for SupSup, SSNET, and EXSSNET with and without the KKT knowledge transfer module.This table corresponds to the results in main paper Table 3.
A.4.5 Average Accuracy Evolution
In Figure 6, we plot the average accuracy over all tasks seen so far, (1/t) Σ_{i≤t} a_{t,i}, as a function of t. This plot corresponds to the SplitCIFAR100 results provided in the main paper, Table 2. We can observe from these results that the performance of SupSup and ExSSNeT does not degrade when we learn new tasks, leading to a very stable curve, whereas for other methods the performance degrades as we learn new tasks, indicating some degree of forgetting.
A.4.6 Runtime Comparison across methods
In this section, we compare the runtime of the various methods used in the paper. We ran each method on the sampled version of the WebNLP dataset for the S2 task order as defined in Table 6. We report the runtime of the methods for four epochs over each dataset in Table 13. Note that the masking-based methods, SupSup, SSNET, and EXSSNET, take much less time because they do not update the BERT parameters and only find a mask over a much smaller CNN-based classification model using pretrained representations from BERT. This gives our methods an inherent advantage: we are able to improve performance with significantly lower runtime while learning a mask over far fewer parameters in the natural language setting.
A.4.7 Validation results
In Table 14, we provide the average validation accuracies for the main natural language results presented in Table 1. We do not provide the validation results of LAMOL (Sun et al., 2019) and MBPA++ (de Masson d'Autume et al., 2019) as we used the results provided in their original papers. For the vision domain, we did not use a validation set because no hyperparameter tuning was performed, as we used the experimental setting and default parameters.
Figure 2: Test accuracy versus the mask density for 100-way CIFAR100 classification. Averaged over 3 seeds.
Figure 4: We plot validation accuracy vs. epoch for EXSSNET and EXSSNET + KKT. We observe that KKT helps to learn the subsequent tasks faster and improves performance.
Figure 6: Average accuracy over all seen tasks as a function of the number of learned classes for the Split-CIFAR100 dataset.
Algorithm 1 EXSSNET training procedure.
Input: Tasks T, a model M, mask sparsity k, exclusive=True
Output: Trained model
▷ Initialize model weights W(0)
initialize_model_weights(M)
forall i ∈ range(|T|) do
    ▷ Set the mask Mi corresponding to task ti for optimization
    mask_opt_params = Mi
    ▷ Learn the supermask Mi using edge-popup
    forall em ∈ mask_epochs do
        Mi = learn_supermask(model, mask_opt_params, ti)
    end
    ▷ Model weights at this point are the same as after the last iteration, W(i−1)
    if i > 1 and exclusive then
        ▷ Find the mask of all weights used by previous tasks
        M1:i−1 = ∨_{j=1}^{i−1} (Mj)
        ▷ Get the mask of weights in Mi which are not in M1:i−1
        weight_opt_params = Mi ∧ ¬M1:i−1
    end
    ▷ Learn the free weights in the supermask Mi
    forall em ∈ weight_epochs do
        W(i) = update_weights(model, weight_opt_params, ti)
    end
end
Table 2: Average accuracy ↑ (forgetting metric ↓) on all tasks for vision. For our method, the results are averaged over three random seeds.
Table 3: Average test accuracies ↑ [and gains from KKT] when using the KKT knowledge sharing module.
Table 5: Ablation results for token embeddings. We report average accuracy ↑ [and gains over SupSup].
Table 7: Statistics for the sampled data used from Huang et al. (2021) for hyperparameter tuning. The validation set is the same size as the train set. Class means the number of output classes for the text classification task. Type is the domain of text classification.
|W| * 1 bits in total, as in the worst case we need to store all |W| model weights.
Table 8: Comparison between EXSSNET and the best baseline, SupSup, on the ImageNet dataset.
Table 9: Effect of varying k while keeping the number of batches used for the KKT module fixed.
Table 10: Effect of varying the number of batches while keeping k for the top-k neighbours of the KKT module fixed.
Table 12: Results for the CIL setting.
On the mass accretion rate and infrared excess in Herbig Ae/Be Stars
The present study makes use of the unprecedented capability of the Gaia mission to obtain the stellar parameters such as distance, age, and mass of HAeBe stars. The accuracy of Gaia DR2 astrometry is demonstrated from the comparison of the Gaia DR2 distances of 131 HAeBe stars with the previously estimated values from the literature. This is one of the initial studies to estimate the age and mass of a confirmed sample of HAeBe stars using both the photometry and distance from the Gaia mission. Mass accretion rates are calculated from $H\alpha$ line flux measurements of 106 HAeBe stars. Since we used distances and the stellar masses derived from the Gaia DR2 data in the calculation of mass accretion rate, our estimates are more accurate than previous studies. The mass accretion rate is found to decay exponentially with age, from which we estimated a disk dissipation timescale of $1.9\pm 0.1$ Myr. Mass accretion rate and stellar mass exhibits a power law relation of the form, $\dot{M}_{acc}$ $\propto$ $M_{*}^{2.8\pm0.2}$. From the distinct distribution in the values of the infrared spectral index, $n_{2-4.6}$, we suggest the possibility of difference in the disk structure between Herbig Be and Herbig Ae stars.
INTRODUCTION
Herbig Ae/Be stars are intermediate-mass pre-main sequence (PMS) stars with masses between 2 and 10 M_⊙. They are often used to understand the missing link in the star formation sequence connecting T Tauri stars and massive young stellar objects (e.g. Herbig 1960; Waters & Waelkens 1998; Oudmaijer et al. 2017). Herbig Ae/Be stars (hereafter HAeBe) show emission lines in their spectrum and exhibit infrared excess (known as IR excess) in the continuum, suggestive of hot and/or cool dust in the circumstellar medium (CSM) (Hillenbrand et al. 1992; Malfait et al. 1998). The emission lines such as Hα are formed in the CSM and are used for understanding the mass accretion process in HAeBe stars (e.g. Hamann & Persson 1992; Vieira et al. 2003; Manoj et al. 2006; Mendigutía et al. 2011a,b).
Understanding the accretion of material from the CSM is important for studying PMS evolution because it can provide vital information about the formation and evolution of planets around the stars (Muzerolle et al. 2003; Beltrán & de Wit 2016). It is proposed that Herbig Ae (HAe) and Herbig Be (HBe) stars may show considerable differences in disc morphology and mode of accretion (Vink et al. 2002; Alonso-Albi et al. 2009; Vioque et al. 2018). However, in order to establish these results, we need precise distance measurements. This is due to the fact that the precision of stellar parameters such as age, mass, log(g) etc. strongly depends on precise distance measurements. One of the pioneering missions which provided accurate distances of nearby astronomical objects was the Hipparcos mission. Based on the distance measurements of nearby HAeBe stars from the Hipparcos mission (ESA 1997), van den Ancker et al. (1998) derived the astrophysical parameters of a sample of 44 HAeBe stars and found that 65% of HAeBe stars show photometric variability. It may be noted that Hipparcos provided reliable distance values only for stars within 1 kpc of the Sun (de Zeeuw et al. 1999). The Gaia mission is designed to provide high-quality astrometry and photometry of 1.3 billion stars (Gaia Collaboration et al. 2016a,b). With the second data release of Gaia (named Gaia DR2) (Gaia Collaboration et al. 2018a), it is possible to get parallax measurements of stars with uncertainties limited to 0.04 mas for sources brighter than G = 14 mag (Luri et al. 2018). From precise distance measurements, it is possible to derive the relations connecting the IR excess and mass accretion rates (Ṁ_acc) with the stellar parameters of HAeBe stars. This can be used to understand whether magnetospheric or disc accretion plays the major role in HAeBe stars.
In this work, we estimate the stellar parameters of a well-studied sample of HAeBe stars, thereby understanding the mass accretion process in pre-main sequence stars. We present the sample of HAeBe stars used for this study in Sect. 2. The results of this study are presented in Sect. 3, wherein we discuss the procedure associated with the distance and extinction measurements. Also, we estimate the mass and age of HAeBe stars and discuss mass accretion in HAeBe stars. Recently, Vioque et al. (2018) estimated stellar parameters of HAeBe stars using distance measurements from Gaia DR2. They based their analysis on derived quantities such as luminosity and temperature, which can introduce additional errors in the estimation of mass and age of HAeBe stars. Instead, in the present study, we base our analysis on the Gaia color-magnitude diagram. The main results are summarized in Sect. 4.
DATA INVENTORY
A sample of 142 stars is taken from Mathew et al. (2018), which is a carefully selected, well-studied sample of HAeBe stars from The et al. (1994), Manoj et al. (2006) and Fairlamb et al. (2015). Mathew et al. (2018) discussed various mechanisms for the formation of O I emission lines in HAeBe stars and found that Lyman beta fluorescence is the dominant excitation mechanism. This is the second work in the series, studying the Ṁ_acc and IR excess in HAeBe stars. Here we re-estimate the relations connecting the Ṁ_acc with stellar parameters such as age and mass in the context of the Gaia DR2 release. These new estimates will be used in our future work to explore the possibility of using the O I 8446 Å emission line as an accretion indicator in HAeBe stars (Mathew et al. in prep.).
The coordinates, proper motions and V magnitudes of the 142 stars are taken from the literature. The RA and Dec of these stars are converted from the J2000 to the J2015.5 epoch using their proper motion. A query for a Gaia DR2 match for these stars was then performed around the converted coordinates with a search radius of 10 arcsec via the Mikulski Archive for Space Telescopes (MAST) 1 . If a match was not found, the search radius was increased up to 30 arcsec. This procedure returned 354 Gaia DR2 rows for 142 stars. For 60 stars, only one Gaia DR2 match was returned. For the remaining 82 stars with multiple entries, those which had |G−V| mag > 3.5 were removed. For the remaining multiple entries, the Gaia DR2 row with the closest positional match was selected, for which |G−V| mag ≤ 2. Thus we obtained the Gaia DR2 parallax and magnitudes for all stars in the sample. After excluding 11 sources (6 with no parallax data and 5 with negative parallax), we finalized our sample of HAeBe stars at 131. These stars are found in the distance range 0.09−6 kpc, with Gaia G-band magnitudes ranging from 4.4 to 14.5 mag.
Comparison of the Gaia DR2 distances with previous estimates
The uncertainty in the distance determination of stars is mitigated to a considerable extent due to the precision of the Gaia mission. Although Gaia DR2 provides accurate positions and parallax measurements via a rigorous astrometric reduction technique, the estimation of distance by simple inversion of the Gaia parallax does entail certain inherent problems. The distance obtained through such a method is acceptable only when the parallax measurements are fairly precise, i.e., when the signal to noise ratio (SNR) of the parallax measurement is preferably high (SNR ≥ 5). In cases where the fractional parallax uncertainty is high, the probability distribution for the distance inferred from the inverted parallax becomes strongly asymmetric and non-Gaussian in nature. Furthermore, the distance thus estimated will be nonphysical if the concerned parallax measurement is negative, owing to large measurement noise or to the star moving opposite to the direction of the true parallactic motion. To tackle this problem, Bailer-Jones et al. (2018) applied a probabilistic approach to estimate distances to 1.3 billion stars having Gaia DR2 data. They adopted a distance likelihood (inferred from the Gaia parallax) and a distance prior (an exponentially decreasing space density prior based on a Galaxy model) approach. The distance estimates and corresponding uncertainties thus determined are purely geometric and devoid of any underlying assumptions. Hence, for the present study, we use the distance estimates from Bailer-Jones et al. (2018), which are listed in Table 1.
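The prior-times-likelihood construction can be illustrated with a one-dimensional posterior over distance; the length scale L of the exponentially decreasing space density prior is direction-dependent in Bailer-Jones et al. (2018), so the value and the grid below are placeholders only:

```python
import numpy as np

def distance_posterior(parallax_mas, sigma_mas, L_pc=1350.0, r_max=2e4, n=20000):
    """Unnormalised geometric posterior P(r | parallax) on a grid of distances r (pc)."""
    r = np.linspace(1.0, r_max, n)
    prior = r**2 * np.exp(-r / L_pc)                       # exp. decreasing space density
    likelihood = np.exp(-0.5 * ((parallax_mas - 1e3 / r) / sigma_mas) ** 2)
    post = prior * likelihood
    return r, post / np.trapz(post, r)

r, post = distance_posterior(parallax_mas=0.8, sigma_mas=0.1)
mode = r[np.argmax(post)]      # point estimate; credible intervals follow from the quantiles
```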
We compared the distance estimated from the Gaia DR2 with the values listed in the literature. Manoj et al. (2006) compiled the distances of HAeBe stars from various studies and provided the best estimate of distance for each star. This is supplemented with the distance information from the Gaia DR1 (Gaia Collaboration et al. 2016b) and those given in Fairlamb et al. (2015). The extreme values of distance from these compilations are included in Figure 1 along with the Gaia DR2 estimates. It can be seen from the figure that distance estimate from the Gaia DR2 is more accurate (with minimal error) than previous estimates.
Extinction Calculation
The extinction in all the photometric bands, G, G_BP and G_RP, is listed in the Gaia archive, but these extinction and reddening values are available only for a small number of objects. The extinction calculation is done by an automated algorithm, which is explained in detail in Evans et al. (2018); they also list the caveats involved in the automated estimation of extinction values. For this work, we have independently estimated the extinction values from the extinction curve of McClure (2009), from which we calculated the extinction in the Gaia passbands. The A_V values for our sample of HAeBe stars are taken from Fairlamb et al. (2015), Chen et al. (2016) and Mathew et al. (2018). Hernández et al. (2004) suggested using a high value of the total-to-selective extinction (R_V = 5) for estimating the extinction values of HAeBe stars, which is suggestive of grain growth in the disks of HAeBe stars (Gorti & Bhatt 1993; Manoj et al. 2006). For the present work, we adopted R_V = 5 while calculating the extinction (A_V) values. This method was followed while calculating the A_V values of HAeBe stars in Mathew et al. (2018); hence, for this analysis, we included the A_V values of HAeBe stars listed in Mathew et al. (2018). For the remaining stars, A_V values are taken from Fairlamb et al. (2015) and Chen et al. (2016), re-estimated for R_V = 5. It may be noted that Hernández et al. (2004) pointed out that the age and luminosity of HAeBe stars better match those of PMS stars when R_V = 5 is employed. The A_V values estimated for all the HAeBe stars will be used for correcting the Gaia photometry for extinction.
The mean wavelength values in the Gaia passbands and Johnson V band are taken from Jordi et al. (2010).
Using the extinction curve evaluated at these mean wavelengths, we estimated A_G, A_GBP and A_GRP from the known values of A_V. These were then used to correct the Gaia magnitudes used in this work.
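In practice the conversion amounts to scaling A_V by a band-dependent ratio read off the extinction curve; the numbers below are hypothetical placeholders, not the coefficients actually derived from the McClure (2009) curve at R_V = 5:

```python
# Placeholder ratios k_X = A_X / A_V; substitute the values derived from the
# R_V = 5 extinction curve evaluated at each Gaia band's mean wavelength.
GAIA_RATIOS = {"G": 0.85, "G_BP": 1.00, "G_RP": 0.65}   # hypothetical numbers

def gaia_extinctions(A_V: float) -> dict:
    """Extinction in each Gaia band for a given A_V (illustrative only)."""
    return {band: k * A_V for band, k in GAIA_RATIOS.items()}

A = gaia_extinctions(A_V=1.2)
G_RP_dereddened = 10.4 - A["G_RP"]   # example apparent magnitude corrected for extinction
```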
Age and mass of HAeBe stars
In addition to precise astrometric measurements, the Gaia DR2 lists three broad-band photometric magnitudes, G, G_BP and G_RP, the extinction in the G band (A_G) and the reddening (E(G_BP − G_RP)) values. This provides the possibility to construct a color-magnitude diagram (CMD). We note that the G-band filter in Gaia is very wide (720 nm) and hence can introduce uncertainty in the G magnitude measurements; hence, for the present work, we use the G_BP and G_RP magnitudes for constructing the CMD. The observed Gaia G_BP and G_RP are corrected for extinction using the method discussed in Sect. 3.2. Further, making use of the distance estimates (see Table 1), we estimated the absolute G_RP magnitude (M_GRP), which will be used for the CMD analysis. Usually, the construction of a CMD with non-homogeneous datasets belonging to different epochs can introduce systematic errors in the estimation of stellar parameters; the use of Gaia astrometry and photometry for the CMD analysis alleviates this issue. Also, we derived the age and mass of HAeBe stars from the observed CMD rather than from a theoretical Hertzsprung-Russell (HR) diagram. The luminosity calculation for stars in the HR diagram involves the conversion of the V magnitude to luminosity using bolometric corrections, and such a conversion introduces substantial errors in the mass and age estimates. In addition, the effective temperature of the star (T_eff) is identified using a calibration table, which introduces degeneracy in T_eff for closely spaced spectral types. The age and mass of the HAeBe stars are estimated by plotting the Modules for Experiments in Stellar Astrophysics (MESA) Isochrones and Stellar Tracks (MIST) (Choi et al. 2016; Dotter 2016) in the Gaia CMD. MIST is an initiative supported by NSF, NASA and the Packard Foundation which builds stellar evolutionary models with different ages, masses, and metallicities. The updated models in the MIST archive include isochrones and evolutionary tracks for the Gaia DR2 data. We know that HAeBe stars have a range of rotation rates, but we adopted the isochrones corresponding to (V/V_crit) = 0.4, since that is the only model available in the MIST database for a rotating system.
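The CMD quantities follow from the extinction corrections of Sect. 3.2 and the distance modulus; a brief sketch with illustrative input values:

```python
import numpy as np

def cmd_coordinates(G_BP, G_RP, A_GBP, A_GRP, d_pc):
    """Dereddened colour and absolute G_RP magnitude for the Gaia CMD."""
    colour = (G_BP - A_GBP) - (G_RP - A_GRP)           # (G_BP - G_RP)_0
    M_GRP = G_RP - A_GRP - 5.0 * np.log10(d_pc) + 5.0  # distance modulus
    return colour, M_GRP

colour, M_GRP = cmd_coordinates(G_BP=10.9, G_RP=9.8, A_GBP=1.1, A_GRP=0.7, d_pc=430.0)
```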
Also, we adopted the metallicity [Fe/H] = 0 (corresponding to solar metallicity, Z_⊙ = 0.0152) for estimating the age and mass of HAeBe stars. The Gaia CMD for our sample of 131 HAeBe stars is shown in Figures 2 & 3. From Figure 2, we estimated the ages of 110 HAeBe stars by over-plotting MIST isochrones; they are found to be in the range of 0.1 to 15 Myr. From Figure 3, it can be seen that the mass range of our sample of HAeBe stars is 1.4 to 25 M_⊙. The masses are identified from the coincidence of the data points with the grid of MIST evolutionary tracks. The ages and masses of the HAeBe stars estimated in this work are compared with those in Vioque et al. (2018) and are listed in Table 1. We found that 21 stars from our sample are placed below the main sequence and hence their parameters could not be estimated. Since these stars are catalogued as HAeBe stars, they may have been properly positioned in the pre-main sequence location in previous studies. HAeBe stars are known to show photometric variability (van den Ancker et al. 1998); the stars which are found below the main sequence in Figures 2 & 3 may show photometric variability. Also, some stars are positioned in the evolved region of the evolutionary tracks. Further studies are needed to evaluate the nature of these candidates.
Mass accretion rates of HAeBe stars
The mass accretion process during the pre-main sequence phase represents one of the important mechanisms associated with star formation. In T Tauri stars, mass accretion proceeds through a process known as magnetospheric accretion (MA), in which the magnetosphere of the host star truncates the circumstellar disk at a few stellar radii and the material from the disk falls on to the star at free-fall velocities along the magnetic field lines, which in turn creates shocks at the surface of the star. The hot (10^4 K) emission from the post-shock gas appears as excess in the UV continuum of T Tauri stars (e.g. Gullbring et al. 1998; Hartmann et al. 1998; Bouvier et al. 2007). The MA model may not be a viable mode of accretion in HAeBe stars since there are no convincing signatures of a magnetic field in these systems (Alecian et al. 2013). Although many studies suggest disk accretion as the possible mechanism in Herbig Be stars, a consensus is yet to be reached on whether MA can account for mass accretion in low-mass HAeBe stars (Muzerolle et al. 2004). For the present work, we employed the magnetospheric accretion formalism while calculating the Ṁ_acc in HAeBe stars.
The Hα line flux values of 102 HAeBe stars are taken from Mathew et al. (2018), Fairlamb et al. (2017) and Mendigutía et al. (2011b). In addition, we took the Hα equivalent width (EW) for four stars from Boehm & Catala (1995), Baines et al. (2006), Borges Fernandes et al. (2007) and Vieira et al. (2011); the EW is converted to line flux using the R-band magnitude, following the method described in Mathew et al. (2018). Hence, for the present analysis, we use the Hα line flux (F_Hα) values of 106 HAeBe stars. The Hα line flux is converted to luminosity (L_Hα) using the relation L_Hα = 4πd²F_Hα, where d is the distance in pc. The accretion luminosity (L_acc) is calculated using the empirical relation given in Fairlamb et al. (2017), which takes the form log(L_acc/L_⊙) = B + A log(L_Hα/L_⊙), with A and B the calibration coefficients reported in that work.
Ṁ_acc can then be derived from L_acc using the relation Ṁ_acc = L_acc R_* / [G M_* (1 − R_*/R_i)], where M_* is the mass of the HAeBe star, estimated in Sect. 3.3 and given in Table 1, and R_i is the disk truncation radius. For T Tauri stars, R_i is assumed to be 5 R_* (Costigan et al. 2014). HAeBe stars are fast rotators and therefore have a smaller corotation radius; the disk truncation radius R_i should be smaller than the co-rotation radius (Shu et al. 1994). Thus, in this work we adopt a disk truncation radius of R_i = 2.5 R_* (Muzerolle et al. 2004; Mendigutía et al. 2011a; Fairlamb et al. 2015). The stellar radii R_* of the 106 HAeBe stars are calculated from the Stefan-Boltzmann relation, R_* = [L_* / (4πσT_eff⁴)]^(1/2), where L_* is the bolometric luminosity of the star, calculated from the V magnitude, the bolometric correction and the Gaia distance. Using the calibration table of Pecaut & Mamajek (2013), we identified T_eff and the bolometric correction corresponding to the spectral type of each HAeBe star. The V magnitudes of 101 HAeBe stars are compiled from the AAVSO Photometric All Sky Survey (APASS; Henden et al. 2016) and Tycho-2 (Høg et al. 2000) catalogues; for the remaining 5 stars, which have no V magnitude listed in either catalogue, the values are taken from the following references.
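The chain F_Hα → L_Hα → L_acc → Ṁ_acc described above can be written in a few lines. The sketch below is a hedged illustration: the coefficients a and b of the L_acc calibration are placeholders rather than the Fairlamb et al. (2017) values, and the example inputs are invented.

```python
import numpy as np

PC_CM = 3.0857e18   # cm per parsec
LSUN  = 3.828e33    # erg s^-1
MSUN  = 1.989e33    # g
SIGMA = 5.670e-5    # Stefan-Boltzmann constant (cgs)
G     = 6.674e-8    # gravitational constant (cgs)
YEAR  = 3.156e7     # s

def l_halpha(f_halpha, d_pc):
    """Halpha luminosity (erg/s) from the observed line flux (erg/s/cm^2)."""
    return 4.0 * np.pi * (d_pc * PC_CM) ** 2 * f_halpha

def l_acc(l_ha, a=1.0, b=2.0):
    """Accretion luminosity from a log-log calibration log(Lacc/Lsun) = a*log(LHa/Lsun) + b.
    a and b are placeholders, not the Fairlamb et al. (2017) coefficients."""
    return LSUN * 10.0 ** (a * np.log10(l_ha / LSUN) + b)

def stellar_radius(l_star_lsun, teff):
    """Stellar radius (cm) from the Stefan-Boltzmann law."""
    return np.sqrt(l_star_lsun * LSUN / (4.0 * np.pi * SIGMA * teff ** 4))

def mdot_acc(lacc, m_star_msun, r_star_cm, ri_over_rstar=2.5):
    """Mass accretion rate (Msun/yr) for a disk truncated at R_i = 2.5 R_*."""
    mdot_g_per_s = lacc * r_star_cm / (G * m_star_msun * MSUN * (1.0 - 1.0 / ri_over_rstar))
    return mdot_g_per_s / MSUN * YEAR

# Illustrative numbers only
lha = l_halpha(f_halpha=3e-12, d_pc=800.0)
r_star = stellar_radius(l_star_lsun=50.0, teff=9000.0)
print(mdot_acc(l_acc(lha), m_star_msun=2.5, r_star_cm=r_star))
```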
Correlation analysis of mass accretion rates with stellar parameters
The relationship between Ṁ_acc and stellar parameters such as age and mass has been analyzed in several studies (e.g. Mendigutía et al. 2011a, 2015; Fairlamb et al. 2017). However, in the context of the precise mass and age estimates from Gaia DR2, we re-assessed the relations between Ṁ_acc and the stellar parameters using the largest sample to date, 106 HAeBe stars. Figure 4(a) illustrates the correlation between log(Ṁ_acc) and the age of HAeBe stars. It can be seen that Ṁ_acc declines exponentially with the age of HAeBe stars, a trend discussed in the studies of Manoj et al. (2006) and Mendigutía et al. (2012). From the rate of decline of the accretion rate, it is possible to estimate the disk dissipation timescale τ using the relation Ṁ_acc(t) = Ṁ_acc(0) e^(−t/τ), where t is the age of the HAeBe star. By fitting this relation to the set of data points, we obtained a disk dissipation timescale of τ = 1.9 ± 0.1 Myr. This value is close to that given in Mendigutía et al. (2012), τ = 1.3^(+1.0)_(−0.5) Myr. It may be noted that τ for T Tauri stars is 2−4 Myr (Fedele et al. 2010; Takagi et al. 2014). We find a lower τ value for HAeBe stars, indicating that the disk dissipation timescale is shorter for intermediate-mass young stars compared to their lower-mass counterparts.
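A minimal sketch of the exponential fit used to derive τ is shown below, assuming the ages and accretion rates are already available as arrays; the data values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, mdot0, tau):
    """Exponentially declining accretion rate; t and tau in Myr."""
    return mdot0 * np.exp(-t / tau)

age  = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])          # Myr (placeholder)
mdot = np.array([5e-5, 3e-5, 1e-5, 4e-6, 1e-6, 1e-7])    # Msun/yr (placeholder)

popt, pcov = curve_fit(decay, age, mdot, p0=[1e-4, 2.0])
tau_fit, tau_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"tau = {tau_fit:.2f} +/- {tau_err:.2f} Myr")
```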
Further, another parameter used in the literature to quantify the rate of decline of the accretion rate with age in young stellar objects (YSOs) is the power law index η (e.g. Mendigutía et al. 2012; Fairlamb et al. 2015). The relation connecting Ṁ_acc with the age of the star can also be expressed as a power law of the form Ṁ_acc ∝ t^(−η). From the best fit to the distribution of the data points in Figure 4(b), we obtained η = 1.2 ± 0.1. This value is at the lower end compared to the estimates of Mendigutía et al. (2012) and Fairlamb et al. (2015), which are 1.8^(+1.3)_(−0.7) and 1.92 ± 0.09, respectively. This could be because of the larger number of high-mass HBe stars in our sample.
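The power-law index η can likewise be read off a straight-line fit in log-log space; a sketch with the same invented data follows.

```python
import numpy as np

age  = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])          # Myr (placeholder)
mdot = np.array([5e-5, 3e-5, 1e-5, 4e-6, 1e-6, 1e-7])    # Msun/yr (placeholder)

# Mdot proportional to t^(-eta): the slope of log(Mdot) versus log(age) is -eta
slope, intercept = np.polyfit(np.log10(age), np.log10(mdot), deg=1)
print(f"eta = {-slope:.2f}")
```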
In Figure 5 we plot the correlation between Ṁ_acc and stellar mass. Our sample of HAeBe stars covers a broader range in spectral type/mass and in Ṁ_acc (∼10^(−3)−10^(−7) M_⊙ yr^(−1)) than the sample given in Mendigutía et al. (2011a), because our sample contains high-mass candidates with masses > 6 M_⊙, whereas those listed in Mendigutía et al. (2011a) have masses < 6 M_⊙. The best fit for our sample of HAeBe stars in Figure 5 gives the relation Ṁ_acc ∝ M_*^(2.8±0.2). Mendigutía et al. (2011a) performed a similar study and obtained a steeper power law relation, Ṁ_acc ∝ M_*^5; the steeper relation might be due to the absence of massive HAeBe stars in their sample. The Pearson correlation coefficient for our fit is 0.81 for a sample size of 106 stars. Incidentally, Fairlamb et al. (2015) obtained the relation between stellar mass and accretion rate as Ṁ_acc ∝ M_*^(3.72±0.27), which comes close to our estimate. It may be noted that the mass dependence of the accretion rate in T Tauri stars, Ṁ_acc ∝ M_*^2 (Muzerolle et al. 2005; Natta et al. 2006), is lower than the value calculated for HAeBe stars.
The best fit and the confidence limits for Figures 4(a), 4(b) and 5 are determined using a Monte Carlo method to account for the associated uncertainties in age, mass and Ṁ_acc. For this purpose, 100,000 samples of age, mass and Ṁ_acc were created. The values for these samples were randomly drawn from a Gaussian distribution with mean equal to the actual measured value in each case and standard deviation equal to the associated uncertainty. The best fit is then estimated for each of the resulting data sets. The fit parameters obtained for all 100,000 datasets form a normal distribution, the mean of which, along with its 3σ confidence limits, is taken as the final best fit.
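A compact sketch of this resampling scheme is given below for a straight-line fit in log-log space. The measurements, their uncertainties and the (much reduced) number of draws are placeholders; the idea is simply to perturb every data point by its Gaussian error, refit, and summarize the distribution of fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder measurements with 1-sigma uncertainties (log10 of mass and Mdot)
logm,  logm_err  = np.array([0.2, 0.4, 0.6, 0.9, 1.2]), np.full(5, 0.05)
logmd, logmd_err = np.array([-6.8, -6.2, -5.7, -4.8, -4.0]), np.full(5, 0.3)

n_draws = 2000   # the analysis in the text uses 100,000 draws
slopes = np.empty(n_draws)
for k in range(n_draws):
    x = rng.normal(logm, logm_err)
    y = rng.normal(logmd, logmd_err)
    slopes[k] = np.polyfit(x, y, deg=1)[0]

print(f"slope = {slopes.mean():.2f} +/- {3.0 * slopes.std():.2f} (3 sigma)")
```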
3.6. Quantifying IR excess using spectral index

IR excess in the Spectral Energy Distribution (SED) is one of the important criteria used in identifying YSOs. It provides a better understanding of the composition of gas and dust in the disk of a PMS star. Lada & Wilking (1984) differentiated YSOs into different classes from the shape of their SEDs in the IR region, and Lada (1987) quantified the classification scheme using the slope of the SED in the IR region, known as the Lada index. YSOs can be classified as Class 0, Class I, Class II and Class III, based on the steepness of the indices at various wavelength intervals (Lada 1987; Andre et al. 1993). The estimation and analysis of Lada indices are very important in studying the evolution of HAeBe stars, as they give an idea about the evolution of the CSM. The spectral index (Lada 1987; Wilking 1989; Greene et al. 1994) between two wavelengths λ1 and λ2 is defined as n_(λ1−λ2) = [log(λ2 F_λ2) − log(λ1 F_λ1)] / [log(λ2) − log(λ1)]. For our analysis we consider the spectral index n_(2−4.6), computed from the flux values in the 2MASS (Skrutskie et al. 2006) Ks band (λ1 = 2.159 µm) and the WISE (Cutri et al. 2013) W2 band (λ2 = 4.6 µm). The age estimates are available only for 110 stars, and the spectral index could not be calculated for the HAeBe stars CPD-61 3587B and LkHA 224 due to the unavailability of WISE magnitudes; hence, a sample of 108 stars is used for this analysis. A plot between the spectral index (n_(2−4.6)) and the age of HAeBe stars is shown in Figure 6. No clear trend is evident in the variation of n_(2−4.6) with age in Figure 6. However, when we categorize the HAeBe stars into various mass bins, a tentative trend seems to emerge. For HAeBe stars with masses less than 2 M_⊙, the n_(2−4.6) value is around −1. For stars in the mass range 2−7 M_⊙, there is scatter in the distribution of n_(2−4.6) values, with the majority of the data points around n_(2−4.6) = −1. The majority of massive stars (mass > 7 M_⊙) show IR indices ranging from 0.5 to −3, with negative indices being more prominent among these high-mass candidates. This agrees with the study of Alonso-Albi et al. (2009), who suggested that in high-mass HBe stars disk dispersal is faster and disk masses are 5−10 times lower than in their low-mass counterparts. They explained this observation by suggesting that the photoevaporation mechanism due to UV radiation disperses the gas content of the disk, after which only a thin dusty disk containing large grains remains. The caveat in our study is the upper bound in age quoted for massive HBe stars.
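Evaluating the two-band spectral index is a one-line computation once the fluxes are in hand; the sketch below uses illustrative λF_λ values, and in practice the 2MASS Ks and WISE W2 magnitudes must first be converted to fluxes with the appropriate zero points.

```python
import numpy as np

def spectral_index(lam1_um, lamF1, lam2_um, lamF2):
    """Lada index n = d log(lambda F_lambda) / d log(lambda) between two bands."""
    return (np.log10(lamF2) - np.log10(lamF1)) / (np.log10(lam2_um) - np.log10(lam1_um))

# Illustrative lambda*F_lambda values (erg s^-1 cm^-2) at Ks (2.159 um) and W2 (4.6 um)
n_2_46 = spectral_index(2.159, 3.0e-10, 4.6, 1.2e-10)
print(round(n_2_46, 2))
```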
Comparison with Vioque et al. (2018)

Calculation of stellar parameters from the theoretical HR diagram involves the use of derived variables such as the bolometric luminosity (L_bol) and the effective temperature (T_eff). The estimation of these quantities from magnitudes and colors/spectral types involves approximations and comparisons with standard calibration tables, which introduce additional errors into the calculation of age and mass. Our analysis is based on the Gaia CMD rather than a theoretical HR diagram: using a uniform photometric system combined with precise distances gives a more accurate estimation of the age and mass of PMS stars. Thus, combining the refined stellar distances and consistent photometric measurements from Gaia DR2 with the synthetic-photometry isochrones and evolutionary tracks from MIST, accurate stellar ages and masses are estimated in this work. In comparison, Vioque et al. (2018) adopted the theoretical HR diagram for the analysis of age and mass.

The differences between our analysis and that of Vioque et al. (2018) are listed below.
• We used both the photometry and the distances from Gaia for the estimation of the age and mass of HAeBe stars, whereas Vioque et al. (2018) used only the Gaia distances. In addition, we adopted a total-to-selective extinction of R_V = 5, since Hernández et al. (2004) showed that R_V = 5 better reproduces the stellar parameters of HAeBe stars. Also, it is understood that the photometric variability and the high values of reddening in HAeBe stars are not due to the interstellar medium, but due to dust particles with large grain sizes in the CSM (see Gorti & Bhatt 1993; Manoj et al. 2006).
• For a statistical comparison of stellar parameters with Vioque et al. (2018), we also estimated the ages and masses of HAeBe stars with R_V = 3.1. The median of the fractional difference between our ages with R_V = 3.1 and the Vioque et al. (2018) ages is calculated to be within 19%. The fractional difference is defined as (Vioque estimate − Our estimate) / (Our estimate) × 100. For masses, the fractional difference is found to be within 8%. The difference in age and mass could be due to our use of the Gaia CMD and the MIST models, whereas Vioque et al. (2018) used the HR diagram and the PARSEC models (Bressan et al. 2012). This comparison is extended to our actual estimates of age and mass for R_V = 5: the median of the fractional difference in age and mass between our work (R_V = 5) and Vioque et al. (2018) is within 31% and 17%, respectively.
• Vioque et al. (2018) used the Hα EW for correlation studies with the age and mass of HAeBe stars. For our analysis, however, we used the Hα line flux, from which Ṁ_acc is calculated and then correlated with the age and mass of HAeBe stars. It may be noted that Mendigutía et al. (2012) reported that the Hα EW may not give a clear idea of the gas content of the disk; they suggested estimating Ṁ_acc from the Hα line flux to study the gas content of the disk, which is the approach we employed in this work.
• Vioque et al. (2018) used the continuum flux distribution from 1.24 µm to 22 µm for the analysis of IR excess in HAeBe stars. This includes the flux measurement from the WISE W4 photometric band, which is not very reliable, as the images of many HAeBe stars are not registered in the W4 band. Hence, we restricted the analysis to the WISE W2 band, which provides better photometry with good SNR and is free of artifacts.
• Vioque et al. (2018) found that there is a break in IR excess with mass, and we arrived at a similar conclusion. However, they suggested a considerably lower IR excess for massive HAeBe stars, whereas we see a considerable range of IR excess values in this work (see Figure 6).
SUMMARY
The present study made use of the unprecedented capability of the Gaia mission to derive stellar parameters such as the age and mass of HAeBe stars. Using these stellar parameters and the compiled Hα fluxes, Ṁ_acc is estimated for the sample. We also investigated the IR spectral index as a means of quantifying the IR excess. The main results of this study are summarized below.
• The improved accuracy of the Gaia DR2 astrometry is confirmed by comparing the Gaia DR2 distances with previously estimated values from the literature. For the sample of HAeBe stars used in this study, we adopted the distance values compiled in Bailer-Jones et al. (2018), which are the best distance estimates to date with minimal errors.
• The ages and masses of 110 HAeBe stars are estimated using the Gaia CMD, with the aid of MIST isochrones and evolutionary tracks. To our knowledge, no previous study has calculated the age and mass of a confirmed sample of HAeBe stars using both the photometry and the distances from the Gaia mission. Since we employed the Gaia CMD for estimating the age and mass of HAeBe stars, we avoided the considerable errors that arise when these quantities are estimated from the theoretical HR diagram.
• Mass accretion rates are calculated from the Hα line flux measurements of 106 HAeBe stars, the largest such sample to date. Since we used distances and stellar masses derived from Gaia DR2 data in the calculation of Ṁ_acc, our estimates should be more accurate than those of previous studies.
• The disk dissipation time scale derived for our sample of HAeBe stars is 1.9 ± 0.1 Myr, which is consistent with the previous estimate (Mendigutía et al. 2012).
• We found that the mass accretion rate is related to the mass of HAeBe stars through the relation Ṁ_acc ∝ M_*^(2.8±0.2).
• We calculated the spectral index (n_(2−4.6)) to quantify the IR excess in HAeBe stars. The relation between the spectral index and age suggests a distinction between the disks of HAe and HBe stars. Massive HBe stars with ages <0.1 Myr show diverse values of the infrared spectral index, ranging from 0.5 to −3, with negative indices being more prominent. The possibility that photoevaporation dissipates the gas content of the disk, leaving behind a thin dusty disk, and the difference in formation between HBe and HAe stars need to be explored in further studies.
We would like to thank the anonymous referee for providing helpful comments and suggestions that improved the paper. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC; https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Also, we made use of the VizieR catalog access tool, CDS, Strasbourg, France.

Table 1 (caption fragment): … (Vioque et al. 2018, V18), mass (our work) and mass (V18). Our estimates of age and mass are derived using the Gaia CMD. (*) The errors in our age and mass estimates are rounded off to two digits, whereas those from Vioque et al. (2018) are reproduced as in their paper.
“She Moves Through Deep Corridors”: Mobility and Settler Colonialism in Sharon Doubiago’s Proletarian Eco-Epic Hard Country1
This article analyzes Sharon Doubiago’s American long poem Hard Country (1982) from the joined perspectives of ecocriticism and mobility studies. It argues that Hard Country is a proletarian ecoepic that rethinks human-nature relations from a working-class perspective shaped by different kinds of (im)mobility. In my analysis, I show how the text revises the American epic tradition by foregrounding working-class people’s desire for meaningful relationships to place in light of histories of environmental injustice and displacement. Doubiago’s text promotes traditional place-based notions of belonging, but it also challenges ideas about what kind of sense of place can be environmentally suggestive. In doing so, it allows for the emergence of a proletarian “ecopoetics of mobility” (Gerhardt) that emphasizes the bodily experiences of Doubiago’s mobile narrator as well as U.S.-American histories and cultures of mobility. Among these cultures of mobility, settler colonialism stands out as a system of violent domination and form of environmental injustice (Whyte) that calls into question working-class people’s desire to move or settle on dispossessed indigenous lands. As such, settler colonialism poses a challenge to Doubiago’s proletarian ecopoetics of mobility, which must engage with the fact that white working-class people in the United States have always been perpetrators as well as victims of both environmental and mobility injustice.
Resumen

This article analyzes Sharon Doubiago's American long poem Hard Country (1982), combining an ecocritical perspective with that of mobility studies. Its central argument is that Hard Country is a proletarian eco-epic that rethinks the relations between human beings and nature from a working-class perspective shaped by different kinds of (im)mobility. In my analysis, I show how the text revises the American epic tradition by focusing on working-class people's desire for meaningful relationships to place in the face of histories of environmental injustice and displacement. Doubiago's text promotes traditional, place-based notions of rootedness, but it also questions ideas about what kind of sense of place can be environmentally suggestive. It thereby allows for the emergence of a proletarian "ecopoetics of mobility" (Gerhardt) that emphasizes both the bodily experiences of Doubiago's mobile narrator and U.S.-American histories and cultures of mobility. Among these cultures of mobility, settler colonialism stands out as a system of violent domination and a form of environmental injustice (Whyte) that calls into question working-class people's desire to move or to settle on dispossessed indigenous lands.

2 Sharon Doubiago's "Mama Coyote talks to the Boys" criticizes the deep ecology movement for its sexism. This sexism, the poet argues, not only manifests in deep ecology's promotion of ecomasculinist, pseudouniversalist positions that refuse to recognize the gendered nature of human-nature relations, but also, more concretely, in deep ecologists' failure to acknowledge ecofeminist scholarship and its propositions for the kind of radical new ecological consciousness that deep ecology demands. Doubiago ends her short essay with the following appeal: "And so the paradigm change I am presenting to you: Ecologists must become feminists. If you don't, you are doomed to remain outside the real work of saving Earth" (44).

3 A long debate exists about the nature of the (post)modern American epic and its relationship to the (post)modern long poem. Considerably less has been said on the "female" (Schweizer 10), "feminist" (Dewey 72), or "woman's epic" (Keller, "To Remember" 307). When this largely neglected, yet highly diverse tradition is discussed, as in Lynn Keller's influential study Forms of Expansion: Recent Long Poems by Women (1997) or Jeremy M. Downes's The Female Homer: An Exploration of Women's Epic Poetry (2006), Sharon Doubiago is usually identified as one of the few female American poets whose work can be clearly identified as epic.
own travels, her grandparents' move from Tennessee to West Virginia, and her parents' work-related migration to California. It also evokes other historical mobilities, for example two legendary pre-Columbian explorations of North America, settlers' westward movement along the Oregon Trail, Native American removal, and the arrival of refugees in California after the Vietnam War. Commenting on this double emphasis on place and (histories of) mobility, Jeremy Downes notes that the text's overarching narrative-the "circular journey of the hero and her current lover" (167)-is continuously interrupted by "many layers of subnarrative" (166) that have a cumulative effect. In my reading of Doubiago's epic, I explore how narratives of mobility produce "places of depth" (Downes 167) in Hard Country and how they shape the representation of complex human-place relations in the text. I agree with Lynn Keller that Doubiago's epic text expresses an "urgent ecological awareness of the danger humans pose to themselves through failing to understand their place as part of the natural world" (Forms 39). Like Keller, I also find it noteworthy that the poet continuously highlights "the mixed positionality of the oppressed" (Forms 42) in an effort to challenge existing power hierarchies. Indeed, I see these two concerns as connected. In discussing how different kinds of (im)mobilities shape the lives of working-class people, Sharon Doubiago reveals the contradictory position that the working poor occupy in a nation built on capitalist exploitation and settler colonialism. Especially white working-class people, her long poem indicates, are victims as well as perpetrators of both environmental and mobility injustice.
Revising the American Epic Tradition
Sharon Doubiago's poetry is both representational and rich in imagery, both narrative and lyrical. As Lynn Keller notes, Doubiago's "omnivorous free verse" (Form 19) is overall characterized by "straightforward documentary syntax" (27) but simultaneously relies on "fragmentation and parataxis, and on elaborate interweaving of motifs" (27). In its narrative passages, Hard Country chronicles a woman's life on the California coast and a road trip this woman, the narrator Sharon, takes across the United States. Throughout the epic poem, the narrator's experiences on the road are interspersed with personal memories and passages that link family histories to national histories of marginalization and oppression, allowing Doubiago to challenge "the discourses of nationalism with which the epic is entwined" (Goodman 449; see also Crown 80). Like other female poets revising the American epic tradition, Doubiago has to "wrestle with the mixed legacy of the largely male-authored modernist collage long poem, finding different strategies for capitalizing on its liberating dimensions while evading its misogynist ones" (Keller,Forms 16 Overwhelmed by the abyss that is the "American soul" but unable to turn away from the "stories and people and land," the narrator begins her very personal critical examination of "this overstuffed country" by turning to the coastal landscapes of her childhood: In a land hard to love, in a harsh, masculine land this was the first, these rhythmic, low-wide mesas coming west from the mountains we lived in as girls down to the sea, the first land I loved. (34) Alluding to the book's title, the above passage denounces the United States as "a harsh, masculine land" that is "hard to love" (emphasis added). The grown narrator's desire to love her country with a devotion comparable to the one with which she used to love the "rhythmic, low-wide mesas" of Southern California as a young girl is one of the underlying themes of Doubiago's revisionist American epic. The impossibility of this desire is the poem's greatest tragedy, but it is also its most important lesson.
One reason why Doubiago's adult narrator cannot love the entire country as she loved the landscapes of her childhood is that she refuses to approach places as if the realm of nature was distinct from politics. When Sharon thinks about her beloved California coast, she must also take into account the ecological, social, and political realities of her day and the histories that produced them: […] I write this in verse, this letter to you as a poem, this news story, these many stories, this essay, this spilling and collecting of my life in these hills. The details are ominous, journalistic, the experience deepest poetry: how the San Onofre Nuclear Generating Station and Richard Nixon share the south and north rim of the lagoon down in which the refugees and marines are camped, at the mouth of which beneath this bridge we cross over, Mexican farmworkers are bent all in a row for our food.
We are blonde, we are never stopped at the border checking stations, though I wonder of everyone's exile here where during the war I passed and saw a doomed California Brown Pelican rowing her prehistoric, now DDT lope between the San Clemens White House and the weeping juices of the setting sun ( Addressing environmental pollution, U.S. militarism, the plight of Vietnamese refugees, the racist logic underlying U.S. border policies as well as the exploitation of migrant workers, Doubiago's narrator resolves to write "deepest poetry" that reaches beyond the personal experiences of her "life in these hills." Her poetry, as Kathleen Crown puts it, " [bears] witness to the stories of the dispossessed" (80-1), wherever she encounters them. Such an endeavor entails a critical examination of her own social position. For even as the narrator muses whether everyone's relationship to the polluted lagoon may be viewed as one of "exile," she recognizes the privileges her racial background and citizenship status afford her. Not least, these include the privileges of whiteness and mobility: neither will Sharon be "stopped at the border" like the "Mexican farmworkers" mentioned in the excerpt, nor will she have much trouble traveling across the United States later in the epic poem. Like in the above excerpt, Doubiago's epic poem repeatedly addresses issues of environmental degradation. Hard Country for example evokes the devastating effects of "atomic testing in the Pacific" (18), the "mountains sucked hollow for bombs" (87), or the logging of the ancient "Redwood Empire" (98) of Albion Ridge (see also Crown 81). At the same time, Doubiago is concerned with the lives and struggles of working-class people, whether she refers to the Mexican migrant laborers in the excerpt above, to waitresses like the narrator's mother (70), to seasonal farm workers like the narrator's father who used to catch "the freight to make the wheat harvest" (19), to a "black worker/ against East Texas oilfield" (211-212), or to striking Arizona miners who were "hauled out to the desert to die" (225) during the Bisbee Deportation of 1917. Linking environmental degradation with social injustice, Doubiago's long poem critiques the disruption and distortion by the capitalist system of the desire of working-class subjects for meaningful relationships to the more-than-human world. Hard Country can therefore be called not only an American eco-epic, but also a proletarian eco-epic. 4
Sharon Doubiago's Proletarian Eco-Epic
Doubiago frequently addresses the place of working-class people in the nation by embedding the stories of her immediate and extended family into larger historical, political, economic, and environmental contexts. "Signal Hill," the very first poem of Hard Country, alludes to the narrator's own working-class background as well as to the complex relationship between California's oil industry and the United States' status as a military superpower. Because the narrator's mother is in hospital to be treated for tuberculosis, her father-who is described in other poems as either unemployed or doing odd-jobs-goes drinking "every Friday when he gets paid" (5; emphasis original), ©Ecozon@ 2020 ISSN 2171-9594 120 Vol 11, No 1 leaving the children alone in the car outside a bar. From the parked car-a symbol of physical and social mobility in U.S. Culture as well as an emblem of the "human 'mastery' of nature" (Urry 51)-the children see the city that "spreads beneath [them]/ in a rainbow-spilled oil puddle" (5). In the distance, they perceive the giant robots that pump/ the fields" (5) and the "battleships/ that strain at their ropes/ toward bigger war across the sea" (5). The references to the pump robots and the oil puddle evoke the environmental costs of California's coastal oil industry, costs addressed again in a later passage that mentions the "polluted waters/ beneath Signal Hill" (240). The mention of a "bigger war across the sea" points to the "smaller" wars at home, which include, as Doubiago's epic suggest, the exploitation of the working poor by big industry, of nature by humans, of women by men, and of Native peoples by white settlers.
Doubiago's narrator traces her working-class background back several generations, often locating the disenfranchisement of America's working poor in a troubled relationship to place and to the non-human world. These troubled relationships have very real, material consequences: they manifest physically in people's bodies. This is why Sharon's great-grandmother, whose entire family worked in North Carolina's textile mills, "witnessed/ seven of her ten children die/ of tuberculosis" (198) and eventually died from the disease herself. Her granddaughter, the narrator's mother, was orphaned by the disease as a child and became sick herself as an adult. Passed down from generation to generation, tuberculosis not only functions as a marker of workingclass heritage in Doubiago's proletarian eco-epic; it is also used as a signifier for how social class influences human-nature relations and vice versa: Once a doctor asked me if the family was from North Carolina as if the place itself tells the story of swampy, humid lungs […] of the thing still carried in the breath of my children (199; emphasis original) As the narrator indicates, the "place itself" does not "tell[/] the story" of her maternal family's long history with pulmonary tuberculosis. However, the vulnerable bodies of her relatives tell the story of "the place" her ancestors lived in ("North Carolina"), just as her children's bodies tell the story of her family's working-class background. Workingclass bodies here record the frequently precarious relationships of the poor to their places of residence and the long-term effects that acts of environmental injustice committed against the laboring poor can have even after relocation.
On her father's side of the family, the narrator's relatives suffered doubly from the interconnected exploitation of working-class people and the land. Sharon's grandfather worked in the copper mines of Tennessee, which eventually left him and many of his fellow miners unemployed and sick, with "nothing but the black dust that filled their lungs" (186; emphasis original). Hard Country here evokes another case of ©Ecozon@ 2020 ISSN 2171-9594 121 Vol 11, No 1 environmental and labor injustice. Yet, the most unsettling passages set in the Copper Basin of Polk County focus not on Sharon's grandfather, but on her grandmother and her son, the narrator's father. As the reader learns, Sharon's paternal family lived in Ducktown, one of the cities located within a roughly 30-km² area of Tennessee that had been stripped almost completely bare of vegetation by the early twentieth century because of logging and the toxic sulfuric emissions of the local swelters (see Mathews and Harden 7). 5 In the passages focusing on her family's life in the Copper Basin, Doubiago's narrator evokes working-class people's desire for intimate relationships to the more-than-human world, the distortions of these relationships by capitalist exploitation, and the harrowing physical and psychological consequences of those distortions.
One section of the sequence "Headstone," appropriately entitled "The devastation that remains," addresses matters of environmental degradation alongside matters of (re)productive justice by juxtaposing images of a devastated (Mother) Earth with images of the equally devastated body of the narrator's grandmother: […] your husband crawling beneath all borders deep in the earth's mind the light on his forehead leading the way and five children crawling through you. You never healed, you told me, the Edens' head too large (17; emphasis original) Comparing the act of copper mining to that of giving birth, Sharon represents both as productive and destructive, leaving the Earth/woman with lasting scars and open wounds ("You never healed"). The juxtaposition of her grandmother's husband "crawling/ beneath all borders/ deep in the earth's mind" and of her "five children crawling through [her]" highlights the fact that the South's labor-intensive extraction industry relied on the ongoing re/productivity of "Edens" and other working-class families like them. It not only required working-class people to remain in "this poisoned corner of Tennessee," it required working-class bodies to remain re/productive, despite the horrific working and living conditions in the Copper Basin.
Doubiago's proletarian eco-epic links the environmental and mobility injustice inflicted on the miners and their families to a long-term exposure to pollution on the one hand and to a class-based immobilization on the other. A form of "slow violence" (Nixon 2), this immobilization can be described as a "displacement in place" (Nixon 17) that leaves a community "stranded in a place stripped of the very characteristics that made it ©Ecozon@ 2020 ISSN 2171-9594 122 Vol 11, No 1 inhabitable" (19). 6 The Tennessee Copper Basin is such a place, even if Sharon's father did not realize so as child: Daddy who thought the whole earth without trees, without flowers, without grass the way it's supposed to be, he thought, death-cracked blood-red rain-rotted tree-split body-ripped hillskulls who swam in a green river of cupric chloride and copperheads (18) The narrator's paternal family could not move away from the place their own labor helped to destroy because they were dependent on the income that the mining industry offered. They had no choice but to live in a devastated environment made toxic by "a green river of cupric chloride." Hard Country denounces these ignoble living conditions. Even more, it acknowledges working-class people's desire to live in places of natural beauty. For Sharon's grandmother, this desire remained tragically unfulfilled, the narrator indicates: Sometimes, Grandma, you walked to the Georgia border. I make it up. You must have walked to North Carolina looking for a tree. How else did you bear That poisoned corner of Tennessee? (17) For the narrator's father, by contrast, a new opportunity for such fulfilment arose when he left the Copper Basin to move to California: We moved to the country to start over. [Daddy…] was climbing a hill and when he came to the crest the sky went inside him. Time blew around like a cloud And he saw the earth for the first time. She was green, not red. (27) Representing the father's hike in the "Sierras" (27) as a spiritual experience, this passage describes the moment in which the narrator's father begins to develop an intimate connection to his new place of residence. He not only awakens to the beauty of California's mountains, he also begins to realize the extent of the devastation he was surrounded with as a child. It is only after moving and by moving from one place to 6 Nixon uses the terms "displacement in place" (17) and "displacement without moving" (19) interchangeably. Both describe the experience of groups of people, indigenous or non-indigenous, who live in places where "an official landscape is forcibly imposed on a vernacular one" (19). According to Nixon, a "vernacular landscape" is one that "is shaped by the affective, historically textured maps that communities have devised over generations" (19, emphasis added). Although Nixon speaks about humanplace relations that are produced by long-term inhabitation, which is not necessarily the case with Doubiago's working poor, I would maintain that the term "displacement in place" is still useful for a situation like theirs, in which rapid environmental degradation makes it impossible for a community to create a stable "vernacular landscape" in the first place.
another that the narrator's father is able to overcome the displacement in place suffered by his family and so many working-class people like them.
Doubiago's proletarian eco-epic depicts working-class people who are alienated from the more-than-human world but long for what one might describe as a proletarian sense of place unimpeded by capitalist exploitation and environmental destruction. The sense of place promoted in these passages is often a traditional one that views "the local as the ground for individual and communal identity and as the site of connections to nature that modern society is perceived to have undone" (Heise 9). This emphasis on the local also becomes apparent when Sharon stops at the Eden family graveyard during her travels through the U.S. South. Musing about her early European ancestors, the narrator imagines one of the headstones as an outgrowth of the body buried beneath it. Then she reflects on the radically changed landscape the headstone surveys: The broad human head and shoulders rise from the forest floor. The nose, the mouth, the eyes look from the ridge out over the land that has disappeared beneath the waters of Dale Hollow Lake on the mid-Tennessee-Kentucky line (7) The valley near the Tennessee-Kentucky border which the family graveyard overlooks, the reader learns, was flooded, when the completion of a dam in 1943 created "Dale Hollow Lake," a water and flood control reservoir that permanently displaced the narrator's paternal family from the land that their ancestors had inhabited for several centuries. Unlike many later passages in Hard Country, this one does not acknowledge the displacement of indigenous people by European settlers from what had originally been Cherokee lands. On the contrary, by using the family graveyard to speculate about a settlement history that reaches beyond official historical records-Doubiago suggests that the first Eden was buried in the graveyard in "1558/ […] 50 years/ before Jamestown"(7)-this passage reveals the tension that arises when Doubiago's examination of her family's relationship to place comes into conflict with histories of Native American displacement. Rather than addressing this conflict, the gravestone passage speaks to the hierarchy that the nation establishes among (white) settlercitizens of different socio-economic backgrounds. This hierarchy comes to the fore when working-class people's claims to the land go against corporate or state interests, whether these interests be economic or environmental. 7 7 While Dale Hollow Dam was officially built for power generation and flood control, Dale Hollow Lake has since become a widely popular recreational area. Similar projects were undertaken in several other places along the Tennessee and Cumberland River during the 1930s and 40s. T. Crunk's poetry collection New Covenant Bound (2010) deals with the consequences of two such "federal land-and water-management projects" (Crunk "Memoriam")-Kentucky Lake and Lake Barley, which today form the Kentucky Woodlands National Wildlife Refuge-and the resulting forced removal of "between 28,000 and 30,000 people" ("Memoriam").
The graveyard passage is not only significant from an ecocritical perspective interested in mobility because it highlights that working-class people have sometimes been displaced for reasons of environmental development and thus been turned into "conservation refugees" (Nixon 18). The fact that Sharon imagines a gravestone as an ancestor's body that "rises[s] from the forest floor" to survey the lost family lands is also significant because it speaks to the narrator's desire for rootedness and belonging. This desire for rootedness and belonging is tied to the kind of human-nature intimacy that is often associated with people who have inhabited and worked a particular piece of land for decades, if not generations. Doubiago's evocation of Wendell Berry's poetry a few lines later (7; see also Doubiago,259,n. 3) reinforces the ecolocalist idea of rootedness as an environmental ideal. After all, Berry has long been known not only as a regionalist ecopoet who celebrates the "simplicity of farm life" (Hönninghausen 285), but also as a poet-farmer who cultivated his Kentucky farm without the use of modern technology. By promoting this particular brand of land ethics, the beginning of Hard Country stands in tension with other passages in Doubiago's epic poem in which the travelling speaker relinquishes ideals of rootedness at least partly, replacing them with what I would describe, in drawing from Christine Gerhardt, as a more "mobile sense of place" (425).
Writing about Emily Dickinson's and Walt Whitman's ecopoetics and questions of mobility, Gerhardt identifies three tactics that imbue the works of these two protoecological poets with a mobile sense of place: the construction of places that are significantly shaped by mobilities, of speakers whose environmental insights are critically informed by their geographical movement, and of broader cultural frameworks characterized by overlapping movements of people, materials, goods, and ideas. (426) All of these tactics are crucial for Doubiago's ecopoetics. Indeed, when the poet discusses working-class people's relationship to the land, she not only evokes matters of environmental injustice and "displacements in place," she also evokes different histories and experiences of displacement. In other words, she discusses different kinds of materialities-the land, bodies, and the material conditions of production and reproduction that connects them to each other-and different kinds of (im)mobilities. As I will argue, Hard Country is thus not merely characterized by an "ecopoetics of mobility" (Gerhardt 425), that is, by "a way of poetic world-making that conceives of natural phenomena and human-nature relationships in particular places as both ecologically suggestive and fundamentally geographically mobile" (425). Rather, it is characterized by a proletarian ecopoetics of mobility that reflects on how different kinds of mobilities and cultures of mobility shape (white) working-class peoples' relationships to place and to the more-than-human world. emphasizes that (im)mobilities, along with the particular forms and meanings they assume at a given moment, must be analyzed in their specific social, political, and cultural contexts. This perspective also informs Gerhardt's discussion of an "ecopoetics of mobility, which considers "places of mobility" (426), "mobile speakers" (432) and "mobile cultures" (437). Such an approach is also useful when analyzing Doubiago's epic poem Hard Country. In the section "Avenue of Giants," for instance, Doubiago's mobile narrator Sharon is driving from Southern California to Oregon when she begins to reflect on how "cars travel/ the mythical highway north/ through iridescent, silver-blue columns/[w]hile loggers haul south/ Trees of Mystery" (107, emphasis original). These unassuming lines draw attention to the West coast of the United States as a place that is shaped by different kinds of mobilities, all of which are ecologically significant. The passage mentions U.S. car culture and the human labor involved in the commercial logging of old growth giant redwoods on the coast, which in many places was still underway when Doubiago wrote Hard Country (see Newton). In doing so it points to the paradoxical fact that both the efforts to preserve charismatic megaflora such as Sequoias and the exploitation of the environment for leisure by nature parks such as Trees of Mystery have been made possible, at least partly, by the rise of automobility. 8 Finally, by reading this passage with a triple focus on the environment, mobility, and social class, it is revealed that all the industries alluded to here-the logging industry, the transport industry, the automobile industry, and the tourist industry-heavily depend on the mobilization of working-class people for labor and leisure. 
These industries thus influence working-class people's perspectives on the non-human world as well as a working-class culture of mobility that informs both the experience of Doubiago's narrator and Doubiago's ecopoetics.
A few pages before the narrator starts on her road trip across the United States, Doubiago places a "Prayer for the beginning of a Journey" that also speaks to her ecopoetics of mobility. In order to complete the task the narrator has set for herself, namely to report on "what is seen and heard" (101) in her native country, the traveling poet asks to be plunged "into deepest earth" (101) hoping to re-emerge with a better understanding of the places she visits, of the histories of the people "who have preceded [her]" (101) and of the hopes of "those who come after" (101). The image of going 8 Christof Mauch discusses the paradoxical link between discourses of preservation and exploitation in the United States. Suggesting that the relationship of the American people to nature has always been ambivalent and dominated by economic concerns, he uses the example of national parks to argue that while the railway was used to open up the "American wilderness" to the public, it was the rise of automobile tourism during the 1920s and the promotion of nature tourism as a patriotic adventure at the home front during the two World Wars that turned national parks into sites of mass consumption (see esp. 11-13).
ISSN 2171-9594 126 Vol 11, No 1 underground used in the poem recalls the myth of Persephone, a mythical traveler between places. It also ascribes an explicitly experiential and indeed physical (one might also say environmental) dimension to the act of writing, which the text conceives of as involving intimate, bodily encounters with the more-than-human world. It is this combination of movement and intensive engagement with the materiality, histories, and mythologies of places, this "mov[ing] through deep corridors" (131), as Doubiago puts it elsewhere, that characterizes the proletarian ecopoetics of mobility developed in Hard Country.
As she travels the country, engaging with places and their histories, the narrator's white, female, working-class body emerges as an instrument of sense-making, an orientation device indicative of a poetry of witness that values the poet's subjective and yet mobile and thus shifting perspective on the world: I understand, in this moment of wind I understand we are each stranded in our essential Body […] I understand we come from a truth we each wholly and separately possess to a particular house and street in time to tell only the story our body knows and our tragedy will be we will not tell it well because our witnesses will be telling their stories […] my own story is understanding our singleness that I am destined to move my body and time into the body-time the story of Others. (8-9; emphasis original) While this passage maintains that the narrator's bodily experiences determine her ability to tell some stories better than other ones, it also expresses the narrator's conviction that poets must try to tell stories that go beyond their personal experience. The best way to do so, the narrator suggests, is by "mov[ing one's] body and time/ into the body-time […] of Others." Movements of the imagination seem to be as important to Doubiago's narrator here as traveling to those places where history happened to engage them with her "essential Body." Especially those parts of Doubiago's eco-epic that focus on the narrator's travels through the Midwest and the South indicate that Sharon's movements provide her with a more acute sense of how U.S. cultures, national mythologies, and histories of mobility have shaped the country's non-urban environments: the countless trips back and forth across the country the road we've grown so old on animal paths, old Indian foot trails ©Ecozon@ 2020 ISSN 2171-9594 127 Vol 11, No 1 become superhighways, interstates, buffalo tearing their way across it, covered wagons covering it, the flesh our feet have walked upon, the fear we still have alone at night of the land (88) Linking the narrator's own eastward movement to the westward expansion, the poem implies that whereas the "covered wagons" of the early treks began to disrupt ecologies in the American West, modern "superhighways" and "interstates" are in the process of obliterating them. Despite their ongoing efforts to conquer nature by "covering" the ground with tar and concrete, white working-class Americans like the narrator have been unable to overcome their "fear […] of the land," which the speaker imagines as "flesh," a metaphor that seems to refer both to a dangerously unstable, living land and to the genocide against indigenous people by which the West was won.
Several poems in Hard Country evoke historical migrations and displacements together with environmental histories. The sequence "Heartland," for example, conjures up the catastrophic hygienic conditions on the Oregon Trail, a westward trek that took several hundred thousand emigrants from Missouri and nearby states to the Pacific Coast during the mid-nineteenth century. Rather than presenting these migrations as a heroic feat of brave pioneers, Doubiago describes a "trail to Oregon through/ garbage heaps" with "wells and latrines, too close" that left behind "seepage/ and stink" (119). A later poem, "The Heart of America: Yellowstone," elevates the destructiveness of the westward expansion to even grander proportions by associating it with the movement of tectonic plates. It can be argued that this kind of geological imagery naturalizes the westward movement, deflecting blame and responsibility away from the settlers and thus erasing the devastating effects of settler colonialism. Yet, I would argue that Doubiago primarily uses geological imagery here to emphasize the epochal nature of the European settlement of North America together with its lasting impact. Indeed, voicing a critique of the westward expansion and its underlying ideology, "The Heart of America" suggests that U.S. settler-colonial appetites remain as boundless in Doubiago's time as they were 150 years prior. Just as the "continents" are constantly "sliding" (136) against each other under the surface of Yellowstone, the poem ominously concludes, "America is always coming from the East,/ overriding everything in her path" (137). Hard Country thus also explores ambivalences that arise in the relationship of white working-class subjects with the land because they are migrants and settlers.
Doubiago's Proletarian Ecopoetics of Mobility and Settler Colonialism
Doubiago's narrator frequently addresses the migratory histories of her European ancestors. She describes her maternal family as "seatossed here a hundred years before the Revolution" and as a family of "westwalkers" driven by the "mania" of ©Ecozon@ 2020 ISSN 2171-9594 128 Vol 11, No 1 "starting over" (196). Where Doubiago mentions such family histories of migration without addressing the histories of Native American displacement, a tension arises in Hard Country. This tension is especially noticeable when Doubiago employs what Eve Tuck and K. Wayne Yang describe as "settler moves to innocence" (10), that is, "strategies or positionings that attempt to relieve the settler of feelings of guilt or responsibility without giving up land or power or privilege" (10). When Sharon lays claim to several indigenous female ancestors (Doubiago 197), for example, she is using a rhetoric of "settler nativism" (Tuck & Yang 10). 9 And when she mixes nature imagery and sexual imagery implying that women's bodies are being colonized like the land, she employs "colonial equivocation" (Tuck & Yang 17). Doubiago's use of settler strategies of evasion is problematic because they do the cultural work of legitimizing settler colonialism regardless of the author's intent. Yet, I would argue, her epic poem also works against relieving settler guilt and against evading settler responsibility. One strategy Hard Country employs to this effect is addressing the role (white) working-class subjects have played in the dispossession of indigenous people and the devastation of Native American ancestral lands. Another is foregrounding the narrator's own whiteness and the privileges that results from this racialization.
In one passage from the section "Headstone," Doubiago explicitly links the dispossession of indigenous communities and the devastation of Native American ancestral lands to the environmental degradation caused by copper mining: the place of silence where there are no birds the place where there are no seeds, only scars of your having been there a wide red-rock copper river named for a chief named Duck whose trees are gone, who now is lost, whose babies crying in the kudsu crawl back onto the hills (19) Providing yet another powerful description of the "place of silence" in which the narrator's father grew up, these lines depict an environment in which native vegetation has been replaced by "kudsu," an invasive vine that has been spreading uncontrollably in the South ever since it was introduced at the beginning of the twentieth century as a means to revitalize exhausted soils. The absence of birds and trees in this excerpt points to the removal of the Cherokee from the region, while the mention of kudsu points to the "invasion" of indigenous lands by white settlers. At the same time, the quoted passage describes the destruction of indigenous ancestral lands by industrial copper mining. It thus points to the troubled position (white) working-class people such as the members 9 Doubiago also claims Native American ancestors for herself. In the afterword to the 1999 reprint of Hard Country, she refers to government records that identify two of her great-grandparents as members of the "North Carolina Lumbee" and "Eastern Boundary Qualia Cherokee" (272). While she cherishes this heritage, she also acknowledges the "righteous Native contempt" for culturally non-indigenous, whiteidentified "wannabes" (272) like herself.
of the narrator's family hold in U.S. history: they are victims of environmental injustice and displacement, yet, they are also perpetrators of environmental destruction and agents of settler-colonial domination, which, as Kyle Whyte notes, is necessarily a form of environmental injustice, because it "disrupts human relationships with the environment" (125).
While the narrator of Hard Country sometimes identifies with indigenous peoples and even occasionally assumes their perspective, Sharon usually speaks "specifically as a white woman" (Goodman 455; emphasis original): and in dreams I am Goldilocks still wandering through cities and woods searching for the place that will fit me just right, Goldilocks the ache to be Bear, little white person without roots (21; emphasis original) Like the story of Goldilocks, Hard Country is a text about a "little white person" in search for a home ("roots"). The quoted passage suggests that Sharon's "ache" to be Native ("to be/ Bear") is futile but also unnecessary, because as white person she can take up residence wherever she chooses, even if the home in question is already occupied. As Doubiago suggests a few pages earlier, her narrator's "white body" can function "as place/ of sanctuary" (10) and as "city of refuge" (11) until she has found a "place/that will fit [her]/ just right" (21; emphasis original). What the narrator gradually realizes during her travels, then, is that being a "little white person" in America means having mobility and settler privilege. Yet, it also means that she is a "betrayer of the Body, the Earth" (256) and a "consort, abettor, accomplice" (256; emphasis original; see also Keller,Form 57) to the settler-colonial violence committed for her benefit.
By foregrounding the embodied perspectives of a white working-class poet, Doubiago points to a problem that cannot be easily resolved: if a non-indigenous person moves from place to place in search of a home in a settler nation like the United States, especially if she is white, her mobility can never just be a strategy to gain a better understanding of places and their histories because her movement and desire for emplacement also perpetuate settler colonial violence. This is why writing poetry about human-nature relations and (histories of) mobility presents white settler poets like Doubiago/Sharon with a dilemma: I took a vow to never be a poet because the art I was taught is too delicate to sing of genocide. But what else could I sing while people were being murdered in my name? (144) ©Ecozon@ 2020 ISSN 2171-9594 130 Vol 11,No 1 Settler poets who want to write about the nation can be silent about (settler-colonial) histories of violence, or they can write about them, although the poetic models available to them will be inadequate to the task. Doubiago has resolved "to sing," which is why she must address the history of settler colonialism with whatever language is available to her, not least because, as her narrator asserts, all the people murdered for the sake of the (settler-colonial) nation, "were being murdered/ in [her] name" as well.
Hard Country is not a decolonial text, which would require it to support, if not explicitly demand, a "repatriation of Indigenous land and life" (Tuck & Yang 1). However, Doubiago's proletarian eco-epic is critical of settler colonial violence and thus examines the heritage and the burden that come with being a white settler. "It was our grandparents who did it" (142), the poet writes in the poem "Wyoming," adding: Now when we reach for this land we think of invasions from Outer Space because for so long we were the alien inhuman invaders (142) Using the plural "we," Doubiago counts her narrator among "the alien inhuman invaders" and thus among those Americans who inflict settler-colonial violence by "reach[ing] for this land." Although the ambiguous use of tenses in the excerpt might indicate that the narrator treats settler colonialism as a matter of the past, Doubiago does consider the relevance of settler colonialism for the nation's present. Trying to understand what her self-positioning as a (descendant of) white settler(s) means, Sharon not only asks, "How did we do it?" but also "How do we bear it?" and, even more importantly, "How do we live now?" (143). Doubiago does not claim to have the answers to these questions. What she vows to do is to continue searching for answers, even if these answers must remain flawed and provisional, by writing poetry that looks to the past to examine the present and, ultimately, to shape more viable futures.
Conclusion
One of the last sections of Hard Country looks to the future by returning to several passages from the long poem that also address the three topics at the center of this essay: working-class people's relationships to place, American histories of (im)mobility, and settler colonialism. Revising the passages set in Tennessee's mining country, Doubiago writes: I tell you everyone I know has one of these stories, the end of love, the rivers damned, the earth mined, the gems carried out to make the bomb. Once I took a vow never to be a poet, but now, this manmade desert back of us, this 200th anniversary, how can I not polish and string these beads of blood and light? […]
Risk factors for an infection with Coxiella burnetii in German sheep flocks
In Germany, sheep are the main source of human Q fever epidemics, but data on Coxiella burnetii (C. burnetii) infections and related risk factors in the German sheep population remain scarce. In this cross-sectional study, a standardised interview was conducted across 71 farms keeping either exclusively sheep or both sheep and goats to identify animal- and herd-level risk factors associated with the detection of C. burnetii antibodies or pathogen-specific gene fragments via univariable and multivariable logistic regression analysis. Serum samples and genital swabs from 3367 adult male and female small ruminants from the 71 farms were collected and analysed using ELISA and qPCR, respectively. On animal level, univariable analysis identified young age (<2 years; odds ratio (OR) 0.33; 95% confidence interval (CI) 0.13–0.83) as significantly (p < 0.05) reducing the risk of seropositivity. The final multivariable logistic models identified lambing all year-round (OR 3.46/3.65; 95% CI 0.80–15.06/0.41–32.06) and purchases of sheep and goats (OR 13.61/22.99; 95% CI 2.86–64.64/2.21–239.42) as risk factors on herd level for C. burnetii infection detected via ELISA and qPCR, respectively.
Introduction
Q fever is an infectious zoonotic disease caused by the obligate intracellular and Gram-negative bacterium Coxiella (C.) burnetii. Domestic ruminants are regarded as its most common reservoir and are widely recognised as the main source for human infections [1]. Clinical manifestation in ruminants may vary from asymptomatic infection to abortion, premature delivery, stillbirth and weak offspring [2]. Infected ruminants shed the pathogen through birth products, milk, faeces and urine [3]. In the environment, C. burnetii can survive in a highly resilient spore-like form [1]. Transmission by bacteria-contaminated aerosols or dust is the most common route of human infection, and a radius of 5 km around infected farms has been identified as the area exposed to the highest risk [1,4]. Dry and windy weather conditions favour the spread of the pathogen [5]. Moreover, ticks are also considered to be involved in the infection cycle of C. burnetii in sheep [6].
In several European countries, e.g. Bulgaria and the Netherlands, dairy goats were held responsible for sizeable human Q fever epidemics [7,8]. In contrast, lambing sheep were identified as a primary source of human C. burnetii epidemics in Germany, where a number of small-scale outbreaks occurred within the last two decades with a maximum of 331 infected individuals in one of the outbreaks [9]. A recently conducted study revealed herd prevalences of 26–36.6% and 13.9% of C. burnetii-positive sheep flocks in Germany detected using ELISA and qPCR, respectively [10].
In humans, the source of C. burnetii infections can usually be detected in retrospective studies [11]. The high tenacity of C. burnetii, the variation or even the absence of clinical signs in sheep and the limited knowledge of many aspects of the pathogen's epidemiology make it particularly difficult to identify its route of introduction into the small ruminant population. However, several risk factors have been identified for sheep and sheep flocks. For example, larger herds [12][13][14][15] and more breeding ewes within a flock [12,15,16] increased the chance of seropositivity. Furthermore, the seroprevalence and the risk of detecting C. burnetii antibodies were higher in older animals (>1-2 years) [15,17,18] and in females having already given birth in contrast with nulliparous replacement animals [15]. Contact with other flocks [15], one or several supply addresses for ewes [19] or returning loaned sheep [16] also increased the risk of infection determined by the detection of antibodies. According to some reports, the likelihood of seropositivity in sheep increased with the number of goats within a radius of 10 km [12,19]. However, Meadows et al. [16] reported no significant influence of goats on the C. burnetii status of sheep. Also, reproductive disorders such as infertility during the previous year [15] and more than six stillborn lambs in the subsequent lambing season [19] were associated with seropositivity.
Although sheep play an important role with regards to human infections in Germany [9], there remains a need for reliable data on risk factors for sheep exposure to C. burnetii. Consequently, the purpose of the present study is to identify risk factors for a C. burnetii infection in sheep flocks on individual animal and herd level in five federal states in Germany previously tested using ELISA and qPCR, respectively [10].
Study area
In relation to cattle or swine populations, the number of small ruminants in Germany is comparatively small. According to the German Federal Statistical Office via their GENESIS-Online Database, Germany counted 19,556 sheep farms with approximately 1.83 million sheep in 2016 [20]. The vast majority of German farms (71.7%) shelter fewer than 50 sheep, while only 5.1% count over 500 [21]. Most flocks are run by hobby farmers, while professional farmers keeping more than 500 sheep, though less frequent, account for the majority of reproducing sheep. Numbers of farms and sheep in each federal state vary substantially, and management approaches in sheep farming vary mainly between northern and southern Germany. Besides the sedentary husbandry system, traditional transhumance (migrating flocks) is still practised, especially in the southern federal states of Baden-Wuerttemberg (BW) and Bavaria (BAV). In this part of Germany, goats are frequently used to manage scrub in protected natural areas. Contrastingly, in northern federal states, especially Schleswig-Holstein (SH), sheep are used particularly for coastal protection on dikes, while goats are difficult to keep under such conditions. As a consequence, there is a larger share of mixed (sheep and goat) farms in the southern federal states (BAV: 19.9%; BW: 30%) than in northern Germany (SH: 11.8%; Lower Saxony LS: 14%; North Rhine-Westphalia NRW: 14.4%) [20,22]. In Germany, various sheep breeds are kept, with a focus on meat sheep breeds with spring lambing in the northern parts, while southern federal states mainly focus on Merino breeds with year-round, i.e. aseasonal, lambing. These differences in husbandry affect the variability of herd structures with regard to age and sex distribution. Nevertheless, female sheep (⩾1 year of age and mated females younger than 1 year) make up the largest part of the flock (64.4%), followed by lambs (<1 year of age, 32.6%) and sires, muttons and other sheep (3%) [20].
Study design and detection methods
The current risk factor analysis of a C. burnetii infection is based on data taken from a recently published prevalence study. Details of farm and animal selection, sampling procedure and laboratory tests are published elsewhere [10].
In total, 3367 animals from 71 farms across five federal states were sampled and analysed using ELISA and qPCR, respectively. For each sampled animal, individual ear tag number, species (sheep or goat), sex, age and reproductive status of females (gimmer or ewe) were recorded for subsequent analysis of animal level risk factors for C. burnetii infection. In addition, a standardised interview was conducted by either author (AW or BUB) with the farm's manager on the animal sampling day to ascertain herd level risk factors. The standardised questionnaire consisted of questions concerning: (1) general farm indicators, (2) information on livestock kept on the farm, (3) husbandry system, (4) flock history, (5) diseases of humans living or working on the farm, (6) last lambing season and (7) current mating season. Before the first visit, the questionnaire was tested in three farms not included in the study. Variables of the sample list and the questionnaire with hypothetical high relevance were selected to identify risk factors at animal and herd level. Furthermore, mean humidity and temperature during sampling were retrieved from meteorological stations from the German weather service closest to the farms.
Correlation analysis
Due to the large number of possible risk factors, we first verified that all variables differ from one another in terms of content. A correlation analysis was carried out to support this step. For this purpose, the following measures were determined and used to confirm correlation: Cramer's V > 0.5 for qualitative variables; ANOVA (equal variances) or Kruskal-Wallis test (unequal variances) with P > 0.05 and a coefficient of determination R² > 0.1 for pairs of qualitative and quantitative variables; and a Pearson correlation coefficient > 0.7 for quantitative variables. Correlated variables were either summarised, or one of them was removed from further analysis but considered in the subsequent interpretation of the results; if there was only a moderate correlation, both variables were included in the model selection using an interaction term.
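To make these screening measures concrete, the following minimal sketch (in Python rather than the SAS environment used for the analysis) illustrates how such pairwise association criteria could be computed; the toy data frame and its column names are purely illustrative assumptions, not study data.

```python
# Illustrative computation of the pairwise association measures used for
# variable screening (Cramer's V, eta-squared as an R^2 analogue, Pearson r).
# Toy data; the column names and values are hypothetical, not study data.
import numpy as np
import pandas as pd
from scipy import stats

farm_data = pd.DataFrame({
    "lambing_on_pasture": ["yes", "no", "yes", "yes", "no", "no"],
    "husbandry_system": ["migrating", "sedentary", "migrating", "migrating", "sedentary", "sedentary"],
    "region": ["North", "North", "South", "South", "North", "South"],
    "mean_humidity": [82.0, 79.5, 66.0, 70.2, 85.1, 68.4],
    "mean_temperature": [8.1, 9.0, 14.2, 13.1, 7.4, 15.0],
})

def cramers_v(x, y):
    """Cramer's V for two categorical variables (> 0.5 treated as correlated)."""
    table = pd.crosstab(x, y)
    chi2 = stats.chi2_contingency(table)[0]
    n = table.values.sum()
    return np.sqrt(chi2 / (n * (min(table.shape) - 1)))

def eta_squared(categories, values):
    """Variance in a continuous variable explained by group membership
    (analogous to the R^2 > 0.1 criterion for mixed variable pairs)."""
    groups = [g.values for _, g in values.groupby(categories)]
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = sum(((g - grand_mean) ** 2).sum() for g in groups)
    return ss_between / ss_total

v = cramers_v(farm_data["lambing_on_pasture"], farm_data["husbandry_system"])
r2 = eta_squared(farm_data["region"], farm_data["mean_humidity"])
r, _ = stats.pearsonr(farm_data["mean_humidity"], farm_data["mean_temperature"])
print(f"Cramer's V = {v:.2f}, eta^2 = {r2:.2f}, Pearson r = {r:.2f}")
```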
Risk factor analysis
A risk factor analysis was conducted to identify risk factors for an infection with C. burnetii at herd and animal level. The target variables (ELISA and qPCR) were dichotomised (positive/negative). Moreover, the geographical location was dichotomised (North = SH, LS, NRW; South = BAV, BW) to reduce results' distortion. Due to the deviating infection rates and the different farm management systems in these two regions [10], the geographical location of the examined farms was considered as a confounder and therefore the model was stratified for the two regions.
For the risk factors at herd level, univariable and multivariable logistic regression models were fitted for ELISA and qPCR results, respectively (PROC LOGISTIC, SAS Institute Inc., Cary, NC, USA). For the risk factor analysis at animal level, the farm was considered a cluster variable. To do this, we took an extended generalised linear model approach to take the hierarchical structure of the data into account (PROC GENMOD, SAS Institute Inc.). The parameters were estimated by using generalised estimating equations [23].
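As a rough illustration of the two modelling approaches (the analysis itself was run in SAS), the sketch below shows an equivalent herd-level logistic regression and an animal-level GEE in Python/statsmodels; the data frames `herd_df` and `animal_df` and all column names are assumptions.

```python
# Hedged sketch of the herd-level and animal-level models in statsmodels
# (not the authors' SAS code). `herd_df`/`animal_df` and the column names
# are hypothetical stand-ins for the study data.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Herd level: univariable logistic regression for one candidate risk factor.
herd_fit = smf.logit("elisa_positive ~ C(purchases)", data=herd_df).fit(disp=0)
print(np.exp(herd_fit.params))        # odds ratios
print(np.exp(herd_fit.conf_int()))    # 95% CIs on the OR scale
print(herd_fit.aic)                   # AIC used for herd-level model comparison

# Animal level: GEE with the farm as cluster variable, respecting the
# hierarchical structure (animals nested within farms).
animal_fit = smf.gee(
    "elisa_positive ~ C(age_under_2) + C(sex) + C(species)",
    groups="farm_id",
    data=animal_df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(np.exp(animal_fit.params))
# animal_fit.qic()  # QIC for animal-level model comparison (recent statsmodels)
```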
Odds ratio (OR), a 95% confidence interval (CI), the Akaike Information Criterion (AIC) at herd level or the Quasi-likelihood under the Independence Model Criterion (QIC) at animal level, and p values were calculated for categorical and continuous variables. A variable was used for further analysis if it had a p value lower than 20% (p < 0.20) in the model [24]. Moreover, a distinctive OR < 0.75 or OR > 1.33, together with a reasonable corresponding 95% CI (lower CI > 0.001; upper CI < 999.99), led to the variable being taken further into account. In rare cases, a variable took on the same value on all observed farms. As a result, it was impossible to calculate meaningful ORs and CIs, and the corresponding variables were not considered for the multivariable models. These criteria allowed variables to be considered for further analysis if they did not have a p value lower than 20% but still had a distinctive OR. Thus, the multivariable model could be selected from the largest number of possible risk factors, and the probability of wrongly removing influencing factors was minimised.
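Expressed as a simple filter, this screening rule could look like the following sketch (illustrative only; `univariable_fits` and the way the thresholds are applied are assumptions layered on the criteria described above).

```python
# Hedged sketch of the univariable screening rule: keep a variable if p < 0.20,
# or if it shows a distinctive OR (< 0.75 or > 1.33) with a usable 95% CI.
# `univariable_fits` is assumed to map each candidate variable name to
# (fitted statsmodels result, coefficient name); all names are hypothetical.
import numpy as np

def passes_screening(fit, term):
    p = fit.pvalues[term]
    odds_ratio = float(np.exp(fit.params[term]))
    low, up = np.exp(fit.conf_int().loc[term])
    distinctive = (odds_ratio < 0.75 or odds_ratio > 1.33) and low > 0.001 and up < 999.99
    return p < 0.20 or distinctive

candidates = [name for name, (fit, term) in univariable_fits.items()
              if passes_screening(fit, term)]
```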
Hereafter, we carried out a forward selection with the variables that met the abovementioned criteria. The variables which most improved the model fit and whose addition achieved the best p values of the models were selected. The addition of variables to the models was terminated either when all variables were included, or when the addition of further variables led to no improvement of the model fit and the p values. In the final step, the resulting models were each examined for collinearity using the variance inflation factor.
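A minimal sketch of such a forward-selection loop with a subsequent variance-inflation-factor check is shown below; it uses an AIC-based stopping rule as a stand-in for the model-fit criterion described above (the authors worked in SAS), and all variable names are hypothetical.

```python
# Forward selection with an AIC criterion plus a VIF collinearity check,
# as an illustrative stand-in for the procedure described above.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

def forward_select(df, outcome, candidates):
    selected, best_aic = [], float("inf")
    improved = True
    while improved and candidates:
        improved = False
        scores = []
        for var in candidates:
            formula = f"{outcome} ~ " + " + ".join(selected + [var])
            fit = smf.logit(formula, data=df).fit(disp=0)
            scores.append((fit.aic, var))
        aic, var = min(scores)
        if aic < best_aic:            # keep adding variables while the fit improves
            best_aic, improved = aic, True
            selected.append(var)
            candidates.remove(var)
    return selected, best_aic

def vif_table(df, variables):
    """Variance inflation factors for the variables retained in the final model."""
    X = pd.get_dummies(df[variables], drop_first=True).assign(const=1.0).astype(float)
    return pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
        index=X.columns,
    )
```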
Univariable analysis
Risk factors on animal level for an infection with C. burnetii detected using ELISA or qPCR

The animals' age was a significant (p < 0.05) risk factor for an infection with C. burnetii detected using ELISA (Table 1). The likelihood of detecting antibodies was reduced by two-thirds in animals younger than 2 years of age. None of the variables on animal level (sex, species and age) had any significant influence on the detection of C. burnetii by qPCR (Table 2). Tables 1 and 2 also show the apparent prevalence at individual animal level by sex, species and age. Because of non-evaluable results using ELISA and especially qPCR, and occasionally missing age information, not all of the 3367 sampled animals could be included.
Risk factors on herd level for an infection with C. burnetii detected using ELISA

The categorical variables infestation with ticks, lambing on pasture and purchases of sheep and goats had a significant influence on the detection of C. burnetii infections using ELISA (Table 3). All farms stated their purchases (breeding sires/females or sires and females/no purchases) within the last 12 months in the questionnaire. On the whole, purchased animals were introduced into the flock without quarantine. The majority of farms purchased exclusively breeding sires (n = 41), followed by 19 farms without any purchases. The remaining 11 farms either bought breeding sires and females (n = 8) or only breeding females (n = 3). The proportion of seropositive farms (positive farms/examined farms) was 29.3% (12/41) for exclusively buying breeding sires, 10.5% (2/19) for no purchases and 54.5% (6/11) for purchases of females or of sires with females.
Although insignificant, the categorical variables wild birds (common ravens, pigeons, sparrows, wild geese), game (wild boars, mouflons, red, fallow, roe and sika deer and foxes), rodent control, aseasonal lambing, migrating flocks and participation in animal markets were linked to an increased likelihood of antibody detection, while the ORs of the variables presence of swine and poultry on the farm and lambing either in autumn or winter hinted at a lower risk of antibody detection in the present study. Other variables such as the presence of cattle, cats or dogs, ectoparasitic treatment and summer lambing did not appear to influence the serological findings. Furthermore, none of the continuous variables had a significant impact on the detection of C. burnetii antibodies in German sheep flocks, although the OR indicated that an increasing mean humidity leads to heightened seropositivity.
Risk factors on herd level for an infection with C. burnetii detected using qPCR

The categorical variables poultry and purchases of sheep and goats had a significantly positive influence on the detection of a C. burnetii infection using qPCR (Table 4). The proportion of positive farms (positive farms/examined farms) was 12.2% (5/41), 5.3% (1/19) and 36.4% (4/11) for purchases of exclusively breeding sires, no purchases, and buying only females as well as females and sires, respectively. Most categorical variables did not have a significant impact on an infection with the pathogen detected by qPCR, as was the case using ELISA. Nevertheless, the ORs of the variables presence of cats on the farm, infestation with ticks, rodent control, aseasonal lambing, lambing in autumn, on-pasture lambing and participation in animal markets suggested an increased risk of C. burnetii infections detected by qPCR. The presence of dogs, wild birds, game and lambing either in summer or winter lessened the incidence of C. burnetii infection at herd level in the present study. The presence of swine and cattle, ectoparasitic treatment and husbandry system did not have any impact based on our data. Among the continuous variables, pathogen detection depended significantly on the humidity: an increase in the mean humidity resulted in a decrease in detection via qPCR. Although insignificant, an increase in temperature increased the risk for a C. burnetii infection. Moreover, an increase in herd size decreased the probability of detecting the pathogen via qPCR, while an increase in the ratio of breeding sires to females in a flock tended to amplify the risk of a C. burnetii infection; neither of these two continuous variables was significant, however.
Multivariable analysis
Risk factors on animal level for an infection with C. burnetii detected using ELISA or qPCR

As expanding the model did not achieve an improvement of the model fit, no multivariable analyses were carried out.
Risk factors on herd level for an infection with C. burnetii detected using ELISA

The final multivariable model contained the variables lambing behaviour, purchases and poultry kept on the farm (Table 5).
Risk factors on herd level for an infection with C. burnetii detected using qPCR

Four C. burnetii infection-associated risk factors were included in the final model: purchases, lambing behaviour, presence of game and mean humidity (Table 6).
Discussion
Some of the variables obtained from the questionnaire and the sample list were identified as being associated with an increased C. burnetii infection hazard. However, the data gathered from the questionnaire were originally collected for the prevalence estimation in 71 German sheep flocks [10], and no sample size calculations were undertaken for the subsequent risk factor analysis. The lack of significant impact of some risk factors could be due to the limited number of farms and the huge variability in farm management and husbandry. Therefore, the generated risk factors are inconclusive, but the results provide an indication of possible influences on an infection with C. burnetii detected using ELISA and qPCR, respectively [25]. Further research is needed to confirm the results of this study.
In general, the risk factor analysis was based on the results of two different methods to detect an infection of C. burnetii in sheep and goats. The applied ELISA was based on the detection of IgG Phase I and Phase II antibodies and rendered no information about time and progress of an infection. Additionally, according to other studies, some small ruminants did not seroconvert although they were shedding C. burnetii and vice versa [26][27][28]. Moreover, antibodies were described to last for several months after an acute infection without the presence of the pathogen [27]. In contrast, the detection of C. burnetii-DNA by qPCR showed that the pathogen was circulating within the flock, which indicates a recent infection as long as chronic shedders have not yet been reported in small ruminants. These circumstances could explain some opposing findings of risk factor analysis in the present study.
Risk factors on animal level for an infection with C. burnetii
Age was identified as the only significant risk factor based on the ELISA results at animal level. Animals 2 years of age or older presented a higher seroprevalence. This is in line with studies performed in other European countries [15,17,18]. Adults were more likely to form antibodies due to a higher chance of getting in contact with the pathogen during their lifetime [15,17]. Moreover, the proportion of infected ewes (2.95/2.92%) in this study was higher in comparison to gimmers (young female before first lambing) (1.34/0.91%) detected using ELISA and PCR, respectively. This is in accordance with García-Pérez et al. [18] and Rizzo et al. [15], who reported a lower prevalence in replacement animals not exposed to the pathogen until lambing.
Although not statistically significant, the results of the presented regression analysis suggest that goats within a sheep flock have a higher probability of contracting an infection with C. burnetii. Overall, the detection rate of the pathogen was higher in goats compared to sheep regardless of the applied test system, although goats were overrepresented in the present study, which should be taken into account when interpreting the results. A similar observation was made in a study conducted with small ruminants, with 25.7% seropositive goats compared to 16.3% seropositive sheep in mixed flocks [15]. Interestingly, in the same study, no significant difference in seroprevalence was detected between sheep (11.42%) and goats (10.34%) originating from pure sheep and pure goat farms. Moreover, a higher goat density within a 10 km radius was identified as a risk factor for a C. burnetii infection in sheep [12,19]. Therefore, the cross-species interaction of C. burnetii between sheep and goats needs further investigation to identify possible species-specific characteristics.
Risk factors on herd level for an infection with C. burnetii
The univariable analysis identified infestation with ticks as a significant risk factor for a C. burnetii infection based on antibody detection at herd level. Ticks have been described as vectors for C. burnetii [1,6]. However, the pathogen was scarcely detected in ticks in Germany [29,30]. Recently, however, both common tick species (Dermacentor marginatus and Ixodes ricinus) were experimentally infected with C. burnetii and remarkable amounts of the pathogen were found to be shed with their faeces [31]. Therefore, transmission very likely occurs predominantly via tick faeces. The relevance of the particular tick species remains unclear. Further research to verify the transmission of the pathogen from ticks to livestock is needed. In this context, it is worth mentioning that ectoparasitic treatment had no significant influence on the infection with C. burnetii in our analysis. The risk of a positive test outcome from ELISA in the current study for flocks with on-pasture lambing was significantly higher than for flocks lambing indoors. This is in line with findings from Schimmer et al. on individual sheep level [19]. Additionally, lambing on pasture increased the risk for an infection detected by qPCR, although statistically insignificantly. On the one hand, outdoor lambing was suspected to reduce the exposure of sheep to C. burnetii [16], possibly due to a lower infection pressure in comparison to lambing in the stable. On the other hand, implementation of hygienic measures (disinfection of lamb pens, disposal of afterbirth) is probably rarer on pasture, and pathogen-containing material could thus be spread by wind and contaminate the grazing area. In addition, the frequency of grazing on contaminated pastures might have an influence on the risk of infection. In flocks with aseasonal lambing, it is likely that only a small part of all the available grazing ground is appropriate for a mob of lambing ewes. Therefore, these specific pastures, often located near the farmhouse, are frequently used by flocks with lambing all year-round. Moreover, getting into contact with other flocks was identified as a risk factor and was more likely for small ruminants kept on pasture than for animals housed year-round [15]. But contrary to the findings of Rizzo et al. [15], in the present study, besides poultry, neither other livestock nor pets or game had a significant influence on the infection with C. burnetii at herd level.
The presence of poultry on a farm increased the risk for a C. burnetii infection detected by PCR significantly. Based on ELISA detection, poultry decreased the risk. The role of poultry as a reservoir for C. burnetii was summarised by Lang [32]. One should note that recent studies are rare. Antibodies against C. burnetii but no pathogen-specific DNA were detected in chicken [33]. Therefore, the influence of poultry as host for C. burnetii remains inconclusive.
In this study, aseasonal lambing was identified by multivariable analysis as a risk factor regardless of the detection method.
Lambing all year-round might lead to constant circulation of the pathogen within the flock and a steady source of (renewed) infection for animals and humans alike. Merinos are an aseasonal sheep breed and thus lamb year-round. They are mainly kept in southern Germany. This may explain the higher occurrence of C. burnetii in the small ruminant population in this part of Germany and the frequent small-scale human epidemics, especially in BW [9,10].
Both logistic regression analyses identified purchases of sheep and goats to be significantly associated with a C. burnetii infection detected using ELISA or qPCR, respectively. Schimmer et al. [19] made a similar observation. They determined one or more supply addresses for ewes as a risk factor in sheep flocks in the Netherlands. In the present study, farms purchasing females tested positive more frequently than farms exclusively purchasing males. However, most farmers bought exclusively breeding sires, which reflects a common management practice in the German sheep industry. Buying females, ewes and gimmers, remains rare. Therefore, buying new breeding sires may increase the risk for C. burnetii infection. Venereal transmission of C. burnetii is not yet detailed conclusively, but detection of the pathogen in ram semen [34] and on preputial mucosa [10] was already demonstrated. This is supported by the observations made in the current study. The chance of finding C. burnetii by qPCR was higher in preputial than in vaginal swabs. On the other hand, the chance of being seropositive was higher in females. Females remain in a flock for years and thus have a higher opportunity to become infected during their lifetime [15,17], whereas sires are exchanged after one or two mating seasons. This could explain the higher proportion of seropositive females. However, sex did not have a significant influence on the detection of a C. burnetii infection using ELISA and qPCR, respectively. Overall, animal movements pose a particularly high risk for the entry of C. burnetii into the flock [16,19], especially when purchasing animals from different farms with unknown infection status.
An increasing mean humidity significantly reduced the risk for an infection detected by qPCR, and an increasing temperature tended to increase the risk of shedding the pathogen. Different lambing seasons may be connected with the detected influence of the climate on the risk of infection. For instance, higher temperatures and lower relative humidity during summer lambing in the federal states BW and BAV, with their continental climate, may increase the risk for shedding and transmitting the pathogen. In the federal state of SH, which experiences a more maritime climate due to its location between the North Sea and the Baltic Sea, lower temperatures in combination with higher relative humidity during the normal lambing season in February and March may reduce the risk of an infection detected by qPCR. Environmental weather conditions and their influence on an infection with C. burnetii have been investigated in some studies. Nusinovici et al. [35] suggested low precipitation and high temperature as risk factors for an infection with the pathogen. In addition, van der Hoek et al. [36] described that areas favouring the formation of dust constitute a higher risk for human infection. Conversely, rain seemed to reduce transmission [37]. Therefore, high precipitation and high humidity may create worse conditions for the transmission and maintenance of the pathogen, while dry weather and wind favour the spread of C. burnetii [5].
In summary, age had a significant influence on the detection of C. burnetii antibodies at animal level. Older animals (⩾2 years of age) were more frequently seropositive than younger ones. Therefore, the composition of the flock, especially the replacement rate might have an influence on transmission and circulation of the pathogen in the flock. The multivariable analysis identified purchases and lambing all year-round as risk factors for a C. burnetii infection at herd level detected using ELISA and qPCR, respectively. The results and observations compiled in this study are of particular use in establishing an active monitoring and surveillance system for the German small ruminant population, which may contribute to prevent the transmission of C. burnetii to animals and humans alike.
Prevalence and determinants of utilizing skilled birth attendance during home delivery of pregnant women in India: Evidence from the Indian Demographic and Health Survey 2015–16
Background Utilization of skilled birth attendance during home delivery of pregnant women is proven to reduce complications during and after childbirth. Though the utilization of skilled birth attendance (SBA) during home delivery has increased significantly in recent times, the rate of utilizing skilled birth attendance is still low in several regions across India. The objective of this study is to analyze the prevalence and to identify the determinants of the utilization of skilled birth attendance during home delivery of pregnant women in India. Methods To conduct this study, data and information from the Indian Demographic and Health Survey 2015–16 have been utilized. The sample size for this study is a weighted sample of 41,171 women. The sample consisted of women who had given a live birth in the three years preceding the survey. For women with more than one child, only the first live birth was considered. Binary logistic regression and log-binomial logistic regression models have been applied, with results reported as adjusted odds ratios (AORs) with 95% confidence intervals, to identify the determinants of home-based skilled birth attendance during delivery. Comparing the two models allows the most appropriate one to be selected for the study objective, ensuring that the determinants of skilled birth attendance for home delivery are assessed appropriately given the characteristics of the data. Results The analyses show that only 18.8% of women had utilized skilled birth attendance during delivery. Women residing in urban areas are more likely to utilize skilled birth attendance during home delivery (AOR: 1.14; 95% CI: 1.08–1.20). Women having higher education levels are associated with increased use of SBA during home delivery (AOR: 1.15; 95% CI: 1.04–1.28). Exposure to media is associated with increased utilization of SBA (AOR: 1.17; 95% CI: 1.11–1.23). Overweight women are also more likely to use SBA during home delivery (AOR: 1.11; 95% CI: 1.03–1.19). Women belonging to affluent households have higher odds of utilizing skilled birth attendance (AOR: 1.41; 95% CI: 1.33–1.49). Having 3+ tetanus injections is associated with the utilization of SBA (AOR: 1.56; 95% CI: 1.43–1.69). Women having 4+ antenatal care visits were more likely to utilize SBA (AOR: 1.81; 95% CI: 1.71–1.92). Women belonging to the Hindu religion were 1.12 times more likely to utilize SBA (AOR: 1.12; 95% CI: 1.07–1.18). Women with 1 to 3 birth orders were 1.40 times more likely to utilize skilled birth attendance during home delivery (AOR: 1.40; 95% CI: 1.30–1.51). Conclusion The percentage of women utilizing skilled birth attendance during home delivery is still very low, which is a matter of serious concern. Several factors have been found to be associated with the utilization of SBA during home delivery in India. As skilled birth attendance has significant positive health outcomes for pregnant women and newborns, efforts to increase the rate of SBA utilization during home delivery should be undertaken.
Introduction
A skilled birth attendant (SBA) is a healthcare professional who provides essential and emergency healthcare services to women and their newborns during pregnancy, childbirth and the postpartum period. Delivery attended by skilled professionals is known to contribute to better pregnancy and childbirth outcomes as well as early detection and management of complications during the antenatal, delivery and postnatal periods [1]. In 2015, an estimated 303,000 women died from pregnancy-related causes worldwide [2]. Despite the Sustainable Development Goals (SDGs) target of reducing the global maternal mortality ratio to less than 70 per 100,000 live births by 2030, countries in WHO (World Health Organization) regions such as Sub-Saharan Africa and South Asia are expected to continue to have a high maternal mortality ratio (MMR) [3,4]. Within the South Asian region, India has the highest MMR [3]. Although maternal mortality has decreased significantly since 1997, the rate is still high in rural and tribal areas [5]. The MMR in India has declined over the previous three years, most markedly in the Empowered Action Group (EAG) states and Assam (also known as low-performing states), where the number went down from 188 to 175 per 100,000 live births. The decline has been from 77 to 72 among the Southern states (known as high-performing states) [6,7]. This decrease is strongly associated with increased utilization of essential health care and quality-of-care services including antenatal care, institutional delivery and skilled birth attendance (SBA). Previous research has demonstrated that unskilled birth attendance and home delivery are associated with high infant and maternal mortality and morbidity [2,[8][9][10]. Therefore, skilled attendants are deemed to contribute positively to the reduction of maternal and newborn mortality and morbidity [2].
However, pregnancy-related deaths among Indian women continue to be unacceptably high [6].Despite significant progress, disparities in MMR remain widespread and prolonged across regions and socioeconomic groups in India [3,11,12].Low coverage of essential maternity care services including antenatal care (ANC), SBA and postnatal care (PNC) significantly impacts mother and newborn survival.It puts them at risk [2,13].SBA-assisted childbirth can significantly reduce the risk of maternal and neonatal deaths caused by prematurity, intrapartum or postpartum complications [10,14,15].Increasing institutional deliveries can help reduce maternal and neonatal mortality [16][17][18].
In three of the six WHO regions, the percentage of births attended by skilled personnel exceeds 90%. However, regions where skilled birth attendance (SBA) is lacking include Africa, where the figure remains below 50%, followed by South Asia [2]. SBA has increased significantly in India; for example, between 1992-93 and 2005-06, skilled birth-assisted deliveries increased by 13 percentage points (from 36 to 49 percent) and then reached 81 percent in 2015-16 [12,19]. Nonetheless, understanding regional disparities in skilled birth attendance is required, as the case of India demonstrates that there are large gaps in maternity care service availability across the country, particularly in rural areas. There are differences in the use of SBA for deliveries between urban and rural areas, at 90 and 78 percent respectively [12].
Furthermore, socioeconomic determinants are always crucial in broadening the gaps in availing SBA-assisted delivery among women in India.For example, whereas only 64% of women in the lowest income quintiles had an SBA for delivery, 96 percent of women in the highest income quintiles had availed the service in India.Ensuring safe and well-prepared childbirth requires the presence of a skilled birth attendant.In maternal and child health, it is also crucial to take into account the age of a woman at delivery as it can influence the risk of complications [20][21][22][23][24][25].Use of SBA also varies across social groups and as per the mothers' educational statuses [12,26].Furthermore, inequalities in obtaining SBA within the state and between states or regions are widespread in India.Limited research in India has investigated women's preference towards home deliveries between public and private health facilities [27].Several research studies have investigated the factors that influence the utilization of healthcare facilities for childbirth [28,29].On occasions, economic status can significantly influence the choices related to the location of childbirth than mere accessibility particularly when deciding between private and public healthcare facilities [30].The utilization of private healthcare services is often regarded as an indicator of affluence and social standing [28].On the other hand, public healthcare facilities serve as the primary source of cost-effective delivery facilities for India's underprivileged populations but people may opt for home-deliveries instead [31].Overall, most of the studies on accessing delivery services are conducted on institutional skilled birth attendance.There is a dearth of literature identifying factors affecting SBA at home which constitutes a significant literature gap.This information is critical for different stakeholders working to improve maternal and child health in India and other developing countries to make informed decisions.This study, therefore, intends to explore what factors affect home-based skilled birth attendance among Indian women.
Sampling design and data sources
This study was conducted using the Indian Demographic and Health Survey 2015-16 data also regarded as the National Family Health Survey-4 (NFHS-4).This survey was conducted under the Ministry of Health and Family Welfare (MoHFW), Government of India.The International Institute for Population Sciences (IIPS) in Mumbai acts as the nodal agency for all the surveys conducted by the MoHFW.The 2015-16 National Family Health Survey's primary goal was to collect vital information on health and family welfare and information on emergent difficulties in India.A two-stage stratified clustered sampling technique was used in this survey [12].
The Indian Demographic and Health Survey 2015-16 was conducted using four types of questionnaires (Household, Woman's, Man's, and Biomarker Questionnaire).In this study, we used data from the woman's questionnaire.This questionnaire was based on 17 local languages administered by the Computer Assisted Personal Interviewing (CAPI) adjusted to India's circumstances and requirements.During this survey, all eligible 15 to 49 aged women were asked questions regarding their background characteristics (for instance, age, education, religion, caste or tribe, and media exposure), reproductive history, hysterectomy prevalence, menstrual hygiene, knowledge of usage and sources of family planning methods, antenatal, delivery, postnatal and newborn care, husband's background, fertility preferences, empowerment of women etc. [12].
In our study, we only measured the case of the first live birth in a woman's life.A weighted sample of 41,171 women who had given a live birth in the three years preceding the survey was taken into account.Only the first live birth was considered for women who had more than one live birth.
Outcome variable
The outcome variable for this study was whether a Skilled Birth Attendant was present at the woman's first live birth if the delivery took place in their home.In our analysis, we considered doctors, Auxiliary Nurse Midwives (ANM)/Nurses/Midwives/Lady Health Visitors (LHV), Midwives and other health personnel as skilled birth attendants [32].Other birth attendants like Dai (Traditional Birth Attendant) and friends/relatives were considered unskilled birth attendants.The skilled birth attendant variable is a binary response with 'skilled provider' coded as 1 and 'unskilled provider' as 0.
Explanatory variables
For this study, the explanatory variables include sociodemographic factors such as type of place, educational level, media exposure, body mass index (BMI), tetanus injection receiving status, number of antenatal care visits, age at first birth (years), wealth index, religion and birth order. Type of place was defined as two categories: 'rural' and 'urban.' The respondent's educational level was categorized as no education, primary, secondary and higher. Women who had not watched TV, listened to the radio, or read a newspaper/magazine at least once a week were categorized as having 'no exposure' to media; in contrast, others were grouped as 'having exposure.' Body mass index was defined as BMI <18.5 (underweight), BMI 18.5-24.9 (normal) and BMI >24.9 (overweight). The number of tetanus injections received before birth was re-coded into none, 1, 2 and 3+. During pregnancy, the number of antenatal care visits was categorized as 'no visits,' '1 to 3 visits' and '4+ visits.' Age at first birth was categorized as ≤18 years, 19-23 years and 24+ years. The wealth index was re-categorized using 'poorest' and 'poorer' as 'poor' and 'richest' and 'richer' as 'rich,' while the middle wealth category remained the same. The respondent's religion was re-coded into Muslim, Hindu and others. The respondent's birth order was re-coded into one, two, three, four and five or more.
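For concreteness, the recoding scheme could be expressed roughly as follows (an illustrative Python/pandas sketch, not the Stata code used for the analysis; the input file and column names are hypothetical rather than the actual NFHS-4 variable names).

```python
# Hypothetical recoding of the outcome and selected explanatory variables,
# mirroring the categories described above. File and column names are
# assumptions, not the actual NFHS-4 variable names.
import numpy as np
import pandas as pd

women = pd.read_csv("nfhs4_first_births.csv")  # hypothetical extract of first live births

# Outcome: 1 = skilled provider at a home delivery, 0 = unskilled provider
skilled = ["doctor", "anm_nurse_midwife_lhv", "midwife", "other_health_personnel"]
women["sba_home"] = women["birth_attendant"].isin(skilled).astype(int)

# BMI groups: <18.5 underweight, 18.5-24.9 normal, >24.9 overweight
women["bmi_group"] = np.select(
    [women["bmi"] < 18.5, women["bmi"] <= 24.9],
    ["underweight", "normal"], default="overweight")

# Antenatal care visits: none, 1-3, 4+
women["anc_group"] = pd.cut(women["anc_visits"], bins=[-1, 0, 3, np.inf],
                            labels=["no visits", "1 to 3 visits", "4+ visits"])

# Wealth index collapsed to poor / middle / rich
women["wealth3"] = women["wealth_index"].replace(
    {"poorest": "poor", "poorer": "poor", "richer": "rich", "richest": "rich"})
```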
Statistical analyses
In this study, we first conducted a univariate analysis to observe the frequency of selected background characteristics of the women in India. Using cross-tabulation and the chi-square test, the bivariate analysis shows the relationship between the outcome and explanatory variables. A binary logistic regression (BLR) model was fitted, with results presented as adjusted odds ratios (AORs) with 95% confidence intervals, to identify the determinants of home-based skilled birth attendance during delivery [33]. A log-binomial logistic regression (LBLR) model was then fitted, also reported as adjusted odds ratios (AORs) with 95% confidence intervals, for identifying the determinants of home-based skilled birth attendance. This allows for a more comprehensive analysis of the determinants of home-based skilled birth attendance. Finally, model selection was carried out to decide which statistical model best approximates reality given the data and minimizes the loss of information. The following goodness-of-fit measures were used in this study for model selection: (1) the Akaike information criterion (AIC) and (2) the Bayesian information criterion (BIC). A small log-likelihood value indicates a worse-fitting model. After comparing the BLR and LBLR models, the model with the smallest AIC value is preferred. BIC is similar to AIC: the goal of BIC is to find the best model for prediction using the highest posterior probability, while the purpose of AIC is to identify the model that most plausibly generates the data. Both AIC and BIC can be used to compare models regardless of whether they are nested, and they have a high potential of selecting the best model as they are independent of the order in which the models are computed. Data analyses for this study were performed with STATA version 14.0 for Windows, and the sampling data were weighted using the Stata survey command.
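As a hedged illustration of this model comparison (in Python/statsmodels rather than Stata, and without the full survey-design adjustment applied in the original analysis), the sketch below fits both specifications on the same predictors and compares their AIC and BIC; the data frame and column names are assumptions.

```python
# Illustrative comparison of a logit-link (BLR) and a log-link binomial (LBLR)
# model on the same predictors, selected by AIC/BIC as described above.
# `women` and all column names are hypothetical; the Stata svy design used in
# the study is not reproduced here (survey weights would need separate handling).
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

formula = ("sba_home ~ C(residence) + C(education) + C(media_exposure) + "
           "C(bmi_group) + C(wealth3) + C(tetanus) + C(anc_group) + "
           "C(religion) + C(birth_order)")

blr = smf.glm(formula, data=women, family=sm.families.Binomial()).fit()
lblr = smf.glm(formula, data=women,
               family=sm.families.Binomial(link=sm.families.links.Log())).fit()
# Note: the capitalised link class requires a recent statsmodels release;
# log-binomial models can also fail to converge and may need starting values.

def bic(fit):
    # Log-likelihood-based BIC: -2*llf + log(n) * number of parameters
    return -2 * fit.llf + np.log(fit.nobs) * (fit.df_model + 1)

for name, fit in [("BLR", blr), ("LBLR", lblr)]:
    print(name, "AIC:", round(fit.aic, 1), "BIC:", round(bic(fit), 1))

print(np.exp(blr.params))  # exponentiated coefficients (adjusted odds ratios)
```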
Results
In this study, we included 41,171 women in our analysis. As shown in Table 1, only 18.8% of women were attended by skilled birth attendants during delivery at home. The majority of the women (87.0%) lived in rural areas and nearly two-thirds were Hindu women. About one-third of the women (29.8%) reported their education level as 'secondary.' More than half of the women (53.8%) had media exposure, and nearly 30% of them were in the underweight group. Moreover, a high percentage of the women (73.5%) were from low-income families. More than half of the women (58.2%) had received tetanus injections two times. Similarly, 39.8% reported having one to three antenatal care visits before their deliveries. The largest share of the women (26.9%) had given birth to two children, and most of the women (54.2%) had their first birth at 19 to 23 years of age.
Bivariate analysis (Table 2) shows significant association (p<0.001) between the utilization of home-based skilled birth attendants during delivery and the type of place, educational level, religion, media exposure, body mass index, wealth index, tetanus injection receiving status, age at first birth, number of antenatal care visits and birth order.The utilization of unskilled delivery attendants was higher among the rural woman, uneducated, women belonging to poor families and among the Hindu women.Also, there was a higher rate of using skilled birth attendants among women with one to three antenatal care visits.
The Binary Logistic Regression shows that type of place, educational level, religion, media exposure, body mass index, wealth index, tetanus injection receiving status, number of antenatal care visits and birth order have highly significant association with the utilization of homebased skilled birth attendance (Table 3).
The women from urban areas were 1.19 times (AOR: 1.19, 95% CI: 1.11-1.26) more likely to avail of home-based delivery by skilled birth attendants compared to those from rural areas. Women with secondary and higher levels of education were 1.17 times (AOR: 1.17, 95% CI: 1.09-1.25) and 1.25 times (AOR: 1.25, 95% CI: 1.06-1.47) more likely to use skilled birth attendants during delivery compared to women with no education. Moreover, women exposed to media were 20% (AOR: 1.20, 95% CI: 1.13-1.18) more likely to be delivered by skilled birth attendants than women unexposed to media. The odds of having home-based delivery by skilled birth attendants were 51% higher among women during their first birth (AOR: 1.51, 95% CI: 1.38-1.66) compared to their fifth and later births. Women who are overweight had 1.15 times higher odds of using skilled birth attendants, and the odds among women from rich households were higher (95% CI upper limit 1.79) than among women in poor households. Women who had received 3+ tetanus injections were 73% (AOR: 1.73, 95% CI: 1.56-1.91) more likely to have a skilled birth delivery than women who did not receive tetanus injections at all. Furthermore, women who had 4+ antenatal care visits for their most recent delivery were 2.16 (AOR: 2.16, 95% CI: 2.01-2.32) times more likely to have home-based skilled birth attendance compared to those who had no antenatal care visits. Besides, Hindu women had 1.15 (AOR: 1.15, 95% CI: 1.07-1.23) times higher odds of using skilled birth attendance during delivery compared to Muslim women. The Log-Binomial Logistic Regression, as shown in Table 4, also reveals a highly significant association between the utilization of home-based skilled birth attendance and predictors such as type of place, educational level, religion, media exposure, body mass index, wealth index, tetanus injection receiving status, number of antenatal care visits and birth order.
Urban women had 1.14 times (AOR: 1.14, 95% CI: 1.08-1.20)more likelihood to have home-based delivery by skilled birth attendants compared to rural women.The higher educated women had 15% (AOR: 1.15, 95% CI: 1.04-1.28)more likelihood to use skilled birth attendance during delivery than the women who had no formal education.Regarding media exposure, women exposed to media were 1.17 (AOR: 1.17, 95% CI: 1.11-1.23)times more likely to be delivered by skilled birth attendants than women unexposed to media.Also, women in the overweight group had 11% (AOR: 1.11, 95% CI: 1.03-1.19)higher likelihood of using skilled birth attendants than women in the underweight group.Similarly, the higher odds (41%) of having home-based skilled birth attended delivery among women were observed among women from affluent households (AOR: 1.41, 95% CI: 1.33-1.49)than women from poor households.Women who had received 3+ Tetanus injections were 56% (AOR: 1.56, 95% CI: 1.43-1.69)more likely of having skilled birth delivery at home than women who did not receive Tetanus injections at all.Similarly, women who had 4+ antenatal care visits for their most recent delivery were 1.81 (AOR: 1.81, 95% CI: 1.71-1.92)times more likely to have home-based skilled birth attendance compared to those who had no antenatal care visits.Hindu women had 1.12 (AOR: 1.12, 95% CI: 1.07-1.18)times higher odds of using skilled birth attendants during delivery compared to Muslim women.The odds of having home-based delivery by skilled birth attendants among the woman during their first birth were 1.40 times (AOR: 1.40, 95% CI: 1.30-1.51)higher compared to their fifth born.
Discussion
The goal of this study was to investigate the disparities in Indian women's use of competent delivery help during labor at home and to identify the factors associated with the use of skilled birth attendance during delivery at home. We concluded that Log-Binomial Logistic Regression (LBLR) is the best-fitting model for this study after assessing AIC and BIC. According to the findings of the LBLR investigation, women who live in urban areas, are well educated, and have regular media exposure are more likely to use a competent birth attendant even when they give birth at home. The study also revealed that overweight women and women from affluent households in India are more likely to use SBA at home during childbirth. Women who receive prenatal care during pregnancy, receive a tetanus shot, and are in their earlier pregnancies are more likely to request SBA during childbirth at home. The literature also suggests that urban women are more likely to have skilled birth attendance compared to rural women [34,35]. The healthcare system in rural India does not have adequately trained support staff, which makes it difficult for rural people to access health care services when needed [36]. The relative unavailability of skilled birth attendance in rural areas might result in a lower incidence of giving birth in the presence of a skilled birth attendant at home among rural Indian women. Studies also suggest that women with a higher level of education are more likely to give birth in the presence of a skilled birth attendant [37,38], which is reconfirmed in our study. Women with higher education are expected to be more aware of the complications related to pregnancy and better able to understand the role of skilled birth attendance in safe delivery. Therefore, it is likely that educated women, even while giving birth at home, will avail themselves of the services of a skilled birth attendant.
It is also found in our study that Hindu women are more likely to have home-based skilled birth attendance compared to Muslim women.In the literature, religion has been identified as a significant predictor of having skilled birth attendance [39,40].A study also found that Muslim women are usually less likely to use skilled birth attendance and safe deliveries compared to Hindu women [41].It is difficult to explain the variation in the use of skilled birth attendance at home based on religion.Therefore, further research is warranted to explore how an individual's belief system is associated with decision on availing professional health care services like skilled birth attendance.
Despite the fact that a large percentage of educated women from wealthy Hindu families are considering SBA during childbirth, the number of women who are unaware of the benefits of SBA even during home-based childbirth is not insignificant which needs immediate actions by the government of India.
Our research further suggests that media exposure is positively associated with home based skilled birth attendance.Prior works suggests the same that women who are more exposed to mass media are more likely to give birth in the presence of a skilled birth attendant [42][43][44].Mass media helps people know about complicacies that may arise during giving birth and how a skilled birth attendant can be a life savior which probably led women with high media exposure to use skilled birth attendance at home as well.
This study found that overweight women are more likely to seek help from skilled birth attendants while giving birth at home compared to women who are underweight. Literature showing a relationship between BMI and skilled birth attendance is extremely scarce. A study conducted on women in Bangladesh found that women whose weight is not normal are more likely to use skilled birth attendance compared to women of normal weight [44]. High BMI is associated with a number of health risks such as diabetes, cardiovascular diseases, kidney diseases and stroke [45]. Women who are overweight and have a number of associated health complications are more likely to be under the supervision of a professional medical practitioner and, therefore, are more likely to be aware of the complications that may occur during pregnancy and childbirth. They are likely to receive suggestions to seek professional help even if they give birth at home, which may lead to a higher likelihood of their availing skilled birth attendance at home. Our research found that women from affluent households in India are more likely to ask for help from skilled birth attendants while giving birth at home compared to women from poorer households. The literature, however, shows mixed results on the association between wealth index and use of skilled birth attendance. For example, one study found that women who belong to rich households have higher odds of having skilled birth attendance than women from poor families, which is in line with our findings [46]. Contrary to that, another study found that poorer women are more likely to use skilled birth attendance at home compared to women from rich households [47]. Therefore, we do not have a plausible explanation for why women from affluent households are more likely to use skilled birth attendance at home, and through this study we call for further research on this.
Generally, low-income families choose to give birth at home with the help of unskilled birth attendants because of the low cost and the possibility of negotiating payment [48]. This behavior can be reduced, especially for the poor, by lowering out-of-pocket costs for institutional delivery [49]. Efforts should also be made to improve the training of these birth attendants so that they can assist with childbirth at home and make referrals to the nearest healthcare institution when necessary [50].
Our study also found that women who received tetanus (TT) injections were more likely to seek assistance from skilled birth attendants at home than women who did not receive tetanus shots at all. Research on the relationship between receiving TT injections and skilled birth attendance is extremely limited. A study found that TT injection can reduce neonatal mortality [51]. Tetanus is a disease that is still prevalent across the world and can lead to fatal outcomes [52,53]. A mother who is conscious of neonatal mortality and the other fatal outcomes associated with tetanus, and therefore takes tetanus shots, has a high level of awareness and is likely to be careful about safe delivery as well. Therefore, a mother who receives a tetanus injection should be more likely to use skilled birth attendance while giving birth at home. Our study also found that women who receive antenatal care (ANC) services are more likely to have home-based skilled birth attendance, which is in line with the literature showing a positive relationship between the number of ANC visits and availing skilled birth attendance [54,55]. Women who avail ANC services and those who use skilled birth attendance largely belong to the same cohort [56,57]. Therefore, it is likely that women who avail ANC services will also opt for skilled birth attendance. Efforts should be made to make SBA conveniently accessible, along with prenatal and postnatal care services during the first 24 hours after birth, to minimize the high maternal mortality rate in low-income countries such as India [10,58].
Findings from our study also suggest that birth order has a significant relationship with availing skilled birth attendance while giving birth at home. Women in their early pregnancies are more likely to use home-based skilled birth attendance than women in later pregnancies. Similar studies have also found that with later pregnancies, women are less likely to ask for skilled birth assistance [44,59]. During the first couple of pregnancies, women may feel insecure because they do not have prior experience of childbirth and may tend to be extra cautious about safe delivery. Once they become accustomed to the procedures related to childbirth, they may feel that they do not need expert handling of their pregnancies, which may lead to lower usage of skilled birth attendance at home among Indian women.
Strengths and limitations
This study is one of a kind in that it explored factors that influence availing the assistance of a skilled birth attendant while giving birth at home, unlike other studies that explored factors related to availing institutional skilled birth attendance. The novelty of our study lies in identifying some factors that have not been investigated thoroughly in prior studies, such as BMI, tetanus injection status, and birth order, using nationally representative data. However, there are some limitations associated with this study. The DHS data used in this study encompassed a wide range of locations and time points, which introduced selection bias. Furthermore, each variable was divided into two categories before the odds ratio (OR) was calculated. In addition, we were unable to incorporate all potential risk factors, such as the cost of delivery, geographical location, and the availability of other medical schemes.
Further research should investigate these newly identified factors and should also attempt to explain how belief systems, BMI, tetanus injection status, and birth order affect decisions on availing skilled birth attendance at home. The study calls for public campaigns and social mobilization to raise awareness among the general populace so that skilled birth attendance is sought even in cases where women give birth at home. This study also urges greater access to healthcare services so that home-based SBA takes place to a larger extent. Appropriate policy interventions by the government therefore have a key role to play here.
Conclusions
This study explored the factors affecting home-based skilled birth attendance among Indian women. Data were taken from the Indian Demographic and Health Survey 2015-16, also termed the National Family Health Survey-4 (NFHS-4). Analysis of the data suggests that a number of factors, including place of residence, level of education, religion, media exposure, body mass index, household wealth, tetanus injection status, number of antenatal care visits, and birth order, affect the use of skilled birth attendance by Indian women while giving birth at home. The important strategy for India, as demonstrated by earlier studies, is to increase the rate and availability of skilled birth attendants during childbirth at home in order to lessen the burden of maternal and child mortality and to achieve maternal and child health-related goals [60-63].
Table 5 shows that the Log-Binomial Logistic Regression model is favored over the Binomial Logistic Regression model on the basis of two goodness-of-fit measures, AIC and BIC: both values are smaller for the Log-Binomial Logistic Regression model than for the Binomial Logistic Regression model.
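The model comparison summarized above can be reproduced in outline with standard GLM software. The sketch below is illustrative only: it uses simulated stand-in data and Python's statsmodels (the paper does not state which package was used), fitting the same binary outcome with a logit link and with a log link, then ranking the two fits by AIC and BIC.

```python
# Illustrative sketch (not the authors' code): compare a logit-link and a
# log-link binomial GLM on the same binary outcome, then rank them by AIC/BIC.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data standing in for the NFHS-4 variables used in the paper.
n = 1000
df = pd.DataFrame({
    "urban": rng.integers(0, 2, n),
    "educated": rng.integers(0, 2, n),
    "media_exposure": rng.integers(0, 2, n),
})
eta = -1.0 + 0.6 * df["urban"] + 0.5 * df["educated"] + 0.4 * df["media_exposure"]
df["sba_at_home"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(df[["urban", "educated", "media_exposure"]])
y = df["sba_at_home"]

# Binomial logistic regression (logit link).
logit_fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Log-binomial regression (log link on the binomial family); log-binomial
# fits can fail to converge, so modest starting values are supplied.
logbin_fit = sm.GLM(
    y, X, family=sm.families.Binomial(sm.families.links.Log())
).fit(start_params=[-1.3, 0.1, 0.1, 0.1])

for name, fit in [("logit", logit_fit), ("log-binomial", logbin_fit)]:
    print(f"{name:>12}: AIC={fit.aic:.1f}  BIC={fit.bic_llf:.1f}")
# The model with the smaller AIC and BIC is preferred, as in Table 5.
```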
UCP3 (Uncoupling Protein 3) Insufficiency Exacerbates Left Ventricular Diastolic Dysfunction During Angiotensin II‐Induced Hypertension
Background Left ventricular diastolic dysfunction, an early stage in the pathogenesis of heart failure with preserved ejection fraction, is exacerbated by joint exposure to hypertension and obesity; however, the molecular mechanisms involved remain uncertain. The mitochondrial UCP3 (uncoupling protein 3) is downregulated in the heart with obesity. Here, we used a rat model of UCP3 haploinsufficiency (ucp3+/‐) to test the hypothesis that decreased UCP3 promotes left ventricular diastolic dysfunction during hypertension. Methods and Results Ucp3+/‐ rats and ucp3+/+ littermates fed a high‐salt diet (HS; 2% NaCl) and treated with angiotensin II (190 ng/kg per min for 28 days) experienced a similar rise in blood pressure (158±4 versus 155±7 mm Hg). However, UCP3 insufficiency worsened diastolic dysfunction according to echocardiographic assessment of left ventricular filling pressures (E/e’; 18.8±1.0 versus 14.9±0.6; P<0.05) and the isovolumic relaxation time (24.7±0.6 versus 21.3±0.5 ms; P<0.05), as well as invasive monitoring of the diastolic time constant (Tau; 15.5±0.8 versus 12.7±0.2 ms; P<0.05). Exercise tolerance on a treadmill also decreased for HS/angiotensin II‐treated ucp3+/‐ rats. Histological and molecular analyses further revealed that UCP3 insufficiency accelerated left ventricular concentric remodeling, detrimental interstitial matrix remodeling, and fetal gene reprogramming during hypertension. Moreover, UCP3 insufficiency increased oxidative stress and led to greater impairment of protein kinase G signaling. Conclusions Our findings identified UCP3 insufficiency as a cause for increased incidence of left ventricular diastolic dysfunction during hypertension. The results add further support to the use of antioxidants targeting mitochondrial reactive oxygen species as an adjuvant therapy for preventing heart failure with preserved ejection fraction in individuals with obesity.
H eart failure with preserved ejection fraction (HFpEF) accounts for more than half of all heart failure cases and bears unacceptably high 5-year hospital readmission and mortality rates. 1 Left ventricular diastolic dysfunction (LVDD) is an early but underinvestigated stage in the pathogenesis of HFpEF. 2 Along with hypertension, obesity and obesity-related metabolic disorders represent major preventable causes for LVDD. 3,4 Thus the prevalence of LVDD ranges between 20% to 30% in the general adult population but can go as high as 65% in elderly individuals with hypertension, obesity, and diabetes. 2 Although there is some clinical evidence to support a role for an interaction between hypertension and obesity-related metabolic derangements in the exacerbation of LVDD, 5,6 the nature of the molecular mechanisms at play remains uncertain. UCP3 (uncoupling protein 3) is a mitochondrial anion carrier protein predominantly expressed in brown adipose tissue, skeletal muscle, and the heart. 7 While the physiological functions of the protein have not been clearly delineated, muscle mitochondria lacking UCP3 have been shown to generate more superoxide anions under stimulated respiration. 8 Hearts from UCP3 knockout mice also generate more reactive oxygen species (ROS) in response to ischemia/reperfusion, leading to further deterioration of contractile function following myocardial infarction. 9 Using a rat model of UCP3 haploinsufficiency (ucp3 +/-) we demonstrated that, similar to the knockout experiments, partial loss of UCP3 is associated with a greater generation of mitochondrial ROS in cardiomyocytes and the exacerbation of left ventricular contractile dysfunction at reperfusion following ischemia. 10 All together, these observations support the notion that UCP3 deficiency exacerbates myocardial cell injury in pathological conditions associated with enhanced mitochondrial ROS production. 7,10 Oxidative stress plays a key role in the development of diastolic dysfunction and the pathogenesis of HFpEF. 11 Obesity-induced hypertension is characterized by activation of the sympathetic nervous and renin-angiotensin-aldosterone systems. On the one hand, increased myocardial energy demand driven by the increases in workload and β-adrenergic receptor activation stimulates mitochondrial ROS production. 12 On the other hand, angiotensin II (Ang II) directly signals through the Ang II receptor type 1 to increase ROS generation from both nicotinamide adenine dinucleotide phosphate (NADPH) oxidases and mitochondrial sources. 13 We and others have previously reported that UCP3 is down-regulated with obesity and type 2 diabetes. 10,14,15 Therefore, we propose that UCP3 insufficiency is 1 of the mechanisms linking obesity to accelerated development of LVDD during hypertension through exacerbation of myocardial oxidative stress.
To test our hypothesis, non-obese and metabolically normal male ucp3+/- rats were subjected to chronic elevation in blood pressure by slow-pressor angiotensin II infusion under high dietary salt intake (HS/Ang II), a well-established method for induction of neurogenic hypertension. 16 Cardiac function was evaluated with transthoracic echocardiography and invasive left ventricular (LV) pressure measurements. Mechanistic causes for LV dysfunction were further sought by histology and through targeted analyses of cardiac primary transcripts and proteins.
METHODS
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Animals and Diets
Generation of the Sprague Dawley rat model of UCP3 haploinsufficiency has been reported previously. 10 Because lack of UCP3 has no impact on adiposity or insulin sensitivity in rats maintained under conventional housing conditions, 17 the model allowed testing of the present hypothesis independently from the confounding effects of obesity and diabetes. Male UCP3-insufficient rats (ucp3+/-) and wild-type controls (ucp3+/+) were obtained from the same litters by non-brother-sister mating of heterozygous knockout rats. Animals were housed in the animal facilities of the Center for Comparative Research at the University of Mississippi Medical Center on a 12-hour light/12-hour dark cycle at a temperature of 22 °C ±2 °C and 40%-60% humidity. Rats (n=45 ucp3+/- and 39 ucp3+/+) were fed a nonpurified standard laboratory rodent diet (Research Diets). Drinking water was supplied ad libitum until the end of the study protocol (Figure 1). The study complied with the Guide for the Care and Use of Laboratory Animals and was approved by the Institutional Animal Care and Use Committee (protocol #1436A). All efforts were made to minimize animal suffering and to reduce the number of animals used. Four rats in the HS/Ang II treatment groups (2 ucp3+/+ rats and 2 ucp3+/- rats) died before the end of the protocol, and the partial data on these animals were not included in the analyses.
Blood Pressure and Heart Rate Measurements
Mean arterial blood pressure and heart rate were measured in conscious rats using the CODA non-invasive blood pressure system (Kent Scientific, Torrington, CT). Animals were subjected to tail-cuff measurements for 5 consecutive days for baseline values before osmotic pump implantation, and then for 2 consecutive days every week following initiation of Ang II or 0.9% saline treatment. Mean values of blood pressure and heart rate were calculated for each time point and used for statistical analyses.
Transthoracic Echocardiography
Echocardiographic exams were performed under isoflurane anesthesia using a Vevo 3100 Imaging System (FUJIFILM VisualSonics, Toronto, Ont). The amount of isoflurane dispensed (1%-2% isoflurane in 100% O2) was individually adjusted to maintain similar heart rates between rats. Body temperature was maintained within physiological range (36.0 °C-37.5 °C) throughout the procedure using a dedicated heating pad. B-Mode and M-Mode images were obtained from the parasternal long-axis view and used to calculate parameters including aortic root diameter, left ventricular anterior wall thickness at end-diastole, and LV internal diameter at end-diastole.

Figure 1 (study protocol). A first cohort of ucp3+/- and ucp3+/+ rats (red; n=9-16 per group) was used to assess in vivo the evolution of the mean arterial blood pressure (weekly measurements) as well as cardiac structure and function (transthoracic echocardiography; bi-weekly measurements) over the course of a combination treatment consisting of high dietary salt intake (2% NaCl) and chronic Ang II infusion (190 ng/kg per min; bottom panel). Control animals were maintained on a low-salt diet and infused with 0.9% saline (top panel). Cardiac tissue was collected after euthanasia for further molecular analyses. The protocol was repeated on a second cohort of ucp3+/- and ucp3+/+ rats (blue; n=6 per group) to determine the impact of the treatment on exercise tolerance on a treadmill and left ventricular pressure. Numbers only include rats that completed the experimental protocol. Ang II indicates angiotensin II; ExT, exercise tolerance on a treadmill; MAP, mean arterial pressure; qPCR, quantitative polymerase chain reaction; and UCP3, uncoupling protein 3.
Invasive Hemodynamic Monitoring
Invasive monitoring of left ventricular diastolic function was conducted with a 2-French Mikro-Tip catheter transducer (Millar, Houston, TX) connected to a PowerLab data acquisition system (ADInstruments, Colorado Springs, CO). In brief, rats were anesthetized with isoflurane and the amount of isoflurane dispensed (1%-2% isoflurane in 100% O2) was individually adjusted to maintain similar heart rate between animals. A small incision was made through the diaphragm of the rats to insert the tip of the catheter into the LV through the apex of the heart. The left ventricular diastolic time constant, Tau, was calculated with LabChart 8 software (ADInstruments) following the Weiss method.
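As a point of reference, the Weiss method estimates tau by fitting a mono-exponential decay to LV pressure during isovolumic relaxation, so that ln(P) falls linearly with time and tau is the negative reciprocal of the slope. The following is a minimal numerical sketch of that fit on synthetic data; it is not the LabChart 8 implementation used in the study.

```python
# Minimal sketch of the Weiss method for the diastolic time constant (tau):
# fit ln(P) = ln(P0) - t/tau over the isovolumic relaxation interval, so that
# tau = -1/slope. Illustration only, on synthetic data.
import numpy as np

def tau_weiss(t_ms: np.ndarray, p_mmhg: np.ndarray) -> float:
    """t_ms, p_mmhg: pressure samples from dP/dt_min down to just above LVEDP."""
    if np.any(p_mmhg <= 0):
        raise ValueError("Weiss fit requires strictly positive pressures")
    slope, _intercept = np.polyfit(t_ms, np.log(p_mmhg), 1)
    return -1.0 / slope

# Synthetic example: a decay with a true tau of 15 ms plus a little noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 40, 81)                      # ms
p = 60.0 * np.exp(-t / 15.0) * (1 + 0.01 * rng.standard_normal(t.size))
print(f"estimated tau = {tau_weiss(t, p):.1f} ms")
```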
Real-time Polymerase Chain Reaction
Total RNAs were extracted using RNeasy Fibrous Tissue Kit and their integrity checked on QIAxcel Advanced System (QIAGEN, Germantown, MD). One microgram of total RNA was reverse-transcribed with RevertAid reverse transcriptase using random hexamers as per manufacturer's instructions (Thermo Fisher Scientific, Waltham, MA). TaqMan gene expression assays were used to perform relative quantification of mRNAs encoding atrial natriuretic peptide (Nppa); myosin heavy chain alpha, Myh6; myosin heavy chain beta, Myh7; and sarcoplasmic/ endoplasmic reticulum Ca2+ ATPase 2 (SERCA2; Atp2a2) with the standard curve method. Gene expression levels were normalized with quantification of peptidylprolyl isomerase A (Cyclophilin A; Ppia) as the housekeeping gene.
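For readers unfamiliar with the standard curve method, the sketch below illustrates the general calculation: a dilution series is used to fit Ct against log10(input), sample quantities are back-calculated from their Ct values, and the target is normalized to the housekeeping gene. All Ct values and input amounts here are hypothetical and are not taken from the study.

```python
# Generic sketch of the standard-curve method for relative quantification:
# fit Ct against log10(input) for a dilution series, back-calculate sample
# quantities from their Ct values, then normalize the target (e.g., Nppa)
# to the housekeeping gene (Ppia). All numbers below are made up.
import numpy as np

def quantity_from_ct(ct_standards, qty_standards, ct_sample):
    """Interpolate a sample quantity from a Ct standard curve."""
    slope, intercept = np.polyfit(np.log10(qty_standards), ct_standards, 1)
    return 10 ** ((ct_sample - intercept) / slope)

qty_std = np.array([1e1, 1e2, 1e3, 1e4, 1e5])               # arbitrary inputs
ct_std_nppa = np.array([30.1, 26.8, 23.4, 20.1, 16.7])      # hypothetical Cts
ct_std_ppia = np.array([29.5, 26.2, 22.9, 19.6, 16.3])

nppa = quantity_from_ct(ct_std_nppa, qty_std, ct_sample=24.0)
ppia = quantity_from_ct(ct_std_ppia, qty_std, ct_sample=21.5)
print(f"normalized Nppa expression = {nppa / ppia:.2f}")
```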
Histology and Immunofluorescence
Formalin-fixed tissue samples were embedded in paraffin, serially cut into 5-μm-thick sections and processed for Masson Trichrome staining at AML Laboratories (Jacksonville, FL). Separate tissue sections were incubated with fluorescein-labeled wheat germ agglutinin (Vector Laboratories, Burlingame, CA) for 1 hour before staining of cell nuclei with 4',6-diamidino-2-phenylindole. Sections were imaged on a Lionheart FX Automated Microscope and analyzed with Gen5 data collection and analysis software (BioTek Instruments, Winooski, VT).
Transmission Electron Microscopy
Left ventricular tissue samples (≈1 mm 3 ) were quickly dissected following euthanasia and immediately fixed in glutaraldehyde. After thin sectioning (70 nm in thickness) and application on copper grids, the stained samples were loaded in a JEM-1400Plus transmission electron microscope (JEOL USA, Peabody, MA) for data acquisition. The entire tissue sections were thoroughly viewed at low magnification (×300) to ensure integrity and quality of stained tissues before image acquisition. At least five randomly picked fields per sample were examined at higher magnifications.
Statistical Analysis
Data are expressed as mean±SEM. GraphPad Prism Software (ver 9.0; La Jolla, CA) was used for the statistical analysis. Blood pressure and heart rate were statistically analyzed with use of a 2-way repeated measures ANOVA followed by the Tukey test using time and treatment group as the main factors, and with inclusion of the interaction between these 2 factors. All other multiple comparisons were performed with use of a 2-way ANOVA followed by the Tukey test using genotype and treatment as the main factors. Pairwise comparisons were performed using a 2-tailed paired t-test. P<0.05 was considered significant.
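The study performed these analyses in GraphPad Prism. As an illustration only, the sketch below runs an analogous two-way ANOVA (genotype x treatment) followed by a Tukey test in Python on simulated data; it does not reproduce the repeated-measures analysis used for blood pressure and heart rate, and all values are made up.

```python
# Sketch of a genotype x treatment analysis in Python (illustrative stand-in
# for the GraphPad Prism workflow): two-way ANOVA followed by Tukey's test
# on simulated E/e' values.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = [("wt", "saline", 10.0), ("wt", "hs_angii", 15.0),
          ("het", "saline", 10.5), ("het", "hs_angii", 19.0)]
rows = [
    {"genotype": g, "treatment": t, "e_over_e": rng.normal(mu, 1.5)}
    for g, t, mu in groups for _ in range(10)
]
df = pd.DataFrame(rows)

# Two-way ANOVA with the genotype x treatment interaction.
model = ols("e_over_e ~ C(genotype) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD across the four genotype/treatment cells.
df["cell"] = df["genotype"] + "/" + df["treatment"]
print(pairwise_tukeyhsd(df["e_over_e"], df["cell"]))
```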
Blood Pressure Response to HS/Ang II Treatment Is Not Affected by UCP3 Insufficiency
Non-invasive tail-cuff measurements demonstrated that mean arterial pressure of 10-week-old male ucp3+/- rats maintained on a low-salt diet (124±3 mm Hg; n=15) was comparable with that of the ucp3+/+ controls (121±4 mm Hg; n=9), and the values remained unaffected by 4 weeks of saline infusion. While high dietary salt intake alone did not have any impact on blood pressure either, mean arterial pressure started to increase similarly in both genotypes after the beginning of Ang II infusion (Figure 2A). Blood pressure rose sharply by an average of 29 mm Hg (P<0.01) at the end of the first week of HS/Ang II treatment before reaching a plateau at the end of the second week. At the end of the fourth week of treatment, blood pressure in the ucp3+/- rats (158±4 mm Hg) was still comparable with that of the ucp3+/+ rats (155±7 mm Hg; Figure 2B). Heart rate was stable over time and remained unaffected by treatments until the end of the experiment (Figure 2C). Aortic root size increased to the same extent in ucp3+/+ and ucp3+/- rats at completion of the HS/Ang II treatment (Figure 2D). As expected, UCP3 levels were decreased by ≈50% in hearts of ucp3+/- rats. Interestingly, HS/Ang II treatment further decreased cardiac UCP3 levels in ucp3+/- rats (Figure 2E).
UCP3 Insufficiency Increases LV Wall Thickness in Hypertensive Rats
The decrease in UCP3 levels had no impact on the body weight of adult rats maintained on a low-salt diet and infused with saline (Figure 3A). Rats from both genotypes also maintained normal body weight when subjected to HS/Ang II but experienced a similar increase in total heart weight (Figures 3B and 3C). While transthoracic echocardiography confirmed the comparable increase in LV mass on HS/Ang II, the analysis revealed greater thickening of the LV anterior and posterior walls and a trend toward a greater reduction in LV internal diameter (P=0.06) for the ucp3+/- rats when compared with their respective non-treated controls. This observation was supported at the cellular level by a greater increase in myocyte width (+24% for ucp3+/- versus +10% for ucp3+/+; P<0.01). Conversely, the length of ventricular myocytes was not further affected by UCP3 insufficiency (Figure 3D).
UCP3 Insufficiency Exacerbates Development of LVDD Associated With HS/Ang II-Induced Hypertension
Consistent with our previous study, 10 all baseline LV functional parameters were normal for ucp3+/- rats (Table). While indices of systolic function remained unaffected over the whole duration of the experimental protocol, the diastolic function parameters describing LV filling pressure (E/e') and LV relaxation (isovolumic relaxation time) increased after 2 weeks of HS/Ang II treatment for the ucp3+/+ rats. This increase was maintained until the end of the protocol and coincided with impairment of global LV function as determined by an increased myocardial performance index. The ucp3+/- rats experienced a similar rise in E/e', isovolumic relaxation time, and myocardial performance index values after 2 weeks on HS/Ang II. However, those indices continued to deteriorate and became more elevated than those of the ucp3+/+ rats at completion of the protocol (Figure 4). Invasive monitoring of LV pressure also revealed a greater increase in Tau for ucp3+/- rats subjected to HS/Ang II (Table and Figure 4E).
UCP3 Insufficiency Promotes Exercise Intolerance in Hypertensive Rats
Decreased diastolic function is the strongest echocardiographic predictor of impaired exercise tolerance. 18 Baseline exercise capacity was similar between all 4 experimental groups. In addition, exercise capacity remained unchanged at the end of the experimental protocol for animals in the control groups and for HS/Ang II-treated ucp3+/+ rats. Conversely, ucp3+/- rats on HS/Ang II experienced a decrease in maximum running speed.
UCP3 Insufficiency Promotes Detrimental Cardiac Remodeling in Response to Hypertension
There was a greater increase in collagen deposition in hearts from HS/Ang II-treated ucp3+/- rats when compared with control animals from both genotypes (+170%) or the HS/Ang II-treated ucp3+/+ rats (+40%; Figure 6A). In addition, a closer examination of the coronary microvasculature pointed to greater expansion of the perivascular interstitium, which was for the most part filled with ground substance (Figure 6B). Common markers of the fetal gene program include an increase in natriuretic peptide expression, a decrease in sarcoendoplasmic reticulum calcium transport ATPase isoform 2 (SERCA2) mRNA, and myosin heavy chain switching from the α to the β isoform. All of those markers were identified by real-time PCR in the hearts.
UCP3 Insufficiency Amplifies the Oxidative Stress and Impairment of Protein Kinase G Signaling Associated With Hypertension
4-Hydroxynonenal-mediated protein modification is a ROS-induced toxic process occurring when lipid peroxides react with amino acid side chains. 19 Interestingly, while hypertension increased hydroxynonenal-mediated modification of a ≈20 kDa protein to a similar level in all animals, the ucp3+/- genotype specifically led to hydroxynonenal-mediated modification of another protein with a molecular weight exceeding 250 kDa (Figure 8A). Cysteine sulfenation of proteins also increased 2-fold in hearts of hypertensive ucp3+/- rats (Figure 8B). Oxidative stress predisposes to low myocardial protein kinase G (PKG) activity. 20 Accordingly, higher oxidative stress in hearts of the hypertensive ucp3+/- rats was accompanied by decreased phosphorylation of the PKG target site Ser23/24 on cardiac troponin I (cTnI), a posttranslational modification that has been linked to impaired relaxation of cardiomyocytes. 21 Conversely, the phosphorylation of cTnI at Ser150, which is independent from PKG signaling, was not affected by UCP3 insufficiency or the presence of hypertension (Figure 8C). Because Ser23/24 can also be phosphorylated by protein kinase A, 21 PKG-specific phosphorylation of vasodilator-stimulated phosphoprotein on Ser239 was also quantified and found to be decreased specifically in hearts of HS/Ang II-treated ucp3+/- rats (Figure 8D).
DISCUSSION
Impaired ventricular diastolic function is exceedingly common in hypertensive patients with obesity and diabetes, 5,6 yet the reasons for this negative synergy remain incompletely understood. In the present study we demonstrate that a partial loss of UCP3, such as has been observed with obesity and type 2 diabetes, is sufficient per se to exacerbate the development of LVDD and exercise intolerance during hypertension. Mechanistically, UCP3 insufficiency led to the amplification of detrimental remodeling events that have been linked to increased myocardial stiffness and impaired LV relaxation. Those events included excess LV wall thickening, alterations in the composition and expansion of the extracellular matrix (ECM), reactivation of the fetal gene program, and impairment of PKG signaling. Based on our previously published data and work from others, 8-10 we propose increased oxidative stress as the root cause for exacerbation of this adverse myocardial remodeling.
Intrinsic LV abnormalities such as LV hypertrophy play a key role in the development of diastolic dysfunction. It is well established that increased LV wall thickness is an independent predictor of diastolic stiffness. 22 Hence, a worsening of LV concentric hypertrophy caused by a greater enlargement of cardiomyocytes may be part of the mechanism by which UCP3 insufficiency precipitates the impairment of diastolic function in our experimental model. While complete loss of UCP3 has previously been suggested to stimulate cardiac hypertrophy through aggravation of high-salt-induced hypertension in mice, 23 UCP3 insufficiency was not associated with an increased blood pressure response to HS/Ang II. Although we cannot rule out potential changes in the effects of sodium and Ang II on cardiomyocytes, it is more likely that UCP3 insufficiency exerted an additive pro-hypertrophic effect through increased ROS generation. Indeed, ROS-mediated stimulation of cardiomyocyte growth may occur through inhibition of cGMP-PKG signaling and through activation of the extracellular signal-regulated kinases 1 and 2 (ERK1/2). 24,25 Even though the modulation of ERK1/2 signaling was not investigated here, our results clearly demonstrate that UCP3 insufficiency contributed to impaired PKG activity in the pressure-overloaded heart.
Changes in the myocardial ECM network, including an increase in interstitial and perivascular fibrosis and an expansion of the interstitial proteoglycan pool, are other well-described causative factors in the impairment of LV compliance and diastolic dysfunction associated with pressure overload, and those detrimental alterations were clearly accelerated with UCP3 insufficiency. 26 Because excess ROS generation has been implicated in the activation of a fibrotic response in the diseased and aging heart, 26,27 it is plausible that increased oxidative stress induced by lack of UCP3 acted as a central mediator for accelerated remodeling of the ECM in the pressure-overloaded heart. In addition to the exacerbated ECM remodeling, our gene expression analyses also revealed enhanced reactivation of the fetal gene program. Although return to the fetal gene program may initially help the adult heart adapt to a variety of stress conditions, its long-term activation is detrimental to contractility, calcium handling, and myocardial energetics and eventually contributes to heart failure. 28 Thus, while the transition from the α- to the β-myosin heavy chain isoform is energetically advantageous, the shift lowers the contribution of the atrial contraction to filling of the ventricle, which compromises diastolic function at higher heart rates. 29 Decreased expression and activity of the SERCA pump could also lead to a slowed rate of Ca2+ reuptake by the sarcoendoplasmic reticulum and prolongation of muscle relaxation. 30 Moreover, elevated plasma atrial natriuretic peptide levels have been associated with early LVDD in patients with diabetes. 31 This transcriptional remodeling has been associated with increased myocardial oxidative stress, and although causality is not clearly established, it is noteworthy that deletion of nuclear factor erythroid 2-related factor 2, a master regulator of the endogenous antioxidant defense system, is sufficient to exacerbate return to the fetal gene program in hearts of mice subjected to pressure overload by constriction of the transverse aorta. 32,33 Therefore, accelerated return to the fetal gene program in hypertensive ucp3+/- rats may also be rooted in the excessive ROS generation caused by partial loss of the mitochondrial anion carrier. Lastly, ROS can affect cardiomyocyte relaxation through direct and indirect modification of certain amino acids, thereby resulting in loss of protein functions and interruption of normal regulatory signals. 34 Thus, an increase in protein-bound carbonyls driven by the generation of lipid peroxides and subsequent formation of lipid-protein adducts has been linked to the modification of at least 167 proteins from the cytoskeleton, ECM, cell adhesion and junction components, and ion channels, including several regulators of cellular Ca2+ homeostasis. 35 The mutual interplay between Ca2+ and ROS signaling systems and its contribution to impairment of myocardial relaxation is particularly well described. 36 For example, irreversible oxidative sulfonation of Cys674 in SERCA has been associated with decreased activity of the protein in senescence- and diabetes-related conditions. 37,38 Another protein well known to be affected by high oxidative stress in the myocardium is PKG, and decreased phosphorylation of PKG targets located in the sarcoplasmic reticulum (phospholamban) and sarcomeres (titin, cTnI) has been consistently associated with impaired diastolic calcium reuptake, increased myocyte stiffening, and a decreased cell relaxation rate. 20
Therefore, by showing that the LV of ucp3+/- rats exhibited greater ROS-mediated modification of certain proteins as well as an impairment of PKG-mediated regulation of cardiac contractile components, our results unambiguously support a role for UCP3 insufficiency in the pathogenesis of LVDD during hypertension.
Because our experimental protocol was limited to the study of male rats, whether UCP3 insufficiency similarly promotes LVDD in females remains to be determined. Female hearts are reportedly less susceptible to oxidative stress, 41,42 and although LV contractile recovery following myocardial ischemia/reperfusion was similarly impaired in hearts from male and female ucp3+/- rats, 10 a role for UCP3 in the sex difference in diastolic function during hypertension cannot be ruled out at this time.

Table (footnote). E/e' indicates ratio of E velocity to early diastolic mitral annulus velocity; HR, heart rate; HS/Ang II, high dietary salt intake and Ang II infusion; IVRT, isovolumic relaxation time; LV, left ventricular; LS/Saline, low-salt diet and infused with saline; LVAWd, left ventricular anterior wall thickness at end-diastole; LVEF, left ventricular ejection fraction; LVFS, left ventricular fractional shortening; LVIDd, left ventricular internal diameter at end-diastole; LVPWd, left ventricular posterior wall thickness at end-diastole; MPI, myocardial performance index; and ucp3, uncoupling protein 3. Data were analyzed by 2-way ANOVA with Tukey test. P<0.05 vs * LS/Saline ucp3+/+, † LS/Saline ucp3+/-, and ‡ HS/Ang II ucp3+/+. n=9-16 for all parameters except for Tau Weiss (n=6).

Figure 8. Exacerbation of oxidative stress and downregulation of protein kinase G signaling in the left ventricle of ucp3+/- rats during hypertension. A, Quantification of 4-hydroxynonenal protein adducts, (B) total protein sulfenation amounts, (C) phosphorylation levels of cardiac troponin I at Ser23/24 and Ser150, and (D) phosphorylation levels of vasodilator-stimulated phosphoprotein at Ser239 in the left ventricle of ucp3+/- and ucp3+/+ rats after 4 weeks of a combination treatment with high dietary salt intake and Ang II infusion. Control animals were fed a low-salt diet and infused with saline. Protein amounts were normalized to HSP60 (heat shock protein 60) or cardiac troponin I protein levels. Data are expressed as mean±SEM. Data were analyzed by 2-way ANOVA with Tukey test. cTnI indicates cardiac troponin I; HS/Ang II, high dietary salt intake and Ang II infusion; HSP60, heat shock protein 60; LS/Saline, low-salt diet and infused with saline; p-cTnI, phospho-cardiac troponin I; p-VASP, phospho-vasodilator-stimulated phosphoprotein; and UCP3, uncoupling protein 3. P<0.05 vs * LS/Saline ucp3+/+, † LS/Saline ucp3+/-, and ‡ HS/Ang II ucp3+/+.
Mechanisms of Oxidative Stress in Metabolic Syndrome
Metabolic syndrome is a cluster of conditions associated with the risk of diabetes mellitus type 2 and cardiovascular diseases (CVDs). Metabolic syndrome is closely related to obesity. Increased adiposity promotes inflammation and oxidative stress, which are precursors of various complications involving metabolic syndrome components, namely insulin resistance, hypertension, and hyperlipidemia. An increasing number of studies confirm the importance of oxidative stress and chronic inflammation in the etiology of metabolic syndrome. However, few studies have reviewed the mechanisms underlying the role of oxidative stress in contributing to metabolic syndrome. In this review, we highlight mechanisms by which reactive oxygen species (ROS) increase mitochondrial dysfunction, protein damage, lipid peroxidation, and impair antioxidant function in metabolic syndrome. Biomarkers of oxidative stress can be used in disease diagnosis and evaluation of severity.
Introduction
Metabolic syndrome is characterized by the presence of several interconnected risk factors for type 2 diabetes (T2DM) and cardiovascular disease (CVD) [1]. The presence of metabolic syndrome increases the risk of developing T2DM by 5-fold, CVD by 2-fold, and the risk of all-cause mortality by 1.5-fold [2,3]. Metabolic syndrome is highly prevalent in the United States, with a prevalence of about 35% overall and almost half among those aged 65 years and older [2,4]. The risk factors of metabolic syndrome include increased waist circumference or belly fat, high plasma triglycerides, elevated blood pressure, high blood sugar, and low plasma high-density lipoprotein (HDL) [1]. If a patient has three of the five major risk factors, a diagnosis of metabolic syndrome is made [5].
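As a toy illustration of the "three of five" rule stated above, the function below counts risk factors against commonly used ATP III-style cutoffs. The specific thresholds are assumptions added for illustration; they are not specified in this review.

```python
# Toy illustration of the "three of five risk factors" rule described above.
# The cutoffs follow commonly used ATP III-style thresholds and are assumptions
# for illustration; they are not taken from this review.
def has_metabolic_syndrome(waist_cm, triglycerides_mgdl, hdl_mgdl,
                           systolic_bp, diastolic_bp, fasting_glucose_mgdl,
                           female: bool) -> bool:
    criteria = [
        waist_cm >= (88 if female else 102),          # central obesity
        triglycerides_mgdl >= 150,                    # high triglycerides
        hdl_mgdl < (50 if female else 40),            # low HDL cholesterol
        systolic_bp >= 130 or diastolic_bp >= 85,     # elevated blood pressure
        fasting_glucose_mgdl >= 100,                  # elevated fasting glucose
    ]
    return sum(criteria) >= 3

print(has_metabolic_syndrome(105, 180, 38, 135, 88, 92, female=False))  # True
```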
Although many factors contribute to the pathophysiology of metabolic syndrome, several studies show that oxidative stress, in conjunction with chronic inflammatory conditions, is at the core of the development of metabolic diseases [4,6,7]. The imbalance between oxidants and antioxidants, often tilted in favor of the oxidants, causes oxidative stress, which in turn disrupts redox signaling and regulation and produces molecular and cellular damage [8,9]. Metabolic syndrome is characterized by obesity-related problems, indicating a relationship between obesity and metabolic syndrome [10]. Inflammation and oxidative stress play a significant role in the development of metabolic comorbidities such as hyperlipidemia, high blood pressure, and increased glucose intolerance, all of which lead to metabolic dysfunction [10,11]. Several studies have shown that the risk for metabolic syndrome can be greatly reversed by reducing body weight and focusing interventions on dietary changes such as time-restricted eating, special diets such as the Mediterranean diet, increased physical exercise, improved sleep, or even reduced stress [2,3,5,12]. In this review we explore the mechanisms involving oxidative stress in metabolic syndrome. We discuss mechanisms associated with each component of metabolic syndrome and a few related risk factors.

ROS signaling may participate in normal physiological processes or contribute to maladaptive responses that result in metabolic dysfunction and inflammatory signaling, depending on the ROS source, cell type, and tissue environment [9,27,28]. The two major sources of ROS inside the cell are nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX) enzymes and the mitochondria [29]. The NOX enzymes are a family of enzymes (NOX1, NOX2, NOX3, NOX4, NOX5, DUOX1, and DUOX2) located in the cell membrane, and NOX2-NOX3 is important in most pathological conditions [29]. In the mitochondria, ROS are formed during oxidative phosphorylation by oxidizing reduced nicotinamide adenine dinucleotide (NADH) to NAD+ [30,31]. The superoxide anion that is produced by the mitochondria and NOX2 is rapidly converted by the enzyme superoxide dismutase into hydrogen peroxide (H2O2), which serves as a signaling molecule [32,33] (Figure 2). Hydrogen peroxide is a powerful oxidizing agent. For this reason, cells express antioxidant proteins, including peroxiredoxin, catalase, glutathione (GSH), and thioredoxin, that convert H2O2 to water [31,34]. The level of H2O2 must be strictly maintained; hence, its production must be equal to its reduction [9]. High H2O2 in the presence of free ferrous iron (Fe2+) produces hydroxyl radicals (•OH) in the Fenton reaction [9,28]. Through tightly controlled redox regulation, signaling, and sensing, ROS are essential for normal biological functions in physiologic settings [35]. Oxidative posttranslational modification (Ox-PTM), also known as oxidative protein modification, is a crucial molecular process that regulates proteins, which eventually affects the biological responses of cells [36]. Redox-sensitive proteins include ion transporters, receptors, signaling molecules, transcription factors, cytoskeletal structural proteins, and matrix metalloproteases [9].
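For reference, the Fenton reaction mentioned above is the textbook reaction of ferrous iron with hydrogen peroxide (standard chemistry, not drawn from this review):

Fe2+ + H2O2 → Fe3+ + OH− + •OH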
Proteins are normally targets of reversible Ox-PTM; however, in pathological conditions associated with oxidative stress, such as hypertension, proteins undergo irreversible Ox-PTM, which results in a loss of protein function and, as a consequence, cell damage, tissue injury, and failure of the target organs [37,38]. ROS, such as H2O2, are also essential for the activation of cellular pathways, including those that interact with vasoactive drugs such as angiotensin II (Ang II), endothelin-1 (ET-1), aldosterone, and prostanoids used to mediate cellular effects, and those that regulate intracellular calcium homeostasis [9]. ROS activate transcription factors such as hypoxia-inducible factor (HIF), which regulates angiogenesis; the phosphoinositide 3-kinase (PI3K) pathway, which regulates cellular growth; the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB) pathway, which under normal conditions prevents apoptosis by regulating cell survival; and the mitogen-activated protein kinase (MAPK) pathway, which regulates cellular proliferation [39]. ROS also stimulate the transcription of pro-inflammatory chemokine and cytokine production and the recruitment and activation of inflammatory and immune cells [40,41].
Overproduction of ROS can occur in pathological disorders such as obesity, insulin resistance, hyperglycemia, chronic inflammation, and dyslipidemia [40,42,43]. Oxidative stress is detrimental because the excess ROS induce cellular damage, specifically damaging DNA and peroxidizing lipids [44]. Lipids present in plasma, mitochondrial, and endoplasmic reticulum membranes are major targets of ROS attack and peroxidation among macromolecules [30,44]. The end products of lipid peroxidation, known as lipid peroxides, can be toxic to a cell and require removal by glutathione through an elusive mechanism [45]. Many studies have found that metabolic syndrome patients have lower plasma antioxidant enzyme activity and greater biomarkers of oxidative damage than healthy individuals, which may contribute to oxidative stress [46]. In the same manner, proteins and nucleic acids can be subject to peroxidation as well as nitrosylation [31]. Nevertheless, these end products are not usually directly toxic to the cell [31]. However, accumulation of inactive proteins can overload the ability of a cell to metabolize them and hence lead to damage of DNA, as they are capable of activating apoptosis [45]. In addition, the accumulation of modified proteins decreases their function, leading to a severe loss of normal cell activity [4,27,32,45]. The overproduction of ROS results in an oxidative stress environment, which also destabilizes redox signaling and control and leads to deleterious effects on gene expression, increases growth factors and stress response elements, and activates the apoptosis pathway [9,27]. The disrupted redox signaling also promotes pro-inflammatory and pro-fibrotic pathways, which affect insulin metabolic signaling and endothelial dysfunction and promote cardiovascular and renal inflammation and fibrosis, which contribute to target organ damage [9,47]. The mechanisms of ROS and their role in the development of metabolic syndrome are shown in Figure 3.
Figure 3. Mechanisms of metabolic syndrome. Under pathological conditions such as obesity, chronic inflammation, and hyperglycemia, excessive ROS generation can occur. ROS production occurs through the activation of enzymes in the cytosol, membrane, and mitochondria. An increase in the production of ROS and the depletion of antioxidants result in oxidative stress. The resulting oxidative stress leads to intracellular cell damage and altered redox, which leads to the irreversible accumulation of oxidation products, promoting endothelial dysfunction, which leads to insulin resistance, hypertension, dyslipidemia, and, subsequently, metabolic syndrome. ROS, reactive oxygen species; NOX2, nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX) enzymes.
Inflammation and Free Radical Production via Several Pathways
Obesity is a metabolic disorder characterized by either an excessive accumulation of body fat (BF) or an improper distribution of BF that is associated with adverse effects [32,48,49]. Obesity can be both a result of and a cause of oxidative stress [50]. Excessive intake of lipids, carbohydrates, and saturated fatty acids, particularly trans-fatty acids, stimulates specific intracellular pathways, leading to oxidative stress through superoxide generation via oxidative phosphorylation, glyceraldehyde autoxidation, protein kinase C activation, and activation of the polyol and hexosamine pathways [43,50,51]. Animal and cell culture studies have shown that oxidative stress may play a causative role in obesity by increasing pre-adipocyte proliferation, differentiation, and size, expanding white adipose tissue (WAT), and altering food intake [43,50].
Obesity can lead to systemic oxidative stress due to increased NOX activity and ER stress in adipocytes, as well as abnormal post-prandial metabolism, ROS generation, hyperleptinemia, chronic inflammation, tissue dysfunction, and low antioxidant defenses [50][51][52]. Oxidative stress and inflammation are closely linked to obesity. In adipocytes of obese individuals, there is activation of pro-inflammatory transcription factors, such as NF-κB and activator protein-1 (AP-1), which are redox-sensitive and trigger the release of inflammatory cytokines such as tumor necrosis factor alpha (TNF-α), interleukin-1β (IL-1β) and interleukin-6 (IL-6), which in turn enhance ROS production, creating a vicious circle [43,53,54]. Oxidative stress and inflammation are important components in the patho-physiology of obesity-related conditions such as atherosclerosis, insulin resistance, type 2 diabetes, and cancer [55].
Numerous mechanisms, including altered lipid and glucose metabolism (hyperglycemia), chronic inflammation, tissue dysfunction, hyperleptinemia, and aberrant postprandial ROS formation, have been proposed to increase oxidative stress in obese people [43,51,56]. Glycolysis and the tricarboxylic acid (TCA) cycle produce the electron donors nicotinamide adenine dinucleotide hydrogen (NADH) and reduced flavin adenine dinucleotide (FADH2) [57,58]. In overnutrition, excessive glucose increases metabolism via glycolysis and the TCA cycle, resulting in increased NADH and FADH2 formation in the mitochondrial electron transport chain [50,51,56]. The increased proton gradient causes electron leakage and causes reactive intermediates to produce superoxide anions in addition to those produced by activated NADPH oxidase [50,51,56]. Through the enzyme superoxide dismutase, superoxide is converted to hydrogen peroxide [59]. The free radical inhibits glyceraldehyde-3-phosphate dehydrogenase and consequently shifts upstream metabolites into four alternate pathways, which increase free radical generation or reduce antioxidant defenses, causing oxidative/nitrosative stress [50]. The four alternative pathways include the following: (1) Activation of the polyol pathway, which involves the reduction of glucose into sorbitol via aldose reductase, which uses NADPH, resulting in depletion of cytosolic NADPH and subsequently increased ROS production [60-62]. (2) Fructose-6-phosphate is converted to glucosamine-6-phosphate, which inhibits thioredoxin action and causes oxidative and ER stress [50]. (3) Triose phosphates produce methylglyoxal, the main precursor of advanced glycation end products (AGEs) [63]. AGEs activate NOX pathways, which increase the production of ROS/reactive nitrogen species (RNS), whereas NF-κB alters gene expression and causes transcription of pro-inflammatory cytokines (including TNF-α and IL-6), adhesion molecules, microRNAs (miR), and inducible nitric oxide synthase (iNOS), which are implicated in adipogenesis, inflammation, and oxidative stress [47,63,64]. (4) Dihydroxyacetone phosphate is converted to diacylglycerol, which activates the protein kinase C (PKC) pathway, which plays a vital role in the development of cardiovascular complications via its activation of MAPK cascades (Figure 4) [51,55,65].
Obesity is linked to an increase in plasma free fatty acids (FFA) and excessive fat storage in white adipose tissue (WAT) [66,67]. The pathological increase in serum FFA levels caused by excessive fat accumulation in obese people impedes glucose metabolism, enhances hepatic, muscle, and adipose accumulation of energy substrates, and increases mitochondrial and peroxisomal oxidation [50,55]. Adipose tissue is a major source of ROS production as it promotes the generation of superoxide ions in the mitochondrial electron transport chain by inhibiting adenine nucleotide translocation [56], leading to oxidative stress, mitochondrial DNA damage, ATP depletion, and lipotoxicity. This causes an increase in the production of cytokines such as TNF-α, which in turn generates more ROS in the tissues and worsens lipid peroxidation [55].
The proinflammatory cytokines TNF-α, IL-1, and IL-6 have been linked to adiposity [68]. TNF-α regulates the inflammatory response, immune system, adipose cell apoptosis, lipid metabolism, hepatic lipogenesis, insulin signaling, and oxidative stress [43,49,69]. Obesity increases serum TNF-α, which induces the release of IL-6 from immune cells and adipocytes and reduces systemic anti-inflammatory cytokines, promoting systemic inflammation [50,70]. Tissue dysfunction amplifies oxidative stress and inflammation, leading to increased expression of adipokines, deletion of nuclear factor E2-related factor 2 (Nrf2), and endothelial dysfunction in obesity and obesity-induced hypertension [55,71]. Angiotensin II (Ang II) regulates IL-6 and TNF-α secretion, allowing monocyte recruitment and exacerbating vascular injury [55,72]. Monocytes emit the pyrogenic cytokine IL-1β after tissue injury, infection, or immunologic insult [73]. Production of pro-inflammatory cytokines, including IL-1β and IL-6, has been linked to obesity's pro-inflammatory response [55,72]. IL-6 regulates energy homeostasis and inflammation, affecting the transition from acute to chronic inflammatory diseases, such as obesity and insulin resistance, by promoting the synthesis of pro-inflammatory cytokines and negatively regulating inflammation.

Figure 4. Proposed mechanisms of oxidative stress associated with adipocytes. Nutritional excess and adipocyte hypertrophy, as well as the release and accumulation of pro-inflammatory mediators such as free fatty acids (FFA), hyperglycemia, advanced glycation end products, cytokines, and proinflammatory cytokines linked to protein kinase C (PKC) and polyol pathways, characterize obesity. By activating NADPH oxidase (NOXs), nitric oxide synthase, uncoupled endothelial NOS (eNOS), and myeloperoxidase, these components may induce tissue oxidative stress. Chronic inflammation may also contribute to the modification of adipose tissue's redox balance by activating stress signal transduction, which contributes to increased autophagy and apoptosis, uncontrolled adipokine production, and adipose tissue inflammation. The resultant functional changes may further impair adipose tissue function by affecting intracellular pathways that generate pro-inflammatory cytokines, resulting in increased attraction, infiltration, and activation of immune cells, as well as increased adipose tissue inflammation, thereby creating a vicious cycle between adipose tissue oxidative stress and inflammation, as well as a decrease in antioxidant system activity, ultimately leading to metabolic dysfunction. AGEs, advanced glycation end products; PKC, protein kinase C; NOX, nicotinamide adenine dinucleotide phosphate oxidase enzyme; ER, endoplasmic reticulum; MAPK, mitogen-activated protein kinase; NF-kB, nuclear factor kappa-light-chain-enhancer of activated B cells; ROS, reactive oxygen species; TCA, tricarboxylic acid cycle; TNF-α, tumor necrosis factor alpha.
Adipokines
Bioactive adipokines such as leptin, adiponectin, visfatin, resistin, apelin, and plasminogen activator inhibitor type 1 (PAI-1) are found in adipose tissue and have been linked to the homeostasis of physiological and pathological processes involving oxidative stress [75,76]. Adipocytes secrete leptin in proportion to adipose tissue mass and triglyceride accumulation [77]. Leptin promotes hunger through its action in the central nervous system (CNS) [55]. Hyperleptinemia increases oxidative stress and stimulates the proliferation and activation of monocytes/macrophages, producing IL-6 and TNF-α [51]. Leptin also activates NOX and induces the production of reactive intermediates such as H 2 O 2 and OH free radicals [50]. It also decreases the activity of the cellular antioxidant paraoxonase-1 (PON-1), a decrease that is associated with increased levels of plasma and urinary F(2)-isoprostane , and plasma levels of malondialdehyde and hydroperoxides [56,78,79]. Adiponectin is important in glucose and lipid metabolism and helps to avoid the development of pathological changes [80]. Adiponectin works as an antiinflammatory and anti-atherogenic hormone secreted by differentiated adipocytes, which decreases TNF-α and C-reactive protein (CRP) levels, increases NO production, and inhibits ROS release [81,82]. Its serum levels are inversely correlated with systemic oxidative stress [81,82]. Visfatin is a pleiotropic molecule showing pro-oxidant and pro-inflammatory effects, and its levels are positively correlated with body fat mass, and its concentration decreases when weight loss occurs [81].
Food Intake
The post-prandial response to high-fat and high-carbohydrate (HFHC) meals is impaired in obese people, which could lead to an increase in oxidative stress [51]. Obese individuals exhibit a more pronounced and prolonged oxidative and inflammatory response to HFHC meals, as well as increased expression of the p47phox subunit of NOX2, increased ROS generation, increased intra-nuclear NF-κB binding in mononuclear cells, and elevated plasma matrix metalloproteinase-9 (MMP-9) concentrations [54,83].
Vitamin and mineral deficiencies can also contribute to the development of compromised antioxidant defense in the pathophysiology of obesity [50]. Obese people are more susceptible to oxidative damage due to decreased antioxidant sources and significantly decreased antioxidant activity [50]. Antioxidant supplementation reduces oxidative stress and ROS, lowers obesity-related comorbidities, and restores adipokine expression [55,84].
Mechanisms of Oxidative Stress Associated with Abnormal Lipogram Levels
Lipoproteins are complex molecules that have a central hydrophobic core of non-polar lipids, primarily cholesterol esters and triglycerides [85]. The non-polar core is enclosed by a hydrophilic membrane consisting of phospholipids, free cholesterol, and apolipoproteins [86]. The most frequently estimated lipid markers include total cholesterol, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and triglycerides [31,86]. One of the mechanisms through which accumulating lipids begin to form a plug in the blood vessels is linked to excessive biosynthesis of ROS, which leads to oxidative stress in the vessel wall [85]. An increase in ROS supports the oxidation of LDL, resulting in high levels of oxidized LDL (ox-LDL), death of vascular endothelial cells, and subsequent endothelial dysfunction [86]. Furthermore, LDL oxidation promotes further oxidation in the vascular wall, increasing the levels of lipid hydroperoxides (LOOH) within LDL [87]. At the cellular level, particularly in mitochondria, dysregulation of oxidative metabolism results in unbalanced ROS biosynthesis [44,88]. This imbalance disrupts mitochondrial utilization of lipids, resulting in their accumulation in body tissues [44,88]. At the micro level, ROS disrupt cell signaling and cause mitochondrial dysfunction, resulting in an energy deficit and, ultimately, loss of function [44]. It is noteworthy that the citric acid cycle intermediate citrate is transported to the cytoplasm, where it is used as a substrate in the production of fatty acids (FA) and cholesterol; when mitochondria are damaged by ROS accumulation, HDL and cholesterol metabolism become defective, suggesting that ROS contribute to the elevation of both molecules [31].
Mechanisms of Oxidative Stress Associated with Hypertension
Hypertension is one of the most significant cardiovascular risk factors in metabolic syndrome [7,89]. Many factors contribute to the development of hypertension, as captured by Irvine Page's mosaic theory, which holds that genetics, environmental exposures, adaptive and endocrine factors, and hemodynamic forces all interact in its pathogenesis [90,91]. Since then, great strides have been made in explaining the molecular and cellular basis of hypertension, including the discovery of nitric oxide (NO) and its role in the cardiovascular system and the role of oxidative stress in factors associated with the mosaic theory [91,92]. Current evidence indicates that oxidative stress is a significant contributor to the development of hypertension. Oxidative stress and chronic inflammation have been linked to endothelial damage and vascular dysfunction, cardiovascular remodeling, renal dysfunction, sympathetic nervous system excitation, immune cell activation, and systemic inflammation that lead to high blood pressure and heart disease [7,[93][94][95].
In hypertension, the important sources of ROS include non-phagocytic NADPH oxidase (NOX) hyperactivation, nitric oxide synthase (NOS) uncoupling, xanthine oxidase, mitochondrial stress, and endoplasmic reticulum stress (Figure 5). NADPH oxidase is a major source of ROS in the vasculature and kidney and plays an important role in NO depletion, vascular damage, and endothelial dysfunction [94][95][96]. NADPH oxidase-derived superoxide inactivates NO in a reaction that generates peroxynitrite, leading to impaired endothelium-dependent vasodilation and hypertension [96]. eNOS activation normally produces NO; however, oxidation or deficiency of tetrahydrobiopterin (BH4) and L-arginine is associated with increased eNOS-mediated superoxide production as well as decreased formation of vasoprotective NO [96,97]. Peroxynitrite oxidizes and destabilizes eNOS to produce more superoxide, whereas BH4 is susceptible to oxidation by superoxide, uncoupling eNOS, causing endoplasmic reticulum stress and mitochondrial oxidative stress, and producing more ROS [94,95]. Xanthine oxidase is an important source of ROS in the vascular endothelium and is associated with increased arteriolar tone and end-organ injury in hypertensive patients [98].
Endoplasmic Reticulum
The endoplasmic reticulum (ER) synthesizes, modifies, and delivers proteins to their target sites [99]. In a quality-controlled process, only correctly folded proteins are exported to the Golgi apparatus, whereas poorly folded proteins are maintained in the ER to complete the process or be degraded [100,101]. The ER's protein load and folding capacity are balanced under physiological settings. Increased protein synthesis, accumulation of misfolded proteins, or changes in the ER's calcium or redox balance cause ER stress, resulting in the activation of the unfolded protein response (UPR) [94,102]. Inositol-requiring protein 1 (IRE1), activating transcription factor 6 (ATF6), and protein kinase RNA-like endoplasmic reticulum kinase (PERK) are three primary axes of the UPR that, in response to ER stress, signal to downstream molecules [103]. As a consequence of protein folding, ROS are created in the ER, and certain ER stress conditions can promote ROS production in the ER [104]. ER stress stimulates signaling molecules, initiating the UPR and activation of Nox4, and possibly Nox2 during the UPR, generating more ROS [94]. The UPR causes the expansion of the ER membranes, an increase in the translation of folding chaperones, an acceleration in the destruction of unfolded proteins, and a decrease in the transcription and translation of the majority of other proteins, leading to apoptosis, phenotypic switching, de-differentiation, and trans-differentiation, all of which are mechanisms involved in cardiovascular remodeling and vascular damage in hypertension [103,105].
Mitochondrial Oxidative Stress
Mitochondria are responsible for the production of most cellular adenosine triphosphate (ATP) through the enzyme complexes of the electron transport chain [33]. Electron transfer from one complex to the next is normally efficient, with minimal electron leakage, but in various disease conditions electron leakage increases and can lead to the partial reduction of oxygen and the generation of superoxide and hydrogen peroxide [106,107]. In hypertension, mitochondrial dysfunction produces ROS, leading to oxidative stress [9]. The activation of angiotensin II (Ang II) stimulates the synthesis of mitochondrial ROS (mtROS) and the opening of the mitochondrial permeability transition pore (mPTP), which allows mtROS to leak into the cytosol [108]. In the cytosol, mtROS stimulate NOX via activation of p38 MAPK and the JNK pathway or via cSrc-dependent phosphorylation of p47phox [109]. NOX-derived ROS enter the mitochondria, causing mitochondrial damage and further generation of mtROS; the resulting mtROS accumulation leads to immune cell infiltration, Ang II-mediated eNOS uncoupling, reduced circulatory NO, and endothelial dysfunction, all of which cause adverse cardiovascular effects [94,108].
Nitric Oxide Synthase Uncoupling
Nitric oxide mediates vascular effects; its synthesis requires L-arginine as substrate, molecular oxygen and reduced NADPH as co-substrates, and the cofactor BH4 to stabilize eNOS [110,111]. Under conditions of oxidative stress, NOS removes an electron from NADPH and donates it to O2, resulting in the production of superoxide (O2−) rather than NO [94,112]. The uncoupling of eNOS, which is caused by a lack of BH4, has been linked to several cardiovascular diseases, including hypertension and aortic aneurysms [113,114]. Because BH4 is required by all NOS isoforms, any of them can "uncouple" when subjected to stressful conditions. Tryptophan 447, located in the BH4-binding domain of eNOS, is an important determinant of whether eNOS generates NO or O2−; when it is mutated, the connection between BH4 and eNOS is disrupted, resulting in the preferential formation of O2− [94]. Oxidative stress is the most important factor promoting NOS uncoupling, resulting in reduced NO production and increased O2− [95]. Oxidative stress has been demonstrated in spontaneous (genetic) and experimental models of hypertension, with increased p22phox mRNA expression and NADH/NADPH oxidase activity in the aortic and mesenteric vessels of stroke-prone spontaneously hypertensive rats [115]. Vascular oxidative stress has also been demonstrated in many forms of experimentally induced hypertension, such as Ang II-mediated hypertension, Dahl salt-sensitive hypertension, lead-induced hypertension, obesity-associated hypertension, aldosterone-provoked hypertension, and nitric oxide synthase inhibitor-induced hypertension [94,106].
Oxidative damage to the endothelium reduces circulating NO levels owing to a decline in synthesis caused by the uncoupling of eNOS and the depletion of BH4. Increased production of ONOO− through NO–O2− coupling also contributes to NO depletion [97]. The ROS-induced reduction in circulatory NO due to endothelial dysfunction impairs capillary network formation and blood flow regulation, resulting in decreased microcirculation in metabolically active tissues as well as dysregulation of glucose metabolism and dyslipidemia [116].
In prediabetic individuals, increased glucose levels activate oxidative stress, which in turn leads to insulin resistance. Obesity shows a substantial relationship with insulin resistance; in this context, adipocyte-derived factors such as TNF-α, leptin, FFA, and resistin could mediate oxidative stress-induced insulin resistance in the pre-diabetic state [117,118]. In obesity, the formation of reactive oxygen species is increased and lipid peroxidation is induced in adipocytes, the liver, and skeletal muscle [56,119]. Increased FFA concentrations cause mitochondrial malfunction, including uncoupling of oxidative phosphorylation and increased superoxide formation, creating oxidative stress and decreasing intracellular glutathione, thereby compromising natural antioxidant defenses [119].
Mechanisms of Oxidative Stress Associated with Impaired Fasting Glucose and Insulin Resistance
Insulin is secreted by the pancreas; it drives nutrient transport into cells, acutely affects metabolic enzyme activity, regulates metabolic gene transcription, controls cellular development and differentiation, and regulates its own clearance through receptor activation [120,121]. Oxidative stress contributes to numerous chronic conditions, including insulin resistance and type 2 diabetes [122]. Insulin resistance is common worldwide and is the most accurate predictor of the development of diabetes [123]. In this situation, there is a decrease in peripheral insulin sensitivity [122]. An accumulation of oxidants is linked to the multifactorial etiology of insulin resistance, mainly in skeletal muscle and adipose tissue. Mechanisms of ROS production are numerous, including oxidative phosphorylation, transition metal ions, oxidase activity, protein folding, and thymidine and polyamine catabolism [124]. However, mitochondrial H2O2 production and NADPH oxidase activation are most relevant to insulin resistance [125]. The mitochondrion is one cellular location with a high capacity for the synthesis of oxidants such as H2O2 and other reactive oxygen species [126]. ROS and RNS have been found to disrupt the insulin signaling cascade; however, the disruption is dose- and time-dependent [127]. When insulin is released, a transient, low-dose burst of H2O2 is generated; this brief ROS exposure enhances the insulin cascade by reducing tyrosine phosphatase activity, which raises the basal level of tyrosine phosphorylation of the insulin receptor and the proteins it controls [20]. Studies have shown that oxidative stress impairs insulin signaling and leads to insulin resistance [20,128]. Proposed mechanisms leading to insulin resistance include the accumulation of specific lipid mediators, abnormal mitochondrial function, increased activity of the stress-activated protein kinase c-Jun N-terminal kinase (JNK), and inflammatory pathways.
Lipid-Induced Insulin Resistance
Diacylglycerols (DAG) and ceramides mediate liver and skeletal muscle lipid-induced insulin resistance [20]. In insulin-resistant individuals, lipid oversupply from high-fat, high-calorie meals or excessive adipose lipolysis can contribute to enhanced fatty acid oxidation and worsening insulin resistance [129,130]. In contrast, a reduction of circulating FA levels with the lipolysis inhibitor acipimox increases insulin sensitivity, which correlates with a decrease in intramyocellular FA CoA concentration [131].
In individuals with prolonged elevations of triglycerides, DAG accumulates and impairs insulin signaling by activating conventional (α, βI, βII, γ) and novel (δ, ε, η, θ) protein kinase C (PKC) isoforms [20,131]. With increased intrahepatic triglyceride (IHTG), activation of the ε isoform (PKCε) is most consistently observed, whereas in skeletal muscle, PKCβ activation is observed [20,131]. PKC phosphorylates Thr1160 on the insulin receptor (INSR), destabilizing the active conformation of the insulin receptor kinase (IRK) and resulting in a defect in glucose transport or phosphorylation [20,131]. Lipid peroxidation is another mechanism that exacerbates insulin resistance. The two most prevalent ROS known to affect lipids are hydroxyl and hydroperoxyl radicals [122]. Cells produce approximately 50 hydroxyl radicals per second, so that over a full day each cell generates around 4 million hydroxyl radicals [122]. These radicals cause unspecific damage to biomolecules located within a few nanometers of their site of synthesis, leading to damage to adjacent organelles and to the plasma membrane; the plasma membrane is a key site for tyrosine kinase signaling and downstream signal transduction, including insulin receptor substrate 1 (IRS-1), which is responsible for activation of PI3-kinase [132,133]. High levels of the hydroperoxyl radical (HO2) can also precipitate continuous peroxidation, because this species alone is a strong oxidant that can initiate chain oxidation of polyunsaturated phospholipids, thereby impairing membrane function [133]. Damage to the plasma membrane compromises glucose transporter function and the associated phosphorylation machinery, thereby affecting insulin action [122].
Mitochondrial Dysfunction
Mitochondria control glucose sensing and insulin secretion in beta cells [122]. Mitochondrial dysfunction has been recognized to cause insulin resistance and is an underlying cause of diabetes [134]. Insulin secretion by pancreatic beta cells is linked to the extracellular glucose concentration; glucose is phosphorylated by glucokinase and metabolized to pyruvate [133]. Pyruvate enters mitochondria and is oxidized via the tricarboxylic acid (TCA) cycle to generate NADH and FADH2, which donate electrons to the electron transport chain, leading to ATP generation. Mitochondrial ATP is transported to the cytosol, raising the cytosolic ATP/ADP ratio and leading to membrane depolarization and exocytosis of insulin-containing vesicles [135,136]. Mitochondrial dysfunction impairs this metabolic process and promotes apoptosis and beta-cell death. Many human studies have shown that mitochondrial dysfunction exists in obese and insulin-resistant patients, with downregulation of metabolic and mitochondrial pathways in obesity and insulin resistance [137,138]. ROS generation in beta cells is proposed to be caused by hyperglycemia, hyperlipidemia, hypoxia, and ER stress [139]. Mitochondria can contribute to fatty acid inflow and the activation of stress-related kinases, both of which can lead to insulin resistance [132,140]. Oxidative stress appears to play a significant role in mitochondrial malfunction, which can amplify stress signals and limit adenosine triphosphate (ATP) synthesis [137].
Insulin release from beta cells is triggered by mitochondrial oxidative phosphorylation (OxPhos) and ATP production [141]. Beta cells from patients with T2DM show decreased OxPhos gene expression [135,139]. In T2DM, insulin resistance (IR) and chronic hyperglycemia lead to increased glucose and fatty acid metabolism in beta cells [142]. Increased fatty acid levels and hyperglycemia raise NADH and FADH2, which increase electron transport chain activity and ROS production, consequently leading to beta-cell oxidative stress [141]. Increased fatty acids also cause incomplete fatty acid oxidation, which worsens ROS generation [143]. Oxidative stress predisposes to mitochondrial damage and enhanced mitochondrial fission, leading to a further decline in OxPhos and increased ROS generation, ultimately causing apoptosis and beta-cell loss [139,144]. Beta cells are vulnerable to oxidative stress because of high ROS production and low antioxidative enzyme expression [139,145]. Human islets from diabetic individuals show lipid peroxide-protein adducts, and lipid infusion increases islet ROS and impairs insulin secretion, contributing to mitochondrial dysfunction [122,146].
Low-Grade Inflammation
The production of excess ROS leads to oxidative stress and activates numerous transcription factors, including NF-κB, JNK/SAPK, and MAPK [147]. The NF-κB transcription factor plays a role in mediating immune and inflammatory responses by elevating systemic pro-inflammatory cytokines and promoting an insulin-resistant environment through the activation of protein kinase C (PKC) [56,147]. The NF-κB pathway is triggered when the active serine kinase IKK phosphorylates the inhibitory subunit IκB [148,149]. When exposed to an oxidative environment, mitogen-activated protein kinases (MAPK) such as JNK, ERK, and p38 MAPK are activated [131,150]. According to the proposed mechanism of insulin signal interference by activated serine/threonine kinases, increased serine-threonine phosphorylation impairs the insulin receptor substrate (IRS) protein's ability to recruit and activate downstream SH2-containing signaling molecules and disrupts its interaction with the insulin receptor [151,152].
Glucose Transporters
The diffusion of glucose into the cell is facilitated by glucose transporters (GLUT), and GLUT4 is the principal glucose transporter in adipose tissue, skeletal muscle, and cardiac muscle [153]. Insulin binds to insulin receptors and activates a signal transduction cascade that increases GLUT4 expression at the plasma membrane, thereby enhancing glucose uptake from the circulation [122,153,154]. Increased metabolite flow into mitochondria, changes in mitochondrial proteins, and decreased expression of antioxidant enzymes can lead to higher ROS levels in obese and diabetic conditions [155]. ROS cause peripheral insulin resistance by impairing insulin receptor signal transduction and decreasing GLUT4 transporter expression at the cell membrane [125,155,156]. Normal glucose tolerance is sustained in the early stages by compensatory hyperinsulinemia, eventually leading to desensitization of the peripheral tissues to insulin [56]. This regulation employs distinct transduction proteins compared with the typical pathway: above optimal insulin concentrations, the signaling of phosphatidylinositol 3-kinase (PI3-kinase) shifts so that, instead of phosphorylating phosphatidylinositol 4,5-bisphosphate (PIP2), PI3-kinase acts on Rac, thereby raising NOX4 activity. NOX4 is a potent oxidizing enzyme that generates reactive oxygen species, increasing ROS [132]. Oxidative stress causes casein kinase-2 (CK2) to activate the retromer, which then directs GLUT4 from the trans-Golgi network to lysosomes for degradation rather than to the plasma membrane, resulting in hyperglycemia [132,154].
Immune Activation Mechanisms of Oxidative Stress in Metabolic Syndrome
Chronic inflammation in metabolic syndrome is thought to be mediated mainly by adipose tissue and involves crosstalk between various cell components such as adipocytes, T cells, macrophages, dendritic cells, B cells, and fibroblasts [89,157]. Non-obese adipose tissue mainly contains type 2 (M2) macrophages, which express anti-inflammatory cytokines such as IL-10 and transforming growth factor-β [158]. In obesity, there is increased infiltration of bone marrow-derived M1 macrophages, which predominantly express pro-inflammatory cytokines [158]. Saturated fatty acids from adipocytes activate M1 macrophage toll-like receptor 4 and macrophage-inducible C-type lectin via an integrated stress response involving activation of the NF-κB pathway [158]. This results in the secretion of inflammatory cytokines such as TNF-α, IL-6, and IL-1, which recruit further pro-inflammatory immune cells such as CD4+ and CD8+ T cells, natural killer (NK) cells, and innate lymphoid cells into the adipose tissue to boost the immune response [157][158][159]. These inflammatory cytokines mediate insulin resistance in metabolic syndrome, especially macrophage-derived IL-1β [160]. T cells also play an important role in inducing insulin resistance in adipose tissue. McDonnell et al. demonstrated that CD8+ T cells infiltrate the adipose tissue of obese mice, where they accumulate, are clonally expanded, and become activated in response to isolevuglandin-containing M2 macrophages [161]. Furthermore, initial inflammatory cytokines are central to the propagation of chronic inflammation stimulated by oxidative stress and the intracellular redox status. ROS cause initial damage to the mitochondria, which leads to activation of the NOD-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome [162], a key molecule in the signaling of IL-1β expression by macrophages. Additionally, oxidative damage to DNA induces the expression of several molecules, including inflammatory mediators [162,163]. Isoprostanes formed through ROS-mediated lipid peroxidation lead to further expression of interleukin-8 (IL-8), a chemoattractant cytokine that attracts several inflammatory cells, including neutrophils; continual expression of this molecule therefore leads to a prolonged state of inflammation [153]. Although not clearly understood, other studies have indicated that ROS lead to the activation of the enzyme peroxiredoxin-2 (PRDX2), which triggers macrophages to produce and release TNF-α, a key cytokine in chronic inflammation [154].
The hallmark of oxidative stress associated with continued immune cell proliferation and activation in adipose tissue is the suppression of bioenergetics and a metabolic switch to preferential utilization of select catabolic pathways for their energy needs [164]. More importantly, owing to the increased dynamic bioenergetic demands of activated cells in the adipose tissue of obese individuals, there is a concomitant increase in mitochondrial activity to produce ATP, a situation that increases ROS production and contributes to chronic inflammation in ways already described above [164]. Thus, immune activation contributes to metabolic syndrome via inflammatory cytokines that induce insulin resistance as well as the generation of ROS that promote apoptosis, inflammation and metabolic dysfunction [9,27,28]. However, the exact underlying mechanisms involving immunometabolism and ROS signaling remain unclear to date.
Gut Microbiota, Oxidative Stress, and Metabolic Syndrome
The gut microbiota is the most diverse microbial community in the human body, with more than 1000 species encoding approximately 3 million genes [165]. The gut microbiota interacts with the host's brain and targets organs through the autonomic nervous system and circulatory and endocrine systems [166]. The gut microbiota plays a key role in maintaining physiological function as it modulates host nutrition, energy harvest, epithelial homeostasis, the immune system, and drug metabolism while maintaining balance [167]. Dysbiosis, an imbalance of gut microbiota content resulting in increased pathological species, can be caused by infections, antibiotic therapy, diseases, diet, and lifestyle ( Figure 6) [150,168,169]. Dysbiosis of the gut microbiota increases the risk of metabolic syndrome by causing inflammation, increasing reactive oxygen species, and oxidative stress [170,171]. Intestinal dysbiosis causes intestinal permeability, which can lead to metabolic endotoxemia, which is a cause of chronic low-grade systemic inflammation [171,172]. Modulation of dysbiosis through dietary interventions and probiotic supplementation may help treat metabolic syndrome [173].
The symbiotic host-microbe interactions in the intestine determine the oxidative stress level, which is influenced by the balance between beneficial and harmful gut microbiota [174,175]. Lactobacillus brevis 23017, Bacillus SCo6, Lactobacillus plantarum, and Macleaya cordata extract can reduce the production of oxidative stress and protect the intestinal mucosal barrier [176][177][178][179][180]. The composition of gut microbiota and gut cells is directly correlated with ROS production in the host body [181]. Under healthy conditions, there is a dynamic equilibrium between ROS formation and elimination from the host body, and ROS harbor microbicidal machinery in innate cells. An imbalance between the production of ROS and antioxidants can lead to oxidative stress, disrupted redox signaling, and intestinal damage [182,183]. The gut microbiome has been associated with the pathophysiology of most chronic diseases, such as obesity, diabetes, dyslipidemia, and hypertension, which can consequently result in the metabolic syndrome [165,[183][184][185].

Figure 6. Eubiosis and dysbiosis in the gut. In normal conditions, the gut is in a eubiotic state, having a pool of microbes that is mostly composed of non-pathogenic microorganisms that are relevant for normal physiological function, such as promoting physiological cross-talk with other systems such as the brain, cardiovascular organs, and metabolic-related tissues, helping to avoid and fight hypertension and metabolic syndrome progression. The gut microbiota produces compounds beneficial to host intestinal health, which can be regulated through personal nutrition. However, dysbiosis in the gut microbiota (triggered and caused by antibiotics, urban diet, and sedentary lifestyle) is linked to chronic inflammation and exacerbates oxidative stress, consequently leading to metabolic syndrome.
Obesity increases the risk of chronic metabolic disorders, and there is evidence that the gut microbiota plays an important role in the development of obesity, including interactions between the gut microbiota and host metabolism [186]. Studies in both animals and humans have shown that the composition of the gut microbiota in healthy individuals is significantly different from that in individuals with the above-mentioned conditions, suggesting that the gut microbiota may play an important role in their development. Studies using 16S rRNA pyro-sequencing have shown that the composition of the gut microbiota of obese animals and humans differs from that of healthy, leaner individuals [187]. Obesity has been associated with two dominant bacterial phyla, Firmicutes and Bacteroidetes, with the Firmicutes/Bacteroidetes ratio increasing significantly in obese mice and humans [188].
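As a purely illustrative sketch (not drawn from the cited studies), the Firmicutes/Bacteroidetes ratio described above can be computed directly from phylum-level relative abundances such as those produced by 16S rRNA profiling; the sample values below are hypothetical.

```python
# Illustrative only: compute a Firmicutes/Bacteroidetes (F/B) ratio from
# hypothetical phylum-level relative abundances (fractions of total reads).
# The example values are invented for demonstration, not measured data.

def firmicutes_bacteroidetes_ratio(abundances: dict) -> float:
    """Return the F/B ratio given phylum-level relative abundances."""
    firmicutes = abundances.get("Firmicutes", 0.0)
    bacteroidetes = abundances.get("Bacteroidetes", 0.0)
    if bacteroidetes == 0:
        raise ValueError("Bacteroidetes abundance is zero; ratio undefined.")
    return firmicutes / bacteroidetes

# Hypothetical lean and obese profiles (values are made up).
lean = {"Firmicutes": 0.45, "Bacteroidetes": 0.45, "Other": 0.10}
obese = {"Firmicutes": 0.65, "Bacteroidetes": 0.25, "Other": 0.10}

print(f"Lean F/B ratio: {firmicutes_bacteroidetes_ratio(lean):.2f}")    # 1.00
print(f"Obese F/B ratio: {firmicutes_bacteroidetes_ratio(obese):.2f}")  # 2.60
```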
In studies focusing on diabetes mellitus, evidence suggests that the microbiota can affect glucose metabolism in both preclinical models and healthy animals. The genera Bifidobacterium, Bacteroides, Faecalibacterium, Akkermansia, and Roseburia were negatively associated with T2DM, while Ruminococcus, Fusobacterium, and Blautia were positively associated [186]. Experimental studies in animals and humans found that a high-calorie diet is a causal factor in obesity and may induce changes in the function of the gut microbiome [188]. These studies have shown that the gut microbiota regulates fat accumulation in the host, which influences obesity [185,[189][190][191].
Comorbidities Associated with Risk for Metabolic Syndrome
Metabolic syndrome increases the risk of developing T2DM, CVD, polycystic ovary syndrome (PCOS), nonalcoholic fatty liver disease (NAFLD), chronic kidney disease, some types of cancer (breast, uterine, colon, esophageal, pancreatic, kidney, and prostate cancers), and osteoarthritis [18]. Many conditions are implicated in the development of metabolic syndrome and are known to coexist with each other during its development [192]. Hence, screening for comorbidity should be an integral part of metabolic syndrome care, as further studies confirm the associations and underlying mechanisms between metabolic syndrome and its comorbidities [193,194].
Metabolic Syndrome and Cardiovascular Disease Risk
A spectrum of cardiovascular conditions, such as microvascular dysfunction, coronary atherosclerosis and calcification, cardiac dysfunction, myocardial infarction, and heart failure, is related to metabolic syndrome [195]. Each component of the metabolic syndrome is a separate risk factor for cardiovascular disease, and the combination of these risk factors increases the rate and severity of cardiovascular disease [195]. For instance, a study by Klein et al. reported that patients with a single metabolic syndrome component had a 2.5% risk of developing CVD within 5 years, whereas patients with ≥4 components had about a 14.9% risk of developing CVD [18].
Compared with other metabolic syndrome risk factors, hypertension is not only a major risk factor for CVD; it is also regarded as a key feature of metabolic syndrome and is attributed to about one-third of all deaths worldwide [196]. Rising blood pressure amplifies cardiovascular cellular damage and can eventually compromise the performance of the kidneys and lungs, key organs in the development of CVD and, eventually, metabolic syndrome [1,197]. Studies have shown that an amplified effect of metabolic syndrome is set into motion as a result of overstimulation of the sympathetic nervous system (SNS) [193]. SNS overactivity results in stimulation of the renin-angiotensin-aldosterone system (RAAS), alterations in adipose-derived cytokines such as leptin, insulin resistance, and structural as well as functional renal changes [194]. Collectively, these changes further amplify SNS activity and eventually increase blood pressure [194,198]. Additionally, the RAAS indirectly raises blood pressure by promoting water retention, causing a surge in blood pressure, which is an independent and important risk factor for the development of metabolic syndrome [194]. Triglycerides alone are an independent factor contributing to many conditions directly and indirectly associated with metabolic syndrome and CVD. Triglycerides are a risk factor for CVD events, independent of serum HDL or low-density lipoprotein (LDL) levels [199]. Triglycerides increase the likelihood of obesity, which is a direct predisposing factor for metabolic syndrome. Therefore, triglycerides are directly associated with the development of diabetes, obesity, atherosclerotic cardiovascular disease, and hence metabolic syndrome [199,200].
Biomarkers of Oxidative Stress in Metabolic Syndrome
Oxidative stress biomarkers include molecules altered by ROS in the microenvironment and antioxidant system molecules that change with redox stress [201]. Risk factors disrupt cell signaling pathways, increasing inflammatory markers, lipid peroxides, and free radicals, producing cell damage and the clinical signs of metabolic syndrome. It is hypothesized that oxidative stress and inflammatory markers contribute to metabolic syndrome pathogenesis [6,14]. Quantification of biomarkers is the most accurate way to determine the amount of oxidative stress present in vivo. Total antioxidant capacity can also be used as a measure of oxidative stress in metabolic syndrome [14,202]. The isoprostanes (IsoP) generated from arachidonic acid, specifically 8-iso-prostaglandin F2alpha (8-iso-PGF2α), could be a good measure for simultaneously investigating oxidative stress and inflammation in disorders in which both are thought to be implicated [14,203]. Various studies have been conducted in individuals with metabolic syndrome in which the concentrations of oxidative stress biomarkers and antioxidant enzyme activity were measured simultaneously. The findings reveal that the presence of metabolic syndrome is related to an increase in oxidative stress biomarkers and a decrease in antioxidant capacity, showing that metabolic syndrome is linked to a pro-inflammatory state and poor health as part of a very complex process driving cardiometabolic diseases [14,[203][204][205][206][207].
Biomarkers of oxidative stress are utilized in studies to identify patients at risk of complications and to guide appropriate therapy to reduce the burden of metabolic syndrome. The markers of oxidative stress include biomarkers of lipid peroxidation, protein and amino acid oxidation, and DNA oxidation [201]. Thiobarbituric acid-reactive substances (TBARS), malondialdehyde (MDA), 4-hydroxy-2-nonenal (4-HNE), and F2-isoprostanes are markers used to determine the presence of lipid peroxidation, which is an indicator of oxidative stress. Protein carbonyls, advanced glycation end products (AGEs), oxidized LDL (ox-LDL), and advanced oxidation protein products indicate protein oxidation. DNA oxidation markers include 8-oxo-2'-deoxyguanosine (8-oxo-dG), 5-chlorouracil, and 5-chlorocytosine [201,206,208,209].
The markers associated with ROS generation are xanthine oxidase, gamma-glutamyl transferase (GGT), myeloperoxidase (MPO), NOX, and NOS [202]. Gamma-glutamyl transferase (GGT) is an enzyme found in many parts of the body, such as the kidney, pancreas, liver, spleen, heart, and brain. It recycles precursors of glutathione (GSH), an antioxidant and metabolic substrate. Metabolic syndrome, diabetes, high blood pressure, and stroke risk can all be predicted by raised GGT [13,210]. The non-enzymatic markers include glycoprotein A (GPA), C-reactive protein (CRP), ferritin, and uric acid. CRP is a non-specific biomarker used to assess disease activity and to diagnose and classify inflammatory disorders such as rheumatic diseases [211]. Dyslipidemia, diabetes, and metabolic syndrome are linked to elevated CRP [211]. Individuals with metabolic syndrome have high serum ferritin without increased transferrin saturation [212]. Serum ferritin correlates positively with two indicators of oxidative stress, liver damage and insulin resistance [212]. Thus, serum ferritin levels may be useful in metabolic syndrome diagnosis [212].
Other useful biomarkers significantly associated with metabolic syndrome include adipokines such as adiponectin and leptin. Adiponectin is expressed by adipose tissue, and its levels are inversely related to the degree of adiposity [213][214][215]. Adiponectin is a well-known and accepted marker for metabolic syndrome and diabetes [213][214][215]. Decreased serum adiponectin has been linked to the development of metabolic syndrome, and some authors have suggested its use to predict metabolic syndrome [213][214][215]. There is a connection between the hormone leptin, insulin resistance, and abdominal obesity. Leptin is a hormone that regulates energy metabolism. According to many studies, a substantial positive association exists between leptin levels and metabolic syndrome, and high leptin levels have been proposed as a potential marker for the development of metabolic syndrome [216,217].
Targeted Therapeutic Strategies for Metabolic Disease
Most of the biomarkers used in the detection of metabolic syndrome are not specific. This is because most of the products of oxidative stress are unstable and have a short half-life in the bloodstream [160]. Although most of the biomarkers are not specific, studies have shown that indirect methods for the detection of metabolic syndrome are reliable and depend on certain macromolecules such as DNA, lipids, and proteins, as these molecules experience significant damage due to oxidative stress [218]. Multidisciplinary strategies are needed to prevent and manage metabolic diseases, including lifestyle interventions and surgical or pharmacotherapeutic approaches [219].
Nutrition is a major environmental factor contributing to metabolic syndrome [220]. Westernization of lifestyles has led to an increase in convenience foods, fast-food availability, food marketing, and larger food portions, promoting metabolic syndrome worldwide [219]. Different therapeutic strategies have been suggested to counter the effects of reactive oxygen species; however, only a few have been elaborated, most of which act on macromolecules at different levels of biosynthetic pathways [221]. Several studies have shown that the type of diet influences the gut microbiota. For example, the Western diet decreases microbial richness and increases the Firmicutes/Bacteroidetes ratio, while a diet rich in omega-3 polyunsaturated fatty acids (PUFAs) is associated with anti-inflammatory effects [171]. Several studies have found that replacing energy intake from saturated fatty acids with equivalent energy from polyunsaturated fat (PUFA), monounsaturated fat (MUFA), or high-quality carbohydrates such as whole grains can lower CVD risk [222]. MUFA can inhibit adipose NLRP3 inflammasome-mediated IL-1β secretion and insulin resistance, even in mice with diet-induced obesity [223]. Fruits, vegetables, legumes, and whole grains are appropriate sources of cardioprotective components [2]. The Mediterranean diet, consisting of fruits, vegetables, olive oil, red wine, nuts, and other food components, has been reported to have beneficial effects on longevity and on ameliorating metabolic syndrome [224]. The Mediterranean diet contains natural antioxidants and bioactive compounds, such as the polyphenols naringenin, apigenin, and ellagic acid from olives, which have beneficial properties that lower the risk of metabolic syndrome and CVD [225,226]. One mechanism by which polyphenols reduce the risk of developing metabolic syndrome is inhibition of the inflammasome and NF-κB, leading to decreased secretion of proinflammatory cytokines [227]. Nuts, such as almonds and walnuts, reduce inflammation and oxidative stress by decreasing the levels of C-reactive protein, IL-6, endothelial adhesion molecules, and oxLDL, thereby reducing the risk of metabolic syndrome [228][229][230]. Tocopherols, key lipophilic radical-scavenging antioxidants, can interrupt the lipid peroxidation cycle and modulate the nuclear factor erythroid 2/electrophile-responsive element (Nrf2/EpRE), PI3K/Akt/mTOR, and NF-κB signaling pathways, thereby improving quality of life [231].
Studies involving pharmacotherapeutic agents suggest that inhibition of protein synthesis activated by reactive oxygen species is one way that targeted therapy could be achieved, as elaborated by Vassalle et al. [232]. Many drugs used in the treatment of CVD have simultaneous antioxidant effects; for example, beta blockers, angiotensin-converting enzyme (ACE) inhibitors, and angiotensin receptor blockers (ARB) act on multiple pathways [233]. Therapeutic agents such as statins show both antioxidant and anti-inflammatory properties, as cytokine production is reduced once they are administered [234]. Recent studies have shown that statins improve both vascular and cardiac disease and achieve this by inhibiting specific proteins such as Rac and Rho [233,235]. The pharmacological effect of statins is through targeting and blocking the enzyme hydroxymethylglutaryl-CoA (HMG-CoA) reductase. This in turn inhibits the biosynthesis of mevalonic acid, the precursor of nonsteroidal isoprenoids, the lipid moieties to which Rac and Rho attach [236]. Furthermore, Rho acts on the endothelium and negatively regulates nitric oxide synthase, whereas Rac is a key target for the assembly and activity of NADPH oxidase [234,236]. NADPH oxidase is a major source of reactive oxygen species; statins thus target and inhibit NADPH oxidase, thereby reducing the production of ROS [234].
Conclusions
The mechanisms underlying metabolic syndrome mediated by oxidative stress are complex and intricately interrelated. Biomarkers of metabolic syndrome that reflect disease severity are available; however, more clinical studies are required to understand their value and use in the clinical setting.
• Oxidative stress plays a role in metabolic derangements in obesity, diabetes, and cardiovascular pathogenesis;
• Biomarkers and molecular targets may help us develop innovative methods for preventing, diagnosing, and treating inflammatory and metabolic disorders;
• Antioxidants can be used as a preventative or therapeutic treatment for metabolic diseases.
What Is New
• Mitochondrial oxidative stress and dysfunction may be the primary causes of oxidative damage and metabolic abnormalities in metabolic syndrome;
• Several signaling pathways involving NF-κB, PKC, MAPK, polyol, JNK, ERK, and NOX are activated to induce metabolic syndrome and multiple organ damage;
• Adiposity plays a vital role in inducing oxidative stress that results in endothelial dysfunction, cardiovascular remodeling, and hypertension;
• Components of the Mediterranean diet, such as polyphenols found in olives, can lower oxidative stress and reduce the risk of the development of metabolic syndrome.

Institutional Review Board Statement: Not applicable.
Altered Swimming Behaviors in Zebrafish Larvae Lacking Cannabinoid Receptor 2
Abstract

Background and Objectives: The cannabinoid receptor 2 (CB2) was previously implicated in brain functions, including complex behaviors. Here, we assessed the role of CB2 in selected swimming behaviors in zebrafish larvae and developed an in vivo, upscalable, whole-organism approach for CB2 ligand screening. Experimental Approach: Using CRISPR-Cas9 technology, we generated a novel null allele (cnr2 upr1) and a stable, homozygote-viable loss-of-function (CB2-KO) line. In untreated wild-type and cnr2 upr1/upr1 larvae, we measured photo-dependent (swimming) responses (PDR) and center occupancy (CO) to establish quantifiable anxiety-like parameters. Next, we measured PDR alteration and CO variation while exposing wild-type and mutant animals to an anxiolytic drug (valproic acid [VPA]) or to an anxiogenic drug (pentylenetetrazol [PTZ]). Finally, we treated wild-type and mutant larvae with two CB2-specific agonists (JWH-133 and HU-308) and two CB2-specific antagonists/inverse agonists (AM-630 and SR-144528). Results: Untreated CB2-KO larvae showed a different PDR than wild-type larvae as well as a decreased CO. VPA treatment diminished swimming activity in all animals, but to a lesser extent in mutants; CO was strongly diminished, and even more so in mutants. The PTZ-induced inverted PDR was significantly stronger in light and weaker in dark periods, and the CO was lower, in PTZ-treated mutants. Finally, two of the four tested CB2 ligands had detectable activity in the assay. Conclusions: We showed that larvae lacking CB2 behave differently in complex behaviors that can be assimilated to anxiety-like behaviors. Mutant larvae responded differently to VPA and PTZ treatments, providing in vivo evidence of CB2 modulating complex behaviors. We also established an upscalable, combined genetic/behavioral approach in a whole organism that could be further developed for high-throughput drug discovery platforms.
Introduction
The endocannabinoid (eCB) system is a key modulator of excitatory and inhibitory neuronal activity 1 and its dysregulation has been linked to several psychiatric disorders. [2][3][4][5] The two cannabinoid receptors (CB1 and CB2) belong to the G protein-coupled receptor family. They are both activated by endogenous ligands (eCBs) 6,7 and by exogenous compounds such as Δ9-THC, the main psychoactive component of cannabis. 8 CB1 is highly expressed in the CNS and implicated in numerous neurological diseases (for review, see Marco et al., 3 Kendall and Yudowski, 5 Bilkei-Gorzo, 9 Di Marzo et al., 10 and Pavlopoulos et al. 11 ). By comparison, CB2 expression was initially described in the immune system but more recently also in discrete brain regions, where its role is still poorly understood.
A genome-wide association study showed association between specific SNPs in the CNR2 gene encoding CB2 and schizophrenia. 12 Several lines of evidence suggest a role for CB2 in complex and specific behaviors in adult rodents. [13][14][15] CB2-KO mice displayed schizophrenia-related behaviors, 16 altered cognitive function, 17,18 modified cocaine-reward behaviors, 19 as well as increased aggressiveness. 20 CB2 overexpression was associated with reduced anxiety-like behaviors 21 and resistance to depression, 22 whereas temporary blockage of CB2 expression resulted in reduced aversion to open space. 15 Conditional CB2-KO demonstrated that CB2 can regulate synaptic transmission in hippocampal pyramidal cells and modulate gamma oscillations. 23 Modulation of CB2 expression in the hippocampus showed a regulatory role in fear and working memory. 18 Suppression of CB2 expression in dopamine neurons inhibited psychomotor behaviors, altered anxiety and depression measures, and changed alcohol preferences. 24 So far, few developmental studies have been performed for CB2 25,26 and none has explored complex behaviors during this critical period.
Zebrafish is a powerful genetic and developmental model, which also provides the unique feature of upscalability, allowing high-throughput approaches applicable to pharmacological screens (for review, see Rennekamp and Peterson 27 ). Using CRISPR-Cas9, we created a novel null allele (cnr2 upr1 ) and established a stable loss-of-function (CB2-KO) zebrafish mutant line. 28 Homozygote larvae were viable without an overt phenotype and were raised into fertile and healthy adults over several generations, all completely lacking CB2. Next, we tested swimming behaviors in 6-day postfertilization (dpf) wild-type and homozygote larvae, monitoring photo-dependent responses (PDR) 29,30 and measuring the center occupancy (CO) of wells, which provides an inverse measure of center avoidance. We found that CB2-KO larvae swam significantly less in light and significantly more in dark periods, with a decreased CO, when compared with wild-type larvae. When adding a broad-spectrum anxiolytic drug (valproic acid [VPA]) 30,31 just prior to recording, we found that swimming activity and CO were strongly reduced in all animals in a similar manner, but swimming was slightly less diminished and CO more diminished in mutants. When adding a classical anxiogenic drug, pentylenetetrazol (PTZ), a well-characterized GABAA inhibitor in many animal models, [32][33][34][35][36] we found that larvae lacking CB2 presented an increase in swimming activity and a decrease in CO when compared with wild type. Taken together, we provide in vivo evidence for CB2 modulating complex behaviors in zebrafish larvae.
Finally, to test the potential of our approach for CB2 ligand screening, we treated wild-type and CB2-KO larvae with two CB2-specific agonists (JWH-133 and HU-308) and two CB2-specific antagonists (AM-630 and SR-144528). 37 We found that two of the four treatments elicited detectable PDR alterations, possibly CB2 mediated for the most part. Thus, we described a novel, upscalable behavioral approach for drug screening in a whole organism, providing a complementary alternative to current methods.
Materials and Methods
Zebrafish care and husbandry

We used TAB5 or NHGRI wild-type animals that we raised and maintained in our fish room following standard procedures and IACUC protocol (#A880216).
CB2 mRNA and protein expression
Total RNA was prepared from genotyped larvae, and the subregion of interest in the cnr2 mRNA was reverse transcribed using reverse transcriptase (Sigma) and the following primers (F: 5'-CAGCTGCCACGTGATATAAGTA-3'; R: 5'-ATGCCAGCATTTCTCCCCTC-3'), and subsequently sequenced with the same primers.
The behavioral swimming assay: the PDR

Five dpf larvae were loaded into 48-well plates (CELLSTAR®) 24 h prior to recording, for animals to adapt to the new environment. A single larva was placed in each well in 450 µL system water (SW). The next day, wells were topped off to 500 µL with SW plus the desired concentration of the drug under study, or with SW alone, and plates were immediately placed in the recording device (Zebrabox, Viewpoint, France). After a 30-min adaptation/incubation in the dark, animals were subjected to 10 min of light (L) at maximum intensity (=385 Lux) followed by 10 min of dark (D) in four successive cycles. All results were binned into 1-min intervals.
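For illustration only (this is not the Viewpoint/Zebrabox software output), the recording schedule and 1-min binning described above can be sketched in Python; the simulated per-second distances below are placeholders.

```python
# Illustrative sketch of the recording schedule and 1-min binning described
# above: 30 min dark adaptation, then four cycles of 10 min light (L) and
# 10 min dark (D). Distance values here are simulated placeholders, not data.
import random

def build_schedule(adaptation_min=30, cycles=4, light_min=10, dark_min=10):
    """Return a per-minute list of condition labels for one recording."""
    schedule = ["adapt"] * adaptation_min
    for _ in range(cycles):
        schedule += ["L"] * light_min + ["D"] * dark_min
    return schedule

def bin_distance_per_minute(per_second_distance_cm, seconds_per_bin=60):
    """Sum per-second traveled distances into 1-min bins."""
    return [sum(per_second_distance_cm[i:i + seconds_per_bin])
            for i in range(0, len(per_second_distance_cm), seconds_per_bin)]

schedule = build_schedule()
# Simulated per-second traveled distance for one larva over the recording.
raw = [random.uniform(0.0, 0.2) for _ in range(len(schedule) * 60)]
per_min = bin_distance_per_minute(raw)
for minute, (condition, dist) in enumerate(zip(schedule, per_min), start=1):
    if condition != "adapt" and minute % 10 == 1:  # print at period starts
        print(f"min {minute:3d} ({condition}): {dist:.2f} cm")
```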
Center occupancy
We virtually defined an inner central zone (0.5 cm diameter) within the whole well (1.0 cm diameter) and recorded inner and total traveled distances. The percentage of CO was calculated as follows: CO = inner traveled distance/total traveled distance × 100.
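The CO calculation reduces to a simple ratio; the following minimal sketch mirrors the formula above, with made-up distances used only as an example.

```python
# Minimal sketch of the center occupancy (CO) formula given above:
# CO (%) = inner traveled distance / total traveled distance * 100.
# The distances below are hypothetical examples, not measured values.

def center_occupancy(inner_cm: float, total_cm: float) -> float:
    """Percentage of the total traveled distance that occurred in the
    virtual inner zone (0.5 cm diameter) of the well."""
    if total_cm <= 0:
        raise ValueError("Total traveled distance must be positive.")
    return inner_cm / total_cm * 100.0

print(center_occupancy(inner_cm=1.2, total_cm=8.0))  # 15.0 (%)
```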
Toxicity/survival assay

CB2 ligand Kis are in the nM range. However, to overcome diffusion and permeability issues, we expected to use them in the µM range. To exclude aberrant swimming patterns induced by overexposure compromising health and survival, we treated healthy 5 and 6 dpf larvae with CB2 ligands at 1, 10, and 50 µM (n = 10/ligand/concentration) for 24 h and assessed the following criteria: overall larval morphology, spontaneous swimming, and responses to sound and mechanical stimuli. Concentrations resulting in abnormal morphology or responses were considered not well tolerated and excluded from further experiments.
Statistical analysis
We analyzed averaged total traveled distances per larva (with a minimum of triplicate experiments and 24 animals/treatment) in GraphPad Prism (v.7). All results were binned into 1-min intervals, and error bars represent mean ± standard error of the mean. Statistical differences between direct comparisons were calculated using multiple t-tests, controlling for the effect of the correlation among the fixed number of repeated measures. We performed two-way analysis of variance when two or more groups were compared simultaneously. Differences with p < 0.05 were considered significant (*).
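As an illustrative alternative to the Prism workflow described above (not the analysis actually performed), per-period comparisons and a two-way ANOVA could be run with SciPy and statsmodels; all values below are randomly generated placeholders.

```python
# Illustrative sketch (not the authors' Prism analysis): a per-period t-test
# and a two-way ANOVA (genotype x light period) on simulated traveled
# distances. All numbers are randomly generated placeholders.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for genotype, light_mean, dark_mean in [("wt", 5.0, 9.0), ("ko", 4.0, 10.0)]:
    for period in ["light", "dark"]:
        mean = light_mean if period == "light" else dark_mean
        for dist in rng.normal(mean, 1.5, size=24):  # 24 larvae per group
            rows.append({"genotype": genotype, "period": period,
                         "distance": dist})
df = pd.DataFrame(rows)

# Direct comparison within one period (analogous to the multiple t-tests).
wt_dark = df.query("genotype == 'wt' and period == 'dark'")["distance"]
ko_dark = df.query("genotype == 'ko' and period == 'dark'")["distance"]
print(stats.ttest_ind(wt_dark, ko_dark))

# Two-way ANOVA: genotype, period, and their interaction.
model = ols("distance ~ C(genotype) * C(period)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```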
Results
Generation of a CB2 loss-of-function stable mutant line (null allele: cnr2 upr1 )

In zebrafish, a single cnr2 gene with two transcripts (Fig. 1A) resulting in identical translated exons (solid red blocks) encodes a 383 amino acid (aa) CB2 protein, which is slightly longer than the human homologue (360 aa). To generate loss-of-function alleles, we designed guide RNAs targeting the 5' end of the second translated exon (Fig. 1A, T in light blue).
We outcrossed adult founders (F0) and genotyped the offspring (F1) for germ line transmission of INDELs in the target site. We identified a two-nucleotide deletion (Δ2: CT) just 3 base pairs upstream of the protospacer adjacent motif site (purple square at the bottom of Fig. 1A) introducing a translation frameshift, which would predictably create an early stop codon at aa position 159. The resulting truncated protein, if not degraded, would have only two transmembrane domains, making this very likely a loss-of-function mutation. We grew this allele (cnr2 upr1 ) to homozygosity and genotyped animals by classical sequencing (Fig. 1B, left panels) or by fluorescent PCR 40 (Fig. 1B, right panels). We closely monitored heterozygote and homozygote larval development and morphology between 2 and 9 dpf and found no obvious phenotype. Likewise, adult genotyped cnr2 upr1/upr1 animals were healthy, fertile, and inbred into a stable F3 generation, from which we obtained the cnr2 upr1/upr1 larvae used in all the behavioral studies.
To confirm that we had generated a loss of function, we analyzed the cnr2 gene products. First, we synthesized and sequenced the cDNA obtained from 6 dpf wild-type (n = 3) and genotyped cnr2 upr1/upr1 (n = 3) single larvae, all of which carried the deletion (Δ2: CT, Fig. 1C). Next, we performed Western blots with an anti-CB2 antibody raised against an epitope located before the first transmembrane domain of human CB2, to allow detection of a putative truncated protein. We prepared protein extracts from dissected adult brains of wild-type (n = 3) and genotyped cnr2 upr1/upr1 (n = 3) animals. As predicted, we found a CB2-specific band at ~40 kDa in all wild-type extracts (Fig. 1D, top left panels; Wt1 and Wt2 are shown) but not in any cnr2 upr1/upr1 extracts (−/−1 and −/−2 are shown). Notably, no shorter CB2-KO-specific product was found, arguing that the truncated protein was unstable. We further validated the specificity of the CB2 antibody in HEK293 cells (Fig. 1D, lower left panels), which we transfected with a tagged CB2 construct (SEP-CB2) detected at 70 kDa. We probed tubulin expression in all samples (Fig. 1D, lower bands in left panels) and quantified it to determine relative CB2 expression (graphs on the right). Taken together, we demonstrated that CB2 was expressed in the adult fish brain but absent in cnr2 upr1/upr1 animals, and that cnr2 upr1 is a null allele resulting in viable homozygote larvae and adults totally devoid of CB2.
Absence of CB2 affects the swimming PDR
To determine if CB2 could have a role in complex behaviors, we assessed swimming behaviors using a previously established PDR assay, 29 in which we measured traveled distance and CO during four successive 10-min-long light (L) and dark (D) periods. We recorded simultaneously wild-type and mutant larvae individually distributed in 48-well plates. L/D cycling was started after 30 min of adaptation to darkness. We graphed the averaged traveled distance/larva/min for the entire recording time (Fig. 2A, Wt black and KO red, N = 5, n = 120 larvae/genotype). A highly reproducible swimming pattern (=PDR) emerged: lower swimming activity in L, which strongly increased in D periods. The most drastic changes always occurred immediately after a light change and leveled out over the remainder of the period. Thus, we decomposed the analysis into successive (left panels) and cumulative (right panels) post-transition (=first min after an L/D or D/L change, Fig. 2B) and nontransition (=remainder of a period, Fig. 2C) measures. (Figure legend: percentage of CO calculated as inner/total distance traveled × 100; error bars represent the standard error of the mean (SEM); statistical significance *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; for clarity, ns is omitted. CO, center occupancy; dpf, days postfertilization; ns, not significant; PDR, photo-dependent response.)
Notably, cnr2 upr1/upr1 traveled significantly less in L and significantly more in D periods than wild-type animals (* in Fig. 2A). Inner traveled distances were not consistently or significantly different between wild-type and mutant larvae (Fig. 2D, left panel), but when we calculated the CO, it was significantly lower in mutants in D/L (right panel, white: CO Wt = 14.70% vs. CO KO = 9.80%, p < 0.05) and L/D post-transitions (gray: CO Wt = 22.70% vs. CO KO = 18.40%, p < 0.001). Thus, CB2-KO animals spent less time in the center independently of the total distance traveled, suggesting that mutant larvae were avoiding open spaces more than wild-type animals. Taken together, animals lacking CB2 were hypoactive in L, hyperactive in D periods, and had decreased CO. These results led us to postulate that CB2 modulates complex behaviors.
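For clarity, the following minimal sketch shows how the center-occupancy (CO) metric used above (inner traveled distance / total traveled distance × 100) could be computed per larva; the numeric values are made up for illustration and do not reproduce the reported data.

```python
# Minimal sketch of the center-occupancy (CO) metric: CO (%) = inner / total * 100.
import numpy as np

def center_occupancy(inner_dist: np.ndarray, total_dist: np.ndarray) -> np.ndarray:
    """Per-larva CO (%); inputs are matched 1-D arrays of distances (cm)."""
    total = np.where(total_dist > 0, total_dist, np.nan)  # avoid division by zero
    return inner_dist / total * 100.0

# Made-up example values (not the study's data):
wt_co = center_occupancy(np.array([1.5, 2.0, 1.1]), np.array([10.0, 9.0, 8.0]))
ko_co = center_occupancy(np.array([0.8, 1.0, 0.7]), np.array([10.5, 9.5, 8.5]))
print(wt_co.mean(), ko_co.mean())  # a lower mean CO indicates stronger center avoidance
```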
Animals lacking CB2 respond differently to the anxiolytic drug VPA
We previously showed that larvae treated with VPA [2 mM] (VPA2) had an altered PDR with overall lower swimming activity. 30 To assess a possible involvement of CB2, we set up parallel VPA2 treatments of wild-type and mutant larvae. After recording, we graphed the averaged traveled distance/larva/min (Fig. 3A, N = 5, untreated Wt black n = 40; untreated KO red n = 40; Wt VPA2 blue n = 80; and KO VPA2 orange n = 80). All VPA2-treated larvae exhibited a strong overall decrease in activity that was similar at most recorded time points, independently of the genotype. We found a few significant differences, with mutants traveling less than wild-type animals in D/L post-transitions (Fig. 3B, successive: left panel white, p < 0.01; cumulative: right panel white, Wt VPA2 blue = 4.82 cm/min vs. KO VPA2 orange = 2.77 cm/min, p < 0.0001). Also, in D periods, treated mutants traveled slightly but significantly more than treated wild-type larvae in nontransitions (Fig. 3C, successive: left panel gray, p < 0.01 except D4; and cumulative: right panel gray, Wt VPA2 blue = 4.06 cm/min vs. KO VPA2 orange = 5.16 cm/min, p < 0.005). Thus, these results suggested that the VPA-triggered decrease in swimming activity was partially modulated by CB2.
Inner traveled distances in D periods were reduced in both treated wild-type and mutant (Fig. 3D, left panel gray, p < 0.01) and the CO was reduced (right panel gray box CO Wt black = 20.26% vs. CO Wt-VPA2 blue = 13.94%, p < 0.001; CO KO red = 13.94% vs. CO KO-VPA2 orange = 5.75%, p < 0.0001). In L periods, the CO was also significantly reduced in treated versus nontreated mutants (white, CO KO red = 3.42% vs. CO KO-VPA2 orange = 0.99%, p < 0.0001). Thus, CO was strongly decreased in all treated animals and even more so in CB2-KO animals.
Inner traveled distances were strongly reduced in all PTZ 7.5 -treated animals, but only in D periods (Fig. 4D, left panel, gray boxes, p < 0.0001). The CO was strongly reduced in L (right panel white box CO Wt black = 10.38% vs. CO Wt-PTZ green = 5.04%, p < 0.0001; and CO KO red = 12.06% vs. CO KO-PTZ khaki = 2.91%, p < 0.0001) and in D periods (gray box CO Wt black = 20.68% vs. CO Wt-PTZ green = 4.00%, p < 0.0001; and CO KO red = 18.72% vs. CO KO-PTZ khaki = 2.21%, p < 0.0001). Remarkably, the CO reduction was even more pronounced in CB2-KO animals. Taken together, PTZ-triggered inverted PDR and decreased CO were significantly altered in the absence of CB2, suggesting that CB2 was a modulator of the PTZ anxiogenic effects.
A subset of CB2 ligands alters the PDR differently in wild-type and CB2-KO larvae
To assess if the PDR could detect CB2 ligand activity, we treated wild-type and mutant larvae with agonists (JWH-133 and HU-308) and antagonists (AM-630 and SR-144528). All ligands were pretested for overexposure-induced side effects (as detailed in the Materials and Methods section), and we further analyzed the PDR after treatment with the highest well-tolerated concentration (Fig. 5).
With JWH-133 (top panel in Fig. 5A, Wt-JWH-133 50 dark green n = 24, and KO-JWH-133 50 light green n = 24), when we compared untreated versus treated wild type (black vs. dark green), we found that treated animals traveled significantly less in L (dark green* middle row, 30/40 in L and 4/40 in D), possibly as a result of CB2 binding and activation. Furthermore, when comparing untreated versus treated mutants (red vs. light green), we found few significant differences (light green* top row, 4/40 in L and 4/40 in D), meaning there was little activity detected in the absence of CB2, suggesting good ligand specificity and low off-target effects. Thus, we concluded that JWH-133 50 altered the PDR, possibly in a CB2-specific manner.
With HU-308 (bottom panel in Fig. 5A, Wt-HU-308 10 brown n = 24, and KO-HU-308 10 khaki n = 24), when we compared untreated versus treated wild type (black vs. brown), we found very few significant differences (brown* middle row, 6/40 in L and 1/40 in D) and likewise when comparing untreated versus treated CB2-KO larvae (red vs. khaki; khaki* top row, 6/40 in L and 4/40 in D), pointing to a weak in vivo effect at well-tolerated concentrations.
With AM-630 (top panel in Fig. 5B, Wt-AM-630 3.5 dark blue n = 24, and KO-AM-630 3.5 light blue n = 24), when we compared untreated versus treated wild type (black vs. dark blue), we found that treated animals traveled less during nontransitions (dark blue* middle row 13/40 in L and 14/40 in D), possibly as a result of CB2 binding and activation. When comparing untreated versus treated mutant larvae (red vs. light blue), we found a few significant differences mostly in D periods (light blue*, top row 2/40 in L and 12/40 in D), suggesting that the detectable AM-630 3.5 -induced in vivo effect might be CB2 specific in L, but be off-target effects in D periods.
With SR-144528 (bottom panel in Fig. 5B, Wt-SR-144528 10 purple n = 24 and KO-SR-144528 10 pink n = 24), when we compared untreated versus treated wild type (black vs. purple), we found no significant differences (purple* middle row, 0/40 in L and 1/40 in D) and likewise with untreated versus treated mutant larvae (red vs. pink; pink* top row, 2/40 in L and 12/40 in D), pointing to a weak in vivo effect at well-tolerated concentrations. Taken together, we elicited detectable in vivo effects with two of the four tested CB2 ligands, which altered the PDR significantly at a subset of time points in wild type but not in mutants, arguing for CB2 specificity of the observed effects.
Discussion and Conclusions
To assess CB2 involvement in complex behaviors during vertebrate development, we used CRISPR-Cas9 technology to generate CB2-KO animals and tested homozygote (cnr2 upr1/upr1) larvae in a PDR swimming behavior
assay. We showed that mutant animals swam significantly less in light, more in the dark, and avoided open spaces more than wild type. Thus, we provide evidence for CB2 involvement in complex larval behaviors. Hyperactivity and hypoactivity associated with, but not limited to, light changes are well-accepted measures of anxiety-like behaviors in rodents and have also been explored in adult fish, [41][42][43] although not yet extensively in larvae. 44,45 We tested 6-dpf larvae because at this developmental stage, animals swim upright and exhibit complex behaviors comparable with adults. Using larvae presents major experimental advantages such as enabling upscalability. The small size (~2 mm) and relative permeability of young larvae simplify chemical treatments, which can simply be added to the water and will penetrate the animal by diffusion. Weekly spawning (~100 eggs/couple/week) can provide ample numbers of animals for parallel testing of various concentrations of compounds. Center avoidance is another classical measure of anxiety-like behavior. 46,47 As described previously, wild-type larvae swim mostly near the walls but travel more in the center during L/D post-transitions. 29 So, we measured inner traveled distances in post-transitions. However, variation of inner distances might simply reflect variation of the total activity, so we expressed relative distances traveled as a ratio: inner traveled distance/total traveled distance × 100 to obtain the percentage of CO, providing an inverse reading of center avoidance. Therefore, we established anxiety-like in vivo parameters for fish that open new experimental avenues.
Next, we showed that when treating larvae with the anxiolytic drug VPA, the PDR was strongly reduced similarly in wild-type and mutant larvae, but with stronger hypoactivity in D/L post-transitions and more activity in dark nontransitions in the latter. VPA (or valproate) is a broad-spectrum anxiolytic drug 31 that increases gamma-aminobutyric acid (GABA) turnover, inhibits glutamate/N-methyl-D-aspartate (NMDA) receptors, and blocks voltage-dependent sodium channels. [48][49][50] The complex mode of action in a whole organism is yet to be clarified, and our results argue for only a marginal modulation by CB2. Surprisingly, the CO was strongly reduced in darkness, indicating that all treated animals were avoiding open spaces. These results suggested that VPA had an anxiogenic effect in fish larvae, which was amplified in the absence of CB2. Alternatively, this might reflect sedation, a commonly described side effect of VPA. 49 PTZ is commonly used in animal models to induce anxiety and seizure-like activity that is principally mediated via GABA A inhibition. 35,51 We and others have previously shown that with a fixed concentration of PTZ [7.5 mM] (PTZ 7.5), a strong inverted PDR could be induced in wild-type larvae, namely hyperactivity in light periods and hypoactivity in dark periods. 30,34,40 Treated CB2-KO larvae had a consistently heightened inverted PDR in light periods. Involvement of CB2 in the PTZ-elicited GABA A inhibition was previously shown in rodents 52 and offers a potential explanation. However, a greater sensitivity of mutant larvae to treatments, as well as possible additive effects occurring in parallel signaling pathways, cannot be excluded at this point. Testing of different doses of VPA and PTZ as well as cotreatments will help elucidate CB2 involvement.
We also measured the effect of four known CB2 ligands on the PDR and showed that two of them elicited PDR alterations that were possibly CB2 mediated. Our data are proof of principle that such an approach could be further developed into an effective means to screen ligand-binding efficacy and specificity, as well as drug safety, in a whole organism. However, a few major drawbacks must be addressed before exploiting this approach on a large scale. First, possibly because of the mode of administration of the ligands (directly into the water), we had to use very high concentrations of ligands to elicit a detectable response. This significantly narrowed the testable range of concentrations before reaching toxic levels. Alternative means of drug administration, such as food additives, should be explored. Second, we found an internal variation of the PDR across experiments in untreated wild-type and mutant larvae alike, rendering phenotypic differences in dark periods less consistent across experiments with different treatments, especially when using smaller sample sizes (n < 40). However, we found that the robustness of the phenotype could easily be strengthened by augmenting the number of tested animals, and by always using non-treated wild-type and mutant animals from the same clutches in parallel runs to provide solid internal controls.
In summary, we present an innovative, upscalable approach that can be coupled to automated readouts of significant differences and applied to a drug discovery pipeline to test new CB2 ligand lead compounds. Likewise, mutant lines for CB1, opioid receptors, or GABA subunits could be established and double- or even triple-KO lines used as screening tools. Finally, the indispensable preclinical safety and efficacy testing needed for bringing new drugs to the market could be performed in zebrafish larvae, offering a cost-effective, fast, and easy alternative or complement to the more classical preclinical models.
The Effect of Telemedicine in Glycemic Control in Adult Patients with Diabetes during the COVID-19 Era—A Systematic Review
Telemedicine can be an effective tool for managing chronic diseases. The disruption in traditional diabetes care resulting from the COVID-19 pandemic led to global interest in telemedicine. With this manuscript, we evaluated the use of telemedicine for the management of diabetes during the pandemic and its impact on glycemic control, focusing on retrospective and prospective studies which included adult, non-pregnant patients with diabetes. We evaluated whether there was an improvement in HbA1c, time in range (TIR), glucose management indicator (GMI), mean glucose values, hypoglycemic episodes, time below range (TBR), or hospitalizations for hypoglycemia/DKA, depending on the available information provided. This review article highlights the benefits of telemedicine during the global state of emergency, which altered the standard of healthcare delivery. Across the studies reported in this review, telemedicine was shown to be an effective tool for the management of diabetes, illustrating its potential to be the new standard of care. Although these improvements may be confounded by potential extraneous factors present during the pandemic, telemedicine was shown to positively impact glycemic control. Overall, this article highlights the benefits of telemedicine on glycemic control during the global state of emergency, which altered the standard of care. With the rollback of COVID-19 restrictions, and a return to the office, this article emphasizes the necessity to study how telemedicine can be best utilized for diabetes management when compared to the traditional standard of care.
Introduction
Diabetes is one of the most prevalent diseases worldwide. According to the CDC, from 2001 to 2020, the prevalence of diabetes significantly increased among adults in the United States. Furthermore, the CDC estimated that 37.3 million people, representing 11.3% of the US population, have diabetes [1]. Notably, the global prevalence is expected to rise to 578 million by 2030 [2]. Following the COVID-19 pandemic declaration, patients with diabetes were found to be at particularly high risk of intensive care unit (ICU) admission and mortality from COVID-19 infection, representing a vulnerable population [3][4][5][6]. The advent of the pandemic ushered in a new era in medical care, especially for diabetes, by allowing telehealth to become a key alternative tool that can help modernize care through the use of tools such as continuous glucose monitors, smart pens, and smart phones [7]. The outbreak of the COVID-19 pandemic created an additional challenge in providing care for chronic diseases such as diabetes. Given its highly contagious nature and propensity to spread from one person to another through direct transmission, measures such as social distancing, lockdowns, and travel restrictions were implemented to mitigate virus spread and reduce hospitalizations in different parts of the world, which led countries to adopt different strategies [8]. In the United States, there was a significant drop in in-person outpatient visits, prompting a shift towards the use of telemedicine as a consequence [9]. However, the impact of the pandemic extended beyond the United States and had major repercussions on care across different countries around the world and medical specialties [10].
Overall, the change in the landscape of medical care posed a challenge to the way healthcare was delivered. Consequently, institutions increasingly utilized virtual clinics and telemedicine interventions to provide appropriate care for patients, including those with diabetes, to protect against COVID-19 infections. Despite the sudden change in care, telemedicine was positively received by patients [11,12]. Telemedicine is defined by the Institute of Medicine as "the use of electronic information and communications technologies to provide and support health care when distance separates the participants" [13]. The Centers for Medicare and Medicaid Services (CMS) describes telemedicine as "the exchange of medical information from one site to another through electronic communication to improve a patient's health" [14]. Telemedicine can be an effective tool for more than just patients with an established diabetes diagnosis. It can also be used to navigate challenging situations such as insulin pump training through virtual clinics or management of new-onset diabetes, circumstances where in-person care was traditionally deemed necessary [15,16]. Although telemedicine was not broadly used prior to the onset of the pandemic, it swiftly became an instrumental tool for the care of patients with diabetes; that, in conjunction with the use of technology such as continuous glucose monitors (CGM), allowed physicians to provide adequate care and made telemedicine feasible [17].
The COVID-19 pandemic led to worldwide interest in telemedicine, as evidenced by the multiple publications presented in this paper. In this article, we evaluate the use of telemedicine for the management of diabetes by presenting a comprehensive review of papers that focused on the use of telemedicine on glycemic control in adults after the COVID-19 pandemic declaration.
Methods
An electronic search of PubMed was conducted by two independent reviewers (F.S., R.H.) to analyze publications relating to diabetes management, telemedicine, and COVID-19. The search was conducted via the PubMed advanced search builder using the following key words: 'Diabetes telemedicine clinic and COVID-19', or 'Glycemic control telemedicine clinic and COVID-19', or 'Diabetes management and SARS-CoV lockdown', or 'Telemedicine diabetes and lockdown' or 'Impact telemedicine and diabetes control lockdown'. The search resulted in a total of 646 articles, which we filtered based on publication date. Using '11 March 2020-31 July 2022', a total of 376 records remained. Two duplicate records were removed, and those that included pediatric patients or pregnant patients were excluded. From the 317 reports that remained, a filter was used to exclude review articles, systematic reviews, and meta-analysis articles. The remaining articles were screened for relevance, study purpose, and outcome measures. Those that did not have glycemic control evaluation as either primary or secondary end points, did not describe the impact of telemedicine on diabetes management during the pandemic, or studied diabetes comorbidities were excluded (Figure 1). This review did not focus on the financial impact of telemedicine. In the included studies, time in range (TIR), time above range (TAR), glucose management indicator (GMI), mean glucose value, postprandial plasma glucose (PPPG), fasting plasma glucose (FPG), hemoglobin A1c (HbA1c), time below range (TBR), and hypoglycemic events were used as parameters for evaluating glycemic control. Glucose monitoring methods used to monitor patients included continuous glucose monitoring (CGM), self-monitoring of blood glucose (SMBG), and flash glucose monitoring (FGM). Additionally, multiple daily insulin injections (MDI), continuous subcutaneous insulin infusion (CSII), and non-insulin hypoglycemic medications (oral hypoglycemic agents and GLP-1 agonists) were among the different glucose treatment methods used in the various studies (Tables 1 and 2).
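For reference, the short sketch below shows how the CGM-derived parameters listed above (TIR, TAR, TBR, mean glucose, GMI) are conventionally computed from a series of glucose readings; it is an illustrative example, not code from any of the reviewed studies, and it uses the published linear GMI estimate (GMI % = 3.31 + 0.02392 × mean glucose in mg/dL).

```python
# Illustrative sketch (not from the reviewed studies): computing common CGM-derived
# glycemic-control metrics from a list of glucose readings in mg/dL.
import numpy as np

def cgm_metrics(glucose_mg_dl):
    g = np.asarray(glucose_mg_dl, dtype=float)
    tir = np.mean((g >= 70) & (g <= 180)) * 100   # time in range, %
    tar = np.mean(g > 180) * 100                  # time above range, %
    tbr = np.mean(g < 70) * 100                   # time below range, %
    mean_glucose = g.mean()
    # Glucose management indicator: linear estimate from mean glucose (Bergenstal et al.).
    gmi = 3.31 + 0.02392 * mean_glucose
    return {"TIR_%": tir, "TAR_%": tar, "TBR_%": tbr,
            "mean_glucose_mg_dL": mean_glucose, "GMI_%": gmi}

# Example with made-up readings:
print(cgm_metrics([95, 120, 165, 210, 140, 68, 180]))
```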
[Table 1 and Table 2 excerpts: among those managed with CSII, mean glucose improved from 157.9 to 152.6 mg/dL in the 4 weeks before vs. 4 weeks after the telemedicine visit (p = 0.003); no significant change in TBR (<70 mg/dL), 3 to 5% (p = 0.06) in those who had a telemedicine visit vs. 4.5 to 5.5% (p = 0.40) in those who did not; no significant change in hypoglycemic events, 6 to 8 events (p = 0.22) vs. 11 to 8 events (p = 0.28), respectively; hospitalizations for DKA were 2.2% in the telemedicine group vs. 6.71% in the T1D Exchange; among those followed by telemedicine, mean GMI changed by −0.66% (from 9.91 to 9.25%). Abbreviations: CGM, continuous glucose monitor; SMBG, self-monitoring of blood glucose; TIR, time in range; TAR, time above range; TBR, time below range; GMI, glucose management indicator; HbA1c, hemoglobin A1c; DKA, diabetic ketoacidosis; OHA, oral hypoglycemic agent; GLP-1, glucagon-like peptide-1 receptor agonist; CSII, continuous subcutaneous insulin infusion.]
Evidence from Retrospective Studies
Among the retrospective studies published (Table 1), three of them assessed patients exclusively with type 1 diabetes (T1D) and included patients who used insulin pumps or MDI as methods of treatment and either CGM or FGM as glycemic monitoring methods [18][19][20].
A study conducted with 30 T1D patients on hybrid closed loop (HCL) insulin pumps [18] evaluated glycemic control through telemedicine across four different time points: two weeks before the lockdown (Time 0), the first two weeks of lockdown (Time 1), the last two weeks of lockdown (Time 2), and the first two weeks after the lockdown (Time 3) [18]. The study found an improvement in mean glucose value (155 mg/dL in Time 0 vs. 153 mg/dL in Time 3, p = 0.004) and a significant improvement in TIR (68.5% in Time 0 vs. 73.5% in Time 3, p = 0.012) without an increase in level 1 (54-69 mg/dL) or level 2 (<54 mg/dL) hypoglycemia. The improvement in TIR was instead associated with a reduction in TAR (Table 1).
Another study, by Boscari et al. [19], which enrolled 71 T1D patients managed by either MDI or CSII, analyzed the efficacy of telemedicine by comparing combined CGM/FGM data gathered four weeks before and four weeks after patients attended a telephone visit. This study showed a reduction in GMI from 7.16 to 7.05% (p = 0.002), a reduction in mean glucose value from 161.1 mg/dL to 156.3 mg/dL (p = 0.001), and a reduction in TAR (>180 mg/dL) from 33.4 to 30.5% (p = 0.002), with an improvement in TIR (70-180 mg/dL) from 63.6 to 66.4% (p < 0.001). Furthermore, among those managed by CSII, there was a reduction in mean glucose value from 157.9 mg/dL to 152.6 mg/dL (p = 0.003) [19]. No changes were observed in TBR (<70 mg/dL), with 3.0 vs. 3.2% (p = 0.6), respectively.
Alharthi et al. [20] evaluated patients with T1D and compared glycemic control using FGM device data from a total of 101 patients who attended a specialized diabetes clinic during the six-week lockdown period: 61 patients attended a telemedicine (TM) visit and 40 patients did not [20]. The study showed improvements in average blood glucose from 180 mg/dL to 159 mg/dL (p < 0.01) in those who attended a TM visit vs. 159.5 to 160 mg/dL (p = 0.99) in those who did not. An improvement in TIR (70-180 mg/dL) from 46.0% to 55.0% (p < 0.01) vs. 58.0 to 57.0% (p = 0.20) was also observed. The authors also found a reduction in GMI from 7.7 to 7.2% (p = 0.03) vs. 7.3 to 7.2% (p = 0.65) in those who attended a TM visit vs. those who did not, respectively. Additionally, a reduction in TAR (>180 mg/dL) was noted, without any significant change in TBR (<70 mg/dL) or in hypoglycemic events [20].
Four studies explored the impact of telemedicine on glycemic control in patients with type 2 diabetes (T2D) [21][22][23][24]. These studies monitored glycemic control through SMBG, fasting, or postprandial blood glucose. Unlike the studies mentioned above, none of the subjects used a continuous or flash glucose monitor. In addition, a wide range of medications, such as insulin, GLP-1 RAs, and SGLT2i, were used for glucose control in these studies; insulin pumps in patients with type 2 diabetes were not explored.
In another study, Dutta et al. [22] compared glycemic control among a cohort of 96 patients with T2D who were followed for a six-month period through telemedicine or in-person visits [22]. The study found a reduction in HbA1c from a baseline of 8.7 ± 1.8% to 6.9 ± 1.1% in the telemedicine group, compared to the in-person group, which had a reduction in HbA1c from a baseline of 8.6 ± 2.1% to 7.0 ± 1.0% (p = 0.88) at six months of follow-up. A reduction in FPG (fasting plasma glucose) and PPPG (postprandial plasma glucose) was noted in both groups as well [22].
The clinical effectiveness of telemedicine vs. a traditional care model was evaluated in 200 patients with uncontrolled T2D (HbA1c > 9%) who attended an outpatient diabetes clinic [23]. The telemedicine arm included patients who attended a virtual clinic between March and June 2020, and the traditional care model included patients who received in-person care between August and November 2020. The telemedicine group had a mean HbA1c reduction of 1.82 ± 1.35% (95% CI = 1.56-2.09, p < 0.001) compared to the traditional care model, which had a mean reduction of 1.54 ± 1.56% (95% CI = 1.23-1.85, p < 0.001) [23]. Another study explored the impact of telemedicine on HbA1c in high-risk patients (HbA1c > 8%) with T2D before and after the implementation of a pharmacist-led telehealth service [24]. The study evaluated the change in HbA1c between the pre-COVID-19 group (August 2019-February 2020) and the COVID-19 group (March 2020-October 2020). The study showed an HbA1c reduction of 1.3% in the pre-COVID-19 group vs. 2% in the COVID-19 group at three months of follow-up (p = 0.305). An HbA1c reduction of 1.2% in the pre-COVID-19 group vs. 2.2% in the COVID-19 group (p = 0.249) at six months of follow-up was also observed [24].
Finally, three retrospective studies enrolled both T1D and T2D patients to analyze the efficacy of telemedicine during the state of emergency [25][26][27]. Of these studies, one evaluated outpatient diabetes care and HbA1c levels during the 2020 pandemic relative to 2019 by comparing the 13 weeks before (pre-period) and after (post-period) the lockdown period (26 May-24 August 2020) with the same time frame in 2019 [25]. This study found a post-period HbA1c of 7.2% in 2020 and 7.2% in 2019 (p = 0.43), with a change in HbA1c of −0.1 and −0.2 from the pre-period, respectively (p < 0.001). A propensity analysis comparing clinic visits vs. telemedicine visits in 2020 showed a reduction in HbA1c from a baseline of 7.6 to 7.5% (p = 0.023; a change of −0.15) in the telemedicine group, compared with the clinic visit group, which showed a reduction from 7.6 to 7.4% (p = 0.023; a change of −0.23), with p = 0.019 favoring clinic visits over telemedicine [26]. The second study conducted a multiple regression analysis of patients with T1D and T2D (N = 2727), which showed that following adjustment for sex and type of diabetes, lower pre-BMI, lower pre-HbA1c, younger age, and clinic visit and/or telemedicine visit were associated with a higher chance of achieving an HbA1c < 7% [26]. Lastly, a study conducted by Wong et al. analyzed a cohort of 504 patients with both T1D and T2D [27]. The study assessed telehealth consultations that took place between 1 April 2020 and 1 September 2020 (Visit A), compared them to the proportion of patients who attended a face-to-face encounter during the same months in 2019 (Visit B), and finally compared them to patients who attended the clinic between April and September 2020 and had been attending the clinic face-to-face for at least 12 months prior to the onset of the pandemic (Visit C). When assessing the HbA1c available for all patients, the study found improvements in HbA1c of 7.8 ± 1.6% at Visit A when compared to 8.1 ± 1.4% at Visit B and 8.2 ± 1.7% at Visit C (p < 0.001). Patients with T2D also had a lower HbA1c at Visit A compared to Visits B and C. However, in patients with T1D, there was no significant difference in glycemic control between Visits A, B, and C, with an HbA1c of 8.3 ± 1.4%, 8.4 ± 1.7%, and 8.4 ± 1.8%, respectively [27].
Evidence from Prospective Studies
Three prospective studies evaluated the effect of telemedicine in improving glycemic control in individuals with T1D and T2D. Two of the three studies enrolled patients with T1D and one enrolled patients with T2D [28][29][30].
A pilot study, which included 166 patients with T1D, aimed to evaluate different glycemic outcomes collected during two virtual visits during the lockdown period [28]. The study considered different methods of insulin delivery and glucose monitoring for its assessment (CSII + CGM, MDI + CGM, and CSII or MDI + SMBG), showing that TIR increased from baseline to follow-up visits in all patients. There was a non-statistically significant improvement in TBR and GMI compared to baseline and statistically significant improvements in TAR and mean daily glucose (Table 2) [28]. Notably, the CSII and MDI + SMBG group displayed better improvements in TAR from baseline to follow-up visits (40.0 ± 18.0% vs. 28.0 ± 15.0%, respectively; p = 0.03), a reduction in mean daily glucose (176 ± 49 mg/dL vs. 150 ± 25 mg/dL; p = 0.04), an improvement in GMI (7.5 ± 1.1% vs. 6.9 ± 0.6%; p = 0.04), and a change in CV (36.0 ± 8.0% vs. 42.0 ± 9.0%; p = 0.04) compared to the other groups. In a subgroup analysis, the authors found a significant improvement in TIR in those with a GMI > 7.5% as compared to those with a GMI < 7.5% [28].
Another study enrolled 87 patients with uncontrolled T1D (GMI > 9%) and followed patients between March and June 2020 through online visits, conferences, and group sessions [29]. The authors evaluated the number of hospitalizations for DKA and severe hypoglycemia causing loss of consciousness or seizures and, as a secondary endpoint, reduction in GMI. The participants' outcomes were compared to data from patients with HbA1c > 9% in the T1D Exchange. The study found fewer hospitalizations for DKA in the enrolled patients vs. the T1D Exchange (2.2 vs. 6.71%), fewer episodes of severe hypoglycemia in the telemedicine group vs. the T1D Exchange (1.1% vs. 7%), and a change in mean GMI of −0.66% (reduced from 9.91 to 9.25%) during this period [29].
Finally, a study assessed 130 T2D patients with HbA1c > 9% who attended a virtual integrated care clinic over four months during the pandemic. Using HbA1c as a marker of glycemic control, this single-arm observational study showed a decrease in HbA1c from 9.98 ± 1.33% pre-intervention to 8.32 ± 1.31% post-intervention (p < 0.001) [30].
Discussion
Hyperglycemia, hypoglycemia, and increased glucose variability have been associated with increased morbidity, frequent hospitalizations/emergency department (ED) visits, and higher mortality [31][32][33][34][35]. Achieving better glucose control is important, and frequent clinic visits are often required for medication adjustments. In addition, many patients with diabetes have underlying comorbidities that restrict mobility or live in remote/rural areas, posing barriers to seeking in-person care. Telemedicine can serve as an alternative method of providing less time-consuming and more accessible patient care; it is just a matter of embracing the technological options already available [36]. By doing so, it could allow quicker titration of diabetes medications, improve monitoring of glycemic parameters and medication adherence, and improve outcomes [37]. Although telemedicine can be an option, most visits are still performed in person. Telemedicine can utilize different telecommunication options, among them video conference applications, which have expanded following the COVID-19 declaration. With a growing number of patients using smartphones and having Internet access (more than 85% of the US population using smartphones [38] and 93% having Internet access [39], a figure that is constantly rising), utilizing the Internet to transfer data and perform telemedicine should not be considered a futuristic solution for healthcare delivery, but an option to use at the present time.
Overall, telemedicine proved to be a timely solution in the face of the COVID-19 outbreak, allowing for appropriate glycemic control (Figure 2). The studies reported were conducted across different countries worldwide, showcasing a diverse population. They focused on patients with both type 1 and type 2 diabetes with different treatment modalities (insulin pump, multiple daily insulin injections, oral hypoglycemic agents, GLP-1 agonists) and different glucose monitoring methods (CGM, FGM, SMBG) (Tables 1 and 2). Notably, the retrospective studies focusing on individuals with T1D showed improvements across various glycemic control measures regardless of the treatment modality [18][19][20]. These studies showed improvements in TIR, reductions in TAR, improvements in mean glucose values, and reductions in HbA1c and GMI [18][19][20]. In the retrospective studies following patients with T2D, most studies found that the use of telemedicine led to reductions in HbA1c [21,24], with one study showing it to be equally effective as the standard of care model [22]. Similar findings were observed in studies that used mixed populations of patients with T1D and T2D, in which telemedicine led to improvements in HbA1c [25][26][27].
It is crucial to acknowledge that the heterogeneity of these studies, including variations in outcomes, patient population, sample size, methods of glycemic monitoring, and insulin delivery, restricts the clear interpretation of telemedicine's role in diabetes management. While some studies showed improvements in TIR, mean glucose value, and reductions in TAR [18][19][20], it is important to note that their small sample size could contribute to their results. Additionally, while reductions in HbA1c were noted across all telemedicine groups [21][22][23][24][25][26][27][30], some studies only found slight improvements [21], and another found no statistical significance among the groups [24]. Furthermore, studies focused on T1D patients used CGM devices to monitor glycemic control [18][19][20], potentially confounding the role of telemedicine. Therefore, while the telemedicine groups did show improvements in glycemic control, the use of CGM devices could have contributed to their overall improvement. Nonetheless, the prospective study conducted by Parise et al. [28] highlighted that in all patients with T1D, the telemedicine group showed improvements in TIR regardless of the glucose monitoring method. In addition, lockdown could have allowed patients more time to allocate to diabetes care, hence confounding the effect of telemedicine. It should also be noted that the current evidence is based on retrospective observational studies, as the number of prospective studies evaluating the role of telemedicine in patients with diabetes during the COVID-19 era is much smaller. Large randomized clinical trials are needed to evaluate the role of telemedicine in glycemic control in patients with diabetes.
Even with the heterogeneity of these studies, telemedicine showed improvements in diabetes control across different monitoring methods and treatment modalities, proving effective in diabetes management across various studies; however, even those with similar glycemic control outcomes did not exhibit a clear link between telemedicine and specific measures. Furthermore, not all articles focused on the impact of hospitalization or events such as DKA or hypoglycemia. We also did not focus on the financial impact of telemedicine, as we deemed that it deserves a separate analysis of its own. The use of diabetes technology, such as CGM or FGM, has emerged as an important tool for diabetes management. As shown in the studies presented, such technology seems to make diabetes management suitable for telemedicine by allowing a provider to review data remotely. With the development of new integrative information sharing, telemedicine can impact how diabetes is managed in the future. Remote monitoring can lead to improved glycemic measures, and as healthcare becomes more integrative, individuals with diabetes can be closely monitored by their physician. Expanding on these services will further allow those with diabetes to play a more active role in the management of their chronic illness. However, telemedicine faces significant hurdles such as cost, patient education, and the need for technology training. As mentioned earlier, telemedicine was not widely used prior to the COVID-19 pandemic, but quickly became adopted as an instrument for diabetes care during the initial stages [17]. Our article emphasizes the variability in the current literature regarding the use of telemedicine in diabetes management, highlighting that while telemedicine has been shown to be a safe, valid, and adequate option for managing chronic diseases such as diabetes [17], its precise role is yet to be understood. Furthermore, as restrictions are lifted and life returns to normal, this article seeks to highlight the need for randomized clinical trials that assess telemedicine's impact beyond the pandemic's initial phases and how it can be optimized for diabetes management as we move forward from the pandemic.
Conclusions
Across the different studies reported in this review, telemedicine was shown to be an effective tool for the management of diabetes, illustrating its potential to be the new standard of care. Indeed, telemedicine became an invaluable tool during the initial phases of the pandemic and continues to prove crucial in managing chronic diseases. The evolution of technology is set to play a crucial role in future diabetes care. Tools such as continuous glucose monitors, insulin pumps, and smart pens not only have a positive impact on diabetes management but can also allow telemedicine to become standard practice in this group of patients. The heterogeneity and variability in the study results make it apparent that we do not yet fully understand how to best optimize telemedicine for the management of diabetes. Yet, these studies showed that telemedicine can be a promising and safe method of health care delivery in patients with diabetes compared to in-person visits.
Figure 2. Use of telemedicine for remote glucose monitoring. a CGM: Continuous Glucose Monitor; b FGM: Flash Glucose Monitor; c SMBG: Self-Monitoring Blood Glucose; d CSII: Continuous Subcutaneous Insulin Infusion; e MDI: Multiple Daily Injection; f OHA: Oral Hypoglycemic Agent.
Table 1. Retrospective studies which examined telemedicine use in patients with DM. [Table 1 excerpts: improvement in TIR to 55% vs. 58 to 57% (p = 0.20) in those who did not have a telemedicine visit; improvement in TAR (>180 mg/dL) from 48 to 35% (p < 0.01) vs. 35 to 35% (p = 0.83) in those who did not have a telemedicine visit; improvement in GMI from 7.7 to 7.2% (p = 0.03) vs. 7.3 to 7.2% (p = 0.65) in those who did not attend a telemedicine visit; improvement in HbA1c from a baseline of 8.7 ± 1.8% to 6.9 ± 1.1% with telemedicine vs. a reduction from 8.6 ± 2.1% to 7.0 ± 1.0% (p = 0.88) with in-person visits; improvement in FPG from a baseline of 184.1 ± 69 to 120.3 ± 20.8 mg/dL with telemedicine vs. 184.9 ± 73.1 to 118.6 mg/dL (p = 0.761) with in-person visits.]
Towards a High Rejection Desalination Membrane: The Confined Growth of Polyamide Nanofilm Induced by Alkyl-Capped Graphene Oxide
In this paper, we used an octadecylamine functionalized graphene oxide (ODA@GO) to induce the confined growth of a polyamide nanofilm in the organic and aqueous phase during interfacial polymerization (IP). The ODA@GO, fully dispersed in the organic phase, was applied as a physical barrier to confine the amine diffusion and therefore limiting the IP reaction close to the interface. The morphology and crosslinking degree of the PA nanofilm could be controlled by doping different amounts of ODA@GO (therefore adjusting the diffusion resistance). At standard seawater desalination conditions (32,000 ppm NaCl, ~55 bar), the flux of the resultant thin film nanocomposite (TFN) membrane reached 59.6 L m−2 h−1, which was approximately 17% more than the virgin TFC membrane. Meanwhile, the optimal salt rejection at seawater conditions (i.e., 32,000 ppm NaCl) achieved 99.6%. Concurrently, the boron rejection rate was also elevated by 13.3% compared with the TFC membrane without confined growth.
Introduction
Polyamide thin-film-composite (TFC) reverse osmosis (RO) membranes have been widely used in desalination because of their low energy consumption and high separation efficiency [1,2]. The pursuit of high-rejection and high-selectivity RO membranes is one of the development areas that could yield high-quality product water and a more cost-effective process [3,4]. Secondly, the boron removal of the membrane needs to be further improved [5,6]. The selective layers in TFC-RO membranes are fabricated by the interfacial polymerization (IP) process, in which a polyamide (PA) film is formed at the interface of an aqueous amine solution and an organic acyl chloride solution. By doping nanomaterials into the PA matrix during interfacial polymerization (IP), researchers have fabricated various thin-film-nanocomposite (TFN) RO membranes. A group of nanomaterials can be exploited for this purpose. Some examples include: zeolites [7], carbon nanotubes (CNTs) [8], polyhedral oligomeric silsesquioxane (POSS) [9], graphene oxide (GO) [10], metal-organic frameworks (MOFs) [11], silica nanoparticles, etc. [12].
In recent years, the design and preparation of TFN membranes by combining new inorganic or organic nanomaterials with a traditional polyamide layer has become a new research direction in the membrane separation field [13,14]. While most of these efforts are directed towards elevating the performance of the TFN membranes, the effect of the nanomaterials on the growth of the PA nanofilm, especially on the physicochemical aspects of the resultant nanofilm, has been less explored.

Materials

Single layer graphene oxide powder (GO) was purchased from Hangzhou Gaoxi Technology Co., Ltd., Hangzhou, China. Octadecylamine (ODA, 97%), m-phenylene diamine (MPD, 99%), camphorsulfonic acid (CSA, 99%), triethylamine (TEA), 1,3,5-benzenetricarbonyl trichloride (TMC, 98%) and boric acid were purchased from Shanghai Aladdin Reagent Co. Ltd, Shanghai, China and used as received. Polysulfone (PSF) substrate membranes with a MWCO of 35 kDa were obtained from the Huzhou laboratory pilot line, and deionized (DI) water with an electrical conductivity of 1.6-2.3 was taken from the laboratory. Isopar-G was obtained from ExxonMobil Chemical Company, while n-hexane was from Shanghai Lingfeng Chemical Reagent CO., Ltd., Shanghai, China. Dehydrated alcohol (EtOH) was obtained from Anhui Ante Food Co., Ltd., Anhui, China and dimethylformamide (DMF) was purchased from Wuxi Haishuo Biological CO., Ltd., Wuxi, China. Sodium hydroxide (NaOH) and sodium chloride (NaCl) were purchased from Xilong Scientific Co., Ltd., Xilong, China and Guangdong Guanghua Sci-Tech CO., Ltd., Guangdong, China, respectively. All reagents were analytical grade unless otherwise stated.
Preparation of ODA@GO
Functional GO nanosheets were formed by binding octadecylamine (ODA) to oxygen-containing groups on GO, as can be seen in Figure 1. Briefly, 100 mg GO was dispersed in 50 mL DI water by bath ultrasound for 1 h. ODA solution (100 mg in 10 mL EtOH) was added into the GO suspension and stirred well to blend. The mixed solution was poured into a 100 mL hydrothermal reactor and reacted at 90 °C for 24 h in a constant-temperature oven. After the reaction, the resultant composite was rinsed with ethanol several times to remove unreacted ODA, then it was vacuum dried at 50 °C for 24 h [24]. The obtained black powder was stored for further usage and named ODA@GO.
Characterizations of GO and ODA@GO
The morphology of GO and ODA@GO was observed by field emission scanning electron microscopy (FESEM, SU8010, Hitachi) and atomic force microscopy (AFM, Bruker, Dimension Icon). First, a few drops of GO aqueous dispersion and ODA@GO hexane solution were dropped on a silicon wafer to observe their morphologies by SEM. GO aqueous dispersion and ODA@GO hexane solution were also dropped on a mica wafer to test their sizes and thickness by AFM.
The chemical compositions of the membranes were analyzed by Fourier transform infrared (FT-IR, ThermoFisher Nicolet-is50, USA) spectroscopy, X-ray diffraction (XRD, Panalytical-X'Pert Pro, Holland) and X-ray photoelectron spectroscopy (XPS, Kratos AXIS Ultra DLD, UK). GO, ODA and ODA@GO powder were mixed with KBr (mass ratio was 1:200) and compressed into a tablet to test the transmittance at room temperature by FTIR. The crystal structure of GO and ODA@GO nanosheets were detected by XRD with Cu Kα excitation radiation. The component element of GO and ODA@GO was analyzed by XPS using Al Kα (1486.6 eV) as the radiation source.
Preparation of RO Membrane
The TFC RO membranes were fabricated by the IP process, wherein 2.2% (w/v) MPD aqueous phase with CSA and TEA buffer solution (adjusted pH = 10) was reacted with 0.11% (w/v) TMC dissolved in isopar-G. The PSF ultrafiltration porous substrate was soaked in the MPD solution for 2 min, after which the residue was removed and then it was dried with sweeping N2. Subsequently, the TMC solution was impregnated for 1 min to remove the excess organic solution and form a thin layer. Finally, it was heated in an oven at 95 °C for 8 min to form a dense layer of PA that was named the virgin reference group.
The TFN RO membranes were prepared using the same steps above, but a series of ODA@GO nanosheets with different mass concentrations (0.001%, 0.003%, 0.005%, 0.01%, and 0.02% (w/v)) were added to the organic solution and mixed under bath ultrasonication for 1 h before the IP reaction. The sheets act as barriers in the growth process of PA. The prepared TFN membranes were named TFN-1 to TFN-5, in which a series of concentrations (from 0.001% to 0.02% (w/v)) of ODA@GO sheets were doped, respectively.
Characterization of RO Membrane
The fabricated RO membranes were cleaned with DI water and dried in a vacuum oven at room temperature for 24 h before the analyses were conducted. The top surface and cross-section of each membrane were examined by FESEM to observe the cross-sectional morphology. The samples were frozen in liquid nitrogen and then fractured. Before observation, all samples were coated with gold for 60 s. Transmission electron microscopy (TEM, HT7700, Hitachi) was conducted at 100 kV to examine the top surface and cross-section, further observing the morphology and evaluating the apparent and intrinsic thickness of the membranes. Briefly, after being separated from the PSF layer via DMF, the PA layer was overlaid on the top surfaces of copper grids to observe the specific morphology. The cross-sectional samples were embedded in resin for 8 h and then cut into approximately 80 nm-thick sections to place on the copper grids. AFM was used to observe the surface roughness of each membrane over a 5 × 5 µm² area by comparing Ra values from the obtained three-dimensional morphology images.
XPS was used to analyze the elemental content of the PA top surface within 10 nm of the PA layer by utilizing Al Kα (1486.6 eV) as the radiation source.
The hydrophilicity/hydrophobicity of each membrane surface was measured with a contact angle meter (CA, OCA15EC, Germany) using the sessile drop technique with DI water as the reference liquid. A droplet of DI water of approximately 3 µL was deposited on the leveled membrane surface to measure the contact angle of each sample. The mean static contact angle was calculated from six different positions.
A solid-surface zeta potentiometer (Zeta potential, Anton Paar SurPASS 3, Austria) was used to characterize the charge on the membrane surface over the pH range of 3-10. The background electrolyte solution was 1 mmol L−1 KCl. The pH was adjusted with 0.05 mol L−1 HCl and 0.05 mol L−1 NaOH.
Performance of the RO Membrane
The high-pressure cross-flow RO evaluation setup (Figure 2) was used to test the separation performance of the prepared membranes under brackish water and seawater conditions. In our experiments, the flow rate and surface cross-flow velocity were 3 L/min and 0.31 m/s, respectively. Before the experiment, the device was operated for 1 h to stabilize the system pressure. First, the system was operated with pure water to calculate the water permeability coefficient (A value). Then, the system was operated with brackish water (2000 ppm NaCl solution) and with seawater (32,000 ppm NaCl solution) to calculate the solute permeability coefficient (B value) under each condition. The testing pressure was 16 bar for pure and brackish water testing and 55 bar for seawater. The other test conditions were constant (pH = 8;
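For context, the sketch below shows how the water permeability coefficient (A value) and solute permeability coefficient (B value) are commonly derived from measured flux and rejection under the solution-diffusion model; the exact procedure and osmotic-pressure correction used by the authors are not specified here, so the function names, the example fluxes, and the assumed ~27 bar seawater osmotic pressure are illustrative only.

```python
# Hypothetical sketch (solution-diffusion model) of A- and B-value calculations;
# not necessarily the exact procedure used in this work.
def water_permeability_A(flux_lmh: float, applied_bar: float, osmotic_bar: float = 0.0) -> float:
    """A = Jw / (dP - dPi), in L m^-2 h^-1 bar^-1."""
    return flux_lmh / (applied_bar - osmotic_bar)

def salt_permeability_B(flux_lmh: float, rejection: float) -> float:
    """B = Jw * (1 - R) / R, in L m^-2 h^-1 (R as a fraction, e.g. 0.996);
    neglects concentration polarization."""
    return flux_lmh * (1.0 - rejection) / rejection

# Pure-water test at 16 bar (no osmotic pressure); the flux value is made up:
A = water_permeability_A(flux_lmh=48.0, applied_bar=16.0)
# Seawater test (32,000 ppm NaCl, ~55 bar), using the reported flux and rejection
# and an assumed ~27 bar feed osmotic pressure for context:
B = salt_permeability_B(flux_lmh=59.6, rejection=0.996)
print(A, B)
```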
Characterizations of GO and ODA@GO
The physicochemical properties of pristine GO and ODA@GO were characterized by FESEM, AFM, FTIR, XRD, and XPS (see Figures 3 and 4). Digital photos of GO dispersions in water and isopar-G and ODA@GO in both solvents are listed from left to right (Figure 4a). The size of the ODA@GO nanosheets was mainly around 1-2 µm (Figure 3a,b) [25,26], which was consistent with the measurements from the AFM images (Figure 3c,d). The sheet size was calculated by line analysis using Nanoscope software. Meanwhile, the thickness of a single-layered ODA@GO increased to ~2.7 nm due to alkylation, as compared to the virgin GO nanosheets (~1.3 nm). As shown in Figure 4a, unmodified GO was super-hydrophilic, dispersed in water instantly, and was immiscible with isopar-G, while the modified hydrophobic ODA@GO dispersed easily in isopar-G and was not dispersible in the aqueous phase. Such oleophilic and hydrophobic properties render the ODA@GO nanosheets an ideal medium to inhibit the diffusion of MPD into the organic phase and hence confine the growth of the PA nanofilm, which is largely dependent on MPD diffusion [18][19][20].
The chemical compositions of GO and ODA@GO were analyzed using FTIR and XPS. The FTIR absorption spectra of GO, ODA, and ODA@GO are compared in Figure 4b. In the GO spectrum, bands were observed at 3397, 1719, 1637, 1100, and 683 cm−1, associated with the stretching vibrations of -OH, C=O, C=C, C-O, and C-H, respectively. In the ODA@GO spectrum, the peaks at 2920, 2850, and 721 cm−1 are attributed to C-H stretching of the ODA molecules, while the characteristic C-N and N-H peaks at 1467 and 1577 cm−1 indicate that the epoxy groups in the GO layer underwent a ring-opening reaction through nucleophilic substitution by protonated amino groups [27,28]. The XRD results in Figure 4c show a diffraction peak at 2θ = 9° for the GO nanosheets, whereas the peak of ODA@GO shifted to 2θ = 3.3°, which can be explained by the increase in nanosheet spacing caused by the intercalated alkyl chains. Meanwhile, a new weak ODA@GO peak appeared at 21°, which might be related to the partial reduction of GO by ODA molecules [26,29]. Figure 4d,e show the XPS analysis results of GO and ODA@GO, respectively. Comparing the C1s spectra of GO and ODA@GO, the C-O peak fraction decreased significantly and a new C-N fraction appeared, indicating that the epoxy groups in the GO underwent a ring-opening reaction [30], in good agreement with the FTIR results in Figure 4b. These results confirm, from both physical and chemical aspects, the successful modification of GO and the increased single-layer thickness of ODA@GO due to the grafting of ODA molecules onto the GO nanosheets.
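As a quick cross-check of the interpretation above, the XRD peak positions can be converted into interlayer spacings with Bragg's law. The short sketch below assumes a Cu Kα source, which is not stated in this excerpt, so the numbers are indicative only.

```python
# Minimal sketch (assumed Cu K-alpha wavelength): converting the quoted XRD
# peak positions into interlayer d-spacings via Bragg's law, d = lambda / (2 sin(theta)).
import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha (assumption)

def d_spacing(two_theta_deg):
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * math.sin(theta))   # angstrom

for label, two_theta in [("GO, 2-theta = 9.0 deg", 9.0),
                         ("ODA@GO, 2-theta = 3.3 deg", 3.3)]:
    print(f"{label}: d ~ {d_spacing(two_theta) / 10:.2f} nm")
# GO:     d ~ 0.98 nm
# ODA@GO: d ~ 2.67 nm  (consistent with intercalation of alkyl chains)
```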
Morphology
The surface morphologies of the TFC and TFN membranes, analyzed by SEM and TEM, are shown in Figure 5a-c and Figure 5g-i, respectively. The surface of the TFC membrane was characterized predominantly by a nodular structure. Leaf-like structures, which are collapsed large-sized nodules [15], gradually appeared as the ODA@GO loading increased. The cross-sectional SEM images (Figure 5d-f) show that the PA nodular structure contained interconnected hollow voids, most of them smaller than 50 nm [19], in good agreement with the cross-sectional TEM images in Figure 5j-l. With more ODA@GO nanosheets doped into the organic solution, the apparent thickness of the PA layer (namely, the overall thickness of the PA layer) generally decreased from ~114.9 nm for the virgin membrane to ~69.2 nm for the TFN-5 membrane (see Figure 6 and Table 1). In contrast, the intrinsic thickness (namely, the thickness of the polyamide nanofilm forming the walls of the voids) increased from ~15.93 nm (virgin membrane) to ~21.19 nm (TFN-5 membrane). Interestingly, the pure water permeability coefficient (A value) of the TFN membranes increased gradually, which can be explained by the enhanced leaf-like structures providing a larger water-transport surface area. Under seawater testing conditions, the B values of the TFN membranes decreased compared with the virgin TFC membrane, consistent with the increase in the intrinsic thickness and cross-linking degree of the membrane (see Figure 6 and Table 2). At the same time, the A/B value, which represents the selectivity of the solvent (water) over the solute (NaCl), increased from 23.9 bar−1 (TFC membrane) to 26.63 bar−1 (TFN-4 membrane), indicating improved water/salt selectivity. AFM was used to further explore the surface roughness of the membranes (Table 1). The Ra value of the TFC membrane was ~47.3 nm. After adding ODA@GO to the TMC solution, the TFN membranes became relatively smoother, with the roughness decreasing to ~33.5 nm at 0.02% (w/v) loading. This is because the nodule-characterized surface of the TFC membrane generally transforms into the leaf-like-structure-characterized surface of the TFN membranes. These leaf-like structures are essentially collapsed large nanobubbles in the dry state; they overlap with each other and conceal the roughness beneath their flat structures, thereby reducing the surface roughness of the membrane [15,17]. We address this phenomenon more systematically in Section 3.3, together with other experimental details, after addressing the chemical aspects.
Chemical Analysis
The elemental composition of the membrane surface was determined by XPS (Figure 7). Compared to the TFC membrane, the main C-C/C=C peak area observed on the TFN membrane at 284.2 eV increased, which should be attributed to the alkyl chains of ODA@GO. With increasing doping amount, the two other main peak areas at 285.6 eV (C-O) and 284.8 eV (C-N) increased progressively, which should be attributed to the ring-opening of the epoxy groups in GO and the addition of the amine groups in ODA [26]. At the same time, as the elemental compositions in Table 2 show, the greater the ODA@GO addition, the higher the C content and the lower the O/C ratio, indicating a more hydrophobic membrane top surface [31]. This analysis was further supported by the higher water contact angle of the ODA@GO-incorporated membranes: the contact angle increased from 84° for the virgin membrane up to 134° for the TFN-5 membrane (Figure 8a). These analyses collectively suggest that ODA@GO nanosheets are partially incorporated in the top surface of the TFN membranes.

Theoretically, a fully cross-linked polyamide layer should exhibit an O/N ratio of ~1, while a purely linearly-linked layer has an O/N ratio of 2 [32]. Here, the O/N ratio was 1.19 for the virgin membrane, which is a typical surface O/N ratio compared with a series of commercial RO membranes [17,33]. Interestingly, the O/N ratio instead followed a decreasing trend as more ODA@GO was added (for TFN-5, the O/N ratio approached ~1). Although the doped ODA@GO could be observed in the polyamide matrix (Figure 5h,i), the decrease of the O/N ratio on the membrane surface should not be attributed to the incorporation of ODA@GO, because the O/N ratio of ODA@GO itself was ~3.7 (Table 2), a value significantly higher than that of polyamide. Rather, the decreasing O/N ratio reflects a higher cross-linking degree of the bulk PA nanofilm. A similar conclusion can be drawn from the zeta potential analysis. As shown in Figure 8b, the TFC membrane was negatively charged over the pH range of about 4.2-9.7, due to hydrolysis of the acyl groups into carboxyl groups on the membrane surface. After doping with the ODA@GO nanosheets, the negative charge of the TFN RO membrane surface gradually lessened, likely because of a smaller amount of free carboxyl groups, therefore implying a higher cross-linking degree [33].
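The O/N bounds quoted above can be turned into a rough cross-linking estimate with the common two-component polyamide model (fully cross-linked repeat unit with O/N = 1, linear unit with pendant carboxyl and O/N = 2). The sketch below illustrates that calculation; it is a standard literature approach and not necessarily how the authors quantify the cross-linking degree.

```python
# Minimal sketch (assumed two-component model): estimating the cross-linked
# fraction m of a polyamide film from its surface O/N ratio.

def crosslinked_fraction(o_n_ratio):
    """
    With m = cross-linked fraction and (1 - m) = linear fraction,
    O/N = (3m + 4(1 - m)) / (3m + 2(1 - m)), which rearranges to
    m = (4 - 2*(O/N)) / ((O/N) + 1).
    """
    return (4 - 2 * o_n_ratio) / (o_n_ratio + 1)

for label, ratio in [("virgin TFC (O/N = 1.19)", 1.19),
                     ("TFN-5 (O/N ~ 1.0)", 1.00)]:
    print(f"{label}: ~{crosslinked_fraction(ratio) * 100:.0f}% cross-linked")
# virgin TFC (O/N = 1.19): ~74% cross-linked
# TFN-5 (O/N ~ 1.0): ~100% cross-linked
```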
The Effect of Confined Growth Mechanism on the Resultant PA Layer
As mentioned above, the ODA@GO-doped TFN membranes developed surfaces characterized by leaf-like nanostructures, which are distinctive from the virgin TFC membrane. Especially at high ODA@GO concentration, ODA@GO nanosheets can be readily observed at locations where the leaf-like structures appear (Figure 5h,i). This phenomenon can be explained by the confined growth of polyamide at the interface due to the limiting effect of the ODA@GO nanosheets on MPD diffusion. Specifically, the limiting effect can be interpreted in two ways. First, the limited diffusion of MPD molecules into the organic phase results in a higher MPD concentration at the interface [23] and therefore a more intense IP reaction, producing a greater occurrence of the leaf-like structures [34]. Second, the presence of the ODA@GO nanosheets limits the growth of the nanobubbles in the z-direction (the direction perpendicular to the membrane surface), so the nanobubbles are more inclined to develop laterally and finally into leaf-like structures. Collectively, the decreasing trend of the apparent thickness and the increasing trends of the intrinsic thickness and the cross-linking degree of the PA layer agree well with the confined growth mechanism, as the apparent thickness is mainly governed by the vertical growth of the nanoscale structures, while the intrinsic thickness and cross-linking degree are mainly governed by the enhanced intensity of the IP reaction. The above-mentioned confined growth mechanism of the PA layer is illustrated in Figure 9.
Performance Evaluation of the As-Developed Membrane
As can be seen from Figure 10, the water flux generally increased initially with the incorporation amount. For example, compared with the virgin membrane, the optimal brackish water flux increased by 11% to 47.9 L m−2 h−1 at a doping amount of 0.01% (w/v), while the salt rejection remained at approximately the same level as the virgin membrane (~99.7%). The enhancement of the water flux can be attributed to the horizontal growth of the leaf-like structures, which enlarged the surface area of the polyamide nanofilm [17]. The flux increment was also observed under seawater desalination conditions. However, in that case the optimal doping amount was 0.005% (w/v), at which an optimum flux of 59.6 L m−2 h−1 was achieved, a 17% increase over the virgin membrane under the same testing conditions. Simultaneously, the TFN-3 membrane achieved 99.6% salt rejection, a significant elevation from the ~99.1% of the virgin membrane. It even reached the rejection of some commercial seawater desalination membranes, while its flux was much higher than that of several commercial membranes (Table 3). It is rather interesting to note that the ODA@GO TFN membranes had higher salt rejection than the virgin membrane under seawater operation conditions. This phenomenon can be explained by the solution-diffusion mechanism [35,36]. For an ideally dense RO membrane, the solute flux depends mainly on the concentration difference between the feed and the permeate, whereas the water flux scales with the applied hydraulic pressure. Operating at higher water flux (i.e., higher hydraulic pressure) therefore dilutes the solute flux, resulting in a lower salt concentration in the permeate; the net result is higher salt rejection at higher operating pressure [11,12,17,21]. On the other hand, for the looser TFC membrane, operating at high pressure facilitates both the solute flux (convective diffusion) and the water flux. Hence, the net result was that its brackish water salt rejection (~99.7%) was significantly higher than its seawater salt rejection (~99.1%).
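A minimal numerical sketch of this solution-diffusion argument is given below; the A, B and osmotic-pressure values are assumptions chosen only to show why the same membrane exhibits higher rejection at a higher net driving pressure (concentration polarization is ignored).

```python
# Minimal sketch (illustrative parameters): observed rejection under the
# solution-diffusion model, R = Jw / (Jw + B) with Jw = A * (dP - dPi).

def rejection(A, B, dP, dPi):
    jw = A * (dP - dPi)          # water flux, L m^-2 h^-1
    return jw / (jw + B)         # observed salt rejection (no polarization)

A, B = 3.0, 0.15                  # L m^-2 h^-1 bar^-1 and L m^-2 h^-1 (assumed)
print(f"brackish, 16 bar, pi ~ 1.7 bar : R = {rejection(A, B, 16, 1.7):.4f}")
print(f"seawater, 55 bar, pi ~ 27 bar  : R = {rejection(A, B, 55, 27.0):.4f}")
# The larger net driving pressure of the seawater test gives the larger Jw,
# and hence the higher rejection, even though A and B are unchanged.
```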
Meanwhile, the boron rejection was also enhanced by the doping of ODA@GO (Figure 11). The initial boron removal rate of the virgin TFC membrane was 59.2%; this value was improved by 13.3% at the optimum ODA@GO doping amount. Accompanying this, the boron permeability coefficient Bs decreased significantly with the doping of ODA@GO. This decrease in the boron diffusion coefficient and the elevation of boron rejection can be explained by the increased intrinsic thickness and cross-linking degree of the PA nanofilm, which increased the separation efficiency [17].
Conclusions
In conclusion, this study has shown that growing the PA nanofilm under the confinement effect of 2D ODA@GO nanosheets during interfacial polymerization can effectively shape its nanoscale structures and customize the properties of the polyamide nanofilm. The ODA@GO nanosheets dispersed in the organic phase served as an effective barrier limiting the diffusion of amine molecules into the organic phase. As a result, the PA nanofilm developed a significant number of leaf-like structures, which promoted its horizontal growth. The net result was a decreased apparent thickness of the PA layer but an enhanced overall effective surface area, making the PA layer more efficient for water permeation. At the same time, both the intrinsic thickness and the cross-linking degree of the PA nanofilm were enhanced due to the elevated amine concentration at the interface, rendering the PA nanofilm a better barrier for salts and neutral molecules such as boric acid. We have therefore demonstrated that proper doping of 2D nanosheets in the organic phase during the IP reaction has the potential to produce more effective PA nanofilms. This finding paves the way for further studies to customize higher-selectivity PA-based polymeric TFN membranes for seawater desalination.
A Study of Michigan Farm Bureau's Publication: Rural Living
James Bernstein, Michael V. Doyle, Daniel T. Davis

James Bernstein is adjunct faculty, School of Journalism; Michael V. Doyle is assistant professor, Department of Agricultural and Extension Education; and Daniel T. Davis is research analyst, College of Agriculture. All are at Michigan State University. The article is an edited portion of a larger study of Farm Bureau's entire communications program.

The purpose of this study was to evaluate the effectiveness of various communication vehicles of the Michigan Farm Bureau's (MFB) information and public relations division. Specifically, this paper addresses Farm Bureau's publication Rural Living. The method for performing this evaluation was a survey of the organization's membership. The study was designed to allow management of the division and the overall organization to gauge the effectiveness of the division's output through the eyes of the consumers of the materials.
Because MFB relies heavily on its membership, this latter point was considered vital. That is, management believed that the best way to determine whether improvement was necessary in the organization's communications was to go to those at whom the communications are directed. With this in mind, MFB commissioned the researchers to conduct a comprehensive study of member attitudes, opinions and behaviors toward a variety of information division functions. Utmost among these functions has been the monthly publication Rural Living, so the focus of the study was membership reaction toward this magazine.
Specifically, the following questions were addressed in the study concerning Rural Living: 1. Does the magazine carry the proper mix of organizational information and general information about agriculture? Increasing demands from organizational management to make the publication oriented more toward MFB activities may have conflicted with readers' desires for more feature-oriented material on agricultural issues and less hard-sell material. The study was designed to find out whether the conflict actually existed.
2. How does Rural Living magazine compare with the Rural Leader newsletter published by MFB's information division? In part, this question relates to the first issue in that the Rural Leader newsletter is designed to provide the hard-sell information mentioned earlier. If, however, Rural Living were duplicating that function, one would expect respondents to have a preference for either publication.
3. To what extent do MFB members read the regular features of Rural Living magazine, such as the president's message and the classified advertisements? 4. What is the nature of Rural Living readership in terms of when it is read, how much of it is read, reasons that members do not read it more frequently, and who is reading it?
In addition to the specific research questions related to Rural Living, the study addressed many more general issues on organizational membership and its acquisition of agricultural information.
5. How important to MFB members are various issues related to where members obtain information about those issues? To assess these questions, respondents were asked to rate the importance of several issues. They were also asked to indicate their current primary sources for agricultural information and what their preferences would be for the information, if they had the choice.
And finally, the study explored membership acquisition of agricultural information from broadcast media and from county Farm Bureau newsletters. For the latter item, respondent members indicated their attitudes and opinions toward the newsletters in a variety of ways, such as how much of the newsletter was read, reasons for not reading the newsletter, and satisfaction with content of the newsletter.
Methods of Study
The sampling design for the MFB communication study was a stratified random sample design. That is, respondents were chosen at random from two separate groups, associate members and regular members. Theoretically, simple random selection would have resulted in equal representation of the groups, because each group represents approximately half the organization's membership. That is, drawing a sample from the entire MFB membership population would have yielded an equal number of regular and associate members. To ensure that this was the case and to increase precision, the membership was divided into two subgroups, regular and associate, and half the potential respondents were chosen from each.
The questionnaire administered to respondents was basically the same for both groups of members, although the introductions differed slightly and additional questions were included for regular members. Respondents answered questions dealing with the importance of various agricultural-related issues, their preferences for getting information about those issues, and their opinions about Farm Bureau publications.
Trained interviewers administered the questionnaire by telephone from the Farm Bureau Center in Lansing, MI the week of Sept. 9, 1985. A total of 206 interviews were completed and 92 potential respondents refused to be interviewed, resulting in a completion rate of 69 percent.
The data were analyzed with a Control Data Corporation Cyber 170, Model 750 computer at Michigan State University. The Statistical Package for the Social Sciences was used.
Summary and Conclusions
In general, the data clearly show that public policy issues such as the cost of health care insurance, the federal budget deficit, and the farm financial crisis are perceived as having much greater salience to the respondents than organizational issues like the activities of other MFB members, MFB membership drive activities, and political action committees. The inference drawn from this analysis is that the more salient items should have greater reader interest than those perceived as being less salient.
While the data might seem to suggest that only Rural Living magazine qualifies as a significant source of information, it is important to remember that not all the respondents had access to the other MFB communications sources. Without more detailed analysis of the readership of Rural Leader or the County Newsletter, it is difficult to assess their communication effectiveness. Additionally, more in-depth analysis should be made of the "other sources" cited frequently by the respondents. From the data presented here, however, it does appear that Rural Living is thought of by all its readers as an important source of information, both practically and ideally, for all of the content areas considered.
Development of Examination Behaviour Inventory: An Integrity Measure for Prevention of Cheating in School Exams
Cheating in examinations is an educational menace that has threatened the very essence of schooling in most countries of the world. Therefore, it has become imperative for researchers in education to seek alternative strategies for curbing it in order to restore the dignity of school examinations as an instrument for assessing actual educational attainment by students. This research study addresses this challenge by developing an inventory that could be used to measure the examination behaviour of prospective candidates for school certificate examinations. The rationale for developing the instrument is based on providing a tool for identifying students who have positive tendencies towards engaging in cheating behaviour during school examinations. The initial sample used for the validation of the Examination Behaviour Inventory was 2000 candidates enrolled for the 2013 Senior School Certificate Examinations in Nigeria while the standardization of the instrument involved 4000 candidates. Cronbach Alpha index of the instrument is .843 and Factor Analysis delineated 12 principal component factors. Other psychometric properties of the inventory and the detailed processes involved in the construction, validation and standardization of this valuable educational instrument is reported. The instrument is recommended to School Counsellors, Psychologists, Teachers, Administrators and other stakeholders in education who are interested in the identification of prospective candidates who have a tendency to engage in cheating during examinations so as to apply proactive reformation on them.
Introduction
In all countries of the world education has been considered to be the most veritable instrument for national development. It is also a tool for training the citizenry in order to live a better and rewarding life for themselves and for the society in general. In a nutshell, it has been described as the best legacy that any Nation or individual can leave behind for generations yet to come (Issa, 2011). Over the decades, education through the curriculum has been the means through which the school transmits the cultural heritage of societies. In all nations, the purposes of education are clearly stated in their philosophical postulations. These statements are concerned with what education will do for the improvement of the citizens of the society at large. They are aspects of the worthwhile cultural heritage that are supposed to be transmitted to children, youths and adults through the curriculum of schools. That is why Maduewesi, Aboho and Okwuedei (2010) stated that the curriculum is made up of all these essential and desirable aspects of the society's culture that the school should provide for the learners to enable them become educated to lead a better and rewarding life. By implication, the national philosophy of education encapsulates the educational objectives of the society, and they portray what the learners are supposed to learn as specific learning tasks.
In Nigeria the formal, non-formal and informal systems of education has multilateral aims with the end objectives being to produce an individual who is honest, respectable, skilled, co-operative and conforms to the social order of the day. For example, Fafunwa's account (as cited in Maduewesi, Aboho & Okwuedei, 2010) identified seven aspects of the informal education which are also applicable to the formal system of education, despite its multilateral nature, to include the development of the child's latent physical skills; character; respect for elders and those in position of authority; intellectual skills; vocational skills and healthy attitude towards honest labour; a sense of belonging and to participate actively in family and community affairs; and, appreciating and promotion of the cultural heritage of the community at large. If the above aims and objectives of informal and non-formal education are juxtaposed with those of the formal system of education in Nigeria as clearly enunciated in the National Policy on Education (Federal Republic of Nigeria, 2004) then the common goals of both systems of education are clearly delineated as the inculcation of national consciousness and national unity; the inculcation of the right type of values and attitude for the survival of the individual and the Nigerian Society; the training of the mind in the understanding of the world around; acquisition of appropriate skills; and the development of mental, physical and social abilities and competencies as equipment for the individual to live in and contribute to the development of the society.
In the light of all the above, Nigerian education was geared towards self-realization, better human relationship, self and national economic efficiency, good citizenship, national consciousness, national unity, social and political progress as well as national reconstruction. In pursuance of these objectives, therefore, our educational institutions from the pre-primary to university levels have designed their curricula programmes in such a way that functional individuals who will be capable of contributing their quota to national development are produced. It is, however, sad to say that these lofty goals of education are far from being realized in Nigeria due to the cankerworm called examination malpractice especially in the secondary schools (Obimba, 2002).
Examination is the process of finding out how much of the objectives of specific learning tasks a learner has assimilated (Bello, Kolajo & Uduh, 2010). Fagbamuye's report (as cited by Bello et al) described examination as a tool for measuring and judging the standard of education in any country. They further asserted that examinations are used for selection, certification and reporting of progress to parents and policy makers. To them, results of public examinations are also used to monitor the performance of the educational system and as an accountability measure in schools. It is also regarded as a tool for measuring and judging the standard of education in any country. As an integral part of the curriculum process, it is thus an important tool in the teaching-learning process but the incidence of examination malpractices has made nonsense of examinations in the school system, especially in the secondary schools which is an important foundation of tertiary education.
In Nigerian secondary schools, examinations are internally and externally conducted. Internal examinations are school based while external examinations are referred to as public examinations. The internal examinations are developed and administered by schools using teacher-made tests whereas the public examinations are developed and administered by public examination bodies in Nigeria. These bodies include the West African Examinations Council (WAEC), the National Examination Council (NECO), the National Business and Technical Examinations Board (NABTEB), the National Teachers' Institute (NTI) and the Joint Admissions and Matriculation Board (JAMB). In all examinations, especially the public ones conducted by the bodies mentioned above, there are rules and regulations laid down to guide their conduct. Failure to adhere to these rules and regulations is called examination malpractice (Ossai, 2004;Cizek, 2001;Bello, Kolajo & Uduh, 2010;Olanipekun, 2003). Examination malpractice has also been described as any act of dishonesty that occurs before, during and after an examination or assessment which is intended to obtain or offer an unfair advantage to a candidate or candidates in that examination or assessment (University of Exeter, 2002;Illoakasy, 1999;Afemikhe, 2010). Realizing the danger which examination malpractices pose to the educational system in Nigeria, the Federal Government enacted the Examination Malpractices Decree No. 33 of 1999 to curtail their occurrence. The act provides penalties ranging from imprisonment, fine or both for persons and bodies found guilty of involvement in, aiding, abetting, negligence or dereliction in the conduct of examinations. JAMB (2009) states that punishable offences under this Act include the following: cheating at examinations; stealing of question papers; impersonation; disorderliness at examinations; disturbances at examination; misconduct at examination; obstruction of supervisors; forgery of result slip; breach of duty; conspiracy and aiding. According to JAMB the courts will invoke appropriate penalties on persons and bodies found guilty of any of the offences under the Act. In this regard, coordinators, supervisors, invigilators and all individuals involved in examinations have been warned and advised to conduct themselves in a proper and responsible manner to avoid any breach of the law. In spite of these punitive measures put in place by the Examination Malpractice Act No. 33 of 1999 to ensure credibility in examinations, the conduct of examinations in Nigeria has continued to be bedeviled by examination malpractices which have been recognized to be the most intractable malaise that erodes the credibility of our examination system (Afemikhe, 2010;Ossai, 2010). Other offences that manifest themselves in form of examination malpractices according to Bello, Kolajo and Uduh (2010) include: leakage, collusion, bringing prepared answer scripts to the examination hall, swapping of candidates' scripts, sending answers to candidates using electronic gadgets, impersonation and unreliable continuous assessment scores from school authorities.
Paul and Bodunde's report (as cited in Esomonu, 2010), stated that the first episode of examination malpractice occurred in 1965 during the West African School Certificate Examinations; the very first time WAEC took complete charge of setting and conducting of the examination hitherto set in England by Cambridge Overseas Examination Board. According to Esomonu, this malpractice which probably began with leakage masterminded by some few dishonest workers has undergone some metamorphosis over the decades; growing in methods and techniques as well as number and caliber of practitioners. Most recently, some parents pay money to obtain question papers for their children and wards and sponsor these young ones to some examination centers known as "miracle centers". Some entice teachers to pass their dependents who have not done well enough in examinations. Some communities even go as far as launching examination comfort fund to enable their children excel in public examinations. In some cases, some teachers at secondary schools are involved by way of encouraging students to contribute money (cooperation fee) in order to secure the needed assistance during such examinations probably because they, the teachers, are aware of the inadequacy in the preparation of their students before examinations due to lack of adequate facilities and other factors (Odia & Omofonwan, 2007).
Rationale for the Inventory
In recent times, examination malpractices have manifested in various forms, shapes and sizes with different designations such as 'microchips', 'macrochips', 'down-loads', 'laptops', 'giraffe' and quite recently the use of mercenaries. Microchips and macrochips have to do with small pieces and more significant sizes of papers, respectively, with prepared answers smuggled into the examination hall. 'Down-loads' simply refers to the act of bringing in of the whole textbook from which the candidate intends to copy. 'Laptops', which is most common with ladies, is the technique whereby the individual candidate's lap is used as the writing surface from where relevant information to an on-going examinations can be copied in the examination hall as the need arises. Giraffe is a style whereby candidates use neck stretching to look at what another candidate has written with the intent to copy. Another sophisticated method of examination malpractice is that of the examination mercenary syndrome. This refers to the practice whereby candidates employ and pay external persons to sit in and write examinations on their behalf. Afemikhe (2010) cited the trend in examination malpractices in the June/July Senior School Certificate Examination (SSCE) conducted by the National Examination Council (NECO) in Nigeria for the period 2005-2009 in Table 1. Afemikhe, 2010). Table 1, the number of candidates involved in examination malpractices is very alarming. Perhaps this led some writers like Bello, Kolajo and Uduh (2010) to conclude that examination malpractice has assumed new and wide dimension in WAEC SSCE like in other examinations in Nigeria. This unfortunate trend in examination malpractice has contributed in great measure to the diminishing standard of education in Nigeria. It has also helped to cast aspersion on candidates' certificates, which many often claim, have not always been a true reflection of the academic achievement of their holders in Nigeria. Due to this negative educational phenomenon, it is not surprising therefore, that many candidates who secured admission into higher institutions with such results have been much of a disappointment. They simply could not leave up to their billing in all ramifications.
As shown in
There is no doubt, therefore, that examination malpractices in all ramifications are to be checkmated by stakeholders in the education sector. This is for the simple fact that to compromise academic standards is one sure way to mortgage the future of a people. A country's today, and whatever it stands for, represents the foundation of her tomorrow. Prevalence of examination malpractice indicates the weak foundation upon which the country's tomorrow is being built.
This research represents an effort to proffer a preventive approach in the fight against examination malpractices. Since the punitive approach that has been in vogue for about a century in Nigeria has failed to curb the 'monstrous' examination malpractices, there is need, therefore, to look for alternative strategies in this war. Rather than wait for the candidates or students to engage in examination malpractices and thereafter they are punished for the wrongful act, it will be plausible to devise ways of preventing them from engaging in the act in the first instance. The aphorism, "prevention is better than cure" applies in this case. In other words, is it possible to develop an instrument that could be used to determine a student's likely behavioral disposition towards examination malpractices before he or she sits for examinations? This is the challenge of the present study.
The major purpose of the study is to construct, validate and standardize an Examination Behaviour Inventory (EBI) for secondary school students. In order to achieve this purpose, the study addressed the following specific objectives: i. describe the processes involved in the construction, validation and standardization of an Examination Behaviour Inventory for secondary school students in Nigeria; ii. state the psychometric properties of a validated and standardized Examination Behaviour Inventory; iii. explain how a standardized Examination Behaviour Inventory could be used to measure students' likely behaviour in public examinations; and iv. discuss the usefulness of the Examination Behaviour Inventory.
The attainment of the above-stated purpose and objectives of this study will give impetus to the war against examination malpractice in Nigerian schools, especially secondary schools, and in other countries of the world where the instrument may be adapted. Therefore, this study is very significant because it provides a new strategy for tackling the menace of examination malpractice. A proactive strategy was established by this study, which is quite distinct from the punitive measures adopted by school authorities, examination bodies and the government in the fight against examination malpractices. Thus, students, teachers, counsellors, parents, school administrators, examination bodies, the government and Nigerian society in general will benefit from the use of the EBI.
Theoretical Framework
The construction and validation of an inventory to measure likely examination behaviour of students is based on the Theory of Planned Behaviour (TPB) as espoused by Ajzen (2006). Three basic considerations are involved in TPB, namely, attitude towards a particular behaviour (such as cheating in examinations); subjective evaluation of how significant others in the life of an individual view the behaviour (such as parents, teachers, friends, etc.); and perceived ease or difficulty of executing the behaviour or action (examination malpractices). These three considerations determine whether an "intention" to engage in the behaviour will be formed and ultimately lead to the exhibition of the "behaviour". These three basic components of TPB are referred to as "Attitude Toward the Behaviour" (ATB), "Subjective Norm" (SN) and "Perceived Behavioural Control" (PBC), respectively, according to Ajzen. The phenomenon of examination malpractices is a behavioural variable which could be premeditated or spontaneously carried out when an opportunity presents itself. In either case, however, ATB, SN and PBC must be positively disposed towards the act. Hence, there are direct connections between ATB, SN, PBC and the Behaviour (examination malpractices). In constructing the Examination Behaviour Inventory (EBI) these fundamental elements of TPB by Ajzen were considered. The primary elements described by Ajzen for defining the behaviour of interest in TPB are Target, Action, Context and Time (TACT). The "Target" behaviour is cheating in examinations (examination malpractices). The "Actions" are those events observed and reported in the literature as constituting examination malpractices, such as impersonation, copying from prepared material, spying, etc. The "Context" is the Senior School Certificate Examination (SSCE) in Nigeria. The "Time" entails prior to, during and after the actual examinations. Thus, examination behaviour covers actions taken before, during and after the writing of the examinations.
Construction of the Instrument
A Test Blueprint covering the components of examination behaviour, as it relates to positive and negative actions before, during and after the public examination, was constructed as presented in Table 2. Rules and regulations guiding the conduct of WAEC, NECO, NABTEB and JAMB examinations were consulted and adapted into the Test Blueprint, along with other relevant materials on examination behaviour. A total of 50 items were initially constructed, but these were pruned down to 40 after initial item analysis from a pilot study and the expert judgment of senior colleagues in educational measurement and evaluation. As shown in Table 2, the forty-item EBI was geared towards measuring the specified aspects of examination behaviour, namely, Study Habits (items 1, 2, 3, 8); Examination Anxiety (items 4, 5, 6, 23, 29, 34); Collusion in the Examination Hall (items 13, 14, 16, 18, 26, 27, 32); Examination Ethics (items 10, 11, 12, 15, 17, 19, 20, 21, 28, 30, 31, 33); and Attitude towards Examination Malpractices (items 7, 9, 22, 24, 25, 40, 36, 37, 38, 39). The procedure adopted in the construction of the EBI is similar to that used by the Institute for Personality and Ability Testing (2003); Ezeh and Odo (1997); Bakare (1977); Buchanan, Goldberg and Johnson (1999); and Spielberger (1980).
Research Design and Sample
The descriptive survey design was used to generate two sets of data for analysis in order to validate and standardize the instrument. The first stage involved a sample size of 2000 out of a total number of 48704 candidates enrolled for the May/June 2013 WAEC SSCE in Delta State of Nigeria. Cronbach Alpha and Factor Analysis were run on the data generated using the Statistical Package for the Social Sciences (SPSS) version 17 to determine the reliability and validity of the EBI and the results are reported in the next section of this article. The second stage required a larger sample of 4000 candidates enrolled for the same certificate examination drawn from the six geo-political zones covering the entire country (Nigeria). The validated EBI was administered to the 4000 participants out of the 1 689 188 candidates that enrolled for the examination. Data collected from the second sample were analysed to establish the national norms of the EBI as reported in the results section. In the two stages, the Multistage Stratified Random Sampling technique was used to ensure representation of the various subsets of the population with regards to gender, age, type of school, Senatorial Districts, Local Government Areas, States and Geo-political zones of the country. This procedure was found very appropriate since it is consistent with the reported empirical descriptive survey model recommended in research literature for the study of human attributes as they occur in the real world (Akinboye & Akinboye, 1998;Cherry, 2014).
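As an illustration of the stratified random sampling described above, the sketch below draws a proportional random sample from each stratum of a candidate frame. The frame, strata and sample sizes are hypothetical and are not the study's actual sampling frame.

```python
# Minimal sketch (hypothetical frame): stratified random sampling with
# proportional allocation, so every subgroup is represented in the sample.
import random

def stratified_sample(candidates, strata_key, total_n, seed=42):
    """candidates: list of dicts; strata_key: function mapping a candidate to its stratum."""
    random.seed(seed)
    strata = {}
    for c in candidates:
        strata.setdefault(strata_key(c), []).append(c)
    sample = []
    for members in strata.values():
        n = round(total_n * len(members) / len(candidates))   # proportional allocation
        sample.extend(random.sample(members, min(n, len(members))))
    return sample

# Hypothetical frame: 10,000 candidates spread over six zones
frame = [{"id": i, "zone": f"zone_{i % 6}"} for i in range(10000)]
picked = stratified_sample(frame, lambda c: c["zone"], total_n=600)
print(len(picked), "candidates drawn across", len({c['zone'] for c in picked}), "zones")
```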
Administration and Scoring the EBI
EBI is a paper-and-pencil test. Candidates taking the test are provided with the Test Inventory containing the instructions and the 40 items. The candidate is required to read the instructions, write his or her gender and age in the spaces provided, and then tick the column that agrees with his or her behaviour or disposition towards examination situations from the following options: SA = Strongly Agree; A = Agree; D = Disagree; SD = Strongly Disagree.
When the EBI is administered to a single student, it may not be necessary for the student to write his or her name on the inventory. For group administration, however, it is recommended that a number or code be assigned to each student, pre-written by the administrator on the inventory, which the test administrator later uses to identify each test taker. The idea is to prevent the biased responding associated with students having to write their names on the inventory. During the development of the EBI it was not necessary to write names or assign codes to the test takers, since follow-up was not a consideration. Administration of the inventory takes about 20-30 minutes. Generally, the test administrator ensures that the test taker is comfortable and assured that the results of the test will not lead to any punitive action against the candidate. The EBI is scored by assigning numerical values from 1 to 4 to each response: SA = 1; A = 2; D = 3; SD = 4. Items 8 and 25 are scored in reverse order because they are positively toned. The item scores are added up to give a total score for each test taker; the minimum and maximum obtainable scores are 40 and 160, respectively. An individual's total score is interpreted by comparing it with the national norms presented in Table 6. The national mean score for male students is 104.43, while that of female students is 106.95.
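A minimal sketch of this scoring rule is given below. It follows the values stated above (SA = 1 to SD = 4, items 8 and 25 reverse-scored, comparison against the gender-specific national means), but the candidate responses in the example are made up and the function is not the study's official scoring key.

```python
# Minimal sketch (illustrative, not the official scoring key) of EBI scoring
# and interpretation against the reported national means.

RESPONSE_SCORE = {"SA": 1, "A": 2, "D": 3, "SD": 4}
REVERSE_ITEMS = {8, 25}
NATIONAL_MEAN = {"male": 104.43, "female": 106.95}

def score_ebi(responses, gender):
    """responses: dict mapping item number (1-40) to 'SA'/'A'/'D'/'SD'."""
    total = 0
    for item in range(1, 41):
        value = RESPONSE_SCORE[responses[item]]
        if item in REVERSE_ITEMS:
            value = 5 - value                      # reverse-score the positively toned items
        total += value
    prone = total < NATIONAL_MEAN[gender]          # below the mean = prone to malpractice
    return total, prone

# Hypothetical candidate who answers "A" (Agree) to every item:
answers = {i: "A" for i in range(1, 41)}
total, prone = score_ebi(answers, "male")
print(total, "-> prone to examination malpractice" if prone else "-> not prone")
```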
Reliability and Validation of EBI
The Cronbach Alpha index for the EBI is .843, which is above the .70 cut-off threshold for most Social Science research as stated by the Institute for Digital Research and Education, UCLA (2013). Moreover, all 40 items contributed significantly to the overall Cronbach Alpha index (.843), as shown in Table 3. Table 3 shows the internal consistency of all 40 items in the EBI. According to Santos (1999), the value of an item to the overall Cronbach Alpha (α) is determined from the α index if the item is deleted: if α improves significantly, the item should be dropped. In this case, deletion of any of the items would not lead to a significant improvement in the overall α, as seen in Table 3. Therefore, the EBI is internally consistent. Reliability over time was determined through a test-retest over a period of four weeks on a sample of 60 SSCE candidates; the Pearson correlation (r) between the two administrations is .85, which shows that the EBI has high stability. Validity of the EBI was determined with Factor Analysis using Principal Component Analysis (PCA) in SPSS version 17, as presented in Tables 4 and 5. Data in Table 4 show that 12 factors were extracted from the 40-item EBI. These 12 factors accounted for 55.53% of the variance among the independent variables (EBI items). Table 4 presents the factor loadings of the items at a threshold of .30; factor loadings less than .30 were suppressed in the Factor Analysis since, according to Precision Consulting (2013, p. 57), "a factor loading of less than .30 is not that significant". Most of the items loaded on Factor 1, which could best be described as "Examination Behaviour (During and Post-examination)". Other discernible factors are: Preparation/Study Habits (Pre-examination behaviours); Examination Anxiety; Examination Attitude; and Assistance and Cooperation to Cheat during Examination (Collusion). Items 32, 28, 22, and 39 had very high factor loadings on Factor 1 at .680, .639, .636 and .627 respectively (extraction method: Principal Component Analysis). Further to the reliability and validation of the EBI as described above, national norms were established for the instrument. According to Pareek (2002, p. 37), "Norms are the standards against which a score can be judged as low, normal or high. Generally, these are calculated from data from a large sample (say 1,000) in terms of the mean and standard deviation values". Moreover, norms are developed for particular groups such as gender, age, category of students and so on.
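For readers who wish to reproduce the internal-consistency figures reported above, the following is a minimal sketch of how Cronbach's alpha and the "alpha if item deleted" values can be computed from a respondents-by-items score matrix; the use of NumPy and the function names are our own assumptions and are not tied to the SPSS procedure used in the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item removed in turn (one value per item)."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]
```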
The procedure adopted in producing the national norms for the EBI involved selecting a large sample of 4000 students from the six geo-political zones of the country and the Federal Capital Territory of Abuja. Data collected from the 4000 participants were analysed using SPSS version 17, and the norms obtained are presented in Table 6. Data in Table 6 show the national norms against which an individual's score on the EBI should be judged. Scores below the mean indicate a disposition towards engaging in examination malpractices, while scores above the mean point to a low tendency to engage in examination malpractices.
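As a concrete illustration of the norm-referenced interpretation, the sketch below converts a raw EBI total into a standard (z) score against a chosen norm group and attaches the interpretation used above; the standard deviation shown in the example is a placeholder, since the full norm table (Table 6) is not reproduced here.

```python
def interpret_ebi(total_score, norm_mean, norm_sd):
    """Judge a raw EBI total against a norm group's mean and standard deviation."""
    z = (total_score - norm_mean) / norm_sd
    if total_score < norm_mean:
        tendency = "disposed towards examination malpractice"
    else:
        tendency = "low tendency towards examination malpractice"
    return z, tendency

# Example: the reported national mean for male students is 104.43;
# the standard deviation of 15.0 is a hypothetical placeholder value.
z_score, label = interpret_ebi(92, norm_mean=104.43, norm_sd=15.0)
```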
Discussion
EBI was developed for measuring the behavioural tendencies of prospective Senior School Certificate Examination (SSCE) candidates in examination conditions. Primarily, it is to be used for determining whether a Senior Secondary School student has a positive or negative behavioural disposition towards engaging in examination malpractices, as enunciated in the theoretical framework. The elements of the TPB regarding "TACT" were manifested in the EBI. The "Target" behaviour was the tendency to cheat in the 2013 SSCE. Responses to the EBI by the students registered for the examination, and the data analysis, demonstrated the reliability and validity of the EBI for identifying students who are favourably disposed towards cheating in examinations and therefore require reformative counselling to discourage them from actually engaging in examination malpractices. School Guidance Counsellors could use EBI scores as a basis for preventive counselling against examination malpractices. Studies have shown that Guidance and Counselling programmes in the school system are geared towards helping students improve their study habits so as to be fully prepared for writing their examinations confidently without engaging in examination malpractices (Ossai, 2012, 2013; UNESCO, 2000a, 2000b). Moreover, high examination anxiety has also been implicated by the Ossai (2013) study, along with poor study habits, as one of the variables that propel students to engage in examination malpractices. A good number of items in the EBI assess Study Habits and Examination Anxiety (see the section on the Construction of the EBI for these items). Therefore, school Guidance Counsellors should help students who score low in these sections to improve their study habits as well as control their examination anxiety levels, since poor study habits and high examination anxiety are consistently and significantly correlated with a positive tendency to engage in examination malpractices (Ossai, 2004, 2011, 2012, 2013). The implication of this correlation is that when Guidance Counsellors help students to improve their study habits and reduce their examination anxiety levels, the students will be less prone to engage in the other acts of examination malpractice covered in the inventory, such as collusion, violation of examination ethics, and so on. These various acts of examination malpractice covered by the EBI were derived from the literature on examination cheating behaviour and constitute the "Action" component of TACT. The "Context" of the EBI was the May/June 2013 Senior School Certificate Examination, in which 112 865 candidates engaged in cheating behaviour out of 1 671 268 candidates who actually sat for the examination. This shows that the prospect for using the EBI is enormous, given the prevalence of this educational menace in the country. Other stakeholders within the school system (students, teachers and administrators) will benefit by using the validated and standardized Examination Behaviour Inventory (EBI) to identify students who are likely to engage in examination malpractices. Such identified students will be offered relevant counselling therapies such as Cognitive Behaviour Therapy (CBT) and Rational Emotive Behaviour Therapy (REBT) to reform or re-orientate them before they sit for the actual examinations. Moreover, students who are identified as prone to examination malpractices based on their EBI scores can be assisted further through study habits induction and self-regulated learning strategies.
Research has shown that counselling therapies and study habits induction are very useful strategies for improving students' academic performance, owing to their efficacy in reducing the debilitating examination anxiety which otherwise leads to involvement in examination malpractices (Ossai, 2013; Spielberger & Vagg, 1987). The students who are thus helped will be saved from the ugly consequences of engaging in examination malpractices. The EBI relates to "Time" in TACT by encapsulating the three phases - before, during and after the actual examinations - in which the cheating behaviour occurs.
Consistent with the Subjective Norm component of the Theory of Planned Behaviour upon which the EBI is based, a culture of intolerance of cheating behaviour in examinations must be established in all countries of the world. Teachers, School Administrators, Parents and Researchers in Education and the Social Sciences will also find the EBI very useful in their efforts to checkmate examination malpractices in all educational systems. The menace of examination malpractices pervades all educational systems of the world, even in developed countries. For instance, McCabe, Trevino and Butterfield (2001) concluded from a meta-analysis of a decade of research on cheating in academic institutions in America that "cheating is prevalent and that some forms of cheating have increased dramatically in the last 30 years" (p. 219). Denise Pope's account (as cited in Walker, 2012) further corroborates the McCabe et al. (2001) report with the following words: "between 80 and 95 percent of high school students admitted to cheating at least once in the past year and 75 percent admitted to cheating four or more times". Therefore, the EBI is recommended for use by researchers in education and the social sciences as a veritable tool for diagnosing students in high schools who may have the attitude and behavioural tendencies towards engaging in examination cheating behaviour. The instrument could be revalidated and used in other countries of the world. Teachers and School Administrators who are concerned about the dangers posed by the menace of examination malpractices to the very essence of schooling will also find this inventory useful in curbing the incidence of its occurrence in their schools. Every stakeholder in the education system of Nigeria and other countries of the world is appalled at the deplorable state of cheating behaviour in examinations. The EBI is simple enough to be administered by concerned teachers and school administrators who are genuinely interested in identifying students who may have tendencies towards cheating in examinations, so as to provide reformative reorientation for them before they actually sit for certificate examinations. Parents and Guardians will also find the EBI useful for mapping the examination cheating profile of their children and wards. Well-meaning parents and guardians realize that cheating in examinations is the worst investment they can make for their offspring. Certificates acquired through examination malpractices are worthless, as there will always be a day of reckoning on which holders of such certificates will have to demonstrate their knowledge and skills. Interested parents and guardians should use the EBI as an instrument to check their children's disposition towards engaging in examination malpractices, with the objective of dissuading those identified by the EBI as prone to examination malpractice from actually engaging in the act. If these important stakeholders in education (teachers, administrators, parents and researchers) take an active interest and engage massively in the fight against examination malpractices by, for instance, utilizing the EBI measure appropriately, then the subjective norm would be eliminated from the framework. The Perceived Behaviour Control component would be strengthened, as students would realize that it may no longer be easy to engage in cheating behaviour during examinations. Deployment of the EBI in the school system requires the collective action of all well-meaning stakeholders in education.
Such collective action is necessary in order to tame this hydra-headed monster called examination malpractices.
The examination bodies (WAEC, NECO, NABTEB, NTI and JAMB) that have been in the frontline of the war against examination malpractices in Nigeria will also benefit from the deployment of the EBI in secondary schools. Despite their efforts, the results have been limited, as the incidence of candidates engaging in examination malpractices has continued to rise over the years (Afemikhe, 2010). The EBI will therefore be of interest to them, since it will help to identify candidates who are likely to engage in examination malpractices rather than waiting for them to engage in the act before drastic measures are taken. The EBI will help to restore dignity to the certificates of secondary school leavers.
There is no doubt that the incidence of secret cultism in Nigerian schools is linked with the menace of examination malpractices (Ossai & Avwenagha, 2010). Students who cheat to obtain school certificates and to pass the Unified Tertiary Matriculation Examination (UTME) often find that they cannot cope with the academic rigours of higher education and hence seek succour in secret cults. Therefore, curtailing the menace of examination malpractices in Nigerian secondary schools through effective use of the EBI will have far-reaching positive consequences for the Nigerian educational system and for society in general. Those who engage in examination malpractices often find, to their chagrin, that they have to defend their certificates someday, and when they fail to justify the certificates they possess, they often resort to criminal activities. Hence, if this pandemic of examination malpractices is nipped in the bud, Nigerian society will be better for it.
Conclusion
Adequate care was taken in the construction, validation and standardization of the EBI to ensure that it serves the purpose of helping to diagnose students who have a tendency to engage in cheating behaviour during school examinations. This proactive strategy is worth trying in the face of the apparent failure of the punitive sanctions that have been used extensively over the years in most countries of the world. As a tool for preventive action, the EBI differs from the punitive strategy in that it aims at identifying students inclined to cheating so that they can be reformed before they sit for the examinations, rather than waiting for them to cheat and then punishing them. The instrument takes into practical cognizance all the elements of the Theory of Planned Behaviour (TPB), especially attitude towards examination malpractices, the subjective norm (encouragement of the acts by people who should vehemently oppose them) and the perceived ease of engaging in the actions. The Examination Behaviour Inventory (EBI) is a valuable educational tool that should be used to measure the disposition of students towards examination malpractices. It has been shown to be a reliable and valid instrument for this purpose among secondary school students in Nigeria. Therefore, it is recommended for adaptation in other countries of the world as an instrument for diagnosing the tendency of students to engage in cheating behaviour in school examinations. Thereafter, school personnel such as counsellors, psychologists, teachers and administrators should devise proactive strategies to prevent such identified students from engaging in examination malpractices. It is hoped that such proactive strategies will contribute towards curbing the rising incidence of examination malpractices, especially when used as complementary to the existing punitive measures in most countries of the world.
Dealing with missing phase and missing data in phylogeny-based analysis
We recently described a new method to identify disease susceptibility loci, based on the analysis of the evolutionary relationships between haplotypes of cases and controls. However, haplotypes are often unknown and the problem of phase inference is even more crucial when there are missing data. In this work, we suggest using a multiple imputation algorithm to deal with missing phase and missing data, prior to a phylogeny-based analysis. We used the simulated data of Genetic Analysis Workshop 15 (Problem 3, answer known) to assess the power of the phylogeny-based analysis to detect disease susceptibility loci after reconstruction of haplotypes by a multiple-imputation method. We compare, for various rates of missing data, the performance of the multiple imputation method with the performance achieved when considering only the most probable haplotypic configurations or the true phase. When only the phase is unknown, all methods perform approximately the same to identify disease susceptibility sites. In the presence of missing data however, the detection of disease susceptibility sites is significantly better when reconstructing haplotypes by multiple imputation than when considering only the best haplotype configurations.
Background
In the last few years, various phylogeny-based approaches have been developed to test for association between a candidate gene and a disease [1][2][3][4]. These tests are based on the grouping of haplotypes according to their evolutionary relationships, represented by a phylogenetic tree. This grouping reduces the degrees of freedom of the association tests and thus increases their power. Interestingly, the haplotype phylogeny can also be used to precisely identify loci involved in the determinism of the disease. We recently described a new method to localize disease susceptibility loci (DS loci), based on the definition of a coevolution index (Vi) between the markers and the disease, which is used to identify putative DS sites [4]. Simulations have shown that the method performs well at identifying DS loci, especially when several DS loci exist.
To reconstruct the phylogenetic tree, haplotype information is used. In most situations, this information is not available from the data and needs to be inferred. In our method, this was done by determining the most probable haplotypes of the different individuals and analyzing them as if they were the known haplotypes. However, this approach may lead to incorrect inferences because it does not take into account the phase uncertainty, which might be very large, especially in the presence of missing data. In this context, the use of multiple imputation to reconstruct missing phase and missing data might be an interesting alternative. In this paper, we used the simulated data of Genetic Analysis Workshop 15 (GAW15) to compare the relative power of these two approaches to haplotype reconstruction to correctly identify the simulated DS sites when using a phylogeny-based analysis.
Data
We analyzed the 100 replicates simulated for GAW15 (Problem 3). To apply a phylogeny-based method, we need to work on a candidate region where the disease susceptibility site is typed and where the recombination rate is low. We used the answers to choose a 200-kb region of chromosome 6 around the DR locus that contained two DS sites: the DR locus and locus C. In this region, nine single-nucleotide polymorphisms (SNPs), including locus C, were selected. A tenth biallelic locus was added, corresponding to the DR locus in which the lower risk alleles DR1 and DRX were pooled. The linkage disequilibrium is low within these ten sites: the highest r² is between locus C and SNP 4 (r² = 0.65), and it is the only pair of loci with an r² above 0.2.
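As a reminder of how the pairwise linkage-disequilibrium values quoted above are obtained, the sketch below computes r² for two biallelic loci from a two-locus haplotype frequency and the two allele frequencies; the function name and input format are ours and are not part of the GAW15 material.

```python
def r_squared(p_ab, p_a, p_b):
    """r^2 between two biallelic loci.

    p_ab : frequency of the haplotype carrying allele a at locus 1 and allele b at locus 2
    p_a  : frequency of allele a at locus 1
    p_b  : frequency of allele b at locus 2
    """
    d = p_ab - p_a * p_b  # linkage-disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
```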
For each replicate, the first affected child of the first 500 families was selected to obtain 500 trios. Missing data were generated at the different loci (with the same percentage of missing data at each locus) for both parents and children. In each replicate, the same individuals had their genotypes missing at the same loci, in order to ensure a similar pattern of missing data over replicates.
Reconstruction of missing data and missing phases
Missing phases and missing genotypes were reconstructed either by an algorithm that infers only the most probable haplotypes without missing data for each individual, or by a multiple imputation method. For both methods, the first step was the inference of all the possible haplotypic configurations and their probabilities, which was performed with the software ZAPLO [5]. The first method then consists of picking the most likely haplotypes for each individual. The only families kept for the analysis were those with a low level of haplotype uncertainty, i.e., families with a best-configuration posterior probability >50% and at least a 25% difference between the posterior probabilities of the best and second-best configurations. Similar results were obtained with other cut-off values (data not shown). The multiple imputation procedure is the same as the one described in Croiseau et al. [6]. Briefly, it consists of repeating two steps: 1) given the current values of two parameters (population haplotype frequencies and affected child genotype frequencies), sampling a complete data set according to the posterior probabilities of each genotypic configuration, and 2) given the current data set, updating the two parameters. After a burn-in period of 1000 iterations, the current complete data file was retained every 1000 iterations. We ran the algorithm until we obtained ten complete data sets.
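The data-augmentation loop described above can be summarised schematically as follows. This is only a sketch of the published procedure: `sample_complete_data` and `update_parameters` are placeholders for the two steps, and the burn-in and thinning constants are those quoted in the text.

```python
def multiple_imputation(observed_data, init_params, sample_complete_data,
                        update_parameters, n_imputations=10,
                        burn_in=1000, thinning=1000):
    """Schematic data-augmentation loop producing several imputed data sets.

    sample_complete_data(params, observed_data) -> one fully phased, fully
        genotyped data set drawn according to the posterior probabilities of
        the genotypic configurations given the current parameters.
    update_parameters(complete_data) -> updated population haplotype
        frequencies and affected-child genotype frequencies.
    """
    params = init_params
    imputed_sets = []
    iteration = 0
    while len(imputed_sets) < n_imputations:
        iteration += 1
        complete = sample_complete_data(params, observed_data)
        params = update_parameters(complete)
        if iteration > burn_in and iteration % thinning == 0:
            imputed_sets.append(complete)
    return imputed_sets
```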
Identification of the susceptibility sites
The identification of the DS sites was performed with the software ALTree [7]. First, 1000 equiparsimonious unrooted trees were reconstructed for the 30 most frequent haplotypes using the parsimony method implemented in the software PAUP*, version 4.0b10 [8]. To ensure that various tree configurations were explored, PAUP* was launched 10 times, 100 trees being retained each time. Then, a new character called S, which represents the disease status, was defined for each haplotype. The state of this character depends on the proportion of cases carrying a given haplotype (state 1 for a large proportion of cases and 0 otherwise). The character state changes were optimized on the tree for each character (including S) using the deltran option. A correlated evolution index (Vi) was calculated between the changes of each site i and the changes of the character S. This index was defined as the difference between the number of observed and expected co-mutations between site i and character S, divided by the square root of the number of expected co-mutations [4]. To take into account the 10 imputed data sets, we calculated the median of the Vi over these 10 data sets. Finally, the sites with Vi ≤ 0 were discarded and the two sites with the highest Vi were retained as putative DS sites.
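The coevolution index can be written compactly. The sketch below shows the calculation of Vi for one site and the median over the ten imputed data sets; the function and variable names are illustrative.

```python
from math import sqrt
from statistics import median

def coevolution_index(observed, expected):
    """Vi = (observed - expected co-mutations) / sqrt(expected co-mutations)."""
    return (observed - expected) / sqrt(expected)

def median_index_over_imputations(obs_exp_pairs):
    """Median Vi over imputed data sets, given (observed, expected) pairs for one site."""
    return median(coevolution_index(o, e) for o, e in obs_exp_pairs)
```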
Results
The power to identify the DS sites is measured as the percentage of replicates, among the 100 replicates available, in which the simulated DS sites have the highest Vi.
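The power estimate described above amounts to a simple count over replicates; a minimal sketch is given below, with the data structures chosen for illustration only.

```python
def power_estimate(replicate_results, ds_sites):
    """Fraction of replicates whose top-ranked sites (highest Vi) are exactly the simulated DS sites.

    replicate_results : list of dicts mapping site name -> Vi, one dict per replicate
    ds_sites          : collection of the simulated disease susceptibility site names
    """
    ds_sites = set(ds_sites)
    hits = 0
    for vi_by_site in replicate_results:
        ranked = sorted(vi_by_site, key=vi_by_site.get, reverse=True)
        if set(ranked[:len(ds_sites)]) == ds_sites:
            hits += 1
    return hits / len(replicate_results)
```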
Missing phase
In Table 1, we compare the power to identify the DS sites on the complete data set under three conditions: 1) when the phase is known, 2) when only the best haplotype configuration is kept, or 3) when using multiple imputation to infer missing data. The results show no significant difference between the three methods. Figure 1 shows, for different rates of missing data, the percentage of replicates in which the site with the highest Vi (best site) is one of the two true DS sites. Interestingly, this percentage remains very similar across the different rates of missing data when using multiple imputation. This is not the case when considering only the most likely haplotypes with more than 15% of missing data. With multiple imputation, there are fewer errors on the second-best site and more replicates in which no other site is detected than when using the most likely haplotypes (Figure 1). Figure 2 shows the percentage of replicates in which the two best sites are the two simulated DS sites. For up to 20% of missing data, there is no difference in power between the most likely haplotypes and the multiple imputation method. For higher rates of missing data, the multiple imputation method leads to higher power, but this is not significant at the 5% level because the 95% confidence intervals overlap. The multiple imputation method is found to be more accurate: it is significantly more powerful in identifying only the two DS sites (no other site having a Vi > 0).
Missing data
The error rate, defined as the percentage of replicates in which the true DS sites are not correctly identified, is presented in Figure 3. With the multiple imputation method, the locus with the highest Vi is always either DR or locus C (more often DR). There is also significantly less error on the two best sites (sum of the error on the best site and on the second-best site) than when the most likely haplotypes are used. Indeed, this two-best-site error rate is stable at around 10% for up to 30% of missing data and increases to 35% for 50% of missing data. On the contrary, when the most likely haplotypes are used, the two-best-site error rate constantly increases and reaches 70% for 50% of missing data.
Figure 1: Power to identify one of the two susceptibility sites for different rates of missing data. Missing data and missing phases are reconstructed using a multiple imputation method (in red) or the most likely haplotypes obtained with ZAPLO (in black). The percentage of replicates in which the site with the highest Vi is one of the simulated DS sites is shown according to the properties of the second-best site (if any): i) no second-best site is identified with a Vi > 0 (striped bars); ii) the second-best site is a DS site (open bars); iii) the second-best site is not a DS site (colored bars).
Figure 2: Power to identify the two susceptibility sites for different rates of missing data. Missing data and missing phases are reconstructed using a multiple imputation method (in red) or the most likely haplotypes obtained with ZAPLO (in black). The percentage of replicates in which the two sites with the highest Vi values are DR and locus C is reported for the two situations in which there are other sites with Vi > 0 (open bars) or there is no other site with Vi > 0 (colored bars).
Discussion
The analysis of the GAW15 simulated data allowed us to confirm the power of phylogeny-based tests to identify several DS sites located in the same region. We have shown that the method is particularly powerful in identifying locus DR as a susceptibility site. This may be explained by the very high risks attributed to individuals carrying the DR4 allele. The method also allowed us to detect locus C, generally as the second-best site and with a lower power than DR (Figure 3 shows more errors on the second-best site than on the best site). However, this locus increases the risk only in women, and our analysis was performed regardless of the sex of the individuals.
Our results show that the use of a multiple imputation method to reconstruct haplotypes allows a better detection of the DS sites in the presence of missing data than the use of the best haplotypic configuration. In particular, it is more accurate (the DS sites are often the only ones detected) and it drastically decreases the error rate for DS site identification. In this study, in the absence of missing data, no difference between the three phase imputation methods was found, but this is probably a particular situation in which phase is not very ambiguous thanks to the familial information available. Indeed, when we use the most likely haplotypes, only a mean of 12.48 families (out of the 500 families in the sample) are discarded from the analysis because of their high level of phase uncertainty. The relative performance of the three methods might be different using case-control data with no familial information available.
To tackle the problem of phase resolution, two types of strategies have been suggested. In one-stage procedures, the phase inference and the analysis are performed simultaneously. In two-stage procedures, haplotype frequencies that are estimated in the first stage are used as weights in the second stage. Concerning phylogeny-based analyses, a one-stage procedure will be very difficult to develop because haplotypes need to be known to reconstruct the phylogenetic tree. This probably explains why only two-stage procedures have been proposed [2,9]. The problem with these different two-stage methods is that the phylogenetic tree is reconstructed on all possible haplotypes, even if they do not really exist. This can significantly increase the number of haplotypes considered and thus lead to an increase in the computation time (especially for parsimony-based tree reconstruction) and possibly to a loss in power. With the multiple imputation method, ten imputed files are analyzed, which will also increase the computation time, but only the haplotypes observed in these files are used in the phylogenetic reconstruction. Further work will need to be done to compare the multiple imputation approach with these two-stage procedures.
Conclusion
In conclusion, the analysis of the GAW15 simulated data shows that multiple imputation can be of great value in dealing with missing genotypes prior to a phylogeny-based analysis. In comparison with a strategy using only the most likely haplotypes, it increases the chances of correctly identifying disease susceptibility loci.
Figure 3: Error in the identification of the susceptibility loci for different rates of missing data. Missing data and missing phases are reconstructed using a multiple imputation method (in red) or the most likely haplotypes obtained with ZAPLO (in black). Colored bars: the best site (with the highest Vi) is neither DR nor locus C. Empty bars: sum of two error rates, the error on the best site and the error on the second-best site only (i.e., the site with the highest Vi is either locus C or DR, but the site with the second-highest Vi is neither locus C nor DR).
EFFECT OF TEMPERATURE ON THE ERADICATION OF HOUSE LONGHORN BEETLE LARVAE IN WOOD BY MICROWAVE TREATMENT
In repressive protection of cultural/historical woodwork, microwaves have many advantages over conventional heating. The objective of this research was to examine the conditions for eradication of house longhorn beetle larvae (Hylotrupes bajulus) in spruce wood (Picea abies) using microwaves. Larvae, inserted at different depths of spruce blocks containing 12% and 42% moisture, were exposed to microwaves. Two apparatuses were used: a 750 W commercial microwave chamber and a newly developed horn-antenna microwave device with a power of 800 W and a frequency of 2.45 GHz, for targeted radiation. We found that the inner part of the wood warmed up more quickly than the surface, which was heated to 65°C. For successful suppression, larvae in the wood need to be heated to 54.5°C for 10 seconds. The necessary exposure time increases with increasing thickness of the wood. The surface of wood containing more moisture is heated more quickly, but increased moisture slows down the penetration of microwaves into the wood specimen. Therefore, larvae in wood of lower moisture (12%) died faster, both those 20 mm under the surface (1.5 min) and those at 130 mm depth (10.5 min).
INTRODUCTION
In certain conditions, wooden objects are threatened by wood pests such as wood fungi and insects. Chemical agents, suffocation, and methods of freezing and heating with moisture regulation are the most common methods used for their eradication (Beiner and Ogilvie, 2005). Microwave heating is one of the thermal methods and has been successfully used for decades for the treatment of insect-infested (Andreuccetti et al., 1994; Fleming et al., 2005; Henin et al., 2014) or fungi-infected wood (Bech-Andersen et al., 1992; Strätling et al., 2008; Terebesyová et al., 2010).
Microwaves can effectively destroy wood pests in all stages of development (Bini et al., 1997). Microwave heating is one of the fastest and most effective physical methods for the subsequent treatment of attacked woodwork. Its advantage lies in the fact that, unlike the conventional methods, wood is heated from the inside out (Zielonka and Gierlik, 1999). However, due to possible adverse effects, treatment of wood with microwaves is still of limited use.
Microwave irradiation can be performed in a chamber (closed system) or using a device with a horn antenna for targeted radiation (open system). In a closed system, the waves are reflected from the chamber walls and thus cover the entire exposed object. The open system is much more versatile, because the size of the object and any metal parts do not pose a restriction. However, with the waves traveling only in one direction, only a part of the object is irradiated at a time, so it is necessary to relocate the device in order to expose the whole object to microwaves.
Since both systems have their advantages, we compared them with respect to their impact on the warming of insects in the depths of the wood. Microwaves are absorbed in the direction of irradiation and consumed from the surface towards the interior, so the temperature in the depth of the wood decreases (Makovíny et al., 2012). We therefore assume that the depth of penetration of the microwaves, and thereby the heating, depends on the orientation and moisture content of the sample being irradiated.
Insects are ectothermic organisms and can survive in a very wide temperature range, from -34 to 64°C. Eradication of the different developmental stages of most wood insects requires elevated temperatures between 45°C and 64°C. Temperatures above 55°C cause coagulation of proteins in the larval cells, and the injuries are irreversible (Strang, 1992). Both the species and the stage of development have an indirect effect on the lethal temperature of each pest. Tissues in different developmental stages of insects have different dielectric properties and therefore heat up differently (Ondráček and Brunnhofer, 1984), so the effect of temperature varies depending on the tissue (Denlinger and Yocum, 1998). The relationship between lethal temperature and exposure time is also important. Larvae throughout the whole cross-section of infested wood need to be exposed to lethal temperatures for a few minutes up to a few hours (Grosser, 1985; Strang, 1992). Insects are able to adapt to a slowly changing temperature (Strang, 1992). Consequently, a lethal temperature reached in a short exposure time, when they do not have time to adjust, is highly efficient (Nelson, 1996). Becker and Loeb (1961) state the sensitivity of the most important dry-wood insect larvae to high temperatures under conventional heating. Larvae of the common furniture beetle (Anobium punctatum) and the lyctus powderpost beetle (Lyctus brunneus) die when exposed to 58°C for 20 minutes, while for the larger and more resistant larvae of the longhorn beetle Hylotrupes bajulus the following exposures are necessary: 50°C for 300 minutes, 54°C for 90 minutes or 58°C for 55 minutes. Eradication times with microwave heating are shorter than with hot-air heating. Fleming et al. (2003) stated that at a temperature of 60°C, eradication of the Asian longhorned beetle takes 123 minutes with conventional heating and 5 minutes with microwaves. Andreuccetti et al. (1995) determined the lethal temperature for longhorn beetle Hylotrupes bajulus larvae in a water bath to be between 52 and 53°C. When inside wood, this temperature destroys them within 3 minutes.
In fresh Scots pine wood (thickness about 10 cm), microwave radiation in a chamber at 2.45 GHz achieves 100 percent mortality at a temperature of 62°C (Fleming et al., 2005). Henin et al. (2008, 2014) exposed the larvae of the house longhorn beetle in boards of 22 mm thickness to temperatures exceeding 55°C. The larvae died, while the surface of the wood heated to 60°C. Using a microwave device with a horn antenna at a frequency of 2.45 GHz, Kisternaya and Kozlov (2007) and Makovíny et al. (2012) irradiated wood of larger cross-sections. Kisternaya and Kozlov (2007) reached the lethal temperature of 53 to 55°C in about 120 to 240 minutes. In order to successfully destroy the larvae, it was necessary to maintain this temperature for at least 30 minutes. In pine samples of 150 mm cross-section, Makovíny et al. (2012) destroyed larvae in 34 minutes at a temperature of 50°C, and in 19 minutes at a temperature of 65°C, using a radiation power density of 1.0 W/cm². Results have also been published for other wood pests: Andreuccetti et al. (1995) for Oligomerus ptilinoides, Lewis et al. (2000) for Incisitermes minor, Fleming et al. (2003) for the Asian longhorned beetle (Anoplophora glabripennis), Fleming et al. (2004) for Plectrodera scalator, Fleming et al. (2005) and Hoover et al. (2010) for the pinewood nematode (PWN), Bursaphelenchus xylophilus, Nzokou et al. (2008) for Agrilus planipennis, Bisceglia et al. (2009) for various types of nematodes, and Massa et al. (2011) for Rhynchophorus ferrugineus.
In the case of cultural/historical wooden objects treated with various coatings, it is necessary to bear in mind that at temperatures above 60°C there is a risk of natural waxes softening and of damage to adhesives, paints and varnishes on polychromated sculptures or furniture (Unger, 2001).
From the research carried out it can be established that repressive protection with microwave irradiation is effective and reliable. However, it is necessary to optimize the two basic factors - temperature and exposure time. For a particular type of pest, these factors directly depend on the dimensions of the object, the wood moisture and the direction of irradiation.
As the literature shows very different data on the time and temperature necessary to destroy each species of wood insect, and the impact of wood as a material (its humidity and thickness) on their heating is not well known either, we carried out a study on the impact of these factors on mortality of the house longhorn beetle larvae.
MATERIALS AND METHODS
2.1 Sample preparation
For this study we prepared blocks of spruce (Picea abies) in two dimensions. For the best possible approximation to the real situation in determining the conditions for larvae eradication, we prepared 50 × 40 × 100 mm samples with 12% moisture content. The wood was then split in half thickness-wise with a chisel, and two grooves were carved in the middle of each piece with a rounded chisel. The dimension of the groove was adjusted to the size of the larvae; in doing this we tried as much as possible to simulate the real situation (Figure 1). Samples of the same design were also prepared in XPS. Two house longhorn beetle (Hylotrupes bajulus) larvae of different sizes were inserted into the grooves and the halves were then joined. The sample was fixed with a rubber band, taking care that the halves fitted each other well, so that the cross-section of the wood remained unchanged, allowing moisture to pass through the whole sample while heated. Samples prepared in this way enabled us to quickly view and measure the temperature inside the sample and in the groove.
To estimate the impact of different directions of irradiation on heating throughout the wood volume, we used 150 × 150 × 150 mm samples with 12% and 42% moisture content. The sample size was limited by the size of the microwave chamber. The samples were split thickness-wise into six approximately equal parts. For chamber irradiation, 5 grooves for larvae insertion were carved into each part using a rounded chisel: one in the middle and the other four 20 mm from the surface (Fig. 2a). For longitudinal and radial irradiation we made three grooves in each part, at depths of 20, 75 and 130 mm (Figs. 2b and 2c). We inserted larvae of approximately the same weight at the same levels to nullify the impact of larvae size on heating.
2.2 Larvae preparation
Larvae weight varied from 0.03 to 0.32 g. Some larvae were taken from the collection of the Department of Pathology and Wood Preservation at the Biotechnical Faculty in Ljubljana; others were harvested from infested Scots pine (Pinus sylvestris) boards. The larvae were temporarily inserted into samples of spruce sapwood and kept at room temperature until the execution of the experiment. For the initial experiments and method optimization, we also used larvae of the Colorado potato beetle (Leptinotarsa decemlineata Say) and the lesser mealworm (Alphitobius diaperinus Panzer), as they were available in larger quantities.
2.3 Determination of the impact of the microwave treatment system on the heating of wood and larvae
In this study, we used two different systems of radiation: a closed system, in which we used a commercial microwave oven Whirlpool AT 329 ALU with a power of 750 W and a capacity of 22 litres, and an open system for unidirectional radiation. For the latter, a commercial microwave oven Sharp R-613 with an output of 800 W and a frequency of 2.45 GHz was reconstructed and a horn antenna with dimensions of 300 × 285 mm was added, which enabled targeted radiation.
House longhorn beetle larvae were heated in the smaller samples (wood and XPS), in order to achieve optimal distribution of heat throughout the wood volume, in the microwave chamber at a power of 750 W for different durations: 5, 10, 13, 15 and 30 seconds. After exposure, the samples were opened to measure the surface temperatures of the larvae and the wood interior and to check the viability of the larvae.
The surfaces of the larger samples were also heated to 65°C with both devices. At the beginning we used different time intervals, from 30 to 120 s; with further, shorter exposures we reached and maintained the target temperature of 65°C at the wood surface until all the larvae died.
2.4 Temperature measurement
The temperature of the material was measured before and after exposure to microwave radiation with a Trotec IC 080 LV IR thermal camera with a resolution of 384 × 288 pixels. The camera allows temperature measurement in the range of -20°C to +600°C with a measurement accuracy of ±2°C. For the specimens, the camera emissivity was set to 0.94. The infrared camera provides a complete temperature field display and temperature measurement at one to five selected points. Since this is a surface measurement, in order to establish the wood temperature profile the samples were taken apart and the surface temperature of each part was measured; the samples were then re-assembled and further irradiated.
For averaging, more precise measurement and processing of the thermographs, the computer program IC IR Report was used.
2.5 Determination of the vital functions (vitality) of larvae
After completing the programme of microwave exposure and temperature measurement, we immediately verified the survival of the larvae. Their state was assessed according to the criteria cited by Fleming et al. (2003), specifically: dehydration of the body, change of colour and opacity, and movement of the body and the jaw. After two to 24 hours at room temperature, the status of the larvae was re-checked.
RESULTS AND DISCUSSION
3.1 The influence of material and grooves on the heating of larvae
In order to explain the process of larvae warming, the larvae were exposed to microwaves in different environments: bare in the microwave chamber, embedded in XPS, and embedded in wood. Larvae inserted bare in the microwave chamber at the full power of 750 W reached a higher average temperature than larvae embedded in XPS. XPS is a material which is not heated by penetrating microwaves, because it does not contain water molecules. Thus, the microwaves heated only the larvae, which then emitted heat and moisture to the surroundings and so heated the groove (shaft) (Fig. 3). The larvae inserted in wood were heated the most (Fig. 4), as they reached up to a 30°C higher temperature at the same exposure time and power (Table 1). No temperature difference was found between an empty groove and a groove with embedded larvae in the wood.
In contrast to XPS, where the larvae heat the surroundings of the groove, in wood the larvae and the wood are heated together. During the initial irradiation, the larvae are heated faster and more than the wood, due to their higher moisture content. At short exposures, the wood is heated very unevenly, both at the surface and in the interior, where the difference was up to 17°C (Figure 4). Heating wood with microwaves causes migration of moisture from the interior towards the outer surfaces. This results in an accumulation of moisture (and, consequently, heat) on the walls of the tunnels (in our case a man-made groove). On average, the groove was 6°C warmer than the surrounding wood, which further contributes to the rapid death of the larvae. On average, the difference between the surrounding wood in some parts and the larvae reaches 4°C, which is less than reported by Andreuccetti et al. (1994), according to whom larvae in dry wood are heated up to 10°C more than the surrounding wood.
3.2 Impact of microwaves on the lethality of larvae
At the full power of the magnetron (750 W) for 5 seconds, all larvae in the 50 × 40 × 100 mm wood samples survived. Longer exposure times lead to higher temperatures and thus to a higher mortality rate of larvae. At a wood temperature of 54.6°C (larvae heated to 54.5°C), all larvae died within 10 seconds. Henin et al. (2008) exposed the larvae of the house longhorn beetle to temperatures above 55°C and achieved 100 percent mortality within 2 minutes. Fleming et al. (2005) reached 100 percent mortality of longhorn beetle larvae at a wood temperature of 62°C. Our findings regarding the temperature are in accordance with previous studies, but the time required for larvae eradication is significantly shorter - instead of one minute it is only 10 seconds, which is more acceptable from the point of view of conservation, since the cultural/historical objects are exposed to radiation and high temperatures for a shorter time. Smaller and lighter house longhorn beetle larvae, with an average weight of 0.07 g, were heated to about 1.8°C higher temperatures than the larger larvae with an average weight of 0.23 g (Table 2). A possible explanation is that smaller larvae represent a smaller concentration of moisture, creating a higher point concentration of microwaves and resulting in increased heating. The stronger warming of smaller larvae is an encouraging result with regard to the suppression of the common furniture beetle family, whose larvae are smaller and twice as sensitive to temperature as the house longhorn beetle larvae. Attacks by these insects are much more common on objects of cultural heritage.
When checking the mortality of larvae, we did not observe any changes in colour immediately after exposure to microwaves, as already indicated by Makovíny et al. (2012). We assume that the changes did not appear because of the shorter exposure times and lower temperatures. The dead larvae did darken after 24 hours, however, which was noticed when survival was re-examined. Larval dehydration is also not the most effective criterion for determining survival, because it is difficult to estimate, so we focused mostly on the movement of the larvae. The movement of the irradiated larvae followed a certain pattern: first, the larvae became more lively with rising temperature (28-42°C); when the temperature was raised to about 49°C the larvae calmed down; and they finally died when their temperature reached 54.5°C.
3.3 Impact of wood moisture, depth and direction of irradiation on the heating of larvae
The 150 × 150 × 150 mm samples were heated and the wood surface temperature was kept at a maximum of 65°C, which is still a safe temperature for a brief exposure of the surface coatings used on objects of cultural heritage (Nicolaus, 1999), except for waxes, which melt at 62-64°C (Rivers and Umney, 2003). The direction of irradiation, the wood moisture and the depth of penetration affected the speed of wood heating, so we adjusted the heating interval (30 to 120 s) to the heating rate of the wood. Pauses of two minutes were maintained during exposure; this was the time needed to carry out the interim measurements. Within two minutes, the wood surface cooled down by 8°C.
In the chamber, the samples were heated more slowly and more evenly than in the case of direct radiation, since the microwaves are absorbed over the whole surface and are not concentrated at one point, as in direct irradiation (Figure 5a). The wood with 12% moisture content (MC) heated faster through its volume than the wood with a moisture content of 42%. Wood with more moisture absorbed and consumed the majority of the microwaves at the very surface or just below it. On the other hand, the drier surface of the wood with 12% MC absorbed only part of the microwaves, while the rest could penetrate deeper into the wood's interior. Thus the entire volume of the wood was heated faster and more evenly. The speed of heating of the wood affects the intensity of larvae heating.
In both cases, all larvae at all five levels were killed before the temperature of the wood surface reached the targeted 65°C. In the wood with 12% moisture, the exposure time needed was 2 minutes, while the wood surface heated on average to 51.3°C in the radial and 50.6°C in the longitudinal plane. A somewhat larger gap between the heated surfaces appears in the 42% moist wood, where the difference is up to 2°C, while the larvae placed along the fibres at a depth of 2 cm on average heated more than the larvae in the radial direction. This could be attributed to the passage of moisture in the wood, since the permeability in the longitudinal direction is greater and more moisture is captured in the shaft (groove), which results in more intensive heating of the larvae. After a two-minute exposure, only one of 15 larvae survived, so after a two-minute break we exposed the samples to microwaves again for one minute (Table 3).
In order to determine the influence of orientation on the heating of the wood interior, we applied directed (DC) irradiation of the samples longitudinally (along the fibres) and radially (at right angles to the wood fibres) until the average surface temperature of 65°C was reached. The wood a few millimetres below the surface was heated the most. This was attributable to the lower moisture at the surface of the sample and to rapid cooling of the surface, influenced by the surrounding temperature. In cross-section, the sample is heated in the form of an elliptical paraboloid: on the irradiated side the heated area is wider, and it narrows with depth into the wood sample. When irradiating along the fibres, the heating is more pronounced, with sharper boundaries of the heated area, than when irradiating perpendicular to the fibres. The boundaries in the radial direction were probably blurred due to easier and faster moisture transfer from the interior towards the surface (Figs. 5b and 5c). The heated area of the radially irradiated wood was also wider than in radiation along the fibres, because water vapour in moist wood passes from the point of irradiation to each side, thereby allowing deeper penetration of the microwaves than in the longitudinal case, where the steam is pushed mostly frontally.
In moist wood, the difference in temperature between early wood, which was heated more, and late wood can be up to 14°C. This contributes to the eradication of larvae, as the larvae of the house longhorn beetle feed and stay mainly in the early wood (Unger, 2001). Early wood has large lumens and thin walls through which the microwaves can penetrate more quickly; at the same time, the lumens may hold more steam.
In comparison with the closed chamber, the eradication of larvae by DC irradiation required exposures of different lengths. In general, the time for larvae eradication in the open system is longer. When heated in the longitudinal direction (Table 4), the surface of the wood warmed faster than when heated in the radial direction (Table 5), but the penetration in depth was slower. Therefore, we used a series of short exposures. Since the heated area inside the timber narrows with increasing depth, we determined larvae mortality and temperature only in the middle of the sample, within a diameter of 8 cm in the direction of irradiation.
Larvae 2 cm below the wood surface died within 1.5 minutes of exposure in all samples, except for radial irradiation of the wood with 42% moisture content, where wood heating was slower and the larvae died in 2 minutes at an average wood surface temperature of 54.8°C. At the levels of 7.5 to 13 cm, larger differences appeared with the moisture of the wood and the direction of irradiation. In longitudinal exposure, shorter heating times were needed to maintain the target surface temperature for eradication of larvae at a depth of 7.5 to 13 cm, in comparison with heating perpendicular to the fibres. However, with radial irradiation the microwaves penetrated more quickly into the depth of the wood with 42% moisture content, so complete eradication of the larvae was achieved in 10.5 minutes. The longest exposure was necessary for frontal irradiation and took 12.5 minutes. If the irradiation had not been interrupted for 2 minutes for the intermediate measurements, the temperature variations would have been lower and probably less time would have been required for the suppression of the larvae (Tables 4 and 5).
From the irradiation data obtained, we found that the time required for the eradication of the house longhorn beetle, in relation to the wood volume, moisture content and direction of irradiation, varies too much to be specified for all wooden objects of cultural heritage, which are very diverse. Therefore, during DC radiation we checked the temperatures of the surfaces on the side opposite the irradiation. The lowest temperature of the opposite wood surface at which all the larvae died was 48.4°C. This is the lowest temperature that needs to be reached on the side opposite the irradiated one to successfully suppress the house longhorn beetle, regardless of the moisture, volume or direction of irradiation.
Since the heated area narrows with depth, it is necessary to irradiate the entire surface, moving the microwave device to the areas where the target temperature has not been reached on the opposite side.
According to the literature and our assumptions, a few minutes of exposure to temperatures around 50°C is safe for the natural materials used in the surface treatment of furniture, polychromated statues, panel paintings and other cultural/historical objects.
CONCLUSIONS
The aim of our study was to optimize the technological conditions and factors for microwave heating of spruce wood infested with larvae of the house longhorn beetle Hylotrupes bajulus. Our findings show that effective eradication of H. bajulus larvae can be achieved by heating above 54.5°C. The inner part of the wood warms up more quickly, and more, than the surface. In a microwave chamber, the wood is heated more slowly and more evenly over the entire volume in comparison with DC radiation. The volume of the irradiated wood has a strong influence on the heating dynamics of the wood and the larvae. Larvae in the smaller samples (5 × 4 × 10 cm) died at an irradiation power of 750 W in 10 seconds, whereas in the larger samples (15 × 15 × 15 cm), at the same depth and under the same conditions, death occurred within 120 s, when the wood surface was heated to above 50.6°C and the larvae in the wood interior reached a temperature above 54.5°C. With DC irradiation the times are slightly longer. Larvae inserted 2 cm below the surface died at a wood surface temperature above 59.3°C in 1.5 minutes, while the larvae inserted deeper (7.5 to 13 cm) died much later, depending on the wood moisture and the direction of irradiation. The longest irradiation time was 12.5 minutes, in the longitudinal irradiation of wood with 42% moisture content. We found that, in the case of directed radiation, it is necessary to reach a surface temperature above 49°C on the side opposite the irradiated area, regardless of the time, wood moisture and direction of radiation, in order to successfully suppress house longhorn beetle larvae. This temperature is quite low and we think that within a few minutes of exposure such a temperature will not cause damage to polychromated materials.
The data obtained will serve for further research on the impact of microwave heating on a variety of surface coatings used on objects of cultural heritage, as well as for application in conservation work.
Fig. 1: Sample of spruce sapwood, 50 × 40 × 100 mm, with 2 grooves for 2 larvae of the house longhorn beetle
Fig. 3 :
Fig. 3: Heating of larvae in the sample of XPS
Fig. 5 :
Fig. 5: Heating the interior of the wood: a-heating in a chamber, b-longitudinal heating and c-radial heating
Table 1 :
Effect of exposure time and material (XPS, wood) to heating of larvae in a chamber Preglednica 1: Vpliv časa izpostavitve in materiala (XPS, les) na segrevanje larv v komori
Table 2 :
Time and temperature of house longhorn beetle larvae eradication in 50 × 40 × 100 mm wood sample
Table 3 :
Impact of moisture content of wood samples on the heating dynamics of wood and the lethality of house longhorn beetle larvae exposed to microwaves in the chamber Legend / legenda: R= radial plane / radialna ravnina, T= transverse plane / prečna ravnina, H= heating / segrevanje, P= pause / premor
Table 4 :
The effect of microwave heating from direct current microwave radiation in longitudinal direction of house longhorn beetle larvae
Table 5 :
The effect of microwave heating from direct current microwave radiation in radial direction of house longhorn beetle larvae
Persistent severe hypokalemia: Gitelman syndrome and differential diagnosis
Hypokalemia is a fairly common clinical problem. Situations that decrease intake, increase translocation of potassium into the intracellular compartment, or increase potassium losses in the urine, gastrointestinal tract or sweat lead to a reduction in serum potassium, resulting in hypokalemia and its clinical manifestations.1 Once hypokalemia has been confirmed, a detailed clinical history should be taken and laboratory tests performed in order to identify the cause.
The main causes of hypokalemia are usually evident from the clinical history, with previous episodes of vomiting, diarrhea or diuretic use. In some patients, however, identifying the cause of hypokalemia can be a challenge. In such cases, two major components of the investigation must be addressed: assessment of urinary potassium excretion and of acid-base status. This article presents a case report of a patient with severe persistent hypokalemia in whom complementary laboratory tests revealed hypomagnesemia and hypocalciuria associated with metabolic alkalosis and elevated thyroid hormones. Thyrotoxic periodic paralysis was included in the differential diagnosis, but the patient evolved to a euthyroid state while severe hypokalemia persisted, which led to the diagnosis of Gitelman syndrome.
INTRODUCTION
Hypokalemia is a common clinical problem. Situations that decrease intake, increase translocation into the cells, or increase losses in the urine, gastrointestinal tract, or sweat lead to a reduction in the serum potassium concentration, resulting in hypokalemia and its clinical manifestations.1 After hypokalemia is documented, attempts should be made, based on the history and laboratory findings, to identify the cause, which is often secondary to vomiting, diarrhea or diuretic therapy.4,5 This article presents a case report of a patient with severe persistent hypokalemia, with complementary laboratory findings characterized by hypomagnesemia and hypocalciuria associated with metabolic alkalosis, and elevated thyroid hormones.
CASE REPORT
A 41-year-old Brazilian woman was referred to a nephrologist with complaints of weakness, fatigue and muscle cramps. She had first been admitted to the intensive care unit of a tertiary hospital eight months before the first visit to our clinic with the symptoms previously mentioned. The symptoms were exacerbated by vomiting, nausea and diarrhea, and were associated with EKG abnormalities and severe hypokalemia.

Her past history included unquantified weight loss associated with amenorrhea and low blood pressure. She had had thyroid cancer 20 years earlier, when she underwent thyroidectomy, and a cancer relapse five years earlier, treated with radioactive iodine therapy. She was taking oral levothyroxine 100 mcg daily on a regular basis.

Her family history included cancer in two family members, not related to thyroid cancer, and an acute myocardial infarction in her father. She had no history of laxative or diuretic abuse.

Renal ultrasound was normal. On clinical examination her height was 1.57 m and her weight 38 kg (body mass index = 15.4 kg/m2). There was no prominent bulging of the eyes, her pulse rate was 84 beats per minute without arrhythmia, and cardiac auscultation revealed no pathological murmurs.

Upon admission to our clinic, the patient was taking an excessive dose of levothyroxine; therefore, thyrotoxic hypokalemic periodic paralysis (THPP) was considered the cause of the hypokalemic paralysis. The levothyroxine dose was reduced to 75 mcg daily.

However, despite euthyroid status and potassium and magnesium replacement, the severe hypokalemic condition persisted. Further investigations were then made to reveal the etiology, and Gitelman syndrome (GS) was considered. New laboratory tests revealed: creatinine 1.26 mg/dl; magnesium 2.8 mEq/L; sodium 130 mEq/L; potassium 1.8 mEq/L; urinary chloride 80 mEq/L; urinary calcium 29.2 mg/24h; urinary magnesium 32.6 mg/24h; urinary sodium 74 mmol/24h; urinary potassium 32 mEq/24h; chloride 73 mmol/L; TSH 4.03 µUI/mL; free T4 1.3 ng/dl; blood gases pH 7.46, bicarbonate 40 mEq/L. She had elevated levels of aldosterone and renin activity, 212 ng/dL and 215 ng/mL, respectively.

Because of the high risk of volume depletion in our patient, who already had low blood pressure (systolic blood pressure around 80-90 mmHg), diuretic tests were not performed. The literature recommends performing the test in normotensive patients with a hypokalemic alkalosis phenotype, in whom an abnormal test predicts the GS genotype with very high sensitivity and specificity, thus avoiding the need for genotyping.6 Genetic studies have not been performed so far because of their high cost.

DISCUSSION

THPP is a rare metabolic myopathy that consists of acute systemic muscle weakness associated with hypokalemia, with potentially fatal episodes of muscle weakness or paralysis that can affect the respiratory muscles. The most common cause of hypokalemic paralysis is primary, or familial, hypokalemic periodic paralysis.

The familial forms have a genetic substrate with autosomal dominant penetrance, and symptoms occur in younger individuals owing to hereditary defects in ion channels. Sporadic paralysis is related to ion channel dysfunction caused by electrolyte disturbances, as in the patient in this case report. Causes of the sporadic forms include thyrotoxicosis, renal tubular acidosis, primary hyperaldosteronism, Gitelman syndrome (GS), diarrhea and barium intoxication.7 THPP occurs predominantly in males of Asian descent. The thyrotoxic condition causes an ion channel defect leading to a rapid shift of potassium into the intracellular space.7 Serum potassium levels may decline to as low as 1.5 to 2.5 mEq/L in acute attacks, which are precipitated by rest after exercise, carbohydrate meals or stress. Serum potassium is normal during the periods between attacks of paralysis, a characteristic that can help distinguish periodic paralysis from other forms of hypokalemic paralysis.

Patients with THPP also have low urinary potassium excretion, which can help distinguish them from patients with hypokalemic paralysis due to renal potassium loss.8 Our patient evolved to a euthyroid state but persisted with severe hypokalemia, and a clinical diagnosis of Gitelman syndrome (GS) was then made.

GS is an autosomal recessive renal tubular disease with clinical manifestations similar to those of Bartter syndrome (BS).11 GS is a rare inherited autosomal recessive tubulopathy that causes salt loss and is characterized by hypokalemic metabolic alkalosis, hypomagnesemia, hypocalciuria and secondary hyperaldosteronism.11-14 In contrast to BS, GS is known to be characterized by more frequent hypomagnesemia and low urinary calcium excretion.10 GS is very often asymptomatic.6 The natural history of GS is variable in terms of age at clinical diagnosis, biologic phenotype and clinical manifestations.12 Most patients with GS are asymptomatic or complain of mild intermittent cramps, fatigue, muscle weakness or irritability. Thirst, polyuria, carpopedal spasm, paraesthesiae, palpitations and joint pain are also related to the disorder.5,8,12,15 The phenotypic variations and the absence of a standard diagnostic method make a definite diagnosis of GS more difficult.9,16 Nevertheless, even without a genetic report, clinicians should be able to make the correct clinical diagnosis, predict prognosis and manage the condition correctly.10 The aims of treatment are to improve patient symptoms, quality of life and serum electrolyte levels, and to ensure cardiac rhythm stability. Standard treatment includes a diet with high levels of salt, potassium and magnesium, as well as oral magnesium and potassium supplements, sometimes together with potassium-sparing diuretics (if hypotension permits).6,9

In the medical literature there are only two case reports describing hypokalemic paralysis due to thyrotoxicosis accompanied by GS: one in a 16-year-old Japanese male patient13 and another in a 35-year-old Indian male.7 We also found two letters to the editor describing four cases of Asian females with a concurrent diagnosis of Graves' disease and GS.18,19 So far, ours is the first case report of a Brazilian female with concurrent symptoms of hyperthyroidism and Gitelman syndrome. We also did not find any other case reporting periodic paralysis due to drug-induced hyperthyroidism.

In summary, in our case report the patient had concurrent THPP and GS, two simultaneous pathological mechanisms underlying her hypokalemia.15 Clinical findings did not provide clues for a pathological condition other than thyrotoxicosis.17 Hypokalemia is recognized as the first possible presenting feature of both GS and THPP. The difference between them is that patients with THPP often have transient hypokalemia.18
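As an illustration only (not part of the case report and not a substitute for clinical judgement), the short Python sketch below encodes the kind of branching described above for persistent hypokalemia: renal versus extrarenal potassium loss, acid-base status, blood pressure, and the hypocalciuria/hypomagnesemia pattern that points toward Gitelman rather than Bartter syndrome. The cut-off values are approximate teaching thresholds, not values taken from this article.

```python
# Illustrative sketch of the differential-diagnosis reasoning described above.
# Thresholds are approximate teaching values, not recommendations from this article.

def classify_persistent_hypokalemia(urine_k_meq_per_day, ph, hco3_meq_l,
                                     systolic_bp_mmhg, urine_ca_low, serum_mg_low):
    if urine_k_meq_per_day < 20:
        return "Extrarenal loss or transcellular shift (e.g. GI losses, periodic paralysis)"
    # Renal potassium wasting from here on
    if ph < 7.35 or hco3_meq_l < 22:
        return "Renal loss with metabolic acidosis (e.g. renal tubular acidosis)"
    if systolic_bp_mmhg >= 140:
        return "Renal loss with alkalosis and hypertension (e.g. hyperaldosteronism)"
    # Normotensive/hypotensive hypokalemic metabolic alkalosis: Gitelman vs Bartter
    if urine_ca_low and serum_mg_low:
        return "Pattern consistent with Gitelman syndrome"
    return "Pattern consistent with Bartter syndrome or surreptitious diuretic use"

# Values loosely resembling the case described above
print(classify_persistent_hypokalemia(32, 7.46, 40, 85, urine_ca_low=True, serum_mg_low=True))
```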
APOΕ4 lowers energy expenditure in females and impairs glucose oxidation by increasing flux through aerobic glycolysis
Background Cerebral glucose hypometabolism is consistently observed in individuals with Alzheimer’s disease (AD), as well as in young cognitively normal carriers of the Ε4 allele of Apolipoprotein E (APOE), the strongest genetic predictor of late-onset AD. While this clinical feature has been described for over two decades, the mechanism underlying these changes in cerebral glucose metabolism remains a critical knowledge gap in the field. Methods Here, we undertook a multi-omic approach by combining single-cell RNA sequencing (scRNAseq) and stable isotope resolved metabolomics (SIRM) to define a metabolic rewiring across astrocytes, brain tissue, mice, and human subjects expressing APOE4. Results Single-cell analysis of brain tissue from mice expressing human APOE revealed E4-associated decreases in genes related to oxidative phosphorylation, particularly in astrocytes. This shift was confirmed on a metabolic level with isotopic tracing of 13C-glucose in E4 mice and astrocytes, which showed decreased pyruvate entry into the TCA cycle and increased lactate synthesis. Metabolic phenotyping of E4 astrocytes showed elevated glycolytic activity, decreased oxygen consumption, blunted oxidative flexibility, and a lower rate of glucose oxidation in the presence of lactate. Together, these cellular findings suggest an E4-associated increase in aerobic glycolysis (i.e. the Warburg effect). To test whether this phenomenon translated to APOE4 humans, we analyzed the plasma metabolome of young and middle-aged human participants with and without the Ε4 allele, and used indirect calorimetry to measure whole body oxygen consumption and energy expenditure. In line with data from E4-expressing female mice, a subgroup analysis revealed that young female E4 carriers showed a striking decrease in energy expenditure compared to non-carriers. This decrease in energy expenditure was primarily driven by a lower rate of oxygen consumption, and was exaggerated following a dietary glucose challenge. Further, the stunted oxygen consumption was accompanied by markedly increased lactate in the plasma of E4 carriers, and a pathway analysis of the plasma metabolome suggested an increase in aerobic glycolysis. Conclusions Together, these results suggest astrocyte, brain and system-level metabolic reprogramming in the presence of APOE4, a ‘Warburg like’ endophenotype that is observable in young females decades prior to clinically manifest AD. Supplementary Information The online version contains supplementary material available at 10.1186/s13024-021-00483-y.
Background
The Ε4 allele of Apolipoprotein E (APOE) confers more risk (up to 15-fold) for the development of late-onset Alzheimer's disease (AD) than any other gene [1,2]. While E4 is a strong contributor to late-onset AD risk, the effect is even greater in females [3]. Female E4 carriers have an increased odds ratio for AD [4], increased incidence of AD [5], an elevated hazard ratio for conversion to mild cognitive impairment [6], increased CSF tau [7], and reduced hippocampal volume [8] compared to male E4 carriers. To date, studies investigating the mechanism by which Ε4 and sex increase disease risk have primarily focused on the important associations of Ε4 with the neuropathological hallmarks of AD, i.e. the increased amyloid load seen in Ε4 carriers [9,10] and the APOE-dependence of tau propagation [11,12].
Alternatively, investigating Ε4 carriers who have not yet developed neuropathology may provide insight into early E4 mechanisms and unveil additional therapeutic targets for the prevention of AD. For example, an early and consistent biological hallmark of AD is cerebral glucose hypometabolism as observed by 18 F-fluorodeoxyglucose positron emission tomography (FDG-PET) imaging [13][14][15]. Interestingly, Ε4 carriers also display an "AD-like" pattern of decreased glucose metabolism by FDG-PET long before clinical symptomology [16,17]. Since glucose hypometabolism occurs early in AD and early in Ε4 carriers, it may represent a critical initial phase of AD pathogenesis that predisposes individuals to subsequent symptomology.
Beyond this FDG-PET finding, it is not clear whether APOE has other discernible metabolic effects in young people who have not yet developed cognitive impairment, and clinical research focused on how APOE may regulate metabolism outside of the brain is limited [18]. Most studies have utilized a targeted replacement mouse model of APOE in which the murine Apoe alleles are replaced by the human orthologs [19,20]. For example, several studies have found Ε4 mice to exhibit increased susceptibility to insulin resistance, and one report characterized E4 mice as deficient in extracting energy from dietary sources [21][22][23]. While these preclinical studies have been critical to our understanding of Ε4-associated impairments in glucose metabolism, the mechanism underlying these changes, and the extent to which systemic glucose metabolism is regulated by APOE in young healthy humans, remain largely unknown.
In the current study, we combined single-cell RNA sequencing (scRNAseq) and stable isotope resolved metabolomics (SIRM) to define a metabolic shift toward aerobic glycolysis across astrocytes, brain tissue, mice, and human subjects expressing APOE4. We highlight an astrocyte-directed shift in gene expression away from oxidative phosphorylation in the brains of mice expressing human E4, and confirm this metabolic reprogramming through the use of isotopic tracing of 13C-glucose in both E4 mice and astrocytes. To test whether this phenomenon translated to APOE4 humans, we used indirect calorimetry to measure whole body oxygen consumption and energy expenditure in young and middle-aged human participants with and without the Ε4 allele. Strikingly, a subgroup analysis revealed that young female E4 carriers showed a significant decrease in resting energy expenditure compared to non-carriers, a decrease driven primarily by reductions in oxygen consumption. Interestingly, this stunted oxygen consumption was exaggerated following a dietary glucose challenge and was accompanied by markedly increased lactate in the plasma of E4 carriers. Together, these results suggest astrocyte, brain and system-level metabolic reprogramming in the presence of APOE4, a pro-glycolytic shift that is observable in young women decades prior to clinically manifest AD.
Clinical research study design
The study objectives were to i) determine if APOE genotype influences peripheral and cerebral metabolism in young, cognitively normal human subjects, and if so, ii) elucidate potential mechanisms using mouse and cell models of human APOE. For the clinical research study, healthy volunteers between 18 and 65 were prescreened for diagnoses that may affect cognitive function (e.g. stroke, Parkinson's disease), metabolic diseases (diabetes), alcoholism, drug abuse, chronic major psychiatric disorders, medications that interfere with cognition (narcotic analgesics, anti-depressants), medications that interfere with energy expenditure (EE) (e.g. stimulants, beta-blockers), and vision or hearing deficits that may interfere with testing. The prescreening checklist with a full list of excluded medications and conditions can be found in the supplemental materials (Extended Data Table 5). Eligible candidates were brought in for informed consent after a 12-h fast in which subjects were asked not to exercise and to abstain from everything except water. We employed a power analysis based on a feasibility study, and the required sample size per group for a power level of 0.9 was calculated to be n = 30 per "group" (i.e. E2+, E3/E3 and E4+), for a total of 90 subjects. To account for potential biological outliers, non-consenting subjects, and post-recruitment exclusion criteria being met, we recruited a total of 100 individuals for this observational study. The study was conducted under Clinical Trial #: NCT03109661, and supporting data can be found at https://clinicaltrials.gov/ct2/show/NCT03109561. The primary outcome measure was resting-state respiratory quotient in cognitively normal participants with various APOE genotypes, measured using indirect calorimetry. Secondary measures included respiratory quotient during a cognitive task, and other outcome measures included biospecimen (urine and blood) analysis. Data acquisition was blinded, as APOE genotypes were determined after the study. Prior to unblinding to APOE genotype, individuals who had IC values more than 2 standard deviations from the mean were excluded from analysis, leaving 94 individuals for analysis. Following completion of the study, several subgroup analyses were pursued, including analyses of age and sex as variables. As we were primarily interested in APOE effects in young individuals, we stratified our sample population into a young cohort (under 40 years old) and a middle-aged cohort (40-65 years old). We chose 40 as the age cutoff based on a meta-analysis of APOE genotype and AD risk which found the Ε4 effect on disease to be observable in individuals 40 and over [4]. Body mass index (BMI), waist-to-hip ratio, and blood pressure were first recorded. Thereafter, participants were fitted with an airtight mask connected to an MGC Diagnostics Ultima CPX metabolic cart, which measures VO2, VCO2, and respiratory rate. EE is defined as the amount of energy an individual uses to maintain homeostasis in kcal per day, and can be calculated using the Weir equation (EE = 1.44 (3.94 VO2 + 1.11 VCO2)) [24]. EE is composed of the resting energy expenditure (REE), the thermic effect of feeding (TEF), and activity-related energy expenditure (AEE). In motionless and fasted humans, EE is equivalent to the REE since the TEF and AEE have been controlled for. Participants were instructed to remain motionless and to refrain from sleep for 30 min as data were gathered.
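As a concrete illustration of the calculation described above, the Python sketch below applies the Weir equation exactly as written here (EE in kcal/day, with the usual convention that VO2 and VCO2 are in mL/min for the 1.44 day-conversion factor); the example gas-exchange values are made up for demonstration.

```python
def weir_energy_expenditure(vo2_ml_min, vco2_ml_min):
    """Energy expenditure (kcal/day) from the Weir equation,
    EE = 1.44 * (3.94 * VO2 + 1.11 * VCO2), with VO2/VCO2 in mL/min."""
    return 1.44 * (3.94 * vo2_ml_min + 1.11 * vco2_ml_min)

def respiratory_exchange_ratio(vo2_ml_min, vco2_ml_min):
    """RER = VCO2 / VO2 (dimensionless)."""
    return vco2_ml_min / vo2_ml_min

# Hypothetical steady-state averages from one 25-min resting recording
vo2, vco2 = 250.0, 200.0  # mL/min
print(round(weir_energy_expenditure(vo2, vco2), 1))    # 1738.1 kcal/day
print(round(respiratory_exchange_ratio(vo2, vco2), 2))  # 0.8
```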
All testing occurred between 8:30-11:30 am in a temperature-controlled (20-22°C) out-patient research unit (Center for Clinical and Translational Science, University of Kentucky). Body temperature was taken periodically via temporal thermometer to ensure thermostasis and to provide intermittent stimulation to ensure wakefulness. After the resting period came a 30 min cognitive test period. We then introduced a novel-image-novel-location (NINL) object recognition test consisting of a series of images which participants were later asked to recall. This test has been used previously to study APOE allele effects on cognition [25]. After the cognitive test period, a blood draw was taken via venipuncture and placed on ice. Participants then consumed a sugary milk drink consisting of 50 g of sugar dissolved in whole milk. The drink was consumed within a 2 min time span. The mask was then refitted and participants were instructed to again remain motionless for 30 min for data collection. Data from the first 5 min of the study time periods were excluded to allow a 5 min steady-state adjustment [26,27]. After the glucose challenge, participants provided a second blood sample (~45 min after the initial blood draw). Participants then exited the study and were compensated for their participation.
APOE genotyping

APOE genotype was determined by extracting genomic DNA from participants' blood samples using a GenElute Blood Genomic DNA Kit (Sigma). After confirming concentration and quality by Nanodrop, APOE genotype was determined using PCR with TaqMan assay primers for the two allele-determining SNPs of APOE: rs7412 and rs429358 (Thermo). Positive controls for the six possible APOE genotypes were included with each assay.
Plasma metabolomics and GCMS sample preparation
Plasma was separated from blood by centrifugation at 2500 x g for 10 min at 4°C, and stored in 200 μL aliquots at − 80°C until further use. Upon thawing, ice-cold 100% methanol solution containing 40 nM L-norvaline (internal standard) was added to 80 μL of plasma and kept on ice for 20 min with regular vortexing. The solution was then centrifuged for 10 min (14,000 rpm, 4°C). Supernatant containing polar metabolites was removed to a new tube and kept at − 80°C until prepared for GCMS analysis. Polar metabolites were thawed on ice then dried under vacuum. The dried pellet was dissolved in 50 μL methoxyamine HCl-pyridine (20 mg/ml) solution and heated for 60 min at 60°C. Following heating, samples were transferred to v-shaped glass chromatography vials and 80 μL of MSTFA + 1% TMCS (Thermo Scientific) was added. Samples were then heated for 60 min at 60°C, allowed to cool to room temperature, and then analyzed via GCMS with parameters as previously described [28]. Briefly, the GC temperature was held at 130°C for 4 min, ramped at 6°C/min to 243°C, then at 60°C/min to 280°C, and held for 2 min. Electron ionization energy was set to 70 eV. Scan and full-scan modes were used for metabolite analysis; spectra were translated to relative abundance using the Automated Mass Spectral Deconvolution and Identification System (AMDIS) software, with retention time and fragmentation pattern matched to the FiehnLib library with a confidence score of > 80. Chromatograms were quantified using Data Extraction for Stable Isotope-labelled metabolites (DExSI) with a primary ion and two or more matching qualifying ions. Metabolite quantification was normalized to the relative abundance of the internal standard (L-norvaline); brain and cell data were also normalized to protein concentration. Metabolomics data were analyzed using the web-based data processing tool Metaboanalyst [29]. Metabolites significantly altered by APOE genotype and/or time point were defined by ANOVA and a subsequent false discovery rate cutoff of < 0.05. All identified metabolites for which > 75% of participants had a measurable concentration were included, and missing values were estimated with an optimized random forest method [30]. For the pathway impact analysis, the parameters were set to 'global test' and 'Relative-betweenness Centrality', a node centrality measure which reflects metabolic pathway 'hub' importance. For enrichment analyses, parameters were set to "Pathway-associated metabolite sets (SMPDB)", a library that contains 99 metabolite sets based on normal human metabolism. For both the pathway impact and enrichment analyses, only metabolic pathways with 3+ metabolites represented in our data set were included, and a false discovery rate cutoff of < 0.05 was utilized.
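The differential-metabolite screening described above (per-metabolite ANOVA across genotype/time groups followed by a false discovery rate cutoff of 0.05) can be sketched outside of Metaboanalyst roughly as follows; the data frame layout and column names are hypothetical, and this is not the pipeline actually used in the study.

```python
# Rough sketch of the ANOVA + FDR screening step; 'df' layout and column names are hypothetical.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multitest import multipletests

def screen_metabolites(df, group_col="group", alpha=0.05):
    """One-way ANOVA per metabolite across groups, then Benjamini-Hochberg FDR."""
    metabolite_cols = [c for c in df.columns if c != group_col]
    pvals = []
    for met in metabolite_cols:
        samples = [g[met].dropna().values for _, g in df.groupby(group_col)]
        pvals.append(f_oneway(*samples).pvalue)
    reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return pd.DataFrame({"metabolite": metabolite_cols, "p": pvals,
                         "q": qvals, "significant": reject})
```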
Mice and metabolic phenotyping
Mice expressing human APOE display many of the phenotypic characteristics observed in humans including several metabolic variations noted in epidemiological studies [31][32][33]. In this "knock-in" model, the mouse Apoe locus is targeted and replaced with the various human APOE alleles, thereby remaining under control of the endogenous mouse Apoe promoter and resulting in a physiologically relevant pattern and level of human APOE expression [17,19,[34][35][36][37]. Mice used in this study were homozygous for either the human E3 or E4 alleles, aged 2-4 months (young) and group housed in sterile micro-isolator cages (Lab Products, Maywood, NJ), and fed autoclaved food and acidified water ad libitum. Animal protocols were reviewed and approved by the University of Kentucky Institutional Animal Use and Care Committee. Human E3 and Ε4 mice were evaluated by indirect calorimetry (TSE Systems, Chesterfield, MO). Mouse body composition was measured using EchoMRI (Echo Medical Systems, Houston, TX) the morning prior to being singly housed in the indirect calorimetry system. Mice were acclimated to singly housed cage conditions for 1 week prior to beginning data recording. After 5 days on standard chow diet (Teklad Global 18% protein rodent diet; 2018; Teklad, Madison, WI), mice were fasted overnight before being introduced to a high carb diet (Open Source Diets, Control Diet for Ketogenic Diet with Mostly Cocoa Butter, D10070802) for 5 days. Mice were monitored for O 2 consumption, CO 2 production, movement, and food and water consumption. Chambers were sampled in succession and were reported as the average of 30 min intervals in reference to an unoccupied chamber. To negate the effects of activity on EE readouts, we chose to only analyze the light cycles of the mice where activity, and feeding, is minimal. The EE then becomes analogous to a "resting" EE similar to the resting period in the human study and differences observed are likely due to basal metabolic rate differences instead of confounding factors such as feeding and activity [38].
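A minimal sketch of the light-cycle averaging described above might look like the following; the column names and the 06:00-18:00 light period are assumptions for illustration, not details taken from the TSE system output.

```python
# Minimal sketch of averaging indirect-calorimetry readouts over light cycles only.
# Column names and the 06:00-18:00 light period are assumptions for illustration.
import pandas as pd

def light_cycle_means(df, light_start=6, light_end=18):
    """Average EE, VO2 and VCO2 per mouse using only light-cycle intervals."""
    hours = pd.to_datetime(df["timestamp"]).dt.hour
    light = df[(hours >= light_start) & (hours < light_end)]
    return light.groupby("mouse_id")[["EE", "VO2", "VCO2"]].mean()
```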
Cell culture
Primary astrocytes were isolated from postnatal day 0-4 pups of mice homozygous for E3 or Ε4. The brain was surgically excised and meninges were removed from cortical tissue in cold DMEM. Tissue from pups of the same genotype was pooled and coarsely chopped to encourage suspension. Tissue homogenates were incubated in serum free DMEM with 0.25% trypsin and DNAse for 30 min with gentle shaking. Cell suspension was then filtered through 40 μm strainer and spun for 5 min at 1100 x g. Suspended primary cells were then plated in a polylysine coated plate and allowed to grow to confluence in Advanced DMEM (Gibco) with 10% FBS. Immortalized astrocytes were derived from targeted replacement mice expressing human APOE alleles (kind gift from Dr. David Holtzman). These immortalized cell lines secrete human ApoE in HDL-like particles at equivalent levels to primary astrocytes from targeted replacement APOE knock-in mice and have been relied upon for studies of APOE's role in astrocyte metabolism by several groups [39][40][41]. Cells were maintained in Advanced DMEM (Gibco) supplemented with 1 mM sodium pyruvate, 1X Geneticin, and 10% fetal bovine serum unless otherwise noted.
Single-cell RNA sequencing
Brain tissues were processed for creating single-cell suspensions as previously described [42]. Briefly, 11-12 month old female E3/E3 and E4/E4 mice (pooled n = 3 per genotype) were anesthetized via 5.0% isoflurane before exsanguination and transcardial perfusion with ice-cold Dulbecco's phosphate buffered saline (DPBS; Gibco # 14040133). Following perfusion, brains were quickly removed and the whole right hemisphere, minus brainstem and cerebellum, was quickly minced using forceps on top of an ice-chilled petri dish. Minced tissue from the 3 pooled hemispheres per genotype was immediately transferred into a gentleMACS C-tube (Miltenyi #130-093-237) containing Adult Brain Dissociation Kit (ADBK) enzymatic digest reagents (Miltenyi #130-107-677) prepared according to the manufacturer's protocol. Tissues were dissociated using the "37C_ABDK" protocol on the gentleMACS Octo Dissociator instrument (Miltenyi #130-095-937) with heaters attached. After tissue digestion, cell suspensions were processed for debris removal and filtered through 70 μm mesh cell filters following the manufacturer's suggested ABDK protocol. The resultant suspension was filtered sequentially two more times using fresh 30 μm mesh filters. Cell viability was checked using an AO/PI viability kit (Logos Biosystems # LGBD10012), and both cell suspensions were determined to have > 88% viable cells. Following viability assessment and counting, cells were diluted to achieve a concentration of ~1000 cells/100 μL. The diluted cell suspensions were loaded onto the 10x Genomics Chromium Controller. Each sample was loaded into a separate channel on the Single Cell 3′ Chip and libraries were prepared using the Chromium v3 Single Cell 3′ Library and Gel Bead Kit (10x Genomics). Final library quantification and quality check were performed using a BioAnalyzer (Agilent), and sequencing was performed on a NovaSeq 6000 S4 flow cell, 150 bp paired-end sequencing (Novogene). Raw sequencing data were de-multiplexed and aligned using Cell Ranger (10x Genomics), and further processed using Partek software. Gene ontology and pathway enrichment analyses were performed using Partek's "filter groups" feature to selectively analyze astrocytes, followed by gene set enrichment with a set threshold of q < 0.05, followed by the "differential analysis > pathway analysis" features. To remove likely multiplet and dead cells, cells were discarded if they had total read counts less than 50 or greater than 50,000 UMIs, or mitochondrial read counts of more than 30%. UMAP projections were visualized with 20 principal components. Clusters were assigned to cell types using known marker genes. Two small clusters (< 250 cells) were removed from downstream analysis due to suspected doublets/triplets based on positive gene expression of multiple cell-specific gene markers (astrocytes, microglia, mural cells and/or endothelial cells). The final dataset consisted of a total of 18,167 cells (8216 and 9951 cells from E3 and E4, respectively) that passed quality control thresholds.
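For orientation, the cell-filtering and embedding thresholds described above (50-50,000 total counts, < 30% mitochondrial reads, 20 principal components for the UMAP) could be applied with Scanpy roughly as sketched below; this is an illustrative reconstruction, not the Cell Ranger/Partek pipeline the authors actually used, and the input file name is a placeholder.

```python
# Illustrative Scanpy sketch of the QC thresholds described above;
# the original analysis used Cell Ranger and Partek, not this code.
import scanpy as sc

adata = sc.read_10x_h5("filtered_feature_bc_matrix.h5")  # hypothetical input file

# Keep cells with 50-50,000 total counts
sc.pp.filter_cells(adata, min_counts=50)
sc.pp.filter_cells(adata, max_counts=50_000)

# Drop cells with > 30% mitochondrial reads (mouse mito genes start with "mt-")
adata.var["mt"] = adata.var_names.str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
adata = adata[adata.obs["pct_counts_mt"] <= 30].copy()

# Normalize, embed with 20 principal components, and compute the UMAP
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.pca(adata, n_comps=20)
sc.pp.neighbors(adata, n_pcs=20)
sc.tl.umap(adata)
```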
Glucose tracing in vivo
Female TR mice homozygous for E3 or Ε4 (12-13 month) were fasted for 2-3 h then, via oral gavage, administered 250 μL [U-13 C] glucose solution at a concentration of 2 g/kg of body weight based on average cohort bodyweight. 45 min following gavage, mice were euthanized by cervical dislocation, brains were removed and quickly washed twice in PBS, once in H 2 O then frozen in liquid N 2 . Tissues were kept at − 80°C until ground under liquid N 2 using a Freezer/Mill Cryogenic Grinder (SPEX SamplePrep model 9875D). Approximately 60 mg of tissue was placed in a 1.5 mL tube then 1 mL extraction buffer (50% methanol, 20 nM norvaline) was added followed by a brief vortex and placement on ice for 20 min (briefly vortexed every 5 min). Samples were then centrifuged at 14,000 rpm, 4°C for 10 min. The supernatant containing polar metabolites was moved to a new tube and kept at − 80°C until prepped for GCMS. The resulting pellet was re-suspended in RIPA buffer (Sigma) and protein concentration was measured with BCA kit (Pierce) for normalization.
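As a worked example of the dosing arithmetic described above (a 2 g/kg dose delivered in a fixed 250 μL gavage volume), the small sketch below computes the required solution concentration; the 25 g average body weight is an assumed value for illustration, not a figure reported in the study.

```python
def gavage_glucose_concentration(dose_g_per_kg, avg_body_weight_g, gavage_volume_ul):
    """Glucose concentration (mg/mL) so a fixed gavage volume delivers the target dose."""
    dose_mg = dose_g_per_kg * (avg_body_weight_g / 1000.0) * 1000.0  # mg of glucose per mouse
    return dose_mg / (gavage_volume_ul / 1000.0)                      # mg per mL of solution

# Assumed 25 g average body weight; 2 g/kg dose in a 250 uL bolus
print(gavage_glucose_concentration(2.0, 25.0, 250.0))  # 200.0 mg/mL, i.e. a 20% w/v solution
```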
Glucose metabolism assays
For glucose oxidation assays, astrocytes were plated in a 24-well plate at 300,000 cells/well with 500 μL of maintenance media (Advanced DMEM, 10% FBS, 1% sodium pyruvate, 0.4% Geneticin) and incubated at 5% CO 2 and 37°C and allowed to grow to confluence for 24 h. Using a previously published protocol [43], cells were then incubated with 1 μCi/mL [U-14 C] glucose in maintenance media (25 mM glucose) or starvation media (same as maintenance except 0 mM glucose) for 3 h. Buffered 14 CO 2 in the media was then liberated by addition of 1 M perchloric acid and captured on a filter paper disc pre-soaked with 1 N sodium hydroxide using airtight acidification vials. Radioactivity of the filter paper was measured in a Microbeta 2 Scintillation Counter (Perkin Elmer) after addition of 3 mL Ultima-Gold Scintillation Fluid. For glucose tracing in primary astrocytes, cells were plated in a 6-well plate at 600,000 cell/well in astrocyte growth media (Advanced DMEM, 10% FBS, 1% sodium pyruvate, 1% penicillin-streptomycin) and incubated at 5% CO 2 and 37°C. After 48 h, growth media was replaced with tracer media (Glucose-free DMEM containing 10% dialyzed FBS, 10 mM [U-13 C] glucose) and incubated under previous conditions for 24 h at which time quenching and metabolite extraction were carried out as follows: Plates were retrieved from incubator and placed on ice, tracer media removed and wells washed once with ice-cold PBS. Immediately following washing, 1 mL of ice-cold extraction buffer (50% methanol, 20 nM norvaline) was added to quench enzymatic activity and plates were placed at − 20°C for 10 min. Cellular contents were then scraped with a cell-scraper in extraction buffer and collected into 1.5 mL and tubes placed on ice for 20 min with regular vortexing. Samples were then centrifuged at 14,000 rpm, 10 min, 4°C after which supernatant containing polar metabolites were removed to a new tube and frozen at − 80°C until prepped for GCMS analysis. The resulting pellet was re-suspended in RIPA buffer (Sigma) and protein concentration was measured with BCA kit (Pierce) for normalization.
Mitochondrial respiration assays
Astrocytes were plated at 40,000 cells/well in maintenance media and grown to confluence for 24 h. The following day, media was replaced with assay running media (Seahorse XF Base Medium, 1 mM pyruvate, 2 mM glutamine, and 10 mM glucose), and after 1 h the oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) were measured using a Seahorse 96XF instrument as previously described [44]. Baseline measurements of ECAR and OCR were taken prior to injection of the mitochondrial inhibitor oligomycin (4 μM) and the glycolytic inhibitor 2-deoxyglucose (500 mM). Manufacturer protocols were followed for the glycolysis stress test assay and the Mito Fuel Flex assay (Category # 103260, Agilent). Briefly, the glycolysis stress test assesses the ability of cells to respond to challenging conditions by increasing the rate of glycolytic activity. Glycolytic capacity refers to the glycolytic response to energetic demand from stress (glycolytic capacity = ECAR post-oligomycin - baseline ECAR), while glycolytic reserve refers to the capacity available to utilize glycolysis beyond the basal rate (glycolytic reserve = ECAR post-oligomycin - ECAR post-glucose).
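Following the definitions given above, the glycolysis stress test readouts reduce to simple differences of ECAR plateaus; the sketch below is illustrative, with made-up ECAR values, and uses the common convention that glycolytic reserve is the capacity remaining above the glucose-stimulated rate.

```python
def glycolysis_stress_metrics(ecar_baseline, ecar_post_glucose, ecar_post_oligomycin):
    """Summarize a glycolysis stress test from mean ECAR plateaus (mpH/min)."""
    return {
        "glycolysis": ecar_post_glucose - ecar_baseline,                 # response to glucose
        "glycolytic_capacity": ecar_post_oligomycin - ecar_baseline,     # response to stress
        "glycolytic_reserve": ecar_post_oligomycin - ecar_post_glucose,  # untapped capacity
    }

# Illustrative plateau values for one well
print(glycolysis_stress_metrics(10.0, 35.0, 55.0))
# {'glycolysis': 25.0, 'glycolytic_capacity': 45.0, 'glycolytic_reserve': 20.0}
```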
Statistical analysis
All results are reported as mean ± SEM unless otherwise stated. For comparisons between two groups, an unpaired two-tailed Student's t-test was used; for pairwise comparison of two time points, a paired two-tailed Student's t-test was used. One-way analysis of variance (ANOVA) followed by Sidak's multiple comparisons test was used for comparing multiple groups, and two-way ANOVA with repeated measures was used for time-course analyses. Covariates for the clinical study included age, sex, BMI, waist-to-hip ratio, blood pressure, and body temperature. Pearson correlation was used for correlative analyses. For dependent variables with categorical independent variables, analysis of covariance (ANCOVA) was used to assess collinearity. P < 0.05 was considered significant.
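For the covariate-adjusted genotype comparisons mentioned above, an equivalent model could be specified in Python roughly as follows (statsmodels formula API); the tiny synthetic data frame and its column names are hypothetical and exist only to make the sketch runnable.

```python
# ANCOVA-style sketch of a covariate-adjusted genotype comparison;
# the small synthetic data frame is for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "REE":        [1450, 1380, 1520, 1300, 1610, 1405, 1490, 1355],
    "e4_carrier": [0, 1, 0, 1, 0, 1, 0, 1],
    "age":        [24, 31, 28, 22, 35, 27, 30, 25],
    "sex":        ["F", "F", "M", "F", "M", "F", "M", "F"],
    "BMI":        [22.1, 24.3, 26.0, 21.5, 27.2, 23.8, 25.1, 22.9],
})

model = smf.ols("REE ~ C(e4_carrier) + age + C(sex) + BMI", data=data).fit()
print(model.summary())  # the E4-carrier coefficient is the covariate-adjusted REE difference
```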
Results
Single-cell RNA sequencing highlights a role for APOE4 in astrocyte oxidative phosphorylation and glycolysis

Given the outsized role of APOE in modulating AD risk, we first undertook an unbiased survey of E4 effects in various cell types by performing single-cell RNA sequencing (scRNA-seq) on brain tissue from female mice expressing human E3 or E4. To visualize and identify cell populations with distinct transcriptional signatures, we performed a Uniform Manifold Approximation and Projection (UMAP) on a total of 18,167 cells (E3 8216; E4 9951) from pooled (n = 3) whole brain tissue (Fig. 1a; Supplemental Fig. 1a). We then used a list of established marker genes to assign cluster identity (Fig. 1b), including four clusters that highly expressed Aldoc, Aqp4, Gja1 and Aldh1l1, which we assigned as astrocytes (Fig. 1b, blue; Supplemental Fig. 1b). Notably, these astrocyte clusters showed both the highest expression of APOE (Fig. 1c) and the highest cumulative expression of a list of 39 genes directly involved in glycolysis (Fig. 1d). When we performed a sub-UMAP on only astrocytes, the cells clustered into eight unique subpopulations with distinct transcriptional signatures (Fig. 1e; Supplemental Table 1). Interestingly, APOE expression was higher in E4 astrocytes, an effect primarily driven by clusters 1, 2, 3 and 5 (Supplemental Fig. 2). As expected based on previous bulk sequencing studies of human APOE mice and APOE-genotyped human brain tissue, a number of other differentially expressed genes (DEGs) were noted between E4 and E3 cells, including 562 DEGs specifically in astrocytes (Fig. 1f; Supplemental File 1). Notably, gene ontology (GO) analyses of all cells underscored a number of metabolic processes, including several mitochondrial-related GO terms (Fig. 1g). In particular, pathway enrichment analyses specifically highlighted "Alzheimer's disease" and "oxidative phosphorylation" as top hits in astrocytes (Fig. 1h), where a number of genes related to mitochondrial beta-oxidation showed lower expression in the presence of E4 (Supplemental Figs. 3 and 4).
Fig. 1 Single-cell RNA sequencing highlights E4-associated changes in glycolysis and oxidative phosphorylation in astrocytes. Whole brain tissue from E3 and E4 mice was digested and subjected to single-cell RNA sequencing (scRNA-seq). a UMAP visualization of cells from E3 and E4 mouse brains (3 pooled hemi-brains per genotype). Cells are colored by cell type. b Assignment of clusters to specific cell types based on expression of known gene markers (astrocytes, Aldoc; microglia, Tmem119; macrophages, Mgl2; oligodendrocytes, Mog; choroid plexus, Kl; ependymal cells, Foxj1; mural cells, Vtn; endothelial cells, Emcn; meningeal, Slc47a1; neuroprogenitor cells, Dcx). c, d Expression of both APOE (c) and glycolysis genes (d) was highest in astrocyte cell populations. Glycolysis gene expression is shown as the sum of the expression of 39 detected genes belonging to the KEGG pathway "glycolysis and gluconeogenesis". e UMAP visualization of astrocytes (Aldoc+ cells). Cells are colored by cluster. f Volcano plot showing differentially expressed genes in E3 and E4 astrocytes. g, h Gene ontology (g) and pathway enrichment (h) analyses highlight APOE-associated gene expression changes in metabolic pathways, particularly mitochondrial complexes and oxidative phosphorylation (highlighted in red). Abbreviations: CMV, cytomegalovirus; EC, endocannabinoid; ER, endoplasmic reticulum; GnRH, gonadotropin-releasing hormone; HV, herpesvirus; KS, Kaposi sarcoma; NAFLD, non-alcoholic fatty liver disease; NT, neurotrophin; reg., regulation

Stable isotope resolved metabolomics reveals increased lactate synthesis and decreased glucose entry into the TCA cycle in Ε4 brains and astrocytes

The single-cell gene expression patterns suggested astrocyte-directed changes in glycolysis and oxidative phosphorylation in E4 cells. To test whether the gene expression patterns observed in the scRNAseq analysis held in whole brain tissue, we measured gene expression of select genes encoding rate-limiting enzymes within glycolysis and the TCA cycle. While there were no significant differences between E3 and E4 expressing mouse brain tissue, there was a consistent trend of increased glycolysis and decreased TCA cycle gene expression in female E4 brains compared to E3 (Supplemental Fig. 5a). Astrocytes are the primary source of both cerebral lactate (the end product of glycolysis) [45] and ApoE [37]. Therefore, we next utilized stable isotope resolved metabolomics (SIRM) to quantitatively assess glucose utilization in vivo in mice expressing human E3 or E4 and in vitro in primary astrocytes expressing human APOE (Fig. 2a). Fasted E3 and Ε4 mice were administered an oral gavage of fully labeled [U-13C] glucose and brain tissue was collected 45 min later for mass spectrometry analysis of 13C enrichment in central carbon metabolites. Notably, the brains of Ε4 mice showed significantly higher 13C-lactate (fully labeled, m+3) compared to E3 mice (Fig. 2b).
We next incubated primary astrocytes expressing E3 or E4 with [U-13 C] glucose and collected cell lysates 24 h later for 13 C enrichment analysis. While there were no APOE differences in monocarboxylate transporter gene (Slc16a1, Slc16a3) or protein (MCT1, MCT4) expression in astrocytes nor whole brain tissue (Supplemental Figure 5b-c), E4 astrocytes did show a significant increase in both gene expression of Ldha as well as the amount of lactate dehydrogenase (LDH) protein, the enzyme responsible for interconversion of pyruvate and lactate (Supplemental Fig. 5d-e). Additionally, E4 astrocytes showed a significant increase in 13 C-glucose conversion to lactate (Fig. 2c-d), indicative of higher LDH activity. Perhaps unsurprisingly, lactate generation was higher in astrocytes (a highly glycolytic cell type) compared to whole brain homogenates (Fig. 2b vs d). Conversely, E4 astrocytes displayed substantially lower 13 C enrichment of TCA intermediates compared to E3 astrocytes, suggesting decreased glucose entry into the TCA cycle (Fig. 2e). To confirm these results, we performed an independent 13 C-glucose tracing experiment in immortalized astrocytes expressing human E3 or Ε4 [46] and quantified 13 C-lactate production using nuclear magnetic resonance (NMR) spectroscopy (Fig. 2f). Again, Ε4 astrocytes showed significantly higher lactate synthesis, as evidenced by increased 13 C-lactate both intracellularly and in the media (Fig. 2f, insert). Together, these data describe an Ε4-associated increase in glucose flux into late glycolysis at the expense of entry into the TCA cycle for oxidative phosphorylation.
Ε4 astrocytes exhibit impairments in glucose oxidation
To functionally assess astrocyte glycolytic flux in vitro, we measured the extracellular acidification rate (ECAR, a marker of glycolysis and lactate export) before and after glucose injection. E4 astrocytes displayed significantly higher ECAR after addition of glucose compared to E3 astrocytes, as well as a higher glycolytic capacity, suggesting these cells shunt more glucose to lactate (Fig. 2g-h). Ε4 astrocytes also displayed a significantly lower oxygen consumption rate (OCR), both before and after addition of glucose to the media, suggesting an inherent reduction in oxidative metabolism (Fig. 2i-j). Together these data further support an E4-associated shift toward glycolysis (Fig. 2k). We next measured glucose oxidation by treating astrocytes with radiolabeled 14 C-glucose and capturing the oxidative product 14 CO 2 . Ε4 astrocytes oxidized less glucose to CO 2 compared to E3, but only when the radiolabel ([nM]) was given with a substantial amount of non-labeled glucose ([mM]) (Fig. 2l). Ε4 astrocytes also displayed decreased capacity and flexibility in regards to glucose oxidation, as they were relatively unable to increase glucose oxidation when other fuel sources (fatty acids and glutamine) were inhibited (Fig. 2m). We reasoned that lower rates of glucose oxidation in a glucose rich environment in E4 cells may be due to increased conversion of glucose to lactate, which in turn inhibits downstream oxidative processes [47]. Therefore, we tested glucose oxidation following lactate supplementation, and found that Ε4 astrocytes oxidize less glucose in the presence of lactate than E3 astrocytes (Fig. 2n). Together, these results suggest that Ε4 astrocytes exhibit increased reliance on aerobic glycolysis and are less flexible and less able to oxidize glucose, a phenotype seemingly exacerbated by a high glucose environment and/or the presence of lactate.
Fig. 2 E4 increases lactate production in mouse brain and E4 astrocytes show increased glycolytic flux and lower oxidative respiration. a Experimental design (13C, blue filled circles; 12C, white circles; m + n, where n is the number of 13C-labeled carbons within a metabolite). [U-13C] glucose was administered in vivo to E3 (n = 6) and E4 (n = 8) mice via oral gavage, brain tissue was collected after 45 min, and metabolites were analyzed for 13C enrichment in pyruvate and lactate. E3 and E4 expressing astrocytes were cultured in [U-13C] glucose media for 24 h, media collected, cells washed, and metabolites analyzed for 13C enrichment (n = 6). b While fully labeled pyruvate is present in similar amounts in E3 and E4 brains, lactate synthesized from 13C-glucose is higher in E4 mouse brains. c-e Primary astrocytes expressing E4 show increased 13C enrichment in lactate (c), higher LDH activity (d), and decreased 13C enrichment in the TCA cycle (average of all detected TCA intermediates) (e). f Increased lactate synthesis as measured by HSQCAD NMR spectroscopy (n = 3). Representative NMR spectra (f) showing E4 astrocytes have both increased intracellular 13C-lactate and export more lactate into extracellular media (bar graph insert). g Extracellular acidification rate (ECAR) of E3 and E4 primary astrocytes shown over time during the glycolysis stress test (n = 24 for both groups). h Contributions to ECAR at baseline, in response to glucose (glycolysis), in response to stress (glycolytic capacity), and untapped reserve were calculated. i Oxygen consumption rate (OCR) during the glycolysis stress test assay was graphed over time and j represented as average respiration before and after glucose. k Metabolic phenotypes of E3 and E4 astrocytes were characterized by plotting ECAR vs. OCR. l E3 and E4 astrocytes were incubated in glucose-free media (−) or glucose-rich media (+) and oxidation of 1.0 μCi/mL 14C-glucose was measured by trapping 14CO2 and counting radioactivity (*P < 0.05, unpaired t-test, two-tailed, n = 4 per genotype). m Glucose oxidation capacity, dependency, and flexibility were assessed in E3 and E4 astrocytes via the Mito Fuel Flex Assay. n E3 and E4 astrocytes were incubated in 1.0 μCi/mL 14C-glucose with (+) or without (−) 12.5 mM lactate (n = 3). (b-l, n: *P < 0.05, ***P < 0.001, ****P < 0.0001, unpaired t-test, two-tailed; m: *P < 0.05, two-way ANOVA, Sidak's multiple comparisons test)

Ε4 mice fail to increase energy expenditure on a high carbohydrate diet

Given the apparent shift toward aerobic glycolysis in the brain and astrocytes of mice expressing APOE4, we next asked if this metabolic reprogramming was a global phenomenon (i.e. could it be detected with whole body measures). Indirect calorimetry (IC) assesses energy expenditure by measuring metabolic gases to calculate the energy released when substrates are oxidized. Energy expenditure (EE) is estimated using the Weir equation (EE = 3.9 VO2 + 1.11 VCO2), with the assumption that anaerobic respiration is negligible and substrates are fully oxidized to CO2 [24]. However, this assumption is confounded when energy is derived through non-oxidative processes such as aerobic glycolysis, a phenomenon in which glucose is fully metabolized to lactate despite normoxia [48]. To test whether mice expressing APOE4 display an aerobic glycolysis-related shift in metabolism, we used IC to track energy expenditure in mice expressing human E3 or E4. Young mice carrying the human E4 allele exhibited significantly lower EE, VCO2, and VO2 compared to young E3 mice during their inactive period (light cycle) (Fig. 3a-c). Since mouse IC cages allow for a prolonged and controlled assessment of metabolism, we provided a long-term glucose challenge by way of a high carbohydrate diet (HCD). Interestingly, these E4-associated decreases were exaggerated following introduction of the HCD; Ε4 mice again showed substantially lower EE, VCO2, and VO2 compared to E3 mice (Fig. 3d-f). Further, when we analyzed the HCD-induced change in EE, VCO2, and VO2 from baseline (normal chow), we found both genotypes to show significant positive changes except for E4 VO2 (Fig. 3i). This suggests that E4 mice fail to increase oxygen consumption in response to excess dietary carbohydrates. These changes occurred independently of differences in activity and food intake, and were not explained by differences in body weight (Fig. 3j-l). Together, these data suggest that Ε4 acts in young mice to lower energy expenditure via a mechanism outside of the typical contributions of feeding, body mass, and activity.
Young female Ε4 carriers have a lower resting energy expenditure
We next asked if this E4-associated shift toward aerobic glycolysis observed in cell and animal models translated to APOE4+ humans. To test this, we used IC to test the effect of APOE on whole body metabolism in a cohort of healthy, cognitively normal young and middle-aged volunteers (Supplemental Tables 2 and 3). Using a mobile metabolic cart designed to measure VO 2 and VCO 2 , we assessed exhaled breath measures of volunteers at rest, during a cognitive task, and after a glucose challenge ( Fig. 4a and Supplemental Fig. 6). We began each session by assessing the resting energy expenditure (REE) and respiratory exchange ratio (RER) of participants. After a five-minute buffer to achieve steady state [26,27], we recorded REE over a 25 min period at 15 s intervals and averaged the RER and REE for each individual. There was no APOE effect on RER (Supplemental Fig. 7). Consistent with previous studies, we found REE and age to be negatively correlated (Fig. 4c). However, when we stratified our analysis by E4 status, linear regression revealed significantly different slopes between carriers and non-carriers, suggesting an Ε4-associated confound in the age versus energy expenditure relationship (Fig. 4d).
We then separated Ε4+ and Ε4− individuals into young (< 40 years of age) and middle-aged (40-65 years of age) cohorts based on previous literature [4,17]. After adjusting for covariates, we observed a significantly lower REE in female Ε4 carriers compared to non-carriers, particularly in the young cohort (Fig. 4e). This E4 effect on REE was not significant in males (Supplemental Fig. 8), together suggesting that there is no age-related REE decline in E4 carriers and that the energy expenditure-APOE interaction is modified by sex, with female Ε4 carriers displaying lower REE.

Fig. 3 E4 mice have lower energy expenditure and fail to increase oxygen consumption following a high carbohydrate diet. a-f Female E3 and E4 mice were housed individually for 48 h with ad libitum access to normal chow (a-c) or a high carbohydrate diet (HCD) (d-f) and energy expenditure (EE), VCO2 and VO2 were measured. Dark cycles are indicated in grey, light cycles in white. Light cycles were used for calculating averages of EE, VCO2 and VO2 (shown to the right) (***P < 0.0001, ****P < 0.00001, unpaired t-test, two-tailed; E3 n = 13, E4 n = 20). j Activity and k food consumption during light cycles were averaged for E3 and E4 mice (E3 n = 10, E4 n = 14). l Analysis of covariance was performed by separately correlating average EE and body weight for E3 and E4 mice (Spearman correlation r = 0.86, ***P < 0.001)
E4 does not alter cognitive energy expenditure
Given the critical role of APOE in modulating cognitive function and dementia risk, we next tested if a mental stressor would reveal further genotype-specific differences in energy expenditure. To avoid potential confounding readouts of movement, subjects were asked to remain perfectly still while completing a challenging Novel Image Novel Location test (Supplemental Fig. 6c). We observed a significant increase in average EE during the cognitive challenge in all subjects (Fig. 4f). However, we found no difference in cognitive energy expenditure (CEE), nor in test response accuracy, between Ε4 carriers and non-carriers ( Fig. 4g and Supplemental Fig. 9). To our knowledge, only two other studies have attempted to utilize IC to quantify the contribution of cerebral activation (i.e. a mental task) to whole body metabolic measures [49,50]. While we did not observe an APOE effect on metabolic measures during the cognitive challenge, we did find that IC is a sensitive tool to evaluate metabolic changes due to mental stress, as all participants showed a significant increase in energy expenditure (Fig. 4f).
Female E4 carriers have a blunted increase in oxygen consumption after a dietary glucose challenge
We next sought to measure the thermic effect of feeding (TEF), a constituent of EE that indicates the energy used to absorb, digest, and metabolize dietary energy [51,52]. To induce TEF, all participants consumed a high carbohydrate drink in less than 2 min (Supplemental Fig. 6d). Energy expenditure during the dietary challenge increased significantly in all participants (Fig. 4h), and, similar to resting EE, young female E4 carriers displayed a significantly lower TEF than non-carriers (Fig. 4i).

Fig. 4 Female Ε4 carriers show lower resting energy expenditure and lower thermic effect of feeding after a glucose challenge. a Experimental design of the study. Individual components of energy expenditure (EE) were assessed in three distinct periods. Resting energy expenditure (REE) was assessed during the resting period. Cognitive energy expenditure (CEE) was assessed during the cognitive challenge and defined as the difference between the area under the curve (AUC) of EE during the cognitive challenge and the AUC of EE from the resting period. Thermic effect of feeding (TEF) was assessed during the glucose challenge and calculated as the difference between the AUC of EE during the glucose challenge and the AUC of REE. b APOE genotypes of subjects represented in the study (E4− n = 61, E4+ n = 33; E2/E4 n = 2, E2/E3 n = 10, E3/E3 n = 51, E3/E4 n = 28, E4/E4 n = 3). c Correlation of average REE with participant age (Pearson correlation R2 = 0.11, **P < 0.01, n = 94). d Correlation of average REE and participant age separated by Ε4 carriers (purple) and non-carriers (blue) (Ε4− R2 = 0.233, ****P < 0.0001; Ε4+ R2 = 0.0042, P = 0.719; E4− n = 61 and E4+ n = 33). Shaded areas refer to 95% confidence intervals. e Average REE for all, young, and middle-aged E4− (n = 44, 33, and 11, respectively) and E4+ females (n = 27, 20, and 7, respectively) (*P < 0.05, **P < 0.01, unpaired t-test, two-tailed). f Average EE between resting and cognitive test periods in young (n = 71) and middle-aged (n = 23) participants (***P < 0.001, paired t-test, two-tailed). g CEE for all female participants and for the two age cohorts. h Average EE between resting and glucose challenge periods in young and middle-aged participants (***P < 0.001, paired t-test, two-tailed). i TEF for all females and for the two age cohorts, further separated by Ε4 carriers and non-carriers (*P < 0.05, unpaired t-test, two-tailed)
Plotting the time course of EE after participants consumed the glucose drink revealed a dramatically blunted energy response in E4+ individuals, an effect driven by E4+ females (Fig. 5a and Supplemental Fig. 10). Further stratification by individual genotypes showed a clear stepwise effect of APOE (Fig. 5b). Post-glucose drink VCO2 values revealed a similar, but non-significant, trend of lower CO2 production in E4 carriers (Supplemental Fig. 11). Importantly, we observed that while non-carriers significantly increased their oxygen consumption following the glucose drink, female E4 carriers did not, as noted by significant E4-associated decreases in total oxygen consumption across the post-glucose period (Fig. 5c-d).
Targeted metabolomics reveals glycolysis as a differentially regulated pathway in E4+ plasma

To determine if the observed APOE differences in energy expenditure were reflected in the plasma metabolome, we conducted a targeted metabolomics analysis of human plasma samples before and after the glucose challenge (Supplemental Table 4). A pathway analysis of the plasma metabolome before the glucose drink highlighted E4-associated differences in glycolysis and pyruvate metabolism (Fig. 5e), and further analyses of individual metabolites revealed lactate as the metabolite most strongly affected by E4 carriage (Fig. 5f). Indeed, E4 carriers displayed dramatically higher plasma lactate concentrations before and after the glucose drink (Fig. 5g, Supplemental Fig. 12). Following the glucose challenge, there was an increase in the number of carbohydrate processing pathways and metabolites that were differentially altered in E4 carriers (Fig. 5h, i), and a pathway enrichment analysis highlighted top hits of "Warburg effect" and "Transfer of acetyl groups into mitochondria" (Fig. 5j). Together, analysis of the plasma metabolome from cognitively normal E4+ individuals suggests a preference for aerobic glycolysis compared to non-carriers.

Fig. 5 legend (partial): (E2/E4 n = 2, E2/E3 n = 10, E3/E3 n = 51, E3/E4 n = 28, E4/E4 n = 3) (*P < 0.05, **P < 0.01, unpaired t-test, two-tailed; #P < 0.05 one-way ANOVA). e, h Pathway impact analysis highlights pyruvate metabolism and glycolysis as pathways significantly altered by E4 carriage in human plasma at baseline (e), while multiple carbohydrate and lipid processing pathways are altered by E4 carriage following the glucose drink (h) (FDR < 0.01). f, i Volcano plots showing changes in plasma metabolites. Lactate was the most significantly altered metabolite by APOE genotype at baseline (f), while multiple metabolites differed post-glucose drink (i) (ANOVA, FDR < 0.05). g Lactate values in individual subjects as determined by GC-MS analysis. j Enrichment analysis highlights multiple metabolic pathways as significantly altered by E4 carriage following the glucose drink, including the top hit of 'Warburg effect'. All comparisons are E4+ (n = 33) vs E4- (n = 61).
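To make the volcano-style readout in Fig. 5f, i concrete, the sketch below computes per-metabolite log2 fold changes and FDR-adjusted p values for an E4+ versus E4- comparison. The simulated data, metabolite names, and the use of a t-test with Benjamini-Hochberg correction are illustrative assumptions; the study's own analysis used ANOVA with FDR < 0.05 in dedicated metabolomics software.

```python
# Illustrative volcano-style comparison of plasma metabolites between E4 carriers and
# non-carriers (simulated values; not the authors' pipeline).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
metabolites = ["lactate", "pyruvate", "citrate", "alanine"]
e4_neg = rng.normal(1.0, 0.2, size=(61, len(metabolites)))  # 61 non-carriers
e4_pos = rng.normal(1.0, 0.2, size=(33, len(metabolites)))  # 33 carriers
e4_pos[:, 0] *= 1.5                                          # simulate higher lactate in E4+

log2_fc = np.log2(e4_pos.mean(axis=0) / e4_neg.mean(axis=0))
pvals = np.array([stats.ttest_ind(e4_pos[:, i], e4_neg[:, i]).pvalue
                  for i in range(len(metabolites))])

# Benjamini-Hochberg FDR correction
order = np.argsort(pvals)
ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
monotone = np.minimum.accumulate(ranked[::-1])[::-1]
qvals = np.empty_like(monotone)
qvals[order] = np.clip(monotone, 0, 1)

for name, fc, q in zip(metabolites, log2_fc, qvals):
    print(f"{name}: log2FC={fc:+.2f}, FDR q={q:.3g}")
```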
Discussion
In the current study, we used indirect calorimetry to show that APOE4 reduces energy expenditure in a cohort of young, cognitively normal females, a phenomenon exacerbated by a dietary glucose challenge. Analysis of the plasma metabolome revealed E4-associated increases in pathways related to carbohydrate processing, specifically aerobic glycolysis, highlighted by higher concentrations of the glycolytic end-product lactate. By applying single-cell RNA sequencing and stable isotope-resolved metabolomics in vivo, along with functional assays of cellular respiration in vitro, we discovered that both E4-expressing mouse brains and E4-expressing astrocytes increase glucose flux through aerobic glycolysis at the expense of TCA cycle entry and oxidative phosphorylation. Cumulatively, these data highlight a novel mechanism whereby E4 lowers energy expenditure in young women and decreases glucose oxidation by redirecting flux through aerobic glycolysis.
These results are congruent with other studies of APOE4 and AD. For example, a recent study by our group demonstrated that E4 astrocytes have increased lactate production [53], and neurons expressing E4 exhibit increased reliance on glycolysis for ATP production with apparent deficits in mitochondrial respiration [54]. Similarly, a recent study by Qi et al. showed increased rates of glycolysis in E4-expressing primary astrocytes, as measured using the Seahorse ECAR assay. The authors also showed an increase in aerobic glycolysis in hippocampal slices collected from E4 mice, compared to E3 mice [55]. While these results are in agreement with our own findings here, Qi et al. conversely report an increase, rather than decrease, in oxygen consumption in primary astrocytes expressing E4 [55]. The reason for this discrepancy is not immediately clear, but may be due to differing glucose concentrations (10 vs 25 mM) present in the media. In addition to modulating glucose metabolism in astrocytes, APOE may drive metabolic changes in microglia as suggested by a recent study by Konttinen and colleagues [56]. In that study, iPSC-derived microglia expressing E4 had lower respiration, decreased ATP production, and lower rates of glycolysis compared to E3 microglia, and this broad metabolic quiescence was associated with decreased functionality [56]. Given the important interplay between metabolism and immune cell phenotype [57], further exploration of the potential role of APOE in regulating immunometabolism is likely warranted. Interestingly, another study showed that fibroblasts from AD patients show a 'Warburg-type' (aerobic glycolysis) shift from oxidative phosphorylation to glycolysis with increased lactate production [58]. Aerobic glycolysis refers to the metabolism of glucose to lactate instead of the oxidative TCA cycle, despite the presence of abundant oxygen. In the brain, this phenomenon occurs in young individuals with a peak around 5 years of age (when 30% of cerebral glucose is processed anaerobically), and then steadily declines with age [59]. Aerobic glycolysis in the brain appears to be cell and region specific, with astrocytes playing a major role in certain regions such as the dorsolateral prefrontal cortex, precuneus, and the posterior cingulate cortex [60]. Importantly, areas associated with aerobic glycolysis also overlap with areas known to accumulate amyloid β, indicating that the anaerobic metabolism of certain brain regions may predict amyloid burden in later life [61]. Furthermore, recent proteomic profiling of over 2000 AD brain samples revealed that changes in the expression of proteins involved in glial metabolism constituted the module most significantly associated with AD pathology and cognitive decline [62]. Increased expression of enzymes in this module included lactate dehydrogenase, pyruvate kinase, and glyceraldehyde-3-phosphate dehydrogenase, all of which are elevated in aerobic glycolysis phenotypes.
Interestingly, recent evidence has shown that lactate is an energy substrate used by the brain [63] and a competitive glucose alternative [64][65][66]. Lactate has also been shown to decrease FDG-PET signal [67]. An increase in astrocyte-derived lactate in Ε4 carriers may compete with glucose as a substrate for brain metabolism and decrease CMRglc. While we did not find any significant differences in MCT1 or MCT4 expression between E3 and E4 astrocytes or mouse brain tissue, changes in the expression of the various lactate transporters, and thus shuttling from astrocytes to neurons, may help explain these findings in the more complex setting of the human brain. For example, the monocarboxylate transporter MCT2 was found to be upregulated, while MCT4 was downregulated, in postmortem brain tissue of young APOE4 carriers [68]. Further, an increase in aerobic glycolysis might also act to lower energy expenditure, as glycolysis produces only 2 mol of ATP compared to the 34 mol of ATP from a mole of glucose metabolized via mitochondrial oxidative phosphorylation. This balance of anaerobic glycolysis versus oxidative phosphorylation behaves reciprocally [69]. Increased mitochondrial ATP production downregulates glycolysis, while glycolytic ATP synthesis can suppress aerobic respiration [70]. Given our findings of lower O 2 consumption and increased production of lactate, we speculate that E4 carriers have lower energy expenditure due to glycolysis being less energetically costly than downstream pathways.
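The energetic argument made here can be put in rough numbers. The sketch below is a back-of-envelope illustration only: the 2 versus 34 ATP figures are taken from the text above, and 6 O2 per fully oxidised glucose is textbook stoichiometry; the fractions of glucose diverted to lactate are invented for illustration.

```python
# Back-of-envelope stoichiometry for why diverting glucose to lactate lowers both ATP yield
# and O2 consumption (and hence measured EE). 2 vs 34 ATP figures are from the text above;
# 6 O2 per fully oxidised glucose is textbook stoichiometry. Illustrative only.
ATP_GLYCOLYTIC = 2   # mol ATP per mol glucose converted to lactate
ATP_OXIDATIVE = 34   # mol ATP per mol glucose fully oxidised (figure used in the text)
O2_OXIDATIVE = 6     # mol O2 consumed per mol glucose fully oxidised

for frac_to_lactate in (0.0, 0.1, 0.3):
    atp = (1 - frac_to_lactate) * ATP_OXIDATIVE + frac_to_lactate * ATP_GLYCOLYTIC
    o2 = (1 - frac_to_lactate) * O2_OXIDATIVE
    print(f"{frac_to_lactate:.0%} of glucose to lactate -> "
          f"{atp:4.1f} ATP and {o2:.1f} O2 per glucose")
```

Under these assumptions, every mole of glucose shunted to lactate removes roughly 6 moles of O2 consumption, which is consistent with the direction of the lower VO2 observed in E4 carriers.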
Our study has several limitations. First, as this was an exploratory clinical research trial, sample size was difficult to estimate and the primary outcome measure of the study was to measure resting state respiratory quotient in cognitively normal participants with various APOE genotypes. Thus, our finding that APOE4-associated differences in EE, VO2, and VCO2 were limited to young female E4 carriers resulted from subsequent subgroup analyses. Future a priori studies in larger and more diverse cohorts will be necessary to fully clarify whether this E4-associated decrease in EE is sex-specific. Additionally, as a primary goal was to assess individual metabolic responses to glucose, we performed blood draws immediately prior to and immediately after the glucose challenge. It may be possible that mental stress during the cognitive challenge (which occurred prior to the first blood draw) altered the plasma metabolome beyond the normal resting state. Further, as with all plasma metabolomics studies, the tissue of origin of the metabolites measured remains unknown. For example, are they brain-derived, hepatic in origin, or synthesized in another peripheral tissue such as skeletal muscle or adipose tissue? A similar limitation in resolution exists in regard to energy expenditure, as the IC measures reported here are a summation of both brain and peripheral energy utilization. Several of these peripheral tissues, most notably liver and adipose tissue, are known to synthesize substantial amounts of APOE, and future studies leveraging mouse models of APOE will be important in clarifying the contributions of brain vs periphery in the metabolic changes observed here. Another potential confounder is that we provided glucose in the form of a sugary milk drink for the glucose challenge. While we used 50 g of sugar based on clinical guidelines for glucose challenges [71], milk also includes fats and proteins. However, the high relative content of carbohydrates to other macronutrients ensures that any observed response (particularly at the ~30 min time point analyzed) can be primarily attributed to carbohydrate metabolism. Indeed, the pathways most altered by the glucose challenge included galactose metabolism, starch and sucrose metabolism, and glycolysis. Finally, while we found E2 to be associated with lower plasma lactate and higher EE relative to non-E2 carriers, the study did not include any homozygous E2 carriers and the low overall allele frequency makes interpretation challenging. Still, these results are intriguing based on E2 being a known protective allele for AD [2,4], and further study of energy expenditure and glucose metabolism in E2 carriers is warranted.
Current understanding of the development of late-onset AD supports a triad of primary risk factors: E4, female sex, and old age. However, detecting early indicators of eventual cognitive decline in young, asymptomatic individuals is critical for primary prevention of AD [72]. Given the largely disappointing trial outcomes of drugs targeting AD neuropathology [73], these therapies may be intervening after a 'point of no return' and thus offer minimal benefit in prognosis [74]. In order to design therapies for early interventions in those at risk for AD, we must first identify measurable biomarkers whose severity and/or change over time correlate with risk for clinically observable AD. In the current study, we used indirect calorimetry to show that APOE4 reduces energy expenditure in a cohort of young cognitively normal females, a phenomenon exacerbated by a dietary glucose challenge. While using indirect calorimetry for metabolic studies is common in clinical settings and exercise studies [75,76], to our knowledge the method has not been previously applied to investigate biomarkers of cognitive impairment. Thus, repurposing IC to study the metabolic effects of an AD risk factor such as E4 represents a mobile, simple, and cost-effective new approach.
Although resting energy expenditure was significantly lower in female E4 carriers, the most striking effect of APOE was observed after participants underwent a dietary carbohydrate challenge. There, E4+ individuals failed to increase VO2, leading to a significantly lower EE compared to non-carriers. Given the decreased VO2 and increased plasma lactate concentrations in E4+ subjects, we hypothesize that these individuals are diverting a higher fraction of glucose to aerobic glycolysis as opposed to oxidative phosphorylation. Along these lines, analysis of the plasma metabolome revealed E4-associated increases in pathways primarily related to carbohydrate processing, specifically aerobic glycolysis. These results were in line with our findings from mouse and cell models of APOE4, where our application of scRNA-seq, stable isotope-resolved metabolomics and functional assays of cellular respiration showed that both E4-expressing mouse brains and E4-expressing astrocytes increase glucose flux through aerobic glycolysis at the expense of TCA cycle entry and oxidative phosphorylation.
Conclusions
Cumulatively, these data highlight a novel mechanism whereby E4 lowers energy expenditure in females and decreases glucose oxidation by redirecting flux through aerobic glycolysis (Summary Fig.). While many questions remain, our study highlights novel roles for APOE and sex in modulating systemic and cerebral glucose metabolism and provides a feasible method to assess APOE-dependent metabolic signatures in pre-symptomatic young individuals. These findings provide important insights that may help to define dietary and pharmacological approaches to delay or prevent incipient AD in high-risk individuals.

Supplemental Table 2. Age, sex, and APOE genotype of cognitively normal individuals according to E4 carriage and age cohort (young = 18-39, middle-aged = 40-65). Values represent means +/− (SD).

Supplemental Table 3. Clinical characteristics of cognitively unimpaired individuals according to E4 carriage and age cohort (young = 18-39, middle-aged = 40-65). Values represent means +/− (SD). Ca, Caucasian; AA, African American; His, Hispanic; A, Asian; BMI, body mass index.

Supplemental Table 4. Plasma metabolites of study participants analyzed by gas chromatography before and after a dietary glucose challenge.

Supplemental Table 5. Pre-screening checklist. A response of "yes" to any of the following resulted in exclusion from the study.

Supplemental Fig. 1. Cluster cell counts. Number of cells in each graph-based cluster from all cells (a), and astrocytes only (b). Bars represent mean number of cells in each cluster, with the number of E3 cells (circles) and E4 cells (squares) noted by symbols.

Supplemental Fig. 2. APOE expression in single cells and specific astrocyte clusters. (a) UMAP visualization of E3 (left) and E4 (right) cells showing expression of APOE. APOE expression is primarily limited to cells identified as astrocytes. (b) Expression of APOE in astrocyte-only UMAP (Aldoc+ cells). Inset shows the 8 distinct astrocyte clusters. (c) Violin plots showing expression of APOE in all astrocytes (left) and within each astrocyte cluster (right). (**P < 0.01, ***P < 0.001, unpaired t-test, two-tailed).

Supplemental Fig. 3. E4 is associated with decreases in many genes of the oxidative phosphorylation KEGG pathway. Pathway map for KEGG pathway "Oxidative Phosphorylation" showing genes differentially expressed between E3 and E4 astrocytes. Genes highlighted in green are downregulated in E4, genes in red are upregulated in E4.

Supplemental Fig. 4. E4 is associated with decreases in many genes of the "Alzheimer's disease" KEGG pathway. Pathway map for the KEGG pathway "Alzheimer's disease" showing genes differentially expressed between E3 and E4 astrocytes. Genes highlighted in green are downregulated in E4, genes in red are upregulated in E4.

Supplemental Fig. 5. LDH expression is increased in E4 astrocytes; MCT expression is unchanged. (a) Gene expression of critical enzymes in glycolysis and the TCA cycle in whole brain homogenates from female E3 and E4 mice. Hk, hexokinase; Pfk, phosphofructokinase; Ldh, lactate dehydrogenase; Aco, aconitase; Idh, isocitrate dehydrogenase; Ogdh, oxoglutarate dehydrogenase; Sdh, succinate dehydrogenase; Mdh, malate dehydrogenase; Cs, citrate synthase. Data analyzed by multiple t-tests with Sidak multiple comparison correction. (b) Slc16a1 and Slc16a3 gene expression in astrocytes from the scRNAseq data from Fig. 1 (left), in primary astrocytes isolated from E3 or E4 mice (middle), and in whole brain homogenates from female E3 or E4 mice (right).
(c) MCT1 and MCT4 expression was quantified in brain tissue from mice expressing E3 or E4 via western blot (n = 7-8). MCT expression normalized to β-actin loading control and expressed as a percent of E3 (value/mean E3). (d) Ldha and Ldhb gene expression in primary astrocytes. All gene expression values expressed as a percent of E3 (value/ mean E3). *P < 0.05, **P < 0.01, t-test. (e) LDH protein expression was measured via western blot in primary astrocytes expressing E3 or E4 (n = 8). LDH expression normalized to β-actin loading control and expressed as a percent of E3 (value/mean E3). *p < 0.05, t-test. Supplemental Fig. 6. Human indirect calorimetry study design. (a) Representative time course of energy expenditure (EE) measures during the three periods of the study (rest in gray, cognitive challenge in green, and glucose challenge in orange). Data was only analyzed during the last 25 min of the resting and glucose periods and during a common 5-15 min span during the cognitive challenge in which all 100 subjects were actively engaged in the taskdenoted by grey bar on x axis. Blood was drawn immediately prior and after the glucose challenge. (b) Representative photo of a participant during the resting challenge connected to the Ultima MGX indirect calorimetry (IC) system. (c) Example slides from the Novel Image Novel Location test used as a cognitive challenge. (d) The glucose challenge consisted of a blood draw, followed by ingestion of the 50 g sugar drink (all subjects consumed drink within 90 s), followed by IC measurement, and a second blood draw. Supplemental Fig. 7. Respiratory Exchange Ratio (RER) does not differ by APOE genotype. Respiratory exchange ratio (RER) (VCO 2 /VO 2 ) was not significantly different between APOE genotypes across any of the three periods tested. Supplemental Fig. 8. E4 effect on resting energy expenditure (a) E4 non-carriers' (n = 61; blue) and E4 carriers' (n = 33; purple) average resting energy expenditures were determined and stratified by young and middle-aged. (*P < 0.05, ***P < 0.001, unpaired t-test, two-tailed). (b) This was repeated for only male participants (*P < 0.05, unpaired t-test, two-tailed; E4-total n = 17, young n = 13, middle-aged n = 4; E4+ total n = 6, young n = 5, middle-aged n = 1). (c) Average EE was plotted over the resting period for females and (d) males. Dotted lines indicate liner regression results and shaded area are SEMs. Supplemental Fig. 9. Novel image novel location object recognition test response accuracy by APOE genotype. (a) The novel-image-novel-location (NINL) object recognition test contains 7 sets of 12 slides. Each slide has 3 images and 4 possible locations. Each slide is viewed for eight seconds in the order as follows: See Set A, See Set B. Test Set A, See Set C, Test Set B, See Set D, Test Set C, etc. To be considered correct, subjects must identify both the type of change and in which quadrant the change has occurred. The test is designed so that on average subjects answer 60-80% of questions correctly. Total percent correct was calculated for each genotype (b) and stratified by E4 carriage. (c) Individual slopes of EE after the cognitive challenge showing an average decrease in EE after the challenge. Supplemental Fig. 10. E4 effect on energy expenditure during glucose challenge in all subjects (left column), and in males only (right column) (a) Energy expenditure (b) VCO 2 and (c) VO 2 was plotted over the glucose challenge period in all E4-(n = 61; blue) and E4+ (n = 33; purple) participants. 
(*P < 0.05, Two-way ANOVA repeated measures). (d) Thermic effect of feeding was determined as a ratio of E4 non-carriers in all, young, and middle-aged participants. (**P < 0.01, unpaired t-test, two-tailed) (e) Energy expenditure (f) VCO 2 and (g) VO 2 was plotted over the glucose challenge period in male participants (E4-n = 17; E4+ n = 6). Dotted lines show linear regression trend line, shaded areas refer to SEM. (h) Thermic effect of feeding was determined as a ratio of E4 non-carriers in all, young, and middle-aged male participants. Supplemental Fig. 11. VCO 2 values during the glucose challenge period.
BAF complex-mediated chromatin relaxation is required for establishment of X chromosome inactivation
The process of epigenetic silencing, while fundamentally important, is not yet completely understood. Here we report a replenishable female mouse embryonic stem cell (mESC) system, Xmas, that allows rapid assessment of X chromosome inactivation (XCI), the epigenetic silencing mechanism of one of the two X chromosomes that enables dosage compensation in female mammals. Through a targeted genetic screen in differentiating Xmas mESCs, we reveal that the BAF complex is required to create nucleosome-depleted regions at promoters on the inactive X chromosome during the earliest stages of establishment of XCI. Without this action gene silencing fails. Xmas mESCs provide a tractable model for screen-based approaches that enable the discovery of unknown facets of the female-specific process of XCI and epigenetic silencing more broadly.
Epigenetic gene silencing facilitates cell-type-specific transcriptional signatures and is therefore fundamental to shaping cell identity in both development and disease. The silencing process is necessarily complex, multilayered, and not fully understood despite significant research efforts. X chromosome inactivation (XCI) is the mammalian compensation mechanism that ensures equal gene dosage between XX females and XY males, resulting in near-complete silencing of one female X chromosome [1][2][3][4]. XCI is therefore a powerful system in which to study epigenetic silencing across hundreds of loci in parallel.
In vitro, female mouse embryonic stem cells (mESCs), like the blastocyst cells from which they derive, have activity from both X chromosomes; a feature exclusive to these and primordial germ cells [5][6][7] . Upon differentiation mESCs undergo XCI creating an active (Xa) and an inactive (Xi) X chromosome in an apparently random process. XCI occurs stepwise after initiation by upregulation of the long non-coding RNA Xist. This heralds the establishment phase of XCI, where Xist spreads in cis to coat the future Xi 8,9 and recruits factors 10-12 that establish silencing through loss of activating 10,[13][14][15] and gain of repressive histone marks 14,[16][17][18][19][20][21][22][23][24][25][26] and adoption of a unique bipartite chromosome conformation [27][28][29][30] , mediated in part by Smchd1 [31][32][33] . Silencing is then maintained by DNA methylation at promoter elements 13,34 , complemented at a subset of genes by H3K9me3 23,35 . This rich understanding is the result of three decades of exceptional research that has contributed significantly to our understanding of epigenetic silencing. However, we still do not completely understand the process and as such, XCI remains a fertile system to identify unknown facets of silencing.
Differentiating female mESCs present as an enticing system in which to study XCI; however, complications with in vitro maintenance of these cells have severely limited their use. In culture female mESCs are epigenetically unstable, displaying global hypomethylation compared with males [36][37][38][39][40][41][42] and karyotypic instability, with XO cells rapidly dominating cultures 36,40,41 . Based on a desire to study XCI in normal differentiating female mESCs, we created X-linked fluorescent reporter alleles (Xmas), allowing efficient monitoring of karyotype and XCI status in live cells and with minimal manipulation of these sensitive cells. Xmas mESCs allowed us to perform a genetic screen, which although targeted and small in scale, is an initial screen for regulators of the establishment of XCI in its native state; revealing a role for the nucleosome remodellers Smarcc1 and Smarca4. Smarcc1 creates an accessible future Xi that allows XCI to proceed. Therefore, chromatin relaxation may be an initial step in epigenetic gene silencing, demonstrating the utility of the Xmas system for screening in normal female mESCs and subsequent discovery of unknown aspects of XCI.
Creation of Xmas reporters that reflect normal random XCI.
To create a tractable and replenishable female mESC system, we knocked either a GFP or mCherry reporter cassette into the 3′ UTR of the X-linked house-keeping gene Hypoxanthine guanine phosphoribosyltransferase (Hprt, X Hprt-GFP and X Hprt-mCherry , Fig. 1a). We chose fluorescent reporters to enable the use of fluorescence-activated cell sorting (FACS) to purify live cells with different X inactivation states, thus permitting multiple options for screen-based approaches. We chose to drive reporter expression from the endogenous Hprt promoter to give the most natural representation of X-linked expression and silencing possible. Dual reporter systems for XCI studies have been created previously; however, these did not use an endogenous promoter 35,43,44 or did not create lines of mice 40,45 , features which would enable the study of XCI in female mESCs in the most native state possible. Our reporter alleles were initially created in Bruce4 XY mESCs, then to ensure we could continually derive XX mESCs, we created two homozygous/hemizygous mouse strains from the reporter alleles which when crossed produce female offspring with GFP and mCherry marking different X chromosomes (X Hprt-GFP X Hprt-mCherry , Fig. 1b, Extended Data Fig. 1a). We call this the Xmas (X-linked markers active silent) system. We inserted an internal ribosome entry site (IRES) between the Hprt stop codon and the reporters, which diminished fluorophore intensity, yet is necessary to ensure appropriate Hprt function. The neomycin cassette was deleted to allow detectable fluorophore expression. Roughly equal male and female pups were born of each genotype ( Supplementary Fig. 1b-e). Flow cytometry of white blood cells from Xmas animals as well as hematopoietic stem and progenitor cells (LSK) and primary mouse embryonic fibroblasts (MEFs) from Xmas embryos showed the reporter alleles accurately detect random XCI with approximately half the cells positive for each fluorescent protein ( Fig. 1c-f).
Xmas induced pluripotent stem cells (iPSC) and mESCs show two active X chromosomes. Next, we wished to assess whether pluripotent Xmas cells would display the expected expression of both the mCherry and GFP reporters. Since reactivation of the Xi is a feature of late-stage cellular reprogramming 46 , we first tested whether our reporter alleles performed as expected during iPSC induction. Indeed, ~80% of post-XCI X Hprt-GFP X Hprt-mCherry MEFs transduced with a doxycycline-inducible reprogramming cassette (STEMCCA, Fig. 2a) 47 detectably reactivated their Xi by the final day of the assay (Fig. 2b). These data show that pluripotent Xmas cells display the expected biallelic expression from the X chromosome and suggest that this system may be a useful tool for studying reprogramming.
We next assessed the suitability of our mouse lines for the production of Xmas mESCs (X Hprt-GFP X Hprt-mCherry , Fig. 2c, Supplementary Fig. 2a, b). Female blastocysts displayed reporter expression exclusively from the maternal X chromosome in extraembryonic cells, as expected due to imprinted XCI in trophectoderm 48 . By contrast, both X chromosomes were active in the inner cell mass, indicating the expected reactivation of the silent paternal X chromosome in embryonic cells (Fig. 2d). Following derivation in serum-free, feeder-free conditions with inhibitors of MEK and GSK3 (2i), expression of both reporters was detectable in Xmas mESCs by microscopy and flow cytometry (Fig. 2e, f). However, the abundance of single-positive Xmas mESCs progressively increased in culture, likely reflecting the increasing abundance of XO cells. We tested whether the reporters accurately reflected karyotype in mESCs by FACS followed by PCR of genomic DNA and found cells single positive for the reporters were also single positive for the corresponding allele (Supplementary Fig. 2c). Thus, the fluorescent reporters in the Xmas system detect XX and XO mESC populations. This is a very useful feature of the Xmas system as it enables rapid and regular monitoring of the karyotype of female mESCs and offers the opportunity to purify XX cells by FACS to ensure suitability for XCI experiments and to minimise confounding results that can occur due to the presence of XO cells.
To assess the similarity of Xmas mESCs to published mESC lines, and therefore their suitability for functional XCI and pluripotency studies, we compared Xmas mESC transcriptomes to previously published data sets of both naive and primed mESCs 49,50 . We found similar expression of pluripotency genes in all three groups, but lower early differentiation-associated genes compared with primed mESCs (Fig. 3a). Xmas mESCs most closely resemble naive mESCs maintained in 2i media (Fig. 3b) 51 , as expected given that Xmas mESCs were also derived and maintained in 2i media. Xmas mESCs also retain pluripotency, as they form teratomas that differentiate into all three germ layers (Fig. 3c). To assess in vitro differentiation potential of Xmas mESCs, we induced differentiation by weaning from 2i into serum-containing LIF-free media over 3 days to induce a nondirected differentiation. We performed RNA-seq daily for 9 days during the differentiation. As expected, Xmas mESCs begin most transcriptionally aligned to naive mESCs, transitioning through a primed state, before finally more closely resembling MEFs than neural stem cells ( Fig. 3d and Supplementary Data 1), likely reflective of the non-directed differentiation. These data show that Xmas mESCs display similar properties to other naive mESCs in vitro.
Xmas mESCs detect impaired XCI. We next tested whether our Xmas reporter alleles could detect random XCI in differentiating mESCs, using the same differentiation protocol as above (Fig. 2c). At the induction of differentiation, most cells were double positive for the fluorescent markers, indicating an XaXa XCI state, before the rapid loss of double positivity from days 5 to 7 with all lines becoming XaXi, as expected following random XCI (Fig. 4a). To test whether this reflected normal XCI timing we derived F1 female mESCs from FVB/NJ (FVB) dams crossed to CAST/EiJ (CAST) sires. Allele-specific analyses are enabled by single nucleotide polymorphisms between each allele and natural skewing of XCI, with the FVB allele approximately three times more likely to become the Xi upon random XCI. This avoids the need to genetically skew random XCI by Xist deletion, allowing the most normal process to occur with minimal manipulation of the cells, producing more consistent results in our hands. Allele-specific RNA-seq during F1 mESC differentiation showed Hprt follows similar XCI kinetics to other X-linked genes (Fig. 4b) and similar kinetics to our Xmas Hprt reporters (Fig. 4a). Notably, Xmas offers the advantage of a single-cell analysis of XCI. Therefore, Xmas mESCs allowed us to consider functional XCI studies.
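Since the allelic readout used throughout these experiments is the log2 difference between X FVB and X CAST expression, a small worked example may help. The sketch below is illustrative only, assuming allele-resolved read counts are already available per gene; the column names and counts are hypothetical, not the study's pipeline.

```python
# Minimal sketch of the allelic expression ratio plotted in Fig. 4b (assumed input:
# allele-resolved read counts per gene; gene names and counts are hypothetical).
import numpy as np
import pandas as pd

counts = pd.DataFrame(
    {"fvb": [120, 45, 300], "cast": [118, 130, 290]},
    index=["Hprt", "GeneX", "GeneY"],
)

pseudo = 1  # pseudocount to avoid log2(0) for fully silenced alleles
counts["log2_fvb_minus_cast"] = (
    np.log2(counts["fvb"] + pseudo) - np.log2(counts["cast"] + pseudo)
)
# Values near 0 indicate biallelic expression; increasingly negative values indicate
# silencing of the (preferentially inactivated) FVB allele as XCI proceeds.
print(counts)
```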
We tested if XCI could be perturbed in Xmas mESCs by knockdown of factors known to regulate XCI, including Yy1 (initiates Xist expression 52), Hnrnpu (tethers Xist to the Xi 53) and Jarid2 (directs polycomb to Xist-localised regions 54,55). Xmas mESCs were transduced with validated shRNAs (Supplementary Fig. 3a) 6 days prior to differentiation, maintaining antibiotic selection to ensure hairpin activity. Flow cytometry revealed that knockdown of Yy1 or Jarid2 inhibited XCI relative to a non-silencing control (Nons) (Fig. 4c), consistent with their reported roles in XCI 52,57, when shRNAs were delivered prior to differentiation. To confirm these results reflect a direct effect on XCI, rather than altered differentiation kinetics, we performed real-time quantitative PCR (qRT-PCR) for the pluripotency genes Nanog and Sox2 in differentiating Xmas mESCs, finding no difference in expression level following any knockdown apart from the expected accelerated loss of pluripotency upon knockdown of Hnrnpu prior to differentiation (Supplementary Fig. 3b, c). These data indicate that, by varying the timing of shRNA transduction, Xmas mESCs can reveal when a factor is required during the process of XCI.
A targeted shRNA screen in differentiating Xmas mESCs reveals Smarcc1 and Smarca4 are required for XCI. The tractability of Xmas mESCs allowed us to screen for genes that establish the Xi. Our previous mouse genetic screen identified epigenetic regulators of transgene variegation [58][59][60][61][62][63][64][65][66]. This screen yielded a list of seventeen candidate proteins, some of which are also known to be required for XCI 12,23,34,35,67. Given this, we selected this suite of genes to target in our screen. Xmas mESCs were transduced with validated hairpins (Supplementary Fig. 3a) at day 2 of differentiation and assessed by flow cytometry at day 6, a timepoint at which we consistently observe effects from gene knockdown (Fig. 4e, f). Strikingly, XCI was impaired by shRNAs against the nucleosome remodellers Smarcc1 and Smarca4, both members of the ESC-specific BAF complex [68][69][70] (Fig. 5a). We validated the screen result in a Xmas mESC differentiation timecourse, detecting the failure of XCI by day 5 for knockdowns of either Smarcc1 or Smarca4 (Fig. 5b and Supplementary Fig. 4a). As genes in our screen are knocked down following the induction of differentiation, we cannot exclude roles early in XCI for the genes that did not read out, but here we chose to focus on the role of Smarcc1 and Smarca4.
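The screen readout itself is a simple normalisation: the percentage of double-positive (XaXa) cells for each knockdown divided by the same value in the non-silencing control. A hedged sketch of that calculation, with invented example numbers and hairpin labels, is shown below; it is not the authors' analysis code.

```python
# Sketch of the screen readout: % GFP+/mCherry+ (double-positive) cells at day 6,
# normalised to the non-silencing control (illustrative numbers; two hairpins per gene).
import pandas as pd

flow = pd.DataFrame({
    "gene":    ["Nons", "Nons", "Smarcc1", "Smarcc1", "Smarca4", "Smarca4"],
    "hairpin": ["ctrl1", "ctrl2", "sh1", "sh2", "sh1", "sh2"],
    "pct_double_positive": [12.0, 13.5, 34.0, 30.5, 28.0, 31.0],
})

nons_mean = flow.loc[flow.gene == "Nons", "pct_double_positive"].mean()
flow["normalised_to_nons"] = flow["pct_double_positive"] / nons_mean

# A normalised value well above 1 means cells retained two active X chromosomes,
# i.e. XCI failed after knockdown of that gene.
print(flow.groupby("gene")["normalised_to_nons"].agg(["mean", "std"]))
```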
To determine the extent to which Smarcc1 and Smarca4 knockdown impairs XCI across the whole X chromosome, we performed RNA-seq in differentiating X FVB X CAST F1 mESCs (Fig. 5c and Supplementary Fig. 4c, e). Knockdown was maintained throughout the assay (Supplementary Fig. 4b, d) and resulted in increased gene expression from X FVB (preferential Xi) at day 6 of differentiation at the majority of informative X-linked genes (Fig. 5d, Supplementary Fig. 4c, e, f and Supplementary Data 2, 3), suggesting both Smarcc1 and Smarca4 are required for chromosome-wide silencing. We next focussed on Smarcc1, performing RNA-seq during differentiation, and found persistent failure of XCI in knockdown cells, detectable from day 5 (Fig. 5f, g and Supplementary Fig. 4g).
Despite Smarcc1 and Smarca4 both being members of the same complex, there were no significantly differentially expressed genes in common between Smarcc1 and Smarca4 knockdown groups. Similarly, depletion of different subunits of the BAF complex has previously been reported to result in different chromatin states, accessibility and transcription 71,72 . In this context, the lack of overlap suggests the mechanism by which they regulate XCI is not via a secondary gene or delayed differentiation. Indeed, we found no misexpression of known protein regulators of XCI following Smarcc1 or Smarca4 depletion ( Fig. 5h and Supplementary Fig. 4h, Supplementary Data 4), nor consistent changes to pluripotency factors or early differentiation genes (Supplementary Fig. 4i, j). To further assess whether Smarcc1 or Smarca4 knockdown delayed differentiation, we performed a correlation analysis by calculating the Euclidean distance between genes in our female RNA-seq dataset and a differentiation timecourse in male cells, finding no evidence of delay ( Fig. 5i and Supplementary Fig. 4k, l and Supplementary Data 5). Subsequent qRT-PCR experiments for common pluripotency and differentiation genes were also consistent with normal differentiation upon Smarcc1 or Smarca4 depletion ( Supplementary Fig. 4m). To experimentally separate the XCI role of Smarcc1 and Smarca4 from potential roles in pluripotency, we differentiated Xmas mESCs and transduced with shRNA at day 3 to achieve depletion later during differentiation, again detecting the failure of XCI ( Supplementary Fig. 4n). The timing of knockdown of Smarcc1 and Smarca4 in this and the earlier experiment is after the stage when factors important in the initiation of XCI, such as Yy1, readout in this screen. Together, these data suggest Smarcc1 or Smarca4 depletion do not influence XCI via altered timing of exit from pluripotency or differentiation, but instead may be due to a direct effect on the establishment of XCI.
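To illustrate the Euclidean-distance comparison used above to check for delayed differentiation, the sketch below computes a timepoint-by-timepoint distance matrix between two expression timecourses. The matrices, day labels, and gene count are simulated assumptions; a knockdown that merely lagged behind the control would show its smallest distances shifted off the diagonal towards earlier reference days.

```python
# Sketch of a Euclidean-distance comparison between knockdown and control differentiation
# timecourses (simulated expression matrices, genes x days, log2 rpm; not the study's data).
import numpy as np

rng = np.random.default_rng(1)
days = [0, 2, 4, 6, 8]
knockdown = rng.normal(5, 1, size=(2000, len(days)))  # e.g. female Smarcc1-knockdown course
reference = rng.normal(5, 1, size=(2000, len(days)))  # e.g. male control course

# Distance between every pair of timepoints across all genes
dist = np.zeros((len(days), len(days)))
for i in range(len(days)):
    for j in range(len(days)):
        dist[i, j] = np.linalg.norm(knockdown[:, i] - reference[:, j])

print(np.round(dist, 1))  # smallest values hugging the diagonal = no differentiation delay
```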
Fig. 3 legend (continued): c Representative images of teratomas produced following injection of Xmas mESCs into nude mice (n = 4 independent replicates), with differentiated cell types from endodermal, mesodermal and ectodermal lineages shown. d tSNE plot comparing the transcriptomes of Xmas mESCs (n = 4 independent replicates) from day 0 to day 8 of differentiation against published transcriptomes of mESCs grown in serum or 2i, MEFs or NSCs.

Smarcc1 and Smarca4 are required at the establishment of XCI. We next sought to investigate when the defect in XCI first occurred in the Smarcc1 and Smarca4 knockdown cells, starting with an examination of Xist activation and spreading. RNA fluorescence in situ hybridisation (FISH) for Xist detected no signal in undifferentiated mESCs and, during an mESC differentiation timecourse, found no difference in the number of cells with an Xist focus following knockdown of Smarcc1 or Smarca4 at any timepoint; however, depleted cells were largely unable to form the distinctive Xist cloud apparent at day 6 of differentiation (Fig. 6a, Supplementary Fig. 5a, b). To understand potential defects in Xist spreading, we performed a volumetric analysis of Xist foci, finding that depletion of Smarcc1 had no effect on Xist spreading at day 4 or 5 of differentiation (Fig. 6b). Interestingly, depletion of Smarca4 appeared to show accelerated Xist spreading at day 4 of differentiation, likely due to the role of Smarca4 in maintaining pluripotency [68][69][70], however, this acceleration was resolved by day 5 and is followed by a clear failure to coat the Xi at day 6. To gain a more accurate readout of transcript levels, and as these FISH experiments do not discriminate between Xist and Tsix RNA, we performed qRT-PCR, finding Xist to be 100-fold more highly expressed than Tsix. Together with the absence of signal at day 0, these data indicate the FISH signal is likely Xist (Supplementary Fig. 5c, d).
No effect on Tsix RNA was observed following depletion of either Smarcc1 or Smarca4, whereas Xist RNA levels were slightly decreased.
Fig. 4 legend: Xmas mESCs detect impaired XCI during differentiation. a Flow cytometry data showing the kinetics of the fluorescent reporter alleles during differentiation and XCI of Xmas mESCs for multiple cell lines (n = 9 independent replicates). The triangle represents weaning from 2i to differentiation media in 25% increments over 3 days. b RNA-seq timecourse data from differentiating X FVB X CAST mESCs showing the ratio of X FVB gene expression compared with the X CAST (X FVB - X CAST log 2 ). Hprt expression is represented as a red line and other X-linked genes as grey shade. c-f Flow cytometry data showing the kinetics of the fluorescent reporter allele expression changes during differentiation and XCI of Xmas mESCs. Cells were challenged with shRNAs against the indicated known regulators of XCI or control (Nons), either prior to differentiation, depicted as either raw data (c) or normalised to Nons (d), or during differentiation, as either raw data (e) or normalised to Nons (f). Triangles represent weaning from 2i media into differentiation media and arrows indicate the day of shRNA viral transduction. n = 3-6 independent replicates from two independent shRNAs per gene, error bars indicate s.e.m., two-way ANOVA, p value is given for the entire timecourse of that gene knockdown. Source data are provided as a source data file.

The change in Xist levels and localisation only later during the timecourse of XCI suggests that Xist is correctly induced but may be destabilised due to failure to localise to the Xi 73,74. To test this directly we performed a series of experiments in a male mESC
line that carries a doxycycline-inducible Xist transgene on chromosome 17 75. Xist induction, spreading and silencing are very rapid in this model, so to test induction of Xist we performed Xist RNA FISH at 30 min post doxycycline induction, prior to significant spreading of Xist, and found no difference in the proportion of nuclei producing an Xist signal in controls and Smarcc1 or Smarca4 depleted cells (Supplementary Fig. 5e, f). This timepoint is akin to day 4 of differentiation in Xmas mESCs with endogenous Xist. When measured by qRT-PCR, Xist levels were normal following 24 h of continual doxycycline-driven expression in control and depleted cells (Supplementary Fig. 5g). We also performed a timecourse of immunofluorescence for mCherry (which is tethered to Xist in this cell model), finding these cells were less able to form an Xist cloud and H3K27me3 foci than control cells at days 1 and 2 post Xist induction (Supplementary Fig. 5h, i). These timepoints are akin to day 6 of differentiation in Xmas mESCs. These data suggest that the defect upon Smarcc1 or Smarca4 depletion is downstream of Xist induction. Using the inducible system to study Xist degradation, we found the reduction in Xist transcript is likely due to destabilisation of the transcript following failure to localise to the Xi (Supplementary Fig. 5j). Finally, Smarcc1 or Smarca4 depleted cells displayed a survival advantage over control cells, further supporting failed Xist-induced gene silencing in depleted cells (Supplementary Fig. 5k). Taken together with our prior data showing failure of gene silencing in female cells detectable from day 5 of differentiation, normal Xist induction at days 4 and 5, but inability to form an Xist cloud and H3K27me3 foci at day 6, these data suggest that Smarcc1 and Smarca4 are required early in the establishment of silencing on the Xi, beyond which key XCI events fail. Notably, this is a differentiation-free model of Xist-induced silencing and therefore disentangles the roles of Smarcc1 and Smarca4 from any potential role in differentiation.
Finally, to test for potential roles in the maintenance of XCI we performed Smarcc1 and Smarca4 knockdown in post-XCI Xmas MEFs sensitised to X reactivation by treatment with the DNA methyltransferase inhibitor 5-azacytidine. Knockdown of either gene was unable to reactivate the silent reporter allele ( Supplementary Fig. 5l), but neither was the known maintenance factor Dnmt1. Reversal of XCI during maintenance is difficult, so we employed a more sensitive system [76][77][78][79][80] where MEFs carry a silent multi-copy GFP transgene on their Xi by virtue of an Xist knockout in trans to the reporter (Xi GFP Xa ΔXist MEFs) 23,81 . Again, we found no reactivation of the silent reporter upon Smarcc1 or Smarca4 knockdown, despite positive controls producing readily detectable GFP ( Supplementary Fig. 5m), therefore providing no evidence for a role in the maintenance of XCI.
BAF complex localisation to the Xi is dynamic. To determine whether Smarca4 acts directly on the Xi to establish XCI, we performed immunofluorescence for Smarca4 together with H3K27me3, a marker of a later stage establishing Xi and found colocalization of Smarca4 and H3K27me3 in some cells but exclusion in others at day 6 of differentiation ( Supplementary Fig. 6a, b). Moreover, upon either Smarcc1 or Smarca4 depletion fewer cells are able to form H3K27me3 foci at day 6 of differentiation ( Supplementary Fig. 6c, d), suggesting XCI is unable to proceed to this point. As Smarca4 is absent from the Xi in terminally differentiated cells 12,82 , these data suggest that Smarca4 is present on the establishing Xi while it is required, but excluded upon completion. With depleted Smarcc1 or Smarca4, gene silencing fails at day 5 and Xist spreading and H3K27me3 deposition fails at day 6, suggesting that Smarcc1 and Smarca4 are active prior to these events. To assess the presence of Smarca4 on the establishing Xi at day 4 of differentiation, prior to H3K27me3 deposition, we performed ChIP-seq for Smarca4 with either a Smarcc1 knockdown or non-targeting control in both male and female mESCs, finding that Smarca4 was indeed enriched on the establishing Xi at this early stage of XCI with an abundance of peaks on the female X that cannot be accounted for simply by the presence of 2 X chromosomes compared with males (Fig. 6c, d). Smarca4 peaks were found at promoters on the X chromosome (and autosomes), some of which are only found in control females (Fig. 6e, f and Supplementary Fig. 6e-g). These data suggest Smarca4 may play a role at promoters on the establishing Xi at day 4 of differentiation. Interestingly, very little Smarca4 ChIP-seq signal was produced in female cells upon Smarcc1 knockdown, including at X-linked promoters, suggesting these proteins are acting together as the BAF complex to establish the Xi.
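The argument that the female-specific excess of Smarca4 peaks on the X cannot be explained by X-chromosome copy number alone can be illustrated with a simple per-chromosome peak count. The sketch below uses toy peak calls in place of real MACS2 output; the coordinates, counts, and chromosome naming are assumptions for illustration only.

```python
# Sketch: does the female X carry more Smarca4 peaks than two X copies alone would predict?
from collections import Counter

def peaks_per_chrom(peak_records):
    """peak_records: iterable of (chrom, start, end), e.g. parsed from a MACS2 narrowPeak file."""
    return Counter(chrom for chrom, _, _ in peak_records)

# Toy peak calls standing in for real ChIP-seq output (hypothetical coordinates)
male_peaks = [("chrX", 100, 400), ("chr1", 500, 900), ("chr1", 1500, 1800)]
female_peaks = [("chrX", 100, 400), ("chrX", 2000, 2300), ("chrX", 5000, 5400),
                ("chr1", 500, 900), ("chr1", 1500, 1800)]

male, female = peaks_per_chrom(male_peaks), peaks_per_chrom(female_peaks)
x_ratio = female["chrX"] / male["chrX"]          # a ratio of ~2 would be explained by two X copies
print(f"female/male X-peak ratio: {x_ratio:.1f}")  # values well above 2 suggest Xi-specific binding
```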
Fig. 5 legend: Screen in Xmas mESCs identifies Smarcc1 and Smarca4 as regulators of XCI. a Flow cytometry data at day 6 of Xmas mESC differentiation following shRNA transduction at day 2 against candidate genes (n = 2 hairpins per gene, error bars indicate S.D.). b Flow cytometry data normalised to Nons along timecourse of Xmas mESC differentiation following knockdown of Smarcc1, Smarca4 or Nons (n = 4 independent replicates with two shRNAs per gene, error bars indicate s.e.m. Two-way ANOVA, p value given). c Schematic of skewed XCI during differentiation of X FVB X CAST mESCs. d, e Allele-specific RNA-seq data of X FVB X CAST mESCs at day 6 of differentiation following knockdown of Smarcc1 (d) and Smarca4 (e). Each point represents the X FVB - X CAST log 2 expression for informative X-linked genes (n = 239-281 genes, error bars indicate s.e.m. Two-tailed Student's unpaired t test, p value given). f Graphs showing RNA-seq data designed to compare gene expression between X chromosome and autosomes. Each point represents an informative gene, X-linked genes in red, autosomal genes black. The x axis shows the ratio of expression from FVB to CAST (X FVB - X CAST log 2 ), therefore XCI is observed as a left shift of the red dots along the x axis. The y axis shows the ratio of expression from Nons compared with knockdown with Smarcc1.6 (Nons - Smarcc1.6 log 2 FC), therefore failure of XCI upon knockdown is observed as an upward shift along the y axis. Black dots give an indication of global trends in autosomal gene expression. Dotted lines indicate medians and percentages show the X-linked genes falling into each quadrant. g RNA-seq time course data showing the ratio of X FVB gene expression compared to X CAST (X FVB - X CAST log 2 ). Error bars indicate s.e.m. of informative genes (n = 246-271 genes), Two-tailed Student's unpaired t test, p value given. h Gene expression (log 2 rpm) of known XCI regulators, the difference between knockdown and control (subtract, Nons - knockdown) indicated. Scale bar represents both log 2 rpm and log 2 FC. i Heat maps showing Euclidean distance in gene expression (log 2 rpm) between Smarcc1 knockdown and Nons control along a differentiation timecourse of male or female mESCs. Source data are provided as a source data file.

The BAF complex depletes nucleosomes at Xi promoters prior to establishment of XCI. That the BAF complex binds to the Xi during the establishment phase of XCI and contributes functionally to establishment suggests it acts directly on the Xi as a nucleosome remodeller. Therefore, we profiled nucleosome occupancy in differentiating X FVB X CAST mESCs by allele-specific Nucleosome Occupancy and Methylome Sequencing (NOMe-seq) [83][84][85]. Nucleosome dynamics during XCI establishment have not been reported previously, so initially we concentrated on normal mESC differentiation (Nons control). The reduced coverage of allele-specific data precluded gene-specific analyses, so we averaged across X-linked genes, finding different nucleosome kinetics between X chromosomes. X CAST (preferential Xa) promoters are slightly open in mESCs, remaining so at day 4 of differentiation before opening further at day 5, then restricting again at day 6 (Fig. 6g, h). Similar kinetics were observed on autosomes (Supplementary Fig. 6h, j). Similar patterns were also recently seen in NOMe-seq data sets from equivalent stages of post-implantation embryos 86, suggesting promoter opening is common during the transition from pluripotency to lineage-restricted states. The X FVB (preferential Xi) followed different kinetics, where promoters were initially slightly open, similarly to X CAST, but became nucleosome depleted at day 4, a day earlier than X CAST (Fig. 6g, h), suggesting Xi promoters become accessible prior to gene silencing at day 5. The X FVB subsequently becomes progressively more nucleosome dense at both promoters and gene bodies, as expected to occur with gene silencing. No allelic differences were observed at autosomal genes (Supplementary Fig. 6h).
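In NOMe-seq, an exogenous GpC methyltransferase marks accessible DNA, so the fraction of methylated GpC sites in a promoter window acts as a proxy for nucleosome depletion. The sketch below shows that summarisation for two alleles; the input format, gene names, and calls are illustrative assumptions rather than the study's actual processing pipeline.

```python
# Minimal sketch of the NOMe-seq accessibility readout: fraction of methylated GpC sites per
# promoter window and allele, expressed as % GpC methylation (illustrative inputs only).
import pandas as pd

calls = pd.DataFrame({
    "gene":   ["GeneA"] * 6 + ["GeneB"] * 6,
    "allele": ["FVB", "FVB", "FVB", "CAST", "CAST", "CAST"] * 2,
    "gpc_methylated": [1, 1, 0, 0, 1, 0,   1, 0, 0, 0, 0, 1],  # 1 = accessible GpC site
})

accessibility = (
    calls.groupby(["gene", "allele"])["gpc_methylated"]
         .mean()        # fraction of accessible GpC sites per promoter/allele
         .mul(100)      # express as % GpC methylation, as in the Fig. 6g-j panels
         .unstack("allele")
)
print(accessibility)  # higher values = more nucleosome-depleted (open) promoters
```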
To address the functional role of nucleosome depletion prior to silencing, we produced a Smarcc1 knockdown NOMe-seq timecourse. Depleted cells were unable to open X FVB promoters at day 4 and instead followed kinetics similar to X CAST, consistent with the XCI failure observed by RNA-seq (Fig. 6i, j). These data suggest that an inability to open promoters at day 4 results in failure to establish the Xi and, together with our ChIP-seq data showing localisation of Smarca4 to promoters at this time, this appears to be directly mediated by the BAF complex. No effect of Smarcc1 depletion was observed on the X CAST or autosomes (Supplementary Fig. 6i, j), however, there are likely gene-specific abnormalities not detected, and potentially cell-type-specific effects that would not be revealed by our undirected differentiation method. NOMe-seq also detects methyl-cytosine and showed mESCs were globally hypomethylated, remaining so at promoters during differentiation, but becoming increasingly methylated at intergenic regions and gene bodies. Methylation of CpG islands on the Xi is a feature of XCI maintenance. As expected, given the timing of our samples, we did not observe such methylation occurring, and no difference was observed between the X FVB and X CAST nor upon Smarcc1 depletion (Supplementary Fig. 6k, l).
To validate our NOMe-seq data, we performed MARS-qPCR, a micrococcal nuclease-based method to assess site-specific nucleosome occupancy 87, in differentiating Xmas mESCs for a subset of Smarcc1-responsive or unresponsive promoters. Note that increased MARS-qPCR signal indicates decreased accessibility and is therefore directionally inverse to NOMe-seq signal. In agreement with NOMe-seq data, we found that all but one of the Smarcc1-responsive promoters were also less open upon Smarcc1 or Smarca4 depletion at day 4 of differentiation, whereas at day 6 they were more open, indicative of failed gene silencing (Supplementary Fig. 6m). Smarcc1-unresponsive promoters showed no effect. In this assay we also included the Xist and Tsix promoters. In agreement with our previous data suggesting Smarcc1 or Smarca4 are not required for Xist activation, we found no change in nucleosome occupancy at the Xist promoter upon depletion of Smarcc1 or Smarca4 at day 4 of differentiation. The Tsix promoter, however, was nucleosome enriched at day 4 of differentiation upon Smarcc1 or Smarca4 depletion, likely reflecting the need to silence Tsix on the Xi and suggesting that, similar to other genes on the Xi, promoter relaxation is required for this process. Interestingly, despite altered nucleosome occupancy Tsix expression is not affected by BAF depletion at the time points measured (Supplementary Fig. 5d).
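Although the exact MARS-qPCR arithmetic is not reproduced here, the direction of the readout can be illustrated with a generic MNase-protection delta-Ct calculation: more protection from digestion implies higher nucleosome occupancy and therefore lower accessibility. The Ct values below are hypothetical, and this is a standard delta-Ct sketch rather than the specific published formula.

```python
# Generic MNase-protection qPCR sketch, used only to illustrate the direction of the readout
# (higher protection = more nucleosome occupancy = less accessibility). Hypothetical Ct values.
def relative_protection(ct_digested, ct_undigested):
    """Fraction of template protected from MNase digestion at one amplicon."""
    return 2 ** -(ct_digested - ct_undigested)

# Example for one Smarcc1-responsive promoter at day 4 of differentiation
nons_occupancy    = relative_protection(ct_digested=26.5, ct_undigested=24.0)  # control
smarcc1_occupancy = relative_protection(ct_digested=25.0, ct_undigested=24.0)  # knockdown

print(f"Nons occupancy      : {nons_occupancy:.2f}")
print(f"Smarcc1 KD occupancy: {smarcc1_occupancy:.2f}  # higher = promoter failed to open")
```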
Discussion
We wished to use XCI in differentiating female mESCs as a model epigenetic system where we could learn more about features of epigenetic silencing. Despite being of high interest, complications with their in vitro culture have meant female mESCs are experimentally underutilised. To allow us to study XCI in the native female context, we created Xmas mESCs as a tractable fluorescent reporter system that requires minimal manipulation of these delicate cells. Xmas reporters enabled rapid and regular monitoring of XX vs XO cells in culture so that we could ensure a highly XX population of pluripotent female cells. The fluorescent reporters then also allowed us to monitor X inactivation during differentiation, to perform a screen for regulators of XCI establishment during normal female mESC differentiation. All previous screens for XCI regulators were performed either in differentiated cells for factors that maintain XCI 12,23,35,88-93 , or non-native systems that instead induce Xist out of context (from an autosome in male cells or prior to exit from pluripotency in female mESCs 10,11,75,94 ). These screens have been highly fruitful but will miss genes required only for the establishment of XCI or that require a differentiating cell state to be active. Although small in scale, our screen suggests Xmas mESCs will be suitable for high-throughput screening approaches.
The screen revealed a role for Smarcc1 and Smarca4 in the establishment of epigenetic silencing of the X chromosome. Smarcc1 (also known as Baf155) and Smarca4 (also known as Brg1) are members of the chromatin remodelling BAF complex, with Smarcc1 being the core subunit around which the complex forms 95 and Smarca4 one of a variable number of catalytic ATPase subunits 96,97 . The BAF complex contains different subunits dependent on cell type, with Smarcc1 and Smarca4 members of an mESC-specific complex, known as esBAF 70 . When mESCs are depleted of either Smarcc1 or Smarca4 they display reduced expression of core-pluripotency transcription factors, reduced self-renewal, and loss of pluripotency [68][69][70] . Here we reveal a role for esBAF during exit from pluripotency in females, with Smarcc1 or Smarca4 depletion causing failure of XCI. Deletion of Smarcc1 or Smarca4 in mice is lethal peri-implantation, and although consistent with XCI failure, male embryos also die, precluding conclusions about their XCI roles in vivo [98][99][100] . Two prior screens for regulators of establishment of XCI did not identify members of the BAF complex, however, these were performed in pluripotent mESCs with inducible Xist and so were unlikely to identify genes with dual roles in XCI and pluripotency, such as Smarcc1 and Smarca4 75,94 .
Fig. 6 legend: Smarcc1 opens Xi promoters in order for the establishment of XCI to proceed. a Xist RNA FISH in female mESCs at differentiation days 4, 5, 6 following knockdown with indicated hairpins. Xist staining green, DAPI blue. b Volume of Xist foci in a. Foci measured (n) are indicated. The line indicates median, box 25th to 75th percentile, error bars 5th to 95th percentile, dots indicate outliers. Two-tailed Student's unpaired t test, statistically significant p values only given. c-f Smarca4 ChIP-seq in male and female mESCs at differentiation day 4, with Smarcc1 knockdown or non-silencing control, c number of macs2 peaks, d coverage plot of X chromosome, e average read density at X-linked genes ±5 kb and f an example coverage plot. g Nucleosome occupancy (% GpC methylation) during female mESC differentiation determined by NOMe-seq averaged across genes on the X FVB and X CAST, displayed as a heatmap or smoothed histogram. h Accessibility of X FVB or X CAST promoters during female mESC differentiation determined by NOMe-seq. Line indicates median, box 25th to 75th percentile, error bars 5th to 95th percentile and dots indicate outliers. n = 74 to 261 informative promoters. One-tailed Student's unpaired t test without outliers, p value is given, non-significance (n.s). i Nucleosome occupancy (% GpC methylation) during female mESC differentiation determined by NOMe-seq averaged across all genes on X FVB upon Smarcc1 knockdown, displayed as a heatmap or smoothed histogram. j Accessibility of X FVB promoters upon Smarcc1 knockdown during female mESC differentiation determined by NOMe-seq. Line indicates the median, box 25th to 75th percentile, error bars 5th to 95th percentile and dots indicate outliers. n = 58 to 261 informative promoters. One-tailed Student's unpaired t test without outliers, p value is given, non-significance (n.s). k Model for BAF regulation of establishment of XCI. Open and closed chromatin depicted by nucleosome spacing, green lines represent Xist, black arrows transcription and red paddles marked 'Me' H3K27me3. This figure depicts timing during differentiation when key XCI events occur; Xist induction (day 3), BAF occupancy at promoters and promoter opening (day 4), failure of gene silencing (day 5) and failure of Xist cloud formation and H3K27me3 deposition (day 6). Source data are provided as a source data file.

A previous study found Smarca4 was required for maintenance of XCI in a somatic cell line 12, however, a later study by the same group reported that Xist repels Smarca4 from the Xi in order to maintain silencing 82, a somewhat contradictory finding. Here we also find no evidence for maintenance of XCI by the esBAF complex. Instead, the clear failure to establish silencing following Smarcc1 depletion inspired us to profile nucleosome occupancy during establishment of XCI. This time course revealed that Xi promoters become nucleosome depleted at the very earliest stages of gene silencing. Importantly, we functionally link promoter opening to gene silencing; cells with depleted Smarcc1 fail to open promoters and fail to establish the Xi, with the resulting Xi following a similar trajectory to the Xa, both in terms of nucleosome positioning and gene silencing. Our data suggest a model where esBAF is recruited to the future Xi to make it accessible, perhaps to silencing factors or to enable Xist spreading, with the BAF complex subsequently excluded once XCI is complete (Fig. 6k).
Therefore, Smarcc1 creates a chromatin state that allows establishment of silencing to proceed.
The timing at which we observe key silencing events is pertinent. Xist is induced and its spreading appears normal at day 4 of differentiation. Smarca4 is present at promoters of the Xi at day 4 and Smarcc1-mediated promoter opening occurs the same day, placing nucleosome depletion at promoters early in the ontogeny of epigenetic silencing. Other key events are downstream of promoter opening, occurring on subsequent days. Upon knockdown of Smarcc1 or Smarca4, we detect the failure of gene silencing from day 5 of differentiation, the first day X-linked gene silencing is measurable. On day 6, there is an observable failure to form the distinctive Xist cloud in Smarcc1- and Smarca4-depleted cells, and H3K27me3 deposition fails (Fig. 6k). This suggests an inability of Xist to spread or localise to the Xi late in the establishment of XCI, and the timing implies this is a consequence of failure to establish the Xi, rather than a direct requirement of nucleosome remodelling for Xist spreading. It is important to note that we cannot exclude further roles for the BAF complex either in the induction of Xist or in facets of establishment and maintenance of XCI that are untested here. We have not determined the mechanism by which Smarcc1 or Smarca4 are recruited to the establishing Xi; however, we do not believe this is likely to be through direct interaction with Xist. Firstly, Smarcc1 and Smarca4 do not possess classic RNA binding domains. Secondly, although a previous study found Smarca4 bound to Xist in vitro in differentiated cells, follow-up work by the same group showed Xist repelled Smarca4 12,82 , and other surveys of Xist interactors found no evidence of direct Smarca4 or Smarcc1 binding in cells relevant to the establishment of XCI 10,11,101 . A recent paper intriguingly found Spen was required early for the establishment of XCI and localised to promoters 102 , raising the possibility that Smarcc1 and Smarca4 may be recruited by Spen.
In summary, Xmas mESCs enabled the discovery of previously unknown requirements for establishment of the Xi, namely nucleosome remodeller-dependent chromatin opening that occurs prior to gene silencing. It remains unclear whether this is also a requirement for autosomal gene silencing; however, as all aspects of XCI gene silencing are also features of epigenetic silencing more broadly, this is a likely possibility. The Xmas mESC system provides a renewable resource of high-quality female mESCs and makes the study of XCI and other aspects of female-specific pluripotency more feasible than ever before.
Methods
Animal strains and husbandry. Animals were housed and treated according to Walter and Eliza Hall Institute (WEHI) Animal Ethics Committee approved protocols (2014.034, 2018.004). Temperature was maintained between 19°C and 24°C, with 40-60% humidity and a light/dark cycle of 14 h/10 h. Xmas mice are on a C57BL/6 background and were maintained as homozygous lines. D4/XEGFP mice were obtained from Jackson Labs and backcrossed onto the C57BL/6 background. Xist ΔA mice 81 were obtained from Dr. Graham Kay, Queensland Institute of Medical Research, and kept on a 129 background. Castaneus (CAST/EiJ) mice were obtained from Jackson Labs and maintained at WEHI. FVB/NJ mice were obtained from stocks held at WEHI. Oligonucleotides used for genotyping are provided in Supplementary Data 6.
Creation of Hprt knock-in alleles. The Hprt targeted alleles were generated by recombination in Bruce4 C57BL/6 mESCs. The targeting construct was produced by recombineering. This construct was designed to introduce an IRES-mCherry-polyA site or an IRES-eGFP-polyA site sequence 20 bp into the 3′-untranslated region (UTR) of Hprt, followed by a PGK-neomycin selection cassette flanked by Frt sites. Note, the mCherry used in the construct contained a synonymous mutation to remove the internal NcoI site. The targeting construct also introduced specific sites useful for the Southern blotting strategy used to validate recombination in targeted mESC clones. These sites were SphI and EcoRV at the 5′-end, after 20 bp of the 3′-UTR before the IRES, and EcoRV and NsiI at the 3′-end before the remainder of the 3′UTR.
Neomycin-resistant clones were screened by Southern blot for their 5′ and 3′ integration sites using PCR-amplified probes. The 5′ probe was amplified with the 5′-AAACACACACACACTCCACAAA-3′ and 5′-GCACCCATTATGCCCTAGATT-3′ oligos, and the 3′ probe was amplified with the 5′-GCTGCCTAAGAATGTGTTGCT-3′ and 5′-AAGCCTGGTTTTGGTAGCAG-3′ oligos. Each was cloned into the TopoTA vector. For the Southern blot, DNA was digested individually with EcoRV and SphI. The wild-type allele generated a 17.4 kb band with EcoRV digestion and the 5′ or 3′ probe, and a 9.2 kb and 8.3 kb knockin band for the 5′ and 3′ probe, respectively. The wild-type allele generated a 7.6 kb band with SphI digestion and the 5′ probe, compared with a 6.4 kb knockin band. The wild-type allele generated an 8.2 kb band with NsiI digestion and the 3′ probe, compared with a 6.7 kb knockin allele.
One Hprt-IRES-mCherry-pA-Frt-neo-Frt and one Hprt-IRES-eGFP-pA-Frt-neo-Frt correctly targeted clone was selected and used for blastocyst injection. The PGK-neo selection cassette was subsequently removed by crossing to the Rosa26-Flpe deleter strain 103 . The Hprt-IRES-mCherry and Hprt-IRES-GFP alleles were homozygous and maintained on a pure C57BL/6 background. Genotyping of mice was performed by PCR reaction using GoTaq Green Mix (Promega) and 0.5 µM of each primer, as given in Supplementary Data 6.
Culture method for mESCs. mESCs were maintained in suspension culture in 2i+LIF medium on non-tissue culture-treated plates at 37°C in a humidified atmosphere with 5% (v/v) carbon dioxide and 5% (v/v) oxygen. mESCs were passaged daily by collecting colonies and allowing them to settle in a tube for <5 min. The supernatant containing cellular debris was removed and mESC colonies were resuspended in Accutase (Sigma-Aldrich) and incubated at 37°C for 5 min to achieve a single-cell suspension. At least 4× volumes of mESC wash media were added to the suspension and cells were pelleted by centrifugation at 600 × g for 5 min, before plating in an appropriately sized non-tissue culture-treated plate, never flasks, in an excess of 2i+LIF media. Cells were assessed for XX karyotype regularly by flow cytometry.
Differentiation of mESCs. At least 2 days prior to inducing differentiation, mESCs in suspension were allowed to attach by plating onto tissue culture-treated plates coated with 0.1% gelatin. Differentiation was induced by transitioning cells from 2i+LIF media into DME HiHi media [DMEM, 500 mg/L glucose, 4 mM L-glutamine, 110 mg/L sodium pyruvate, 15% fetal bovine serum, 100 U/mL penicillin, 100 μg/mL streptomycin, 0.1 mM non-essential amino acids and 50 μM β-mercaptoethanol] in 25% increments every 24 h. During this time cells were passaged as required. On the day of transferring into 100% DME HiHi, ~10⁴ cells per cm² were plated onto tissue culture-treated plates coated with 0.1% gelatin. Cells were not passaged for the remainder of an experiment and media was changed as required.
Transduction of mESCs. Retrovirus was produced as described 33,104 and concentrated by precipitation with 4% PEG 8000 followed by centrifugation. mESCs were either seeded at 10⁵ cells per cm² on plates that had been coated with 0.1% gelatin, or at ~10⁵ cells per mL in suspension in 2i+LIF medium containing PEG-concentrated viral supernatant and 8 μg/mL polybrene. The next day medium was changed, and cells were selected with 1 µg/mL puromycin. shRNA sequences are given in Supplementary Data 6. Some of the shRNAs were validated in previous studies 23,[105][106][107] .
Teratoma formation. Xmas mESCs were pelleted and washed with phosphate-buffered saline (PBS) before passing through a 70 µm cell strainer. In all, 10⁵ cells were resuspended in 200 µl of 50% matrigel (Corning) in PBS and injected subcutaneously into either the left or right flank of CBA/nude mice. Teratomas were harvested after ~60 days, fixed with formalin, embedded in paraffin, and stained with Haematoxylin and Eosin.
Derivation and culture of MEFs. MEFs were derived from E13.5 embryos and cultured in Dulbecco's Modified Eagle Medium supplemented with 10% (v/v) fetal bovine serum at 37°C in a humidified atmosphere with 5% (v/v) carbon dioxide and 5% (v/v) oxygen.
qRT-PCR. Knockdown efficiency of shRNA retroviral constructs was determined using Roche Universal Probe Library (UPL) assays. Relative mRNA expression levels were determined using the 2^−ΔΔCt method, with Hmbs as a housekeeping control. Probe numbers and oligonucleotide sequences are provided in Supplementary Data 6. qRT-PCR specific for Xist and Tsix was performed as described 108 .
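For readers unfamiliar with this relative quantitation step, the following minimal R sketch illustrates the general form of the 2^−ΔΔCt (Livak) calculation with Hmbs as the housekeeping control; the Ct values and sample labels are hypothetical and not taken from this study.

```r
# Illustrative 2^-ddCt calculation for relative expression, normalising each
# sample to the Hmbs housekeeping gene and then to a reference sample
# (e.g. the non-silencing control). Ct values below are made up.
relative_expression <- function(ct_target, ct_hmbs, ct_target_ref, ct_hmbs_ref) {
  dct     <- ct_target - ct_hmbs          # delta Ct in the sample of interest
  dct_ref <- ct_target_ref - ct_hmbs_ref  # delta Ct in the reference sample
  2^(-(dct - dct_ref))                    # fold change relative to the reference
}

# Example: knockdown sample versus non-silencing control (hypothetical values)
relative_expression(ct_target = 26.1, ct_hmbs = 21.0,
                    ct_target_ref = 24.0, ct_hmbs_ref = 21.2)
```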
FACS analysis and sorting. Cells were prepared in KDS-BSS with 2% (v/v) fetal bovine serum, with dead cells and doublets excluded by size, and analysed using a BD LSRFortessa cell analyser. Cells were prepared similarly for sorting using a FACSAria. Flow cytometry data were analysed using FlowJo.
X reactivation assay. Xmas or Xi GFP Xa ΔXist MEFs were transduced with shRNA retroviruses, selected with 3-5 µg/mL puromycin, then treated with 10 µM 5-azacytidine 3 days post transduction. Cells were analysed by FACS 7 days post transduction. This assay was run exactly as previously described 23 .
MARS-qPCR. MARS-qPCR was performed as described 87 on Xmas mESCs differentiated for either 4 or 6 days. Primers used for qPCR are listed in Supplementary Data 6. Relative DNA abundance was determined using the 2^−ΔΔCt method, with an intergenic region on chromosome 9 used as a control for normalisation.
RNA-seq library generation and analysis. For the RNA-seq depicted in Fig. 3a, b, Xmas mESCs were derived and cultured as described above and compared with published data sets 49,50 . For the RNA-seq depicted in Fig. 3d, Xmas mESC lines were derived and differentiated using the methods described here, with samples collected daily for 8 days of differentiation and compared to published datasets 49,50 . For all Smarcc1 and Smarca4 knockdown RNA-seq in female mESCs (Fig. 5), we derived female mESCs by crossing FVB/NJ (FVB) dams with CAST/EiJ (CAST) sires. The resultant female mESC lines were expanded and then differentiated using our culture conditions. We favour this model of XCI, which utilises a natural skewing in XCI, over models of non-random XCI forced by genetic deletion, as we find those models lead to accelerated and non-random XO karyotypes that produce artefactual results in our hands. Cells were transduced with the indicated shRNAs at day 2 of differentiation and samples taken for RNA-seq at the indicated timepoints. For Smarcc1 and Smarca4 knockdown RNA-seq in male mESCs (Supplementary Fig. 4l), we derived male C57BL/6 mESCs and expanded and then differentiated them using our culture conditions. Again, cells were transduced with the indicated shRNAs at day 2 of differentiation, and samples were taken for RNA-seq at the indicated timepoints.
For all RNA-seq experiments, cells were harvested from plates by the addition of lysis buffer and RNA extracted with a Quick-RNA MiniPrep kit (Zymo Research). Sequencing libraries were prepared using the TruSeq RNA sample preparation kit (Illumina) and sequenced in-house on the Illumina NextSeq500 platform with 75 bp reads. For non-allele-specific RNA-seq (C57BL/6 samples), single-end sequencing was performed. Quality control and adapter trimming were performed with fastqc and trim_galore 111 , respectively. Reads were aligned to the mm10 reference genome using either tophat 112 or hisat2 113 . Expression values in reads per million (RPM) were determined using the Seqmonk package (www.bioinformatics.babraham.ac.uk/projects/seqmonk/), using the RNA-seq Quantitation Pipeline. Further data interrogation was performed using Seqmonk.
For allele-specific RNA-seq (FVBxCAST samples), paired-end sequencing was performed to improve haplotyping efficiency. Quality control and adapter trimming were performed with fastqc and trim_galore 111 , respectively. Reads were aligned to a version of mm10 with SNPs between FVB/NJ and CAST/EiJ n-masked, created using SNPsplit 114 , using either tophat 112 or hisat2 113 . Reads were haplotype phased using SNPsplit 114 and expression values in RPM determined using the Seqmonk package (www.bioinformatics.babraham.ac.uk/projects/seqmonk/), using the RNA-seq Quantitation Pipeline. For X chromosome-specific analysis, genes were determined to be informative when they had at least 50 mapped and haplotyped reads. Further data interrogation was performed using Seqmonk.
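As a rough illustration of the informative-gene filter described above, the R sketch below (with hypothetical gene names, counts and library size) converts haplotyped read counts to reads per million and keeps X-linked genes only when they carry at least 50 mapped, haplotyped reads; the published quantitation was performed in Seqmonk rather than with this code.

```r
# Toy example of the allele-specific quantitation and filtering step.
# `library_size` stands for the total mapped reads of a library; gene
# names and counts are invented for illustration.
rpm <- function(counts, library_size) counts / library_size * 1e6

x_genes <- data.frame(
  gene       = c("geneA", "geneB", "geneC"),
  fvb_reads  = c(120, 18, 300),   # reads phased to the FVB allele (future Xi)
  cast_reads = c(110, 12, 280)    # reads phased to the CAST allele (Xa)
)

x_genes$fvb_rpm     <- rpm(x_genes$fvb_reads,  library_size = 2e7)
x_genes$cast_rpm    <- rpm(x_genes$cast_reads, library_size = 2e7)
x_genes$informative <- (x_genes$fvb_reads + x_genes$cast_reads) >= 50
x_genes
```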
Gene set testing and differential gene expression analysis of male mESCs was performed by making two groups, pooling samples at all passages from either the traditional culture method or our improved method. Differential expression analysis between the two mESC culture methods was performed on gene-level counts with TMM normalisation, filtering out genes expressed in fewer than half of the samples, using edgeR v3.26.7 115,116 . Model-fitting was performed with voom v3.40.6 117 and linear modelling followed by empirical Bayes moderation using default settings. Differential expression results from voom were used for gene set testing with EGSEA v1.12.0 118 against the c5 Gene Ontology annotation retrieved from MSigDB, aggregating the results of all base methods except 'fry' and sorting by median rank.
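The following is a minimal sketch of the count-based workflow described above (TMM normalisation in edgeR, voom model fitting and empirical Bayes moderation in limma); the object names `counts` and `culture_method` are placeholders, and the exact filtering rule and design used in the study may differ.

```r
# Hedged sketch of the differential expression analysis between the two mESC
# culture methods. `counts` is a gene-by-sample count matrix and
# `culture_method` a factor with hypothetical levels "traditional" and "improved".
library(edgeR)
library(limma)

dge  <- DGEList(counts = counts, group = culture_method)
keep <- rowSums(cpm(dge) > 1) >= ncol(counts) / 2    # drop genes expressed in fewer than half the samples
dge  <- calcNormFactors(dge[keep, , keep.lib.sizes = FALSE], method = "TMM")

design <- model.matrix(~ culture_method)
v      <- voom(dge, design)                          # model-fitting with voom
fit    <- eBayes(lmFit(v, design))                   # linear modelling + empirical Bayes moderation
topTable(fit, coef = 2, number = 20)                 # top differentially expressed genes
```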
Distance matrices of differentiating mESCs were determined between gene expression profiles of either Smarca4 or Smarcc1 knockdown and the Nons control by calculating the Euclidean distance between log2 RPMs with the dist function in R v3.6.1.
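As a concrete illustration of this distance calculation (not the authors' exact script), the sketch below assumes a hypothetical genes-by-samples RPM matrix `expr`; the +1 offset before taking logs is an assumption to avoid log of zero, and the sample names used in the lookup are invented.

```r
# Illustrative Euclidean distance between log2 RPM expression profiles of
# knockdown and non-silencing control samples.
log_expr <- log2(expr + 1)
d <- dist(t(log_expr), method = "euclidean")   # dist() compares rows, so transpose to compare samples
as.matrix(d)["Smarcc1_kd_day4", "Nons_day4"]   # example lookup of one pairwise distance
```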
ChIP-seq library generation and analysis. ChIP-seq libraries were prepared from Xmas mESCs at differentiation day 4 using the ChIP-IT High Sensitivity kit (Active Motif) according to the manufacturer's instructions and 10 μL of antibody against Smarca4 (D1Q7F, Cell Signalling). Sequencing libraries were prepared using the TruSeq DNA sample preparation kit (Illumina) and sequenced in-house on the Illumina NextSeq500 platform with 75 bp single-end reads. Quality control and adapter trimming were performed with fastqc and trim_galore 111 , respectively. Reads were aligned to the mm10 reference genome using bowtie2 119 . Duplicate read removal, peak calling and metagene analysis were performed using the Seqmonk package (www.bioinformatics.babraham.ac.uk/projects/seqmonk/).
Immunofluorescence. Immunofluorescence was performed as described in ref. 120 , with modifications, on differentiating Xmas female mESCs at day 5 or 6. Cells were fixed with 3% (w/v) paraformaldehyde in PBS for 10 min at room temperature, washed three times in PBS for 5 min each and permeabilised in 0.5% (v/v) Triton X-100 for 5 min. Cells were blocked in 1% (w/v) bovine serum albumin (BSA) in PBS for 20 min, then incubated in primary antibody in the 1% (w/v) BSA overnight at 4°C in a humid chamber. Primary antibodies used were Smarca4 (1:100 ab110641, Abcam), Smarcc1 (1:100 #11956 S, Cell Signaling), H3K27me3 (1:100 07-449, Millipore or 1:100 C36B11, Cell Signalling Technology) and mCherry (1:100 NBP2-25158, Novus Biologicals). Cells were washed three times in PBS for 5 min each and then incubated with a secondary antibody diluted in 1% (w/v) BSA for 40 min at room temperature in a dark, humidified chamber. Secondary antibodies used were Donkey anti-rabbit IgG Alexa Fluor 555 conjugate (1:500, A315 Thermo Fisher) and Goat anti-rabbit IgG Alexa Fluor 647 conjugate (1:500, A21244 Thermo Fisher). For the simultaneous staining of Smarca4 and H3K27me3, the H3K27me3 (C36B11) rabbit mAb Alexa Fluor 647 conjugate (Cell Signalling Technology) was used after the secondary antibody was washed off, and incubated for 1 hour in a dark humidified chamber at room temperature. Nuclei were stained with DAPI (0.2 µg/mL) in PBS for 5 min at room temperature. Cells were mounted in Vectashield antifade mounting medium (Vector Laboratories) and visualised on LSM 880 or LSM 980 microscopes (Zeiss). For overlap analyses, image analysis was performed in a semi-automated fashion using a custom-written Fiji 121 macro, available here: https://github.com/DrLachie/smchd1_coloc. Manual segmentation of cells of interest was performed using the region manager. Auto-thresholding methods were used to segment the nuclei and the H3K27me3 region, and the mean intensity of Smarca4 was measured in both the whole nucleus and the region containing H3K27me3.
Xist RNA FISH. Xist RNA FISH was performed as previously described 105,120 on Xmas mESCs at differentiation day 4 or 5. Xist RNA was detected with a 15 kb cDNA, pCMV-Xist-PA, as previously described 122 . The Xist probe was labelled with Green-dUTP (02N32-050, Abbott) by nick translation (07J00-001, Abbott). The cells were mounted in Vectashield antifade mounting medium (Vector Laboratories) and visualised on LSM 880 or LSM 980 microscopes (Zeiss). Images were analysed using the open source software FIJI 121 .
NOMe-seq library generation and analysis. Female mESCs were derived by crossing FVB/NJ dams with CAST/EiJ sires. The resultant female mESC lines were expanded and then differentiated using our culture conditions. Cells were transduced with the indicated shRNAs at day 2 of differentiation and samples fixed in 1% formaldehyde at the indicated timepoints. NOMe-seq samples were prepared as described 83 , following their protocol for fixed cells. Bisulfite treatment was performed using the EZ DNA Methylation kit (Zymo Research) and sequencing libraries prepared with the Accel-NGS Methyl-Seq DNA Library Kit (Swift Biosciences) and sequenced in-house on the Illumina NextSeq500 platform with 75 bp paired-end reads. Quality control and adapter trimming were performed with fastqc and trim_galore 111 , respectively. Using bismark 123 , reads were aligned to a version of mm10 with SNPs between FVB/NJ and CAST/EiJ n-masked, created using SNPsplit 114 , then bisulfite converted using bismark. Reads were haplotype phased using SNPsplit 114 and methylation calls made with the bismark_methylation_extractor 123 . Methylation calls were filtered for informative CpG and GpC positions using coverage2cytosine with the --nome-seq flag. For analysis of GpC methylation, % methylation was determined at all covered GpC positions and then averaged over 25 positions and normalised using Enrichment normalisation with the Seqmonk package (www.bioinformatics.babraham.ac.uk/projects/seqmonk/). Both heatmap and line plots were produced by averaging over all gene positions in the indicated genomic regions, with line graphs additionally smoothed for clarity using Seqmonk.
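A simplified sketch of the GpC accessibility summary follows; the per-position data frame and the simple running mean are illustrative stand-ins for Seqmonk's averaging over 25 positions and enrichment normalisation.

```r
# Toy version of the GpC methylation (accessibility) summary: per-position
# % GpC methylation, then a running mean over windows of 25 positions.
# `gpc_calls` is a hypothetical data frame with methylated and total call counts.
gpc_pct  <- 100 * gpc_calls$methylated / gpc_calls$total
window   <- 25
smoothed <- stats::filter(gpc_pct, rep(1 / window, window), sides = 2)  # centred running mean
```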
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The data that support this study are available from the corresponding authors upon reasonable request. All next-generation sequencing data generated for this project have been deposited in the Gene Expression Omnibus (GEO) database under accession number GSE137163. Publicly available data were utilised in this study and are available from the GEO database under accession numbers GSE23943 and GSE67299. Source data are provided with this paper.
Unusual Presentation of a Rare Pneumothorax in a Patient With COVID-19 Pneumonia: A Case Report
Coronavirus disease 2019 (COVID-19) is a respiratory and systemic disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Since the start of the COVID-19 pandemic, pneumothorax (PTX) has only been reported as a complication of the virus-induced pneumonia in less than 1% of cases. The majority of them developed symptoms in the setting of either an underlying history of lung disease or being placed on a mechanical ventilator during admission. The authors report a unique case of PTX in a patient with a recent COVID pneumonia that did not fit the aforementioned clinical picture - a 41-year-old male with a complete collapse of his right lung who was previously admitted for COVID pneumonia with no known pulmonary history and was not intubated. A chest tube was placed with the resolution of the PTX and the patient is being monitored on the medicine floor.
Introduction
Since the World Health Organization (WHO) published their first disease outbreak news on what was then a "new virus" in January 2020, our understanding of the coronavirus disease 2019 (COVID-19) has changed drastically. As numbers across the globe continue to waver, so does our grasp on the extent of this disease. Reports from within the last year have detailed many of its associated complications, from myocarditis [1], to neurological dysfunctions [2], new-onset type I diabetes [3], and gastrointestinal bleeding [4]. Recognizing these complications has helped medical teams establish and continuously modify protocols for inpatient and outpatient management of COVID patients.
In this article, we present a new case of pneumothorax (PTX) in a 41-year-old male who was previously admitted for COVID-19 pneumonia. The patient was discharged from the first admission on home supplemental oxygen and reported significant improvement. More than a week later, he returned to the emergency room with complaints of worsening dyspnea. The patient was found to have a large right-sided PTX with a significant left mediastinal shift. A chest tube was placed and he was transferred to the intensive care unit (ICU) for further management of unstable laboratory results. Once stabilized, he was transferred to the medicine floor for monitoring PTX resolution. Of significant interest, this patient was not mechanically ventilated during his admission for COVID pneumonia and had no known history of underlying lung disease prior. His disease course is notably different from those who have been previously reported by authors exploring the association between COVID-19 infection and pneumothoraces.
Case Presentation
Informed consent was obtained from the patient himself. A 41-year-old obese male, a 23-pack-year former smoker with poor primary care follow-up and a past medical history of uncontrolled hypertension (HTN) and uncontrolled type II diabetes mellitus (DM), presented to the emergency department with worsening dyspnea. This patient had been admitted 20 days before for sepsis and acute hypoxic respiratory failure in the setting of COVID pneumonia without requiring intubation. The patient was not vaccinated against COVID-19 and had no prior history of lung disease. During the previous admission, the patient had bilateral rales on examination and was saturating at 91%-92% on room air, with improvement on a 3 L nasal cannula. His laboratory results at that time were significant for neutrophilic leukocytosis, borderline anemia, and significantly elevated inflammatory markers [D-dimer, C-reactive protein (CRP), IL-6, lactate dehydrogenase (LDH), and ferritin], as seen in Table 1.

Table 1: Labs were significant for leukocytosis, anemia, and abnormal inflammatory markers. WBC, white blood cell; RBC, red blood cell; HGB, hemoglobin; HCT, hematocrit; MCV, mean corpuscular volume; LDH, lactate dehydrogenase; CRP, C-reactive protein.

The patient was started on remdesivir and prednisone 10 mg for five days while on a 4 L Oxymask. He was then discharged six days later with supplemental oxygen as needed. The patient felt well at the time of discharge and had no complaints for about one week. He then had some difficulty breathing 8-10 days after his discharge. On the 11th day, his symptoms significantly worsened despite oxygen supplementation, prompting his return to the emergency room. As per the electronic medical record (EMR), the patient was not hypotensive but was tachycardic at 111 beats per minute upon arrival, only speaking few-word sentences. He was in obvious respiratory distress with absent breath sounds on the right side, but his oxygen saturation was stable at 98%-100% on a non-rebreather mask. A chest X-ray (Figure 1) revealed a "large right PTX with a complete collapse of the lung" and "shift of mediastinum and heart to left consistent with tension PTX", so a pigtail catheter was placed with significant improvement of the PTX during insertion. A CT of the chest without contrast was subsequently ordered, which confirmed the resolving PTX and revealed bilateral patchy pulmonary opacities as well as a small right pleural fluid collection, likely residual from his recent COVID pneumonia (Figure 2A-D). Due to poor primary care follow-up, no prior imaging studies were available to definitively rule out any underlying history of emphysematous disease. The patient was started on cefepime and vancomycin for empiric coverage. The patient tolerated the procedure well and was followed by the surgical team during his entire length of stay. Daily chest X-rays showed progressive resolution of the PTX (Figure 3), the pain was well controlled, and he denied any further respiratory distress. The patient's vital signs remained stable for the rest of his admission, comfortably saturating within the 95%-99% range on room air. On the 12th day of admission, the pigtail catheter was removed. The patient was breathing comfortably and saturating well on room air at 95%-96%, and was discharged with outpatient surgery follow-up.
FIGURE 1: Chest X-ray from the ED upon presentation.
Large right-sided pneumothorax with complete collapse of the right lung. Shift of mediastinum and heart to left consistent with tension pneumothorax. Proximal density in the left lung which may be related to tension pneumothorax or residual consolidation, which was noted on the patient's prior chest X-ray.

Image taken one day before discharge. Previously placed pigtail catheter in the right hemithorax has been removed. Trachea is midline. Cardiac silhouette is grossly unchanged. Patchy opacities in the left mid and lower lung field are unchanged from prior. No gross pneumothorax. Alveolar opacities in the right mid and lower lung fields appear stable to minimally improved. Suspect small right pleural effusion. Visualized osseous structures are grossly unremarkable.
Discussion
Recent studies have presented PTX as a rare (seen in less than 1% of cases) complication of COVID pneumonia [5][6][7]. However, this should raise concern because COVID-19-related pneumothoraces are associated with prolonged hospitalizations, increased likelihood of ICU admission, and death, especially among the elderly [8]. According to other case reports, patients who presented with this complication either had an underlying lung disease, were mechanically ventilated during the hospital course, or developed the PTX within the timeline of the same admission [9]. Our patient had no known underlying lung pathology (albeit a former chronic smoker), did not require intubation, and was stabilized for discharge from his COVID admission before developing the PTX more than one week later, making his case a unique point of interest.
Pneumothorax develops when air enters the pleural space as a result of disease or injury. The resulting partial or complete collapse of the lung parenchyma is due to a loss of negative pressure between the visceral and parietal pleural membranes. The two main classifications are spontaneous and traumatic, both of which can progress to tension PTX and can lead to life-threatening complications. Many proposed mechanisms explain the relationship between COVID-19 infection and the development of a PTX. These include inflammation from the "cytokine storm," parenchymal injury, ischemia, infarction, cough, or a pneumatocele rupture [10].
A frequently cited cause is barotrauma, mostly seen in patients with acute respiratory distress syndrome (ARDS) who were placed on mechanical ventilation [11]. This phenomenon was also reported by Zantah et al. in a retrospective study where four of the six COVID-19 patients who developed a PTX were intubated [9]. Our patient, however, did not require intubation, bilevel positive airway pressure (BiPAP), or continuous positive airway pressure (CPAP) airway assistance, which made his chances of getting a PTX less likely. Similarly, increased intrathoracic pressure in the setting of heavy coughing can also lead to the development of barotrauma complications, but he did not report any significant bouts of coughing during his hospital stay [12].
With regard to inflammation, poorer outcomes in COVID-19 infection are associated with clinical and laboratory findings of cytokine storm syndrome (CSS), which is characterized by hyperinflammation and multiorgan disease [13]. The virus binds to human angiotensin-converting enzyme 2 (ACE2) receptors on host cells, leading to the release of cytokines [14]. A genetic predisposition for an increased number of, and increased sensitivity to, ACE receptors could have caused a higher amount of viral infiltration and, in turn, extensive inflammatory damage leading to the respiratory collapse. Many patients with CSS will present with blood-count abnormalities such as leukocytosis, leukopenia, anemia, thrombocytopenia, and elevated ferritin and D-dimer levels. Additionally, serum inflammatory cytokine levels such as interferon-γ, interleukin-6, interleukin-10, and soluble interleukin-2 receptor alpha are usually elevated [15]. These are consistent with some of the lab findings we saw with our patient during his first admission (Table 1). Of note, some studies show a direct and strong relationship between CRP and severity of disease [16], whereas one study showed that higher interleukin-6 levels are strongly associated with shorter survival [17]. Additionally, a higher expression of a transmembrane serine protease 2, encoded by the TMPRSS2 gene and disproportionately found in nasal epithelial cells of individuals who self-identified as Black Americans, allows for a greater burden of COVID-19 viral entry and spread via the airway [18]. We therefore assume that, although the patient did not endorse a history of pulmonary disease, the inflammatory response to the virus resulted in enough alveolar damage to cause air to leak through the alveoli and escape into the pleural space [19][20]. These changes likely predisposed him to develop the PTX. Though we are not completely sure of the patient's medical background given his lack of follow-up, he could have had previously undiagnosed conditions that contributed to his overall clinical picture.
Given the paucity in reported cases of PTX in COVID-19 patients, our understanding of the predisposing factors for developing this complication is still limited and will require further exploration.
Conclusions
The late onset of this patient's PTX, together with the lack of characteristic barotrauma and of a significant pulmonary history, underscores the importance of considering every possible complication when caring for patients with COVID-19. As the management of this infection continues to evolve, so should the astuteness to recognize and prevent the course of pulmonary sequelae.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Using dialogues to increase positive attitudes towards COVID-19 vaccines in a vaccine-hesitant UK population
Recently, Altay et al. (Altay et al. 2021. J. Exp. Psychol.: Appl. (doi:10.1037/xap0000400)) showed that 5 min of interaction with a chatbot led to increases in positive COVID-19 vaccination attitudes and intentions in a French population. Here we replicate this effect in a vaccine-hesitant, UK-based population. We attempt to isolate what made the chatbot condition effective by controlling the amount of information provided, the trustworthiness of the information and the level of interactivity. Like Altay et al., our experiment allowed participants to navigate a branching dialogue by choosing questions of interest about COVID-19 vaccines. Our control condition used the same questions and answers but removed participant choice by presenting the dialogues at random. Importantly, we also targeted those who were either against or neutral towards COVID-19 vaccinations to begin with, screening out those with already positive attitudes. Replicating Altay et al., we found a similar size increase in positive attitudes towards vaccination, and in intention to get vaccinated. Unlike Altay et al., we found no difference between our two conditions: choosing the questions did not increase vaccine attitudes or intentions any more than our control condition. These results suggest that the attitudes of the vaccine hesitant are modifiable with exposure to in-depth, trustworthy and engaging dialogues.
Introduction

Much research attention has turned towards how to increase COVID-19 vaccination uptake. Owing to the urgency and impact of the problem, a multi-pronged attack is warranted, and thus research rightly spans many different strategies: pre-empting misinformation on social media [1], presenting information comparing COVID-19 symptoms to vaccination side-effects [2], presenting information on the timeline of vaccine development [2], different styles of myth-busting [3], the use of social norms [4], framing messaging in terms of individual risk preferences [5], and even chatbots [6], all with varying levels of success.
Although chatbots are usually used for aiding the completion of tasks, for example navigating website frequently asked questions (FAQs) or purchasing personalized items (train tickets and flights), interest is growing in their ability to create engaging, human-like dialogue. One way in which chatbots could be used for attitude change is their ability to deliver counterarguments to common questions or concerns. The use of chatbots to change attitudes has previously been explored in the context of genetically modified organism (GMO) attitudes [7]. The authors found that the chatbot increased positive attitudes towards GMO foods compared to two comparison conditions, (i) a short description of GMOs and (ii) a description of the consensus scientific view, but it did not have a positive effect compared to a third, counterargument condition. In this counterargument condition, participants were exposed to all GMO beliefs and counterarguments at once, rather than choosing which counterarguments to interact with. This suggested that providing access to counterarguments, rather than the choice of information, was the driving factor behind the success of the chatbot. The authors also found that the positive attitudes were mediated by time spent in the conditions, and that people spent on average longer in the counterargument condition. Crucially, they also found that in the chatbot condition, for three out of four arguments, the best predictor for selecting a given argument was how negative the participant's initial view towards it was, suggesting participants did seem to select arguments based on their concerns.
The idea that the choice of information is important chimes with research into people's apparent preference for choosing their own actions, making their own decisions and choosing what path to take, even foregoing monetary rewards to retain agency [8]. Domains as diverse as animal learning and robotic control have shown the importance of intrinsic motivations for agency, curiosity and control for understanding and enabling complex behaviour [9]. It is reasonable, therefore, to assume that a chatbot experience may be engaging and by turn convincing because it supports the participant in playing an active role in the dialogue, making choices about the aspects of the topic they explore.
As well as ensuring the information aligns with participants' interests, it is also crucial to communicate trust for successful public health communication [10]. Eiser et al. [11] studied public attitudes in response to communication about pollution where people lived, and found that those who did not trust scientific communication tended to doubt that the scientists had their interests at heart, rather than doubt their expertise. Furthermore, high trust in information from other sources, such as family and friends, was not based on a misperception of greater expertise, but on the (arguably accurate) perception that these groups had their interests at heart. Indeed, low trust in government is consistently one of the strongest predictors of vaccine hesitancy [12]. Evidently the effectiveness of communication interventions to increase vaccination intentions may be affected by how trustworthy the intervention is deemed to be. This paper replicates recent success in increasing positive attitudes towards, and intentions to take, COVID-19 vaccines by using a chatbot [6]. The chatbot study included participants from a random sample of French adults, whereas here we recruit vaccine-hesitant, UK-based adults only, and attempt to dissect what in particular it was about the chatbot that was effective. In particular, we wanted to test if the choice of information is a crucial factor driving the effectiveness of the chatbot. The French chatbot enabled participants to select frequently asked questions about COVID-19 vaccinations and then presented participants with answers to those questions. The chatbot would then present follow-up questions and further counterarguments. This was compared to a control condition in which the participants read 90 words of standard information from a government website. We wanted to investigate a variety of factors that may have been responsible for the increase in vaccination attitudes and intentions, such as (i) the amount of information, (ii) the time spent with the information, (iii) the interactivity or choice of information, and (iv) the trustworthiness of the information. The 'chatbot' condition manifestly allowed participants greater choice, but it also exposed participants to a greater amount of information, and they tended to spend more time engaged as a consequence. The chatbot condition also included content on the trustworthiness of the information being presented, whereas the control condition did not. As such, it is not clear which underlying factors drive the observed effect.
To address our question of what drove the increase in positive vaccination attitudes and intentions, the current study uses the same information as Altay et al. [6] but deploys two conditions in which the only difference is the interactivity of the information, i.e. the ability to choose which information to view. This allows us to directly test whether the interactivity of the information was a driving factor behind the success of the chatbot, by comparing the results of our control and choice conditions. The amount of information (number of words), time spent on the information and indicators of the trustworthiness of the information are the same in both our control and our choice conditions, allowing us to indirectly test whether these affect the success of the intervention, by comparing our results to Altay et al.'s [6] results.
Pre-registered hypotheses
Our hypotheses, predictions and analyses were pre-registered before data collection at https://osf.io/t4gav. All of our data, code and analysis scripts are available at https://github.com/lottybrand/clickbot_analysis.
We hypothesized that the choice condition would show a greater increase in positive attitudes towards COVID-19 vaccines, owing to the ability of participants to choose the information most interesting or important to them. A difference between conditions would be strong evidence that one of the important aspects of chatbots in changing attitudes is that they allow the participant to choose what information to engage with, aside from the trustworthiness and amount of information presented. This logic led to the following three pre-registered predictions: (i) increase in willingness to have a vaccine will be predicted by condition (those in the choice condition will be more likely to show an increase in their intention to take the vaccine); (ii) there will be an interaction between condition and time of ratings, in that vaccine attitudes will be most positive in the choice condition in the post-experiment ratings compared to the pre-experiment ratings; and (iii) the choice condition will be rated as more engaging than the control condition.
Participants
Based on [6], we recruited 716 adult participants from the UK. Using the recruitment platform Prolific, we were able to prescreen for UK-based participants aged between 18 and 65 who had previously answered that they were either 'against' the COVID-19 vaccinations, or 'neutral' towards the COVID-19 vaccinations (as opposed to 'for' COVID-19 vaccinations). As there were 657 participants registered to Prolific who answered 'against' at the time of recruitment, we attempted to recruit as many from this pool as possible. We only recruited participants who answered 'against' for the first seven days of data collection, as per our pre-registration. This led to 479 participants who answered 'against' in total, and a remaining 237 who answered 'neutral'. The mean age was 35, and 207 participants were male (502 female, two non-binary, two other, three prefer-not-to-say). Ten pilot participants were recruited on 26 April 2021 and their data used for pre-registering our analysis script only (they do not contribute data to the analyses presented here). The remaining participants were recruited between 14 and 24 May 2021.
Materials
The baseline questionnaire was almost identical to Altay et al. except that we opted to use a 7-point Likert scale as opposed to five points [13]. We asked participants to rate how strongly they agree with the following statements (from 1 = strongly disagree to 7 = strongly agree): I think COVID-19 vaccines are safe, I think COVID-19 vaccines are effective, I think we've had enough time to develop COVID-19 vaccines, I think we can trust those who produce COVID-19 vaccines, I think it is important to be vaccinated against COVID-19. We also asked participants if they had yet taken a dose of any COVID-19 vaccine (yes, no) and whether they would consider taking any future dose of an approved COVID-19 vaccine offered to them (yes, no, undecided).
The information we used for our two conditions was taken from the Altay et al. study. We translated the information into English using automated translation via Google Docs, proof-read it, updated it with the most recent information at the time using official UK National Health Service and Government sources (e.g. regarding the AstraZeneca blood clot news), and had the information verified and fact-checked again by an independent epidemiologist.

To mimic the main features of their chatbot (interactive choice of questions and appropriate follow-up answers), we grouped the vaccine information into five main questions: (i) is the vaccine safe? (ii) is the vaccine effective? (iii) has the vaccine been rushed? (iv) can we trust who makes the vaccine? and (v) is the vaccine necessary? Within each of the five main questions were four sub-questions. Thus, there were 20 question-answer dialogues altogether, and each participant was presented with four out of those 20. We modified each sub-question to consist of a short dialogue of between 200 and 500 words, largely avoiding repetition. Each dialogue included a short answer and two or three follow-up question-answer pairs. (These documents, along with a document recording the main changes made to each section compared to the Altay paper, can be found in the electronic supplementary material and on the online repository.) Thus, our participants experienced almost identical information to Altay et al., in dialogue format. As with Altay et al.'s study, the participant experience lacked some features of full interactive chat: in both Altay et al. and our study, participants were not able to freely type but chose questions from a given selection, and replies were not individually or uniquely composed. However, Altay et al.'s study did contain bot-like features, such as a symbol that the bot was 'typing', and a chat-like window, which were not present in our study.
Crucially, participants in both our control and our chatbot condition were presented with the following information about the trustworthiness of the study at the start of the condition: 'Why should I trust you? -We are two independent researchers, Lotty Brand and Tom Stafford, funded by a research council, with no links to pharmaceutical companies or other competing interests.
We are interested in learning about people's vaccine attitudes, in providing reliable information about vaccines, and learning about people's engagement with this information.
All of the information in this study has been gathered via scientific articles and reports from the past 30 years of vaccine research, as well as the most recent studies on COVID-19. The information has been checked by experts in immunology and epidemiology as of May 12th 2021.' By contrast, Altay et al.'s chatbot featured trust as one of the main question options in their chatbot condition (why should I trust you?), with a response similar to our wording above. If trust drives effectiveness of vaccine interventions then this could have driven the difference between their conditions, rather than the presence/absence of a chatbot per se. We therefore removed this question and answer from the dialogue options and inserted it at the beginning of both conditions, to ensure all participants would see it regardless of condition or choice of information. This ensured the communication of trustworthiness of our information was consistent across both conditions.
Our post-experiment questionnaire consisted of the same COVID attitude questions as the pre-experiment questionnaire, as well as questions on how engaging the experience was and how clear the information was. We also asked how often participants discuss vaccination with those who disagree with them and how often they actively learn about vaccines (e.g. via reading articles, listening to podcasts). Participants were finally asked if they would recommend our study to a friend (if yes, they were given the option to share a link via Twitter or Facebook and we recorded the proportion that did), whether they would take part again in a month's time, their age, gender and education level.
We included an attention check question in both the pre-experiment questionnaire and the post-experiment questionnaire ('We would like to check that you are paying careful attention to the information in this study. Please respond to the following item with "somewhat agree".'). We used both of these attention check answers alongside a free-response answer to check that participants were attending to the study information, i.e. we only included those that passed both attention checks and provided coherent, relevant information in the free-response text boxes (free-response text boxes were used to collect data for a different study question).
Procedure
Participants were randomly assigned to either the control or choice condition. Participants in both conditions provided informed consent (ethical approval provided by the University of Sheffield) before answering the pre-exposure questionnaire, interacting with the experimental material, and finally answering a post-exposure questionnaire.
In our control condition, participants viewed four randomly chosen dialogues of between 200 and 500 words each, each drawn from a different one of the five possible domains of vaccination concern: (i) is the vaccine safe? (ii) is the vaccine effective? (iii) has the vaccine been rushed? (iv) can we trust who makes the vaccine? and (v) is the vaccine necessary?
In our choice condition, participants were able to choose four dialogues in total, of between 200 and 500 words each, each drawn from a different one of the five possible domains of vaccination concern, as above. Each of the five domains contained four sub-questions; thus, participants made four choices, each time choosing one sub-question from within one of the five main domains. This ensured the amount of information that the participants were exposed to was the same as in the control condition. The information was displayed identically between the two conditions, in 200-500 word chunks at a time, so the information should be equally engaging and easy to read. This was also to ensure a similar engagement time across both conditions. These controls attempt to isolate any effect of choice of information (interactivity) as a cause of difference between the conditions.
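To make the dialogue selection concrete, the R sketch below (illustrative only, not the survey platform's actual code) shows one way the control condition's random selection could be implemented, with the 20 dialogues organised as five domains of four sub-questions and four dialogues shown per participant, each from a different domain.

```r
# Illustrative random selection of dialogues for the control condition.
set.seed(1)
domains   <- c("safe", "effective", "rushed", "trust", "necessary")
dialogues <- expand.grid(domain = domains, sub_question = 1:4)   # 20 dialogues in total

shown_domains <- sample(domains, 4)                       # four distinct domains per participant
shown <- do.call(rbind, lapply(shown_domains, function(d) {
  pool <- dialogues[dialogues$domain == d, ]
  pool[sample(nrow(pool), 1), ]                           # one randomly chosen sub-question per domain
}))
shown
```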
Analysis
Our hypotheses, predictions and analyses were pre-registered before data collection at https://osf.io/t4gav. All of our data, code and analysis scripts are available at https://github.com/lottybrand/clickbot_analysis.
All models were run using the Rethinking package in R for Bayesian models [14]. We include model parameters based on a priori pre-registered hypotheses. Throughout the manuscript, we report mean model coefficients with their 89% credible intervals (CIs). Model parameters were said to have an effect on the model outcome if their 89% CI did not cross zero. Eighty-nine per cent intervals are the default CI setting for the Rethinking package, as they discourage interpreting results in terms of binary null hypothesis significance testing [14]. Ninety-five per cent intervals would not alter the interpretation of our results. When relevant, we used model comparison to aid the interpretation of results. Models were said to be a better fit to the data if their widely applicable, or Watanabe-Akaike, information criterion (WAIC) value held the most weight out of all models tested.
Priors were chosen to be weakly regularizing, in order to control for both under-and overfitting the model to the data [14]. All models were checked for convergence using convergence criteria such as Rhat values and effective sample sizes, as well as visual inspection of trace plots.
In line with our pre-registration, we analysed whether participants increased their intention to be vaccinated using a Bayesian binomial regression model with an increase (either from 'no' to 'undecided', or from 'undecided' to 'yes') coded as a 1 (did increase intention), and all other instances as 0 (did not increase intention). We also analysed whether there was a reduction in the number of participants reporting that they would not get vaccinated, by modelling 'no' as 1, and all other responses as 0. The second approach was included after observing an increase in the percentage of participants changing from a 'no' to another category that was similar to the increase that Altay et al. found. Our analysis strategy differed slightly from Altay et al.'s, thus after we failed to find the condition effect they found, we performed an equivalent analysis to theirs. Both of these approaches are reported below.
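A hedged sketch of this pre-registered binomial model, written with the rethinking package, is given below; the data frame `d` and the variable names (`increased`, `choice`) are illustrative rather than the authors' exact data columns, and the priors shown are simply weakly regularising choices of the kind described above.

```r
# Hedged sketch of the binomial model for increased vaccination intention.
# increased: 1 = moved towards vaccination (no -> undecided, or undecided -> yes)
# choice:    1 = choice condition, 0 = control condition
# `d` is a hypothetical data frame with one row per participant.
library(rethinking)

dat <- list(increased = d$increased, choice = d$choice)

m_intent <- ulam(
  alist(
    increased ~ dbinom(1, p),
    logit(p) <- a + bC * choice,
    a  ~ dnorm(0, 1.5),          # weakly regularising priors
    bC ~ dnorm(0, 1)
  ),
  data = dat, chains = 4, cores = 4
)

precis(m_intent, prob = 0.89)    # posterior means with 89% credible intervals
```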
In line with our pre-registration, when modelling Likert scale vaccination attitude responses, as well as Likert scale engagement ratings, we used ordinal categorical multi-level models, with varying intercepts for who the rater was, and for Likert scale item. This allowed us to use each Likert scale item as the unit of analysis, rather than average over several items, in accordance with recommendations on how to treat Likert scale data (Liddell & Kruschke [15]). It also allows us to preserve and use all of the information and variation, and account for data clustering within items and individuals [14].
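For the attitude ratings, a corresponding sketch of an ordered-logit model with varying intercepts for participant and Likert item might look like the following; again, the variable names are placeholders and the exact parameterisation in the pre-registered scripts may differ.

```r
# Hedged sketch of the ordinal multi-level model for Likert attitude ratings.
# rating: ordered response (1-7); post: 0 = pre-exposure, 1 = post-exposure;
# choice: condition indicator; participant and item are integer indices.
m_att <- ulam(
  alist(
    rating ~ dordlogit(phi, cutpoints),
    phi <- bC * choice + bP * post + bCP * choice * post +
           a_id[participant] + a_item[item],
    a_id[participant] ~ dnorm(0, sigma_id),     # varying intercepts for raters
    a_item[item]      ~ dnorm(0, sigma_item),   # varying intercepts for scale items
    c(bC, bP, bCP) ~ dnorm(0, 1),
    c(sigma_id, sigma_item) ~ dexp(1),
    cutpoints ~ dnorm(0, 1.5)
  ),
  data = dat_att, chains = 4, cores = 4
)
```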
Pre-registered hypotheses
We found that the number of participants reporting that they did not intend to get the vaccine decreased after our experiment, regardless of condition (mean model estimate: −0.3630144; 89% CI: −0.5304443, −0.1945209). Against prediction 1, those in the choice condition were not more likely to increase their intention to have the vaccine compared to the control condition (mean: −0.2151165; 89% CI: −0.5657521, 0.144324). These shifts in intention can be seen in table 1 and are equivalent to those found in Altay et al.'s chatbot condition, where 36% of participants reported that they did not intend to get vaccinated, dropping to 29% afterwards. Across both our conditions, 53% reported that they did not intend to get vaccinated and this dropped to 44% afterwards (figures 1 and 2).
We also found that vaccine attitudes increased across both conditions after exposure to the dialogues, with no evidence that this increase differed between the choice and control conditions. This interpretation was confirmed by a model comparison approach, in which we compared models including parameters for condition, post-treatment rating, and an interaction between condition and post-treatment rating. The best-fitting model included only the experiment effect, with the worst-fitting models containing the interaction effect, and just varying intercepts (null model), suggesting that the experiment effect (change across both conditions) was most informative in predicting the difference in vaccination attitudes (see the electronic supplementary material). The increase in average vaccination attitudes can be seen in the violin plot in figure 2.
This change in vaccination attitudes is displayed in figure 1, which shows the raw vaccination attitude ratings before and after the experiment. Figures displaying the differences in vaccination attitudes within different scale items (e.g. are they safe, are they effective, have they been rushed, can we trust those who make them, are they necessary) can be found in the electronic supplementary material. These figures suggest that the majority of our sample agreed that vaccines are effective, but were undecided as to whether they are safe, and disagreed that we can trust those who produce them, that there has been enough time to produce them, and that they are necessary.
Against prediction 3, we did not find that the choice condition was rated as more engaging than the control condition (mean: 0.1581001; 89% CI: −0.0310088, 0.3483672).
Exploratory analysis
Overall, we found that the number of people who said 'no', they would not get a vaccination when one was offered to them, decreased after taking part in our experiment. Out of 571 participants reporting that they either would not get the vaccine, or were undecided about getting it, 93 reported being more likely to get vaccinated after the experiment (a 16% increase). Out of these 93, six changed directly from a 'no' to a 'yes', 25 went from an 'undecided' to a 'yes' and 62 went from a 'no' to an 'undecided'.
As [6] found a stronger effect for those who spent the most time with the chatbot, we wanted to check whether a condition effect was present in those who spent more time with the information. The median amount of time spent viewing the information was 4 min, and we found that participants who spent above the median amount of time viewing the information (between 4 and 16 min, so between 1 and 4 min per dialogue) were more likely to increase their vaccination attitudes compared to those who spent less time viewing the information. We found a positive interaction between those who spent above the median amount of time and their post-treatment rating (mean: 0.4778941; 89% CI: 0.3416222, 0.6054146). This was confirmed by model comparison, in which the model including the interaction effect, as well as a main effect for post-treatment rating, was the best-fitting model (details in the electronic supplementary material).
When looking only at those who spent above the median amount of time with the information, we again found no effect of condition on intention to get vaccinated (mean: −0.165634; 89% CI: −0.6404874, 0.2988044).
By contrast, participants who spent above the median amount of time viewing the information were not more likely to show an increase in their intention to get vaccinated compared to the rest of the participants (mean: 0.2068541; 89% CI: −0.1461917, 0.5704015).
Discussion
We ran an experiment to test if the choice of information is a crucial factor driving the effectiveness of a COVID-19 vaccination chatbot. We recruited 716 adults based in the UK who had previously said they were 'against' or 'neutral' towards COVID-19 vaccines. Based on a chatbot experiment conducted with French participants [6], we created 20 dialogues split across the five topics: how safe the vaccines are, how effective they are, whether there has been enough time to develop them, whether we can trust who makes them, and whether they are necessary for young and healthy people. Participants were randomly assigned to two conditions; in one, they could choose the dialogues they saw (choice condition), in the other, the dialogues were randomly displayed (control condition). Overall, we found that, in both conditions, participants' vaccination attitudes and intentions shifted in a more positive direction after reading the dialogues; we found no difference between the choice and control condition. Crucially, we found that participants who spent above the average (median) amount of time viewing the information (between 4 and 16 min, or between 1 and 4 min per dialogue) were more likely to increase their vaccination attitudes than those who spent below the average (median) amount of time viewing the information. This association between viewing time and increased change was not found for intentions.
Our results have implications in the light of recent interest in using chatbots or other interventions to increase vaccination uptake. We conclude that creating an engaging experience for participants that encourages them to spend quality time with the information is key for increasing positive attitudes towards vaccination.
The size of the shift in intentions we observed was similar to the results of the Altay et al. chatbot condition. In this sense, we provide a conceptual replication of their results. This is reassuring as we used identical information to theirs, only editing the information to be more appropriate for a UK-based audience and with the latest epidemiological information at the time. In both their and our experiment, we found an effect of time spent with the information, in that those who spent longer with the information were more likely to increase their vaccination attitudes. This has potentially important implications for those designing public health information interventions, in that how engaging the material is (and therefore how long participants are willing to attend to the information) is crucial.
In contrast with the Altay experiment, we found no difference in effectiveness between our conditions. However, there were crucial differences between our conditions and those of Altay et al. The most obvious is that all of our participants saw information of the same length and quality. The fact that our conditions were equally effective then suggests that Altay et al.'s chatbot may have been more effective than their control condition not because there is something inherently effective about chatbots per se, but simply because it delivered more information than the control condition, as supported by Altay et al. [7]. The second crucial difference between our experiment and Altay et al.'s is that we controlled for trustworthiness of information across both of our conditions. In Altay et al.'s chatbot experiment, the chatbot included a question 'why should I trust you?' in which, if participants chose it, they saw information about who the researchers were and what their motives were. Previous research suggests trust plays a huge role in how effective science communication is [16]. The information in Altay's control condition was therefore implicitly less trustworthy than the chatbot information, given the control condition had no source and was anonymous. By contrast, both of our conditions included the 'Why should I trust you?' information at the start of the experiment, before any of the other dialogues were displayed. This included who we (the authors) are, where the information came from and what our motives are. We also stated that we had no links to pharmaceutical companies or any other vested interests. The fact that both of our conditions included this information on trustworthiness, and that both of our conditions were similarly effective at increasing positive attitudes and intentions, implicitly suggests that being transparent about the source of information could be a crucial component for shifting vaccine attitudes and intentions. Of course, because our conditions did not differ in this way, this needs to be experimentally verified in future work. Nevertheless, the indirect comparison to Altay et al.'s results, in which the chatbot contained trust information and was more effective than the control that did not, further suggests communicating trust could be an important factor.
By design, the only difference between the current study's two conditions was that in the experimental condition participants had a choice over which information they saw, whereas in the control condition the information was shown at random. This suggests that having agency or 'choice' over the information one engages with may not be the most crucial aspect of why chatbots are effective. It suggests that addressing the concerns that are of most importance or interest to the participant may not be as crucial as previously thought, although it is important to note that all information was originally chosen to address common concerns of the vaccine-hesitant. Previous research suggests participants prefer choice and agency over information when given the choice, but perhaps this preference isn't enough to override the effectiveness of accurate and relevant information in general. Importantly, we did not find a difference in engagement ratings between our conditions, and participants spent a similar amount of time across both conditions. Again, when we compare to Altay et al.'s chatbot, we see that participants spent longer with their chatbot on average than with our information, and that time spent on the task is related to change in attitudes. These comparisons suggest that chatbots are most effective because of their ability to hold the attention of the participant, encouraging them to spend more time engaging with the information. Seemingly unimportant details of chatbots may account for their being more engaging than standard text: for example, the 'social' element of interacting with another 'agent' may be inherently more engaging, or simply the way the information is displayed, which is often more 'bitesize' and delivered one sentence at a time.
It could be argued that our results are simply demonstrating a regression to the mean, particularly because we recruited from one end of the vaccination attitude spectrum, and we saw similar effects across both conditions. However, after investigating this possibility, it seems unlikely given that those who were rated as 'against' vaccination as opposed to 'neutral' were actually more likely to stay the same in their reported intention to get the vaccine than the neutral participants and were less likely than the neutral participants to increase their intention to get vaccinated (i.e. the opposite of what you would expect with regression to the mean). A plot displaying this is included in the electronic supplementary material. Furthermore, not only are our percentage changes very similar to the effects in Altay et al.'s chatbot condition, whose participants were recruited from the general population and were not specifically against vaccination, but they are also much greater than those in previous studies: for example, when influenced by norms, participants only showed a 5% decline in rating themselves as 'undecided' or 'against' vaccination [4], whereas we found a 16% decline. Previous research also suggests that using question and answer (Q&A) style information is more effective than presenting pure fact-based information, again reporting similar effects to ours [3].
One aspect of our study worth noting is how the information was framed and how the participants were addressed throughout the study. Participants were asked if they were either against, for or neutral towards the COVID-19 vaccines, as it is worded in Prolific's pre-screening criteria. We thus used this as our wording and advertised the study as 'Your opinions on COVID-19 vaccinations'. Part of our study (results not included for this publication) was to ask participants to imagine and put forward the opposite side's reasons for and against vaccination (this was conducted after their second round of attitude and intention measures, so would not influence the results of this study). We also offered participants an opportunity to provide any other feedback they had in an 'anything else' box. These comments were insightful, and often hinted that participants were keen to have an outlet for their views. Anonymity perhaps allowed them to be honest, and we also noted many thanked us for not referring to them, or anyone, as 'anti-vaxxers'. We refrained from using this term throughout, as it is often used to stereotype or villainize those who hold those views. We wish to follow up these comments, respond where necessary and share them with the rest of the research community to help further destigmatize those who are vaccine hesitant and help create an atmosphere of constructive dialogue and conversational receptiveness about these issues [17]. Comments are included in the Shiny App available at https://lottybrand.shinyapps.io/vaccineComments/.
Overall, we suggest it is important when designing science communication interventions to control for the amount of information, the time spent with the information, and the trustworthiness of the information, and consequently to ensure a high level of engagement with the information. Simply providing Q&A style dialogues appeared to be as effective as delivering the same information via a chatbot, and more effective than the norm-based or simple fact-based interventions used in previous studies.
Ethics. This study was granted ethical approval by the University of Sheffield's ethics board on 21 April 2021 (application number 038906).
Data accessibility. All of our data, code and analysis scripts are available at https://github.com/lottybrand/clickbot_analysis.
Data are also provided in the electronic supplementary material [18].
Uniform Spanning Forests and the bi-Laplacian Gaussian field
We construct a natural discrete random field on $\mathbb{Z}^{d}$, $d\geq 5$ that converges weakly to the bi-Laplacian Gaussian field in the scaling limit. The construction is based on assigning i.i.d. Bernoulli random variables on each component of the uniform spanning forest, thus defining an associated random function. To our knowledge, this is the first natural discrete model (besides the discrete bi-Laplacian Gaussian field) that converges to the bi-Laplacian Gaussian field.
Introduction
The uniform spanning forest is an extensively studied combinatorial object [2], [17]. The uniform spanning forest measure on Z^d can be defined in two equivalent ways: either as the weak limit of the uniform spanning tree measures on a sequence of finite subgraphs that exhaust Z^d, or as the output of Wilson's algorithm [21]. Detailed descriptions of these constructions are given in Section 2.2.
In this paper, we study the following random field associated with the USF on Z^d, d ≥ 5. It is known that the USF on Z^d, d ≥ 5, has infinitely many tree components a.s. Conditioned on the configuration of the whole forest {T_i}_{i∈N}, we assign i.i.d. Bernoulli random variables to each tree T_i, taking the value 1 with probability 1/2 and −1 with probability 1/2. We define a random function (which we call the spin of the spanning forest) h_1 from Z^d to {±1}, such that for any x ∈ Z^d, h_1(x) equals the random variable associated with the tree component containing x. This random function is constructed in a similar spirit to the Edwards-Sokal coupling of the FK-Ising model [6].
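As a concrete illustration of this construction (our own addition, not part of the paper's exposition), a minimal Python sketch of sampling the spin function h_1 from a given forest decomposition is shown below; the representation of the forest as a list of vertex collections is an assumption made purely for the example.

```python
import random

def sample_spin_field(tree_components):
    """Assign an i.i.d. fair +/-1 spin to each tree of a spanning forest.

    tree_components: iterable of vertex collections, one per tree T_i.
    Returns h1, a dict mapping each vertex x to the spin of its tree.
    """
    h1 = {}
    for component in tree_components:
        spin = random.choice([1, -1])  # Bernoulli(1/2) spin shared by the whole tree
        for vertex in component:
            h1[vertex] = spin
    return h1
```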
We would like to study the scaling limit of h_1. For ε > 0, consider the lattice εZ^d and let h_ε denote the suitably rescaled version of h_1 on this lattice. It turns out that the limiting field of h_ε is a generalized Gaussian field (a random generalized distribution whose integral against any C_0^∞ test function is Gaussian) closely related to the bi-Laplacian operator ∆^2, which we call the bi-Laplacian Gaussian field. We will give the precise definition of the bi-Laplacian Gaussian field in Section 2.1; here we offer an informal description. Intuitively, a bi-Laplacian Gaussian field is a generalized Gaussian field h whose covariance structure is given by Cov[h(x), h(y)] = |x − y|^{4−d}. The rigorous formulation of this definition is given in Definition 3 of Section 2.1, where we also discuss its relation to bi-Laplacian equations; this relation is analogous to that of the Gaussian free field (GFF) to the Laplace equation (for the definition and properties of the Gaussian free field, see the survey [20]).
Here we point out that the bi-Laplacian Gaussian field falls into a bigger family of Gaussian fields called the fractional Gaussian fields (FGF), which are defined and studied in [16]. The relation of the FGF to the fractional Laplacian operator (−∆)^s is analogous to that of both the Gaussian free field and the bi-Laplacian Gaussian field to their respective operators. Besides the GFF and the bi-Laplacian field, the family of FGF also contains white noise, the log-correlated Gaussian field and the fractional Brownian field [1] (a higher dimensional generalization of fractional Brownian motion).
The main result of this paper is that h_ε converges to h as random variables taking values in the space of generalized functions. To be precise, we have the following theorem.
The constant c_d can be computed from the non-intersection probability of a simple random walk and two loop-erased random walks, see Lemma 10. Gaussian fluctuations have been observed and studied for numerous physical systems. For systems in the critical regime, one expects the spatial or space-time fluctuations to be described by the Gaussian free field. Typical examples come from domino tilings [9], random matrix theory [3][18] and random growth models [4]. In the subcritical regime, where the correlation decays faster, one expects Gaussian white noise fluctuations (see the example of the edge process of spanning tree models in [8]). Our model can be viewed as a natural example in the supercritical regime.
The works [19], [11], [12] study the discrete bi-Laplacian Gaussian field (in the physics literature, this is known as the membrane model), whose continuous counterpart is clearly the bi-Laplacian Gaussian field. Our model can be viewed as another natural discrete object that converges to the bi-Laplacian Gaussian field. In the one-dimensional case, Hammond and Sheffield constructed a reinforced random walk with long-range memory [7], which can be associated with a spanning forest attached to Z. Our construction can also be viewed as a higher dimensional analogue of such "forest random walks".
Finally, we remark on the universality features of our model. We can replace the i.i.d. Bernoulli random variables by general i.i.d. random variables with mean 0 and variance 1, and obtain the same scaling limit. The same argument also goes through if we replace Z^d by other regular lattices, although the constant c_d is lattice dependent. See Remark 11.
The strategy of the proof is the moment method. Since (h, ϕ) is a Gaussian random variable, to prove convergence in distribution we only need to prove that all the moments of (h_ε, ϕ) converge to the corresponding moments of (h, ϕ). The paper is organized as follows. In Section 2, we give the necessary background on the uniform spanning forest and the bi-Laplacian Gaussian field. In Section 3, we prove the convergence of the second moment. It involves giving the precise asymptotics of the probability that two vertices are in the same tree of the USF. In Section 4, we prove the convergence of higher moments. In Section 5, we discuss some further questions.
Bi-Laplacian Gaussian field
In this section, we will give a precise definition of the bi-Laplacian Gaussian field, which is a random variable taking values in the space of generalized functions (denoted by (C_0^∞(R^d))'), or, equivalently, a probability distribution on (C_0^∞(R^d))'. For basic facts on generalized functions, we refer to Appendix B of [15].
We first review some standard facts on white noise [10]. White noise is the unique probability distribution on (C_0^∞(R^d))' such that if W is a random generalized function with this distribution, then for any ϕ ∈ C_0^∞(R^d), (W, ϕ) is a centred Gaussian variable with variance (ϕ, ϕ). Here (·, ·) denotes the pairing of a generalized function and a compactly supported smooth function.
Formally speaking, we can say that W is a Gaussian process indexed by R^d whose covariance structure is given by E[W(x)W(y)] = δ(x − y). Another natural interpretation is that W is a standard normal distribution on the Hilbert space L^2(R^d). We give two equivalent definitions of the bi-Laplacian Gaussian field, which only differ by scalar multiplication. We can define the bi-Laplacian Gaussian field for all dimensions in a unified way, as in [16], but to avoid technical details for d ≤ 4, we only define the field for d ≥ 5, which is sufficient for the purpose of this paper. From now on, we always assume d ≥ 5.
Definition 2. The bi-Laplacian Gaussian field is the unique probability distribution on (C_0^∞(R^d))' such that if h is a random generalized function with this distribution, then ∆h is a white noise on R^d. Here ∆ is a well-defined operator on (C_0^∞(R^d))' by integration by parts [15].
For the moment, we assume that there is a unique random generalized function satisfying Definition 2 or 3, which we will explain later. Now we explain the equivalence of the two definitions. We note that if ∆h = W, then (h, ∆^2 f) = (∆h, ∆f) = (W, ∆f) is a centred Gaussian of variance (∆f, ∆f) = (f, ∆^2 f). For ϕ ∈ C_0^∞(R^d), we can solve the bi-Laplacian equation (2), for example using the Fourier transform. This is the place where our assumption d ≥ 5 plays a role, since otherwise not all functions in C_0^∞(R^d) have a bi-Laplacian inverse. There will be some extra assumptions on ϕ when d ≤ 4, as in the case of the two-dimensional Gaussian free field in the whole plane [20]. Therefore the variance of (h, ϕ) is given by formula (3). The presence of ∆^2 is the reason we call the field the bi-Laplacian Gaussian field. In the case of the Gaussian free field, we want to solve a Laplace equation; here we want to solve a bi-Laplacian equation (2). The bi-Laplacian equation is a standard object in potential theory and has been studied for many years; for information on this equation we refer to [5] and references therein. From [5], for d ≥ 5, the fundamental solution of the bi-Laplacian equation is C_d |x − y|^{4−d}, and we can use the fundamental solution to solve equation (2), yielding formula (4), where C_d is a constant depending on d. From (3) and (4) we see that Definitions 2 and 3 of a bi-Laplacian Gaussian field only differ by a constant √C_d.
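The display equations referred to as (3) and (4) are not reproduced above; assuming Definition 3 normalizes the covariance kernel to $|x-y|^{4-d}$, as stated in the introduction, the variance functional they encode should take the following form (our reconstruction, not a quotation of the original):

$$\operatorname{Var}\big[(h,\varphi)\big] \;=\; \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} |x-y|^{4-d}\,\varphi(x)\,\varphi(y)\,\mathrm{d}x\,\mathrm{d}y, \qquad \varphi\in C_0^\infty(\mathbb{R}^d),$$

with the version coming from Definition 2 differing by the multiplicative constant $C_d$ carried by the fundamental solution.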
As mentioned before, the existence and uniqueness in the definition of the bi-Laplacian field are not clear a priori. The rigorous argument for existence and uniqueness is actually the same as in the definition of white noise [10]. We now sketch the construction of white noise as a random distribution following [10]. The definition of a bi-Laplacian Gaussian field will follow by a similar argument.
Definition 4 (Countably-Hilbert space). Let V be an infinite dimensional vector space over C, and let {|·|_n}_{n≥1} be a collection of inner product norms on V. Define the metric d on V by d(u, v) = Σ_{n≥1} 2^{−n} |u − v|_n / (1 + |u − v|_n). If V is complete with respect to d, then V is called a countably-Hilbert space.
Definition 5 (Nuclear spaces). Let V be a countably-Hilbert space associated with an increasing sequence {|·|_n}_{n≥1} of norms, that is, |·|_1 ≤ |·|_2 ≤ …, and let V_n be the completion of V with respect to the norm |·|_n. We say that V is a nuclear space if for any m there exists n ≥ m such that the inclusion map of V_n into V_m is a Hilbert-Schmidt operator. For a proof that C_0^∞(R^d) is such a space, see [10]. If V is a topological vector space, we denote by V' the dual of V (that is, the space of continuous linear functionals on V). We say that a complex-valued function Φ on V is the characteristic function of a probability measure ν on V' if Φ(v) = ∫_{V'} e^{i(x, v)} dν(x) for all v ∈ V. For a proof of the following theorem, see [10].
Theorem 7 (Bochner-Minlos theorem). Let V be a real nuclear space. Then a complex-valued function Φ on V is the characteristic function of a probability measure ν on V' if and only if Φ is continuous, Φ(0) = 1, and Φ is positive definite, that is, Σ_{j,k=1}^{n} z_j z̄_k Φ(v_j − v_k) ≥ 0 for all v_1, …, v_n ∈ V and z_1, …, z_n ∈ C. Furthermore, Φ determines ν uniquely.
White noise will be defined as a Gaussian measure on the space of tempered distributions. To apply Theorem 7, we first note that C_0^∞(R^d) is a nuclear space and that the function C(ϕ) = exp(−(ϕ, ϕ)/2) is continuous, positive definite, and satisfies C(0) = 1. Hence Theorem 7 implies that there is a unique probability measure µ on (C_0^∞(R^d))' having C as its characteristic function, which we define as white noise W. In particular we have the relation E[exp(i(W, ϕ))] = exp(−(ϕ, ϕ)/2), which implies that for every ϕ ∈ C_0^∞(R^d) the random variable (W, ϕ) is a mean zero Gaussian with variance (ϕ, ϕ). Given f, g ∈ C_0^∞(R^d) we may use polarization to see that E[(W, f)(W, g)] = (f, g). We may rewrite this inner product as a double integral against δ(x − y) and say that W has covariance kernel δ(x − y).
To show the existence and uniqueness of the bi-Laplacian Gaussian field, we only need to find its characteristic function and apply Theorem 7. From Definition 2, it is easy to see that the characteristic function of a bi-Laplacian Gaussian field is C(ϕ) = exp(−(∆^{−1}ϕ, ∆^{−1}ϕ)/2). We claim that this C is a continuous, positive definite functional on C_0^∞(R^d) that satisfies C(0) = 1.
Proof. The continuity of C(ϕ) (continuity is taken with respect to the norm (∆^{−1}ϕ, ∆^{−1}ϕ)^{1/2}) follows from the Fourier transform and the fact that |x|^{−4} is locally integrable in R^d for d ≥ 5. Further, the statement C(0) = 1 is also clear. All that is left is to check that C(ϕ) is positive definite.
Let ϕ_1, …, ϕ_n ∈ C_0^∞(R^d) be a set of functions, and define V to be the subspace of C_0^∞(R^d) spanned by {ϕ_i}. Define µ_V to be the Gaussian measure on V with covariance matrix given by the entries (∆^{−1}ϕ_i, ∆^{−1}ϕ_j). Applying Bochner's theorem for probability measures on R^n shows us that C is positive definite.
Now, applying the Bochner-Minlos theorem, we obtain the existence and uniqueness of the bi-Laplacian Gaussian field.
Remark 9. In [16], the authors define the so-called fractional Gaussian field in the following way. Formally speaking, the d-dimensional fractional Gaussian field with index s (denoted by FGF_s^d) is given by (−∆)^{−s/2} W. Thus the bi-Laplacian Gaussian field is FGF_2^d.
Uniform spanning forest
Here we review some facts about the uniform spanning forest model (USF) on Z^d. Most of the facts extend to general graphs as well. For more background, we refer the reader to the survey [2]. Given a finite graph G ⊂ Z^d, the (free) uniform spanning tree (UST) measure is the probability measure that assigns equal probability to the spanning trees of G. When all the vertices of Z^d \ G are contracted to a single vertex, the corresponding measure is called the wired spanning tree measure.
The uniform spanning forest measure on Z^d is the weak limit of uniform spanning trees on a sequence of exhausting subsets. Pemantle proved that the limits of the free and wired spanning trees coincide [17], thus the USF is uniquely defined and has a trivial tail.
An alternative way to construct the USF is Wilson's algorithm [21], which we now describe. For any path P in Z^d that visits each vertex at most finitely many times, the loop erasure of P is constructed by erasing the cycles in P in chronological order. Fixing any ordering (v_1, v_2, …) of the vertices, a growing sequence of forests {F_i}_{i∈N} can be constructed inductively. Let F_0 = ∅. Suppose the forest F_i has been generated. Start a simple random walk (SRW) at v_{i+1}, stop it at the first time it hits F_i if it does, and otherwise let it run indefinitely. F_{i+1} is defined by adding the loop erasure of this SRW to F_i (for d ≥ 3, SRWs are transient, so the loop erasure of a SRW is well defined a.s.). The algorithm yields ∪_{i∈N} F_i; it is shown in [2] that its distribution is independent of the ordering of the vertices, and is the USF.
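For readers who prefer pseudocode, a minimal Python sketch of Wilson's algorithm in the simpler finite, rooted setting is shown below (the infinite-volume procedure described above lets the walk run indefinitely when it never hits the current forest); the function name and the adjacency-list representation are our own choices, not taken from the paper.

```python
import random

def wilson_uniform_spanning_tree(adj, root):
    """Sample a uniform spanning tree of a finite connected graph via Wilson's algorithm.

    adj: dict mapping each vertex to a list of its neighbours.
    root: the vertex at which the tree is rooted.
    Returns `parent`, a dict mapping every non-root vertex to its parent in the tree.
    """
    in_tree = {root}
    parent = {}
    for start in adj:                       # fixed (arbitrary) ordering of the vertices
        v = start
        while v not in in_tree:
            # Random walk step; keeping only the *last* exit direction from each
            # vertex performs the loop erasure implicitly ("cycle popping").
            parent[v] = random.choice(adj[v])
            v = parent[v]
        v = start
        while v not in in_tree:             # add the loop-erased branch to the tree
            in_tree.add(v)
            v = parent[v]
    return parent

# Example: a uniform spanning tree of a 4 x 4 grid, rooted at (0, 0).
grid = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < 4 and 0 <= j + dj < 4]
        for i in range(4) for j in range(4)}
tree = wilson_uniform_spanning_tree(grid, root=(0, 0))
```

Retaining only the last exit direction from each vertex is the standard way to realize the chronological loop erasure without storing the full walk.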
Based on Wilson's algorithm and properties of loop-erased random walks (LERW), it is shown in [17] that on Z^d, the USF is a single tree a.s. if d ≤ 4, and has infinitely many tree components a.s. when d ≥ 5. The probability that two points are in the same tree is the intersection probability of a SRW and a LERW. This will be used in Lemma 10. Also, when 2 ≤ d ≤ 4, the USF has a single topological end a.s. (i.e. removing any vertex disconnects the tree into two components, one of which is infinite); when d ≥ 5, each of the infinitely many trees a.s. has at most two topological ends.
Second moment
where the remaining term lim ε→0 R ε = 0 almost surely.
So we only need to show that
As explained in the introduction, we use the moment method. Since the first moment is just 0, we start from the second moment, which is the focus of this section. Let p(x, y) = P[x, y are in the same tree]; then E[h_1(x)h_1(y)] = p(x, y) × 1 + (1 − p(x, y)) × 0 = p(x, y).
As explained in Section 2.2, the uniform spanning forest can be generated using Wilson's algorithm on Z^d. Therefore, from Lemma 10, which we will prove in Section 3.1, we know that c_d is a constant which we cannot evaluate explicitly, because we cannot evaluate the number q in (5) of Section 3.1.
Since ϕ ∈ C_0^∞(R^d), we may pass to the limit by the dominated convergence theorem. From Section 2.1, we recognize that the right-hand side of the above formula is just c_d times the variance of (h, ϕ) as defined in formula (3) of Section 2.1.
Asymptotic correlation
In this section we explicitly determine the asymptotics of p(x, y) = p(0, y − x). This requires evaluating the intersection probability of a SRW started at y − x and a LERW started at 0. Using the bounds for intersections of SRWs, Pemantle showed p(0, y − x) = O(|y − x|^{4−d}). Here we show this quantity actually converges in the scaling limit. This requires a more careful estimate of SRW hitting probabilities.
Let S̃_1 be the time reversal of S_1 from τ to 0, S̃_2 be the time reversal of S_2 from ρ to 0, and S̃_3 be S_1 from τ to ∞. Now, for simplicity of notation, we assume that S_1, S_2, S_3 are three independent SRWs starting at w, and let G(·, ·) be the Green function of the SRW on Z^d. We now show that the relevant non-intersection probability converges; together with the fact that the discrete Green's function converges to the continuous whole-space Green's function [14], so that G(z, w) = O(|z − w|^{2−d}) for z, w macroscopically apart, this implies Lemma 10.
To prove the upper bound, we fix small ε > 0 and large R > 0. Let w be in the range |w| ≥ ε|z|, |w − z| ≥ ε|z|, and let j, k be greater than |z|^{3/2}. Let σ_i be the last time when S_i hits the ball B_R centred at w. For fixed R and w, on the high probability event that σ_1 ≪ j and σ_2 ≪ k, as |z| → ∞, the Radon-Nikodym derivative of the joint distribution of {S_1[0, σ_1], S_2[0, σ_2], S_3[0, σ_3]} conditioned on S_1(j) = 0, S_2(k) = z with respect to the original unconditioned one tends to 1, for all (w, j, k) satisfying the conditions prescribed, where δ_{R,z} → 0 as z tends to ∞ with R fixed. On the other hand, the typical time for a SRW starting at w to hit 0 or z is O(|z|^2), thus Σ_{j<|z|^{3/2}, k<|z|^{3/2}} P_w(S_1(j) = 0) P_w(S_2(k) = z) tends to zero uniformly in w as z → ∞. Also, when summing over w ∈ Z^d, the contribution from |w| < ε|z| or |w − z| < ε|z| is negligible as ε → 0. By summing over w, j, k, first taking z → ∞, then R → ∞ and then ε → 0, we obtain the corresponding bound for lim sup_{z→∞}. To show the lower bound, as before we first fix ε > 0 and take w in the range |w| ≥ ε|z|, |w − z| ≥ ε|z| and j, k ≥ |z|^{3/2}. When 1 ≪ R ≪ z is fixed but large enough, as z tends to ∞, there is a high probability p_R that the distances between S_1(σ_1), S_2(σ_2), S_3(σ_3) are bigger than cR, where c is a constant independent of R and p_R tends to 1 as R tends to infinity. This is because, as z → ∞, the points S_1(σ_1), S_2(σ_2), S_3(σ_3) are close to three uniform distributions on ∂B_R, as we argued above. The probability that they have an intersection tends to zero, as z first goes to ∞ and then R goes to ∞, which can be seen by bounding the intersection probabilities explicitly by Green's functions. Using the asymptotic independence of S_1, S_2 in B_R and the event S_1(j) = 0, S_2(k) = z, we obtain the matching lower bound, where δ_{R,z} tends to 0 as z → ∞, and ε_{R,z} → 0 as z first goes to ∞ and then R goes to ∞. Thus lim inf_{z→∞} Σ_{w,j,k} P(A_{w,j,k}) / Σ_w G(0, w) G(w, z) ≥ q.
Higher moments
Recall the random field {h_ε} and the pairing (h_ε, ϕ) for ϕ ∈ C_0^∞(R^d). For k ≥ 3, in the expansion of the k-th moment we group the vertices in terms of the components of the uniform spanning forest: we sum over all partitions Γ of the index set {1, …, k}, since the values of h_1 at vertices belonging to different components of the forest are independent. We claim that the following Wick-type formula holds in the limit; it uniquely identifies the distribution of lim_{ε→0}(h_ε, ϕ) as Gaussian. Combined with the covariance structure given in Section 3, this completes the proof that h_ε converges weakly to h.
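For reference (our addition, not a display from the paper), the Gaussian moment identity that the limiting moments must match is the standard one: for a centred Gaussian random variable $X$ with variance $\sigma^2$,

$$\mathbb{E}[X^{k}] = \begin{cases} 0, & k \text{ odd},\\[2pt] (k-1)!!\,\sigma^{k}, & k \text{ even},\end{cases}$$

which is exactly the moment structure produced by the pair-partition (Wick) terms in the limit.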
When k is odd, at least one of the γ_l contains an odd number of elements, and therefore the corresponding expectation vanishes. When k is even, the non-vanishing contribution only comes from partitions such that each γ_l contains an even number of elements. By (6), it suffices to show that the contribution from those {γ_l} with some |γ_l| ≥ 4 is negligible in the limit; the contribution from a partition with a cycle of length 2l vanishes in the limit for l ≥ 2.
Note that E(∏_{m=1}^{2l} h_1(x_m/ε)) is the probability that x_1/ε, …, x_{2l}/ε belong to the same tree component. This can be computed in terms of intersection probabilities of LERWs via Wilson's algorithm (see Section 2.2). It is given by the probability of the following event: start a LERW from x_1/ε and run it indefinitely; then for m = 2, …, 2l, start a SRW from x_m/ε that eventually hits the union of the previous m − 1 walks, stop it there, and add its loop erasure to the union of the m − 1 walks. Since a LERW is a subset of the corresponding SRW, the above quantity is bounded by the probability of the corresponding intersection events of SRWs. The probability of each such event can be bounded in a simple way. We prove it in detail for one example; the others are similar. For instance, let A(x_1, …, x_{2l}) denote the event that the SRW starting at x_2/ε hits the SRW starting at x_1/ε, the SRW starting at x_3/ε hits the SRW starting at x_2/ε, and so on. Then P(A(x_1, …, x_{2l})) ≤ Σ_{w_1,…,w_{2l−1} ∈ Z^d} P[SRW from x_1/ε hits w_1; SRW from x_2/ε hits w_1, w_2; …; SRW from x_{2l−1}/ε hits w_{2l−2}, w_{2l−1}; SRW from x_{2l}/ε hits w_{2l−1}] ≤ Σ_{w_1,…,w_{2l−1} ∈ Z^d} G(x_1/ε, w_1) G(x_2/ε, w_1) G(x_2/ε, w_2) … G(x_{2l}/ε, w_{2l−1}), where the second inequality follows from the fact that the probability of a SRW hitting a point is bounded by the expected number of visits to that point, which is given by the lattice Green's function. Using the Green's function asymptotics G(x/ε, w/ε) = O(ε^{d−2}) [14], and since E[∏_{m=1}^{2l} h_1(x_m/ε)] is a sum of finitely many such probabilities, it is at most O(ε^{(d−4)(2l−1)}). And the proof is complete.
Remark 11. From the arguments in Sections 3 and 4, we can see that the proof does not require many special properties of Bernoulli random variables. What we need is that the sequence of i.i.d. random variables has mean 0, variance 1, and all finite moments. Moreover, since on other regular lattices the Green's function has the same asymptotic decay rate (because the SRW still converges to Brownian motion), our result also holds for the uniform spanning forest on other regular lattices. In this sense, Theorem 1 is universal.
Involvement of five catalytically active Arabidopsis β‐amylases in leaf starch metabolism and plant growth
Abstract Starch degradation in chloroplasts requires β‐amylase (BAM) activity, but in Arabidopsis, there are nine BAM proteins, five of which are thought to be catalytic. Although single‐gene knockouts revealed the necessity of BAM3 for starch degradation, contributions of other BAMs are poorly understood. Moreover, it is not possible to detect the contribution of individual BAMs in plants containing multiple active BAMs. Therefore, we constructed a set of five quadruple mutants each expressing only one catalytically active BAM, and a quintuple mutant missing all of these BAMs (B‐Null). Using these mutants, we assessed the influence of each individual BAM on plant growth and on leaf starch degradation. Both BAM1 and BAM3 alone support wild‐type (WT) levels of growth. BAM3 alone is sufficient to degrade leaf starch completely whereas BAM1 alone can only partially degrade leaf starch. In contrast, BAM2, BAM5, and BAM6 have no detectable effect on starch degradation or plant growth, being comparable with the B‐Null plants. B‐Null plant extracts contained no measurable amylase activity, whereas BAM3 and BAM1 contributed about 70% and 14% of the WT activity, respectively. BAM2 activity was low but detectable and BAM6 contributed no measurable activity. Interestingly, activity of BAM1 and BAM3 in the mutants varied little developmentally or diurnally, and did not increase appreciably in response to osmotic or cold stress. With these genetic lines, we now have new opportunities to investigate members of this diverse gene family.
Starch is synthesized from ADP-glucose by a family of starch synthases (soluble and granule-bound) that generate linear α-1,4linked glucan chains (Zeeman et al., 2010). Starch branching enzymes (BEs) then introduce α-1,6-branches (Tomlinson & Denyer, 2003), some of which are removed by isoamylase (ISA1 and ISA2) (Delatte, Trevisan, Parker, & Zeeman, 2005) resulting in the formation of layers in which regions of branches are interspersed with crystalline regions composed of short (12-15 residue), α-glucan segments that form double helices (Buléon, Colonna, Planchot, & Ball, 1998). A consequence of this pattern of layers is that granules can become extremely large, commonly over 2,000 times the diameter of glycogen granules, the glucose storage polymer in animals (Ball, Colleoni, Cenci, Raj, & Tirtiaux, 2011). Tight packing of the double helices also excludes water making starch a good storage polymer because it lacks osmotic activity.
In Arabidopsis, there are nine BAM genes that were recently reviewed (Thalmann et al., 2019). Two of these genes, BAM7 and -8, encode proteins with N-terminal DNA-binding domains that are targeted to nuclei where they function in regulating gene expression (Reinhold et al., 2011;Soyk et al., 2014). The Arabidopsis forms of BAM7 and -8 have no apparent catalytic activity as β-amylases. One of the other BAM genes, BAM4, encodes a catalytically inactive protein located in plastids where it may play a role in regulating starch metabolism (Fulton et al., 2008;Li et al., 2009). Evidence suggests that BAM9 may also be plastidic and catalytically inactive. Comparison of active-site residues (Laederach, Dowd, Coutinho, & Reilly, 1999) in these four BAM proteins with those from catalytically active BAMs reveals differences that are consistent with their lack of catalytic activity.
Of the five remaining BAM genes, four (BAM1, -2, -3, and -5) encode enzymes that are known to be catalytically active (Monroe & Preiss 1990;Monroe et al., 2017;Fulton et al., 2008;Lao et al., 1999;Li et al., 2009;Sparla, Costa, Schiavo, Pupillo, & Trost, 2006) and one (BAM6) encodes a protein that is predicted to be active based on its amino acid sequence . Of these five BAM proteins, three (BAM1, -2, and -3) were shown to be located in plastids where they could participate directly in starch degradation (Fulton et al., 2008;Lao et al., 1999). BAM6 has a predicted chloroplast transit peptide and was detected in the chloroplast proteome (Zybailov et al., 2008). The only catalytically active BAM that is not located in plastids is BAM5, which is a cytosolic enzyme found in phloem tissue where it is unlikely to come into contact with plastidic starch (Monroe & Preiss 1990;Laby, Kim, & Gibson, 2001;Wang, Monroe, & Sjolund, 1995). The function of BAM5 is currently unknown.
Studies using mutants in which individual genes are knocked out or defective have been extremely useful in determining the function of some genes, but other mutants lack phenotypes in part due to genetic redundancy (Bouché & Bouchez, 2001). For example, single mutants revealed that transitory starch accumulated in the mutant lacking BAM3 but not in the mutant lacking BAM1, yet both genes are known to play a role in this process (Fulton et al., 2008). Understanding the effects of single-gene mutations is especially difficult for those genes that encode enzymes because tissue extracts often contain multiple gene products having similar catalytic activities. One solution to these problems is to generate multiple-gene knockouts in which only one member of a gene family is functional. These higher-order mutants can then be compared with mutants lacking all functionally similar members of the family to observe phenotypes associated with the presence of one functional gene as opposed to phenotypes associated with the absence of that gene.
We applied this approach to the β-amylase gene family in Arabidopsis and present results showing the influence of each of the five catalytically active BAMs on leaf starch accumulation and plant growth. In addition, we measured the catalytic activity of each BAM in above-ground tissues. Lastly, we determined the effects on BAM1 and BAM3 activity of developmental age, time of day, and various abiotic stresses. Seeds of bam3 were a gift from David Seung. For osmotic stress experiments, 200 ml of the same nutrient solution with or without 300 mM mannitol was applied to each pot. For cold stress experiments, plants were transferred to a walk-in 4°C chamber with lighting as described above.
| T-DNA mutant analysis
T-DNA lines were verified by PCR using the primers listed in Table S1. The bam3 mutation was verified using PCR followed by digestion of the PCR product with BsrI. Multiple mutants were generated by crossing homozygous single mutants and allowing self-pollination of confirmed double heterozygotes.
| Starch analysis, enzyme extraction, and assays
Above-ground parts of plants were harvested and frozen at −80°C for later analysis. The largest 3 to 5 leaves from at least 2 plants were decolorized in hot 80% ethanol and then stained with Lugol iodine solution. Images of representative leaves were collected using a Nikon D7000 camera. For enzyme extraction, tissues were ground in 3 volumes of extraction buffer (50 mM MOPS, pH 7, 5 mM EDTA, and 2 mM dithiothreitol) with sand and centrifuged at 10,000 g for 10 min at 4°C. Amylase assays were conducted as described (Monroe et al., 2014), except soluble starch (Acros Organics #424491000) was used as the substrate.
Concentrations of soluble starch used in each assay are listed in the text. Assays of extracts from plants that contained WT BAM2 also included 100 mM KCl. Total reducing sugars were measured using the same assay except soluble starch was omitted. Native, starch-containing PAGE was conducted as described by Doyle, Lane, Sides, Mudgett, and Monroe (2007). Total protein was measured using the Bio-Rad Protein Assay Kit with bovine serum albumin as the standard. Means of replicate enzyme assays were analyzed for statistical significance using a two-tailed Student's t test.
| RESULTS
Starch degradation in leaves depends on β-amylase (BAM) activity, but it is unclear which of the Arabidopsis genes that encode catalytically active BAMs play a role in this process. Analysis of single-gene knockouts has not revealed phenotypes for some of these genes.
We therefore constructed a set of multiple-gene knockouts, each containing a single active BAM, and compared them with a mutant containing no active BAMs in order to test their effect on growth and starch degradation, and to examine their activity in leaf extracts.
Mutants of Arabidopsis used in the present work include T-DNA insertion mutants lacking BAM1 (At3g23920; SALK_039895) and BAM2 (At4g00490; SALK_086084) that were previously characterized and shown to lack the respective mRNAs (Fulton et al., 2008;Kaplan & Guy, 2005). In addition, the mutant lacking BAM3 (At4g17090; CS92461) carries a nonsense point mutation in the 4th exon and lacks soluble BAM3 protein (Fulton et al., 2008). The BAM5 mutant (At4g15210; SALK_004259) contains a T-DNA insertion in the first intron that leads to a lack of detectable BAM5 enzyme activity on a native, starch-containing gel (Figure S1). The BAM6 mutant (At2g32290; SALK_023637) contains a T-DNA insertion in the third exon. This mutant was previously shown to have a mild starch excess (sex) phenotype in leaves from 8-week-old plants (Monroe et al., 2014). The five single BAM mutants were first compared with wild-type (WT) plants, all grown under a 12-hr-light/12-hr-dark photoperiod, by iodine staining leaves that were harvested at the end of the night. Only the bam3 mutant revealed a sex phenotype, and this phenotype was similar in leaves harvested from 4-, 6-, and 8-week-old plants (Figure 1a). None of the other mutants revealed an obvious leaf sex phenotype at these ages.
The five mutants each lacking one catalytically active BAM were then crossed multiple times in order to generate five quadruple mutants (Figures 1b and 2). Interestingly, each of the mutants germinated at the same rate as the WT and grew normally for the first few days, consistent with Arabidopsis seeds storing lipids and not starch; the mutants lacking both BAM1 and BAM3 then slowed their rate of growth (data not shown), displaying the largest differences at about 5 weeks of age. However, each of the mutants eventually became full sized and produced normal levels of seeds.
Leaf extracts from WT plants may contain four or more different BAM enzymes with similar catalytic activity, so quantifying the activity of individual BAMs in crude extracts is not possible.
The quadruple mutants that each contain only one catalytically active BAM offer a solution to this problem. Amylase activity in crude extracts of the quadruple and quintuple mutants was measured at 6 and 7 weeks of age using conditions that are likely to be nearly optimal for four of the five BAMs that have been characterized (Monroe et al., 2014, 2017). Assays were conducted at 25°C and included 80 mg/ml soluble starch, which is near the Vmax for BAM2 and saturating for BAM1, -3, and -5. Assays also included 100 mM KCl, which is required for BAM2 activity (Monroe et al., 2017).
Reducing sugars generated by BAM activity were measured using the Somogyi-Nelson method (Nelson, 1944). Compared with WT leaf extracts, which contained activity measured at about 350 nmol maltose min −1 mg protein −1 , B-Null leaf extracts contained no detectable BAM activity at either age measured (Figure 4). Of the quadruple mutants, B3-Q extracts contained the most activity, which was about 70% of the WT activity. In decreasing order, B1-Q extracts contained about 14% of the WT activity, whereas B5-Q and B2-Q contained 9% and 4% of the WT activity, respectively. All of these activities were significantly higher than activity in B-Null extracts. In contrast, activity in B6-Q was not significantly different than that of B-Null (Figure 4). Amylase activity in each of the mutants was very similar between 6-and 7-week-old plants and, importantly, the sum of activities in the five quadruple mutants was similar to that of the corresponding WT activity suggesting that the abundance of each enzyme was probably not strongly affected by the absence of the other four enzymes, but this possibility cannot be ruled out. It appears that of the five catalytically active BAMs, only BAM3 and BAM1 have a strong influence on leaf starch degradation and plant growth. We then went on to use the B3-Q and B1-Q mutants to examine the effects on BAM3 and BAM1 activity, respectively, of various conditions that are known to influence their mRNA levels.
Activity assays and mass spectrometry analysis of protein abundance have led to a general understanding that levels of metabolic enzymes often change very little diurnally or after brief periods of environmental perturbation, despite large changes in the levels of their transcripts (Piques et al., 2009;Skeffington, Graf, Duxbury, Gruissem, & Smith, 2014). It has been well documented that levels of BAM1 and BAM3 transcripts are strongly affected diurnally and by abiotic stress Thalmann & Santelia, 2017). To determine whether there were changes in the activity of BAM1 and BAM3 as plants developed, amylase activity in B1-Q and B3-Q plants was compared with activity in WT plants at 5, 7, and 9 weeks of age. Assays were conducted using 40 mg/ml soluble starch, which is nearly saturating for both enzymes (Monroe et al., 2017). BAM3 activity declined slightly in the oldest plants whereas activity in the WT extracts increased slightly, but BAM1 activity did not change significantly ( Figure 5). Previously, levels of BAM1 and −3 mRNA were shown to vary considerably over a diurnal period with peaks at the night-day and/or day-night transition in 4and 5-week-old plants ( Figures S3a and b) (Bläsing et al., 2005;Smith et al., 2004). To determine whether BAM1 and -3 activity fluctuated diurnally, leaves from B1-Q and B3-Q plants were harvested at various times during the day and night. Neither BAM3 nor BAM1 activity fluctuated dramatically over the 24-hr period ( Figure 6). The absence of one or more BAMs in the quadruple mutants could have influenced the expression of the remaining BAM, so these activity results should be viewed with caution. However, because the sum of the activities in extracts from the five quadruple mutants was similar to that of the WT extract, it is not likely that there were strong pleiotropic effects.
| DISCUSSION
Generating mutants of Arabidopsis with lesions in multiple members of a gene family can be useful for determining how their expression changes developmentally or physiologically, and ultimately for determining their function. We generated a set of five quadruple mutants of Arabidopsis (B1-Q, B2-Q, B3-Q, B5-Q, and B6-Q), each containing only one of the five potentially catalytically active β-amylases and one mutant (B-Null) lacking all five BAMs in order to examine their contribution to starch metabolism and growth. We reasoned that the quadruple mutants might also be useful for measuring the activity of individual BAMs in plant extracts, assuming that the extracts contain no other enzymes with similar catalytic activity and that the lack of any given BAM does not strongly influence the expression of other BAMs.
B-Null plants contained no detectable BAM activity under the conditions of the assay, indicating that there are likely no additional genes in Arabidopsis that encode BAM activity (Figure 4). To minimize the contribution of α-amylases to the measured activity, we used 5 mM EDTA in the extraction buffer to chelate Ca2+, a requirement for some α-amylases (Swain & Dekker, 1966;Ziegler, 1988). The lack of any detectable amylase activity in the B-Null extracts suggests that α-amylase activity did not confound the results from other plants.

FIGURE 5 Effect of developmental age on amylase activity in crude extracts from leaves of WT, B1-Q and B3-Q plants grown under a 12-hr-light/12-hr-dark photoperiod. All extracts were assayed at 25°C in 50 mM MES buffer, pH 6, using 40 mg/ml soluble starch. Values are means ± SD (n = 3). Means that were significantly different between weeks are labeled with *p < .05, **p < .01.

FIGURE 6 Total amylase activity in crude extracts from leaves of 5-week-old B3-Q and B1-Q plants over a diurnal period. Extracts were assayed at 25°C in 50 mM MES buffer, pH 6, with 40 mg/ml soluble starch. Each point represents the activity in one extract prepared from leaves of 3 plants.
Leaf starch accumulation (starch excess or sex) as determined by iodine staining, although not quantitative, is often used to indicate the involvement of an enzyme in starch degradation, and among the single BAM mutants, only plants without BAM3 activity have this phenotype ( Figure 1a). Others have reported similar results (Fulton et al., 2008;Kaplan & Guy, 2005). The lack of a sex phenotype in the remaining four catalytically active BAMs (Figure 1a) could indicate that they play no role in starch degradation, or that their phenotype is masked by the activity of a different BAM. Fulton et al. (2008) showed that the double mutant lacking BAM1 and BAM3 had a stronger sex phenotype than the single bam3 mutant illustrating that these two BAMs have overlapping functions and that BAM1 can contribute to starch degradation, but this phenotype is only apparent in the absence of BAM3. Their double mutant contained WT levels of BAM2 and BAM6, the activity of which may therefore be masked by BAM3 and/or BAM1 so it is not possible to evaluate the role of BAM2 or BAM6 in starch metabolism or growth using double mutants.
Compared with B-Null plants lacking all five BAMs, which accumulated high levels of starch and had a severe growth penalty ( Figures 2 and 3), the presence of BAM2, BAM5, or BAM6 in B2-Q, B5-Q, and B6-Q plants, respectively, had no observable effect on starch degradation or plant growth (Figures 2 and 3). Small differences in starch levels between the mutants may have been masked by the dark iodine staining and might be detectable using quantitative assays. It will be interesting to determine whether the B-Null plants turn over any starch on a diurnal basis. For BAM5 in B5-Q, the lack of an effect on growth and starch levels was expected because BAM5 is a cytosolic enzyme expressed in phloem tissue, and should therefore have no direct effect on leaf starch degradation (Laby et al., 2001;Wang et al., 1995).
The lack of any observable effect of BAM2 and BAM6 on plant growth as compared with B-Null suggests that they do not contribute significantly to leaf starch degradation at the plant ages examined. BAM2 has highly unusual structural and catalytic properties (Monroe et al., 2017; suggesting that it may have a unique role in starch metabolism, and the results reported here suggest that its role does not overlap with the function of BAM3 or BAM1. Activity of BAM2 in assays of B2-Q extracts was low compared with BAM3 activity in B3-Q extracts, but they were significantly higher than that of B-Null (Figure 4). BAM2 activity has only been characterized using the purified enzyme (Monroe et al., 2017;, so this is the first evidence that the enzyme is present in shoot extracts. In contrast, activity in B6-Q extracts was not different than that of B-Null extracts so it is possible that the BAM6 protein is not present in aboveground parts of plants despite evidence of BAM6 mRNA in leaves (Winter et al., 2007). It is also possible that BAM6 has no catalytic activity despite having all of the conserved, active-site residues that are consistent with catalytic activity . A sex phenotype was observed in 8-week-old bam6 and bam3/bam6 plants (Monroe et al., 2014), so BAM6 may only function in older plants. The BAM6 gene also appears to be restricted to the Brassicaceae , so it may have a function unique to that family of plants. It is also possible that the natural glucan substrates for some of these enzymes are different enough from soluble starch that the real activity was not being measured.
The quadruple mutant lacking all of the BAMs except for BAM3 (B3-Q) was phenotypically indistinguishable from WT plants in that it had no sex phenotype and was similar in size to WT plants at 5 weeks of age (Figures 1b and 2). This indicates that BAM3 alone is sufficient for complete leaf starch degradation and WT-level growth. Transcript abundance is often used as a proxy for gene expression, but it is becoming widely recognized that despite large changes in mRNA levels either developmentally, diurnally, or in response to short-term environmental stress, the levels of protein activity often remain relatively constant (Gibon et al., 2004;Piques et al., 2009;Skeffington et al., 2014;Vogel & Marcotte, 2013). Using the quadruple mutants B1-Q and B3-Q, we measured the activities of BAM1 and BAM3 as plants developed, over a 24-hr photoperiod, and in response to several abiotic stresses, and compared our results with reported measurements of BAM1 and BAM3 mRNA. BAM1 activity in B1-Q leaves was remarkably constant developmentally (Figure 5) and diurnally (Figure 6), and it increased only marginally with osmotic stress (Figure 7a) despite large fluctuations in BAM1 mRNA over a diurnal period (Figure S3a) and in response to osmotic stress (Figure S4a). Mutants lacking BAM1 are compromised in osmotic stress-induced starch degradation and water uptake resulting from decreased accumulation of osmolytes (Zanella et al., 2016), so this enzyme is clearly involved in these responses. Indeed, Monroe et al., (2014) Figure S3b). These observations reinforce the need for a greater reliance on enzyme activity data or mass spectroscopy measurements of protein levels rather than mRNA levels to elucidate gene function. A role for BAM3 in the response of plants to cold stress was suggested by a strong increase in BAM3 mRNA after cold stress (Figure S4b; Kaplan & Guy, 2004; Monroe et al., 2014). Moreover, bam3 mutants accumulated less sugar and were unable to maintain photosynthesis during cold stress (Kaplan & Guy, 2005). However, despite an increase in BAM3 mRNA during cold stress, BAM3 activity in B3-Q declined by over 60% over several days at 4°C (Figure 8). We previously observed starch accumulation in cold-stressed Arabidopsis leaves that could be due in part to the decline of BAM3 activity (Monroe et al., 2014).
It was suggested that BAM3 protein might be post-translationally inactivated by cold-stress-induced glutathionylation, which rapidly inactivates the enzyme via an adduct at C433 (Storm, Kohler, Berndsen, & Monroe, 2018). Alternatively, BAM3 may simply be rapidly degraded during cold stress. Indeed, Li et al. (2017) reported that BAM3 has a remarkably short half-life of 0.43 days, which is among the shortest half-lives of any Arabidopsis protein. The strong increase in BAM3 mRNA levels during cold stress might enable a more rapid synthesis of BAM3 protein when the stress abates, facilitating recovery. Experiments using the bam3 and B3-Q mutants should be useful in addressing this question.
CONCLUSIONS
Using five quadruple mutants, each expressing only one of the five catalytically active Arabidopsis β-amylases (BAMs), and a quintuple mutant lacking all five BAM genes, we show that in plants grown under a 12-hr-light/12-hr-dark photoperiod, complete leaf starch degradation comparable to WT, as detected by iodine staining, is dependent only on BAM3, but BAM1 alone can still provide enough carbon from starch degradation to support WT-level plant growth. BAM2 and BAM6, despite their plastid location, do not contribute significantly to leaf starch degradation as detected by iodine staining, or to plant growth under the conditions tested, similar to the cytosolic BAM5. With these mutants, we can confirm that BAM activity in chloroplasts from WT leaves is mostly from BAM3, with some contribution from BAM1 and a trace from BAM2. Importantly, a plant without any of the five BAMs lacked any detectable BAM activity, indicating that these BAMs are the only proteins with this type of catalytic activity present under these conditions. The mutants also allowed us to follow the isolated enzyme activities of BAM1 and BAM3 under different conditions and to compare them with previously reported transcript levels. Activity of BAM3 and BAM1 did not change diurnally, despite reports of relatively large changes in mRNA levels. BAM1 activity increased only marginally with osmotic stress, but not to the levels expected from the increases in transcript levels. Likewise, BAM3 mRNA was reported to increase dramatically with cold stress, but the detected decline in BAM3 activity with cold stress suggests that cold-induced starch accumulation may be partially a result of diminished BAM3 activity.
ACKNOWLEDGMENTS
The author is grateful for the careful reading of the manuscript by Dr. Amanda Storm, and for the dedicated work of numerous JMU undergraduates who helped generate the mutants used in this work.
CONFLICT OF INTEREST
The author declares no conflict of interest associated with the work described in this manuscript.
AUTHOR CONTRIBUTIONS
J.M. conceived the original research plans, performed the experiments, and wrote the article.
|
2020-02-13T09:22:05.479Z
|
2020-02-01T00:00:00.000
|
{
"year": 2020,
"sha1": "51d84a80a1be7f0d2903a3a28f401a14b00a6efd",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pld3.199",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96a83605c3570ef29c7f692bce81f250b731dbd4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
102923305
|
pes2o/s2orc
|
v3-fos-license
|
Effect of Nitric and Oxalic Acid Addition on Hard Anodizing of AlCu4Mg1 in Sulphuric Acid
The anodic oxidation process is an established means for the improvement of the wear and corrosion resistance of high-strength aluminum alloys. For high-strength aluminum-copper alloys of the 2000 series, both the current efficiency of the anodic oxidation process and the hardness of the oxide coatings are significantly reduced in comparison to unalloyed substrates. With regard to this challenge, recent investigations have indicated a beneficial effect of nitric acid addition to the commonly used sulphuric acid electrolytes both in terms of coating properties and process efficiency. The present work investigates the anodic oxidation of the AlCu4Mg1 alloy in a sulphuric acid electrolyte with additions of nitric acid as well as oxalic acid as a reference in a full-factorial design of experiments (DOE). The effect of the electrolyte composition on process efficiency, coating thickness and hardness is established by using response functions. A mechanism for the participation of the nitric acid additive during the oxide formation is proposed. The statistical significance of the results is assessed by an analysis of variance (ANOVA). Finally, scratch testing is applied in order to evaluate the failure mechanisms and the abrasion resistance of the obtained conversion coatings.
Introduction
The anodic oxidation process is a suitable means for the surface refinement of aluminum and its alloys. The formation of an oxide ceramic coating under anodic polarization in an acidic electrolyte increases corrosion and wear resistance, enhances haptic-visual properties and, depending on the process regime, provides certain other surface property alterations such as electrical insulation. Anodic oxide coatings with a particularly low porosity, and therefore high hardness and abrasion resistance, can be achieved by anodizing in sulphuric acid electrolytes at low temperatures below 5 °C owing to the reduced chemical dissolution of the oxide. However, the so-called "hard anodizing" process is costly and demands substantial electrical energy for both the anodic oxidation itself and the temperature control of the electrolyte. Another means of reducing the coating porosity lies in the addition of organics to the commonly used sulphuric acid electrolyte. Giovanardi et al. [1] proved that organic additions, e.g., glycolic acid, oxalic acid and glycerol, limit the chemical dissolution of the pore walls by adsorbing at the oxide-solution interface. Although the current efficiency of the process as well as the coatings' hardness and wear resistance are improved [2], the overall energy consumption often increases nonetheless, since these additives raise the process voltage and thus offset the intended efficiency improvement [3].
A second challenge for the production of functional oxide coatings arises from the substrate influence. Being conversion coatings, the alumina coatings produced by anodizing are inevitably affected by the substrate alloy. While improving the material strength, alloying elements like copper have a detrimental effect on the aluminum oxide formation. From a thermodynamic point of view, the oxidation of aluminum atoms is significantly preferred to the oxidation of finely dispersed copper atoms due to the more negative Gibbs free energy per equivalent for the formation of aluminum oxide [4]. This leads to copper enrichment at the substrate-coating interface [4]. Hashimoto et al. describe the formation of the θ'-phase (Al2Cu) within the copper-enriched layer [5]. These nanoscale copper-rich phases are oxidized at technologically relevant anodic potentials of more than 4 V [6]. Because of the semiconductive properties of copper oxide, this process is accompanied by oxygen evolution [4][5][6]. As electrical charge is consumed during this side reaction, the current efficiency of oxide growth is significantly reduced. Moreover, additional voids can be observed along the pore channels as the enrichment, oxidation and oxygen evolution process repeats at regular time intervals. Apart from this, the oxidation of intermetallic phases in the substrate alloy leads to micron scale defects in the conversion coating. Ma et al. [7] differentiate between copper-rich phases, which are preferentially dissolved, leaving voids, and iron-rich phases, which hinder the conversion process, leaving highly porous volumes and voids in the coating. Because of the increased porosity, anodic oxide coatings on aluminum-copper alloys exhibit lower hardness and abrasion resistance.
Recent investigations indicate a beneficial effect of the addition of nitric acid to a sulphuric acid electrolyte on the performance of the conversion coatings produced on the popular alloy AlCu4Mg1 (EN AW-2024) [3]. At the same time, the process voltage of the hard-anodizing process is decreased, which leads to a lower energy consumption for the oxide production. The current study focuses on two major aspects: (1) determination of the effect of the nitric acid addition on the characteristics of both the hard-anodizing process and the produced coatings in dependence of the nitric acid concentration in the sulphuric acid electrolyte; (2) investigation into the mechanism behind the effects of nitric acid. Therefore, alongside the determination of the thickness, hardness and scratch resistance of the produced coatings, as well as the quantification of the current efficiency and the energy consumption of the hard-anodizing process, the microstructure and composition of the produced coatings are investigated. Thus, it shall be clarified whether or not nitric acid is a suitable additive to improve the anodic oxidation particularly of copper-alloyed aluminum substrates.
Anodizing Process
The alloy EN AW-2024 T3 (nominal composition is given in Table 1) served as substrate material for the anodic oxidation. It was supplied as sheet metal (Q-Lab, Westlake, OH, USA) and was used with dimensions of 50 mm × 25 mm × 1.5 mm. The samples were etched in 3 wt% sodium hydroxide at 50 °C for 5 min and pickled in 1:1 nitric acid at room temperature for 30 s. After each step, the samples were rinsed under deionized water. The anodic oxidation was carried out in 20 vol% sulphuric acid (corresponding to approx. 3.75 mol/L, Merck, Darmstadt, Germany) with additions of 0.4 mol/L and 0.8 mol/L nitric acid and 0.2 mol/L oxalic acid (as oxalic acid dihydrate, Merck). The electrolyte, which had a volume of 2 L, was maintained at a temperature of 5 ± 2 °C (typical of hard anodizing) throughout the process with a thermostat. The electrolyte was constantly stirred with a rod agitator (300 rpm). A pe1028 power station (Plating Electronic, Sexau, Germany) served as the power source. Current and voltage signals were logged internally with a sampling rate of one sample per second. The anodizing process was carried out in galvanostatic mode with a current density of 3 A/dm². The process was terminated after 45 min.
Coating and Process Characterization
The effects of the nitric and oxalic acid addition were considered with regard to several process and coating properties. The electrical energy consumption during the anodizing process, W_el, was calculated by integrating the product of current density and voltage over the process time. The thickness s of the produced coatings was obtained by eddy current measurement (Fischerscope MMS, Fischer, Sindelfingen, Germany). The values were validated on cross sections of the coatings. The mass of the anodized samples was determined before and after dissolution of the alumina in chromic/phosphoric acid (35 mL/L phosphoric acid + 20 g/L chromium(VI) oxide) at 60 °C for 4 h using an X1003S balance (Mettler Toledo, Gießen, Germany). This solution dissolves alumina and does not attack aluminum; preliminary tests also showed no attack on the AlCu4Mg1 alloy used here. The specific mass m was obtained by dividing the coating mass (which is equal to the mass loss after dissolution) by the surface area. According to Faraday's law, the specific mass of alumina produced in a galvanostatic process carried out at 3 A/dm² for 45 min amounts theoretically to 1426.6 mg/dm². The current efficiency η was calculated by dividing the mass loss obtained from the exposure to chromic/phosphoric acid by this theoretical coating mass. In the same way, the energy efficiency ε was calculated by dividing the electrical energy consumption during the process by the actual oxide mass, which gives a measure of how much energy is required for the formation of a certain coating mass ([ε] = J/mg). Under the assumption that the theoretical alumina density ρ (3.95 g/cm³) is valid for the anodic alumina produced under the described conditions, the porosity p of the coatings was calculated from the specific coating mass m and the coating thickness s:

p = 1 − m / (ρ · s)    (1)
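The quantities defined above reduce to a few one-line calculations. The following sketch illustrates them in Python under the process parameters stated in the text (3 A/dm², 45 min, 1-Hz logging); the function names, unit choices and the example call are illustrative assumptions rather than part of the original study.

```python
# Sketch of the efficiency and porosity calculations described above.
# Numerical constants follow the text; names and the example call are
# illustrative assumptions.
import numpy as np

F = 96485.0          # Faraday constant, C/mol
M_AL2O3 = 101.96     # molar mass of Al2O3, g/mol
Z = 6                # electrons per Al2O3 formula unit (2 x Al3+)
RHO_AL2O3 = 3.95     # theoretical alumina density, g/cm^3

def electrical_energy_Wh_per_dm2(t_s, j_A_per_dm2, U_V):
    """W_el: integral of current density times voltage over the process time."""
    return np.trapz(np.asarray(j_A_per_dm2) * np.asarray(U_V), t_s) / 3600.0

def theoretical_mass_mg_per_dm2(j_A_per_dm2, t_min):
    """Specific alumina mass expected from Faraday's law."""
    charge = j_A_per_dm2 * t_min * 60.0        # C/dm^2
    return charge / (Z * F) * M_AL2O3 * 1e3    # mg/dm^2

def current_efficiency(m_meas_mg_per_dm2, j_A_per_dm2=3.0, t_min=45.0):
    """eta: measured oxide mass relative to the Faraday limit."""
    return m_meas_mg_per_dm2 / theoretical_mass_mg_per_dm2(j_A_per_dm2, t_min)

def specific_energy_J_per_mg(W_el_Wh_per_dm2, m_meas_mg_per_dm2):
    """epsilon: electrical energy spent per unit of oxide mass."""
    return W_el_Wh_per_dm2 * 3600.0 / m_meas_mg_per_dm2

def porosity(m_meas_mg_per_dm2, s_um):
    """p = 1 - m/(rho*s), Equation (1), with unit conversions."""
    m_g_per_cm2 = m_meas_mg_per_dm2 * 1e-3 / 100.0   # mg/dm^2 -> g/cm^2
    s_cm = s_um * 1e-4                               # um -> cm
    return 1.0 - m_g_per_cm2 / (RHO_AL2O3 * s_cm)

# 3 A/dm^2 for 45 min reproduces the Faraday limit of 1426.6 mg/dm^2 quoted above.
print(round(theoretical_mass_mg_per_dm2(3.0, 45.0), 1))
```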
The hardness of the coatings was obtained from instrumented indentation tests at different locations of the coatings' cross sections with a Berkovich indenter (UNAT, Asmec, Dresden, Germany). A load of 5 mN was applied (load time 10 s, hold time 5 s, unload time 4 s). The distance between each indent and the substrate/coating interface was registered. The resulting hardness profiles were approximated by an exponential function, which represents the decline of the hardness H with increasing distance d from the substrate:

H(d) = H0 · (H*)^d    (2)

The lowest deviation between the experimental results and Equation (2) was obtained for a parameter H0 = 4400 N/mm² over the complete set of data. The value H* represents the hardness decline. For a value H* = 1, there is no hardness decline at all; with decreasing H*, the hardness decline becomes more pronounced. With the distance d in microns, H* was in a range of approx. 0.96 to 0.99 for the present set of samples. The hardness of the coatings at a distance of 20 µm from the substrate (H20) was taken as a reference value in the DOE exploitation.
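As a rough illustration of how Equation (2) can be fitted, the sketch below uses a standard nonlinear least-squares routine with H0 fixed at 4400 N/mm², as stated above; the indentation data points are invented placeholders, not measured values.

```python
# Illustrative fit of the hardness-decline model H(d) = H0 * H*^d (Equation (2)),
# with H0 fixed at 4400 N/mm^2 as stated in the text. The data points below are
# invented placeholders, not measured indentation results.
import numpy as np
from scipy.optimize import curve_fit

H0 = 4400.0  # N/mm^2, fixed over the complete data set

def hardness_model(d_um, h_star):
    """Hardness at distance d (in microns) from the substrate."""
    return H0 * h_star ** d_um

d = np.array([5.0, 10.0, 20.0, 30.0, 40.0])             # um from the interface
H = np.array([4100.0, 3800.0, 3300.0, 2900.0, 2500.0])  # N/mm^2 (hypothetical)

(h_star,), _ = curve_fit(hardness_model, d, H, p0=[0.98])
H20 = hardness_model(20.0, h_star)   # reference hardness 20 um from the substrate
print(round(h_star, 3), round(H20))
```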
DOE Set-Up and Exploitation
A full factorial design was used to assess the effects of the additives on the coating and process characteristics. The steps for the nitric acid concentration were 0 mol/L, 0.4 mol/L and 0.8 mol/L, while the oxalic acid concentration was 0 or 0.2 mol/L. The full factorial design thus included 12 different electrolytes. For each of the electrolytes, three samples were produced independently and all the properties were determined for these samples, with the exception of the hardness, which was measured on the cross section of only two samples for each of the electrolytes with approx. 20 indents across the coating cross section. The values obtained for each of the process and coating properties were used as input values for a model, which was quadratic with regard to the nitric acid concentration and linear with regard to the oxalic acid concentration. A preliminary consideration of the statistical significance of the parameter effects on the properties showed no statistical significance for the interaction of the oxalic and nitric acid additives. Interaction terms were therefore not considered, and the response function for a generic property g (e.g., coating thickness s, energy efficiency ε, hardness H20) with the coefficients a_i (i = 1, 2, 3, 4) and the concentrations of oxalic and nitric acid was chosen as follows:

g(c_ox, c_nit) = a1 + a2 · c_ox + a3 · c_nit + a4 · c_nit²    (3)

The coefficients were determined by least-square fitting of the model to the obtained results for each parameter. Thus, a quantitative relation between each property and the additive concentrations was obtained. The quality of the model was checked for each of the parameters by comparing the model predictions g_pred with the measured properties g_meas (with ḡ_meas being the mean measured value) via the following function:

R² = 1 − Σ (g_meas − g_pred)² / Σ (g_meas − ḡ_meas)²    (4)
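A minimal sketch of the least-squares fit of Equation (3) and of an R²-type model check, assuming the interpretation of Equation (4) given above. Only the 54.5, 52.2, 51.6 and 73 Wh/dm² figures appear in the Results reported below; the remaining response values and all variable names are hypothetical placeholders.

```python
# Sketch of the response-surface fit, Equation (3):
#   g = a1 + a2*c_ox + a3*c_nit + a4*c_nit**2
# and of an R^2-type model check as one reading of Equation (4).
# Only the 54.5/52.2/51.6 and 73 Wh/dm^2 values appear in the text; the
# remaining responses are hypothetical placeholders.
import numpy as np

c_nit = np.array([0.0, 0.4, 0.8, 0.0, 0.4, 0.8])          # mol/L nitric acid
c_ox = np.array([0.0, 0.0, 0.0, 0.2, 0.2, 0.2])           # mol/L oxalic acid
g_meas = np.array([54.5, 52.2, 51.6, 73.0, 70.0, 69.0])   # e.g., W_el in Wh/dm^2

# design matrix: intercept, linear oxalic term, linear and quadratic nitric terms
X = np.column_stack([np.ones_like(c_nit), c_ox, c_nit, c_nit**2])
coeffs, *_ = np.linalg.lstsq(X, g_meas, rcond=None)

g_pred = X @ coeffs
ss_res = np.sum((g_meas - g_pred) ** 2)
ss_tot = np.sum((g_meas - g_meas.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(coeffs, r_squared)
```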
Microstructure Characterisation
Metallographic cross sections were prepared by grinding on SiC paper and polishing with diamond suspension, finished with a silicon oxide polishing suspension. Before the scanning electron microscopy (SEM) investigations, the cross sections were cleaned thoroughly, dried at 60 °C for at least 4 h and carbon coated in order to avoid sample charging. The microstructure was investigated by SEM (LEO 1455VP, Zeiss, Jena, Germany). Both secondary electron (SE, topography contrast) and backscattered electron (BSD, element contrast) detectors were applied. For the quantitative analysis of the submicron and micron scale porosity, BSD pictures were assembled in order to obtain a representative impression of the coating microstructure over a length of more than 300 microns. The porosity was calculated via the grey-scale values of the pixels, whereby pixels with a grey-scale value lower than a suitable threshold were counted and their sum was related to the total number of pixels.
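The grey-scale evaluation described above amounts to simple pixel thresholding. A minimal sketch, assuming an 8-bit stitched BSD image; the file name and the threshold value are illustrative assumptions.

```python
# Illustrative grey-scale thresholding of a stitched BSD cross-section image to
# estimate the micron-scale porosity. The file name and the threshold of 60 are
# assumptions for demonstration only.
import numpy as np
from PIL import Image

def micron_scale_porosity(image_path, threshold=60):
    """Fraction of pixels darker than the threshold (voids and cracks)."""
    img = np.asarray(Image.open(image_path).convert("L"), dtype=np.uint8)
    return np.count_nonzero(img < threshold) / img.size

# example call on a hypothetical image file:
# print(micron_scale_porosity("bsd_cross_section.png"))
```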
Scratch Testing and Profilometry
Scratch tests were performed with a Revetest-RST scratch tester (CSM Instruments, Peseux, Switzerland) using a Rockwell diamond cone indenter (radius 200 µm) in order to evaluate the adhesion and the two-body abrasion resistance of the coatings. The sample was moved relative to the indenter with a speed of 2.5 mm/min. A prescan and a postscan were conducted at a small normal load of 0.9 N in order to calculate the remaining scratch depth. During scratch testing, the tangential force and the acoustic emission were recorded. For the quantification of the adhesive failure, the normal load was linearly increased from 1 to 100 N within a scratch length of 10 mm. This procedure will be referred to as "progressive scratch testing". The first occurrence of adhesive failure was determined with the help of the remaining scratch depth and by optical examination of the scratch. The force at which the coating failed, i.e., at which breakthrough to the substrate occurred, will be referred to as the critical force Fc. For the quantification of the abrasion resistance, scratch tests with a length of 5 mm were conducted at a constant normal load of 10 N. This procedure will be referred to as "constant scratch testing". The cross-section profile of each constant scratch was recorded at three positions using tactile measurement (T8000, Jenoptik, Jena, Germany). The software Turbo Wave was applied for the levelling of the profile and the calculation of the cross-section area. The scratch energy density W_R was calculated from the mean value of the tangential force Ft, the mean value of the cross-section area A and the scratch length l according to Equation (5):

W_R = (Ft · l) / (A · l)    (5)
Two progressive scratch tests and three constant scratch tests were performed on each of two anodized samples per anodizing condition.
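For the constant scratch tests, Equation (5) amounts to the frictional work divided by the worn volume. The sketch below illustrates the evaluation; the input values are hypothetical placeholders, and the unit conventions are an assumption for demonstration.

```python
# Sketch of the scratch-energy-density evaluation, Equation (5):
#   W_R = (mean tangential force * scratch length) / (mean cross-section area * scratch length)
# i.e., frictional work per worn volume. All input values are hypothetical placeholders.
import numpy as np

def scratch_energy_density_J_per_mm3(f_t_N, areas_um2, length_mm):
    """Scratch energy per unit of worn volume."""
    f_t_mean = np.mean(f_t_N)                       # N
    area_mm2 = np.mean(areas_um2) * 1e-6            # um^2 -> mm^2
    work_J = f_t_mean * (length_mm * 1e-3)          # N * m = J
    volume_mm3 = area_mm2 * length_mm               # mm^3
    return work_J / volume_mm3

# e.g., three profilometry cross sections on a 5-mm scratch at 10 N normal load
print(scratch_energy_density_J_per_mm3([3.9, 4.0, 4.1], [900.0, 950.0, 1000.0], 5.0))
```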
Energy Efficiency
The voltage transients for the anodic oxidation process in the sulphuric acid electrolyte with different amounts of the oxalic and nitric acid additives differ significantly. In Figure 1, four representative voltage transients and the associated standard deviations are shown for the galvanostatic process (3 A/dm²). In principle, each of the curves shows the typical voltage evolution during a galvanostatic anodizing process. The process initiation includes a steep voltage increase within the first seconds, which is attributed to barrier layer formation, followed by a decline of the voltage that marks the beginning of pore formation. Afterwards, the voltage grows at a slower rate throughout the process, which is attributed to the thickening of the porous part of the oxide coating. As can be seen in the diagram, the voltage amounts to about 23 V for the base electrolyte without additives after the process initiation. The oxalic acid additive does not affect the voltage level after the process initiation to a technologically relevant extent. However, the nitric acid additive leads to a significantly lower voltage in the first minutes of the process. With increasing process time, the voltage remains almost constant for the electrolyte without additives, while both additives lead to a significant growth of the voltage. For oxalic acid, the slope of the voltage curve is especially steep at a process time of around 25 min, while the nitric acid additive leads to a more or less constant voltage growth throughout the process. A higher amount of nitric acid addition leads to a further decrease of the voltage level after the process initiation, while the slope of the voltage in the further course of the process becomes more pronounced. The overall electrical energy consumption during the process is affected accordingly. In the sulphuric acid electrolyte without additives, the electrical energy turnover of the anodic oxidation process amounts to approx. 54.5 ± 0.7 Wh/dm². The addition of oxalic acid leads to an increase of W_el to 73 ± 6 Wh/dm². In contrast, the addition of 0.4 mol/L and 0.8 mol/L nitric acid decreases the electrical energy consumption slightly to values of 52.2 ± 1.1 Wh/dm² and 51.6 ± 0.6 Wh/dm², respectively. All the values are summarized in Table 2.
The fit of the quadratic response function to the results leads to the coefficients shown in Table 3. Consequently, the graph represented in Figure 2a depicts the influence of the additives on the electrical energy consumption. The deviation of the predicted values obtained from the response function and the measured results is shown in Figure 3a. It is clearly visible that the oxalic acid addition has the biggest influence on the electrical energy consumption. The slight decrease of W_el by nitric acid occurs independently of the oxalic acid addition. The response function represents the measured values well, except for the highest values obtained at an oxalic and nitric acid concentration of 0.2 mol/L and 0 mol/L, respectively, which are underestimated by the model. As can be seen in Figure 2, the voltage curve for this electrolyte shows a comparatively big standard deviation, which directly propagates into the values for the electrical energy consumption. To evaluate the efficiency of the anodic oxidation, the coating mass is considered.
Referring to the results shown in Table 2 and to the graphical representation in Figure 2b, it becomes clear that the addition of oxalic acid has no effect on the mass of the produced oxide coatings. Meanwhile, the addition of nitric acid leads to an increase of the produced oxide mass. The experimental results are well represented by the response function (Figure 3b). On the basis of this finding, two different routes shall be further pursued. The first route addresses the anodic oxidation process, namely its current and energy efficiency. The second route addresses the coating properties, namely the thickness, porosity, hardness and coating adhesion. With regard to the process, the obtained values of the oxide mass allow the calculation of the current efficiency η, i.e., how much of the overall charge turnover actually contributes to oxide formation, and the energy efficiency ε, i.e., how much energy is used for the formation of a certain amount of oxide. Since all the samples were produced in galvanostatic mode with a process time of 45 min and therefore with a constant charge turnover, the increase of the oxide mass by the addition of nitric acid into the electrolyte is directly reflected by an increased current efficiency (Table 2), while the oxalic acid additive affects neither of them (Figure 2c). The correlation between the values predicted by the response function and the experimental results is high (Figure 3c). In contrast, the energy efficiency ε increases significantly with the addition of oxalic acid, since the electrical energy consumption is increased without any change of the oxide mass (Figure 2d). That means that more energy is needed for the production of a certain amount of oxide. The addition of nitric acid, meanwhile, decreases ε, because the oxide mass is increased and the electrical energy consumption decreases at the same time. Consequently, less energy is needed for the production of a certain amount of oxide. The correlation between the predicted energy efficiency and the measured values is compromised by the same error as the electrical energy consumption, so that especially the values for the electrolyte comprising only the oxalic acid additive are underestimated (Figure 3d). The coating thickness is hardly affected by the addition of oxalic acid, while it increases by approx. 10% after the addition of 0.8 mol/L nitric acid to the sulphuric acid electrolyte (Figure 2e). In the considered parameter range, the response function allows the prediction of the coating thickness with high accuracy (Figure 3e). Related to the results obtained in the base electrolyte without any additives, the increase of the coating thickness with increasing nitric acid concentration is stronger than the increase of the oxide mass. Under the assumption of a constant density of the amorphous alumina, this indicates an increasing coating porosity.
Coating Porosity and Hardness
The overall coating porosity shows a minimum at a nitric acid concentration of approx. 0.4 mol/L according to the response function, as represented in Figure 4a and Table 4. The coefficients of the quadratic response function are summarized in Table 5. At a constant nitric acid concentration, the porosity always slightly increases with the addition of oxalic acid. For the porosity, low values (45% and smaller) tend to be overestimated by the model while the higher values (around 50%) are underestimated (Figure 5a). As an additional measure for the compactness of the coatings, the hardness is considered. Generally, the size of the hardness indents is in the micrometer range and is thus of a greater order of magnitude than the pore channels and the periodically occurring voids. As can be seen from Figure 4c and Table 4, an increasing nitric acid concentration leads to a stronger increase of the hardness H20 and the hardness decline H* in comparison with the oxalic acid addition. The maximum hardness H20 can be observed for the combined addition of 0.4 mol/L nitric acid and 0.2 mol/L oxalic acid. The further increase of the nitric acid concentration to 0.8 mol/L at an oxalic acid concentration of 0.2 mol/L leads to a reduced hardness H20. A similar behavior can be observed for the hardness decline H*; however, the decrease of the hardness for the highest nitric acid concentration is reflected more strongly by the hardness decline. For the nitric acid concentration, the observed effect on the hardness is in accordance with the estimation of the porosity, which showed a decreasing porosity for the addition of 0.4 mol/L nitric acid and again an increase for the addition of 0.8 mol/L nitric acid. However, at a constant nitric acid concentration, the porosity always increases with the addition of oxalic acid. For 0.0 mol/L and 0.4 mol/L nitric acid, this means that both porosity and hardness increase simultaneously. This phenomenon will be discussed later under the consideration of the coating microstructure. The micron scale porosity of the coatings was examined by electron microscopy using the BSD detector. As can be seen from Figure 6a, the anodic oxide coatings from the base electrolyte contain plenty of spheroidal voids with diameters of less than 5 µm and some irregularly formed voids with dimensions of more than 10 µm. From grey-scale analysis, an average micron scale porosity of 3.4 ± 0.8% was obtained. With the addition of 0.4 mol/L and 0.8 mol/L nitric acid, a similar amount of cracks occurs at large voids (represented by Figure 6b). Hence, the microscale porosity increases to 4.6 ± 1.3% and 4.6 ± 0.8%, respectively. With the addition of 0.2 mol/L oxalic acid, the number and volume content of the cracks generally increases for all nitric acid concentrations. The micron scale porosity ranges from 5.8 ± 1.6% for the single addition of 0.2 mol/L oxalic acid (Figure 6c) to 8.4 ± 0.7% for 0.8 mol/L nitric acid and 0.2 mol/L oxalic acid (Figure 6d). The roughness of the substrate-coating interface close to large voids seems to increase with increasing nitric acid and in particular with increasing oxalic acid concentrations.
For the highest additive concentration, several large voids are connected by crack networks (Figure 6d).
Coating Adhesion and Abrasion Resistance
In order to evaluate the influence of large micron scale pores and cracks on coating adhesion and coating failure, progressive scratch tests with the normal load increasing from 1 to 100 N were performed. Because of the brittleness of the oxide conversion coatings, periodically occurring cracks perpendicular to the scratch direction were observed from the beginning. As can be seen from the light microscope image in Figure 7a, the conversion coatings obtained from the base electrolyte typically chip off after a critical normal force is reached, whereby adhesive failure of oxide plates reaches beyond the scratch. In this case, the critical force of 48.7 ± 2.9 N for the first occurrence of adhesive failure can be determined clearly by both the optical investigation of the scratch and the sudden increase of the remaining scratch depth. A similar failure behavior applies to the single addition of 0.4 mol/L nitric acid; however, the adhesive failure is already observed at a slightly smaller normal force of 45.0 ± 4 N, as can be seen from Table 6. For the single addition of 0.2 mol/L oxalic acid, both with and without nitric acid, no spallation of large oxide plates can be observed (Figure 7b). For this reason, it is more difficult to obtain the critical normal force from the optical investigation of the scratch. However, with the aid of the remaining scratch depth curve, a further decrease of the critical normal force to 42.0 ± 5 N and 33.3 ± 2.0 N, respectively, can be derived. For the highest nitric acid concentration of 0.8 mol/L, both with and without oxalic acid, the failure mode appears to be very gradual, as the remaining scratch depth increases steadily without abrupt jumps. Exposure of the bare metallic substrate occasionally occurs even at small normal forces due to abrasive wear through the entire oxide thickness. Hence, a critical normal force for adhesive failure cannot be defined for these samples.
As can be seen from Table 6, the scratch energy density of anodic conversion coatings can be slightly improved through the addition of 0.4 mol/L nitric acid to the base electrolyte. However, this is not due to a reduction of the worn material volume, as the cross-section area of the scratches even increases slightly, but due to the slightly increased tangential force. A further increase of the nitric acid concentration impairs the scratch energy density of the coatings considerably. This is due to the significant increase of the worn material volume. Except for the highest nitric acid concentration of 0.8 mol/L, the further addition of 0.2 mol/L oxalic acid to the electrolyte leads to increased cross-section areas and therefore to lower values of the scratch energy density.
Discussion
It was shown that both oxalic and nitric acid additions are suitable to improve coating properties. However, solely the addition of nitric acid offers the unique opportunity to enhance the thickness and hardness of anodic oxide coatings and to reduce the electrical energy consumption simultaneously. This can be attributed to the different effect mechanisms of the additives. It is known that organic additives like oxalates from oxalic acid inhibit the chemical dissolution of alumina at the pore walls in the outer region of anodic conversion coatings [1]. This results in conversion coatings with a higher density and a smaller hardness gradient, described by a higher value of the hardness decline H* in Table 4. Whereas the extended pores of anodic coatings from the base electrolyte allow an easier electrolyte penetration, the accessibility of coatings from electrolytes with oxalic acid addition decreases significantly with increasing coating thickness. Therefore, the electrical resistance increases and a more pronounced rise of the process voltage can be observed for the latter coatings. In contrast to this, the addition of nitric acid allows the reduction of the voltage from the beginning of the process. As already described above, the presence of copper oxide at the interface allows for local oxygen evolution and therefore leads to a reduced current efficiency of oxide growth. One explanation for the beneficial effect of nitric acid may be the accelerated chemical dissolution of the copper oxide at the substrate-electrolyte interface. Aqueous solutions of nitric acid are commonly used to remove the copper-enriched black surface layer on copper-rich aluminum alloys after pickling.
When discussing the correlation between the anodizing parameters, porosity, hardness and scratch resistance, it is important to subdivide the porosity into different categories according to their origin: pore channels proceeding orthogonal to the substrate surface, periodically occurring voids along the pore channels due to copper enrichment and oxygen evolution, and microscale voids due to the dissolution of intermetallic phases. Obviously, the chemical dissolution of the pore walls is not substantially reduced by the addition of oxalic acid, as the hardness parameters H20 and H* are only slightly enhanced. A reason for this could be the generally low dissolution rate of anodic alumina in 20 vol% sulphuric acid solution at 5 °C. At higher electrolyte temperatures, a stronger effect of the oxalic acid addition is to be expected. In contrast to this, nitric acid addition is suitable to enhance the hardness parameters H20 and H* significantly. Again, this effect can be explained by the accelerated chemical dissolution of copper oxide at the substrate-coating interface. According to this argumentation, the reduction of the oxygen evolution does not only improve the energy efficiency (as already described) but also reduces the amount and volume content of the periodically occurring voids along the pore channels. These results correspond to the results of Morgenstern et al. [8], who recently discovered that the thickness and hardness of anodic oxide coatings are improved when the alloying element copper is not homogeneously dispersed in solid solution or in the form of atomic clusters, but concentrated in S-phase (Al2CuMg) precipitates. In this case, the precipitates preferentially dissolve and the detrimental effect of copper is reduced.
The characteristic micron scale voids develop through the dissolution of micron scale intermetallic phases. Theoretically, the S-phase should completely dissolve during a sufficiently long solution annealing treatment in order to enable the maximum effect of the subsequent age hardening process. In practice, the duration of the solution annealing treatment is limited by high energy costs and the danger of grain coarsening. For this reason, some S-phase precipitates do not completely dissolve but reshape into a spheroidal form. As already reported in [8,9], these precipitates leave spheroidal voids within the conversion coatings due to their preferential dissolution in the sulphuric acid electrolyte. Because of their limited size of up to 5 µm in diameter and their round shape, they do not act as sharp notches and might stop rather than initiate cracks within the oxide coating. On the other hand, primary phases, e.g., iron- or silicon-rich phases, are precipitated during the solidification of the molten alloy. They are virtually insoluble in the solid aluminum matrix. These precipitates exhibit dimensions of more than 10 µm and an irregular, sharp-edged shape. During anodizing, they convert more slowly than the surrounding aluminum matrix and leave highly porous volumes and flaws within the coating according to [7]. These flaws also exhibit a size of more than 10 µm and sharp edges. Therefore, they might rather act as crack initiation sites. As shown in Figure 6, the susceptibility to cracking increases with increasing nitric acid concentration and especially with the addition of oxalic acid. One reason for this could be the embrittlement of the coatings due to the incorporation of additional elements from the electrolyte. Shih et al. [10] proposed that the enhanced hardness of anodic oxide coatings obtained from a sulphuric acid electrolyte after nitric acid addition results from a higher sulfur content within the oxide. Another explanation could be the influence of nitric acid and oxalic acid on the conversion behavior of the iron-rich intermetallic phases. As can be seen from Figure 6, the roughness of the substrate-coating interface increases in the same order as the number and volume of cracks. The interface roughness results from the different conversion rates of the intermetallic phases and the aluminum matrix. Consequently, it can be argued that the presence of oxalic acid especially inhibits the conversion of the iron-rich phase. Following this argumentation, tensile stresses evolve in the porous oxide ahead of the iron-rich phases as the conversion of the surrounding aluminum matrix is connected with volume expansion. In conjunction with the notch effect of the large, sharp-edged voids, this could finally induce cracking.
The large voids and cracks are significantly larger than the hardness indents. Consequently, the instrumented nanoindentation measurements can only be performed around large pores within more compact oxide volumes, so that these voids do not affect the measured hardness values. This is the reason why oxalic acid addition results in both an increasing general porosity and increasing hardness parameters according to Table 4. However, as scratch testing is a more integral characterization method, large voids and cracks influence the coating failure mode and the scratch resistance considerably, as can be seen quantitatively from Table 6 and qualitatively from Figure 7. With an increasing number of large pores and cracks, the failure mode changes from the brittle spallation of oxide plates towards a more gradual coating failure after the abrasion of the entire coating thickness. This is understandable, because compact oxide materials are not able to relieve internal stresses and therefore fail suddenly after reaching a critical stress level. On the other hand, if cracks are already present within the coating, the crack network propagates under normal pressure. Consequently, the indenter can easily remove material volumes that are completely separated from the surrounding material by the crack network, and the oxide coatings are worn more gradually at lower critical normal forces. The scratch energy density is influenced by both the hardness of compact oxide volumes and the micron scale porosity. On the one hand, it is to be expected that the scratch resistance increases with increasing coating hardness. On the other hand, the presence of large pores and cracks deteriorates the abrasion resistance, as already discussed. The optimum scratch energy density can be observed for coatings after the single addition of 0.4 mol/L nitric acid to the base electrolyte, as these coatings exhibit both an increased hardness and a comparatively low micron scale porosity.
Conclusions
The present work investigates the influences of the single and combined addition of nitric acid and oxalic acid to a sulphuric acid electrolyte on the anodic oxidation behavior of the AlCu4Mg1 alloy. It was shown that, unlike conventional organic additives, the addition of nitric acid to a sulphuric acid electrolyte enables both an enhancement of coating properties (e.g., hardness by 23% and thickness by 14%) and a reduction of the electrical energy consumption by 5%.
In contrast to this, oxalic acid addition reduces the hardness gradient and slightly increases the hardness of the outer coating regions. Unfortunately, oxalic acid addition is connected with a significant increase in process voltage and therefore an increased energy consumption. The results also suggest that oxalic acid addition decelerates the dissolution of large iron-rich intermetallic phases. This gives rise to internal stresses and causes cracks within the conversion coating.
For the combined addition of nitric and oxalic acid, the maximum hardness increase of 36% compared with the base electrolyte and the smallest hardness gradient (represented by the highest values of the hardness H20 and the hardness decline H*) can be achieved. However, the coatings' resistance against the abrasion of a hard counter body (represented by the scratch energy density) generally decreases with increasing additive concentration. Furthermore, the failure mode changes from sudden spallation of oxide plates towards the gradual abrasion of the coating.
By exploiting the different effects of oxalic and nitric acid, the process and coating properties can be optimized with regard to different specifications (e.g., maximum hardness or minimum energy consumption). It is expected that the addition of nitric acid in particular is also suitable for improving the properties of anodic conversion coatings obtained at ambient temperature. This is the subject of further research.
|
2019-04-09T13:08:52.920Z
|
2018-02-17T00:00:00.000
|
{
"year": 2018,
"sha1": "82c78dd8cb62fb36dd977f095d13e604b0a8278f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4701/8/2/139/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a1abb1dc5f29b1b974eac8a5d4bca8588f1748a7",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
228986442
|
pes2o/s2orc
|
v3-fos-license
|
Challenging Care: Professionally Not Knowing What Good Care Is
A dominant trope in the anthropology of care—of revealing a practice to be, despite our moral intuitions to the contrary, really a form of care—limits our understanding of the dynamic processes whereby care’s morality is established in practice. In the British care sector the ideal of care is clear: avoiding coercion and neglect. There are manifold rules designed to hold carers accountable to realizing it. But the rules do not reliably lead to the ideal. Rather, they leave undetermined an enormous amount for carers to fill in. In this setting, whether or not a worker’s action becomes “caring” depends on far more than good intentions or following rules. The action’s moral status rests, instead, on the contingencies of the relationship with the care recipient. We should refrain from entering into the evaluative work of rearranging the borders of good care in order to investigate how our informants themselves do this in the midst of care’s relational vicissitudes. Doing so enables us to attend to how debates about what constitutes good care are part of broader patterns by which moral responsibility is assigned and distributed within caring relationships. [care, contingency, disability, ethics, responsibility]
A dominant trope in the anthropology of care reveals that a practice that looks like the complete opposite of care from the outside really is care after all when one understands it ethnographically. This rite of reversal is performed on whatever abstraction might lead us to misunderstand a given local practice. Julie Livingston (2012) and Tanya Luhrmann and Jocelyn Marrow (2017), for instance, undermine the idea that care must be transparent to be ethical by describing what is caring about withholding information, lying, and deception in Botswana and India, respectively (for the reverse case of anonymity see Rivas 2004; Stevenson 2014). Similarly, Hannah Brown (2010), Sarah Pinto (2014), and Angela Garcia (2015) challenge the idea that care must be premised on consent and the absence of harm by demonstrating the moral and therapeutic value of the deliberate infliction of pain, coercion, confinement, and violence (see also Brodwin 2013; Sufrin 2017; Davis 2012). Garcia's (2015) article "Serenity: Violence, Inequality, and Recovery on the Edge of Mexico City" is one of the most impressive accomplishments of this genre. She focuses on the several thousand "informal, unregulated, and destitute" (2015:458) private drug rehabilitation centers, called anexos, that respond, outside the bounds of inadequate public mental health provision, to a sharp "rise in addiction and mental illness" under the shadow of Mexico's war on drugs (2015:461). Anexos often forcibly abduct addicts, confine them against their will, and torture them with beatings and cigarette burns (2015:465-11). But Garcia's ethnographic descriptions work to counter the "liberalized sensibility" (2015:462) that is scandalized by this violence, a moral condemnation that she argues further marginalizes the poor families who turn to this form of care for their addicted kin because they see no "other option but abandonment or death" for their relative (2015:465).
Garcia argues that far from violence being the opposite of care, as we might imagine, anexos "utilize a form of violence as care" (2015:455). She makes the case for this in two ways. First, by demonstrating that, contrary to popular discourse, families themselves are adamant they do not choose anexos with the intention of abandonment but from a desire to save their relative from addiction and street violence. Second, by arguing that the pain is "therapeutic" (2015:465). In the words of one of Garcia's informants, the violence can be "very effective" (2015:468). Garcia is clear that these effects do not equate to the kind of "invulnerability" or "cure" biomedicine looks for (2015:469). Instead, the echoes between narco-violence and anexo therapeutic practices produce personal and communal "transformation" of the violence that pervades life in Mexico City more generally (2015:469). Too briefly summarized, her argument is that because this violence comes from good intentions and is understood locally to pursue the good, we ought to question its separation from care, moving to classify it, instead, as an unexpected form that care can take.
Garcia's article exemplifies what is important and promising about this strand of the anthropology of care. It also demonstrates what I take to be a weakness of this way of exploring care's morality in ethnographic action. Garcia tells us that her aim is not "to subject" anexos "to moral scrutiny" but to "appreciate the concerns" of those who use them, "read their cultural logic," and "understand the disparate forms of recovery and sociality they produce" (2015:457). But Garcia's desire to defend anexos against their detractors leads her to a stronger conclusion: that this violence is itself care. The appellation of care functions as more than a purely neutral descriptor in that claim. It attributes to anexo violence a worth and value that we might otherwise miss if we rush to condemn it. Garcia's article, in other words, has a moral as well as analytical point: to articulate violence's "redemptive possibilities" (2015:470).
But this mode of evaluation directs us away from what about care's morality remains unsettled and uncertain, even when we have understood people's good intentions and their local understandings of what constitutes care. I describe how and why carers for people with intellectual disabilities in a British non-governmental organization (NGO) are unsure whether their actions are caring. I focus on the way that appellations of care are muddied by the complexity of interactions between caregivers and care receivers, and by debates among caregivers themselves. I focus, too, on how the NGO's commitment to ethical reflection prevents carers from hiding that complexity. In this context, whether or not an action acquires the moral status of "caring" depends on much more than people's intentions or their understandings of the good. It must be established in the vicissitudes of relationships. My argument is that the trope of evaluative reversal and revelation steers us away from these uncertainties, debates, and contingencies that shape the ethical status of actions within care. The achievement of care, I contend, is both more vulnerable and more political than Garcia's argument and the revelatory trope in general tends to imply, in the sense that it is more closely entangled with both the vicissitudes of relationality and the contentious distributions of responsibility within them.
Regulating Interactions
Bob is a gentle and gracious man in his fifties. Doctors and social workers have assessed him to have a significant "learning" or "intellectual" disability, a mental impairment that affects his capacity to carry out daily tasks involving more complex cognizing. This entitles him to welfare payments from the British government to support his upkeep. These funds are passed on to a Christian NGO called L'Arche UK, which arranges for his housing (in a home rented from the local authority) and continual care to support him and those with whom he lives. In the summer of 2013 L'Arche started employing me to work full time as a care worker in Bob's home and authorized me to conduct research through participant-observation and interviews. I worked alongside a team of carers who were a mixture of three demographics: young, middle-class, western-European men and women; middle- and working-class British women in their middle age; and women of various ages who had emigrated from Eastern Europe and sub-Saharan Africa.
L'Arche is, like all care organizations in the UK, tightly regulated by social workers, commissioners from the local authority, and the Care Quality Commission. They check up on Bob's home at regular intervals to ensure that carers are complying with all manner of rules: from giving the precise amount of medication or recording every last penny of money spent, to getting people out of the house to do meaningful activities during the week and giving them the opportunity to form relationships with others.
These rules are attempts to settle the grave risks that attend care: the possibility that a carer who is meant to be supportive might turn out to be either coercive or neglectful. They are intended to prevent, through legal accountability, the kind of harm to which Bob is constantly vulnerable and that caring relationships in the past have so frequently inflicted on individuals like him. They work to define what morally and legally justifiable care is in this context: actions that prevent a person from falling into disrepair, not through paternalistic constraint but rather through working in concert with their preferences and choices.
But following this framework does nothing to guarantee that carers will achieve the ideal form of care it gestures toward. Legalistically sticking to the rules would produce nothing coherent, let alone ideal, in terms of care. It is perfectly obvious that giving Bob the wrong medication, or denying him access to medication altogether, is not care. But when Bob walks into the road, ought one to respect it as his choice? In practice, there are significant conflicts between the rules. The clarity of the rules and of the ideal do not make the connection between the two any plainer. So quite what care requires and entails is often uncertain. What does care involve when Bob does not want to bathe enough to stay healthy? What does care look like when Bob tells you that he wants to go out for the whole day but then gets upset about leaving the TV when it is time to leave? L'Arche offers new carers a less legalistic way to handle these uncertainties in the form of routines. As part of my training in my first week on the job, I was given a sheet of paper detailing every aspect of Bob's morning-from exactly how warm to run the bath to how to give him his newspaper, from what to say to wake him up to when to give him his medication. Each care recipient's "key worker" draws on their own experience, and that of other key workers before them, in order to design a routine specifically tailored to that individual's preferences and needs. Peter had played this role for Bob for several years, and the routine he developed worked around activities to which Bob had reacted badly and those with which he happily cooperated. These written routines give a precise set of actions for carers to follow as they care for any particular individual, and they exist as particular scripts within a less formally defined plotline for the day and week, such as giving Bob a bath regularly or taking him out to the pub for a burger every Saturday.
Peter also gave all new carers informal advice about just how to interact with Bob to go along with the practical steps contained in the routine. Most important, he told me, was to avoid asking Bob questions at any point in the morning or when he was about to do something difficult. Bob, Peter told me, would always say "no" whatever you asked him, simply because he found being asked anything difficult and anxiety-provoking. Best just to do, without asking, what it is you know Bob likes, and he will find it much easier. As the distillation of Peter's sensitive and experienced guesses at Bob's preferences, the routine promised to give carers confidence that they were indeed caring for Bob in a way that he wanted. The routine mediated the insights of an experienced carer to those who struggled to juggle the complexities of the task.
But this project of connecting good intentions to an ideal through patterning interaction was, again, successful only to a limited degree. This was because of the contingencies that lay outside of the routines; principally, how Bob reacted to different carers. I helped Bob get up almost every day for a year. But, despite following Peter's advice to the letter, Bob continued to be angry whenever I supported him (and he reacted similarly when more occasional carers like Mier and Jacob did too), throwing his newspaper on the ground at breakfast and cursing under his breath when I brought him his toast. Although I saw him leave the house calmly with Peter often, whenever I told Bob it was time to go out-even when it was for the trip to the pub on Saturday-he became upset.
Neither my intentions to care for Bob nor my following of the routines for doing so were enough to make sure that my actions turned out to be good care. To leave Bob alone in all of these situations might well have been neglectful-it would have stopped him having a bath, eating breakfast, and going out. Yet to intervene in Bob's life in this way was not to act in concert with him. Was his anger because he did not want to go out after all? Or did he actually want to go out but was unhappy about something else? Was it simply that he did not like me? If the latter, then there was little I could do. You cannot act in concert with someone who does not want to act in concert with you (Mol 2008:94-95). Whether my actions amounted to care thus depended, in part, on what Williams (1981) calls "moral luck" (see also Kittay 2019).
Breaking Rules and Making People
Precisely how to interpret Bob's feelings, and thus the morality of these situations, was a common topic of discussion among carers. Emma, a relatively recent arrival, questioned why we did not just leave Bob alone when he got upset about going out because she thought it was obvious that he wanted to stay in. Marla, a more experienced carer, had a more complex view that commanded more sway. She treated Bob's anxiety at going out as a conflict between two genuine desires: to watch TV and to go out. If Marla was right about this, then to take either option would be to ignore one of Bob's desires. To go out would be to coerce the part of him that wants to stay in, and to stay in would be to neglect the part of him that wants to go out. Either would be a form of coercion or neglect and thus not care. To put it differently, while Bob remained conflicted, there was no way to act with him. "You cannot act in concert with one who does not act in concert with himself" (Korsgaard 1992:332). On Marla's interpretation, the moral luck resides in whether or not there even are any possibilities for care to begin with.
Peter, though, read the situation differently. He told me that for the first years after Bob moved in to L'Arche he rarely left the house, and when he did so he would get so upset in the process that he would destroy all the wall hangings, throw the post all over the floor, and shout about how much he hated the place. The introduction of a TV did not change this. Bob was firmly wedded to it and hated to do much other than watch it. Peter's point was that Bob's anger at leaving the house was the sign not of a clear choice but rather of fear. For Peter, Bob was unhealthily attached to the TV as a result of his anxiety about going out.
In the years before I arrived, Peter had been steadily encouraging the team to take Bob to do things that he would never choose to do himself. Peter would listen to what Bob was talking about excitedly (watching films with "strong language" in them, football matches, and listening to Tom Jones) and then arrange for carers to take him to the cinema, watch a football match at a pub, or travel to a concert. Peter's hope in imposing upon Bob activities that Bob himself would never choose was that it would, in the long run, give Bob more choices and allow him to shape his own life to a greater extent. Peter thought that care entailed coercion in this case. But, in contrast with cases where carers come to accept the coercive logic of an institution, this was not because he thought coercion was justifiable or caring in and of itself (Chapman 2014). It was because, in a situation with no choice but to either neglect or coerce, coercion at least contained the possibility of transforming Bob's capacities for the future. In intervening coercively in his life by forcing him out of the house, Peter hoped that Bob might come to be someone who did not need to be coerced to leave the house but could act in concert with his carers in their attempts to support him.
This did not diminish Peter's responsibility but rather increased it. His interventions could only be vindicated retrospectively-their ethical justification resting to a large extent on the contingencies of who Bob became through them in the months and years that followed (Paul 2014). As it happens, the program was remarkably successful, as all the other carers recognize. Bob can now do many more things than he could a couple of years before. He has overcome his aversion to the bus, for instance, so he can travel much farther. He is less anxious during his day, and in particular when leaving the house, and it is much more likely that he will, in fact, leave. It is hard to converse with Bob about precisely what these changes mean to him, but his continued affection toward Peter and the reduction in his anxiety suggests this care did what it was meant to do: increased his well-being and freedom simultaneously, protected him and gave him choice at one and the same time.
It would thus be easy to conclude that what Peter did was care. I, like other newer carers, often tried to copy someone authoritative such as Peter in order to have some surety that what we were doing in these uncertain situations was good. But L'Arche rarely allows its carers to escape these ambiguities. It encourages them, instead, to reflect both individually and in team meetings on the complexities of their interventions. Often when newer carers asked whether an act was really caring or not, a manager would take it seriously as a topic for discussion while also encouraging us to "stay with the question" rather than think there was a resolution to it. Carers are taught, that is, to be professionally ignorant about what good care looks like in order better to debate that among themselves.
The result is that, as carers stay in L'Arche, they become less rather than more sure about what care entails. Peter told me: "... act in their interests? Or do they have the capacity to make that decision? You shouldn't generally overrule."
Peter's longevity in the organization meant he got to see the positive results of his interventions. But he would not settle on a secure interpretation of them as moral. Was it really in Bob's interests to force him out of the house in any particular instance? Even if it was, might Bob actually have the capacity to take that decision for himself without interference? The suspicion of coercion or neglect haunted even the most well-intentioned, sensitive, and fruitful actions. Carers' attempts to do the right thing lead them only further away from any certainty that they have, in fact, done it or would even know what it looked like if they had.
Conclusion
The uncertainty that surrounds care in L'Arche contributes to anthropological attempts to disrupt philosophical attempts to fix care's morality without reference to the vicissitudes and variability of social life. Whether or not an action comes to be evaluated as "care" in L'Arche is dependent on more than intentions, rules, and local understandings of the good. It relies, also, on the interactional contingencies of the caring relationship itself. Most notably, it relies on Bob. The uncertainty here concerns not just whether carers can reliably know what Bob is feeling. It concerns also the fact that, because Bob is human, their actions do not have a simple reliable effect upon him that could ever be known in advance. Because Bob is continually changing, in part through the care he receives, the shape and pattern of even those contingencies keeps shifting.
The shape of these uncertainties is unlikely to be universal (indeed, their dependence on social context is precisely my point). The relative absence of material about debate and uncertainty in Garcia's article about the anexos likely indicates an ethnographic difference worth attending to. It might be that, existing outside of state and professional regulation, care's morality or even efficacy is not such an explicit concern in anexos as it is in L'Arche and thus not the subject of such extensive reflection. These differences do not correlate straightforwardly with the (culturally specific) distinction between "formal" and "informal" care. For instance, kinship care of an elderly relative in India (Cohen 2000) or of children in China (Kuan 2015) can be the cause of intense moral concern. But different assumptions about the stakes of caring actions, and their dependence on the recipient's mind, can also mitigate against the necessity for this kind of reflection in analogous forms of care (see Aulino 2016 on eldercare in Thailand; and Mezzenzana 2020 on childcare in the Ecuadorian Amazon). Similarly, there are plenty of ways in which professional care settings outside the family can close down, rather than provoke, ethical reflection (Chapman 2014;Johnson 1998;Lester 2009).
My argument is not that the example of reflection in L'Arche represents something universal about how doubt manifests and is managed in care. The presence of intense uncertainty in L'Arche is the product of a very particular combination of high ideals and strict rules in the contemporary British care sector. This combination works to place a huge stake on fragile and complex caring interactions, without leaving any way to fix these interactions in any reliable way. Debate about care is generated by the transfer of responsibility to carers for controlling the realization of a moral ideal within the vicissitudes of interactive relationships that they, by definition, cannot morally exert total control over. Carers are thus always responsible for the realization of a care that is not within their power. These differences in the way that moral responsibility for care is distributed raise sharp questions of any philosophy that attempts to fix obligation and blame outside of the particular ways in which interaction is configured in a given context. But my argument is that an ethnographic attention to the distribution of responsibility challenges not only philosophical theories about care but also the dominant way in which anthropologists have sought to undermine them to date: through the trope of ethnographically unmasking surface appearances in order to reveal what, underneath, really constitutes care in a given context. L'Arche's particular way of responding to the distinct moral climate it sits within reveals the limits of this trope. Because my informants do not know what care consists of, I cannot claim to do so either (unless I were to claim that their uncertainty itself really is caring-though I would find that disingenuous on a number of fronts). Their debates and doubts thus help us appreciate the inherently vulnerable, political, and interactive nature of claims to know what good care is-including those made by anthropologists. The revelatory trope directs us away from questions about just how any local moral understanding plays out in practice. (In Garcia's case, for example, who resists the cigarette burns and remains skeptical about their efficacy or legitimacy?) When we classify something as care for evaluative reasons (such as to defend its value ethnographically against its detractors), we find ourselves in the company not only of philosophers but also those we seek to study. In as much as the revelatory trope sets up anthropologists as spokespeople for what good care really looks like on the ground, it runs the risk of reproducing a particular therapeutic ideology rather than the interaction between different visions of good care and the way they play out in the contingencies of relationships. This threatens to reduce care's ethics to a two-dimensional moral imagination (however richly we paint it) rather than something more expansively established in the dynamic interactions by which responsibility is assigned. If uneven debates about what constitutes good care are part of what creates the morally fraught conditions in which carers work, then there is an important role for an anthropology of care that understands these ethical processes before it contributes to them.
FAP Associated Papillary Thyroid Carcinoma: A Peculiar Subtype of Familial Nonmedullary Thyroid Cancer
Familial Nonmedullary Thyroid Carcinoma (FNMTC) makes up to 5–10% of all thyroid cancers, also including those FNMTC occurring as a minor component of familial cancer syndromes, such as Familial Adenomatous Polyposis (FAP). We give evidence that this extracolonic manifestation of FAP is determined by the same germline mutation of the APC gene responsible for colonic polyps and cancer but also shows some unusual features (F : M ratio = 80 : 1, absence of LOH for APC in the thyroid tumoral tissue, and indolent biological behaviour, despite frequent multicentricity and lymph nodal involvement), suggesting that the APC gene confers only a generic susceptibility to thyroid cancer, but perhaps other factors, namely, modifier genes, sex-related factors, or environmental factors, are also required for its phenotypic expression. This great variability is against the possibility of classifying all FNMTC as a single entity, not only with a unique or prevalent causative genetic factor, but also with a unique or common biological behavior and a commonly dismal prognosis. A new paradigm is also suggested that could be useful (1) for a proper classification of FAP associated PTC within the larger group of FNMTC and (2) for making inferences to sporadic carcinogenesis, based on the lesson from FAP.
Introduction
Familial Nonmedullary Thyroid Carcinoma (FNMTC) is a nonmedullary thyroid cancer occurring in a subject with germline mutation for a gene responsible for an inherited multitumoral syndrome, including thyroid carcinoma as a part of the syndrome (or in more than 3 members of the same kindred, even in the absence of a known mutation in a putative gene). FNMTC makes up to 5-10% of all thyroid cancers, also including those FNMTC occurring as a minor component of familial cancer syndromes, such as Gardner's syndrome, Cowden's disease, Carney complex type-1, Werner's syndrome, McCune Albright syndrome, or Familial Adenomatous Polyposis (FAP) [1,2]. In particular, a recent review outlines that "FNTMC is associated with more aggressive disease than sporadic cases, with higher rates of multicentric tumours, lymph node metastasis, extrathyroidal invasion, and shorter disease-free survival" [1].
In addition, it has been suggested that "the genetic inheritance of FNMTC, (in patients in whom it is the predominant feature), remains 'unknown. . .'", but "it has been observed an increased percentage of male patients with FNMTC compared to those with sporadic NMTC" [1].
FAP is inherited in an autosomal dominant fashion and is characterized by multiple adenomatous polyps in the colon and rectum and a near certainty of developing colorectal cancer unless a risk-reducing prophylactic colectomy is performed. Therefore, early identification and intervention in FAP patients is of paramount importance. FAP is also associated with several extracolonic malignancies, including malignancies of the upper gastrointestinal tract, hepatobiliary tract, central nervous system, and endocrine system (thyroid, adrenal) [3][4][5][6][7][8][9][10].
Age at Diagnosis of PTC and/or Colonic Polyps
More than 80% of FAP associated PTC were diagnosed between 18 and 35 years of age. In particular, in our 18 patients, diagnosis was concomitant in 6 (1/3), whereas in 6 (1/3) FAP preceded and in 6 (1/3) PTC preceded. This is very important, because the early occurrence of PTC can facilitate diagnosis in some kindreds with undetected FAP [5,16].
Long-Term Prognosis
In an overall series of 200 cases reported in the literature (112 before 2000 and 90 after that year), there were very few recurrences and only 1 death, possibly related to FAP associated PTC [16,17,26]. In particular, there was no recurrence in 9 of our patients, with a follow-up longer than 15 years (180 months) in every subject [26].
Prevalence of PTC in FAP Patients
Concerning the actual prevalence of PTC in FAP patients, it has been reported as 0.4% to 2% in various retrospective series. More recently, results of prospective registry screening programs for PTC in patients with FAP have reported prevalences of 2.6% [27] to 11.8% [28]. This is an increased value in comparison with the previously reported data (1.2%) [23]. We deem that a 3 to 5% prevalence of PTC in FAP patients could be a more realistic value in the present era of improved early diagnosis [26], even if, with recent intensive screening protocols in patients belonging to FAP registries, the prevalence was 6.1% overall and 11.1% in women [29,30]. The main criticism that can be raised in the interpretation of the findings of the latter series, in which all FAP patients underwent intensive screening, concerns the significance of the observed data. In fact, in some cases thyroid nodules proved positive at FNAB even in males and after age 60. This contrasts with previous observations and suggests that these tumors are likely to be different from those typically associated with FAP, which occur in females aged less than 35.
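The wide spread of these prevalence figures partly reflects how small the screened registries are. As a rough illustration only (the counts below are invented, not data from the cited studies), the following sketch computes an exact Clopper-Pearson 95% confidence interval for a hypothetical registry in which 6 PTC cases are found among 120 screened FAP patients.

```python
# Illustrative only: exact (Clopper-Pearson) 95% CI for a prevalence estimate.
# The counts below are hypothetical, not data from the cited FAP registries.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial confidence interval for k events out of n subjects."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

k, n = 6, 120  # hypothetical: 6 PTC cases among 120 screened FAP patients
lo, hi = clopper_pearson(k, n)
print(f"prevalence = {k / n:.1%}, 95% CI = {lo:.1%} - {hi:.1%}")
```

With a registry of this size, a 5% point estimate is compatible with roughly 2-10%, which helps explain why the reported prevalences vary so widely between series.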
Age of Patients
In our first report the mean age of patients was 24.8 years, in a series of 15 female patients [5], but it was also 24.8 years in a series of 97 patients collected from the literature. The mean age has been similar in the 81 patients reported in the literature after 2000 [26].
Histologic Variants of PTC
Harach et al. [19] reported 4 cases of thyroid carcinoma in FAP patients and noted the following unusual features: multifocality (differing from sporadic PTC), encapsulated tumors, and unusual histologic patterns: cribriform, solid, spindling, and whorls. They concluded that FAP associated thyroid carcinomas were likely related to PTC but were sufficiently different to be regarded as a distinct variant.
Cameselle-Teijeiro and Chan [20] described 4 cases with similar or identical morphology, apparently occurring in patients without FAP. These findings were also observed by others [21]. These tumors have been termed the "cribriform morular" variant of PTC (CMV PTC). The cribriform areas are composed of anastomosing bars and arches of cells without intervening stroma with the follicular spaces devoid of colloid. The morulas are composed of spindled cells with "peculiar nuclear clearing." These clear nuclei differ from both the optically clear nuclei and intranuclear pseudoinclusions more typically seen in PTC and consist of accumulation of biotin [20][21][22].
Harach et al. [19] speculated that the morphology of this variant could be related to the involvement of the APC gene in the pathogenesis. However, we never found biallelic inactivation in the thyroid tumoral tissue (whereas it was found in the colonic tumoral tissue); Soravia et al. [25], in 9 samples from 4 patients, found somatic APC mutations in 1 sample. These findings suggest that although somatic mutations of APC may be seen in a few cases of FAP associated thyroid carcinoma, it is not a required step in the pathogenesis of these neoplasms. On the contrary, we were the first to show a very high incidence of ret/PTC oncogene activation, which is known to be an early molecular event in papillary thyroid carcinoma oncogenesis [7]. These findings support the concept that FAP associated thyroid tumors are a variant of PTC [7]. In a subset of CMV PTC, Xu et al. [22] have demonstrated that aberrant nuclear accumulation of mutant beta-catenin may substitute for APC mutations. It is thought that these sporadic cases are due to a somatic mutation in exon 3 of the beta-catenin gene (CTNNB1), further highlighting an analogous role within the APC/beta-catenin pathway. In fact, not only sporadic CMV PTC but also FAP associated PTC usually, but not always, show nuclear and cytoplasmic expression of beta-catenin [12,16,26].
On the contrary, the most common genetic abnormality in papillary thyroid carcinoma, the BRAF mutation, appears to be absent in FAP associated thyroid cancer, suggesting that BRAF mutations and APC/RET-PTC alterations are mutually exclusive in the occurrence of different types of PTC [16,17,26].
However, there are some clinical, demographic, and prognostic differences between sporadic CMV-PTC and FAP associated PTC: sporadic CMV-PTC may occur in males (instead of exclusively in females) and at an older age, and many reports describe an aggressive behavior with distant metastasis [20,21,29], whereas FAP associated PTC occurs almost always in females and shows a typically indolent behavior [5,26]. There was only 1 death because of thyroid related complications out of 200 FAP associated PTCs [26].
Therefore, we suggest caution before stating that, despite similar features and the possibility of diagnosing CMV PTC preoperatively by FNAB, these tumors (sporadic and FAP associated) should be considered as a single entity [16,17].
Interestingly, whereas tumors in the colon and rectum occur invariably in almost 100% of subjects with APC germline mutations, with the same incidence in males and in females (and in most colorectal polyps or cancers there is a complete loss of APC function, documented by the high rate of LOH for APC in the tumoral tissue), FAP associated FNMTC occurs in a minority of affected siblings, in the absence of LOH for APC [6], and almost invariably in the female sex [26]. This is opposite to what has been reported for FNMTC as a homogeneous entity [1], since these authors observed a relative prevalence of males in comparison to the usual F : M ratio = 3 : 1 in sporadic tumors.
In particular, this is a very unusual finding in an inherited multitumoral syndrome. There is no doubt that FAP associated FNMTC is part of the multitumoral syndrome due to germline mutations of a tumor suppressor gene such as APC. In fact, there is a frequent association of FNMTC in siblings with the same germline mutation (all the 23 siblings with FAP associated PTC were female).
There is a statistically significant association between FNMTC and the site of germline mutations (almost all mutations are located in the proximal portion of the gene, 5′ to codon 1220) [5].
However, the absence of complete inactivation of the gene [6] suggests that the germline mutation of the APC gene confers only a generic susceptibility to thyroid cancer [5][6][7][8][9][10][11], but perhaps other factors, namely, modifier genes, sex-related factors (hormonal, but also dietary, metabolic, and immunological), or environmental factors [31], are also required for the phenotypic expression.
It is likely that FAP associated FNMTC represents a veritable example of cooperation between purely inherited factors (APC germline mutation) and epigenetic or environmental factors, namely, those strictly connected with the female sex, as the striking female to male ratio of 80 : 1 strongly suggests [26].
These peculiar features have been documented in detail only for FAP associated FNMTC, with the striking female preponderance.
In FNMTC associated with Werner syndrome, a relative male prevalence has been suggested by some authors [1]; as well in other FNMTC associated with other inherited multitumoral syndromes there could be other associations with epigenetic or environmental factors [1].
We suggest that a similar multifactorial cooperation could also occur for the most frequent variant of FNMTC that is not associated with a germline mutation of a known tumor suppressor gene. We suspect that the 5-10% of FNMTC does not consist merely of a few rare manifestations of 3-4 inherited multitumoral syndromes (such as those quoted in a recent review) [1] plus a remaining group of patients belonging to a single disease, for which (in addition to a common prognosis or similar biological behavior) a single predisposing gene or a common molecular or pathophysiologic mechanism should be envisaged. Such a view is an oversimplification of a complex entity.
Actually, Navas-Carrillo et al. [1] report (1) on the American family with 5 members affected by PTC, one by colon cancer and 2 by papillary renal neoplasm, with a possible susceptibility gene (PTC/PRN), located at 1q21 [32], (2) on the susceptibility locus (NMTC1) on chromosome 2q21, firstly identified in a large Tasmanian pedigree with recurrence of PTC [33], and (3) on the susceptibility locus to chromosome 8p23.1-p22 in a large Portuguese family with 11 cases of benign thyroid disease and 5 cases of thyroid cancer [34]. In particular, patients with these genetic alterations may partially overlap with, or be completely different from, those showing genetic anticipation (i.e., diagnosis at an earlier age in patients of the second generation) [35].
Anyway, all these patients are smaller in number than the 200 homogeneous patients with FAP associated PTC, with a documented association with an inherited germline mutation, for whom we have shown a biological behavior (striking female prevalence and little tumor aggressiveness) different from that suggested by Navas-Carrillo et al. [1] for FNMTC as a whole.
On the basis of these cumulative data, it is more likely that there is a galaxy, a wide multiplicity, of potential germline mutations, each of which can confer an increased susceptibility, but other genes or factors, namely, environmental factors (maybe playing a greater role than congenital predisposition), are also required for PTC manifestation.
This great variability of germline predisposing factors and of epigenetic and environmental factors argues against the possibility of classifying all FNMTC as a single entity, not only with a unique or prevalent causative genetic factor, but also with a unique or common biological behavior, such as "higher rates of metastases, extrathyroidal invasion, and shorter disease free survival." The only common characteristic (in addition to the possibility of earlier diagnosis, because of a previously affected sibling in the same kindred) is the frequent multicentricity. But, also in this respect, it must be outlined that there was no recurrence in the contralateral lobe (after a minimum follow-up of 180 months for all patients) in 5 out of 5 subjects with FAP associated FNMTC who had hemithyroidectomy, because they refused total thyroidectomy in association with total colectomy and the other concomitant invasive operations at a young age that are usually required in FAP subjects [16,17,26].
In particular, there is no single genetic alteration predisposing specifically to FNMTC in all cases. On the contrary, there are various syndromes, also including some familial cancer syndromes, that have multiple siblings with FNMTC as a part of the syndrome. Therefore, if FNMTC is a very heterogeneous syndrome, trying to select common features, as well as a uniform prognosis or the same biological behavior, can be misleading.
Furthermore, the lesson from the galaxy of heterogeneous FNMTC, if correctly interpreted, could contribute to opening new avenues for a deeper knowledge of the pathogenic factors determining the occurrence of all "common" or most frequent types of cancers.
From a more general point of view, concerning the inherited predisposition to tumors, we hypothesize that, in the vast majority of "common cancers" (such as lung, breast, pancreas, and liver cancer, also including thyroid cancer), there is an individual way to cancer occurrence; that is, a single gene or an oligogenic alteration (one or more germline mutations occurring in some cancer-facilitating or cancer-controlling genes, but also in genes controlling immune response or other functions) predisposes to cancer. This cancer predisposition, which is not due to a single germline mutation in a tumor suppressor gene (such as in the rare inherited multitumoral syndromes) but to a combination of genes, could segregate more frequently in siblings, facilitating the familial occurrence of some cancers [36].
In particular, our recent studies on oligogenic germline mutations in nonsmoker-discordant siblings with lung adenocarcinoma (1) confirmed, as a "proof of concept," the hypothesis of an oligogenic combination for cancer susceptibility, (2) further supported a model of "private genetic epidemiology" for a better understanding of the genetic effects in families with common cancers, and (3) suggested the possibility that each individual may have his/her personal way to cancer. These findings could have important implications for personalized medicine [36]. It is noteworthy that, for thyroid cancer, the common exposure to radiation [31] or other environmental factors to which siblings living in the same site could be exposed, in association with inherited predisposition from a wide range of susceptibility genes, should also be taken into account [26].
Conclusion and Lesson from FAP Associated PTC to Familial and Sporadic Carcinogenesis
(1) Concerning FNMTC, the preliminary knowledge of the familial aggregation of a given cancer could be a useful tool for early tumor detection. But this should not be used to conclude that FNMTC should be considered as a single entity, a unique disease, with a common pathogenic mechanism.
(2) We must recognize that, on the basis of present data, we are unable to answer uniformly the question of whether FNMTC is more or less aggressive than its sporadic counterpart. The correct answer could be "in some cases yes, in others no." It depends on inherited genetic alterations facilitating cancer susceptibility, but also on the individual tumor of a given subject.
(3) We suggest that the time has come when, on the basis of current clinical evidence, we must challenge and also try to confute the "old scientific paradigm" and provide a "new paradigm" that is more in accordance with actual genetic, pathologic, and clinical evidence. After the "genetic revolution" following DNA discovery and human genome sequencing, together with the observation that a single base change in a single gene (as a germline mutation) could be responsible for the occurrence of a given cancer in 100% of affected subjects (such as in FAP), the utopian dream has been cultivated that "targeted genetic engineering" could cure clinically evident cancer. Unfortunately, this is not the case. Analogously, clinical evidence and long-term follow-up (as in FAP associated FNMTC or in MEN 2A associated endocrine tumors) have shown that it is not true that a tumor belonging to an inherited multitumoral syndrome (even if frequently multifocal) has a more aggressive behavior and a worse prognosis than its sporadic counterpart.
(4) Not only many multitumoral syndromes (such as FAP), but also nontumoral diseases (Alport syndrome, etc.), instead of being determined by a single germline mutation in a single gene, may be determined not only by mutually exclusive mutations in different genes (APC and MYH) [37], but also by concomitant mutations in multiple genes, so determining digenic diseases (with at least 2 pathogenic germline mutations in at least 2 of the 3 genes COL4A3, COL4A4, and COL4A5, responsible for Alport syndrome) [38] or even by somatic mosaicism [39]. In summary, things and tumors have a greater complexity than what was initially hypothesized. Better classification and grouping of similar diseases can be useful, but incorrect grouping (mixing apples and oranges) can be misleading [36].
(5) The new paradigm can be the following. A single genetic mutation, but also, and perhaps more frequently, an oligogenic group of germline mutations (in some cancer related genes, but also in other genes), sometimes overlapping among individuals but often differing from one individual to another, can be responsible for cancer (or common disease) predisposition, together with a wide range of epigenetic or environmental factors (the "weight" of which can also be greater than that of congenital predisposition) that are necessary for its full phenotypic manifestation. In addition, biological behavior, aggressiveness of the tumor, and also susceptibility to tumor occurrence do not depend only on the quantity or the "potential danger" of the "offending agent," but also on the resistance of the subject, not only at the level of the targeted cell or tissue, but also as "host resistance" as an entire "indivisible" organism [36]. (6) The true challenge of the near future is that available data from the literature can be used and interpreted according to the "old paradigm," considering FNMTC as a single entity, with a common biological behavior, or according to the "new paradigm" as "a galaxy of different diseases," each one with its peculiar combination between congenital predisposing factors, also facilitating familial aggregation, and environmental factors, determining in every patient a "unique type of cancer." This new paradigm is more in accordance with the so-called "personalized medicine" [36].
The role of cytokine profile and lymphocyte subsets in the severity of coronavirus disease 2019 (COVID-19): A systematic review and meta-analysis
Aims This study aimed to make a comparison between the clinical laboratory-related factors, complete blood count (CBC) indices, cytokines, and lymphocyte subsets in order to distinguish severe coronavirus disease 2019 (COVID-19) cases from the non-severe ones. Materials and methods Relevant studies were searched in PubMed, Embase, Scopus, and Web of Science databases until March 31, 2020. Cochrane's Q test and the I2 statistic were used to determine heterogeneity. We used the random-effect models to pool the weighted mean differences (WMDs) and 95% confidence intervals (CIs). Key findings Out of a total of 8557 initial records, 44 articles (50 studies) with 7865 patients (ranging from 13 to 1582), were included. Our meta-analyses with random-effect models showed a significant decrease in lymphocytes, monocyte, CD4+ T cells, CD8+ T cells, CD3 cells, CD19 cells, and natural killer (NK) cells and an increase in the white blood cell (WBC), neutrophils, neutrophil to lymphocyte ratio (NLR), C-reactive protein (CRP)/hs-CRP, erythrocyte sedimentation rate (ESR), ferritin, procalcitonin (PCT), and serum amyloid A (SAA), interleukin-2 (IL-2), IL-2R, IL-4, IL-6, IL-8, IL-10, tumor necrosis factor-alpha (TNF-α), and interferon-gamma (INF-γ) in the severe group compared to the non-severe group. However, no significant differences were found in IL-1β, IL-17, and CD4/CD8 T cell ratio between the two groups. Significance Decrease in total lymphocytes and lymphocyte subsets as well as the elevation of CRP, ESR, SAA, PCT, ferritin, and cytokines, but not IL-1β and IL-17, were closely associated with COVID-19 severity, implying reliable indicators of severe COVID-19.
Introduction
Breaking out for the first time in Wuhan, China, in December 2019, the new infectious primary atypical pneumonia pandemic has formally been named as Coronavirus Disease 2019 (COVID-19), and its causative virus as Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) [1,2]; an overall number of 8,486,923 verified patients had been reported by the time of writing. In severe cases, circulating cytokine levels become dramatically elevated after microorganisms or medications stimulate the body, resulting in dysfunctions of the immune system [4]. Multiple organ dysfunction syndrome (MODS), acute respiratory distress syndrome (ARDS) and even death are the probable outcomes of this phenomenon [5,6].
Considering the fast dissemination of COVID-19 and the increased mortality in extreme cases, it is desperately required to better understand clinical features and to identify reliable laboratory inflammatory markers that can differentiate between severe-to-critical and mild-tomoderate infections. These data may also help to better understand pathogenesis of this emerging infection. Nonetheless, the precise role that cytokines, lymphocyte subsets, and infection-related factors play in the severity and progression of the disease is yet to be found. Therefore, the present study was conducted aiming to analyze the different characteristics of cytokine levels (Interleukin-1beta (IL-1β), IL-2, IL-2R, IL-4, IL-6, IL-8, IL-10, IL-17, tumor necrosis factor-alpha (TNF-α), and interferon-gamma (INF-γ)), lymphocyte subsets (CD3 cells, CD4+ T cells, CD8+ T cells, CD4/CD8 T cell ratio, CD19 cells, and natural killer (NK) cells), complete blood count (CBC) indices, and a number of infection-related factors (C-reactive protein (CRP)/hs-CRP, erythrocyte sedimentation rate (ESR), ferritin, procalcitonin (PCT), and serum amyloid A (SAA)) between mild/moderate and severe/critical patients, and further to screen out suitable indicators for the prediction of the disease severity in order to provide some insight into the subsequent clinical interventions.
Search strategy and selection criteria
This systematic review and meta-analysis was carried out in accordance with PRISMA [7]. The protocol for the review was registered with PROSPERO (provisional registration number: CRD42020178847).
The relevant literature on the issue was identified through an online search in PubMed, Embase, Scopus, and Web of Science for studies published as of March 31, 2020. Furthermore, to improve search sensitivity, no filters or limits were used on time and language and all the included studies written in English or Chinese languages were adopted. It should be noted that the reviews in the Chinese language were translated by https://translate.google.com/ (the retrieval process is shown in Fig. 1). The medical subject headings (MESH) and the keywords searched included 'betacoronavirus' or 'betacoronavirus 1' or 'coronavirus Infection' or 'coronavirus' or 'SARS-2-CoV' or 'COVID-19' and 'inflammation' or 'cytokines' or 'C-reactive protein' or 'Interleukin-1beta' or 'interleukin-6' or 'Interleukins' or 'tumor necrosis factor-alpha' or 'antigens, CD' or 'lymphocyte subsets' or 'killer cells, natural' or 'procalcitonin' or 'blood sedimentation' or 'ferritins' or 'serum amyloid A protein'. The list of titles and abstracts and the full text of the selected manuscripts were independently examined by two reviewers (HA and SF). The disagreements as to what manuscripts to select during both title and abstract examination, and the subsequent full-text analysis, were addressed until a conclusion was reached. Besides the abovementioned databases, the identification of any remaining relevant published studies was performed using citation tracking. Moreover, the studies which were not published were retrieved from the medRxiv website.
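For readers who want to reproduce a search of this kind, the sketch below shows one way to assemble a boolean query from term groups like those listed above. The term lists are abbreviated here, and the exact syntax accepted by each database (PubMed, Embase, Scopus, Web of Science) differs, so this is an illustration rather than the authors' actual search string.

```python
# Illustrative sketch of assembling a boolean search string from term groups.
# Term lists are abbreviated; real database syntax differs between providers.
disease_terms = [
    "betacoronavirus", "coronavirus infection", "SARS-CoV-2", "COVID-19",
]
marker_terms = [
    "inflammation", "cytokines", "C-reactive protein", "interleukin-6",
    "tumor necrosis factor-alpha", "lymphocyte subsets", "procalcitonin",
    "ferritins", "serum amyloid A protein",
]

def or_block(terms):
    """Join a group of synonyms with OR and wrap it in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"{or_block(disease_terms)} AND {or_block(marker_terms)}"
print(query)
```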
All the studies which have addressed the inflammatory-related laboratory factors in predicting severe COVID-19 infection were incorporated. All studies with various designs conducted since the outbreak (in December 2019) were considered eligible; however, repeat articles, case reports, case series, reviews, letters, editorials, short communications, animal trials, correspondence, guidance, radiology studies, meeting reports, and expert opinions were considered ineligible. The exclusion criteria included: (1) studies regarding particularly pediatric or pregnant cases due to the diverse presentation of COVID-19 in these groups, (2) inadequate information on inflammatory-related laboratory parameters in either severe or non-severe disease groups, (3) coronavirus strains other than COVID-19, and (4) studies with unusable data. Nonetheless, the diagnostic criteria for COVID-19 were based on laboratory-confirmed SARS-CoV-2 infection. If two or more studies were published by the same authors or institutions, only the study having the largest sample size was selected.
The data from the incorporated studies were extracted by two reviewers (SV and SF) independently, and a third reviewer (RT) resolved any disagreements. The following details of each study were collected: author, publication date, study location, study design, sample size, sample characteristics (age, gender, comorbidities), exposure characteristics (study definition of severity of COVID-19, the timing of classification of disease severity [on admission or otherwise], number of cases with non-severe COVID-19, number of cases with severe or critical COVID-19), and the timing of blood sample collection (on admission or otherwise). Moreover, inflammatory-related laboratory factors, cytokines, lymphocyte subsets, and CBC indices were grouped by COVID-19 severity (mean [SD]), and finally all the extracted data were transferred into Microsoft Excel. Furthermore, through rechecking the primary studies, as well as discussions, any inconsistencies in the extracted data were resolved. It is worth mentioning that, using the WebPlotDigitizer online software, some graph data were converted to numerical data (https://apps.automeris.io/wpd/). In case the relevant data were missing, authors of selected studies were contacted via email. Also, it should be noted that, due to inaccuracies in the research methodology for some of the studies, we reported the type of study in some articles, especially those submitted to medRxiv, by inference.
The included studies differed in the way they defined patients' disease status, and classified the disease into 'mild, moderate, severe and critical', 'ordinary and severe/critical', 'common and severe', and 'non-severe and severe' categories. The first outcome measure adopted was severe (including both severe and critical cases) vs. non-severe disease. The definition provided for the severity of the disease was based on the New Coronavirus Pneumonia Prevention and Control Program (6th edition) published by the National Health Commission of China [8]: (1) mild: non-pneumonia patients as shown by imaging; (2) moderate: pneumonia-diagnosed patients as shown by their symptoms and the imaging examination; (3) severe: patients with any of the following factors: (i) respiratory rate equal to or higher than 30/min; (ii) resting pulse oxygen saturation (SpO2) equal to or lower than 93%; (iii) oxygen partial pressure (PaO2)/fraction of inspired oxygen (FiO2) equal to or lower than 300 mmHg (1 mmHg = 0.133 kPa); (iv) imaging showing more than 50% lesion progression in multiple pulmonary lobes within 24-48 h; (4) critical: patients with any of the following factors: (i) the need for mechanical ventilation in case of respiratory failure; (ii) shock; (iii) admission to the intensive care unit (ICU) due to simultaneous failure of another organ. It is noteworthy that mild or moderate patients were included in the non-severe group, while severe or critical patients were included in the severe one. The Newcastle-Ottawa Scale was used to evaluate quality, and assessment scores of 0-3, 4-6, and 7-9 represented poor, fair, and good studies, respectively. Additionally, discrepancies were resolved through consensus.
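To make the grouping rule concrete, the sketch below encodes a simplified version of these thresholds. It is an illustration only, not the instrument used in the included studies: the imaging-progression, shock, ventilation, and ICU criteria are reduced to boolean flags supplied by the caller.

```python
# Simplified illustration of the severity grouping described above.
# Imaging progression, ventilation, shock and ICU criteria are passed in as flags.
from dataclasses import dataclass

@dataclass
class Patient:
    resp_rate: float          # breaths per minute at rest
    spo2: float               # resting pulse oxygen saturation, %
    pao2_fio2: float          # PaO2/FiO2 ratio, mmHg
    rapid_imaging_progression: bool = False
    needs_mechanical_ventilation: bool = False
    shock: bool = False
    icu_for_other_organ_failure: bool = False

def classify(p: Patient) -> str:
    critical = (p.needs_mechanical_ventilation or p.shock
                or p.icu_for_other_organ_failure)
    severe = (p.resp_rate >= 30 or p.spo2 <= 93 or p.pao2_fio2 <= 300
              or p.rapid_imaging_progression)
    if critical:
        return "severe group (critical)"
    if severe:
        return "severe group (severe)"
    return "non-severe group (mild/moderate)"

print(classify(Patient(resp_rate=24, spo2=91, pao2_fio2=280)))  # severe group (severe)
```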
Statistical analysis
All statistical analyses were conducted using STATA version 12.0 (Stata Corp., College Station, TX). Laboratory factors were expressed as the mean (SD) difference with 95% confidence intervals (CIs) between the severe group and the non-severe group. To pool the mean differences (SD), the weighted mean difference (WMD) statistic with the random-effect model (DerSimonian-Laird method) was used. Cochrane's Q test and the I2 statistic were used to assess heterogeneity among included studies. I2 above 70% and Cochrane's Q test with P < 0.05 were considered to indicate significant heterogeneity. Sensitivity analysis was used to evaluate the robustness of the meta-analysis findings by applying the leave-one-out method, removing each included study one by one and recalculating the pooled WMDs. Egger regression and Begg's rank correlation tests were applied to detect potential evidence of publication bias among included studies.
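As a rough sketch of the pooling procedure described here (not the authors' STATA code), the following Python function computes a DerSimonian-Laird random-effects pooled WMD, Cochrane's Q, and I2 from per-study means, SDs, and sample sizes, and then repeats the pooling with each study left out in turn. The input arrays are invented values for illustration only.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling of weighted mean
# differences (WMD), with Cochrane's Q, I^2, and a leave-one-out sensitivity pass.
# The example data at the bottom are invented, not values from the included studies.
import numpy as np

def dersimonian_laird(m1, sd1, n1, m2, sd2, n2):
    d = np.asarray(m1) - np.asarray(m2)                         # per-study mean difference
    v = np.asarray(sd1) ** 2 / n1 + np.asarray(sd2) ** 2 / n2   # its variance
    w = 1.0 / v                                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)                           # Cochrane's Q
    k = len(d)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                           # between-study variance
    w_star = 1.0 / (v + tau2)                                    # random-effect weights
    wmd = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return {"wmd": wmd, "ci": (wmd - 1.96 * se, wmd + 1.96 * se), "q": q, "i2": i2}

# Hypothetical IL-6 means (pg/mL), SDs, and sample sizes: severe vs non-severe.
m_sev, sd_sev, n_sev = np.array([35.0, 28.0, 41.0]), np.array([20.0, 15.0, 25.0]), np.array([40, 55, 30])
m_non, sd_non, n_non = np.array([12.0, 10.0, 15.0]), np.array([8.0, 6.0, 9.0]), np.array([120, 90, 60])

print(dersimonian_laird(m_sev, sd_sev, n_sev, m_non, sd_non, n_non))

# Leave-one-out sensitivity analysis: re-pool after dropping each study in turn.
for i in range(len(m_sev)):
    keep = np.arange(len(m_sev)) != i
    res = dersimonian_laird(m_sev[keep], sd_sev[keep], n_sev[keep],
                            m_non[keep], sd_non[keep], n_non[keep])
    print(f"without study {i + 1}: WMD = {res['wmd']:.1f}, I2 = {res['i2']:.0f}%")
```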
Results
We yielded a total of 8557 records through initial online search in databases. Of these, 1077 were duplicate. After screening based on title and abstract, 304 articles were selected as the candidates for assess according to inclusion and exclusion criteria. Finally, 44 articles (50 studies) were identified to be eligible for current meta-analysis. Fig. 1 shows the flowchart of study identification and selection process.
All selected studies contained a total of 7865 patients (ranging from 13 to 1582), including 2286 in the severe group and 5579 in the non-severe group. Forty-three of the included articles were conducted in China and one [9] was performed in the USA.
Most laboratory assessments among included patients were conducted on admission/before treatment. The characteristics of included studies are summarized in Table 1.
Sensitivity analysis
We found no significant differences between the pre- and post-sensitivity pooled WMDs for most outcomes. However, for the outcome reported in [11] and for TNF-α (WMD = 0.18 pg/mL, 95% CI: −0.03, 0.40), the sensitivity findings showed significant differences between the pre- and post-sensitivity pooled WMDs for these outcomes.
Discussion
To the best of the authors' knowledge, this is the first and the most comprehensive systematic review and meta-analysis that investigated the differences between severe and non-severe confirmed COVID-19 cases in terms of inflammatory-related laboratory tests along with cytokines, lymphocyte subsets and some CBC indices. According to the findings, the severity of COVID-19 has a significant, positive association with CRP/hs-CRP, ESR, PCT, SAA, and ferritin levels. Moreover, with respect to CBC indices, the findings revealed significantly higher levels of WBC and neutrophils, while lower lymphocyte and monocyte levels, in severe than in non-severe confirmed COVID-19 patients. Furthermore, it was shown that, apart from IL-1β and IL-17, the circulating levels of all the investigated pro-inflammatory cytokines were significantly higher in severe vs. non-severe COVID-19 patients. Additionally, except for the CD4/CD8 T cell ratio, the levels of the investigated CD markers along with the total number of lymphocytes were significantly lower in the severely infected cases than in non-severe ones. The number of NK cells and monocytes were also decreased in the severe group.
CRP/hs-CRP and ESR have been found to be increased in a vast number of inflammations/infections [46,51]. In this new pandemic pneumonia, the levels of CRP and ESR significantly increased in severe cases compared to non-severe COVID-19 patients [31,45], which greatly coincides with the findings of the present systematic review and meta-analysis. In the present study, PCT concentrations were significantly higher in severe/critical patients than in non-severe cases. As previously shown, PCT does not increase with virus infections, so its elevation may indicate superimposed bacterial infection in critically ill patients [47,52]. SAA, another important factor capable of amplifying the inflammatory response through chemokine activation and the induction of chemotaxis even at a very low concentration [53], was found to have elevated circulating levels in severe patients, and these levels were significantly related to COVID-19 severity. The critically ill patients were shown to have higher expressions of IL-1β, IL-6, TNF-α, and other cytokines, which boost SAA production by liver cells [16]. Likewise, induced by the activated macrophages, which produce TNF-α, ferritin was seen to undergo the same changes as SAA. An excessive amount of ferritin is also reflective of a surplus of TNF-α, which is a major apoptotic factor [16]. Consequently, these inflammatory-related factors might function as biomarkers to monitor the progression of respiratory diseases.
The higher level of IL-2 in COVID-19 patients is possibly indicative of T cell activation. An important pro-inflammatory cytokine, IL-6 can suppress the activation of normal T cells, which may be a reason for the presence of lymphopenia. A study carried out by Gong et al. [54] showed that although levels of IL-2R and IL-10 were associated with the severity of the disease, they principally contributed to the inhibition of the inflammatory response. This further supports the idea that inflammatory and anti-inflammatory reactions may occur simultaneously. The highly increased levels of IL-10 in severely infected patients might account for the negative feedback on the systemic and local inflammation. However, what role immunosuppression plays in the progression of the disease and whether IL-10 and IL-2R are possible therapeutic targets are yet to be answered by further research. Besides, COVID-19 infection was found to induce augmented secretion of T-helper-2 (Th2) cytokines (e.g., IL-4 and IL-10) that suppress inflammation; a finding which coincides with that of the present study but differs from SARS-CoV infection [55]. An important anti-viral cytokine generated by CD4+ T cells, CD8+ T cells, NK cells, and macrophages, IFN-γ has been reported to contribute to the cytokine storm in SARS patients [56,57]. The present study showed that levels of IFN-γ in severely infected cases were higher than those of the non-severe COVID-19 patients, suggesting that IFN-γ may efficiently indicate the status of the disease. Besides, IL-1β and IL-17 were found not to be significantly associated with COVID-19 severity, which might be due to the small number of included studies, hence the need for further studies to explain the role that these cytokines play in the progression of the disease. Although no significant association was found by some studies between COVID-19 pneumonia severity and IL-6, IL-10, and TNF-α, this systematic review and meta-analysis indicated that IL-6, IL-10, and TNF-α could be used to assess the severity of COVID-19 and that they might be potential targets for immunotherapy of COVID-19.
The association found between lymphopenia and severity of COVID-19 implies that, as does SARS-CoV, SARS-CoV-2 might act on lymphocytes, especially T cells, hence possibly leading to the depletion of CD4+ T and CD8+ T cells [58]. The exhaustion of CD8+ T cells in severe patients may reduce their cellular immune response to SARS-CoV-2. The study conducted by Li et al. [59] showed that these multifunctional CD4+ T cells were seen much more frequently in patients severely infected with SARS than in mild cases, suggesting the unique immune pathology of SARS-CoV-2 in comparison to other coronaviruses. Besides, given the significant NK cell decrease in the severe group compared to the non-severely infected cases, it can be said that the severity of COVID-19 infection could be restricted by the activity of NK cells. Considering that the immune adjuvant IL-2 can enhance the functioning of NK cells, this could be a new target for clinical treatment [55]. Moreover, the lower numbers of NK cells and monocytes in the severe group may also support the hypothesis that innate immunity plays a greater role in determining the disease course and in the effectiveness of acquired immunity in the control of this infection. This finding may have therapeutic implications as well as prophylactic importance. In this regard, any intervention promoting the innate immune system might have beneficial effects both as a preventive and a therapeutic measure.
It was found that the CD4/CD8 T cell ratio did not differ between the severe and non-severe COVID-19 infected groups, which may indicate that CD4+ T and CD8+ T cells were equally reduced in both groups. Based on the current findings, lower levels of the investigated CD markers represent inefficient immune activation, and a poor virus-specific T and B cell response may account for the severe disease in SARS-CoV-2 infected patients. Collectively, these disorders in lymphocyte subsets might lead to the eventual reduction of the host antiviral immunity.
The present findings demonstrated significant lymphopenia in severe/critical cases compared to mild/moderate COVID-19 cases across 46 studies. Lymphopenia is of high significance during infection with COVID-19, and its causes are still under debate. It may be due to a direct contribution of the virus or to redistribution of WBC via chemotaxis or apoptosis [16,41,60]. Compared to non-severe cases, severe cases are older and have comorbidities [61][62][63], making them more susceptible to endothelial dysfunction and its associated lymphopenia. The finding of the present study that leukocyte levels were elevated in severe compared to non-severe patients across 42 studies is novel and controversial. Moreover, monocytes decreased and neutrophils increased in severe cases, findings that need further investigation in future studies. Regarded as a well-known marker of systemic inflammation and infection, NLR has been examined as a predictor of bacterial infection including pneumonia [52,64]. The elevated levels of NLR found in the present study suggest that the internal environment was seriously disturbed and that the severely infected cases were in a potentially critical condition. Liu et al. [47] revealed that the area under the curve (AUC), c-index, sensitivity, and specificity for NLR were all high, suggesting that NLR is a reliable index for predicting the incidence of severe illness at an early stage. These results indicate that such easily accessible tests are potentially easy-to-use, low-cost tools for early screening and prognosis of severe and/or critical COVID-19 cases.
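Because NLR is derived directly from the routine differential count, its use as a severity marker is straightforward to illustrate. The sketch below, using entirely hypothetical counts rather than data from the included studies, shows how NLR and its area under the ROC curve, of the kind reported by Liu et al. [47], would typically be computed; scikit-learn's `roc_auc_score` is one common implementation.

```python
# Illustrative sketch only: computes the neutrophil-to-lymphocyte ratio (NLR)
# and its discriminative ability (AUC) for severe vs. non-severe COVID-19.
# The counts below are hypothetical placeholders, not data from the included studies.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical absolute counts (x10^9 cells/L) and severity labels (1 = severe)
neutrophils = np.array([4.2, 6.8, 3.1, 7.5, 5.9, 2.8, 8.1, 3.4])
lymphocytes = np.array([1.6, 0.7, 1.9, 0.5, 0.8, 2.1, 0.6, 1.5])
severe      = np.array([0,   1,   0,   1,   1,   0,   1,   0])

nlr = neutrophils / lymphocytes          # elevated NLR is the candidate severity marker
auc = roc_auc_score(severe, nlr)         # area under the ROC curve for NLR as a predictor
print(f"NLR values: {np.round(nlr, 2)}")
print(f"AUC for NLR vs. severity: {auc:.2f}")
```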
Whereas COVID-19 was initially recognized as a pulmonary disease followed by a storm of pro-inflammatory cytokines, resulting in ARDS, MODS, and death [65], recent evidence indicates that the disease is a systemic disorder affecting many organ systems, including the kidneys, gastrointestinal tract, liver, nervous system, and skin, among others [66][67][68][69]. The mechanism of these systemic effects is not yet clear, and many may be related to, or mediated by, the effects of cytokines and a dysregulated immune system [70].
There are several limitations to this review. As most of the evidence came from China, the lack of evidence from outside China may limit the generalizability of our results, particularly with regard to the shortage of costly laboratory tests in low-resource settings. The heterogeneity of the included studies was another limitation, and further studies are greatly needed; one of the main causes of this heterogeneity might be the poor description of the analytical performance characteristics of the methods applied in the included studies. Furthermore, aging, as a condition related to inflammation, was addressed in previous studies [62,71], which showed that severe patients were older than non-severe ones. Since a pro-inflammatory response is believed to initiate SARS-CoV-2 infection, it is plausible that aged cases mount an overwhelming inflammatory reaction. Additionally, as the age of the patients was not reported in some included studies, a comparison of age differences between the two groups was not possible, which is another probable limitation. Accordingly, the findings show that more comprehensive clinical studies, including cohort studies, are required. In addition, innate and/or adaptive immune responses are suppressed or dysregulated by long-term stress through a change in the Type 1-Type 2 cytokine balance, as a result of which low-grade chronic inflammation is induced and the numbers, trafficking, and function of immunoprotective cells are suppressed. Chronic stress can also contribute to the suppression of protective immune responses and/or the exacerbation of pathological immune responses [72]. Clarifying whether the severity of the illness itself, independent of the virus, affects any of the immunological parameters or psychological parameters such as stress will require a larger number of clinical studies, which would also shed light on the association of chronic stress, with its possible immunosuppressive role, with COVID-19.
COVID-19 is considered a global health threat; consequently, it is essential that clinicians have access to reliable, rapid pathogen tests and feasible differential diagnoses based on clinical descriptions at their first contact with suspected patients. Although pro-inflammatory cytokines and chemokines have not been shown to be directly involved in lung pathology during COVID-19, the changes in laboratory parameters in infected patients, including reduced total lymphocytes and lymphocyte subsets and elevated NLR, IL-2, IL-4, IL-10, IL-6, and TNF-α, as well as the routine inflammation-related parameters, were remarkably associated with the severity of the disease. Likewise, notwithstanding the crucial role that hyper-inflammatory responses play in COVID-19 pathogenesis, the innate immune system may also play a protective role.
Contributors
All authors contributed to the study conception. HA, SF, and SV did the literature search and data extraction. RT and HA did the data synthesis, created the tables and figures, and wrote the manuscript. All authors contributed to the interpretation of the data and revision of the manuscript.
Declaration of competing interest
None.
|
2020-07-29T13:05:37.474Z
|
2020-07-28T00:00:00.000
|
{
"year": 2020,
"sha1": "c8f9844f4162833c8ff47092b16f89e52ee7cd4c",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.lfs.2020.118167",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c2aec74eb53ef5e8e0822358f9e340cc92339a67",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
55151458
|
pes2o/s2orc
|
v3-fos-license
|
Practicing Collaborative Skills through an Interprofessional Interview with Individuals Diagnosed with Parkinson’s Disease
Objective: To enhance student appreciation for collaboration/team-based care through participation in an interprofessional (IP) history-taking opportunity with individuals diagnosed with Parkinson’s disease (PD). Methods: Eighty-eight self-selected students from Louisiana State University Health-New Orleans and Xavier University College of Pharmacy participated in an IP elective course which included conducting an IP interview with a PD patient. To assess student perspectives regarding the IP interview, the students completed a thirteen item survey and reflection assignment. Results: Eighty-six students completed the survey and twenty-four completed the reflection assignment. 95% of students agreed the team-based interview and the development of an IP plan of care increased their awareness of the multiple perspectives to consider in designing a care plan. The Kruskal-Wallis test indicated a statistically significant difference among programs for survey question numbers two and four. All four IP education competencies (value and ethics, roles/responsibilities, interprofessional communication, teams and teamwork) were highlighted in the reflection assignment. Conclusions: The IP interview allowed students to gain knowledge of PD, better understand the role of other disciplines, and create a holistic plan of care. Received: 10/24/2016 Accepted: 02/27/2017 © 2017 Gunaldo et al. This open access article is distributed under a Creative Commons Attribution License, which allows unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
For many years, the health care industry in the United States has recognized the need to prepare for an increase in the number of older adults.The complex care required by older adults can be challenging to clinicians as it is multifaceted.Older adults usually have multiple issues including syndromes, impairments, and chronic conditions such as Parkinson's Disease (PD).Approximately four percent of individuals diagnosed with PD are younger than fifty years of age, but the overwhelming majority are older adults (Parkinson's Disease Foundation, 2016).
In 2008, the Institute of Medicine outlined three recommendations to improve the healthcare delivery system for older adults: 1) address the needs of older adults in a comprehensive manner, 2) provide efficient healthcare services, and 3) increase active participation of older adults in their healthcare (Institute of Medicine, 2008).These three recommendations require an interprofessional collaborative approach among healthcare team members.
The World Health Organization (WHO) defines interprofessional collaborative practice as "when multiple health workers from different professional backgrounds work together with patients, families, carers [sic], and communities to deliver the highest quality of care" (WHO, 2010).Team-based care or practice is not a new concept in health care; however, there is a gap that exists in our health care system.Despite the fact that teamwork is needed to meet the complex needs of the communities we serve, healthcare providers in the U.S. lack the teamwork training that is needed to be members of effective teams (Interprofessional Education Collaborative, 2011).
The Interprofessional Education Collaborative (IPEC), a national panel representing multiple educational professional accrediting bodies, has highlighted the need to train health professional students in teamwork with the goal of preparing students to be ready to practice in teams upon graduation (IPEC, 2011).IPEC developed four interprofessional core competencies that educational institutions can utilize to guide the development of interprofessional education (IPE) learning activities.An increasing number of academic health centers are integrating IPE into healthcare curricula in order to meet accreditation standards.IPE is defined as "when students from two or more professions learn about, from and with each other" (WHO, 2010).Health care professional students who are trained in an interprofessional manner are more likely to form collaborative practice patterns post-graduation (Pecukonis, Doyle, & Bliss, 2008).Therefore, it has become increasingly important to prepare healthcare professional students to collaborate and work in teams.
The overall management of PD often requires a team of multiple health care professionals in order to ensure optimal patient care (Pretzer-Aboff & Prettyman, 2015).Health care team members can include a dentist, nurse, family medicine physician, neurologist, occupational therapist, pharmacist, physical therapist, and speech language pathologist.Despite the fact that these individual providers oversee care for the same patient, there are multiple discipline-specific plans of care.The lack of knowledge of team member roles, communication and collaboration among providers can lead to suboptimal care for individuals diagnosed with PD ( Van der Eijk, Faber, Shamma, Munneke, & Bloem, 2011).
Effective collaboration among team members is not an inherent skill; it must be learned and practiced (Baker, Day, & Salas, 2006). Over the past several years, Louisiana State University Health-New Orleans (LSUHNO) has implemented several IPE initiatives from a grassroots level. There are six schools within LSUHNO: Allied Health Professions, Dentistry, Graduate Studies, Medicine, Nursing, and Public Health. In 2015, LSUHNO administration demonstrated their support of team-based collaborative education and established a Center for Interprofessional Education and Collaborative Practice (CIPECP). The goal of the center is to coordinate student education by utilizing a team-based, patient-centered approach that delivers the highest quality of care resulting in improved health outcomes.

Within the course, students were arranged into eight multidisciplinary groups. Each group consisted of nine to twelve members. The members of the groups remained the same throughout the course.
IPE elective course and learning experience
The assignment of conducting an IP interview was scheduled towards the latter half of the course.
Students were tasked to develop a cohesive strategy to obtain necessary information from the individual and caregiver in order to develop an IP plan of care.
One week prior to the IP interview, students were provided a two-hour class session to develop a list of guiding interview questions in preparation for interviewing an individual who had been previously diagnosed with PD and a caregiver.Each student was asked to bring a list of patient history questions their profession would typically ask an individual diagnosed with PD.As a group, students were provided instructions to review the entire list of questions, remove duplicative questions, create a logical order for questions, and select a team leader who would provide an introduction to the community member and initially lead the interview questioning.
A faculty member was not physically present during this class session.Some student groups met during the designated day and time; some groups decided to meet off-campus or met on a different day and time.Students were instructed to email their faculty facilitators with any questions.
Students prepared questions for a one-hour interview.
Up to forty-five minutes of additional time was provided if the individual and caregiver continued to provide information to the student group or if the individual needed additional time because of speech impairment symptoms, such as slurred speech, soft voice, or difficulty finding words.
One week following the interview, the students were provided a two-hour class session to prepare for the culminating course project.The course project was a ten minute presentation, which included a brief overview of the individual's health and goals, a proposed plan of care, and how the student group utilized the IPEC competencies of value/ethics, teamwork, communication, and role/responsibilities during the three dedicated class sessions related to the IP interview.
Study design
In this retrospective, exploratory study, student perspectives of the impact of an IP interview as an IPE learning activity were analyzed through a survey and reflection assignment.The Institutional Review Board at LSUHNO approved the study protocol.Participant informed consent was not needed for the retrospective study.
Measures
In order to assess student perspectives after conducting the IP interview, a survey was administered (Table 1). The survey included thirteen questions using a Likert scale from 1 (low) to 5 (high). In addition, a written reflection was a required assignment for the course.
Data Analysis
Prior to the analysis of quantitative data, the data set was cleaned for blank responses. All analyses were performed using the Statistical Analysis System (version 9.4). The Kruskal-Wallis test was utilized to evaluate the student responses, as the student response variable did not meet the normality assumption needed for a one-way ANOVA; the exact option was used to request exact p-values. The statistical analysis of the student responses by program excluded three programs due to low participation in undergraduate nursing (1 student), rehabilitation counseling (1 student), and public health (2 students).

The thirteen survey items (Table 1) were as follows.

The team-based interview and the development of an interprofessional plan of care
Q1. increased my awareness of the multiple perspectives to consider in designing a care plan.
Q2. provided a chance to see how my discipline can contribute to an interprofessional team.
Q3. allowed me to understand how a care plan can be informed by multiple disciplines.
Q4. showed how my discipline was part of designing a care plan.
Q5. increased my confidence in participating in the interprofessional team approach.

During the team-based interview and the development of an interprofessional plan of care, I
Q6. respected the culture and values of other health professions.
Q7. communicated my professional role and responsibility clearly to patients, families and other professionals.
Q8. expressed my knowledge and opinions to team members with confidence, clarity and respect.
Q9. engaged or requested engagement of other health professionals, as appropriate.

During the team-based interview and the development of an interprofessional plan of care, my team members
Q10. respected the culture and values of other health professions.
Q11. communicated their professional role and responsibility clearly to patients, families and other professionals.
Q12. expressed their knowledge and opinions to team members with confidence, clarity and respect.
Q13. engaged or requested engagement of other health professionals, as appropriate.
Results
Eighty-six students completed the survey.The majority of students reported favorable engagement in the interprofessional interview from the perspective of self and team.Greater than 93% of the students agreed the team-based interview and the development of an interprofessional plan of care increased their awareness of the multiple perspectives to consider in designing a care plan and allowed them to understand how a plan of care can be informed by multiple disciplines.More than 89% of the students agreed the team-based interview and the development of an interprofessional plan of care helped them see how their discipline contributed to an interprofessional team, how their discipline was a part of creating a care plan, and increased their confidence in the interprofessional team approach.
For the student reflection of the survey, over 93% of the students believe the activity allowed them to respect the cultures of other health professionals, express with confidence their knowledge and opinions, and increase communication regarding their professional role and responsibility to the patient and family.
The final portion of the survey focused on the student's reflection of the team.More than 94% of the students believed the members of the team respected the culture and values of other health professions.The students also believed they were able to communicate their professional role and responsibility clearly, express their knowledge and opinions to other team members with confidence, and engage or request engagement of other healthcare professionals.
To evaluate how students may have differed by academic program, the Kruskal-Wallis test was conducted on the student responses in the eight programs for each of the thirteen questions presented in Table 1. The test results indicated the mean rank scores were statistically different by program for Question 2, "The team-based interview and the development of an interprofessional plan of care provided a chance to see how my discipline can contribute to an interprofessional team" (H = 14.06, df = 7, p = 0.05). The Wilcoxon scores for Question 4, "The team-based interview and the development of an interprofessional plan of care showed how my discipline was part of designing a care plan" (H = 19.70, df = 7, p = 0.006), were also statistically significant, as shown in Table 3. Statistically significant differences were not found for the remaining eleven questions (results not shown).
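As a rough illustration of this type of comparison, the sketch below applies a Kruskal-Wallis test to hypothetical Likert responses for one survey question grouped by program. The study itself used SAS 9.4 with exact p-values, whereas `scipy.stats.kruskal` returns the asymptotic chi-square p-value; the program names and response values here are placeholders, not the study data.

```python
# Minimal sketch of a Kruskal-Wallis comparison of Likert responses by program.
# Responses are hypothetical; scipy gives the asymptotic p-value, not the exact one.
from scipy.stats import kruskal

# Hypothetical 1-5 Likert responses to one survey question, grouped by academic program
responses_by_program = {
    "Audiology":        [4, 4, 5, 3, 4],
    "Clinical Lab Sci": [2, 3, 3, 2, 3],
    "Medicine":         [5, 4, 5, 4, 5],
    "Pharmacy":         [4, 5, 4, 4, 5],
}

H, p = kruskal(*responses_by_program.values())
print(f"Kruskal-Wallis H = {H:.2f}, asymptotic p = {p:.3f}")
```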
The box plots in Figures 1 and 2 illustrate the distribution of the original data for Questions 2 and 4.
Figure 1 shows the Clinical Laboratory Sciences (CLS) scores are lower than those of the other programs while Figure 2 shows the Audiology (AuD) and CLS responses are lower than the student scores from the other programs.
Twenty-four students reflected upon the PD interview in the course reflection assignment.All four IPEC competencies were highlighted within the reflection assignments.
Value and Ethics
Several students were unaware of the multiple medical and social issues individuals with PD expressed during the interview.Students identified the importance of placing the interests of the patient and caregiver at the center of interprofessional care with the goal of promoting health.Students discussed developing a comprehensive treatment plan to address the needs of the individual and caregiver.
Roles/Responsibilities
Many students discussed learning about common impairments in PD from various discipline perspectives. After the interview experience, students reported an appreciation of the different training backgrounds and approaches to PD care, which is needed in order to meet the needs of the patient and caregiver. Students also recognized the overlap in the focus of interview questions proposed by multiple disciplines and the need to avoid duplication of questions.
Teams and Teamwork
Many students reported that working together as a team can help patients receive the best care.There was an appreciation for a team effort to develop a cohesive, comprehensive list of interview questions that reflected multiple disciplines.A couple of students also reflected on the team interview process and noted suggestions for process improvement in the future.
Discussion
The engagement of multiple disciplines is essential in meeting the health needs of individuals diagnosed with PD or other complex chronic conditions.Effective teams require respect, comfort, and trust among members (Boult, Boult, Morishita, Dowd, & Urdangarin, 2001;Sommers, Marton, Barbaccia, & Randolph, 2000).It was important to include the IP interview assignment towards the end of the semester so students would have time to build respect, comfort and trust among the team.One of the students reflected on this opportunity as: All semester, by working through smaller cases, our team was to build a firm interprofessional foundation.First, we learned how to best communicate with each other.We also respectfully listened to each other, and seized every opportunity to learn about each other's professions.I believe that these small exercises allowed us to be successful as we interviewed our client.
Even though the placement of the team-based interview was perceived to be appropriate, the activity itself may not have been beneficial for CLS and AuD students, based upon their lower mean rank scores when compared to other professions. CLS is often referred to as the "hidden profession" (Forsman, 2002). Clinical laboratory scientists conduct their work outside of common patient care areas and do not interact directly with patients for evaluation and treatment purposes. Therefore, it can be difficult for CLS students to determine how they can contribute to the team through the development of a patient care plan. AuD students may have had a difficult time contributing to the development of a care plan if individuals with PD did not express any problems with hearing or dizziness.

It is important to note that a recent study demonstrated positive effects of interprofessional education (Cahn, 2016). The development and implementation of an interprofessional interview provides an opportunity for students to increase their competence in collaborative skills with the goal of improving patient outcomes.
Conclusion
Using an interprofessional interview as an educational modality is an effective way to improve health care students' perceptions of interprofessional collaborative practice.Educating health professional students on the skills needed to be effective members of teams and creating active learning team experiences can assist in developing a workforce that is collaborative-practice ready.
Table 1 .
Thirteen item survey
Table 2.
Core competencies for interprofessional collaborative practice. (Only a fragment of the table content was recoverable, from the Teams and Teamwork competency: "... effectively in different team roles to plan, deliver, and evaluate patient/population centered care and population health programs and policies that are safe, timely, efficient, effective, and equitable.")
Table 3 .
Wilcoxon scores (rank sums) for questions 2 and 4 by program.

Cohen, Hagestuen, Gonzalez-Ramos, Cohen, Bassich, Book, and Morgan (2011) utilized a National Parkinson Foundation IPE training program to improve PD knowledge, team building, and practice with a diverse group of providers and students. Incorporating IPE in continuing professional development initiatives can be a critical component in improving patient outcomes. Health care providers are optimal learners for interprofessional collaboration, as they have the opportunity to immediately implement interprofessional competencies, which can demonstrate the impact of IPE.
|
2018-12-12T14:38:59.992Z
|
2017-05-22T00:00:00.000
|
{
"year": 2017,
"sha1": "d55794b84f8e90d663049e7445676c0f5fbc73dc",
"oa_license": "CCBY",
"oa_url": "https://commons.pacificu.edu/cgi/viewcontent.cgi?article=1122&context=hip",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d55794b84f8e90d663049e7445676c0f5fbc73dc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9396772
|
pes2o/s2orc
|
v3-fos-license
|
Residues in the 1A rod domain segment and the linker L2 are required for stabilizing the A11 molecular alignment mode in keratin intermediate filaments.
Both analyses of x-ray diffraction patterns of well oriented specimens of trichocyte keratin intermediate filaments (IF) and in vitro cross-linking experiments on several types of IF have documented that there are three modes of alignment of pairs of antiparallel molecules in all IF: A11, A22 and A12, based on which parts of the major rod domain segments are overlapped. Here we have examined which residues may be important for stabilizing the A11 mode. Using the K5/K14 system, we have made point mutations of charged residues along the chains and examined the propensities of equimolar mixtures of wild type and mutant chains to reassemble using as criteria: the formation (or not) of IF in vitro or in vivo; and stabilities of one- and two-molecule assemblies. We identified that the conserved residue Arg10 of the 1A rod domain, and the conserved residues Glu4 and Glu6 of the linker L2, were essential for stability. Additionally, conserved residues Lys31 of 1A and Asp1 of 2A and non-conserved residues Asp/Asn9 of 1A, Asp/Asn3 of 2A, and Asp7 of L2 are important for stability. Notably, these groups of residues lie close to each other when two antiparallel molecules are aligned in the A11 mode, and are located toward the ends of the overlap region. Although other sets of residues might theoretically also contribute, we conclude that these residues in particular engage in favorable intermolecular ionic and/or H-bonding interactions and thereby may play a role in stabilizing the A11 mode of alignment in keratin IF.
To date, approximately 50 different genes encoding intermediate filament (IF) 1 chains exist in mammalian genomes. Based on differences in the organizations of their primary structures and genes, six different types of IF are now known (see Refs. 1-4 for reviews). The type I and type II keratins are the most numerous. In human, for example, each contains approximately 20 members, which are differentially expressed in various epithelial tissues. Each may be further divided into about 20 trichocyte keratin chains expressed almost exclu-sively in "hard" keratinizing tissues such as hair, and 20 cytokeratins. All keratin (as well as other) IF chains consist of a central rod domain composed of four ␣-helical segments (1A, 1B, 2A, and 2B) that possess a heptad repeat motif and are separated from one another by non-␣-helical linkers. The central rod domain is flanked on the head and tail by domains of differing size and chemical character. A large body of experimental evidence has now documented that the fundamental building block of all keratin IF is the heterodimer molecule, consisting of one type I and one type II chain (1)(2)(3)(4)(5)(6)(7). Although a number of important details remain to be resolved, this molecule is known to be stabilized in large part by the formation of a segmented ␣-helical coiled-coil by the appropriate parallel alignment of the central rod domain segments on the two chains. The next step is the formation of a pair of such molecules. Typically, this oligomer is the minimal IF structure that exists in solution, especially below the critical protein concentration required for assembly into macroscopic IF (ϳ40 g/ml). A number of biophysical, electron microscopic, and biochemical experiments have documented that the two molecules are aligned antiparallel and partly staggered in the A 11 or A 22 alignment mode, depending on whether the 1Aϩ1B or 2Aϩ2B rod domain segments overlap. Cross-linking data on cytokeratins and trichocyte keratins have revealed that both of these modes co-exist in solution, presumably in equilibrium with each other, as various experimental manipulations allow realignments (8 -10). Further, the cross-linking data have afforded quantitative estimates of the degree of overlap of the molecules. Thus, for the A 11 mode we have documented that the two molecules are displaced by approximately 112 amino acid residues with respect to each other. However, fundamental questions remain concerning the sequence features that specify and stabilize these alignment modes. In this study we have explored in K5/K14 IF which sequences may be involved in stabilizing the A 11 alignment mode. In this study, we have tested a current hypothesis that charged residues located along the rod domain segments may be important for molecular registration. Table I lists all possible charged residues that theoretically could be involved. By use of series of point substitutions of charged residues, we have identified several conserved residue positions that are important for stabilizing the A 11 alignment mode.
MATERIALS AND METHODS
Expression and Purification of K5 and K14 Chains-Full-length human K5 and K14 cDNAs were assembled into a pET11a vector and expressed in bacteria as described (12). Several mutant forms of both chains were generated by use of the QuickChange site-directed mutagenesis kit (Stratagene) (Table II). DNA sequencing was performed to confirm the mutations. Following induction, inclusion bodies were recovered, dissolved in SDS-PAGE buffer, and resolved in 3-mm-thick slab gels. The desired keratin bands were cut out, eluted into SDS gel buffer overnight, and the solutions stored at −70°C. Protein concentrations were determined by amino acid analysis following acid hydrolysis.
In Vitro IF Assembly-Equimolar mixtures of either a wild type and/or mutant K5 and K14 chain were made from the stored SDS gel buffer solutions. The SDS was removed by ion-pair extraction (13) and the pelleted acetone-wet proteins redissolved (0.05 or 0.5 mg/ml) in a buffer of 9.5 M urea containing 50 mM Tris-HCl (pH 7.6), 5 mM Tris(2-carboxyethyl)phosphine-HCl (TCEP) (Pierce), and 1 mM EDTA. For electron microscopy studies, IF were assembled by 1-h dialyses through decreasing urea solutions of 4, 2, and 1 M, and finally into assembly buffer of 10 mM Tris-HCl (pH 7.6), 1 mM EDTA and 5 mM TCEP (12). Final protein concentrations were 35-40 µg/ml, which is below the critical concentration (C₀) for IF assembly (14), wherein mostly two-molecule assemblies formed, or 400 µg/ml for optimal IF assembly. Particles were examined by electron microscopy following negative staining with 0.2-0.7% uranyl acetate over holey carbon film grids. Lengths of IF were measured (15) in fields of ≥400 µm². For IF assembly efficiency studies, protein mixtures in 9.5 M urea (40 µl of ≈500 µg/ml) were dialyzed directly into assembly buffer for 4 h. Solutions were then pelleted at 100,000 × g for 30 min in an Airfuge (Beckman Instruments). Yields of protein in the pellet were estimated by measuring the absorbance at 276 nm of the supernatant.
Transfection Experiments with K14-Green Fluorescent Protein (GFP) Plasmids-A construct encoding GFP coupled at the 5Ј end of the full-length coding sequence of wild type K14 was a generous gift of Dr. R. D. Goldman (Northwestern University Medical School, Chicago, IL). Point mutations were made in the plasmid as described above.
PtK2 (NBL-5) cells, epithelial-like rat kangaroo kidney cells, were obtained from ATCC (no. CCL-56). The cells were grown in 25-cm 2 tissue culture flasks and maintained in MEM (Eagle's minimal essential medium with nonessential amino acids, Earle's salts and reduced sodium bicarbonate at 0.85 g/liter) (Life Technologies, Inc.) with 10% fetal bovine serum. For cell passage, the cells were grown to near confluence, and the medium was aspirated, washed once with phosphate-buffered saline, and trypsinized for 20 s (0.25% trypsin; Life Technologies, Inc.). The trypsin solution was aspirated, and the cells Table III. A, wild type; B, K5 wild type and K14 1A Arg 10 3 Leu; C, K5 wild type and K14 1A Arg 10 were left at room temperature for 3 min. Five milliliters of medium were pipetted over the cells to dislodge them from the flask, and transferred to a 15-ml conical tube. Following 5 min of 1000 rpm centrifugation to pellet the cells, the medium was aspirated, and the cells were resuspended in 2 ml of medium and counted.
For direct immunofluorescence studies, 3 ϫ 10 5 cells/ml were plated in 35-mm sterile tissue culture dishes, each containing a glass coverslip. After 24 h, the cells were transfected with 1 g of plasmid DNA and 3 g of Lipofectin as described by the manufacturer (Life Technologies, Inc.). After 4 h, the mix was aspirated and 1 ml of 15% glycerol in Keratinocyte-SFM (Life Technologies, Inc.) was applied for 3.5 min. The glycerol solution was replaced with 2 ml of fresh medium and the cells incubated at 37°C with 5% CO 2 for at least 24 h. The coverslips were washed in phosphate-buffered saline and mounted onto glass slides with Gel/Mount (Biomeda Corp.). Intracellular localization of GFP fusion proteins was determined by direct fluorescent microscopy.
Protein Chemistry Procedures-To examine molecular stabilities, equimolar mixtures of the desired K5/K14 chains (ϳ40 g/ml) were equilibrated by 2-h dialyses into urea solutions of the desired concentration in a buffer of 10 mM triethanolamine (pH 8.0). The proteins were cross-linked with 25 mM disulfosuccinimidyl tartrate (DST) for 1 h at 23°C, and terminated with 0.1 M NH 4 HCO 3 (final concentration) (16). Although significant random cross-linking also occurs, these conditions were used because the near quantitative modification of all lysines allows for less diffuse bands on 3.75-7.5% gradient PAGE gels.
To assess molecular alignments in the A 11 and A 22 modes, crosslinking with DST was performed using 0.4 mM reagent as described before (8,9). We used wild type and mutant proteins that had been equilibrated into assembly buffer at about 40 g/ml for 1 h. In this case, Ͻ10% of the lysine residues were chemically modified, except for several aligned residues that formed cross-links with yields of up to about 0.3 mol/mol. Following cleavage with CNBr and trypsin digestion, peptides were resolved by HPLC as before, except that a non-linear gradient over a 120-min time period was used. The positions of elution of the peptides cross-linked by DST corresponding to the A 11 and A 22 molecular alignment modes were similar to those published previously (9), although many were confirmed by sequencing for five Edman degradation cycles on a Porton LF-3000 sequencer. Semiquantitative estimates of molar yields of each were made based on peak heights of the integrated HPLC profiles.
RESULTS AND DISCUSSION
In this paper we have made a systematic analysis of those charged residues that, based on current structural information, are located in rod domain positions that could influence the specificity and stability of the A 11 alignment mode of a pair of antiparallel heterodimer molecules in K5/K14 IF. These encompass the segments 1A, 1B, 2A, and beginning of the 2B, as
FIG. 4. Stabilities of dimer (one-molecule) and tetramer (twomolecule) assemblies of wild type and/or mutant chains in concentrated urea solutions (as shown) following cross-linking.
These and all other data are summarized in Table IV. The compositions of the assembly reactions are as shown. T, D, and M, respectively, mark the position of migration of the tetramer (two-molecule), dimer (onemolecule), and single-chain species.
well as the linkers L1, L12, and L2. We found that 41 charged positions have been conserved in the type II keratin 5 (K5) (Fig. 1, upper row) and type I K14 ( Fig. 1, lower row) chains. Based on extant ideas (2-4), we have hypothesized in this study that some of these may influence molecular alignment stabilities. Indeed, using the known quantitative estimate of molecular spacing of the A 11 alignment mode for keratin IF (about Ϫ112 residues) (9), we document in Table I that most of the 41 conserved charged residue positions lie opposite to each other and so are well sited to theoretically form stabilizing ionic salt bond pairs and/or H-bonds. Nevertheless, we discharged all conserved charged residue positions (i.e. mutated them to a non-charged residue) (all mutations are listed in Table II). In addition, there are 59 residue positions in this set that are not conserved between the K5 and K14 chains, and 4 others that are oppositely charged; several of these are also theoretically good candidates to form stabilizing salt bonds ( Table I). Some of these residue positions were discharged as well. We then examined the facility with which equimolar mixtures of one mutant and one wild type chain could assemble into one-and two-molecule oligomers, as well as IF in vitro and in vivo.
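To make the alignment argument concrete, the following sketch (a simplified geometric model, not the authors' software) uses the approximately 112-residue A11 stagger and the coiled-coil rise h_cc = 0.1485 nm per residue to ask whether two rod-domain positions on antiparallel chains fall within ±3 residues of each other, the proximity criterion used in Table I. The residue indices in the example are placeholders, and the axial offset is an assumption loosely based on the reported stagger.

```python
# Simplified geometric sketch of the A11 proximity criterion (not the authors' model).
# Residues are treated as points on a line with rise H_CC per residue; the antiparallel
# partner's rod N-terminus is placed at an assumed axial offset D_RESIDUES, so its
# residue n2 sits at axial position D_RESIDUES - n2. Two residues count as potential
# salt-bond partners when their axial separation is within 3 residues, as in Table I.
H_CC = 0.1485          # nm rise per residue in a coiled coil (Table I footnote)
D_RESIDUES = 112       # assumed offset, loosely based on the ~112-residue A11 stagger

def axial_nm(residue_index: int, antiparallel: bool = False) -> float:
    """Axial coordinate (nm) of a rod-domain residue on the parallel or antiparallel chain."""
    idx = D_RESIDUES - residue_index if antiparallel else residue_index
    return idx * H_CC

def within_salt_bond_range(n1: int, n2: int, tolerance_residues: int = 3) -> bool:
    """True if residue n1 (chain A) and n2 (antiparallel chain B) lie within the tolerance."""
    separation_nm = abs(axial_nm(n1) - axial_nm(n2, antiparallel=True))
    return separation_nm <= tolerance_residues * H_CC

# Example with placeholder positions: residue 10 vs. residue 100 on the antiparallel chain
print(within_salt_bond_range(10, 100))   # True, since 10 and (112 - 100) = 12 differ by <= 3
```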
Assembly of IF in Vitro and in Vivo-The initial criterion of assembly competence was formation of pelletable IF particles by use of a sedimentation assay in the Airfuge. Experience has shown that particles must be ≥750 kDa in size in order to pellet with high efficiency.² This corresponds to an oligomer of as many as 16 chains (8 molecules), i.e. it consists of a full-length half-width entity characteristic of an early stage of IF assembly (17). In almost all cases, however, we found empirically that assembly of mixtures of mutant and/or wild type chains either resulted in macroscopic IF (≥0.5 µm long), which were readily pelletable in the Airfuge and clearly visible by electron microscopy after negative staining, or no large IF particles were formed at all (<0.1 µm long and <4 nm wide), which did not pellet in the Airfuge and required examination over holey carbon film grids to be visible.
Sixty-six combinations of K5/14 chains were examined in in vitro assays (Fig. 2, Table II). Most combinations assembled in high yield and appeared as native-type IF >1 µm in length. However, several combinations did not, including three positions in the 1A rod domain segment (K14 Asp9 → Ala, Arg10

[Displaced figure legend, apparently from Fig. 6: This displays the close proximity of the two conserved ... and Arg/Lys10 residues of one molecule (blue lines, large blue dots) with the conserved acidic residues Glu4 and Glu6 in the L2 linker (red lines, large red dots) of the other molecule. In addition, large dots delineate the possible interactions between 1A Lys31 and 2A Asp1. Smaller dots delineate possible interactions involving 2A Asp3 and L2 Glu7. We hypothesize that these may form several intermolecular salt and/or H-bonds and thereby contribute essential specificity and stability to the A11 alignment mode. The segments of the molecules are marked.]

[Displaced figure-legend fragment, apparently from Fig. 5: "... alignment mode; closed circles indicate those denoting the A22 mode. Semiquantitative information of each peak is listed in Table V."]
In a related second set of experiments, nine of these mutations were introduced into the GFP-K14 construct and their propensities for assembly into keratin IF in vivo were examined after transfection into PtK2 cells (Fig. 3). These cells express predominantly the K6, K7, K16, and K17 keratin chains but have been shown previously to accommodate incorporation of transfected wild type or mutant K14 chains (19). Additionally, the efficacy of incorporation of transfected GFP-K14 constructs into cultured cells to explore keratin IF cytoskeletons is now established (20). 3 Four mutants (1A Arg 10 3 Leu (Fig. 3B), 1A Lys 31 3 Met (Fig. 3D), L2 Glu 6 3 Ala (Fig. 3G), and 2B Glu 106 3 Ala (Fig. 3H)), resulted in severely disrupted cytoskeletons in which most of the keratin IF had withdrawn to a perinuclear location, and there were bright spots of unassembled GFP-labeled protein. In five other cases, the keratin IF cytoskeletons were either unchanged (1A Lys 17 3 Met (Fig. 3C), 1B Lys 71 3 Ile (data not shown)), or mildly abnormal due to some apparent clumping and/or elongation of the keratin IF (1A Glu 22 3 Ala (data not shown), 1B Glu 56 3 Ala (Fig. 3E), and 1B Glu 84 3 Ala (Fig. 3F)).
Some of these data were expected and thus serve as controls. The 1A positions 9 and 10 have been shown previously to be sites for mutation in various keratinopathy diseases (18); in vivo and/or in vitro expression of proteins containing these mutations revealed limited or no IF assembly (21). Similarly, we have recently documented that the 2B residue positions 100, 104, and 106 are required to form stable molecules because they participate in coiled-coil trigger formation in IF (11).
Cross-linking Studies with DST in Urea Solutions to Assess
One- and Two-molecule Stabilities-The second criterion of assembly competence used in this study was the formation of stable one- and two-molecule assemblies. We have previously established (16) a method to assess the stabilities of single coiled-coil molecules and pairs of them by use of a graduated urea concentration titration assay coupled with cross-linking by DST. At protein concentrations below the critical concentration for IF assembly (~40 µg/ml) in assembly buffer in the absence of urea, the K5 and K14 chains form mostly two-molecule (and traces of one-, three-, and four-molecule) oligomers (8,9). These dissociate into single molecules at about 6.5 M urea (approximate concentration of half loss), and then the molecules dissociate to individual chains by about 9.5 M urea, as reported earlier by Wawersik et al. (22) for K5/14 keratin IF and for vimentin and α-internexin (16) (see Fig. 4A).
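The half-dissociation point quoted above (about 6.5 M urea for the two-molecule species) can in principle be estimated by fitting a sigmoid to gel-band intensities across the urea series. The sketch below is a generic curve-fitting illustration with made-up densitometry values, not the quantitation pipeline actually used in this study.

```python
# Generic sketch: estimate the urea concentration of half-dissociation for the
# tetramer (two-molecule) band by fitting a logistic curve to band intensities.
# Intensities are hypothetical densitometry readings, not data from this study.
import numpy as np
from scipy.optimize import curve_fit

urea_M      = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])                            # urea (M)
tetramer_fx = np.array([1.0, 0.98, 0.95, 0.9, 0.85, 0.7, 0.55, 0.3, 0.1, 0.02])   # fraction remaining

def logistic(x, midpoint, slope):
    # decreasing sigmoid: ~1 well below the midpoint, ~0 well above it
    return 1.0 / (1.0 + np.exp(slope * (x - midpoint)))

(midpoint, slope), _ = curve_fit(logistic, urea_M, tetramer_fx, p0=(6.0, 1.0))
print(f"Estimated half-dissociation at {midpoint:.1f} M urea")
```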
Mutants representing every single conserved charged residue position in either the K5 or K14 chain (from Fig. 1), and some nonconserved ones, were tested in this assay. Several observations are apparent (Table IV). First, for only the 2B rod domain positions 100, 104, and 106 were both the two-molecule and single-molecule entities unstable in even 1 M urea. This is expected from our earlier data, as these residues participate in the formation of a stabilizing coiled-coil trigger motif for IF (11). Second, in all other cases, the one-molecule species was essentially as stable as the wild type. However, third, there were several conserved charged residue positions that resulted in significantly destabilized two-molecule entities (ϳ4 M urea), including a Axial positions are measured in terms of h cc , which corresponds to a 0.1485-nm rise of each residue in a coiled coil conformation. This list documents the oppositely charged residues that are located within Ϯ3 residues of each other (and thus theoretically could form a salt bond) when aligned using the known parameters of A 11 mode adduced for keratin IF. Asterisks identify those residue positions/pairs which were confirmed to be of importance in this study. two-molecule oligomer, known as A 11 , A 22 , and A 12 . In our hands, the A 12 mode exists only at high pH values and is not assembly-competent (14). However, a variety of chromatographic, ultracentrifugation, electron microscopic, solution birefringence, and cross-linking data have documented that twomolecule oligomer of a variety of mammalian IF exist in assembly-competent solutions as 60 -70-nm-long particles in which the two molecules are aligned in the A 11 and/or A 22 mode. Our previous cross-linking experiments have shown that the two must co-exist in solution, since we have been able to recover DST cross-linked peptides arising from links between antiparallel molecules aligned in both modes (8 -10). Therefore, 19 3 Leu 5Ј-TAC CTG GAC AAG GTG CtT GCT CTG GAG GAG GCC-3Ј X 1A Glu 22 we reasoned in the present experiments that the destabilization of the two-molecule oligomer in the several mutations identified above should be due to loss of one or both of these alignment modes. To check this, we performed additional larger scale cross-linking experiments with 0.4 mM DST. The proteins were then cleaved with CNBr and trypsin, and the resulting peptides were resolved by HPLC (Fig. 5), but using a broader and flatter gradient extending over 120 min versus 70 min previously (8). We found six common peaks in the one-and two-molecule species of wild type K5/14 arising from intramo- (Fig. 5B). In the wild type two-molecule oligomer, there were an additional 15 peaks due to intermolecular links, of which 5 could be assigned to linkages between molecules aligned in the A 22 mode (Fig. 5C, closed circles), and 10 to linkages denoting the A 11 alignment mode (Fig. 5C, open circles). Semiquantitative data on the amounts of each were determined on the basis of peak areas (all data summarized in Table V). These experiments were repeated for seven mutant mixtures. As found previously (11), the Glu 106 3 Ala substitution in the 2B rod domain segment resulted in loss of the A 22 alignment mode, and resultant substantial loss of the A 11 mode. However, the 1A Arg 10 3 Leu (Fig. 5D) and Lys 31 3 Met, and L2 Glu 6 3 Ala single (Fig. 5E) or Glu 4 3 Ala/Glu 6 3 Ala double substitutions resulted in almost complete loss of the A 11 mode. 
Further, the yields of the cross-links denoting to the A 22 mode were generally increased over the wild type amounts (Table V). These data confirm that the A 11 and A 22 modes of molecular alignment in fact exist in equilibrium in solution and suggest that loss of the former by destabilization results in a net reduction of the stability of all tetramers, together with an accumulation of molecules into the latter. The Arg 10 Substitutions in Keratinopathy Diseases-Thus, we have presented three sets of data, which document that certain residues along the keratin IF chains are especially important for: successful IF formation in vitro and in vivo; the stability of the two-molecule hierarchical stage of IF assembly; and, in particular, for specifying and stabilizing the A 11 mode of alignment of two antiparallel molecules. Indeed, several of the residues identified here correspond to residue pairs documented in Table I that are theoretically good candidates to form stabilizing ionic salt bonds. Based on the known alignment parameters of two antiparallel molecules in the A 11 mode, these residues are likely to lie very close to each other in the A 11 mode (Fig. 6). Thus, the conserved Arg 10 position of the 1A rod domain segment is closely adjacent to the conserved set of two (and in type I IF chains, three) acidic residues in positions 4, 6, and 7 in the linker L2. Notably, discharging of any one of these residues severely compromised the A 11 alignment of the two-molecule hierarchical stage of IF structure. Asp 9 (often an isosteric Asn in many IF chains) is likewise adjacent to these residues in L2. In addition, we note from Fig. 6 that the conserved 1A residue Lys 31 lies near the conserved 2A residue Asp 1 and Asp 3 (K5 chain only); likewise, discharging of these residues resulted in impaired stability of the A 11 alignment mode.
The simplest explanation of these data is that the key residues identified in this study interact to afford essential stability. One possibility is that this stability is provided by the formation of a complex intermolecular network of salt bonds and/or H-bonds. However, we cannot formally exclude the possibility that head and/or tail domain sequences also cooperate in these stabilizing phenomena. In addition, it is to be expected that many other charged residues, in addition to the key ones identified here, may also contribute in important ways to the alignment of the A 11 mode. It is also possible that these residues may participate in higher orders of IF structure, in particular the lateral association of molecules in the A 12 alignment mode, and elongation of molecules by overlapping of the A CN alignment mode. The availability of the complete atomic structure of a single IF molecule should provide the opportunity to further explore these possibilities in model building studies. Finally, it is interesting to note that the key potential interactions identified here do not involve the 1B segment, which corresponds to the central region of the A 11 overlap. Instead, both ends appear to be crucial in making favorable intermolecular interactions. Nevertheless, we speculate that apolar interactions between residues in the antiparallel 1B segments could also play a role in stabilizing the A 11 alignment mode. Interestingly, substitution of the Arg 10 residue in the 1A rod domain segment of especially type I keratins often results in a very serious phenotype in a variety of keratinopathy diseases (recently reviewed in Ref. 18). The molecular basis of the consequence of this substitution on keratin IF structure has not heretofore been determined, although one report (23) suggested the problem occurred at a structural hierarchical level above the stability of a single molecule. Our present data indicate in a straightforward way that this substitution causes a serious problem at the level of the two molecule stage of IF assembly, in particular by destabilizing the A 11 alignment mode.
|
2018-04-03T00:10:44.832Z
|
2001-01-19T00:00:00.000
|
{
"year": 2001,
"sha1": "cdbbb3a6b522ff65c04d5243e6863834fd41dfc0",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/276/3/2088.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "7220fc5ec1769365f80c79f17c1bde3fdb67b61d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
197615838
|
pes2o/s2orc
|
v3-fos-license
|
Exploring the Phase Transformation Mechanism of Titanium Dioxide by High Temperature in Situ Method
In this study, the real-time phase measurement of titanium dioxide under different temperature conditions was tested by real-time measurement of high temperature sample stage and Rietveld refinement method. The real-time phase transformation process of titanium dioxide under high temperature conditions was analyzed. The results show that the (101) crystal plane of anatase titanium dioxide is most stable when the amorphous metatitanic acid is transformed into anatase titanium dioxide during the phase transformation of titanium dioxide, and the growth of anatase has a preferred orientation along the c-axis. In the case of anatase-type titanium dioxide to rutile-type titanium dioxide, the (110) crystal plane of rutile-type titanium dioxide is the most stable, but the rutile type has no preferred orientation.
Introduction
TiO2 has good chemical stability and is widely used in aerospace and marine applications. With the continued development of China's economy and the evolution of the international situation, the demand for TiO2 in medical, sports, and clothing applications is also increasing. TiO2 has three crystal forms: rutile, anatase, and brookite. Among them, brookite is very unstable and is therefore rare in nature. At present, rutile and anatase are the main subjects of research and application. Among the three crystal forms, rutile is a thermodynamically stable phase, while anatase is a metastable phase, and the phase transition of anatase to rutile (A→R phase transition) is an irreversible phase transition process [1].
Both the rutile and anatase types belong to the tetragonal system. The unit cell parameters are: anatase type a = 3.7852 Å, c = 9.5139 Å; rutile type a = 4.58 Å, c = 2.95 Å. Under normal conditions, the temperature at which pure anatase titanium dioxide is converted to rutile titanium dioxide is 610 to 915°C [2]. In order to completely convert anatase titanium dioxide into rutile titanium dioxide, the calcination temperature should preferably be higher than 1000°C [3]. During grain growth, different crystal faces appear in a different order, and each crystal face has a different activity. Some researchers have studied titanium dioxide dominated by the anatase (001) crystal plane [4], and much research on the phase transition of titanium dioxide has been carried out by scholars in China and abroad. For example, D. M. Tebaldi used XRD to study the effect of different sol counterions on the phase transition of titanium dioxide and found that Cl− and Br− inhibit the A→R transition more strongly than NO3− [5]. S. Kalaiarasi and M. Jose heat-treated anatase titanium dioxide prepared by hydrolysis and carried out a kinetic analysis of crystal growth during the A→R transition [6]. Jeng-Shin Ma et al. used titanium dioxide prepared by hydrolysis to investigate the effect of hydrolysis pH on the A→R transition of titanium dioxide, and found that the phase transition activation energy increased with increasing pH at the beginning of hydrolysis [7]. In summary, most studies of the titanium dioxide phase transition have been carried out on a room-temperature sample stage: the sample is heated to the specified temperature, cooled, and then measured at room temperature. However, the crystal structure of a sample cooled back to room temperature is not necessarily the structure present at high temperature, so such measurements do not reflect the phase transition of titanium dioxide well in real time. In this study, a high-temperature in situ X-ray diffraction method was used to measure the phase of titanium dioxide in real time under high-temperature conditions. This ensures that the actual phase at each temperature is captured, so that phase changes under high-temperature conditions can be determined accurately and an accurate picture of the phase transformation process is obtained.
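Because both polymorphs are tetragonal, the positions of their strongest reflections follow directly from the unit-cell parameters quoted above. The short calculation below (standard Bragg geometry, assuming a Cu Kα wavelength of 1.5406 Å) reproduces the well-known anatase (101) peak near 25.3° and rutile (110) peak near 27.4°, which is how the two phases are distinguished in the diffraction patterns discussed later.

```python
# Expected 2-theta positions of the strongest anatase and rutile reflections,
# computed from the tetragonal d-spacing formula 1/d^2 = (h^2 + k^2)/a^2 + l^2/c^2
# and Bragg's law. A Cu K-alpha wavelength of 1.5406 Å is assumed.
import math

WAVELENGTH = 1.5406  # Å, Cu K-alpha

def two_theta(a: float, c: float, h: int, k: int, l: int) -> float:
    d = 1.0 / math.sqrt((h**2 + k**2) / a**2 + l**2 / c**2)   # tetragonal d-spacing (Å)
    return 2.0 * math.degrees(math.asin(WAVELENGTH / (2.0 * d)))

print(f"anatase (101): 2-theta = {two_theta(3.7852, 9.5139, 1, 0, 1):.1f} deg")  # ~25.3
print(f"rutile  (110): 2-theta = {two_theta(4.58,   2.95,   1, 1, 0):.1f} deg")  # ~27.4
```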
The mechanisms of crystal growth kinetics mainly include Ostwald ripening (OR) and oriented attachment (OA); during crystal growth the two mechanisms often operate simultaneously. In Ostwald ripening, large grains grow at the expense of small grains, reducing the total specific surface area of the crystal and hence the total interfacial free energy, which provides the driving force for grain growth. In oriented attachment, adjacent grains self-assemble: two grains with the same crystal-plane orientation fuse and grow in a specific direction [8]. In order to analyze the crystal growth mechanism during the phase transformation of titanium dioxide accurately, Rietveld full-pattern fitting was used to obtain the crystal parameters of each phase under high-temperature conditions.
Synthesis of the precursor: amorphous metatitanic acid
Take 200 ml of TiCl4 (≥99%) solution and 100 ml of 98% H2SO4. The TiCl4 solution was slowly added in batches to the 100 ml of 98% H2SO4 to prepare a saturated yellow-green TiOSO4 solution, which was heated on an electric furnace until the hydrogen chloride in it volatilized, leaving a nearly colorless TiOSO4 solution. After standing for 48 hours, the colorless TiOSO4 solution was transferred to a beaker and stirred, and a supersaturated NaOH solution (prepared by dissolving NaOH ≥96% pellets in water) was added until the TiOSO4 solution hydrolyzed to give white colloidal metatitanic acid; the pH measured at this point was 6. The metatitanic acid was washed several times in an ultrasonic cleaner so that the impurity ions adhering to it passed into solution, and it was then washed by filtration. An appropriate amount of BaCl2 (≥99.5%) was dissolved in distilled water to prepare a supersaturated BaCl2 solution, and AgNO3 (≥99.8%) was dissolved in distilled water to prepare a supersaturated AgNO3 solution. The filtrate was added with a plastic dropper to the supersaturated BaCl2 and AgNO3 solutions until no white precipitate formed, confirming that Cl−, Na+ and SO42− had been removed. The white metatitanic acid was dried in an oven for 24 hours and ground into a white powder.
Methods
The obtained amorphous metatitanic acid was repeatedly ground in an agate mortar and pressed in a tablet press to a thickness of about 1 mm. The pressed sample was placed in a STA 409PC synchronous thermal analyzer for differential scanning calorimetry over a temperature range of 26°C to 1100°C at a heating rate of 10°C/min, and the DSC curve was exported. In addition, the sample was placed on an X'pert Pro MPD high-temperature X-ray diffractometer for high-temperature in-situ testing, using continuous scanning, a Cu target, voltage 40 kV, current 40 mA, step size 0.026°, starting angle 5°, ending angle 90°, and a counting time of 49 s per step. A high-temperature in-situ diffraction pattern was then obtained. Figure 1 shows the X-ray diffraction high-temperature sample stage. In the figure, part 7 is a platinum strip used to heat the sample; the strip is heated by the external power supply 4, its temperature is measured by thermocouples 5 and 6, and it is regulated in real time by a computer. In this way, the X-ray diffractometer can scan in real time during heating, and the loss of information that occurs after a sample has cooled down is avoided. The diffraction patterns were refined by the Rietveld method using HighScore Plus software to obtain the lattice parameters of the different crystal forms, which were then plotted and analyzed. The obtained metatitanic acid was placed in the X'pert Pro MPD high-temperature X-ray diffractometer and heated from 26°C to 700°C at a heating rate of 10°C/min, with continuous scanning, Cu target, voltage 40 kV, current 40 mA, step size 0.026°, starting angle 5°, ending angle 90°, and 49 s per step. The high-temperature in-situ XRD patterns of the transformation from amorphous titanium dioxide to anatase are shown in Figure 3. As can be seen from Figure 3, the diffraction peaks at 26°C are all low, broad humps, indicating that the sample is amorphous titanium dioxide. From 26°C to 420°C the heights of the diffraction peaks increase continuously as the amorphous material changes to anatase and the crystal planes of anatase form in sequence, but a complete anatase phase has not yet appeared; the sample is in the transition zone from amorphous to anatase. When heated to 300°C, the diffraction peak of the anatase (101) crystal plane begins to appear. It is generally accepted that the lower the energy, the more stable the plane; therefore the (101) crystal plane has the lowest energy and the best stability. When heated to 420°C, the (103), (200) and (213) crystal planes appear almost simultaneously, indicating that these three planes are essentially equal in stability. When heated to 430°C, two anatase diffraction peaks begin to appear at diffraction angles of about 55°, namely the (105) and (211) crystal planes, indicating that these two planes are essentially equivalent in stability. At this point essentially all the diffraction peaks of anatase have appeared; thereafter the anatase diffraction peaks continue to grow as the anatase titanium dioxide continues to crystallize. Therefore 430°C can be taken as the anatase phase transition point, which is basically consistent with the phase change point of 409°C measured by DSC in Figure 2.
In addition, the diffraction peak of the (101) crystal plane is clearly higher than those of the other crystal planes, indicating that anatase mainly exists in the form of the stable (101) crystal plane. It can also be seen from Figure 3 that the transition from amorphous to anatase is gradual and that different crystal faces appear in a different order; therefore the stability of the different anatase crystal faces differs. Rietveld full-pattern fitting was performed on the diffraction patterns from 200°C to 470°C with HighScore Plus software, and the lattice parameters obtained from the refinements are shown in Table 1. In Table 1, Rp ≤ 10 and GOF ≤ 5, indicating that the refinement results are good and the lattice parameters are credible. Plotting the data in Table 1 gives Figure 4. Anatase titanium dioxide belongs to the tetragonal system; each anatase unit cell contains four TiO2 formula units, while each rutile unit cell contains two [10]. It can be seen from Figure 4 that the overall trend of the lattice parameters a and c is an increase over the temperature range from 200°C to 450°C in which the anatase crystal form gradually becomes complete, reflecting the gradual growth of anatase shown in Figure 3. However, although the overall trend of a is upward, it decreases at 410°C, 430°C and 440°C, whereas the value of c increases throughout. In this respect the crystal growth of anatase titanium dioxide resembles growth with a preferred orientation (the OA oriented-attachment mechanism): anatase elongates mainly along the c crystal axis, and the growth of each unit cell corresponds to TiO2 molecules stacking continuously along the c axis from the disordered amorphous state into the ordered anatase structure, until each anatase unit cell contains four TiO2 formula units and a complete anatase tetragonal cell is gradually formed. The values of a and c change little in the range 450°C to 470°C, because complete anatase unit cells have formed by this point and only grain growth remains. On the other hand, the fact that a both increases and decreases while c increases throughout indicates that not all crystal faces appear at the same time during the growth of anatase; the crystal faces appear in sequence. If all crystal faces appeared simultaneously, a and c would increase together and the value of a would not fluctuate. The obtained metatitanic acid was placed in the X'pert Pro MPD high-temperature X-ray diffractometer and heated from 26°C to 700°C at a heating rate of 10°C/min, with continuous scanning, Cu target, voltage 40 kV, current 40 mA, step size 0.026°, starting angle 5°, ending angle 90°, and 49 s per step. The high-temperature in-situ XRD patterns of the anatase-to-rutile transition are shown in Figure 5. As can be seen from Figure 5, the anatase-to-rutile transformation is similar to the amorphous-to-anatase transformation in that it is gradual: first one rutile crystal face forms, then the other crystal faces form in sequence, and finally a complete rutile phase is formed.
At 610°C, the (110) crystal plane of rutile begins to form, and this can be considered the phase transition point of the anatase-to-rutile transition; it is basically consistent with the DSC result in Figure 2. It also follows that the rutile (110) crystal plane has the lowest energy and the best stability. The (101) and (111) crystal planes of rutile then appear almost simultaneously at 620°C, and the (211) and (301) crystal planes appear simultaneously at 630°C. At this point the main crystal faces of rutile have crystallized completely and the rutile crystal form has essentially developed. Similarly, the (110) crystal plane has the highest diffraction peak, so rutile mainly exists in the form of the stable (110) crystal plane. Figure 7 shows the refined rutile lattice constants.
Rietveld full-pattern fitting was performed on the diffraction patterns from 580°C to 650°C using HighScore Plus software. The lattice parameters obtained from the refinements are shown in Table 2 (anatase) and Table 3 (rutile). Rp ≤ 10 and GOF ≤ 5, indicating that the refinement results are good and the lattice parameters are credible. Plotting Tables 2 and 3 separately gives Figures 6 and 7. It can be seen from Figure 6 that after the phase transition point of the anatase-to-rutile transition at 610°C, the lattice parameter a of anatase is essentially unchanged while c shows a downward trend. Both anatase and rutile belong to the tetragonal system, with anatase a = b = 3.7852 Å, c = 9.5139 Å and rutile a = b = 4.58 Å, c = 2.95 Å. This suggests that during the anatase-to-rutile transformation the Ti–O bonds along the anatase c crystal axis may break preferentially, after which rutile forms. From the rutile lattice parameters in Figure 7 it can be seen that during rutile growth after the phase transition at 610°C, the a and c values do not both increase as in the anatase growth mechanism; instead, a decreases while b increases. This indicates that rutile does not show the signs of preferential growth seen for anatase, but grows in all directions.
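To make the structural difference between the two phases more concrete, the unit-cell volumes and theoretical densities implied by the cell parameters and formula-unit counts quoted above (four TiO2 per anatase cell, two per rutile cell) can be computed directly. This is a rough illustrative calculation using the room-temperature cell parameters from the text, not the refined high-temperature values.

```python
# Theoretical density of a tetragonal TiO2 cell: rho = Z * M / (N_A * V)
N_A = 6.02214e23          # Avogadro's number (1/mol)
M_TIO2 = 79.866           # molar mass of TiO2 (g/mol)
ANGSTROM3_TO_CM3 = 1e-24

def density(a, c, z):
    """Density in g/cm^3 for a tetragonal cell with lattice parameters a, c (angstroms)
    and z formula units per cell."""
    volume_cm3 = a * a * c * ANGSTROM3_TO_CM3
    return z * M_TIO2 / (N_A * volume_cm3)

print(f"anatase: {density(3.7852, 9.5139, z=4):.2f} g/cm^3")  # ~3.9 g/cm^3
print(f"rutile:  {density(4.58, 2.95, z=2):.2f} g/cm^3")      # ~4.3 g/cm^3
# The higher density of rutile is consistent with it being the thermodynamically
# stable phase reached on heating.
```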
Conclusion
1) The temperature at which titanium dioxide transforms from the amorphous phase to anatase is 300°C, and the transition is gradual. The (101) crystal plane of anatase appears first and the other crystal planes appear in sequence, so the anatase (101) crystal face has the lowest energy and is the most stable, and anatase mainly exists in the form of the (101) crystal plane. The phase transition temperature of the anatase-to-rutile transition is 610°C. The (110) crystal plane of rutile appears first and the other crystal planes appear in sequence, so the rutile (110) crystal plane has the lowest energy and is the most stable, and rutile mainly exists in the form of the (110) crystal plane.
2) The growth of titanium dioxide transformed from the amorphous phase to anatase is oriented, proceeding faster along the c crystal axis, whereas the anatase-to-rutile transition has no preferred direction and the rutile grows in all directions.
Differential Stress, Strain Rate, and Temperatures of Mylonitization in the Ruby Mountains, Nevada: Implications for the Rate and Duration of Uplift
Knowledge of the magnitude of the differential stress during the formation of mylonitic rocks provides constraints on mechanical and thermal models for the exhumation of the metamorphosed footwalls of major low-angle detachment faults. We have analyzed the differential flow stress during the mylonitization of quartzose rocks in the Ruby Mountains, Nevada, using grain-size piezometers and kinetic laws for grain growth. Quartzites from mylonitic shear zones in Lamoille Canyon and Secret Creek gorge have grain sizes of 91-151 μm and 42-64 μm, respectively. The peak temperature during mylonitization was 630° ± 50°C, and analysis of grain-growth kinetics indicates that mylonitization continued during cooling to temperatures ≤450°C. Quartz grain-size piezometers suggest that the mylonitization occurred under differential stresses (σ1 − σ3) of 38-64 MPa, or maximum shear stresses of 19-32 MPa. Extrapolation of quartzite flow laws indicates that the mylonitization occurred at strain rates between 10^-10 and 10^-13 s^-1; arguments presented in the paper suggest that the likely range of strain rates is 10^-11 to 10^-12 s^-1. These strain rates are compatible with displacement rates of the order of 23 mm yr^-1 along a 1.5-km-thick simple shear zone. Such a shear zone dipping 15° would produce an uplift rate of 5.8 km m.y.^-1 and a horizontal extension rate of 22 km m.y.^-1. This uplift rate indicates that midcrustal mylonitic rocks could have been lifted up along a 1.5-km-thick simple shear zone dipping 15° in 2.6 m.y.
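The kinematic figures quoted in the abstract can be cross-checked with a few lines of arithmetic: a fault-parallel displacement rate of about 23 mm yr^-1 on a zone dipping 15° resolves into vertical (uplift) and horizontal (extension) components, and the uplift rate fixes an exhumation time. The source depth used below (~15 km) is an assumption for illustration; the abstract itself quotes only the rates and the 2.6 m.y. duration.

```python
import math

DIP_DEG = 15.0            # dip of the simple shear zone (from the abstract)
SLIP_RATE_MM_YR = 23.0    # fault-parallel displacement rate (from the abstract)
SOURCE_DEPTH_KM = 15.0    # assumed midcrustal depth of the mylonites (illustrative)

dip = math.radians(DIP_DEG)
# 1 mm/yr equals 1 km/m.y., so the slip rate converts directly.
uplift_rate = SLIP_RATE_MM_YR * math.sin(dip)      # km per m.y.
extension_rate = SLIP_RATE_MM_YR * math.cos(dip)   # km per m.y.
exhumation_time = SOURCE_DEPTH_KM / uplift_rate    # m.y.

print(f"uplift rate     ~ {uplift_rate:.1f} km/m.y.")     # ~6 km/m.y. (abstract: 5.8)
print(f"extension rate  ~ {extension_rate:.1f} km/m.y.")  # ~22 km/m.y.
print(f"exhumation time ~ {exhumation_time:.1f} m.y.")    # ~2.5 m.y. for a 15 km source depth
```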
INTRODUCTION
Dynamic processes within the ductilely deformed middle continental crust played an important role in mid-Tertiary core complex extension in the North American Cordillera [e.g., Coney and Harms, 1984]. The ductilely deformed midcrustal rocks were lifted up to Earth's surface along the footwalls of detachment faults that root in the middle crust [e.g., Davis et al., 1986]. Knowledge of the state of stress in the middle crust during the extension is fundamental to understanding the physics of those processes. Although the geometrical evolution of brittle detachment fault systems and the kinematics of the associated ductile shear zones have been studied by numerous geologists, little is known about the differential stress and strain rate during the evolution of these deeper shear zones. First, the differential stress provides a direct constraint on any mechanical modelling of crustal deformation. It can also be used, in conjunction with the temperature history of the rocks and experimental flow laws, to infer the strain rate during the development of ductile shear zones, which are common features in Cordilleran core complexes [e.g., Crittenden et al., 1980; Frost and Martin, 1982; Snoke and Lush, 1984; Davis et al., 1986].

The purpose of this paper is to use theoretically derived and experimentally calibrated microstructural piezometry of quartz to infer the differential stress during the mylonitization of quartzose rocks along a shear zone in the Ruby Mountains core complex. The strain rates, and the rates of extension and uplift during the development of the shear zone, are then calculated by applying quartzite flow laws at the estimated stresses and metamorphic temperatures. The core complexes and associated detachment fault systems have been intensively studied in the past decade.
Lamoille Canyon

The western end of Lamoille Canyon contains exposures of the mylonitic zone that forms a carapace above the higher-grade core of the range. This mylonitic zone is at least 1.5-2 km thick and can be traced along the west flank of the range for about 100 km [Valasek et al., 1989]. Sedimentary rocks within the mylonitic zone in this area have been attenuated to 5% of their thickness outside of the zone [Snoke and Howard, 1984]. We collected five samples along a 300-m-long north-south transect at approximately 115°28' and 40°41.5', about 300 m east of the Lamoille Canyon road (Figure 1). The transect includes probable Mesozoic granite gneiss, Ordovician-Cambrian calc-silicate rocks of Verdi Peak, the Cambrian and Late Proterozoic Prospect Mountain Quartzite, and the garnet-two-mica granite orthogneiss of Thorpe Creek. All the samples we collected are of Prospect Mountain Quartzite or quartzose layers in the orthogneiss of Thorpe Creek.

The samples are all strongly foliated quartzose mylonites with up to 5% muscovite and/or biotite and 5% plagioclase, generally segregated in thin layers defining a compositional foliation. They have been termed S-C mylonites by Lister and Snoke [1984], the S (schistosite) being a planar structure defined by the shape anisotropy of finely recrystallized quartz and C (cisaillement) the compositional layering defined by the mica and plagioclase crystals. The mica crystals are anhedral and lozenge- or fish-shaped. Plagioclase porphyroclasts are anhedral and augen-shaped with internal fractures and deformation bands. The S foliation is inclined at ~20° to the compositional interlayers (C). The quartz grains have a strong lattice preferred orientation, as indicated by the uniformity of colors produced by a gypsum plate; examples are illustrated by Snoke [1980, Figure 10].

Samples Ru-4, Ru-5, and Ru-6 show bimodal grain-size distributions, with a few relatively large, flattened relict grains surrounded by smaller recrystallized grains of uniform size dating from the mylonitic deformation. The relict grains are generally elongate parallel to the foliation (S), with minimum dimensions >0.5 x 2.0 mm, indicating that the premylonitic protolith was coarse grained (»1.0 mm). They show extensive undulatory extinction of the "blocky" type, defined by unbent subgrains separated by relatively high-angle subgrain boundaries; this is a well-recovered substructure. The recrystallized grains, by contrast, show little or no undulatory extinction. Their grain boundaries range from somewhat serrated (specimen Ru-7, Figure 2a), indicating grain boundary migration during or after the dynamic recrystallization that accompanied mylonitization, to straight (specimen Ru-8, Figure 2b), suggesting postmylonitic grain growth. Samples Ru-7 and Ru-8 have a unimodal grain size distribution, but the presence of subgrain structure in a few grains suggests that they may be relict grains.

The samples from Secret Creek gorge were collected on the south side of Nevada highway 229 (Figure 1), all within 10 m of the low-angle fault contact that separates a lower plate of mylonitic interlayered impure quartzite, migmatitic schist and orthogneiss from the overlying carbonate-rich Horse Creek allochthon. All the samples are quartz-rich mylonites with thin layers containing muscovite or biotite (0-5%) or plagioclase (1-10%). Several samples show "ribbon" textures, with large quartz grains, greatly elongated parallel to the compositional layering (C), separated by fine-grained recrystallized grains (Figure 2c). The linear intercept measurements (Table 1) do not include the large relict grains. In these samples the amount of recrystallization varies from about 20% in sample Ru-17, with relict grain dimensions parallel to the lineation of 0.3 x 6.0 mm, to 80% in sample Ru-12, with relict grain dimensions of 0.2 x 15.0 mm. The elongation of the small recrystallized grains defines a weak foliation (S) that (measured in sections parallel to the lineation and normal to the foliation) is parallel to C in samples Ru-12, Ru-16, Ru-18, and Ru-19, and inclined to C at angles of 15°, 20°, and 27° in samples Ru-10, Ru-11, and Ru-15, respectively. Plagioclase is present as anhedral or augen-shaped, cracked porphyroclasts.

The grain-boundary configurations and internal structures of quartz in the rocks from Secret Creek gorge differ somewhat from those from Lamoille Canyon. The "ribbon" grains show, in addition to some blocky subgrain structure, continuous undulatory extinction or bending. The recrystallized grains, which are less than half the size of those in the Lamoille Canyon mylonites, also show slight, continuous undulatory extinction (less than 10° and commonly less than 5° of bending in a single grain), without optically visible subgrains in most of the samples (e.g., Figure 2d). Undulatory extinction is essentially absent in samples Ru-12 and Ru-18. This suggests either continued recrystallization, or more prolonged plastic deformation after recrystallization, than in the Lamoille Canyon mylonites. The grain boundaries are commonly irregular or sutured, indicating grain boundary migration during or after deformation. In general, the textures of the Secret Creek gorge mylonites suggest deformation and recrystallization at lower temperatures, with less postmylonitic grain growth, than in the Lamoille Canyon mylonites. The large size of the quartz ribbons and of the mica and plagioclase porphyroclasts indicates a coarse-grained (~2 mm) premylonitic protolith for most of these rocks.

Grain Size Determination

The recrystallized grain sizes were measured by the method of Ord and Christie [1984]. Two 1-inch round polished wafers were cut from each sample. Both wafers were cut perpendicular to the foliation; one was cut parallel to the lineation and the other was cut orthogonal to the lineation. Each was etched with 40% ammonium bifluoride for ~5 min to reveal grain boundaries, but not low-angle subgrain boundaries [Wegner ...]. The grain size for a single sample is the geometric mean of 20 inverse mean linear intercepts (each of ~25-50 grains) and is not the geometric mean that would be obtained by measuring grains individually.
Grain-shape ellipsoids range from 1.0:1.0:0.9 to somewhat more anisotropic shapes. We used the recrystallized-grain-size piezometers of Twiss [1977, 1980], Mercier et al. [1977], and Koch [1983]. The quoted uncertainties include the experimental calibration error and the standard deviations of the grain-size measurements. Moreover, we used the same grain-size measurement technique that Koch used, so that our grain sizes are directly comparable to Koch's.

Differential stresses for the rocks from Lamoille Canyon range from 7 to 17 MPa, with four of the five samples between 7 and 9 MPa. Differential stresses for the rocks from Secret Creek gorge range from 31 to 64 MPa, with eight of the 10 samples between 35 and 49 MPa. Note that the uncertainties of these values are in some cases as large as the values themselves. Because the grain size may have increased during annealing, the stresses may be underestimated: invariably, grains are larger after annealing than they were during deformation, and the larger, annealed grain size leads to an underestimate of the actual flow stress (see Table 3). Grain growth during annealing can be evaluated from grain-growth kinetics.

Estimates of the peak temperature and pressure during mylonitization, determined from garnet-biotite-muscovite-plagioclase thermobarometry, are 630° ± 50°C and 400 ± 100 MPa [Hurlow, 1988; H. Hurlow, personal communication, 1989]. Consideration of the grain-growth kinetics of quartz aggregates, however, indicates that mylonitization must have continued to lower temperatures. The average grain sizes of the Lamoille Canyon and Secret Creek gorge rocks are 146 and 57 μm, respectively. From the kinetic laws for the grain growth of quartz (Table 4) [Tullis and Yund, 1982; Pierce and Christie], the growth history of grains with a final diameter of 146 or 57 μm can be reconstructed for fixed cooling rates (Figure 4). Figure 4 shows several grain-growth paths for cooling from different temperatures at a linear rate of 54°C m.y.^-1 (the minimum cooling rate given by 40Ar/39Ar and fission track data for the mylonitic rocks in Lamoille Canyon). This analysis gives the minimum grain size that could have resulted from mylonitization at 540°C and the corresponding differential stress (Figure 4). Similar reasoning indicates that Ta = 450°C and Tmax = 500°C for the mylonitic rocks in Secret Creek gorge with a final grain size of 57 μm. Grains of 57 μm diameter can form during annealing from any initial grain size smaller than 57 μm. At ≤450°C, 57-μm grains do not grow, thus Ta = 450°C for this grain size and cooling rate. During cooling from 490°C, grains that were initially 43 μm grow to a final grain size of 57 μm; at temperatures ≥500°C, grains rapidly grow to more than 57 μm. This places an upper limit (Tmax) of about 490°C on the temperature at the end of mylonitization of the rocks in Secret Creek gorge, based on grain-growth kinetics. Thus 42 μm is the minimum grain size that could have resulted from mylonitization at 500°C, corresponding to a differential stress of 64 MPa (Figure 4).
If the natural rocks deformed by the same mechanisms that operated during the experiments, then the constitutive relations can be used to predict one of the variables, temperature, stress, or strain rate, if the other two variables are known [Poirier, 1985]. Flow laws for steady state dislocation creep of quartzite are listed in Table 5. (In Table 5, Quartzite is the name of the rock on which the measurements were made; Kronenberg and Tullis [1984] used both Heavitree Quartzite and Arkansas novaculite. %H2O is the amount of water added to the samples; talc indicates experiments in which the samples were surrounded by dehydrating talc. No. is the number of samples on which measurements were made. * Includes data from eight experiments by Heard and Carter [1968]. † Preexponential constant from Kirby and Kronenberg [1987].) We did not include flow laws for vacuum-dried samples, because the presence of biotite and muscovite indicates that the natural samples were probably not deformed under anhydrous conditions. We have also excluded rheological data from experiments on novaculite and flint. None of the sets of experiments from which the flow laws were derived were ideal. All were done in solid-medium apparatus, which cannot measure stress as accurately as gas apparatus can. Kronenberg and Tullis' [1984] and Jaoul et al.'s [1984] experimental samples were encapsulated in platinum, which affects the stress measurements during the experiments. Further, their rheological data come from creep experiments on five or fewer samples. Koch et al.'s [1989] experiments were done with copper or copper and talc confining media, which are stronger than the salt confining medium used in some of the other experiments. Although experiments have shown that the rheological behavior of quartz is affected by pressure, impurities such as Na [Jaoul, 1984], water content [Jaoul et al., 1984; Kronenberg and Tullis, 1984], and the α/β transition [Linker and Kirby, 1981; Ross et al., 1983], none of these effects can yet be extrapolated quantitatively to natural conditions.

Strain rates were predicted from the flow laws at the stresses derived from the observed grain sizes and the grain-growth calculations above. At temperatures between Tmax and Ta, the predicted strain rates for the rocks from Lamoille Canyon are in the range 10^-9 to 10^-14 s^-1, and those for rocks from Secret Creek gorge are 10^-9 to 10^-12 s^-1. At a lower temperature, 400°C, strain rates are one order of magnitude slower. The strain rates derived from the experiments of Koch et al. [1989] are the slowest and yield the most conservative extrapolations. Even with the maximum uncertainty (including grain-size measurement errors, piezometer calibration errors, and flow-law calibration errors), strain rates of 10^-13 s^-1 or faster are predicted for the mylonitic rocks from Secret Creek gorge.

Implications for the Rate and Duration of Uplift

If the thickness of the mylonitic shear zone that produced the mylonitic rocks is known, the displacement rate across the shear zone can be calculated from the strain rate. The mylonitic zone within the Ruby Mountains has a total thickness of 1.5-2 km [Valasek et al., 1989]. However, field relationships in Secret Creek gorge indicate that as the mylonitic rocks cooled, strain localization occurred and the fault zone became progressively narrower [Snoke and Lush, 1984]. The rocks that we sampled from Secret Creek gorge represent some unknown, presumably intermediate step in the transition from a thick, homogeneously deforming zone to a thinner shear zone.
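The flow-law extrapolation used above can be illustrated with a short script that evaluates a power-law creep equation, strain rate = A·σ^n·exp(−Q/RT), at stresses within the piezometer ranges and at temperatures within the grain-growth brackets quoted earlier. The flow-law parameters below are generic illustrative values for wet quartzite, not the calibrations listed in Table 5 of the paper, so the numbers it prints are indicative only.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dislocation_creep_rate(sigma_mpa, temp_c, A=6.5e-8, n=3.1, Q=135e3):
    """Power-law creep: strain rate = A * sigma^n * exp(-Q / (R*T)).

    A [MPa^-n s^-1], n, and Q [J/mol] are illustrative wet-quartzite values,
    NOT the calibrations of Table 5; substitute a published calibration to
    reproduce the paper's extrapolations."""
    T = temp_c + 273.15
    return A * sigma_mpa**n * math.exp(-Q / (R * T))

# Stresses from the quartz piezometer ranges and temperatures from the grain-growth brackets
for sigma, temp, site in [(8, 540, "Lamoille Canyon"), (45, 475, "Secret Creek gorge")]:
    rate = dislocation_creep_rate(sigma, temp)
    print(f"{site}: sigma = {sigma} MPa, T = {temp} C -> strain rate ~ {rate:.1e} 1/s")
```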
TABLE 1. Number of Grains Counted, Geometric Mean Grain Aspect Ratios, and Geometric Mean Grain Size. (Grain size estimates were made from mean linear intercepts [Smith and Guttman, 1953] of 500-1000 grains per sample.)

TABLE 4. Experimentally Determined Parameters for Kinetics of Grain Growth in Flint and Novaculite.

TABLE 5. Experimentally Determined Parameters for Power Law Creep Constitutive Equations for Quartzites.

TABLE 7. Shear Zone Parallel Displacement Rates.
Solitons of discrete curve shortening
For a polygon in Euclidean space we consider a transformation T which is obtained by applying the midpoints polygon construction twice and using an index shift. For a closed polygon this is a curve shortening process. A polygon is called (affine) soliton of the transformation T if its image under T is an affine image of the polygon. We describe a large class of solitons by considering smooth curves which are solutions of a linear system of differential equations of second order with constant coefficients. As examples we obtain solitons lying on spiral curves which under the transformation T rotate and shrink.
Introduction
For an infinite polygon (x_j)_{j∈Z} given by the vertices x_j in the vector space R^n, the midpoints polygon M(x) is defined by M(x)_j := (x_j + x_{j+1})/2, j ∈ Z. If the polygon is closed, or rather periodic, i.e. if for some N we have x_{j+N} = x_j for all j ∈ Z, then this midpoint mapping M defines a curve shortening process: the polygon M(x) is shorter than the polygon x unless x is a single point. If we iterate the process for a closed polygon (x_j)_{j=1,...,N} it converges to the barycenter (x_1 + ... + x_N)/N. This elementary construction was already used by Darboux in 1878. He also showed in [6] that in the plane, in the general case, the sequence M^k(x)/cos(π/N)^k converges for k → ∞ to an ellipse. These results for polygons in Euclidean space were later rediscovered and extended by several authors, for example Kasner [10], Schoeneberg [14], and Berlekamp et al. [2]. Bruckstein & Shaked [3] discuss the relation with iterative smoothing procedures in shape analysis and recognition. Nowadays applications of discrete curve shortening are discussed in several papers; for example, Smith et al. [15] present its connection with the rendezvous problem for mobile autonomous robots. We modify the curve shortening process M for an infinite polygon (i.e. not necessarily closed) in Euclidean space as follows: instead of the midpoint mapping M we apply the midpoint mapping twice and use an index shift, i.e. we define the polygon T(x) = (T(x)_j)_{j∈Z} by T(x)_j := M^2(x)_{j-1} = (x_{j-1} + 2x_j + x_{j+1})/4. Introducing the index shift has the following advantage: we can view this process T as a discrete approximation of the semidiscrete flow s → x_j(s) defined by Equation (3) on the space of polygons. On the space of closed polygons this flow is the negative gradient flow of the functional F_2, cf. Section 4. Semidiscrete flows are discussed for example by Chow & Glickenstein [5].
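A quick numerical illustration of two facts quoted above — that iterating the midpoint map on a closed polygon shrinks it towards its barycenter, and that one step of T averages each vertex with its neighbours with weights 1/4, 1/2, 1/4 — is sketched below (illustrative NumPy code, not part of the original paper).

```python
import numpy as np

def midpoint_map(x):
    """M(x)_j = (x_j + x_{j+1}) / 2 for a closed polygon stored as an (N, n) array."""
    return 0.5 * (x + np.roll(x, -1, axis=0))

def T(x):
    """T(x)_j = M^2(x)_{j-1} = (x_{j-1} + 2*x_j + x_{j+1}) / 4."""
    return 0.25 * (np.roll(x, 1, axis=0) + 2.0 * x + np.roll(x, -1, axis=0))

rng = np.random.default_rng(0)
x = rng.normal(size=(7, 2))          # a random closed heptagon in the plane
barycenter = x.mean(axis=0)

y = x.copy()
for _ in range(200):                 # iterate the midpoint construction
    y = midpoint_map(y)

print(np.allclose(y, barycenter, atol=1e-6))   # True: M^k(x) converges to the barycenter
print(np.allclose(T(x), np.roll(midpoint_map(midpoint_map(x)), 1, axis=0)))  # True: T is M^2 with an index shift
```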
The polygon T(x) is formed by the midpoints T(x)_j of the medians through x_j of the triangles formed by x_{j-1}, x_j, x_{j+1}. Linear polygons x_j = ju + v, u, v ∈ R^n, are the fixed points of T.
Since the process T is affinely invariant it is natural to consider polygons which are mapped under T onto an affine image of themselves. We also use the term soliton for these polygons, in analogy to the case of smooth curves or manifolds which are mapped under the mean curvature flow onto the same curve or rather manifold up to an isometry, cf. Hungerbühler & Smoczyk [9], Hungerbühler & Roost [8] and Altschuler [1]. We call a polygon x = (x_j)_j a soliton of the process T if there exists an affine mapping x ∈ R^n → Ax + b ∈ R^n for a matrix A and a vector b such that T(x)_j = A x_j + b for all j ∈ Z (Equation (4)). The main idea of this paper is to consider not only polygons, but also smooth curves c which are affinely invariant under an analogous process on curves. We adapt the process T to curves in the following sense: we associate to the smooth curve c the one-parameter family c_s, s ∈ R, defined by c_s(t) = (c(t − s) + 2c(t) + c(t + s))/4 (Equation (5)). For s > 0, a ∈ R we define the polygon x(a, s) = (x_j(a, s))_{j∈Z} with x_j(a, s) = c(a + sj). Then the smooth curve c defines solitons of the form x(a, s) for the curve shortening process T if the following holds: for some ε > 0 and any s ∈ (0, ε) there is a one-parameter family of affine maps x ∈ R^n → A(s)x + b(s) ∈ R^n such that for any s ∈ (0, ε) and all t ∈ R:

(*) c_s(t) = A(s) c(t) + b(s).

In Figure 1 we show a polygon x = (x_j) which is a soliton of the process T. Its vertices x_j = c(0.4·j), j ∈ Z, lie on the smooth curve c(t) = (cos(2t), cos(3t)), which is also a soliton of the process T, see Figure 2. Denote by A the diagonal matrix with entries (1 + cos(0.8))/2 and (1 + cos(1.2))/2. Then Figure 1 shows the polygon x and its image T(x), which satisfy Equation (4), and Figure 2 shows the smooth curves c and c_{0.4}, which satisfy Equation (*) for s = 0.4 with A(0.4) = A, b(0.4) = 0. Hence the process T in this case corresponds to a scaling. This example belongs to Case (1a) in Section 5. Here A(0.4) is given by Equation (8), where B is the diagonal matrix with entries b_1 = −4 and b_2 = −9.
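The claim made for this example is easy to check numerically: for c(t) = (cos 2t, cos 3t) and s = 0.4, the averaged curve c_s(t) = (c(t − s) + 2c(t) + c(t + s))/4 should equal A c(t) with the diagonal matrix A quoted above. The short check below is illustrative and assumes nothing beyond the formulas stated in the text.

```python
import numpy as np

def c(t):
    """The soliton curve c(t) = (cos 2t, cos 3t) from the example."""
    return np.array([np.cos(2 * t), np.cos(3 * t)])

def c_s(t, s):
    """One application of the curve shortening process T along the curve:
    c_s(t) = (c(t - s) + 2 c(t) + c(t + s)) / 4."""
    return 0.25 * (c(t - s) + 2 * c(t) + c(t + s))

s = 0.4
A = np.diag([(1 + np.cos(0.8)) / 2, (1 + np.cos(1.2)) / 2])

ts = np.linspace(-5, 5, 1001)
max_err = max(np.linalg.norm(c_s(t, s) - A @ c(t)) for t in ts)
print(max_err < 1e-12)   # True: c_{0.4}(t) = A c(t), so T acts on this curve as a scaling
```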
It is the main result of this paper that solutions of Equation (*), i.e. smooth curves c defining solitons for the curve shortening process T, can be characterized as solutions of the inhomogeneous linear differential equation of second order with constant coefficients

(6) c''(t) = B c(t) + d

with B = 2 A''(0), d = 2 b''(0), cf. Theorem 1 and Theorem 2. The solitons c and the maps A(s), or rather b(s), can be described in terms of the power series

(7) co_B(s) = Σ_{k≥0} B^k s^{2k}/(2k)!,  si_B(s) = Σ_{k≥0} B^k s^{2k+1}/(2k+1)!,

cf. Proposition 2 and Proposition 3 in Section 2. We will show that for any real matrix B solutions of Equation (6) are solitons of the curve shortening process, and the matrix A(s) of Equation (*) is given by

(8) A(s) = (𝟙 + co_B(s))/2.

The vectors b(s) depend on the structure of the matrix B and vanish for closed solitons.
The one-parameter family s → c_s(t) defined by Equation (5) and associated to a soliton c can be used to define an affinely invariant solution of the wave equation, cf. Remark 3. The relation of polygonal curve shortening to the curve shortening flow for smooth curves is discussed in many papers, cf. [13, Sec. 3]; a reference for curve shortening flows is the book [4]. Self-similar solutions of the Euclidean curve shortening flow in the plane, i.e. the mean curvature flow for curves in Euclidean space, are discussed by Halldorsson [7]; see also Hungerbühler & Smoczyk [9] and Altschuler [1]. These questions lead to systems of non-linear ordinary differential equations. In Section 4 we show that the solitons of the curve shortening process T are also solitons of the semidiscrete flow defined in Equation (3). In Section 5 we discuss the planar case n = 2 in detail and present examples; for example, the parabola is a soliton for which the curve shortening leads to a translation.
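The parabola example mentioned above can also be verified directly: for c(t) = (t, t^2) the averaged curve c_s(t) = (c(t − s) + 2c(t) + c(t + s))/4 equals c(t) + (0, s^2/2), i.e. the process acts as a pure translation. A small numerical check (illustrative, using the same notation as before):

```python
import numpy as np

def parabola(t):
    return np.array([t, t**2])

def shorten(curve, t, s):
    """c_s(t) = (c(t - s) + 2 c(t) + c(t + s)) / 4."""
    return 0.25 * (curve(t - s) + 2 * curve(t) + curve(t + s))

s = 0.3
translation = np.array([0.0, s**2 / 2.0])
ts = np.linspace(-3, 3, 601)
max_err = max(np.linalg.norm(shorten(parabola, t, s) - (parabola(t) + translation)) for t in ts)
print(max_err < 1e-12)   # True: the parabola is a soliton on which T acts as a translation
```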
We also obtain various spirals as solitons which rotate and shrink and closed curves of Lissajous type, which scale under the mapping T. The results of Section 5 should be compared with the zoo of solitons obtained for the Euclidean curve shortening flow by Halldorsson [7], see also Hungerbühler & Smoczyk [9] and Altschuler [1].
System of linear differential equations of second order
Let B ∈ M(n; R) be an n × n matrix with real entries. We denote by 𝟙 the identity matrix and use the following notation, in analogy with the matrix exponential exp(B): co_B(t) := Σ_{k≥0} B^k t^{2k}/(2k)! and si_B(t) := Σ_{k≥0} B^k t^{2k+1}/(2k+1)!. These power series are obviously convergent and we obtain the following. For a matrix B the above defined mappings co_B(t), si_B(t) satisfy the (differential) equations d/dt co_B(t) = B si_B(t) and d/dt si_B(t) = co_B(t), with co_B(0) = 𝟙, si_B(0) = 0. Given a real number b ∈ R we denote by cos_b, sin_b : R → R the unique solutions of the differential equation f''(t) = b f(t) with cos_b(0) = 1, cos_b'(0) = 0 and sin_b(0) = 0, sin_b'(0) = 1. Then we obtain for a real number b and the matrix B = b𝟙: co_B(t) = cos_b(t) 𝟙 and si_B(t) = sin_b(t) 𝟙.

Proposition 2 (Homogeneous differential equation). For a matrix B ∈ M(n; R) and vectors v, w ∈ R^n the linear system of ordinary differential equations (with constant coefficients) of second order (10) c''(t) = B c(t) with initial values c(0) = v, c'(0) = w has the unique solution (11) c(t) = co_B(t) v + si_B(t) w. Then the one-parameter family of curves s → c_s for s ∈ R defined by Equation (5) satisfies Equation (*); using Equation (11) we can write c_s(t) = (𝟙 + co_B(s))/2 · c(t).

Remark 1 (Addition rules). The second order linear system (10) of ordinary differential equations is equivalent to the first order linear system d/dt (c(t), c'(t)) = (c'(t), B c(t)) with constant coefficients. The solution of this system with initial values c(0) = v, c'(0) = w is given by the corresponding one-parameter group of matrices. Hence the curve t → (c(t), c'(t)) is the orbit of the point (v, w) ∈ R^n ⊕ R^n under a one-parameter group of linear transformations. We obtain the addition rules co_B(t + s) = co_B(t) co_B(s) + B si_B(t) si_B(s) and si_B(t + s) = si_B(t) co_B(s) + co_B(t) si_B(s).

Remark 2 (Roots of B). If there is a matrix C ∈ M(n; R) such that C^2 = B, then co_B(t) = (exp(tC) + exp(−tC))/2 and si_B(t) C = (exp(tC) − exp(−tC))/2; in this case solutions of the inhomogeneous equation are orbits of a point under a one-parameter group of affine transformations in R^n.
Let c be a solution of the inhomogeneous linear differential equation c''(t) = B c(t) + d, let U ∈ M(n; R) be an invertible matrix and e a vector; then the curve Uc(t) + e satisfies an equation of the same type with B replaced by UBU^{-1}. Therefore one can assume that B ∈ M(n; R) is already in complex Jordan normal form, cf. [11, Thm. 5.4.10], or real Jordan normal form, cf. [11, Thm. 5.6.3]. If an eigenvalue λ of B is not real, then the conjugate value is an eigenvalue, too. In Section 5 we discuss the possible real Jordan normal forms for n = 2. For a complex number λ the Jordan block J_m(λ) is the m × m upper triangular matrix with λ on the diagonal and 1 on the superdiagonal. The complex Jordan normal form of a real matrix B consists of Jordan blocks of the form J_m(λ), m ≥ 1, for eigenvalues λ ∈ C.
The real Jordan normal form of a real matrix B consists of two different types of Jordan blocks: for a real eigenvalue λ there are Jordan blocks of the form J_m(λ); if λ = α + iβ, β ≠ 0, is a non-real eigenvalue, then for some m ≥ 1 real Jordan blocks of the form J_{2m}(α, β) occur. The real Jordan block J_{2m}(α, β) is always invertible. Hence the Jordan normal form of a singular real matrix contains a nilpotent Jordan block of the form J_m(0), m ≥ 1. Therefore it is sufficient to discuss the following cases.

Proposition 3 (Inhomogeneous differential equation). For a matrix B ∈ M(n; R) and a vector d ∈ R^n we consider the inhomogeneous linear system of ordinary differential equations (with constant coefficients) of second order c''(t) = B c(t) + d (Equation (17)), and we consider the one-parameter family of curves c_s defined by Equation (5). Then Equation (*) and Equation (8) hold. We consider three cases: (a) If there is a vector d* such that d = B·d*, then for v, w ∈ R^n the unique solution of Equation (17) with initial values c(0) = v, c'(0) = w is given by c(t) = co_B(t)(v + d*) + si_B(t) w − d*, and the one-parameter family c_s(t) is given by c_s(t) = A(s) c(t) + (A(s) − 𝟙) d*. Proof. This proves Equation (17) in case (a), or rather Equation (19). The addition rules of Equation (13), applied to c*(t) given by Equation (20), show that c(t) satisfies Equation (21), which together with Equation (23) proves Equation (*) and Equation (22).
Let c : R → R^n be a smooth curve with the one-parameter family of curves c_s defined by Equation (5). If the smooth curve c is a solution of Equation (17), then Proposition 3 implies that during the evolution s → c_s the affine form of the curve c = c_0 is preserved. This motivates the notion soliton for these solutions of the wave equation.
In the sequel we study which invertible matrices D can be written in the form (𝟙 + co_B(s))/2 for some real matrix B. Since for any invertible matrix U we have co_{UBU^{-1}}(s) = U co_B(s) U^{-1}, it is sufficient to check the possible Jordan normal forms of co_B(s) for the different Jordan blocks B. It turns out that a large class of invertible matrices D can be written in this form (Proposition 4). Proof. We compute for the possible complex Jordan normal forms J of a real matrix B the complex Jordan normal form of co_J(s), resp. f(J, s). Since the matrix D = (𝟙 + co_B(s))/2 is supposed to be invertible, we exclude −1 as an eigenvalue of co_B(s); co_J(s) is a diagonal matrix with non-negative real eigenvalues.
Discrete Curve Shortening
An (infinite) polygon x = (x_j)_{j∈Z} in R^n is defined by its vertices x_j ∈ R^n. We call P = P(R^n) the vector space of these polygons. We can identify the polygon x with the piecewise linear curve x : R → R^n which is a straight line on any interval [j, j+1] and satisfies x(j + u) = (1 − u)x_j + u x_{j+1} for any u ∈ [0, 1], j ∈ Z. If there is a positive integer N such that x_{j+N} = x_j for all j ∈ Z, then we call the polygon x closed, or rather periodic, with N vertices or of period N. In this case we can identify the index set with Z_N = Z/(N·Z). We denote the set of closed polygons with N vertices by P_N = P_N(R^n). The midpoint mapping is given by M(x)_j = (x_j + x_{j+1})/2. For a closed polygon x ∈ P_N(R^n) its length is given by L(x) := Σ_{j=0}^{N−1} ‖x_{j+1} − x_j‖, where ‖·‖ denotes the Euclidean norm. The triangle inequality implies the curve shortening property L(M(x)) ≤ L(x) of the midpoint mapping in the general case.

Definition 2 (Curve shortening process). We introduce the mapping T : P(R^n) → P(R^n), T(x)_j = (x_{j−1} + 2x_j + x_{j+1})/4. (a) We call a polygon x = (x_j)_{j∈Z} affinely invariant under the mapping T (or rather an (affine) soliton of the curve shortening process T) if there is an affine map (A, b), A ∈ Gl(n, R), b ∈ R^n, such that T(x)_j = A x_j + b for all j ∈ Z. (b) Analogously, a smooth curve c is called affinely invariant (a soliton) if there is a one-parameter family of affine maps (A(s), b(s)) such that c_s(t) = A(s) c(t) + b(s) for all s ∈ (0, ε), t ∈ R.
It is obvious that these notions are affinely invariant. For a ∈ R, s ∈ (0, ε) the polygon x = x(a, s) with x(a, s)_j = c(a + sj), j ∈ Z, lying on a smooth curve c : R → R^n is a soliton of the curve shortening process T if the curve c is also a soliton of the corresponding process T on curves. On the other hand, a polygon x = (x_j)_{j∈Z} which is a soliton of the curve shortening process T satisfying Equation (26) can be obtained from a smooth curve which is a soliton as defined in Equation (27) if and only if A can be written in the form A = (𝟙 + co_B(s))/2. In Proposition 4 the Jordan normal forms of these matrices are classified.
Remark 4 (Eigenpolygons of T ). If we consider closed polygons then the midpoint mapping defines a linear map M : P N −→ P N on the (n · N )-dimensional vector space P N , and one can use a decomposition into eigenspaces, cf. [14] and [2]. The matrix is in particular circulant.
and eigenvectors z (k) given by Equation (28). Note that all polygons z (k) given by Equation (28) Proposition 5 (assignment matrix A → polygon). Let (A, b) : x ∈ R n −→ Ax + b ∈ R n be an affine map and u, v ∈ R n be two points in R n , and j 0 ∈ Z. Then there is a unique polygon x ∈ P(R n ) with x j 0 = u, x j 0 +1 = v which is affinely invariant (with respect to A and b) under the mapping T.
Proof. If x ∈ P(R^n) is affinely invariant under the mapping T we have (x_{j−1} + 2x_j + x_{j+1})/4 = A x_j + b; hence the sequence (x_j) with x_{j_0} = u, x_{j_0+1} = v is uniquely determined by the recursion formulae x_{j+1} = 4(A x_j + b) − 2x_j − x_{j−1} and x_{j−1} = 4(A x_j + b) − 2x_j − x_{j+1}. For a given smooth curve c : R → R^n we define the one-parameter family c_s : R → R^n by Equation (5). The curves c_s are obtained from c = c_0 by applying the mapping T as follows. For arbitrary a ∈ R, s > 0 let x = x(a, s) be the polygon x_j = x_j(a, s) := c(a + js), j ∈ Z. Then c_s(a + js) = (T(x))_j = ¼{c(a + (j − 1)s) + 2c(a + js) + c(a + (j + 1)s)}.
Hence the vertices (T (x(a, s))) j of the image T (x(a, s)) of the polygon x = x(a, s) under the mapping T lie on the curve c s , or rather the curve c s is formed by the images T (x (a, s)) of polygons of the form x = x(a, s) on the curve c.
Theorem 1 (solitons as solutions of an ode). Let c : R → R^n be a smooth curve such that the one-parameter family defined by Equation (5), defined for all s ∈ (−ε, ε) and some ε > 0, satisfies c_s(t) = A(s) c(t) + b(s) for all t ∈ R, −ε < s < ε, for a smooth one-parameter family s → A(s) ∈ Gl(n, R) of linear isomorphisms and a smooth curve s → b(s) ∈ R^n; i.e. the curve c is affinely invariant under the mapping T, cf. Definition 2 (b). Assume in addition that for some t_0 ∈ R the vectors c'(t_0), c''(t_0), ..., c^(n)(t_0) are linearly independent. Then the curve c is the unique solution of the differential equation (31). Note that, in terms of a potential U, Equation (31) can also be written in the form (34) c''(t) = −grad U(c(t)).
It follows that the energy E(t) = ½‖c'(t)‖^2 + U(c(t)) is constant for any solution of Equation (31).
We can combine the results of Theorem 1 and Proposition 3 to give the following characterization of affinely invariant curves under the affine mapping T as solutions of an inhomogeneous linear differential equation of second order. Theorem 2 (characterization of affinely invariant curves as solutions of an ode). (a) Let c : R → R^n be a smooth curve affinely invariant under the mapping T. Assume in addition that for some t_0 ∈ R the vectors c'(t_0), c''(t_0), ..., c^(n)(t_0) are linearly independent. Then there is a unique matrix B ∈ M(n; R) and a unique vector d ∈ R^n such that c''(t) = B c(t) + d. (b) Let B be a real matrix and d a vector in R^n. Then any solution c = c(t) of the inhomogeneous linear differential equation c''(t) = B c(t) + d with constant coefficients defines an affinely invariant smooth curve under the mapping T.
For a closed polygon x ∈ P_N the center of mass x_cm is given by x_cm = (x_1 + ... + x_N)/N. Since (T(x))_cm = x_cm we conclude: there is no translation-invariant closed polygon, since the center of mass is preserved under the curve shortening processes M and T.
Remark 5 (Generalization of the map T). For three points x, y, z ∈ R^n define the affine map T : R^n × R^n × R^n → R^n, T(x, y, z) = ¼{x + 2y + z}.
Hence the mapping T : P(R^n) → P(R^n) introduced in Definition 2 satisfies T(x)_j = T(x_{j−1}, x_j, x_{j+1}) for all j ∈ Z, and the one-parameter family c_s associated to the smooth curve c by Equation (5) can be written as c_s(t) = T(c(t − s), c(t), c(t + s)). In the following we will allow a slightly more general curve shortening process based on the affine map T_α : R^n × R^n × R^n → R^n, T_α(x, y, z) = αx + (1 − 2α)y + αz, for α ≠ 0; in particular T = T_{1/4}. For α = 1/3 the point T_{1/3}(x, y, z) is the center of mass (x + y + z)/3. The curve α ∈ [0, 1/2] → T_α(x_{j−1}, x_j, x_{j+1}) ∈ R^n is a parametrization of the straight line connecting x_j with the midpoint (x_{j−1} + x_{j+1})/2 of the points x_{j−1}, x_{j+1}. These mappings T_α are considered for example in [2, p. 238-39] and [3, ch. 5.1]. For a smooth curve c one defines the associated one-parameter family of curves c_{α,s}(t) = α c(t − s) + (1 − 2α) c(t) + α c(t + s). We call a smooth curve c affinely invariant (or a soliton) with respect to T_α if there is a one-parameter family (A_α(s), b_α(s)), s ∈ (−ε, ε) for some ε > 0, of affine mappings such that c_{α,s}(t) = A_α(s) c(t) + b_α(s) for all t ∈ R, s ∈ (−ε, ε). We conclude: a smooth curve c is a soliton for the transformation T_α for α ≠ 0 if and only if it is a soliton for the transformation T = T_{1/4}, and for the corresponding affine maps we obtain A_α(s) = (1 − 4α)𝟙 + 4α A(s) and b_α(s) = 4α b(s).
Semidiscrete flows of polygons
The mapping T introduced in Definition 2, or T_α defined in Remark 5 or Equation (39), can be seen as a discrete version of a semidiscrete flow defined on the space P(R^n) of polygons: for a given polygon (x_j)_{j∈Z} the flow s → (x_j(s)) ∈ P(R^n) is defined by Equation (40). This flow is discussed for example in [5]. It is a linear first order system of differential equations with constant coefficients forming a circulant matrix; hence one can write down the solutions explicitly. If we approximate the left hand side of this equation by (x_j(s + α) − x_j(s))/α we obtain the mapping T_α; therefore the mappings T and T_α can be seen as discrete versions of the flow Equation (40). In [5] the flow s → x_j(s) introduced in Equation (40) is called semidiscrete since it is a smooth flow on a space of discrete objects (polygons). A discretization of the semidiscrete flow thus yields the discrete processes T and T_α discussed here. On the other hand, the connection of the semidiscrete flow with the smooth curve shortening flow in Euclidean space is discussed in detail in [5, Sec. 5].
If we consider the functional F_2(x) = Σ_j ‖x_{j+1} − x_j‖^2 on the space P_N of closed polygons, then for a curve s ∈ (−ε, ε) → x(s) = (x_j(s))_{j∈Z_N} with x = x(0) and ẋ = ẋ(0) one can compute the first variation of F_2, and one obtains for the gradient (grad F_2(x))_j = −2(x_{j+1} − 2x_j + x_{j−1}). Hence the semidiscrete flow can be viewed as the negative gradient flow of the functional F_2, cf. [5, Sec. 6]. An affine transformation x ∈ R^n → A(x) = Ax + b ∈ R^n on R^n induces an affine transformation A on P_N: A(x_1, ..., x_N) = (A(x_1), ..., A(x_N)). In contrast to the functional F_2, its gradient grad F_2 is invariant under A.
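The statement that the semidiscrete flow is the negative gradient flow of F_2(x) = Σ_j ‖x_{j+1} − x_j‖^2 can be checked numerically by comparing the analytic gradient −2(x_{j+1} − 2x_j + x_{j−1}) with a finite-difference gradient of F_2; the sketch below is illustrative and uses the same closed-polygon conventions as before.

```python
import numpy as np

def F2(x):
    """F_2(x) = sum_j ||x_{j+1} - x_j||^2 for a closed polygon given as an (N, n) array."""
    return np.sum((np.roll(x, -1, axis=0) - x) ** 2)

def grad_F2(x):
    """Analytic gradient: (grad F_2(x))_j = -2 (x_{j+1} - 2 x_j + x_{j-1})."""
    return -2.0 * (np.roll(x, -1, axis=0) - 2.0 * x + np.roll(x, 1, axis=0))

def grad_F2_numeric(x, h=1e-6):
    """Central finite-difference gradient, for comparison."""
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        e = np.zeros_like(x)
        e[idx] = h
        g[idx] = (F2(x + e) - F2(x - e)) / (2.0 * h)
    return g

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 2))
print(np.allclose(grad_F2(x), grad_F2_numeric(x), atol=1e-4))  # True
# The right-hand side of the semidiscrete flow, x_{j-1} - 2 x_j + x_{j+1},
# is therefore proportional to -grad F_2(x), so the flow decreases F_2.
```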
In the following proposition we show that the solitons of the mapping T or T_α coincide with the solitons of the semidiscrete flow given by Equation (40). From Equation (42) and Equation (43) we conclude that the curve c is affinely invariant under the semidiscrete flow.
Planar solitons
We study the planar case n = 2. We conclude from Theorem 2 that solitons c are solutions c(t) = (x(t), y(t)) of the differential equation c''(t) = B c(t) + d. We discuss these solutions using Proposition 2 and Proposition 3. Let B ∈ M(2; R) be a matrix in (real) Jordan normal form. Then we consider the following cases: (1) B is diagonalizable (over R) and invertible, and d = 0.
(2) The following case corresponds to the matrix B having no real eigenvalues. Hence B is a similarity, i.e. a composition of a rotation and a dilation x → λx for some λ ≠ 0. We identify R^2 with the complex numbers C and assume that the matrix B is complex linear, i.e. can be identified with multiplication by a non-zero complex number µ. Then we are looking for a solution z : t ∈ R → z(t) = x(t) + iy(t) ∈ C of the differential equation z''(t) = µ z(t). For a complex number w with µ = w^2, w = u_1 + iu_2, u_1, u_2 ∈ R, u_1 = ℜ(w), u_2 = ℑ(w), a solution has the form z(t) = h_1 exp(wt) + h_2 exp(−wt).
The Preferential Therapeutic Potential of Chlorella vulgaris against Aflatoxin-Induced Hepatic Injury in Quail
Aflatoxins (AFs) are the most detrimental mycotoxins and are potentially hazardous to animals and humans. AFs in food threaten the health of consumers and can cause liver cancer, so a safe, efficient, and environmentally friendly approach to controlling aflatoxicosis is needed. This study therefore aimed to evaluate the impacts of Chlorella vulgaris (CLV) on hepatic aflatoxicosis, aflatoxin residues, and meat quality in quails. Quails were allocated into a control group; a CLV group receiving CLV (1 g/kg diet); an AF group receiving an AF-contaminated diet (50 ppb); and an AF+CLV group receiving both treatments. The results revealed that AF decreased growth performance and caused hepatic injury, exhibited as an increase in liver enzymes and disrupted lipid metabolism. In addition, AF induced oxidative stress, exhibited by a dramatic increase in the malondialdehyde (MDA) level and decreases in the glutathione (GSH) level and in superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities. Significant up-regulation of inflammatory cytokine (TNF-α, IL-1β, and IL-6) mRNA expression was also documented. Moreover, aflatoxin residues were detected in the liver and meat, with an elevation of fat% alongside a decrease in meat protein%. On the other hand, CLV supplementation ameliorated AF-induced oxidative stress and inflammation, improved the nutritional value of the meat, and significantly reduced AF residues. CLV thus mitigated AF-induced hepatic damage, the decrease in growth performance, and the loss of meat quality via its antioxidant and nutritional constituents.
Introduction
Aflatoxins (AFs) are the most prevalent class of mycotoxins and the one with the highest hazard potential for humans and animals. AFs are released into food and feed as secondary metabolites of Aspergillus flavus and A. parasiticus [1,2]. Under conditions favorable for the growth of these fungi, AFs contaminate crops during production, harvesting, storage, and processing, building up to hazardous concentrations. Worldwide climate change occurring as a consequence of global warming provides optimal conditions for fungal growth as well as mycotoxin production [3]. It has been reported that AFs contaminate approximately 25% of the world's crops [4]. In developing countries, around 4.5 billion people are at risk of chronic AF exposure [5]. The International Agency for Research on Cancer categorizes AF as a class I carcinogen. A growing body of literature has documented the hepatotoxic, immunosuppressive, and carcinogenic effects of AFs [2,6-8]. On the farm scale, AFs are known to significantly affect the performance of farm animals, including poultry, by lowering growth rate, feed conversion, and meat production [9].
AFs constitute a serious health hazard due to their ability to accumulate in various tissues, particularly the liver, where they are metabolized to a highly toxic reactive epoxide (mainly aflatoxin-exo-8,9-epoxide; AFO), causing liver injury [10,11]. There is substantial evidence of oxidative distress, DNA damage, inflammatory reactions, and apoptosis following AF exposure [2,7,8,12]. Moreover, AFs and their metabolites remain stable against the physical and chemical protocols used so far to reduce AF concentrations in food and feed, which may pose a health risk to consumers and underlines the need for safe, feasible, and effective protocols for controlling the negative impacts of AFs [13]. Biological approaches using natural feed additives, such as curcumin [14], grape seed, or seabuckthorn [15], have been gaining attention recently. In addition, employing beneficial micro-organisms in AF control is encouraging, since they have antioxidant, anti-inflammatory, and immune-stimulating properties.
Recently, microalgae have been introduced into animal feed as a feasible replacement for fish meal and a good source of polyunsaturated fatty acids [16]. Moreover, they have been reported to play a valuable role in improving immunity, growth performance, and meat quality [17,18]. One of the green microalgae, Chlorella vulgaris (CLV), has enormous nutritional value, since it is rich in various macro- and micronutrients: it contains polysaccharides, essential amino acids, essential fatty acids, more than 20 vitamins and minerals, and pigments [16]. Previous studies have attributed the antioxidant, hepato-protective, anti-inflammatory, and growth-promoting properties of CLV to its wide range of bioactive nutrients [17,19].
Quail production has recently witnessed a remarkable development, providing a new source of food security. Therefore, the current study was designed to investigate the potential use of CLV as a feed supplement of the biological source to minimize the hepatic toxic effects of AF on quails, reduce AF bioaccumulation, and improve meat quality. Consumers targeted quail meat for its low fat and cholesterol content, in addition to its high-biological-value protein, minerals, vitamins, and essential fatty acids [20].
Growth Performance
The impact of CLV supplementation on the growth performance of quails fed an AF-contaminated diet is presented in Table 1. Total body weight (BW) and body-weight gain (BWG) were significantly lowered in AF-exposed birds compared to all other treatment groups, and the highest BW and BWG were recorded in the CLV group. The AF+CLV group exhibited a marked increase in BWG compared to the AF group. Meanwhile, the feed conversion ratio (FCR) in the AF group was significantly higher than in the other groups, whereas the AF+CLV group exhibited a non-significant decrease in FCR compared to the AF group. Interestingly, the CLV group showed a significant decrease in FCR and a dramatic increase in productive performance parameters (BW, BWG, and survival rate) compared to the control group. In addition, no mortalities were observed in the control and CLV groups, in contrast to the AF-intoxicated quails and the AF+CLV group, where the mortality rates were 26.67% and 3.33%, respectively. Accordingly, adding CLV to the AF diet raised the AF-decreased survival rate from 73.33% to 96.67%.
Changes in Liver Function Indices and Lipid Profile
According to the data presented in Figure 1, birds fed an AF-contaminated diet exhibited a noticeable increase in the activities of liver enzymes, including alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), and gamma-glutamyl transpeptidase (GGT), when compared to those receiving the basal diet (controls). On the other hand, adding CLV to the AF-contaminated diet ameliorated the hepatic enzyme activities in birds receiving that diet compared to those exposed to AF alone.
Changes in Hepatic Oxidant/Antioxidant Homeostasis
The study revealed that AF provoked a pronounced state of oxidative stress, as demonstrated by a dramatic increase in hepatic malondialdehyde (MDA) and a decrease in reduced-glutathione (GSH) levels, together with a noticeable reduction in the activities of superoxide dismutase (SOD) and glutathione peroxidase (GPx), when compared to the control group. Of note, quails fed the AF plus CLV diet showed improvement in the alterations of MDA and GSH levels: the MDA level was decreased by 38.46% and the GSH level was increased by 28.57% in the liver tissue of the AF+CLV group compared to the AF group. Moreover, the activities of hepatic antioxidant enzymes were almost brought back to normal levels; SOD activity was enhanced by 34.5% and GPx activity was increased by 19.23% in liver tissue compared to the AF-only treatment. Meanwhile, quails fed a CLV-supplemented diet exhibited a noticeable increase in antioxidant enzyme activities and GSH levels in the liver, as compared to the control group (Figure 2).
Changes in Inflammatory Cytokine mRNA Expressions
As depicted in Figure 3, AF evoked an inflammatory reaction, seen as a drastic up-regulation of inflammatory cytokine (TNF-α, IL-1β, and IL-6) mRNA expression levels in liver tissue compared to controls. However, adding CLV to the AF-contaminated diet attenuated the AF-stimulated inflammatory response via down-regulation of the targeted inflammatory gene expressions.
Changes in Meat Nutritive Value after C. vulgaris and/or Aflatoxin Exposure
Results presented in Table 2 indicate that the CLV group showed a significant increase in protein and a non-significant decrease in the fat, cholesterol, and triacylglycerol contents of meat compared to controls. In comparison to the AF group, CLV supplementation of Japanese quails fed an AF-contaminated diet produced a marked increase in meat protein and a reduction in fat and cholesterol contents. Moreover, the triacylglycerols of meat were mostly brought back to the control level.
Impact of C. vulgaris on Aflatoxin Residues in Liver Tissue and Meat
Data shown in Table 2 reveal that quails fed an AF-contaminated diet exhibited a remarkable increase in AF residues in liver tissue and meat in comparison to birds receiving the basal diet. On the other hand, CLV supplementation reduced AF bioaccumulation in the liver and meat obtained from birds fed an AF-contaminated diet. No AF residues were detected in either the group fed the basal diet or the group supplemented with CLV alone. Representative HPLC chromatograms of AF are illustrated in Figures S1 and S2.
Hierarchical Clustering Heatmap and Variable Importance in Projection (VIP) Score
Multivariate analyses were then performed to unravel the relationships between the different parameters and treatments, as depicted in Figure 4. The clustering heatmap (Figure 4A) provides a clear overview of all the data sets, highlighting the significant difference in the levels of all variables in response to AF toxicity relative to the other treatments. Furthermore, the variable importance in projection (VIP) score indicated that AST, GGT, cholesterol, triglycerides, ALP, ALT, MDA, TNF-α, AF residue, IL-6, IL-1β, and GSH were the top influencing variables in our study, which were sensitive to the different treatments and can discriminate the AF treatment from the others (Figure 4B).
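The text does not detail how the VIP scores were derived; one common approach is to compute them from a PLS-DA model fitted to the group labels. The sketch below is a minimal, hedged illustration of that calculation, assuming such a model; the file name "quail_liver_parameters.csv" and its column layout are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

# Hypothetical input: rows = individual birds, columns = measured variables plus a "group" label
df = pd.read_csv("quail_liver_parameters.csv")
X = df.drop(columns=["group"]).to_numpy(dtype=float)
Y = pd.get_dummies(df["group"]).to_numpy(dtype=float)   # one-hot class membership (PLS-DA)

pls = PLSRegression(n_components=2).fit(X, Y)

def vip_scores(model, X):
    """Standard VIP formula: PLS weights scaled by the y-variance explained per component."""
    t = model.transform(X)                    # X scores, shape (n_samples, n_components)
    w = model.x_weights_                      # X weights, shape (n_features, n_components)
    q = model.y_loadings_                     # Y loadings, shape (n_targets, n_components)
    ss_per_comp = (t ** 2).sum(axis=0) * (q ** 2).sum(axis=0)
    w_norm = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(X.shape[1] * (w_norm @ ss_per_comp) / ss_per_comp.sum())

vip = pd.Series(vip_scores(pls, X), index=df.columns.drop("group"))
print(vip.sort_values(ascending=False))       # variables with VIP > 1 are usually deemed influential
```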
Clinical, Postmortem, and Histopathological Examination
During daily observations, the birds fed an AF-contaminated diet showed watery droppings, an abnormal gait, and ruffled feathers. Macroscopically, the liver of AF-intoxicated birds appeared enlarged, friable, and light yellowish-green in color, along with a distended gall bladder and hemorrhages on the surface (Figure 5C-E). Microscopically, the histopathological examination indicated that AF induced diffuse fatty degeneration in liver tissue, with circumscribed vacuolated hepatocytes, congestion of the central vein, and focal mononuclear cell infiltration (Figure 6). On the other hand, CLV supplementation of the AF-contaminated diet significantly alleviated the abnormal changes in liver tissue: microscopically, CLV restored the damaged histological structure, except for focal degeneration in liver tissue (Figure 6). Overall, birds in the AF+CLV group appeared healthy and alert, with normal gait, feathers, and droppings.
Discussion
Our previous studies, along with others, strongly suggested the implication of oxidative stress and inflammation in AF-induced liver injury [2,7,8,12,21]. It is well known that the biotransformation of AF is catalyzed by hepatic cytochrome P450, generating more toxic metabolites, mainly aflatoxin-exo-8,9-epoxide (AFO). AFO has a strong affinity for electrons and can damage the liver by establishing irreversible covalent bonds with the nitrogen, oxygen, and sulphur heteroatoms found in biological macromolecules [22-24]. As a result, considerable amounts of free radicals are generated, such as O2•−, OH•, H2O2, and NO. These reactive radicals trigger lipid peroxidation (altering hepatocyte membrane integrity), mitochondrial dysfunction, protein misfolding, endoplasmic reticulum stress, exhaustion of cellular antioxidant competence, and formation of DNA adducts [2,25-28]. OH• is the most injurious of these radicals; it can attack the lipid bilayer of the cell membrane, causing lipid peroxidation, whereby the hepatocyte membrane loses its integrity, leading to the release of transaminases (ALT and AST), ALP, and GGT into the bloodstream, as indicated in the current study [23]. The documented increase in the level of MDA affirms the occurrence of lipid peroxidation in response to AF intoxication. MDA can also attack distant cellular macromolecules, causing protein and DNA damage and aggravating the injury [2]. The altered lipid profile could be attributed to disruption of the biliary epithelium, as evidenced by the increased serum GGT activity [29]. Furthermore, the AF-GSH complex is formed to eliminate AF via urine. Prolonged AF exposure exhausts the GSH capacity and depletes GPx, which is required for the regeneration of GSH, leading to depletion of the GSH store. SOD activity was also reduced due to the overproduction of O2•−, since SOD is necessary for the dismutation of O2•−. Eventually, these events cause a general perturbation of cellular redox homeostasis, as indicated in the present study. Our current findings, along with previous reports, strongly support the concept that oxidative stress and lipid peroxidation are the main modulatory mechanisms behind AF-induced liver damage [2,7,8,12]. In addition, Zhang et al. recently reported the involvement of BACH1 in AF-induced oxidative stress and lipid peroxidation [30]. Sakamoto and colleagues also documented alterations in quail performance and biochemical indices after exposure to AF [29]. The present gross and microscopic pathological findings confirmed severe hepatic damage in AF-exposed birds. The current investigation offers compelling evidence for the hepato-protective potency of CLV against aflatoxicosis. Relevant studies have confirmed the protective efficacy of CLV against diazinon [19], deltamethrin [31], and sodium nitrite [32] toxicity. They attributed the ameliorative effect of CLV against the hepatic toxicity of various toxicants to its antioxidant and ROS-scavenging activities, owing to its vitamin and polyphenolic content [16]. Consistently, the observed improvements in liver function and the oxidant/antioxidant state in birds co-administered AF and CLV in the present investigation might be due to the aforementioned reasons.
Moreover, the inflammatory condition observed in this study in response to AF intoxication is consistent with previous results, which recorded up-regulation of inflammatory cytokine (TNF-α, IL-1β, and IL-6) gene expression in the liver tissue of mice [2] or broiler chickens [33] exposed to AF. This up-regulation was attributed to the acute-phase response to inflammation. The modulatory role of oxidative stress in the inflammatory pathways cannot be ruled out: excess ROS production stimulates the NF-κB pathway, hence inducing up-regulation of inflammatory cytokines such as TNF-α, IL-1β, and IL-6, which was confirmed by the lymphocytic infiltration seen after H&E staining. This is supported by the findings obtained by our group [2] and by Li et al., 2014 [34], where up-regulation of NF-κB mRNA expression along with its downstream cytokines was reported. The current study supports CLV as an anti-inflammatory agent, as seen in the down-regulation of the inflammatory cytokines. This is in good agreement with Abdelhamid et al., 2020 [19], who recorded down-regulation of splenic TNF-α in diazinon-intoxicated fish supplemented with CLV. The suppression of AF-induced inflammation might be attributed to the ROS-scavenging potency of CLV and, consequently, inhibition of the NF-κB signaling pathway [2,35].
A growing body of literature has evaluated the economic impact of different levels of aflatoxins on growth performance. Reductions in weight gain, with deteriorations in FCR and productive efficiency, were recorded after dietary AF inclusion in Japanese quails [29] as well as in broiler chickens [36], as observed in the present study. The adverse effect of AF on growth performance might be attributed to a negative nitrogen balance and impaired protein synthesis, gut health, and metabolic processes; nitrogen intake decreases because anorexia is an early sign of AF exposure [36]. In addition, the deleterious oxidative impact of AF and the depletion of cellular antioxidant competence could be other possible mechanisms affecting growth performance. It is worth mentioning that the results of our study emphasize the economic benefits of CLV as a growth promoter and production enhancer (Table 1). Such results are similar to those obtained in previous reports, which tested the beneficial impact of CLV on growth performance in quails [37], laying hens [38], and rabbits [39]. A reasonable explanation is the digestibility and high-biological-value protein of CLV, which guarantees an increase in nitrogen intake irrespective of the feed intake [17]. Moreover, the polysaccharide content of CLV has been noted to improve gut health by increasing lactic-acid-producing bacteria, which provides a suitable medium for optimal digestive performance [40].
The current data revealed that AF affected the nutritional value of meat obtained from AF-exposed quails, unlike the other treated groups. This was exhibited by decreased protein content and increased total fat, cholesterol, and triacylglycerol contents. Since AF strongly induces DNA adducts and protein oxidation via the generated ROS, protein synthesis and lipid metabolism are disrupted [2,7,8]. The lipid composition of the meat reflected the disturbed serum lipid profile in response to the AF insult; thus, it is hypothesized that these mechanisms might have a role in the AF-altered nutritional value. As expected, high residues of AF in liver and meat tissue were also documented in the current trial, which, in turn, affect the safety and quality of the edible parts obtained from quails fed an AF-contaminated ration. Alternatively, supplementation with CLV enhanced the nutritive value of meat and reduced the AF residues in comparison to the AF group. Such improvements might relate to the antioxidant activity of CLV, which could attenuate the toxic damage of AF, improving liver function and, consequently, the metabolic processes. Moreover, CLV is an enriched source of high-quality nutrients, including essential amino acids, vitamins, and minerals, correlated with its activity as a growth promoter and productivity enhancer [41]. The effect of CLV on the intestinal microflora plays a crucial role in improving digestive efficiency, which would be another cause of the improved growth performance and nutritional value of quail meat [40]. These findings are in line with previous reports [42-44].
Furthermore, multivariate statistical analyses, represented by the clustering heatmap and VIP score, were performed to summarize the variable contributions influenced by the various treatments on liver tissue. The clustering heatmap clearly shows that AF exposure caused substantial changes in all studied parameters compared to the other treated groups, and suggests improvements in those parameters when CLV was added. Additionally, the VIP score revealed that the top influencing variables in our study were AST, GGT, cholesterol, triglycerides, ALP, ALT, MDA, TNF-α, AF residue, IL-6, IL-1β, and GSH. Figure 7 summarizes the molecular mechanisms behind the protective effect of CLV against AF-induced liver injury.
Figure 7. Molecular mechanisms behind the protective effect of CLV against AF-induced liver injury. AF, aflatoxins; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CLV, Chlorella vulgaris; GGT, gamma-glutamyl transpeptidase; GPx, glutathione peroxidase; GSH, reduced glutathione; OH•, hydroxyl radical; IL-1β, interleukin-1β; IL-6, interleukin-6; MDA, malondialdehyde; NF-κB, nuclear factor kappa-B transcription factor; ROS, reactive oxygen species; SOD, superoxide dismutase; TNF-α, tumor necrosis factor-α.
Conclusions
AF evoked remarkable hepatic dysfunction and disrupted metabolism via the induction of oxidative damage, lipid peroxidation, and inflammatory reactions, lowering the nutritional value of meat and increasing AF tissue residues. CLV supplementation was able to protect the hepatocytes from the injurious impact of AF, which might be attributed to CLV's antioxidant, ROS-scavenging, and anti-inflammatory activities, along with its enriched nutritive value. We anticipate that CLV supplementation could be an efficient, safe, and feasible biological procedure for counteracting the hazardous impacts of AF on animals and humans.
Experimental Design
Ten-week-old Japanese quails were purchased from the Faculty of Veterinary Medicine, Benha University, Egypt. Quails were subjected to 17 h of light per day during the study and were fed a commercial corn and soybean meal basal diet that meets all the nutritional requirements for quails according to the specifications of the NRC (1994); water was provided ad libitum.
After an acclimatization period of two weeks, the birds were allocated into four groups with three replicates of five birds each. The experimental groups included: a control group, which received a basal diet; a C. vulgaris group (CLV), which received a basal diet supplemented with CLV (1 g/kg) [17]; an aflatoxin group (AF), which received an AF-contaminated diet (50 ppb; Aflatoxin mix, purity > 98%, Merck, Darmstadt, Germany); and an aflatoxin plus C. vulgaris group (AF+CLV), which received the AF-contaminated diet supplemented with CLV for three weeks (Figure 8). Birds in all experimental groups were reared under the same management, hygienic, and environmental conditions.
Growth Performance Assessment
The weight of birds was recorded at the beginning and end of the experiment to calculate body-weight gain (BWG). The feed conversion ratio (FCR) was calculated from feed intake (kg) in relation to the BWG (kg). Clinical symptoms were observed in all experimental groups and the mortality and survival rates were recorded.
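As a hedged illustration of the calculations described above, the short sketch below computes BWG, FCR, and the mortality and survival rates. The body weights and feed intake are hypothetical placeholders, while the group size follows the stated design of three replicates of five birds.

```python
# A minimal sketch of the growth-performance calculations (placeholder numbers, not study data).
initial_bw_kg = 0.180          # mean initial body weight per bird (hypothetical)
final_bw_kg = 0.245            # mean final body weight per bird (hypothetical)
feed_intake_kg = 0.520         # feed consumed per bird over the trial (hypothetical)
birds_start, birds_dead = 15, 4  # 3 replicates x 5 birds per group; 4 deaths is illustrative

bwg_kg = final_bw_kg - initial_bw_kg                 # body-weight gain (BWG)
fcr = feed_intake_kg / bwg_kg                        # feed conversion ratio (FCR)
mortality_pct = 100 * birds_dead / birds_start
survival_pct = 100 - mortality_pct

print(f"BWG = {bwg_kg:.3f} kg, FCR = {fcr:.2f}, "
      f"mortality = {mortality_pct:.2f}%, survival = {survival_pct:.2f}%")
```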
Biochemical Analyses
At the end of the experiment, blood samples were collected from the jugular vein and sera were harvested and stored at −20 °C for further biochemical analysis. Liver enzymes included ALT, AST, ALP, and GGT activities. Serum total cholesterol and triacylglycerols were also determined. All procedures were carried out according to the manufacturer's instructions (Laboratory Biodiagnostics, Cairo, Egypt).
Liver tissue was collected from the humanely slaughtered birds and stored at −80 °C until used for the determination of tissue oxidation indices (spectrophotometrically) and inflammatory markers (via the qRT-PCR technique). The MDA and GSH levels, in addition to the SOD and GPx activities, were evaluated following the manufacturer's manual (Laboratory Biodiagnostics).
Quantitative Real-Time PCR (qRT-PCR)
Total RNA was extracted using the RNeasy Mini Kit (Cat#74104, QIAGEN Sciences Inc., Germantown, MD, USA) following the manufacturer's procedures. The primer sequences of the targeted genes (the inflammatory cytokines TNF-α, IL-1β, and IL-6, and 28S rRNA as a housekeeping gene) are listed in Table 3. The qRT-PCR was executed using the QuantiTect probe RT-PCR kit (Cat#204443, QIAGEN Sciences Inc.). The PCR cycling conditions were 50 °C for 30 min, 94 °C for 10 min, and 40 cycles of 94 °C for 15 s and 60 °C for 1 min, using a real-time PCR machine (Applied Biosystems, Waltham, MA, USA). The Stratagene MX3005P software determined the amplification curves and cycle threshold (Ct) values. Fold changes in the expression levels were calculated using the 2^−ΔΔCt method after normalization against the housekeeping gene.
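For readers unfamiliar with the Livak method, the sketch below shows the 2^−ΔΔCt fold-change calculation in a few lines; all Ct values are illustrative placeholders, not measured data.

```python
# A minimal sketch of the 2^-ΔΔCt fold-change calculation used for the qRT-PCR data.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Livak 2^-ΔΔCt: normalise the target gene to the housekeeping gene (28S rRNA),
    then express the treated group relative to the control group."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: TNF-alpha in an AF-exposed bird vs. a control bird (hypothetical Ct values)
print(fold_change(ct_target_treated=24.1, ct_ref_treated=16.0,
                  ct_target_control=27.3, ct_ref_control=16.2))
```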
Evaluation of Meat Nutritive Value
Total protein and fat content in meat were evaluated following the method of Anderson, 2007 [48]. The spectrophotometric estimation of cholesterol and triacylglycerols contents was also performed as described by El-Medany and El-Reffaei, 2015 [49].
Aflatoxin Residues in Liver Tissue and Meat
Liver tissues and meat were collected from the slaughtered birds at the end of the trial to determine the AF residues using high-performance liquid chromatography. The extraction and analysis were carried out as described by Abdel-Monem et al., 2015 [50]. Aflatoxin B1, B2, G1, and G2 reference materials (Supelco®, Merck) were employed in these analyses.
Postmortem and Histopathological Examination
The sacrificed birds were immediately examined for postmortem lesions with a special focus on liver. Parts from the liver were gathered and immediately fixed in 10% formalin for at least 24 h. The fixed specimens were processed for histopathological examination following the standard protocols. Sections of 4 µm thickness were cut and stained with hematoxylin and eosin (H & E), examined under a light microscope, and imaged using a digital-camera-integrated system.
Statistical Analyses
The obtained data were analyzed using a one-way analysis of variance (ANOVA) followed by LSD as the post hoc test. The analysis was performed using SPSS 25 software for Windows (SPSS Inc., Chicago, IL, USA). Data are expressed as the mean ± SE, and differences were considered statistically significant at p values < 0.05. Moreover, the clustering heatmap and variable importance in projection (VIP) score were generated in RStudio under R version 4.0.2.
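The ANOVA and LSD post hoc comparisons were run in SPSS; purely as an illustration of the underlying calculation, the sketch below reproduces a one-way ANOVA followed by Fisher's LSD comparisons in Python. The group values are hypothetical placeholders for a single measured variable.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate values for one variable in each treatment group
groups = {
    "control": np.array([1.10, 1.05, 1.12, 1.08, 1.11]),
    "CLV":     np.array([1.02, 0.98, 1.01, 1.05, 0.99]),
    "AF":      np.array([2.30, 2.45, 2.10, 2.38, 2.25]),
    "AF+CLV":  np.array([1.40, 1.35, 1.52, 1.44, 1.38]),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Fisher's LSD: pairwise comparisons based on the pooled within-group mean square error
n_total = sum(len(v) for v in groups.values())
k = len(groups)
mse = sum(((v - v.mean()) ** 2).sum() for v in groups.values()) / (n_total - k)
t_crit = stats.t.ppf(0.975, df=n_total - k)

for a in groups:
    for b in groups:
        if a < b:  # visit each unordered pair once
            lsd = t_crit * np.sqrt(mse * (1 / len(groups[a]) + 1 / len(groups[b])))
            diff = abs(groups[a].mean() - groups[b].mean())
            print(f"{a} vs {b}: |diff| = {diff:.3f}, LSD = {lsd:.3f}, significant = {diff > lsd}")
```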
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/toxins14120843/s1, Figure S1: A representative chromatogram of AF detected in liver tissue; Figure S2: A representative chromatogram of AF detected in meat.
Development and Optimisation of Inhalable EGCG Nano-Liposomes as a Potential Treatment for Pulmonary Arterial Hypertension by Implementation of the Design of Experiments Approach
Epigallocatechin gallate (EGCG), the main ingredient in green tea, holds promise as a potential treatment for pulmonary arterial hypertension (PAH). However, EGCG has many drawbacks, including stability issues, low bioavailability, and a short half-life. Therefore, the purpose of this research was to develop and optimize an inhalable EGCG nano-liposome formulation aiming to overcome EGCG’s drawbacks by applying a design of experiments strategy. The aerodynamic behaviour of the optimum formulation was determined using the next-generation impactor (NGI), and its effects on the TGF-β pathway were determined using a cell-based reporter assay. The newly formulated inhalable EGCG liposome had an average liposome size of 105 nm, a polydispersity index (PDI) of 0.18, a zeta potential of −25.5 mV, an encapsulation efficiency of 90.5%, and a PDI after one month of 0.19. These results are in complete agreement with the predicted values of the model. Its aerodynamic properties were as follows: the mass median aerodynamic diameter (MMAD) was 4.41 µm, the fine particle fraction (FPF) was 53.46%, and the percentage of particles equal to or less than 3 µm was 34.3%. This demonstrates that the novel EGCG liposome has all the properties required to be inhalable, and it is expected to be deposited deeply in the lung. The TGFβ pathway is activated in PAH lungs, and the optimum EGCG nano-liposome inhibits TGFβ signalling in cell-based studies and thus holds promise as a potential treatment for PAH.
Introduction
Pulmonary arterial hypertension (PAH) is a rare group of life-threatening vascular disorders characterised by the abnormal production of various endothelial vasoactive mediators, for example, nitric oxide, prostacyclin, or endothelin [1]. In PAH, there is a decreased production of both nitric oxide and prostacyclin (vasodilators), and an elevated production of the potent vasoconstrictor endothelin-1 [1]. In addition, we and others have identified mutations in the bone morphogenetic protein type 2 receptor (BMPR2) and SMAD genes, which result in the excessive proliferation of pulmonary artery smooth muscle cells and the attenuation of apoptosis, which contribute to the pathogenesis of PAH [2][3][4].
Although modern pharmacological drugs have enhanced the life expectancy and quality of life of PAH patients, these medications have many drawbacks, including a short half-life, instability, a lack of organ specificity, and several formulation limitations, which limit the efficacy and increase their side effects [1]. Therefore, it is essential to find new medications or formulations that solve the limitations of the currently available medications.
Green tea contains several polyphenols [5], including many catechin compounds, such as the most abundant green tea catechin, (-)-epigallocatechin gallate (EGCG) [6]. Various studies have applied nano-systems to the delivery of EGCG to solve its pharmacokinetic limitations in cancer therapy. These nano-systems include liposomes, lipid nanoparticles, and gold, as well as inorganic, protein-based nanocarriers [15,29]. Several approaches have been applied to develop EGCG liposomes, including the response surface methodology, and in silico and experimental strategies [29-31]. Luo et al. used the response surface method to optimise an oral EGCG-loaded liposome formulation using phosphatidylcholine, cholesterol, and Tween 80. The obtained oral liposomes had a high encapsulation efficiency (85.79%) [29]. However, they included the surfactant Tween 80 (polysorbate 80, 1.08 mg/mL) in their formulation [29], which was cytotoxic to human bronchial epithelial cells when tested on lung cells for the inhalation route [32]. Ethanolic EGCG liposomes were prepared for intratumoral injection into basal cell carcinomas using egg phosphatidylcholine and cholesterol as the lipid bilayers [33]. A phospholipid with a low phase-transition temperature was used to prepare these ethanolic EGCG liposomes, whereas other researchers have previously reported that phospholipids with a high phase-transition temperature should be used for inhalable liposomes to maintain their stability during nebulisation [27,34,35]. Moreover, several studies have reported that ethanol impairs an aerosol's performance by decreasing the fine particle mass [36-38]. This makes the prepared injectable EGCG unsuitable for inhalation. Other studies have used in silico and experimental strategies to develop EGCG liposomes [30]. They used 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine and 1-palmitoyl-2-oleoyl-phosphatidylcholine in the presence of a 5:1 molar ratio of magnesium to prepare the optimum EGCG liposome [30]. However, the resulting liposome was not suitable for inhalation because a phospholipid with a very low phase-transition temperature was used to prepare it [27,34,35]. Another study applied a quality-by-design strategy to prepare an EGCG liposome for use as an antioxidant for mesenchymal stem cells of the dental follicle [31]. Their optimum EGCG liposome contained only 221.9 µg/mL of EGCG, with a relatively low encapsulation efficiency (69.2%) [31].
However, none of these EGCG liposomes was designed for the management of PAH or for the inhalation route to selectively exert its pharmacological effect on the specific site of action; in addition, some of them used surfactants or co-surfactants in their formulations.
This study aimed to optimise the development of an inhalable EGCG nano-liposome formulation with high encapsulation efficiency using high-phase transition phospholipids to maintain the stability of the prepared liposomes during nebulisation without using any surfactants or co-surfactants. A second aim was to investigate the in vitro effects of an inhalable nano-liposome formulation on the inhibition of the TGF-β pathway as a potential treatment for PAH.
To the best of our knowledge, this is the first research to optimise and prepare an inhalable EGCG nano-liposome formulation that may have experimental and clinical applications in PAH.
Preparation of Inhalable EGCG Nano-Liposome Formulations
The thin-film rehydration method, established in 1965 by Bangham et al. [39], was used to prepare the inhalable EGCG liposomes. EGCG and a total of 40 mg of the two included lipids and cholesterol were dissolved in 8 mL (1:1, v/v) of methanol and chloroform. The molar ratio of DPPG was kept the same (20%) for all runs; however, when the molar ratio of cholesterol was 0, 10, and 20 (Table 1), the molar ratio of DPPC was 80, 70, and 60, respectively. The amount of EGCG was calculated for each run individually to give a final drug-to-lipid (D/L) molar ratio of 5, 8, or 11, as indicated in Table 1. The organic solvents were evaporated in a vacuum rotary evaporator at 52 °C for 8 min, and the thin film was then left under vacuum to remove the residual organic solvents. PBS and deionized water, at a 70:30 v/v ratio, were used as the rehydration solution. The volume of the rehydration solution was calculated for each run individually to give a final total lipid concentration of 5, 10, or 15 mg/mL. The rehydration solution was then added to the lipid thin film at 60 °C and mixed by hand to form large multilamellar vesicles. The size of the liposomes was reduced with a probe sonicator (Sonics & Materials, Inc., Newtown, CT, USA; 500-Watt Ultrasonic Processor, model VCX 500). The amplitude was set to 22%, the time was set to 50 s, and the pulse was set to 15 s on and 20 s off.
High-Performance Liquid Chromatography (HPLC)
EGCG was quantified via reverse-phase HPLC (Agilent 1100®, Santa Clara, CA, USA) with a diode array detector. The mobile phase was 0.1% trifluoroacetic acid in water/methanol at 70/30% v/v under isocratic conditions [40]. The column was a SUPELCOSIL LC-18-T HPLC column (5 µm particle size; L × I.D., 15 cm × 4.6 mm; temperature, 25 °C). The detection wavelength was 270 nm, the flow rate was 1 mL/min, the injection volume was 50 µL, and the run time was 5 min, with a retention time of 3.4 min. A methanol:PBS buffer (50:50 v/v) was used as the diluent for all EGCG samples. The tested detection range for the stock and standard solutions of EGCG was 1-500 µg/mL. The R² of the final calibration curve was 0.993 and the linear equation was y = 40.071x + 110.79.
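As a small worked example, the snippet below back-calculates an EGCG concentration from a peak area using the reported calibration line. The two-fold dilution correction reflects the 50:50 dilution of the samples in the methanol:PBS diluent and is an assumption made here for illustration.

```python
# Inverting the reported calibration curve: peak area = 40.071 * concentration + 110.79
SLOPE, INTERCEPT = 40.071, 110.79

def egcg_concentration_ug_ml(peak_area, dilution_factor=2.0):
    """Back-calculate EGCG concentration (µg/mL) and correct for the assumed 1:1 dilution."""
    return (peak_area - INTERCEPT) / SLOPE * dilution_factor

print(egcg_concentration_ug_ml(peak_area=4118.0))  # hypothetical peak area
```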
Implementation of DOE
Polynomial models were constructed for the optimisation process by employing a 29-run, 4-factor, 3-level Box-Behnken design using Design Expert software to develop and optimise the inhalable EGCG nano-liposome formulation. The Box-Behnken design was selected as it demands fewer runs than the central composite design in the case of 3 or 4 independent variables, and it avoids factor extremes, since the range of each factor was determined on the basis of the literature and our screening experiments to be the best acceptable range for achieving our formulation goal [41]. Four independent variables were evaluated, namely (A) the total lipid concentration (the total concentration of the two included lipids and cholesterol in the liposome solution, mg/mL), (B) the pH of the dispersion media (the rehydration solution), (C) the molar percentage of cholesterol, and (D) the D/L molar ratio, which is also known as the "loading capacity" [42]. The 3 levels of each factor were represented as −1, 0, and +1, as depicted in Table 1.
The variables were selected on the basis of data compiled from a literature review [31,41,43]. For example, it was reported that the total lipid concentration affected the encapsulation efficiency in some liposomal formulations [44], and that the pH of the dispersion media in the liposome formulations affected their sizes [45]. In addition, it was reported that the percentage of cholesterol in the liposome formulations affected the liposomes' physical stability, including the PDI [43,46]. The D/L molar ratio is considered to be a critical factor that expresses the actual capacity of the liposome to accommodate the drug. Maximising the D:L molar ratio can optimise a liposomal formulation [42].
In this research, the range of total lipid concentration (A) was 5-15 mg/mL, since the total concentration of lipids in the majority of liposome formulations in medicines falls within this range [47]. The range of pH of the dispersion media (B) was chosen to be between 3 and 6.5 because this range is suitable for the inhalation route [48]. The molar percentage of cholesterol (C) in the range of 0-20% was chosen to measure the impact of cholesterol on the stability of the formulation, the encapsulation efficiency, and the other responses. The molar ratio of DPPG was kept the same (20%) for all the tested formulations in this design, as the presence of the negatively charged lipid DPPG ensured a sufficiently negative zeta potential and thus prevented agglomeration of the liposomes [49,50]. When the molar ratio of cholesterol was 0, 10, and 20, the DPPC molar ratio used was 80, 70, and 60, respectively. The D/L molar ratio (D) was used within the 5-11 range, as a higher ratio was shown to produce unstable liposomes: as observed in our screening experiments, when the D/L molar ratio for this formulation was higher than 11, agglomeration of the liposomes occurred. The selected responses were as follows: R1, liposome size (z-average, nm); R2, polydispersity index (PDI); R3, encapsulation efficiency (%); R4, zeta potential; R5, PDI after 1 month. The centre point (CP) was run four times to measure the curvature and precision of the production process. All 29 formulations proposed by the Design Expert software, as shown in Table 2, were prepared to generate, evaluate, and analyse the model. Polynomial equations describing the correlation between the dependent and independent variables were obtained. After the analysis of all responses, the Design Expert 13 software proposed the optimum nano-liposome formulation according to the required optimal conditions selected. To confirm the predictivity of the selected model, the proposed optimal nano-liposome formulation was prepared and characterised, and the values of the experimental responses were contrasted with the predicted ones.
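The design matrix itself was generated in Design Expert; purely for illustration, the sketch below reproduces a comparable 4-factor Box-Behnken layout in Python and maps the coded levels back to the factor ranges described above. The pyDOE2 dependency, the use of five centre points to reach a 29-run layout, and the mid-level pH of 4.75 (the midpoint of 3 and 6.5) are assumptions, not details taken from the study.

```python
import pandas as pd
from pyDOE2 import bbdesign

coded = bbdesign(4, center=5)                       # 24 edge runs + 5 centre points = 29 runs
levels = {
    "lipid_mg_per_ml": (5, 10, 15),                 # factor A (stated levels)
    "pH":              (3, 4.75, 6.5),              # factor B (mid level assumed as the midpoint)
    "cholesterol_pct": (0, 10, 20),                 # factor C (stated levels)
    "D_to_L_ratio":    (5, 8, 11),                  # factor D (stated levels)
}

def decode(code, lo, mid, hi):
    """Map a coded level (-1, 0, +1) back to its physical value."""
    return {-1: lo, 0: mid, 1: hi}[int(code)]

runs = pd.DataFrame(
    [[decode(c, *lvl) for c, lvl in zip(row, levels.values())] for row in coded],
    columns=list(levels),
)
print(runs)
```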
Characterisation of the Inhalable EGCG Nano-Liposome Formulations
Particle Size, Dispersity, and Zeta Potential Measurements
The particle size (Z-average), dispersity (PDI), and charge (zeta potential) of the prepared nano-liposome formulations were quantified using a Zetasizer Nano ZS with dynamic light scattering (DLS; Malvern Instruments, UK) at 25 °C and a scattering angle of 173°. The measurements were performed in triplicate and the average was then calculated.
pH Measurements of the Liposome Formulations
The pH value of the prepared formulations was measured using a pH meter (Mettler-Toledo, Leicester, UK).
Determination of the Encapsulation Efficiency
To determine the encapsulation efficiency, the free-drug and total-drug concentrations in the mixture were measured. Purification of the EGCG liposomes from the free EGCG was achieved using an Ultracel 50 kDa centrifugal filter: the sample was placed in an Eppendorf vial and centrifuged at 12,000 rpm for 12 min, and the amount of free EGCG was determined from the filtrate. The collected filtrate was diluted with the same volume of methanol to allow detection by HPLC, since the diluent of the samples in this research was a 50:50 v/v buffer:methanol mixture. To determine the total EGCG content in the mixture of EGCG liposomes and free EGCG, 100 µL of the mixture was mixed for 5 min with 300 µL of methanol to ensure complete lysis of the EGCG liposomes, and 200 µL of the buffer was then added to allow detection via HPLC. The previously mentioned HPLC method was used. The following equation was applied to calculate the encapsulation efficiency: Encapsulation efficiency (%) = (1 − concentration of free EGCG / total concentration of EGCG) × 100.
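The equation above translates directly into a one-line calculation; the concentrations in the example call are hypothetical HPLC readouts, not measured values.

```python
# A minimal sketch of the encapsulation-efficiency calculation defined above.
def encapsulation_efficiency(free_conc_ug_ml, total_conc_ug_ml):
    """EE (%) = (1 - free EGCG / total EGCG) x 100."""
    return (1 - free_conc_ug_ml / total_conc_ug_ml) * 100

# Hypothetical readouts: filtrate (free drug) vs. lysed liposome mixture (total drug)
print(f"EE = {encapsulation_efficiency(65.0, 680.0):.1f} %")
```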
Viscosity Measurements of the Optimum EGCG Nano-Liposome Formulation
The viscosity of the optimum EGCG liposome formulation was evaluated using an SV-10 viscometer (Malvern Panalytical Ltd., Malvern, UK) at a temperature of 25 °C. The average viscosity and SD were determined from three measurements.
Osmolality Measurements of the Optimum EGCG Nano-Liposome Formulation
A Model 3320 Micro-Osmometer (Advanced Instruments, Wimborne, UK) was used to measure the osmolality of the optimum liposome formulation. The osmolality was measured three times, and the average and SD were then determined.
Transmission Electron Microscopy (TEM) of the Optimum EGCG Nano-Liposome Morphology
Using fine tweezers, a TEM grid was placed on a piece of clean filter paper with the shiny side up. A drop of the optimum liposome formulation was dripped onto the TEM grid and allowed to sit for 5 min. The grid was then blotted using filter paper and rinsed well through repeated dipping (a total of 50 times) in three sequential changes of distilled carbonate-free water. A single drop of Uranyless was placed on a piece of parafilm, and the grid was placed on the Uranyless drop for 5 min to allow the heavy metals to provide contrast. The grid was again picked up with tweezers and rinsed well through repeated dipping (a total of 50 times) in three sequential changes of clean distilled carbonate-free water. Finally, it was dried for 5 min before being inserted into the TEM column and observed [51,52].
Determination of the Aerodynamic Behaviour of the Optimum EGCG Nano-Liposome Formulation Using NGI
The particle size distribution of the optimum EGCG nano-liposome formulation was measured using the NGI (Copley Scientific, Nottingham, UK). The NGI was cooled to 5 °C with the aid of the cooling system for at least 1.5 h to prevent evaporation of the aerosol droplets [53]. The NGI was enclosed and the throat was attached, and a T-piece was used to connect the nebuliser to the NGI's throat. The nebuliser's chamber was filled with 5 mL of the purified optimum EGCG liposome formulation, containing 3439 µg of EGCG, which was aerosolised using a vibrating mesh nebuliser (Aeroneb GO, Aerogen Inc., Chicago, IL, USA). The flow rate was set to 15 L/min, as recommended by the USP [54].
Different volumes of methanol were used for washing and bursting all the liposomes from all stages of the NGI to allow the measurement of quantifiable concentrations: 5 mL each was used for the nebuliser, T-piece, throat, and NGI Stages 1 to 4, and 3 mL each for NGI Stages 5 to 7 and the micro-orifice collector (MOC). The solutions were then diluted with the same volume of PBS buffer to allow detection by HPLC. Three runs were performed using the optimum EGCG liposome formulation. A 2 mL sample was taken from every flask and filtered via a 0.22 µm membrane syringe filter into an HPLC vial for analysis. The means of the total recovered dose, the recovered dose fraction, the total delivered dose, the emitted fraction (EF), the mass median aerodynamic diameter (MMAD), the geometric standard deviation (GSD), the FPF, and the fraction of particles equal to or less than 3 µm were calculated using the Copley Inhaler Testing Data Analysis Software (CITDAS; Copley Scientific Limited, Nottingham NG4 2JY, UK).
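CITDAS performs these calculations automatically; the hedged sketch below only illustrates the underlying log-linear interpolation of the cumulative undersize curve. The stage deposits are hypothetical, the cut-off diameters are nominal literature values for the NGI at 15 L/min (they should be replaced by the calibration actually used), and FPF is expressed here relative to the total mass recovered in the impactor, whereas CITDAS may use the emitted dose as the denominator.

```python
import numpy as np

cutoffs_um = np.array([14.1, 8.61, 5.39, 3.30, 2.08, 1.36, 0.98])            # stages 1-7 (nominal)
stage_mass_ug = np.array([310.0, 420.0, 560.0, 610.0, 450.0, 260.0, 140.0])  # hypothetical deposits
moc_mass_ug = 90.0                                                            # micro-orifice collector

total_mass = stage_mass_ug.sum() + moc_mass_ug
# percentage of impactor mass carried by particles smaller than each stage cut-off
mass_below = np.array([stage_mass_ug[i + 1:].sum() + moc_mass_ug
                       for i in range(len(stage_mass_ug))])
cum_pct_under = 100.0 * mass_below / total_mass

def diameter_at(pct):
    """Log-linear interpolation of the cumulative undersize curve (diameter at a given %)."""
    return float(np.exp(np.interp(pct, cum_pct_under[::-1], np.log(cutoffs_um)[::-1])))

mmad = diameter_at(50.0)
gsd = np.sqrt(diameter_at(84.13) / diameter_at(15.87))
fpf_lt5 = float(np.interp(np.log(5.0), np.log(cutoffs_um[::-1]), cum_pct_under[::-1]))
frac_le3 = float(np.interp(np.log(3.0), np.log(cutoffs_um[::-1]), cum_pct_under[::-1]))

print(f"MMAD = {mmad:.2f} um, GSD = {gsd:.2f}, "
      f"FPF(<5 um) = {fpf_lt5:.1f}%, fraction <= 3 um = {frac_le3:.1f}%")
```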
Stability of EGCG Nano-Liposome after Nebulisation
The liposome size, PDI, and encapsulation efficiency were assessed in triplicate after generation of the aerosol to assess the impact of nebulisation on the stability of the prepared liposome.
In Vitro Test of the Effectiveness of the Optimum EGCG Nano-Liposome Formulation and free EGCG on the TGF-β Pathway
Cell culture, DNA transfection, and cell-based reporter assays were carried out using established protocols [3,55]. HEK293T cells were selected in this study, as they are easy to transfect and have a high transfection efficiency. HEK293T cells were seeded in a half-area 96-well tissue-treated microtiter plate and incubated for 24 h at 37 °C. The cells were transfected with plasmids encoding the TGFBRII receptor gene (100 ng), the SBE-Luc reporter (100 ng), and the pJ7Lac-Z plasmid (50 ng) using GeneJammer transfection reagent (Stratagene, San Diego, CA, USA) following the manufacturer's instructions. Cells were then treated with drugs at 0.001 µM, 0.01 µM, 1 µM, and 10 µM for 24 h. Twenty-four hours after the treatment, the cells were lysed with 1× Reporter Lysis Buffer (Promega, Madison, WI, USA). Measurement of luciferase and β-galactosidase activities was carried out as described elsewhere [3].
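In this kind of dual-reporter setup, the luciferase signal is typically normalised to the β-galactosidase transfection control before comparing treatments; the short sketch below shows that normalisation with hypothetical readings (the exact normalisation used in the cited protocols may differ).

```python
# A minimal sketch of reporter normalisation; all readings are hypothetical placeholders.
luc = {"untreated": 5200.0, "EGCG_liposome_10uM": 1900.0}       # luminescence counts
bgal = {"untreated": 0.48, "EGCG_liposome_10uM": 0.45}          # beta-galactosidase readout

normalised = {k: luc[k] / bgal[k] for k in luc}                 # correct for transfection efficiency
relative_activity = {k: v / normalised["untreated"] for k, v in normalised.items()}
print(relative_activity)   # values below 1 indicate inhibition of TGF-beta/SBE-Luc signalling
```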
Statistical Analysis
Design Expert Version 13 was used to perform the statistical analysis of the DOE. The significance of the regression model was analysed using the ANOVA test. The ANOVA test was assessed for each response in the quadratic model of the response surface to select the best-fitting model on the basis of the F-test and the significance of the model according to the p-value; values of p < 0.05 were considered statistically significant. Moreover, the coefficient of determination (R²), adjusted R², predicted R², and coefficient of variation (CV%) were applied to confirm the adequacy of the selected model. One-way ANOVA and Tukey's post-hoc tests were used for the statistical analysis of the effect of the EGCG nano-liposomes on TGFβ signalling.
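For the cell-based data, the one-way ANOVA with Tukey's post hoc comparison can be reproduced in a few lines, as in the hedged sketch below; the group labels and relative-activity values are hypothetical placeholders, not the study's measurements.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalised reporter activities per treatment group
data = pd.DataFrame({
    "group": ["control"] * 4 + ["EGCG_1uM"] * 4 + ["EGCG_10uM"] * 4,
    "relative_tgfb_activity": [1.00, 0.97, 1.04, 0.99,
                               0.78, 0.74, 0.81, 0.76,
                               0.42, 0.45, 0.39, 0.44],
})

samples = [g["relative_tgfb_activity"].values for _, g in data.groupby("group")]
print(stats.f_oneway(*samples))                                        # omnibus one-way ANOVA
print(pairwise_tukeyhsd(data["relative_tgfb_activity"], data["group"], alpha=0.05))
```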
Optimisation of the Inhalable EGCG Using DOE
Although EGCG is an excellent target as a potential treatment option for several diseases because it has many biological activities, it has very low bioavailability and stability and a short half-life [6,15,[56][57][58]. Several studies have attempted to address these limitations by formulating EGCG liposomes, but these studies were found to have several limitations (see Section 1). In addition, there has been no attempt to prepare EGCG for delivery through the inhalation route, and none of the studies has targeted PAH.
Consequently, the DOE was implemented in this research by applying the response surface methodology of Box-Behnken design to develop and optimise an inhalable EGCG nano-liposome formulation using high-phase transition phospholipids, and to maintain the stability of the prepared liposome during nebulisation, without using any surfactants or co-surfactants such as Tween 80 or ethanol in the formulation as a potential treatment for PAH. Liposomes were selected as the carrier for EGCG, as their phospholipid components are the same as those present in the lung's surfactant [24]. Moreover, they provide sustained release kinetics and increase the drug's stability [24]. On the other hand, by developing the EGCG as an inhaled liposome, it could be delivered systemically or locally to the lung, thus reducing the systemic side effects, enhancing the patients' compliance, increasing its permeability, and preventing it from being affected by first-pass metabolism [24]. The proposed 29 runs of Box-Behnken design were performed and their results were assessed by Design Expert 13 software (see Table 2).
All the responses fitted the reduced quadratic model, except for the last response (PDI after 1 month), which was better fitted by the reduced two-factor interaction (2FI) model. Table 3 shows the minimum values, maximum values, average, SD, and the chosen model for each of the given responses. Selection of the model was based on the ANOVA test, where the suggested model was not aliased and had a significant p-value, an insignificant lack of fit, and the maximum adjusted and predicted R². Backward reduction of the models was applied to remove the insignificant terms in each model. All the selected models had p < 0.0001 with an insignificant lack of fit, which means that the selected reduced models were highly significant and that the data were properly represented by the models. The values of R², adjusted R², predicted R², and CV% for all responses are listed in Table 4. Liposome size is an essential characteristic of a liposome formulation: it defines the physical features and biodistribution of the liposomes [59,60]. The method of preparing the liposome affects its size, which can range from nanometres to several micrometres [61].
According to the vesicular arrangement, liposomes are categorised as follows. Firstly, a liposome with one lipid bilayer is known as a small unilamellar vesicle, with a 20-100 nm size range, and is widely produced by the sonication method [61,62]. Secondly, large unilamellar vesicles, with an average size of 100 nm-1 µm, can be produced through extrusion [61-63]. Thirdly, multilamellar vesicles, with a size of more than 1 µm, are usually formulated via shaking by hand [61,62].
In general, the size of liposomes used in medical applications varies from 50 nm to 450 nm [61]. This research aimed to minimize the particle size of the formulated nanoliposome to increase the mass distribution profile of the liposome's aerosol during the nebulisation process, thus increasing the percentage of the drug that would reach the alveoli. It was reported previously by our group that by decreasing the particle size of the nebulised formulations from the microscale to the nanoscale, the output's performance would be improved, resulting in greater lung deposition and particle distribution (higher FPF% and lower MMAD) [64].
The ANOVA test showed that the following terms have an impact on the liposomes' size with very highly significant effects (p ≤ 0.0001): pH (B), molar percentage of cholesterol (C), and B 2 . Moreover, the D/L molar ratio showed a significant impact on this response with a p-value equal to 0.0004. However, the influence of lipid concentration on liposomes' size was insignificant, with a p-value equal to 0.326.
The inverse significant correlation (p ≤ 0.0001) between the pH of the buffer solution and the liposomes' size may have been caused by the protonation of the phospholipid heads, especially DPPG, in this formulation at a low pH [65]. When the protonation of DPPG heads occurs at a low pH, the electrostatic repulsion between them will increase leading to the formulation of liposomes with a larger particle size [65]. In contrast, increasing the D/L molar ratio significantly (p ≤ 0.0004) increased the liposomes' size. The maximum D/L molar ratio in this design was chosen to be 11 because aggregation of the liposomes occurred when a higher D/L molar ratio was used during our screening experiment. This result agrees with the study of Brgles et al. [66], which noted that when the amount of the drug in the liposome formulation increased, the size of the liposomes increased and aggregation of the liposome occurred [66]. This explains why the minimum liposome size in this model was 85 for Run 6 when the buffer's pH was 6.5 and the D/L ratio was 5, while the maximum liposome size value was obtained in this model when the pH was 3 and the D/L ratio was 11 in Run 5; the other factors were kept the same for both these runs. Regarding the cholesterol content in the formulation, a highly significant (p ≤ 0.0001) decrease in the liposomes' size occurred when the cholesterol content increased. This finding agrees with the results obtained by Pathak et al., who investigated the effect of the cholesterol concentration on the liposomes' size [67].
The B² term also had a significant negative correlation (p ≤ 0.0001) with the size of the liposomes, while the influence of the lipid concentration on the liposomes' size was insignificant, with a p-value equal to 0.326. The following polynomial equation was obtained on the basis of the ANOVA results for liposome size: Liposome size = 124.53 − 39.33 B − 21.83 C + 19.5 D − 18.5 CD + 35.97 B². The effect of the factors on the liposomes' size is represented as a 3D plot, as depicted in Figure 1.
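The fitted equation can be evaluated directly to predict the liposome size for any factor combination; the sketch below assumes, as is usual for Box-Behnken models reported this way, that the coefficients apply to coded factor levels (−1, 0, +1).

```python
# Evaluate the reduced quadratic model reported above (coefficients in coded units, assumed).
def predicted_liposome_size(B, C, D):
    """B = pH, C = cholesterol %, D = D/L ratio, all expressed as coded levels in [-1, +1]."""
    return 124.53 - 39.33 * B - 21.83 * C + 19.5 * D - 18.5 * C * D + 35.97 * B ** 2

# Example: highest pH (B=+1), highest cholesterol (C=+1), lowest D/L ratio (D=-1)
print(f"predicted size = {predicted_liposome_size(1, 1, -1):.1f} nm")
```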
The Impact of the Formulation's Composition on the Liposomes' PDI
PDI is one of the most critical parameters of a liposome formulation: it measures the homogeneity of the size of the formulated liposomes [68]. To obtain homogeneous particles, the PDI should be less than 0.2, while a PDI of more than 0.3 indicates heterogeneity of the particles [40]. One of our aims in the optimisation process was to formulate an inhalable EGCG liposome with a low PDI to ensure the homogeneity of the liposome vesicles [40]. The pH and the molar percentage of cholesterol proved to be the two key factors for this response, each with a strong antagonistic influence on PDI (p < 0.0001), followed by B² (p < 0.05), as shown in the resulting polynomial equation for this response: PDI = 0.24 − 0.052 B − 0.093 C + 0.048 B². The effect of the pH on PDI may be attributable to the protonation of the DPPG heads, which alters the size of the vesicles of the formulated liposome and, consequently, the homogeneity of the formulation [65]. The negative effect of the cholesterol content on PDI was attributed to its stabilisation of the liposome formulation: cholesterol can weaken the interaction between the acyl chains of the phospholipids, subsequently hindering aggregation of the vesicles, which results in a narrow distribution of the vesicle size (low PDI) [69,70]. The influence of the factors on the liposomes' PDI is represented as a 3D plot in Figure 2.
The Impact of the Formulation's Composition on the Encapsulation Efficiency
The lipid concentration and the pH had a significant positive influence (p < 0.0001) on the encapsulation efficiency of the EGCG liposomes. For example, the minimum encapsulation efficiency in this design (69%) was obtained in Run 14, where the pH was 3, whereas at pH 6.5 the minimum and maximum encapsulation efficiencies were 88.5% and 96.5%, respectively (see Table 2). AD, BC, BD, A², and C² also showed significant impacts on the encapsulation efficiency (p < 0.05). However, the D/L molar ratio had no significant influence on this response (p = 0.4038). The following polynomial equation was obtained on the basis of the ANOVA for this response: Encapsulation efficiency = 90.03 + 5.33 A + 4.19 B − 3.63 C − 6.25 AB + 3.5 AD + 3.85 BC + 4.18 BD − 3.03 A² + 2.49 C². The effect of the factors on the encapsulation efficiency is represented as a 3D plot in Figure 3.
The Impact of the Formulation's Composition on the Liposomes' Zeta Potential
One of the fundamental characteristics of a liposome formulation is the zeta potential. It represents the charge of the liposome vesicles and controls the aggregation or precipitation of the liposome vesicles; therefore, it alters the formulations' stability [68].
Increasing the pH of the liposome media resulted in significantly higher absolute values of the zeta potential (p < 0.0001). This trend can be explained by the fact that the deprotonated form of DPPG increases as the pH increases, producing a higher negative charge in the formulation, as has been reported in similar research [38]. In addition, increasing the cholesterol content (p = 0.0007) resulted in a significant rise in the absolute value of the zeta potential of the formulation. This can be attributed to the fact that all the formulations in this study had the same molar percentage of negatively charged DPPG; however, when we increased the cholesterol level, we decreased the amount of the neutral lipid, DPPC. The terms B², D (D/L molar ratio), and CD also had a significant impact on this response (p < 0.05). The following polynomial equation was obtained on the basis of the ANOVA for this response: Zeta potential = −23.47 − 4.46 B − 1.96 C + 1.13 D − 3 CD − 4.26 B². The effect of the factors on the zeta potential is represented as a 3D plot in Figure 4.
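The remaining fitted models can be collected in the same way; the sketch below gathers the PDI, encapsulation-efficiency, and zeta-potential polynomials quoted in the preceding subsections. The same coded-level assumption applies, and the assignment of factor letters in the zeta-potential equation follows the reconstruction given above, so it should be read as indicative rather than authoritative.

def predicted_pdi(B, C):
    # PDI model quoted above; B = pH, C = cholesterol mol% (coded levels assumed).
    return 0.24 - 0.052 * B - 0.093 * C + 0.048 * B ** 2

def predicted_encapsulation_efficiency(A, B, C, D):
    # Encapsulation-efficiency model (%); A = lipid concentration, B = pH,
    # C = cholesterol mol%, D = D/L molar ratio (coded levels assumed).
    return (90.03 + 5.33 * A + 4.19 * B - 3.63 * C - 6.25 * A * B + 3.5 * A * D
            + 3.85 * B * C + 4.18 * B * D - 3.03 * A ** 2 + 2.49 * C ** 2)

def predicted_zeta_potential(B, C, D):
    # Zeta-potential model (mV); the factor assignment is reconstructed from the text.
    return -23.47 - 4.46 * B - 1.96 * C + 1.13 * D - 3.0 * C * D - 4.26 * B ** 2

# Example: evaluate all three responses at the centre point of the design (all coded levels = 0).
print(predicted_pdi(0, 0), predicted_encapsulation_efficiency(0, 0, 0, 0), predicted_zeta_potential(0, 0, 0))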
The Impact of the Formulation's Composition on the Liposomes' PDI after One Month
Sustaining a constant size distribution of the formulated liposomes (a constant PDI) over a long period of time is a demonstration of the liposomes' stability [71,72]. The most influential factor for this response was C (molar percentage of cholesterol), which significantly antagonised this response (p < 0.0001). This is attributable to the fact that an increase in the cholesterol concentration results in a corresponding increase in the rigidity and stability of the liposomes, which prevents a significant change in size and keeps the liposome suspension monodispersed [34]. The pH also affected this response significantly (p = 0.0021): increasing the buffer's pH led to an increase in the negative zeta potential of the formulation [38], and a higher absolute zeta potential gives a more stable formulation, as it limits the fusion and agglomeration of the liposome vesicles, ensuring low and constant PDI values [68]. It should be mentioned that neither A (total lipid concentration) nor D (D/L molar ratio) individually showed a significant impact on this response; however, the terms AB and BD did influence this response significantly (p < 0.05).
Preparation and Characterisation of the Optimum Proposed Nano-Liposome Formulation and Confirming the Predictivity of the Model
The following target criteria for the optimum inhalable EGCG nano-liposome were set in this research: the minimum liposomal particle size, the minimum liposomal PDI, the maximum absolute value of liposomal zeta potential (due to its negative value, in this study, it was set as a minimum in Table 5), the maximum liposomal encapsulation efficiency, the minimum liposomal PDI after 1 month, and the maximum D/L molar ratio; the target pH was set to 6, as this value is suitable for the inhalation route [48], and the other factors were set within the ranges depicted in Table 5. As indicated in Table 5, the importance was chosen to be 3 for all factors and responses, since all of them were equally important for our optimum formulation. Design Expert software proposed some optimum solutions. The estimated optimal points for the selected solution had a lipid concentration of 10 mg/mL, a pH of 6, a cholesterol percentage of 20%, and D/L molar ratio of 11.
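A naive way to reproduce the spirit of this optimisation step is a coarse grid search over coded factor levels that scores candidates against the stated targets (small size and PDI, high encapsulation efficiency, large absolute zeta potential). The sketch below, which for brevity reuses only the size and PDI models (the encapsulation-efficiency and zeta-potential models would enter the score analogously), is an illustrative stand-in for Design Expert's desirability-based search, not a reproduction of it.

import itertools

def size(B, C, D):
    return 124.53 - 39.33 * B - 21.83 * C + 19.5 * D - 18.5 * C * D + 35.97 * B ** 2

def pdi(B, C):
    return 0.24 - 0.052 * B - 0.093 * C + 0.048 * B ** 2

levels = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = None
for B, C, D in itertools.product(levels, repeat=3):
    # Lower is better: size is roughly normalised before adding the PDI term.
    score = size(B, C, D) / 200.0 + pdi(B, C)
    if best is None or score < best[0]:
        best = (score, {"B (pH)": B, "C (cholesterol)": C, "D (D/L)": D})
print(best)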
The suggested liposome preparation was formulated, and the experimental values of the responses were as follows: the average particle size was 105 nm, the PDI was 0.18 (see Figure 6), the zeta potential was −25.5 mV, the encapsulation efficiency was 90.5%, and the PDI after 1 month was 0.19. The PDI of the optimum formulation was also assessed after 2 and 3 months, and it remained 0.19. Moreover, the average particle size was assessed after 1, 2, and 3 months, and it was 107 nm, 106 nm, and 109 nm, respectively. Consequently, the optimum EGCG liposome formulation was shown to be in the nanoscale range and physically very stable for at least 3 months, with an excellent encapsulation efficiency of more than 90%. All the actual results for these responses were in strong agreement with the values predicted by the model, demonstrating its excellent predictivity.
Our next steps were to study the chemical stability and the in vitro release profile of the optimum EGCG nano-liposome to confirm the chemical integrity of the drug and phospholipids, and the release profile of the EGCG, respectively.
Viscosity Measurements of the Optimum EGCG Nano-Liposome Formulation
The viscosity of the optimum EGCG liposome formulation was assessed at 25 °C using an SV-10 viscometer (Malvern Panalytical Ltd., Malvern, UK), and it was 9 mPa·s.
Osmolality Measurements of the Optimum EGCG Nano-Liposome Formulation
For the nebulised formulation, the range of suitable osmolality is between 130 and 500 mOsm/kg. It has been reported that an aerosol with osmolality outside this range could induce coughing and bronchoconstriction [48]. The osmolality of the optimum EGCG liposome formulation was equal to 359 ± 3 mOsm/kg, which was within the acceptable range.
Figure 6. Optimum inhalable EGCG nano-liposomes' size and size distribution.
The Morphology of the Optimum EGCG Nano-Liposome
In this study, negative staining and TEM, a well-established method for imaging liposomes, was used to image the optimum EGCG nano-liposome formulation because it is faster and simpler than cryo-TEM and requires less advanced equipment [52]. As shown in Figure 7, the optimum EGCG nano-liposome has a spherical shape and a size of around 105 nm.
The Aerodynamic Behaviour of the Optimum EGCG Nano-Liposome Formulation
The type of nebuliser plays a significant role in sustaining the stability of liposomes during nebulisation [27,73,74]. The vibrating mesh nebuliser has been proved to be less disruptive to liposomes' walls [27,[73][74][75]. The heat and the waves that are produced during nebulisation by ultrasonic nebulisers disrupt the lipid bilayers of the liposomes, leading to aggregation and/or drug loss [73,74]. The shearing forces generated by air-jet nebulisers may disrupt the liposomes and cause a significant loss of the encapsulated drugs [27,76,77]. Therefore, the vibrating mesh nebuliser Aeroneb ® GO was selected in this study for the aerosol spray of the optimum nanoliposome formulation, as it has advantages over air-jet and ultrasonic nebulisers for liposomes [27,[73][74][75].
Characterisation of the lung deposition of the inhaled optimum EGCG liposome and the aerosol was conducted using the NGI (Copley Scientific, Nottingham, UK). The flow rate was set to 15 L/min in order to simulate the midpoint of tidal breathing for a healthy adult user [54].
The mass distribution profile of the aerosol of the optimum EGCG nano-liposome formulation across the NGI stages, generated with the Aeroneb GO nebuliser at a flow rate of 15 L/min, is shown in Figure 8. The nebulisation time was 13 min. The mean total recovered dose was 3242.5 ± 55.8 µg, corresponding to an excellent recovered dose fraction of 94.3%. The mean total delivered dose was 2638.5 ± 50.2 µg, giving a high EF of 81.4%.
For the size distribution measurements, the MMAD was used, as it estimates the median aerodynamic particle size of the aerosol, and the GSD was determined to measure the droplets' polydispersity [36]. The MMAD of the optimum EGCG liposome was 4.41 µm, with a GSD of 2.6, indicating that the emitted dose would be deposited in the lungs. The FPF% was estimated to determine the percentage of liposomes that are considered inhalable, that is, those with an aerodynamic diameter of ≤5 µm [36]. The FPF was 53.46%, implying that the aerosolisation performance of the formulation was good. It has been reported that particles with an MMAD of 1 to 5 µm accumulate deep in the lung, and smaller particles are preferable for deposition in the alveolar region [78,79]. The size distribution measurements were comparable to those of the FDA-approved liposome inhalation suspension of amikacin (Arikayce®), which has a mass median aerodynamic diameter of 4.7 µm [80], a GSD of about 1.63, and an FPF% < 5 µm ranging from 50.3% to 53.5% [81]. The deposition target of the aerosol droplets in pulmonary hypertension is the alveolar region; therefore, the inhaled particles must be smaller than 3 µm to be delivered to this deep region [27,82,83]. Consequently, the fraction of particles equal to or less than 3 µm was determined to estimate the percentage of the dose expected to be deposited in the alveolar region, and it was 34.3% [78,79]. These in vitro results demonstrate that the prepared optimum EGCG liposome has all the properties needed to be inhalable and is expected to be deposited in the narrower airways.
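For readers who want to reproduce these cascade-impactor metrics from raw stage data, the sketch below shows the usual calculations: the emitted fraction as delivered over recovered dose, the fine particle fraction as the share of impactor mass below 5 µm, and MMAD/GSD by log-linear interpolation of the cumulative undersize curve. The stage cut-off diameters and deposited masses used here are illustrative placeholders, not the measured values of this study, and the choice of denominator for the FPF (impactor mass rather than delivered dose) is a simplifying assumption.

import math

# Illustrative NGI stage cut-off diameters (um) and deposited drug masses (ug); not study data.
cutoffs_um = [14.1, 8.61, 5.39, 3.30, 2.08, 1.36, 0.98]
stage_mass_ug = [120.0, 260.0, 480.0, 610.0, 520.0, 310.0, 150.0]

total_impactor_mass = sum(stage_mass_ug)

def emitted_fraction(delivered_ug, recovered_ug):
    # EF (%) = total delivered dose / total recovered dose, as defined in the text.
    return 100.0 * delivered_ug / recovered_ug

# Cumulative % of mass carried by droplets smaller than each stage cut-off.
undersize_pct = [100.0 * sum(stage_mass_ug[i + 1:]) / total_impactor_mass
                 for i in range(len(cutoffs_um))]

def diameter_at(pct):
    # Log-linear interpolation of the cumulative undersize curve at a given percentile.
    for i in range(len(cutoffs_um) - 1):
        d_hi, p_hi = cutoffs_um[i], undersize_pct[i]
        d_lo, p_lo = cutoffs_um[i + 1], undersize_pct[i + 1]
        if p_lo <= pct <= p_hi:
            frac = (pct - p_lo) / (p_hi - p_lo)
            return math.exp(math.log(d_lo) + frac * (math.log(d_hi) - math.log(d_lo)))
    return float("nan")

mmad = diameter_at(50.0)
gsd = math.sqrt(diameter_at(84.1) / diameter_at(15.9))
fpf_below_5um = 100.0 * sum(m for d, m in zip(cutoffs_um, stage_mass_ug) if d <= 5.0) / total_impactor_mass

print(round(emitted_fraction(2638.5, 3242.5), 1))   # 81.4, matching the EF reported above
print(round(mmad, 2), round(gsd, 2), round(fpf_below_5um, 1))  # illustrative values only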
Stability of the Nebulised Liposome after Nebulisation
The liposome size, PDI, and encapsulation efficiency were assessed after nebulisation to analyse the impact of the nebuliser on the stability of the prepared liposomes. Both PDI and encapsulation efficiency were not affected by nebulisation and had the same values as before nebulisation. However, the liposomes' size rose slightly from 105 nm before nebulisation to 120 nm after nebulisation. This indicated that nebulisation does not influence the stability or membrane rigidity of the prepared liposome. This is attributed to the high-phase transition phospholipids that were used in this liposome preparation, which included DPPC and DPPG [27,34,35]. Moreover, the inclusion of cholesterol increased the rigidity of the liposomes' wall, thus increasing the liposomes' stability after nebulisation [34]. Another reason for the high stability of this liposome formulation after nebulisation is the type of nebuliser used in this study [73][74][75]. It has been reported that vibrating mesh nebulisers are less disruptive to the liposomes' wall and maintain the liposomes' stability after nebulisation [73][74][75]. The stability of our optimum EGCG nanoliposome was in strong agreement with that in the stability study on Arikace, a liposome formulation that has reached a Phase III trial [85,86]. Its success has been attributed to the use of suitable ingredients in the formulation and an appropriate inhalation device (a PARI eFlow mesh nebuliser) [85,86]. DPPC, a phospholipid with a high-transition temperature, and cholesterol were used as ingredients in the formulation [85,86]. In contrast, a significant decrease in the encapsulation efficiency was observed when ultrasonic nebulisation was applied to inhalable liposomes of sildenafil citrate, with the observed reduction in the encapsulation efficiency ranging from 12.39% to 26.23% depending on the formulation's ingredients [87].
In Vitro Test of the Effectiveness of the Optimum EGCG Nano-Liposome Formulation and the Free EGCG on the TGF-β Pathway
To determine the effect of the newly formed compounds, a TGF-β-responsive reporter assay was used. The reporter assay was validated by overexpressing the TGFBRII receptor, a recognised component of the TGF-β signalling pathway (Figure 9). HEK293T cells were transfected with SBE-Luc, β-gal, and TGFBRII for 24 h; a plasmid containing the bacterial Lac-Z gene was used as an internal standard. Cells overexpressing the receptor significantly increased the activity of the reporter, indicating the validity of the assay. Subsequently, HEK293T cells expressing the reporter were treated with the formulated EGCG nano-liposomes, and their activities were compared with those of free EGCG. Both the free EGCG and the EGCG nano-liposome formulation inhibited the reporter's activity at a concentration of 10 µM. Interestingly, the EGCG nano-liposome also inhibited the reporter's activity at 1 µM, whilst the free EGCG failed to do so at this concentration. This may be attributed to the lower stability of EGCG at this lower concentration compared with 10 µM; it has been reported that EGCG's stability depends on its concentration and that EGCG degrades quickly at low concentrations. This may also demonstrate the protective effect of the liposome formulation on EGCG's stability [17]. The liposomal formulation itself (i.e., the lipid vehicle free from EGCG) did not elicit any discernible effect (Figure 10).
Figure 9. Validation of the SBE-Luc reporter assay. HEK293T cells were transfected with SBE-Luc, β-gal, and TGFBRII for 24 h. The untreated control (cells transfected with SBE-Luc and β-gal for 24 h) was set to 100%. Mean values of the relative Luc-Gal ratio (%) with the standard error of the mean (SEM) were used to plot this graph in GraphPad Prism Version 9. An unpaired parametric t-test was used for statistical analysis, with APA-style p-values: <0.001 (***).
Figure 10. The effects of the EGCG nano-liposome formulation on TGF-β signalling. After 24 h of seeding the HEK293T cells in a 96-well half-area plate, cells were transfected with TGFBRII, SBE-Luc, and β-gal. After 24 h, these were treated with the compounds EGCG (free EGCG), L(EGCG) (EGCG nano-liposomes at various concentrations), and L (liposomes without EGCG). These treatments were compared with the untreated TGFBRII, SBE-Luc, and β-gal control, which was set to 100.
One-way ANOVA and Tukey's post-hoc test were used. An unpaired parametric t-test was used for statistical analysis, where the APA style p-values were used: 0.12 (ns), 0.033 (*), and <0.001 (***).
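The reporter read-out behind Figures 9 and 10 is a straightforward ratio normalisation, sketched below for clarity: luciferase activity is divided by β-galactosidase activity to correct for transfection efficiency and then expressed relative to the untreated control, which is set to 100%. The variable names and example numbers are illustrative only, not values from the study.

def relative_luc_gal_ratio(luc, gal, luc_untreated, gal_untreated):
    # Luc/Gal ratio of a treated well expressed as % of the untreated control.
    return 100.0 * (luc / gal) / (luc_untreated / gal_untreated)

# Example: a treatment that halves reporter activity relative to the control.
print(round(relative_luc_gal_ratio(luc=1200.0, gal=800.0, luc_untreated=2400.0, gal_untreated=800.0), 1))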
Further work is planned to confirm this in vitro effect in HEK293T cells. Cell culture validation systems will be used, including quantitative polymerase chain reaction (qPCR) and Western blotting analysis, to understand the gene expression levels. These experiments will then be further validated using monocrotaline (MCT) and hypoxia-induced PAH animal models to evaluate cell apoptosis.
Conclusions
An optimum inhalable EGCG nano-liposome was developed by applying response surface methodology, using high-phase-transition phospholipids to maintain the liposomes' stability during nebulisation without the use of any surfactants or co-surfactants in the formulation. Our understanding of the production method and of the impact of the composition of inhalable EGCG nano-liposomes was enhanced by the implementation of the DOE strategy. The results revealed that all the studied formulation factors significantly influenced the characteristics of the prepared EGCG nano-liposome formulations. The optimum EGCG liposome formulation was stable for at least 3 months, with an encapsulation efficiency of more than 90%. The aerodynamic behaviour demonstrated the suitability of this EGCG liposome nano-formulation for inhalation and deposition in the alveolar region of PAH lungs. Moreover, the new optimum EGCG nano-liposome formulation was shown to be very stable after nebulisation with a vibrating mesh nebuliser. Whilst both the free EGCG and the optimum EGCG nano-liposomes inhibited the reporter's activity at 10 µM, the EGCG nano-liposome formulation showed higher efficacy in inhibiting TGF-β signalling at 1 µM (p < 0.05). These points suggest that the newly formulated EGCG may have experimental and clinical applications in PAH.
Acknowledgments: Not applicable.
Conflicts of Interest: A patent application is in progress.
Abbreviations: BMPR2, bone morphogenetic protein receptor 2; R², coefficient of determination; TEM, transmission electron microscopy; TGF-β, transforming growth factor β; 2FI, two-factor interaction model.
Modelling Lévy space-time white noises
Based on the theory of independently scattered random measures, we introduce a natural generalisation of Gaussian space-time white noise to a Lévy-type setting, which we call Lévy-valued random measures. We determine the subclass of cylindrical Lévy processes which correspond to Lévy-valued random measures, and describe the elements of this subclass uniquely by their characteristic function. We embed the Lévy-valued random measure, or the corresponding cylindrical Lévy process, in the space of general and tempered distributions. For the latter case, we show that this embedding is possible if and only if a certain integrability condition is satisfied. Similar to existing definitions, we introduce Lévy-valued additive sheets, and show that integrating a Lévy-valued random measure in space defines a Lévy-valued additive sheet. This relation is manifested by the result that a Lévy-valued random measure can be viewed as the weak derivative of a Lévy-valued additive sheet in the space of distributions.
Introduction
Gaussian random perturbations of partial differential equations are most often modelled either as a cylindrical Brownian motion or a Gaussian space-time white noise. The choice usually depends on the exploited method, in which one follows either a semi-group approach, based on the work by Da Prato and Zabczyk in [13], or a random field approach, originating from the work by Walsh in [42]. It is well known that both models essentially result in the same dynamics as established by Dalang and Quer-Sardanyons in [16].
Cylindrical Brownian motions can be naturally generalised to cylindrical Lévy processes by exploiting the theory of cylindrical measures and random variables. This was accomplished by one of us together with Applebaum in [2]. In the random field approach, Gaussian space-time white noise is generalised to Lévy space-time white noise as an infinitely divisible random measure, often represented by integrals with respect to Gaussian and Poisson random measures. Both generalisations, cylindrical Lévy processes and Lévy space-time white noises, serve as a model for random perturbations of complex dynamical systems. These applications can be found for cylindrical Lévy processes, for example, in the monograph by Peszat and Zabczyk [34] or in Kumar and Riedle [30], and for Lévy space-time white noise in Applebaum and Wu [3], Chong [11], Chong and Kevei [12] and Dalang and Humeau [15], among many others. Another approach to model such perturbed dynamical systems, for example, parabolic stochastic partial differential equations, is provided by the recently introduced ambit fields, presented in the monograph [6] by Barndorff-Nielsen, Benth and Veraart, and their relations to SPDEs are investigated in [7] by the same authors.
The main objectives of our work are the comparison of cylindrical Lévy processes and Lévy space-time white noises, as well as their embeddings in the space of general and tempered (Schwartz) distributions. It turns out that these results significantly differ from the Gaussian situation. Only the standard cylindrical Brownian motion corresponds to the Gaussian section, Section 5, is devoted to the comparison of cylindrical Lévy processes and Lévy-valued random measures. Our main results here characterise exactly the subclass of cylindrical Lévy processes which correspond to Lévy-valued random measures. In the last section, Section 6, we complete the picture by establishing Lévy-valued random measures as the weak derivative of Lévy-valued additive sheets. The Lebesgue measure on B(R d ) is denoted by leb. The closed unit ball in R d is denoted by Throughout the paper, we fix a probability space (Ω, A, P ). The space of P -equivalence classes of measurable functions f : Ω → R is denoted by L 0 (Ω, P ), and of pth integrable functions by L p (Ω, P ) for p > 0. These spaces are equipped with their standard metrics and (quasi-)norms.
Lévy-valued random measures
Our definition of Lévy-valued random measures is based on the work [36] by Rajput and Rosinski. Instead of general δ-rings, it is sufficient for us to restrict ourselves to the δ-ring For the notion of measures on a ring, see, for example, [23]. We call the triple (γ, Σ, ν) the characteristics of M . Furthermore, we may extend the total variation γ TV of γ and Σ to σ-finite measures on B(O). In this case, the mapping defines a σ-finite measure, which is called the control measure of M . We note that λ( We extend Definition 2.1 to include a dynamical aspect, that is, a time variable. This extension can be thought of as a similar construction to that of Walsh in [42].
and n ∈ N, the stochastic process is a Lévy process in R n . We shall write M (t, A) := M (t)(A).
Let (M (t) : t 0) be a Lévy-valued random measure on B b (O), and suppose (γ, Σ, ν) and λ are the characteristics and control measure, respectively, of the infinitely divisible random measure M (1). Then, it follows from the stationary increments of the process (M (t, A) : t 0) that for each t 0 the characteristics of the infinitely divisible random measure M (t, A) are given by (tγ, tΣ, tν), and the control measure of M (t) is given by tλ. We shall refer to (γ, Σ, ν) as the characteristics of M and λ as the control measure of M .
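For orientation, the display below sketches the characteristic function that these characteristics encode, in the standard Lévy–Khintchine form used for independently scattered infinitely divisible random measures in the sense of Rajput and Rosinski; the truncation function 1_{[-1,1]} is our assumption about the convention, since the authors' own display is not reproduced here.

% Hedged sketch: characteristic function of M(t, A) for A in B_b(O),
% with characteristics (t*gamma, t*Sigma, t*nu); truncation convention assumed.
\mathbb{E}\Bigl[\exp\bigl(iu\,M(t,A)\bigr)\Bigr]
  = \exp\!\Bigl( t\Bigl[ iu\,\gamma(A) - \tfrac{u^{2}}{2}\,\Sigma(A)
  + \int_{A\times\mathbb{R}} \bigl( e^{iuy} - 1 - iuy\,\mathbf{1}_{[-1,1]}(y) \bigr)\,\nu(\mathrm{d}x,\mathrm{d}y) \Bigr] \Bigr),
  \qquad u\in\mathbb{R}.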
Our definition above of Lévy-valued random measures assigns a special role to the time domain although this is not necessary for infinitely divisible random measures in general. However, as we will later compare Lévy-valued random measures with cylindrical Lévy processes, which are naturally carrying a time domain as generalised stochastic processes, we found it more illustrative to have the time domain distinguished. Indeed, the following theorem shows that a Lévy-valued random measure corresponds to an infinitely divisible random measure on the product space of time and space domain if the stationarity in the time domain is described by the control measure accordingly.
Proof. This can be proved similarly as [39,Theorem 3.2] The relation between random measures and models of Lévy-type noise utilising a Lévy-Itô decomposition is well known. We rigorously formulate this result in our setting in the following proposition; for a converse conclusion, see Remark 3.6.
Proposition 2.5. Let ζ be a σ-finite Borel measure on B(O) and (U, U , ν) a σ-finite measure space. Assume that: is Poisson random measure with intensity leb ⊗ ζ ⊗ ν, independent of W , and with compensated Poisson random measure N .
Then for any functions: we define a mapping M : Then we obtain a Lévy-valued random measure on B b (O) by the prescription Proof. The existence of the Gaussian integral is guaranteed by [42,Theorem 2.5] and that of the Poisson integrals by [27,Lemma 12.13]. The characteristic function of M ([0, t] × A), see, for example, in [40,Proposition 19.5], shows that M is an infinitely divisible random measure, and thus applying Proposition 2.3 completes the proof.
Example 2.6. The class of α-stable random measures is introduced, for example, in [38,Section 3.3]. These can be obtained from Proposition 2.5 by defining for bounded sets B in for some p + q = 1 (for the case α = 1, it is required that p = q = 1 2 ); see Balan [5] for this construction. Proposition 2.5 guarantees that, by defining M (t, A) := M ((0, t] × A) for t 0 and A ∈ B b (R d ), we obtain a Lévy-valued random measure M on B b (R d ). Direct calculation shows that for α = 1, the characteristic function of M (t, A) is given by, for t 0, where β := p − q, and thus we see the characteristics of M are (β α 1−α leb, 0, leb ⊗ ν α ). The control measure is given by For the case α = 1, the characteristic function of M (t, A) is given by Example 2.7. Mytnik, in [31], considers a martingale-valued measure (M (t, A) : t 0, A ∈ B b (R d )) in the sense of Walsh [42], such that for any A ∈ B b (R d ), the process (M (t, A) : t 0) is a real-valued α-stable process (α ∈ (1, 2)), with Laplace transform The author terms M an α-stable measure without negative jumps.
Lévy-valued additive sheets
Just as the Brownian sheet is the generalisation of a Brownian motion to a multidimensional index set, additive sheets are defined as the corresponding generalisation of an additive process. Adler et al. [1] first defined additive random fields on R d , and termed them 'Lévy processes' should they be stochastically continuous. In [17], Dalang and Walsh discuss Lévy sheets in R 2 . Additive fields with stationary increments are considered by Barndorff-Nielsen and Pedersen in [8] and are called 'homogeneous Lévy sheets'. Herein we present our definition based on the deposition of Dalang and Humeau in [14], which extends [1], and results from Pedersen [33]. For a function f : where c j (0) = b j and c j (1) = a j . For example, in the case d = 2, we have Δ b The càdlàg property is generalised to random fields in the following way: a function f : R d → R has limits along monotone paths (lamp) if for every x ∈ R d and any sequence (x n ) n∈N ⊆ R d converging to x with either x n,j < x j or x n,j x j for all n ∈ N and j ∈ {1, . . . , d} where x = (x 1 , . . . , x d ) and x n = (x n,1 , . . . , x n,d ), the limit f (x n ) exists as n → ∞ and furthermore f is right-continuous if f (x n ) → f (x) as n → ∞ for all sequences with x x n for all n ∈ N. We note that this property is a path-based property, and thus in contrast to random measures we define our sheets as mappings from R d × Ω → R. For relaxing the requirements in Definition 3.1, we refer to [1], for example, to capture arbitrary initial conditions or sheets which are not continuous in probability. In particular, it is shown that Conditions (a)-(c) guarantee the existence of a lamp and rightcontinuous modification.
If (X(x) : x ∈ I) is an additive sheet, then for fixed x ∈ I the random variable X(x) is infinitely divisible; see Adler [1, Theorem 3.1]; let its characteristics be denoted by (p x , A x , μ x ). The additive sheet is said to be natural if the mapping x → p x , which is necessarily continuous, is of bounded variation, or equivalently, if there exists an atomless signed measure γ with p x = γ((0, x]) for all x ∈ I; here, we use the convention (0, Similarly as for infinitely divisible random measures, we introduce a dynamical aspect in the following definition: is called a Lévy-valued additive sheet if for every x 1 , . . . , x n ∈ R d and n ∈ N, the stochastic process is a Lévy process in R n .
The wording 'Lévy-valued additive sheet' is motivated by the following result: Proof. The domain of definition and Conditions (a), (b) and (d) of Definition 3.1 are clearly met. Regarding stochastic continuity, let (t n , x n ) n∈N be a sequence in R + × R d converging to (0, x). For each n ∈ N, the random variable X(1, x n ) is infinitely divisible, say with characteristics (p xn , V xn , μ xn ). As X(1, ·) is a natural, additive sheet, there exists a signed measure γ such that p xn = γ((0, x n ]). Since the Lévy process (X(t, x n ) : t 0) has stationary increments, it follows that each X(t, x n ) has characteristics (tp xn , tV xn , tμ xn ) for every t 0. Theorem 3.1 in [1] implies that there exist a measure Σ on B(R d ) such that V xn = Σ((0, x n ]), and a measure ν on B(R d × R) such that, for each B ∈ B(R), the mapping ν(· × B) is a measure on B(R d ), and μ xn = ν((0, x n ] × ·). Therefore, the Lévy symbol of X(t n , x n ) is given by, for u ∈ R, As the set {x n : n ∈ N} is bounded, there exists a bounded box I ⊆ R d containing every box (0, x n ], n ∈ N. Thus, we obtain for each u ∈ R that Finiteness of the right side follows from the fact that the measures are finite on I. Therefore, it follows that X(t n , x n ) → 0 in probability as (t n , x n ) converges to (0, x). If (t n , x n ) is an arbitrary sequence converging to (t, x), stationary increments imply for each c > 0 that . Consequently, the above established continuity in probability shows the general case.
The fact that X(z) is natural can be seen from the form of the characteristic function, where we have p z = tγ((0, x]) for z = (t, x).
We are now able to state the link between Lévy-valued random measures and Lévy-valued additive sheets by formulating a result from Pedersen in [33] in our setting.
Then any lamp and right-continuous modification of the stochastic process
Proof. See [33].
Remark 3.6. Theorem 3.5 and its proof enables us to conclude a converse implication of Proposition 2.5. If M is a Lévy-valued random measure with atomless control measure λ, then it satisfies a Lévy-Itô decomposition of the form Furthermore, we see that one does not achieve larger generality by allowing an arbitrary measure space (U, U , ν) in Proposition 2.5, as the Poissonian components can be represented as integrals over R.
Lévy-valued measures in the space of distributions
In this section, we embed the Lévy-valued random measure into the spaces of distributions and of tempered distributions. These embeddings are based on the integration theory for independently scattered infinitely divisible measures developed by Rajput and Rosinski in [36]. The multiplicative relation between the characteristics of the infinitely divisible random measures M (1) and M (t), remarked after Definition 2.2, enables us to apply directly the integration theory for infinitely divisible random measures to Lévy-valued random measures (M (t) : t 0) on B b (O): for a simple function for α k ∈ R and pairwise disjoint sets An arbitrary measurable function f : O → R is said to be M -integrable if the following hold.
(1) There exists a sequence of simple functions (f n ) n∈N of the form (4.1) such that f n converges pointwise to f λ-a.e., where λ is the control measure of M .
(2) For each A ∈ B(O) and t 0, the sequence ( A f n (x) M (t, dx)) n∈N converges in probability.
In this case, the integral of f is defined as It is clear, by the stationarity of the increments of Lévy processes, that Condition (2) Here, (γ, Σ, ν) denotes the characteristics of M . The measure Furthermore for all t 0, the mapping For an open set O ⊆ R d , let D(O) denote the space of infinitely differentiable functions with compact support. We equip D(O) with the inductive topology, that is, the strict inductive limit of the Fréchet spaces D( is called the space of distributions, which we equip with the strong topology, that is the topology generated by the family of seminorms Analogously as locally integrable functions and measures are identified with distributions, we proceed to relate a Lévy-valued random measure M on B b (O) to a distribution-valued process. For this purpose, we define for each t 0 the integral mapping In the proof of Theorem 4.1 below, we show that D(O) is continuously embedded in L M (O, λ), and thus the mapping J D (t) is well defined.
In the following theorem as in the reminder of the article, we use the phrase genuine Lévy process in a space F to emphasise that this is a Lévy process in the space Our proof of this theorem relies on the following two Lemmas.
is a Lévy process in R n .
Proof. Let f k for k = 1, . . . , n be simple functions of the form for α k,j ∈ R and A k,j ∈ B b (O) with A k,1 , . . . , A k,m k disjoint for each k ∈ {1, . . . , n}. By taking the intersections of all possible permutations of the sets A k,j , we can assume that for all k = 1, . . . , n, whereα k,j ∈ R and disjoint setsà 1 , . . . ,à m ∈ B b (O) for some m ∈ N. For each 0 t 1 < · · · < t n , we obtain by the definition in (4.2) that .
and thus the stochastic continuity of J(·)f implies that of ((J(t)f 1 , . . . , J(t)f n ) : t 0). Consequently, the latter is verified as an n-dimensional Lévy process. Proof. Denote the characteristics of M by (γ, Σ, ν). Note, that for arbitrary g ∈ L p (O, λ) and p ∈ [1, 2], we have Furthermore we obtain from the definition of U and (4.9), recalling that Let C := sup{|y| : (x, y) ∈ K}. Define for n ∈ N, x ∈ O and y ∈ R functions g n (x, y) := f n (x). Since (f n ) converges in L 1 (O, λ) it follows from (4.9) that (g n ) converges to 0 in L 1 (O × B c R , ν), and thus in ν 1 -measure. Consequently, there exists N ∈ N such that, for n N , which shows the claim.
Consequently, it follows from (4.10) that (f n ) converges in L M (Φ, λ), which completes the proof.
Proof of Theorem 4.1. We first show that the space D(K) is continuously embedded in L M (O, λ) for each compact K ⊆ O. Trivially, the space D(K) is continuously embedded in L ∞ (K, λ). As K ∈ B b (O), the control measure λ is finite on K, and it follows that L ∞ (K, λ) is continuously embedded in L 2 (K, λ). The latter is continuously embedded in L M (K, λ) by Lemma 4.3. Because whenever supp(f ) ⊆ K, we have In the second part of this section, we embed the Lévy-valued random measure into the space of tempered distribution S * (R d ). We introduce the Schwartz space Define for each t 0 the integral mapping In this case, the mapping J S (t) as defined in (4.11) is well defined and continuous for each t 0. Furthermore, there exists a genuine Lévy process (Y (t) : λ). Let (f n ) n∈N ⊆ S(R d ) be a sequence converging to 0 in S(R d ). As the convergence is uniform in x, we have the existence of another K > 0 such that (
Proof. We begin by showing the implication (b) ⇒ (a), for which we suppose there exists
which completes the proof of the implication (b) ⇒ (a). Conversely, suppose S(R d ) is continuously embedded in L M (R d , λ). Thus, the identity mapping ι : λ) is continuous. Then, there exists a neighbourhood for some k ∈ N and δ > 0 such that ι maps U (0; k, δ) into the open unit ball of L M (R d , λ). Let (f n ) n∈N ⊆ S(R d ) be any sequence such that f n S k → 0. Then, (f n ) is eventually in U (0; k, δ) and thus (ιf n ) is eventually in the unit ball and so is bounded in L M (R d , λ). By [25,Proposition 4,p. 41], we have the continuity of ι in the semi-norm · S k , and thus we may extend ι by continuity to the completion of S(R d ) in this semi-norm. We thus obtain the integrability condition by observing that the C ∞ (R d ) mapping x → (1 + |x| 2 ) r has finite semi-norm · S k for r −k.
As in the proof of Theorem 4.1, an application of [21, Lemma 4.2 and Theorem 3.8] establishes the existence of the Lévy process Y in S(R d ).
Remark 4.5. In Kabanava [26], it is shown that a Radon measure ζ can be identified with a tempered distribution in S * (R d ) if and only if there is a real number r such that r is integrable over R d with respect to ζ. Our condition for the mapping J S in Theorem 4.4 is analogous. Remark 4.7. In a series of papers, for example, [4, 14 18, 19], Dalang, Humeau, Unser and co-authors have studied the Lévy white noise Z defined as a distribution. Here, Z is defined as a cylindrical random variable in D * (R d ), that is, a linear and continuous mapping Z : D(R d ) → L 0 (Ω, P ), with characteristic function for some constants p ∈ R and σ 2 ∈ R + and a Lévy measure ν 0 on R.
Let M be a Lévy-valued random measure on B b (R d ) with characteristics (γ, Σ, ν) and J D (t) the corresponding operator defined in (4.7) for t 0. By comparing the Lévy symbol in (4.6) with (4.12), it follows that, for fixed t 0, the mapping J D (t) is a Lévy white noise in the above sense, if and only if γ = p · leb, Σ = σ 2 · leb, ν = leb ⊗ ν 0 , for some p ∈ R, σ 2 ∈ R + and a Lévy measure ν 0 on R. It follows that M (t, A) D
= M (t, B) for any sets A, B ∈ B b (R d ) with leb(A) = leb(B). In this case, we call M stationary in space.
Dalang and Humeau have shown in [14] that a Lévy white noise in D * (R d ) with Lévy symbol (4.12) takes values in S * (R d ) P -a.s. if and only if This result is analogous to our Theorem 4.4. However, as Lévy-valued random measures are not necessarily stationary in space, our condition is more complex. For example, even in the pure Gaussian case with characteristics (0, Σ, 0), the measure Σ must be tempered; cf. Remark 4.5.
Regularity of the Lévy white noise Z in terms of Besov spaces is studied in [4]. Their results can be applied to a Lévy-valued random measure if it is additionally assumed to be stationary in space, that is, which can be considered as a Lévy white noise in the above sense. We illustrate such an application in the following example.
Example 4.8. Let M be the α-stable random measure, α ∈ (0, 2), described in Example 2.6. For simplicity we consider the symmetric case, that is, p = q = 1 2 . As the characteristics of M is given by (0, 0, leb ⊗ ν α ), it follows that M is stationary in space. Thus, for a fixed time t 0, the mapping J D (t) or, equivalently the random variable Y (t), where Y denotes the Lévy process derived in Theorem 4.1, can be considered as a Lévy white noise in D * (R d ); see Remark 4.7. Furthermore, since R (|y| ε ∧ |y| 2 ) ν α (dy) < ∞ for ε < α, we have that Y (t) is in S * (R d ) P -a.s. By applying the results from [4], we obtain the following: for p ∈ (0, 2) ∪ 2N ∪ {∞} and for all t 0, we have, almost surely: where B τ p (R d , ρ) is the weighted Besov space of integrability p, smoothness τ and asymptotic growth rate ρ. Furthermore, a modification of Y is a Lévy process in any Besov space satisfying (4.13), since its characteristic function is continuous in 0, guaranteeing stochastic continuity.
Cylindrical Lévy processes
The concept of cylindrical Lévy processes in Banach spaces is introduced in [2]. It naturally generalises the notation of cylindrical Brownian motion, based on the theory of cylindrical measures and cylindrical random variables. Here, a cylindrical random variable Z on a Banach space F is a linear and continuous mapping Z : In many cases, we will choose F = L p (O, ζ) for some p 1 and an arbitrary locally finite Borel measure ζ. In this case, F * = L p (O, ζ) for p := p p−1 . The characteristic function of a cylindrical Lévy process (L(t) : t 0) is given by for all t 0. Here, Ψ L : F * → C is called the (cylindrical) symbol of L, and is of the form where a : F * → R is a continuous mapping with a(0) = 0, the mapping Q : F * → F * * is a positive, symmetric operator and μ is a finitely additive measure on Z(F ) satisfying Here, Z(F ) is the algebra of all sets of the form {g ∈ F : ( g, f 1 , . . . , g, f n ) ∈ B} for some f 1 , . . . , f n ∈ F * , B ∈ B(R n \ {0}) and n ∈ N. We call (a, Q, μ) the (cylindrical) characteristics of L.
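To fix notation, the display below sketches the standard shape of the characteristic function and of the symbol Ψ_L in the cylindrical Lévy–Khintchine formula of Applebaum and Riedle; since the authors' own displays are not reproduced in the extracted text, the pairing ⟨g, f⟩ and the truncation by the interval [-1, 1] should be read as our assumptions about the convention.

% Hedged sketch of the characteristic function and cylindrical symbol of L.
\varphi_{L(t)}(f) \;=\; \mathbb{E}\bigl[\exp\bigl(i\,L(t)f\bigr)\bigr] \;=\; \exp\bigl(t\,\Psi_L(f)\bigr), \qquad f\in F^{*},
\Psi_L(f) \;=\; i\,a(f) \;-\; \tfrac{1}{2}\,\langle Qf,\,f\rangle
  \;+\; \int_{F} \Bigl( e^{i\langle g,\,f\rangle} - 1 - i\,\langle g,\,f\rangle\,\mathbf{1}_{[-1,1]}\bigl(\langle g,\,f\rangle\bigr) \Bigr)\, \mu(\mathrm{d}g).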
defines a cylindrical Lévy processes L in F . In this case, the characteristics (a, Q, μ) of L is given by , dy), Proof. Lemma 4.2 shows that L is a cylindrical Lévy process in F . The claimed characteristics follows from (4.6) after rearranging the terms accordingly.
The integration theory developed in [36] and briefly recalled in Section 4 guarantees that (5.1) is well defined for every f ∈ L M . However, in order to be in the framework of cylindrical Lévy processes, we need that the domain of L(t) is the dual of a Banach space (or alternatively of a nuclear space). Since the Musielak-Orlicz space L M is not in general the dual of a Banach space, for the hypothesis of Theorem 5.2 we require the existence of the Banach space F with F * continuously embedded in L M . If the control measure λ of M is finite on O, then Lemma 4.3 gives us that L 2 (O, λ) is continuously embedded in L M (O, λ). It is possible as illustrated in the following example to relax the condition on finiteness of λ, but also the same example shows that there are cases where the finiteness of λ is necessary for any L p space to be continuously embedded. Assume α ∈ (1, 2). Then Theorem 5.2 implies that (5.1) defines a cylindrical Lévy process L in F = L α (O, leb), and its symbol is given by We now turn to the question of which cylindrical Lévy processes induce Lévy-valued random measures. For this purpose, we introduce the following: Proof. If L is independently scattered, then Theorem 5.5 implies that L defines a Lévyvalued random measure M by (5.2). Denote the characteristics of M by (γ, Σ, ν) and its control measure by λ. For a simple function f of the form (4.1), we obtain For an arbitrary function f ∈ L p (O, ζ), let (f n ) n∈N be a sequence of simple functions converging to f both pointwise ζ-almost everywhere and in L p (O, ζ). We note that, as L(t)1 A = 0 whenever ζ(A) = 0, ζ-null sets have null λ-measure, and thus we have (t, dx). We obtain the stated form of the characteristic function of L by (4.6). Conversely, if the Lévy symbol is given by (5.3), then this form implies for any disjoint sets for all u 1 , . . . , u n ∈ R.
Applying Theorem 5.5 to a given cylindrical Lévy process L on L p (O, ζ) gives the corresponding Lévy-valued random measure M , say with control measure λ. The first part of the proof of Theorem 5.6 shows that L p (O, ζ) is a subspace of L M (O, λ). The following result guarantees that the embedding is continuous in non-degenerated cases. Let (f n ) be a sequence in L p (O, ζ) converging to f ∈ L p (O, ζ) and assume that ιf n converges to some g ∈ L M (O, λ). As lim n→∞ J(t)(ιf n ) = J(t)g and lim n→∞ L(t)f n = L(t)f = J(t)(ιf ), we derive J(t)(g − ιf ) = 0. Since J(t) is injective, we conclude g = ιf λ-a.e., and the closed graph theorem implies the continuity of ι.
where N is a Poisson random measure on R + × O × R with intensity leb ⊗ ζ ⊗ μ for a Lévy measure μ on B(R); see also [2,Example 3.6]. Since its symbol is given by Theorem 5.6 guarantees that L is independently scattered.
The Lévy measure of the Lévy process ((L(t)1 A , L(t)1 B ) : t 0) in R 2 is given by μ • π −1 1A,1B . As L(1)1 A and L(1)1 B are independent, it follows from the uniqueness of the characteristic functions that where μ • π −1 1A is the Lévy measure of (L(t)1 A : t 0) and μ • π −1 1B is the Lévy measure of (L(t)1 B : t 0). It follows in particular that On the other hand, [37,Lemma 4.2] implies where r k : R → R 2 is defined by r k (x) = ( 1 A , e k x, 1 B , e k x). It follows from (5.5) that which results in a contradiction.
Weak derivative of a Lévy-valued random measure
In this last section, we establish the relation between a Lévy-valued random measure and a Lévy-valued additive sheet. For this purpose, we introduce a stochastic integral of deterministic functions f : R d → R with respect to a Lévy-valued additive sheet. Instead of following the standard approach starting with simple functions and extending the integral operator by continuity, we utilise the correspondence between Lévy-valued additive sheets and Lévy valued random measures, established in Theorem 3.5, and refer to the integration for the latter developed in Rajput and Rosinksi [36] as presented in Section 4. For a Lévy-valued additive sheet (X(t, x) : t 0, x ∈ R d ), let M denote the corresponding Lévy-valued random measure on B b (R d ) with control measure λ. Then we define for all f ∈ L M (R d , λ), A ∈ B(R d ) and t 0: In other words, if we neglect the embedding by the operators I D and J D , we could interpret this result that M is the weak derivative of X. This is not very surprising, since, if we adapt notions from classical measure theory, the relation M (t, (0, x]) = X(t, x) derived in Theorem 3.5, can be seen that X is the cumulative distribution function of the random measure M . Proof. We show that, for each f ∈ D(O), the process (I D (t)f : t 0) has a càdlàg modification. First we consider a sequence (t n ) decreasing monotonically to some t 0. Let K be the support of f . Then, as (t n ) is bounded, there exists a C > 0 such that t n ∈ [t, t + C] for each n. The lamp property of X implies that X is bounded on the compact set [t, t + C] × K.
Thus, since X(t_n, x) converges to X(t, x) in probability for each x ∈ O, Lebesgue's dominated convergence theorem (for a stochastically convergent sequence) implies

To show (6.4), we use ideas from [14]. By the fundamental theorem of calculus, as f has compact support,
Analysis of full-text publication and publishing predictors of abstracts presented at an Italian public health meeting (2005–2007)
Background In Public Health, a thorough review of abstract quality evaluations and of the publication history of studies presented at scientific meetings has never been conducted. To analyse the long-term outcome of quality abstracts submitted to conferences of the Italian Society of Hygiene and Public Health (SItI) from 2005 to 2007, we conducted a secondary analysis of previously published material aiming to estimate the full-text publication rate of high-quality abstracts presented at Italian public health meetings and to identify predictors of full-text publication. Methods The search was undertaken through scientific databases and search engines and through the web sites of the major Italian journals of Public Health. For each publication confirmed as a full-text paper, the journal name, impact factor, year of publication, gender of the first author, type of study design, characteristics of the results and sample size were collected. Results The overall publication rate of the abstracts presented was 23.5 %; most of the papers were published in Public Health journals (average impact factor: 3.007). A non-university affiliation was associated with a lower probability of publication, while some of the conference topics, as well as presentation in poster form, predisposed the studies to an increased likelihood of publication. Conclusions The method presented in this study provides a good framework for the evaluation of the scientific evidence. The findings should be taken into consideration by scientific societies during the selection of contributions, with the aim of achieving a continuous improvement of work quality. In the future, it would be interesting to survey the abstract authors to identify reasons for unpublished data.
Background
In the international literature, few medical scientific societies and associations have performed quality evaluations of the studies presented at their scientific meetings, although some studies have investigated distinct aspects, such as positive-outcome or institutional bias, associated with acceptance at scientific meetings [1][2][3][4]; however, none of these associations, in Italy or in other countries, is a public health organization [5][6][7][8]. Moreover, most of the available papers, in addition to assessing abstract quality, analyse the long-term outcome and publication history of the works presented at congresses or conferences, with the final aim of identifying factors predicting full publication [9][10][11][12][13][14][15][16][17][18][19][20][21][22]. Even this type of qualitative analysis has never been conducted in the Public Health field.
Moreover, some elements of the scientific data selection process remained unclear. Thus, with the aim of improving the understanding of the pathway of scientific data from congress documents to scientific evidence, some systematic reviews have been conducted [23][24][25].
Von Elm et al. [25] concluded that approximately one-third of abstracts submitted to biomedical meetings are eventually published as full reports. They identified factors that possibly play a role in subsequent publication: abstracts reporting a positive study outcome, abstracts reporting basic research, abstracts presented at meetings with a selected number of participants, and abstracts submitted to United States meetings. Using survival-type analysis, they estimated that 27 % were published after 2, 41 % after 4 and 44 % after 6 years.
In Italy, there are few studies of this issue and none in the Public Health field. Vecchi et al. [26] focused on the abstracts' results and their association with the full publication of contributions presented at the Annual Meeting of College on Problems of Drug Dependence; they concluded that 62 % of the abstracts were subsequently published in peer reviewed journals and that studies with positive findings were more likely to be published.
Considering these data, there is a clear need to provide public health professionals with an objective analysis of the potential and achievements of the evidence discussed during a public health meeting.
The Congress of the Italian Society of Hygiene (SItI) appears to be, in the Italian context, an essential moment at which scientific knowledge is made available to the scientific community, an opportunity for participants to gain experience and an important step in scientific progress. Indeed, these events promote and facilitate collaboration between research groups, and the results obtained from the Congress works are often used in decision making by all Public Health professionals.
Considering the important role played by these conferences in the dissemination of knowledge, in recent years, there has emerged a strong need to submit all the contributions sent as oral communication or posters to an evaluation process, with the aim of analysing the main characteristics and quality of work accepted and then published in the Abstract Books from 2005 to 2010. Castaldi et al. [27] developed an evaluation tool, and the results showed that the average score among all the abstracts reviewed was good. Oral communications showed an average score higher than posters, and according to the affiliation, the highest scores were associated with Universities.
Starting from the results presented by the study mentioned above [27], we deepened our analysis to examine more specifically the long-term outcome of good-quality abstracts submitted to SItI conferences over a 3-year period (from 2005 to 2007).
Our main objectives were to estimate the full-text publication rate of high-quality abstracts presented at Italian public health meetings and to identify predictors of full-text publication.
Methods
During a previous study [27], a total of 4399 abstracts presented at SItI congresses or conferences from 2005 to 2010 were analysed. As reported in that article, the reviewers were 11 students from the Postgraduate School in Public Health of the Universities of Turin and Milan, under the supervision of their two School Directors. The amount of agreement within the eight individual criteria of the evaluation checklist was measured by the Intraclass Correlation Coefficient (ICC) [27].
The evaluation used eight items related to coherency, structure, originality of the study, definition of study objectives, definition of the type of study, description of data sources, description of results, and conclusions, discussion and practical implications of the study.
For each item, the researcher could assign marks from 0 to 3, so the maximum total marks for each form was 24.
Among all abstracts, only those evaluated as "good quality works" were selected for the present study (N = 621). This group includes not only papers with a total score equal to or greater than 19 but also papers with a lower score (between 16 and 18) that scored well on all the items analysed but were not evaluated on one specific item (the "Inherence" item) because they belonged to the miscellaneous topic group. The categories of topics were identified according to the congress session groups, when available. If the themes of the sessions were not available (i.e., for the abstracts accepted as posters in 2005), we classified the abstracts by manual review according to the congress sessions of the other years. Following this strategy we identified the following categories: Food and Nutrition; Health Education; Organization; Vaccines; Epidemiology of Infective Diseases; Epidemiology of Chronic Degenerative Diseases; Environment; Hospital Hygiene; Miscellaneous; Dental Hygiene. Abstracts relating to subjects not attributable to the previous specific groups were placed in the 'Miscellaneous' group.
After a pilot study, the publication history of each abstract presented at the 2005, 2006 and 2007 meetings was determined in July 2012, allowing at least a 5-year follow-up. The search was undertaken through PubMed, MEDLINE, the Cochrane Library and the web sites of the major Italian journals of Public Health and Hygiene, with no language restrictions. In order to find further papers not indexed in these databases, we decided to include Google Scholar in our search strategy, despite its relatively limited scientific value.
The first search criterion was the combination of the first author's name and keywords available in the title or abstract. When this search strategy did not identify any publications, to minimise errors in the follow-up, various combinations of words taken from the title and abstract, keywords and author names were tested.
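The stepwise construction of these search strings can be illustrated with a short, hedged sketch. The code below is not from the study (which reports no code); the field-tag syntax follows PubMed conventions, and the author name and keywords are placeholder assumptions.

```python
def candidate_queries(first_author: str, title_words: list[str], keywords: list[str]) -> list[str]:
    """Build the primary query (author + title words) and fallback author+keyword queries."""
    queries = [f'{first_author}[Author] AND ' + " AND ".join(title_words)]
    # Fallback combinations, tested only when the primary query finds nothing
    queries += [f'{first_author}[Author] AND {kw}' for kw in keywords]
    return queries

# Hypothetical abstract record, for illustration only
for q in candidate_queries("Rossi", ["vaccination", "coverage"], ["influenza", "elderly"]):
    print(q)
```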
The abstract was considered "published" if at least one author of the abstract was an author of the full publication and the main outcome from the abstract was an outcome in the full manuscript. A change in the sample size, the title or the name or order of some authors or minor changes to the objectives was not considered as a de novo study, whereas manuscripts describing different endpoints were considered as such.
For each publication confirmed as a full-text paper, the journal name, its impact factor and the year of publication were collected.
In the case of abstracts published more than once, we used the earliest publication. Abstracts published in full before the presentation at the Conference were excluded.
In addition, for each abstract in the sample, the following information was collected: gender of the first author (through web search engines), type of study design (experimental, observational descriptive, observational analytical, review), characteristics of the results (positive or negative) and sample size (n ≤ 100 or n > 100).
All analyses were performed using STATA-MP 11 software. We performed a descriptive statistical analysis to describe the publication history and the main characteristics of the sample.
All the abstract characteristics that were available in the conference databases were included, in particular: affiliation, topics, year, abstract, geographic area, first author gender, study design, results, sample size, and total score. No preliminary selection of the included characteristics was performed.
Then, a univariate logistic regression analysis was performed to test the strength of the hypothesised associations and, finally, the variables associated with a positive publication outcome (accepted level of statistical significance: p < 0.25, as suggested by Hosmer and Lemeshow) were included in a multivariate model, with the aim of identifying possible factors predicting publication and of removing any confounders [28]. We included the following variables: affiliation, topics, abstract, first author gender, study design, results, total score.
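As a rough illustration of this two-step strategy, the sketch below screens candidate predictors with univariate logistic regressions at p < 0.25 and then fits a multivariable model reporting adjusted odds ratios. It is only a schematic reconstruction in Python (the authors used STATA-MP 11), and the synthetic data and variable names are assumptions, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 621  # number of good-quality abstracts analysed in the study

# Synthetic, illustrative data only
university = rng.integers(0, 2, n)
female_first = rng.integers(0, 2, n)
positive_results = rng.integers(0, 2, n)
quality_score = rng.integers(16, 25, n)
logit = -8 + 1.2 * university + 1.0 * positive_results + 0.35 * quality_score
published = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({
    "published": published,
    "university_affil": university,
    "female_first": female_first,
    "positive_results": positive_results,
    "quality_score": quality_score,
})
candidates = ["university_affil", "female_first", "positive_results", "quality_score"]

# Step 1: univariate screening at p < 0.25
retained = []
for var in candidates:
    X = sm.add_constant(df[[var]])
    if sm.Logit(df["published"], X).fit(disp=0).pvalues[var] < 0.25:
        retained.append(var)

# Step 2: multivariable model with the retained predictors; adjusted ORs with 95% CI
X = sm.add_constant(df[retained])
fit = sm.Logit(df["published"], X).fit(disp=0)
table = pd.concat([np.exp(fit.params).rename("adjusted OR"),
                   np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})], axis=1)
print(table.drop(index="const"))
```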
Results
Among the abstracts accepted from 2005 to 2007 by the SItI for its annual conferences, 621 (31.6 %) met the main inclusion criteria of the study and were included in the analysis.
The main descriptive results are shown in Table 1. The most frequent affiliation was University (68 %), followed by non-university hospitals (15 %). Although the works were all selected for their good quality, it was decided to split them into three groups according to the total score previously achieved.
Thirty per cent of the papers reached a score between 16 and 18 (medium quality); 52.5 % were high quality works that had a score between 19 and 21, while only 17.5 % could be defined as very high quality works with a score between 22 and 24.
By considering the main outcome of the study (Table 2), it can be noted that the overall publication rate of the abstracts presented is 23.5 % and that most of the papers were published in Public Health journals (53.4 %). Among all the journals, 63 % were peer reviewed, and the impact factor ranged from 0.441 to 6.600, with an average value of 3.007.
The average time between presentation at the SItI conferences and full-text publication was 2.1 years. Table 3 shows the characteristics of the papers published in full according to the variables most often cited as predictive of publication [16,17,19,28,29]. University affiliation was more strongly associated with full-text publication, as were some conference topics (Vaccines, 48.6 %; Chronic diseases, 37 %).
The study design (33.3 % of the experimental studies and 25.8 % of the observational analytical ones) and the characteristics of the results (9.8 % of the studies with negative results and 26.9 % of those with positive results) seem to be associated with the likelihood of being published.
Furthermore, a higher quality score assigned during the evaluation phase was associated with subsequent publication. All these associations were statistically significant (p < 0.005). In contrast, the higher publication rate among female first authors (26.4 vs 20.7 %; p = 0.216) and the differences by sample size (24.7 % of studies with n > 100 were published in extenso compared with 21.4 % of studies with n ≤ 100; p = 0.342) were not statistically significant.
With the aim of testing the strength of all these associations, we carried out univariate logistic regressions. Through this type of analysis, we investigated the association between each single variable and the main outcome of the study: publication of the works in extenso.
The variables associated with the publication outcome at the accepted level of statistical significance (p < 0.25, as suggested by Hosmer and Lemeshow) were then entered into the multivariate model. The results are shown in Table 4.
A non-university hospital affiliation, compared with a university affiliation, was associated with a lower probability of publication. This finding did not change in the multivariate analysis, with an adjusted odds ratio of 0.09 (p < 0.001).
Moreover, the analysis revealed some topics that predisposed the studies to a statistically significant increase in the likelihood of publication, such as Dental hygiene (OR 10.52, although only 7 abstracts related to this topic) and Vaccines (OR 3.45).
A female first author was associated with an increased likelihood of publication (adjusted OR 1.31), but this association was not statistically significant (p = 0.212).
Similarly, regarding study typology, experimental designs showed an advantage over descriptive observational studies (adjusted OR 0.74), but the statistical significance found in the univariate analysis (p = 0.011) was not confirmed in the multivariate one.
Regarding the abstract quality score, a positive trend emerges: a high evaluation score means there is a higher probability the work will be published in extenso (p = 0.003).
Discussion
As the SItI conferences represent a fundamental moment in the Italian public health field, we believe that an evaluation of potential predictors of publication in the international literature of the studies presented at these meetings can represent a useful starting point for suggesting improvements.
In regard to the publication rate, the analysis showed that 23.5 % of the high-quality abstracts presented at SItI conferences were subsequently published in the literature. This value is lower than in other studies: for example, Winnik et al. [19] reported a publication rate of 38 %, and Raptis et al. [20] a rate of 40 %. However, our value is similar to the rates reported by Yoon et al. [16] (30 %) and Chand et al. [18] (30 %). The study of Chand et al. [18], for example, retrieved all abstracts from the Scientific Meetings of the Cardiac Society of Australia and New Zealand from 1999 to 2005. Only 30 % of the 2172 abstracts were followed by publication of a full-text article, and most publications appeared within 1 (61 %) or 2 years (84 %).
Such diversity could be related to differences in study designs. In the clinical field, randomised clinical trials are more frequent and are subsequently published more easily than observational studies. For example, in the surgical field, Raptis et al. [20] conducted an evaluation of the peer review process of the European Surgical Association from 2002 to 2007. Approximately one-third of the contributions were accepted for presentation at the annual meetings and, of those, 40 % were published in Annals of Surgery. Consistently with the previous hypothesis, the authors found only two independent factors promoting subsequent publication: a randomised controlled trial design and a sample size of more than 100 patients. In our opinion, other good-quality abstracts do not reach publication for logistical or qualitative reasons. Logistical reasons include: (1) lack of time for the preparation of a full manuscript (e.g., for professionals employed in non-university hospitals); (2) loss of confidence when results are not clinically or statistically significant [22,29]; and (3) similar findings having already been published by others. Qualitative reasons include inadequate study design, methodology or writing style, including language barriers, which may prevent the work from surviving the peer review process.
Our results differ from those of Gorman et al. [11], who concluded that only 36 % of abstracts presented at Toxicology Meetings were published in peer-reviewed journals.
Regarding the overall mean impact factor, the Yoon et al. study [16] reported a value of 2.90 for published research; the overall publication rate in that study was relatively low compared not only with other urological meetings held in America and Europe but also with the SItI conferences.
Conversely, Winnik et al. [19] indicated that the works presented to the European Society of Cardiology Congress reached very high impact factor values: approximately 40 % of the abstracts were placed in journals with an impact factor above 5. In this case, however, the types of works presented included randomised clinical trials, meta-analyses and systematic reviews, which are almost absent in our sample; moreover, public health journals usually have a lower impact factor than clinical ones.
The distribution of time to publication for abstracts was consistent with previous studies of publication, occurring within 2-3 years [16,17,30,31].
The analyses revealed a significant disadvantage for non-university-affiliated institutions. The reasons behind this difference may lie in a greater willingness and ability of academic professionals, compared with hospital professionals, to conduct and direct the various steps that lead from abstract to publication. It must be noted that this result is in agreement with the conclusions reached by other authors [19,32]. Winnik et al. [19], for example, performed a 4-year follow-up of the abstracts submitted to the European Society of Cardiology Congress in 2006 in order to identify factors predicting high-quality research. They found that 38 % of all accepted studies were subsequently published and that an academic affiliation and a prospective study design were associated with full-text publication.
Moreover, the analysis shows that certain conference topics predispose the studies to an increased likelihood of publication. This result can be partially explained by the fact that both topics (Dental hygiene and Vaccines) are, on the one hand, more often the subject of clinical trials and, on the other, not strictly related to national settings.
Regarding oral presentation, most authors did not analyse this item [5,18], either because of the study design [10,11,13] or because they were not able to distinguish whether the study was presented as a poster or a podium presentation [16]. Winnik [19] analysed the abstract presentation type but did not find any statistical correlation. In other studies, Krzyzanowska [9] found that works with an oral or plenary presentation were published sooner than those not presented orally (p = 0.002), and Schnatz [17] reported that the average time to publication was 1.7 ± 1.3 years for oral presentations and 2.0 ± 1.5 years for poster presentations (P = 0.241). The publication rate of oral presentations was significantly higher than that of poster presentations (57.7 vs 36.5 %; P < 0.003).
We may assume that the research that is presented orally may be judged by the reviewers as having greater interest and clinical relevance along with more sound methodology and better results.
In the literature, few authors have analysed how gender could affect the success of authors submitting posters or abstracts [19].
Interestingly, the rate of full-text publication for male authors seemed lower than for their female colleagues (20.7 vs. 26.4 %), but in the multivariate analysis gender was not a statistically significant predictor of full-text publication.
Our results differ from those of Winnik et al. [19] in that, in the cardiology field, the female gender was identified as a factor that negatively affects scientific success.
Of course, all of the above findings should be interpreted cautiously and considered exploratory. The importance of understanding the role of gender in research is critical and certainly requires further consideration.
No statistically significant differences were identified regarding the study designs of the abstracts included in our analysis. This result is quite interesting considering the peculiarities of the public health field, where published papers often do not report numerical data but instead address organizational perspectives or policy discussions.
Abstracts claiming results that are positive and consistent with the objectives are more likely to be published (adjusted OR 3.43). This result might suggest that scientific journals tend to prefer works with positive results, or that authors themselves are inclined to send such works to editors, making an a priori selection and focusing on more appealing studies. These types of behaviour certainly promote publication bias.
Regarding the abstract quality score, a positive trend emerges: with the score increasing, there is a higher probability that the work is published in extenso. This result shows that the evaluation method applied has a high degree of agreement with the scientific journal editors' opinions and judgments.
Limitations and further studies
This study has some limitations that deserve discussion. First of all, our search algorithm could potentially miss papers published in journals not listed on Medline. It is known that Medline lists up to 80 % of the total journal articles published worldwide [33]. Moreover, if the authors, the title or the hypothesis of the study were substantially modified during the process of editing and supplementing the data, our algorithm may not have detected the article in our search. However, we tried to limit this phenomenon by performing very thorough searches.
A potential limitation is the choice to include only high-quality abstracts in our analysis. However, we declared this selection strategy as the main inclusion criterion in the aims and methods sections of the study.
Conclusions
Authors have an ethical obligation to strive to disseminate their original findings through scientific publication, thereby improving the quality of scientific research. Once available to the public and to other health professionals, this research can be followed up and implemented in the best interest of the patient [34].
To make a useful and precise selection, it is necessary to know the main features related to the publication, and the data presented in this study provide a good framework.
It would be interesting, through further research, to survey the abstract authors to identify reasons for unpublished data and to learn what percentage is due to logistical versus qualitative reasons. As part of that follow-up, analysis of funding type, the country from which the research originated, pharmaceutical company involvement or support, clinical versus laboratory studies or other potential biases for publication could be assessed to evaluate whether they affect either the likelihood of or time to publication. Insight into reasons for delays and the number of submissions until publication would also be informative.
Impact of preexisting interstitial lung disease on mortality in COVID-19 patients from the early pandemic to the delta variant epidemic: a nationwide population-based study
Background COVID-19 patients with preexisting interstitial lung disease (ILD) were reported to have a high mortality rate; however, this was based on data from the early stages of the pandemic. It is uncertain how their mortality rates have changed with the emergence of new variants of concern as well as the development of COVID-19 vaccines and treatments. It is also unclear whether having ILD still poses a risk factor for mortality. As COVID-19 continues to be a major concern, further research on COVID-19 patients with preexisting ILD is necessary. Methods We extracted data on COVID-19 patients between January 2020–August 2021 from a Japanese nationwide insurance claims database and divided them into those with and without preexisting ILD. We investigated all-cause mortality of COVID-19 patients with preexisting ILD in wild-type-, alpha-, and delta-predominant waves, to determine whether preexisting ILD was associated with increased mortality. Results Of the 937,758 adult COVID-19 patients, 7,333 (0.8%) had preexisting ILD. The proportion of all COVID-19 patients who had preexisting ILD in the wild-type-, alpha-, and delta-predominant waves was 1.2%, 0.8%, and 0.3%, respectively, and their 60-day mortality was 16.0%, 14.6%, and 7.5%, respectively. The 60-day mortality significantly decreased from the alpha-predominant to delta-predominant waves (difference − 7.1%, 95% confidence intervals (CI) − 9.3% to − 4.9%). In multivariable analysis, preexisting ILD was independently associated with increased mortality in all waves with the wild-type-predominant, odds ratio (OR) 2.10, 95% CI 1.91–2.30, the alpha-predominant wave, OR 2.14, 95% CI 1.84–2.50, and the delta-predominant wave, OR 2.10, 95%CI 1.66–2.66. Conclusions All-cause mortality rates for COVID-19 patients with preexisting ILD decreased from the wild-type- to the more recent delta-predominant waves. However, these patients were consistently at higher mortality risk than those without preexisting ILD. We emphasize that careful attention should be given to patients with preexisting ILD despite the change in the COVID-19 environment. Supplementary Information The online version contains supplementary material available at 10.1186/s12931-024-02723-3.
Introduction
Coronavirus Disease 2019 (COVID-19) is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). By May 2023, 760 million people had been infected with COVID-19 globally, with 6.9 million deaths [1]. Over this period, variants such as alpha, beta, gamma, delta, and omicron emerged, which the World Health Organization (WHO) designated as variants of concern (VOCs) [2]. Furthermore, the COVID-19 environment is changing due to the dissemination of vaccines and the development of therapeutic agents. We have reported previously that the clinical characteristics of COVID-19 patients changed, with decreasing mortality from the early pandemic to the delta variant epidemic [3]. We must continue to update the relevant evidence to improve COVID-19 management.
Preexisting interstitial lung disease (ILD) is a risk factor for mortality in COVID-19 patients, which is reported to range from 12 to 49% [4][5][6][7][8][9] and is higher than in patients without ILD [4][5][6]. However, these studies were conducted in the early pandemic, and it is unclear how the mortality of COVID-19 patients with preexisting ILD has changed since VOCs became prevalent and vaccines and COVID-19 therapies became available. Given that COVID-19 is still a major problem, further research on COVID-19 patients with preexisting ILD is needed.
The National Database of Health Insurance Claims and Specific Health Checkups of Japan (NDB) is one of the biggest medical databases in the world, covering most Japanese claims data [10]. We used this database to investigate the changes in the clinical characteristics and all-cause mortality of COVID-19 patients with preexisting ILD from the early pandemic to the delta variant epidemic. We also sought to clarify whether preexisting ILD posed an increased risk of all-cause mortality after COVID-19 diagnosis during each epidemic.
Dataset and waves
The NDB covers > 126 million people and 1.9 billion claims annually, including > 99% of Japanese inpatient and outpatient claims data [10]. This database contains information on age, sex, diseases based on the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10), prescribed drugs and medical procedures covered by insurance, and mortality. It does not include information on smoking history, vaccinations, laboratory/physiological findings, and drugs not covered by insurance. We extracted anonymized information on adult patients with a confirmed diagnosis of COVID-19 between January 2020 and August 2021. During this period, the definitive diagnosis of COVID-19 in Japan was made mainly through nucleic acid amplification (e.g., reverse-transcription polymerase chain reaction) or antigen testing. In this study, COVID-19 patients were divided into those who already had underlying ILD before the onset of COVID-19 (preexisting ILD group) and those who did not (non-ILD group). The ICD-10 codes for any ILD, regardless of etiology, and for the ILDs prespecified for this study (such as idiopathic pulmonary fibrosis [IPF], rheumatoid arthritis-associated ILD [RA-ILD], systemic lupus erythematosus-associated ILD [SLE-ILD], pulmonary sarcoidosis, etc. [11]) are listed in Additional file: Table S1. For patients' pre-COVID-19 comorbidities, information on cerebrovascular disease [12], malignancy [13], renal disease [14], congestive heart failure [12], liver disease [15], and diabetes mellitus [16] was extracted (Additional file: Table S2), together with information on the use of long-term oxygen therapy (LTOT) before COVID-19 diagnosis. For COVID-19 treatment, information on drugs, including corticosteroids, tocilizumab, baricitinib, and heparin, and on respiratory supportive care within 60 days of COVID-19 diagnosis, including oxygen therapy, high-flow nasal cannula, mechanical ventilation, and extracorporeal membrane oxygenation, was extracted. Death was defined as all-cause death within 60 days of COVID-19 diagnosis.
This database does not include information on the SARS-CoV-2 variants confirmed in each patient. As mentioned in our previous reports [3], based on the survey of the variants detected in Tokyo, Japan [17], when the detection rate of a VOC exceeded 50% of the tests performed, it was defined as the predominant VOC. The waves of the study period were (1) wild-type-predominant, from January 01, 2020 to April 18, 2021; (2) alpha-predominant, from April 19, 2021 to July 18, 2021; and (3) delta-predominant, from July 19, 2021 to August 31, 2021.
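As a small illustration, the wave assignment implied by these calendar cut-offs can be expressed as a date lookup. This helper is an assumption for exposition only, not code from the study (which was analysed in SAS).

```python
from datetime import date

def predominant_wave(diagnosis: date) -> str:
    """Map a COVID-19 diagnosis date onto the study's predominant-variant waves."""
    if diagnosis < date(2020, 1, 1) or diagnosis > date(2021, 8, 31):
        raise ValueError("diagnosis date outside the study period")
    if diagnosis <= date(2021, 4, 18):
        return "wild-type-predominant"
    if diagnosis <= date(2021, 7, 18):
        return "alpha-predominant"
    return "delta-predominant"

print(predominant_wave(date(2021, 5, 3)))  # -> alpha-predominant
```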
Statistical analysis
Categorical variables are expressed as number (%). To compare proportions between waves, the differences and corresponding 95% confidence intervals (CI) were calculated using the Wald-test based method. We used multivariate logistic regression models adjusted for age, sex, with/without wave, and comorbidities to explore the association of preexisting ILD with all-cause mortality. The odds ratio (OR) and 95% CI were also calculated. The multicollinearity between variables was checked. A P-value of < 0.05 was considered statistically significant. However, due to the large sample size in this study, absolute standardized differences (ASDs) were presented to enable us to assess differences in the baseline characteristic variables between two groups. When the ASD was < 0.1, the variables between the two groups were taken as approximately equivalent, even if the P-value was significant. All data were analyzed using SAS software, version 9.4 (SAS Institute Inc., NC, USA).
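For illustration, the absolute standardized difference for a binary characteristic can be computed as below. This is a hedged sketch in Python rather than the SAS code actually used; the example percentages are taken from the text, and small discrepancies from the reported ASD arise from rounding.

```python
import math

def asd_binary(p1: float, p2: float) -> float:
    """Absolute standardized difference between two group proportions."""
    pooled_var = (p1 * (1 - p1) + p2 * (1 - p2)) / 2
    return abs(p1 - p2) / math.sqrt(pooled_var)

# LTOT before COVID-19 diagnosis: 7.7% (ILD group) vs 0.2% (non-ILD group)
print(round(asd_binary(0.077, 0.002), 2))  # close to the reported ASD of 0.40
```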
Patient characteristic and mortality
A total of 937,758 adult COVID-19 patients were identified. Of these, 7,333 (0.8%) had preexisting ILD and 930,425 (99.2%) did not. The clinical characteristics of the groups are shown in Table 1. Patients in the preexisting ILD group were significantly older than those in the non-ILD group (median age category: 70-74 years and 40-44 years, respectively; ASD 1.52). The proportion of patients who had received LTOT before the COVID-19 diagnosis was also higher in the preexisting ILD group than in the non-ILD group (7.7% and 0.2%, respectively; ASD 0.40).
The proportion of patients who were treated with corticosteroids was higher in the preexisting ILD group than in the non-ILD group (41.5% and 14.4%, respectively; ASD 0.63). A higher proportion of patients in the preexisting ILD group than in the non-ILD group received oxygen therapy (44.2% and 10.5%, respectively; ASD 0.81), high-flow nasal cannula (HFNC) (6.0% and 1.1%, respectively; ASD 0.27), and mechanical ventilation (6.9% and 1.4%, respectively; ASD 0.28). The 60-day mortality was higher in the preexisting ILD group than in the non-ILD group (14.2% and 1.7%, respectively; ASD 0.48).
Changes in respiratory supportive care and mortality are shown in Fig. 2 and Table 2. As the wave shifted from the wild-type- to the alpha- and delta-predominant waves, the proportions of patients receiving oxygen therapy and mechanical ventilation decreased. The 60-day mortality rates in the wild-type-, alpha-, and delta-predominant waves were 16.0%, 14.6%, and 7.5%, respectively. The mortality rates decreased significantly from the alpha-predominant to the delta-predominant waves (difference − 7.1%, 95% CI − 9.3% to − 4.9%). There was also a decrease in the use of respiratory supportive care and 60-day mortality in the non-ILD group (Additional file: Table S4).
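A minimal sketch of the Wald-based interval used for such comparisons is given below (the study itself used SAS). The mortality rates are those reported in the text, but the group sizes are placeholder assumptions, so the interval will not exactly reproduce the reported value of −7.1% (95% CI −9.3% to −4.9%).

```python
import math

def wald_ci_diff(p1: float, n1: int, p2: float, n2: int, z: float = 1.96):
    """Difference p1 - p2 with a Wald 95% confidence interval."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Delta- vs alpha-predominant wave, ILD group (rates from the text, sample sizes hypothetical)
diff, lo, hi = wald_ci_diff(0.075, 1500, 0.146, 2000)
print(f"difference {diff:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```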
The results of the multivariate analysis by etiology of preexisting ILD are shown in Fig. 4. Over the total period, preexisting ILD of all etiologies was consistently associated with increased mortality. The OR was particularly high in IPF (OR 3.38, 95% CI 2.51-4.56) relative to ILD
Discussion
This is the first study to investigate the changes in characteristics and all-cause mortality of patients with COVID-19 who had underlying ILD from the early pandemic to the delta variant epidemic using a large-scale database.
As the waves evolved, the number of patients with preexisting ILD and their proportion among all patients with COVID-19 decreased. In the preexisting ILD group, the number and proportion of elderly patients, and of patients who required oxygen therapy, HFNC, and mechanical ventilation, also decreased. Furthermore, the number of deaths and all-cause mortality rates within 60 days of COVID-19 diagnosis also decreased. However, in all waves, having preexisting ILD was consistently associated with a higher mortality than not having an ILD. In this study, even as the waves shifted from the wild-type- to the alpha- and delta-predominant waves, the overall number of patients with COVID-19 remained high (approximately 360,000, 200,000, and 370,000, respectively), while the number of patients with preexisting ILD and their proportion among all patients decreased markedly. Since the COVID-19 vaccine was not widely available during the alpha-predominant wave in Japan, the decrease in the number and proportion of patients with preexisting ILD from the wild-type- to the alpha-predominant waves can be assumed to be mainly due to the patients' efforts to prevent infection. For example, vulnerable patients at high risk of severe disease or death may have stayed indoors or practiced strict social distancing. However, the similar decrease from the alpha- to delta-predominant waves may also be due to widespread vaccination. In Japan, the vaccination program started in the middle of the alpha-predominant wave, giving priority to patients aged ≥ 65 years or with a comorbidity. During the delta-predominant wave, the second vaccination coverage was about 20% for those aged < 65 years and about 90% for those aged ≥ 65 years [18]. As shown in Table 2, from the alpha- to delta-predominant waves, the number of patients with preexisting ILD aged < 65 years did not decrease, while the number of those aged ≥ 65 years decreased significantly.
Fig. 2 Respiratory support care and mortality in COVID-19 patients with preexisting interstitial lung disease by wave. Among the COVID-19 patients with preexisting interstitial lung disease in each wave, the proportion of patients who required oxygen therapy was 48.2%, 44.7%, and 29.4% in the wild-type-, alpha-, and delta-predominant wave, respectively. The proportion of patients who required high-flow nasal cannula was 5.7%, 8.0%, and 4.5%, respectively. The proportion of patients who required mechanical ventilation was 8.6%, 6.2%, and 1.8%, respectively. The proportion of deaths in COVID-19 patients with interstitial lung disease was 16.0%, 14.6%, and 7.5%, respectively. Wild-type-predominant wave, January 01, 2020-April 18, 2021.

The 30-day mortality rate for patients with COVID-19 and preexisting ILD was reported to be 25.2% in a study by Gallay et al. [7] and 13.4% between January and June 2020 in a Korean study using nationwide data [6]; both studies were based on data from the early pandemic period. The present study showed that the 60-day mortality rates in the wild-type- and alpha-predominant waves were 16.0% and 14.6%, respectively. Although the delta variant was considered to be as virulent as the alpha variant [19], we found that in the delta-predominant wave, the number and proportion of patients requiring respiratory supportive care, including oxygen therapy, HFNC, and mechanical ventilation, and the 60-day mortality rate (7.5%) decreased significantly in patients with preexisting ILD. Similar decreases were observed in the non-ILD group. The exact reason for this decline is not clear, but it may be associated with the availability of vaccines and the development of COVID-19 therapies. The number of infected elderly patients, at high risk of severe disease or mortality, decreased during the period of high coverage of the second vaccination. In addition, even when infected, vaccination may have reduced the risk of severe outcomes. Before the delta-predominant wave, therapeutic regimens with dexamethasone, baricitinib, and remdesivir were developed [20][21][22][23], and casirivimab/imdevimab became available in Japan during the delta-predominant wave [24,25]. These improvements may have contributed to a decrease in mortality in patients with and without ILD.
Studies in the early pandemic period reported that COVID-19 patients who had preexisting ILD had a higher risk of mortality than those without ILD [4][5][6]. However, it is unclear whether this remained true during the VOC epidemics after the early pandemic period. This study found that although 60-day mortality in patients with preexisting ILD decreased as the wave shifted, having a preexisting ILD was consistently associated with increased mortality in the wild-type-, alpha- and delta-predominant waves. This suggests that regardless of changes in prevalent variants, widespread vaccination, and the development of treatments, patients with preexisting ILD are at high mortality risk for COVID-19, and we should be vigilant when managing these patients in clinical practice.
Among the etiologies of preexisting ILDs, the 60-day mortality rate of patients who had IPF was the highest in any wave. While the rate in patients with other ILDs, including RA-ILD, SLE-ILD, and pulmonary sarcoidosis, decreased significantly from the alpha- to delta-predominant waves, the 60-day mortality of patients with IPF did not decrease significantly. Furthermore, having IPF was independently associated with increased mortality in all waves, compared to those without preexisting ILD. Therefore, thorough preventive measures, including vaccination, should be taken by ILD patients, and early and aggressive treatment should be initiated if infected, especially in patients with IPF.
This study had several limitations. First, the database does not include SARS-CoV-2 variant information for individual patients. Second, the NDB does not include the vaccination history of each patient. Third, the inclusion criteria for patients with ILD were based on ICD-10 codes, so a misclassification of the diagnosis of each ILD type might be present. Fourth, the NDB does not include data on cause of death. As it was not feasible to distinguish whether deaths within 60 days of COVID-19 diagnosis in patients with preexisting ILD were due to COVID-19, ILD, or other causes, all-cause mortality was reported in this study. Fifth, the data were derived from patients diagnosed up to the delta-predominant wave, and further studies are required to understand the patterns of the omicron-predominant wave.
In conclusion, the clinical characteristics of COVID-19 in patients with preexisting ILD changed from the early pandemic to the delta-predominant wave, including a decrease in the 60-day mortality. However, compared to those without, COVID-19 patients with preexisting ILD were consistently at higher risk of all-cause mortality. We emphasize that careful attention should be given to patients with preexisting ILD despite the change in the COVID-19 environment.
Data are presented as number (%) a Wild-type-predominant wave, January 01, 2020-April 18, 2021; alpha-predominant wave, April 19, 2021-July 18, 2021; delta-predominant wave, July 19, 2021-August 31, 2021 b Earlier wave was used as reference c Corticosteroids newly administered within 60 days of COVID-19 diagnosis or corticosteroid dosage increased within 60 days of diagnosis in patients who had been using corticosteroids prior to COVID-19 diagnosis d Corticosteroid use equivalent to 500 mg or more of methylprednisolone at least once within 60 days of COVID-19 diagnosis e Median age category CI, confidence interval; LTOT, long term oxygen therapy
Table 1
Patient characteristics Data are presented as median age category or number (%) a Langerhans cell histiocytosis, lymphangioleiomyomatosis, radiation pneumonitis, eosinophilic pneumonia, granulomatosis with polyangiitis-associated ILD, eosinophilic granulomatosis with polyangiitis-associated ILD, mixed connective tissue disease-associated ILD, idiopathic interstitial pneumonias other than idiopathic pulmonary fibrosis, and unspecified ILD b Corticosteroids newly administered within 60 days of COVID-19 diagnosis or corticosteroid dosage increased within 60 days of COVID-19 diagnosis in patients on corticosteroids before diagnosis c The use of corticosteroids equivalent to 500 mg or more of methylprednisolone at least once within 60 days of COVID-19 diagnosis
Table 2
Characteristics of COVID-19 patients with preexisting interstitial lung disease by wave
Informed consent in the era of biobanks
Biorepositories collecting human specimens and health information have proliferated in recent years. Efforts to set a range of policies related to biorepositories, including those related to procedures for obtaining informed consent and recontacting participants, have been hindered by a paucity of data on the diverse forms biorepositories take and the variety of institutional settings where they are established. A recent survey demonstrates in detail, for the first time, the diversity of biorepositories in the USA. See research article: http://genomemedicine.com/content/5/1/3
The 'scaling-up' of research
Approaches to the way investigators obtain consent and later recontact research participants are regulated in the USA under a set of policies focused on protecting research subjects, often referred to as the Common Rule, that were published in 1991 [1]. In recent years, however, the development of novel research approaches has caused some to raise questions over the practicability of traditional procedures for obtaining consent and recontacting participants. For example, biorepositories can include very large numbers of biosamples collected from large populations of individuals. Traditional procedures used to obtain informed consent to participation in research, such as enrollment visits that can last over an hour, seem better suited to studies with participants who number in the hundreds, rather than to biorepositories whose participants can number in the hundreds of thousands.
The 'scaling-up' of research approaches has led to increased interest in identifying the best ways for investigators to engage with research participants when the number of participants becomes very large [2]. In fact, a recent proposal for revisions to the Common Rule included a suggestion that permission to collect biosamples might be obtained using a brief permission form rather than a detailed informed consent process [3]. This and other proposed reforms to the Common Rule may be intended, in part, to address the concerns that have arisen in building biorepositories. Until now, however, our understanding of the scale of the problem of balancing adequate engagement with practicability in the development of biorepositories has been based on an incomplete picture of the biorepository landscape. Are biorepositories with hundreds of thousands of biosamples really that common? Where do they obtain their samples and with what consent approach? Which stakeholders are involved in developing and carrying out governance and oversight for these collections? In this issue of Genome Medicine, Henderson et al. for the first time provide data and analysis on the diversity of biorepositories in the USA [4]. The findings in this report are wide-ranging and will help move a number of ongoing policy debates forward.
Consent models
The two core ethical aims for informed consent encounters are: (1) to ensure that potential participants are adequately informed about the risks and benefits associated with research participation, and (2) to obtain participants' voluntary agreement to participate in research. In practice, the approaches that can be taken to achieve these aims in the setting of biorepositories are numerous. In the procedures adopted by many biorepositories, participants are informed of the general scope of planned research and asked to consent en bloc (that is, provide 'blanket' consent) to all future research. The alternative is to recontact participants periodically to request consent for use of stored biosamples in newly developed research projects.
Even though the findings reported in Henderson et al. [4] do not address the consent approach adopted by biorepositories, they do help place this choice into context across the range of biorepositories currently in operation in the USA. Fifteen percent of biorepositories report having fewer than 500 samples. For these biorepositories that are similar in size to more traditional types of medical research studies, a 'blanket' consent approach may not be necessary.
However, a number of biorepositories are extremely large. Over 20% of biorepositories contain more than 100,000 specimens, and at least one biorepository reported collecting biosamples from more than 10 million individuals! Since the majority of biobanks (75%) obtain samples directly from the individuals donating them, we can begin to see the scale of the effort needed to obtain consent from participants on just one occasion. Despite the many salutary features of the periodic recontact model, the data from this study indicate that this model may not be feasible for a significant percentage of biorepositories.
The passage of time poses another challenge to the recontact approach. Henderson et al. [4] found that 17% of biorepositories were established prior to 1990. Although we do not know whether samples collected prior to that time are still in use, it is daunting to consider the operational challenge of recontacting participants over a 20 year period! While these findings put a number of claims into their empirical context, they can provide no direct resolution of the debate. For example, although a significant number of biorepositories are either very large or have been in operation for a long time, Henderson et al. have not reported whether any face both challenges. And going beyond these findings, it is clear that under exceptional circumstances, large, long-term research projects can maintain meaningful engagement with participants [5]. Finally, even biorepositories that have adopted a onetime, 'blanket' consent model may later find that they need to recontact participants, such as when the scope of planned research changes or when plans to share data are developed [6].
Return of results
Just as the size and duration of biorepositories can pose challenges to recontacting participants for the purpose of expanding consent, they can also create barriers to return ing research results to participants. If we imaginatively combine the findings provided by Henderson et al. with recent studies that demonstrate that incidental findings generated through DNA-based tests are relatively common [7], we may conclude that returning incidental findings to 100,000 or 500,000 participants included in a genomic biorepository could represent a remarkably expensive and time-consuming effort. This is of particular interest, since 41% of biorepositories already consider long-term sustainability to be a major concern.
The scope of this challenge is mitigated significantly, however, if we assume that only those results expected to provide significant and timely clinical utility should be returned. Taken in this light, DNA-based biorepositories may not pose the most significant challenge in terms of return of results, since we may expect them only infrequently to generate findings that are both urgent and diagnostic. But as scientific knowledge increases in coming years, a great number of RNA and protein-based biomarkers are likely to emerge as both highly predictive and timely markers for disease. Although nearly 50% of biorepositories currently focus on DNA research, the findings of Henderson et al. [4] raise our awareness that 24% of biorepositories are focused primarily on RNA and 7% are focused primarily on protein. In this way, these findings direct our attention beyond return of genomic results toward results that we may soon find are far more convincing -and urgent -candidates for return to participants.
Looking ahead
In this brief article, I have addressed only one narrow area of interest in ethics and policy issues related to biorepositories. My aim has been to demonstrate how the new empirical findings reported by Gail Henderson and her colleagues can serve as a starting point for grounding discourse on a range of issues related to biorepository design, oversight and governance.
At the same time, these findings direct our attention toward emerging challenges. As the trends revealed in this report indicate, widespread innovation in approaches to research is likely to continue. With this innovation will come a continuing need to evaluate the advances that are taking place in all quarters, especially since they are likely to bring new challenges to efforts to enact ethical, legal and societal commitments into practicable policies.
Bibliometric network of scientific research on knowledge sharing
Introduction
Technological developments in this global era have benefited from the existence of knowledge management practices. Knowledge management is essential for the management of complexity, interaction and initiative, coordination and problem solving, and decision making [1]. Openly sharing knowledge undoubtedly offers numerous advantages. For instance, innovative organizations rely on the fusion and incorporation of knowledge to create fresh procedures, products, insights, and solutions [2]. Enhancing the mechanisms for knowledge sharing among key enterprises is crucial for fostering synchronized operations and achieving sustainability [3]. These aspects present new prospects for innovation and leadership [4]. Knowledge sharing involves the transfer of knowledge between individuals. To be precise, only information can be transferred, and it counts as knowledge only if both the sender and the recipient understand the information's meaning and its context [5]. This comprehension involves assessing and grasping the existing state of knowledge in the broader context, generating and persistently working with promising ideas, offering and receiving constructive feedback, exchanging and amalgamating various viewpoints, anticipating and identifying challenges, and resolving problems [6]. In addition, companies need to have the means to adopt innovations or even to access disseminated information. Social technology has previously been used to facilitate more equitable development terms than are offered in the standard innovation diffusion approach [7].
The growth of industries driven by significant technological advancements and pressing developmental requirements serves as a crucial force in steering both the overall economy and its sustainable development [1]; [6]. Furthermore, social network analysis is indispensable as it enables the investigation of structural connections and impacts within networks, the flow of information within networks, the spread of innovative concepts, tools, or methods, and the sustainability of networks. It involves scrutinizing network structure and how structural characteristics shape information-related behaviors [8], creating space to align motivation and build trust [9], as well as activities designed to enhance teaching knowledge, attitudes and behavior to enhance learning [10]. Social networks, or social media, provide the ability to foster extensive knowledge sharing within an organization [11], allowing users to modify web content, share knowledge and socialize with others online [1]. The benefits of increased workplace diversity for human resources include not only better utilization of talent and market understanding but also increased creativity and problem-solving abilities [12]. This also contributes to problem-solving, knowledge exchange, and career advancement [13], helping to achieve sustainable development goals [14].
Socialization is a process involving the acquisition, enactment, and creation of culture and knowledge, which in turn influences identity formation. By internalizing the dominant values, norms, and behaviors, newcomers become integrated members of an organization [15]. Typically, research in the field of knowledge sharing is quite focused, often centered around a single research topic within a specific field [16], and limited to a particular country [17]. According to [18], organizational justice has an impact on organizational commitment, knowledge sharing, and company performance. Additionally, organizational commitment affects both knowledge sharing and company performance. Furthermore, it was found that knowledge sharing significantly influences company performance [18]. Regrettably, little research has visualized a comprehensive global map based on the details of the numerous studies on knowledge sharing published over the years. There is a strong positive correlation between affiliations and scholars, yet no publication has specifically studied this effect across academic studies. We therefore examine the rise, since 1992, in the number of academic papers on knowledge sharing published and indexed in the Scopus database.
Literature Review
Knowledge sharing is a means by which knowledge owned by individuals is transferred to groups and to the organization as a whole, so that it can create new knowledge, foster innovation, and improve organizational performance [19]. As pointed out by M. Pitt, knowledge management should not be ad hoc [20]; exemplary organizations systematically make intuitive, knowledge-based experience available to others, thereby disseminating it throughout the organization. Roberts et al. [21] noted that it should not be assumed that knowledge flows from the center of the company; it is just as likely to originate at the periphery of the organization. Information flow in organizations helps promote creativity. When information flows freely within an organization, it creates opportunities for new ideas, beliefs, choices, and information to interact, enabling a creative environment. A trusting environment allows individuals to take risks by sharing information and working closely with their team members, creating a feeling of collaboration.
Method
This review gives an overview of the research on knowledge sharing conducted around the world over the past 27 years. Data were retrieved from Scopus using document search queries in November 2020. The study used bibliometric techniques, with data visualization and analysis performed in the VOSViewer tool and with Scopus' search result analysis feature [22]. Keywords related to knowledge sharing were used to search the Scopus database, yielding 1,391 documents published globally between 1992 and 2019 (data collection extended up to 2019 and excluded 2020). To provide a comprehensive view of the research throughout each year, academic data were collected from January to December annually. The query TITLE-ABS-KEY("knowledge sharing") AND PUBYEAR < 2020 AND (LIMIT-TO (ACCESSTYPE(OA))) was used as the input command when retrieving academic publication records from the Scopus online database during the data mining process.
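To make the retrieval step concrete, the short sketch below tallies an exported Scopus result set by publication year and source using pandas; the file name and the column labels ('Year', 'Source title') are assumptions about a standard CSV export rather than details taken from the study itself.

```python
import pandas as pd

# Hypothetical CSV export of the Scopus query
#   TITLE-ABS-KEY("knowledge sharing") AND PUBYEAR < 2020 AND (LIMIT-TO(ACCESSTYPE(OA)))
# The file name and column labels are assumptions about the export format.
records = pd.read_csv("scopus_knowledge_sharing.csv")

# Keep only documents published between 1992 and 2019, mirroring the study window.
records = records[(records["Year"] >= 1992) & (records["Year"] <= 2019)]

# Annual publication counts (the study reports a peak of 289 documents in 2019).
per_year = records.groupby("Year").size().sort_index()

# Most frequent publication sources (the study reports Sustainability Switzerland with 44 papers).
per_source = records["Source title"].value_counts().head(10)

print(per_year.tail())
print(per_source)
```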
In order to build a network of international researcher collaboration, the study applies co-authorship analysis in VOSViewer, with authors as the unit of analysis and the tool's systematic counting methodology. To generate a keyword map network, the research also performs a thorough co-occurrence and keyword association analysis using the same systematic counting approach in VOSViewer.
Results and discussion
The number of publications related to knowledge sharing has risen nearly every year. Research on knowledge sharing at the global level commenced in 1992, and the peak year for international publications was 2019, with 289 documents.
Most Common Organizational Affiliations in Knowledge Sharing Research
The top research institution in knowledge sharing research was the University of Technology Sydney, with 14 papers, followed by The University of Hong Kong with 13 papers. By subject area, Computer Science was the most prevalent field in international publications on knowledge sharing research, accounting for 417 papers (17.7%), followed by the social sciences with 360 papers (15.2%), engineering with 239 papers (10.1%), business, management, and accounting (8.6%), medicine with 170 papers, decision sciences with 165, environmental science with 134, economics, econometrics, and finance with 87, mathematics with 80, and agricultural and biological sciences with 76 papers.
Publication Frequency of Knowledge Sharing Research by Document Type
Among international publications related to knowledge sharing research, the most common document type was "Article," accounting for 941 documents or 67.7 percent. Following that, "Conference Paper" made up 25.1 percent with 349 papers, "Review" constituted 4.7 percent with 65 papers, "Editorial" accounted for 0.7 percent with 10 papers, "Book Chapter" and "Erratum" each represented 0.6 percent with 8 papers, "Note" comprised 0.4 percent with 6 papers, and "Book" and "Data Paper" each contributed 0.1 percent with 1 paper each.
Publication Theme Map
By utilizing the VOSViewer program for analysis and visualization, a framework for knowledge sharing keywords in publication themes was established. The criterion for the minimum number of keyword-related documents was set at five occurrences. Consequently, out of 7,890 keywords, 494 keywords met this threshold.
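The following minimal sketch illustrates the same thresholding and co-occurrence logic outside VOSViewer, using a toy list of per-document keywords; the input data, the lowered threshold, and the use of networkx are illustrative assumptions, since the study performed this analysis in VOSViewer with a minimum of five occurrences.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Toy stand-in for the per-document keyword lists extracted from Scopus records.
documents = [
    ["knowledge management", "social media", "e-learning"],
    ["knowledge management", "sustainability", "motivation"],
    ["information retrieval", "internet", "knowledge management"],
    ["sustainability", "motivation", "social media"],
]

# Keep only keywords occurring in at least `min_occurrences` documents
# (the study used a threshold of 5, retaining 494 of 7,890 keywords).
min_occurrences = 2  # lowered here so the toy example produces output
keyword_counts = Counter(kw for doc in documents for kw in set(doc))
kept = {kw for kw, n in keyword_counts.items() if n >= min_occurrences}

# Build the co-occurrence network: nodes are retained keywords and
# edge weights count how many documents mention both keywords.
graph = nx.Graph()
for doc in documents:
    for a, b in combinations(sorted(set(doc) & kept), 2):
        weight = graph.get_edge_data(a, b, {"weight": 0})["weight"]
        graph.add_edge(a, b, weight=weight + 1)

print(list(graph.edges(data=True)))
```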
Figure 9. Keyword network of knowledge sharing research publications.
According to Figure 9, the international academic publications in knowledge sharing research fall into six thematic groups based on the study keywords. These thematic groups are referred to collectively as the KHIRII themes (Knowledge, Human, Innovation, Research, Information, and Industry), which simplifies and abbreviates their representation. 1) Knowledge cluster (red): dominated by the keywords knowledge management, knowledge-based systems, education, e-learning, human resource management, social media, and strategy, many of which are linked to knowledge themes. 2) Human cluster (green): dominated by the keywords male, female, human relations, young adult, primary health care, and middle aged. 3) Innovation cluster (blue): dominated by the keywords sustainability, climate change, policy making, and motivation. 4) Research cluster (yellow): medical research, health survey, health policy, human experiment, and leadership. 5) Information cluster (purple): information retrieval, internet, software, data analysis, metadata, and information storage and retrieval; this cluster is tied together by the keyword information. 6) Industry cluster (blue): dominated by the keywords automotive industry and greenhouse gases.
Network of Authorship
Collaboration between implementers working in different contexts can contribute to the development of new and common approaches [12]. Using the VOSViewer program, an authorship network map of knowledge sharing researchers was constructed. One of the inclusion criteria was a minimum of three publications per author; 70 of the 4,405 researchers met this requirement. As shown in Figure 10, the international researchers in knowledge sharing research publications formed a single collaborative network containing several clusters: 1) a red cluster; 2) a green cluster (Zhang, X., Chen, J., Zhang, H., Yang, J., Tsai, S. B., and Wang, J.); 3) a blue cluster (Zhou, X., Wang, H., Liu, L., Liu, X., and Liu, Z.); 4) a yellow cluster (Ma, X., Li, I., Xu, J., Chen, H., and Zhao, I.); 5) a purple cluster (Li, Y., Li, C., and Wu, H.); and 6) a light blue cluster (Zhang, I., Wu, X., and Chen, Y.).
Managerial Implication
The authorship network maps allow researchers to see how knowledge sharing expertise is distributed, namely as a group partnership network between international researchers in knowledge-sharing research publications.
Conclusion
The findings of this research highlight a consistent yearly increase in international publications related to "Knowledge Sharing," accompanied by the emergence of maps and visual patterns. The University of Technology Sydney was the most active research institution in the publication of knowledge sharing papers, with 14 contributions. In the realm of knowledge sharing research publications, Oliveira, M. emerged as the individual academic researcher with the highest number of publications, totaling 6 papers. The United States played a pivotal role in knowledge sharing research, contributing significantly with 197 papers. Notably, the National Natural Science Foundation of China was the leading funding sponsor in this research, backing 36 papers.
Within knowledge sharing studies, computer science was the most intensively studied field, accounting for 17.7 percent of the publications, while Articles dominated the document types, comprising 67.7 percent of the corpus. Among the sources of knowledge sharing research, "Sustainability Switzerland" led with 44 papers, and the highest global scholarly publication output occurred in 2019, with 289 papers. The work of Mannix, E., and Neale, M.A. stood out with the most citations, particularly their 2005 publication, "What differences make a difference? The promise and reality of diverse teams in organizations," cited 639 times.
Regarding the knowledge implications, this study proposes a convergence-axis classification, referred to as the KHIRII themes, encompassing Knowledge, Human, Innovation, Research, Information, and Industry. This classification helps organize the body of knowledge accumulated over 27 years of academic publication. Recognizing key themes in knowledge sharing provides practical insights, fostering awareness of research gaps and the need for specialized expertise in various disciplines. These themes often underscore the significant contributions of knowledge sharing to information, innovation, technology, and management.
After The University of Hong Kong came Wageningen University & Research with 12 papers, Universidade de Sao Paulo (USP) with 11 papers, University of Toronto with 11 papers, Universidade de Lisboa with 11 papers, and McGill University with 10 papers.

Figure 1. Organizational affiliations with the largest annual publication counts in knowledge sharing research.

4.2. Most Prolific Individual Researchers. The researcher with the most writings in the area of knowledge sharing was Oliveira, M., with 6 papers, followed by Curado, C. with five papers, and by Assegaff, S., Eisenbardt, M., Harvey, G., Ikeda, M., Rehman, M., Tsai, S.B., Vogel, A.L., and Ziemba, E. with 4 papers each.

Figure 2. Most prolific individual knowledge sharing researchers.

4.3. Annual Publications by Nation. In the realm of knowledge sharing research publications, the United States led the way with 197 academic documents, followed by China with 155 documents, the United Kingdom with 150 papers, Australia with 85, Canada with 78, Brazil with 66, the Netherlands with 59, Malaysia with 56, France with 52, and Germany with 52.

Figure 3. Number of documents by nation in knowledge sharing research.

4.4. Funding Sponsors. The primary funding sponsor in knowledge sharing research was the National Natural Science Foundation of China, which supported 36 papers. It was followed by the National Science Foundation with 13 papers, the European Commission with 11, the Conselho Nacional de Desenvolvimento Científico e Tecnológico with 10, the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior with 10, the Fundamental Research Funds for the Central Universities with 10, and the Japan Society for the Promotion of Science with 9 papers.

Figure 4. Most frequent funding sponsors of knowledge sharing research.
Figure 5. Publication frequency of knowledge sharing research by subject area.
Figure 6. The most frequent document types of knowledge sharing research.

4.6. Publication Sources. In terms of the yearly quantity of knowledge sharing research publication sources, Sustainability Switzerland held the top position with 44 papers, followed by IFIP Advances in Information and Communication Technology with 43 papers, Procedia Computer Science with 32, IFIP International Federation for Information Processing with 21, and the Lecture Notes in Computer Science subseries (which also contains Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) with 21 papers.

Figure 7. Number of annual documents by knowledge sharing research source.

4.7. Annual Number of Documents. The quantity of academic documents addressing knowledge sharing has exhibited a consistent annual increase. Research in the field commenced in 1992; publication activity peaked in 2019 with 289 articles, while 191 articles were published in 2018.

Figure 8. Number of annual documents of knowledge sharing research.

4.8. Most Cited Articles. The research conducted by Mannix, E., and Neale, M.A. garnered the highest number of citations in the field: their 2005 publication "What differences make a difference? The promise and reality of diverse teams in organizations" received 639 citations.
A generic frequency dependence for the atmospheric tidal torque of terrestrial planets
Thermal atmospheric tides have a strong impact on the rotation of terrestrial planets. They can lock these planets into an asynchronous rotation state of equilibrium. We aim at characterizing the dependence of the tidal torque resulting from the semidiurnal thermal tide on the tidal frequency, the planet orbital radius, and the atmospheric surface pressure. The tidal torque is computed from full 3D simulations of the atmospheric climate and mean flows using a generic version of the LMDZ general circulation model (GCM) in the case of a nitrogen-dominated atmosphere. Numerical results are discussed with the help of an updated linear analytical framework. Power scaling laws governing the evolution of the torque with the planet orbital radius and surface pressure are derived. The tidal torque exhibits i) a thermal peak in the vicinity of synchronization, ii) a resonant peak associated with the excitation of the Lamb mode in the high frequency range, and iii) well defined frequency slopes outside these resonances. These features are well explained by our linear theory. Whatever the star-planet distance and surface pressure, the torque frequency spectrum -- when rescaled with the relevant power laws -- always presents the same behaviour. This allows us to provide a single and easily usable empirical formula describing the atmospheric tidal torque over the whole parameter space. With such a formula, the effect of the atmospheric tidal torque can be implemented in evolutionary models of the rotational dynamics of a planet in a computationally efficient, and yet relatively accurate way.
Introduction
Understanding the evolution of planetary systems has become a crucial question with the rapidly growing number of exoplanets discovered up to now. Terrestrial planets particularly retain our attention as they offer a fascinating diversity of orbital configurations, and possible climates and surface conditions. This diversity is well illustrated by Proxima-b, an exo-Earth with a minimum mass of 1.3 M ⊕ orbiting Proxima Centauri (Anglada-Escudé et al. 2016;Ribas et al. 2016), and the TRAPPIST-1 system, which is a tightly-packed system of seven Earth-sized planets orbiting an ultracool dwarf star (Gillon et al. 2017;Grimm et al. 2018).
Characterizing the atmospheric dynamics and climate of these planets is a topic that motivated numerous theoretical works, both analytical and numerical (e.g. Pierrehumbert 2011;Heng & Kopparla 2012;Leconte et al. 2013;Heng & Workman 2014;Wolf et al. 2017;Wolf 2017;Turbet et al. 2018). This tendency will be reinforced in the future by the rise of forthcoming space observatories such as the James Webb Space Telescope (JWST), which will unravel features of the planetary atmospheric structure by performing high resolution spectroscopy over the infrared frequency range (Lagage 2015).
Constraining the climate and surface conditions of the observed terrestrial planets requires constraint of their rotation rate first because of the key role played by this parameter in the equilibrium atmospheric dynamics (Vallis 2006;Pierrehumbert 2010). Particularly, it is important to know whether a planet is locked into the configuration of spin-orbit synchronization with its host star and the extent to which asynchronous rotation states of equilibrium might exist. Over long timescales, the planet rotation is driven by tidal effects, that is the distortion of the planet by its neighbours (star, planets and satellites) resulting from mutual distance interactions. Tides are a source of internal dissipation inducing a variation of mass distribution delayed with respect to the direction of the perturber. As a consequence, the planet undergoes a tidal torque, which modifies its rotation by establishing a transfer of angular momentum between the orbital and spin motions.
Tides can be generated by forcings of different natures. First, the whole planet is distorted by the gravitational tidal potential generated by the perturber, and is driven by the resulting tidal torque towards spin-orbit synchronous rotation and a circular orbital configuration. Second, if the perturber is the host star, the atmosphere of the planet undergoes a heating generated by the day-night cycle of the incoming stellar flux. The variations of the atmospheric mass distribution generated by this forcing are the so-called thermal atmospheric tides (Chapman & Lindzen 1970).
As demonstrated by the pioneering study by Gold & Soter (1969) in the case of Venus, thermal tides are able to drive a terrestrial planet away from spin-orbit synchronization since they induce a tidal torque in opposition with that resulting from solid tides in the low frequency range. Hence, the competition between the two effects locks the planet into an asynchronous rotation state of equilibrium, which explains the departure of the rotation rate of Venus to spin-orbit synchronization.
The understanding of this mechanism has been progressively consolidated by analytical works based upon the classical tidal theory (e.g. Ingersoll & Dobrovolskis 1978;Dobrovolskis & Ingersoll 1980;Auclair-Desrotour et al. 2017a,b) or using parametrized models (Correia & Laskar 2001. Over the past decade, the growing performances of computers have made full numerical approaches affordable, and the atmospheric torque created by the thermal tide was computed using general circulation models (GCM; Leconte et al. 2015). This approach remains complementary with analytical models owing to its high computational cost. However, it is particularly interesting since it allows to characterize the atmospheric tidal response of a planet by taking into account the atmospheric structure, mean flows and other internal processes by solving the primitive equations of fluid dynamics in a self-consistent way.
By using a generic version of the LMDZ GCM (Hourdin et al. 2006), Leconte et al. (2015) retrieved the frequency dependence of the tidal torque predicted by ab initio analytical models (Ingersoll & Dobrovolskis 1978;Auclair-Desrotour et al. 2017a. The torque increases linearly with the tidal frequency in the vicinity of synchronization. It reaches a maximum associated with a thermal time of the atmosphere and then decays in the high-frequency range. This behaviour is approximated at first order by the Maxwell model, which describes the forced response of a damped harmonic oscillator. It shows evidence of the important role played by dissipative processes such as radiative cooling in Venus-like configurations. To better understand the action of the thermal tide on the planet rotation, this frequency-dependent behaviour has to be characterized. Thus, our purpose in this study is to investigate the dependences of the tidal torque created by the semidiurnal tide on the tidal frequency and on key control parameters. We follow the approach of Leconte et al. (2015) for the method, and treat the case of an idealized dry terrestrial planet hosting a nitrogen-dominated atmosphere and orbiting a Sun-like star. Hence, we recall in Sect. 2 the mechanism of the thermal atmospheric tide. In Sect. 3, we detail the method and the physical setup of the treated case.
In Sect. 4, we compute the tidal torque exerted on the atmosphere from simulations using the LMDZ GCM and examine its dependence on the tidal frequency. We introduce in this section two new models for the thermally generated atmospheric tidal torque: an ab initio analytical model based upon the linear theory of atmospheric tides (e.g. Chapman & Lindzen 1970), and a parametrized semi-analytical model derived from results obtained using GCM simulations. This latter model describes in a realistic way the behaviour of the torque in the low-frequency range, where a thermal peak is observed. In addition, we investigate in this section the role played by the ground-atmosphere thermal coupling in the lag of the tidal bulge.
In Sect. 5, we examine the dependence of the tidal response on the planet orbital radius and surface pressure. We thus establish empirical scaling laws describing the evolution of the characteristic amplitude and timescale of the thermal peak with these two parameters. Combining together the obtained results, we finally derive a new generic formula to quantify the atmospheric tidal torque created by the thermal semidiurnal tide in the case of a N 2 -dominated atmosphere. We give our conclusions in Sect. 6.
Basic principle
We briefly recall in this section the main aspects of the mechanism of atmospheric tides in the case of terrestrial planets, and we introduce analytical expressions that will be used in the following to compute the resulting tidal torque. For the sake of simplification, we consider in this study the case of a spherical planet of radius R p and mass M p , orbiting its host star, of mass M , circularly. The star-planet distance is denoted a, the mean motion of the system n , and the obliquity of the planet is set to zero. We assume that the planet rotates at the spin angular velocity Ω, which is positive if the spin rotation is along the same direction as the orbital motion, and negative otherwise.
The atmosphere of the planet undergoes both the tidal gravitational and thermal forcings of the host star. Below a certain orbital radius, the planet is sufficiently close to the star to make gravitational forces predominate. Thus, its rotation is driven towards spin-orbit synchronization (Ω = n ), which is the unique possible final state of equilibrium for the planet rotation in the absence of obliquity and eccentricity. Conversely, the predominance of the thermal tide enables the existence of asynchronous final rotation states of equilibrium, as showed in the case of Venus (e.g. Gold & Soter 1969;Ingersoll & Dobrovolskis 1978;Dobrovolskis & Ingersoll 1980;Correia & Laskar 2001;Auclair-Desrotour et al. 2017a). As a consequence, we ignore here the action of gravitational forces on the atmosphere. We note however that the action of these forces on the atmospheric tidal bulge will be taken into account to compute the tidal torque, as seen in the following.
The thermal forcing results from the day-night periodic cycle. The atmosphere undergoes heating variations due to the time-varying component of the incoming stellar flux F, which scales as the equilibrium one, F = L/(4πa²), where L is the luminosity of the star. Hence, the absorbed energy induces a delayed variation of the atmospheric mass distribution. Let us assume the hydrostatic approximation (i.e. that pressure and gravitational forces compensate each other exactly in the vertical direction) and consider that the surface of the planet is rigid enough to support the atmospheric pressure variations with negligible distortions. It follows that the variation of mass distribution is directly proportional to the surface pressure anomaly, which can be expanded in Fourier series of time and in spherical harmonics as

δp_s(θ, ϕ, t) = Σ_{m,σ} Σ_{l ≥ |m|} δp_{s;l}^{m,σ} Y_l^m(θ, ϕ) e^{iσt},     (1)

where l and m designate the latitudinal and longitudinal degrees of a mode, θ and ϕ the colatitude and longitude in the reference frame co-rotating with the planet, t the time, Y_l^m the normalized spherical harmonics (see Appendix A), δp_{s;l}^{m,σ} the associated components, and σ = m(Ω − n) the associated forcing frequencies (see e.g. Efroimsky 2012; Ogilvie 2014).
The tidal torque exerted on the atmosphere is obtained by integrating the gravitational force undergone by the tidal bulge over the sphere. Hence, denoting U_T the tidal gravitational potential at the planet surface, the atmospheric tidal torque is defined in the thin-layer approximation (H ≪ R_p) as (e.g. Zahn 1966)

T = (1/g) ∫_S δp_s ∂_ϕ U_T dS,     (2)

where the notation g refers to the surface gravity of the planet, ∂_ϕ to the partial derivative in longitude, S to the sphere of radius R_p, and dS = R_p² sinθ dθ dϕ to the surface element.
Similarly to the surface pressure anomaly, U_T can be expanded in Fourier series of time and spherical harmonics,

U_T(θ, ϕ, t) = Σ_{m,σ} Σ_{l ≥ |m|} U_{T;l}^{m,σ} Y_l^m(θ, ϕ) e^{iσt},     (3)

where the U_{T;l}^{m,σ} are the amplitudes of the different modes. Terms associated with l = 1 do not contribute to the tidal torque since they just correspond to a displacement of the planet gravity centre. Thus, the main components of the expansion are those associated with the quadrupolar semidiurnal tide, that is with degrees l = |m| = 2. Besides, since the U_{T;l}^{m,σ} scale as U_{T;l}^{m,σ} ∝ (R_p/a)^l, terms of higher order in l can be neglected with respect to the quadrupolar components if the radius of the planet R_p is assumed to be small compared to the star-planet distance, which is the case in the present study. Thus, by substituting U_T and δp_s by their expansions in spherical harmonics in Eq. (2), we note that only the quadrupolar terms l = |m| = 2 remain, as for the tidal potential U_T, and we end up with the well-known expression of the semidiurnal quadrupolar torque in the thin-layer approximation (Eq. (4); e.g. Leconte et al. 2015), which is proportional to ℑ[δp_{s;2}^{2,σ}], with σ = 2(Ω − n), the notation ℑ referring to the imaginary part of a complex number (ℜ referring to the real part). In this expression, δp_{s;2}^{2,σ} designates the component of degrees l = 2 and m = 2 in the expansion in spherical harmonics given by Eq. (1). This complex quantity is the most important one since it encompasses the whole physics of the atmospheric tidal response. In the following, it will be calculated using a GCM.
The action of the torque on the planet is fully determined by the sign of the product η = sign(σ) ℑ[δp_{s;2}^{2,σ}]. When η < 0 (η > 0), the atmospheric tidal torque pushes the planet towards (away from) spin-orbit synchronization, and |Ω − n| decays (increases). Positions for which η = 0 correspond to the stable (dη/dσ|_eq < 0) or unstable (dη/dσ|_eq > 0) equilibrium rotation rates that the planet would reach if it were subject to atmospheric tides only, that is if solid tides were ignored in the case of a dry terrestrial planet.
Method
As mentioned in the previous section, the core of the method is to compute the quadrupolar component of the surface pressure anomaly from 3D GCM simulations. We first detail the basic physical setup of these simulations, and then the way δp_{s;2}^{2,σ} is extracted from pressure snapshots. In the whole study, we focus on a Venus-sized planet orbiting a Sun-like star.
A 'reference case' of fixed surface pressure and star-planet distance is defined. Specifically, the surface pressure is set in this case to p s = 10 bar and we assume that the planet is located at the Venus-Sun distance, that is a Venus = 0.723 au. This configuration, characterized in Sect. 4, corresponds to the case illustrated by Fig. 1 of Leconte et al. (2015), and seems thereby a convenient choice for comparisons with this early work.
In Sect. 5, two families of configurations will be studied, both including the reference case. In the first family, the surface pressure is set to p s = 10 bar and the semi-major axis varies.
Conversely, in the second family, planets have the same orbital radius, a = a Venus , and various surface pressures.
Physical setup of the 3D simulations
Apart from the surface pressure and star-planet distance, all simulations are based on a common physical setup. For the stellar incoming flux, the emission spectrum of the Sun is used. The planet is assumed to be dry, with no surface liquid water or water vapour, which allows us to filter out effects associated with the formation of clouds in the study of its atmospheric tidal response. The atmosphere is arbitrarily assumed to be nitrogen-dominated. However, a pure N2 atmosphere would be an extreme case for radiative transfer owing to the absence of a radiator. Hence, we have to set a non-zero volume mixing ratio for carbon dioxide to avoid numerical issues in the treatment of the radiative transfer with the LMDZ, which was originally designed to study the Earth atmosphere. Although any value could be used, we choose to set the value of the CO2 volume mixing ratio to that of the Earth atmosphere at the beginning of the twenty-first century, that is ∼370 ppmv (e.g. Etheridge et al. 1996). The mass ratio corresponding to this volume mixing ratio being negligible, we use the value of N2 for the mean molecular mass of the atmosphere, M_atm = 28.0134 g mol⁻¹ (Meija et al. 2016).
For a perfect diatomic gas, the ratio of heat capacities (also called the first adiabatic exponent) is Γ_1 = 1.4, and it follows that κ = (Γ_1 − 1)/Γ_1 = 0.285 (the parameter κ can also be written κ = R_GP/(M_atm C_p), where R_GP and C_p stand for the perfect gas constant and the thermal capacity per unit mass of the atmosphere, respectively). The effects of topography are ignored and the surface of the planet is thus considered as an isotropic sphere of albedo A_s = 0.2 and thermal inertia I_gr = 2000 J m⁻² s⁻¹ᐟ² K⁻¹, which is a typical value for Venus-like soils (see e.g. Lebonnois et al. 2010). All of these parameters remain unchanged for the whole study and are summarized in Table 1. Our simulations are performed with an upgraded version of the LMD GCM specifically developed for the study of extrasolar planets and paleoclimates (see e.g. Wordsworth et al. 2010, 2011, 2013; Forget et al. 2013; Leconte et al. 2013), and used previously by Leconte et al. (2015) for the study of atmospheric tides. The model is based on the dynamical core of the LMDZ 4 GCM (Hourdin et al. 2006), which uses a finite-difference formulation of the primitive equations of geophysical fluid dynamics. In particular, the following approximations are assumed.
The main one is the hydrostatic approximation (e.g. Vallis 2006), meaning that the pressure and gravitational forces compensate each other exactly along the vertical direction. The second approximation is the traditional approximation (e.g. Unno et al. 1989), which consists in ignoring the components of the Coriolis acceleration associated with a vertical motion of fluid particles or generating a force along the vertical direction. The third important assumption in the code is the thin layer approximation, meaning that the thickness of the atmosphere is considered as small with respect to the radius of the planet (e.g. Vallis 2006).
A spatial resolution of 32 × 32 × 26 in longitude, latitude, and altitude is used for the simulations.
The radiative transfer is computed in the model using a method similar to Wordsworth et al. (2011) and Leconte et al. (2013). High-resolution spectra characterizing optical properties were produced beforehand for the chosen gas mixture over a wide range of temperatures and pressures using the HITRAN 2008 database (Rothman et al. 2009). These spectra are interpolated at every radiative timestep during simulations to determine local radiative transfers. The method is commonly used and has been thoroughly discussed in past studies (e.g. Leconte et al. 2013). We thus refer the readers to these works for a detailed description.
Extraction of the quadrupolar surface pressure anomaly
For a given planet, of fixed rotation, semi-major axis and surface pressure, the calculation of the quadrupolar surface pressure anomaly follows several steps.
First, the GCM is run for a period P_conv corresponding to the convergence timescale necessary to reach a steady cycle. We note that this period has to be specified for each doublet (a, p_s). As a first approximation, it depends on the radiative timescale of the deepest layers of the atmosphere τ_rad, which scales as τ_rad ∝ p_s C_p/(g σ_SB T_e³), where T_e stands for the mean effective, or black body, temperature of the atmosphere and σ_SB for the Stefan-Boltzmann constant (see e.g. Showman & Guillot 2002, Eq. (10)). In the reference case, we observe that the atmospheric state has converged towards a steady cycle after ∼5800 Earth Solar days, and we thus use this value for calculations in this section. After this first step, a simulation is run for 300 Solar days of the planet, defined by P_sol = 2π/|Ω − n|, except in the case of spin-orbit synchronization (Ω = n), where there is no day-night cycle (in this case, the simulation is simply run for 3000 Earth Solar days). At the end of the simulation, we have at our disposal a time series of snapshots of the surface pressure given as a function of the longitude and latitude (see e.g. Fig. 1).
The third step consists in post-processing these data. We first remove the constant component, that is the mean surface pressure. Then, we proceed to a change of variable: the time coordinate is replaced by the Solar zenith angle, so that snapshots are all centred on the substellar point. Since meteorological fluctuations can be considered as a perturbation varying randomly over short timescales, we filter them out by folding the surface pressure anomaly over one Solar day.

Fig. 1. Surface pressure and horizontal winds computed with the LMDZ GCM for a Venus-sized terrestrial planet hosting a 10 bar atmosphere (reference case). In this study, the surface pressure anomaly is folded over one Solar day and expanded in spherical harmonics to calculate the atmospheric tidal torque using the formula given by Eq. (4).
We finally apply a spherical harmonics transform to the resulting averaged surface pressure snapshot in order to get the complex coefficient δp_{s;2}^{2,σ} associated with the semidiurnal tidal mode of degrees l = 2 and m = 2 (see Eq. (1)). The method is illustrated by Fig. 2 in the reference case (a = a_Venus and p_s = 10 bar).
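As an illustration of this projection step, the sketch below removes the mean from a synthetic, day-folded pressure map and projects it onto the normalized l = m = 2 spherical harmonic; the grid, the test pattern, and the normalization convention are assumptions made for the example, and the real pipeline operates on LMDZ GCM snapshots.

```python
import numpy as np

# Synthetic day-folded surface pressure map on a latitude-longitude grid
# (a stand-in for a folded GCM snapshot); grid size and test pattern are assumptions.
n_lat, n_lon = 32, 64
theta = (np.arange(n_lat) + 0.5) * np.pi / n_lat       # colatitude (cell centres)
phi = np.arange(n_lon) * 2.0 * np.pi / n_lon           # longitude
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# Test pattern: a lagged semidiurnal bulge of 1000 Pa on top of a 1 bar background.
p_surf = 1.0e5 + 1000.0 * np.sin(TH) ** 2 * np.cos(2.0 * PH - 0.3)

# Normalized spherical harmonic Y_2^2 (one common normalization convention).
Y22 = 0.25 * np.sqrt(15.0 / (2.0 * np.pi)) * np.sin(TH) ** 2 * np.exp(2j * PH)

# Remove the constant component (grid mean used as a simple stand-in), then
# project onto Y_2^2 over the sphere with crude quadrature weights.
anomaly = p_surf - p_surf.mean()
d_omega = np.sin(TH) * (np.pi / n_lat) * (2.0 * np.pi / n_lon)
dp_22 = np.sum(anomaly * np.conj(Y22) * d_omega)

# The imaginary part of this complex coefficient is what enters the torque (Eq. (4)).
print(dp_22)
```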
This procedure provides the value of the tidal torque for a given forcing frequency. In practice, the torque is computed over an interval of the normalized frequency ω = (Ω − n)/n centred on synchronization (ω = 0) with n fixed, the planet rotation rate being deduced from ω (the normalized frequency ω is employed here instead of σ to follow Leconte et al. 2015). Typically, we use −30 ≤ ω ≤ 30 to study the low-frequency regime of the atmospheric tidal response and −300 ≤ ω ≤ 300 to study the high-frequency regime.
The frequency range is thus divided into N intervals, meaning that the whole above procedure has to be repeated N + 1 times to construct a frequency spectrum of the tidal torque. The size of an interval is defined as ∆ω ≡ (ω_sup − ω_inf)/N. For instance, for the exploration of the parameter space detailed in Sect. 5, N = 20, ω_inf = −30, ω_sup = 30, and thus ∆ω = 3.
Frequency behaviour of the atmospheric tidal torque
The apparent complexity of the physics involved in thermal atmospheric tides requires a graduated approach to the problem. Hence, before investigating the dependence of the tidal torque on the planet orbital radius and atmospheric surface pressure as mentioned above, we preliminarily characterize how it varies with the tidal frequency. To address this question, we consider the reference case (p_s = 10 bar and a = a_Venus).

Fig. 2. Surface pressure anomaly created by the thermal tide. Left panels: daily averaged spatial distribution of the departure of the surface pressure from its mean value created by the thermal tide. Right panels: spatial distribution of the semidiurnal component only. The surface pressure anomaly is computed for 300 Solar days and folded over one Solar day centred on the substellar point, whose location and direction of motion are shown with a white arrow. From top to bottom panels: the normalized forcing frequency ω = (Ω − n)/n is increased from 0 (spin-orbit synchronization) to 24 (corresponding to a Solar day of P_sol = 9.36 days) for the reference case of the study (a = a_Venus and p_s = 10 bar).
Characterization of the reference case
In order to characterize the reference case, frequency-spectra of the atmospheric torque created by the semidiurnal thermal tide are computed in low-frequency and high-frequency ranges. For convenience, we introduce the function f GCM (σ), which is the interpolating function of GCM results with cubic splines.
Noting that the tidal torque should be an odd function of the tidal frequency in the absence of rotation (or if the effect of rotation on the tidal response were negligible), we also introduce the function f_odd(σ) = [f_GCM(σ) − f_GCM(−σ)]/2, which is the odd function minimizing, for any σ, its distance to f_GCM. The complementary function f_even, such that f_GCM = f_odd + f_even, is thus f_even(σ) = [f_GCM(σ) + f_GCM(−σ)]/2, and provides a measure of the impact of Coriolis effects on the tidal torque. The data, the interpolating function f_GCM, and its components f_odd and f_even are plotted in Fig. 3 as functions of the normalized tidal frequency ω = σ/(2n) in linear and logarithmic scales. Additional functions of the frequency are plotted with dashed lines. They correspond to the ab initio analytical ('Ana.'), Maxwell, and parametrized ('Param.') models that will be introduced and discussed further.
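A minimal numerical sketch of this decomposition is given below, assuming the torque spectrum is available as sampled (ω, torque) pairs; the sample values are placeholders rather than GCM output.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder samples of the torque spectrum; real values come from the GCM runs.
omega = np.linspace(-30.0, 30.0, 21)
torque = 2000.0 * (omega / 5.0) / (1.0 + (omega / 5.0) ** 2) + 50.0  # asymmetric toy data

# Interpolating function f_GCM built with cubic splines, as in the text.
f_gcm = CubicSpline(omega, torque)

# Odd and even parts, f_odd(s) = [f(s) - f(-s)] / 2 and f_even(s) = [f(s) + f(-s)] / 2,
# so that f_GCM = f_odd + f_even; f_even measures the impact of Coriolis effects.
def f_odd(s):
    return 0.5 * (f_gcm(s) - f_gcm(-s))

def f_even(s):
    return 0.5 * (f_gcm(s) + f_gcm(-s))

print(f_odd(5.0), f_even(5.0))
```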
We first consider the low-frequency range (−30 ≤ ω ≤ 30). The reference case of our study exactly reproduces the results plotted in Fig. 1 of Leconte et al. (2015), with a maximum slightly greater than 2000 Pa located around ω ∼ 5. We introduce here the maximal value of the peak q_max ≡ max{f_odd(σ)}, the associated frequency σ_max, such that f_odd(σ_max) = q_max, the corresponding timescale τ_max ≡ σ_max⁻¹, and the associated normalized frequency ω_max. The tidal torque is negative for σ < 0 and positive otherwise, which corresponds to the typical behaviour of the thermally induced atmospheric tidal response in the vicinity of synchronization, as discussed in Sect. 2. As shown by early studies (Gold & Soter 1969; Ingersoll & Dobrovolskis 1978; Dobrovolskis & Ingersoll 1980; Correia & Laskar 2001), thermal atmospheric tides thus tend to drive the planet away from synchronous rotation and determine its non-synchronized rotation states of equilibrium.
In the zero-frequency limit, the torque scales as T ∝ σ^α, with α ≈ 0.73. In the high-frequency range (20 ≲ |ω| ≤ 300), it scales as T ∝ σ⁻¹ with a remarkable regularity (see Fig. 3, bottom left panel) and exhibits a resonance at ω ≈ 260. We will see in the next section that these features can be explained using the linear theory of atmospheric tides (Wilkes 1949;Siebert 1961;Lindzen & Chapman 1969).
We note that the spectrum of f GCM exhibits a slight systematic asymmetry with respect to the synchronization. This feature is obvious in the low-frequency range, where | f GCM (−σ)| > | f GCM (σ)|, and tends to vanish while |σ| increases. Particularly, a small departure between f GCM and f odd can be observed around the extrema of the tidal torque, and we note that the atmosphere undergoes a non-negligible tidal torque at synchronization (σ = 0), although the perturber does not move in the reference frame co-rotating with the planet.
This asymmetry is an effect of the Coriolis acceleration, which comes from the fact that |Ω(−σ)| ≠ |Ω(σ)| (in the low-frequency range, the spin rotation rate is not proportional to the tidal frequency). The Coriolis acceleration affects the atmospheric general circulation by generating strong zonal jets through the mechanism of non-linear Rossby waves pumping angular momentum equatorward (e.g. Showman & Polvani 2011). These jets induce a Doppler-like angular lag of the tidal bulge with respect to the direction of the perturber.
Ab initio analytical model
The behaviour of the torque in the high-frequency range can be explained with the help of the linear theory of thermal atmospheric tides (Wilkes 1949;Siebert 1961;Lindzen & Chapman 1969). In Appendix B, by using an ab initio approach, we compute analytically the atmospheric tidal torque created by the semidiurnal thermal tide in the idealized case of an isothermal atmosphere undergoing the tidal heating of the planet surface. The atmospheric structure is here characterized by the constant pressure height scale H = R_s T_s/g, where R_s and T_s designate the specific gas constant and the surface temperature, respectively. This allows the altitude z to be normalized through the introduction of the reduced altitude x = z/H. In the analytic model, we choose for the heat per unit mass inducing the tidal response the vertical profile J = J_s e^(−b_J x), where J_s is the heat per unit mass at the planet surface, and b_J a dimensionless optical depth corresponding to the inverse of the characteristic thickness of the heated layer. We note that the limit b_J → +∞ corresponds to the case studied by Dobrovolskis & Ingersoll (1980), where the vertical profile of heat is approximated by a Dirac distribution. The surface pressure anomaly is obtained by solving the vertical structure equation of the dominating mode with the above profile of the forcing. We refer the reader to the appendix for the details of the approximations and calculations made to get this result. In particular, we note that dissipative processes are ignored since they are associated with timescales that are supposed to far exceed typical tidal periods in the high-frequency range.
The solution takes two different forms depending on how σ compares to the frequency characterizing the turning point, where the vertical wavenumber vanishes (see Appendix B), σ_TP = (2/R_p)√(κ Λ_0 g H). The notation Λ_0 designates here the eigenvalue of the predominating mode in the expansion of perturbed quantities on the basis of Hough functions (see Eq. (B.17)). This mode is the gravity mode of latitudinal wavenumber n = 0 in the indexing notation used by Lee & Saio (1997). Its eigenvalue Λ_0 can be approximated as a constant provided that n ≪ |Ω|. Hence, introducing the equivalent depth of the mode, h ≡ σ² R_p²/(Λ_0 g) (Eq. (11)), we obtain the surface pressure anomaly in one form for |σ| ≤ σ_TP (Eq. (12)) and in another for |σ| > σ_TP (Eq. (13)). We recall that κ = R_s/C_p, where C_p designates the heat capacity per unit mass, and Γ_1 = 1/(1 − κ) the adiabatic exponent at constant entropy (Gerkema & Zimmerman 2008). The solution given by Eqs. (12) and (13) provides a useful diagnosis of the frequency behaviour of the torque in the high-frequency range.
The most striking feature of this behaviour is the peak that can be observed in Fig. 3 (top and bottom left panels) at the normalized frequency ω ≈ 260. This peak corresponds to the fundamental resonance of the atmospheric vertical structure associated with the propagation of the Lamb mode (e.g. Lindzen et al. 1968; Bretherton 1969; Lindzen & Blake 1972; Platzman 1988; Unno et al. 1989), which is an acoustic-type wave of long horizontal wavelength. In an inviscid, isothermal atmosphere, the Lamb mode is characterized by the equivalent depth h_L = Γ_1 H (Lindzen & Blake 1972). In the asymptotic regime, where n ≪ |Ω|, the characteristic Lamb frequency follows from Eq. (11), σ_L = √(Λ_0 g h_L)/R_p (Eq. (14)). By noticing that σ_L > σ_TP in the case of a diatomic gas (Γ_1 = 1.4) and substituting h by h_L in the corresponding expression of the solution, that is Eq. (13), we can easily observe that the tidal torque is singular at |σ| = σ_L. The resonance hence occurs when the phase velocity of the forced mode equals the characteristic Lamb velocity V_L = √(g h_L).
With the numerical values given by Table 1 and the mean surface temperature computed from GCM simulations (T s ≈ 316 K), the isothermal approximation leads to H ≈ 10.6 km and h L ≈ 15 km for the reference case. Besides, Λ 0 ≈ 11.1 in the adiabatic asymptotic regime of high rotation rates. It thus follows that ω L ≈ 308, and we recover the order of magnitude of the frequency identified in Fig. 3 (top left panel) using GCM simulations (i.e. ω L ≈ 260).
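This order-of-magnitude estimate can be reproduced with the short calculation below; the planetary and orbital parameters (Venus-like radius, surface gravity, and orbital period) are assumed values, and the resonance condition σ_L = √(Λ_0 g h_L)/R_p used here simply expresses the equality between the phase velocity of the forced mode and V_L, as discussed above.

```python
import numpy as np

# Assumed planetary and orbital parameters (Venus-sized planet at the Venus-Sun distance).
g = 8.87                                  # surface gravity [m s^-2]
R_p = 6.0518e6                            # planetary radius [m]
n_orb = 2.0 * np.pi / (224.7 * 86400.0)   # orbital mean motion [s^-1]

# Isothermal-atmosphere quantities quoted in the text.
R_s = 8.314 / 28.0134e-3       # specific gas constant of N2 [J kg^-1 K^-1]
T_s = 316.0                    # mean surface temperature from the GCM [K]
Gamma_1 = 1.4                  # adiabatic exponent of a diatomic gas
Lambda_0 = 11.1                # eigenvalue of the dominant Hough mode

H = R_s * T_s / g              # pressure height scale (about 10.6 km)
h_L = Gamma_1 * H              # Lamb-mode equivalent depth (about 15 km)

# Resonance when the phase velocity of the forced mode equals V_L = sqrt(g * h_L).
sigma_L = np.sqrt(Lambda_0 * g * h_L) / R_p
omega_L = sigma_L / (2.0 * n_orb)

print(f"H = {H / 1e3:.1f} km, h_L = {h_L / 1e3:.1f} km, omega_L = {omega_L:.0f}")  # ~308
```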
The observed departure between the values of ω_L obtained in the analytical and numerical approaches can be explained by the dependence of the resonance on the atmospheric vertical structure (see e.g. Bretherton 1969;Lindzen & Blake 1972). The analytical value corresponds to the case of an isothermal atmosphere of temperature T_s. In reality, the mean temperature vertical profile is characterized by a strong gradient in the troposphere, the temperature decaying linearly from ∼316 K at z = 0 to ∼160 K at z ≈ 25 km in GCM simulations. As a consequence, the mean pressure height scale of the tidally heated layer is less than the surface pressure height scale, which leads to a smaller equivalent depth and resonance frequency for the Lamb mode.
The other interesting feature highlighted by Fig. 3 is the scaling law of the torque, T ∝ σ⁻¹, in the range of intermediate frequencies, that is, typically, between the thermal and Lamb resonances. This behaviour is described by the analytical model. As discussed before (see Eq. (14)), σ_TP and σ_L are close to each other. The intermediate-frequency range thus corresponds to the case |σ| < σ_TP, which leads us to consider the solution given by Eq. (12). We place ourselves in the configuration where characteristic timescales are clearly separated, that is |σ| ≪ σ_TP and n ≪ |Ω| at the same time. As H/h ∝ σ⁻², the preceding condition implies that H/h ≫ 1, which yields the asymptotic form of the solution given by Eq. (15). By invoking the strong optical thickness of the atmosphere in the infrared (b_J ≫ 1), we remark that we recover analytically the scaling law T ∝ σ⁻¹ observed in Fig. 3 as soon as the condition 1 ≪ H/h ≪ κ⁻¹ b_J² is satisfied. This provides a definition for the intermediate frequency range, which is the range corresponding to σ_J ≪ |σ| ≪ σ_L, where we have introduced the thermal frequency σ_J (Eq. (16)). Basically, σ_J is the frequency for which the vertical wavelength of the mode and the characteristic depth of the heated layer are of the same order of magnitude.
Provided that |σ| ≪ σ_L (or H/h ≫ 1), Eq. (12) can be approximated by the Maxwell-like function given by Eq. (17), where the associated characteristic timescale τ_J and maximal amplitude of the pressure anomaly q_J are given by Eq. (18). We recognize in the form of the function given by Eq. (17) the well-known Maxwell model, which is commonly used to describe the dependence of the tidally dissipated energy on the forcing frequency in the case of solid bodies (e.g. Efroimsky 2012; Correia et al. 2014). Its use in the case of thermal atmospheric tides is discussed in the next section.
Discussion on the Maxwell model
Analytic ab initio approaches based on a linear analysis of the atmospheric tidal response - including this work (cf. previous section) - predict that the imaginary part of surface pressure variations can be expressed as a function of the forcing frequency σ = 2(Ω − n) as (e.g. Ingersoll & Dobrovolskis 1978; Auclair-Desrotour et al. 2017a)

ℑ[δp_{s;2}^{2,σ}] = 2 q_M (σ τ_M) / [1 + (σ τ_M)²],     (19)

the notations τ_M and q_M referring to an effective thermal time constant and the amplitude of the maximum (located at σ = τ_M⁻¹), respectively (the factor 2 sets the maximal amplitude to q_M). This functional form corresponds to the so-called Maxwell model mentioned above. It describes the behaviour of an idealized forced oscillator composed of a spring and a damper arranged in series (Greenberg 2009; Efroimsky 2012; Correia et al. 2014).
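For illustration, the Maxwell form of Eq. (19) can be written as the short function below; the parameter values are placeholders and not fitted results from the simulations.

```python
import numpy as np

def maxwell_pressure_anomaly(sigma, q_m, tau_m):
    """Imaginary part of the semidiurnal pressure anomaly in the Maxwell model,
    2 q_m (sigma tau_m) / [1 + (sigma tau_m)^2], peaking at q_m for sigma = 1/tau_m."""
    x = sigma * tau_m
    return 2.0 * q_m * x / (1.0 + x ** 2)

# Placeholder parameters for illustration only (not fitted values from the study).
q_m = 1000.0      # peak amplitude [Pa]
tau_m = 1.0e5     # effective thermal timescale [s]

sigma = np.linspace(-5.0 / tau_m, 5.0 / tau_m, 201)
imag_dp = maxwell_pressure_anomaly(sigma, q_m, tau_m)
print(imag_dp.max())  # equals q_m, reached at sigma = 1 / tau_m
```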
We note that other works based upon different approaches converged towards the functional form of the Maxwell model. For instance, Correia & Laskar (2001) used the parametrized function f(σ) = σ⁻¹ [1 − e^(−γσ²)] (γ being a real parameter, see Eq. (26) of the article) to mimic the behaviour of the atmospheric tidal torque, while Leconte et al. (2015) retrieved Eq. (19) empirically by analysing results obtained from simulations run with the LMDZ GCM.
An important remark should be made here concerning the behaviour of the tidal torque in the vicinity of the synchronization (i.e. for σ ≈ 0). To our knowledge, most of the early works using the classical tidal theory to study the spin rotation of Venus and ignoring dissipative processes obtained a torque scaling as T ∝ σ⁻¹, and thus singular at the synchronization (e.g. Dobrovolskis & Ingersoll 1980;Correia & Laskar 2001. This is precisely the reason that led Correia & Laskar (2001) to introduce the regular ad hoc parametrized function mentioned above. Conversely, Ingersoll & Dobrovolskis (1978) and, later, Auclair-Desrotour et al. (2017a), derived a Maxwell-like tidal torque analytically by introducing a characteristic thermal time associated with boundary layer processes and radiative cooling. These early results may suggest that dissipative processes are a necessary ingredient for a regular tidal torque to exist at the synchronization.
Although dissipative processes definitely regularize the atmospheric tidal torque at the synchronization (e.g. Auclair-Desrotour et al. 2017a), we showed in Sect. 4.2 that regularity also naturally emerges from approaches ignoring them when the vertical structure equation is solved in a self-consistent way. For a sufficiently small frequency, namely |σ| ≪ σ_J, the torque derived from our analytic solution in the absence of dissipative mechanisms scales as T ∝ σ. Therefore, it seems that the singularity at σ = 0 obtained by early works could result from oversimplifying hypotheses, such as neglecting the three-dimensional aspect of the tidal response or tidal winds. For instance, we note that our analytical model asymptotically converges towards the function obtained by Dobrovolskis & Ingersoll (1980) when the vertical profile of tidal heating tends towards the Dirac distribution used by these authors (i.e. when b_J → +∞).
The above statement means that the analytical solutions given by Eqs. (12) and (13) can be used in practice over the whole range of tidal frequencies without leading to unrealistic behaviours at the vicinity of synchronization, notwithstanding the fact that they were derived assuming that characteristic timescales associated with dissipative processes far exceed the tidal period.
In studies taking into account dissipative processes (e.g. Ingersoll & Dobrovolskis 1978;Auclair-Desrotour et al. 2017a), the parameter τ_M of Eq. (19) can be interpreted as an effective timescale associated with the radiative cooling of the atmosphere in the Newtonian cooling approximation, where radiative losses are assumed to be proportional to temperature variations (Lindzen & McKenzie 1967;Auclair-Desrotour et al. 2017a;Auclair-Desrotour & Leconte 2018). These early analytical works established an expression of the tidal torque (Eq. (20); see e.g. Ingersoll & Dobrovolskis 1978, Eq. (2)), where ε stands for the effective fraction of the incoming flux absorbed by the atmosphere. Substituting δp_{s;2}^{2,σ} by Eq. (19) in Eq. (4) and comparing the obtained result with the preceding expression leads to a relationship between the Maxwell thermal time and maximum (Eq. (21)), the notation G referring to the gravitational constant.
Assuming that the atmosphere is optically thin in the visible frequency range and that the surface temperature corresponds to a black body equilibrium, we write the mean surface temperature as T_s = [(1 − A_s) F/(4 σ_SB)]^(1/4) (Eq. (22)), where we have introduced the Stefan-Boltzmann constant σ_SB and the surface albedo A_s. By substituting T_s by Eq. (22) in Eq. (21), we obtain that the ratio q_M/τ_M does not depend on the surface pressure and scales according to Eq. (23), with ε = 1 − A_s if the atmosphere is optically thick in the infrared. This relationship between τ_M and q_M means that the two parameters of the Maxwell model (Eq. (19)) can theoretically be reduced to the effective thermal timescale only, which is determined by complex boundary layer and dissipative processes in the general case. The scaling law given by Eq. (23) will be tested using GCM simulations in Sect. 5.
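The black-body estimate of Eq. (22) can be checked numerically as in the sketch below; the solar luminosity is an assumed value, and the result may be compared with the mean surface temperature T_s ≈ 316 K obtained from the GCM.

```python
import numpy as np

L_star = 3.828e26        # solar luminosity [W] (assumed value)
sigma_SB = 5.670e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]
A_s = 0.2                # surface albedo (Table 1)
a = 0.723 * 1.496e11     # star-planet distance, the Venus-Sun distance [m]

F = L_star / (4.0 * np.pi * a ** 2)                  # incoming stellar flux
T_s = ((1.0 - A_s) * F / (4.0 * sigma_SB)) ** 0.25   # black-body equilibrium temperature

print(f"F = {F:.0f} W m^-2, T_s = {T_s:.0f} K")  # about 310 K, vs. ~316 K in the GCM
```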
We now compare the Maxwell model to numerical results by assimilating the Maxwell amplitude and timescale to the maximum value of f_odd and its associated timescale, respectively. The ab initio analytical solution given by Eqs. (12) and (13) ('Ana.') and its Maxwell-like form, derived for |σ| ≪ σ_L and given by Eq. (17) ('Maxwell'), are both plotted in Fig. 3 as functions of the normalized forcing frequency (ω). The numerical values of σ_L and σ_TP used for the plot are determined by the eigenfrequency of the resonance associated with the Lamb mode in GCM simulations, that is ω_L ≈ 260. We arbitrarily choose to set σ_J = σ_max (correspondence between the numerically-derived and the Maxwell maxima), which determines the value of b_J (i.e. b_J ≈ 14). Finally, the maximum q_J is obtained by fitting the slope in the intermediate frequency range to numerical results (q_J ≈ 1042 Pa), and provides the value of the parameter J_s (i.e. J_s ≈ 0.05 W kg⁻¹). Figure 3 highlights the fact that the Maxwell model does not allow us to recover the behaviour of the torque in the low-frequency regime. The functional form given by numerical results and the Maxwell function clearly differ in this regime. In particular, the maximal amplitude obtained from GCM simulations is about twice as large as that given by the model. We note that a smaller departure between the Maxwell and numerical maxima would certainly be obtained by fitting the Maxwell function to the whole spectrum of numerical results, and not only to the peak. However, this would also lead to an overestimate of the Maxwell timescale, and the fit would not be satisfactory either. As a consequence, a novel parametrized model has to be introduced to better describe the behaviour of the tidal torque in the low-frequency range. This is the purpose of the next section.
Introduction of a new parametrized model
It has been shown that the ab initio analytic model described in Sect. 4.2 and Appendix B reproduces the main features of the tidal torque in the high-frequency range, namely the resonance associated with the Lamb mode and the asymptotic scaling law T ∝ σ −1 . However, in the low-frequency range, the behaviour of the torque appears to be a little bit more complex than that predicted by the model, which reduces to a simple Maxwell function. This is not surprising since the atmospheric tidal response at low tidal frequencies involves complex non-linear mechanisms, interactions with mean flows, and dissipative processes, which are clearly outside of the scope of the classical tidal theory used to establish the solution given by Eqs. (12) and (13).
Yet, the frequency dependence of the tidal torque has to be characterized in the vicinity of synchronization, as this is where its action on the planetary rotation is the strongest. Our effort has thus to be concentrated on the low-frequency regime and the transition with the high-frequency regime. As they treat the full non-linear 3D dynamics of the atmosphere in a self-consistent way, GCM simulations are particularly useful in this prospect.
To develop an intuition for the behaviour of the torque, it is instructive to look at the logarithmic plot of Fig. 3 (bottom left panel), which enables us to identify the different regimes at first glance. We basically observe two tendencies, highlighted in the plot by slopes taking the form of a straight line, in the zero-frequency limit (log(ω) ≲ 0.5) and the high-frequency asymptotic regime (log(ω) ≳ 1.5). In the interval 0.5 ≲ log(ω) ≲ 1.5, the tidal torque reaches a maximum and undergoes an abrupt decay.
Considering these observations, it seems relevant to approximate the logarithm of the torque by linear functions corresponding to the low- and high-frequency regimes, multiplied by sigmoid activation functions. By introducing the notation χ ≡ log ω, we thus define the parametrized function F_par (Eq. (24)), where b_trans ≈ log(q_max) is the level of the transition plateau, a_1, b_1, a_2, and b_2 the dimensionless coefficients of the linear functions describing the asymptotic regimes, and F_1 and F_2 two sigmoid activation functions (Eq. (25)). In these expressions, the dimensionless parameters χ_1 and χ_2 designate the cutoff frequencies of F_1 and F_2 in logarithmic scale, and d_1 and d_2 the widths of the transition intervals. The corresponding tidal torque is given by Eq. (26). As the scaling law T ∝ σ⁻¹ was derived from the ab initio model of Sect. 4.2 in the high-frequency range, we enforce it by setting a_2 = −1. The eight remaining parameters are then obtained by fitting the function given by Eq. (24) to numerical results (as done previously, the odd function f_odd is used). We thus end up with a set of fitted values and plot the model function F_par in Fig. 3 using these numerical values ('Param.').
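To make the construction concrete, the sketch below implements one possible blend of two linear asymptotes in log-log space with logistic activation functions; the blending formula and the parameter values are illustrative assumptions and do not reproduce the exact functional form of Eqs. (24)-(26) or the fitted coefficients.

```python
import numpy as np

def sigmoid(chi, chi_c, d):
    """Logistic activation switching on around chi_c over a width d."""
    return 1.0 / (1.0 + np.exp(-(chi - chi_c) / d))

def log_torque(chi, a1, b1, a2, b2, b_trans, chi1, d1, chi2, d2):
    """Illustrative blend of a low-frequency slope (a1*chi + b1), a transition
    plateau b_trans, and a high-frequency slope (a2*chi + b2), with a2 = -1
    enforcing the 1/sigma decay; not the exact form of Eqs. (24)-(25)."""
    low = (a1 * chi + b1) * (1.0 - sigmoid(chi, chi1, d1))
    plateau = b_trans * sigmoid(chi, chi1, d1) * (1.0 - sigmoid(chi, chi2, d2))
    high = (a2 * chi + b2) * sigmoid(chi, chi2, d2)
    return low + plateau + high

# Placeholder parameters (illustrative only, not the fitted values of the study).
params = dict(a1=0.7, b1=2.8, a2=-1.0, b2=4.8, b_trans=3.3,
              chi1=0.7, d1=0.1, chi2=1.4, d2=0.1)

omega = np.logspace(-0.5, 2.5, 200)                      # normalized forcing frequency
torque = 10.0 ** log_torque(np.log10(omega), **params)   # torque amplitude [Pa]
print(torque[0], torque.max(), torque[-1])
```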
As shown by Fig. 3, the parametrized function defined by Eq. (24) captures important features that the Maxwell function misses, such as the fact that the tidal torque does not scale linearly with the forcing frequency in the zero-frequency limit, and the rapid decay characterizing the transition between the low- and high-frequency regimes.
Dependence of the tidal torque on the atmospheric composition
Since it clearly has a strong impact, the dependence of the tidal torque on the atmospheric composition has to be discussed. In Appendix C, we treat the case of a CO2-dominated atmosphere with a mixture of water and sulphuric acid (H2SO4) comparable to that hosted by Venus. The obtained spectrum and the associated functions introduced above are plotted in Fig. 4, and shall be compared to those computed for the N2-dominated atmosphere, plotted in Fig. 3 (right panel). Several interesting features may be noted. First, the tidal torque exerted on the CO2-dominated atmosphere is less than half as strong as that exerted on the N2-dominated atmosphere. In particular, the peaks are strongly attenuated. This results from the vertical distribution of tidal heating. Because of the optical thickness of carbon dioxide in the visible frequency range, an important part of the incoming stellar flux is absorbed above the clouds. This is not the case for the N2-dominated atmosphere, where most of the flux reaches the planet surface and is re-emitted in the infrared frequency range, leading to the thermal forcing of dense atmospheric layers located at high pressure levels.
Second, we observe a greater asymmetry between the negative and positive frequency ranges, the function f_even no longer being negligible with respect to f_odd. This is also an effect of the vertical distribution of tidal heating. In the case of the N2-dominated atmosphere, most of the tidal torque is generated by density variations occurring at low altitudes, where the fluid is well coupled to the solid part of the planet by frictional forces. Switching from N2 to CO2 decreases the contribution of these layers, while it increases the contribution of layers located at pressure levels where the strong zonal jets mentioned above are generated. Despite the clear interest of studying the tidal response of CO2-dominated atmospheres, given the similarity of their configuration with that of Venus, we choose to focus in this work on N2-dominated atmospheres owing to their simpler frequency behaviour.
The surface-atmosphere coupling
The specific role played by the surface thermal response is not taken into account in the linear models used to establish the Maxwell-like behaviour of the tidal torque described by Eq. (19) (e.g. Dobrovolskis & Ingersoll 1980; Auclair-Desrotour et al. 2017a). In these early works, the thermal forcing is assumed to be in phase with the incoming stellar flux, which amounts to considering that thermal tides are caused by the direct absorption of the flux. This approximation seems realistic in the case of Venus-like planets given that their atmospheres are optically thick in the visible range, and sufficiently dense for their interactions with the surface to be neglected.
However, it is a rough approximation in the case of optically thin atmospheres, where most of the stellar flux reaches the surface. In this case, thermal tides are mainly caused by the absorption of the flux emitted by the surface in the infrared range, which is delayed with respect to the incoming stellar flux owing to the surface inertia and to dissipative processes such as thermal diffusion. Our N2-dominated atmosphere belongs to this second category. Thus, the role played by the thermal response of the ground should be considered in the present study to explain the observed difference between the obtained tidal torque and the Maxwell model.
Table 2. Scaling laws of τ_max and q_max obtained using the LMDZ GCM for a dry terrestrial planet with a homogeneous N2 atmosphere.
Notes. The scaling law of q_max/τ_max is computed from the two others and should be compared to Eq. (35). Units: a is given in au, p_s in bar, τ_max in days, q_max in Pa; the parameters of the linear fit α, β and R² are dimensionless.
Concerning this point, we note that Leconte et al. (2015) included the heat capacity of the surface C_s in the simplified model they used to establish the Maxwell-like behaviour of the tidal torque (see Sect. 4 in the Material and Methods of their article). Hence, by introducing the heat capacity of the atmosphere/surface system C = C_p p_s/g + C_s and the emission temperature T_e, they related the surface temperature variations δT to the variations of the incoming stellar flux δF_inc through the frequency σ_M = 4σ_SB T_e³/C (the subscript M refers to the Maxwell-like form of the function given by Eq. (28)). As we generally observe that T_e ≈ T_s in our GCM simulations of a 10 bar atmosphere (the mean surface temperature of the planet is well approximated by the black body equilibrium temperature, given by Eq. (22), in this case), this model implies that σ_M should always be less than σ_M^sup = 4σ_SB T_s³ g/(C_p p_s). However, in light of the typical values of τ_M obtained with the GCM (see Table 2), it appears that the above formula for σ_M^sup leads to underestimating σ_M by a factor of 10 to 100 for the case treated in the present study.
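As a quick numerical check of the order of magnitude quoted above, the snippet below evaluates σ_M^sup = 4σ_SB T_s³ g/(C_p p_s) for the reference case. The surface gravity and the heat capacity of the N2-dominated atmosphere are not given in this section, so the values used here (g ≈ 8.9 m s⁻², C_p ≈ 1040 J kg⁻¹ K⁻¹) are assumptions for illustration; T_s ≈ 316 K and p_s = 10 bar are taken from the text.

```python
SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

T_s = 316.0           # mean surface temperature of the reference case [K]
p_s = 10.0e5          # surface pressure, 10 bar [Pa]
g = 8.9               # assumed surface gravity for a Venus-sized planet [m s^-2]
C_p = 1040.0          # assumed heat capacity of an N2 atmosphere [J kg^-1 K^-1]

sigma_M_sup = 4.0 * SIGMA_SB * T_s ** 3 * g / (C_p * p_s)   # upper bound on sigma_M [s^-1]
timescale_days = 1.0 / sigma_M_sup / 86400.0                # associated timescale [days]
print(f"sigma_M_sup = {sigma_M_sup:.2e} s^-1, i.e. a timescale of {timescale_days:.0f} days")
# With these assumed values, sigma_M_sup corresponds to a timescale of roughly
# 190 days, which illustrates the factor-of-10-to-100 gap with the GCM-derived
# sigma_M discussed above.
```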
To understand the role played by the ground in the atmospheric tidal response, we adopt an ab initio approach describing thermal exchanges at the surface-atmosphere interface. Following the approach of Bernard (1962; see also Auclair-Desrotour et al. 2017a), we write the local budget of perturbative power inputs and losses, in which appear the small variations of the incoming stellar flux δF, of the surface temperature δT_s, of the radiative heating by the atmosphere δF_atm, and the diffusive losses in the ground δQ_gr and in the atmosphere δQ_atm. Owing to the absence of water, latent heats associated with changes of state are ignored.
In the general case, δT_s and δQ_atm are coupled with the atmospheric tidal response. In particular, in the Newtonian cooling approximation (i.e. variations of the emitted flux are proportional to temperature variations), δF_atm can be expressed in terms of an effective coefficient of Newtonian cooling K. In order to avoid mathematical complications, we ignore this coupling by assuming either that |δF_atm| ≪ 4σ_SB T_s³ |δT_s|, or, following Bernard (1962), that the variation of the atmospheric flux scales as δF_atm ∝ δT_s, similarly to the variation of the flux emitted by the ground. This allows us to simplify the radiative terms by introducing the effective emissivity of the surface ε_s ≈ 1. With the above approximations, surface temperature variations can be written for a given mode as δT_s^σ = B_s^σ δF_inc^σ. We thus end up with the transfer function B_s^σ (see detailed calculations in Appendix D), where B_s^0 = (4σ_SB T_s³ ε_s)⁻¹, and τ_s designates the characteristic timescale of the surface thermal response, which depends on the thermal inertia of the ground I_gr and of the atmosphere I_atm at the interface (Eq. (33)). We compare this model to numerical results by extracting the Y_2^2 component of the surface temperature distribution δT^{2,σ}_{s;2} provided by GCM simulations, as previously done for the surface pressure distribution. The obtained values are plotted in the complex plane in Fig. 5. In this plot, the horizontal and vertical axes correspond to the real and imaginary parts of the normalized transfer function B_s^σ/B_s^0 (such that δT^{2,σ}_{s;2} = B_s^σ δF^{2,σ}_{inc;2}), respectively. Normalization is obtained by fitting numerical results with the function given by Eq. (32) in the low-frequency range (0 ≤ ω < 3). Figure 5 shows a good agreement between the functional form of the model and numerical results in the zero-frequency limit. However, we observe that the value of the thermal time τ_s ∼ 0.3 days obtained by fitting Eq. (32) to numerical results in the low-frequency range is an order of magnitude smaller than the theoretical value given by Eq. (33), τ_s ≈ 4.6 days (we use the values given by Table 1, set ε_s = 1, and neglect I_atm), which shows the limitations of the approach detailed above.
As the forcing frequency increases, the behaviour of the function interpolated from numerical results starts to change radically. In the vicinity of the resonance (σ ∼ σ_max), the imaginary part of B_s^σ decays abruptly whereas its theoretical analogue keeps growing. This divergence suggests a strong radiative coupling between the surface and the atmosphere, which comes from the fact that the emission of the atmosphere towards the surface δF_atm (see Eq. (30)) can no longer be neglected, as is done in the model. The abrupt variation of the surface thermal lag around the resonance partially explains the behaviour of the tidal torque in this range. Nevertheless, to better understand it, one should study the whole dynamics of the atmospheric tidal response, which is beyond the scope of this work.
In the high-frequency range, that is for σ ≫ τ_s⁻¹, the model predicts that the amplitude of temperature variations should tend to zero. Yet, we observe that δT^{2,σ}_{s;2} increases until reaching a maximum before decaying. This maximum corresponds here to a resonance whose frequency coincides with that of the main Lamb mode identified previously in Sect. 4.2 (see Lamb 1917; Vallis 2006).
Exploration of the parameter space
We now examine the evolution of the tidal torque with the planet semi-major axis (a) and surface pressure (p s ).
Frequency spectra of the tidal torque
Considering the planet defined in Sect. 3.1, we carry out two studies. In study 1, we set p s = 10 bar and we compute frequency spectra of the imaginary part of the Y 2 2 -surface pressure component in the low-frequency range for a varying from 0.3 to 0.9 au. In study 2, we set a = a Venus , that is a = 0.723 au, and frequency spectra are computed for p s varying from 1 to 30 bar. The reference case characterized in the previous section, and parametrized by a = a Venus and p s = 10 bar, is located at the intersection of the two studies.
Limitations concerning the lower bound of the orbital radius range and the upper bound of the surface pressure range come from the spectra of optical properties used in the simulations to compute radiative transfers (see Sect. 3.1), which were produced for temperatures below 710 K. Indeed, for a < 0.3 au or p_s > 30 bar, the planet surface temperature exceeds this maximum. As this might lead to erroneous estimations of radiative transfers, we choose not to treat extremal cases, although there is no formal limitation preventing the GCM from running normally in these conditions. Radiative transfers also determine the convergence timescale necessary for the atmosphere to reach a steady state, P_conv. For study 1, we use the timescale obtained in the reference case, that is 5800 Earth Solar days, considering that the steady state is reached more rapidly in most cases, where the planet is closer to the star (see Eq. (5)). Similarly, to take the dependence of P_conv on the planet surface pressure into account in study 2, we set P_conv to 1100, 2300, 5800 and 14000 Earth Solar days for p_s = 1, 3, 10, 30 bar, respectively.
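For bookkeeping, the two parameter studies and their convergence timescales can be summarized in a small configuration structure such as the one below. The exact sampling of the semi-major axis grid in study 1 is not listed in the text, so only its range is recorded here; the surface pressures and P_conv values are those quoted above.

```python
# Summary of the two GCM parameter studies (study 1: varying a; study 2: varying p_s).
A_VENUS_AU = 0.723

study_1 = {
    "p_s_bar": 10.0,
    "a_au_range": (0.3, 0.9),          # sampled grid not listed in the text; range only
    "P_conv_days": 5800.0,             # reference-case convergence time, reused
}

study_2 = {
    "a_au": A_VENUS_AU,
    "p_s_bar": [1.0, 3.0, 10.0, 30.0],
    "P_conv_days": {1.0: 1100.0, 3.0: 2300.0, 10.0: 5800.0, 30.0: 14000.0},
}
```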
The obtained frequency spectra are plotted in Fig. 6 in linear (left) and logarithmic (right) scales for study 1 (top) and study 2 (bottom). In all plots, points designate the results of GCM simulations obtained with the method described in Sect. 3, while solid lines correspond to the associated cubic spline interpolations. The reference case (a = a_Venus and p_s = 10 bar) is designated by the solid grey line. The numerical values used to produce these plots are given in Appendix E (Tables E.1 and E.2). We retrieve here the features identified in Sect. 4. The tidal torque exhibits maxima located at the transition between the low-frequency and high-frequency asymptotic regimes. The corresponding peaks are slightly higher in the negative-frequency range than in the positive-frequency range owing to Coriolis effects and to the impact of zonal jets on the angular lag of the tidal bulge. As expected, the amplitude of the peaks increases with both the incoming stellar flux and the planet surface pressure. Interestingly, the evolution of q_max and σ_max with a and p_s looks very regular. This suggests that the dependences of the peak maximum and characteristic timescale on the planet surface pressure and distance to the star are well approximated by simple power scaling laws, and this is indeed the case, as shown in Sect. 5.2.
As previously noticed in the study of the reference case, the asymptotic behaviour of the tidal torque in the zero-frequency limit differs from that described by the Maxwell model. In particular, the logarithmic plot of study 2 (bottom right panel) shows that the torque follows the scaling law f_GCM(σ) ∝ σ^{1/2} in cases characterized by low surface pressures, that is 1 and 3 bar. These cases correspond to the thin-atmosphere asymptotic limit, where thermal tides are driven by diffusion in the ground in the vicinity of the surface. We note that the simplified linear model of the surface thermal response detailed in Sect. 4.6 and Appendix D leads to a surface-generated radiative heating scaling as ℑ{δT_s} ∝ σ^{1/2} in the zero-frequency limit, which is precisely the dependence observed in Fig. 6.
Evolution of the thermal peak with the planet semi-major axis and atmospheric surface pressure
Let us now quantify the regular dependence of the peak of the tidal torque on the planet orbital radius and surface pressure observed in the preceding section. We thus have to determine how the two parameters defining the peak, namely its maximum value q_max and associated timescale τ_max, vary with a and p_s. Hence, for each study, we fit the numerical values of q_max and τ_max using a linear regression of the form Y = αX + β, where Y designates the logarithm of q_max (Pa) or τ_max (days), X the logarithm of a (au) or p_s (bar), and α and β the dimensionless parameters of the fit. The values of these parameters are given in Table 2, as well as those of the corresponding coefficients of determination R². We also compute log(q_max/τ_max) for comparison with the theoretical scaling law given by Eq. (23). Linear regressions are plotted in Fig. 7 (blue solid line). In order to provide an estimation of the variability of numerical results, error bars are given for the reference case. These error bars do not literally correspond to a margin of error, but indicate the resolution of the sampling for the frequency and maximum of the tidal torque. For q_max, the amplitude of the error bar is the departure between the maxima of the interpolating function and of the data. For τ_max, the two bounds of the error bar are the values associated with the nearest points of the sampling, designated by the subscripts inf and sup, such that τ_inf ≤ τ_max ≤ τ_sup. These error bars depend on the ratio between the size of a frequency interval and the width of the thermal peak. For example, the thermal peak is undersampled for a = 0.3 au, which makes the fit less reliable in this case.
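The linear regression described above amounts to a least-squares fit in log-log space. A minimal sketch is given below; the (a, q_max) pairs used in it are made-up stand-ins for the GCM results, which are not reproduced here.

```python
import numpy as np

def fit_scaling_law(x, y):
    """Fit log10(y) = alpha * log10(x) + beta; return (alpha, beta, R^2)."""
    X, Y = np.log10(np.asarray(x, float)), np.log10(np.asarray(y, float))
    alpha, beta = np.polyfit(X, Y, 1)
    residuals = Y - (alpha * X + beta)
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((Y - Y.mean()) ** 2)
    return alpha, beta, r2

# Illustrative values only (not the actual GCM results).
a_au = [0.3, 0.5, 0.723, 0.9]
q_max_pa = [7000.0, 2400.0, 1100.0, 600.0]
alpha, beta, r2 = fit_scaling_law(a_au, q_max_pa)
print(f"log(q_max) = {alpha:.2f} log(a) + {beta:.2f}, R^2 = {r2:.3f}")
```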
Comparing the coefficients of determination in Table 2, we observe that a better fit is systematically obtained for q_max than for τ_max. This difference may be explained by the shape of the spectra displayed in Figs. 3 and 6. Since the peak of the tidal torque computed with the GCM is both flatter and broader than that of the Maxwell function, the position of the maximum is more sensitive to small fluctuations than the maximum itself. As a consequence, the variability of q_max is less than the variability of τ_max.
Hence, the linear regression fits the dependence of q_max on a particularly well, while the plot of τ_max exhibits a relatively large variability with respect to the linear trend. We note however that the differences with the fit are not significant, since they remain small compared to the width of the peak. Concerning the dependence of τ_max on a, one may also observe that the slope, given by α = 0.86, is almost half of that predicted by the scaling law of the radiative timescale given by Eq. (5), that is τ_max ∝ n⁻¹ ∝ a^{3/2}.
As regards the ratio q_max/τ_max however, we numerically recover the scaling law predicted by the theoretical model (Eq. (23)) to a good approximation. Expressed in the units of Table 2, this scaling law matches the numerical fit if we assume that ε = 1 − A_s (i.e. the flux re-emitted by the ground is entirely absorbed by the atmosphere).
As may be seen, the dependence of q_max/τ_max on the surface pressure is small (α = 0.13), though not exactly zero as predicted by the model. Regarding the dependence on a, the relative difference between the numerical and theoretical values of α (i.e. 1.55 and 3/2, respectively) is around 3%. However, the value of β computed from GCM simulations (2.77) is higher than that predicted by the model (2.49), despite the fact that the latter is an upper estimate. This difference illustrates the limitations of the Maxwell model, which fails to describe the sharp variations of the tidal torque with the tidal frequency when |σ| ∼ σ_max.
Scaling laws and generic formula for the tidal torque
By carrying out a quantitative study of the evolution of the tidal torque maximum with the planet orbital radius and surface pressure, we demonstrated in the preceding section the regularity observed in Fig. 3. The scaling laws given in Table 2 and plotted in Fig. 7 show that the frequency spectra have the same shape once the horizontal and vertical axes are rescaled according to the obtained dependences on a and p_s. In this section, our purpose is to compute this rescaling in a robust way, by taking into account the whole set of data at our disposal rather than only the maximal value of the torque and the associated timescale. Combining this rescaling with the parametrized model given by Eq. (24), we will obtain a novel generic formula for the frequency behaviour of the thermally generated atmospheric tidal torque.
The parameter with respect to which the axes are rescaled, a or p_s, is denoted by p, and the considered case is subscripted j. A given family is thus composed of N_p couples of numerical vectors (σ_j, T_j), with 1 ≤ j ≤ N_p (see Tables E.1 and E.2), associated with the value p_j of the varying parameter p. For a given couple of vectors, one may introduce the associated interpolating function. We also introduce the renormalized vectors σ̂_j and T̂_j, obtained by rescaling σ_j and T_j by powers of p_j, where α_1 and α_2 are the exponents characterizing the renormalization. The size of the frequency domains covered by the σ̂_j vectors varies with p in the general case. As a consequence, rescaling the axes requires defining the bounds of the largest common interval, the notation N_σ referring to the size of the frequency sampling (typically N_σ = 21, see Tables E.1 and E.2), and σ_{j,1} and σ_{j,N_σ} to the lower and upper bounds of the interval sampled by σ_j, respectively. The values of α_1 and α_2 are obtained by minimizing the squared difference between the renormalized spectra. We note that the parameters derived from these calculations, q_0 and τ_0, slightly differ from q_max and τ_max. They stand for the characteristic amplitude and timescale of the peak, and not for its maximum value and corresponding forcing period, as q_max and τ_max do. These parameters are defined as functions of p (that is, a or p_s) through scaling laws of the same form as above; for the second family (a = a_Venus and variable p_s), we get
log(q_0) = 0.48 log(p_s) + 2.87, (44)
log(τ_0) = 0.30 log(p_s) + 0.038. (45)
Let us recall the units used here: a is given in au, p_s in bar, q_0 in Pa, and τ_0 in days. The last step consists in combining these scaling laws with the parametrized model derived in the reference case (Eq. (24)). Proceeding to the change of variables associated with the renormalization, we obtain the generic parametrized model function
[δp^{2,σ}_{s;2}]_par ≡ q_0 10^{F_par(log|τ_0 σ|)} sign(σ). (46)
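As a usage sketch, the snippet below evaluates the generic formula of Eq. (46) for the second family using the scaling laws of Eqs. (44) and (45). It assumes base-10 logarithms and tidal frequencies expressed in day⁻¹ so that τ_0 σ is dimensionless; f_par stands for the fitted parametrized function of Sect. 5.1 (for instance, the illustrative assembly sketched earlier) and has to be supplied by the caller.

```python
import numpy as np

def q0_tau0_study2(p_s_bar):
    """Scaling laws of Eqs. (44)-(45): a = a_Venus, variable surface pressure.

    Returns (q_0 in Pa, tau_0 in days), assuming base-10 logarithms.
    """
    q0 = 10.0 ** (0.48 * np.log10(p_s_bar) + 2.87)
    tau0 = 10.0 ** (0.30 * np.log10(p_s_bar) + 0.038)
    return q0, tau0

def generic_pressure_anomaly(sigma_per_day, p_s_bar, f_par):
    """Evaluate Eq. (46): q_0 * 10**F_par(log|tau_0 sigma|) * sign(sigma).

    Valid for sigma != 0 (the torque vanishes at synchronization by construction).
    """
    q0, tau0 = q0_tau0_study2(p_s_bar)
    chi = np.log10(np.abs(tau0 * sigma_per_day))
    return q0 * 10.0 ** f_par(chi) * np.sign(sigma_per_day)
```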
We recall here that F_par is the parametrized function given by Eq. (47), in which F_1 and F_2 are the sigmoid activation functions introduced in Sect. 5.1, with the parameters characterizing the generic formula given by Eq. (46) set to their fitted values. The spectra of Fig. 6 are replotted in Fig. 8 using the normalized variables derived from the axes rescaling. In addition to numerical results and their interpolating functions, the tidal torque described by the generic parametrized model (Eq. (46)) is plotted as a function of the normalized tidal frequency τ_0σ (dashed black line). Figure 8 clearly shows the relevance of the rescaling as regards the first family, where the dependence of the torque on the star-planet distance is investigated. After rescaling, the spectra look similar and the model matches them fairly well. As regards the second family, we observe a greater variability of q_0 and τ_0, with a clear separation between the reference and 30 bar cases. However, the frequency behaviour of the torque does not change much from one case to another, and the parametrized function given by Eq. (46) remains a reasonable approximation of its main features.
Conclusions
In order to better understand the behaviour of the atmospheric torque created by the thermal tide, we computed the tidal response of the atmosphere hosted by a terrestrial planet using the LMDZ general circulation model. This work builds on both the early study by Leconte et al. (2015), which was a first attempt to characterize the atmospheric tidal response with this approach, and the early analytical works based upon the linear theory of atmospheric tides (e.g. Auclair-Desrotour et al. 2017a,b). It is motivated by the need to merge these two different approaches into a self-consistent picture. Our aim was to carry out a methodical comparison of their predictions while exploring the parameter space.
Hence, we considered the simplified case of a dry Venus-sized terrestrial planet on a circular orbit around a Sun-like star and hosting a nitrogen-dominated atmosphere. Following the method of Leconte et al. (2015), we computed the atmospheric torque created by the semidiurnal thermal tide as a function of the tidal frequency by extracting the Y_2^2 component of the surface pressure anomaly in the simulations.
As a first step, we characterized the variation of the torque with the forcing frequency for a reference case (p_s = 10 bar and a = a_Venus), and explained its various features with an independent analytical model. As a second step, we explored the parameter space by focusing on the dependence of the tidal torque on the planet orbital radius and atmospheric surface pressure. The obtained results were then used to derive scaling laws characterizing the torque, renormalize the pressure anomaly and forcing period, and finally propose a novel generic parametrized function modelling the frequency behaviour of the torque in a realistic way in the case of a nitrogen-dominated atmosphere.
The first investigation confirmed and extended the results obtained by Leconte et al. (2015). We showed that the torque follows two different asymptotic regimes. In the high-frequency range, the torque decays inversely proportionally to the tidal frequency until it exhibits a resonance peak. These two features are both explained by the analytical solution derived using the ab initio linear theory of atmospheric tides. In particular, the peak corresponds to a resonance associated with the Lamb mode, an acoustic-type wave of wavelength comparable with the planet radius. In the low-frequency range, the torque, which is zero at synchronization, increases following a power law of index ranging from 0.5 to 0.7 until it reaches a maximum. While the increase and the presence of a maximum are predicted by the analytical solution, the exponent and the amplitude of the peak differ significantly. These discrepancies result from the complex interactions between mechanisms lying beyond the scope of standard analytical treatments but resolved in 3D GCM simulations, such as the non-linear effects inherent to the atmospheric dynamics in the vicinity of synchronization, and the strong radiative coupling between the atmosphere and the planet surface. Typically, the low-frequency asymptotic regime of the tidal response is characterized by diurnal oscillations of large amplitude. The resulting differences between the day- and nightside temperature profiles significantly affect the stratification of the atmosphere. This clearly violates the small-perturbation approximation upon which the analytic approach is based, and induces a non-linear coupling between the diurnal and semidiurnal oscillations that is important enough to modify the dependence of the tidal torque on the forcing frequency.
The parametrized function that we propose in the present work (given by Eq. (47)) appears as a good compromise, as it matches numerical results in a more satisfactory way than the Maxwell model while being defined by a reasonably small number of parameters. It is thus perfectly suited to be implemented in evolutionary models of the rotational dynamics of a planet. Nevertheless, the Maxwell-like analytic solution derived by early studies (e.g. Ingersoll & Dobrovolskis 1978; Auclair-Desrotour et al. 2017a) provides a first order-of-magnitude approximation of the torque. It also predicts a relationship between the maximum of the thermal peak and the associated characteristic timescale. By establishing scaling laws governing the evolution of these features with the planet orbital radius and surface pressure, we numerically retrieved this relationship, namely q_max/τ_max ∝ a^{-3/2}.
The fact that the scaling laws match the numerical results well reveals that the torque and the tidal frequency can be normalized by the characteristic amplitude and frequency associated with the low-frequency regime. This was confirmed by the rescaling of the spectra, which shows that the numerical results obtained in all of the treated cases actually describe the same frequency dependence, whatever the star-planet distance and surface pressure. The combination of the parametrized function and scaling laws derived in this work thus leads to a generic empirical formula for the atmospheric tidal torque in the vicinity of synchronous rotation.
In spite of its limitations in the low-frequency regime, the analytic approach remains complementary to GCM calculations owing to the high computational cost of the latter (several days of parallel computation on 80 processors are necessary to produce a spectrum with a sampling of 21 points in frequency). Results obtained from simulations can be used to improve the linear analysis, which provides in return a diagnosis of the physical and dynamical mechanisms involved in the tidal response.
As this study demonstrated the value of the GCM-based numerical method for characterizing the atmospheric tidal torque of terrestrial planets, several prospects can be considered for future works. First, the effects of clouds and optical thickness should be investigated owing to their strong impact on the tidal response. The case of an exo-Earth hosting a cloudy atmosphere may be treated in a similar way to the idealized planet of the present study. Second, it would be interesting to better characterize the dependence of the tidal torque on the atmospheric structure using ab initio analytic models. Third, numerical results and the derived generic parametrized function may be coupled to evolutionary models in order to quantify in a realistic way the contribution of the atmosphere to the evolution of the planet rotation over long timescales.
us to write V^{m,σ}_θ and V^{m,σ}_ϕ as functions of δp^{m,σ}_n/ρ_0. By substituting the horizontal winds by the obtained expressions in ∇_h · V (Eq. (B.11)), and introducing the variable G^{m,σ}_n, the whole system of governing equations given by Eqs. (B.5)-(B.10) can be put, after some manipulations, into a separable form (see Lindzen & Chapman 1969). Here, F^{m,σ} is an operator depending on the x coordinate only, and L^{m,ν} is the Laplace tidal operator, which depends on the θ coordinate only and is formulated as (e.g. Lee & Saio 1997)

L^{m,ν} ≡ (1/sin θ) ∂_θ [ (sin θ/(1 − ν² cos²θ)) ∂_θ ] − (1/(1 − ν² cos²θ)) [ mν (1 + ν² cos²θ)/(1 − ν² cos²θ) + m²/sin²θ ], (B.17)

the quantity ν ≡ 2Ω/σ designating the so-called spin parameter. The above separation of coordinates allows us to expand the Fourier coefficients of G over the eigenfunctions of the Laplace tidal operator, the Hough functions, and to determine the equivalent depth of the mode associated with the triplet (n, m, σ) (e.g. Taylor 1936; Eq. (B.20)). In the absence of resonances, the semidiurnal tidal response is generally dominated by the fundamental gravity mode, indicated by n = 0 (we follow here the indexing notation of Lee & Saio 1997, which associates g-modes with positive n and r-modes with strictly negative n), which corresponds to the associated Legendre function P_2^2 in the static case (e.g. Auclair-Desrotour & Leconte 2018). In the high-frequency regime, Λ^{2,ν}_0 ≈ Λ^{2,1}_0 ≈ 11.1 as long as n ≪ |Ω|. We note that this value, denoted by Λ_0 in the following, can be modified by dissipative processes. For instance, by including friction with the planet surface using a Rayleigh drag of constant characteristic frequency σ_R, one may show that the eigenvalue of the mode tends to the value of the static case, that is Λ^{2,ν}_0 ≈ 6, if σ/σ_R → 0 (see e.g. Volland 1974; Auclair-Desrotour et al. 2017b).
As we focus on the n = 0 mode, we can drop the subscripts and superscripts n, m and σ to lighten notations. The function G^{m,σ}_n is now simply denoted by G, and so on for the tidal heat source, pressure, density, temperature, wind velocity components, eigenvalues and equivalent depths. The usual change of variable G = e^{x/2} y leads to the vertical structure equation in its canonical form, in which appears the dimensionless vertical wavenumber k̂_x. The vertical structure equation describes the behaviour of a forced harmonic oscillator, and k̂_x thus corresponds to the inverse of a length scale of the variation of perturbed quantities across the vertical coordinate. Since the tidal response is adiabatic, k̂_x² ∈ ℝ, and its sign directly determines the nature of waves along the vertical axis. The condition k̂_x² > 0 indicates a propagating mode. Conversely, k̂_x² < 0 corresponds to an evanescent mode. Computing analytic solutions turns out to be a very challenging problem except for a few simplified configurations. Therefore, we treat here the idealized case of the isothermal atmosphere, which is one of these configurations. We acknowledge the limitations of this academic atmospheric structure with respect to real ones, where convective instability leads to a strong temperature gradient near the planet surface. However, this approach appears to be sufficient for the purpose of this appendix.
In the isothermal approximation, the temperature profile is supposed to be invariant with the radial coordinate. In light of Eq. (B.12), it immediately follows that H is a constant, dH/dx = 0, and the vertical wavenumber reduces to k̂_x² = κH/h − 1/4. This expression shows the existence of a turning point for h = 4κH, where the sign of k̂_x² changes. This turning point occurs at the frequency σ_TP. In the reference case of the study, GCM simulations provide T_s ≈ 316 K, which, combined with R_s ≈ 297 J kg⁻¹ K⁻¹, gives H ≈ 10.6 km in the isothermal approximation. An estimation of the normalized frequency ω_TP = σ_TP/(2n) using Eq. (B.12) thus gives ω_TP ≈ 270, showing that the turning point occurs in the high-frequency range and must therefore be taken into account in the calculation of an analytical solution. The condition k̂_x² > 0 (|σ| < σ_TP) corresponds to an oscillatory regime, while k̂_x² < 0 (|σ| > σ_TP) corresponds to an evanescent one. To solve the vertical structure equation, we have to choose a vertical profile for the tidal heating power per unit mass J. Following Lindzen et al. (1968), we opt for a two-parameter profile in which J_s stands for the heat absorbed at the planet surface and b_J is a dimensionless optical depth characterizing the decay of the heating across the vertical coordinate. This profile is derived from Beer's law (e.g. Heng 2017) applied to an isothermal atmosphere.

Appendix C: The case of a Venus-like CO2-dominated atmosphere

The atmospheric torque generated by the thermal tide depends on the atmospheric composition, which has a strong impact on the vertical distribution of the tidal heating through cloud formation and the optical thickness of the gas mixture. In the study, we treat the case of a terrestrial planet hosting a cloudless N2-dominated atmosphere with a small amount of CO2. Hence, we ignore the effects of clouds and compute the thermal tide of an atmosphere that is optically thin in the visible frequency range, where the major part of the stellar flux reaches the planet surface without being absorbed.
In this appendix, we consider the case of a planet hosting a Venus-like CO2-dominated atmosphere with a mixture of water and sulphuric acid (H2SO4) in the same reference configuration (a = a_Venus and p_s = 10 bar). We do not attempt to reproduce exactly the composition and dynamics of the Venus atmosphere, which is a complex problem beyond the scope of this study (see e.g. Lebonnois et al. 2010, 2016), but simply to retrieve its main features (optical opacity, cloud absorption, etc.). As a consequence, we opt for a generic approach excluding a fine tuning of the atmospheric properties. We set the thermal capacity per unit mass of the gas (C_p) to 1000 J kg⁻¹ K⁻¹, which is a typical value of C_p in the case of Venus (e.g. Seiff et al. 1985), where the parameter decreases from 1181 J kg⁻¹ K⁻¹ near the surface to 904 J kg⁻¹ K⁻¹ at an altitude of 50 km.
Similarly, the mean molecular mass is set to 43.45 g mol⁻¹, and the volume mixing ratio of water vapour to 20 ppm (Moroz et al. 1979). We set the diameter of water particles to 3 µm, which is a typical value in the lower cloud (e.g. Knollenberg & Hunten 1980). To take into account the impact of sulphuric acid on the saturation pressure of water vapour p_H2O, we use the prescription given by Gmitro & Vermeulen (1964) for aqueous sulphuric acid (see Eq. (24) of their article). This prescription is written as a function of the local temperature T, with A, B, C, D, and E as empirical constants. The optical properties of the atmosphere used to compute radiative transfers are pre-computed using the HITRAN 2008 database (Rothman et al. 2009) for the Venus atmospheric mixture (instead of the Earth mixture used in the study). The spectrum of the atmospheric tidal torque due to the semidiurnal tide is plotted in Fig. C.1 together with the spectrum of the N2 reference case for comparison.
Because of the opacity of the atmosphere in the visible range, the fraction of the incoming stellar flux reaching the planet surface is less in the case of the CO 2 atmosphere than in the case of the N 2 atmosphere. Particularly, the resonance peak is strongly attenuated. We also observe a greater impact of Coriolis effects, the asymmetry of the tidal torque between negative and positive frequency ranges being more significant.
This difference can be explained by the vertical distribution of tidal heating. As mentioned above, the major part of the stellar flux reaches the planet surface if the atmosphere is composed of N2, which means that the tidal torque is mainly due to density variations occurring in the vicinity of the ground, where friction predominates over Coriolis forces. In the case of the CO2 atmosphere, an important fraction of the incoming energy flux is absorbed at the cloud level. The contribution of this fraction is thus strongly affected by Coriolis effects through the zonal mean flows characterizing the equilibrium dynamical state.
Appendix D: Simplified ab initio analytical model for the ground thermal response
As mentioned in Sect. 4.6, we follow the approach of Bernard (1962) to study the thermal response of the planet surface. In this approach, we consider the surface-atmosphere interface, located at the altitude z = 0, and write the power flux budget for a small perturbation in the framework of a frequency linear analysis. Hence, any quantity q can be expressed as q = q^σ e^{iσt}, where σ is the forcing frequency introduced in Sect. 2. In the following, we omit the superscript σ and use q in place of q^σ, given that we work in the frequency domain. A variation of the effective incoming stellar flux (i.e. where the reflected component has been removed), denoted δF_inc, is absorbed by the planet surface. A fraction δQ_gr of this power is transmitted to the ground by thermal conduction, and another fraction, δQ_atm, is transmitted to the atmosphere through turbulent thermal diffusion. Finally, the increase of surface temperature δT_s generated by δF_inc induces a radiative emission, δF_rad, which is expressed as δF_rad = 4σ_SB T_s³ δT_s in the black body approximation (we recall that σ_SB and T_s are the Stefan-Boltzmann constant and mean surface temperature introduced in Sect. 3.2, respectively). Since the atmosphere is heated by both the incoming stellar flux and the surface thermal forcing, it undergoes a radiative cooling, similarly to the surface. The flux emitted downward to the surface is denoted δF_atm. Thus, the power budget of the thermal perturbation at the interface is expressed as

δF_inc − 4σ_SB T_s³ δT_s + δF_atm − δQ_gr − δQ_atm = 0. (D.1)

To study the surface thermal response without having to consider the full atmospheric tidal response in its whole complexity, it is necessary to ignore the coupling induced by δF_atm. This amounts to assuming either that the emission of the atmosphere towards the planet surface is negligible, or that it is proportional to δT_s (see Bernard 1962). Thus, introducing the surface effective emissivity ε_s, the radiative terms can be reduced to

4σ_SB T_s³ δT_s − δF_atm = 4σ_SB T_s³ ε_s δT_s. (D.2)

The next step consists in defining the thermal exchanges resulting from diffusive processes, δQ_gr and δQ_atm. These fluxes are directly proportional to the gradient of the temperature profile anomaly in the vicinity of the interface, and are expressed as

δQ_gr = k_gr (∂_z δT)_{z=0⁻}, δQ_atm = −k_atm (∂_z δT)_{z=0⁺}, (D.3)

where ∂_z designates the partial derivative in altitude, δT the profile of temperature variations, and k_gr and k_atm the thermal conductivities of the ground and of the atmosphere at z = 0, respectively. By introducing the mean density profile of the planet ρ_0 and the thermal capacity per unit mass of the ground C_gr (the analogous parameter for the atmosphere being C_p; see Sect. 3.1), the corresponding diffusivities can be defined by K_gr ≡ k_gr/[ρ_0(0⁻) C_gr] and K_atm ≡ k_atm/[ρ_0(0⁺) C_p]. Temperature variations in the vicinity of the interface are described by the heat transport equation. We assume that diffusive processes predominate in the z → 0 limit. Moreover, since the typical horizontal length scale is far greater than the vertical one in the thin-layer approximation, the horizontal component of the Laplacian describing diffusive processes can be neglected with respect to the vertical component in both the solid and atmospheric regions.
It follows that

iσδT = K_gr ∂_zz δT, for z ≤ 0, (D.5)
iσδT = K_atm ∂_zz δT, for z > 0. (D.6)

Solving these two equations with constant K_gr and K_atm, and ignoring the diverging term in the solutions, we end up with

δT(z) = δT_s e^{[1+sign(σ)i] z/h^σ_gr}, for z ≤ 0, (D.7)
δT(z) = δT_s e^{−[1+sign(σ)i] z/h^σ_atm}, for z > 0, (D.8)

where we have introduced the frequency-dependent skin thicknesses of heat transport by thermal diffusion in the ground and in the atmosphere, h^σ_gr = √(2K_gr/|σ|) and h^σ_atm = √(2K_atm/|σ|) (Eq. (D.9)). Substituting these solutions into the power budget of Eq. (D.1) leads to the transfer function B^σ_s such that δT_s = B^σ_s δF_inc (Eq. (D.10)), where τ_s can be interpreted as the characteristic timescale of the surface thermal response. The parameter τ_s is a function of the thermal inertia of the ground I_gr ≡ ρ_0(0⁻) C_gr √K_gr and of the atmosphere I_atm ≡ ρ_0(0⁺) C_p √K_atm,

τ_s ≡ (1/2) [(I_gr + I_atm)/(4σ_SB T_s³ ε_s)]². (D.11)

The above expression shows that τ_s compares the efficiency of diffusive processes to that of the radiative cooling of the surface. The thermal time increases with the interface thermal inertia and decays when the surface temperature increases, scaling as τ_s ∝ T_s⁻⁶. The expression of B^σ_s given by Eq. (D.10) highlights two asymptotic regimes. In the low-frequency regime, where |σ| ≪ τ_s⁻¹, the surface responds instantaneously to the forcing δF, leading to a surface temperature oscillation in phase with the incoming stellar flux. At σ = 0, the incoming flux is equal to the radiative flux (δF_inc = 4σ_SB T_s³ ε_s δT_s), and B^σ_s = B^0_s. In the high-frequency regime, where |σ| ≫ τ_s⁻¹, the amplitude of the surface temperature variations decays and tends to zero in the limit |στ_s| → +∞. At the transition, that is |σ| = τ_s⁻¹, ℜ{B^σ_s} = (2/5) B^0_s and ℑ{B^σ_s} = −(1/5) B^0_s. As discussed in Sect. 4.6, the transfer function obtained using GCM simulations is well approximated by Eq. (D.10) in the low-frequency regime (see Fig. 5). But it diverges from the model when the forcing frequency increases, typically for |σ| ≳ τ_s⁻¹. This divergence seems to result mainly from the fact that we ignored the radiative coupling between the atmosphere and the surface associated with δF_atm, although it may be very strong. In particular, this is the case for the resonances of the atmospheric tidal response, where δF_atm increases along with the amplitude of the pressure and temperature oscillations.
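For illustration, the transfer function described above can be written in a closed form that reproduces the quoted limits (B_s = B_s^0 at σ = 0, ℜ{B_s} = (2/5)B_s^0 and ℑ{B_s} = −(1/5)B_s^0 at |σ| = τ_s⁻¹, and B_s → 0 at high frequency). The expression coded below is consistent with these limits, but it is offered as a hedged reconstruction rather than Eq. (D.10) verbatim.

```python
import numpy as np

def surface_transfer_function(sigma, tau_s, B0=1.0):
    """Surface thermal transfer function B_s(sigma), with delta T_s = B_s * delta F_inc.

    Form consistent with the asymptotic limits quoted in the text:
    B_s(0) = B0, B_s(1/tau_s) = (2 - 1j*sign(sigma)) * B0 / 5, B_s -> 0 at high frequency.
    """
    s = np.sign(sigma)
    return B0 / (1.0 + (1.0 + 1j * s) * np.sqrt(np.abs(sigma) * tau_s))

# Quick check of the limits (tau_s and B0 set to 1 for simplicity).
print(surface_transfer_function(0.0, 1.0))   # -> (1+0j)
print(surface_transfer_function(1.0, 1.0))   # -> (0.4-0.2j)
```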
Appendix E: Tables of values obtained with GCM simulations for the exploration of the parameter space
The values used to plot the frequency spectra of Fig. 6 are given in Table E.1 for study 1 (dependence on the star-planet distance) and in Table E.2 for study 2 (dependence on the planet surface pressure). In both cases, the first column corresponds to the normalized tidal frequency ω = (Ω − n)/n.
Steroid Hormones as Transporters to Carry Exogenous Macromolecules into the Target Cell Nuclei in Vivo
Upon injection into the vascular system of rats, testosterone-bovine serum albumin conjugate (testosterone-BSA) is taken up by cells via the process of endocytosis. When it is taken up by the target cells of testosterone such as spermatogenic cells, it enters the nuclei of the cells. However, testosterone-BSA does not enter the nuclei of the non-target cells such as hepatocytes and thymocytes. Similarly, hydrocortisone-BSA conjugate enters the nuclei of its target cells such as hepatocytes and thymocytes. In the vesicular trafficking of testosterone-BSAs into the nucleoplasm, the vesicle membrane is likely to fuse with a nuclear hemifusion diaphragm. IgG coupled with hydrocortisone also enters the hormone-target cell nuclei, with its antigenicity kept intact. These results suggest that steroid hormones could act as transporters for conveying exogenous macromolecules into the target cell nuclei in vivo. Our studies provide a novel insight into the functions of steroid hormones.
Introduction
Steroid hormones can circulate in blood plasma in three different forms: albumin-bound, steroid hormone-binding globulin-bound, and free [1,2]. In the classical genomic model of steroid hormone action, free lipophilic hormones cross the cell membrane by passive transport, bind to their intracellular receptors in the target cells to form hormone-receptor complexes, and subsequently move to the nucleus to exert their genomic effects [3] (Fig. 1). However, there are also reports on cell membrane-initiated effects, or non-genomic activities of steroid hormones [4,5]. Such reports indicate that steroid hormones might have as yet unknown functions, such as the transportation of various macromolecules into the target cell nuclei.
A Hypothesis on the Evolution of Life Based on Genetics
The purpose of my research was to explain the evolution of life based on genetics. The concept of heredity is based on the knowledge that all genes in the progenies are inherited from the parents. In the evolution of life, however, progenies tend to develop some variations in genes which do not originate from their parents. In fact, a great part of evolution is the result of genetic changes occurring inside cells. Viruses are one of the major contributors to such genetic changes. Papovavirus virions such as simian virus 40 (SV40) are composed of proteins and DNA, and are able to enter the nucleus from the exterior of the cell [6] (Fig. 1). When SV40 infects 3T3 cells (mouse embryonic fibroblasts), the virus induces cell transformation. This phenomenon likely represents an example of evolution in unicellular organisms. Therefore, I hypothesized that evolution may have been caused by an accumulation of exogenous genes in a cell nucleus.
Vesicular Transport of SV40 into the Nucleus
In some infection processes, virus-containing vesicles fuse with the outer nuclear membrane, delivering the virus particles into the perinuclear cisterna [7,8]. Single-bilayer diaphragms, such as shared bilayers, have been observed in the processes of fusion between the cell membrane and the secretory vacuole membrane [9], and between the cell membranes of two myoblasts [10], under the transmission electron microscope. Membrane fusion without cytoplasmic fusion, referred to as a shared bilayer or hemifusion, can occur between two membranes [11]. In the search for other entryways to the nucleus, the migration of SV40 was pursued in cultured cells, using ferritin and concanavalin A as cell membrane markers. Ferritin particles introduced into the cytoplasm did not enter the nucleus. In contrast, SV40-containing vesicles with ferritin particles were observed close to a single-bilayer nuclear membrane, or a hemifusion diaphragm [12]. The nucleoplasmic side of the hemifusion diaphragm was covered with electron-dense materials, and cell membrane markers were localized along the nucleoplasmic side of the inner nuclear membrane (Fig. 2). These results suggest that the membranes of SV40-containing vesicles fuse with hemifusion diaphragms in the nuclear envelope in order to transport virus particles into the nucleoplasm, and that the exogenous macromolecules used here as cell membrane markers were transported into the nucleus in this manner [12].
A Hypothesis about Antinuclear Antibodies
In addition to SV40, it is well known that antinuclear antibodies such as immunoglobulin G (IgG) can also enter the nucleus from the exterior of a cell (Fig. 1). Substances such as SV40 tumor antigen and nuclear proteins migrate into the nucleus from the cytoplasm [13,14]. IgG coupled with synthetic peptides containing a nuclear localization signal sequence, such as that of SV40 tumor antigen, moves into the nucleus from the cytoplasm [15] (Fig. 1) by active transport through nuclear pore complexes (NPCs). Macromolecules such as native IgG, which do not possess nuclear localization signals, are otherwise not actively transported through NPCs, and are not likely to pass freely through the cell or nuclear membrane by passive transport, either. Accordingly, the occurrence of autoimmune diseases implies that there must be other routes which allow IgG nuclear entry besides the fusion of the outer nuclear membrane and the vesicle membrane. In addition, mechanisms that prevent digestion by lysosomal enzymes must also exist, protecting the IgG during its vesicular transport from the outside of the cell to the nucleus.
Already in the early 1970s, we (my team back then at Osaka City University and Osaka University, Japan) had developed a method to introduce exogenous substances into the cytoplasm of cultured cells, using inactivated Sendai virus [16]. Another possible method is to make use of the antinuclear antibodies. Immunoglobulin G has been identified as an antinuclear antibody in some autoimmune diseases, targeting endogenous contents within the cell nucleus. In order to verify whether IgG moves into the nucleus from the cytoplasm, IgG was introduced into the cytoplasm [17,18]. However, IgG did not enter the nucleus, as shown in Figure 1. Out of the possible combinations with the substances which can enter the nucleus from the exterior of the cell (Fig. 1), it was speculated that coupling with steroid hormones might enable the nuclear transfer of IgG. For example, hydrocortisone has specific stimulatory effects on the epithelial cells of cultured rat prostate explants [19], even though these cells do not express the glucocorticoid receptor. This fact strongly suggests additional effects of steroid hormones.
Steroid Hormones as Transporters for Carrying Exogenous Macromolecules into the Target Cell Nuclei in Vivo
Steroid hormones conjugated with bovine serum albumin (steroid-BSAs) are used for the analyses of the binding sites of steroid hormones on cell membranes [20,21]. Pietras and Szego suggested that endocytotic vesicles appear to serve as vehicles for nuclear migration of steroid hormones [22]. Steroid hormone-binding globulin coupled with [3H]-testosterone is internalized by receptor-mediated endocytosis in spermatogenic cells, which are target cells of testosterone, and then enters the nuclei of these cells in vitro [23,24]. Our group showed that colloidal gold embedded in epoxy resin becomes visible as silver deposits on the sections after silver enhancement [25]. The gold particles seem to be stable in the lysosome. Upon injection into the vascular system of rats, testosterone-bovine serum albumin conjugate labeled with 2 nm colloidal gold (testosterone-BSA-gold) is taken up by endocytosis into the target cells of testosterone such as round spermatids, and then enters the nucleoplasm [25,26]. In contrast, the nuclei of cells which are not targeted by testosterone, such as thymocytes and hepatocytes, showed very few silver deposits indicating the presence of testosterone-BSA-gold [25]. These results suggest that the nuclear entry of testosterone-BSA-gold is specific to the target cells of testosterone; in other words, testosterone-BSA-gold does not enter the non-target cell nuclei. From the distribution of silver deposits, it has become clear that hydrocortisone-BSA-gold conjugates injected into rats enter the target cell nuclei such as those of hepatocytes and thymocytes. Together with the aforementioned studies on testosterone-BSA-gold conjugates, this indicates that the fate of gold-labeled steroid-BSAs may be decided at the cell membrane level [27].
In order to clearly show the migration route of testosterone-BSA-gold to the nucleoplasm through the nuclear envelope by vesicular trafficking, round spermatids were observed under the electron microscope. In spermiogenesis, the nuclear envelope of a round spermatid is divided into two regions as a consequence of acrosome expansion over the anterior pole of the nucleus: 1) in the post-acrosomal region of the nuclear envelope, the nuclear pores continue to be present during the expansion of the acrosome; 2) in the subacrosomal region, the two nuclear membranes are in close apposition and devoid of pores [28,29]. In round spermatids of the rats injected with testosterone-BSA-gold, the silver deposits were present on the cell membrane, vesicles, Golgi region, acrosome, subacrosomal space, both the post-acrosomal and the subacrosomal nuclear envelope, and the nucleoplasm. The silver deposits were also found in the perinuclear cisterna of the post-acrosomal nuclear envelope, but not in the nuclear pore [26] (Fig. 3). In an observation of the post-acrosomal nuclear envelope without silver enhancement, the outer nuclear membrane showed many irregular invaginations toward the inner nuclear membrane. Furthermore, a double-membrane-like vesicle seemed to be present in the nuclear envelope. A vesicle containing gold particles was present in the pit formed by the invagination of the outer nuclear membrane (Fig. 3). These results suggest that testosterone-BSA-gold, being a macromolecule, is transported by vesicles from the outside of the cells to the nucleoplasm [26]. This route resembles the entryway proposed for the nuclear migration of SV40 in 1991 [12].
From the distribution of the silver deposits in the subacrosomal region, we suggested that testosterone-BSA-gold is also transported from the acrosome to the nucleoplasm through the subacrosomal nuclear envelope (SNE), which is devoid of pores [26]. In observations without silver enhancement, some vesicles containing gold particles were located close to the inner membrane of the acrosome in the subacrosomal space, or were found to be in contact with the SNE, whose nucleoplasmic side was covered with electron-dense material. Furthermore, there were diaphragms, that is, single-bilayer nuclear membranes, in the SNE, which seemed to partially lack the nuclear lamina (Fig. 3). These results indicate the possibility that the membranes of the vesicles fuse with shared bilayers in the SNE [30].
We then investigated immunocytochemically whether the BSA in the steroid-BSAs remains intact in the hormone-target cell nuclei. For this purpose, testosterone-BSA, hydrocortisone-BSA or corticosterone-BSA was injected into the rats. BSA conjugated with steroid hormones could enter the hormone-target cell nuclei while maintaining its antigenicity. These results suggest the possibility that steroid hormones can transport macromolecules to the nucleus [31]. IgG antibodies introduced into cells are functionally stable in the cytoplasm. When an antibody is introduced into the cytoplasm, the antibody reacts with its antigen [32]. Bovine IgG coupled with hydrocortisone injected into the rat vascular system enters the hormone-target cell nuclei in the liver, maintaining its antigenicity [33]. This last finding also confirms that steroid hormones act as carriers to convey exogenous proteins into the target cell nuclei. It may therefore be possible that foreign DNA can be transported to the nucleus of target cells, such as spermatogenic cells, by associating with protein-steroid hormone complexes, and thereby contribute to evolution. However, whether it is more efficient to conjugate DNA with carrier proteins or with steroid hormones for delivery into spermatogenic cell nuclei is still unknown.
Conclusion
Steroid hormones can function as transporters for carrying exogenous macromolecules into the target cell nuclei in vivo. The fate of proteins coupled with steroid hormones seems to be decided at the cell membrane level.
Spatial Loss for Unsupervised Multi-channel Source Separation
We propose a spatial loss for unsupervised multi-channel source separation. The proposed loss exploits the duality of direction of arrival (DOA) and beamforming: the steering and beamforming vectors should be aligned for the target source, but orthogonal for interfering ones. The spatial loss encourages consistency between the mixing and demixing systems from a classic DOA estimator and a neural separator, respectively. With the proposed loss, we train the neural separators based on minimum variance distortionless response (MVDR) beamforming and independent vector analysis (IVA). We also investigate the effectiveness of combining our spatial loss and a signal loss, which uses the outputs of blind source separation as the reference. We evaluate our proposed method on synthetic and recorded (LibriCSS) mixtures. We find that the spatial loss is most effective to train IVA-based separators. For the neural MVDR beamformer, it performs best when combined with a signal loss. On synthetic mixtures, the proposed unsupervised loss leads to the same performance as a supervised loss in terms of word error rate. On LibriCSS, we obtain close to state-of-the-art performance without any labeled training data.
Introduction
Speech recordings are routinely corrupted by interference and background noise. Source separation has been studied as a powerful tool to mitigate these problems in speech systems, e.g., automatic speech recognition (ASR). On the one hand, blind source separation (BSS) methods such as independent component analysis (ICA) [1], independent vector analysis (IVA) [2,3], and independent low-rank matrix analysis (ILRMA) [4] have been an area of intense research. On the other hand, supervised learning of deep neural networks (DNNs) for single-channel source separation has been investigated intensively [5][6][7]. In the multichannel setup, methods that exploit a single-channel separation network to estimate spatial cues, such as DNN-based minimum variance distortionless response (MVDR) beamforming [8][9][10] or BSS with a neural source model [11][12][13], have led to striking improvements in performance. Such linear filtering techniques have also been shown empirically to be better front-ends for ASR than non-linear ones such as single-channel separation [14]. However, supervised learning requires access to a massive amount of mixtures and their corresponding ground-truth signals. Because such a dataset of natural recordings cannot be obtained, many prior works rely on simulation instead [14,15].
Recently, unsupervised source separation with a signal loss, which uses the outputs of BSS as pseudo-targets instead of ground-truth clean signals, has been proposed [16,17]. In [16], time-frequency (TF) masks estimated by a blind spatial clustering technique were used to train a deep clustering model. [17]
proposed a loss function that evaluates Kullback-Leibler Divergence (KLD) between the posterior probability density function of the separated signals of BSS and that of the DNN-based separator to avoid overfitting to the errors in the BSS outputs. These unsupervised losses enforce consistency between the output of BSS and that of DNN-based separators.
In contrast, we propose a spatial loss function for unsupervised multi-channel source separation that enforces consistency of the estimated spatial parameters. We exploit the duality of direction of arrival (DOA) and beamforming: since linear separation relies on the assumption that sources are mixed linearly, the mixing and the beamforming matrix should be inverses of each other. We train a neural separator so that the beamforming vector for a given source has a large inner product with the corresponding steering vector, while being close to orthogonal to those of the other sources. The steering vectors are obtained by conventional DOA estimation with the multiple signal classification (MUSIC) algorithm [18]. We also investigate the effectiveness of combining the spatial loss and the signal loss [10,17]. We train two types of neural separators with our proposed loss function, auxiliary function-based IVA (Aux-IVA) [19] with a neural source model [13], and a neural MVDR beamformer [10]. We evaluate our proposed method with the recorded dataset LibriCSS [20]. Synthetic mixtures are also used for a more detailed analysis that requires the ground truth.
The key contributions are summarized as follows. 1) We propose a spatial loss function for unsupervised multi-channel source separation. We show the superiority of the spatial loss over the signal loss. 2) We conduct extensive experiments using both synthetic mixtures and real-world recordings, whereas most prior works only considered the former. The proposed loss leads to the same performance as a supervised loss in terms of word error rate on synthetic mixtures. On LibriCSS, the proposed method outperforms strong baselines [21] and obtains close to state-of-the-art performance [22] using no data besides that available in LibriCSS and no ground truth.
Background
Assuming N sources are captured by M microphones, the observed signal in the short-time Fourier transform (STFT) domain is represented with a mixing matrix $A_f \in \mathbb{C}^{M \times N}$ as
$$x_{f,t} = A_f s_{f,t} + b_{f,t}, \qquad (1)$$
where $s_{f,t}$ is the clean source vector, $b_{f,t}$ is the background noise, and $f = 1, \ldots, F$ and $t = 1, \ldots, T$ are the frequency bin and time frame indices. Linear source separation is the problem of estimating a demixing matrix $W_f \in \mathbb{C}^{N \times M}$ that recovers the sources as
$$y_{f,t} = W_f x_{f,t}. \qquad (2)$$
Thus, the optimal demixing matrix should satisfy $W_f A_f \approx I$, where $I$ is the identity matrix. In the following, $(\cdot)^{\mathsf{T}}$ and $(\cdot)^{\mathsf{H}}$ denote the transpose and Hermitian transpose of vectors or matrices.
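As a minimal numerical sketch of this model (ours, not the paper's code), the NumPy snippet below builds a random narrowband mixture for one frequency bin and checks that a left inverse of the mixing matrix satisfies the demixing condition $W_f A_f \approx I$; all sizes and the noise level are arbitrary example values.

```python
# Minimal sketch of the narrowband mixing/demixing model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 4, 2, 100          # microphones, sources, time frames

# Random complex mixing matrix A_f (M x N) and source frames s_{f,t} (N x T).
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
s = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))
b = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

x = A @ s + b                 # observed mixture, Eq. (1)

# An ideal demixing matrix is a left inverse of A (here the pseudo-inverse).
W = np.linalg.pinv(A)         # N x M
y = W @ x                     # separated signals, Eq. (2)

print("||W A - I|| =", np.linalg.norm(W @ A - np.eye(N)))                # ~0
print("relative error:", np.linalg.norm(y - s) / np.linalg.norm(s))      # small
```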
Figure 1: Duality of beamforming and DOA. The beam is directed toward the target source and a null toward the interfering source.
DOA Estimation
Knowing the location of each microphone, $d_m \in \mathbb{R}^3$, the steering vector of the $n$th source is
$$a_f(q_n) = \left[ e^{j\omega_f d_1^{\mathsf{T}} q_n}, \ldots, e^{j\omega_f d_M^{\mathsf{T}} q_n} \right]^{\mathsf{T}}, \qquad (3)$$
where $\omega_f = 2\pi f / c$ with sound speed $c$. $q_n \in \mathbb{R}^3$ is a unit-length vector pointing towards the $n$th source, represented with elevation $\phi$ and azimuth $\theta$ as $q = [\cos\theta \sin\phi,\ \sin\theta \sin\phi,\ \cos\phi]^{\mathsf{T}}$. Here we consider the MUSIC method [18]. MUSIC assumes that $a_f(q_n)$ is orthogonal to the subspace spanned by the $M - N$ least significant eigenvectors of the covariance matrix of (1), i.e., the noise subspace. Let $E_f$ be a matrix containing a basis for the noise subspace in its columns. Then, $q_n$ is estimated by finding local maxima of the following cost function,
$$P(q) = \sum_f \frac{1}{\| E_f^{\mathsf{H}} a_f(q) \|^2}. \qquad (4)$$
Hereafter, we denote $a_f(q_n)$ as $a_{n,f}$.
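The following self-contained NumPy sketch (our illustration, not the authors' implementation) puts Eqs. (3) and (4) together for a single frequency: it simulates a small array, forms the spatial covariance matrix, extracts the noise subspace, and scans the MUSIC pseudo-spectrum over azimuth only. The array geometry, frequency, and grid are assumptions made solely for the example.

```python
# Illustrative MUSIC pseudo-spectrum for one frequency bin (2-D, azimuth only).
import numpy as np

rng = np.random.default_rng(1)
c, freq = 343.0, 1000.0                     # sound speed [m/s], frequency [Hz]
omega = 2 * np.pi * freq / c                # omega_f = 2*pi*f/c

# Microphone positions d_m (4-mic square array, coordinates in meters).
D = np.array([[0.00, 0.00], [0.05, 0.00], [0.05, 0.05], [0.00, 0.05]])
M, N = len(D), 2                            # microphones, sources

def steering(theta):
    """Far-field steering vector a_f(q) for azimuth theta (Eq. (3), planar case)."""
    q = np.array([np.cos(theta), np.sin(theta)])
    return np.exp(1j * omega * D @ q)       # shape (M,)

# Simulate two sources at 30 and 100 degrees plus a little sensor noise.
true_doas = np.deg2rad([30.0, 100.0])
A = np.stack([steering(t) for t in true_doas], axis=1)                  # M x N
S = rng.standard_normal((N, 500)) + 1j * rng.standard_normal((N, 500))
X = A @ S + 0.05 * (rng.standard_normal((M, 500)) + 1j * rng.standard_normal((M, 500)))

R = X @ X.conj().T / X.shape[1]             # spatial covariance matrix
eigval, eigvec = np.linalg.eigh(R)          # eigenvalues in ascending order
E = eigvec[:, : M - N]                      # noise subspace basis E_f

grid = np.deg2rad(np.arange(0.0, 180.0, 1.0))
P = np.array([1.0 / np.linalg.norm(E.conj().T @ steering(t)) ** 2 for t in grid])

# Pick the N largest local maxima of the pseudo-spectrum (Eq. (4)).
is_peak = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
peak_idx = np.where(is_peak)[0] + 1
top = peak_idx[np.argsort(P[peak_idx])[-N:]]
print("estimated DOAs [deg]:", sorted(np.rad2deg(grid[top])))   # close to 30 and 100
```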
Source Separation
Here we mention two separation methods, AuxIVA and MVDR.
AuxIVA estimates the demixing matrix by likelihood maximization assuming a prior distribution of the sources $p(y_{n,f,t})$, where $y_{n,f,t}$ is the $n$th separated signal. The demixing matrix is estimated by minimizing the following negative log-likelihood,
$$\mathcal{L}_{\mathrm{IVA}} = \sum_{n,f,t} \left( \frac{|w_{n,f}^{\mathsf{H}} x_{f,t}|^2}{r_{n,f,t}} + \log r_{n,f,t} \right) - 2T \sum_f \log |\det W_f|, \qquad (5)$$
where $w_{n,f}^{\mathsf{H}}$ is the $n$th row vector of $W_f$ and $r_{n,f,t}$ the variance of the source prior $p(y_{n,f,t})$. $W_f$ can be updated with techniques such as iterative projection [19] or iterative source steering (ISS) [23]. While conventional IVA uses a fixed source prior, replacing the source model with a trained DNN has recently demonstrated high performance [13]. MVDR computes the demixing vector that minimizes the noise variance under a distortionless constraint on the target source,
$$w_{n,f} = \frac{\big(R^{(\mathrm{n})}_{n,f}\big)^{-1} \tilde{a}_{n,f}}{\tilde{a}_{n,f}^{\mathsf{H}} \big(R^{(\mathrm{n})}_{n,f}\big)^{-1} \tilde{a}_{n,f}}, \qquad (6)$$
where $\tilde{a}_{n,f}$ is the steering vector of the $n$th source and $R^{(\mathrm{n})}_{n,f}$ is the spatial covariance matrix of the noise. The steering vector $\tilde{a}_{n,f}$ and $R^{(\mathrm{n})}_{n,f}$ can be obtained using DNNs [8].
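As a hedged illustration of the MVDR solution in Eq. (6) (not the paper's code), the snippet below computes a narrowband MVDR beamformer from an assumed steering vector and noise covariance matrix and verifies the distortionless constraint $w^{\mathsf{H}} a = 1$.

```python
# Narrowband MVDR beamformer sketch for one source and one frequency bin.
import numpy as np

rng = np.random.default_rng(2)
M = 4

# Assumed inputs: steering vector of the target and noise spatial covariance.
a = np.exp(1j * rng.uniform(0, 2 * np.pi, M))          # target steering vector
V = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_noise = V @ V.conj().T + 1e-3 * np.eye(M)            # Hermitian positive definite

# MVDR solution, Eq. (6): w = R^-1 a / (a^H R^-1 a).
Rinv_a = np.linalg.solve(R_noise, a)
w = Rinv_a / (a.conj() @ Rinv_a)

print("distortionless response w^H a =", w.conj() @ a)         # = 1 (up to rounding)
print("output noise power:", np.real(w.conj() @ R_noise @ w))  # minimized given w^H a = 1
```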
Unsupervised Learning using Signal-based Loss
Figure 2: Overview of our proposed method. We train a separator with blindly estimated speech signals and mixing matrices.

Recently, learning source separation using the separated signals from a BSS method as the pseudo-target signals has been proposed [16,17]. Let $\bar{y}$ and $\hat{y}$ be the separated signal of BSS and that of the DNN-based separator, respectively. The training is simply done with the loss between the two signals, which we call the signal loss,
$$\mathcal{L}_{\mathrm{sig}} = \mathcal{L}(\hat{y}, \bar{y}), \qquad (7)$$
where $\mathcal{L}$ is a permutation invariant loss. Here we consider two loss functions, KLD [17] and CI-SDR [10]. KLD: Assuming that each TF bin of the source follows a time-varying complex Gaussian distribution, i.e., $y_{n,f,t} \sim \mathcal{N}(0, r_{n,f,t})$, the KLD loss for a single TF bin is
$$\mathrm{KLD}_{n,f,t} = \log\frac{\hat{r}_{\pi(n),f,t}}{\bar{r}_{n,f,t}} + \frac{\bar{r}_{n,f,t}}{\hat{r}_{\pi(n),f,t}} - 1, \qquad (8)$$
where $\bar{r}_{n,f,t}$ and $\hat{r}_{n,f,t}$ are the variances of the BSS output and the network output, and $\pi(n)$ denotes the optimal assignment among all permutations to minimize the loss. The final loss is obtained by summing (8) over all TF bins and sources.
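The structure of the permutation-invariant signal loss can be sketched as follows; this is our own illustration, in which the per-bin term follows the zero-mean complex Gaussian KLD of Eq. (8) and the variances are approximated by magnitude-squared spectrograms, which is an assumption of the sketch rather than the paper's exact parameterization.

```python
# Sketch of a permutation-invariant signal loss with a Gaussian KLD per TF bin.
from itertools import permutations
import numpy as np

def kld_gaussian(r_bss, r_dnn, eps=1e-8):
    """KLD between zero-mean complex Gaussians N(0, r_bss) and N(0, r_dnn), per TF bin."""
    return np.log((r_dnn + eps) / (r_bss + eps)) + (r_bss + eps) / (r_dnn + eps) - 1.0

def signal_loss(Y_bss, Y_dnn):
    """Permutation-invariant signal loss between BSS outputs and network outputs.

    Y_bss, Y_dnn: complex STFT arrays of shape (N, F, T) for the N sources.
    Variances are approximated by |Y|^2 (an assumption made for this sketch).
    """
    N = Y_bss.shape[0]
    r_bss = np.abs(Y_bss) ** 2
    r_dnn = np.abs(Y_dnn) ** 2
    best = np.inf
    for perm in permutations(range(N)):          # minimum over source permutations
        loss = sum(kld_gaussian(r_bss[n], r_dnn[p]).sum() for n, p in enumerate(perm))
        best = min(best, loss)
    return best

# Toy usage: the loss is (near) zero for a permuted copy of the same signals.
rng = np.random.default_rng(3)
Y = rng.standard_normal((2, 129, 50)) + 1j * rng.standard_normal((2, 129, 50))
print(signal_loss(Y, Y[::-1]))   # permuted copy -> loss ~ 0
```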
Proposed Method
We propose a spatial loss function for unsupervised multi-channel source separation. We exploit the duality of DOA and beamforming. The beamforming vector for a given source should have a large inner product with the corresponding steering vector, while being close to orthogonal to those of competing sources. DOAs are estimated by a MUSIC-based technique [24]. Different from the signal-based approach described in Sec. 2.3, our proposed method explicitly imposes spatial constraints on the demixing matrix.
Spatial Loss Function
Table 1: Average SDR in decibels, and STOI, PESQ, and WER of separated signals of IVA from the test set of the WSJ1-mix dataset. Training is done with 3 channels, and evaluation is done with 2, 3 and 6 channels. The proposed method is indicated by a star (★).

We propose a spatial loss function that exploits DOAs estimated by MUSIC as the pseudo-targets. As shown in Fig. 1, there is a duality between beamforming and DOA, i.e., the beam should be directed toward the target source and the null toward the interference. Let $\bar{A}_f$ be the mixing matrix computed with the estimated DOAs and $\hat{W}_f$ be the demixing matrix estimated by the neural separator. Our proposed loss function is
$$\mathcal{L}_{\mathrm{doa}} = \min_{\Pi} \sum_f \mathbf{1}^{\mathsf{T}} \big|\, |\hat{W}_f \bar{A}_f| - \Pi \,\big|\, \mathbf{1},$$
where $|\hat{W}_f \bar{A}_f|$ is normalized so that all the elements range from 0 to 1, $\mathbf{1}$ is a vector whose elements are all one, and $\mathbf{1}^{\mathsf{T}} X \mathbf{1}$ computes the sum of all the elements of a matrix $X$. $\Pi$ denotes a permutation matrix, where only one element in each row and each column is 1 and the others are 0. For example, if $N = 2$, $\Pi$ is either $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ or $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. Here we consider two options for normalizing $|\hat{W}_f \bar{A}_f|$. One is to normalize the rows and columns of $|\hat{W}_f|$ and $|\bar{A}_f|$, respectively, and the other is to normalize the rows of $|\hat{W}_f \bar{A}_f|$. We denote the former as the DOA1 loss and the latter as the DOA2 loss.
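A compact sketch of the spatial loss as we read it from the description above (not the authors' code): it row-normalizes $|\hat{W}_f \bar{A}_f|$, which corresponds roughly to the DOA2 option, and measures the element-wise distance to the closest permutation matrix with one global permutation across frequencies. The normalization by the row maximum and the reduction over frequencies are assumptions of the sketch.

```python
# Sketch of the DOA (spatial) loss: normalized |W A| should look like a permutation matrix.
from itertools import permutations
import numpy as np

def spatial_loss(W_hat, A_bar):
    """Spatial loss over frequencies.

    W_hat: (F, N, M) demixing matrices from the neural separator.
    A_bar: (F, M, N) mixing matrices built from steering vectors of estimated DOAs.
    """
    F, N, _ = W_hat.shape
    P = np.abs(W_hat @ A_bar)                      # (F, N, N)
    P = P / (P.max(axis=2, keepdims=True) + 1e-8)  # row normalization into [0, 1]

    perms = [np.eye(N)[list(p)] for p in permutations(range(N))]
    # One global permutation across frequencies (IVA outputs are frequency-aligned).
    costs = [np.abs(P - Pi[None]).sum() for Pi in perms]
    return min(costs)

# Toy check: if W_f is the exact (pseudo-)inverse of A_f, the loss is (near) zero,
# and a permutation of the outputs is also accepted.
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 3, 2)) + 1j * rng.standard_normal((6, 3, 2))   # F=6, M=3, N=2
W = np.stack([np.linalg.pinv(Af) for Af in A])                              # (6, 2, 3)
print(spatial_loss(W, A))            # ~0
print(spatial_loss(W[:, ::-1], A))   # permuted outputs -> also ~0
```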
DOA Estimator
We estimate DOAs using MUSIC with the MM refinement algorithm [24]. For the highly overlapped case, we simply obtain $q_n$ from the entire signal $x$ with length $T$. However, when the overlap is small, we found that estimating in this way often missed the DOA of the less active source. To bypass this problem, we estimate DOAs at regular intervals with a sliding window and cluster them. The detailed procedure is as follows.
1. Obtain N DOAs for each window of length L and shift S.
2. Group them into K ≥ N clusters C_1, . . . , C_K with E_1, . . . , E_K elements by two-dimensional k-means on elevation φ and azimuth θ. The centroid of C_k, (φ_k, θ_k), represents the DOA of the kth source.
3. Remove C_k if E_k is less than a threshold E_thres.
4. Remove the cluster with the lower number of elements among any pair that satisfies |θ_k − θ_k′| < θ_thres (1 ≤ k < k′ ≤ K).
5. Select the top-N DOAs with the largest E_k.
Because sudden estimation errors occur at some time frames and have a large impact on clustering, we set K larger than N in step 2 and remove such outliers in step 3. In step 4, clusters with close centroids are considered to belong to the same source, so the smaller one is removed. Finally, we obtain N′ ≤ N DOAs from the centroids of the remaining clusters. Since we define the spatial loss assuming the existence of N sources, mixtures with N′ < N are not used for training.
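A rough Python sketch of this clustering procedure is given below (our illustration); the per-window MUSIC estimates are passed in as an array rather than computed, and the thresholds are example values only.

```python
# Sketch of sliding-window DOA estimation followed by k-means clustering and pruning.
import numpy as np
from sklearn.cluster import KMeans

def cluster_doas(doa_per_window, N, K=3, e_frac=0.1, theta_thres=10.0):
    """doa_per_window: array of shape (num_windows * N, 2) with (elevation, azimuth)
    in degrees, i.e., the N DOAs estimated by MUSIC in every window (step 1), stacked."""
    # Step 2: k-means on (elevation, azimuth) with K >= N clusters.
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(doa_per_window)
    centroids = km.cluster_centers_
    counts = np.bincount(km.labels_, minlength=K).astype(float)

    # Step 3: drop small (outlier) clusters; threshold relative to the total count.
    keep = counts >= e_frac * counts.sum()

    # Step 4: among clusters with close azimuths, keep only the larger one.
    for i in range(K):
        for j in range(i + 1, K):
            if keep[i] and keep[j] and abs(centroids[i, 1] - centroids[j, 1]) < theta_thres:
                keep[j if counts[j] < counts[i] else i] = False

    # Step 5: return at most N centroids with the largest counts (N' <= N DOAs).
    idx = [k for k in np.argsort(-counts) if keep[k]][:N]
    return centroids[idx]

# Toy usage with synthetic window-wise estimates around two true directions.
rng = np.random.default_rng(5)
true = np.array([[90.0, 30.0], [85.0, 100.0]])            # (elevation, azimuth) in degrees
outlier = np.array([[60.0, 170.0]])
est = np.vstack([true[0] + rng.normal(0, 2.0, (40, 2)),
                 true[1] + rng.normal(0, 2.0, (40, 2)),
                 outlier])
print(cluster_doas(est, N=2))
```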
The spatial loss requires knowledge of the microphone locations to obtain DOAs. It is thus applicable to datasets where most samples are recorded with a few known devices. We believe this to be reasonable. For most applications of interest, the geometry of the microphone array is known, e.g., smart speakers and conferencing systems. If it is unknown, but sufficient recordings from the same device are available, it can be estimated using blind calibration techniques [25,26].
Neural Separator
As in Fig. 2, we first apply weighted prediction error (WPE) [27], and then the dereverberated mixture is separated by the neural separator. We consider two neural separators, Aux-IVA with a neural source model (DNN-IVA) [13] and DNN-based MVDR beamforming (DNN-MVDR) [8][9][10]. In DNN-IVA, the DNN estimates $1/r_{n,f,t}$ in (5) in the form of a TF mask, and the spatial model update is done with ISS [23]. The network is composed of three gated linear units [28] and a transposed convolution layer as in [13]. Compared to [13], the size of the intermediate feature is changed to 256, and group normalization [29] with four groups is used instead of batch normalization. In DNN-MVDR, the DNN estimates TF masks of the target source and the noise for each source, which are used to compute the demixing matrix. The network consists of three bi-directional long short-term memory layers and two feed-forward layers as in [10], where we changed the number of units to 512.
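To make the mask-estimation idea concrete, here is a deliberately simplified PyTorch sketch that only imitates the ingredients named above (gated linear units, group normalization with four groups, a transposed convolution, an intermediate size of 256); it is not the authors' architecture, and the layer counts, kernel sizes, and input/output conventions are assumptions.

```python
# Simplified mask-estimator sketch: GLU conv blocks + group norm + transposed conv.
import torch
import torch.nn as nn

class GLUBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 2 * out_ch, kernel_size=3, padding=1)
        self.norm = nn.GroupNorm(num_groups=4, num_channels=out_ch)

    def forward(self, x):
        a, b = self.conv(x).chunk(2, dim=1)   # gated linear unit: a * sigmoid(b)
        return self.norm(a * torch.sigmoid(b))

class MaskNet(nn.Module):
    """Maps a magnitude spectrogram (B, 1, F, T) to a positive mask of the same size."""
    def __init__(self, hidden=256):
        super().__init__()
        self.blocks = nn.Sequential(
            GLUBlock(1, hidden), GLUBlock(hidden, hidden), GLUBlock(hidden, hidden)
        )
        self.out = nn.ConvTranspose2d(hidden, 1, kernel_size=3, padding=1)

    def forward(self, mag):
        return torch.sigmoid(self.out(self.blocks(mag)))  # mask values in (0, 1)

net = MaskNet()
mask = net(torch.rand(1, 1, 129, 50))     # one toy spectrogram
print(mask.shape)                         # torch.Size([1, 1, 129, 50])
```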
Datasets and Experimental Setup
WSJ1-mix: We used six-channel synthetic mixtures to evaluate the separation performance with speech metrics using the ground truth. It consisted of speech from the WSJ1 corpus [30] and noise from the CHIME3 dataset [31], sampled at 16 kHz. The reverberation times were chosen randomly from 200 ms to 600 ms. The number of sources was two, with relative power from −5 dB to 5 dB. The noise was scaled to attain an SNR between 10 dB and 30 dB. Training, validation, and test sets contained 37416, 503, and 333 mixtures, approximately 98.5, 1.33 and 0.85 hours of mixtures, respectively. To evaluate WER, we trained an ASR system on clean anechoic signals using the wsj/asr1 recipe from the ESPNet framework [32]. WER for the clean, anechoic test set was 9.25%. During training, the batch size was 16 and the input signal length was 7 seconds. Network parameters were optimized using the Adam optimizer [33] with learning rate $10^{-5}$. When combining the signal and spatial losses, we took a weighted sum of them, i.e., $\mathcal{L}_{\mathrm{doa}} + \alpha \mathcal{L}_{\mathrm{sig}}$, where α was set to 0.2 for DNN-IVA and 1.0 for DNN-MVDR. When training DNN-IVA, three of the six channels were used and the number of iterations was 15. DNN-MVDR was trained using all six channels. MUSIC and Gauss-IVA also used six channels. We used different STFT parameters for WPE and separation: the window/shift sizes were set to 512/128 and 4096/1024, respectively. The number of iterations, the delay, and the tap length of WPE were set to 3, 3 and 10. In testing of IVA, we evaluated the performance using two, three and six channels with 30, 25 and 15 iterations. When using more than two channels, the two separated signals with the highest power were evaluated. For testing, we used the aver-

Table 3: WER on the LibriCSS dataset. Sessions 1 to 9 were used for training. The proposed method is indicated by a star (★). Methods with a dagger (†) are initialized with weights pre-trained on WSJ1-mix. The others are trained from scratch.

LibriCSS: We used the recorded LibriCSS dataset [20], where we conducted utterance-wise evaluation. We used sessions 1 to 9 as the training set, and session 0 as the test set.
The training setup was basically the same as that on WSJ1-mix. The difference was that BSS used only three channels to generate pseudo-targets, because using all seven channels led to over-separation, resulting in degradation of WER. DNN-IVA also used only three channels both in training and test. Note that MUSIC used all 7 channels to estimate DOAs, where we set the window size L to 15, the shift S to 1, the number of clusters K to 3, the azimuth threshold θ_thres to 10, and the threshold to remove the outlier clusters E_thres to $0.1 \sum_k E_k$. When training with the KLD or SDR loss only, the learning rate was $10^{-5}$; otherwise $5 \times 10^{-5}$.

Table 1 shows the evaluation results of the IVA algorithms on the WSJ1-mix test set. The evaluation metrics are SDR [34,35], short-time objective intelligibility (STOI) [36], perceptual evaluation of speech quality (PESQ) [37], and WER. The performance of supervised learning was also evaluated as an upper bound for unsupervised learning. At all channel numbers, DNN-IVA trained with our proposed DOA loss outperformed conventional Gauss-IVA and DNN-IVA trained with the KLD or SDR loss. In terms of WER, our proposed method achieved comparable or even better performance than supervised learning. We conjecture this significant WER reduction to be due to the fact that our spatial loss imposes a distortionless constraint toward the target source direction, as in MVDR. In addition, it is less susceptible to the inter-frequency permutation problem, because the DOA $q_n$ is the same regardless of frequency. Furthermore, the spatial loss focuses on the direct signal, which would make the model more robust against reverberation. When we used both the spatial and the signal loss, no significant improvement was confirmed. This implies that the spatial information is sufficient to learn the source model in IVA.

Table 2 shows the evaluation results of DNN-MVDR. Although the DOA loss worked well for IVA, it gave inferior performance to the SDR loss for MVDR. This would be due to the fact that IVA estimates the demixing matrix so that the separated signals are mutually independent, whereas MVDR estimates the demixing vector for each source independently. Thus, the spatial information alone was not enough to train the network. However, we found that combining the DOA and SDR losses in learning was better than either individually.

Table 3 shows the WERs of separated signals from the LibriCSS dataset. WERs per overlap ratio and their average are listed. Compared to the signal losses, our proposed spatial loss led to higher performance. Both the SDR and KLD losses gave only comparable performance to the pseudo-teacher, i.e., Gauss-IVA. The poor performance on overlap-free data implies that the signal loss leads to over-separation. Compared to DNN-IVA trained on WSJ1-mix, which contains ten times more data than LibriCSS, our proposed method achieved higher performance. In addition, our proposed method also outperformed Conformer-based MVDR beamforming [21] with less than one twentieth the amount of data. Furthermore, the performance was further improved by fine-tuning the model trained with supervision on WSJ1-mix to LibriCSS using the proposed unsupervised learning. Although our proposed method did not exceed the state-of-the-art performance [22], we showed high performance with a small amount of data, which demonstrates the effectiveness of unsupervised learning with in-domain data.
Conclusions
We proposed a spatial loss function that utilizes DOAs estimated by classic techniques. It trains a neural separator so that it directs a beam toward the target source direction and a null toward the interference direction. We evaluated two neural separators, DNN-IVA and DNN-MVDR, using synthetic mixtures and the real-recorded LibriCSS dataset. In experiments with synthetic mixtures, we showed that the spatial loss worked especially well for DNN-IVA, where it led to the same performance as the supervised loss. It also performed well for DNN-MVDR when combined with the signal loss. The spatial loss also gave high performance on LibriCSS, outperforming strong baselines with a small amount of training data and without any labels.
Biochemical Characterization of Molybdenum Cofactor-free Nitrate Reductase from Neurospora crassa*
Background: Eukaryotic nitrate reductase maturation is poorly understood. Results: Binding of molybdenum cofactor to apo-nitrate reductase is independent from other prosthetic groups. Conclusion: Active site formation of eukaryotic nitrate reductase is an autonomous process intrinsically tied to nitrate reductase dimerization. Significance: The understanding of molybdenum cofactor-dependent enzyme maturation is of significance as molybdenum enzymes are involved in essential cellular processes. Nitrate reductase (NR) is a complex molybdenum cofactor (Moco)-dependent homodimeric metalloenzyme that is vitally important for autotrophic organisms as it catalyzes the first and rate-limiting step of nitrate assimilation. Besides Moco, eukaryotic NR also binds FAD and heme as additional redox active cofactors, and these are involved in electron transfer from NAD(P)H to the enzyme molybdenum center where reduction of nitrate to nitrite takes place. We report the first biochemical characterization of a Moco-free eukaryotic NR from the fungus Neurospora crassa, documenting that Moco is necessary and sufficient to induce dimer formation. The molybdenum center of NR reconstituted in vitro from apo-NR and Moco showed an EPR spectrum identical to holo-NR. Analysis of mutants unable to bind heme or FAD revealed that insertion of Moco into NR occurs independently from the insertion of any other NR redox cofactor. Furthermore, we showed that at least in vitro the active site formation of NR is an autonomous process.
In autotrophic organisms the major pathway for assimilating inorganic nitrogen is the nitrate assimilation pathway, with nitrate reductase (NR) 2 catalyzing the first and rate-limiting step. NR is a molybdenum cofactor (Moco)-dependent enzyme that shares its cofactor with a family of four other plant enzymes that are involved in sulfite detoxification, purine catabolism, and abscisic acid biosynthesis (1). As a unifying characteristic, these enzymes catalyze two-electron transfer (redox) reactions involving a molybdenum atom coordinated in Moco (2). This cofactor is a unique pterin derivative conserved among all kingdoms of life, and with the exception of the bacterial nitrogenase, it is found in all enzymes that hold a molybdenum atom in their active site. In addition to Moco, eukaryotic NRs harbor with heme and FAD two other prosthetic groups. For nitrate reduction, NR uses NAD(P)H as an electron donor that provides two electrons to the FAD cofactor that are subsequently transferred to the heme group and finally reduce the molybdenum center of the enzyme (3). In eukaryotes, NRs form a family of enzymes that share a high degree of sequence homology. Differences exist concerning the use of NADH or NADPH as electron donors for nitrate reduction. NADH-specific NR forms are found in higher plants and algae, in contrast to the less frequently occurring NAD(P)H-bispecific NR forms found in higher plants, algae, and fungi. NR forms specific for NADPH are only found in fungi (4). Bacterial NRs are completely different from their eukaryotic counterparts both in sequence and in structural composition. Eukaryotic NR is only functional as a homodimer. The NR monomer has a size of ϳ100 kDa and harbors its three redox cofactors, Moco, heme, and FAD, in three structurally distinct domains (4,5) (Fig. 1A). Furthermore, there is a dimerization domain that is located C-terminal to the Moco binding domain. The NAD(P)H and FAD binding domains form the so-called cytochrome b reducing fragment (CbR). Combination of the heme binding domain with the CbR fragment builds the so called cytochrome c reducing fragment (CcR). Both CbR and CcR are functional units of the NR protein capable to reduce artificial electron acceptors like ferricyanide or cytochrome c, whereby the former accepts electrons directly from the CbR fragment and the latter is reduced by the CcR fragment (Fig. 1A). Other than the cofactor housing domains, the connecting hinge regions as well as an N-terminal extension preceding the Moco domain are not conserved among eukaryotic NRs (Fig. 1B). In plants the hinge 1 region, which connects the Moco and heme domain, was shown to be important for NR activity regulation (6), and likewise the N-terminal extension is thought to possibly have a function in the post-transcriptional regulation of NR activity (7).
There are indications for Moco to function in the dimerization process of NR (4, 8-11); however, contrary results indicate that the dimerization domain functions autonomously (12). The insertion of Moco into molybdenum-containing enzymes is an ill-defined process. Also, the insertion of other prosthetic groups during the maturation of this class of metalloenzymes is poorly understood. The reason for this gap in knowledge is that the study of cofactor insertion has been hampered by the lack of well defined, stable apo-enzyme proteins in sufficiently large amounts. So far the recombinant expression and purification of a Moco-free eukaryotic NR has not been reported, and all attempts to characterize the oligomerization state of Moco-free NR were constrained to whole cell extracts of Moco-deficient mutants from plants and fungi.

FIGURE 1. Conserved domains of N. crassa NR. A, schematic representation of the N. crassa NR domain structure. The first and last residues of the NR domains are indicated. The Moco and dimerization domains share a common sequence stretch comprising 11 amino acids. B, sequence comparison of N. crassa NR (NcNR) with NR from A. thaliana (AtNR), Nicotiana tabacum (NtNR), Zea mays (ZmNR), A. nidulans (AnNR), and P. angusta (PaNR). Strictly conserved residues are highlighted in black, and conserved residues are highlighted in gray. The alignment was generated with Clustal W (51).
Virtually nothing is known about the influence of the two remaining redox active cofactors heme and FAD on the formation of the physiologically active, homodimeric eukaryotic NR. Also the sequence of redox cofactor incorporation into eukaryotic NR and likewise the underlying principles are still an open question.
In this work we characterized the recombinant Moco-free eukaryotic NR from the filamentous fungus Neurospora crassa. Biochemical characterization of the apo-enzyme revealed that Moco is solely sufficient to induce NR dimer formation. Furthermore, the sequence of prosthetic group insertion was found to be independent from Moco bound to NR.
EXPERIMENTAL PROCEDURES
Cloning of the Neurospora NR-The gene encoding Neurospora NR has been identified by Okamoto et al. (13), and the Neurospora gene-locus NCU05298 has been assigned to the NR encoding sequence (14,15). Therefore, we cloned the gene of Neurospora NR using Phusion High-Fidelity DNA Polymerase (New England Biolabs) with primers derived against the locus NCU05298. The oligonucleotides designed to amplify the gene were: forward primer (5′-TATTCACGTGATGGAGGCTCCAGCTCTC-3′) and reverse primer (5′-ATTACTAGTTCAAAAAACTAATACATCCTCATCCTTCC-3′). The forward primer included the sequence for a PmlI, and the reverse primer included the sequence for a SpeI restriction site. As PCR template, genomic DNA from N. crassa strain FGSC #988 (16) was used. The single intron of the Neurospora NR gene was removed by overlap extension polymerase chain reaction, thus yielding the coding DNA sequence of Neurospora NR. The CloneJET™ PCR cloning kit has been used for subcloning according to the manufacturer's instructions.
Site-directed Mutagenesis of the Neurospora NR-Based on the coding DNA sequence of N. crassa NR, site-directed mutagenesis was carried out using overlap extension-PCR. For heme-free Neurospora NR variant H654A/H677A, codons 654 (CAT) and 677 (CAC) were changed to GCC, resulting in the conversion of both histidines to alanines. Three Neurospora NR single amino acid variants with altered FAD binding characteristics were constructed; for NR variant R778E, codon 778 CGC was altered to GAA, in NR variant Y780A, codon 780 TAC was changed to GCG, and codon 811 (GGA) was changed to GTG, yielding variant G811V.
Cloning of the Neurospora NR CcR Fragment-NR CcR fragments containing amino acids 618-984 of full-length NR were amplified by PCR using forward primer 5′-TGTCACGTGGTCACTCGACTTATC-3′ and reverse primer 5′-ATTACTAGTTCAAAAAACTAATACATCCTCATCCTTCC-3′. The coding DNA sequence of Neurospora NR and its variants H654A/H677A, R778E, Y780A, and G811V were used as PCR templates to generate wild-type CcR and mutated CcR variants, respectively.
Expression and Purification of Recombinant Proteins-For expression of Neurospora NR and CcR fragments in Escherichia coli, the NR and CcR fragment-coding DNA sequence was subcloned into PmlI and SpeI sites of a bacterial expression vector resulting in the C-terminal fusion of a Twin-Strep-tag (IBA GmbH) to the protein. As a second tag, a His 6 tag was encoded on this vector, resulting in the N-terminal fusion of this tag to the protein, thus allowing the expression of a double-tagged protein. The His 6 tag/Twin-Strep-tag encoding vector was constructed based upon the pQE-80L vector (Qiagen GmbH). Different E. coli strains were used for recombinant NR production, thus allowing the expression of Moco-free NR (E. coli strain RK5204 (17)), molybdopterin (MPT)-containing NR (E. coli strain RK5206 (17)), or Moco containing NR (E. coli strain TP1000 (18)). Expression of Neurospora NR was carried out in LB medium containing 50 g/ml ampicillin at 22°C. For expression in TP1000 cells, 10 mM sodium molybdate was additionally added. After cell density reached an A 600 nm ϭ 0.1, NR expression was induced with 20 M isopropyl 1-thio--D-galactopyranoside, and cells were allowed to grow aerobically for 20 h. For production of NR CcR fragments, E. coli BL21 cells were used. Expression was carried out in LB medium containing 50 g/ml ampicillin at 30°C. After cell density reached an A 600 nm ϭ 0.2, cells were induced with 50 M isopropyl 1-thio--D-galactopyranoside and allowed to grow aerobically for 20 h. Cell lyses was achieved by two passages through a French pressure cell. Upon this, cells were sonicated for 1.0 min on ice. Cell lysis was carried out at 4°C. After centrifugation, doubletagged proteins were purified at 4°C under native conditions. In the first purification step Ni-NTA Superflow resin (Qiagen) was used according to the manufacturer's instructions. Cell lysis buffer contained 100 mM Tris-HCl, 150 mM NaCl, 5 mM imidazole, 2% (v/v) glycerol (pH 8.0). Washing steps were carried out using washing buffer containing 100 mM Tris-HCl, 150 mM NaCl, 10 mM imidazole, 2% (v/v) glycerol (pH 8.0). Proteins were eluted in elution buffer (100 mM Tris-HCl, 150 mM NaCl, 250 mM imidazole, 2% (v/v) glycerol (pH 8.0)). All buffers were degassed before use. Eluted fractions were pooled and subsequently loaded on Strep-Tactin Superflow high capacity resin (IBA). Washing steps were carried out using washing buffer containing 100 mM Tris-HCl, 150 mM NaCl, 1 mM EDTA, and 2% (v/v) glycerol. Proteins were eluted in elution buffer (100 mM Tris-HCl, 150 mM NaCl, 1 mM EDTA, 2% (v/v) glycerol, 20 mM D-desthiobiotin). Purity of the eluted protein was routinely documented by Coomassie Blue staining after SDS-PAGE. Pure protein fractions were concentrated (Vivaspin 6, Sartorius AG) and stored in 20-l aliquots in liquid nitrogen.
Size Exclusion Chromatography-Purified recombinant proteins were analyzed by gel filtration chromatography using an analytical Superdex 200 column (GE Healthcare) connected to an Äkta purifier system (Amersham Biosciences). As running buffer, 100 mM Hepes-KOH, 150 mM NaCl, 5 mM EDTA, and 5% (v/v) glycerol (pH 7.5) was chosen. Molecular weight standards (GE Healthcare) were used for calibration according to the manufacturer's instruction.
CD Spectroscopy-CD spectra were recorded on a Jasco Model J-810 spectropolarimeter (Jasco) at 20°C. CD spectra of purified protein preparations were recorded at protein concentrations of 1 μM in 50 mM sodium phosphate, 100 mM NaF, and 2% (v/v) glycerol (pH 7.2) using quartz glass cuvettes of 1-mm cell path length between 350 and 180 nm at 0.1-nm intervals. A minimum of 10 scans was recorded, and base-line spectra were subtracted from each spectrum. Data analysis was performed using CDPro software (19).
Analytical Ultracentrifugation-Sedimentation velocity experiments were carried out in a Beckman Coulter ProteomeLab XL I analytical ultracentrifuge at 35,000 rpm and 20°C in a buffer containing 0.15 M NaCl, 1 mM EDTA, and 50 mM Tris-HCl (pH 8.0) using an An-50 Ti rotor. Concentration profiles were measured using the manufacturer's data acquisition software ProteomeLab XL-I Graphical User Interface Version 6.0 (firmware 5.06) and the UV absorption scanning optics at 280 nm. Experiments were performed in 3-or 12-mm double sector centerpieces filled with 100 or 400 l of sample, respectively. Due to thermal equilibration of the rotor before the start of the run, diluted samples were incubated at least 2 h at 20°C before centrifugation. Because it was recently found that version 6.0 of the data acquisition software records incorrect elapsed time in the data files, the program SEDFIT was used to determine correction factors and to correct scan times (20). Correction factors were in the range of 1.076 -1.078. Data analysis was performed using a model for diffusion-deconvoluted differential sedimentation coefficient distributions (c(s) distributions)) implemented in SEDFIT (21). Partial specific volume, buffer density, and viscosity were calculated by the program SEDNTERP (22) and were used to correct the experimental sedimentation coefficients to s 20,w . Contributions of bound cofactors to partial specific volume of the enzyme were not taken into account. Protein concentrations were determined spectrophotometrically using the absorption coefficients at 280 nm as calculated from amino acid composition (23) and are given in monomers throughout the text.
EPR Spectroscopy-For EPR spectroscopy proteins were incubated in 100 mM Hepes-KOH, 150 mM NaCl, 5 mM EDTA, 5% (v/v) glycerol (pH 7.5) containing 50 M FAD and 20 mM KNO 3 . Reduction of NR molybdenum was initiated by the addition of 20 mM NADPH. Samples were incubated for 25 s at room temperature and frozen in an ethanol/liquid nitrogen mixture. EPR spectra were recorded at X band frequency (9.4314 GHz) with a Bruker model EMX-6/1 spectrometer equipped with a standard TE102 rectangular cavity. The temperature was maintained at 77 K by immersion of the samples in partially silvered liquid nitrogen finger Dewar. A correction for the deviation of the magnetic field measured by the EMX-032T Hall probe at the position of the sample was calculated from the g value of the strong pitch (g ϭ 2.0028) and the microwave frequency of the ER-041-1161 counter. Spin concentrations were calculated by double integration and comparison with a 1 mM Cu 2ϩ EDTA standard under non-saturating conditions (i.e. a microwave power of Ͻ0.2 milliwatt for Mo 5ϩ and Ͻ20 microwatts for Cu 2ϩ ). A correction for the signals of 95 Mo and 97 Mo nuclear hyperfine split signals as well as a correction for the undetectable signals of the 95 Mo and 97 Mo hyperfine split sig-nals was made. To obtain a suitable signal-to-noise ratio for comparison of spectral shape and intensity, multiple spectra (up to 32) were recorded at a 2-milliwatt microwave power and 0.65-millitesla modulation amplitude (modulation frequency, 100 kHz).
Quantification of Moco/MPT-The amount of total Moco/ MPT was quantified by HPLC FormA analysis as described earlier (24).
Inductively Coupled Plasma Mass Spectrometry (ICP-MS)-Molybdenum content of NR was quantified with an Agilent 7700 Series ICP-MS (Agilent Technologies) according to a standard calibration curve of inorganic molybdenum (Fluka). The detection limit of ICP-MS for molybdenum ranged between 1 and 20 μg/liter. Protein solutions and standards were mixed automatically with rhodium as an internal standard. All values were corrected for the molybdenum content of control samples consisting of buffer alone. All data were collected and processed using MassHunter workstation software.
Quantification of Heme-The amount of heme bound to NRs was quantified using a QuantiChrom heme assay kit purchased from GENTAUR™, and changes in absorbance at 400 nm were measured in a TECAN Sunrise™ microplate reader. For calculation of the NR heme content, UV-visible absorption measurements were performed with a PerkinElmer Life Sciences Lambda 25 spectrophotometer using ε = 120,000 M⁻¹ cm⁻¹ at 412 nm for heme in the oxidized redox state (25) and ε = 156,760 M⁻¹ cm⁻¹ at 280 nm for NR.

Nitrate Reducing Activity-Nitrate reducing activity was determined according to Evans and Nason (26) with slight modifications. For typical enzyme assays, 50-600 ng of purified NR was incubated in 50 mM sodium phosphate, 200 mM NaCl, 5 mM EDTA (pH 7.2) buffer containing 1 mM NADPH, 10 mM potassium nitrate, and 50 μM FAD. The reaction was started with potassium nitrate and incubated at room temperature for 2-30 min. Enzyme assays were terminated by the addition of 0.6 M zinc acetate. For quantification of nitrite converted from nitrate, 2 volumes of 1% (w/v) sulfanilamide in 3 M HCl and 2 volumes of 0.02% (w/v) N-(1-naphthyl)-ethylenediamine dihydrochloride were added. After incubation for 10 min at room temperature, the solutions were centrifuged (13,000 × g) for 5 min, and the absorbance was measured at 540 nm using a TECAN Sunrise™ microplate reader. According to the nitrite standard curve, NR activity (μmol of nitrite/min × mg of protein) was calculated.
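For illustration only (not part of the paper), the arithmetic behind these two determinations can be written out as a short Python sketch; the absorbance readings, assay volume, and the slope of the nitrite standard curve are invented example values, and a 1-cm path length is assumed.

```python
# Worked example of the heme stoichiometry and nitrate reductase activity calculations.

# --- Heme per NR monomer from absorbance (Beer-Lambert, 1 cm path length assumed) ---
EPS_HEME_412 = 120_000    # M^-1 cm^-1, oxidized heme at 412 nm
EPS_NR_280   = 156_760    # M^-1 cm^-1, NR monomer at 280 nm

A412, A280 = 0.45, 0.80                       # example absorbance readings
heme_conc = A412 / EPS_HEME_412               # mol/L
nr_conc   = A280 / EPS_NR_280                 # mol/L
print(f"heme per NR monomer: {heme_conc / nr_conc:.2f}")

# --- NR activity from the Griess assay (A540 vs. nitrite standard curve) ---
slope_std_curve = 0.020    # A540 per micromolar nitrite (example calibration slope)
A540            = 0.30     # example reading after color development
assay_volume_ml = 0.5
time_min        = 10
protein_mg      = 0.0004   # 400 ng NR in the assay

nitrite_uM   = A540 / slope_std_curve                 # micromolar nitrite formed
nitrite_umol = nitrite_uM * assay_volume_ml / 1000    # micromoles in the assay volume
activity = nitrite_umol / (time_min * protein_mg)     # micromol nitrite / (min * mg)
print(f"NR activity: {activity:.2f} micromol nitrite min^-1 mg^-1")
```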
NADPH-dependent Cytochrome c Reducing Activity-NR NADPH-dependent cytochrome c reducing activity was determined according to Garrett and Nason (27). For a typical cytochrome c assay, 100-500 ng of purified NR were incubated in 50 mM sodium phosphate, 200 mM NaCl, 5 mM EDTA (pH 7.2) buffer containing 5 μM FAD, 82 μM cytochrome c, and 0.1 mM NADPH. Cytochrome c reductase activity was measured by observing the increase in absorbance at 550 nm with a PerkinElmer Life Sciences Lambda 25 spectrophotometer. The molar extinction coefficient for cytochrome c of ε = 19,600 M⁻¹ cm⁻¹ (reduced minus oxidized) was used to calculate the enzyme activity (μmol of reduced cytochrome c/min × mg NR). Conversion of NADPH was followed at 340 nm and quantified by its extinction coefficient, ε = 6,220 M⁻¹ cm⁻¹ (Sigma).
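The cytochrome c activity calculation follows the same Beer-Lambert logic; the sketch below again uses invented example numbers for the absorbance slope and the amount of enzyme, with a 1-cm path length assumed.

```python
# Cytochrome c reductase activity from the rate of absorbance change at 550 nm.

EPS_CYT_C_550 = 19_600    # M^-1 cm^-1 (reduced minus oxidized)
EPS_NADPH_340 = 6_220     # M^-1 cm^-1, used analogously to follow NADPH consumption

dA550_per_min   = 0.12    # example slope of the A550 trace
assay_volume_ml = 1.0
protein_mg      = 0.0003  # 300 ng NR in the cuvette

rate_M_per_min    = dA550_per_min / EPS_CYT_C_550              # mol/L reduced per minute
rate_umol_per_min = rate_M_per_min * 1e6 * assay_volume_ml / 1000
activity = rate_umol_per_min / protein_mg                      # micromol cyt c / (min * mg)
print(f"cytochrome c reducing activity: {activity:.1f} micromol min^-1 mg^-1")
```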
NR in Vitro Reconstitution-For reconstitution of Moco-free NR, various amounts of purified Moco carrier protein (MCP) and apo-NR were co-incubated in degassed 100 mM Hepes-KOH, 150 mM NaCl, 5 mM EDTA, 5 mM glutathione, 5% (v/v) glycerol (pH 7.5) buffer for 3 h at room temperature or at 4°C overnight. MCP was expressed and purified as described earlier (28). NR in vitro reconstitution was analyzed using HPLC-based FormA quantification, NR activity measurements, and dimer formation (SEC) as described above.
Isothermal Titration Calorimetry-Isothermal titration Calorimetry (ITC) titrations were carried out at 25°C using a MicroCal VP-ITC instrument (GE Healthcare). Before the titration, proteins and FAD were dialyzed against 100 mM Tris HCl, 150 mM NaCl, 1 mM EDTA buffer (pH 8.0) at 4°C for 2 days. After dialysis, the protein concentration was determined using the Bradford protein assay (29). The FAD concentration was determined spectrophotometrically using a molar extinction coefficient of 11,300 M Ϫ1 cm Ϫ1 at 450 nm. The reference cell of the ITC instrument was filled with dialysis buffer. In a typical experiment the respective CcR fragment (20 -35 M) in the ITC cell was titrated with FAD (0.5 mM) loaded into the ITC syringe. The first injection (2 l, omitted from analysis) was followed by 19 injections of 10 l, with 210-s intervals between injections. The cell contents were stirred at 307 rpm to provide immediate mixing. The data were collected automatically and base-line correction, peak integration, and binding parameters were performed using the ORIGIN analysis software. Normalized area data were plotted in kcal/mol of injectant versus the molar ratio of FAD/CcR. Non-linear fitting of data points was performed with the one set of sites binding model, and binding constants were calculated.
RESULTS
Prosthetic Group Content of Recombinant Holo- and Moco-
free Neurospora NR-Neurospora NR was purified by two-step affinity chromatography from the Moco accumulating bacterial strain TP1000 (18), the MPT accumulating bacterial strain RK5206 (17), and the Moco/MPT free bacterial strain RK5204 (17). Protein preparations obtained this way were analyzed by SDS-PAGE and CD spectroscopy (Fig. 2, A and B). Comparison of the spectra revealed no significant differences so that one can conclude that all NR proteins purified possess a highly similar secondary structure. Likewise yield and purity of the proteins obtained were comparable, thus allowing carrying out a comparative biochemical characterization of the different NR forms. In a first set of experiments, we quantified the contents of prosthetic groups (i.e. Moco/MPT, heme, and FAD) of two different, holo-, apo-, and MPT-NR preparations, respectively. Moco was quantified in NR preparations derived from TP1000. Therefore, the molybdenum content of these preparations was determined using ICP-MS, revealing on average Ͼ80% molybdenum saturation (data not shown). The same protein preparations were also analyzed for Moco content by HPLC (24), thus revealing a Moco/MPT saturation of 0.81 Ϯ 0.04 molecules per monomer (Fig. 2C). Therefore, we conclude that exclusively Moco has been co-purified with NR derived from strain TP1000. No Moco but its immediate precursor MPT was present in NR preparations derived from RK5206 (0.78 Ϯ 0.11 MPT per monomer) (Fig. 2C) as quantified by HPLC-based FormA analysis ( Fig. 2D (24)). RK5204-derived NR preparations contained neither Moco nor MPT. In the following we will use the term holo-NR for Moco-containing NR. The term apo-NR will be used for Moco/MPT-free NR, whereas the term MPT-NR will be used for NR co-purified with MPT. Next we asked whether or not the NR heme content is related to the presence of Moco/MPT. To answer this, NR-bound heme was quantified using two different methods. At first protein-bound heme was quantified by using the molar extinction coefficient of cytochrome b 5 , a method that is commonly used to quantify NR heme content (25). As a second method, a chromogenic assay was taken to quantify NR-bound heme. In this way, heme saturation of holo NR was found to be in the range of 0.89 Ϯ 0.04 molecules heme per monomer (determination was based on the extinction coefficient) or 0.97 Ϯ 0.01 molecules heme per monomer (determination by chromogenic assay) (Fig. 2E). MPT-NR showed essentially the same heme stoichiometries as identified for holo-NR. However, apo-NR displayed a slightly lower heme binding stoichiometry, i.e. 0.72 Ϯ 0.01 molecules heme per monomer (via extinction coefficient) or 0.8 Ϯ 0.03 molecules heme per monomer (via chromogenic assay) (Fig. 2E). Therefore, neither Moco nor MPT has a significant influence on the stoichiometries of NR to bind heme. Subsequently we also quantified the amount of FAD co-purified with holo-, MPT-, and apo-NR using UV-visible spectroscopy (30). Results from these experiments showed that none of the tested NRs was co-purified with FAD (data not shown). It is noteworthy that FAD deficiency was already reported for purified NRs from various other species (31,32).
Nitrate Reducing Activity of Recombinant Holo-NR from Neurospora-Hitherto, expression of eukaryotic holo-NR was preferentially carried out in the methylotrophic yeast Pichia pastoris, yielding active recombinant holo-NR (3,33). Recombinant N. crassa holo-NR purified from E. coli is likewise functional. For its nitrate reducing activity, the corresponding K_m value was found to be 0.25 mM with a V_max of 4.32 μmol of nitrite/min·mg of protein (Fig. 3E). The K_m determined was similar to the K_m values documented for the endogenous NR purified from Neurospora and the NR purified from the fungus Aspergillus nidulans. However, differences exist regarding the V_max values of these NR species (Fig. 3E).
NADPH-dependent Cytochrome c Reducing Activity of Recombinant Neurospora NR-Other than holo-NR, MPT-and apo-NR have no nitrate reducing activity. However, all three NR species were purified with approximately equimolar amounts of heme (Fig. 2E). In the following we tested apo-NR and MPT-NR for their NADPH-dependent cytochrome c reducing activity, thus monitoring the electron flow from FAD to heme (Fig. 3D). As a control, the NADPH-dependent cytochrome c reducing activity of holo-NR was recorded. Correlating with its nitrate reducing activity, holo-NR also showed, as expected, NADPH-dependent cytochrome c reducing activity (Fig. 3, B and C). Likewise, apo-and MPT-NR showed cytochrome c reducing activity (Fig. 3B). However, when compared with holo-NR, apo-NR had a marginally lower (ϳ10% reduced) cytochrome c reducing activity, which correlates to its ϳ15% reduced heme content (Fig. 2E). Consequently we reason that heme bound to apo-and MPT-NR is fully capable of transferring NADPH-derived electrons.
Oligomerization State of Recombinant Neurospora NR-Next we wanted to characterize the oligomerization state of holo-, apo-, and MPT-NR. After a two-step affinity purification, SEC revealed that the NR protein pools purified from strains TP1000 and RK5206 contained dimeric NR exclusively (Fig. 4A). In contrast, protein preparations from RK5204 contained dimeric as well as monomeric NR, with both forms appearing in an ϳ1:2 molar ratio (Fig. 4A). However, the ratio of dimeric to monomeric NR varied significantly from preparation to preparation. Next we asked whether or not heme is important for NR dimerization. Therefore, cytochrome c reductase activities of the main peak fractions of monomeric and dimeric apo-NR were determined and found to be invariant, thus showing that each fraction contains NR proteins with identical heme binding stoichiometries (Fig. 4, B and C). Therefore, the lack of Moco/ MPT is solely responsible for monomer formation of apo-NR.
Analysis of NR Protein Variants by Analytical Ultracentrifugation-To further characterize the oligomerization state of the different NR variants, we examined the proteins in a concentration range from 0.6 to 7.7 μM by sedimentation velocity experiments in the analytical ultracentrifuge. Holo- and MPT-NR sedimented predominantly as a single species with an s_20,w of 8.3 and 8.1 S, respectively, independently of the protein concentration used (Fig. 5, A and B). The continuous c(s) distribution model in the program SEDFIT (21) was used to determine the molar mass of the proteins from the diffusion broadening of the sedimenting boundary. At a concentration of 2.6 μM, molar masses of 220 and 213 kg/mol were obtained for holo- and MPT-NR, respectively. Because a molar mass of 114.5 kg/mol can be calculated from amino acid composition, both proteins exist predominantly as dimers in solution. This is consistent with the results for the native enzyme purified from N. crassa (34). For these dimers, frictional ratios of 1.65 and 1.68 can be calculated from the sedimentation coefficients. Because for a globular hydrated protein a frictional ratio of 1.1-1.2 is expected (35), both proteins deviate substantially from the shape of a sphere, with the holoenzyme being slightly more compact than the MPT form. All three NR variants show, in addition to the dimeric form, a species sedimenting with about 11-12 S, which might represent a small amount of tetrameric enzyme (Fig. 5). A tendency of NR to form tetramers without significant impact on functionality has been described previously (3). Furthermore, a small fraction of holo- and MPT-NR sedimented with an s_20,w value of about 5-6 S (Fig. 5, A and B). Because the NR preparations were not completely saturated with Moco/MPT (Fig. 2C), these fractions might correspond to monomeric enzyme devoid of these co-factors. Consistent with the SEC results, c(s) distributions of apo-NR, in contrast to those of holo- and MPT-NR, show two major species (Fig. 5C): a faster one sedimenting with essentially the same sedimentation coefficient as MPT-NR dimers and a slower one with an s_20,w of about 5.6 S, corresponding to an asymmetrical monomer with a frictional ratio of 1.5.

FIGURE 5. Holo-, MPT-, and apo-nitrate reductase (NR) were analyzed in sedimentation velocity runs at 35,000 rpm and 20°C. To achieve better comparability, all sedimentation coefficient distributions have been converted to 12-mm path lengths. Whereas holo- and MPT-NR exist predominantly as dimers in solution, in apo-NR both monomers and dimers are clearly populated. Unexpectedly, the monomer/dimer ratio did not change significantly when protein concentration was varied by a factor of 20, implying that both oligomerization states are not in equilibrium.
To investigate at which concentration apo-NR dissociates completely into monomers, 0.6 -12.2 M apo-NR were diluted from a stock solution, incubated for at least 2 h at 20°C, and analyzed in a sedimentation velocity experiment. Unexpectedly, the ratio between monomeric and dimeric apo-NR did not change significantly upon a 20-fold dilution (Fig. 5C). To rule out a very slow dissociation reaction, a similar experiment was performed after incubating the diluted protein for 24 h at 4°C (data not shown). Similar to the results shown in Fig. 5C, the monomer/dimer ratio did not change significantly. Therefore, we conclude that the monomeric and dimeric forms of apo-NR are not in equilibrium.
Moco-dependent Dimerization of Apo-NR-Hitherto, all NR oligomerization and reconstitution studies were restricted to whole cell extracts of source organisms containing Moco, solely MPT, or neither of them. Therefore, it remained enigmatic whether transition of monomeric to active dimeric NR is solely dependent on the presence of Moco or is additionally also dependent on other factors (e.g. Moco supply proteins, chaperones, etc.). To show that Moco is the determining factor to promote NR dimerization, a fully defined in vitro system was established. It contained apo-NR and a Moco source. As the Moco source, the MCP from the green algae Chlamydomonas reinhardtii was chosen. Recombinant MCP contains up to 25% co-purified Moco, and moreover, Chlamydomonas MCP was shown to bind exclusively Moco but not the metal-free MPT (28). Co-incubation was carried out using equimolar amounts of apo-NR and Moco bound to MCP. After co-incubation, the reaction mixture was subjected to SEC (Fig. 6A), unveiling the Moco-dependent transition of monomeric to dimeric NR. As a control, Moco-free MCPs were incubated with apo-NR in parallel, revealing no influence of the carrier protein on NR dimerization (Fig. 6A). Therefore, Moco is sufficient to initiate NR dimer formation.
In Vitro Reconstitution of Apo-NR-Next we asked whether or not apo-NR gains functional activity upon co-incubation with MCP, thus demonstrating the transfer of physiologically active Moco to apo-NR. Therefore, various amounts of MCP were co-incubated with apo-NR. NR activity was measurable upon co-incubation with MCP and increased with the MCP amount used for co-incubation (Fig. 6E). Based upon these results, the optimal amount of MCP for apo-NR reconstitution was identified to be 2 molecules MCP per NR monomer. Because the in vitro reconstitution system contained both the monomeric and dimeric apo-NR, we also tested both apo-NR forms for their capability to take up Moco. Therefore, we again took advantage of SEC yielding apo-NR fractions containing mainly monomeric (fraction 15, abbreviated as F15) and dimeric (F12) NR (Fig. 6, B and D). Proteins from these and fractions F13 and F14 were subsequently used for co-incubation with MCP. The highest NR activity was measurable upon co-incubation of MCP with fraction F15, containing mainly monomeric apo-NR, whereas co-incubation of MCP with fraction F12 resulted in drastically reduced Moco reconstitution activity (Fig. 6C). However, fractions F13 and F14 likewise gave rise to NR activity. Therefore, we conclude that monomeric apo-NR is competent for uptake of Moco.
Modeling of the Structure of N. crassa NR-To identify the molecular basis for Moco-dependent NR dimerization, we mapped the amino acid sequence of the N-terminal domain of N. crassa NR (Moco and dimerization domain) (Fig. 1B) onto the structure of the Pichia angusta Moco and dimerization domain with bound Moco (PDB ID 2BIH) by using the one-toone threading option of the Phyre2 modeling server (36) and analyzed the dimer interface of the model with PISA (protein interfaces, surfaces, and assemblies service at European Bioinformatics Institute) (37) (Fig. 7).
The largely hydrophobic N. crassa NR dimer interface has a buried area of about 2000 Å 2 per monomer and is spotted with conserved charged (Arg-294, Lys-314, Glu-497, Arg-523, Glu-535) and polar amino acids (e.g. Val-381, Thr-384, Arg-406, Arg-454, Glu-456, Tyr-472), which form intermolecular salt bridges and hydrogen bonds. The hydrophobicity of the interface leads to a significant free energy gain upon dimerization and creates a strong bias toward the dimer.
Moco is no direct part of the interface but strengthens it by stabilizing a small all-helical domain, which spans amino acids Asn-372 to Ile-409 and contributes about 500 Å 2 to the buried area. This domain was identified by a thorough B-factor analysis of 2BIH and the fact that it is the only part of the interface with an independent fold. According to our PISA analysis, it plays an essential role for complex formation, and its elimination renders the dimer unstable.
Quantification of Moco Transfer-Results from the in vitro reconstitution experiments demonstrate that monomeric apo-NR dimerizes upon Moco addition, yielding physiologically active NR. As a next step, the amount of Moco transferred to apo-NR had to be quantified. To address this question, we enriched reconstitution-competent, monomeric apo-NR by SEC (data not shown) and co-incubated it with various amounts of MCP. NR activity was measurable upon co-incubation with MCP and increased with the amount of MCP used for co-incubation (Fig. 8A).
Maximum NR activity was observed upon co-incubation of one monomer apo-NR with ϳ0.35 molecules of Moco. In the following, monomeric apo-NR was co-incubated with the 4-fold stoichiometric excess of Moco bound to MCP. After coincubation, strep-tagged NR was quantitatively separated from MCP by Strep-Tactin affinity chromatography as demonstrated by SDS-PAGE analysis (Fig. 8B). Subsequently the amount of Moco/MPT co-purified with NR was determined HPLC-based, thus revealing an average Moco/MPT binding stoichiometry of 0.34 Ϯ 0.04 molecules per NR monomer (Fig. 8C). Therefore, in vitro reconstituted NR has a significantly lower Moco binding stoichiometry as recombinant holo-NR.
EPR Studies of Reconstituted NR-Because EPR spectroscopy is an extremely sensitive tool to probe the microenvironment of molybdenum, we employed this technique to detect Moco transfer from MCP to apo-NR. First, suitable conditions to reproducibly elicit Mo⁵⁺ EPR signals in N. crassa holo-NR had to be established. Early work on spinach NR used incubation with an excess of NADPH and nitrate to generate an axial Mo⁵⁺ EPR signal (38).
As judged from g values and proton superhyperfine parameters, this signal is characteristic for NR, as EPR signals with almost identical parameters were detected in partially reduced Candida nitratophila (39) and Chlorella vulgaris NR (40). N. crassa holo-NR also showed the characteristic nearly axial Mo⁵⁺ EPR signals (Fig. 9, g = 1.998, 1.971, 1.969). Upon exchange of the solvent for D₂O, the coupling of the single, solvent-exchangeable proton vanished (A_H = 1.2, 1.0, and 1.8 millitesla, respectively).
Our data confirm the ample information available on N. crassa NR (41). Double integration showed that under these conditions 5-10% of the molybdenum could be trapped in the EPR-active 5+ state. In the EPR spectral region around g = 1.97, at which the sharp proton-superhyperfine split NR Mo⁵⁺ signal is best detected, neither holo-MCP nor apo-NR exhibits EPR spectral features upon the addition of nitrate and NADPH. But after transfer of the Moco from holo-MCP to apo-NR, the presence of Moco with a microenvironment identical to holo-NR could unmistakably be inferred from the characteristic EPR signal. Based on the amplitude in comparison to holo-NR, at least 34% of apo-NR could be activated by holo-MCP. This value compares favorably with the activity data as shown in Fig. 8.

FIGURE 6. A, [...] respectively. After co-incubation, the reaction mixture was separated using SEC. B, 2 mg of apo-NR were subjected to SEC, and fractions F12-F15 were collected. Moco-dependent NR reconstitution activity and cytochrome c reducing activity were determined for these fractions as shown in C. For Moco-dependent NR reconstitution, 200 ng of holo-MCP were co-incubated with 500 ng of apo-NR, resulting in a 2.5-fold stoichiometric excess of MCP over NR. NADPH-dependent nitrate reduction and cytochrome c reduction were carried out as described under "Experimental Procedures." D, SDS-PAGE analysis of SEC fractions F12-F15. E, 400 ng of recombinant apo-NR were co-incubated for 3 h at room temperature with increasing amounts of recombinant holo-MCP in degassed reconstitution buffer, and Moco-dependent NR activity was recorded as described.
The Sequence of Prosthetic Group Insertion into Neurospora NR-The timing of prosthetic group insertion into some prokaryotic molybdenum enzymes is known to be crucial for their maturation (42). To test for a mandatory sequence of prosthetic group insertion into NR, we first analyzed a NR protein defective in heme binding. Therefore, the NR variant H654A/H677A (13) was created by site-directed mutagenesis. The mutant protein was purified from the Moco-accumulating bacterial strain TP1000. CD spectroscopy showed no negative effects of the introduced mutations on protein secondary structure in comparison to the wild type protein (data not shown).
Upon chromogenic-based detection, no heme was detectable in H654A/H677A protein preparations, documenting the successful construction of a heme-free NR. The Moco/MPT content of NR H654A/H677A purified from TP1000 (determined HPLC-based) was found to be 0.81 molecules of Moco Ϯ 0.04 per monomer (Fig. 10C), thus resembling the value quantified for the wild type NR. The molybdenum content as determined by ICP-MS revealed on average Ͼ80% saturation. Therefore, we conclude that exclusively Moco is bound to NR H654A/ H677A. These findings closely resemble the Moco binding properties of the NR wild type protein.
Consequently, we reason that Moco insertion into NR occurs independently of the presence of heme. Conversely, wild type-like heme binding was observed for Moco/MPT-free NR (Fig. 2E), documenting that NR heme binding is independent of Moco/MPT binding.
As a next step, the influence of FAD on the insertion of Moco and heme, respectively, was determined. NR has a ferredoxin reductase-type FAD binding domain (43). Therefore, FAD binding mutants were constructed, taking into account known FAD binding mutants of other enzymes with ferredoxin reductase-type FAD binding sites as well as FAD binding mutants of N. crassa and Arabidopsis thaliana NR (4,44). Accordingly, N. crassa NR was mutated, yielding the FAD binding variants R778E, Y780A, and G811V. To show that these NR variants do not bind FAD, we quantified the FAD binding properties of an N-terminally truncated NR variant consisting of the FAD and heme binding domains (thus forming the NR CcR fragment). Expression and purification of each CcR variant was successful, yielding high amounts of pure protein as documented by SDS-PAGE analysis (Fig. 10D). We then determined the effects of the introduced mutations in the FAD domain on protein secondary structure using CD spectroscopy; no differences compared with the CcR wild type CD spectra were detectable (data not shown). Subsequently, the FAD binding properties of the NR CcR fragments were quantified using ITC, which revealed a FAD Kd value of 0.61 ± 0.05 µM for wild type CcR (Fig. 10A). In contrast, no FAD binding was detectable by ITC for the CcR R778E and Y780A variants (data not shown). Next, we quantified the FAD binding properties of the heme-free CcR variant H654A/H677A, revealing a FAD Kd value of 0.52 ± 0.03 µM (Fig. 10B). Therefore, FAD binding to the NR CcR fragment is independent of heme binding.
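The Kd values above come from fitting single-site binding behaviour, the same functional form noted for the Moco titration in Fig. 8A. The following sketch shows such a fit in general terms; the titration points, starting guesses, and the simplifying neglect of ligand depletion are illustrative assumptions, not the analysis performed in the paper:

import numpy as np
from scipy.optimize import curve_fit

def single_site(L_total, Kd, signal_max):
    # Single-site binding isotherm: signal = max * [L] / (Kd + [L]).
    # Ignores ligand depletion, which matters when [receptor] is close to Kd.
    return signal_max * L_total / (Kd + L_total)

# Hypothetical titration: ligand concentration (uM) vs. measured signal (a.u.)
L = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
signal = np.array([0.08, 0.15, 0.29, 0.45, 0.62, 0.75, 0.84])

popt, pcov = curve_fit(single_site, L, signal, p0=[0.5, 1.0])
Kd_fit, smax_fit = popt
print(f"Kd = {Kd_fit:.2f} uM, saturating signal = {smax_fit:.2f}")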
In the inverse experiment, we asked whether heme binding to NR is independent of FAD binding. Therefore, the amount of heme co-purified with NR variant R778E, which is unable to bind FAD, was quantified. In comparison to the wild type protein, NR R778E had a very similar heme content, with a binding stoichiometry of 0.78 ± 0.07 molecules of heme per monomer (extinction coefficient-based determination) or 0.94 ± 0.08 molecules of heme per monomer (chromogenic determination) (Fig. 10C). Therefore, heme binding to NR is independent of FAD binding. Finally, we asked whether Moco binding to NR is independent of FAD binding. To this end, the amount of Moco bound to NR R778E upon purification from the Moco-accumulating E. coli strain TP1000 was quantified. NR variant R778E was co-purified with 0.89 ± 0.11 molecules of Moco per monomer (Fig. 10C), and the ICP-MS-based molybdenum quantification revealed >80% molybdenum saturation (data not shown). Therefore, NR R778E binds essentially the same amount of Moco as the wild type protein, demonstrating that Moco binding to NR is independent of FAD binding.
DISCUSSION
In the past, eukaryotic NR has been characterized intensively, revealing the multidomain character of this complex metalloenzyme. Although detailed knowledge is available for the holoenzyme (3-5, 28), little is known about its redox cofactor assembly. This gap in knowledge is mainly due to insufficient amounts of well defined, stable apo-enzyme protein being available for analysis.
However, 40 years ago (45) a stable apo-NR was identified in the whole cell extract of the N. crassa nit-1 mutant. nit-1 extracts were found to be a valuable tool for Moco research (11), as the addition of physiologically active Moco results in formation of active NR, thus representing the only enzymatic assay system available for detecting physiologically active Moco.
FIGURE 8. Reconstitution of monomeric Moco-free NR. A, NADPH-dependent NR activity of reconstituted monomeric NR after titration with Moco bound to MCP. After incubation of enriched monomeric apo-NR with various amounts of MCP, NADPH-dependent NR activity was measured; data points were fit to a single binding site model. B, SDS-PAGE analysis upon separation of reconstituted NR and MCP. A 4-fold excess of MCP over monomeric apo-NR was co-incubated for 14 h at 4 °C in reconstitution buffer. After co-incubation, the sample was diluted 1:1 with buffer containing 100 mM Tris-HCl, 150 mM NaCl, and 10% (v/v) glycerol and subjected to StrepTactin Macroprep resin. C, quantification of Moco bound to reconstituted NR (rec.-NR). Subsequent to purification of reconstituted NR, the Moco/MPT content was quantified using HPLC-based FormA analysis and compared with apo- and holo-NR. ND = not definable.
Structural elucidation of eukaryotic NR (46) revealed that Moco is deeply buried within the protein, at the end of the substrate funnel. This finding led to the questions of (i) how Moco is inserted and (ii), considering that besides Moco, heme and FAD are also NR redox cofactors, whether there is a mandatory sequence for NR redox cofactor assembly. In this study we addressed both questions using N. crassa NR as a model enzyme. In a first set of experiments we characterized the Moco insertion process using recombinant Moco-free apo-NR and found that Moco is sufficient to induce NR dimer formation in this fully defined in vitro system. Consistently, cofactor-induced dimerization had already been observed previously in the undefined whole cell extracts of the nit-1 system (45,47,48), but its sole dependence on Moco was not provable in that system.
How can Moco-induced NR dimer formation be explained? To address this question, we built a structural model of N. crassa NR on the basis of our data set for the molybdenum domain of Pichia NR and analyzed its dimer interface. In this way, we identified a small domain that provides ~25% of the buried interface area.
We speculate that this domain is flexible and that it triggers dimerization upon Moco binding. In the absence of Moco it adopts a conformation unfit for dimerization, and as a result, the monomer becomes the abundant species. In the presence of Moco, the all-helical domain becomes part of the interface, and therefore, the equilibrium shifts toward the dimer. Once formed, the dimer is kinetically locked, thus precluding accessibility to the Moco binding site.
We postulate that the monomer is also kinetically locked until dimerization is triggered. Owing to the abundance of Moco under natural conditions, NR is usually found Moco-loaded and hence dimeric. The emergence of both forms could result from NR overexpression in a Moco-free E. coli strain. Furthermore, it cannot be ruled out that certain E. coli metabolites with weak affinity occupy the Moco binding site and thereby trigger dimerization.
Analytical ultracentrifugation revealed that monomeric apo-NR is not in equilibrium with dimeric apo-NR, thus substantiating our model of NR dimerization. We were not able to verify the proposed model structurally, because apo-NR crystallization has so far failed. Furthermore, solely monomeric apo-NR is reconstitution competent, whereas dimeric apo-NR is not,
Obstructive Sleep Apnea among Players in the National Football League: A Scoping Review
Objective Obstructive sleep apnea (OSA) is a common sleep-disordered breathing condition that has emerged as a significant public health problem given its increased prevalence over the past decade. The high prevalence of obesity and large waist circumference among NFL players are two risk factors that might contribute to the high susceptibility of football players to developing OSA. National Football League linemen might be particularly vulnerable since they tend to have a higher body mass index. In this scoping review, we aim to bring attention to the limited research regarding OSA among National Football League players and highlight the negative consequences of OSA in an attempt to increase awareness of the urgent need for further research in this area. Methods Search terms associated with obstructive sleep apnea and football were used to examine Google Scholar, EMBASE, CINAHL, PubMed, ProQuest, and Web of Science Plus for relevant studies. All relevant studies were included and documented. Results Four (n=4) studies of interest were identified. All 4 studies revealed a prevalence of OSA near or slightly above 50% in the investigated cohorts (mostly retired NFL linemen). Most participants in the study of active NFL players showed symptoms associated with a sleep-disordered breathing condition (snoring). Conclusion OSA requires more attention from the research and medical community. As suggested by the results of the 4 studies included in this paper, OSA and associated symptoms are prevalent in the NFL population. Further research is required to investigate the extent of OSA and OSA risk in this population. There is an urgent need to conduct OSA risk surveillance in the athletic community.
Introduction
Sleep is vital for human survival and adequate bodily functioning [1,2]. When adequate sleep quality is experienced, the body responds in an efficacious manner, as observed in optimal body metabolism and heightened cognitive awareness, which can promote better decision-making [3]. In the sports world, in order to be competitive and perform at optimal levels, it is paramount to obtain sufficient sleep [4]. In recent years, sleep-disordered breathing such as obstructive sleep apnea (OSA) has gained notoriety in the athletic community. OSA is a medical condition that affects millions of Americans [5]. In the National Football League (NFL) community, the prevalence of OSA is estimated to range from 14% to 19%, whereas the prevalence in the general US population is estimated at 2-5% [4,5]. During an apnea episode, which is characteristic of persons with OSA, the individual stops breathing for a short period of time, resulting in forced awakening; this disrupts the individual's sleep, reduces total sleep time, and compromises overall sleep health [6]. The restriction of airflow leads to impairment of baroreceptor modulation of sympathetic nerve activity; such dysregulation can exacerbate arteriosclerosis, hypertension and other cardio-metabolic disorders [7].
OSA has been proposed as an important contributing factor in the deaths of NFL players such as Reggie White, who played professionally for the Philadelphia Eagles and the Green Bay Packers. Other notable NFL players diagnosed with OSA include Tony Dorsett, Warren Sapp, Aaron Taylor, Percy Harvin and JaMarcus Russell [7]. Research has revealed that football players have a higher prevalence of obesity, defined as body mass index (BMI) > 30 kg/m2 [8,9]. Obesity is among the many predictors of OSA risk [10]. Currently, there is limited information regarding obstructive sleep apnea among American professional football players. Thus, the purpose of this scoping review is to evaluate the current literature on the impact of OSA among NFL players.
Methods
Research objectives were addressed using a scoping review methodology. The aims of a scoping review are to: 1) map relevant concepts and identify gaps in research [11] and 2) provide a comprehensive overview of the literature without engaging in the appraisal of multiple study outcomes [11]. The procedure of a scoping review consists of (1) identifying the research questions, (2) identifying relevant studies, (3) study selection, (4) charting the data, (5) collating, summarizing and reporting the results, and (6) an expert advisory consultation exercise [11]. Search terms were identified and agreed upon by the authors and a medical librarian from New York University School of Medicine. Our literature search covered six databases: Google Scholar, EMBASE, CINAHL, PubMed, ProQuest, and Web of Science Plus. The following combination of terms was used to conduct the search in each database: (National football players or national football player OR professional football players or professional football player or national football league or NFL) and (Cardiovascular disease or cardiovascular diseases or heart disease or heart failure or vascular disease) and (Sleep disordered breathing OR apnea or apneas or sleep hypopneas) and (hypertension OR high blood pressure). A diagram illustrating the inclusion and exclusion of studies is shown in Figure 1.
All articles were manually reviewed to identify relevant studies. Grey literature was excluded; the review was restricted to peer-reviewed articles published between 1984 and 2017. Citations were managed using the bibliographic software manager EndNote [11]. Eligibility criteria placed emphasis on whether studies provided a direct or broad description of the associations of obstructive sleep apnea among National Football League players with cardiovascular disease and related sleep-disordered breathing. The titles and abstracts of each citation were screened by the lead author; all relevant peer-reviewed articles were procured in full-text version.
Results
Searches were carried out during June 2017. All databases were searched independently, and all articles returned were compiled and saved in EndNote. All duplicates were removed; after final screening and data extraction, four studies remained (n=4). Of the 512 references recorded, 495 were excluded during the initial screening process. The remaining 17 articles were reviewed in depth, and 4 articles were included in this scoping review. A flow chart of this process is provided in Figure 1.
Relevant information was extracted from each paper and charted to highlight the following: author, year of publication, study title, method, theme, outcome and limitations.
Discussion
The purpose of this scoping study was to organize and evaluate the limited literature regarding obstructive sleep apnea among NFL players and to increase awareness of the adverse effects of untreated OSA. As noted from NFL recruitment surveys, average NFL linemen weigh over 300 lbs, and this is now the norm compared with 3 decades ago (300 players over 300 lbs in 2017 versus 10 players in 1986) [12]. This trend starts at the college level and has continued on this trajectory. There is an unprecedented need for further studies on the health of NFL linemen, fueled by the unmeasured dangers of NFL athletes increasing in size, weight, and BMI. The widow of Reggie White has called for increased awareness of OSA through an education campaign launched by a foundation in his name, as well as health initiatives promoted by different organizations in the league [13]. For clinicians, there is a paucity of peer-reviewed information on OSA among NFL players. Failure to treat OSA can increase the risk of stroke and cardiovascular disease [8]. The effects of OSA are far-reaching, affecting not only sleep quality and daytime performance but also the circulatory system [8,14]. Repeated and recurrent nightly episodes of dyspnea from airway collapse lead to retention of carbon dioxide and low oxygenation [15]. Many apneic cycles per hour per night raise stress hormones and catecholamines, which increase heart rate and respiratory drive and lead to elevated blood pressure [16]. Over time, this incremental effect would exacerbate comorbid conditions such as preexisting hypertension, metabolic syndrome, dyslipidemia and coronary artery disease, and increase the chance of mortality from a stroke, a heart attack, or both [16].
Limitations
The data from the studies included in this scoping study were largely in support of the presence of OSA among linemen in the NFL. This is consistent with the hypothesis of the scoping study: NFL players are more predisposed to OSA than are non-NFL players. However, there are multiple confounding factors and limitations from the studies that should be further explored.
The statistical power of the included studies is an important limitation. The study conducted by Rice et al. shows a negligible incidence of OSA among linemen and non-linemen. The study further revealed that the investigators fell short of their recruitment goals. Dropout or inability to complete the study for any number of reasons made the sample size even smaller.
Volunteer bias was an issue in several studies and was mentioned by Albuquerque et al. as a limitation of their study. Their study, which favored the hypothesis that NFL linemen suffer from a higher prevalence of OSA, could have been affected by the concerns that led participants to volunteer for the study. The volunteers could have been concerned about having some or all of the symptoms of OSA or its associated comorbidities, such as high blood pressure. The study, therefore, could have excluded those individuals who were not aware of their symptoms or chose not to participate. Again, volunteer drop-out or inability to complete the study reduced the sample size.
Generalization of the data collected from these studies is presently difficult, in part due to the limitations mentioned above, but also due to the lack of reporting of specific factors such as ethnicity. The ethnicity of the participants was collected, but the percentage of each ethnic group with OSA was not reported. This may be because ethnicity was not seen as a risk factor in the development of OSA. There are even fewer studies on whether certain races/ethnicities have a higher prevalence of OSA among NFL players. However, it is well documented that certain chronic comorbidities, such as diabetes and hypertension, disproportionately affect certain racial/ethnic groups, such as Blacks [8].
Other studies have found that minorities tend to experience less sleep quantity and quality than do whites [16]. Furthermore, since NFL participants were all male between the ages of 23-28 years, application of the findings to the general population must be done cautiously.
The accuracy and validity of using BMI as a predictor of obesity is questioned because it does not take into account muscle mass, bone density and body composition [15]. In addition to obesity, other attributes that put a lineman or non-lineman at risk for OSA could depend on other variables, such as comorbidities (diabetes and cardiovascular disease) [16]. Many studies used measurements such as neck circumference and hip-to-waist ratio in addition to BMI. Other researchers propose the use of DEXA scans for better estimation of body fat composition and determination of true obesity [15].
The accuracy and validity of home sleep apnea testing (HSAT) were also called into question; this modality for OSA detection was commonly used across the studies in this scoping review. The gold standard against which HSAT is compared is in-laboratory polysomnography. In general practice, physicians refer individuals suspected of OSA from the initial screening questionnaire to undergo an overnight polysomnographic sleep study; this has been the gold standard for assessing sleep apnea. In-laboratory sleep studies may not involve the physician directly over the course of the night but involve implementing multiple monitoring modalities. Some of the data recorded include rapid eye movement (REM) occurrence and brain wave activity via electroencephalogram (EEG), possible cardiac arrhythmias via electrocardiogram (ECG), carbon dioxide levels, muscle tone, and whether substances such as alcohol and medications have been used to induce sleep. Compared with lab-based sleep studies, HSAT may be preferred because of convenience, accessibility, and lower costs (no overnight staff). Rice et al. state there was data loss (22 of 159) in their study from the single-channel, portable, unattended home study reporting apnea/hypopnea episodes. The HSAT results in the Rice et al. study were compared to results for the general population with OSA obtained by polysomnography. Two different devices could have an unreported amount of variation. Furthermore, there are multiple types of HSAT, and some studies reported which type was used while others did not.
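Both HSAT and polysomnography ultimately yield an apnea-hypopnea index (AHI, events per hour of sleep), which is then mapped to severity categories. As a simple illustration using the commonly cited clinical cut-points of 5, 15 and 30 events per hour (these thresholds are general convention, not values taken from the reviewed studies), the mapping can be written as:

def osa_severity(apnea_hypopnea_index: float) -> str:
    # Map an apnea-hypopnea index (events/hour) to a conventional severity label
    if apnea_hypopnea_index < 5:
        return "no OSA"
    elif apnea_hypopnea_index < 15:
        return "mild OSA"
    elif apnea_hypopnea_index < 30:
        return "moderate OSA"
    return "severe OSA"

print(osa_severity(22.5))  # -> "moderate OSA"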
Conclusion
Lifestyle changes such as weight reduction are widely known to mitigate the symptoms and severity of OSA. However, for most NFL linemen weight reduction may not be feasible during their active careers. For this population, greater emphasis needs to be placed on adherence to OSA treatment such as continuous positive airway pressure (CPAP). Noncompliance with CPAP should be closely followed by clinicians to investigate its root causes. If non-adherence to CPAP stems from discomfort or the perceived stigma of wearing a mask, increased educational awareness and successful treatment testimonials can open opportunities to increase adherence. Further studies are required to improve the rate of screening NFL players for OSA risk. Treating OSA may have a positive influence on athletic agility and player performance, in addition to decreasing the risk of cardiovascular disease and sudden cardiac death.
Investigation of Torsional Oscillations in Railway Vehicles
The development of semiconductor power electronics during the last decades has made it possible to produce railway vehicles with very high tractive power. These high performance levels reveal new problems that were previously unknown. One of these problems is torsional oscillation of running wheelsets, which may appear and compromise the safety of railway vehicles. This article analyzes dynamic events that may occur in vehicle drives and that are connected with transmission of the torque to the wheels during operation of the vehicle, which is exposed to variable external influences (adhesion conditions, track irregularities, variation of wheel forces, etc.). The analysis is based on a simplified model of the drive of a real railway vehicle, which is used for simulation purposes. The model serves for parametric analysis of the individual components in order to devise an effective design or control remedy.
Introduction
High travel velocities require high tractive power, which must be produced and transmitted from the traction motor to the wheels. In 2009, a slight relative rotation between the wheel and the axle was discovered on one of the DB (German Railways) locomotives during maintenance. The same issue was then discovered on several other locomotives of the same type. The discovered rotation was relatively small, but it represented a serious safety risk, especially as the problem appeared on more vehicles. The main risk is that relative rotation between the wheel and the axle means loss of friction and failure of the whole press-fitted joint. In that case, under the action of a guiding force, a wheel can move almost freely along the axle in the transversal direction. In such a situation the distance between both wheels (wheelset gauge) can decrease and cause the vehicle to derail.
An investigation was carried out when the problem was discovered, but no manufacturing problems or failures were found [1]. Attention then turned to the phenomenon called "torsional oscillations of wheelsets". Torsional oscillation is a situation in which both wheels start to oscillate against each other (in opposite phase). This event leads to slight twisting of the axle.
Torsional oscillations require an impulse to appear. This can happen when adhesion on one of the wheels is lost. In the case of a locomotive this may happen when the vehicle runs with a high tractive effort and the adhesion force between wheel and rail is exceeded. Another cause is the vehicle passing through a small-radius curve. Oscillations in opposite phase may also emerge at different slip velocities when the vehicle runs without tractive effort or braking force [2]. Long-lasting or periodically repeating oscillations may also lead to the initiation and development of fatigue failures in the press-fitted joint.
Analysis of torsional oscillations and ways of reducing them
Because of the seriousness of this problem, intensive research has been carried out on its sources and on ways of reducing it. Simulation methods are the major type of research, because experimental measurements on a real vehicle are very expensive and time-consuming. Another advantage of simulation methods is the possibility of considering the wheelset as part of the whole drive, which is represented as a controlled dynamic system. This allows design processes based on the theory of system control to be applied. This approach was used by Böcker, Amann and Schulz [3] for the design of an active reduction of torsional oscillations in the drivetrain of a car, using an estimation regulator based on the Kalman filter. Their work was based on a simplified and idealized interpretation of the flexible and inertial properties of the components used.
The situation in railway vehicles is much more difficult. Railway vehicles work with proportionally higher torque values, influenced by much higher moments of inertia. External influences specific to the operation of railway vehicles (effect of wind, uncompensated centrifugal forces, etc.) must be considered, especially when creating the model of the transmission of torque into tractive effort.
One of the first publications focused on torsional oscillations was the article by Kaderavek and Pernicka [1]. In this article the phenomenon was described at a general level, covering its history and its general characteristics, namely a verbal description and some of the necessary conditions. A more detailed description, focused on the physical principle of the phenomenon, was given by Benker and Weber [4], who addressed it in several of their articles. These articles were mainly aimed at the wheelset, presented as a force-excited mechanical oscillator.
The possibility of detecting the oscillations has been described by Markovic, Kostic and Bojovic [5]. The authors built a simplified model of the class 444 locomotive, propelled by DC motors. The model was created as a scheme of torque transmission, using a set of parts connected via torsionally elastic elements representing the wheelset, the gearbox and the traction motor. The simulation results showed that at the moment when adhesion decreases sufficiently and the wheelset starts to slip, the motor voltage rises above its nominal value. A closer look at the voltage trace shows that the voltage oscillates with a small amplitude at a specific frequency. This frequency corresponds to the natural frequency of the wheelset and appears only at the moment the wheelset loses adhesion.
The possibility of reducing the oscillations has been described by Bieker, Dede, Dörner, Klein and Pusnik [6]. The authors considered the idea of using brake discs elastically connected to the wheels to serve as additional oscillators. Suitably configured, these oscillators should start to oscillate instead of the wheelset when adhesion is lost, so that the axle should not twist at all. The brake discs can then serve as absorbers of the wheelset torsional oscillations. This works only on the condition that all parameters are well tuned, which may be problematic because the wheel diameter changes over the vehicle's lifetime, altering the wheelset natural frequency. Another problem is the fact that the natural frequency depends on the tangential forces between the wheel and the rail, the concept of the drive, and the velocity of the vehicle [2,7,8].
Idealization used in the presented approach to the modelling of torsional oscillations
The problem seems to be related only to the wheelset, but more factors may influence it. Because of the complexity of the models, these factors must either be considered in simplified form with a certain degree of idealization or, because of the difficulty of their mathematical description, be fully neglected. For example, the vehicle body may interact with an elastically connected two-axle bogie. The wheel force acting between the wheel and the rail and the pulling force may then vary during operation. These influences are not considered in the presented model, which is used for the further analysis. The main attention is given to the model of the whole drive chain and the adhesion between the wheel and the rail.
[Figure: Block scheme of the model of the torque transmission from the traction motor to the wheels.]
The design scheme of the drive chain, including a cross-sectional view, is shown in Figure 2. To support the creation of the models, a decomposition into basic elements has been made, as can be seen in the middle part of Figure 2. The drive chain is here divided into fundamental parts, namely a traction motor, a gearbox, clutches, a hollow shaft, an axle and wheels.
The substitute concept of the torque transmission, shown schematically in the lower part of the figure, is the fundamental basis for the creation of the mathematical model. The components transmitting rotary motion are idealized, and their mechanical properties are concentrated into fundamental elements. These elements represent moments of inertia, torsional stiffness and torsional damping. The two rails at the bottom of the picture represent the requirement to include the adhesion model.
The model of the drive chain dynamics is composed of three connected subunits:
- a model of transmission of the torque to the wheelset, including the model of the gearbox,
- a model of transmission of the torque into the adhesion force between the wheel and the rail,
- a model of the traction motor.
The creation of the model was based on the idea that the schematically shown mechanical components are massless, rigid and offer no resistance to rotary motion. All such resistances are introduced via connected elements that represent the concentrated properties.
These elements are distinguished by different line thicknesses or bordered with a dashed line in Figure 2. Continuously distributed properties (moment of inertia, torsional stiffness, torsional damping) are replaced with three idealized elements: a massless spiral spring with a constant torsional stiffness c, a rotary hydraulic damper with an incompressible fluid and damping coefficient b, and a rotating disc with the moment of inertia J. These three elements create a fictitious component that is connected into the modelled component of the drive without disrupting the transmission of the rotary motion. The respective component is then considered ideally rigid and massless.
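To illustrate how the lumped elements J, c and b act together, the following sketch simulates a minimal two-inertia torsional oscillator: two discs standing in for the wheels, connected by an elastic, damped shaft standing in for the axle. The parameter values, the initial opposite-phase disturbance, and the reduction to only two inertias are illustrative assumptions and do not reproduce the drive model of this paper:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped parameters (not taken from the paper)
J1, J2 = 60.0, 60.0      # moments of inertia of the two wheels [kg m^2]
c = 1.5e6                # torsional stiffness of the axle [N m / rad]
b = 200.0                # torsional damping of the axle [N m s / rad]

def rhs(t, y):
    phi1, w1, phi2, w2 = y
    m_axle = c * (phi1 - phi2) + b * (w1 - w2)   # internal axle torque
    return [w1, -m_axle / J1, w2, m_axle / J2]

# Opposite-phase velocity disturbance of the two wheels (rad/s)
y0 = [0.0, 0.5, 0.0, -0.5]
sol = solve_ivp(rhs, (0.0, 0.2), y0, max_step=1e-4)

twist = sol.y[0] - sol.y[2]
f_n = np.sqrt(c * (1.0 / J1 + 1.0 / J2)) / (2.0 * np.pi)
print(f"undamped torsional natural frequency: {f_n:.1f} Hz")
print(f"maximum axle twist: {np.abs(twist).max():.2e} rad")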
The equation description, based on the substitute concept above, uses an increment interpretation of all variables characterizing the rotary motion (the angle of rotation). This means all values are increments, i.e. deviations from the nominal values. The usually used symbol Δ has been suppressed to shorten the notation.
The increment model of the drive is described by the following equations. Symbols in brackets indicate the component to which a value belongs: rotor (R), gearbox (G), right torsional clutch (C_r), hollow shaft (HP), left wheel (W_l), axle (S) and right wheel (W_r). The model of the transmission of the tangential forces between the wheel and the rail is based on Polach's theory [9]. Tangential forces are transmitted via friction, which is bound up with a slight slip of the wheels on the rails. The amount of friction, and hence the tangential forces, is based on a friction coefficient μ. The relation between the slip and the friction is expressed via the slip characteristic, which has been analytically described for simulation purposes as a sum of two exponential functions. The variability of the slip conditions for a dry or wet rail can be well expressed with appropriately chosen parameters μ_MAX, μ_RED, τ_1, τ_2. The slip s is defined as a dimensionless ratio of the perimeter speed of the wheel and the vehicle velocity v(t). This adhesion model is based on the assumption that the vehicle is in a steady state: the vehicle moves with a constant velocity v_0, and the pulling force of the vehicle, divided among the single wheels, is constant, as are the tangential forces T_w0, which correspond to the engine torque M_w0. The slip then has a steady value s_0, so that the friction coefficient μ_0 = g^(-1) T_w0 / m_w corresponds to the requested value of the tangential force T_w0 for the vehicle mass per wheel m_w.
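Since the two-exponential expression itself is not reproduced above, the following snippet only sketches the general shape of such a slip-adhesion characteristic: friction rising steeply at small slip and decaying toward a reduced level at large slip. The functional form and the parameter values (mu_max, mu_red, tau1, tau2) are assumptions chosen to mimic that behaviour, not the equation used in the paper:

import numpy as np

def adhesion_mu(s, mu_max=0.35, mu_red=0.20, tau1=0.005, tau2=0.15):
    # Illustrative slip-adhesion characteristic built from two exponentials:
    # a fast term creating the initial rise and a slow term describing the
    # decay toward a reduced friction level at large slip.
    s = np.abs(s)
    return mu_max * (1.0 - np.exp(-s / tau1)) * np.exp(-s / tau2) \
           + mu_red * (1.0 - np.exp(-s / tau2))

for s in np.linspace(0.0, 0.5, 6):
    print(f"s = {s:.2f}  mu = {adhesion_mu(s):.3f}")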
In the non-steady state the angular velocity of the wheel differs from its steady value by an increment. This velocity is linked, via the angular acceleration and the moment of inertia J_w, to the imbalance between the dynamic change of the torque on the wheel ΔM_w(t), transmitted through the drive, and the change of the torque that arises from changes of the slip, m_w g Δs(t).
The calculation of the slip differences must then be complemented by the dynamics of the velocity changes, which depend on the drive resistances. The drive resistance (rolling and aerodynamic resistance) of a single vehicle or of the whole train can be expressed with a parabolic relation m_s (a + b v(t) + c v^2(t)). This drive resistance must be linearized according to the increment form of the model and then divided among the single drives or wheels.
The model of the asynchronous motor is based on a real traction motor of type ML 4550 K/6 with the following parameters:
- power output: 1,600 kW
- nominal speed: 1,825 min^-1
- maximum torque: 10,000 N·m
- nominal torque: 8,400 N·m
For the initial simulations, the complex model of the traction motor has been replaced with a linear approximation of the relation between torque and speed, owing to its fast dynamics.
Results
The experiments focused on the situation when the torque is rapidly changed from its steady state. This corresponds to the situation when adhesion decreases and the stick-slip protection intervenes (the motor torque is decreased or increased at that moment). The resulting oscillations have relatively small amplitudes, but they may become dangerous when the torque control changes the torque repeatedly in quick succession during operation of the vehicle. As a result, very high oscillations may build up in the drive of the vehicle.
Conclusions
From the series of experiments carried out, those focused on the angular velocities of the wheels are presented. Changes in these velocities can be expected as a possible consequence of a rapid change of the motor torque. The motor torque is the most important and most easily exploitable means of suppressing the occurrence of oscillations, by means of a control circuit using the motor torque as the manipulated variable.
The created simulation model includes a model of running behaviour that respects changes of the adhesion conditions, as well as a detailed model of the asynchronous motor. This allows the influence of the level of knowledge of the parameters to be studied. The model is then used to assess the impact of different design settings or control actions required for the reduction of the torsional oscillations.
Figure 1. The detail of a relative rotation of the wheel disc on a modern electric locomotive (see the yellow indication mark in the red ellipse on the right part of the picture) (source [1]).
Figure 2. Wheelset - transition from the design sketch to the substitute idealized concept. The angular velocities of the wheels cause changes of the tangential forces, which depend on them.
Figure 4. Slip characteristic showing the relation between the slip and the friction, described via eq. (11).
Figure 5. Time courses of the oscillations during a change of the motor torque. Torque increment is 500 N·m.
Figure 6. Time courses of the oscillations during a change of the motor torque. Torque increment is 1000 N·m.
Cervical Cancer Prevalence, Incidence and Mortality in Low and Middle Income Countries: A Systematic Review
Introduction: Cervical cancer rates vary across the world, being highest in Eastern Africa (including Zimbabwe) and lowest in Western Asia. It is the second most common type of cancer in women in the South East Asia region and a major cause of cancer deaths among women of low and middle income countries (LMICs) like Nepal. This review is an attempt to make a comprehensive report of prevalence, incidence and mortality of cervical cancer in LMICs. Methods: The review was conducted applying a computerized search with the Medical Subject Heading (MeSH) major topics “Cervical Cancer”, “Cervical neoplasm” “Epidemiology”, (“prevalence” OR “incidence” OR “mortality”) and “HPV” OR “Human papillomavirus” as MeSH subheading. The search limits were: language (“English”), LMICs, dates (articles published from “1st January 2000 to 31st December 2015”), and species (“Humans”). The search was supplemented by cross-referencing. Publications that met the inclusion criteria were included in the synthesis. Results: Among the 20 studies reviewed; seven were from Africa, seven from Asia, three from South America, and one each from North America, Europe and Oceania. The review found the highest reported age standardized incidence rate as 17.9/100,000/year in Zimbabwe in 2000 and the lowest as 0.11/100,000/year in China in 2006. One study of Nigeria revealed a cervical cancer prevalence of 5.0 per 1,000 in 2012 in the 25-64 year age group. Further, the highest reported age standardized mortality rate was 16/100,000/year in India in 2015 and the lowest 1.8/100,000/year in Colombia in 2013. In addition, coitarche, tobacco smoking, number of sexual partners and family history of cervical cancer were reported as significant risk factors. Conclusion: The study provides a review of reported prevalence, incidence and mortality of cervical cancer in LMICs from 1st January 2000 to 31st December 2015. The scarcity of information reveals a substantial need for further studies on cervical cancer prevalence, incidence and mortality with associated risk factors in LMICs.
Introduction
Cervical cancer arises in the cells of the cervix, the lower part of the uterus that connects to the vagina (Fritz et al., 2000). Recent global figures estimate 527,624 new cases of and 265,672 deaths from cervical cancer annually. Cervical cancer rates are highest in Eastern Africa (including Zimbabwe) and lowest in Western Asia. However, it is the second most common type of cancer in women in the South East Asia region and a major cause of cancer deaths among women of low and middle income countries (LMICs) like Nepal. The International Agency for Research on Cancer (IARC, 2012) estimated the age standardized incidence rate of cervical cancer in Nepal as 19.0 per 100,000 and the age standardized mortality rate as 12.0 per 100,000 (Ferlay et al., 2013).
Studies have shown that sexual behavior at an early age and the increasing incidence of human papillomavirus (HPV) infection cause cervical cancer incidence to increase among younger women (Bosch et al., 1995). Studies have estimated that over 80% of sexually active women will be infected with genital HPV at some point in their lifetime (Syrjanen et al., 1990). Oncogenic HPV infection is the major etiological agent of cervical cancer, and 70% of cases are caused by HPV types 16 and 18 (Munoz et al., 2004; Schiffman et al., 2007).
In spite of the high disease burden, only a limited number of studies have been conducted on the prevalence, incidence and mortality of cervical cancer in LMICs, and no systematic review exists in this field. Our aim is to investigate the prevalence, incidence, mortality and major risk factors of cervical cancer reported by articles published from 1st January 2000 to 31st December 2015 in LMICs. We believe this systematic review will constitute valuable reference material for epidemiologists, health policy makers and researchers on cervical cancer.
Design
We conducted the review applying a computerized systematic search to identify the prevalence, incidence and mortality of cervical cancer in LMICs. The inclusion criteria of articles were: original article, studies reporting prevalence, incidence, and mortality rates, articles in English and studies conducted on humans. Figure 1 describes the inclusion and exclusion criteria as well as extraction process.
Data extraction
We performed a three-stage selection for data extraction. In the first stage, a search of the online Medical Literature Analysis and Retrieval System (MEDLINE) database was performed with a combination of Medical Subject Heading (MeSH) terms: "Cervical Cancer" and "Cervical Neoplasm" as major topics and "Epidemiology", ("prevalence" OR "incidence" OR "mortality") and "HPV" OR "Human papillomavirus" as subheadings. A similar search was also performed in Scopus and CINAHL. The search limits were: language ("English"), dates (articles published from "1st January 2000 to 31st December 2015"), and species ("Humans"). A total of 21,444 articles were obtained from these searches.
Furthermore, the results were narrowed down by adding the name of each LMIC as defined by the World Bank. According to the latest revision of the World Bank classification, a total of 135 countries are listed as LMICs (The World Bank, 2016). Thus, at the end of the first stage, we obtained 2,413 specific articles.
In the second stage we reviewed titles and abstracts using predefined screening criteria. Exclusion criteria were: studies outside of LMICs, studies with no information about cervical cancer, reviews, reports, and duplicates. If the required information was not available in the abstract, we stopped reviewing further. Studies not satisfying the inclusion criteria were excluded at this stage.
In the third stage, we used the following exclusion criteria for further filtering of the 190 papers from stage two: full text not in English, full text not available, not a prevalence/incidence/mortality study, and study reporting pre-cancer. Finally, we obtained 20 papers, which were selected for further review and analysis (Figure 1). The characteristics recorded for each study included: country of origin, author's name and year of publication, study duration, age group, cervical cancer prevalence per 1,000 per year, crude incidence, age standardized incidence rate, crude mortality, age standardized mortality rate per 100,000 per year, and classification of cancer.
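Several of the extracted measures are age standardized rates. For orientation, the sketch below illustrates direct age standardization: age-specific rates are weighted by a standard population. The three age bands, counts and weights are invented for the example (real analyses typically use a published standard population), so none of the numbers come from the reviewed studies:

# Direct age standardization of an incidence rate (illustrative numbers only)
cases = [4, 25, 40]                  # incident cases per age band
person_yrs = [50000, 40000, 20000]   # person-years at risk per age band
std_weights = [0.55, 0.30, 0.15]     # standard population weights (sum to 1)

age_specific = [c / py for c, py in zip(cases, person_yrs)]
asr = sum(w * r for w, r in zip(std_weights, age_specific)) * 100_000

print(f"age standardized rate: {asr:.1f} per 100,000 per year")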
Ethical consideration
This article is based on published data, and hence ethical approval is not required.
Cervical cancer prevalence and risk factor
Only one study reported prevalence as 5.0 per 1000 in 2012 (Durowade et al., 2012) in Nigeria. In addition, the study reported coitarche, tobacco smoking, number of sexual partners and family history of cervical cancer as significant risk factors (Durowade et al., 2012).
Only one study, from Nigeria, reported a prevalence of 5.0 per 1,000 in 2012 in the 25-64 year age group (Durowade et al., 2012), whereas the IARC country-specific estimate of cervical cancer prevalence for Nigeria was 15.6 per 1,000 per year (Bray et al., 2013). Further, the study from Nigeria reported coitarche, tobacco smoking, number of sexual partners and family history of cervical cancer as significant risk factors (Durowade et al., 2012). There is strong epidemiologic evidence indicating that HPV is the major etiology of cervical cancer (Bosch et al., 1995; Chichareon et al., 1998). In addition, the onset of sexual intercourse at an early age and a greater number of lifetime sexual partners raise the risk of cervical cancer (Brinton et al., 1987). Furthermore, long-term use of oral contraceptives may be a cofactor that increases the risk of cervical cancer by up to four-fold in women who are positive for cervical HPV DNA (Moreno et al., 2002).
Cervical cancer mortality
Four studies reported cervical cancer crude mortality rates (Kalakun and Bozzetti, 2005; Sandagdorj et al., 2010; Naumovic et al., 2015; Wang, 2015), whereas seven studies reported age standardized mortality rates (Sandagdorj et al., 2010; Dikshit et al., 2012; Kuehn et al., 2012; Gonzaga et al., 2013; Pineros et al., 2013; Du et al., 2015; Naumovic et al., 2015). Another of the reviewed studies reported that high parity and poor genital hygiene conditions were the main cofactors for cervical cancer in populations with prevalent HPV infections (Bayo et al., 2002). Therefore, the role of persistent infection with oncogenic types of HPV in the etiology of cervical cancer has encouraged the evaluation of HPV testing as a screening tool (IARC, 1995; Bosch et al., 2002; Franco, 2003). Cigarette smoking is the only nonsexual behavior consistently and strongly correlated with cervical cancer, independently increasing risk two- to four-fold (Winkelstein, 1990).
In the absence of nationwide screening programs, there are disparities in screening, treatment, and ultimately survival. Variation in study population, sample size and time frame accounts for the variation in the reported incidence across the studies in our review (Chokunonga et al., 2000; Banda et al., 2001; Wabinga, 2002; Sriamporn et al., 2003; Chen, 2006; Gibson et al., 2008; Missaoui et al., 2010; Sandagdorj et al., 2010; Dhillon et al., 2011; Kuehn et al., 2012; Chokunonga et al., 2013; Du et al., 2015) and relative to the IARC country-specific estimates of cervical cancer (Ferlay et al., 2013). The highest age standardized incidence rate is 17.93/100,000/year in Zimbabwe in 2000; Zimbabwe is one of the East African countries with the highest cervical cancer incidence rates.
The highest reported age standardized mortality rate among LMICs was 16/100,000/year in India in 2015 (Pineros et al., 2013). India also has the highest age standardized mortality rate in South-East Asia, followed by Myanmar and Nepal (Ferlay et al., 2013). An analysis of population-based surveys indicates that coverage of cervical cancer screening in developing countries is 19% compared to 63% in developed countries, ranging from 1% in Bangladesh to 73% in Brazil (Gakidou, 2009). Studies have shown that population-based screening programs may be an effective method to prevent cervical cancer deaths in developing countries as well (Sankaranarayanan et al., 2007). According to the WHO, 80% to 100% coverage of the target population with Pap smear screening, together with adequate diagnosis and treatment, would allow a 60% to 90% reduction in cervical cancer (Boyle and Levin, 2008). A large cluster randomized trial from India shows that a single round of HPV screening can significantly reduce the numbers of advanced cervical cancers and deaths from cervical cancer in a low resource setting (Sankaranarayanan et al., 2009). In addition, studies suggest that vaccination programs covering the seven most common HPV types would prevent about 87% of cervical cancers worldwide (Munoz et al., 2004).
Unfortunately, the majority of women in developing countries still do not have access to cervical cancer prevention programmes, which results in an increased cervical cancer disease burden (Ferlay et al., 2013). Furthermore, low adherence among women to screening and diagnosis of chronic non-communicable diseases is one of the most important factors to be considered for cervical cancer screening and diagnosis in LMICs (Hodgkins and Orbell, 1998; Floyd et al., 2000; Shrestha et al., 2013).
Limitations
Our study is limited to MEDLINE, CINAHL and Scopus database searches. Thus, it may not cover all the studies conducted in this field, particularly those published in non-indexed local journals, non-English publications, and open access platforms not covered by MEDLINE, CINAHL and Scopus. Moreover, we did not assess the publication bias of the articles, as this is not relevant in the context of prevalence, incidence and mortality studies. There is a chance of considerable underreporting of the prevalence, incidence or mortality rates in our study, particularly since the capacity for cancer diagnosis and data capture is limited in LMICs.
In conclusion, the review reported a prevalence of cervical cancer of 5.0 per 1,000 in 2012 in Nigeria among the 25-64 year age group, the highest age standardized incidence rate as 17.93/100,000/year in Zimbabwe in 2000, and the lowest as 0.11/100,000/year in China in 2006. Further, the age standardized mortality rate was reported highest as 16/100,000/year in India in 2015 and lowest as 1.8/100,000/year in Colombia in 2013. This review provides useful information for future prevention and clinical management guidelines on cervical cancer in LMICs but, more importantly, reveals the substantial need for further studies on cervical cancer prevalence, incidence and mortality with associated risk factors in LMICs.
Association of shift work with incident dementia: a community-based cohort study
Background Some observational studies had found that shift work would increase risks of metabolic disorders, cancers, and cardiovascular diseases, but there was no homogeneous evidence of such an association between shift work and incident dementia. This study aimed to investigate whether shift work would increase the risk of dementia in a general population. Methods One hundred seventy thousand seven hundred twenty-two employed participants without cognitive impairment or dementia at baseline recruited between 2006 and 2010 were selected from the UK Biobank cohort study. Follow-up occurred through June 2021. Shift work status at baseline was self-reported by participants and they were categorized as non-shift workers or shift workers. Among shift workers, participants were further categorized as night shift workers or shift but non-night shift workers. The primary outcome was all-cause dementia in a time-to-event analysis, and the secondary outcomes were subtypes of dementia, including Alzheimer’s disease, vascular dementia, and other types of dementia. Results In total, 716 dementia cases were observed among 170,722 participants over a median follow-up period of 12.4 years. Shift workers had an increased risk of all-cause dementia as compared with non-shift workers after multivariable adjustment (hazard ratio [HR], 1.30, 95% confidence interval [CI], 1.08–1.58); however, among shift workers, night shift work was not associated with the risk of dementia (HR, 1.04, 95% CI, 0.73–1.47). We found no significant interaction between shift work and genetic predisposition to dementia on the primary outcome (P for interaction = 0.77). Conclusions Shift work at baseline was associated with an increased risk of all-cause dementia. Among shift workers, there was no significant association between night shift work and the risk of dementia. The increased incidence of dementia in shift workers did not differ between participants in different genetic risk strata for dementia. Supplementary Information The online version contains supplementary material available at 10.1186/s12916-022-02667-9.
Unfortunately, trials aiming to treat dementia have mostly ended in failure [4]. In the absence of effective therapeutic agents, controlling risk factors is crucial for the primary and secondary prevention of dementia [5]. Various genetic and environmental risk factors have been found to contribute to the development of dementia, such as apolipoprotein E ε4 carrier status [6], obesity [7], diabetes [8], and unhealthy lifestyles (e.g., smoking, alcohol consumption, and lack of physical activity) [9-11].
Shift work, in which an individual's normal hours of work fall, in part, outside the normal daytime working period, thereby disrupting the circadian rhythm, has become increasingly common with socioeconomic development [12]. Shift work is usually accompanied by long hours, low income, a poor working environment, and increased subjective strain [13,14], and may result in a series of health problems. Prior studies have found that shift work was associated with a 23% increased risk of myocardial infarction [15], a ~20% increased risk of breast cancer [16,17], a 9-40% increased risk of type 2 diabetes [18,19], and a 5% increased risk of ischemic stroke [15], some of which could contribute to the development of dementia. Moreover, recent studies reported that acute sleep deprivation leads to increased brain β-amyloid (Aβ) burden and blood levels of t-tau [20,21], from which it could be inferred that long-term shift work might lead to sleep disturbances, leaving those workers with a higher incidence of dementia. Taking these negative impacts into consideration, shift work may be an important risk factor for dementia.
However, there is no consistent evidence on the association between shift work and incident dementia [22]. The Danish Nurse Cohort Study by Jørgensen et al., involving more than 8,000 nurses from 1993 to 2018, showed that persistent night shift work may increase the risk of dementia [23]. In contrast, another cohort study by Nabe-Nielsen et al., involving 4,766 male employees in Denmark from 1970 to 2014, found no significant association between shift work or long working hours and the risk of dementia [24]. Most previous studies recruited gender- or occupation-specific participants, and evidence from a more general population is needed to investigate the relationship between shift work and incident dementia. In addition, genetic predisposition to dementia may interact with environmental factors and alter the association between shift work and dementia, and competing events (e.g., death), which have not been considered in previous studies, might lead to underestimating or overestimating the impact of shift work on dementia.
Accordingly, we conducted a community-based cohort study in UK Biobank to address whether shift work would increase the risk of all-cause dementia or dementia subtypes in a general population.
Data source and participants
For this community-based cohort study, data were extracted from the public UK Biobank Resource [25]. The UK Biobank is a prospective cohort study with over 500,000 community-dwelling participants across the UK aged 37-73 years when recruited between 2006 and 2010 [26].
Participants who indicated they were in paid employment or self-employed at baseline were included in our study. We excluded those who (1) reported previous cognitive impairment or dementia, (2) lacked information about shift work or night shift work status, or (3) had no genetic data.
Shift work definition
The definition of shift work in UK Biobank was "a schedule falling outside of 9 am to 5 pm; by definition, such schedules involved afternoon, evening, or night shifts or rotating through these shifts," while night shift work was defined as "a work schedule that involves working through the normal sleeping hours, for instance, working through the hours from 12 to 6 am." The UK Biobank first asked participants employed at baseline to report whether their current main job involved a shift schedule; if so, participants were further asked whether night shifts were involved. For both questions, response options were never/rarely, sometimes, usually, or always. We derived each individual's current shift work status from the responses to the two questions and categorized them as "non-shift workers" or "shift workers," with "non-shift workers" defined as working between the hours of 9 am and 5 pm. Among shift workers, participants were categorized as "night shift workers" or "shift but non-night shift workers," with "non-night shift workers" defined as working between the hours of 5 pm and 12 am. Among night shift workers, participants were further categorized as "some night shift workers" or "usual/permanent night shift workers."
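To make the derivation concrete, the sketch below shows one way the two questionnaire responses could be mapped onto the categories described above. The function name and response coding are our own illustrative assumptions, not UK Biobank field names:

from typing import Optional

def classify_shift_status(shift_response: str, night_response: Optional[str]) -> str:
    # Responses for both items are 'never/rarely', 'sometimes', 'usually', or 'always'
    if shift_response == "never/rarely":
        return "non-shift worker"
    if night_response in (None, "never/rarely"):
        return "shift but non-night shift worker"
    if night_response == "sometimes":
        return "some night shift worker"
    return "usual/permanent night shift worker"  # 'usually' or 'always'

print(classify_shift_status("sometimes", "always"))  # usual/permanent night shift worker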
Outcomes
The primary outcome was all-cause dementia in a time-to-event analysis, and the secondary outcomes included AD, VD, and other types of dementia. The electronic health records (EHRs), a data linkage to hospital inpatient admissions and death registries, include primary or secondary events in England, Scotland, and Wales. A previous comparison between EHRs and expert clinical adjudicators in the UK Biobank showed that the overall positive predictive value for dementia diagnosis is 82.5% [27], suggesting that the EHRs are suitable for assessing the association between risk factors and dementia. We used the algorithms provided by the UK Biobank to identify dementia cases, which were generated based on EHRs using ICD-9 and ICD-10 codes (Additional file 1: Table S1). In the time-to-event analysis, the date of incident dementia during follow-up was set as the earliest date of dementia codes recorded, regardless of the source used. As hospital admission data were available until 30 June 2021 at the time of analysis, we censored the disease-specific outcome analysis at this date, the date of the first disease incidence, or death, whichever occurred first. Mortality data were available for participants until 31 May 2021.
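For illustration, follow-up time and event status under this censoring scheme could be derived roughly as in the sketch below; the file name, column names, and pandas layout are assumptions, not UK Biobank field names.

```python
import pandas as pd

ADMIN_CENSOR = pd.Timestamp("2021-06-30")   # end of available hospital admission data

# Hypothetical per-participant table with baseline, first dementia code, and death dates.
df = pd.read_csv("participants.csv", parse_dates=["baseline_date", "dementia_date", "death_date"])

# Censor at the earliest of: first dementia code, death, or the administrative cut-off.
end_date = df[["dementia_date", "death_date"]].min(axis=1).fillna(ADMIN_CENSOR)
end_date = end_date.clip(upper=ADMIN_CENSOR)

df["incident_dementia"] = (df["dementia_date"].notna()
                           & (df["dementia_date"] <= ADMIN_CENSOR)).astype(int)
df["follow_up_years"] = (end_date - df["baseline_date"]).dt.days / 365.25
```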
Polygenetic risk score for dementia
We developed a polygenic risk score (PRS) to quantify the genetic predisposition to dementia using single-nucleotide polymorphisms (SNPs) associated with dementia, based on previous genome-wide association studies that did not include UK Biobank participants [28]. Information on the 23 selected SNPs is listed in Additional file 1: Table S2. Individual SNPs were coded as 0, 1, or 2 according to the number of risk alleles. The PRS was formulated as the sum of the number of risk alleles at each locus multiplied by the respective regression coefficient, divided by the number of SNPs, using PRSice-2 [29,30]. The PRS was then divided into quartiles and categorized as low (quartiles 1 to 2), intermediate (quartile 3), and high (quartile 4) genetic predisposition to dementia (Additional file 1: Table S3).
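The scoring formula can be written out directly. The short sketch below reproduces it in plain NumPy for illustration only (the study itself used PRSice-2); the allele counts and effect sizes are made up.

```python
import numpy as np

# Hypothetical data: rows = participants, columns = selected risk SNPs,
# entries = number of risk alleles (0, 1 or 2) at each locus.
alleles = np.array([[0, 1, 2, 1],
                    [2, 2, 1, 0],
                    [1, 0, 0, 1]])

# Hypothetical per-SNP regression coefficients from the source GWAS.
betas = np.array([0.15, 0.08, 0.30, 0.05])

# PRS = sum over loci of (risk alleles x coefficient), divided by the number of SNPs.
prs = (alleles * betas).sum(axis=1) / alleles.shape[1]

# Categorize by quartile: low (Q1-Q2), intermediate (Q3), high (Q4).
q2, q3 = np.quantile(prs, [0.50, 0.75])
category = np.where(prs > q3, "high", np.where(prs > q2, "intermediate", "low"))
print(prs, category)
```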
Covariates
Possible confounding variables include: age; sex; ethnicity (white/not white); education, categorized as higher (college/university degree or other professional qualification), upper secondary (second/final stage of secondary education), lower secondary (first stage of secondary education), vocational (work-related practical qualifications), or other; socioeconomic status, categories derived from Townsend deprivation index quartiles 1 (low), 2 to 3 (intermediate), and 4 (high); diabetes mellitus (DM); hypertension (HTN); stroke; coronary heart disease (CHD); cholesterol-lowering medication; antihypertensives; aspirin; body mass index (BMI); systolic blood pressure (SBP); total cholesterol (TC); triglycerides (TG); high-density lipoprotein (HDL); low-density lipoprotein (LDL); glycated hemoglobin (HbA1c); smoking status (current or no current smoking); alcohol consumption; healthy diet, based on consumption of at least 4 of 7 commonly eaten food groups following recommendations on dietary priorities [31]; regular physical activity, defined as meeting the 2017 UK Physical activity guidelines of 150 min of moderate activity per week or 75 min of vigorous activity; years of work; sleep duration, categorized as ≤ 6, 7-8, and ≥ 9 h/day; chronotype preference (definitely a "morning" person, more a "morning" than "evening" person, more an "evening" than a "morning" person, and definitely an "evening" person).
Statistical analysis
For baseline characteristics, continuous variables conforming to a normal distribution were described by their means and standard deviations, while those not conforming to a normal distribution were described by medians and interquartile ranges. Categorical variables were described as counts and percentages. Univariate comparisons between groups were performed using Student's t, Mann-Whitney, or χ² tests according to the type and distribution of the variables.
In the primary analysis, time-to-event analysis for all-cause dementia was performed using the Cox proportional hazard regression model, and we constructed several models that included different covariates to estimate hazard ratios (HR) and their 95% confidence intervals (95% CI). Model 1 was adjusted for age at baseline and sex. Model 2 was adjusted for terms in model 1, ethnicity, education, and socioeconomic status. Model 2 was chosen as the primary model.
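As an illustration of this modelling step, the sketch below fits the primary model (model 2) with the lifelines package in Python; the original analysis was run in R, and the file and column names here are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis data set: one row per participant with follow-up time in
# years, a dementia event indicator, the exposure, and the model-2 covariates.
df = pd.read_csv("ukb_shiftwork_dementia.csv")

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="follow_up_years",
    event_col="incident_dementia",
    formula="shift_worker + age_at_baseline + sex + ethnicity + education + socioeconomic_status",
)

# Hazard ratio and 95% CI for shift work (exp(coef) columns of the summary table).
print(cph.summary.loc["shift_worker",
                      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```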
We used a fixed-sequence procedure for multiple comparisons, which does not inflate the type I error. We sequentially compared differences in the incidence of dementia between shift workers and non-shift workers, between night shift workers and shift but non-night shift workers, and between some night shift workers and usual/permanent night shift workers. In the subgroup analysis, which was designed to explore whether the impact of shift work on dementia varied across subgroups defined by age at baseline (≤ 60, > 60 years), ethnicity, sex, socioeconomic status, sleep duration, and genetic predisposition to dementia by PRS, the P value for interaction was calculated by tests of exposure-by-covariate interaction in the Cox models. The secondary outcomes of dementia subtypes were analyzed using the same Cox models as in the primary analysis.
We conducted several sensitivity analyses. First, we further adjusted for additional covariates. Model 3 was adjusted for the terms in model 2 plus DM, HTN, stroke, CHD, cholesterol-lowering medication, antihypertensives, aspirin, BMI, SBP, TC, TG, HDL, LDL, HbA1c, smoking status, alcohol consumption, healthy diet, and regular physical activity. Model 4 was adjusted for the terms in model 3 plus genetic predisposition to dementia by PRS category. Model 5 was adjusted for the terms in model 4 plus years of work. Model 6 was adjusted for the terms in model 5 plus sleep duration. Model 7 was adjusted for the terms in model 6 plus chronotype preference. Second, we analyzed the impact of shift work on dementia using Fine-Gray methods accounting for death as a competing risk, to assess the robustness of our findings [32]. Third, we repeated the analysis after excluding subjects with follow-up time < 1 year or incident dementia within 1 year of baseline. Fourth, we performed the same analysis in the dataset containing 278,270 participants, using multiple imputation by chained equations with 5 imputations to impute missing values.
All P values were reported as two-sided tests with significance defined as P < 0.05. Statistical analyses were performed in R software (Version 4.0.3, R Core Team, https://www.r-project.org).

Baseline characteristics

Baseline characteristics of the study population are presented in Tables 1 and 2. Participants who reported shift work (vs. non-shift workers) were younger, more likely to be men, had a lower education level and a higher Townsend deprivation index, and had a higher prevalence of DM and HTN. Shift workers also tended to take more cholesterol-lowering medication, antihypertensives, and aspirin, were less likely to be current smokers, had lower alcohol consumption and a less healthy diet, but were more physically active and had a shorter sleep duration (Table 1).
Shift work or night shift work and dementia
The incidence of the primary and secondary outcomes is shown in Additional file 1: Table S4. We observed 716 dementia cases during a median follow-up period of 12.4 years, of whom 134 (18.7%) and 582 (81.3%) were in the shift worker group and the non-shift worker group, respectively. Shift workers had a higher incidence of all-cause dementia compared with non-shift workers (unadjusted-HR, 1.21; 95% CI, 1.00 to 1.46; P = 0.04; Fig. 2). After adjusting for confounders, the risk of all-cause dementia among shift workers remained significantly higher than that among non-shift workers (adjusted-HR, 1.30; 95% CI, 1.08 to 1.58; P = 0.006; Table 3). Among shift workers, we did not observe a significant association between night shift work and the risk of dementia after multivariable adjustment in the Cox model (adjusted-HR, 1.04; 95% CI, 0.73 to 1.47; P = 0.83; Table 3), and the sensitivity analysis yielded similar results (Additional file 1: Table S5-8).
Subgroup and sensitivity analyses
As shown in Fig. 3, the impact of shift work on dementia did not differ among participants who were in the low-, intermediate-, or high-PRS subgroups (P for interaction = 0.77). Similarly, no significant interaction was observed in the subgroups of age at baseline, ethnicity, sex, socioeconomic status, and sleep duration.
In order to assess the robustness of our findings, we conducted several sensitivity analyses, including the models further adjusted for genetic predisposition to dementia by PRS, years of work, sleep duration, and chronotype category, the Fine-Gray methods under consideration of the competing risk of death, the models excluding subjects with follow-up time < 1 year or incident dementia < 1 year from baseline and the models of the imputed dataset. The results showed no substantial change of the impact of shift work on dementia (Additional file 1: Table S5-8).
Discussion
In this community-based cohort study in UK Biobank, involving 170,722 participants without cognitive impairment or dementia at baseline, we found that shift workers at baseline had a 30% increased risk of all-cause dementia as compared with non-shift workers during a median follow-up period of 12.4 years; however, among shift workers, there was no significant association between night shift work and the risk of dementia. In addition, to the best of our knowledge, it was the first study to examine the interaction between shift work and genetic predisposition to dementia, and we found that the risk of dementia associated with shift work did not significantly differ among participants in different genetic risk strata of dementia.
Although some health problems caused by shift work, such as metabolic disorders and ischemic stroke, may contribute to the development of dementia, the mechanism by which shift work causes cognitive impairment remains unclear. We infer that sleep disturbance and disrupted circadian rhythms might be the main causes of cognitive impairment among shift workers [33]. Systematic reviews have shown an increased risk for shift workers to develop chronic sleep disturbance [34,35], and the prevalence of shift work sleep disorder has been estimated at 10-23% in shift workers [36]. Extracellular levels of metabolites, including amyloid β, increase in the brain during wakefulness and are reduced during sleep; sleep disturbances could therefore result in reduced clearance of these metabolites [37], which contributes to the pathogenesis of AD. Cognitive performance also deteriorates with sleep disturbance [38,39].
Most regulatory hormones, e.g., cortisol and melatonin, show strong diurnal rhythms, and disturbed sleep is often related to a mild temporary increase in the major neuroendocrine stress systems [40]. Experimental studies showed that disturbed sleep and the altered light exposure typical of shift workers could lead to acute circadian disruption and thereby influence the normal secretion of these regulatory hormones [41]. Studies of patients with AD have found a higher prevalence of melatonin secretion rhythm disorders in these patients [42,43]. Animal experiments showed that melatonin can inhibit the expression of amyloid-β protein in the hippocampal area of model rats with senile dementia [44]. Activation of the type 1 melatonin receptor exerted anti-amyloidogenic and anti-inflammatory effects in the brains of AD mice and improved cognitive deficits [45]. Shift work was also associated with abnormalities in brain structure that have been observed in dementia pathophysiology, hinting at the brain mechanisms underlying the effect of shift work on dementia risk [46]. In addition, shift work has been linked to lower socioeconomic status, which is consistent with our finding that participants who reported shift work (vs. non-shift workers) had higher Townsend deprivation indexes (i.e., lower socioeconomic status), and may lead to disruption of social rhythms, that is, a conflict between work and family demands. Thus, shift workers may suffer from higher psychosocial work stress [33].

Fig. 3 Association of shift work and the risk of all-cause dementia stratified by potential risk factors. Abbreviations: HR, hazard ratio; CI, confidence interval. Results were adjusted for age at baseline, sex, ethnicity, education, and socioeconomic status. Horizontal lines indicate the ranges of 95% CIs and the vertical dashed lines indicate a hazard ratio of 1.0.
A systematic review by Leso et al. in 2021 identified several studies investigating the association between shift work and dementia but failed to draw definitive conclusions, because of the limited number of available studies, differing definitions of work schedules, and possible co-exposure to other occupational risk factors [22]. As noted above, the Danish Nurse Cohort Study by Jørgensen et al. showed that persistent night shift work may increase the risk of dementia [23], whereas the cohort study by Nabe-Nielsen et al. found no significant association between shift work or long working hours and the risk of dementia [24]. Other previous studies have likewise shown mixed results [47][48][49].
Since most previous studies recruited only gender- or occupation-specific participants, our study of a more general population from the UK Biobank provides strong evidence that shift work at baseline is associated with an increased risk of dementia, and extensive sensitivity analyses assessing the robustness of our findings all yielded similar results. It should be emphasized that participants from the UK Biobank were not nationally representative, owing to the low response rate (~ 5.5%) and the fact that the participants who were in employment at baseline and included in our analysis tended to be healthier than those who had retired earlier, which might introduce healthy volunteer selection bias. However, given that the UK Biobank has a very large sample size and a median follow-up time of over 10 years, it still has the capacity to detect and identify risk factors [50], and our findings have important public health implications in terms of the need for effective measures to reduce the risk of dementia and to improve the quality of life and health of shift workers, such as increasing the minimum hourly wage and reducing the frequency or duration of shift work.
Another important finding was that different genetic predispositions to dementia did not significantly alter the association between shift work and dementia, which suggests that shift workers may benefit from reducing the duration or frequency of shift work regardless of their genetic predisposition to dementia, if the associations are causal. Furthermore, the subgroup analysis indicated that the impact of shift work on dementia was more pronounced in those aged 60 years and older. Considering that the age of onset of dementia is usually above 80 years [51], while the mean age at baseline of participants in our study was only 52 years and the median follow-up was 12.4 years, the association between shift work and dementia could have been underestimated. Further studies enrolling more elderly volunteers are needed to verify our findings.
Contrary to our expectations, among shift workers, this study did not find a statistically significant difference in the risk of dementia between night shift workers and non-night shift workers. This result appears to be contrary to the idea that night shift work leads to more severe circadian disturbance and sleep impairment, resulting in an increased risk of dementia. In fact, in our study, the proportion of participants with a sleep duration of less than 6 h was higher in night shift workers than in non-night shift workers, and our results (Additional file 1: Table S9) showed that a sleep duration of less than 6 h was associated with an increased risk of dementia, in line with previous studies [52]. Possibly because of the small number of events in subgroups, we did not have sufficient statistical power to detect a difference. Hence, further studies with larger samples of shift workers are warranted to address whether there is a difference in the risk of dementia between night shift workers and non-night shift workers.
Overall, our study provides novel evidence based on a general population that shift work at baseline may lead to an increased risk of dementia regardless of genetic predisposition to dementia and suggests that the occupational management of reducing the duration or frequency of shift work may be crucial for long-term shift workers.
Strengths and limitations
Our study has several major strengths. Firstly, the large sample size and the wealth of information on lifestyle and other covariates of UK Biobank participants enabled comprehensive sensitivity and subgroup analyses. Secondly, to the best of our knowledge, this is the first study to examine the interaction between shift work and genetic predisposition to dementia. There were also several limitations in our study. Firstly, the study was a retrospective analysis of data from the UK Biobank, so the confounders included in the multivariable Cox model were limited to the variables available in the database, and unknown or unmeasured factors might still confound the association between shift work and dementia. Besides, some covariates had substantial missing data, resulting in a loss of sample size. Secondly, although we believe that the UK Biobank has sufficient capacity to identify risk factors, its low response rate and healthy volunteer bias may still contribute to an underestimation of the impact of shift work on dementia, which needs to be further assessed in future studies. Thirdly, dementia might be misdiagnosed or underdiagnosed, and participants with cognitive impairment are usually more likely to be lost to follow-up, so some dementia cases might not have been captured by EHRs. Furthermore, the work schedule information was assessed only at baseline. Participants' work status might change over time during follow-up, and people tend to stop doing shift or night shift work at an older age, which might bias our results toward the null hypothesis, resulting in an underestimation of the effect size [53]. Future prospective studies measuring the longitudinal change of employment status are needed to assess the association of lifetime exposure to shift work with the risk of dementia. Finally, participants recruited by the UK Biobank were mostly white British, which may limit the extrapolation of our findings to other ethnicities, such as Asian and African populations.
Conclusions
Shift work at baseline was associated with a higher incidence of all-cause dementia compared with non-shift work. Among shift workers, there was no significant association between night shift work and the risk of dementia. The increased incidence of dementia in shift workers did not differ between participants in different genetic risk strata for dementia. Our findings have public health implications for the primary prevention of dementia, but future prospective studies are still warranted to determine whether reducing the frequency or duration of shift work would contribute to lowering the risk of incident dementia and to clarify the underlying mechanisms.
Application of cook balloon during aorta replacement in a pregnant Marfan-syndrome patient: a case report
Background Aortic dissection is a rare and emergent condition. Aortic dissection during pregnancy is poorly understood but is highly lethal to both mother and infant. Earlier reports show that clinicians conducted hysterectomies during cesarean section to avoid anticoagulant-induced uterine bleeding during the subsequent aortic surgery. Case presentation A woman (38, gravida 1, para 0) in the 37th gestational week suffered an acute, severe, sharp pain in the chest and back. She was diagnosed with Stanford type A aortic dissection and suspected of having Marfan syndrome. An emergency cesarean section was performed immediately to deliver the baby. Because the patient was on anticoagulants during aortic replacement, a Cook balloon was inserted into the uterus to prevent postpartum hemorrhage. This helped to keep the uterus intact. Family genetic testing showed that the patient carried an FBN1 mutation inherited from her mother, and the newborn also carried the mutation. Hence, the patient was confirmed to have Marfan syndrome. Conclusion It is important that clinicians pay attention to the possibility of aortic dissection in a pregnant woman with chest, abdominal, or back pain. In this case, we employed a Cook balloon during cesarean section to avoid anticoagulant-induced uterine bleeding during the subsequent aortic surgery.
Background
Aortic dissection is a relatively rare condition, with a general occurrence rate of 2.9/100,000/yr [1]. It is, therefore, crucial to diagnose the condition correctly and rapidly and to administer immediate and appropriate treatment. Aortic dissection during pregnancy is even rarer than in non-pregnant patients. Pregnancy appears to increase the risk of aortic dissection in women with Marfan syndrome on account of blood-vessel alterations, particularly in the third trimester [2]. In this report, we describe the successful treatment of Stanford type A aortic dissection in a pregnant patient with Marfan syndrome.
Case presentation
The patient was a woman (38, gravida 1, para 0) in the 37th gestational week. She suffered an acute, severe, sharp pain in the chest and back and was admitted to our hospital in January 2018. She had no history of vascular disease or hypertension and was uncertain of her family history. Her blood pressure measured 107/38 mmHg. Transthoracic two-dimensional echocardiography revealed a severely dilated aortic root (Ø 58 mm) and severe aortic valve insufficiency. Aortic computed tomography angiography (CTA) led to the diagnosis of Stanford type A aortic dissection (Fig. 1). The patient was slender, with a height of 173 cm, and had spider-like fingers and toes (positive "wrist sign"). We suspected her of having Marfan syndrome. The fetal ultrasound was unremarkable, showing a normally developed fetus. Three hours post-admission, the patient underwent an emergency cesarean section under general anesthesia. Blood pressure was carefully monitored during the surgery. A 2950-g infant was born (Apgar score: 9 at 1 min, 10 at 5 min). After the baby was born, the patient was administered 20 U of oxytocin intravenously. To prevent postpartum bleeding during aorta replacement, potentially caused by the anticoagulants used in extracorporeal circulation, we inserted a Cook balloon containing 400 ml of normal saline into the uterus during the cesarean section. The aortic root, ascending aorta, and aortic arch were replaced under cardiopulmonary bypass and hypothermia (25°C). Vaginal bleeding during the six-hour surgery was modest (< 150 ml). The Cook balloon was removed 24 h post-surgery, leaving an intact uterus. The patient was discharged after 20 days. To make a definite diagnosis, we performed genetic testing of the patient's family. The patient carried an FBN1 mutation at chr15:48905206; the mutation was NM_000138.4:c.247+1G>T. We also determined that the patient's Z score of aortic root diameter was 7.5. These findings met the Ghent-2 criteria, and we confirmed that she had Marfan syndrome. Family genetic testing showed that the mutation was inherited from the patient's mother and that the newborn also carried the mutation. We explained the result of genetic testing to the patient, including the autosomal dominant mode of inheritance. She was advised to use contraception and restrict activity. We also made her aware of the impact Marfan syndrome could have on her newborn's health. Both she and her baby require close follow-up. The patient was followed up by the cardiologist three times after discharge.
Discussion and conclusion
Marfan syndrome (MFS) is an autosomal dominant inherited disorder of connective tissue that leads to damage of the cardiovascular, skeletal, and ocular systems.
The most serious MFS-induced complication is aortic dissection. The overall risk of aortic dissection in a pregnant woman with MFS is about 3% [3], and dissection most often occurs either in the last trimester of pregnancy or in the early postpartum period. A recent study comprising 12 UK centers over the past 20 years showed that the rate of aortic dissection in pregnant women with MFS was 1.9% (one type A and four type B) and that there were no deaths; it also reported that preconception counseling rates were low [4]. The patient in our case did not undergo preconception counseling. If the patient had been diagnosed with MFS before pregnancy, a comprehensive evaluation would have been conducted to determine whether she could become pregnant. According to the 2018 ESC guidelines, pregnancy should be avoided in Marfan patients with an aortic root diameter > 45 mm because of the increased risk of dissection [5]. Earlier studies show that aortic dissection during pregnancy is a life-threatening condition with fetal mortality of around 20-30% [6]. Hence, obstetric specialists must remain vigilant for the known clinical presentations of MFS, including elongated extremities, wrist and thumb signs, pectus deformity, characteristic facial features, and scoliosis, to facilitate early diagnosis and prevent detrimental outcomes.
To avoid anticoagulant-induced uterine bleeding during aortic surgery, clinicians have previously conducted hysterectomies as a preventive measure [7,8]. In the case we present, 20 U of oxytocin was administered intravenously after the baby was born. We did not choose an intrauterine suture ("B-Lynch") because the B-Lynch wound may bleed during aortic surgery. To minimize the wound, we used a Cook balloon to prevent uterine bleeding.
In conclusion, clinicians should pay attention to the possibility of aortic dissection in a pregnant woman with chest, abdominal, or back pain, because correct and rapid diagnosis and immediate treatment are extremely important. In this case report, we used a Cook balloon during cesarean section to avoid anticoagulant-induced uterine bleeding during the subsequent aortic surgery in a pregnant patient with Marfan syndrome.

Fig. 1 Aortic computed tomography angiograms. a Aortic dissection from the ascending aorta to the abdominal aorta, with the fetus in the uterine cavity. b Acute aortic dissection with a false lumen. c 3-D reconstruction of the aorta, showing dilated aortic sinuses of Valsalva.
Quantification of Massive Seasonal Aggregations of Blacktip Sharks (Carcharhinus limbatus) in Southeast Florida
Southeast Florida witnesses an enormous seasonal influx of upper trophic level marine predators each year as massive aggregations of migrating blacktip sharks (Carcharhinus limbatus) overwinter in nearshore waters. The narrow shelf and close proximity of the Gulf Stream current to the Palm Beach County shoreline drive tens of thousands of sharks to the shallow, coastal environment. This natural bottleneck provides a unique opportunity to estimate relative abundance. Over a four-year period from 2011–2014, an aerial survey was flown approximately biweekly along the length of Palm Beach County. A high definition video camera and digital still camera mounted out of the airplane window provided a continuous record of the belt transect which extended 200 m seaward from the shoreline between Boca Raton Inlet and Jupiter Inlet. The number of sharks within the survey transect was directly counted from the video. Shark abundance peaked in the winter (January-March) with a maximum in 2011 of 12,128 individuals counted within the 75.6-km belt transect. This resulted in a maximum density of 803.2 sharks km⁻². By the late spring (April-May), shark abundance had sharply declined to 1.1% of its peak, where it remained until spiking again in January of the following year. Shark abundance was inversely correlated with water temperature and large numbers of sharks were found only when water temperatures were less than 25°C. Shark abundance was also correlated with day of the year but not with barometric pressure. Although shark abundance was not correlated with photoperiod, the departure of the sharks from southeast Florida occurred around the vernal equinox. The shark migration along the United States eastern seaboard corresponds spatially and temporally with the spawning aggregations of various baitfish species. These baseline abundance data can be compared to future studies to determine if shark population size is changing and if sharks are restricting their southward migration as global water temperatures increase.
Introduction

Because factors such as water temperature, photoperiod, and prey availability often co-vary, it is difficult to ascertain which factor is primarily responsible for driving the migration.
Although this annual migration is well known, there remains a dearth of empirical data on blacktip shark abundance as no rigorous studies of this phenomenon have been conducted. Southeast Florida is the presumed overwintering grounds for the blacktips [2], but this presumption is only based on anecdotal evidence, and the species' seasonal abundance and associated environmental parameters have not been quantified. Therefore, the goal of this study was to quantify shark abundance on a seasonal basis and correlate the presence of these massive aggregations with various environmental factors, including water temperature, the presumed driving factor for their movement. To accomplish this we employed an aerial survey technique which capitalized upon the clear water and close proximity of the sharks to the shore.
Materials and Methods
Aerial survey flights were conducted approximately biweekly from 04 February 2011 through 17 April 2013, and from 04 January through 01 April 2014. The survey transect extended from Boca Raton Inlet (26°20' 09" N, -80°4' 16" W) northward along the shoreline to Jupiter Inlet (26°56' 38" N, -80°4' 16" W), a distance of 75.6 km (Fig 2). Flights were flown at an altitude of approximately 150 m and an airspeed of approximately 150 km h⁻¹. This combination of altitude and airspeed provided sufficient resolution to easily distinguish individual animals. Survey flights were conducted only on days in which the wind speed and direction produced relatively calm sea surface conditions, which facilitated viewing into the water. Flights were flown in the mornings between 0800-1100 local time, which provided optimal lighting with minimal surface glare. During each flight, water clarity was ranked from 1 (excellent) to 5 (poor). To provide consistency in the evaluation of water clarity, the authors flew most of the flights together and came to a consensus on the clarity rank for each flight. At least one of the authors was present on every flight. Water temperature and barometric pressure data were acquired for each day at 1000 local time, approximately in the middle of the survey flight time. The data were acquired from the National Data Buoy Center for the Lake Worth Pier Station (http://www.ndbc.noaa.gov/station_history.php?station=lkwf1), which is located at approximately the midpoint of the survey transect. Photoperiod data for Lake Worth, Florida were collected from the United States Naval Observatory (aa.usno.navy.mil/data/docs/RS_OneYear.php).
To quantify shark abundance, a high definition (1080p) video camera with GPS capabilities (Sony HDR-CX160) was mounted on a custom fabricated bracket out of the open pilot's side window of a Cessna 172 aircraft (Fig 3). A cable from the cockpit audio panel to the camera enabled recording of all cockpit communications. The camera was outfitted with a circular polarizing filter to reduce sea surface glare, and was positioned with the lens aimed straight downward. The plane flew northward parallel to the shore and at a distance of approximately 200 m offshore. This enabled the video camera to record a belt transect with a lateral field of view from the shoreline to approximately 200 m offshore during flight. The field of view was determined by georeferencing to the Lake Worth pier, which extends 265 m from the shoreline. The total area surveyed along the 75.6 km transect was approximately 15.1 km², and sharks outside of the belt transect were not counted.
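As a quick check of the survey geometry, the belt-transect area and the densities reported later follow directly from the transect length and width; the snippet below simply reproduces that arithmetic.

```python
transect_length_km = 75.6   # Boca Raton Inlet to Jupiter Inlet
transect_width_km = 0.2     # shoreline to ~200 m offshore

area_km2 = transect_length_km * transect_width_km
print(f"belt transect area ~ {area_km2:.2f} km^2")          # ~15.1 km^2

peak_count = 12128          # maximum single-survey count (February 2011)
density = peak_count / area_km2
# The paper reports 803.2 sharks per km^2 using the rounded 15.1 km^2 area.
print(f"peak density ~ {density:.1f} sharks per km^2")
```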
Starting on June 22 2011, a 35mm digital SLR camera (Nikon D3100) was mounted on a custom fabricated bracket immediately behind the video camera and also positioned looking straight downward. The SLR camera was outfitted with a circular polarizing filter and had a GPS unit (Nikon GP-1A) attached which recorded location information for each frame. An intervalometer was programmed to record a single still frame every two seconds, which provided overlap between successive frames. These still photos served as a backup to the video footage and provided a higher resolution image (14.2 megapixels) to facilitate counting of dense aggregations, if necessary. The lateral field of view for the two cameras was nearly identical.
Video footage and still frames were downloaded in the laboratory using iMovie (v8.0.6) and iPhoto (v8.1.2) software, respectively. The video footage was carefully reviewed and correlated with comments on the audio track that noted the presence of sharks. For footage in which few sharks were visible, the number of sharks was tallied directly from the video. When large numbers of sharks were present, individual frames were extracted from the video, imported into ImageJ (v1.43), and two independent reviewers manually counted the number of sharks in each frame. For large aggregations that spanned across successive frames, care was taken to avoid overlapping the field of view of subsequent frames and consequently over-counting the number of sharks. The number of sharks was collated for the entire survey flight for both reviewers and the mean number is reported. All sharks were assumed to be C. limbatus, unless obviously another species.
For dates when water clarity was ranked 5 (poor), the shark abundance data were considered unreliable and were excluded from analysis. To test for seasonal differences in shark abundance, a Kruskal-Wallis test was applied. Because of the high degree of temporal repeatability, shark abundance data were pooled by quarter (January-March; April-June; July-September; October-December) over all four years and abundance was compared among quarters. To determine which environmental variables correlated with shark abundance, a Generalized Linear Model was applied with day of the year, water temperature (°C), photoperiod (minutes), and barometric pressure (hPa) as the predictor variables. Because of the time dependent nature of the model, data were analyzed only for the period of continuous sampling (February 2011-April 2013) and did not include the shark abundance data from January-April 2014. The models were evaluated for the lowest AIC and highest adjusted coefficient of determination (R²). A variance inflation factor (VIF) was calculated to determine multicollinearity for each predictor in the model. A Spearman's Rank Correlation was subsequently applied to determine the relationships between shark abundance and the four predictor variables (day of the year, water temperature, photoperiod, and barometric pressure).
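The analysis pipeline described above (Kruskal-Wallis test across quarters, a Gaussian GLM with VIF screening, and Spearman rank correlations) can be sketched in Python with SciPy and statsmodels; the file and column names below are hypothetical, and the original analysis may have used different software.

```python
import pandas as pd
from scipy.stats import kruskal, spearmanr
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("shark_surveys.csv")        # hypothetical file: one row per flight
df = df[df["water_clarity"] < 5]             # drop poor-visibility flights

# Seasonal differences: Kruskal-Wallis across the four quarters.
groups = [g["sharks"].values for _, g in df.groupby("quarter")]
print(kruskal(*groups))

# Generalized Linear Model (Gaussian by default) predicting shark abundance.
model = smf.glm("sharks ~ water_temp + day_of_year", data=df).fit()
print(model.summary())

# Variance inflation factors for the retained predictors.
X = df[["water_temp", "day_of_year"]].assign(const=1.0)
for i, name in enumerate(X.columns[:-1]):
    print(name, variance_inflation_factor(X.values, i))

# Spearman rank correlations between abundance and each predictor.
for var in ["day_of_year", "water_temp", "photoperiod", "pressure"]:
    rho, p = spearmanr(df["sharks"], df[var])
    print(var, round(rho, 3), round(p, 4))
```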
Results
A total of 58 survey flights were conducted with a total of 104,255 sharks counted within the belt transect (S1 Table). Sharks were found singly, in small groups, or large aggregations up to thousands of individuals. When found in small groups the sharks were typically swimming in a polarized school. However, in large aggregations the sharks did not necessarily swim in a polarized school, and individuals were often oriented in different directions (Fig 1). Although the sharks were not directly observed feeding, they were sometimes seen in close proximity to schools of baitfish. Sharks were also seen to jump out of the water, but it was difficult to determine whether they exhibited any other social behaviors.
The number of sharks counted per survey varied with season (Fig 4). Shark abundance was greatest in the winter months (January-March) with a peak winter seasonal abundance within the belt transect of 9925.0 individuals averaged over all years. Shark abundance declined precipitously in the spring and very few sharks were seen in the surveys during the summer and fall months (May-December). Summer and fall shark abundance averaged 111.7 individuals over all years, approximately 1.1% of the winter peak. Mean shark abundance differed among quarters (Kruskal-Wallis, χ² = 24.640, df = 3, p<0.0001). Post-hoc analysis revealed that mean shark abundance for the first quarter (January-March) was significantly greater than for the other three quarters (April-June, χ² = 10.447, p = 0.001; July-September, χ² = 13.831, p<0.0001; October-December, χ² = 11.772, p = 0.001). Shark abundance in the second quarter (April-June) was also significantly greater than in the fourth quarter (October-December, χ² = 5.010, p = 0.025).
Shark abundance within the transect area reached a peak of 12,128 individuals in February 2011. This resulted in a peak density of approximately 803.2 sharks km⁻². In contrast, lowest shark density occurred in the third and fourth quarters (July-September, October-December) yielding an average density of approximately 4.1 sharks km⁻². Sharks were distributed from just a few meters from the shore throughout the entire field of view (S1 Movie) and could also be seen on the seaward side of the plane, although those individuals were outside the field of view of the cameras and were not counted. The maximum depth of the water throughout the survey transect was <4 m and it was possible to visualize details on the seafloor throughout the entire transect area. This provided confidence that all sharks within the belt transect were visible, and not obscured by water depth. Sharks were found throughout the entire survey transect but generally in greater numbers from Boynton Inlet to Jupiter Inlet. All sharks were approximately the same size and blacktips sampled from the large winter aggregations averaged 173.4 cm total length (n = 35) (Kajiura, unpublished data).
The Generalized Linear Model predicting shark abundance by four variables (day of the year, water temperature, photoperiod, barometric pressure) was significant (F = 17.88, df = 2, p<0.0001). Upon comparing AIC criteria and adjusted R², the best model was achieved with the parameters water temperature and day of the year (adjusted R² = 0.4399, AIC = 692.3353). The individual parameter estimates were significant for both water temperature (p = 0.0007) and day of the year (p = 0.0332). The variance inflation factor (VIF) was equal to 1.341, which indicates that multicollinearity among temperature and day of the year is likely not a confounding factor.
Shark abundance was inversely correlated with both water temperature and day of the year (Spearman's rank correlation, water temperature: ρ = -0.581, p<0.0001; day of the year: ρ = -0.594, p<0.0001) (Fig 4). Sharks were present in greatest numbers when water temperature was less than 25°C (Fig 5). In contrast, photoperiod did not show a significant correlation with shark abundance (Spearman's rank correlation, ρ = -0.137, p = 0.320). However, the seasonal decline in shark abundance corresponded closely with the vernal equinox.
Discussion
This study provides the first quantitative assessment of blacktip shark abundance in their winter aggregation site off Palm Beach County, Florida. Although their migratory movements have been previously reconstructed from catch records [2,16], this study employs high temporal resolution sampling, along a set belt transect, over multiple years, to quantify shark abundance at a single point along their migratory route. It is only possible to assess the number of individuals involved because the blacktip sharks aggregate in shallow water close to shore where they can be easily seen and counted.
Spatial distribution drivers
The spatial distribution of blacktip sharks very close to shore in southeast Florida is likely attributable to both biotic and abiotic factors. Blacktip sharks are typically associated with continental and insular shelves [1]. In the northern part of their range, from the Carolinas to central Florida, the shelf extends far from shore and the sharks have the potential to be widely distributed seaward (Fig 2). The shelf narrows dramatically in Palm Beach County, Florida, and southward migrating sharks would necessarily be funneled in close to shore. The Gulf Stream current originates at the southern tip of Florida and flows northward along the United States eastern seaboard, closely following the continental shelf, from the Florida Straits to Cape Hatteras before being deflected eastward out to sea [26]. Off southeast Florida, the Gulf Stream averages about 80 km in width, extends to a depth of 800 m, and has a maximum surface velocity of approximately 2.5 m s⁻¹ [27]. Southward migrating sharks could minimize the energetic cost of their migration by remaining close to the shore and away from this large, strong, northward flowing current. The bathymetry and hydrology jointly contribute to the sharks being driven to the nearshore environment.
In addition to abiotic factors, the baitfish upon which the sharks are presumed to feed are largely distributed close to shore [16]. Although we did not directly observe predation we did see sharks in close proximity to large schools of baitfish. Similarly, the blacktip is prey to larger predatory sharks such as the tiger shark (Galeocerdo cuvier) and great hammerhead (Sphyrna mokarran) [2]. The presence of these larger predators might exert pressure for the blacktips to refuge in the shallow, nearshore environment, as documented for various other shark species [28,29]. The blacktips might also be utilizing the shallow nearshore waters for thermoregulation [19]. Because their seasonal movement is strongly correlated with temperature, it is plausible that these sharks are sensitive to small temperature changes. This could result in microhabitat selection for their preferred temperature range in warmer nearshore waters compared to the cooler water found offshore below the thermocline. The warmer nearshore water might augment metabolic and physiological functions including digestion and somatic growth [30][31][32]. Therefore, the large aggregations might form as a function of feeding, predator avoidance, thermoregulation, or any combination.
Temporal movement pattern
The blacktip sharks exhibit a defined movement pattern along the United States eastern seaboard, as previously described [2,16]. Along the east coast of central Florida (Melbourne Beach and Daytona Beach) there are two seasonal peaks in blacktip shark abundance: one in the spring and one in the fall [16,18]. The two peaks suggest that the sharks are transiting through those areas as part of their northward and southward migration. In contrast, the single annual peak in abundance off Palm Beach County suggests that southeast Florida is likely the southernmost terminus of their migration.
The blacktip shark movement pattern is closely correlated with water temperature and prey abundance. Various baitfish species included in the blacktip shark diet exhibit temperature dependent migration along the United States eastern seaboard [33][34][35][36]. Water temperature thus provides a good proxy for baitfish/prey availability. These baitfish form large spawning aggregations in coastal river mouths at successively lower latitudes along the United States eastern seaboard from October to January and at successively higher latitudes from February to May [33]. The timing of their presence corresponds with the southward and northward migration of the sharks and suggests that the sharks are following their food. So, whereas the shark migration correlates with temperature, prey abundance might be the causal link between the two factors.
The baitfish schools become largely depleted by midway through the blacktip sharks' three month winter residency. The sharks almost certainly supplement their baitfish diet by foraging on the local fish population. The annual influx of a large number of upper trophic level predators likely creates an acute impact on the resident fish population, which could result in cascading effects through multiple trophic levels [37][38][39].
The blacktip shark abundance did not correlate with photoperiod. However, the onset of their departure from their overwintering habitat corresponds closely with the vernal equinox. Day length increases the most rapidly at the equinox and sharks might be using this rapid change in photoperiod as a cue to begin their northward migration. Photoperiod provides a more consistent temporal cue than changes in water temperature, which can vary from year to year. Photoperiod and water temperature have been correlated with shark movements [20][21][22] and the blacktips likely rely on both environmental factors to stimulate the onset of their northward migration.
Blacktip sharks are found year round in southeast Florida, albeit at much lower numbers outside of the peak winter season (Fig 4). They are reported throughout the year in the Florida Keys with a seasonal peak in abundance in late October to early November [16]. Their presence year round indicates that the summer warm water temperature does not impose a physiological limit to their distribution. These non-migratory individuals might be non-mating females in a resting year [16], or a subpopulation with a broader thermal tolerance. They might also represent individuals who shelter in cooler, deeper water and make only occasional forays to the nearshore environment where they were detected during the aerial survey flights.
Aerial surveys
Aerial surveys are often used to quantify abundance of air breathing marine organisms, such as seabirds [40], turtles [41][42][43], and marine mammals [44][45][46], which necessarily come to the surface where they can be easily seen. Aerial surveys for sharks are typically conducted only for whale sharks [47][48][49][50][51][52] and basking sharks [53,54]. The combination of their large size and surface association facilitates their visualization from the air. Smaller sharks have also been spotted in aerial surveys, either specifically targeting them [55,56], or incidentally during marine mammal surveys [57]. Occasionally, massive schools of elasmobranchs have been documented from aerial photographs [58,59] and in some instances, aggregations have been correlated with the presence of prey [60,61].
Aerial surveys provide numerous advantages over conventional fishing surveys. For marine organisms, aerial surveys are non-disruptive; the animals are unaware of the survey vehicle and hence behaviors remain unaffected. In addition, aerial surveys are also non-selective and allow all animals in the area to be counted, including non-target species. Aerial surveys are efficient and cost effective permitting a large area of coastline to be sampled quickly. Finally, aerial surveys can provide abundance and density data that, along with other data sources (e.g. tagging and tracking studies), can help to identify critical habitats.
Although aerial surveys provide a number of attractive features, they are possible only under certain environmental conditions. For example, poor water clarity reduces the probability of detecting the target species. This is less of a problem with large organisms, such as whale sharks and basking sharks, whose size makes them relatively easy to see. The background against which the study organism is viewed also affects detectability. It is easier to spot a shark against a uniform background than against a patchwork mosaic such as a reef. Calm conditions with no surface waves contribute to water clarity and provide minimal distortion which facilitates identification of submerged organisms. Therefore, the ideal conditions for visualizing marine organisms require calm, clear water with a uniform background which contrasts with the dorsal coloration of the animal.
Even under ideal conditions, the morphological similarity of various shark species makes it impossible to distinguish species from the air, with a few exceptions. During survey flights, great hammerhead sharks (Sphyrna mokarran) could be identified by their head morphology and tiger sharks (Galeocerdo cuvier) could be identified by their blunt snout and much larger size compared to the blacktips. Anecdotally, the sharks in these aggregations are reported as blacktip or spinner sharks (Carcharhinus brevipinna). These two species appear very similar and close examination is necessary to distinguish them [1,62]. Although spinner sharks are present in southeast Florida, the adults are found in deeper offshore waters and not immediately adjacent to the beach, as seen with the blacktips [2]. Beachgoers often see sharks jumping and spinning and conclude that the aggregations are composed of spinner sharks. However, the jumping and spinning behavior is common to both blacktip and spinner sharks [1,16]. Longline fishing surveys conducted amongst the aggregating sharks confirm that the aggregations are composed almost exclusively of blacktip sharks (Kajiura unpublished).
The peak abundance was largely similar in 2011, 2012, and 2014 but with only about half the peak number of sharks counted in 2013 (Fig 4). The much lower number of sharks counted in 2013 is likely attributable to the reduced visibility that year due to beach renourishment projects along the survey transect. In that process, sand is pumped from offshore onto the beach to increase the width of the beach. This creates expansive, high turbidity conditions adjacent to the shore that make it impossible to view anything under the surface of the water. Sharks might have been present and not counted, or might have avoided the turbid conditions by moving farther offshore and outside the field of view of the survey transect.
Shark aggregations were often seen on the seaward side of the plane as well, but those sharks were outside the field of view of the survey transect and thus were not counted. As a result, the number of sharks directly counted in the survey provides an index of relative shark abundance and is not a population census. The sharks seen on the seaward side of the plane were still in fairly shallow water but sharks occurring at greater depths would be undetectable to an aerial survey. Therefore, the number of sharks directly counted is an underestimate of the total population and might represent only the tip of the iceberg of a much larger aggregation.
Conservation
Shark populations continue to decline worldwide [63], including in the western Atlantic [64]. The abundance of upper trophic level predators is a critical metric of the health of an ecosystem [65] so the baseline data on shark abundance collected now can serve as a valuable benchmark for future studies [66]. The repeatability of the abundance estimates over several years suggests that monitoring the aggregation could provide an indicator of population size and perhaps management effectiveness. This is especially important given the variety of factors that could impact shark populations, including overfishing [63,67], and ocean acidification, deoxygenation, and warming [68].
The large, densely packed blacktip aggregations present a potential management concern for these vulnerable K-selected species. The blacktip aggregation is highly predictable in space and time, which makes it especially vulnerable to exploitation. In Florida, spotter aircraft are used to direct gillnet fishermen to large aggregations [15]. Fishing regulations currently restrict harvest to one shark per person per day, or two sharks per vessel per day, in Florida state waters. These regulations protect the blacktip aggregations from exploitation within state waters, less than three nautical miles from shore [14]. In federal waters (>3 nautical miles offshore) blacktip sharks can be commercially harvested at a rate of 45 individuals per vessel per trip, with no limit to the number of trips per day [69]. Farther north of Palm Beach County the shelf widens and the sharks have the potential to extend into federal waters, although the aggregations would likely be less condensed.
Marine organisms have been documented to occur at increasingly higher latitudes in response to warming oceans [70,71]. Because the blacktip shark migration is closely correlated with water temperature, with very few sharks found when water temperatures exceed 25°C, warming oceans may shift the spatial range of future migrations to higher latitudes [72,73]. As a result, southeast Florida may no longer represent the low latitude terminus of their migration. The resultant loss of this large annual influx of upper trophic level predators has the potential to create significant ecological ramifications, including cascading effects through multiple trophic levels [37][38][39].
To our knowledge, this blacktip shark migration is the single most massive seasonal shark migration seen in the western Atlantic. The compelling visual imagery of thousands of sharks immediately offshore captivates the public's attention. It is possible to use this engagement to inform the public about the impact of overfishing, ocean acidification, and global ocean warming on local ecosystems and to promote conservation for these important marine predators.
Supporting Information
S1 Movie. Sample video clip from an aerial survey flight. This video was recorded south of the Palm Beach inlet. Thousands of sharks can be seen close to shore in this relatively short clip. (MP4)
S1 Table. Environmental parameters and shark abundance. Day of the year, water temperature, barometric pressure, photoperiod, number of sharks counted, and water clarity during survey flights for the study period. (XLSX)
Acknowledgments
M. Castro and S. Creager assisted with shark counts. S. Hoffmann assisted with statistical analyses. M. Royer fabricated the camera mounts.
The Control of Over-saturation at the Critical Intersection Based on the Improved SEFP Method
We proposed the SEFP (Same Entrance Full-Pass) method in our previous research in order to avoid congestion at key spots in the regional road network. The SEFP method makes all vehicles traveling to the critical intersection pass the stop-line without stopping during each release phase, greatly reducing vehicle delays and stops. However, the green time is underused in this method, and the vehicle throughput at the critical intersection can be further increased. On this basis, we propose the improved SEFP method, which formulates signal offset control schemes at the upstream intersections by means of traffic wave theory, guaranteeing that all vehicles leave the critical intersection at the saturated flow speed. In the meantime, closure control is applied at upstream intersections in a timely manner according to the queue length on the critical intersection lanes, effectively avoiding spill-outs. This new method can improve the traffic throughput of the critical intersection while decreasing vehicle delays and stops, effectively preventing the critical intersection from becoming over-saturated. The simulation results of an actual critical intersection in Mianyang city demonstrate the validity and feasibility of the improved SEFP method.
Introduction
With the growing number of vehicles, traffic congestion has attracted much attention. The critical intersection, widely recognized as the key node of an urban traffic network, often becomes the bottleneck in traffic control [1]. Accurate signal control policies adopted at intersections can effectively alleviate traffic pressure, reduce or even eliminate the negative impact of the bottleneck, and improve road capacity and level of service [2]. Consequently, many scholars seeking to relieve traffic congestion have dedicated themselves to research on signal optimization control plans and have achieved encouraging results.
In the documented literature, Gazis [3] is the pioneer of over-saturated traffic research; he employed the semi-graph method to obtain the optimal solution of the signal control scheme.
Section III presents the improved SEFP method and the derivations of the related models. The experiment results obtained from the traffic simulation software VISSIM and the corresponding analysis are presented in Section IV, and the conclusions are drawn in Section V.
The Brief Introduction of the SEFP Method
The SEFP method is primarily aimed at letting all vehicles flowing to the critical intersection in each cycle leave it directly, avoiding unnecessary vehicle stops and delays [25]. Figure 1 displays the signal phases designed in the SEFP method. Since the right-turn traffic flows are not controlled by the signal and are released all the time, they are not marked in the signal phase diagram. By applying the SEFP release mode to the critical intersection, vehicles coming from the different entrances are discharged separately and alternately, so conflicting vehicle streams and unfair queuing can be avoided effectively. In addition, vehicles from the upstream intersections need to be discharged and stopped on demand, so we also designed a new signal phase mode for the upstream intersections, as shown in Figure 2. If there are N vehicles in a moving motorcade, the speed of this motorcade is v, and the average space headway of vehicles in it is U1, then the release time (the green time) required for this motorcade to pass the stop-line is T1 = N U1 / v (1). In general, the average distance between normally running vehicles in a moving motorcade, represented by U1, is longer than that in a standing queue. If the average distance between vehicles in a standing queue is U2, then U1 > U2. The contrast between the space headways in a moving motorcade and a standing queue is shown in Figure 3(a) and Figure 3(b) (VISSIM screenshots). As shown in Figure 3, the average space headway of vehicles in a moving motorcade is larger than that in a standing queue. When a moving motorcade passes the stop-line without stopping (as realized in the SEFP method), the time interval between two adjacent vehicles is longer than the saturation time headway. As a result, these vehicles cannot leave the intersection at the saturation flow rate, and the green time is underused. Under over-saturated traffic conditions, maximizing the vehicle throughput of the intersection is the main objective of the traffic control policy, so we focus on improving the utilization of the green time and accordingly propose the improved SEFP method, which takes full advantage of the green time, improves the traffic capacity of the critical intersection, and at the same time prevents traffic over-saturation.
The Release Method of "Line Up First, Then Leave"
The primary objective of the improved SEFP method is to make vehicles leave the critical intersection at the saturation flow rate and take full advantage of the green time. Only when vehicles are queued continuously behind the stop-line during the green time can they leave at the saturation flow rate. Therefore, the vehicles entering the entrance lanes of the critical intersection should first queue up, shortening the distance between vehicles, and then leave. As a result, more vehicles can leave within the same time. In this way, we make the most of the green time in each release phase, raise the total traffic volume of the critical intersection, and effectively prevent over-saturation.
If an in-coming motorcade (consisting of N vehicles) arrives at the stop-line during the red interval, the vehicles in this motorcade must slow down, stop, and line up in a queue. When the traffic light turns green, the first few vehicles in the standing queue start to move with a relatively fixed start-up delay Ts (the total delay of the first few vehicles) and then pass the stop-line quickly. The remaining vehicles pass the stop-line at the saturation flow rate S (veh/s). With this "line up first, then leave" release method, the release time for the queuing vehicles to pass the stop-line is T2 = Ts + N/S (2). As mentioned in Section II, a moving motorcade consisting of N vehicles needs T1 seconds to leave completely. If the release time of the "line up first, then leave" method is to be shorter than that of the SEFP method, inequality (3) should be satisfied: T1 > T2 (3). Inequality (3) implies that the "line up first, then leave" release method can make more vehicles leave within the same period. During rush hours, the essential task is to keep more vehicles "flowing" so as to maximize the traffic volume of the critical intersection, relieving or avoiding traffic congestion.
According to the simulation experiment results and the relevant literature, the parameter values are set accordingly; substituting them into inequality (3) shows that the "line up first, then leave" method realizes its advantage only when more than thirteen vehicles are waiting in the standing queue.
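To make the comparison between the two release times concrete, here is a minimal Python sketch (not part of the original paper) that evaluates T1 = N U1 / v for the moving motorcade and T2 = Ts + N/S for the "line up first, then leave" release, and reports the smallest queue size for which the queued release wins. All numeric parameter values are illustrative assumptions only, chosen to land near the fourteen-vehicle threshold quoted above; the study's own values are not given in this copy.

# Minimal sketch, not from the paper: compares the two release times.
# All parameter values below are illustrative assumptions, not the study's own.

def release_time_moving(n_veh, headway_m=25.0, speed_ms=10.0):
    # T1: time for a moving motorcade of n_veh vehicles (spacing headway_m,
    # speed speed_ms) to pass the stop-line without stopping (SEFP release).
    return n_veh * headway_m / speed_ms

def release_time_queued(n_veh, startup_delay_s=6.8, sat_flow_veh_s=0.5):
    # T2: time for a standing queue of n_veh vehicles to discharge at the
    # saturation flow rate after a fixed start-up delay ("line up first, then leave").
    return startup_delay_s + n_veh / sat_flow_veh_s

# Smallest queue size for which the queued release beats the moving motorcade (T2 < T1).
threshold = next(n for n in range(1, 100)
                 if release_time_queued(n) < release_time_moving(n))
print(f"queued release is faster from N = {threshold} vehicles onward")

With these assumed values the break-even point falls at fourteen vehicles, consistent with the threshold cited above; with the study's actual parameters the exact number could differ.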
It should be noted that the "line up first, then leave" release method inevitably increases vehicle delays, stops, and queue length. In effect, the traffic volume of the critical intersection is increased at the expense of the other traffic indexes. During the over-saturation period it is impossible to optimize all traffic indexes at the same time, so in order to prevent and alleviate congestion effectively we must guarantee the increase of the traffic volume at the critical intersection first, while accepting some deterioration in the other indicators.
The Release Method of "Releasing While Lining Up"
According to traffic wave theory, while incoming vehicles stop and line up behind the stop-line, the stop wave created by their stoppage moves upstream at speed v2. When the last incoming vehicle stops at the end of the standing queue, the stop wave also reaches the end of the queue. When the traffic signal turns green, the vehicles in the standing queue begin to discharge, and the discharge wave also moves upstream, but at speed v1, with v1 > v2. The last vehicle in the standing queue starts at the moment the discharge wave reaches the end of the queue. In order to reduce vehicle delays and stops while guaranteeing that vehicles leave at the saturation flow rate, we adopt the "releasing while lining up" method, which allows the front vehicles of the standing queue to start being released while the subsequent incoming vehicles are still lining up. In addition, the following condition should be satisfied: when the last incoming vehicle arrives at the end of the standing queue, i.e., when the stop wave reaches the end of the queue, the discharge wave arrives there as well, so that the last vehicle does not need to stop and heads for the downstream (critical) intersection directly. In theory, this situation can be realized by providing an appropriate signal timing scheme for the critical intersection and its upstream intersections.
It is crucial to set a suitable cycle length when formulating the traffic signal timing scheme. In research on traffic signal timing, some scholars have pointed out that the optimal signal cycle length for over-saturated intersections is 160 s and that the maximum value should not exceed 180 s [26]. In each signal cycle there are always start-up delays at the initial stage of the vehicle release period. Within the same over-saturation period, a longer signal cycle leads to fewer signal cycles and a shorter total start-up delay, which is conducive to full use of the green time and an increase in vehicle throughput at the critical intersection. However, the longer the signal cycle, the longer the green time allocated to each phase, and the longer the waiting time during the red phases. Under over-saturated conditions, long waiting times cause vehicle accumulation, queue growth, and even spill-back, thereby increasing the risk that congestion deteriorates and propagates. When traffic over-saturation occurs, the most important thing is to keep vehicles "flowing" instead of stagnating on the road. Consequently, it is not desirable to pursue a long signal cycle merely to reduce the total start-up delay, which is only one part of the total vehicle delay at the intersection.
In addition to the relatively large start-up delays of the front queuing vehicles, start-up delays also exist for the other queuing vehicles because of the operating characteristics of automobile engines. Since the start-up delays of the rear vehicles are relatively small, they are often ignored or included in the total start-up delay Ts. If the signal cycle were long, more vehicles would wait to be released and the total vehicle delay would increase. Accordingly, we take the fourteen vehicles satisfying inequality (3) as the target value and release these vehicles as soon as they have completed lining up in a standing queue. The green time gi (i = 1, 2, 3, 4) for each phase can be obtained from equation (2), with gi = T2; the signal timing plan at the critical intersection then follows.
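As a small illustration of the timing-plan step just described, the sketch below computes the per-phase green time gi = T2 for the fourteen-vehicle target queue and the resulting cycle length, and checks it against the 160-180 s guidance cited above. The start-up delay, saturation flow rate, and per-phase lost time are the same assumed values used in the earlier sketch, not the original study's parameters.

# Minimal sketch (assumed parameters): per-phase green time and cycle length
# for the improved SEFP timing plan, with g_i = T2 for the 14-vehicle target queue.

N_TARGET = 14          # vehicles released per phase (target from inequality (3))
STARTUP_DELAY_S = 6.8  # assumed total start-up delay Ts
SAT_FLOW_VEH_S = 0.5   # assumed saturation flow rate S (veh/s)
LOST_TIME_S = 4.0      # assumed yellow/all-red lost time per phase
N_PHASES = 4

green_per_phase = STARTUP_DELAY_S + N_TARGET / SAT_FLOW_VEH_S   # T2 from equation (2)
cycle_length = N_PHASES * (green_per_phase + LOST_TIME_S)

print(f"g_i = {green_per_phase:.1f} s per phase, cycle = {cycle_length:.1f} s")
assert cycle_length <= 180, "cycle exceeds the 180 s upper bound cited in the text"

With these assumptions the cycle comes out near the 160 s value reported as optimal for over-saturated intersections and stays below the 180 s bound.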
The Signal Offsets Control of the Released Three-turn Vehicle Streams
There is still a key problem to be solved when implementing the "releasing while lining up" method: how to control, strictly and uniformly, the number of vehicles entering the critical intersection's entrances from the adjacent intersections. In order to ensure that fourteen vehicles (the average value) flow into each entrance lane within each signal cycle, we carry out coordinated control and use a new phase mode (also adopted in the SEFP method) at the upstream intersections. Figure 2 shows the new phase control mode. Three streams of vehicles driving to the same downstream intersection are grouped together and controlled by the same signal. These three vehicle streams heading to the critical intersection are defined as "the related three-turn vehicle streams".
We take one of the upstream intersections for illustration, as shown in Figure 4. At the west intersection, the related three-turn vehicle streams comprise the left-turn vehicles at the north entrance (the number of left-turn lanes is n1), the straight-through vehicles at the west entrance (the number of straight lanes is n2), and the right-turn vehicles at the south entrance (the number of right-turn lanes is n3).
The release times of the related three-turn vehicle streams at the other upstream intersections can be obtained in the same way.
After the related three-turn vehicle streams enter the downstream section, they form a new motorcade on each lane and drive to the critical intersection. According to the "releasing while lining up" method proposed above, the leading vehicle of each motorcade arrives at the stop-line during the red light and lines up, and the later-coming vehicles are still queuing when the traffic light turns green. If the last incoming vehicle is to arrive at the end of the standing queue exactly when the discharge wave reaches it, the signal offset t1 between the critical intersection and the west intersection should satisfy equation (5). The signal timing plans at the adjacent intersections can be obtained with reference to equation (4), and the signal offsets can be calculated using equation (5), so the coordinated control problem at the upstream intersections is resolved. This coordinated control helps the "releasing while lining up" method play its role better at the critical intersection, not only taking full advantage of the green time but also avoiding long waiting periods and a second stop.
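Since equations (4) and (5) are not reproduced in this copy of the text, the following sketch only illustrates the kinematic-wave reasoning behind the offset: it computes how long after the platoon leader stops the green should start so that the discharge wave and the stop wave meet at the back of the queue, and a hypothetical offset obtained by adding the platoon's travel time from the upstream stop-line. The wave speeds, jam spacing, and travel time are assumed values, and both function names are hypothetical.

# Minimal sketch, assumed values only: wave-based timing for "releasing while lining up".
# Equations (4)-(5) are not reproduced here, so this merely illustrates the reasoning.

def green_start_after_leader_stops(n_veh, jam_spacing_m=7.0,
                                   stop_wave_ms=4.0, discharge_wave_ms=6.0):
    # Seconds between the platoon leader stopping at the critical stop-line and the
    # green onset, such that the discharge wave (speed discharge_wave_ms) reaches the
    # back of the n_veh-vehicle queue exactly when the stop wave (speed stop_wave_ms) does.
    queue_length_m = n_veh * jam_spacing_m
    return queue_length_m / stop_wave_ms - queue_length_m / discharge_wave_ms

def offset_vs_upstream_green(n_veh, travel_time_s=40.0):
    # Hypothetical offset t1: critical-intersection green start measured from the
    # upstream green start = platoon travel time + the lead computed above.
    return travel_time_s + green_start_after_leader_stops(n_veh)

print(round(green_start_after_leader_stops(14), 1))   # about 8.2 s of red after arrival
print(round(offset_vs_upstream_green(14), 1))         # about 48.2 s with these assumptions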
The Closure Control at the Upstream Intersection
Before leaving the critical intersection, vehicles in the related three-turn vehicle streams line up on different lanes according to their destinations. In practice, the number of vehicles on each lane is not always equal to fourteen; fourteen is just the average number of vehicles in each standing queue. If the number of vehicles in a standing queue is greater than fourteen, the vehicles at the rear of the queue cannot leave the critical intersection within the green time calculated for just fourteen vehicles, and they have to occupy the green phase of the next cycle to leave. In this case, more vehicles accumulate behind the stop-line as time goes on, leading to growing queue lengths and the occurrence of spill-out. Consequently, it is necessary to control the vehicle queue length for practical application under real traffic conditions.
In recent years, the development of image processing and video technology [27] has provided new means for urban traffic management. As highly efficient but low-cost detectors, ground-sense coils are widely used in traffic management. In this research, we employ ground-sense coils to detect the actual queue length on each entrance lane and apply closure control to the related three-turn vehicle streams at the west intersection when the queue length reaches a certain value, stopping redundant vehicles from entering the critical intersection in time. In the figure, the red rectangle represents the ground-sense coil, and the position of the ground-sense coil determines the maximum queue length that is allowed.
In order to avoid further growth of the existing queue, the related three-turn vehicle streams should be stopped for G1 seconds in the next green phase, i.e., the closure time is G1. To describe the relationship between G1 and g1' quantitatively, we introduce the closure coefficient k, which reflects the proportion of the closure time G1 in the green time g1'. When the existing queue length is close to l1, the value of k is close to 1, i.e., the closure time G1 of the related three-turn vehicle streams is nearly equal to g1'. During rush hours the vehicles' arrival rate at each entrance of the upstream intersection increases; if G1 were large, the release time for the related three-turn vehicle streams would be shortened, causing serious queuing and even spill-out there. Therefore, the value of k should not be set too large. According to equation (6), the value of G1 depends on the actual queue length that is allowed, so the value of l1, which depends on the position of the ground-sense coil, should be set appropriately to avoid traffic congestion at the upstream intersection when closure control is applied there.
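The exact form of equation (6) is not recoverable from this copy, so the sketch below only assumes one plausible choice: the closure coefficient k is taken as the detected queue length relative to the maximum allowed length l1 set by the coil position, and the closure time is G1 = k times g1'. Both the function and all numbers are hypothetical illustrations, not the paper's formula.

# Minimal sketch under a stated assumption: k = detected queue length / l1 (capped at 1),
# G1 = k * g1'. This is an illustration only; equation (6) itself is not reproduced here.

def closure_time_s(queue_length_m, l1_m=100.0, g1_prime_s=30.0):
    # Seconds of the next green phase withheld from the related three-turn streams.
    k = min(max(queue_length_m / l1_m, 0.0), 1.0)   # k -> 1 as the queue approaches l1
    return k * g1_prime_s

for q in (20.0, 60.0, 95.0):
    print(f"{q:.0f} m queue -> {closure_time_s(q):.1f} s closure")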
Experimental Results and Analysis
In order to verify the validity and practicality of the improved SEFP method, we choose five intersections in the central area of Mianyang city as the study objects and employ the professional traffic simulation software VISSIM to carry out the simulations. Figure 6(a) shows the map of these five intersections; the critical intersection is marked by a red circle and the adjacent intersections are marked by yellow circles. Figure 6(b) shows the lane distribution and numbering on each section. After many one-hour simulations (all beginning and ending with over-saturated traffic volumes) of the three methods, i.e., the SEFP method, the "line up first, then leave" method, and the "releasing while lining up" method, we record the simulation results, calculate the average values of the traffic indexes for each method, and list them in Table 1. Q is the traffic volume of the critical intersection, l is the average queue length on each entrance lane, D is the average vehicle delay, and P is the average number of stops. For clear comparison, a bar graph is used to display the simulation results of each method, as shown in Figure 7 (the comparison of the experimental results). In each comparison group, there are three bars of different colors denoting the results of the different methods; the higher the value, the longer the bar.
The simulation results in Table 1 show that when the SEFP method is adopted, nearly all the vehicles flowing to the critical intersection can be discharged within the green time, so the number of stops is P = 0. The average queue length is only 0.22 m (shorter than one vehicle), and the average vehicle delay is 1.62 s, which is far below the limit of drivers' patience. All these traffic indexes reach satisfactory values, but the traffic volume Q can be improved further.
By contrast, the simulation results of the "line up first, then leave" method are mixed. The traffic indexes l, D, and P are all worse than those of the SEFP method; only the vehicle throughput Q improves. In the "line up first, then leave" method, all vehicles entering the critical intersection must line up first in order to leave at the saturation flow rate; therefore, the traffic throughput of the critical intersection is higher than in the SEFP method. The traffic volume at the critical intersection increases from 10,458 veh to 10,789 veh over the same simulation time, which helps relieve the traffic pressure during peak periods.
According to the simulation results, the "releasing while lining up" method achieves the best overall balance of the traffic indexes. Only some vehicles in the motorcades need to stop first and then leave, so the values of l, D, and P are all improved relative to the "line up first, then leave" method. Although the values of l, D, and P are not as good as those of the SEFP method, they are all within acceptable ranges. The average queue length is 0.42 m (also shorter than one vehicle), nearly achieving the goal that no vehicle remains stopped after each green phase. The average vehicle delay is 2.03 s, slightly higher than in the SEFP method (1.62 s) but far lower than in the "line up first, then leave" method (12.26 s). The average number of stops is only 0.1, which is also acceptable. Most importantly, the traffic volume is 10,820 veh, the highest value among the three methods. Consequently, the improved SEFP method, i.e., the "releasing while lining up" method, keeps more vehicles "flowing" in peak periods and effectively avoids traffic over-saturation at the critical intersection. In this paper we have proposed the improved SEFP method (the "releasing while lining up" method) on the basis of our previous study; it appreciably raises the throughput of the key spot in the road network and is better suited to over-saturated traffic conditions.
Conclusions
The main advantages of the improved SEFP method are: 1) The fixed-time control policy and the SEFP release mode used in this paper greatly simplify the traffic control process and are easier to implement and apply in an actual traffic system.
2) The "releasing while lining up" method can make the vehicles leave the critical intersection with the saturated flow speed, take full advantage of time resources, and maximize the traffic volume of the critical intersection during the rush hours, preventing the critical intersection from over-saturation effectively.
3) In order to address the deviation between the theoretical analysis and practical application, we employ ground-sense coils to monitor the actual queue length on the entrance lanes and then apply closure control at the upstream intersections if necessary, reducing the risk of spill-out caused by the growth of the queue length.
The efficient operation of critical intersections contributes to smooth traffic in the regional road network. Future research will study optimal control of the road network on the basis of the improved SEFP method.
|
2021-05-10T00:03:40.570Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "41a42286fef70eff35229c86db2c1bee7ab688fa",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1828/1/012151",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "35bf2b7805b270380fa1b0f5a129721548019233",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
}
|
15708730
|
pes2o/s2orc
|
v3-fos-license
|
Current strategies against invasive fungal infections in patients with aplastic anemia, strong power and weak weapon, a case report and review of literature
We report an 18-year-old boy with Aplastic anemia complicated by serious fungal rhinosinusitis. Despite prompt treatment and early, repeated surgical debridements, he died after more than 6 weeks of struggle with fungal infections. Current strategies against invasive fungal infections (IFIs) in patients with Aplastic anemia may be inadequate for the management of serious complications. Antifungal prophylaxis is highly recommended during the pre-transplant period in the severe form of Aplastic anemia.
Introduction
Aplastic anemia is characterized by bone marrow failure and a marked decrease in all marrow elements. In the severe form of Aplastic anemia, rapid bone marrow transplantation after the primary workup is life-saving; however, maintaining a protected environment and preventing opportunistic infections may be difficult in these cases [1].
Most patients with Aplastic anemia experience repeated episodes of infection during their lives. Gram-positive organisms (predominantly gram-positive cocci) and gram-negative organisms (especially multidrug-resistant (MDR) gram-negative bacilli) are the most common causes of infection, but IFIs remain the main cause of death and increase mortality in these patients [2,3]. Aspergillosis and Mucormycosis are the most common mold infections in patients with Aplastic anemia [2].
In a case series reported by Valera (2011) of 32 patients with acute invasive fungal rhinosinusitis, all deaths occurred among patients with Aplastic anemia despite surgical debridement and systemic antifungal therapy [4].
Severe neutropenia predisposes these patients to more severe forms of IFIs with a wide range of clinical manifestations. Gastrointestinal [5], upper airway [6], musculoskeletal [7], cardiac [8], renal [9], disseminated [10], and rhinocerebral/sino-orbital/rhinosinusitis [11] infections are among the most commonly reported manifestations of IFIs in patients with Aplastic anemia; of these, the last is the most serious and the most often fatal [4]. We report a serious fungal infection in a case of Aplastic anemia and offer an appropriate strategy for treatment and prevention in such patients.
Case
An 18-year-old boy, a known case of Aplastic anemia for 7 years, was admitted to the emergency ward with severe headache and fever. He was a candidate for bone marrow transplantation because of failure of standard treatment, which had included corticosteroids, anti-thymocyte globulin (ATG), and cyclosporine, and he had been placed on the transplant waiting list. He had frequently received blood and platelet transfusions because of low hemoglobin (Hb) levels and recurrent nose bleeding.
After the initial assessment, a broad-spectrum antibiotic (piperacillin-tazobactam) was started. Two days later, he developed pain, swelling, and redness of the right side of the face. The patient gradually exhibited high-grade fever, chills, intolerable headache, and periodic disorientation; because of the poor clinical response, an antifungal agent (amphotericin B deoxycholate) was added to his antibiotic regimen on the fifth day of admission.
On serial physical examinations (after the initial unilateral facial swelling and cellulitis), he developed necrotic lesions of the soft and hard palate, followed by necrosis of the nasal septum, right alar groove, and right nasolabial fold.
A surgical consult was also requested for diagnostic aspiration and evaluation for surgical debridement. Despite the low platelet count, right maxillary sinus debridement was performed after a single-donor platelet transfusion, and samples were sent for pathology. Fungal elements resembling mucoral hyphae were reported by the pathologist.
Despite the early surgical debridement and a short period of clinical improvement, all signs and symptoms were exacerbated again after a few days. Caspofungin was added, a second surgical debridement was planned 10 days after the first, and short-interval surgical sinus debridements under platelet transfusion were organized until the sinuses were completely clear.
At the next surgical debridement, tissue samples were cultured on Sabouraud dextrose agar (Merck, Darmstadt, Germany) and examined for Aspergillus and Candida DNA by real-time polymerase chain reaction (PCR) and for Mucoral DNA by nested PCR [12,13]. Specimens were also sent for bacterial culture and pathology.
Two further successful debridements were performed at short intervals. Finally, all involved sinuses, the nasal cavity, and the overlying soft tissue were completely removed by anterior and posterior ethmoidectomy and sphenoidectomy, and the posterior part of the septum was also removed. Detailed information on the timing and results of the clinical samples is given in Table 1.
Adjuvant therapy with gamma interferon 100 μg/day in combination with granulocyte colony-stimulating factor (G-CSF) [initially 300 μg/day, then a full dose of 600 μg/day in two divided doses] was added to the broad antibacterial and antifungal treatment.
Other assessments, including blood and urine cultures, were negative, and chest x-ray, abdominal ultrasonography, and echocardiography were normal on the primary evaluations.
However, serial chest x-rays showed possible early signs of pulmonary involvement about 3 weeks after admission (Fig. 2).
Bilateral, well-circumscribed ground-glass opacities were detected on these chest x-rays and were confirmed by spiral chest CT scan (Fig. 3).
Further investigation of the fungal infection revealed positive PCR results for Mucormycosis, Aspergillosis, and Candidiasis, and positive fungal cultures for Aspergillus flavus and Candida albicans from the repeated debridements (Fig. 4C).
No positive culture was obtained for Mucormycosis. During the admission, the patient had several positive blood cultures (Table 2), and his antibiotics were changed on the basis of antibacterial susceptibility testing. Sinus debridement during antifungal treatment was performed four times, but the patient's condition gradually worsened and he eventually died.
Discussion
Frequent episodes of profound and prolonged neutropenia, aggressive chemotherapy, exposure to fungal spores in a non-protected environment with consequent pre-admission colonization by fungal agents, and high-risk conditions such as Aplastic anemia and acute myeloid leukemia (AML) are typical factors that increase the risk of IFIs [14].
A multidisciplinary approach to IFIs, together with a high index of clinical suspicion and early diagnosis and treatment, significantly reduces mortality [15]. A persistently low absolute neutrophil count (ANC), the lack of standard protective measures such as high-efficiency particulate air (HEPA) filtration in many centers, and the progressive nature of certain fungal infections such as Mucormycosis are all important obstacles to controlling fungal infections in these patients.
Despite a very low platelet count (less than 6 × 10³/microliter), our patient underwent four successful surgical debridements during continuous single-donor platelet transfusions without any hemostatic complications. Each time, he was taken to the operating room after donor platelets had been prepared for infusion during the surgical debridement (Fig. 5).
Along with severe neutropenia, the prolonged hospital stay, broad-spectrum antibiotic therapy, damaged skin and mucosal barriers, and the presence of indwelling catheters predisposed our patient to colonization and infection with hospital-acquired multidrug-resistant organisms. Such patients are at greater risk of acquiring MDR gram-positive and gram-negative organisms such as Acinetobacter species, methicillin-resistant Staphylococcus aureus (MRSA), and vancomycin-resistant enterococci (VRE) [2]. This is an important point in the management of these patients and should be taken into consideration, as illustrated by our patient, whose course was finally complicated by VRE bacteremia.
Diagnostic misconception based on the initial clinical presentation is another important challenge in the management of IFIs in severely neutropenic patients.
Although rhinocerebral/sino-orbital/rhinosinusitis IFIs are frequently considered a clinical manifestation of Mucormycosis, it is critical to obtain proper tissue specimens for culture and biopsy whenever possible, given the similar clinical presentation of Aspergillosis. The Mucoral family is difficult to cultivate, with a reported culture sensitivity of around 50% [16]; thus other diagnostic modalities (histopathological characteristics and molecular tests) should be considered when needed.
Co-detection of multiple fungi is also not uncommon in severely neutropenic patients, and an accurate diagnosis supports correct decision-making in choosing the proper antifungal regimen.
Although a fever-driven approach to empirical antifungal therapy based on existing guidelines is currently applied in many hematology-oncology centers worldwide, this issue remains a challenge, and IFIs are still one of the leading causes of mortality and morbidity in neutropenic patients with persistent fever [17].
Other treatment approaches, either pre-emptive (diagnostic-driven) or targeted therapy, may also be used, depending on radiological findings, clinical symptoms, and mycology test results. These approaches have been tested by researchers in different populations and in various settings of neutropenic patients.
Although implementation of these strategies has substantially reduced the burden of IFIs in high-risk patients, antifungal prophylaxis is still an important strategy for the prevention of IFIs in certain circumstances.
Currently, there is no recommendation on starting antifungal prophylaxis in patients with Aplastic anemia [18,19]. Although prompt diagnosis and early transplantation seem to be the only reliable means of protecting such patients against serious infectious complications (mainly IFIs), based on our experience and other similar reports we recommend that patients with Aplastic anemia receive anti-mold prophylaxis during periods of profound and prolonged neutropenia (ANC less than 500/microliter), rather than the other strategies proposed for the pre-transplantation period. It should be noted that, in the case of Aspergillosis, indirect tests such as galactomannan and molecular tests have improved our diagnostic power for the early detection of IFIs; yet there is no standard diagnostic test for the early detection of Mucormycosis apart from histopathology and culture [20].
Also, in centers with a high incidence of Mucormycosis, it seems better to use a prophylactic agent that is active against both Mucormycosis and Aspergillosis.
Conflict of interest
The authors did not have any financial or other relationships that could be regarded as a conflict of interest.
Figure legend: Serial complete blood counts (CBC) with differential were recorded from admission. Red arrows indicate the times of debridement. Note the very low platelet counts on the days on which the patient was a candidate for surgical debridement; he received multiple donor platelet infusions during the repeated operations.
|
2016-05-12T22:15:10.714Z
|
2016-03-01T00:00:00.000
|
{
"year": 2016,
"sha1": "8966235f2590e7e7990e59e5028836b610b235d6",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.mmcr.2016.03.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8966235f2590e7e7990e59e5028836b610b235d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
58991577
|
pes2o/s2orc
|
v3-fos-license
|
Top 50 cited journal articles on overhead throwing athletes: a bibliographic analysis
Background The frequency of citations for a journal article is a reflection of its academic impact. The purpose of this study was to identify and characterize the top 50 cited journal articles related to overhead throwing athletes in the published literature. Methods The Web of Science database was searched on January 18, 2016, using the terms “throwing athlete,” “baseball,” and “pitcher” to identify the top 50 cited articles related to overhead throwing athletes using the all-database function. The type of study, country of origin, publishing journal, and year published were reviewed for each article. Results The top 50 articles identified were cited between 95 and 471 times and were published in 13 journals between 1969 and 2011. Most of the articles were small case series or nonsystematic literature reviews. The shoulder was the most common body region studied in the top 50 articles (33 of 50 [66%]). Among original studies (n = 43), there was a good representation of surgical management of shoulder and elbow pathology in overhead athletes (9 of 43 [20.9%]); however, most of the articles reported on shoulder and elbow kinematics (19 of 43 [44.2%]) and pathoanatomy (15 of 43 [34.9%]). Conclusion The greater prevalence of nonsurgical articles may reflect a continued effort to better understand the different pathologies specific to overhead throwing athletes. An understanding of the variable content and quality of frequently cited articles on overhead throwing athletes may serve as a stepping stone for future studies to advance the diagnosis and management of complex elbow and shoulder injuries in these high functional individuals.
The number of times a journal article is cited serves as a measurement of the influence of that publication in a specific field. In 2002, Paladugu et al, 72 published the "One hundred citation classics in general surgical journals" to identify seminal contributions in general surgery. Inspired by Paladugu et al, multiple specialties have initiated similar publications, including the publications by Lefaivre et al 55 and Kelly et al 48 in orthopedic surgery. A number of subspecialties within orthopedics have published similar "top cited" or "classic papers" studies, including pediatrics, 8,47 hip and knee arthroplasty, 41 arthroscopy, 19 hip arthroscopy, 53 shoulder surgery, 67 elbow surgery, 43 knee research, 1 foot and ankle, 9,26 spine surgery, 64,87,90,100 sports medicine, 69 fracture surgery, 7 and hand surgery. 46,98 Specific journals have also published findings on their own top cited articles. 24,29,54,60,61 The number of times an article is cited is not the only way to determine its true importance or impact in a field, but it does help identify "classic" articles relevant to orthopedic knowledge and training and may serve as a way to gauge the focus of interest within a given specialty over a period of time.
In sports medicine, particularly in the field of shoulder and elbow surgery, the pathology and treatment associated with overhead throwing athletes is of great interest. Major advances have occurred in the diagnosis and management of shoulder and elbow pathology in throwing athletes in the past decade. 27,32,51,63,79,84,86,88 This is likely related to a combination of improvement in diagnostic and surgical technology and greater understanding of shoulder and elbow mechanics and pathoanatomy. As the niche for specialized care in high-level overhead throwing athletes continues to expand, identifying the top cited articles in the field provides a concise list of published articles that may serve as a stepping stone for ongoing and future research aimed at improving outcomes in complex pathologies common in overhead throwing athletes. The purpose of this study was to identify and characterize the top 50 cited journal articles related to overhead throwing athletes in the published literature.
Materials and methods
The Web of Science (formerly Web of Knowledge) database was used to search for all studies of overhead throwing athletes using the search terms "throwing athletes," "pitchers," and "baseball." Between 1945 and 2017, 5538 journal articles from 58 countries and 13,711 authors met the search criteria without restrictions in the type or specialty or journal articles. Previous studies have demonstrated that an all-database search represents a more in-depth methodology of determining the true citation ranking of articles when using this database. 19 Results were ranked by number of citations and screened for studies related to overhead throwing mechanics, upper extremity anatomy, and injuries and surgical interventions in overhead throwing athletes. Excluded were studies associated with nonoverhead sports (rugby, football [soccer]) or nonthrowing baseball studies (batting mechanics and catching), psychological or cognitive evaluations, economic analysis, and sudden death, among others ( Fig. 1).
All selected journal articles were reviewed and analyzed according to the type of article (basic science, clinical, or review), topic (pathology/injury, surgical management, nonsurgical management, or biomechanical/kinetic study), body region (shoulder, elbow, or other), authorship, country of origin, publishing journal, and year of publication. Clinical studies were further analyzed by the level of evidence based on guidelines adapted by The Journal of Bone and Joint Surgery from the Oxford Center for Evidence-Based Medicine 2011 Working Group. 92 Lastly, articles in the top 50 were assessed for citation density, defined as the number of times cited divided by number of years since publication. 67
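Citation density as defined here is simply the number of citations divided by the years since publication. The short Python sketch below computes and ranks it for two made-up entries, using the January 2016 search date as the census year; the titles and numbers are hypothetical placeholders, not rows from Table I.

# Minimal sketch: citation density = times cited / years since publication.
# Entries below are hypothetical placeholders, not articles from Table I.

articles = [
    {"title": "Hypothetical kinematics study", "citations": 300, "year": 1998},
    {"title": "Hypothetical surgical series",  "citations": 120, "year": 2010},
]

def citation_density(article, census_year=2016):
    return article["citations"] / (census_year - article["year"])

for a in sorted(articles, key=citation_density, reverse=True):
    print(a["title"], round(citation_density(a), 2))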
Results
The top 50 cited articles in the present study were published between 1969 and 2006 in 13 journals, from 4 countries, and by 142 authors (Table I). The top article, by Fleisig et al, 34 was cited 475 times, and the 50th article, by Reinold et al, 79 was cited 98 times. Taken together, the top 50 articles were cited an average of 170 times and accounted for 8557 citations in the literature. The oldest article was published in 1969 by King et al, 49 and the most recent article was published in 2011 by Wilk et al. 98 Half of the articles were published on or after 2000 (Table II).
More than half of the papers (28 of 50 [56%]) were published in the American Journal of Sports Medicine, followed by Journal of Bone and Joint Surgery (4 of 50 [8%]; Table III). All but 3 articles originated from the United States. A total of 142 authors were listed; however, JR Andrews contributed to 19 of the top 50 articles (38%) in these studies.
Citation density ranged from 2.94 to 26.93. The top 3 studies with the highest citation density correlated with the top 3 most cited papers; however, the study with the fourth highest citation density ranked 49th on the total citation list. 105 Among those with citation density greater than 10 (17 of 50 [34.0%]), only 1 discussed surgical management, and its focus was ulnar collateral ligament pathology. The remainder of the studies described biomechanics and pathoanatomy of the shoulder and elbow.
Discussion
Overhead throwing athletes exert strong and repetitive forces across the shoulder and elbow joints and subject the arm to range of motion extremes. 20 These highly athletic individuals are susceptible to a wide range of complex injuries to the upper extremity. 2,3,21,89 Although the findings in this study do not provide answers into the improvement of managing different pathologies among overhead throwing athletes, specifically baseball players, our study highlights the different areas of interest published in the literature for new and ongoing research. A greater understanding of shoulder and elbow anatomy and kinetics may identify opportunities for advancement in preventing and treating a number of pathologies.
Baseball has been described as the ninth toughest sport in the world. 33 In other words, only 8 sports, most of which are contact sports (ie, boxing, hockey, American football, wrestling, and martial arts), place greater physical demands on the competing athletes. It is not surprising that many of the articles in the top 50 studies focused on the biomechanics of the shoulder girdle and elbow, attempting to provide information on the muscle forces and balances at these joints during the overhead throwing motion in high-level athletes. Interestingly, 4 studies discussed scapular kinematics and its role in shoulder function, 23,42,49,66 an understanding that is critical for those who care for these athletes. Although the effect of the papers that discussed scapular kinematics on the current understanding of shoulder pathology is difficult to assess, more recent studies have reported on the role of the scapula in shoulder pathology, including rotator cuff disease, glenohumeral internal rotation deficit, subacromial impingement, internal impingement, labral tears, anterior capsule laxity, and shoulder instability. 77 As a result, assessment of scapular position, mobility, and strength is a crucial part of successful rehabilitation programs in overhead throwing athletes. 104 Only 1 review paper on shoulder pathology reported the outcomes of conservative management of throwers. Results of operative intervention of the shoulder in this population are tempered by a systematic review reporting 63% of athletes returning to the same level of play after superior labrum anteroposterior repair. 84 Effective nonoperative treatment strategies are of paramount importance to maintain high rates of return to play, but the literature lacks outcomes reporting and evidence-based treatment guidance. Only 9 of the top 50 studies (18%) in this review reported the surgical management of shoulder or elbow injuries; these were published in the United States, in 3 different journals. All were small case series or retrospective case-control studies, 3 of which did not report outcomes. The outcome studies reported only changes in range of motion or return to play. The highest ranked surgical study (#4) reported outcomes of repair or reconstruction of the medial collateral ligament of the elbow and was published in 1992.
The indication of one technique over another may likely be associated with the type of injury, patient population, and surgeon preference; however, there is currently no consensus on the best treatment option of ulnar collateral ligament injuries. 68 These findings highlight that despite significant advances in understanding of elbow pathology and surgical instrumentation, future studies should aim to apply appropriate methodology to answer clinically relevant questions with outcomes data including not only return to play, but time to return to play. 59 Although 142 authors contributed to this body of literature, 1 author (JR Andrews) was involved in more than one-third of the studies in the top 50 cited articles. Furthermore, all but 3 papers were published in the United States. This study identifies specific leaders in the field, underlines the importance of baseball within the sporting landscape of the United States, and highlights the need for greater diversity in the field of overhead throwing sports. Baseball is among the top 10 most popular sports in the world. Baseball is the second most popular sport in the United States 83 and is the most popular sport in Japan. 101 In 2016, Major League Baseball reported $10 billion in revenue, 14 and the New York Yankees are tied second for the most valuable sports franchise in the world, at $3.2 billion. 94 Clearly, there is great interest and tremendous value in the prevention and management of injuries in athletes at the highest level of competition. The money associated with Major League Baseball and the popularity of the sport in the United States both likely played a role in the greater prevalence of studies in the United States. Perhaps more important, the rise in the epidemic of youth and adolescent throwing arm injuries is cause for concern, with a need for more studies and additional understanding. 59 The usefulness or appropriateness of compiling lists of top cited articles has been questioned. 31 Some authors contend that simply ranking articles according to the number of times cited does not provide readers with high-quality publications. Our study supports this viewpoint, because the level of evidence for 61% of clinical studies was IV, and all reviews were not performed to systematic review standards. There is consensus regarding the relative weakness in methodological quality of orthopedic literature; however, evidence shows that the quality of orthopedic literature is improving. 38,40 Of the publications on this list, 54% (27 of 50) were published in the American Journal of Sports Medicine, one of the highest-rated orthopedic journals in the world, with a 5-year impact factor of 5.501. 91 Furthermore, many orthopedic journals have encouraged authors to use reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT), 81 Strengthening the Reporting of Observational studies in Epidemiology (STROBE), 93 and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), 56 to continue to improve the research quality in orthopedics.
Despite the absence of high-quality clinical studies, this top 50 list provides numerous basic science, biomechanical, and imaging studies that have served as a foundation for the understanding of complex elbow and shoulder pathology in overhead throwing athletes, particularly in baseball players. Although much has been done regarding biomechanics, fatigue, pain, and even injury, the sport still lacks scientifically sound guidelines of safety, including pitch counts and duration. The number of citations can also be influenced by the time since publication, which favors older articles. We corrected for publication duration by calculating the citation density to correct for the years since publication. The top 3 cited articles, however, demonstrated the highest number of citations and highest citation density.
Conclusion
The findings from this study highlight the contributions of investigators who have contributed significantly to the current knowledge of overhead throwing athlete pathologies. Although the list is not meant to be exhaustive, it undoubtedly provides a picture of the direction in which the literature in overhead throwing athletes is headed. Our findings also serve as a primer for the understanding of shoulder and elbow mechanics and pathoanatomy in the overhead throwing athletes and highlights the evolution in management of these complex pathologies in high demand athletes. We additionally highlight that evidence-based medicine for throwing athletes continues to evolve and that the practitioners caring for these athletes continue to make substantial contributions to the field for improved patient care and value. Lastly, this work demonstrates the paucity of high-quality clinical trials among these top cited papers, and understanding the variability in the content and quality of frequently cited articles may help improve the quality of research on overhead throwing athletes.
Disclaimer
The authors, their immediate families, and any research foundations with which they are affiliated have not received any financial payments or other benefits from any commercial entity related to the subject of this article.
|
2019-01-25T14:03:02.595Z
|
2017-06-01T00:00:00.000
|
{
"year": 2017,
"sha1": "f39f8f7f7a47a05398a866aeb6bf3188ecab8972",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jsesopenaccess.org/article/S246860261730044X/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f39f8f7f7a47a05398a866aeb6bf3188ecab8972",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
54452709
|
pes2o/s2orc
|
v3-fos-license
|
Exosomes—the enigmatic regulators of bone homeostasis
Exosomes are a heterogeneous group of cell-derived membranous structures, which mediate crosstalk between cells. Recent studies have revealed a close relationship between exosomes and bone homeostasis. It has been suggested that bone cells spontaneously secrete exosomes containing proteins, lipids, and nucleic acids, which then regulate osteoclastogenesis and osteogenesis. However, the network of regulatory activities of exosomes in bone homeostasis, as well as their therapeutic potential in bone injury, remains largely unknown. This review will detail and discuss the characteristics of exosomes, the regulatory activities of exosomes in bone homeostasis, and the clinical potential of exosomes in bone injury.
INTRODUCTION
Bone is a composite tissue, whose matrix consists of proteins and minerals, and which constantly undergoes modelling and remodeling through the coordination of osteoclasts, osteoblasts, and osteocytes. Osteoclasts, derived from mononuclear hematopoietic myeloid lineage cells, are responsible for bone resorption. 1 Osteoblasts, accounting for (4-6)% of the total resident cells in bone, are responsible for bone formation. 2 Osteocytes, the most abundant cells in bone, are terminally differentiated from osteoblasts and are embedded in the mineralized bone matrix. Osteocytes play a critical role in sensing mechanical loading and regulate the functions of osteoclasts and osteoblasts. 3 The interaction and coordination of these bone cells are important for maintaining bone homeostasis. Bone remodeling usually begins with the death of osteocytes. 3 The apoptotic osteocytes release bioactive molecules, which induce other viable osteocytes to secrete receptor activator of nuclear factor κB ligand (RANKL), which is important for osteoclast differentiation. 4 Subsequently, osteoclast precursors are recruited by chemokines such as monocyte chemoattractant protein (MCP)-1, -2, and -3. 5 The binding of receptor activator of nuclear factor κB (RANK) to RANKL on the surface of monocytes then initiates osteoclastogenesis. 6,7 Meanwhile, osteoblasts produce bioactive molecules including macrophage colony-stimulating factor (M-CSF), MCP-1, and RANKL for the further recruitment and differentiation of osteoclast precursors. 5,8 While resorbing damaged bone, osteoclasts spontaneously secrete "coupling factors", such as insulin-like growth factors (IGF) I and II and transforming growth factor (TGF)-β, which mediate the refilling of resorbed lacunae by osteoblasts. 9 Finally, bone formation is complete when the newly mineralized extracellular bone matrix has entirely replaced the resorbed bone matrix. 10 Bone-derived exosomes are considered to be essential for intercellular communication between bone cells. Exosome-mediated transfer of nucleic acid or protein cargos between bone cells can bypass the spatial barriers between different cells and plays a vital role in the crosstalk between bone cells that regulates bone homeostasis. As the role of exosomes in bone formation and homeostasis has only recently emerged, we summarize the characteristics of exosomes, itemise the known functions of exosomes in bone homeostasis, and discuss their potential for clinical applications.
HISTORY OF THE EXOSOME
A general history of the vesicular nature of exosomes Exosomes, 11 microvesicles, 12 and secretory autophagosomes 13 are three typical extracellular vesicles (EVs) identified recently. However, in early studies, there was no detailed classification or understanding of these extracellular vesicles.
Cellular vesicular components were recognised 140 years ago. Under dark-ground illumination, serum-derived particles were first seen by Edmunds in 1877. 14 The main mass of these particles was then shown to be fat in 1939. 14 Since the function of these particles was unclear, they were simply regarded as blood dust. 14 A clearer structure of cellular vesicles was then seen under the microscope in 1962. 15 However, the function of cellular vesicular components remained mysterious until 1969, when the finding of apatite crystals suggested the participation of cartilage-derived matrix vesicles in calcification. 16 Five years later, microvesicles in fetal calf serum were detected, the last class of EVs detected before the exosome was defined. 17 In 1981, the term exosome was first used for extracellular vesicles ranging from 50 to 1 000 nm. 18 In 1983, the Stahl group and the Johnstone group reported that exosomes derived from reticulocytes could fuse with the plasma membrane and release their contents through exocytosis. 19 Then in 1985, the same group provided electron microscopic evidence for the externalization of exosomes. 20 In 1987, the formation of exosomes was described, and the intraluminal vesicles of multivesicular endosomes (MVEs) were mentioned for the first time. 21 The analysis of exosomal characteristics developed quickly in the first decade after the exosome was defined. However, the function of exosomes remained largely unknown.
A breakthrough in exosomal investigation took place in 1996 when peptide-major histocompatibility complex (MHC) class II complexes-enriched exosomes released from B cells targeting T cells were detected. This finding first described the role of exosome in cell-to-cell communication. 22 Following that, dendritic cell (DC)-derived exosomes 23 and tumor-derived exosomes 24 were investigated one after the other. These two studies showed the interactions and crosstalk between DCs and tumor cells. DCderived exosomes could suppress the growth of tumors, and tumor cell-derived exosomes which contained tumor-rejection antigens could be carried by DCs for cross-protection from tumors. 23,24 These findings were appealing to tumor investigators, and resulted in the generation of numerous reports associated with the tumor-derived exosomes.
The past decade has witnessed an acceleration of exosomal investigations, especially in studies of exosomal function. It is believed that exosomes are the most clearly defined group of secreted membrane vesicles, characteristically containing nucleic acid and proteins for cell signalling. 25 Physiologically, they are critical to the immune system as they are involved with both stimulatory and tolerogenic responses. 26 Also, it has been postulated that exosomes could be involved in regeneration, reducing tissue injury and improving tissue repair. 27 Moreover, they may also be involved in tumor progression 28 and delivery of inflammatory mediators. 29 Consequently, the investigation of exosomes is becoming increasingly attractive as they are now suggested to be the key regulators of various cellular and physiological functions (Fig. 1).
History of bone-derived exosome The history of bone-derived exosomes, however, is relatively recent. In 1975, extracellular membrane particles were first found in bone marrows which suggests a possible link between multiple myeloma-derived extracellular vesicles and bone tissue damage. 30 Then in 1979, normal bone-derived EVs were first mentioned when alveolar bone-derived extracellular matrix vesicles were detected by microscopy. 31 In 1980, osteoblast-derived matrix vesicles were investigated through ultrastructural techniques.
While comparing scanning electron microscopy (SEM) with transmission electron microscopy (TEM), researchers theorised that osteoblast-derived vesicles probably serve as the initial locus of calcification 32 (Fig. 2).
The first mention of bone-derived exosomes was 20 years after the naming of the exosome. 33 At the beginning, bone marrow stromal cell-derived exosomes were the focus of bone-derived exosomes. However other bone cell-secreted exosomes were barely mentioned until 2013 when osteoclast precursors were reported to release exosomes. 34 This initiated the investigation of exosomes from other bone cells. In 2015, the proteome of osteoblast-derived exosomes was for the first time investigated. 35 In 2016, the characteristics and regulatory activities of osteoclastderive exosomes were demonstrated. 36 Then in 2017, osteocytederived exosomes and their miRNA contents were demonstrated. 37 Now, the emerging data of bone-derived exosomes has established the details of exosome-based cell-to-cell interaction in bone.
COMPOSITIONS OF EXOSOMES
The function and biological characteristics of exosomes are determined by their specific contents. Among the exosomal components, lipids, proteins, and nucleic acids are the three main cargos which determine the specificity of exosomes 38 and distinguish them from other extracellular vesicles (Fig. 3). A great variety of exosomal cargos have already been identified and collected into a database named Exocarta, 39 which was subsequently integrated into a broader database, Vesiclepedia. 40 Some examples of exosomal cargos are summarized in Table 1.
Lipids
Exosomal structure and cargo sorting are largely dependent on lipid composition. Various lipids in exosomes have been investigated in the past decades. In a study of cancer cell-derived exosomes, more than 520 lipids from 36 different classes were identified. 41 Lipids are generally enriched in exosomal membranes. The major non-polar lipids in the plasma membrane are sterols, which are highly enriched in multivesicular bodies (MVBs) from late endosomes. 42 Sphingolipids are also important for exosomal membrane construction, in which sphingomyelin is the dominant component and is also involved in cargo sorting. 43 Among exosomal membrane phospholipids, phosphatidylserine is of importance for being the activator of negative charge and the recruiter of signalling proteins. 44,45 Besides contributing to the composition of the exosomal bilayer membrane, lipids in exosomes also play important roles in exosomal trafficking. During the formation of exosomes, enrichment of sphingomyelin is found in membrane lipid rafts. 46 As a result of increased sphingomyelin, down-regulation of ceramide and diacylglycerol occurs, finally reaching a balanced proportion in exosomes. 47 Moreover, lipids play multiple roles in the sorting of nucleic acids and proteins. In miRNA sorting, neutral sphingomyelinase 2 was the first molecule suggested to be associated with this mechanism. 48 Other lipids such as sphingomyelin, ceramide, and sphingosine 1-phosphate have been shown to play important roles in protein sorting. 49 On the other hand, endosomal sorting complex required for transport (ESCRT)-independent exosome secretion is largely dependent on lipids, which are reported to participate significantly in the release of proteolipid-positive exosomes 50 and Aβ peptide-bearing exosomes. 51 Although lipids are not the main participants in exosomal intercellular communication, their roles in maintaining the biological characteristics of exosomes are of importance.
Proteins
Through proteomic analysis, many proteins have been found in all mammalian exosomes, such as cytoskeletal components (tubulin, actin, cofilin, profilin), annexins (annexins I, II, IV, V, and VII), and the small GTPase family members Rab7 and Rab11. Among these exosomal proteins, cytosolic exosome-enriched proteins such as Alix and TSG101 and tetraspanins like CD9 and CD63 are the markers for distinguishing exosomes from other extracellular particles (Table 1). The detection of the proteins listed above allows researchers to quickly assess the characteristics of exosomes. 52 Recent studies have also suggested that heat-shock proteins (Hsp) are highly prevalent in exosomes. Among them, Hsp40 can improve the protein-folding environment in recipient cells, and Hsp70 is an up-regulator of pro-inflammatory cytokines. 53 Protein composition is also crucially involved in ESCRT-dependent cargo sorting during the formation of exosomes. Within the ESCRT complexes (-0, -I, -II, -III), vacuolar protein sorting proteins (VPS proteins) play a major role, functioning as membrane binders and cargo recognisers. 54 Besides the various proteins mentioned above, there are also several other proteins in exosomes that reflect the specificity of cell origin and distinct exosomal functions. For example, latent membrane protein 1 (LMP1) is highly expressed in exosomes released from nasopharyngeal cancer (NPC)-derived malignant epithelial cells. 55 Similarly, a specific cell surface proteoglycan, glypican-1 (GPC1), was detected in exosomes from pancreatic cancer. 56 Collectively, to maintain the specificity of exosomes derived from different donors, various protein cargos must be sorted into exosomes before their release.

Table 1 (excerpt). Examples of exosomal cargos and their roles.
Heat-shock proteins (Hsp70, Hsp90): exosome formation or externalization during maturation. 153
Rab GTPase proteins (Rab27a, Rab27b, Rab35): involved in MVB interaction with the plasma membrane. 94,154
Annexins (annexins I, II, IV, V, and VII): membrane transport/trafficking. 155
Phospholipase (phospholipase D): regulating exosome secretion via hydrolysis of phosphatidylcholine. 156
Cytosolic proteins (β-catenin and elongation factor-1α): signal transduction and protein translation. 155
Glycerophospholipids (phosphatidylserine): the activator of negative charge and the recruiter of signalling proteins. 41,44,45
Glycerophospholipids (phosphatidylglycerol): involved in the transbilayer transport mechanism. 86,157
Sphingolipids (sphingomyelin): involved in exosomal membrane construction and cargo sorting. 41,43
Sterol lipids (oxysterol): involved in membrane contact between intracellular secretory vesicles and the plasma membrane. 41,158
Neutral lipids (ceramide): triggering an exosome biogenesis pathway independent of the ESCRT machinery. 50
Neutral lipids (free cholesterol): regulating the biogenesis and cellular trafficking in endosomes. 159,160
Polyglycerophospholipid (bis(monoacylglycero)phosphate, BMP): involved in MVB formation and ILV biogenesis. 161
Double-stranded DNA (tumor-derived): carrying mutations identical to those of parental cells. 70
Note: the protein compounds selected are mainly responsible for the physiological processes of exosomes, including exosome formation, interaction and trafficking, whereas the lipid compounds are mainly involved in the establishment of the exosomal skeleton. Other bioactive compounds of exosomes are short-chain nucleic acids, including mRNAs, miRNAs and DNAs; they are the main single molecules that regulate recipient cells.
Nucleic acids
Nucleic acids are also enriched in exosomes. Coding RNAs, non-coding RNAs, and single-stranded or double-stranded DNAs have all been found in exosomes. [57][58][59] It is reported that more than 1 600 mRNAs and 700 miRNAs have been detected in mammalian cell-derived exosomes. mRNAs contained in exosomes are usually related to cytogenesis, protein synthesis, and RNA post-transcriptional modification. 57 Exosomal mRNAs have been used as biomarkers since they are specific cargos. 60 In patients with kidney diseases, down-regulation of exosomal CD2AP mRNA has been detected in urine, which can be used for early diagnosis. 61 Exosomal mRNAs are also suggested to be involved in the drug resistance of tumors; therefore, the detection of exosomal mRNA levels may be used to predict optimal treatment options as well as prognosis. 62 Another recent report suggests that synthetic exosomal mRNA triggers exogenous protein expression, which may be a novel approach for the treatment of genetic protein deficiency-related diseases. 63 Exosomes also contain abundant miRNAs. In the immune system, miRNA-enriched exosomes are released from T lymphocytes, B lymphocytes and DCs, and the miRNAs are involved in the interaction between T lymphocytes and antigen-presenting cells. 64,65 In several tumors, exosomal miRNAs participate in tumor growth, 66 metastasis, and drug resistance. 67 Since specific variation of exosomal miRNAs can be detected in some diseases, exosomal profiling can be used as a tool for disease detection.
Exosomal DNA studies began much later than those of RNA; consequently, there is less information available in the literature. To the best of our knowledge, both single-stranded and double-stranded DNAs are contained in exosomes. 58 Evidence has suggested that carrying cytoplasmic DNAs in exosomes protects against cell senescence and cell death caused by DNA injury: cells can secrete exosomes and remove harmful DNAs to the extracellular matrix. 68,69 However, there are only limited data to elucidate the function of double-stranded DNA in exosomes, and little is known about the contribution of single-stranded DNAs. In a study of cancer cells, double-stranded DNA was reportedly used to identify the mutations in cancer cells. 58 Intriguingly, the DNA cargo of tumor cell-derived exosomes is much more abundant than that of normal cell-derived exosomes, suggesting that tumor cells can modify target cells via the transfer of DNAs. 70 There is still a long way to go for a complete understanding of the role of exosomal DNA, since the mechanism of chromosomal DNA sorting into intralumenal vesicles (ILVs) is still largely unknown. 71

EXOSOMAL TRAFFICKING
Exosomal trafficking involves three distinct mechanisms: cargo sorting, exosome release and exosome uptake (Fig. 4). During the generation of the endosomal machinery, ILVs, the early stage of exosomes, are formed through inward budding. Together with the sorting of specific proteins, lipids and nucleic acids into ILVs, the formation of MVEs results. 72 Subsequently, MVEs fuse with the cell membrane, leading to the secretion of exosomes. Following that, surface binding proteins activate the uptake of exosomes by recipient cells. 73 Finally, as endocytosis progresses, exosomes release their contents, which may influence regulatory processes, or they may be degraded in lysosomes. 74

Fig. 4 Exosome release and uptake in cells. a Exosome formation starts with the formation of the early endosome. Subsequently, an ESCRT-dependent mechanism (a), which consists of four multiprotein subcomplexes (ESCRT 0, I, II, and III), or an ESCRT-independent mechanism (b), which relies on a tetraspanin-associated dynamic membrane platform, mediates the maturation of exosomes. After the fusion of the late endosome, which contains mature exosomes, to the cell plasma membrane, exosomes are released into the extracellular matrix. b Exosome uptake begins with the recognition of specific surface proteins of target cells. Subsequently, exosomes are internalized through several internalization pathways. After that, they can either release their cargos to exert their functions or be directly degraded by the lysosome for recycling.

Sorting cargos into exosomes
Sorting of proteins into exosomes relies on specialized mechanisms, which ensure the specificity of exosomes for various intercellular communication purposes. Here, the ESCRT system, constituted of four multiprotein subcomplexes (ESCRT 0, I, II, and III), appears to be the main mechanism for exosomal formation. 75 ESCRT 0, I, and II are responsible for recognizing and sequestering ubiquitinated membrane proteins at the endosomal membrane, and ESCRT III is responsible for membrane budding and repartition of intraluminal vesicles. 76 Sorting of cargo into exosomes, however, appears to be a part of cargo ubiquitination, and only specific ESCRT segments are involved. 77 The sorting of membrane proteins of the syndecan family into exosomes is regulated by the ESCRT accessory protein Alix through the cytosolic adaptor syntenin. 74
Alix then binds to ESCRT III, which controls ILV formation at the MVEs. 78 Lateral involvement of heparan sulfate polysaccharide chains was reported to determine syndecan complex formation; these chains are degraded into shorter ones by heparanase activity in endosomes, favoring clustering of syndecans. 79 Heparanase-induced recruitment is also believed to incite the binding of syndecan cytoplasmic domains to the PDZ domains of syntenin, leading to the sorting of proteins via the Alix-ESCRT pathway. 78,80 ESCRT-independent protein sorting is another important pathway for exosomal formation. This process requires the formation of a tetraspanin-associated dynamic membrane platform, where cytosolic and transmembrane proteins exert their ability to accept specific proteins into ILVs. 81 Examples can be seen in CD63-induced endosomal sorting in melanocytes, 82 and in tetraspanin-dependent recruitment of cholesterol-containing cone-like structures for inward budding. 83 Although ESCRT-independent protein sorting differs from its ESCRT-dependent counterpart, both undergo cargo clustering and membrane budding.
Nucleic acid sorting, however, relies on a different mechanism. While DNA sorting is still largely unknown, RNA sorting has been described previously. Loading of RNA into exosomes begins with the formation of a raft-like region. 84 Subsequently, anionic phospholipids are enriched in the raft-like region of exosomes, which then recruits neutral sphingomyelinase 2 to produce ceramide molecules, an indispensable factor for RNA sorting. [85][86][87] Binding of RNAs to the raft-like region is dependent on the differential affinity of RNA motifs, 88 and randomly structured RNAs can bind to rafted domains with a 20-fold higher affinity. Once bound to the budding raft-like region, the RNA becomes encapsulated into ILVs and is then released into the extracellular space within this vesicle. 84

Release of exosomes
The greatest difference in the exocytosis pathway between exosomes and other extracellular vesicles (autophagosomes and microvesicles) is that exosomes are dependent on late endosomes for their release, 74 and fusion of the MVBs, the late endosomes containing ILVs, with the plasma membrane is the last step before the exosomes are secreted into the extracellular matrix. During this phase, SNARE proteins and synaptotagmin family members are the main mediators. 89 Exosomal exocytosis requires SNARE complexes, consisting of syntaxin 7, synaptotagmin 7, and VAMP 7. The SNARE complex is activated by up-regulation of intracellular calcium, which is Rab protein-dependent. 90 Subsequently, vesicle (v)-SNAREs and target (t)-SNAREs promote the apposition of budding vesicles and cell membranes. 91 After the coupling of v-SNAREs and t-SNAREs, the chaperone ATPase N-ethylmaleimide-sensitive factor (NSF) and soluble NSF attachment proteins (SNAPs) catalyze the disassembly of SNARE complexes, leading to the release of exosomes. 91 Another key factor for exosomal release involves Rab proteins. They are a family of more than 60 proteins which participate in vesicle budding, cytoskeleton interaction and tethering of the receptor compartment to the membrane. 92 Several examples reveal their participation in exosomal release. In oligodendroglia, Rab35 was found to participate in the secretion of PLP (proteolipid protein)-bearing exosomes. 93 Moreover, Rab27A and Rab27B have been linked to MVB interaction with the plasma membrane. 94 These Rab proteins are thought to participate in the eventual fusion of the membranes of exosomes with the plasmalemma of donor cells, resulting in the exocytosis of exosomes. 95

Uptake of exosomes
The fusion of exosomes with recipient cells relies on the interaction of vesicular ligands with cellular receptors, such as tetraspanins, integrins, and intercellular adhesion molecules (ICAMs), which induce the binding of exosomes to the surface of target cells. The recognition of surface proteins is the first step during exosomal internalization. 96 Compelling evidence has shown that exosomal uptake is highly dependent on the signalling status of target cells and on exosomal surface proteins. 11,97,98 During exosomal internalization, various pathways, including endocytosis, phagocytosis, macropinocytosis, and membrane fusion, have been shown to participate. 99 Among them, endocytosis seems to be the commonest route of exosomal uptake. This is a quick process occurring within 15 min. 100 The most distinctive part of exosomal endocytosis is inward budding of the plasmalemma, which is dependent on the participation of caveolin 101 and clathrin. 102
By contrast, during macropinocytosis, exosomes attach to a highly ruffled region on the cell surface and are then taken in via the internalization of the whole region. 103 This process is similar to phagocytosis. 104 Moreover, exosomes can also directly fuse their membrane with the plasma membrane of target cells. 105 This depends on two intermediate steps: hemifusion structures and fusion pores. 106,107 In most cases, hemifusion structures are suggested to be lipid mixing without content mixing, representing mixing of the outer leaflets but not the inner leaflets of the two bilayers. 106 Fusion initiates with the formation of a fusion stalk, a point-like membrane protrusion of the outer leaflet that establishes an hourglass-like connection between the apposed monolayers. 108 Then, immediate contact of the proximal leaflets leads to the formation of a hemifusion stalk in which the proximal leaflets are fused and the distal leaflets remain unfused. Finally, a fusion pore opens in the hemifusion diaphragm, dependent on the expansion of the stalk, 109 where a connection between apposed membranes leads to the release of secretions. 107
EXOSOMES IN BONE HOMEOSTASIS
Bone homeostasis is of critical importance and relies on the transfer of active molecules between cells, which are summarized in Table 2. Previous studies have suggested that direct interaction and secretion exchange occur among bone cells. 110,111 Recently, however, compelling evidence has emerged to show the regulatory activities that exosomes exert in bone remodelling. Almost all bone cells have been suggested to secrete exosomes, and the relationship between bone remodelling and bone cell-derived exosomes is now well documented. Published reports have suggested that the transfer of exosome-specific proteins, mRNAs and miRNAs is the main mechanism for exosome-mediated bone remodelling. This crosstalk establishes a novel network for cell-to-cell interaction during bone homeostasis. 112
Exosome induces osteogenic differentiation of mesenchymal stem cells (MSCs) and osteogenesis
Bone remodelling is a complex process, which is mainly associated with two steps: osteoclastogenesis (for the clearance of damaged bone tissue) and osteogenesis (for bone formation). It has been shown that exosomes are crucially involved in these two steps (Fig. 5).
During the process of bone formation, exosomes are suggested to be involved in the osteogenic differentiation of MSCs. Monocyte-derived exosomes are important stimulators of osteoblast differentiation. 34 Fusion of these exosomes with MSCs can trigger the up-regulation of two osteogenic markers, RUNX2 and BMP-2. 34 Intriguingly, newly formed osteoblasts can also secrete exosomes to affect their progenitor cells. A group of researchers 113 found that mature osteoblast-derived exosomes could trigger variation of miRNA expression profiles which, in turn, cooperatively inhibit the expression of Axin1, a central component of the Wnt signalling pathway. As a result, β-catenin was up-regulated, leading to the enhancement of osteogenic differentiation.
Osteogenesis is also dependent on exosomal functions. Before differentiating into osteoblasts, osteoblast precursors secrete exosomes to promote osteogenesis. 114 During fracture healing, bone marrow stem cell-derived exosomes express MCP-1, MCP-3, SDF-1, angiogenic factors, mRNAs and miRNAs, and cooperatively contribute to bone remodelling. 114 They probably also enhance osteoblast proliferation and differentiation by up-regulating osteogenesis-related proteins (RUNX-2, ALP, OCN, and OPN), as well as several genes (miRNA-196a, miRNA-27a, and miRNA-206). 115 Enhancement of osteoblast proliferation induced by MSC-derived exosomes has also been reported, and the MAPK pathway may be a key factor in exosome-mediated osteoblast activity. 116 In addition, exosomes derived from osteoblasts and osteoclasts are also involved in osteogenesis. Osteoblasts can secrete exosomes to enhance osteogenesis, and Let-7-enriched exosomes derived from osteoblasts have been reported to enhance osteogenesis by regulating high-mobility group AT-hook 2 (HMGA2) and AXIN2. 113,117 By contrast, osteoclast-derived exosomes act as inhibitors of osteogenesis. Exosomal miR-214-3p was suggested to be involved in the inhibition of osteoblast activity by targeting the 3′-untranslated region (UTR) of ATF4 mRNA. The exosomal transfer of miR-214-3p from osteoclasts to osteoblasts was also detected in vitro and triggered a reduction of bone mass in mouse models. 36,118,119
Exosome induces osteoclastogenesis and bone resorption
It is widely accepted that osteoclastogenesis is the basis for bone resorption. The classical osteoclastogenesis model is based on the direct interaction between different bone cells. However, recent studies have suggested a novel mechanism dependent on crosstalk phenomena. Initially, osteoblasts secrete RANKL-enriched exosomes which target monocytes. The RANKL-RANK binding on the monocyte surface then activates osteoclastogenesis. 120 This process can be augmented by MSC-derived exosomes that can up-regulate the expression of Nfatc1, Trap, and Ctsk. Once osteoclast differentiation is initiated, the mechanism that controls the number of osteoclasts is also initiated. This can be mediated either by osteoclast-derived exosomes or by osteoblast-derived exosomes. [121][122][123] Newly formed osteoclasts release RANK-enriched exosomes, and these exosomes can either directly fuse to osteoblasts or competitively bind RANKL in the extracellular matrix to regulate the formation of osteoclasts 121 (Fig. 5b). Additionally, osteoblasts can release exosomes containing miR-503-3p to inhibit osteoclastogenesis by inactivating the RANK-RANKL signalling pathway. 113,122,123 Alternatively, large numbers of monocytes can secrete exosomes to promote osteoclast differentiation. 121 The end result is that osteoclasts are rapidly recruited during this phase, even though osteoclastogenesis-inhibiting exosomes are constantly released.
During bone resorption, the resorbing ability of osteoclasts can also be affected by exosomes. For example, exosomes derived from the serum of osteoporotic, osteopenic or aged patients enhance bone resorption. 124 When bone resorption is close to completion, abundant RANK-enriched exosomes derived from osteoclasts impede osteoclastogenesis. Finally, RANKL-enriched exosomes that are secreted from osteoblasts can inhibit bone resorption via the induction of osteoclast apoptosis. 125
Osteocyte-derived exosomes in bone homeostasis
Compared to the investigation of osteoblast- and osteoclast-derived exosomes, studies focusing on osteocyte-derived exosomes are few. Available data show that osteocytes also have the ability to release exosomes, 37 and there appears to be a link between osteocyte-derived exosomes and bone homeostasis.
A group of researchers have shown that myostatin-modified osteocytic exosomes can regulate osteoblastic differentiation via exosomal miRNA-218, by targeting the Wnt/β-catenin signalling pathway. 126 Wnt/β-catenin signalling is of great importance in bone homeostasis, involving both bone formation and bone resorption, and is widely believed to be orchestrated by the osteocyte. 127,128 Previous studies have revealed that sclerostin and DKK1 are inhibitors of Wnt signalling that bind to the Wnt co-receptors LRP5/6, thereby contributing to bone loss. 127 Interestingly, miRNA-218 in exosomes derived from myostatin-modified osteocytes was also reduced. These exosomes were then taken up by osteoblasts, leading to the up-regulation of sclerostin, DKK1, and RANKL.
Another interesting finding is that osteocytes can secrete exosomes in response to mechanical loading. Initially, mechanical stimulation triggers immediate contraction of the actin network, which results in Ca2+ transients. Simultaneously, mechanical stimulation induces the secretion of osteocytic exosomes, shown by immunostaining with the secretory vesicle marker lysosomal-associated membrane protein 1 (LAMP-1). This process can also be enhanced by the up-regulation of intracellular Ca2+. Finally, released exosomes, which contain sclerostin, RANKL, and osteoprotegerin, target osteoblasts to activate osteogenesis. 129

(Fig. 5 legend, continued) Besides, an up-regulation loop can be seen between the osteoblast and its precursor via the release of exosomes. However, osteoclast-derived exosomes play an inhibitory role in osteogenesis. Collectively, osteogenesis and osteoclastogenesis can be induced by exosomes derived from various bone cells, whereas it seems that only osteoclast-secreted exosomes inhibit these two processes, indicating their special role in bone homeostasis.

Exosomes derived from tumor cells in bone homeostasis
Exosomes can be released from a variety of cell types, and tumor cell-derived exosomes 130 can affect bone homeostasis. These effects of cancer cells on bone remodelling provide a new perspective for understanding bone diseases in the course of malignancy.
Tumor cells can spontaneously secrete exosomes, and fusion of exosomes to bone cells may trigger either inhibition or abnormal enhancement of bone cell function. Exosomes released from multiple myeloma cells have been shown to support the survival of osteoclast precursors via the down-regulation of TRAP mRNA expression induced by inhibition of caspase-3 activity. Further, enhanced differentiation of osteoclast precursors was observed, which explains the increased bone resorption in myeloma patients. 130 Enhanced osteoblast activity has been observed as well; it can be induced by the transfer of exosomal miRNA-214-3p, which facilitates osteoblastic metastases. 131 As bone is the initial site for tumor metastases, 132 exosomes can also participate in the establishment of bone metastasis, leading to tumor-induced osteolysis. 133 In the process of metastasis, exosomes play an important role as carriers of miRNA-192, a pivotal factor in tumor-induced angiogenic activity. 134 This is likely to influence pathways involved in the generation of proteases, adhesion molecules, and chemokine ligands, contributing to the metastatic spread of the tumors.
Exosome-based clinical applications in fracture healing
Recent studies have shown the therapeutic potential of exosomes at different stages of fracture healing, suggesting that individualized strategies can be used to promote bone tissue repair. The initial step of fracture repair is the establishment of new vessels and formation of a hematoma at the fracture site, where inflammatory cells are recruited. 135 Prolonged activation or attenuation of inflammation may lead to excessive bone tissue damage or accumulation of necrotic bone, respectively. [136][137][138] MSC-derived exosomes are thought to be ideal for attenuating inflammation-based delay of fracture healing. With MSC-derived exosomes, the pro-inflammatory factors TNF-α and IL-1β are significantly suppressed, while the anti-inflammatory factor TGF-β is increased 139 (Fig. 6a). Moreover, exosomes are stable carriers for anti-inflammatory drug delivery. When encapsulated in exosomes, curcumin, an anti-inflammatory drug, reaches higher concentrations in blood. Moreover, as the drug is more accurately delivered to inflammatory cells owing to the target specificity of exosomes, there is an obvious reduction of unwanted side effects. However, during a certain period of bone healing, inflammation is suggested to be indispensable: excessive suppression of inflammation may lead to delay of fracture healing or even non-union. 140 Thus, the timing of intervention is important. Exosomes also play a role in pro-inflammatory processes. There is evidence to suggest that macrophage-derived exosomes induce the differentiation of naive monocytes into macrophages. 141 In this way, recruitment of macrophages, which contain approximately 29 cytokines for tissue repair and inflammation, will relieve an inflammation deficiency-based fracture healing delay. 140 Tissue repair is the second stage of bone healing, when exosomes act as promoters of angiogenesis and bone regeneration (Fig. 6b). MSC-derived exosomes have been reported to contain abundant angiogenesis-related proteins, 142 which enable endothelial cell proliferation and vessel formation. 143 Interestingly, pro-angiogenic effects and tissue repair are detected contemporaneously in vitro. 144 In vivo, MSC-derived exosomes are seen to promote angiogenesis and osteogenesis. Eight weeks after implantation of MSC-derived exosomes, strong formation of vessels and bone tissue is detected in osteoporotic rats compared to untreated controls. 145 These findings provide a novel approach for enhancing early tissue repair, when revascularization and fibroblast proliferation in the soft callus occur. 135 Also, the wide range of exosomal functions may allow the use of MSC-exosomes throughout the whole period of fracture healing. 145 Bone remodelling, the final stage, is generally long-lasting (Fig. 6c). It reaches a degree of homeostasis between different bone cells. Bone-derived exosomes have been proposed to have a regulatory function on each bone cell type. Osteoclast precursors together with osteoblast-derived exosomes have been shown to promote osteoclastogenesis in vivo 121,125 and thus could be used to boost the clearance of damaged tissue. During bone remodelling, MSC-derived exosomes have been shown to promote this process. 146 In a femur fracture model in CD9 −/− mice, in which exosome production is suppressed, there is an obvious delay of callus formation leading to retardation of bone union. By local injection of exosomes, however, this retardation is rescued. 114
By enhancing cell proliferation and protecting cells from death, MSC-derived exosomes could serve as a powerful tool in bone remodelling. 147 Such data support the concept that MSC-exosome-based therapy is well suited to fracture healing and the repair of large bone defects. 148
CONCLUSION
The past decade has witnessed significant progress in the investigation of exosomes as regulators of bone homeostasis, although the function of each of their single molecular species requires further analysis. Whether exosomes are dominant factors in bone homeostasis, however, needs to be further addressed in the future. Such studies will help to better understand the pathogenesis of several exosome-associated bone diseases. Although the introduction of exosomes into clinical practice is not likely to happen soon, the perceived power of exosomes in bone homeostasis opens the possibility of novel approaches to the treatment of bone damage and disease.
Porcine interferon lambda 3 (IFN-λ3) shows potent anti-PRRSV activity in primary porcine alveolar macrophages (PAMs)
Background Porcine reproductive and respiratory syndrome virus (PRRSV) causes a serious viral disease of swine. At present, there are vaccines for the control of PRRSV infection, but their effect is not satisfactory. The recombination of attenuated vaccines causes significant difficulties for the prevention and control of PRRSV. Type III interferons (IFNs), also called IFN-λs, were newly identified and show potent antiviral activity at mucosal surfaces and in immune organs. Results Primary porcine alveolar macrophages (PAMs) were used for this investigation. We found that the replication of PRRSV in PAMs was significantly reduced after pre-treatment with IFN-λ3, and this inhibition was dose- and time-dependent. Plaque formation by PRRSV was abrogated entirely, and virus yields were reduced by four orders of magnitude when the primary PAMs were treated with IFN-λ3 at 1000 ng/ml. In addition, IFN-λ3 in our study was able to induce the expression of interferon-stimulated gene 15 (ISG15), 2′-5′-oligoadenylate synthase 1 (OAS1), IFN-inducible transmembrane 3 (IFITM3), and myxovirus resistance protein 1 (Mx1) in primary PAMs. Conclusions IFN-λ3 had antiviral activity against PRRSV and can stimulate the expression of pivotal interferon-stimulated genes (ISGs), i.e., ISG15, Mx1, OAS1, and IFITM3. Thus, IFN-λ3 may serve as a useful antiviral agent. Supplementary Information The online version contains supplementary material available at 10.1186/s12917-020-02627-6.
Background
Type I interferons (IFN-α/β) and type III IFNs (IFN-λs), as the first line of defence in innate immunity, play a crucial role in the body's resistance to exogenous pathogens [1]. Type III IFNs, also called IFN-λs, were first described in 2003 [2,3] and consist of IFN-λ1, IFN-λ2, IFN-λ3 and IFN-λ4 in humans [4,5], IFN-λ2 and IFN-λ3 in mice [6,7], and IFN-λ1 and IFN-λ3 in swine [8][9][10]. Both IFN-α/β and IFN-λs bind to unique receptors and induce downstream signalling and expression of the IFN-stimulated genes (ISGs) to mediate antiviral activity. Type I IFNs interact with a receptor formed by interferon alpha/beta receptor 1 (IFNAR1) and interferon alpha/beta receptor 2 (IFNAR2). Type III IFNs bind to the specific receptor chains IFN-λR1 and IL-10R2 [11]. The type I IFN receptor is ubiquitously expressed in various types of cells and organs; however, IFN-λR1 is expressed mainly on epithelial cells, dendritic cells, and human peripheral blood monocytes or macrophages [2], which means that IFN-λs may provide a focused antiviral response against mucosal and immune organ infections.
Interferons bind to their receptors on the cell surface and induce the production of a large number of ISGs, including ISG15, the myxovirus resistance protein (Mx) family, the 2′-5′-oligoadenylate synthase (OAS) family and the IFN-inducible transmembrane (IFITM) family, through JAK-STAT signalling [12]. ISG15 is one of the most highly induced ISGs and can inhibit viral translation, replication, or egress [13]. Mx1 is a broad-spectrum inhibitor that blocks a wide range of viruses by interfering with the endocytic traffic of incoming virus particles and the uncoating of ribonucleocapsids [14]. OAS1 can recognise the dsRNA produced by the virus in infected cells and plays an antiviral role by activating ribonuclease L (RNase L) to degrade viral mRNA [15]. The IFN-inducible transmembrane (IFITM) family has a role in blocking virus entry [16]. IFITM3 has high potency against influenza A virus and severe acute respiratory syndrome (SARS) coronavirus [17].
PRRSV is a member of the family Arteriviridae in the order Nidovirales and causes severe reproductive failure in sows and respiratory distress in piglets and growing pigs [18]. PRRSV is also an immunosuppressive virus that can infect the lymphatic system of the whole body and produce viraemia after infection [19]. PRRSV mainly infects and destroys porcine alveolar macrophages and leads to severe immunosuppression, which promotes infection by Mycoplasma pneumoniae, Streptococcus, A. pleuropneumoniae, and other pathogens [20,21]. Primary PAMs derived from piglet alveoli are an appropriate model for studying PRRSV immune responses and host-pathogen interactions in vitro. An in vivo antiviral test of type III interferons from pigs has not been reported. In this study, the antiviral activity of porcine IFN-λ3 against PRRSV in primary PAMs was evaluated, and the expression of ISGs induced by IFN-λ3 was also investigated in primary PAMs.
IFN-λ3 inhibits the replication of PRRSV in a dose-dependent manner in primary PAM cells
Previous studies have confirmed that porcine IFN-λ3 possesses high specific activity against porcine epidemic diarrhoea virus (PEDV), classical swine fever virus (CSFV), hepatitis E virus (HEV) and other viruses [12,22,23]. In this study, we verified the antiviral effect of IFN-λ3 against PRRSV in vitro in PAMs. As shown in Fig. 1, treatment of primary PAMs with IFN-λ3 could reduce the multiplication of PRRSV. The degree of cytopathic effect (CPE) decreased with increasing IFN-λ3 concentration (Fig. 1 A-E). The number and size of viral plaques also decreased with increasing IFN-λ3 concentration (Fig. 1 F-J). The virus titre was significantly reduced with increasing IFN-λ3 treatment dose (10, 100, 1000 ng/ml), and the maximum treatment dose reduced the virus titre by four orders of magnitude compared with the control group (Fig. 1 K; the raw data are shown in supplementary Table S1). These results indicate that IFN-λ3 could significantly inhibit the replication of PRRSV in a dose-dependent manner in primary PAMs.

Fig. 1 The CPE of primary PAMs treated with porcine IFN-λ3 and infected with PRRSV. The primary PAMs were untreated or pre-treated with IFN-λ3 (10, 100, 1000 ng/ml). a The primary PAMs not treated. b The primary PAMs treated with 10 ng/ml IFN-λ3. c The primary PAMs treated with 100 ng/ml IFN-λ3. d The primary PAMs treated with 1000 ng/ml IFN-λ3. e Control primary PAMs. k The primary PAMs were treated or untreated with 100 ng/ml of IFN-λ3 for 12 h and then were infected with PRRSV NJ strain at 0.1 MOI. Infected cells were cultured for 12, 24, 36 or 48 h after infection. f, g, h, i, j correspond to a, b, c, d, e with the same treatment. Magnifications, × 200.
IFN-λ3 inhibits the replication of PRRSV in a time-dependent manner in primary PAM cells
To investigate the time-dependent manner in which IFN-λ3 inhibits the replication of PRRSV, we treated PAM cells with IFN-λ3 at a concentration of 100 ng/ml and infected them with PRRSV. Cell cultures were collected at specific time points to determine the virus titre. As shown in Fig. 2 (the raw data are shown in supplementary file 1, Table S2), the inhibition of PRRSV by IFN-λ3 decreased with time in primary PAMs, but the inhibition still persisted. PRRSV proliferation slowed down between 36 h and 48 h in primary PAMs that were treated with IFN-λ3. These results show that IFN-λ3 could maintain potent anti-PRRSV activity at the later stage and significantly inhibit the replication of PRRSV in a time-dependent manner in primary PAMs.
IFN-λ3 inhibits PRRSV infection by activating ISGs in primary PAMs
ISG15, Mx1, OAS1 and IFITM3 have well-known antiviral properties and may affect PRRSV replication. Therefore, we assessed the induction of these ISGs by IFN-λ3. As seen in Fig. 3a to d, dose-dependent induction of ISG15, Mx1, OAS1 and IFITM3 was observed in primary PAMs treated with IFN-λ3. The mRNA expression of ISG15, Mx1, OAS1, and IFITM3 was up-regulated by 70, 70, 160, and 15 times, respectively, at a concentration of 1000 ng/ml in primary PAMs. As shown in Fig. 3e and f (the full-length blots are presented in Supplementary file 2), dose-dependent induction of the antiviral proteins ISG15, Mx1 and OAS1 was observed in primary PAMs treated with IFN-λ3. The expression of the three antiviral proteins increased with increasing IFN-λ3 concentration. Both ISG15 and Mx1 showed low expression in the untreated condition, and IFN-λ3 induced their expression to high levels. The expression of ISG15, Mx1 and OAS1 tended to plateau when the concentration of IFN-λ3 was higher than 100 ng/ml (Fig. 3f).
Discussion
The results of our research confirm that porcine IFN-λ3 shows potent anti-PRRSV activity in primary PAMs. PAMs are the first line of defence against pathogenic microbial infections in the lung. PRRSV replicates in cells of the monocytic lineage, particularly in PAMs, and causes immunosuppression in swine [24]. Therefore, we selected primary PAMs to carry out the in vitro anti-PRRSV study of IFN-λ3.
Alveolar macrophages are resident phagocytes of the alveolar space [24]. The expression of the IFN-λ receptor in alveolar macrophages has been confirmed and reported [25]. Macrophages express IL-10Rβ and IL-28Rα at both the mRNA and protein levels [26]. IFN-λ3 has the strongest antiviral function in IFN-λs [26]. In our study, the PRRSV proliferation reduced when primary PAMs were treated with IFN-λ3 (1000 ng/ml) (Fig. 1i). IFN-λ3 treatment could significantly reduce the virus titre of PRRSV proliferation on PAMs, and the virus titre of the 1000 ng/ml treatment group was four orders of magnitude lower than that of the control group (Fig. 2b). Consistent with these results, treatment with 10, 100 or [27]. The study of two other kinds of viruses targeted porcine intestinal epithelial, PEDV and CSFV, confirming that IFN-λ3 inhibits their infection in vitro [22,28]. All of these imply that porcine IFN-λ3 can inhibit the proliferation of porcine viruses such as CSFV, PEDV and PRRSV.
The antiviral activity of IFN-λ3 is due to ISG induction, as IFN-λ3 can induce the expression of ISGs. IFN-λ3 exerts its anti-HIV function by activating JAK-STAT pathway-mediated innate immunity in macrophages [29]. IFN-λ3 can bind to cell surface receptors and induce high expression of interferon-stimulated genes of the Mx, OAS and IFITM families [28,30,31]. The gene transcription profile induced by IFN-λ3, particularly in primary PAMs, has not been reported. In our study, we assessed whether the antiviral efficacy of IFN-λ3 was associated with the levels of ISG expression induced by IFN-λ3. Consistent with the expected result, the expression of ISG15, OAS1, Mx1, and IFITM3 was induced in primary PAM cells. The mRNA transcription and protein translation of ISG15, OAS1 and Mx1 showed dose dependence. However, the dynamic ranges of the mRNA and protein expression levels of ISG15, OAS1 and Mx1 were different: the protein levels reached their peak when treated with 100 ng/ml IFN-λ3, while the mRNA expression continued to increase (Fig. 3).
Conclusion
In summary, our data demonstrated that IFN-λ3 could inhibit the replication of PRRSV in primary PAMs, and this inhibition is dose- and time-dependent. Alveolar macrophages are among the earliest immune defence cells in the lungs to contact pathogenic microorganisms, and they are essential components of the innate and specific immunity of the host [32]. PAMs are an essential host cell for natural PRRSV infection. IFN-λ3 can stimulate the expression of pivotal ISGs, i.e. ISG15, Mx1, OAS1, and IFITM3. This study indicated that porcine IFN-λ3 might serve as a promising therapeutic agent against PRRSV and other viruses in swine in the future.

(Figure legend fragment) The full-length blots are presented in Supplementary file 2 (a to d). The grey value of protein bands was measured by ImageJ (f). Data were presented as mean ± SEM (N = 3). *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001 by unpaired t-test.
Antiviral assay
To determine the anti-PRRSV activity of IFN-λ3 in primary PAMs, E. coli-derived IFN-λ3 was prepared in our laboratory. To explore the dose dependence of the antiviral effect of IFN-λ3, primary PAMs were untreated or pre-treated with IFN-λ3 (10, 100, 1000 ng/ml) for 12 h. Then, the cells were infected with PRRSV NJ strain at 0.1 MOI for 1-2 h, washed and replenished with fresh medium containing the indicated IFN-λ3. Infected cells were cultured for 48 h after infection. To explore the time-dependent antiviral effect of IFN-λ3, primary PAMs were pre-treated with 100 ng/ml IFN-λ3 for 12 h. Then, the cells were infected with PRRSV NJ strain at 0.1 MOI for 1-2 h, washed and replenished with fresh medium containing the indicated IFN-λ3. Infected cells were cultured for 12, 24, 36, and 48 h after infection. All of the cells were subjected to two freeze-thaw cycles, and the virus was titrated as the 50% tissue culture infective dose (TCID50) in Marc-145 cells. The cytopathic effect (CPE) units in culture plates were counted, and the viral titres were calculated using the Reed-Muench method. To examine the level of ISG expression in primary PAMs following IFN-λ3 stimulation, the cells were stimulated with the indicated concentrations (10, 100, 1000 ng/ml) of IFN-λ3 in 12-well plates for 12 h. Cells were then lysed; total RNA was extracted for subsequent qPCR analysis, and total protein was extracted for western blot analysis. Every treatment group in this study had three duplicate samples (N = 3).
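For readers unfamiliar with the Reed-Muench endpoint calculation used for the TCID50 titration, the following is a minimal sketch of how the 50% endpoint is interpolated from cumulative well counts. It is not the authors' analysis code, and the dilution series and infected/uninfected counts are hypothetical values used only for illustration.

```python
# Reed-Muench estimation of the TCID50 endpoint from an endpoint-dilution assay.
# Hypothetical example data; not taken from the study.

# Ten-fold serial dilutions (log10 of the dilution) and the number of
# wells with and without CPE at each dilution.
log10_dilutions = [-1, -2, -3, -4, -5, -6]
infected =   [8, 8, 6, 3, 1, 0]   # wells showing CPE
uninfected = [0, 0, 2, 5, 7, 8]   # wells without CPE

def reed_muench_endpoint(log10_dilutions, infected, uninfected):
    # Cumulative infected counts are summed toward the more dilute end,
    # cumulative uninfected counts toward the more concentrated end.
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(len(uninfected))]
    percent = [ci / (ci + cu) * 100 for ci, cu in zip(cum_inf, cum_uninf)]

    # Find the pair of dilutions bracketing 50% infection and interpolate.
    for i in range(len(percent) - 1):
        if percent[i] >= 50 >= percent[i + 1]:
            pd = (percent[i] - 50) / (percent[i] - percent[i + 1])
            step = log10_dilutions[i] - log10_dilutions[i + 1]
            return log10_dilutions[i] - pd * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

endpoint = reed_muench_endpoint(log10_dilutions, infected, uninfected)
print(f"TCID50 endpoint dilution: 10^{endpoint:.2f}")
```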
Real-time quantitative PCR (qPCR)
Total RNA was extracted from the cellular supernatant or cell lysates using the EZ-10 Spin Column Total RNA Isolation Kit (Sangon Biotech (Shanghai) Co., Ltd., China) according to the manufacturer's instructions and the RNA concentration was measured using a nucleic acid concentration analyzer (SCANDROP 200, Analytik Jena, Germany). Reverse transcription was performed using the Prime Script™ II 1st Strand cDNA Synthesis Kit (TAKARA), and qPCR was performed in a Light Cycler 96 (Roche, Switzerland) with TB Green® Premix Ex Taq™ II (Tli RNaseH Plus) (TAKARA). The thermal cycling conditions were 95°C for 30 s, followed by 40 cycles of 95°C for 5 s, and 60°C for 30 s. All acquired data were obtained using Light Cycler 96 real-time PCR machines (Roche) and analysed with Light Cycler 96 software 1.5 based on the cycle threshold (ΔΔCT) method. Primers were designed using Oligo 6.0 software and are shown in Table 1.
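As a reference for the ΔΔCT analysis mentioned above, the fold-change calculation can be written out explicitly. This is a generic sketch of the 2^(-ΔΔCT) method, not the Light Cycler software output; the Ct values and the choice of reference gene (β-actin is assumed here purely for illustration, as the actual primers are listed in the paper's Table 1) are invented.

```python
# Relative expression by the 2^(-ΔΔCT) method.
# Ct values below are made-up numbers for illustration only.

target_ct_treated = 22.1      # e.g., ISG15 in IFN-λ3-treated PAMs
reference_ct_treated = 18.0   # assumed reference gene (e.g., β-actin), treated
target_ct_control = 28.4      # ISG15 in untreated PAMs
reference_ct_control = 18.2   # assumed reference gene, untreated

# Normalize the target to the reference gene within each sample.
delta_ct_treated = target_ct_treated - reference_ct_treated
delta_ct_control = target_ct_control - reference_ct_control

# Compare treated vs. untreated samples.
delta_delta_ct = delta_ct_treated - delta_ct_control

# Fold change relative to the untreated control.
fold_change = 2 ** (-delta_delta_ct)
print(f"Relative expression (treated vs. control): {fold_change:.1f}-fold")
```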
Western blot
Total protein was extracted from the cell lysates using the Western and IP Cell Lysis Buffer (Sangon Biotech (Shanghai) Co., Ltd., China) according to the manufacturer's instructions, and the protein concentration was determined using the BCA protein assay kit (Sangon Biotech (Shanghai) Co., Ltd., China). After gel electrophoresis, the proteins were transferred to nitrocellulose membranes (Bio-Rad, USA) and blocked in 5% skim milk at 4°C overnight. After washing with PBST (0.5% Tween-20 in PBS), the membrane was incubated with primary antibodies for 2 h at 37°C. After washing, the membrane was incubated with horseradish peroxidase (HRP)-conjugated IgG antibody (Abcam, No: ab170487) for 1 h at 37°C. The protein bands were detected using SuperSignal™ West Pico PLUS Chemiluminescent Substrate (Thermo Scientific, USA) and a chemiluminescence imaging system (BIO-RAD, ChemiDoc MP, California, USA). The primary antibodies against ISG15 (No: ab233071), OAS1 (No: ab86343), Mx1 (No: ab95926) and β-actin (No: ab179467) were purchased from Abcam.
Statistical analysis
Statistical analysis was performed and histograms were drawn using GraphPad Prism™ 8.0 (GraphPad Software, USA); Student's t-test and one-way ANOVA were used to test differences between groups. P values < 0.05 were considered significant. The gray intensity of protein bands was analyzed by ImageJ (National Institutes of Health, USA). The layouts and cropping of the pictures were completed with Adobe Illustrator CS6 (Adobe Systems Incorporated, California, USA).
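As a rough illustration of the group comparisons described above, equivalent tests can be run in Python with SciPy. This is only a sketch with invented measurements (e.g., a normalized readout for untreated cells and three IFN-λ3 doses with N = 3 per group); it does not reproduce the GraphPad analysis used in the study.

```python
# Unpaired t-test and one-way ANOVA on hypothetical group measurements.
from scipy import stats

# Invented example data, N = 3 per group.
untreated = [1.00, 0.92, 1.08]
dose_10   = [1.85, 2.10, 1.95]
dose_100  = [3.40, 3.10, 3.55]
dose_1000 = [3.60, 3.30, 3.75]

# Unpaired (two-sample) t-test between two groups.
t_stat, p_two_groups = stats.ttest_ind(untreated, dose_100)
print(f"t-test, untreated vs 100 ng/ml: t = {t_stat:.2f}, P = {p_two_groups:.4f}")

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(untreated, dose_10, dose_100, dose_1000)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Significance threshold used in the study.
alpha = 0.05
print("significant" if p_anova < alpha else "not significant")
```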
Additional file 1: Table S1. The viral titre at 48 h after PAMs were stimulated with different doses of IFN-λ3. Table S2. The viral titre at 12, 24, 36 or 48 h after PAMs were stimulated with IFN-λ3 (100 ng/ml).
Genome-Wide Fitness and Expression Profiling Implicate Mga2 in Adaptation to Hydrogen Peroxide
Caloric restriction extends lifespan, an effect once thought to involve attenuation of reactive oxygen species (ROS) generated by aerobic metabolism. However, recent evidence suggests that caloric restriction may in fact raise ROS levels, which in turn provides protection from acute doses of oxidant through a process called adaptation. To shed light on the molecular mechanisms of adaptation, we designed a series of genome-wide deletion fitness and mRNA expression screens to identify genes involved in adaptation to hydrogen peroxide. Combined with known transcriptional interactions, the integrated data implicate Yap1 and Skn7 as central transcription factors of both the adaptive and acute oxidative responses. They also identify the transcription factors Mga2 and Rox1 as active exclusively in the adaptive response and show that Mga2 is essential for adaptation. These findings are striking because Mga2 and Rox1 have been thought to control the response to hypoxic, not oxidative, conditions. Expression profiling of mga2Δ and rox1Δ knockouts shows that these factors most strongly regulate targets in ergosterol, fatty-acid, and zinc metabolic pathways. Direct quantitation of ergosterol reveals that its basal concentration indeed depends on Mga2, but that Mga2 is not required for the decrease in ergosterol observed during adaptation.
Introduction
Oxidative stress is caused by a number of reactive oxygen species (ROS) generated as a result of aerobic metabolism or chemical exposure. These compounds damage a variety of cellular products, including DNA, proteins, and lipid membranes, and are associated with a number of human pathologies. For example, in cardiovascular disease, oxidation of low-density lipoprotein causes an inflammatory response [1]. The sensitivity of neurons to oxidative stress implicates ROS in neurodegenerative diseases, such as Parkinson's and Alzheimer's [2][3][4].
A continuing source of controversy is the role of oxidative stress in aging. Caloric restriction has been shown to extend lifespan in a number of species [5]. Initially, it was hypothesized that the effect on lifespan occurs primarily because caloric restriction reduces the level of aerobic respiration, a major source of ROS [6]. Newer evidence is challenging this hypothesis, since caloric restriction paradoxically increases respiration [7]. Increased respiration, in turn, can generate mild levels of ROS which protect against high doses of oxidant [8]. This process is known as adaptation or hormesis [9] and is widely conserved among eukaryotes [8,[10][11][12]. One hypothesis is that adaptation to oxidative stress is the basis for the lifespan-extending effect of caloric restriction [13,14]. Thus, further efforts to understand the process of adaptation may have broad implications on models of aging and disease.
In one model of adaptation, the cell increases the activity of the enzymes and pathways required to rid the cells of ROS, leaving it better equipped to process acute dosages of oxidant when they arise. Under this model, genes involved in the adaptive response are expected to be a subset of those that become active in the acute response [15]. Many such candidates have been identified, including a variety of biosynthetic enzymes which produce small molecular compounds or proteins with reduction potential, such as glutathione (GSH), thioredoxin, NADPH, and trehalose [16][17][18][19][20]. Different enzymes facilitate this process for different ROS, including catalases and peroxidases (which deal with peroxide radicals) [21,22] and superoxide dismutases (which deal with superoxide radicals) [23,24]. Additional proteins serve to repair the damage caused by oxidative stress. Heat shock proteins act as chaperones within the cell, allowing damaged proteins to fold properly or preparing them for disposal [25]. DNA repair genes are also vital, as oxidative stress can damage both nucleotides and the phosphodiester DNA backbone [26]. Several studies have implicated classical oxidative stress proteins and pathways in adaptation, including the transcription factor Yap1 [27] and glutathione synthesis [28][29][30].
In contrast to this model, a second body of evidence suggests that adaptation may be governed by novel pathways not directly involved in the response to acute oxidation. In a study of adaptation to the oxidant linoleic acid, Alic et al. found that adaptation can occur without induction of oxidative or general stress response genes following pretreatment [31]. Instead, various metabolic processes were activated and protein synthesis was inhibited. Moreover, machinery with a central role in the acute response, such as the mitochondria [9,32] or the Msn2/4 environmental stress response factors, are not required for adaptation [27,33].
Nonetheless, expression studies of acute oxidative damage have helped to identify a set of genes involved in the common environmental stress response (ESR) and implicated the Msn2/4 transcription factors in control of this gene set [34][35][36]. In fitness studies of yeast deletion strains, Thorpe et al. identified a set of genes required for the response to hydrogen peroxide, mainly dealing with the proper functioning of the mitochondria [37]. However, to date these genome-scale approaches have focused on the acute, rather than the adaptive, response. One study to date that has screened for adaptive genes focused on a set of 268 genes selected based on previous literature [38].
Here, we use the rich functional genomics toolbox of yeast to identify pathways involved in adaptation to hydrogen peroxide. To accomplish this goal, we use barcode arrays to screen the Saccharomyces cerevisiae gene deletion collection [39] for genes required in the acute and adaptive responses, and we couple these data with genome-wide mRNA expression profiles to build a system-wide model of adaptation.
A Genetic Screen to Identify Genes Functioning in Adaptation
As shown in Figure 1A, we elicited adaptation using a protocol consisting of a mild pretreatment of hydrogen peroxide (0.1 mM H2O2 for 45 min) followed by a later high dose (0.4 mM H2O2 for 1 hr). For purposes of comparison, we also conducted an acute protocol which exposed cells to the high dose only (0.4 mM H2O2 for 1 hr). Consistent with previous findings [9], we observed that yeast cells undergoing the adaptation protocol exhibited a smaller reduction in viability compared to cells exposed to the acute treatment protocol (Figure 1B and Figure S1).
Given these protocols, we designed a series of yeast genome-wide phenotyping experiments using the publicly available pool of 4,831 viable single-gene deletion strains [40]. Each strain in the pool incorporates a pair of unique oligonucleotide barcode tags, which allow the relative prevalence of all strains to be tracked in growth experiments by hybridization of pooled genomic DNA to a barcode microarray. In a first experiment, two identical pools of deletion mutants were treated with the adaptation or acute protocol, respectively, and directly compared on a barcode array (with multiple biological replicates; see Methods). In a second experiment, a pool subjected to the acute treatment was compared against an untreated pool.
These experiments were used to identify genes required for adaptation or for the acute response, as shown in Figure 1C. Fitness in the acute response was defined as the difference in viability between the acute and untreated conditions (determined from the log ratio of intensities measured in the direct comparison of the acute and untreated pools, see Methods). Adaptive fitness was defined as the difference in viability between the acute and adapted conditions, normalized by the magnitude of the acute effect (Figure 1C).
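To make these two definitions concrete, the scores for a single deletion strain can be written as a short calculation. The sketch below assumes that the viability of each strain under each condition has already been summarized as a single relative value derived from the barcode array intensities; the numbers and variable names are illustrative and do not reproduce the paper's actual processing pipeline.

```python
# Acute sensitivity and adaptive fitness for one deletion strain, following
# the definitions in the text. The abundance values are hypothetical
# stand-ins for barcode-array-derived viabilities.

viability_untreated = 1.00   # reference viability
viability_acute = 0.40       # viability after the high dose alone
viability_adapted = 0.75     # viability after pretreatment + high dose

# Acute sensitivity: loss of viability caused by the high dose alone.
acute_sensitivity = viability_untreated - viability_acute

# Adaptive fitness: the fraction of that loss recovered by mild pretreatment.
adaptive_fitness = (viability_adapted - viability_acute) / acute_sensitivity

print(f"acute sensitivity: {acute_sensitivity:.2f}")
print(f"adaptive fitness (fraction recovered): {adaptive_fitness:.2f}")
```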
Genes Required for Adaptation Do Not Function in Canonical Oxidative Stress Pathways
A total of 156 versus 108 genes were found to be required for the adaptive versus the acute responses, with an overlap of 88 genes (Figure 2A). A complete list of acute and adaptation-sensitive genes is provided in the Dataset S1. Surprisingly, neither the adaptive nor the acute screen was enriched for oxidative stress response genes (GO Biological Process 0006979), which encode enzymes involved in processes such as ROS detoxification and homeostasis. This may be due to the ability of this response to compensate for the loss of single gene activities, confirming earlier observations regarding the acute response by Thorpe et al. (Table S1) [37]. Instead, both the adaptive and acute gene sets were heavily enriched for functions in the mitochondrial ribosome and aerobic respiration (Figure 2B). The identification of these functions is puzzling in light of an earlier finding that yeast with defective mitochondria (rho0 mutants) adapt to oxidative stress [9,32]. In these studies, a milder high dose was required to demonstrate adaptation; therefore, the observed deficiency in adaptation of mitochondrial mutants in our screen may be due to increased sensitivity to the high dose.
Adaptation Requires Transcriptional Regulators
Both sensitivity screens also highlighted several transcription factors (Figure 2A), which are particularly interesting due to their potential roles in regulation of adaptation. These factors include YAP1 and SKN7 which, in contrast to the above enrichment results, do have known involvement in the response to oxidative stress [41,42]. YAP1 and SKN7 were previously identified as adaptive-sensitive in the restricted screen conducted by Ng et al. [38]. The transcription factor MGA2 was required for the adaptive but not the acute response. MGA2 has been implicated in fatty-acid biosynthesis and the response to hypoxia [43].
To confirm the requirement of these transcription factors for oxidative adaptation, we performed additional adaptation experiments specifically in yap1Δ, skn7Δ, mga2Δ, and wild type strains. For each, we quantified the severity of each protocol (acute, adapted, untreated) as the time required to recover to a specific OD600 threshold following treatment (Figure 1B) [32]. Adaptive fitness was calculated as the reduction in viability of the adapted culture, relative to that of the acute-treated culture (see Methods). Figure 3 displays the computed fitness values for each strain over a range of OD600 thresholds. All of these strains were indeed confirmed to have fitness values less than wild type.
Distinct Sets of Genes Are Expressed during the Adaptive versus Acute Responses
Next, we performed mRNA expression profiling on each of the three treatment protocols (pretreated, adapted, acute; see Figure 1A) in comparison to untreated conditions. These profiles were analyzed to identify two types of adaptive response genes: early versus late. Early adaptive genes were defined as those that were differentially expressed after the 45 min pretreatment relative to untreated conditions (169 genes at p < 1.0×10⁻⁵, see Methods). Late adaptive genes were defined as those that were differentially expressed after the 1 hr high dose following pretreatment (391 genes). In comparison, a much larger set of 1,893 genes was differentially expressed in response to the high dose in the absence of pretreatment.

Author Summary

Reactive oxygen species (ROS) damage a variety of structures within the cell, resulting in disease and aging. In a seemingly paradoxical effect termed adaptation, it is possible to prevent damage caused by ROS by pre-treating the cell with a small amount of oxidant. We studied this process in order to identify the mechanisms that provide this protection. Our study identified a number of genes and processes with previously unappreciated roles in adaptation. The mechanisms we identified are remarkable because they are distinct from those previously known to protect the cell from ROS. Although this study is conducted in yeast, the wide conservation of adaptation among many organisms suggests that the results from this study may be widely applicable.
The overlap of the acute expression response with either the early or late adapted responses was significant (p = 2.1×10⁻² versus p = 6.8×10⁻³⁶ by hypergeometric test, respectively); nonetheless the overlap with the early response was much less than with the late adapted response (38% versus 60%, see Figure 2C). In addition, 26 genes that would be expected to be increasing in expression based on the acute expression data were decreasing in expression during adaptation, such as genes involved in the response to oxidative stress (GO Biological Process 0006979) (Figure 2B). Other sets of genes were expressed uniquely during early and late adaptation, including ergosterol metabolism, fatty acid synthesis, and zinc homeostasis (GO Biological Processes 0008204, 0006631, 0055069, respectively) (Figure 2B). Unlike the fitness profiling, oxidative stress genes were strongly implicated in the acute expression response (as also found by others; Tables S2 and S3).
Centrality of Transcription Factors Mga2, Rox1, and Yap1 during Pretreatment Expression
To map the transcriptional program underlying adaptation, we computed the activity of each yeast transcription factor based on the significance of differential expression among its set of known targets (Figure 4). Lists of targets for each factor were drawn from YeastRACT, a database of literature-curated regulatory interactions [44] (Methods). Application of this method to the acute treatment protocol identified Msn2/4, Yap1, and Skn7 as key factors, all of which had been previously associated with the acute response to oxidative stress. All of these factors were also moderately active during pretreatment and became more so after transitioning to the high dose (Figure 4). Other factors exhibiting this behavior include Adr1, Hsf1, and Pdr1/3.
On the other hand, targets of Mga2 and Rox1 exhibited highly significant activity during pretreatment, but not during the acute response (Figure 4). As Rox1 is a transcriptional repressor, the up-regulation of its targets suggests a decrease in Rox1 activity [45]. While mga2Δ was also identified as an adaptive-deficient strain in the high-throughput screen (Figure 2), rox1Δ was not. Both of these findings were confirmed with targeted investigations of individual deletion strains (Figure 3). Like Mga2, Rox1 had previously been associated with the hypoxic, not oxidative, stress response [46]. Thus, our analysis appears to classify transcription factors into two groups: those most active during the acute response and those active specifically during pretreatment.
Deletion Studies Confirm the Influence of Mga2, Rox1, and Yap1 on Gene Expression
The involvement of Mga2 in early adaptation is supported by its requirement for adaptive growth in the deletion profiling experiments (Figures 2 and 3) and the striking behavior of its targets in the expression profiling experiments (Figure 4). To further confirm the activity of Mga2, pretreatment with hydrogen peroxide was repeated in an mga2Δ background and gene expression was profiled versus untreated cells using quadruplicate whole-genome microarrays. In this experiment, the number of up-regulated Mga2 targets was significantly decreased (Figure 5A, p = 1.2×10⁻² by Fisher's Exact Test), supporting the activation of Mga2 by mild pretreatment with hydrogen peroxide. Moreover, the MGA2 gene is itself up-regulated following pretreatment and the transition to the high dose (p = 1.4×10⁻³ and 5.3×10⁻⁵, respectively).
Rox1 (Repressor of Hypoxic Genes) is a repressor under transcriptional control of Hap1 [45]. The decrease in expression of the ROX1 gene following both the pretreatment and adapted treatment protocols (p = 3.6×10⁻¹¹ and 1.4×10⁻⁷, respectively) suggests that this repressor is deactivated in the process of adaptation.
To confirm this observation, we profiled a rox1Δ strain and found that the number of Rox1 targets with increased expression following pretreatment falls significantly (p = 0.046 by Fisher's Exact Test) (Figure 5B). However, as we cannot demonstrate a fitness requirement for Rox1, it is unclear whether the expression changes due to de-repression by Rox1 are functionally relevant.
A similar expression analysis suggests that Yap1 is an active regulator during both the pretreatment and high-dose phases of adaptation. To confirm the activity of Yap1 during pretreatment, we profiled the expression response of a yap1Δ strain versus wild type cells under the pretreatment protocol. This experiment revealed widespread changes in patterns of expression (Figure 5C). The expression responses of Yap1, Rox1, and Msn2/4 targets following mild pretreatment in the yap1Δ strain most closely resembled their expression responses in the wild type following acute treatment (Figure 5B-D). Thus, it is clear that Yap1 is required for many of the expression changes associated with adaptation.
Interestingly, Mga2, Rox1, and Yap1 targets were not enriched for genes that were required for adaptation in the competitive fitness screen (Figure 5A-C; Dataset S2 gives a list of all required targets). In the case of Mga2, not a single target gene was required for adaptation. This suggests significant functional redundancy in the genes targeted by these factors, or that their requirement for adaptation is mediated by targets that are essential for viability and therefore are not included in the deletion strain collection used in the screen for competitive fitness.
Potential Mechanisms of Mga2 and Rox1 Activation
The mechanisms by which Mga2 and Rox1 might be activated by mild pretreatment with oxidants are unknown, but several lines of evidence suggest they are shared with the hypoxic response. Rox1 is expressed in a heme-dependent manner [47]. While falling heme levels typically signal hypoxic conditions [48], hydrogen peroxide may also reduce heme levels via degradation [49]. Dirmeier et al. found that ROS levels transiently increase following exposure to anoxic conditions, suggesting that this could signal the expression of hypoxic genes [50]. They did not believe the activation of hypoxic genes could be replicated with exogenously supplied ROS, based on the H2O2 expression profiling data of Causton et al. [36]. We contradict this earlier hypothesis with the observation of increased expression of hypoxic genes as a result of treatment with H2O2. The apparent discrepancy may be a result of the higher dose of H2O2 used by Causton et al. [36].
Potential Mechanisms of Mga2 Action: Ergosterol Metabolism
In response to mild pretreatment with hydrogen peroxide, Mga2 and Rox1 activate targets involved in ergosterol metabolism, zinc homeostasis, and fatty acid metabolism. Ergosterol is a cholesterol-like component of the plasma membrane with diverse effects on its function [51]. Branco et al. observed that adaptation is associated with an increase in membrane rigidity, an effect that is abrogated in the ergosterol-deficient erg3Δ and erg6Δ strains [52]. Thus, a potential mechanism for Mga2's requirement during adaptation is that it promotes an increase in ergosterol which inhibits diffusion of H2O2 across the plasma membrane. Zinc homeostasis genes may play a similar role, as these genes also influence ergosterol metabolism [53]. Conversely, Tafforeau et al. observed a decrease of both squalene synthase (Erg9) activity and ergosterol content during adaptation in S. pombe, highlighting the complex relationship between ergosterol and membrane permeability [54].
To elucidate the role of ergosterol biosynthesis in adaptation, we profiled ergosterol concentration in both untreated and adaptive conditions in wild type, mga2Δ, and rox1Δ strains (see Methods). Relative to wild type, the basal concentration of ergosterol was significantly lower in the mga2Δ strain and slightly higher in the rox1Δ strain (Figure 6). This finding agrees with the regulatory roles of Mga2 and Rox1 as an activator and repressor of ergosterol biosynthesis genes, respectively. It also provides some evidence that ergosterol may be a precondition for adaptation to occur, since mga2Δ is the only strain tested that had low ergosterol concentration and is also the only one with an adaptation defect (Figure 3). On the other hand, in all strains ergosterol content decreased significantly from untreated to mild pretreated conditions (p = 1.4×10⁻², 4.1×10⁻³, and 3.1×10⁻² for wild type, mga2Δ, and rox1Δ strains, respectively, using a paired t-test). This decrease supports the earlier work of Tafforeau et al. [54] but is surprising given it occurs uniformly in all strains, and given that the expression of ergosterol biosynthetic genes increases from untreated to pretreated conditions. One explanation is that expression of ergosterol biosynthetic genes rises in order to compensate for lowered ergosterol levels.
Therefore, we conclude that high ergosterol concentration requires Mga2, supporting a possible role for the influence of Mga2 on ergosterol levels as a precondition of adaptation. However, the change in ergosterol in response to pretreatment does not depend on Mga2 or Rox1, suggesting the involvement of other regulators of ergosterol or of other mechanisms of adaptation that are ergosterol independent.
Potential Mechanisms of Mga2 Action: Fatty Acids
Two of the most highly expressed genes following pretreatment with hydrogen peroxide were OLE1 (oleic acid requiring) and FAS1 (fatty acid synthetase), essential genes required for synthesis of fatty acids. Both genes are direct transcriptional targets of Mga2 (YeastRACT database), suggesting fatty acid pathways as an alternative to ergosterol for the key mechanism of action of Mga2 during adaptation. Although fatty acid pathways could influence the stability and permeability of the plasma membrane [55], these enzymes could also affect the mitochondrial membrane [56], and mutations in OLE1 have been linked to mitochondrial morphology and inheritance [57].
Because OLE1 and FAS1 are essential genes, their specific requirement for adaptation was difficult to assay. However, we found that the high expression of OLE1 was maintained in a rox1Δ background but was greatly reduced in an mga2Δ strain (Dataset S3; p = 7.2×10⁻³). Previous work by Matias et al. reported decreased expression of FAS1 mRNA 30 minutes after treatment with 0.15 mM H2O2 [55]. By 1 hour, no significant differential expression was detected. In comparison, we observed increased expression of FAS1 one hour after treatment with 0.10 mM H2O2 and demonstrated that adaptation occurs under these conditions. Thus, FAS1 has been observed to be both up- and down-regulated during adaptation to H2O2, albeit at slightly different doses and times. In order to determine the influence of H2O2 dose and treatment time on FAS1 expression, we performed RT-PCR profiling of FAS1 following treatment with both 0.10 mM and 0.15 mM H2O2. As detailed in Figure S2, we observed an increase in FAS1 levels following treatment with 0.10 and 0.15 mM H2O2, although the measurement at 0.15 mM was not significant. This is consistent with both our microarray results and the work of Matias et al. Further testing of FAS1 mRNA levels at 30 minutes following 0.15 mM H2O2 revealed no significant differential expression (p = 5.6×10⁻¹) (Figure S3). Therefore, we have been unable to confirm the previous report of down-regulation of fatty-acid biosynthetic genes during the process of adaptation. Increased expression was also confirmed by RT-PCR for OLE1 (Figure S2). While the precise adaptation program mediated by Mga2 remains to be elucidated, fatty acid synthesis warrants further study as a possible mechanism.
Summary and Prospective
Figure 7 shows a summary of our findings integrated with previous literature. The expression response during adaptation may be segregated into "early" and "late" phases. "Early" genes respond to pretreatment only and not to the later high dose. Mga2 and Rox1 are likely regulators of the genes involved in the early expression response, with functions in ergosterol biosynthesis, zinc homeostasis, and fatty acid synthesis. Mga2, but not Rox1, is required for maximal adaptive fitness. Conversely, the expression response of "late" genes increases strongly following the high dose of the adaptation protocol. The transcription factors Yap1 and Skn7 have been previously shown to regulate many genes associated with the "late" response, such as those involved in redox homeostasis. In addition, both of these transcription factors are required for adaptation.
One goal for future work is to investigate whether the mechanisms of adaptation identified here also function in higher organisms or in lifespan extension. Of the 156 genes identified in this study as required for the adaptive response, 97 have some homology to higher eukaryotes [58]. In humans, fibroblasts and smooth muscle cells exhibit extended replicative lifespan in response to hypoxic external conditions. This effect requires the generation of ROS inside the cell and the presence of hypoxia inducible factor (HIF). Like Mga2 and Rox1 in S. cerevisiae, HIF is a transcription factor that mediates the response to hypoxic conditions, although it is not orthologous to either protein [59,60]. Further work will be required to see if HIF can be activated not only by hypoxia but also by caloric restriction.
In conclusion, we have completed the first genome-wide scan for genes required for the adaptive response to oxidative stress. By integrating these data with results from expression profiling, we have identified pathways with novel involvement in the response to oxidative stress, including the hypoxic response factor Mga2. The activation of Mga2 under adaptive conditions provides additional information about the sensing mechanism of the hypoxic response, given that we have demonstrated this response can be initiated by exogenous oxidative stress. Future studies can interrogate the manner in which the homologs of these genes are necessary for adaptation in higher organisms and explore their role in aging and disease.
Determination of Treatment Protocols
The high dose of 0.4 mM H2O2 was selected to be comparable to previous expression studies of acute hydrogen peroxide exposure (0.4 mM, 0.24 mM, and 0.32 mM for Causton, Shapira, and Gasch, respectively) [34-36]. This dose resulted in a reduction of growth rate by approximately two thirds as measured by OD600. The pretreatment dose was selected as the largest dose that did not result in impaired growth or viability. This criterion and the length of pretreatment (45 minutes) were selected in accordance with previous studies of adaptation to oxidative stress [9,61,62].
Sample Growth and Treatment for mRNA Profiling
We profiled the response to three hydrogen peroxide treatment protocols (pretreatment, adapted, and acute) over a series of microarray experiments. Each series consisted of four biological replicates. For each replicate in the acute treatment protocol, a single colony of BY4741 (ATCC, Manassas, Virginia, USA) was used to inoculate 10 mL of YPD media. Following overnight growth at 30°C, this culture was resuspended in 100 mL of YPD media at an OD600 of 0.1 and placed in an orbital shaker at 30°C. At OD600 = 0.6, cells were split into two 50 mL portions. In the acute treatment protocol, growth continued for 45 minutes, at which point a high dose of hydrogen peroxide (final concentration in media: 0.4 mM H2O2) was administered to one member of the pair (with the other receiving a sham treatment of 100 mM phosphate buffer). Treatment continued for 1 hour, at which point cells were harvested by centrifugation at 3000 rpm for 5 min. Pellets were immediately frozen in liquid nitrogen and stored at −80°C. The pretreatment protocol was identical except for the final concentration of hydrogen peroxide (0.1 mM). For the adapted treatment, a pretreatment dose of hydrogen peroxide (0.1 mM) and corresponding sham treatment were administered directly after splitting the culture, but otherwise the treatment was identical to the acute protocol.
mRNA Expression Analysis
RNA from each sample was isolated via phenol extraction followed by mRNA purification [Poly(A)Purist, Ambion]. Arrays were scanned using a GenePix 4000A or PerkinElmer ScanArray Lite microarray scanner and quantified with the GenePix 6.0 software package. Data from each array were subjected to background and quantile normalization [63]. Intensity values are available at the GEO database (www.ncbi.nlm.nih.gov/geo/) under the accession number GSE12602. The VERA software package was used with dye bias correction [64] to assign a significance value λ of differential expression to each gene. In a negative control experiment (quadruplicate untreated vs. untreated arrays), the distribution of significance values λ over all genes was fit parametrically as 1.7 · χ²₁, where χ²₁ is the chi-square distribution with one degree of freedom. This null distribution was used for assignment of p-values.
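As an illustration of this p-value assignment, the following Python sketch converts a significance value λ into a p-value under the fitted null distribution 1.7 · χ²₁. It is a minimal sketch of the stated procedure, not the VERA implementation itself; the example λ value is purely illustrative.

```python
from scipy.stats import chi2

def lambda_to_pvalue(lam, scale=1.7, df=1):
    """Convert a significance statistic lambda into a p-value, assuming the
    null distribution of lambda is scale * chi^2(df)."""
    # If lambda_null = scale * X with X ~ chi^2(df), then
    # P(lambda_null >= lam) = P(X >= lam / scale).
    return chi2.sf(lam / scale, df)

# Illustrative example: a gene with lambda = 25 under the fitted null 1.7 * chi^2_1
print(lambda_to_pvalue(25.0))
```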
RT-PCR Expression Analysis
RNA from each sample was isolated by TRIzol extraction (Invitrogen, Catalog # 15596-026) [65]. The purified RNA samples were then used as template for first-strand cDNA synthesis (SuperScript III First-Strand Synthesis for qRT-PCR, Invitrogen, Catalog # 11752-050). For each sample, an RT-PCR reaction was performed with both a gene-specific pair of primers as well as primers targeted to ACT1. Sequences for primer pairs are available in Table S4. Each reaction was monitored in triplicate on a 96-well real-time PCR detection system (BIO-RAD MyIQ). For each reaction, this system reports a Ct value representing the number of PCR cycles required to exceed a particular fluorescence threshold. The average Ct value was calculated across technical replicates for both gene-specific and ACT1 primer pairs. The mRNA level (reported as the log2 ratio relative to the concentration of ACT1 mRNA) was determined by subtracting the average gene-specific Ct value from the average ACT1 Ct value.
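The conversion from Ct values to a relative expression level described above can be sketched as follows; the triplicate Ct values shown are illustrative placeholders, not measurements from this study.

```python
import numpy as np

def relative_mrna_level(gene_ct_replicates, act1_ct_replicates):
    """Average technical replicates and return the log2 mRNA level of the gene
    relative to ACT1: log2(gene/ACT1) = mean(Ct_ACT1) - mean(Ct_gene),
    since each PCR cycle corresponds to one doubling."""
    return np.mean(act1_ct_replicates) - np.mean(gene_ct_replicates)

# Illustrative triplicate Ct values (hypothetical numbers)
print(relative_mrna_level([22.1, 22.3, 22.0], [18.5, 18.4, 18.6]))  # about -3.6
```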
Sample Growth and Treatment for Haploid Deletion Fitness Profiling Experiments
A pool of the 4,831 viable haploid deletion strains was created from individual collections kept in glycerol stock and divided into 1 mL aliquots stored at −80°C. Two separate types of treatment protocols (acute and adapted) were studied, consisting of four and six replicate arrays, respectively. For each replicate, a single aliquot of pooled deletion strains was diluted in 15 mL YPD media and grown in a rotating wheel at 30°C to OD600 = 0.6. The sample was then split into two 6.5 mL portions. In the adapted treatment protocol, one member of the paired samples was immediately treated with a mild dose of oxidant (final concentration in media: 0.1 mM) and the other received a sham treatment. After 45 minutes of continued growth at 30°C, a high dose was administered (final concentration in media: 0.4 mM) to both samples. After 1 hour of treatment, the cells were harvested by centrifugation at 3000 rpm for 5 min and resuspended in 50 mL of YPD media. After 5 hours of growth, the cells were once again harvested by centrifugation and the pellets were immediately frozen in liquid nitrogen and stored at −80°C. The acute treatment protocol was identical, except that no sample was treated with a mild pretreatment dose and only one member of the sample pair was treated with the high dose.
Deletion Fitness Analysis
Genomic DNA was extracted from cell pellets using a glass bead preparation [66]. Subsequent DNA labeling, hybridization, and microarray design followed the protocol of Yuan et al. [67]. Briefly, asymmetric PCR was used to amplify unique tag sequences in the genomic DNA of the deletion strains. In each PCR reaction, 1 µg of gDNA was used for labeling. Arrays were scanned and quantified in the same manner as the arrays prepared for the expression profiling experiments. Array intensity values are available in the GEO database (www.ncbi.nlm.nih.gov/geo/) under the accession number GSE12733.
The hoptag package (implemented in R) was used to analyze the intensity data from the scanned arrays. Briefly, median and loess correction were performed on the intensity distributions [67], after which each deletion strain was assigned an UPTAG ratio and a DNTAG ratio for each array. The logs of these ratios were averaged to derive one measurement per gene per array. Across multiple arrays measuring the same treatment protocol comparison (acute vs. untreated or acute vs. adapted), the distribution of log ratio values was quantile normalized [63]. To determine an acute fitness value, we assumed that the signal intensity for a given gene deletion strain is

I_i,treatment = N_treatment · f_i,treatment · [C_i] · e^(R_i · t),

where I_i,treatment and f_i,treatment are the observed signal intensity and viability of gene deletion strain i subject to the designated treatment protocol, [C_i] and R_i are the initial concentration and growth rate, respectively, of the deletion strain i, and t is time. N_treatment is a constant factor applied to all intensities from the same treatment representing the shared effect of normalization procedures. For each gene deletion strain i, the log ratio of the acute and untreated signal intensities is therefore

ln(I_i,acute / I_i,untreated) = ln(N_acute / N_untreated) + ln(f_i,acute / f_i,untreated).

Thus, the log ratio is proportional to the acute fitness metric as defined in Figure 1. Since each intensity distribution was normalized to share the same median, the distribution of log ratios was centered on zero. In order to identify genes that deviate significantly from this expected value, we performed a one-sample t-test testing the difference of the mean against zero. This test was regularized to share the estimate of variance among all genes.
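A minimal sketch of this per-gene test is given below, assuming the per-array log ratios have already been tag-averaged and quantile normalized. Pooling a single variance estimate across all genes is one plausible reading of the regularization described above, not the exact hoptag implementation.

```python
import numpy as np
from scipy.stats import t as t_dist

def regularized_one_sample_test(log_ratios):
    """log_ratios: array of shape (n_genes, n_arrays) of per-gene log ratios
    (acute vs. untreated). Returns t statistics and two-sided p-values for a
    difference from zero, using a variance estimate shared across all genes."""
    n_genes, n_arrays = log_ratios.shape
    means = log_ratios.mean(axis=1)
    # Pool the per-gene variance estimates over all genes (the "regularization")
    pooled_var = log_ratios.var(axis=1, ddof=1).mean()
    se = np.sqrt(pooled_var / n_arrays)
    t_stats = means / se
    pvals = 2.0 * t_dist.sf(np.abs(t_stats), df=n_arrays - 1)
    return t_stats, pvals

# Illustrative toy data: 3 genes measured on 4 arrays
rng = np.random.default_rng(0)
toy = rng.normal(loc=[[0.0], [0.5], [-0.8]], scale=0.1, size=(3, 4))
print(regularized_one_sample_test(toy))
```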
Similarly, the log ratio obtained from the direct comparison of the acute and adapted samples was centered on zero and proportional to the log ratio of the viabilities, ln(f_acute / f_adapted). Furthermore, due to median normalization of the intensity distributions, the scales of both log ratio distributions were approximately equal. Thus, for most genes without a defect in adaptive fitness, the log ratio ln(I_i,acute / I_i,adapted) was strongly correlated to the log ratio ln(I_i,acute / I_i,untreated). A gene with a large difference between the values ln(I_i,acute / I_i,untreated) and ln(I_i,acute / I_i,adapted) indicated a deviation from the average adaptive fitness measure. A two-sample regularized t-test comparing the log ratios determined from each direct comparison was used to identify such cases. For both adaptive and acute fitness measures, the threshold for significant p-value was set at 5.0×10⁻³.
Validation of Sensitive Targets
To verify that the identified sensitive genes are meaningful, the sensitivities of specific gene deletions were verified in small-scale experiments. In these, a colony of a specific deletion strain of interest was incubated in YPD overnight. Following dilution to OD600 = 0.1 in 30 mL YPD media, the culture was grown to OD600 = 0.6 and split into three aliquots. Each aliquot was treated according to one treatment protocol (untreated, adapted, or acute). Following ten-fold dilution in YPD, growth was monitored in a 96-well optical density plate reader in 12-fold replicate.
Examples of recovery following treatment for individual biological replicates are available for wild type, yap1Δ, and mga2Δ in Figures S1, S4, and S5, respectively. For each treatment protocol, the average time required to recover to a particular OD600 threshold was determined (Figure 1B). In Figure 3, the specific value of this OD600 threshold is varied between 0.3 and 0.95 to illustrate that the substance of the results is not dependent on the selection of any particular value for the threshold. We calculate adaptive fitness as the difference in viability (f) between the adaptive and acute treatments relative to the difference between untreated and acute.
Adaptive Fitness = ln(f_acute / f_adapted) / ln(f_acute / f_untreated).

For each treatment protocol, the formula for exponential growth relates the recovery time (t_treatment) to the fractional reduction in viability associated with that treatment (f_treatment):

C_threshold = f_treatment · C_initial · e^(r_strain · t_treatment),

where C_threshold is the threshold concentration, C_initial is the concentration before treatment, and r_strain is the growth rate of the strain.

The following derivation illustrates how we can use this information to express the adaptive fitness measure in terms of recovery time:

ln(f_acute / f_adapted) / ln(f_acute / f_untreated) = [−r_strain (t_acute − t_adapted)] / [−r_strain (t_acute − t_untreated)] = (t_acute − t_adapted) / (t_acute − t_untreated).

An unpaired t-test was used to determine the significance of the difference from results obtained when applying the same procedure to wild type (BY4741) colonies.
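In code, the recovery-time form of the adaptive fitness measure reduces to a single ratio, as in the hedged sketch below; the helper function and the example recovery times are hypothetical.

```python
def adaptive_fitness(t_acute, t_adapted, t_untreated):
    """Adaptive fitness from recovery times to a common OD600 threshold:
    (t_acute - t_adapted) / (t_acute - t_untreated).
    A value near 1 means pretreatment fully rescued the growth delay caused by
    the high dose; a value near 0 means no rescue."""
    return (t_acute - t_adapted) / (t_acute - t_untreated)

# Illustrative recovery times in hours (not measured values)
print(adaptive_fitness(t_acute=9.0, t_adapted=6.0, t_untreated=4.0))  # 0.6
```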
Determination of Ergosterol Concentration
The determination of ergosterol was adapted from Arthington-Skaggs et al. [68]. Following overnight incubation, a culture was grown in YPD to OD600 = 0.6 and split into two aliquots of 50 mL. One of the aliquots was treated with 0.1 mM H2O2 for 1 hour, after which the OD600 of each aliquot was measured. Each aliquot was pelleted and washed once with water. The cleaned pellet was incubated for 1 hour at 85°C with 3 mL 25% alcoholic KOH. After cooling for 15 minutes, 1 mL water and 3 mL n-heptane were added and the mixture was vortexed for 3 minutes. The n-heptane layer was extracted and the presence of ergosterol was detected via absorbance at OD281. The ergosterol concentration for each aliquot of the paired trial was reported as the ratio of OD600/OD281.
Enrichment Analysis of Gene Sets
We investigated the significance of enrichment for functional classes among both differentially expressed and sensitive genes. Functional classes were defined in one of two ways: (1) classes of genes with common annotation in the Gene Ontology (GO) hierarchy [69], or (2) classes of genes targeted by the same transcription factor as recorded in the YeastRACT online database [44]. In this database, the list of targets for each factor is compiled from literature sources where each regulatory interaction is backed with experimental evidence. To prevent the identification of redundant or overly general gene ontology categories, we limited the GO analysis to those categories that contained between 5 and 100 genes. Similarly, the YeastRACT database contained several transcription factors with an excessive number of annotated targets (Yap1 alone was annotated with over 1,500). To reduce the incidence of false positives, those studies which contributed over 100 targets for a given factor were discarded (on a per-factor basis). While this may eliminate some true interactions, the goal is to generate a smaller set of high-confidence interactions which may be used to accurately assess the activity of a given transcription factor. The final set of targets for each transcription factor is available as Dataset S4. A hypergeometric test was used to assess the enrichment of each gene set in the lists of differentially expressed or sensitive genes.
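For a single gene set and a single significance cutoff, the hypergeometric enrichment p-value can be computed as sketched below; the category size and overlap in the example are illustrative, while the genome and screen sizes are taken from the numbers quoted above. The cutoff sweep and randomization calibration described next are applied on top of this basic test.

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_genome, n_set, n_selected, n_overlap):
    """P(overlap >= n_overlap) when n_selected genes are drawn at random from a
    genome of n_genome genes containing a functional set of size n_set."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_set, n_selected)

# Illustrative numbers: 4,831 assayed deletions, a hypothetical 60-gene GO category,
# 156 adaptation-sensitive genes, of which 12 fall in the category.
print(enrichment_pvalue(4831, 60, 156, 12))
```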
Since the true number of differentially expressed or sensitive genes was unknown and poorly defined, we varied the cutoff for significance between 100 and 500 genes. The minimal p-value for each gene set was returned, and the activity/sensitivity of each gene set was reported as the negative log of this minimal p-value. Since the corresponding p-value was no longer strictly accurate as a consequence of multiple hypothesis testing, significance was assessed by repeated randomization trials in which the order of genes was shuffled. Every gene set was tested and the maximum significance value was retained in each trial. Only those gene sets which exceeded the 95th quantile in this set were determined to be significant.
Supporting Information
Table S1 Sensitive gene ontology categories following acute hydrogen peroxide stress. For our study and the study of Thorpe et al., we determined those gene ontology categories which were enriched for sensitive gene deletions. Here we report all categories which exceed the threshold for significance. Found at: doi:10.1371/journal.pgen.1000488.s006 (0.16 MB PDF)

Table S2 Up-regulated transcription factor target sets following acute hydrogen peroxide stress. For our and previous comparable studies (Gasch 2000, Causton 2001, Shapira 2004), the set of known targets for each transcription factor was ranked based on enrichment for genes with increased expression in response to acute hydrogen peroxide stress. Here, we report the top nine sets of transcription factor targets. To facilitate comparison, frequently occurring items are highlighted in a consistent manner. Found at: doi:10.1371/journal.pgen.1000488.s007 (0.13 MB PDF)

Table S3 Up- and down-regulated gene ontology categories following acute hydrogen peroxide stress. For our and previous comparable studies (Gasch 2000, Causton 2001, Shapira 2004), a pruned set of functional categories was ranked based on enrichment for genes with increased and decreased expression in response to acute hydrogen peroxide stress. In each case, we report the top five categories. To facilitate comparison, frequently occurring categories are highlighted in a consistent manner. Found at: doi:10.1371/journal.pgen.1000488.s008 (0.62 MB PDF)
Figure 1. Study design. A. Yeast cells were collected following each of four hydrogen peroxide treatment conditions (pretreated, adapted, acute, and untreated, labeled 1-4). Competitive growth experiments were performed between gene deletion pools grown in adapted versus acute conditions (to identify genes required specifically for adaptation) and between pools grown in acute versus untreated conditions (to identify genes required for the acute response). Gene expression profiling was performed in either adapted or acute conditions versus untreated cells. B. Profiling of wild type growth reveals that pretreatment with mild hydrogen peroxide (green) leads to improved recovery to an OD600 threshold (dashed line) compared to no pretreatment (red) following a high dose of hydrogen peroxide. An enlarged version of panel B is available as Figure S1. C. For an individual gene deletion, the acute sensitivity is defined as the difference between the acute and untreated viability. The adapted sensitivity is the fraction of that difference that is recovered by mild pretreatment with hydrogen peroxide. doi:10.1371/journal.pgen.1000488.g001
Figure 2. Fitness and expression profiling overview. A. Numbers and overlap of gene deletions that are sensitive in the adaptive (green) and acute (red) treatment protocols. B. Hierarchical clustering of the differentially expressed or sensitive genes from each screen. Clusters are annotated at right with over-represented functional groups. C. Numbers and overlap of differentially expressed genes identified in each of the three expression treatment protocols. doi:10.1371/journal.pgen.1000488.g002
Figure 3. Confirmation of mutant strains deficient in adaptation. Growth curves in untreated, adapted, and acute oxidative conditions were measured for wild type and each of four deletion strains starting from single-cell colonies. These curves were used to compute adaptive fitness (y-axis), which is shown over a range of OD600 threshold values (x-axis, see Methods). For all thresholds, an adaptation defect compared to wild type was confirmed for yap1Δ, skn7Δ, and mga2Δ (all p < 5.0×10⁻² by unpaired t-test). No defect was observed for rox1Δ, which was also consistent with the genome-wide screen. Each colored band represents the range of adaptive fitness values spanned by the mean ± 2 × standard error of multiple biological replicates. doi:10.1371/journal.pgen.1000488.g003
Figure 4. Dynamics of transcription factor target expression in mild and acute conditions. For each transcription factor, we compute a score based on a hypergeometric test representing the significance of increased expression (relative to untreated) of known targets (see Methods) following either pretreatment (0.1 mM H2O2, x-axis) or acute treatment (0.4 mM H2O2, y-axis). For those transcription factors with the most significant activity following pre- or acute treatment, the activity following adaptive treatment (0.1 mM followed by 0.4 mM H2O2) is also displayed on the x-axis with an open circle. The size of each point corresponds to the number of known targets of that transcription factor. The dotted lines indicate a threshold for significance determined by a randomization procedure (see Methods). Although there is significant overlap in the set of expressed genes following mild and acute treatment, examination of specific transcription factors reveals those with unique behavior in each condition. Transcription factors identified in the deletion fitness analysis of the acute and adaptive treatments are indicated with "#" and "+" symbols, respectively. doi:10.1371/journal.pgen.1000488.g004
Figure 5. Expression analysis of deletion mutants validates the activation of key transcription factors in response to H2O2 pretreatment. Panels A-D detail the behavior of the transcription factors Mga2, Rox1, Yap1, and Msn2/4 and their target sets, respectively. Each column represents the expression or fitness values in sorted order for a specific set of genes. doi:10.1371/journal.pgen.1000488.g005
Figure 6. Dynamics of ergosterol following mild treatment with hydrogen peroxide. Following an n-heptane extraction (see Methods), the presence of ergosterol is detected at 281 nm. The ergosterol concentration (relative to the number of cells [OD600 value] in the original culture) is reported for wild type, mga2Δ, and rox1Δ strains in three paired trials with and without mild hydrogen peroxide pretreatment. doi:10.1371/journal.pgen.1000488.g006
Figure 7. Summary of the adaptive response. Results and hypotheses regarding transcriptional regulators and functional categories identified in this study are summarized. The influence of hydrogen peroxide is determined by its concentration within the cell. In addition to treatment dose, several cellular processes affect the level of H2O2. In order to enter the cell, hydrogen peroxide must first diffuse across the plasma membrane. Inside the cell, peroxide levels are reduced by degradation into oxygen and water. Squares denote the expression of genes or gene sets (rectangles) following each of the three treatment protocols (pretreatment, adapted, and acute). Conversely, circles denote the sensitivity of the corresponding gene deletion for a particular protein or protein set (oval) in the adapted and acute treatment protocol. Arrows between different objects indicate either an activating (triangular arrowhead) or inhibitory (flat arrowhead) influence. The figure number(s) which provide support for each link are shown in brackets. A red "X" denotes a hypothesis which is refuted by experimental observation. doi:10.1371/journal.pgen.1000488.g007
Figure S1 Growth of wild type following three different treatment protocols. Following treatment with either the acute, adapted, or untreated protocols, wild type cultures are diluted 10-fold in YPD. Recovery is monitored with a 96-well OD600 plate reader. Each line represents the average of 12 replicates. Found at: doi:10.1371/journal.pgen.1000488.s001 (0.15 MB PDF)

Figure S2 RT-PCR profiling of OLE1 and FAS1 following H2O2 treatment. mRNA levels of both FAS1 and OLE1 are profiled 60 minutes following treatment with either 0.10 mM or 0.15 mM H2O2. Levels are normalized to ACT1 and reported as a log ratio relative to untreated. Found at: doi:10.1371/journal.pgen.1000488.s002 (0.06 MB PDF)

Figure S3 RT-PCR profiling of FAS1 mRNA levels at different time points. The level of FAS1 mRNA is profiled at 30 and 60 minutes following treatment with 0.15 mM H2O2 with RT-PCR. mRNA levels are normalized relative to ACT1 and reported as a log2 ratio relative to an untreated sample. Reported p-values are determined with a one-sample t-test testing the difference from a true mean of zero. Found at: doi:10.1371/journal.pgen.1000488.s003 (0.04 MB PDF)

Figure S4 Growth of yap1Δ following three different treatment protocols (adapted, acute, untreated). Following treatment with either the acute, adapted, or untreated protocols, yap1Δ cultures are diluted 10-fold in YPD. Recovery is monitored with a 96-well OD600 plate reader. Each line represents the average of 12 replicates. Found at: doi:10.1371/journal.pgen.1000488.s004 (0.15 MB PDF)

Figure S5 Growth of mga2Δ following three different treatment protocols (adapted, acute, untreated). Following treatment with either the acute, adapted, or untreated protocols, mga2Δ cultures are diluted 10-fold in YPD. Recovery is monitored with a 96-well OD600 plate reader. Each line represents the average of 12 replicates.
Table S4 Primer sequences for RT-PCR profiling of gene expression. Found at: doi:10.1371/journal.pgen.1000488.s009 (0.05 MB PDF)

Dataset S1 Fitness Table: P-values for acute and adaptive screens conducted in this study. Found at: doi:10.1371/journal.pgen.1000488.s010 (0.20 MB TXT)

Dataset S2 Enrichment Summary: Differentially expressed or sensitive members of each significantly over-represented condition or transcription factor target set mentioned in the study. Found at: doi:10.1371/journal.pgen.1000488.s011 (0.04 MB TXT)

Dataset S3 Expression Table: Log ratios and p-values for all microarray expression profiling experiments conducted in this study. Found at: doi:10.1371/journal.pgen.1000488.s012 (1.05 MB TXT)

Dataset S4 TFs Table: Table containing all of the transcription factor target sets used in this study. Found at: doi:10.1371/journal.pgen.1000488.s013 (0.11 MB TXT)
|
2016-05-17T11:25:42.678Z
|
2009-05-01T00:00:00.000
|
{
"year": 2009,
"sha1": "6fced04bc88be4229981cbcc8309eff29a31476c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1000488&type=printable",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "6fced04bc88be4229981cbcc8309eff29a31476c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
136021229
|
pes2o/s2orc
|
v3-fos-license
|
X-Ray Diffraction Analysis of Bottom Ash Waste after Plasma Treatment
The paper deals with the plasma-chemical synthesis of melts produced from the bottom ash waste for the production of new construction materials with enhanced performance characteristics. Phase composition of the plasma-treated bottom ash waste is detected by the X-ray diffraction analysis. The bottom ash waste is a mixture of SiO2 minerals. The structure and phase composition of this mixture are investigated after the plasma treatment. The obtained results are compared with the original state of the mixture. The identification and the qualitative content of ash waste as a multi-phase system are complicated by the overlapped reflections and a possible existence of the intermediate amorphous phase.
Introduction
The treatment level of industrial solid waste in thermal power plants (TPP) is currently extremely low, which leads to a considerable accumulation of bottom ash waste in ash-disposal areas [1][2][3]. The core challenge of ash waste utilization is its high melting temperature (up to 1700 °C), which depends on the heterogeneity of its chemical composition. Knowledge of the structural composition of the amorphous and crystalline ash phases is very useful for ensuring effective performance characteristics. To the authors' knowledge, few publications in the literature discuss the effect of plasma treatment on the ash structure and phase composition.
The aim of this work is to study the processes occurring in a plasma-chemical reactor during the production of ash-based silicate melt and to identify its amorphous state after the plasma treatment.
Materials and methods
The bottom ash waste fraction (<150 μm) generated by Tomsk TPP-2 was used as a raw material for the production of the silicate melt. The original chemical composition of this material is presented in Table 1, which indicates that the investigated material contains 51% SiO2. This content is similar to that of commercial glass, so the material can be utilized to produce silicate melts, including the production of mineral fibers [4][5][6]. The XRD analysis was carried out with a DRON-4-07 diffractometer modified for digital signal processing. Measurements were conducted using copper (Kα) radiation and a Bragg-Brentano X-ray optical scheme, with a scanning step of 0.02° over a scanned angle range of 16.0–92.0°. The XRD analysis of the obtained melts was based on the Rietveld refinement method [7][8]. This method is used to determine the relative phase content with respect to reference structures and space groups, and to refine the crystal parameters and the spatial distribution of atoms in the crystal lattices. Phase transformations and chemical reactions occurring in the material under heating and cooling conditions were measured in air on the simultaneous thermal analyzer Netzsch STA 449C Jupiter (Germany) at a heating rate of 10 degrees per minute.
Experimental
The bottom ash waste was subjected to melting in a plasma-chemical reactor presented in [9,10]. Its operating parameters were U = 160 V, I = 220 A, P = 35.2 kW, and q = 1.8·10⁶ W/m². The operating principle of the plasma-chemical reactor is based on the interaction between highly concentrated plasma flows and the silicate-containing powdered material. As a result of this interaction, fine particles were heated, leading to the formation of a homogeneous melt. The configuration of the plasma-chemical reactor prevented the loss of fine particles blown by the plasma flow out of the melting zone. Moreover, the obtained silicate melt was homogeneous within the whole volume of the melting furnace.
X-ray diffraction analysis
Upon completion of the experiments in plasma treatment of high-temperature silicate melts, X-ray diffraction (XRD) analysis was performed on the ash powders both before and after plasma treatment. To identify the unknown phases in the studied substances, the Crystallography Open Database was consulted [11]. Modeling of the amorphous states allows the quantitative phase content in ash waste after the plasma treatment to be identified. The amorphous phase predominates in the integral intensity. The number of amorphous phases is obtained on the basis of the crystal lattices of ash waste in its original state. The angular dependence of the reflected X-ray intensity is determined for each of the reference phases. The Rietveld refinement is used to determine the background, profile, and structural parameters of the intensity of the reference phases. The degree of agreement between the calculated and experimental intensities is quantified by the convergence criterion R_wp. The Rietveld refinement results in the determination of the spatial distribution of atoms in crystal lattices and the parameters of the supercells of the investigated phases. The Rietveld refinement method was implemented using the software described in [5]. Figure 1 contains the plot of the XRD patterns for the individual phases.
The diffraction pattern obtained for ash waste is complex, since it includes both strong and weak overlapping reflections. The convergence criterion for the ash in its original state is R_wp = 5.508%.
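For reference, the weighted-profile agreement factor R_wp can be computed from the observed and calculated diffraction profiles as in the sketch below. This uses a generic textbook definition with counting-statistics weights (an assumption); the software used in this study may weight the profile differently.

```python
import numpy as np

def r_wp(y_obs, y_calc, weights=None):
    """Weighted-profile R factor, in percent:
    R_wp = 100 * sqrt(sum w*(y_obs - y_calc)^2 / sum w*y_obs^2)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_calc = np.asarray(y_calc, dtype=float)
    if weights is None:
        # Counting-statistics weights w = 1/y_obs, clipped to avoid division by zero
        weights = 1.0 / np.clip(y_obs, 1.0, None)
    num = np.sum(weights * (y_obs - y_calc) ** 2)
    den = np.sum(weights * y_obs ** 2)
    return 100.0 * np.sqrt(num / den)

# Illustrative profiles (arbitrary counts), not the measured patterns
y_obs = np.array([120.0, 340.0, 90.0, 75.0])
y_calc = np.array([115.0, 330.0, 95.0, 80.0])
print(r_wp(y_obs, y_calc))
```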
The Rietveld method allows the reference phases in ash waste that contribute to the integral (experimental) intensity to be listed. The results of the analysis, the identified major phases, and their contributions to the integral intensity in the oxide mixture are given in Table 2. The XRD analysis shows that the superposition of the reference phases accounts for 94.47% of the integral intensity. This value indicates proper identification of the phases in ash waste. Table 2. Structure and phase content refined by the Rietveld method.
As shown in Figure 2, ash becomes amorphous after the plasma-chemical treatment. In order to identify the amorphous phases and their content using the Rietveld method, amorphous states are simulated for the SiO2, Al2O3, and Fe2O3 phases, which make the main contribution to the intensity in the original state. The calculated atomic density determines the size of the original cubic cell in which the oxide atoms are concentrated. Figure 2b shows the XRD patterns of plasma-treated ash waste analyzed by the Rietveld method. The convergence criterion is R_wp = 5.31%. The refined parameters of the cubic cells and the intensities of the amorphous phases are presented in Table 3. Table 3. Amorphous structure and phase content.
The integral intensity superposition of the amorphous phases accounts for 93.13% of the XRD pattern plotted in Figure 2b.
Differential thermal analysis
The differential thermal analysis (DTA) shown in Figure 3 describes the original state of the ash waste. The DTA curve indicates that the ash behavior during heating is similar to that of classical silicate materials. At around 100 °C, the observed endothermic process corresponds to the removal of free water. A broad exothermic process within the 300–400 °C range is caused by the formation of crystallization centers. A further temperature increase up to 600 °C leads to the removal of adsorbed and chemically bound water. The exothermic process at 600–700 °C relates to the transformation of the amorphous phase into a crystalline one. Within 800–950 °C, the exothermic process is caused by softening of low-melting compounds and recrystallization of calcium and magnesian silicates with a complex composition.
Conclusions
The physicochemical investigations showed that the low-temperature plasma treatment of ash-based silicate melt resulted in the ordered structure of aluminosilicate glass. The experiments allowed the
|
2019-04-29T13:16:35.369Z
|
2017-04-01T00:00:00.000
|
{
"year": 2017,
"sha1": "75f497377cb8ddd826671ee6078b99b06c1f5bfa",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/189/1/012021",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "191f3706b52e79b2a7c02d5c3c62282d981df60c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
}
|
250088821
|
pes2o/s2orc
|
v3-fos-license
|
Measurement of the total and leptonic decay widths of the $J/\psi$ resonance with an energy scan method at BESIII
Using $e^+e^-$ annihilation data sets collected with the BESIII detector, we measure the cross sections of the processes $e^+e^- \to e^+e^-$ and $e^+e^- \to \mu^+\mu^-$ at fifteen center-of-mass energy points in the vicinity of the $J/\psi$ resonance. By a simultaneous fit to the measured, center-of-mass energy dependent cross sections of the two processes, the combined quantities $\Gamma_{ee} \Gamma_{ee} / \Gamma_{\rm tot}$ and $\Gamma_{ee} \Gamma_{\mu\mu} / \Gamma_{\rm tot}$ are determined to be ($0.346 \pm 0.009$) and ($0.335 \pm 0.006$) keV, respectively, where $\Gamma_{ee}$, $\Gamma_{\mu\mu}$, and $\Gamma_{\rm tot}$ are the electronic, muonic, and total decay widths of the $J/\psi$ resonance, respectively. Using the resultant $\Gamma_{ee} \Gamma_{\mu\mu} / \Gamma_{\rm tot}$ and $\Gamma_{ee} \Gamma_{ee} / \Gamma_{\rm tot}$, the ratio $\Gamma_{ee} / \Gamma_{\mu\mu}$ is calculated to be $1.031 \pm 0.015$, which is consistent with the expectation of lepton universality within about two standard deviations. Assuming lepton universality and using the branching fraction of the $J/\psi$ leptonic decay measured by BESIII in 2013, $\Gamma_{\rm tot}$ and $\Gamma_{ll}$ are determined to be ($93.0 \pm 2.1$) and ($5.56 \pm 0.11$) keV, respectively, where $\Gamma_{ll}$ is the average leptonic decay width of the $J/\psi$ resonance.
Introduction
The total and electronic decay widths Γ tot and Γ ee of the J/ψ resonance, present in the Breit-Wigner formulae for all the decay modes of J/ψ produced in e + e − collisions [1], are among its most important parameters. Theoretically, these decay widths, reflecting J/ψ internal interactions, are predicted by various potential models [2][3][4][5][6] and lattice quantum chromodynamics [7]. Measurements of these decay widths and comparisons of the experimental results with different theoretical calculations can help us gain a better understanding of the underlying physics.
Furthermore, the ratio of the electronic to muonic decay widths of the J/ψ resonance, Γ_ee/Γ_µµ, can be used to test the lepton universality assumption [8]. Based on this assumption, the ratio is derived to be [9]

Γ_ee / Γ_µµ = β_e (3 − β_e²) / [β_µ (3 − β_µ²)],   (1)

with

β_ℓ = sqrt(1 − 4 m_ℓ² / M²)   (ℓ = e, µ),   (2)

where m_e, m_µ, and M are the masses of the electron, the muon, and the J/ψ resonance, respectively. Taking the values of m_e, m_µ, and M from the Particle Data Group (PDG) [1], Γ_ee/Γ_µµ is calculated to be 1.00000814211(6), whose deviation from 1 is far smaller than the experimental precision at present. Thus, any observed, significant deviation of Γ_ee/Γ_µµ from 1 would be a hint of physics beyond the Standard Model [10].
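A quick numerical check of the expectation quoted above can be made by evaluating Eqs. (1) and (2) directly; the sketch below uses rounded lepton and J/ψ masses, so it reproduces the quoted ratio only approximately.

```python
import math

def beta(m_lepton, m_jpsi):
    """Lepton velocity factor beta = sqrt(1 - 4 m^2 / M^2)."""
    return math.sqrt(1.0 - 4.0 * m_lepton**2 / m_jpsi**2)

def width_ratio(m_e, m_mu, m_jpsi):
    """Expected Gamma_ee / Gamma_mumu under lepton universality, Eq. (1)."""
    b_e, b_mu = beta(m_e, m_jpsi), beta(m_mu, m_jpsi)
    return b_e * (3.0 - b_e**2) / (b_mu * (3.0 - b_mu**2))

# Rounded masses in GeV (illustrative precision, not full PDG precision)
print(width_ratio(0.000511, 0.105658, 3.096900))  # about 1.0000081
```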
Since the discovery of the J/ψ resonance in 1974 [11,12], its decay widths have been measured by many experiments [13][14][15][16][17]. The precision of the measurements has been improved significantly in the past two decades. In 2004 and 2006, the J/ψ decay widths have been measured by studying the J/ψ samples produced in the initial state radiation (ISR) return process e + e − → γ ISR µ + µ − collected at the Υ(4S) and ψ(3770) peaks by BaBar [18] and CLEO [19], respectively. In 2010, KEDR improved the measurement precision by performing an energy scan (ES) around the J/ψ peak and studying the J/ψ production in the processes e + e − → e + e − and e + e − → µ + µ − [20]. In 2018 and 2020, KEDR presented new results with the J/ψ production in the processes e + e − → e + e − and e + e − → inclusive hadrons [21].
Operating in the τ -charm energy region, the high luminosity of the BEPCII collider [22] and the excellent performance of the BESIII detector [23] offer us a good opportunity for the precision measurements of the J/ψ decay widths. In 2016, BESIII measured the J/ψ decay widths by applying the ISR return technique to the data sample collected at the ψ(3770) peak, and obtained a result with improved precision [24]. In this Letter, we report a new precision measurement of the J/ψ decay widths with the ES method, confirming and complementing the above measurement.
Since the J/ψ resonance contributes to the vacuum polarization in the time-like region, the cross sections of the processes e + e − → e + e − and e + e − → µ + µ − are functions of the J/ψ decay widths [25,26]. Specifically, the cross section (σ_0) of each process with respect to the center-of-mass (CM) energy (W_0) can be written as

σ_0(W_0) = σ^C_0(W_0) + σ^R_0(W_0) + σ^I_0(W_0),   (3)

where σ^C_0, σ^R_0, and σ^I_0 are the continuum, resonance, and interference terms, respectively. The formula still holds after considering the ISR effect, and we take σ_0, σ^C_0, σ^R_0, and σ^I_0 here as the quantities with ISR considered. Unlike the term σ^C_0, the terms σ^R_0 and σ^I_0 depend on the J/ψ decay widths, and their analytic forms are derived in Ref. [27] using the structure function method [28,29].
In Eq. (3), σ R 0 is the primary term related to the J/ψ decay widths, and its major subterm is proportional to Γ ee Γ ee /Γ tot and Γ ee Γ µµ /Γ tot for the processes e + e − → e + e − and e + e − → µ + µ − , respectively [27]. Therefore, we can determine Γ ee Γ ee /Γ tot and Γ ee Γ µµ /Γ tot by fitting to the measured, CM energy dependent cross sections of the two processes. Then, Γ ee /Γ µµ can be evaluated as the ratio of Γ ee Γ ee /Γ tot to Γ ee Γ µµ /Γ tot . Combined with the branching fraction of the J/ψ leptonic decay measured by BESIII in 2013 [30], the total and leptonic J/ψ decay widths can be obtained from Γ ee Γ ee /Γ tot and Γ ee Γ µµ /Γ tot as well.
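The algebra behind the last step is straightforward: under lepton universality (Γ_ee = Γ_µµ = Γ_ll), the combined quantity Γ_ll Γ_ll/Γ_tot equals B_ll² Γ_tot, where B_ll = Γ_ll/Γ_tot is the leptonic branching fraction, so Γ_tot = (Γ_ll Γ_ll/Γ_tot)/B_ll² and Γ_ll = B_ll Γ_tot. The sketch below illustrates this arithmetic only; the inputs are placeholders (the combined quantity is taken near the fitted values quoted in the abstract, and the branching fraction is an assumed round number rather than the BESIII 2013 measurement), and the results in this work come from the simultaneous fit with full uncertainty propagation.

```python
def widths_from_combined(gll_gll_over_gtot_keV, branching_ll):
    """Given the combined quantity Gamma_ll*Gamma_ll/Gamma_tot (in keV) and the
    leptonic branching fraction B_ll, return (Gamma_tot, Gamma_ll) in keV,
    assuming lepton universality."""
    gamma_tot = gll_gll_over_gtot_keV / branching_ll**2
    gamma_ll = branching_ll * gamma_tot
    return gamma_tot, gamma_ll

# Placeholder inputs: combined quantity ~0.34 keV, assumed B_ll ~ 6%
print(widths_from_combined(0.34, 0.06))  # roughly (94 keV, 5.7 keV)
```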
Experimental facilities and data sets
The data used in this work were collected with the BESIII detector [23], which operates at the south crossing point of the BEPCII collider [22]. BEPCII is a double-ring e + e − collider operating in the τ-charm energy region (2.0-4.9 GeV) with an achieved peak luminosity of 10³³ cm⁻² s⁻¹ at the CM energy √s = 3.773 GeV. The BESIII detector, with a geometrical acceptance of 93% of 4π, consists of the following main components: (1) a small-cell, helium-based main drift chamber (MDC) measuring the momenta of charged tracks in a 1 T magnetic field with a resolution of 0.5% for 1 GeV/c transverse momentum and the specific energy loss (dE/dx) with a resolution of 6%; (2) a time-of-flight (TOF) system for particle identification composed of a barrel and two endcaps made of plastic scintillators; the time resolution is 80 ps in the barrel and 110 ps in the endcaps; (3) an electromagnetic calorimeter (EMC) made of CsI(Tl) crystals arranged in a cylindrical shape (barrel) and two endcaps; for 1.0 GeV photons, the energy resolution is 2.5% in the barrel and 5% in the endcaps; (4) a superconducting solenoid magnet providing a nominal magnetic field of 1 T (0.9 T in 2012) parallel to the beam direction; (5) a muon chamber system made of resistive plate chambers with a position resolution of about 2 cm. In addition, a beam energy measurement system (BEMS), located at the north crossing point of the BEPCII storage rings, is used to determine the BEPCII beam energies by measuring the energies of Compton back-scattered photons [31].
In 2012, an ES experiment was performed at fifteen CM energy points in the vicinity of the J/ψ resonance. The measured CM energies and integrated luminosities are summarized in Table 1. The CM energies are measured by the BEMS and calibrated according to the J/ψ mass value given by the PDG [1]; as a consequence, the J/ψ mass cannot be determined in this work. The CM energy calibration procedure fits the J/ψ line shapes in the e + e − and µ + µ − final states simultaneously to their preliminarily measured, CM-energy-dependent cross sections. Adding the uncertainties of the J/ψ masses from the fit and from the PDG in quadrature gives a total calibration uncertainty of 0.033 MeV, which is comparable with the systematic uncertainty (0.043 MeV) of the calibration (via the inclusive hadronic decay mode) of the small J/ψ scan data used for the τ mass measurement [32]. The corresponding integrated luminosities are measured offline with e + e − → γγ events [33].
To determine the signal detection efficiencies, Monte Carlo (MC) simulated events of the processes e + e − → e + e − and e + e − → µ + µ − in the polar angle ranges of 34 • -146 • and 0 • -180 • , respectively, incorporating the ISR and final state radiation (FSR) effects, are simulated with a revised version of the BABAYAGA-3.5 [34] generator, which is modified by the authors to explicitly involve the J/ψ resonance in the vacuum polarization. In addition, MC events of the processes e + e − → inclusive hadrons and γ * γ * → X (e + e − e + e − , e + e − µ + µ − , etc.) are generated for background studies with the CONEXC [35] and BESTWOGAM [36] generators, respectively. When generating these events, the calibrated BEMS CM energies are used.
A GEANT4 [37] based MC simulation program including the geometric description and response of the detector is used to simulate the interaction of final state particles in the detector. Both the experimental data and the simulated MC events are reconstructed and analysed with the GAUDI [38] based offline software system.
Event selection
The signal candidates of e + e − → e + e − and e + e − → µ + µ − events are required to have two oppositely charged tracks in the MDC. Each charged track has to fulfill the following requirements: it must originate from the interaction region defined by |V r | < 1 cm and |V z | < 10 cm, where |V r | and |V z | are the distances of closest approach to the collision point in the x-y plane and along the z axis (taken as the axis of the MDC), respectively; it must hit the detector in the barrel region of | cos θ| < 0.8, where θ is the polar angle of the reconstructed momentum vector with respect to the z axis.
For e + e − → e + e − candidates, two further criteria are applied to each of the selected tracks: its momentum (P ) is required to be larger than 0.7 times the beam energy (E beam ), and its energy deposited in the EMC (E) has to be larger than 0.6 times the momentum.
For e + e − → µ + µ − candidates, the following conditions are required in addition: for each of the selected tracks, P must be larger than 0.8E beam , E has to be larger than 25 MeV and less than 0.25P , and valid timing information is required in the TOF; at the event level, no neutral showers with a deposited energy above 25 MeV are allowed in the EMC, and the difference of the flight times of the two charged tracks (∆t^µ_TOF) measured with the TOF has to be less than 1.5 ns to suppress cosmic rays. Figure 1 shows the comparison between data and MC simulation of the variables used in the event selection. The data shown in the figure are those of the surviving candidate events after subtraction of the residual background estimated with the MC simulation. In general, the MC simulation provides a good description of the data, although minor discrepancies between them are visible. The effects of these discrepancies are taken into account in the systematic uncertainty estimation (see Section 4.2 for details).
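As an illustration only, the selection criteria listed above can be summarized in a short Python sketch. The Track container, its field names and the helper functions are hypothetical and are not part of the BESIII offline software; they merely restate the cuts in executable form.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    vr: float                  # closest approach to the collision point in the x-y plane [cm]
    vz: float                  # closest approach along the z axis [cm]
    cos_theta: float           # cosine of the polar angle of the momentum vector
    p: float                   # momentum [GeV/c]
    emc_e: float               # energy deposited in the EMC [GeV]
    tof_time: Optional[float]  # flight time measured by the TOF [ns], None if absent
    charge: int

def good_track(t: Track) -> bool:
    """Common charged-track requirements used for both final states."""
    return abs(t.vr) < 1.0 and abs(t.vz) < 10.0 and abs(t.cos_theta) < 0.8

def select_ee(tracks: List[Track], e_beam: float) -> bool:
    """e+e- -> e+e-: two good, oppositely charged tracks with P > 0.7*E_beam and E > 0.6*P."""
    if len(tracks) != 2 or tracks[0].charge * tracks[1].charge >= 0:
        return False
    return all(good_track(t) and t.p > 0.7 * e_beam and t.emc_e > 0.6 * t.p
               for t in tracks)

def select_mumu(tracks: List[Track], e_beam: float, shower_energies: List[float]) -> bool:
    """e+e- -> mu+mu-: P > 0.8*E_beam, 0.025 GeV < E < 0.25*P, valid TOF time,
    no neutral shower above 25 MeV, and |Delta t_TOF| < 1.5 ns."""
    if len(tracks) != 2 or tracks[0].charge * tracks[1].charge >= 0:
        return False
    for t in tracks:
        if not (good_track(t) and t.p > 0.8 * e_beam
                and 0.025 < t.emc_e < 0.25 * t.p and t.tof_time is not None):
            return False
    if any(e > 0.025 for e in shower_energies):
        return False
    return abs(tracks[0].tof_time - tracks[1].tof_time) < 1.5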
For the process e + e − → e + e − (µ + µ − ), the efficiencies obtained from the signal MC samples are about 70% (80%), and the background levels estimated with the MC simulation are less than 0.05% (0.5%). Closer examination with a generic event type analysis tool, TOPOANA [39], shows that the backgrounds mainly arise from events with π + π − , K + K − or e + e − e + e − (e + e − µ + µ − ) final states.
Nominal results with statistical uncertainties
Usually, the cross section σ is determined from

σ = (N sig − N bkg ) / (L · ε trg · ε recsel · f),    (4)

where N sig is the number of signal events selected from data, N bkg is the number of residual background events, L is the integrated luminosity, ε trg is the trigger efficiency, ε recsel is the reconstruction-selection efficiency and f is a reconstruction efficiency correction factor. The trigger efficiency is taken as 100% in this work [40]. The correction factor f accounts for the imperfection of the detector simulation: in practice, the reconstruction efficiencies (including the tracking efficiency in the MDC and the cluster reconstruction efficiency in the EMC) obtained from MC simulation deviate from those obtained from data. We therefore study the corresponding reconstruction efficiencies for leptons in different cos θ bins. To compensate for the deviation of the reconstruction-selection efficiency, the correction factor f is introduced as

f = Σ_{m,n} [N obs^MC (m,n) / N obs^MC] · [ε trk^data (m) ε clst^data (m) ε trk^data (n) ε clst^data (n)] / [ε trk^MC (m) ε clst^MC (m) ε trk^MC (n) ε clst^MC (n)].    (5)

Here, N obs^MC stands for the number of surviving events of the signal MC samples, m (n) for the m-th (n-th) cos θ bin of positively (negatively) charged leptons, and ε trk^data and ε trk^MC (ε clst^data and ε clst^MC) for the MDC tracking efficiency (EMC cluster reconstruction efficiency) of leptons from data and MC simulation, respectively. [Fig. 1 caption, retained from the original text: dots with error bars show the background-subtracted data (the background level is evaluated with the MC simulation) and the histograms denote the signal MC; the small discrepancy in the last plot is due to the imperfection of the MC simulation and has a negligible effect on the cross section measurement of e + e − → µ + µ − .]
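A minimal numerical sketch of this correction is given below; the function simply implements the cross-section formula above, and the input numbers are illustrative placeholders rather than values taken from Tables 2 and 3.

def corrected_cross_section(n_sig, n_bkg, lum, eff_trg, eff_recsel, f):
    """Background-subtracted, efficiency-corrected cross section.
    With lum given in nb^-1, the result is in nb."""
    return (n_sig - n_bkg) / (lum * eff_trg * eff_recsel * f)

# Illustrative placeholder inputs (not taken from Tables 2 and 3):
sigma = corrected_cross_section(n_sig=1.2e6, n_bkg=4.0e2, lum=2.0e3,
                                eff_trg=1.0, eff_recsel=0.70, f=0.995)
print(f"sigma = {sigma:.1f} nb")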
The measured cross sections and related input quantities at all individual CM energy points of the processes e + e − → e + e − and e + e − → µ + µ − are summarized in Tables 2 and 3, respectively.
Systematic uncertainties
The systematic uncertainties of the measured cross sections arise mainly from the integrated luminosities, trigger efficiencies, CM energies, reconstruction and selection efficiencies, efficiency correction factors and residual backgrounds.
The uncertainties due to the integrated luminosities are estimated to be less than 1.40% (1.26% at √ s = 3096.9 MeV) [33], while those resulting from trigger efficiencies are evaluated as 0.10% [40]. To estimate the uncertainties due to the CM energies, two additional sets of MC samples are generated by increasing or decreasing the CM energies by one standard deviation with respect to their nominal values. The largest changes of the efficiencies with respect to their nominal values are taken as the uncertainties.
The uncertainties associated with the momentum requirement for the process e + e − → e + e − are estimated by changing the selection criterion from P > 0.7E beam to P > 0.6E beam . The resultant changes in the calculated cross sections are taken as the uncertainties. The uncertainties related to the other requirements are estimated with a similar method. Specifically, for the selection of e + e − → e + e − events, the analysis is carried out with the alternative criteria of | cos θ| < 0.7 and E/P < 0.7, individually, while for e + e − → µ + µ − , the analysis is repeated with the alternative criteria of P > 0.9E beam , | cos θ| < 0.7, E/P < 0.35 and ∆t^µ_TOF < 2.5 ns, individually. As listed in Tables 2 and 3, the statistical uncertainties of the efficiency correction factors are 0.02% and 0.03% for the processes e + e − → e + e − and e + e − → µ + µ − , respectively, which are determined from the statistics of the samples used to study the reconstruction efficiencies. On the other hand, detailed studies show that the purities of the control samples for the electron tracking, electron clustering, muon tracking, and muon clustering efficiencies in data are about 99.99%, 99.81%, 98.45%, and 99.52%, respectively. Considering other factors, such as the background contamination, the uncertainties resulting from the efficiency correction factors are roughly and conservatively estimated to be 0.10% for both the e + e − → e + e − and e + e − → µ + µ − processes.
The numbers of residual background events, estimated with the MC simulation, are subtracted from the numbers of surviving events in the calculation of the cross sections, and hence the uncertainties of the background levels need to be taken into account. Since the uncertainties of the cross sections for some dominant background channels (for example, e + e − → K + K − ) set in the generator are as large as 100%, we take the background levels themselves as the related uncertainties. As a result, the uncertainties for the processes e + e − → e + e − and e + e − → µ + µ − at √ s = 3096.9 MeV are 0.03% and 0.25%, respectively. Table 4 shows a summary of the systematic uncertainties of the measured cross sections of the processes e + e − → e + e − and e + e − → µ + µ − at √ s = 3096.9 MeV. The total systematic uncertainties, 1.40% and 1.29% for the two processes individually, are the square root of the quadratic sum of the individual uncertainties and are dominated by those associated with the integrated luminosities. The systematic uncertainties of the measured cross sections at the other CM energy points are estimated with the same method, and they are summarized in Tables 2 and 3 together with the statistical uncertainties.
Correlation analysis
To account for the correlations between the measured cross sections of the same process at different CM energy points, the corresponding covariance matrices are estimated. To estimate such a covariance matrix, the contributions from all related uncertainty sources are analysed and propagated according to their nature. To give an impression of the strength of these correlations, the correlation coefficient matrices of the measured cross sections of the processes e + e − → e + e − and e + e − → µ + µ − are shown in Fig. 2. We find that the correlations are strong and cannot be neglected.
In the covariance matrix analysis above, the covariance matrix of the measured luminosities at the different CM energy points is estimated in advance with a similar method. This matrix is required when estimating the covariance matrices of the measured cross sections and when constructing the global χ 2 function for the simultaneous fit of the processes e + e − → e + e − and e + e − → µ + µ − (see Section 5.2).
Energy spread and final state radiation
To determine the J/ψ decay widths, a simultaneous fit to the measured, CM energy dependent cross sections of the processes e + e − → e + e − and e + e − → µ + µ − is required. In the theoretical formulae used in the fit, the effects of the beam energy spread and FSR are taken into account as well.
By assuming that the CM energy spread follows a Gaussian distribution, the theoretical cross section is

σ(W) = 1/(√(2π) S W ) ∫ σ0(W′) exp[ −(W′ − W)² / (2 S W ²) ] dW′.    (6)

Here, W (= √ s) and S W are the mean and standard deviation of the CM energy distribution, respectively. According to this formula and the expression for σ0 in Eq. (3), σ can also be divided into three terms: the continuum term (σ C ), the resonance term (σ R ) and the interference term (σ I ). In practice, σ C is evaluated with the BABAYAGA-3.5 generator [34] with the effects of the J/ψ and FSR switched off, while σ R + σ I is calculated using the analytic formulae for σ0^R + σ0^I in Ref. [27]. In Eq. (3), only the ISR effect is included in σ0. To take the FSR effect into account, we introduce a correction factor R FSR into the theoretical cross section, multiplying the smeared cross section of Eq. (6) by R FSR (W). In practice, R FSR is obtained with the BABAYAGA-3.5 generator [34] as the ratio of the calculated cross sections with and without the FSR effect. For example, at √ s = 3096.9 MeV, R FSR is 0.980 and 0.998 for the processes e + e − → e + e − and e + e − → µ + µ − , respectively.
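As an illustration of Eq. (6), the smearing can be evaluated numerically. The toy line shape below is not the analytic σ0 of Ref. [27] (which includes ISR and the interference term); it serves only to show the convolution and the role of the energy spread S W.

import numpy as np

def smeared_xsec(sigma0, w, s_w, n_sigma=5.0, n_points=401):
    """Numerically convolve a cross section sigma0(W') with a Gaussian
    CM-energy spread of standard deviation s_w, evaluated at energy w (cf. Eq. 6)."""
    wp = np.linspace(w - n_sigma * s_w, w + n_sigma * s_w, n_points)
    gauss = np.exp(-0.5 * ((wp - w) / s_w) ** 2) / (np.sqrt(2.0 * np.pi) * s_w)
    return np.trapz(sigma0(wp) * gauss, wp)

# Toy Breit-Wigner-like line shape in MeV (arbitrary normalization), for illustration only.
def toy_sigma0(wp, m=3096.9, gamma=0.093):
    return 1.0 / ((wp - m) ** 2 + gamma ** 2 / 4.0)

print(smeared_xsec(toy_sigma0, w=3096.9, s_w=0.916))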
Due to the high-order corrections related to the FSR effect, the cross sections of the processes e + e − → e + e − and e + e − → µ + µ − are calculated by the BABAYAGA-3.5 generator with uncertainties of 0.5% and 1.0%, respectively [34]. Thus, systematic deviations of R FSR (W) from its true value may appear in the vicinity of the J/ψ resonance. To take these possible deviations into consideration, we introduce one free scaling parameter into the theoretical cross section formula of each process for the simultaneous fit. Specifically, the free scaling parameters for the processes e + e − → e + e − and e + e − → µ + µ − are referred to as F ee and F µµ , respectively.
The global χ 2 function of the simultaneous fit combines the two processes with their full covariance matrices. Here, σ^exper_ee and σ^theor_ee (σ^exper_µµ and σ^theor_µµ) are the experimentally measured and theoretically predicted cross sections of the process e + e − → e + e − (µ + µ − ), V ee (V µµ ) is the covariance matrix of the measured cross sections of e + e − → e + e − (µ + µ − ), V L is the covariance matrix of the measured luminosities, i and j are the row and column indices of the 30 × 30 covariance matrix V , δ is the Kronecker delta function, and ∆W is the statistical uncertainty of the CM energy as listed in Table 1; the systematic uncertainty of the CM energies, 0.033 MeV, is taken into account by examining the changes of the fit result when the CM energies are shifted by 0.033 MeV.
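A minimal sketch of such a correlated χ 2 is given below. It is schematic only: it omits the luminosity covariance V L and any cross terms between the two processes that enter the actual global χ 2 of the analysis, and the argument names are hypothetical.

import numpy as np

def correlated_chi2(res_ee, res_mm, v_ee, v_mm, w_fit, w_meas, dw):
    """Schematic chi-square: residuals (measured minus predicted cross sections)
    weighted by the full covariance matrix of each process, plus Gaussian
    constraints on the fitted CM energies."""
    term_ee = res_ee @ np.linalg.solve(v_ee, res_ee)
    term_mm = res_mm @ np.linalg.solve(v_mm, res_mm)
    term_w = np.sum(((np.asarray(w_fit) - np.asarray(w_meas)) / np.asarray(dw)) ** 2)
    return term_ee + term_mm + term_w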
Simultaneous fit and parameter transformation
By minimizing the global χ 2 function, the simultaneous fit to the measured, CM-energy-dependent cross sections of the processes e + e − → e + e − and e + e − → µ + µ − is carried out. In the fit, the six parameters M , Γ ee Γ ee /Γ tot , Γ ee Γ µµ /Γ tot , S W , F ee and F µµ are floated, while σ C (W) and R FSR (W) are expressed as piecewise linear interpolation functions based on hundreds of pairs of (W, σ C ) and (W, R FSR ) values obtained with the BABAYAGA-3.5 generator. The resultant fit curves are shown in Fig. 3, and the corresponding fit quality is χ 2 min /ndf ≈ 23.0/24 ≈ 1.0, where χ 2 min and ndf are the minimized global chi-square and the number of degrees of freedom, respectively. [Fig. 3 caption, retained from the original text: because the interference effect, which enters through Eq. (3) and is mainly visible as the small dip in front of the peak, is pronounced in e + e − → µ + µ − , that plot is drawn with a logarithmic vertical axis, while the plot for e + e − → e + e − is drawn with a linear vertical axis, the interference effect being less noticeable there due to the dominant scattering channel; the red points with error bars and the blue curves are the data and the fit, respectively.] The fit results for Γ ee Γ ee /Γ tot and Γ ee Γ µµ /Γ tot are (0.346 ± 0.009) keV and (0.335 ± 0.006) keV, with a covariance of 0.000046 keV 2 and a correlation coefficient of 0.83 between them. Taking the correlation term into account, we evaluate Γ ee /Γ µµ to be 1.031 ± 0.015, which is consistent with the expectation of lepton universality within about 2σ. In addition, S W is fitted to be (0.916 ± 0.018) MeV, consistent with the designed energy spread of the BEPCII collider, and F ee and F µµ are fitted to be 0.995 ± 0.009 and 1.015 ± 0.011, respectively, compatible with the quoted precision of the BABAYAGA-3.5 generator within uncertainties.
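As a cross-check of the quoted ratio, standard error propagation for a correlated quotient reproduces the stated uncertainty; the tiny difference in the central value (1.033 versus 1.031) comes only from the rounding of the inputs quoted above.

import numpy as np

# Central values, uncertainties and covariance as quoted in the text (keV, keV^2).
a, sa = 0.346, 0.009   # Gamma_ee * Gamma_ee / Gamma_tot
b, sb = 0.335, 0.006   # Gamma_ee * Gamma_mumu / Gamma_tot
cov = 0.000046

r = a / b
sr = r * np.sqrt((sa / a) ** 2 + (sb / b) ** 2 - 2.0 * cov / (a * b))
print(f"Gamma_ee/Gamma_mumu = {r:.3f} +/- {sr:.3f}")   # -> 1.033 +/- 0.015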
As mentioned previously, the impact of the systematic uncertainty (0.033 MeV) of the CM energies requires additional consideration. By increasing and decreasing the CM energies by 0.033 MeV, we repeat the entire simultaneous fit twice; the relative changes of the results are less than 0.1% and are therefore neglected.
The uncertainties quoted here are total uncertainties, which are obtained with all the statistical and systematic uncertainties of the input quantities taken into consideration.
The result for Γ ee /Γ µµ is consistent with, and more precise than, the result (1.002 ± 0.025) given by KEDR with the same method [20]. It is also in agreement with, but less precise than, the previous BESIII result (1.0017 ± 0.0037) obtained with a different approach [30]. Table 5 shows a comparison of the Γ tot and Γ ll obtained in this work with those from other works and the PDG. The results of this work agree with all other results and reach a new level of precision, together with the previous results obtained with the ISR return technique at BESIII and the ES method at KEDR.
Summary
Based on the data samples collected with the BESIII detector at fifteen CM energy points in the vicinity of the J/ψ resonance, the cross sections of the processes e + e − → e + e − and e + e − → µ + µ − are measured and summarized in Tables 2 and 3, respectively. By performing a simultaneous fit of the cross sections of the two processes as functions of the center-of-mass energy, Γ ee Γ ee /Γ tot and Γ ee Γ µµ /Γ tot of the J/ψ resonance are determined to be (0.346 ± 0.009) and (0.335 ± 0.006) keV, respectively.
Non-coding RNAs in Exosomes: New Players in Cancer Biology
Exosomes are lipid bilayer extracellular vesicles (EVs) of 50-150 nm in size, which contain nucleic acids (mRNA, ncRNAs and DNA), proteins and lipids. They are secreted by all cells and circulate in all body fluids. Exosomes are key mediators of several processes in cancer that drive tumor progression and metastasis. These nano-vesicles, when secreted from cancer cells, are enriched in non-coding RNAs (e.g. microRNAs) complexed with the RNA-Induced Silencing Complex (RISC), which mediate efficient and rapid silencing of mRNAs in the recipient cell, reprogramming its transcriptome. MicroRNAs circulating encapsulated in exosomes are protected from degradation by the lipid bilayer and might serve as non-invasive diagnostic and screening tools to detect early-stage cancer, to facilitate treatment decisions and possibly to help in curative surgical therapy decisions. Additionally, engineered exosomes can be used as therapy vehicles for targeted delivery of RNAi molecules, escaping detection by the immune system.
INTRODUCTION
The conceptual idea postulated by scientists [1] that RNA was the main agent responsible for the emergence of biology from chemistry on the primitive Earth points to the existence of a dynamic yet heterogeneous set of RNA players with different functional and biological purposes across organisms.
Non-Coding RNAs
Protein-coding genes are the most well-studied genomic sequences; however, exons account for only ~2% of the genome even when untranslated regions (UTRs) are considered [2]. It has become increasingly evident that the remaining non-protein-coding portion (98%) is of crucial importance for development, homeostasis and disease [3]. This is largely supported by the fact that the biological complexity of organisms is often proportional to the non-protein-coding portion of the genome, coupled with evidence that a large part of this portion is transcribed into non-coding RNAs (ncRNAs) [1].
The term non-coding RNA (ncRNA) most commonly refers to RNA that does not encode a specific protein in the cell. Although ncRNAs were initially thought to arise as a result of low polymerase fidelity, the discovery that many genomic sequences are transcribed in accordance with the developmental state of the organism and the tissue location prompted the characterization of these molecules in complex organisms [4]. There is no clear-cut distinction between the different types of ncRNAs; however, they can be divided according to size into short ncRNAs, comprising molecules ranging from ~17 bp to ~32 bp (miRNAs, piRNAs and tiRNAs), mid-size ncRNAs with sizes between 60 bp and <200 bp (snoRNAs, PASRs, TSSa-RNAs and PROMPTs), and long ncRNAs with >200 bp (lincRNAs, T-UCRs and other long ncRNAs) [3,5]. Their functions are also highly heterogeneous, with implications in many cellular processes such as ribosomal RNA (rRNA) modification [6], messenger RNA (mRNA) targeting [7], chromatin modification and transcriptional regulation [4]. Of all ncRNAs, the microRNAs (miRNAs) are the most widely studied, ever since the first small ncRNA, lin-4, was described in C. elegans [8].
MicroRNAs are small (19-24 bp) endogenous ncRNAs that regulate gene expression at the post-transcriptional level by targeting mRNA transcripts. The action of miRNAs is mediated by their binding to the 3'-untranslated region (3'UTR) of the target mRNAs, thereby regulating target mRNA stability and protein synthesis [9,10]. Regarding their biogenesis, miRNAs are transcribed in the nucleus, either from transcripts of specific genes or from introns of protein-coding genes, by RNA polymerase II (RNAP II) into a primary miRNA (pri-miRNA) [11]. The pri-miRNA is then enzymatically processed in a two-step reaction catalyzed by two members of the RNase III family, Drosha and Dicer, in association with the dsRNA-binding proteins (dsRBPs) DiGeorge Syndrome Critical Region Gene 8 (DGCR8) and transactivation-responsive RNA-binding protein (TRBP). The first step occurs in the nucleus and comprises the recognition and cleavage of the stem-loop of the pri-miRNA by the Drosha-DGCR8 complex (also called the microprocessor complex) into a ~70 nucleotide precursor hairpin (pre-miRNA). It is then transported to the cytoplasm by the Exportin-5 (XPO5)-Ran-GTP complex for the dicing step, in which the pre-miRNA is cleaved by the Dicer-TRBP complex into a double-stranded miRNA. After strand separation, the functional strand of the mature miRNA is loaded together with Argonaute (AGO2) proteins into the RNA-induced silencing complex (RISC). This complex exerts its function by silencing the target mRNA through cleavage, translational repression or deadenylation. The other strand, called the passenger strand, is degraded (Fig. 1) [11,12].
MiRNAs are estimated to regulate the translation of nearly 60% of protein-coding genes, being involved in the regulation of many processes including differentiation, proliferation, apoptosis and development. Deregulation of the expression of certain miRNAs has been consistently observed in various pathologies, including cancer [13]. Every miRNA has a unique nucleotide sequence and a unique expression pattern in a given cell type [13,14]; however, different miRNAs can cooperatively regulate the same target [3]. More than 2000 different human miRNA species have been discovered so far, and this number is increasing [15]. MicroRNAs are very stable small RNAs that are most often bound to Argonaute (AGO) proteins. Interestingly, miRNAs have been found in circulation in the serum of healthy individuals as well as in patients with several different pathologies [16].
CIRCULATING NON-CODING RNAs
The detection of miRNAs in biological fluids including blood plasma, urine, tears, breast milk, amniotic fluid, cerebrospinal fluid, saliva, and semen attests to the stability of these molecules in an adverse environment [15]. These extracellular circulating miRNAs survive under unfavorable physiological conditions such as extreme variations in pH, in contrast to common RNA species like mRNA, rRNA, and tRNA, which are degraded within seconds after being placed in a nuclease-rich extracellular environment [15].
In 2008, Hunter et al. proposed that extracellular miRNAs were protected by encapsulation into membrane vesicles, after detecting miRNAs in peripheral blood microvesicles [17]. These results, combined with the previous finding that cells in culture export intracellular miRNAs into the extracellular environment by exosomes [18], led to the hypothesis of an intercellular and inter-organ communication system in the body mediated by miRNAs encapsulated in Extracellular Vesicles (EVs). However, four years later two studies suggested that 90-99% of extracellular miRNAs are in fact outside EVs and associated with proteins of the AGO family, both in blood plasma/serum and in cell culture media [16,19]. Turchinovich et al. (2011) also showed the stability of the AGO2 protein in a protease-rich environment, which explains the stability and resistance of miRNAs in biological fluids [16], thus unraveling the two mechanisms by which miRNAs can be shielded from RNase activity when in circulation: 1) encapsulation in EVs and 2) association with AGO proteins. Whilst circulating miRNAs bound solely by AGO proteins are apparently non-specific remnants resulting from the physiological activity of cells and from cell death, there is increasing evidence that cells selectively encapsulate miRNAs into EVs and secrete them outside the cell. The mechanisms behind this selectivity remain, however, to be elucidated [15].
EXTRACELLULAR VESICLES
The concept of intercellular communication has gained renewed attention in recent years. Cell-cell contact and the transfer of secreted molecules are the best-studied mechanisms accounting for this process; however, a third mechanism has emerged which involves the intercellular transfer of EVs [20]. EVs are small membrane vesicles that contain different types of RNAs [21], proteins [22] and, as more recently demonstrated, DNA [23], enclosed by a phospholipid bilayer, and they are released by all eukaryotic and prokaryotic cells. The diameter of EVs typically ranges from 30 nm to 1 µm, the smallest of which are 100-fold smaller than the smallest cells [24]. The origin and nomenclature of EVs are still rather elusive, mainly because the currently available extraction and purification methods cannot precisely discriminate between different types of vesicles, such as exosomes and microvesicles (MVs) [20]. As such, many names have been proposed, referring to important classification criteria such as size, density, morphology, lipid composition, protein composition, and subcellular origin [24,25]. The terms ectosome, shedding vesicle, microparticle and MV generally refer to 150-1000 nm vesicles released by budding from the plasma membrane [26]. The term exosome, on the other hand, was initially used for vesicles ranging from 40-1000 nm that are released by a variety of cultured cells [27], and later for 40-100 nm vesicles resulting from multivesicular endosome fusion with the plasma membrane during the process of reticulocyte differentiation [28]. Finally, the term exosome came to denote vesicles of 40-150 nm in diameter [25]. Later, B lymphocytes and dendritic cells were shown to release similar vesicles of endosomal origin [29]. Several studies demonstrated the release of both exosomes and MVs by a single cell type [30][31][32]. One of the main criteria for distinguishing exosomes from MVs resides in the shedding process of these vesicles: while larger vesicles are directly shed from the plasma membrane, exosomes derive from the intracellular endosomal compartment [22].
Starting from an early endosome (Fig. 2A), exosome biogenesis involves the production of intraluminal vesicles (ILVs) by inward budding of the limiting membrane of multivesicular bodies (MVBs) and the release of these vesicles into the MVB lumen (Fig. 2B). This is a well-studied process with known players, such as the endosomal sorting complex required for transport (ESCRT). The ESCRT-0, -I and -II complexes are responsible for the recognition and sequestration of ubiquitinated proteins at the endosomal membrane, while ESCRT-III is responsible for the inward budding of the membrane [25,33]. MVBs containing the ILVs then fuse with the plasma membrane (Fig. 2C). Secretion of exosomes (Fig. 2D) is promoted by the Rab GTPases Rab27a and Rab27b, which belong to a small family of proteins involved in cellular trafficking [22]. Recent studies provided evidence that exosome biogenesis can also occur via an ESCRT-independent mechanism dependent on sphingomyelinase, an enzyme that produces ceramide [34]. These observations are consistent with the presence of high concentrations of ceramide and its derivatives in exosomes [34,35]. Importantly, the composition of an exosome is not a mere reflection of the donor cell: it has been shown that the profiles of exosomal cargo can be substantially different from those of the originating cell, which indicates the existence of a highly controlled sorting process [36].
Currently, exosomes are defined as 40-150 nm diameter bilayered membrane vesicles of endocytic origin, with a cup-shaped morphology and densities ranging between 1.13 and 1.19 g/mL [24]. Because they fall below the resolution limit of optical microscopy, transmission electron microscopy and atomic force microscopy have so far been the preferred techniques for the direct observation of exosome size and morphology [25]. Recently, however, a device allowing Nanoparticle Tracking Analysis (NTA), which tracks the movement of laser-illuminated individual particles under Brownian motion, has been developed, enabling a fast and simple way of analyzing large numbers of particles at the same time [37]. Exosomes are present in almost all biological fluids including blood, urine, ascites, cerebrospinal fluid [38][39][40][41], serum and plasma [7], and in the culture medium of cultured cells [42]. In terms of biochemical composition, the membrane of exosomes contains high levels of cholesterol, sphingomyelin, ceramide and detergent-resistant membrane domains [43,44]. One important feature of exosomes is that, owing to their biogenesis, the proteins on the surface of their membrane have the same orientation as on the cell membrane. Initial proteomic studies indicated that exosomes contain a particular set of proteins depending on the cell that secretes them, while others are found in most exosomes independently of the cell type [25]. Proteins from endosomes, the plasma membrane and the cytosol belong to the second group, whereas proteins from the mitochondria, endoplasmic reticulum and Golgi complex are not usually present in exosomes. A clear distinction between MVs and exosomes in terms of protein characterization is hampered by the presence of ubiquitous proteins including ALIX, CHMP4B, RAB11A and RAB5 [25]. Exosomes, however, are characterized by the presence of proteins involved in membrane transport and fusion processes such as Rab GTPases, annexins and flotillin, components of the endosomal sorting complex required for transport (ESCRT), tumor susceptibility gene 101 (TSG101), heat shock proteins (HSPs), integrins and tetraspanins including CD81, CD63 and CD82 [20,25,45]. Furthermore, exosomes also contain proteins that are involved in specific cell functions. For instance, MHC class II molecules were shown to be abundant in exosomes from all antigen-presenting cells (APCs) that express MHC class II [46]. Additionally, epithelial tumor cells secrete exosomes carrying the epithelial cell adhesion molecule (EpCAM) [47], and gastric [48], breast [49] and pancreas cancer [50] derived exosomes express members of the human epidermal receptor (HER) family. Although not thoroughly studied, enrichment of sphingomyelin, phosphatidylserine, cholesterol, saturated fatty acids, ceramide and its derivatives has been observed among the lipid components of exosome membranes (Fig. 3) [34,51,52].
Although some models of exosome uptake by target cells have been proposed, there is no consensus regarding the underlying mechanisms. Exosomal communication may occur through the direct interaction of membrane proteins with receptors on a target cell, activating intracellular signaling processes (Fig. 4A). A second proposed mechanism involves the cleavage of exosomal membrane proteins by proteases in the extracellular space, resulting in fragments of different sizes; these fragments can then act as ligands that bind to protein receptors on the target cell (Fig. 4B). Exosomes can also fuse with the target cell membrane and release their content directly into the recipient cell (Fig. 4C). Lastly, exosomes can be phagocytosed in an actin-cytoskeleton- and phosphatidylinositol 3-kinase-dependent manner (Fig. 4D) [22,44].
Exosomes have recently emerged as important mediators of cell communication due to their enriched content in genetic material such as mRNAs and non-coding RNAs [18], which have been shown to be functional in recipient cells [18]. Nonetheless, several questions remain to be addressed, among them how exosomes can carry stoichiometric amounts of miRNAs sufficient to affect gene expression post-transcriptionally in the recipient cell.
EXOSOMES AND CANCER
Although the discovery of exosomes dates back decades, only recently has their study gained serious momentum, as they are continuously being implicated in important disease mechanisms such as cancer. In fact, exosomes can help cancer progress and disseminate by manipulating the local and distant biological environment. On the other hand, exosomes can also program the immune system to evoke an anti-tumor response by the organism [22]. This duality of roles makes it clear that the network of interactions created by exosomes is complex and of utmost importance for a better understanding of the carcinogenic process. Tumor cells exchange oncogenic proteins between themselves or with normal surrounding cells via exosomes [53]. Although the purpose of this cross-talk is not yet known, some examples have confirmed this level of communication. For instance, the protein EGFRvIII can be delivered intercellularly through exosomes from glioma cells to nearby cells lacking this mutant form, which in turn leads to activation of transforming signaling pathways [53]. Furthermore, exosomes extracted from a KRAS-mutant cell line and containing mutant KRAS protein enhanced cell growth and tumorigenicity in wild-type KRAS-expressing non-transformed cells upon transfer [54]. In vitro experiments showed that exosomes containing TGF-β1 can trigger the differentiation of fibroblasts to myofibroblasts through SMAD-dependent signaling [55]. Since myofibroblasts are key producers of proteins involved in the remodeling of the matrix of the tumor microenvironment and actively participate in angiogenesis, the role of exosomes in the recruitment of fibroblasts could enhance angiogenesis [22]. In fact, exosomes were shown to participate in the formation of the pre-metastatic niche in an in vivo pancreas cancer model [56]. Another example depicting the tumorigenic role of exosomes is the study by Peinado and colleagues (2012), in which they demonstrated in mice that exosomes from metastatic melanoma cells can enhance tumorigenesis by recruiting bone marrow-derived cells to initiate a pre-metastatic niche [57].
Exosomes are reported to contain predominantly different kinds of RNA and protein. Two earlier studies had shown the presence of mitochondrial DNA [58], single-stranded DNA and transposable elements [59] in exosomes. However, only recently was evidence found that exosomes carry fragments of double-stranded DNA, in a study using exosomes from pancreas cancer cells and from the serum of patients [23]. Furthermore, mutations in KRAS and p53 were detected in the genomic DNA of these exosomes.
MiRNAs play important roles in several cellular processes by regulating the expression of hundreds of genes. Studies have reported evidence that exosome-associated miRNAs are transferred to recipient cells, resulting in altered gene expression and functional effects [18,60-63]. In 2012, Chiba et al. demonstrated that exosomes derived from three different colorectal cancer cell lines contained mRNAs, miRNAs and natural antisense RNAs, and were delivered into recipient cells [63]. In addition, some of these reports have demonstrated that the transferred exosomal content can be functional in the recipient cells [64]. Yang et al. (2011) reported increased migration of SKBR3 and MDA-MB-231 breast cancer cells in a transwell invasion assay after treatment with exosomes containing miR-223 secreted by IL-4-activated macrophages; conversely, blocking miR-223 prevented the increased invasion capacity previously observed. Furthermore, the level of the mRNA target of miR-223 was reduced in the recipient cells after exosome treatment [64].
The modulating features of exosomes were assessed in a recent study showing that exosomes from normal bone marrow cells containing miR-15 can have a tumor suppressor effect upon transfer to multiple myeloma cells, in which the expression of this miRNA is low [65]. Also, after infecting B-lymphoblastoid cells with Epstein-Barr virus, Pegtel and colleagues (2010) showed that the secreted exosomes contained virus-specific miRNAs and that these affected the expression of target genes, thus revealing the ability of exosomes to facilitate viral infection through miRNAs [61]. More recently, exosomes were implicated in the metastatic process by a study of Valencia and colleagues (2014): using an in vivo murine model, they demonstrated that miR-192 was specifically enriched in exosomes and that these markedly reduced the metastatic burden and tumor colonization in the bone [66]. The work of Kosaka and colleagues (2012) showed the tumor suppressor effect of exosomal miR-143 derived from normal prostate cells through inhibition of the growth of target cancer cells in vivo and in vitro [67].
Intercellular communication through exosomes has also been proposed as a possible mechanism for the spread of resistance or sensitivity of cancer cells to a specific therapy. Although the precise mechanism(s) by which this occurs is still elusive, Xiao et al. (2014) demonstrated that exosomes released by cells exposed to chemotherapy could in turn influence the resistance of target cells to that specific agent [68].
BIOMARKERS
Since they are readily accessible in nearly all bodily fluids, exosomes can provide great diagnostic opportunities to profile cancer subtypes (virtually a liquid biopsy), unravel new therapeutic targets and predict therapeutic responses. The mere fact that exosome production is increased in cancer allows exosome analysis to be useful for early cancer detection and assessment of disease progression, without the need for a tumor biopsy [69].
MicroRNAs in exosomes have recently been described as good biomarkers that are easily accessible in the circulation of cancer patients [60,70-72]. Based on the levels of eight exosomal miRNAs, Taylor and colleagues (2008) demonstrated that malignant ovarian cancers could be distinguished from benign disease [70]. Other studies have further demonstrated an association of specific miRNAs with a cancer type, for instance miR-107, miR-574-3p, miR-1290 and miR-375 in prostate cancer [71,73], miR-141 and miR-195 in breast cancer [72] and serum-derived miR-21 in glioblastoma [60]. In 2014, Rodríguez and colleagues demonstrated the potential use of exosomal miRNAs as biomarkers, which were enriched in the plasma of lung cancer patients compared to the bronchoalveolar lavage of these patients [74]. In another report, the authors suggested that a distinction between normal and lung adenocarcinoma samples could be achieved based on the expression of the exosomal miRNAs miR-378a, miR-379, miR-139-5p and miR-200-5p [75].
It is not only in serum or plasma that exosomes show their potential as cancer biomarkers. Recently, a miRNA panel extracted from bile-derived exosomes of cholangiocarcinoma patients was proposed to be of relevance for disease diagnosis [76]. Likewise, results from the work of Liu and colleagues (2014) showed high expression levels of miR-21 and miR-146a in exosomes derived from the cervicovaginal fluid of patients compared to those from HPV-positive and HPV-negative normal samples [77].
PRECURSOR microRNA PROCESSING IN CANCER EXOSOMES
Several reports suggest that miRNAs contained in exosomes can influence gene expression in recipient cells [18,61,67,78]. Nonetheless, single-stranded miRNAs by themselves are incorporated into the RNA-Induced Silencing Complex (RISC) very poorly and therefore cannot be efficiently directed to their target mRNAs for post-transcriptional regulation [79,80]. We have recently described a mechanism by which cancer-derived exosomes are able to incorporate precursor miRNAs (pre-miRNAs) in complex with Dicer, TRBP and AGO2 proteins, allowing for their processing in a cell-independent fashion [42]. Therefore, after their processing, the miRNAs are already an integral part of the RISC complex, which guides the miRNA in the recipient cell to its mRNA target more efficiently. This process allows cancer cell-derived exosomes to more efficiently regulate gene expression post-transcriptionally in the recipient cell. This was the first study to report a cell-autonomous process occurring in exosomes upon their secretion into the extracellular space [42], and it opens a new perspective on exosome biology, as vesicles in which cell-independent processes may occur in the extracellular space.
NON-CODING RNAs IN EXOSOMES AND THERAPY
The intrinsic ability of exosomes to efficiently shuttle small molecules as non-immunogenic carriers of therapeutic agents to target cells makes them an extremely promising therapeutic tool for numerous diseases, including cancer. Intercellular transfer by exosomes can thus be used to carry ncRNAs, for instance to restore miRNA expression in target cells, where they might play a therapeutic role as tumor suppressors [81]. As Ohno and colleagues (2013) elegantly demonstrated, engineered exosomes expressing the transmembrane domain of the platelet-derived growth factor receptor fused to the GE11 peptide can accurately deliver the let-7a miRNA, after injection, to EGFR-expressing xenograft breast cancer tissue in immunodeficient mice [82]. It is important to note, however, that miRNAs do not require full binding to their target mRNA sequences to exert an inhibitory effect. This allows them to act synergistically on various molecules within signaling pathways, lowering their target specificity [83]. The use of synthetic siRNAs has been exploited as an alternative, more selective tool. Exosomes loaded with exogenous siRNA against GAPDH were injected into mice, and the siRNA was delivered specifically to neurons, microglia and oligodendrocytes in the brain, resulting in selective gene knockdown [84]. Additionally, the delivery of an siRNA led to selective gene silencing of MAPK1 in monocytes and lymphocytes in a study by Wahlgren and colleagues (2012) [85].
CONCLUSION
The prospect of accessing exosomes in almost all biofluids, such as plasma, lymph, cerebrospinal fluid, urine or malignant ascites, brings to the fore some truly unprecedented diagnostic opportunities. The identification of the non-coding RNAs in circulation during tumor progression and therapy may provide unique, remote, non-invasive and virtually continuous access to the changing molecular make-up of cancer cells (virtually a liquid biopsy), with significant clinical implications. Finally, understanding their role and selective packaging in cancer exosomes will unravel novel functions of these non-coding RNAs in cancer progression.
CONFLICT OF INTEREST
The author(s) confirm that this article content has no conflict of interest.
TRUTH TELLING TO LIFE-THREATENED AND DYING PATIENTS IN ISRAEL: CAN LEGISLATION IMPROVE IT?
Truth-telling by doctors to patients is a basic moral rule in developed healthcare systems. Not to tell the truth jeopardizes staff-patient trust, undermines the patient's capacity for autonomy, and deprives the terminally-ill of a 'good death'. Yet non-truth-telling is still common. This study explores and measures the extent of non-truth-telling to cancer patients in Israel's modern health care system, why it happens and what consequences it leads to. Research design and methods: This Mixed Methods study of doctors working regularly in the field of palliative care, in hospital, community and home care settings, centred on two main tools, the first a qualitative structured in-depth interview of 15 doctors (from oncology, hospice home care and family medicine); the second a much longer quantitative self-administered questionnaire for 90 practitioners (30 hospital oncologists, 30 home care specialists, and 30 family medicine specialists). The sample was made fully representative of the research population. The sampling method combined cluster, directed and convenience sampling. Data were analysed by content analysis and descriptive statistics (chiefly means and correlations). Findings: Most oncologists had poor knowledge and a negative opinion of the 2005 act of parliament drawn up to regulate the care and treatment of terminally-ill patients. They knew and thought equally little of the palliative care approach which the Act mandated. Most doctors did not tell patients the full truth about their medical condition and avoided holding an end-of-life conversation with them or discussing Advance Medical Directives, largely out of fear and lack of the emotional resources and communications training required. Most thought truth telling took away hope and that hospice care approximated euthanasia. Many or most were reluctant to face the 'failure' implied in even discussing the transfer of a patient to palliative care. All doctors thought it was another doctor's responsibility to hold that conversation, leaving patients deprived of both their autonomy and the quality of the decisions made without their input. The Israel Ministry of Health needs to take initiatives to firmly clarify the provisions of the 2005 Act, to provide and enforce training in its requirements, and to firmly apprise oncologists of their duty to break bad news and conduct the end-of-life conversation most patients need.
Truth-telling, such as doctors telling the truth about a patient's illness, prognosis and treatment plan to the patient and/or their family, is a basic moral rule in the western healthcare system. Not to tell the truth can be viewed as jeopardizing trust in the staff-patient relationship, intruding on the patient's existential integrity and undermining the patient's capacity for autonomy [18]. Non-truth-telling can also mean that no Advance Medical Directives (AMDs) are drawn up and no end-of-life (EoL) planning is carried out. Yet in the 21st century non-truth-telling is still common.
This study explores and measures the extent of non-truth-telling to cancer patients in Israel's modern health care system, why it happens and what consequences it leads to. The findings it discusses are drawn from the author's doctoral research carried out between 2012 and 2014.
Research design and methods
There were two main tools. The first was a qualitative structured in-depth interview by the author of 15 doctors, five each from the fields of oncology, hospice home care and family medicine. It asked how they saw the Terminally-Ill Patients Act being implemented, about their approach to the care of terminally-ill patients and to palliative and hospice care, about their conduct of EoL and AMD conversations, about coping with the challenge of truth-telling and communications skills, and about the barriers to transferring patients to hospice and palliative care. The second tool was a much longer quantitative self-administered questionnaire for 90 palliative care practitioners (30 hospital oncologists, 30 home care specialists, and 30 family medicine specialists) covering their knowledge of, and attitudes to, topics such as the implementation and core principles of palliative care and Israel's Terminally-Ill Patients Act, communication issues such as truth telling, drawing up AMDs and EoL planning, and the handling of ethical issues in palliative care and in the Act's implementation, such as the transition from curative to palliative care.
The research population for the study was all the doctors working regularly in the field of palliative care, in hospital, community and home care settings. The hospital sector was represented by hospital doctors working in a range of departments (family medicine, geriatrics, internal medicine, oncology and others). The community medicine sector was represented by doctors working (a) in community clinics and (b) in home-care units. The sample was made representative of the research population by ensuring that 20% of the doctors sampled were Arab-Israelis, that it was geographically heterogeneous and that all four major health management organizations participated. The sampling method combined cluster, directed and convenience sampling. Data were analysed by content analysis and descriptive statistics (chiefly means and correlations).
Israel's health care system and its provision of EoL care
Israel's population has for decades been served by an advanced health care system which aims to emulate American and European best practice. Since 2005 the Israeli healthcare system has been making a strong effort to improve its medical training, yet new graduates are very soon involved in clinical care, of which the care of terminally-ill patients will be part, and only a small minority of them will have been trained for this element of their daily ward practice. In Israel, as elsewhere, it is usually regular doctors and nurses who are 'left' to care for dying patients, and many feel unprepared for this role [10]. It is clear that end-of-life training is more conspicuous by its absence than by its presence and that what did exist was not powerful enough to give trainees new insights or alter entrenched attitudes.
Palliative care was provided in the main in Israeli general hospitals. Hospice care was available for the six months before death but, in practice, its duration varied widely, suggesting different perspectives on the appropriate timing for the transition from curative care to palliative care. At the time of this research the 76 beds in three hospice units made up a nationwide bed-population ratio of 1.02 per 100,000, much lower than the ratio of 5 per 100,000 recommended by the Oxford Textbook of Palliative Medicine. The four main health management organizations also operated some eighty Home Care Units which provided medical, nursing and rehabilitation care across the country for bedridden persons in their own homes.
In 2005 the Terminally-Ill Patients Act was passed. The Act was designed to regulate the care and treatment of incurable, terminally-ill persons, striking a balance between the values of the sanctity of life, recognition of the patient's autonomy of choice, and the importance of the quality of life beyond the importance of life itself. It instituted the instrument of Advance Medical Directives (AMDs), by which an individual states their wishes as to how they should be medically treated should they become terminally-ill and lose lucidity of mind. These advance instructions may be designed to rule out life-prolonging treatment or to constrain attending physicians to give such treatment even when they do not consider it medically justified. A patient's right to consent or not to any particular form of treatment had already been set out in the Patients' Rights Act, 1996 (Ministry of Health, 1996).
The provisions and non-provisions of the 2005 Act most relevant to truth-telling were the following: 1. It required that the patient be given full information as to his/her treatment and care choices, according to their capacity to take that information in. It laid down that the patient had the right to know, the right to be told the truth and the right to prepare for death.
2. It defined the concepts of a "terminally ill patient" and an "end-stage patient".
3. It introduced the concept of Advance Medical Directives and required doctors to respect them. 4. It laid down that any decision in AMDs shall be made only by the individual themselves and of their own free choice, not by their family members and not according to any other consideration. 5. It laid down the importance of alleviating pain and suffering even if this involves a reasonable risk of the patient's death. 6. It stressed the importance of the "personal physician" holding an end-of-life conversation with the patient as a key to enabling the patient to realize the abovementioned rights, but it did not specify who that physician is.
Findings
(a) What truth do patients not get?
We can distinguish here between (a) truth telling about the patient's illness, its prognosis and treatment plan, and (b) truth telling about EoL planning, AMDs and the resort to palliative care (see Discussion).
• No less than 78% of doctor-respondents admitted giving their patients only partial information about their medical condition.
• For a variety of reasons the majority of doctors avoided end-of-life conversations, at best preferring to wait for the patient to broach the issue.
• In the qualitative interview all family medicine practitioners and oncologists declared that they avoided 'ethical issues' such as abandoning curative treatment for palliative/hospice care and planning for death.
• Only 37% of doctor-respondents said that they frequently, or more often, "encourage my terminally-ill patients to draw up Advance Medical Directives".
• In Israel, family members have by custom had a special role in communicating bad news. Although the 2005 Act requires that physicians disclose diagnoses first to patients themselves, whether the family agree or not, it has long been culturally approved that family members receive the information before patients, and families are requested to decide how and to what degree the patient should be told. Thus, while family members typically receive full medical information, including incurability and estimated prognosis, patients receive information gradually, and often partially, based on their preferences.
(b) Why do patients not get the truth? Potential obstacles to truth-telling reflect attitudinal, informational, economic, societal, and system barriers that are perceived differently by patients, physicians, and health care administrators. Last but not least, we should not forget that every doctor brings his/her own personal values onto the ward.
(b1) Beliefs/attitudes • Over 77% of doctors believed that "concealing information from the patient can sustain his/her hope and prevent harm".
• Almost 70% of doctors agreed in principle that "A multiplicity of treatment options is an obstacle to holding a conversation with the patient about end-of-life and a change in treatment goals".
• Israeli oncologists are trained to cure and hate to admit failure in this regard. Respondents said to me: o "Telling a patient their treatment goal has changed is not an automatic thing with me. It's easier to mend a broken leg or give antibiotics, easier to play the role of healer rather than talk about death, with all its sense of medical failure." o "We have been taught to treat to the end. I never give up. Nowadays I have a wide range of treatment options I can offer."
(b2) Doctors' knowledge of the Terminally-Ill Patients Act, palliative care, and EoL planning
The Act: • The great majority of respondents reported being given no formal training in the provisions of the Act. They picked up information about it at conferences or study days but had not studied it deeply or systematically; • Those doctors who had a more thorough knowledge of the Act's provisions said, nonetheless, that it was complicated and hard to understand. Less than 30% of doctors felt "that I have mastered the provisions of the Terminally-Ill Patients Act". No more than 28% were aware of the Act's definition of 'terminally-ill'. Most doctors reported that they were not in a position to initiate an end-of-life conversation with a patient because the Act was not clear enough on when curative treatment should give way to end-of-life care. 61% were unable to distinguish accurately between 'hospice care', 'terminal care', 'palliative care' and 'supportive care'.
• Doctors versed in the Act and the various aspects of palliative care had positive attitudes to truth telling and palliative care; • Doctors trained in palliative care and the Act had considerably more knowledge about starting/transitioning to palliative care than doctors without this training. They also knew more about the ethical issues associated with the Act.
• The more doctors know about the effects of telling patients the truth the more positive their attitudes to doing so.
Palliative Care: • Doctors trained in palliative care were, overall, more positive in their attitudes to that form of care and its component elements; • Almost 70% held in principle that "A multiplicity of treatment options is an obstacle to holding a conversation with the patient about end-of-life and a change in treatment goals".
• 72% agreed that they "fear that referring a patient to hospice care accelerates their death". What the doctors may in fact be afraid of, without admitting it in so many words, is euthanasia (see next finding). • 69% agreed with the statement that "Not infusing liquids into the patient in hospice care symbolizes for me that this form of care shortens life".
• The more they know about palliative care the more positive their attitudes to it and to telling patients the truth about their prognosis.
• Only 54% of the doctors agreed that "Terminally-ill patients should get palliative care in the last 6 months of their life", i.e. the point in time when palliative care should begin, which is core to the 2005 legislation; more than half the doctor-respondents were unaware that its timing had been so fixed.
EoL planning: • 91% of doctors felt that their "limited ability to predict when a patient will die holds me back from initiating an end-of-life conversation", that is, they felt the patient was not terminally-ill enough.
• 88% felt that their "lack of time is an obstacle to holding difficult end-of-life conversations".
• 93% of doctors felt that their "lack of communication skills training is an obstacle to holding end-of-life conversations".
• Doctors felt that, lacking knowledge about palliative care, it was best they steered clear of EoL conversations with patients for fear of doing them harm.
(c) Lack of Training • Almost 70% of doctors agreed in principle that "A multiplicity of treatment options is an obstacle to holding a conversation with the patient about end-of-life and a change in treatment goals". Yet this "multiplicity of treatment options" is a sign of progress in healthcare, in that it provides doctors more treatment options to offer a patient than was the case in the past. In other words, they had not been trained in a modern-day approach to EoL care and treatment.
• Two-thirds of doctors agreed that "A doctor's work with terminally-ill patients is made more complicated by ethical, social and religious issues".
Israeli oncologists perceived in themselves a general lack of the skills to handle EoL planning and care. • "Passing the Act does not mean it automatically gets implemented. The reality in oncology is that we are dependent on the media, on the state-sanctioned basket of drugs and therapies, on private health insurance policies, and so we find ourselves giving curative treatment to the end."
(e) Doctors' emotional resources • Over 75% stated that "Disclosing the truth to the patient can cause me embarrassment and unease at how they (patients) will react". • Over two-thirds agreed that "An end-of-life conversation with the patient raises the issue for us of our own death. As physicians, fears of our own death influence extensively how we face up to the end-of-life issue".
o "I do not initiate discussing such sensitive issues. I stick to the medical facts. I wait for the patient to raise such a matter and then I lay stress on, for example, the importance of quality of life.
o "I wait for the patient or the family to raise such a matter. I know I should take the first step but in practice I am not up to it. I simply do not have the strength for it." o As for Advance Medical Directives: "I just cannot look the patient in the eyes and say to him. 'Let's fill out some forms about your death.' So I just answer questions when I'm asked them and where I think it necessary bring in a social worker." (f) Patients' attitudes • Many patients do not want to hear the full truth about their condition (or at least that is what their family maintain); • Others want their physician to take the decisions alone; • Still others insist on every possible curative measure being attempted until the end and will listen to no other option.
(g) Family resistance Almost every doctor agreed that "Sometimes it is the family that is the main obstacle to referring a patient to hospice care". Yet the Act lays down that if a patient is cognitively competent to take decisions for himself/herself the family has no right to prevent a doctor discussing different care options with the patient.
(h) Not clear who is responsible for telling the patient the truth Every doctor thinks it is another doctor's job to inform the patient of a change in treatment site or goals. Patients themselves, at least those treated in hospitals, have no such doubts: they expect their oncologist to break bad news; after all, he/she and their team have usually been caring for the patient for some time. Unfortunately, the 2005 Act is no help: it lays the responsibility on the patient's "personal physician" but does not say which doctor occupies this role.
(i) The gap between what doctors declare and what they practise
We see a wide gap between doctors' principles, or at least what they feel they ought to declare as their principles, and their behavior in practice.
• 80% of doctors reported that they "prefer to be told all the details of a patient's personal story". They wanted to know as much as possible about the patient's circumstances in order to manage their own situation vis-à-vis the patient. That is, they wanted more for themselves than they were willing to give the patient.
• Only 37% of doctor-respondents said that they frequently, or more often, "encourage my terminally-ill patients to draw up Advance Medical Directives". Yet over 85% of doctors agreed that "Every patient has the right to know how terminal their condition is and to have their Advance Medical Directives respected".
• Almost every doctor agreed in principle that it was important to empower the patient by giving them information about changes in treatment goals, thus preventing their uncertainty, but in ward practice the great majority of doctors usually failed to observe this principle.
(j) Variation by profession (oncologists v. home care specialists v. family doctors)
Oncologists tended to stress the difficulty of the Act's implementation: "Theoretically, the Act helps but it is hard to put into practice." "I am not the one to take hope away from my patients. If there is no choice then my preference is to talk with the family and not directly with the patient. As oncologists we prefer to keep making efforts up to the end or until the patient themself takes the initiative to talk about the end of life." The home care specialists were markedly the boldest in implementing the Act, while family doctors thought that implementing the Act was the oncologists' job. The home care specialists agreed with them that the oncologists bore the brunt of the responsibility for preparing the patient for the end, but it is clear that oncologists found this very problematic.
In talking about PC, home care specialists reported having more of the necessary skills and resources than oncologists and family doctors. This is perhaps unsurprising since it is the home care specialists who have chosen to face up to the issues of EoL planning and care and equipped themselves for that. One said: "None of my colleagues [hospital oncologists] has attained emotional awareness of their own death and so steer clear of end-of-life conversations." Home care specialists were markedly more willing to persist to the end with the issues raised by the Act in order to give their patients a more dignified death. For instance, they were more willing to give the patient the information which would enable them to make their own choices. They were correspondingly more worried by the advance of the disease bringing about cognitive deterioration, which would prevent the patient expressing their wishes, in which case a guardian or family members would have to make the necessary choices. By contrast, all family doctors and oncologists responded by shying away from such issues. Home care specialists (and only home care specialists) were unafraid to face up to whatever might occur in an EoL conversation, perhaps because they appreciated better what their patients wanted: a true prognosis of the time left to them, to discuss their quality of life and the circumstances of their death.
Home care specialists were more open to ethical problems: "I cope with any issue that arises and even broach the subject as part of my patient intake. I want to give the patient the best care possible and so I need to know their wishes and we talk about that in team staff meetings." This multidisciplinary approach to ethical issues is a hallmark of home care: "Any issue that comes up, no matter how difficult, we face up to it as a team so that we provide the best quality of life we can." The knowledge displayed in the responses to the quantitative questionnaire about starting/transitioning to palliative care differed significantly by specialism: doctors working in home care and family medicine know markedly more than oncologists. The same is true with respect to telling patients the truth: oncologists know the least of the three specialism groups. And oncologists also score lowest on attitudes to telling patients the truth, with family medicine specialists having the most positive attitudes. On knowledge about the provisions of the Terminally-Ill Patients Act, it was the family medicine specialists who scored lowest and the home care experts who scored highest.
Discussion
Israel's deficiencies in providing dying patients the quality and place of death they would prefer threaten to become a national issue of disrespect for patients and their wishes for death with dignity. Part of the problem is that Israel is very much a multicultural society. If advanced EoL and palliative care are to expand, they have to find a way to adapt their principles to divergent cultural and religious beliefs, practices and customs.
The connection between truth-telling per se and truth telling about palliative care
Surely it is just to argue that not telling a patient about the possibilities of palliative and hospice care, and not giving them the opportunity to discuss these matters and plan their coming care, treatment and death, is withholding a very significant part of the truth they should know. Truth-telling and EoL conversations: can you have one without the other? Further, if an oncologist is ignorant, or largely so, of the provisions and requirements of the Terminally-Ill Patients Act, then he or she is quite unequipped and unable to tell their patients all the truth they should know.
The consequences of non-truth telling
Truth-telling as patients' right and doctors' obligation
Open and candid communication with the patient is the heart and soul of palliative care and the basis of doctor-patient trust. A patient suffering from a life-threatening illness deserves full, accurate and honest information about their condition, but the findings show that relatively few patients get this. When the patient does not receive honest, straightforward information, the decision-making process is distorted. They cannot plan autonomously for their own future. It is the patient's right to choose how they will be treated (or not) and how they will die. It is their right to issue Advance Medical Directives. Not given full information about their medical condition and the options available to them, they cannot decide if they want curative treatment 'to the bitter end' or prefer the dignity and quality of life of hospice care.
Non-truth-telling is a serious obstacle to the transfer of terminally-ill patients to palliative care and to other key elements of EoL care. EoL decisions are postponed until too late, so that the benefits of palliative and hospice care are not fully exploited. Relatively few patients get the chance to draw up Advance Medical Directives, or to discuss the option of hospice care and their place of death. The findings of the present study make it abundantly clear that if the oncologist does not take the initiative to broach the issue of Advance Medical Directives, they will in most cases not be drawn up and registered.
Truth-telling does not cause harm to patients.
On the contrary, most patients want to be involved in decision-making, but doctors' awareness and attitudes on this issue, and their lack of the communication skills which would help them be open with the patient, often deprive patients of this right. Most patients prefer the truth and want it undecorated by euphemism and medical jargon. They want to talk about their quality of life and the circumstances of their death. Doctors frequently censor the information they give to patients about their outlook on the grounds that what someone does not know cannot harm them [20], but avoidance of communication about the reality of a patient's situation does not protect them from experiencing the considerable psychological distress of uncertainty [5]. At the heart of any patient-centered approach is the need to understand the meaning of the illness for the patient, a central goal of any whole-person approach to end-of-life care [13]. In other words, doctors must learn how to listen fully as much as to speak truthfully. They must be willing to listen to the patient's views, fears and preferences for their future care and treatment. This is perhaps even harder for them than doing most of the talking themselves.
Oncologists' training needs
A notable lack mentioned especially by oncologists was training in the skills needed for managing end-of-life conversations: all said this was not a part of current training programs, and that this and the uses of palliative care ought to be given more place in medical training. Few Israeli medical schools and even fewer residency training programs mandate courses or clinical experience in end-of-life care. Palliative care is not taught in basic medical training. Medical students, as noted at the beginning of this paper, frequently do not feel prepared to discuss end-of-life issues with their patients, and physician surveys have demonstrated a desire for ongoing education in this area [12]. In Israel, there are no formal courses in palliative care in doctors', nurses' and social workers' basic training. We cannot ignore that attitudes and knowledge may be markedly affected by medical education.
Studies have shown that medical students who complete clinical rotations and courses in palliative care feel more comfortable with death and caring for dying patients [17]. The differences between the three professions involved in EoL care, set out in the findings above, also demonstrate the effectiveness of specifically designed training, although we cannot rule out that the very choice of profession results to some extent from individual beliefs and choices.
The UK General Medical Council's [9] second edition of Tomorrow's Doctors recommended core teaching on 'relieving pain and distress, together with care of the terminally-ill' [10]. The UK Department of Health too has recently highlighted the need to educate all health care professionals to try and improve 'end-of-life care' and the third edition of Tomorrow's Doctors reiterates the need for students to be prepared to care for patients at the end of life [9].
Giving knowledge does not necessarily alter beliefs
Firstly, we need to state that some training clearly works. We have seen that doctors trained in palliative care and the 2005 Act knew much more about, and had far more positive attitudes towards, core elements of good EoL care and treatment. However, it would seem that it is harder to use training to alter doctors' attitudes than to increase and improve their knowledge. Although some studies have assessed physicians' knowledge and attitudes concerning various aspects of terminal care, few have examined the effect of knowledge and attitudes on actual physician practice on the ward (nor has the present study, unfortunately), and the results vary [7]. In a study of the pain management practices of physicians, the authors found no evidence that knowledge or attitudes about pain medication were associated with prescribing behaviors [3]. However, in three other studies which examined hospice-referral patterns, physicians' attitudes concerning disclosure and communication were associated with hospice-referral behaviors [2].
It is clear that education alone is unlikely to substantially change practice patterns [4]. Ideally, education would be one component of a more comprehensive systems-change approach. Empathic and compassionate communication with the patient requires from the attending physician not only the readiness and skills for this difficult task but a considerable degree of self-awareness. It will be critical for all palliative care experts to spend 40%-50% of their time educating and supporting other health care professionals and community support systems, in addition to providing consultation and direct patient/family care [21].
It is clear that when doctors blame their lack of time for not broaching EoL conversations, the true explanation lies elsewhere. Some doctors are aware of this inability in themselves, but many are not and need self-awareness training. For this to change, doctors need to start asking themselves why they hold the attitudes they do and whether those attitudes best serve their patients' welfare. It is vital too that trainees be active participants in their training, which will include role play, exercises in reflectivity, case analyses, maintaining a personal journal, lectures, and the analysis of video clips and films.
The gap between respondents' declared beliefs and actual ward practice
What does this gap mean or imply? Given that respondents' answers to the intensive qualitative interview were on the whole markedly more negative and sombre than the answers to the self-administered quantitative questionnaire, one possibility is that it was far harder to give self-deceiving answers to a knowledgeable interviewer than to a sheet of paper. A second possibility is that many respondents said what they thought the researcher wanted to hear or what they thought they ought to say. A third possibility is that the respondents are genuinely conflicted: many feel that what they find themselves doing is not what ideally they would want to do.
Conclusions and Recommendations
Shared responsibility/teamwork
Oncologist as 'commander': The present study has demonstrated that leaving the oncologist in sole charge of hospital-sited end-of-life care is a recipe for failure in terms of truth telling. From my own long experience, it is fear of what might make its appearance in an EoL conversation that deters most oncologists from this central component of modern patient-centred EoL care. Yet shared decision-making by all members of the multidisciplinary team would take some of the responsibility off oncologists as well as ensure a higher quality of decision-making. This widening of the 'circle of responsibility' to other hospital professionals, including nurses, social workers, home care coordinators, palliative care physicians, psychiatrists, psychologists and spiritual care specialists, each with their own input and experience, is invaluable [22]. The caregivers of patients in a hospice setting perceive nurses and social workers as most helpful with the transition to hospice care [11].
The critical value of teamwork lies in the very fact that it spares the oncologist the feeling of facing the patient and their family alone. Teamwork in in-patient care could also involve family doctors and hospice home care specialists, both of whom have demonstrated in the present study attitudes far more sympathetic to palliative care than oncologists display.
The oncologists interviewed for the present study admitted to being untrained in team-working. They and other potential team members frequently have little awareness about each other's informational roles and responsibilities. Oncologists in particular need to understand the roles of other disciplines and the advantages of the interdisciplinary approach in health care [6].
Medical education and training, however, provides little or no preparation for interdisciplinary practice and this recommended teamwork is unlikely to succeed without training in co-working, coordination and communication.
Researchers have suggested that attitudes and stereotyping must be addressed early in professional education. Fineberg et al. write, "Learning together allows team members to experience the viewpoints, knowledge, skills, and particular pressures of colleagues in other disciplines."
Sharing data/decisions with the patient
A common situation among doctors is that they cannot predict life-expectancy with sufficient accuracy and so fear to take responsibility for initiating an end-of-life conversation. This makes it all the more important that the doctor share his/her knowledge with the patient so that the patient can plan for the end of their life. When the benefits of an intervention are not discussed with and understood by patients, it threatens not only their ability to participate in decision-making, but also the quality of the decisions made without their input.
Patients in qualitative studies spontaneously mentioned their participation in various decisions, indicating that it is an issue that matters to them [1]. Seven studies have examined whether palliative care patients generally prefer collaborative roles in decision-making. Five of these studies used the same five-point scale about treatment decisions and according to these five studies between 40% and 73% of the 379 participants prefer to share treatment decisions with their physicians [19].
The Israel Ministry of Health needs to take initiatives
Training oncologists in the 2005 Act
The Ministry of Health has a commitment to setting standards for the study and mastery of the provisions of the 2005 Act. In practical terms, doctors' mastery of the 2005 Act is currently mediocre, and their attitudes to it and to the principles of palliative care embodied in it are even more negative. The Ministry of Health should require doctors to take periodic short study courses and/or longer training programs in the implementation of the Act, and this has to be regularly enforced: doctors should be given positive and negative feedback and penalized if necessary.
Failings of the Act
The 2005 Act makes the "personal physician" responsible for informing the patient of a change in treatment goals, but does not say who the personal physician is.
The Ministry of Health must make it clear to hospital oncologists that a key component of their responsibility as the chief provider of care and treatment to terminally-ill patients is their duty, from beginning to end, to maintain regular and open communication with patient and family and build up relations of trust so that, at the required moment, they, the doctor, are in a position to open an end-of-life conversation. In that conversation they must be equipped, if necessary, to persuade/inform patient and family that treatment goals have to change from cure to palliation and preparation for death.
The critical sensitivities involved in handling end-of-life care in a manner that supports the patient's dignity and autonomy make it likely that certain personality traits are needed in the oncologist. The national regulator has to give thought to how these traits can be encouraged and sustained.
Recommendations formally submitted to the Israel Ministry of Health
With the aim of having the findings of the present study applied to current practice, a multidisciplinary panel was appointed (including the author) to submit recommendations for action to the Israel Ministry of Health. The panel drew up the following recommendations: 1. It is our opinion that the task of breaking the bad news to a patient that they have entered the category of the "terminally ill" should be given to the hospital specialist who has been treating the patient's illness. He/she would inform the attending physician in the community that he/she intends to break the news, and would cooperate with that physician as necessary, to inform the patient of his/her having entered this category and of their right to draw up Advance Medical Directives.
2. According to doctors, the 2005 Act's definition of a 'terminally-ill patient' is insufficiently clear. Indeed, to determine that a person is definitely "terminally-ill" is extremely problematic. Medicine is not mathematics and this determination cannot be made with the required certainty. The Ministry of Health must therefore revisit and review the current definition of a 'terminally-ill patient'.
Purified E255L Mutant SERCA1a and Purified PfATP6 Are Sensitive to SERCA-type Inhibitors but Insensitive to Artemisinins*
The antimalarial drugs artemisinins have been described as inhibiting Ca2+-ATPase activity of PfATP6 (Plasmodium falciparum ATP6) after expression in Xenopus oocytes. Mutation of an amino acid residue in mammalian SERCA1 (Glu255) to the equivalent one predicted in PfATP6 (Leu) was reported to induce sensitivity to artemisinin in the oocyte system. However, in the present experiments, we found that artemisinin did not inhibit mammalian SERCA1a E255L either when expressed in COS cells or after purification of the mutant expressed in Saccharomyces cerevisiae. Moreover, we found that PfATP6 after expression and purification from S. cerevisiae was insensitive to artemisinin and significantly less sensitive to thapsigargin and 2,5-di(tert-butyl)-1,4-benzohydroquinone than rabbit SERCA1 but retained higher sensitivity to cyclopiazonic acid, another type of SERCA1 inhibitor. Although mammalian SERCA and purified PfATP6 appear to have different pharmacological profiles, their insensitivity to artemisinins suggests that the mechanism of action of this class of drugs on the calcium metabolism in the intact cell is complex and cannot be ascribed to direct inhibition of PfATP6. Furthermore, the successful purification of PfATP6 affords the opportunity to develop new antimalarials by screening for inhibitors against PfATP6.
protein; and three putative ATPases that seem to belong to the Golgi-endoplasmic reticulum-type family) and a single Ca2+/H+ exchanger were identified to be involved in the maintenance of calcium homeostasis in P. falciparum (23).
Recently, Krishna and co-workers (24-26) observed that the ATPase activity of the single SERCA of P. falciparum, PfATP6, expressed in Xenopus laevis oocytes was inhibited by artemisinin (Ki ~150 nM). Isobologram analysis and competition studies with fluorophore derivatives localizing to parasites were consistent with a common target for artemisinin and thapsigargin (Tg), a specific inhibitor of SERCA-type proteins, because an antagonism was observed in the action of these drugs. PfATP6 and SERCA1 share an overall 40% identity with a well conserved transmembrane region, whereas the cytosolic sequence of the parasite Ca2+-ATPase contains about 200 additional residues. Mutation studies on PfATP6 expressed in oocytes suggested that, in particular, Leu-263 modulates the sensitivity of this enzyme to artemisinin (26). This experiment was in part based on the finding that rabbit SERCA1, whose Tg binding site is near Phe-256 (27), is insensitive to artemisinin, and its amino acid sequence contains a glutamate (Glu-255) at the position homologous to Leu-263. When Leu-263 of PfATP6 was mutated to glutamate, sensitivity to artemisinin was decreased (26), and conversely, when the glutamate residue of SERCA1 was mutated to a leucine, SERCA1 became sensitive to artemisinin (26). These results suggest that PfATP6 is a target for artemisinins, with further support derived from the correlation between certain point mutations in PfATP6 in field isolates and reduced in vitro sensitivity to artemether (28) and dihydroartemisinin (29), although not all cases of artemisinin resistance are related to these mutations, some instead revealing other polymorphisms (3,19,30,31).
Up to now, only two of these transporters (PfATP6 and PfATP4) and the mutated SERCA1a E255L have been studied in the X. laevis oocyte system (24,26,32). In order to further examine the interaction of artemisinins with PfATP6 and the SERCA1a E255L mutant, it is important to characterize these Ca2+-ATPases in more detail (functionally and structurally). For that purpose, expression in alternative systems (we investigated COS-1 and yeast cells) and purification of the proteins are required. Our group recently developed a method to purify rabbit SERCA1a by affinity chromatography after its expression in yeast (33), and this method was successfully used for studying and crystallizing wild type (34) and mutated SERCA1a (35). In the present study, yeast expression was applied to purify and functionally characterize SERCA1a E255L and PfATP6 and to study the effects of artemisinin and other drugs when combined with detergent or lipids.
COS-1 Cell Experiments-Site-directed mutagenesis of cDNA encoding SERCA1a inserted into the pMT2 vector (36) was carried out using the QuikChange site-directed mutagenesis kit (Stratagene), and the mutant cDNA was sequenced throughout. To express wild type or E255L mutant cDNA, COS-1 cells were transfected using the calcium phosphate precipitation method (37). Microsomal vesicles containing the expressed proteins were isolated by differential centrifugation (38). The concentration of expressed Ca2+-ATPase was determined by an enzyme-linked immunosorbent assay (39) and by determination of the maximum capacity for phosphorylation with ATP ("active site concentration"; see Ref. 40). ATPase activity was determined by following the liberation of Pi (41) in the presence of 4 µM calcium ionophore A23187 to prevent inhibition caused by rebinding of Ca2+ to the luminally facing Ca2+ sites (40). Inhibition assays were performed at 25°C or 37°C by first preincubating microsomal vesicles together with the drug over an 8-min period and then measuring the ATPase activity for 10 or 30 min, respectively.
Yeast Transformation and Selection of Individual Clones-The Saccharomyces cerevisiae yeast strain W303.1b/Gal4 (a, leu2, his3, trp1::TRP1-GAL10-GAL4, ura3, ade2-1, can^r, cir+) was the same as previously described (42). Transformation was performed according to the lithium acetate/single-stranded carrier DNA/polyethylene glycol method (43). Growth conditions and criteria for expression of the Ca2+-ATPase were carried out as described for the test of individual clones and for the expression on minimal medium (42,44). A colony streaked onto a minimum medium storage plate was toothpicked into minimum medium (0.1% bactocasamino acids, 0.7% yeast nitrogen base, 2% glucose (w/v), 20 µg/ml adenine) and grown at 28°C for 24 h with shaking (200 rpm). For each assay, 500 µl of the minimum medium precultures were centrifuged for 5 min at 4°C and 1000 × g_av (rotor AM2.19, Jouan MR22i) and resuspended in 5 ml of minimum medium with 2% galactose instead of glucose to induce expression. These cultures were incubated at 28°C for 18 h with shaking. For each culture, 4 A600 units were centrifuged for 5 min at 4°C and 8000 × g_av. After washing with water, the pellets were resuspended in cooled 2% trichloroacetic acid. Glass beads were added, and the suspensions were mixed with a vortex at maximal speed for 8 min at room temperature to break the cells. The tubes were then placed on ice, and the glass beads were sedimented. The supernatant was kept on ice. After three washes with 2% trichloroacetic acid, all of the collected supernatants were gathered. The resulting solution was kept for 15 min on ice for protein precipitation. Then the samples were centrifuged for 15 min at 4°C and 30,000 × g_av. The pellets were resuspended in 100 µl of 50 mM Tris-Cl, pH 7.5. These samples were analyzed by Western blotting after SDS-PAGE in order to choose which clones were best expressed.
Expression of SERCA1a E255L in Fernbach Flasks-Growth conditions of yeast and induction of the expression of the mutant were the same as previously published for native SERCA1a expressed in yeast (33).
Growth of Yeast Cells and Large Scale Expression of PfATP6 Using a Fermentor (Techfors-S Apparatus, INFORS HT, Massy, France)-This method is based on the one developed for SERCA1a in Fernbach flasks with the following modifications: 20 liters of YPGE2X were inoculated with 1.2 liters of a culture at exponential phase in minimum medium (~6 × 10^6 cells/ml). Culture was performed at 28°C under high aeration (1 volume of air/volume/min; stirring rate 300 rpm) at the beginning of the culture and then regulated to maintain a dioxygen saturation of 20% until the cell density reached 3 × 10^8 cells/ml. The culture was then cooled to 18°C, the regulation of dioxygen saturation was stopped, and the stirring rate was maintained at 300 rpm, but aeration was lowered to 0.15 volume of air/volume/min. Thirty minutes later, a solution of sterile galactose (500 g/liter) was added to a final concentration of 20 g/liter, and the culture was continued for 13 h (45).
Preparation of Light Membrane Fractions-The light membrane (LM) fraction was obtained after breaking yeast cells with glass beads and differential centrifugation of the crude extract as described previously (42). The membranes were finally resuspended in Hepes-sucrose buffer (20 mM Hepes-Tris (pH 7.5), 0.3 M sucrose, 0.1 mM CaCl2, 1 mM phenylmethylsulfonyl fluoride) at a final volume corresponding to 0.5 ml/g of the initial yeast pellet. The membranes can be stored at −80°C until use. The amount of the protein of interest was estimated by Western blot analysis using the appropriate antibody.
Solubilization and Batch Purification of PfATP6 by Streptavidin-Sepharose Chromatography-These procedures are described in the supplemental material.
Protein Estimation and Ca2+-ATPase Quantification-Protein concentrations were measured by the bicinchoninic acid procedure (46) in the presence of 2% SDS (w/v) with bovine serum albumin as a standard. SERCA1a from rabbit muscle (SR), used as a standard for protein estimation, was prepared as previously described (47). Ca2+-ATPase quantification was performed either by Coomassie Blue staining of gels after SDS-PAGE or by Western blot using known amounts of SR as standards.
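For orientation, the concentration read-out from such a bicinchoninic acid assay is simply interpolated from a BSA standard curve. The short Python sketch below illustrates that step; the standard concentrations, absorbance readings and dilution factor are invented for illustration and are not data from this study.

# Illustrative only: estimating protein concentration from a BSA standard curve (BCA assay).
# The standards and A562 readings below are made-up numbers.
import numpy as np

bsa_ug_per_ml = np.array([0.0, 125.0, 250.0, 500.0, 1000.0])   # BSA standards
a562 = np.array([0.05, 0.17, 0.29, 0.55, 1.04])                # hypothetical readings

slope, intercept = np.polyfit(bsa_ug_per_ml, a562, 1)          # linear fit: A562 = slope*C + intercept

def protein_ug_per_ml(sample_a562, dilution_factor=1.0):
    """Back-calculate the protein concentration of a (possibly diluted) sample."""
    return dilution_factor * (sample_a562 - intercept) / slope

print(round(protein_ug_per_ml(0.42, dilution_factor=10.0)))    # e.g. a 1:10 diluted membrane sample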
SDS-PAGE and Western Blotting-For SDS-PAGE, samples were mixed with an equal volume of denaturing buffer, heated at 90°C for 2 min, and loaded onto Laemmli-type 8% (w/v) polyacrylamide gels (48). The amounts of proteins or volumes of initial samples loaded in each well are indicated in the figure legends. After separation by SDS-PAGE, gels were stained with Coomassie Blue, or proteins were electroblotted onto polyvinylidene difluoride Immobilon P membrane (49). For each gel, molecular mass markers (Precision Protein standards, Bio-Rad) were loaded.
The Western blotting was followed by detection with avidin-peroxidase for the recognition of biotinylated proteins or by immunodetection with the polyclonal antibody anti-SERCA1a 79B (a gift from A.-M. Lompré, INSERM, France) as described previously (33).
Immunodetection with Anti-PfATP6-For immunodetection with anti-PfATP6 antibody, a polyclonal antibody generated in goat against the peptide CQSSNKKDKSPRGINK (the sequence from Q to K corresponds to the 574-588 region of PfATP6) was used. Anti-PfATP6 antibodies were purchased from Bethyl Laboratories. After electroblotting, the membrane was blocked for 10 min in PBST (90 mM K2HPO4, 10 mM KH2PO4 (pH 7.7), 100 mM NaCl, 0.2% (v/v) Tween 20) containing 5% powdered skim milk. The primary antibody (1:10,000) was then added to the solution and incubated for 1 h at room temperature. The membrane was washed once for 10 min in PBST and then incubated with horseradish peroxidase-conjugated secondary rabbit anti-goat antibody (1:10,000) in PBST containing 5% powdered skim milk. After three washes with PBST for 10 min each, detection of proteins was performed with ECL (GE Healthcare). The chemiluminescence signal was acquired with a GBox HR 16 apparatus coupled with GeneSnap acquisition software and analyzed with GeneTools analysis software (Syngene, Ozyme, France).
Preparation of Lipids-Phospholipids dissolved in chloroform were dried in a stream of nitrogen. Dried phospholipids were then dissolved at 5 mg/ml in C12E8 (20 mg/ml).
Detergent Removal and Relipidation-After purification and before glycerol concentration adjustment, PfATP6 was concentrated with the aid of a 100 kDa cut-off concentrator unit (Centricon YM100, Millipore). Egg yolk phosphatidylcholine was then added to concentrated PfATP6 at a final concentration of 1 mg/ml and a lipid/protein ratio of 3:1 (w/w). To remove detergent, Bio-beads SM2, prepared as described (50), were added to the solution at a Bio-beads/detergent ratio of 200:1 (w/w), and the whole solution was gently stirred at 18°C for 3 h. Bio-beads were then removed, and the solution was kept at 4°C.
ATPase Activity Measurement-ATPase activity was assayed using a spectrophotometric method as described (51,52). In general, 1 to 10 µg of protein was used in 2 ml of reaction buffer (50 mM Tes/Tris, pH 7.5, 0.1 M KCl, 6 mM MgCl2, 0.3 mM NADH, 1 mM phosphoenolpyruvate, 0.1 mg/ml lactate dehydrogenase, 0.1 mg/ml pyruvate kinase, containing 0.1 mM Ca2+ and 0.2:0.05 mg/ml C12E8/DOPC). Changes in reaction conditions, detergents, amount of proteins, and variations in reaction temperature are indicated in the figure legends. The reaction was started by the addition of 5 mM ATP to the medium and stopped by the addition of EGTA to a final concentration of 750 µM. The difference between the slopes obtained before and after the addition of EGTA is considered to be due to the Ca2+-ATPase activity. To obtain the specific activity, the concentration of Ca2+-ATPase (SERCA1a E255L or PfATP6) was determined from Coomassie Blue-stained gels after SDS-PAGE.
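In this coupled assay, each ATP hydrolyzed is regenerated from phosphoenolpyruvate by pyruvate kinase, and the resulting pyruvate oxidizes one NADH via lactate dehydrogenase, so the Ca2+-dependent decrease in absorbance at 340 nm translates directly into a specific activity. The Python sketch below shows that conversion; the path length and the numerical example are illustrative assumptions, not measured values from this work.

# Illustrative conversion of an NADH-coupled assay slope into specific ATPase activity.
# Assumes 1 NADH oxidized per ATP hydrolyzed and a 1 cm optical path.

NADH_EXTINCTION_340 = 6.22  # mM^-1 cm^-1, molar absorptivity of NADH at 340 nm

def specific_activity(slope_total, slope_after_egta, volume_ml, atpase_mg, path_cm=1.0):
    """Ca2+-dependent activity in umol ATP hydrolyzed per min per mg of ATPase."""
    ca_dependent = slope_total - slope_after_egta             # A340 units per min
    rate_mM_per_min = ca_dependent / (NADH_EXTINCTION_340 * path_cm)
    return rate_mM_per_min * volume_ml / atpase_mg            # mM x ml = umol

# Made-up example: 2 ml reaction, 2 ug of ATPase, slopes of 0.012 and 0.001 A/min.
print(round(specific_activity(0.012, 0.001, 2.0, 0.002), 2))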
Inhibition Assays-The inhibition assays were performed by enzymatic spectrophotometry as described above except for vanadate because this inhibitor oxidizes NADH in a coupled enzyme system (see supplemental material).
The drugs used (stock solutions at 15 mM Tg, 20 mM 2,5-di(tert-butyl)-1,4-benzohydroquinone (BHQ), 3 mM cyclopiazonic acid (CPA), 10 mM artemisinin, 10 mM artemisone, and 10 mM dihydroartemisinin) were dissolved in DMSO. The effect of DMSO alone was taken into account, and we corrected for it when calculating the specific effect of the inhibitors.
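The correction itself is straightforward: the Ca2+-dependent rate measured with a drug is expressed relative to the rate obtained with the same volume of DMSO alone. A hedged sketch follows; the function names and numerical values are ours, chosen only to mirror the qualitative outcome reported under "Results and Discussion".

# Illustrative vehicle correction for the inhibition assays (rates in umol/min/mg).
def percent_remaining(rate_with_drug, rate_with_dmso_only):
    return 100.0 * rate_with_drug / rate_with_dmso_only

def percent_inhibition(rate_with_drug, rate_with_dmso_only):
    return 100.0 - percent_remaining(rate_with_drug, rate_with_dmso_only)

# Hypothetical numbers: a drug leaving >=90% of the control rate (no real inhibition)
# versus one abolishing most of the activity.
print(round(percent_inhibition(1.55, 1.70), 1))
print(round(percent_inhibition(0.10, 1.70), 1))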
All of the inhibition assays were performed by the addition of 2 µl of each drug solution. The effect of inhibitors was investigated by adding them during the ATPase turnover. In some experiments, the protein was preincubated with these drugs for a few minutes (but longer incubations were not more efficient) before the ATP addition, which triggers the start of the reaction (as explained in the figure legends).
Because Fe2+ ions were sometimes used in the assays, a 10 mM FeSO4 solution was freshly prepared and kept on ice (53). In some cases, we also used Fe2+ in the presence of metal chelators as suggested (54).
RESULTS AND DISCUSSION
Enzymatic Properties of the SERCA1a E255L Mutant Expressed in COS-1 Cells-We found that the SERCA1a E255L mutant, previously reported as being sensitive to artemisinin (26), could be expressed in COS-1 cells to a level similar to that of the wild type protein. To estimate the catalytic turnover of the expressed proteins, we measured the maximum rate of Ca2+-activated ATP hydrolysis in the presence of the calcium ionophore A23187 to avoid the "back inhibition" imposed by Ca2+ accumulated in the microsomal vesicles. The catalytic turnover rate of the SERCA1a E255L mutant, calculated by relating the ATPase activity to the maximal phosphorylation capacity, was very similar to that of the wild type, in agreement with earlier experiments showing that mutation of Glu-255 to Ala or His does not affect maximal turnover (55). The inhibition by thapsigargin, a specific inhibitor of SERCA proteins (56), was nearly complete for both wild type and E255L mutant (Fig. 1, Tg). This result indicates, on one hand, that the turnovers measured are the result of the expressed Ca2+-ATPases and, on the other hand, that Glu-255 does not play a decisive role in thapsigargin sensitivity despite the fact that this residue is located at the binding site (27). We then measured the effect of artemisinin alone or together with Fe2+ because it had been suggested that artemisinin could require the presence of iron to be an efficient inhibitor (24). Both in the presence and absence of Fe2+, there was no inhibitory effect of artemisinin, even at high drug concentration (Fig. 1, ART and ART+Fe2+).
Study of the Mutant SERCA1a E255L and Purification and Enzymatic Properties after Yeast Expression-The yeast light membranes containing the SERCA1a E255L mutant (endoplasmic reticulum and secretion vesicles) were prepared by membrane fractionation, as described previously (33). From 1 liter of culture containing about 35 g of yeast, 325 mg of membrane proteins with the SERCA1a E255L mutant was obtained in the light membrane fraction. The subsequent solubilization with DDM and tag-mediated affinity purification by streptavidin chromatography were performed as described (35). After thrombin cleavage, only SERCA1a E255L devoid of tag was eluted from the resin, leading to the recovery of 150 µg of mutated protein at a concentration of about 30 µg/ml as determined by Coomassie Blue gel staining and Western blotting (see Fig. 2, A and B). The protein was well purified, as shown by the Coomassie Blue-stained gel (to about 70%, most of the impurity being due to phenylmethylsulfonyl fluoride-inhibited thrombin; Fig. 2A).
To determine the effect of artemisinin on the purified SERCA1a E255L mutant, the specific ATPase activity of the protein was measured spectrophotometrically by a coupled enzyme system. We found that the maximal rate of ATP hydrolysis of the SERCA1a E255L mutant was slightly smaller than the specific activity of the wild type SERCA1a protein, overexpressed in yeast and measured under the same conditions. The wild type SERCA1a has the same specific activity as the wild type enzyme isolated from rabbit sarcoplasmic reticulum, indicating that yeast expression and purification is a valid method to study SERCA proteins (34), as also later confirmed for mutants of that protein (35). The Ca2+-dependent ATPase activity of the mutant, like that of the wild type, could be stopped both by thapsigargin (a specific inhibitor for SERCA-type ATPases) and by EGTA (a chelating agent of calcium ions) (Fig. 2C, experiment A), supporting the suggestion that the main calcium pumping function of this protein is retained in the mutant. To perform the assay under optimal conditions, we have adopted the use of lipid/detergent mixtures. The presence of DOPC in the assay media, forming mixed micelles with C12E8, increased both the stability and enzymatic activity of solubilized Ca2+-ATPase. We found that optimal conditions were obtained in the presence of 0.2 mg/ml C12E8 and 0.05 mg/ml DOPC (Fig. 2C, experiment B), and this resulted in a large increase of the specific activity (Fig. 2C, compare experiment B and experiment A). The phospholipid-dependent increase in activity of purified P-type ATPases had already been observed (see Ref. 58, with the Na+/K+-ATPase as a recent example). The addition of artemisinin to the SERCA1a E255L mutant solubilized in phospholipid/C12E8 medium did not inhibit Ca2+-ATPase activity (Fig. 2C, experiment C). The effect of artemisinin in the presence of iron (Fe2+) was also tested (Fig. 2C, experiment E) and compared with the effect of iron alone (Fig. 2C, experiment D). There was no inhibitory effect of artemisinin and iron on the SERCA1a E255L mutant ATPase activity, whereas the combination gave rise to a slight increase in activity.
On the basis of these results and the results obtained with microsomes of COS-1 cells expressing the same mutant, we were unable to confirm that the mutation of Glu-255 to Leu in SERCA1a determines sensitivity to artemisinin, as described previously after expression of the E255L SERCA1a mutant in oocytes (26). There is thus no evidence for an artemisinin binding site with a putative localization in the binding region for thapsigargin on SERCA1a.
Study of PfATP6 and Expression and Purification of PfATP6 in Yeast-
Because in our hands the expression of PfATP6 in COS-1 cells was not successful, as previously reported (24), we then proceeded to investigate the expression of that plasmodial protein PfATP6 in yeast. Production of PfATP6 using an assay of expression on minimal medium as described under "Experimental Procedures" was attempted in parallel from the wild type and a codon-optimized gene (the gene of PfATP6 was the same as the one used in Refs. 24 and 26). Although the codon adaptation index (which is a measure of the similarity of the codon usage of a gene to that of the proposed host organism (59)) is high (0.843) for the wild type PfATP6 gene and S. cerevisiae, we designed a sequence that took into account optimal codon usage for yeast and removed most of the poly(A) or T tracts in the native sequence. Gene optimization increased the codon adaptation index to a very high value (0.959) while leaving the GC content almost unchanged (27.9% for the wild type sequence and 28.6% after modification). The use of this modified gene enormously increased the expression of PfATP6, as can be seen by immunodetection with anti-PfATP6 antibodies (Fig. 3A). An aggregated form of PfATP6-BAD is also present near 250 kDa. This is likely to be due to the trichloroacetic acid precipitation used for the recovery of the total protein content; this represents a drastic denaturing treatment of proteins that can lead to their aggregation. When expressing PfATP6-BAD from its wild type cDNA, several bands of proteins were revealed in low amounts by anti-PfATP6 antibodies, generally of lower size than the expected molecular mass of the monomeric protein. The presence of these lighter proteins can be explained by the AT-rich composition of the PfATP6 gene as described for other plasmodial proteins expressed in S. cerevisiae (60) and in Pichia pastoris (60 -62). Indeed, AT-rich regions in a gene may form hairpins and mimic a termination signal of transcription and therefore result in the synthesis of truncated proteins. Consequently, we decided to use the optimized construct for a large scale expression of PfATP6. Like the mutant SERCA1a E255L, we were able to express PfATP6 at a high level by the use of a fermentor for the yeast culture. With this equipment, 50 g of wet cells were recovered per liter of culture.
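For readers unfamiliar with the codon adaptation index mentioned above, it is essentially the geometric mean, over all codons of a gene, of each codon's frequency relative to the most frequent synonymous codon in the host. The Python sketch below illustrates the calculation together with a GC-content check; the usage table and sequence are toy values, not the actual S. cerevisiae table or the PfATP6 gene.

# Illustrative codon adaptation index (CAI) and GC-content calculation.
# HOST_USAGE is a toy codon-frequency table, not real yeast data.
from math import exp, log

HOST_USAGE = {
    "K": {"AAA": 0.58, "AAG": 0.42},
    "E": {"GAA": 0.70, "GAG": 0.30},
    "L": {"TTA": 0.28, "TTG": 0.29, "CTT": 0.13, "CTC": 0.06, "CTA": 0.14, "CTG": 0.10},
}

# relative adaptiveness w = f(codon) / f(most used synonymous codon)
W = {codon: freq / max(family.values())
     for family in HOST_USAGE.values() for codon, freq in family.items()}

def cai(cds):
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    weights = [W[c] for c in codons if c in W]   # ignore codons missing from the toy table
    return exp(sum(log(w) for w in weights) / len(weights))

def gc_content(seq):
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

toy_gene = "AAATTGGAAAAG"
print(round(cai(toy_gene), 3), round(gc_content(toy_gene), 1))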
After preparation of the light yeast membrane fraction, Western blot analyses with avidin-peroxidase and anti-PfATP6 antibodies confirmed that PfATP6-BAD had been expressed and had undergone in vivo biotinylation (see Fig. 3B). Because PfATP6 is a SERCA-type protein with a location in the endoplasmic reticulum, we have focused on the light membrane fraction (endoplasmic reticulum) in the next steps. The amount of PfATP6-BAD contained in the LMs represents about 2% of the total protein content as determined by Western blot revealed with anti-PfATP6 antibodies (see Fig. 3C), i.e. 8 mg/liter of culture. We also evaluated that about 30% of it was biotinylated (2.4 mg; data not shown) and therefore subject to purification by affinity chromatography. It can be noted that naturally biotinylated yeast proteins (acetyl-CoA carboxylase (ACC), 250 kDa; pyruvate carboxylase (PC), 120 kDa; Arc1p protein (Arc1p), 45 kDa) are mainly eliminated with the soluble fraction (see Fig. 3B). Nevertheless, because some of the soluble proteins remain bound to the LM fraction, we included one or two additional washing steps of the membranes with a high KCl buffer that helped to remove the major part of the remaining contaminant proteins before the solubilization (see Fig. 4A, compare lanes WS and WP). Then the membranes were solubilized with DDM, a mild detergent used successfully with SERCA1a-BAD. The same detergent/protein ratio (3:1, w/w) was used except that the protein and detergent were 5 times more concentrated than described previously with SERCA1a-BAD (33). Under these conditions, the solubilization was about 25% (see Fig. 4A, compare lanes WP and SF).
Then ~600 µg of in vivo biotinylated and solubilized PfATP6 was added to 2 ml of streptavidin-Sepharose resin. Among the nonretained proteins, a part corresponded to PfATP6-BAD (see Fig. 4A, lane FT). This could be due to either exceeding the binding capacity of the resin or an inappropriate folding of the biotin acceptor domain. The thrombin cleavage between the sequence of PfATP6 and the biotinylated acceptor domain was followed by Coomassie Blue gel staining (Fig. 4B) and by immunodetection with anti-PfATP6 antibodies (Fig. 4C). Then only PfATP6 devoid of the BAD tag was eluted from the resin, and the corresponding fractions were well purified (Fig. 4B, lanes E1 and E2). An approximate concentration of 30 µg/ml was determined for the first elution fraction and of 10 µg/ml for the second. About 20% of PfATP6 was still retained on the resin (lane R*). The purity of the protein reached about 70% (the remaining 30% being mainly due to thrombin) as evaluated from the color density of the Coomassie Blue-stained gel. In conclusion, the purification procedure gave a total amount of at least 160 µg of purified PfATP6 starting from 1 liter of yeast culture and therefore a yield of 26% compared with the amount of biotinylated and solubilized PfATP6-BAD added to the resin.
Other plasmodial membrane proteins produced in P. pastoris and purified over Ni2+-nitrilotriacetic acid were generally obtained in better yield (60-62). Despite our lower yield, we can expect to recover PfATP6 with good functional properties by our procedure because the yeast in vivo biotinylation that our protocol implies tends to select properly folded proteins (34).
One possible point of concern is the possibility that yeast expression of PfATP6 could induce molecular modifications of the enzyme. Therefore, we performed matrix-assisted laser desorption ionization time-of-flight mass spectrometry under the conditions that we formerly designed for large membrane proteins or their fragments (63). Briefly, after PfATP6 streptavidin purification, we performed gel filtration high pressure liquid chromatography to reduce the DDM content, and then the protein was concentrated up to 2 mg/ml. Two controls were performed: SERCA1a purified from rabbit SR, and SERCA1a yeast-expressed and purified. In all three cases, the determined molecular mass was very close to the expected molecular mass. In the case of PfATP6, we found a mass of 140,053 Da, whereas the expected mass is 139,994 Da. Under similar conditions, rabbit SR SERCA1a gave 109,497 Da for an expected mass of 109,489 Da, and yeast-expressed SERCA1a (which is slightly larger due to the DNA construction (34)) gave 110,090 Da for an expected mass of 110,069 Da. Under these conditions, the error associated with this type of measurement is 0.05-0.1%, which excludes large modifications such as partial proteolysis, glycosylations, etc.
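The agreement between measured and expected masses can be expressed as a relative deviation, which for all three proteins falls below the 0.05-0.1% measurement error quoted above. A short check using the values given in the text (the script itself is ours, added for illustration only):

# Relative deviation between measured and expected molecular masses (values from the text).
masses = {
    "PfATP6": (140053, 139994),
    "rabbit SR SERCA1a": (109497, 109489),
    "yeast-expressed SERCA1a": (110090, 110069),
}
for name, (measured, expected) in masses.items():
    deviation = 100.0 * abs(measured - expected) / expected
    print(f"{name}: {deviation:.3f}% deviation")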
Study of the Enzymatic Properties of the Soluble Purified PfATP6-We first measured the specific ATPase activity of the purified PfATP6 with the aid of the coupled enzyme assay as described for SERCA1a E255L and in the presence of 1 mg/ml C12E8 (data not shown) or 0.2 mg/ml C12E8 (Fig. 5A, left). As can be seen, the major part of ATP hydrolysis was stopped by the addition of EGTA, indicative of calcium-dependent ATPase activity. However, even at higher C12E8 concentrations (data not shown), we observed a decrease of the hydrolytic rate with time, suggesting that under these conditions with pure detergent, the protein is inactivated during turnover. By the addition of lipids (0.05 mg/ml of DOPC) together with 0.2 mg/ml C12E8, we were able to maintain a stable hydrolysis rate of ATP that was still inhibited by EGTA (Fig. 5A, middle). A similar stabilization was obtained in the presence of other lipids (1,2-dioleoyl-sn-glycero-3-phosphoserine, egg yolk phosphatidylcholine, and 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (data not shown)) or a lipid mixture consisting of 48.4% DOPC, 43% 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine, and 8.6% L-α-phosphatidylinositol (Fig. 5A, right). The phospholipid composition of this mixture was based on the membrane lipidic composition of P. falciparum as described (64). At a C12E8/DOPC ratio of 0.2:0.05 mg/ml, the specific activity at 25°C of the purified PfATP6 was 1.7 µmol of hydrolyzed ATP·min^-1·(mg of PfATP6)^-1, which is about 30% of the activity of rabbit SERCA1a at this temperature.
We then measured the rate of hydrolysis of ATP, carried out by PfATP6, as a function of different calcium concentrations (data not shown). PfATP6 is activated by a low concentration of free Ca2+ (pCa ~7), and optimal activities are obtained in the pCa interval 6-4 with a maximal stimulation around pCa 4. Above this concentration, the activity is gradually inhibited by the increasing amount of free Ca2+. This pCa dependence profile is also in agreement with the one described for rabbit SERCA1a solubilized in C12E8 (51).
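pCa is the negative decadic logarithm of the free Ca2+ concentration expressed in molar units, so the activation range quoted here runs from roughly 0.1 µM to 100 µM free Ca2+. A minimal conversion helper (illustrative only, not part of the original analysis):

# pCa <-> free Ca2+ conversion (pCa = -log10 of the molar free Ca2+ concentration).
from math import log10

def pca_from_molar(free_ca_molar):
    return -log10(free_ca_molar)

def molar_from_pca(pca):
    return 10.0 ** (-pca)

print(molar_from_pca(7))  # 1e-07 M, i.e. 0.1 uM free Ca2+, near the activation threshold
print(molar_from_pca(4))  # 1e-04 M, i.e. 100 uM, around maximal stimulation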
The ATPase activity of PfATP6 was also measured at different pH values (see supplemental Fig. S1). The pH optimum was at pH 7-7.5, with an activity of 1.7 µmol of hydrolyzed ATP·min^-1·(mg of PfATP6)^-1, whereas the activities at pH 6.5 and 8 were low. This is in agreement with what was measured previously for SERCA1a (65).
Effect of Artemisinins-We then proceeded to test the effect of artemisinin and some derivatives on PfATP6. Upon addition of 10 µM artemisinin, we were not able to detect any inhibition (whatever the temperature in the range of 20-37°C), because 90% or more of the activity was always measured after the addition of this drug to PfATP6 (Figs. 5B and 7 and supplemental Fig. S2A), whereas under these conditions PfATP6 was previously reported to be completely inhibited (24,26). We also tested the effects of 10 µM artemisone or 10 µM dihydroartemisinin (Fig. 5B) and lower and higher concentrations of the artemisinins (1-100 µM for artemisinin and 1-500 µM for artemisone) but were never able to demonstrate any effect on PfATP6-dependent Ca2+-ATPase activity (data not shown). Moreover, with a preincubation of 10 min with artemisinin, we obtained the same result.
As was done with SERCA1a E255L, we also checked the effect of artemisinin in the presence of iron (10 µM Fe2+) by comparing the effect with that obtained in the presence of iron alone (see supplemental Fig. S2, B and C). This was carried out by preincubating PfATP6 with iron together with artemisinin before triggering the reaction. On the other hand, the Ca2+-ATPase activity as well as the nonspecific activity was slightly decreased in the presence of Fe2+. In addition, other artemisinin derivatives that were also tested, including artemisone, which was also used in the oocyte tests (26), and artesunate (see supplemental Fig. S2, D and E), did not inhibit the ATPase activity of PfATP6 either.
Effect of SERCA Inhibitors-We then tested the effect of specific inhibitors of mammalian SERCA proteins (see Fig. 6), including Tg, BHQ, and CPA, on PfATP6. These assays were also performed with the C12E8/DOPC mixture at 25°C. A small degree of inhibition was observed with 1.5 µM Tg (data not shown), and Ca2+-dependent ATP hydrolysis became clearly inhibited with 45 µM Tg (Figs. 6 and 7). This means that PfATP6 is significantly less sensitive than rabbit SERCA1a to Tg, because the latter is completely inhibited by concentrations of Tg in the nanomolar range (66). These differences are well correlated with the observations of Varotti et al. (67), who, working on parasites, found a similar difference in the maximum effect of Tg between Plasmodium parasites and mammalian cells. According to their results, for Plasmodium, 25 µM Tg was necessary to inhibit Ca2+ release into the cytoplasm, whereas 500 nM Tg was sufficient to produce the same effect in mammalian cells. Two other SERCA pump inhibitors were then assayed. We observed that 20 µM BHQ was able to inhibit about 50% of the activity of PfATP6 (see Figs. 6 and 7) and that the addition of 3 µM CPA resulted in an almost complete inhibition of this activity (see Figs. 6 and 7), similar to what is observed with rabbit SERCA1a and after expression of PfATP6 in oocytes. The effect of these inhibitors, which was checked under various conditions (detergent/lipid ratios, glycerol concentrations), clearly indicates that PfATP6 is a high affinity target for CPA in all of these conditions. Although PfATP6 is less sensitive to Tg and BHQ than rabbit SERCA1a, these results suggest that the purified protein qualitatively behaves in the same way as a mammalian SERCA protein. Vanadate, another P-type ATPase inhibitor, was tested in a few experiments; close to 50% inhibition was obtained at a low vanadate concentration (100 µM) and in the absence of EGTA (data not shown; the exact experimental conditions are described in the supplemental material).
Study of PfATP6 Enzymatic Activity in Membranes-One of the differences between our ATPase assays and those done with oocytes is the presence of detergent. Because the detergent could interfere with the artemisinin drugs, the protein was relipidated and the detergent was completely removed using Biobeads (Fig. 8A). Relipidated PfATP6 was then subjected to the same type of inhibition assays at 37 °C, but without any detergent in solution (Fig. 8B). Again, artemisone was not able to inhibit PfATP6, even in the presence of iron. These results show conclusively that, under our conditions, the purified PfATP6 enzyme is not inhibited by artemisinin and its derivatives.
Conclusions-Our work, by overcoming the obstacles of heterologous expression of a membrane protein from an apicomplexan organism in yeast, has provided the first opportunity to study the functional properties of a purified SERCA of P. falciparum. This has revealed both similarities with (pH and pCa profiles) and differences from (drug sensitivity) the mammalian SERCA1a. This is not unexpected, given the sequence differences and the presence of a number of cytoplasmic insertions in PfATP6, in particular in the N-domain (68). Furthermore, the availability of a purified preparation has allowed us to test directly the evidence for interaction of purified PfATP6 with artemisinin arising from previous studies after expression in the Xenopus oocyte membrane (24, 26). However, neither the addition of artemisinin nor the further addition of Fe2+ to induce radical formation produced an observable effect on the isolated system. We conclude that it is not possible to demonstrate an effect of artemisinin on PfATP6 ATPase activity and that the explanation for the effect of artemisinin on the malarial parasite is probably more complex than originally thought. With respect to the oocyte data, it should be taken into account that oocyte membranes represent a foreign and complex environment in which hundreds of other proteins are present together with the expressed protein. The possibility cannot be excluded that these or other putative proteins or proteolipid components interact with PfATP6 and with the tested drug and thereby affect ATPase activity. In oocytes, too, using the same tests based on measurements of ATPase activity on membranes, Krishna and co-workers (26) reported strong inhibition by artemisinin of the SERCA1a E255L homology mutant, with a Ki of 315 nM. However, we demonstrate in the present paper that the Ca2+-dependent ATPase activity of this mutant in the endoplasmic reticulum of COS cells is not affected by artemisinin; the same is true for the purified mutant after yeast expression. The inhibition by artemisinin therefore appears to be oocyte specific and not extendable to other eukaryotic cells.
When trying to pinpoint the target of artemisinin or its derivatives, many puzzling facts come to mind. Clearly, in a cellular context, artemisinin affects Ca2+ homeostasis, as demonstrated, for example, with Ca2+-sensitive dyes, and it is now also used to kill or induce apoptosis of cancer cells, with a likely effect on Ca2+ mobilization (e.g. see Refs. 69-71). However, in P. falciparum, the rise of cytosolic Ca2+ due to artesunate was also observed after prior thapsigargin addition, suggesting an intracellular target distinct from the endoplasmic reticulum (69). In other experiments, artemisinin was shown to induce swelling of mitochondria and to interfere with mitochondrial electron transport in a yeast model (see Ref. 72 and references therein), and it was suggested that an activated species of artemisinin could depolarize the mitochondrial membrane. However, this was not observed in Toxoplasma exposed to artemisinin (21, 22). Because mitochondria are also a site of Ca2+ storage, this may still be related to the effect on Ca2+ homeostasis, but it is by no means the only possible mechanism of action.
In other investigations of the target, it was shown that activated artemisinin formed covalent adducts with four major membrane-associated proteins, but only one of these could be classified, as a homolog of the translationally controlled tumor protein, whose function in parasites is still unknown (73, 74). In future experiments, it will be important to reconcile findings on the mechanisms of action of artemisinins obtained in apicomplexan parasites and in genetic studies with findings from heterologous expression studies and with the present results, in order to reassess the target of artemisinin. The present data do not support a direct action of artemisinins on PfATP6 (see also Ref. 57), but we cannot exclude the possibility that artemisinin may need some transformation before becoming active or that it could act indirectly on PfATP6 after binding to another protein. Alternatively, the drug may act on other proteins, such as Ca2+ channels involved in Ca2+ homeostasis. However, we note that our procedure for purification of PfATP6 provides a system with the potential for high-throughput screening of novel classes of inhibitors acting against a key parasite transport protein.
A dopamine gradient controls access to distributed working memory in the large-scale monkey cortex
SUMMARY Dopamine is required for working memory, but how it modulates the large-scale cortex is unknown. Here, we report that dopamine receptor density per neuron, measured by autoradiography, displays a macroscopic gradient along the macaque cortical hierarchy. This gradient is incorporated in a connectome-based large-scale cortex model endowed with multiple neuron types. The model captures an inverted U-shaped dependence of working memory on dopamine and spatial patterns of persistent activity observed in over 90 experimental studies. Moreover, we show that dopamine is crucial for filtering out irrelevant stimuli by enhancing inhibition from dendrite-targeting interneurons. Our model revealed that an activity-silent memory trace can be realized by facilitation of inter-areal connections and that adjusting cortical dopamine induces a switch from this internal memory state to distributed persistent activity. Our work represents a cross-level understanding from molecules and cell types to recurrent circuit dynamics underlying a core cognitive function distributed across the primate cortex.
In brief
Little is known about how dopamine outside of the prefrontal cortex affects working memory. Froudist-Walsh et al. identify a gradient of dopamine receptors in the macaque cortex and use this to build a large-scale computational cortex model. A gradient of cortical dopamine modulation provides a parsimonious explanation for diverse findings in the literature.
INTRODUCTION
Our ability to think through difficult problems without distraction is a hallmark of cognition. When faced with a constant stream of information, we must keep certain information in mind and protect it from distraction. For instance, when at the supermarket looking for your favorite butter, it is important to keep in mind its distinctive golden packaging and not be distracted by the many other dairy products. This brain function is called working memory. Working memory often engages persistent neural activity that is specific to the information that must be remembered. This mnemonic activity is sustained internally across multiple cortical and subcortical areas in the absence of external stimulation (Funahashi et al., 1989;Fuster and Alexander, 1971;Guo et al., 2017;Leavitt et al., 2017;Mejias and Wang, 2021;Mendoza-Halliday et al., 2014;Murray et al., 2017;Romo et al., 1999;Romo and Salinas, 2003;Vergara et al., 2016;Wang, 2001;Zhang et al., 2019).
Working memory and the prefrontal cortex are under the influence of monoaminergic modulation (Goldman-Rakic, 1995;Robbins and Arnsten, 2009). In fact, depletion of dopamine from the prefrontal cortex and complete ablation of the prefrontal cortex cause similar working memory deficits (Brozoski et al., 1979). Dopamine modulates cortical activity through its receptors. D1 receptors are the most densely expressed dopamine receptor type in the cortex. Prefrontal neuron activity during working memory depends on precise levels of activation of D1 receptors, with too little or too much D1 stimulation disrupting delay period activity (Vijayraghavan et al., 2007;Wang et al., 2019). However, the density of D1 receptors is known only for relatively small sections of the monkey cortex (Goldman-Rakic et al., 1990;Impieri et al., 2019;Lidow et al., 1991;Niu et al., 2020;Richfield et al., 1989). Because of the shortage of areas analyzed across studies, it is not clear whether the variation in D1 receptor densities across cortical areas represents random heterogeneity or a systematic gradient of cortical dopamine modulation.
Dopamine receptors are also expressed differently across different types of inhibitory neurons (Mueller et al., 2018, 2020). Distinct inhibitory cell types primarily focus their inhibition on the dendrites or somata of pyramidal cells or on other inhibitory neurons (Jiang et al., 2015; Tremblay et al., 2016). Through its differing effects on distinct interneurons, dopamine decreases inhibition to the somata of pyramidal cells and increases inhibition to the dendrites (Gao et al., 2003). An early theoretical study proposed that inhibition targeted more strongly toward the dendrites and away from the somata of pyramidal cells could increase the resistance of working memory to distraction (Wang et al., 2004a). The functional significance of dopamine's differential effects on distinct inhibitory neuron types has not yet been investigated.
In this work, we tackled two open questions. First, how does dopamine modulate distributed working memory across a multi-regional large-scale cortical system? Second, in light of the emphasis on cell types in modern cortical physiology, does dopamine contribute to robust working memory against distractors by virtue of differential effects on different neuron classes? To address these questions, we performed quantitative mapping of dopamine D1 receptor densities across 109 cortical areas using in vitro autoradiography and constructed a large-scale computational model of the macaque cortex that is capable of performing working memory tasks. The model is built using retrograde tract-tracing connectivity data and incorporates gradients of D1 receptors and excitatory synapses. Moreover, to our knowledge, this is the first large-scale cortex model endowed with three subtypes of inhibitory neurons. Our results suggest that firing of dopamine neurons can engage distractor-resistant, stimulus-selective, sustained activity across multiple brain regions in response to behaviorally relevant stimuli. Furthermore, we extend, from a local area to the multi-regional cortex, an activity-silent mechanism that has been proposed for certain forms of short-term memory trace without persistent activity (Mongillo et al., 2008; Rose et al., 2016; Wolff et al., 2017). We found that this scenario relies principally on short-term facilitation of inter-areal connections but fails to resist distractors. Enhanced dopamine modulation can convert an internal memory trace to an active persistent activity state needed to filter out distractors. Therefore, our findings contribute to resolving the current debate about the two contrasting scenarios that contribute to working memory (Lundqvist et al., 2018; Watanabe and Funahashi, 2014) and under what conditions each mechanism is implemented (Barbosa et al., 2020; Masse et al., 2019; Trübutschek et al., 2019).
The density of the D2 receptor in the cortex is so low that it is not detectable with the method used here.
To compare the gradient of D1 receptors with other known gradients of anatomical organization in the monkey cortex, we carefully mapped the receptor data (Figure 1A) as well as data on neuronal density (Figure 1B; Collins et al., 2010) and spine count (Figure 1C; Elston, 2007) onto the Yerkes19 common cortical template, to which anatomical tract tracing data (Figure 1D, i) has been mapped previously (Donahue et al., 2016). Here we include retrograde tracing data from 40 regions, quantified using the same protocol as in previous publications (Markov et al., 2014b). This expands the number of injected cortical areas by 33%, with connections to areas 1, 3, V6, F4, F3, 25, 32, 9, 45A, and OPRO (orbital proisocortex) now included in the database (downloadable from core-nets.org). We estimated the cortical hierarchy using laminar connectivity data (Figure 1D, ii; STAR Methods; Markov et al., 2014a), expanding previous descriptions of the cortical hierarchy based on fewer regions (Markov et al., 2014a; Mejias et al., 2016). A one-dimensional hierarchy is probably an oversimplification of the cortical connectivity structure. Because we have connectivity data for two distinct sensory modalities, we also calculated a circular embedding of the connectivity data, with radial distance from the edge representing the hierarchical position and angular distance between points representing the inverse of their connectivity strength (Chaudhuri et al., 2015). In this circular representation, separate visual and somatosensory hierarchies can clearly be appreciated, with association regions falling at angles off the main sensory hierarchy axes (Figure 1E).
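The hierarchy values used here were estimated from laminar tract-tracing data following Markov et al. (2014a) (see STAR Methods for the exact procedure). The core logic of such estimates is that the fraction of supragranular labeled neurons (SLN) in a projection reflects the hierarchical distance between the two areas. The sketch below illustrates one simple way to recover hierarchical levels from an SLN matrix by least squares; it is an illustrative approximation with assumed variable names and sign conventions, not the authors' fitting code, which may use a different link function and error model.

import numpy as np

def estimate_hierarchy(sln, eps=1e-3):
    """Illustrative hierarchy estimate from an SLN matrix.

    sln[i, j] = fraction of supragranular labeled neurons in the projection
    from source area j to target area i (NaN if unmeasured).
    Assumes logit(SLN) grows with the hierarchical distance h[target] - h[source];
    solves for h by least squares, fixing the mean level to zero.
    """
    n = sln.shape[0]
    targets, sources, y = [], [], []
    for i in range(n):
        for j in range(n):
            if i != j and np.isfinite(sln[i, j]):
                p = np.clip(sln[i, j], eps, 1 - eps)
                y.append(np.log(p / (1 - p)))      # logit(SLN)
                targets.append(i)
                sources.append(j)
    X = np.zeros((len(y), n))
    X[np.arange(len(y)), targets] += 1.0           # + h[target]
    X[np.arange(len(y)), sources] -= 1.0           # - h[source]
    h, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
    return h - h.mean()                            # fix the arbitrary offset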
To facilitate functional interpretation, we divided D1 receptor density by neuron density (Collins et al., 2010) to allow estimation of the degree to which dopamine modulates individual neurons across the cortex. D1 receptor density per neuron peaked in the parietal and frontal cortex and was relatively low in the early sensory cortex (Figure 1F). There was a strong positive correlation between D1 receptor density per neuron and the cortical hierarchy (Figure 1G; r = 0.81). Because of spatial autocorrelation between cortical features (i.e., nearby parts of the cortex tend to have a similar anatomy), it is possible to detect spurious correlations between distinct features of brain anatomy. To account for this, we generated 10,000 surrogate maps with similar spatial autocorrelation to the hierarchy map (Burt et al., 2020). None of these surrogate maps were as strongly correlated with the D1 receptor density map as the hierarchy, giving a p value of less than 0.0001 for the D1 receptor-hierarchy correlation. There was no significant relationship between D1 receptor expression and whether a cortical area had a granular layer IV (Wilcoxon rank-sum Z = 0.39, p = 0.70) or the degree of externopyramidalization (Kruskal-Wallis χ² = 1.47, p = 0.48; Goulas et al., 2018; Sanides, 1962; Figure S2). This pattern of receptor expression suggests that dopamine principally modulates areas contributing to higher cognitive processing.
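The spatially corrected p value described above can be computed by comparing the observed correlation against correlations obtained from surrogate maps that preserve the spatial autocorrelation of the hierarchy map (Burt et al., 2020). A minimal sketch, assuming the surrogate maps have already been generated (for example with a variogram-matching tool) and stored as an array, is:

import numpy as np

def spatially_corrected_p(d1_per_neuron, hierarchy, hierarchy_surrogates):
    """Fraction of autocorrelation-matched surrogate hierarchy maps whose
    (absolute) Pearson correlation with the D1 map is at least as strong
    as the empirical correlation.

    d1_per_neuron: (n_areas,) D1 receptor density per neuron
    hierarchy: (n_areas,) hierarchy values
    hierarchy_surrogates: (n_surrogates, n_areas) precomputed surrogate maps
        with spatial smoothness matched to the hierarchy map.
    """
    r_obs = np.corrcoef(d1_per_neuron, hierarchy)[0, 1]
    r_surr = np.array([np.corrcoef(d1_per_neuron, s)[0, 1]
                       for s in hierarchy_surrogates])
    return np.mean(np.abs(r_surr) >= np.abs(r_obs))

# With r_obs around 0.81 and 10,000 surrogates, none of which exceed the
# empirical correlation, the corrected p value is below 1/10,000.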
A cortical circuit with three types of inhibitory neurons modulated by dopamine
We built a model of a local cortical circuit that contains pyramidal cells and three types of inhibitory neurons (Figure 2A). The cortical circuit is based on a disinhibitory motif that was originally predicted theoretically (Wang et al., 2004a), with details of the connectivity structure chosen to reflect recent experimental findings (STAR Methods).
In our model, dopamine acted by increasing the synaptic strength of inhibition to the dendrite and reducing the synaptic strength of inhibition to the cell body of pyramidal cells ( Figure 2B; Gao et al., 2003). In addition, dopamine increased the strength of transmission via N-methyl-D-aspartate (NMDA) receptors (Seamans et al., 2001). On the other hand, high stimulation of D1 receptors resulted in increased adaptation in excitatory cells (potentially an M-current, via KCNQ potassium channels; Arnsten et al., 2019), mimicking the net inhibitory effect of high concentrations of D1 agonists.
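To make the direction of these effects concrete, the sketch below expresses them as multiplicative scaling factors applied to a local circuit as a function of D1 receptor stimulation. The functional forms and constants are illustrative placeholders, not the parameterization used in the actual model (which is specified in the STAR Methods).

def d1_modulation(d1_stim):
    """Illustrative D1 modulation factors; d1_stim in [0, 1].

    Direction of each effect follows the description above:
      - inhibition onto pyramidal dendrites is strengthened,
      - inhibition onto pyramidal somata is weakened,
      - NMDA-mediated excitation is enhanced at low-to-medium stimulation,
      - adaptation grows only at high stimulation.
    All numerical constants are assumptions, for illustration only.
    """
    g_dend_inh = 1.0 + 0.8 * d1_stim                    # dendritic inhibition up
    g_soma_inh = 1.0 - 0.5 * d1_stim                    # somatic inhibition down
    g_nmda     = 1.0 + 0.4 * d1_stim / (d1_stim + 0.3)  # saturating NMDA boost
    g_adapt    = 1.0 + 4.0 * max(0.0, d1_stim - 0.7)    # adaptation only when high
    return dict(dendritic_inhibition=g_dend_inh,
                somatic_inhibition=g_soma_inh,
                nmda=g_nmda,
                adaptation=g_adapt)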
(Figure 1 legend: (A) D1 receptor density, not corrected for differences in neuron density across areas. (B) Neuron density across the cortex, from the 42 tissue slabs of Collins et al. (2010) mapped onto the Yerkes19 surface. (C) Number of dendritic spines on the basal dendrites of layer III pyramidal cells, with injection sites from Elston (2007). (D) The 40 injected areas in the retrograde tract-tracing database of Markov et al. (2014b) and the resulting cortical hierarchy. (E) Circular embedding of the cortical hierarchical connectivity structure: radial distance to the center represents hierarchical position, angular distance between areas represents the inverse of connectivity strength (fraction of labeled neurons, FLN), and separate visual and somatosensory hierarchies emerge, with association areas lying at angles off the main sensory axes. (F) D1 receptor density divided by neuron density; regions not yet measured are shown in gray. (G) Strong positive correlation between D1 receptor density per neuron and the cortical hierarchy; the spatially corrected p value is the fraction of surrogate maps, with spatial smoothness matched to the hierarchy map, that show a stronger Pearson correlation (negative or positive) with the D1 receptor map than the hierarchy map itself. See also Figures S1 and S2.)
A large-scale model of the macaque cortex incorporating multiple macroscopic gradients
We then built a large-scale model of the macaque cortex. We placed the local circuit in each of the 40 cortical areas (Figure 2A, right). Properties of these local circuits varied across areas in the form of macroscopic gradients (Wang, 2020) of long-distance connectivity (set by tracing data), strength of excitation (set by the spine count), and modulation by D1 receptors (set by the receptor autoradiography data). We defined the connections between areas using the quantitative retrograde tract-tracing data. In the model, inter-areal connections are excitatory and target the dendrites of pyramidal cells (Petreanu et al., 2009). Inter-areal excitatory connections also target calretinin (CR)/vasoactive intestinal peptide (VIP) cells to a greater degree than parvalbumin (PV) or calbindin (CB)/somatostatin (SST) cells (Lee et al., 2013; Wall et al., 2016). The frontal eye fields (FEF) have an unusually high density of CR (here CR/VIP) cells (Pouget et al., 2009). To account for this, we increased the proportion of inter-areal input to CR/VIP cells in FEF and reduced the strength of input to PV and CB/SST cells.
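Schematically, assembling such a model amounts to combining an areas-by-areas connectivity matrix with per-area gradients that scale local circuit parameters, plus a rule for distributing inter-areal input across postsynaptic targets. The sketch below shows this bookkeeping step with assumed array names and illustrative scaling choices; it is not the published implementation.

import numpy as np

def build_network_parameters(fln, spine_count, d1_per_neuron,
                             area_names, fef_name="FEF"):
    """Illustrative parameter assembly for a multi-area model.

    fln: (n, n) retrograde-tracing connectivity (fraction of labeled
         neurons), fln[target, source].
    spine_count, d1_per_neuron: (n,) per-area anatomical gradients.
    Returns per-area scaling factors and inter-areal target fractions.
    All specific numbers are assumptions, for illustration only.
    """
    # Normalize gradients to [0, 1] so they can scale circuit parameters.
    spine_grad = (spine_count - spine_count.min()) / np.ptp(spine_count)
    d1_grad = (d1_per_neuron - d1_per_neuron.min()) / np.ptp(d1_per_neuron)

    local_excitation = 0.2 + 0.2 * spine_grad   # stronger recurrence up the hierarchy
    d1_sensitivity   = d1_grad                  # scales dopamine modulation per area

    # Long-range input is excitatory and is split across postsynaptic targets;
    # dendrites of pyramidal cells and CR/VIP cells receive more than PV or
    # CB/SST cells (fractions below are illustrative, not the fitted values).
    target_fractions = {name: {"E_dendrite": 0.6, "CR_VIP": 0.2,
                               "PV": 0.1, "CB_SST": 0.1}
                        for name in area_names}
    # FEF exception: more input to CR/VIP, less to PV and CB/SST.
    target_fractions[fef_name] = {"E_dendrite": 0.6, "CR_VIP": 0.3,
                                  "PV": 0.05, "CB_SST": 0.05}

    long_range = fln / (fln.sum(axis=1, keepdims=True) + 1e-12)  # per-target normalization
    return local_excitation, d1_sensitivity, long_range, target_fractions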
An inverted U relationship between cortical D1 receptor stimulation and distributed working memory activity
We simulated the large-scale cortical model during performance of a working memory task (Figure 2C) with different levels of cortical dopamine availability. In simulations, stimulus-selective activity propagated from the visual cortex to the temporal, parietal, and frontal cortex.
(Figure 2 legend: An inverted U relationship between D1 receptor stimulation and distributed frontoparietal delay period activity. (A) Left: local circuit design, with two populations of excitatory cells, each selective to a particular spatial location, cell bodies and dendrites modeled as separate compartments, and PV, CB/SST, and CR/VIP cells with characteristic connectivity patterns; right: the local circuit is placed at each of 40 cortical locations, which differ in inter-areal connections, spine count, and dopamine D1 receptor density. (B) D1 receptor stimulation acts via an increase in inhibition targeting the dendrites with a corresponding decrease in inhibition to the somata of pyramidal cells, an increase in NMDA-dependent excitatory transmission for low to medium levels of stimulation, and increasing adaptation for high levels of stimulation. (C) Task structure: the network was presented with a stimulus it had to maintain through a delay period. (D) Mean delay period firing in the frontoparietal network for different levels of dopamine release; all areas shown display persistent activity in experiments (Leavitt et al., 2017). (E) Cortical activity at different task stages and levels of dopamine release: very low or very high release reduces propagation of stimulus-related activity to frontal areas and fails to engage persistent activity, whereas mid-level release enables distributed persistent activity. (F) Time courses of activity in selected cortical areas; horizontal bars indicate the timing of cue input to area V1; DA, cortical dopamine availability. See also Figures S3 and S4 and Video S1.)
(Figure 3 legend: Inter-areal connectivity and D1 receptor density underlie working memory activity and performance. (A) Strong overlap (18 of 19 areas, 95%) between the pattern of persistent activity seen experimentally (Leavitt et al., 2017) and that predicted by the model. (B) Overlap with the experimental activity pattern for 10,000 simulations each with shuffled inter-areal connections, shuffled D1 receptor expression, and shuffled dendritic spine counts, compared with the overlap obtained from the simulation based on the real anatomical data.)
Activity in the visual cortex was relatively insensitive to dopamine (Figures 2E and 2F). Dopamine modulation had little to no effect on the initial peak of activity in early visual areas, but it did modulate the later peak of activity in these areas ( Figure S3), consistent with a specific role of feedback connections in late visual activity (Self et al., 2012). In all cases, there was a strong transient response in visual areas prior to rapid return to baseline firing rates. This is similar to the response seen in neurons recorded from area V1 in behaving monkeys (van Vugt et al., 2018). We observed similar transient activity in somatosensory areas in response to stimulus input to the somatosensory cortex ( Figure S4), as seen experimentally (Romo and Rossi-Pool, 2020). Delay period activity in a large network of prefrontal, lateral parietal, and temporal areas showed an inverted U relationship with dopamine levels ( Figure 2D). A midrange level of dopamine release engaged a distributed pattern of persistent activity throughout these areas (Figures 2E and 2F), but release that was too low or too high only led to a transient response ( Figure 2F). A similar pattern of delay period activity was observed following somatosensory input ( Figure S4). The inverted U relationship between D1 receptor stimulation and working memory activity has been shown locally in the prefrontal cortex in experimental and computational studies (Brunel and Wang, 2001;Vijayraghavan et al., 2007) but has not been described previously throughout the distributed cortical system.
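The inverted U can be illustrated with a deliberately reduced, single-population firing-rate caricature in which dopamine boosts NMDA-like recurrent gain at low-to-medium levels and recruits adaptation at high levels. The toy model below, with all parameters invented for illustration and far simpler than the multi-area, multi-cell-type model described here, shows persistent activity after a transient stimulus only at intermediate dopamine levels.

import numpy as np

def f(x, rmax=50.0, theta=15.0, sigma=3.0):
    """Sigmoidal rate function (Hz)."""
    return rmax / (1.0 + np.exp(-(x - theta) / sigma))

def delay_rate(da, t_end=4.0, dt=0.001):
    """Final firing rate after a transient stimulus, for dopamine level da in [0, 1]."""
    g = 0.3 + 0.4 * da / (da + 0.2)            # NMDA-like recurrent gain (rises, saturates)
    k = 1.0 * max(0.0, da - 0.75) / 0.25       # adaptation strength (only at high da)
    tau, tau_a = 0.06, 0.3                     # time constants (s)
    r, a = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        stim = 30.0 if 0.5 <= t < 1.0 else 0.0  # transient cue
        r += dt / tau * (-r + f(g * r + stim - a))
        a += dt / tau_a * (-a + k * r)
    return r

for da in [0.0, 0.25, 0.5, 0.75, 1.0]:
    r_end = delay_rate(da)
    status = "persistent" if r_end > 20 else "no persistent activity"
    print(f"DA = {da:.2f}: delay-period rate = {r_end:5.1f} Hz ({status})")
# Only the intermediate dopamine levels sustain delay-period activity; at low
# dopamine the recurrent gain is too weak, and at high dopamine adaptation
# terminates the activity, giving an inverted-U dependence.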
Inter-areal connectivity determines the distributed working memory activity pattern
We next compared the pattern of delay period activity in the model with delay period activity observed in over 90 electrophysiology studies (Leavitt et al., 2017). We chose model parameters that would produce persistent activity in the prefrontal cortex, but we did not fit the model to the experimental data. Of the 19 cortical areas in which such activity has been assessed during the delay period in at least three experimental studies, 18 were in agreement between the simulation and experimental results (χ² = 15.03, p = 0.0001; Figure 3A). Overall, the experimentally observed persistent activity from numerous studies is reproduced, validating the model. This allows us to inspect the anatomical properties that underlie the distributed activity pattern and gain insight into the brain mechanisms that may produce it.
We repeated model simulations after shuffling the anatomical data. The delay period activity patterns for 30,000 simulations based on the shuffled anatomy were compared with the pattern observed experimentally. Ten thousand simulations were run using shuffled inter-areal connections, shuffled D1 receptor expression, and shuffled dendritic spine expression separately. The overlap between the experimental persistent activity pattern and the model persistent activity pattern was strongly dependent on the inter-areal connections (p = 0.0004) but not on the pattern of D1 receptors (p = 0.71) or dendritic spine count (p = 0.46) ( Figure 3B). This analysis suggests that the edges between nodes in the network (i.e., the inter-areal connections) are important for defining the spatial pattern of delay period activity. Next we asked how the nodes themselves (i.e., individual cortical areas) contribute differentially to distributed working memory.
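The shuffle analysis can be summarized as a permutation test: the overlap between simulated and experimental delay activity patterns obtained with the real anatomy is compared with the distribution of overlaps obtained when one anatomical ingredient is shuffled across areas. The sketch below shows that logic with a placeholder simulation function; run_model is hypothetical and stands in for the full large-scale simulation, which is far more expensive than suggested here.

import numpy as np

rng = np.random.default_rng(0)

def overlap(simulated_active, experimental_active):
    """Number of areas whose persistent-activity status (active / not active)
    agrees between simulation and experiment."""
    return int(np.sum(simulated_active == experimental_active))

def shuffle_p_value(run_model, anatomy, experimental_active,
                    key_to_shuffle, n_shuffles=10_000):
    """Fraction of shuffled-anatomy simulations whose overlap with the
    experimental pattern is at least as large as the real-anatomy overlap.
    run_model(anatomy) -> boolean array of per-area delay activity (hypothetical).
    For matrix-valued entries such as the connectivity, the shuffle would
    permute area labels rather than a flat vector; a 1D permutation is shown
    here only for brevity."""
    real_overlap = overlap(run_model(anatomy), experimental_active)
    count = 0
    for _ in range(n_shuffles):
        shuffled = dict(anatomy)
        shuffled[key_to_shuffle] = rng.permutation(anatomy[key_to_shuffle])
        if overlap(run_model(shuffled), experimental_active) >= real_overlap:
            count += 1
    return count / n_shuffles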
Working memory deficits are most severe following lesions to prefrontal areas with high D1 receptor density
We next quantified the degree to which focal lesions to individual areas in the model disrupted persistent activity during the working memory task (without distractors). The effect depended on the lesioned area and the level of cortical dopamine (Figure 3C). Lesions to prefrontal and posterior parietal areas caused the greatest reductions in delay period firing rates (Figures 3D and 3E). Lesions to frontal areas caused a significantly greater reduction in delay period firing rates than lesions to parietal areas (Mann-Whitney U = 46.0, p = 0.027). We tested the effects of progressively larger lesions to the frontal and parietal cortex. To increase the size of the lesions, for each lobe we first lesioned the area that caused the biggest drop in delay activity when lesioned individually, then additionally lesioned the area that caused the second biggest drop, and so on (frontal lesion 1: 46d, lesion 2: 46d+8B, lesion 3: 46d+8B+8m, etc.; parietal lesion 1: LIP, lesion 2: LIP+7m, lesion 3: LIP+7m+7B, etc.). When lesioning two frontal regions, the mnemonic delay period activity was completely destroyed throughout the cortex, so the network was no longer able to perform the task. In contrast, progressively larger lesions of the parietal cortex caused only a gradual decrease in frontoparietal delay activity, and even when the entire parietal cortex was removed (10 areas), sufficient residual mnemonic delay period activity remained to allow the cue stimulus to be decoded (Figure 3F).
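In a connectome-based model, a focal lesion can be approximated by silencing an area, i.e., removing its outgoing and incoming long-range connections, and then re-running the task. The sketch below shows this manipulation on a connectivity matrix, again with a hypothetical simulation function standing in for the full model.

import numpy as np

def lesion_connectivity(fln, lesioned_areas):
    """Return a copy of the connectivity matrix with the lesioned areas'
    rows (inputs to the area) and columns (outputs from the area) zeroed."""
    w = fln.copy()
    idx = np.asarray(lesioned_areas)
    w[idx, :] = 0.0
    w[:, idx] = 0.0
    return w

def delay_activity_drop(simulate_delay_activity, fln, lesioned_areas):
    """Reduction in mean frontoparietal delay period activity caused by a lesion.
    simulate_delay_activity(connectivity) -> mean delay rate (hypothetical)."""
    intact = simulate_delay_activity(fln)
    lesioned = simulate_delay_activity(lesion_connectivity(fln, lesioned_areas))
    return intact - lesioned

# Progressively larger frontal lesions, in the order described above
# (frontal_order is a placeholder list of area indices for 46d, 8B, 8m, ...):
# for k in range(1, len(frontal_order) + 1):
#     drop = delay_activity_drop(simulate_delay_activity, fln, frontal_order[:k])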
We subsequently addressed the ability of the model to maintain cue-specific delay period activity in the presence of distractors following precise lesioning of each cortical area. We analyzed trials across all levels of cortical dopamine availability. Lesions to three prefrontal areas (8m, 46d, and 8B), but not other areas, caused complete disruption of distractor-resistant working memory activity in all trials. Lesions to many other areas caused complete reduction of distractor-resistant working memory activity for some trials (corresponding to a particular dopamine range) but not others. The seven lesions causing the greatest disruption of working memory performance were in the frontal cortex (six prefrontal areas and premotor area F7; Figure 3G). The reduction in performance was significantly greater for lesions to frontal cortical areas than parietal areas (Mann-Whitney U = 48.5, p = 0.032). Our simulations thus suggest that (1) lesions to the prefrontal and posterior parietal cortex can cause a significant disruption of delay period activity, (2) frontal lesions have a greater effect on behavior than parietal lesions, and (3) smaller lesions, particularly to the prefrontal cortex, can significantly disrupt performance on more difficult working memory tasks, such as those with distractors. In contrast, larger lesions are required to disrupt performance on simple working memory tasks. Lesions to areas V1 and V2 led to complete loss of visual working memory activity (Figure 3D). However, this was because a visual stimulus must pass through area V1 to gain access to the working memory system. We confirmed this by showing that lesions to V1 and V2 had no effect on working memory when somatosensory stimuli were used (with the stimulus presented to primary somatosensory area 3). In the somatosensory working memory task, lesions to early somatosensory areas and frontoparietal network areas caused memory deficits (Figure S5). This clearly separates early sensory areas, which are required for signal propagation to the working memory system, from core cross-modal working memory areas in the prefrontal and posterior parietal cortex. D1 receptor density (F = 4.72, p = 0.036; Figure 3H) was the strongest anatomical predictor of the lesion effects, and adding hierarchy or spine count to the model did not significantly improve the fit. Thus, our model predicts that lesions to areas with a higher D1 receptor density are more likely to disrupt working memory activity. This prediction can be tested experimentally.
Dopamine shifts between activity-silent and persistent activity modes of working memory
Recent experimental and modeling results show that some delay tasks can be solved with little or no persistent activity (Mongillo et al., 2008; Rose et al., 2016; Watanabe and Funahashi, 2014; Wolff et al., 2017). This has spurred a debate about whether persistent activity or "activity-silent" mechanisms underlie working memory (Lundqvist et al., 2018). Is dopamine modulation throughout the cortex relevant to this debate? We endowed the model with short-term plasticity to assess the possibility of activity-silent working memory in the large-scale network. Short-term plasticity was implemented at all synapses between excitatory cells (using the same parameters as Mongillo et al., 2008) and from excitatory to CB/SST cells. We investigated activity-silent representations by "pinging" the system with a neutral stimulus and reading out the activity generated in response, similar to the experimental protocol in Wolff et al. (2017) (Figure 4A, i). For optimal mid-levels of dopamine release (Figure 4A, ii), the model generated persistent activity that was very similar to the network without short-term plasticity. The strong and distributed activation of the frontal and parietal cortex is reminiscent of the ignition response to consciously observed stimuli (van Vugt et al., 2018).
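In the rate-based form used by Mongillo et al. (2008), each excitatory synapse carries a facilitation variable u and a depression (available-resources) variable x, and its momentary efficacy is proportional to u·x. The sketch below integrates these two variables for a single synapse driven by a brief burst of presynaptic firing, showing the facilitated, "activity-silent" trace that outlasts the activity itself; the parameter values are in the range used by Mongillo et al. (2008) but should be treated as approximate here.

import numpy as np

def stp_trace(rate_fn, t_end=3.0, dt=0.001, U=0.2, tau_f=1.5, tau_d=0.2):
    """Short-term facilitation/depression dynamics (rate-based):

        du/dt = (U - u)/tau_f + U * (1 - u) * r(t)
        dx/dt = (1 - x)/tau_d - u * x * r(t)

    Synaptic efficacy is proportional to u * x."""
    n = int(t_end / dt)
    u, x = U, 1.0
    efficacy = np.empty(n)
    for i in range(n):
        r = rate_fn(i * dt)
        u += dt * ((U - u) / tau_f + U * (1.0 - u) * r)
        x += dt * ((1.0 - x) / tau_d - u * x * r)
        efficacy[i] = u * x
    return efficacy

# Presynaptic population fires at 40 Hz between 0.5 s and 1.0 s, then is silent.
eff = stp_trace(lambda t: 40.0 if 0.5 <= t < 1.0 else 0.0)
for t in (0.4, 1.0, 1.5, 2.5):
    print(f"t = {t:.1f} s: efficacy u*x = {eff[int(t / 0.001) - 1]:.2f}")
# Baseline efficacy is ~U = 0.2; during the burst the synapse is depressed, but
# after firing stops x recovers quickly while u decays slowly (tau_f), leaving
# an elevated efficacy for on the order of a second: a silent memory trace.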
For low and high levels of dopamine release, there was no persistent activity (Figure 4A, iii). However, when we pinged the system with a neutral stimulus, activity relating to the target cue was generated transiently throughout the frontoparietal network (Figure 4A, iii), suggesting that a memory of the target stimulus was stored internally. During the delay period, the synaptic efficacy increased at connections between neurons coding for the target stimulus. Previous models of activity-silent short-term memory have focused on local synaptic changes in the prefrontal cortex (Mongillo et al., 2008). In our model, most of the increase in synaptic efficacy was in synaptic connections from neurons in sensory areas (Figure 4A, iii). We then restricted short-term synaptic plasticity to presynaptic neurons outside of the frontoparietal network. Pinging this system again resulted in activation of target-related activity throughout the frontoparietal network (Figure S6). Next we performed the opposite manipulation and restricted short-term synaptic plasticity to presynaptic neurons in the frontoparietal network. Pinging that system did not lead to activation of the frontoparietal network (Figure S6). This suggests that synaptic plasticity at connections from (presynaptic) prefrontal cortical neurons is not required for activity-silent memory. Finally, we restricted short-term plasticity to local connections. In that network, activity-silent memory recall also failed (Figure S6). This suggests that short-term facilitation in inter-areal feedforward connections from early sensory areas to the frontal and parietal cortex is a potential substrate for "activity-silent" memory in the absence of a strong initial prefrontal response to the stimulus.
Why does the brain have two parallel systems for holding items in short-term memory? To explore this question, we simulated the model using a ping protocol (Wolff et al., 2017) with a distractor. After a behaviorally relevant cue and during the delay period, we introduced a distractor that should be filtered out by the network, followed by a neutral ping stimulus (Figure 4B, i). For mid-level dopamine release, persistent activity coding for the target stimulus is engaged and maintained through the distractor and ping (Figure 4B, ii). The distractor is represented transiently in inferior temporal (IT) and lateral intraparietal cortex (LIP) (thus replicating the experimental results in Suzuki and Gottlieb, 2013) but does not reach most of the frontoparietal network. In the low- and high-dopamine cases, during the ping, the activity-silent mechanism regenerates activity related to the last encoded stimulus, the distractor, in the frontal and parietal cortex (Figure 4B, iii). Thus, pinging from the activity-silent state always recalls the latest item but cannot ignore a distractor. Therefore, dopamine release may serve to encode salient items in working memory and protect them from distraction.
(Figure 4 legend: (A) Ping protocol without a distractor: (i) a target stimulus is followed by a delay and a probe stimulus; (ii) for mid-level dopamine release, target-related activity propagates from V1 through the hierarchy and is maintained as persistent activity throughout the frontoparietal network (top, firing rates; bottom, synaptic efficacy); (iii) for low-level dopamine release, the stimulus response is transient and there is no persistent activity through the delay, but the probe regenerates activity representing the original target throughout the frontoparietal cortex, with the memory stored as an increase in synaptic efficacy, mostly in connections from sensory areas. (B) Ping protocol with a distractor: (i) a target stimulus is followed by a delay, a distractor, another delay, and a probe; (ii) for mid-level dopamine release, target-related persistent activity is maintained through the distractor until the end of the trial; (iii) for low-level dopamine release, the probe regenerates frontoparietal activity related to the most recent stimulus, i.e., the distractor. See also Figure S6.)
Dopamine increases distractor resistance by shifting the subcellular target of inhibition
How does dopamine protect working memory from distraction? To examine this question, we analyzed activity within CR/VIP and CB/SST neurons during a working memory task with a distractor ( Figure 5A). CB/SST and CR/VIP neurons are in competition because they mutually inhibit each other. When CB/SST cell firing is higher, pyramidal cell dendrites are relatively inhibited. Conversely, when CR/VIP cell firing is higher, pyramidal cell dendrites are disinhibited. Each cortical area in the model contains two selective populations of pyramidal, CB/SST, and CR/VIP cells. We first analyzed trials in which the model successfully ignores the distractor. In the target-selective populations, CR/VIP neurons fire at a much higher rate than CB/SST neurons ( Figures 5B and 5C). Thus, the dendrites of the target-selective pyramidal cells are disinhibited, allowing inter-areal target-related activity to flow between cortical areas. In the distractor-selective populations, throughout the frontoparietal network, CB/SST neurons fire at a slightly higher rate than CR/VIP cells. Thus, activity from other cortical areas is blocked from entering the dendrites of distractor-selective pyramidal cells in the frontal and parietal cortex.
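This gating logic can be caricatured with a saturating dendritic transfer function in which long-range excitation must exceed dendritic (CB/SST-mediated) inhibition before any current reaches the soma. In the toy calculation below, with all quantities invented for illustration, the target-selective dendrite is disinhibited because CR/VIP firing exceeds CB/SST firing and therefore passes the inter-areal signal, while the distractor-selective dendrite, with the balance tipped slightly toward CB/SST, passes almost nothing.

def dendritic_current(long_range_exc, dend_inh, g_max=1.0, k=0.5):
    """Saturating current transferred from the dendrite to the soma.
    Dendritic inhibition subtracts from the long-range drive before transfer."""
    drive = max(long_range_exc - dend_inh, 0.0)
    return g_max * drive / (drive + k)

def dendritic_inhibition(cb_sst_rate, cr_vip_rate, w_inh=0.5):
    """CB/SST cells inhibit the dendrite; CR/VIP cells inhibit CB/SST cells,
    so higher CR/VIP firing means less effective dendritic inhibition."""
    return w_inh * max(cb_sst_rate - cr_vip_rate, 0.0)

# Illustrative firing rates (Hz): target-selective populations are disinhibited,
# distractor-selective populations are slightly dominated by CB/SST.
target_inh     = dendritic_inhibition(cb_sst_rate=5.0,  cr_vip_rate=15.0)   # 0.0
distractor_inh = dendritic_inhibition(cb_sst_rate=12.0, cr_vip_rate=10.0)   # 1.0

print(dendritic_current(long_range_exc=1.0, dend_inh=target_inh))      # ~0.67: signal passes
print(dendritic_current(long_range_exc=1.0, dend_inh=distractor_inh))  # 0.0: distractor blocked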
To test the importance of this effect, we transiently inhibited CB/SST2 cells in the frontoparietal network during presentation of the distractor (CB/SST2; Figure 5D). This transient inhibition of CB/SST2 cells was sufficient to switch the network to a distractible state, with the distractor stimulus held in working memory until the end of the trial ( Figure 5D).
Because dopamine increases the strength of inhibition to dendrites and decreases inhibition to somata, it is possible that this aspect of dopamine modulation enhances distractor resistance of the system. We removed this effect of dopamine modulation while leaving dopamine's effects on NMDA and adaptation currents as before (Figure 5E). We repeated the working memory task in the presence of the distractor with a mid-level of dopamine, which normally results in distractor-resistant working memory. Without the dopamine-dependent shift of inhibition from the soma to the dendrite, the system becomes distractible (Figures 5F and 5G). Previous modeling work has shown that persistent activity can depend on local recurrent excitatory connections or a combination of local and inter-areal loops (Mejias and Wang, 2021; Murray et al., 2017). We searched the parameter space for the strength of local and inter-areal excitatory-to-excitatory connections and found that, when a subset of local cortical areas was endowed with sufficient recurrent excitation to generate persistent activity in isolation (e.g., g_EE^self = 0.33 nA, μ_EE = 1.25), high somatic inhibition and low dendritic inhibition were generally associated with distractibility (Figure 5H; Figure S7). Low somatic and high dendritic inhibition were associated with distractor-resistant behavior (Figure 5H; Figure S7). Therefore, the action of dopamine in shifting inhibition from the soma to the dendrite (Gao et al., 2003), via its strong effect on CB/SST cells (Mueller et al., 2020), prevents distractor-related activity from sensory areas from disrupting ongoing persistent activity in the frontoparietal network.
Learning to optimally time dopamine release through reinforcement
In real life, we experience a constant flow of sensory inputs, and our working memory system must be flexible in determining the timing of relevant versus irrelevant information. Dopamine neurons fire in response to task-relevant stimuli (Schultz et al., 1993) but should not fire in response to task-irrelevant distracting stimuli, regardless of timing. We hypothesized that correct timing of dopamine release could be learned by simple reward-learning mechanisms.
(Figure 5 legend: Dopamine increases distractor resistance by shifting the subcellular target of inhibition. (A) Task structure: a target stimulus is followed by a delay, a distractor stimulus, and another delay period. (B) For mid-level dopamine release, persistent target-related activity is present in the frontoparietal network through the delay and the distractor until the end of the trial; each cortical area contains populations of excitatory, CB/SST, and CR/VIP cells responding to the target stimulus (E1, CB/SST1, and CR/VIP1), separate populations sensitive to the distractor stimulus (E2, CB/SST2, and CR/VIP2), and PV cells. (B and C) Throughout the delay period and distractor stimulus, activity in CR/VIP1 is higher than in CB/SST1, disinhibiting the E1 dendrite, whereas activity in CR/VIP2 is slightly lower than in CB/SST2, inhibiting the E2 dendrite. (D) Transient inactivation of CB/SST2 populations in the frontoparietal network during presentation of the distractor renders the network distractible. (E-G) Removing the dopamine modulation of somatic and dendritic inhibition, while leaving the effects of dopamine on NMDA-dependent excitation and adaptation unchanged, makes the network distractible, with distractor-related activity dominating at the end of the trial. (H) Consistently across dopamine levels, higher somatic and lower dendritic inhibition are associated with distractible working memory, lower somatic and higher dendritic inhibition with distractor-resistant working memory, and high dendritic plus high somatic inhibition with no persistent activity; the levels of dendritic and somatic inhibition associated with the standard dopamine modulation used in the rest of the paper are marked by a black square. See also Figure S7.)
We created a simplified model of the ventral tegmental area (VTA) with GABAergic and dopaminergic neurons and connected this to our large-scale cortical model (Figure 6A) (cf. Braver and Cohen, 2000). Cortical pyramidal cells target GABAergic and dopaminergic cells in the VTA (Soden et al., 2020; Watabe-Uchida et al., 2012). Dopaminergic cells are also strongly inhibited by local VTA GABAergic cells (Soden et al., 2020). Dopamine in the model is released in the cortex in response to VTA dopaminergic neuron firing, and cortical dopamine levels slowly return to baseline following cessation of dopaminergic neuron firing (Muller et al., 2014). In the model, the synaptic strengths of cortical inputs from the selected populations to VTA populations are increased following a reward and weakened following an incorrect response (Harnett et al., 2009; Soltani and Wang, 2006). We tested the model on a variant of the target-distractor-ping task introduced earlier (Figures 4B, i, and 6B). For the first 30 trials, the first stimulus (cue 1, red) was rewarded (rule 1). For the following 30 trials, the second stimulus (cue 2, blue) was rewarded (rule 2). For the final 30 trials, we switched back to rule 1 (Figure 6B). By the seventh trial of the first block, distractor-resistant persistent activity emerged, and the first cue was remembered correctly. This behavior persisted until the next block. Following a few trials of the second block, dopamine release in response to the first stimulus was reduced, and neural populations throughout the cortex only transiently represented the first (now irrelevant) stimulus. However, dopamine response to the second stimulus increased, so that persistent activity representing the second stimulus was engaged. Following the second rule switch, the system again switched back to engaging persistent activity in response to the first cue. Additionally, the number of trials to engage appropriate persistent activity decreased gradually with each switch. We further tested the model on a version of the task in which the relevant red cue could be shown first or second within a block before the blue cue became relevant in the second block. The model was also able to learn this task, although it took more trials (10-15) to learn the switch (for the first few blocks). Thus, by means of simple reward-learning mechanisms, the optimal timing of dopamine release can be learned, allowing flexible engagement of distributed persistent activity in working memory.
(Figure 6 legend, panel B: We simulated a task with two cues (red and blue) followed by a probe stimulus. The rewarded stimulus changed every 30 trials. Following each switch, after a few trials, the network learns to store the appropriate stimulus in distributed persistent activity; this depends on high dopamine release in response to the rewarded stimulus and low release in response to the unrewarded stimulus.)
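The learning rule described above, strengthening cortical inputs to the VTA after rewarded trials and weakening them after errors, can be caricatured with two scalar weights, one per cue, that determine how much dopamine each cue releases. The toy simulation below uses invented constants and replaces the full cortical network with a simple threshold-free rule (the cue evoking the larger noisy dopamine release is the one held in memory); it only illustrates the qualitative behavior that, after each rule switch, a few trials suffice before the newly rewarded cue again drives enough dopamine to be encoded.

import numpy as np

rng = np.random.default_rng(1)

def run_blocks(n_blocks=3, trials_per_block=30, lr=0.15, noise_sd=0.1):
    """Toy version of reward-dependent plasticity on cortico-VTA inputs.

    w[c] is the weight from the cue-c-selective cortical population onto VTA
    dopamine neurons, so dopamine release at cue c is proportional to w[c]."""
    w = np.array([0.3, 0.3])
    accuracy_per_block = []
    for block in range(n_blocks):
        rewarded = block % 2                       # alternate the rewarded cue
        correct_trials = 0
        for _ in range(trials_per_block):
            release = w + rng.normal(0.0, noise_sd, size=2)
            held = int(np.argmax(release))         # cue encoded in persistent activity
            reward = held == rewarded
            correct_trials += reward
            # Strengthen the input that drove dopamine release after a reward,
            # weaken it after an incorrect response.
            w[held] += lr if reward else -lr
            w = np.clip(w, 0.05, 1.0)
        accuracy_per_block.append(correct_trials / trials_per_block)
    return accuracy_per_block

print(run_blocks())  # accuracy recovers within each 30-trial block after the rule switch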
DISCUSSION
We uncovered a macroscopic gradient of dopamine D1 receptor density along the cortical hierarchy. By building a novel anatomically constrained model of the monkey cortex, we showed how dopamine can engage distributed persistent activity mechanisms and protect memories of behaviorally relevant stimuli from distraction. This work leads to new predictions that would not have been possible with local circuit models. For example, the model shows that dopamine's enhancement of inhibition from CB/SST-expressing cells to the dendrites of pyramidal cells blocks distracting sensory information from entering the frontoparietal working memory network. Second, when an initial stimulus fails to robustly activate the prefrontal cortex, we found that the memory of the original stimulus can be recalled through an activity-silent synaptic mechanism in inter-areal connections from the sensory to the frontoparietal cortex. Last, our model predicts that dopamine can switch between activity-silent and distributed persistent activity mechanisms, and the timing of dopamine release could be learned through reinforcement. This suggests that distributed persistent activity may be engaged for behaviorally relevant stimuli that need to be remembered and protected from distractors.
A gradient of D1 receptors along the cortical hierarchy
We used quantitative in vitro receptor autoradiography to create a high-resolution, high-fidelity map of cortical dopamine receptor architecture. The dopamine system can also be imaged in vivo using positron emission tomography (PET) and single photon emission computed tomography (SPECT) scans. These scans can provide information regarding individual and group differences but are limited in spatial resolution and signal-to-noise ratio (Abi-Dargham et al., 2002; Froudist-Walsh et al., 2017a; Roffman et al., 2016; Slifstein et al., 2015) and are often unreliable for cortical measurements (Egerton et al., 2010; Farde et al., 1988). It is now possible to map the expression of genes coding for dopamine receptors across the brain. Gene expression methods have certain advantages, especially RNA sequencing, which can provide cell-specific data. However, mRNA expression is not always closely related to, or even positively correlated with, the receptor density at the cell membrane (Arnatkeviciute et al., 2019; Beliveau et al., 2017). Receptor density at the membrane is the functionally important quantity and is measured here directly. The map of D1 receptor density here greatly expands previous descriptions of D1 receptor densities (Goldman-Rakic et al., 1990; Impieri et al., 2019; Lidow et al., 1991; Niu et al., 2020; Richfield et al., 1989). We show that D1 receptor density increases along the cortical hierarchy, peaking in the prefrontal and posterior parietal cortex. A previous study of 12 cortical areas suggested a posterior-anterior gradient of D1 receptor expression (Lidow et al., 1991). Here we assess D1 receptor density in 109 cortical areas, take into account variation in neuron density across the cortex, and show that the D1 receptor gradient more closely follows the cortical hierarchy than a strict posterior-anterior gradient. The distinction is clear, with higher levels of D1 receptor density per neuron in areas of the posterior parietal cortex than in the somatosensory and primary motor cortex. Future work is required to test the degree to which gradients of gene expression accurately capture the receptor gradient (Beliveau et al., 2017; Hurd et al., 2001). The gradient of dopamine D1 receptors is similar to gradients of other anatomical and functional properties described across the cortex, many of which increase or decrease along the hierarchy (Burt et al., 2018; Fulcher et al., 2019; Goulas et al., 2018; Margulies et al., 2016; Sanides, 1962; Shafiei et al., 2020; Wang, 2020). We observed some interesting patterns of D1R density per neuron (Figure 1F), such as a gradual caudorostral increase within the prefrontal cortex, which resembles previously reported gradients of plasticity, laminar connectivity, and abstraction (Badre and D'Esposito, 2009; Riley et al., 2018; Vezoli et al., 2021). Because of the small number of animals and the relatively similar D1R expression levels in several areas of the frontal and parietal cortex, comparison of D1R density between pairs of areas is difficult. As shown originally in Markov et al. (2014a), the hierarchy itself is steep for early sensory areas and becomes shallower for higher-association areas. Therefore, the exact positions of areas like LIP or 10 are not as robustly distinguishable as those of V1, V2, and V4. Nonetheless, we expect the general pattern of an increase in D1R density per neuron along the cortical hierarchy to hold.
Although the D1R labeling per neuron as well as synaptic excitation and inhibition display a smooth gradient, quantitative variations of circuit properties can give rise to a non-smooth pattern of persistent activity along the cortical hierarchy through a phenomenon akin to bifurcations described by the theory of nonlinear dynamical systems (Mejias and Wang, 2021;Wang, 2020). Such a sudden transition was observed in a monkey experiment where elevated persistent activity associated with working memory was absent in the middle temporal area (MT) but significantly present one synapse away in the nearby medial superior temporal area (MST) (Mendoza-Halliday et al., 2014). Simultaneous recording from many parcellated areas using new tools, such as Neuropixels (Jun et al., 2017), from behaving animals could systematically test our model prediction in future experiments. This increasing gradient of dopamine receptors along the cortical hierarchy is a major anatomical basis by which dopamine can modulate higher cognitive processing.
An inverted U relationship between dopamine and distributed working memory activity
Previous experimental and modeling studies have shown an inverted U relationship between D1 receptor stimulation and persistent activity in the prefrontal cortex in monkeys performing working memory tasks (Brunel and Wang, 2001; Vijayraghavan et al., 2007; Wang et al., 2019). Dopamine activity in the VTA is relatively low during the delay period but still has an inverted U-shaped relationship with short-term memory performance in the rat (Choi et al., 2020). In our model, this may be interpreted as the VTA continuing to provide low-level dopamine to the cortex to maintain cortical dopamine levels within the appropriate bounds for distributed persistent activity. We found dense D1 and D2 receptor labeling in the striatum. However, we focused our working memory modeling on the cortex and VTA. Notably, optogenetic manipulation of substantia nigra pars compacta dopamine neurons (which principally target the striatum) does not have specific short-term memory effects (Choi et al., 2020). This suggests that cortical rather than striatal dopamine release is likely to be more important to short-term memory. By constructing a novel large-scale model based on the D1 receptor map and tract-tracing data, we found that the inverted U relationship between D1 receptor stimulation and persistent activity held across the frontal and parietal cortex during working memory. The working memory activity pattern was strikingly similar to that seen experimentally, according to a meta-analysis of 90 electrophysiology studies of delay period activity in the monkey cortex (Leavitt et al., 2017). Analyzing the model showed that the pattern of inter-areal connections was the strongest determinant of the pattern of working memory activity.
Noudoost and Moore (2011) found that injecting a D1 antagonist into FEF led to an increase in firing rates in V4. Similarly, in our model, when cortical dopamine levels are close to the optimal range for working memory (i.e., the peak of the inverted U), reducing D1 receptor stimulation via an antagonist would lead to an increase in V4 activity during the second peak of the response to visual stimulation (Figure S3). However, our model focused on distributed working memory in a large-scale cortical system and was not built to uncover mechanisms of attention or decision-making. Recent electrophysiology and modeling studies of non-human primate attention have suggested that the dominant net effect of attention on neural activity in the sensory cortex is inhibition (Huang et al., 2019; Yoo et al., 2021). This may be consistent with subtle enhancement of firing for neurons whose receptive field is in the focus of attention, combined with greater inhibition of neurons with nearby receptive fields. We showed that somatosensory and visuospatial working memory tasks engage largely overlapping higher cortical areas during the delay period. It is likely that, at a neural level, these networks may overlap only partially. To simulate these mixed inhibitory and excitatory effects of attention and identify the degree to which different types of working memory engage the same neurons, future models will require more neural populations per area, perhaps with structured connectivity, such as a ring (Ardid et al., 2007). Local circuit modeling has shown previously that a circuit designed for working memory is suitable for decision-making (Wang, 2002). Our model may also be suitable for considering decision processes distributed across cortical areas.
Prefrontal and parietal contributions to distributed working memory
It is increasingly feasible to uncover the circuit mechanisms underlying distributed cognitive functions because of advances in recording technology (Jun et al., 2017) and large-scale cortical models (Cabral et al., 2011; Chaudhuri et al., 2015; Honey et al., 2007; Joglekar et al., 2018; Mejias et al., 2016; Mejias and Wang, 2021; Schmidt et al., 2018; Shine et al., 2018). Most previous large-scale cortical models have focused on replicating resting-state functional connectivity (Cabral et al., 2011; Chaudhuri et al., 2015; Honey et al., 2007) or propagation of neural activity along the hierarchy (Chaudhuri et al., 2015; Joglekar et al., 2018; Schmidt et al., 2018), with the notable exception of one recent model that simulated distributed working memory in a network of 30 cortical areas (Mejias and Wang, 2021). Compared with previous efforts, our model additionally includes (1) a D1 receptor gradient; (2) multiple inhibitory cell types and distinct pyramidal cell compartments; (3) at least 33% more cortical areas connected via quantitative graded and directed connectivity data; (4) for some figures, short-term synaptic plasticity; and (5) a VTA module with reinforcement learning mechanisms. The large-scale nature of the model enabled us to investigate the contributions of different brain regions to distributed working memory activity.
Some experimental studies have aimed to dissociate the contribution of the prefrontal and parietal cortex to working memory via temporary inactivations. For example, Chafee and Goldman-Rakic (2000) examined the effects of reversibly cooling the prefrontal or parietal cortex on activity in the other area and behavior during a visuospatial working memory task without a distractor. Cooling affected the FEF (area 8) and nearby prefrontal cortex, including the principal sulcus (areas 46 and 9). Cooling of the parietal cortex included LIP as well as parts of areas DP (dorsal prelunate gyrus), 7A, and 5. Cooling the parietal cortex led to a substantial reduction in prefrontal firing rates with only a minor effect on performance. Cooling the prefrontal cortex led to a substantial reduction in parietal firing rates and a large increase in behavioral errors (Chafee and Goldman-Rakic 2000). This is consistent with our simulation results showing that prefrontal and parietal inactivation can have a robust effect on reducing mnemonic delay activity but that prefrontal inactivation has much larger effects on performance (Figures 3E and 3F). Suzuki and Gottlieb (2013) inactivated areas LIP and dorsolateral prefrontal cortex (dlPFC) using the GABA-A receptor agonist muscimol and assessed performance on a similar visuospatial working memory task with and without distractor stimuli. In these experiments, neither LIP nor dlPFC inactivation caused errors in trials without distractors (Suzuki and Gottlieb, 2013). However, inactivation of dlPFC, but not LIP, led to a dramatic increase in errors on trials with distractors (Suzuki and Gottlieb, 2013). This is consistent with our simulation results showing that precise lesions to dlPFC affect behavior on challenging working memory trials with distractor stimuli, but larger lesions are required to disrupt performance in simple working memory trials without distractors, and lesions to LIP have only subtle effects on performance. This agrees with recent models of distributed working memory that suggest that the prefrontal cortex may have a particularly important role in maintaining distributed persistent activity (Mejias and Wang, 2021;Murray et al., 2017). The effects of lesions on model performance are consistent with recent reports showing that there is a distinction between areas that are active during normal behavior and those that are essential for a computation (Pinto et al., 2019;Zatka-Haas et al., 2021) and that cortical lesions have greater effects on performance in more challenging tasks (Pinto et al., 2019).
Lesions to areas with a high D1 receptor density disrupt working memory

Working memory activity was most disrupted by lesions to areas with a high D1 receptor density, a prediction that can be tested experimentally. Humans with traumatic brain injury often have working memory deficits (Dunning et al., 2016). Pharmacological treatment of these deficits, including with dopaminergic drugs, has had mixed success (Froudist-Walsh et al., 2017b). Our model simulations suggest that D1 agonists or antagonists could be effective at restoring normal working memory functioning following lesions to particular cortical areas, but the correct treatment may depend on the baseline cortical dopamine levels of the individual. Dopaminergic drugs have also been suggested as treatments for individuals with schizophrenia with working memory deficits (Yang and Chen 2005). In individuals with schizophrenia, PV and SST gene expression is reduced across multiple areas of the cortical working memory network (Tsubomoto et al., 2019). Disruption of these inhibitory neurons is likely to contribute to working memory deficits. Future adaptations of our model could allow simulation of working memory deficits and motivate potential treatments for individuals based on their particular anatomy, gene expression, and patterns of cortical dopamine release or receptor density (Abi-Dargham et al., 2002;Slifstein et al., 2015).
A dopamine switch between the activity-silent state and persistent activity

For very low or high levels of D1 receptor stimulation, it was possible to maintain stimulus information in the absence of persistent activity via synaptic mechanisms. This pattern of successful memory recall without frontoparietal delay period activity is reminiscent of a passive short-term memory trace thought to rely on "activity-silent" synaptic mechanisms (Rose et al., 2016; Trübutschek et al., 2017; Wolff et al., 2017) that could occur without ignition of the frontoparietal cortex (Trübutschek et al., 2017, 2019). Previous models with short-term synaptic plasticity have focused on local activity in the prefrontal cortex (Mongillo et al., 2008) and thus implicitly assume that the initial stimulus must significantly engage prefrontal neural activity and store the memory trace via short-term plasticity in local prefrontal connections. However, some stimuli may be remembered without a strong initial prefrontal response. We found that short-term synaptic plasticity in inter-areal connections from sensory to frontoparietal areas was most important for maintaining the silent memory trace. In particular, this is a potential mechanism for activity-silent short-term memory in the absence of a strong initial prefrontal response to the stimulus. It has been proposed that nonspecific excitatory or inhibitory currents could account for switches between active and silent states (Barbosa et al., 2020). Our model suggests that dopamine could, in fact, account for the switch from the silent to the active state. Indeed, because of the inverted-U relationship between dopamine and persistent firing, a dopamine response to the reward at the end of a trial could also terminate persistent activity. Another recent proposal suggests that activity-silent short-term memory could be undertaken via hippocampal-prefrontal episodic memory mechanisms, perhaps in combination with short-term synaptic changes in the cortex (Beukers et al., 2021). Future studies should aim to disentangle the contributions of rapid synaptic changes within the prefrontal cortex (Mongillo et al., 2008), at inter-areal connections from sensory areas (this paper), or in the hippocampus (Beukers et al., 2021) to activity-silent short-term memory in the primate. We found that, in the activity-silent state, the most recently encoded stimulus was always encoded most strongly, even when it was a distractor. This may reflect involuntary encoding of irrelevant stimuli in a short-term synaptic memory trace (Barbosa et al., 2020, 2021). This prediction should hold as the number of distractors is increased. The activity-silent system may still be able to recall earlier stimuli for a limited time when another input biases the network toward the activity pattern used during encoding of the earlier stimulus to trigger pattern completion and recall of the memory (Manohar et al., 2019) or through active forgetting of the distracting stimuli (Wolff et al., 2021). Alternatively, multiple competing memories may be represented in neural activity (Barbosa et al., 2021; Panichello and Buschman, 2021), which would rely on an unspecified selection mechanism and may occur in parallel with short-term synaptic changes. In our model, stimuli stored in persistent activity (and thus dependent on mid-level dopamine release) were more robust against distraction, consistent with drug studies in humans (Fallon et al., 2017a, 2017b). Thus, dopamine release may engage distributed persistent activity to protect memories of important stimuli from distraction.
Dopamine increases distractor resistance by shifting the subcellular target of inhibition
The resilience of the active working memory state in the model depended on CB/SST cells blocking distracting inputs from sensory areas to the dendrites of pyramidal cells in the frontal and parietal cortex. Previous modeling work on local cortical circuits has suggested that greater dendritic and less somatic inhibition could increase distractor resistance (Wang et al., 2004a) and that selective disinhibition of the dendrite (through CR/VIP cells) could allow specific information to be passed through the network (Yang et al., 2016). In our large-scale model, CR/VIP cells selectively disinhibited the dendrites of target-selective cells, allowing target-related activity to flow through the cortical network. D1 receptors in the monkey cortex are more strongly expressed on CB/SST neurons than other interneuron types (Mueller et al., 2020). In agreement with these anatomical findings, application of dopamine to a frontal cortex slice increases inhibition to the dendrites and decreases inhibition to the somata of pyramidal cells (Gao et al., 2003). We found that, as long as local cortical areas (or potentially cortico-subcortical loops) are capable of maintaining persistent activity, then shifting the balance of inhibition from the soma to the dendrite can allow maintenance of an active representation of a stimulus in persistent activity while shielding it from distracting input from sensory areas. The ability of cortical areas to maintain persistent activity itself depends on dopaminergic enhancement of NMDA-dependent excitation. In mice, inhibition of SST neurons in medial prefrontal cortex during the sample period of a spatial working memory task impairs performance and increases representation of irrelevant information in prefrontal activity (Abbas et al., 2018). Consistent with our model, this suggests that SST neurons gate entry of information into working memory and that inhibition of SST neurons in frontoparietal areas allows distracting information to enter.
Learning to engage distributed persistent activity through reinforcement

Distractor resistance in response to all stimuli could render the working memory system inflexible and unresponsive to new, potentially important inputs. Previous studies have shown that lesioning the prefrontal cortex impairs the ability to switch attention between stimuli across trials (Rossi et al., 2007). Our model predicts that the prefrontal cortex is more crucial for persistent activity than for activity-silent short-term memory, which can rely on short-term synaptic changes outside of the prefrontal cortex. We show that, by using a simple reward-based learning mechanism, a cortical VTA model (cf. Braver and Cohen, 2000; Frank, 2005) can successfully perform a task with reversals between the memory cue and distractor stimuli across trials. In our model, the timing of dopamine release in the cortex can be learned to engage distributed persistent activity throughout the frontoparietal network only in response to reward-predicting cues. Dopamine neurons burst about 130-150 ms after reward-predicting stimuli, coinciding with a rise in activity in frontal cortical neurons (de Lafuente and Romo, 2012). Because of the slow dynamics of cortical dopamine (Muller et al., 2014), we suggest that a transient increase in dopamine release in response to the target stimulus (Choi et al., 2020; Schultz et al., 1993) may be sufficient to maintain distributed persistent activity for several seconds. This mechanism may thus be reserved for behaviorally important stimuli that must be protected from distraction even when the behaviorally relevant stimuli change from trial to trial. In contrast, irrelevant or less salient stimuli result in lower dopamine release and may be remembered via silent mechanisms or forgotten. We investigated model performance on a reversal learning task with identical repeated trials within a block. In natural life, no two situations are exactly the same. It is likely that the brain generalizes across similar situations to enable reinforcement learning to be used in practice. This ability to generalize may arise from dopamine-dependent plasticity in the prefrontal cortex. The classic reward-prediction-error hypothesis treats dopamine as a global scalar reward prediction error signal that is spatiotemporally uniform (Schultz, 1998). Here we aim to highlight one form of spatial heterogeneity and suggest that broad dopamine release will affect each cortical area according to its D1 receptor density per neuron. Recent work suggests that there is also temporal heterogeneity in dopamine release, which travels in waves across the mouse striatum (Hamid et al., 2021). Whether such dopamine waves also occur in the cortex or in primates remains to be seen. Even if dopamine is released in waves across the cortex, its effect on cortical areas will depend on the D1 receptor gradient presented here.
Roles of other neuromodulatory and subcortical systems
In addition to dopamine, other neuromodulators, such as acetylcholine (Croxson et al., 2011; Sun et al., 2017; Yang et al., 2013) and noradrenaline (Arnsten et al., 2012), affect prefrontal delay period firing and performance on visuospatial working memory tasks. Cholinergic mechanisms may complement dopaminergic mechanisms. For example, nicotinic alpha-7 receptors depolarize pyramidal cells, which enables NMDA receptors to be engaged via removal of the magnesium block (Yang et al., 2013). This may compensate for a reduction in presynaptic glutamate release in response to D1 stimulation and enable dopamine's permissive effects on NMDA transmission (Seamans et al., 2001). Muscarinic M1 receptor activation closes KCNQ channels, which contribute to the hyperpolarizing effect of high levels of D1 stimulation (Arnsten et al., 2012; Galvin et al., 2020). Thus, M1 stimulation may enable persistent activity over a larger range of dopamine release. The effects of noradrenaline on working memory circuits depend on the targeted adrenergic receptors. Moderate release of noradrenaline engages adrenergic α2A receptors, which may counteract the hyperpolarizing effects of hyperpolarization-activated cyclic nucleotide-gated (HCN) channels (Arnsten, 2000; Arnsten et al., 2012; Li and Mei, 1994; Robbins and Arnsten, 2009) and keep the D1 effects in check by decreasing calcium-cyclic AMP (cAMP) signaling. Greater noradrenergic levels engage α1 and β1 receptors, which promote calcium-cAMP signaling and, at high levels, provide negative feedback via KCNQ and HCN channels (Arnsten et al., 2020). Studies linking neuromodulators to working memory have focused on the dorsolateral prefrontal cortex. Much less is known about the influence of these and other neuromodulators on the distributed network activity that underlies working memory outside of the prefrontal cortex. Future work should focus on the interaction of distinct neuromodulators and how release of different combinations of neuromodulators may affect distributed activity patterns and behavior, taking into account the different distributions of these receptors across the cortex. Subcortical structures, such as the thalamus, may play a significant role in working memory (Fuster and Alexander, 1971; Guo et al., 2017; Jaramillo et al., 2019; Watanabe and Funahashi, 2012). Future experiments and computational modeling studies should aim to disentangle the contribution of the thalamus to sensory working memory and motor preparation (Guo et al., 2017; Watanabe and Funahashi, 2012) and clarify the degree to which such mechanisms are shared across species. When appropriate weighted and directed connectivity data become available, future large-scale cortical models should also integrate further structures, such as the thalamus (Jaramillo et al., 2019), basal ganglia (Wei and Wang, 2016), the claustrum, and the cerebellum to identify their contributions to working memory.
Conclusion
We experimentally found a macroscopic gradient of dopamine D1 receptor density along the cortical hierarchy. By building a novel connectome-based biophysical model of the monkey cortex, endowed with multiple types of inhibitory cells, we show how dopamine can engage robust distributed persistent activity mechanisms across connected higher cortical areas and protect memories of salient stimuli from distraction. Because distributed persistent activity is necessary for internal manipulation of information in working memory (Masse et al., 2019; Takeda and Funahashi, 2004; Trübutschek et al., 2019), dopamine release in the cortex may be a key step toward higher cognition and thought.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
ACKNOWLEDGMENTS
We thank Camille Lamy and Pierre Misery for their work in histology, Jorge Mejias for providing an early version of his code for a related model (Mejias and Wang, 2021), and the Wang lab for helpful discussions. This project was funded by NIH/BMBF CRCNS grants (R01MH122024 and 01GQ1902 to N.P.
Lead contact
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Xiao-Jing Wang (xjwang@nyu.edu).
Materials availability
This study did not generate new unique reagents.
Data and code availability

Dopamine D1 receptor per neuron and tract-tracing connectivity data have been deposited at BALSA: 7qKNZ and core-nets and are publicly available as of the date of publication. Accession numbers are listed in the Key resources table.
All original code has been deposited at GitHub: seanfw/dopamine-dist-wm and Zenodo: https://doi.org/10.5281/zenodo.5507279 and is publicly available as of the date of publication. DOIs are listed in the Key resources table.
Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
For in-vitro receptor autoradiography we analyzed the brains of three adult male Macaca fascicularis specimens (between 6 and 8 years old; body weight between 5.2 and 6.6 kg) obtained from Covance (now Labcorp Drug Development), Münster, where they were used as control animals for pharmaceutical studies performed in compliance with legal requirements. All experimental protocols were in accordance with the guidelines of the European laws for the care and use of animals for scientific purposes. Tract-tracing data were obtained from fluorescent retrograde injections of fast blue (FsB) and diamidino yellow (DY) in 29 areas reported in Markov et al. (2014b), supplemented by injections in an additional 11 areas with either FsB (areas 9, OPRO), DY (areas LIP, V6, 25, 32) or cholera toxin subunit B (CTB) (areas 1, 3, 45A, F4, F3). Animals were aged 10-15 years, female and M. fascicularis, except for the LIP injection, which was in M. mulatta. The LIP injection was reported in Mejias et al. (2016). Animals were group housed in cages with access to plastic toys and other enrichment devices. Housing and surgical intervention were in accordance with European procedures and were reviewed by the veterinary and ethical services.
METHOD DETAILS
Overview of anatomical data

In this study, we combine post-mortem anatomical data on receptor densities, white matter connectivity, neuron densities and dendritic spine counts. Each of these four anatomical measures was originally quantified using different parcellations of cortex. Large sections of the temporal lobe are not yet quantified for either the receptor autoradiography data or the tract-tracing connectivity data. Collection of this data is underway and will be made available in future studies. With the exception of the receptor densities in the posterior parietal cortex (Impieri et al., 2019; Niu et al., 2020, 2021), all D1 receptor densities are reported for the first time in this study. The connectivity data for ten of the 40 cortical areas is used here for the first time, but will be described in more detail in an upcoming publication from the Kennedy lab. This enabled us to expand the calculation of the cortical hierarchy to 40 regions.
A note on notation

Subscripts in square brackets, such as [k], are used to denote cortical areas themselves. Subscripts not in brackets, such as i, are used to denote populations of neurons within a cortical area. Superscripts are used to provide further clarifying information. We use the convention that targets are listed before sources, so that g_{i,j} would denote the strength of a connection from neural population j to neural population i. Parameter values are listed in Table S6.
Quantification of receptor density across cortex -in-vitro autoradiography
In order to create a high-resolution and high-fidelity map of cortical dopamine receptor architecture, we used quantitative in-vitro receptor autoradiography (Palomero-Gallagher and Zilles, 2018). Previous dopamine receptor autoradiography has focused on relatively small sections of cortex (Goldman-Rakic et al., 1990; Impieri et al., 2019; Lidow et al., 1991; Niu et al., 2020; Richfield et al., 1989). To create a more comprehensive map of the cortical dopamine receptors, we measured D1 receptor density across 109 cortical areas, and D1 and D2 receptors in the basal ganglia.
Animals were sacrificed by means of an intravenous lethal dose of sodium pentobarbital. Brains were removed immediately from the skull, and brain stem and cerebellum were dissected off in close proximity to the cerebral peduncles. Hemispheres were separated and then cut into a rostral and a caudal block by a cut in the coronal plane of sectioning between the central and arcuate sulci. These blocks were frozen in isopentane at −40°C to −50°C, and then stored in airtight plastic bags at −70°C. Each block was serially sectioned in the coronal plane (section thickness 20 µm) using a cryostat microtome (CM 3050, Leica, Germany). Sections were thaw-mounted on gelatine-coated slides, freeze-dried overnight and processed for visualization of D1 or D2 receptors, cell bodies (Merker, 1983) or myelin (Gallyas, 1979). Quantitative in-vitro receptor autoradiography was applied to label dopaminergic D1 and D2 receptors according to previously published protocols (Palomero-Gallagher and Zilles, 2018; Zilles et al., 2002), encompassing a preincubation, a main incubation and a final rinsing step. For visualization of the D1 receptor, sections were first rehydrated and endogenous substances removed during a 20-minute preincubation at room temperature in a 50 mM Tris-HCl buffer (pH 7.4) containing 120 mM NaCl, 5 mM KCl, 2 mM CaCl2 and 1 mM MgCl2. During the main incubation, sections were incubated with either 0.5 nM [3H]SCH 23390 alone (to determine total binding), or with 0.5 nM [3H]SCH 23390 and 1 µM of the displacer mianserin (to determine the proportion of displaceable, non-specific binding) for 90 minutes at room temperature in the same buffer as used for the preincubation. Finally, the rinsing procedure consisted of two 20-minute washing steps in cold buffer followed by a short dip in distilled water. For visualization of the D2 receptor, sections were preincubated with 50 mM Tris-HCl buffer (pH 7.4) containing 150 mM NaCl and 1% ascorbate. In the main incubation, sections were incubated with either 0.3 nM [3H]raclopride alone, or with 0.3 nM [3H]raclopride and 1 µM of the displacer butaclamol for 45 minutes at room temperature in the same buffer as used for the preincubation. Rinsing consisted of six 1-minute washing steps in cold buffer followed by a short dip in distilled water. Specific binding is the difference between total and non-specific binding. Since the ligands and binding protocols used resulted in a displaceable binding that was less than 5% of the total binding, total binding is considered to be equivalent to specific binding. Sections were dried in a cold stream of air and exposed together with plastic scales of known radioactivity against tritium-sensitive films (Hyperfilm, Amersham) for six (for the D1 receptor) or eight (for the D2 receptor) weeks, and the ensuing autoradiographs were processed by densitometry with a video-based image analyzing technique (Palomero-Gallagher and Zilles, 2018; Zilles et al., 2002). Autoradiographs were digitized using a CCD camera and stored as 8-bit gray value images with a spatial resolution of 2080x1542 pixels.
Gray values (g) in the co-exposed scales as well as experimental conditions were used to create a regression curve with which gray values in each pixel of an autoradiograph were transformed into binding site densities (Bmax) in fmol/mg protein by means of the formula

Bmax = R / (E · B · W_b · s_a) · (K_D + L) / L,

where R is the radioactivity concentration (cpm) in a scale, E the efficiency of the scintillation counter used to determine the amount of radioactivity in the incubation buffer, B the number of decays per unit of time and radioactivity, W_b the protein weight of a standard, s_a the specific activity of the ligand, K_D the dissociation constant of the ligand, and L the free concentration of the ligand during incubation. For visualization purposes only, autoradiographs were subsequently pseudo-color coded by linear contrast enhancement and assignment of equally spaced density ranges to a spectral arrangement of eleven colors. Cortical areas were identified by cytoarchitectonic analysis, and receptor densities were measured at comparable sites in the adjacent sections processed for receptor visualization. The mean receptor density for each area over a series of 3-5 sections per animal and receptor was determined from density profiles extracted vertical to the cortical surface using MATLAB-based in-house software (Palomero-Gallagher and Zilles, 2018).
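As a rough illustration of this conversion, the Python sketch below fits a calibration curve to the co-exposed scales and applies the formula above. The calibration values, image, and parameter numbers are made up for illustration only and are not values used in this study; the actual calibration regression may also be nonlinear.

```python
import numpy as np

def gray_to_bmax(gray, scale_gray, scale_R, E, B, Wb, sa, KD, L):
    """Convert autoradiograph gray values to binding-site densities (fmol/mg protein).

    A calibration curve is fit to the co-exposed plastic scales (known radioactivity
    R against measured gray value); each pixel's gray value is converted to a
    radioactivity estimate and then to Bmax = R / (E*B*Wb*sa) * (KD + L) / L.
    """
    # Simple linear calibration from gray value to radioactivity (cpm).
    coeffs = np.polyfit(scale_gray, scale_R, deg=1)
    R = np.polyval(coeffs, gray)
    return R / (E * B * Wb * sa) * (KD + L) / L

# Illustrative (made-up) calibration scales and image
scale_gray = np.array([10.0, 40.0, 80.0, 120.0, 160.0, 200.0])
scale_R = np.array([50.0, 300.0, 900.0, 1800.0, 3000.0, 4500.0])  # cpm
image = np.random.default_rng(0).uniform(10, 200, size=(64, 64))
bmax = gray_to_bmax(image, scale_gray, scale_R,
                    E=0.45, B=2.22e6, Wb=8e-5, sa=85.0, KD=0.5, L=0.5)
```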
Retrograde tract-tracing
The inter-areal connectivity data in this paper is part of an ongoing effort to map the cortical connectome of the macaque using retrograde tract-tracing (Markov et al., 2013, 2014a, 2014b). For each target area, a retrograde tracer was injected into the cortex. The tracer was taken up in the axon terminals in this area, and retrogradely transported to the cell bodies of neurons that projected to the target. These cell bodies could be throughout the brain. Each of these cell bodies in cortex was counted as a labeled neuron (LN). The number of labeled neurons was counted in all cortical areas except for the injected target area. The cortical areas that send axons to the target area are called source areas. As there are uncontrollable differences in tracer volume and uptake between injections, we estimated the strength of connections as follows. For a given injection, the total number of cell bodies in the cortex outside of the injected (target) area was counted. The number of labeled neurons within a source cortical area was then divided by the number of labeled neurons in the whole cortex (excluding the target area), to give a fraction of labeled neurons (FLN). The FLN was averaged across all injections in a given target area. For this calculation, we include all areas in the entire cortical hemisphere (n_areas = 91). The subiculum (SUB) and piriform cortex (PIR) have a qualitatively different laminar structure to the neocortical areas, and thus supra- and infra-laminar connections (and thus the SLN) from these areas are undefined. We thus removed all connections from these areas from the following calculations (n_areas,SLN = 89). These connectivity data are available on the core-nets website.
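A minimal sketch of the FLN calculation described above, assuming labeled-neuron counts are available as arrays indexed by cortical area, could look as follows.

```python
import numpy as np

def fraction_labeled_neurons(counts, target_idx):
    """Compute FLN for one injection.

    counts     : 1-D array of labeled-neuron counts per cortical source area
    target_idx : index of the injected (target) area, excluded from the total
    """
    counts = np.asarray(counts, dtype=float)
    mask = np.ones(len(counts), dtype=bool)
    mask[target_idx] = False                      # exclude the injected area itself
    total = counts[mask].sum()
    fln = np.zeros_like(counts)
    fln[mask] = counts[mask] / total
    return fln

def mean_fln(injections, target_idx):
    """Average FLN across repeat injections into the same target area."""
    return np.mean([fraction_labeled_neurons(c, target_idx) for c in injections], axis=0)
```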
Estimation of the cortical hierarchy

Following Markov et al. (2014a), we estimate the hierarchical position h of each area using the SLN values of its connections. Feedforward connections tend to originate in the supragranular layers, while feedback connections tend to originate in the deep layers of the source area (Barone et al., 2000; Felleman and Van Essen, 1991). Moreover, if a target area occupies a much higher hierarchical position than the source area, a greater proportion of the neurons emerge from the supragranular layers of the source area than if the two areas are closer in the hierarchy (Barone et al., 2000). Likewise for the feedback connections, a greater hierarchical distance between the areas implies that the higher area sends a greater proportion of its projections from the infragranular layers. This implies that the fraction of neurons coming from the supragranular layers in a given connection gives an estimate of the relative hierarchical position of two connected areas (Barone et al., 2000; Markov et al., 2014a). Here, following Markov et al. (2014a), we estimate a set of hierarchical levels (one per area) that best predicts the SLN values for all connections in the dataset. The model to estimate the hierarchy has the form

g(E(SLN)) = Xβ   (Equation 4)

where g is a function that links the SLN of the connection between areas to the hierarchical distance between them. β is a column vector of length n_areas,SLN containing the hierarchy values to be estimated. X is an incidence matrix of shape n_conns × n_areas,SLN, where n_conns (= 2,619) is the number of observed (non-zero) connections between cortical areas in the remaining dataset. Each row in X represents a connection, and each column represents a cortical area. All entries in each row equal 0 except for the column corresponding to the source area, which has a value of −1, and the target (recipient) area, which has a value of 1 (Strang, 1993).
The hierarchical values can be estimated with maximum likelihood regression. However, the model is singular (the rows sum to zero). In order to make the model identifiable, we therefore removed one column from X. We chose to remove the column corresponding to area V1, which is therefore forced to have a hierarchical value of 0. However, the choice of column is unimportant, as it is possible to estimate negative hierarchical values (in the case that other areas are lower than V1 in the hierarchy).
We used the beta-binomial model. The binomial parameter p corresponds to the proportion of successes. This is thought to be a random variable following a Beta distribution. The beta-binomial distribution depends on two parameters, the mean (μ, here the SLN) and the dispersion (φ). The beta-binomial model can account for the overdispersion of the neural count data. Note that the SLN of each measured connection is input into the model, without averaging across repeated injections.
The likelihood is a beta-binomial density in which q is the number of neurons projecting from the supragranular layers, n is the number of neurons projecting from all layers, and B is the beta function defined as

B(x, y) = ∫_0^1 p^{x−1} (1 − p)^{y−1} dp   (Equation 6)

with x, y > 0. We fit the model using μ = Φ(Xβ), where Φ is the cumulative Gaussian, as it maps the real numbers to the (0, 1) range. Φ^{−1} = g in Equation 4 is the probit link function. The hierarchy is estimated by maximizing the log-likelihood. For more details see Markov et al. (2014a). We then rescaled the hierarchy so that the maximum hierarchical value within the 40-region complete subgraph (containing all injected areas) equaled 1, i.e., each hierarchy value was divided by the maximum hierarchy value in the subgraph, for all cortical areas k in the complete 40-area subgraph.
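The sketch below illustrates one way such a probit-link beta-binomial fit could be set up in Python. It uses a generic mean/precision parameterization of the beta-binomial with a fixed precision, whereas the actual analysis also estimates the dispersion; function and variable names are ours, not those of the original code.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.stats import norm
from scipy.optimize import minimize

def betabinom_logpmf(q, n, mu, s):
    """Beta-binomial log-pmf with a = mu*s, b = (1-mu)*s (mean/precision form)."""
    a, b = mu * s, (1.0 - mu) * s
    log_choose = gammaln(n + 1) - gammaln(q + 1) - gammaln(n - q + 1)
    return log_choose + betaln(q + a, n - q + b) - betaln(a, b)

def fit_hierarchy(src, tgt, q, n, n_areas, s=5.0):
    """Estimate hierarchy values from SLN counts with a probit link.

    src, tgt : integer area indices of each connection's source and target
    q, n     : supragranular and total labeled-neuron counts per connection
    Area 0 (e.g., V1) is pinned to a hierarchy value of 0 for identifiability.
    """
    X = np.zeros((len(src), n_areas))
    X[np.arange(len(src)), tgt] = 1.0
    X[np.arange(len(src)), src] = -1.0
    X = X[:, 1:]                                   # drop the pinned area's column

    def nll(beta):
        mu = np.clip(norm.cdf(X @ beta), 1e-6, 1 - 1e-6)
        return -np.sum(betabinom_logpmf(q, n, mu, s))

    res = minimize(nll, np.zeros(n_areas - 1), method="L-BFGS-B")
    h = np.concatenate(([0.0], res.x))
    return h / h.max()                             # rescale so the top area equals 1
```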
For the circular embedding of the connectivity data, we estimate angles θ_i for the areas A_i so that a smaller angular distance between two areas corresponds to a higher connectivity strength (Chaudhuri et al., 2015). The dissimilarity d(A_i, A_j) is defined as

d(A_i, A_j) = −log10(FLN(A_i, A_j)) for FLN(A_i, A_j) > 0, and
d(A_i, A_j) = −log10(FLN_min) for FLN(A_i, A_j) = 0,

where FLN_min = 10^{−7}, a value smaller than any FLN in the dataset. The angles θ_i are assigned to each area such that the pairwise angular distances best reproduce these dissimilarities. The estimated angles θ_i are constrained to lie within the range [0, 1] and then mapped onto [0, 2π]. The radial distance from the center of the circle is r_i = sqrt(1 − h_i), where h_i is the hierarchical value of the area, as defined above.
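A simplified sketch of this embedding is shown below. The exact optimization criterion of Chaudhuri et al. (2015) is not reproduced here; instead, a generic stress function matches normalized angular distances to normalized dissimilarities, so the result should be read as an illustration of the idea rather than the study's procedure.

```python
import numpy as np
from scipy.optimize import minimize

def circular_embedding(fln, hierarchy, fln_min=1e-7):
    """Place areas on a disc: angle from connectivity dissimilarity, radius from hierarchy."""
    fln_sym = np.maximum(fln, fln.T)                  # treat connectivity symmetrically
    d = -np.log10(np.where(fln_sym > 0, fln_sym, fln_min))
    d_scaled = d / d.max()

    n = fln.shape[0]
    iu = np.triu_indices(n, k=1)

    def stress(theta):                                # match angular gaps to dissimilarities
        ang = np.abs(theta[:, None] - theta[None, :])
        return np.sum((ang[iu] - d_scaled[iu]) ** 2)

    res = minimize(stress, np.random.default_rng(1).uniform(0, 1, n),
                   bounds=[(0, 1)] * n)
    theta = res.x * 2 * np.pi                         # map [0, 1] onto [0, 2*pi]
    r = np.sqrt(1.0 - hierarchy)                      # higher areas sit closer to the center
    return theta, r
```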
Integration of anatomical datasets
All anatomical data was mapped to the appropriate parcellations on the Yerkes19 surface. For the present study, we mapped all data to the 40 area Lyon subgraph (Markov et al., 2014b), as the areas in this parcellation were generally larger than those in the Julich Macaque Brain Atlas (Impieri et al., 2019;Niu et al., 2020;Rapan et al., 2021;this paper) and the Queensland (spine count) injection sites (Elston, 2007), and closer to standard areal descriptions than the Vanderbilt (neuronal density) (Collins et al., 2010) sections. The receptor densities were quantified in 109 cortical regions defined by cyto-and receptor-architecture. The method for the delineation of cortical region borders is described in (Impieri et al., 2019;Niu et al., 2020;Rapan et al., 2021). Using the same method, anatomists (NPG, MN, LR) identified cortical areas on the basis of the receptor and cyto-architecture. See Figure 1 for the definition of the areas. Anatomists carefully drew (NPG, MN, LR) and independently revised (NPG, MN, LR, SFW) defined borders on the Yerkes19 cortical surface (Donahue et al., 2016) to enable comparison with other data types. The D1 receptor data was mapped to the Lyon atlas as follows. For each area in the Lyon atlas, we searched for overlaps with areas in the Julich Macaque Brain Atlas. If more than 50% of the vertices within the area were also in the Julich Macaque Brain Atlas, the D1 receptor density for the area was calculated. All vertices within each Julich area were assigned the mean value for that area. We averaged the D1 receptor density across all vertices that lay within both the Lyon area and the Julich Macaque Brain Atlas, thus performing a weighted average of the D1 receptor densities according to the degree of spatial overlap. Thirty-two of the 40 Lyon areas were assigned D1 receptor density in this way, with the remaining eight areas not overlapping sufficiently with the Julich Macaque Brain Atlas. Due to the strong positive correlation between the D1 receptor/neuron density and the hierarchy (Figure 1), for the simulations we inferred values for the remaining eight regions using linear regression with hierarchy as the independent variable and D1 receptor/neuron density as the dependent variable.
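The overlap-weighted assignment can be sketched as follows, assuming per-vertex area labels for the two atlases on the common surface; the data structures and the handling of areas below the 50% overlap threshold are illustrative.

```python
import numpy as np

def overlap_weighted_density(lyon_labels, julich_labels, julich_density, min_overlap=0.5):
    """Assign a D1 density to each Lyon area from the overlapping Julich areas.

    lyon_labels, julich_labels : per-vertex area labels on the common surface
                                 (negative where a vertex is unassigned)
    julich_density             : dict mapping Julich area label -> D1 density
    """
    out = {}
    for area in np.unique(lyon_labels):
        if area < 0:
            continue
        verts = lyon_labels == area
        covered = verts & (julich_labels >= 0)
        if covered.sum() / verts.sum() < min_overlap:
            out[area] = np.nan              # later inferred from the hierarchy by regression
            continue
        vals = np.array([julich_density[j] for j in julich_labels[covered]])
        out[area] = vals.mean()             # averaging per vertex weights areas by overlap
    return out
```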
The in-vitro autoradiography data accurately quantifies the density of receptors across cortex. However, it is important to bear in mind that the density of neurons also varies across the cortex. Collins et al. (2010) measured the density of neurons across the entire macaque cortex using the isotropic fractionator (a.k.a. brain soup) method. In the original paper, the cortex was divided into 42 regions and displayed on a flatmap, with anatomical landmarks labeled (Figures 2 and S1 of that paper). The borders of these regions were drawn on the Yerkes19 surface by SFW with reference to the original paper (Collins et al., 2010), several anatomical papers from the same group (Beck and Kaas, 1999;Cerkevich, et al., 2014;Kaas, 2004), the Julich Macaque (109 areas) and the Lyon (Markov-132) atlases (Donahue et al., 2016;Markov et al., 2014b), and were independently assessed by anatomists (LR, MN, NPG). The neural density data covered the entire cortex. As such, we assigned neural density to each area in the Lyon atlas, weighted by the spatial overlap with the original regions in the Vanderbilt atlas. D1 receptor density was divided by the neuron density to give the D1 receptor/ neuron density in each area. The neuron density was in units of neurons per gram. To estimate the receptor density in fmol per neuron, we used the previously reported figure that 8% of brain tissue is protein (McIlwain and Bachelard, 1972). This amounts to multiplying by a constant, and does not affect the correlations or the effect of the dopamine gradient in the model.
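The conversion from receptor density per milligram of protein to receptors per neuron amounts to a fixed rescaling, sketched below with the 8% protein assumption made explicit.

```python
def d1_per_neuron(d1_fmol_per_mg_protein, neurons_per_gram, protein_fraction=0.08):
    """Convert D1 density (fmol per mg protein) to fmol per neuron.

    neurons_per_gram : neuron density in neurons per gram of tissue
    protein_fraction : assumes ~8% of brain tissue mass is protein
    """
    fmol_per_mg_tissue = d1_fmol_per_mg_protein * protein_fraction
    fmol_per_gram_tissue = fmol_per_mg_tissue * 1000.0
    return fmol_per_gram_tissue / neurons_per_gram
```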
The Lyon atlas used to define the interareal connectivity data (Markov et al., 2014b) is already available on the Yerkes19 surface (Donahue et al., 2016). The complete subgraph of injected areas including bidirectional connectivity has been expanded from 29 areas in Donahue et al. (2016) to the 40 areas used in this paper.
For the spine count data, outlines of the 27 injection sites were drawn on the Yerkes19 surface by SFW with reference to the original papers (most of which had substantial anatomical description and hand-drawn maps), as well as anatomical papers cited within the original papers (Cavada and Goldman-Rakic, 1989;Preuss and Goldman-Rakic, 1991;Seltzer and Pandya, 1978) and the Lyon and Julich Macaque Brain Atlases. Direct comparison with the hand-drawn maps was possible for areas V1, V2, MT, LIPv, 7a, V4, TEO, STP, IT, Ant. Cing., Post. Cing, TEpd, 12vl, A1, 3b, 4, 5, 6, 7b, 9, 13, 46, 7 m (Elston, 2007). Areas 10, 11 and 12 were described with reference to Preuss and Goldman-Rakic (1991). The injection in area TEa used the maps in Seltzer and Pandya (1978) for area definition. We used these maps to approximate the injection location. Area STP was identified with the corresponding region STPp in the atlas of Felleman and Van Essen (1991). Area FEF was identified as lying on the anterior bank of the medial aspect of the arcuate sulcus, as described by Elston (2007). All identified injection sites on the cortical surface were independently verified by MN, LR and NPG. Spine count data was expressed according to injection sites, rather than entire cortical areas. As such, we found the number of vertices from each injection site overlapping with each area in the Lyon atlas. For each Lyon area, the spine count was an average of the spine counts for all the injection sites overlapping with the area, weighted by the number of vertices of each injection site contained within the area. In this way we estimated the spine counts on pyramidal cells in 24 of the 40 regions in the Lyon atlas. Based on the strong positive correlation between spine count and cortical hierarchy (r = 0.61, p = 0.001), and following previous work (Chaudhuri et al., 2015;Mejias and Wang 2021), we inferred the spine count for the remaining regions based on the hierarchy using linear regression.
Neuroanatomists (NPG, LR, MN) classified each of the 109 cortical areas for which D1 receptor data is available as being either granular, or agranular, and according to the ratio of cell body size between layers III and V.
Delineations of the areal borders for each atlas, and the anatomical data in the Yerkes19 space are available on the BALSA database.
Overview of dynamical models
We first describe the connectivity structure of our local circuit model, and how dopamine modulates the efficacy of these connections. We then describe a large-scale dynamical model, in which the local circuit is used as a building block, and placed in each of 40 cortical areas. We describe the various steps to building the large-scale model, including how to connect the cortical areas, apply heterogeneity of excitation and the gradient of dopamine. Lastly, we describe how we simulated working memory tasks, lesions and transient inhibition in this model.
Description of the local cortical circuit
We describe a local cortical circuit containing populations of four distinct types of neurons. This is conceptually related to previous computational models of working memory involving multiple types of interneurons (Tanaka, 1999; Wang et al., 2004a), and uses a mean field reduction of a spiking model (Brunel and Wang, 2001; Wong and Wang, 2006). PV, CB/SST and CR/VIP cells differed in the threshold and slope of their input-output function (f-I curve) (Bacci et al., 2003), local (Adesnik et al., 2012; Jiang et al., 2015; Muñoz et al., 2017; Pfeffer et al., 2013; Tremblay et al., 2016) and long-range connectivity (Lee et al., 2013; Wall et al., 2016), adaptation rates (Kawaguchi, 1993; Mendonça et al., 2016; Schuman et al., 2019), and NMDA/AMPA ratio (Lu et al., 2007).
The connectivity structure and strengths of the local circuit are based on a synthesis of anatomical and physiological studies, and are captured in the local connectivity matrix G (Tables S1-S3; Jiang et al., 2015; Kalisman et al., 2005; Lee et al., 2013; Ma et al., 2012; Markram et al., 1997; Pfeffer et al., 2013; Silberberg and Markram, 2007; Walker et al., 2016). Note that connection probability and synaptic strength between neural types are generally positively correlated (Jiang et al., 2015). This simplifies the process of identifying the relative strengths of connections between neural populations in the circuit.
We grouped the pyramidal neurons into two separate populations. Each of these populations is selective to a particular visual feature (such as a region of visual space). Pyramidal cells excite all cell types in the circuit, with different strengths. We model two compartments in the pyramidal cells. One compartment represents the soma and proximal dendrites, and the other the distal dendrites. The dendrite is modeled as a simplified nonlinear function, adapted from Yang et al. (2016). Pyramidal cells target the soma and proximal dendrites of other pyramidal cells in the same cortical area (Kalisman et al., 2005; Markram et al., 1997; Petreanu et al., 2009). Each type of inhibitory neuron has a unique pattern of connectivity. The first inhibitory cell type targets the perisomatic area of the pyramidal cells. These cells express parvalbumin (PV) and are fast spiking (Jiang et al., 2015; Kawaguchi, 1993, 1995). They are basket cells with axons that branch across wide distances, which allows them to inhibit pyramidal cells in neighboring populations (Helmstaedter et al., 2009; Kawaguchi, 1995). They also inhibit other PV neurons (Jiang et al., 2015; Pfeffer et al., 2013). Compared to other inhibitory neurons, PV neurons receive a smaller proportion of excitatory inputs via NMDA receptors (Lu et al., 2007; Wang and Gao, 2009). The second type of inhibitory neuron targets the distal dendrites of excitatory cells. In non-human primates, dendrite-targeting cells express calbindin (DeFelipe et al., 1989). The best characterized dendrite-targeting cell type in rodents is the Martinotti cell, which expresses somatostatin (CB/SST) (Wang et al., 2004b). These cells target all other cell types, while avoiding other Martinotti cells (Jiang et al., 2015). They also receive a strong lateral projection from pyramidal cells in neighboring columns (Adesnik et al., 2012) and receive most of their excitation via NMDA receptors (Lu et al., 2007). The third type of interneuron expresses calretinin and vasoactive intestinal peptide (CR/VIP) (Tremblay et al., 2016) and targets CB/SST inhibitory neurons (Lee et al., 2013). Although gene expression of PV, SST and VIP has been used to successfully distinguish non-overlapping classes of interneurons in primates (Hodge et al., 2019; Krienen et al., 2020), SST antibodies often label relatively few cells in primates (Hendry et al., 1984; Mueller et al., 2018, 2020). SST is often, but not always, co-expressed with CB (González-Albo et al., 2001; Lake et al., 2016). CB- and SST-expressing cells show a similar pattern of expression across cortical layers and areas in the macaque (Dienel et al., 2020). CR is expressed in most VIP neurons in primate cortex (Gabbott and Bacon, 1997; Lake et al., 2016), and both VIP and CR show a similar expression across layers and cortical areas in the macaque (Dienel et al., 2020). However, the investigation of cross-species interneuron type similarities and differences is ongoing and not resolved (Hodge et al., 2019; Kooijmans et al., 2020; Krienen et al., 2020). In our model, the three interneuron types should be more appropriately interpreted according to their synaptic targets, rather than other cellular markers.
See Table S6 for all parameter values.
Dopamine modulation
The density of dopamine D1 receptors per neuron was rescaled, so that the area with minimum density r^raw_min was set to zero, and the area with maximum density r^raw_max was set to one, with all other areas lying in between:

r_[k] = (r^raw_[k] − r^raw_min) / (r^raw_max − r^raw_min)

for all cortical areas k. Network behavior was investigated for differing amounts of cortical dopamine availability (λ_DA). The specific value of λ_DA used for each simulation is shown in the figures and main text. Note that for Figure 6, λ_DA is calculated dynamically throughout each trial. Cortical dopamine availability is related to the fraction of occupied D1 receptors λ_occ through a sigmoid function. The fraction of occupied D1 receptors thus lies between 0 and 1, as expected.
Dopamine increases the proportion of inhibition onto the dendrites of pyramidal cells (Gao et al., 2003). Therefore, we simulated the effect of dopamine on dendritic inhibition as follows. The total amount of dendritic inhibition increases (from a minimum to a maximum strength) as the total amount of occupied receptors increases. The total amount of occupied receptors is equal to the receptor density multiplied by the fraction of occupied receptors.
g^DA_{Edend,SST,[k]} = g^min_{Edend,SST} + λ_occ r_[k] (g^max_{Edend,SST} − g^min_{Edend,SST})

Dopamine decreases the proportion of inhibition onto the soma of pyramidal cells (Gao et al., 2003). Therefore, we simulated the effect of dopamine on somatic inhibition as follows. The total amount of somatic inhibition decreases (from a maximum to a minimum strength) as the total amount of occupied receptors increases.
g^DA_{Esoma,PV,[k]} = g^max_{Esoma,PV} + λ_occ r_[k] (g^min_{Esoma,PV} − g^max_{Esoma,PV})   (Equation 10)

Dopamine also increases the strength of excitatory synaptic transmission via NMDA receptors (Seamans et al., 2001). We modeled this with a sigmoid function, so that dopamine primarily increases NMDA conductances at low and medium dopamine concentrations, before reaching a plateau (Brunel and Wang, 2001).
n_[k] = exp(β_n (λ_occ r_[k] − c_n)) / (1 + exp(β_n (λ_occ r_[k] − c_n)))   (Equation 11)

Here β_n sets the slope of the sigmoid function and c_n sets its midpoint.

Here we restrict calculations to the injected cortical areas i, j, which allows us to simulate the complete bidirectional connectivity structure within the subgraph (n_sub = 40). We use the same parameter values as in Mejias et al. (2016) and Mejias and Wang (2021) (Table S6) to construct our interareal connectivity matrix W. As noted previously, feedforward projections tend to originate in the supragranular layers, while feedback connections originate in the deep layers. Feedforward and feedback connections also likely have different cellular targets. Therefore it is useful to separate the long-distance feedforward and feedback connections.
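The sketch below gathers the area-wise dopamine-modulation rules described in this subsection (density rescaling, receptor occupancy, the inhibitory shift from soma to dendrite, and the NMDA sigmoid). The sigmoid constants and conductance bounds are placeholders, not the model's parameters (those are listed in Table S6).

```python
import numpy as np

def dopamine_modulation(r_raw, lam_da,
                        g_dend_min=0.2, g_dend_max=1.0,
                        g_soma_min=0.2, g_soma_max=1.0,
                        k_occ=10.0, c_occ=0.5, beta_n=10.0, c_n=0.5):
    """Area-wise dopamine effects (sketch; all numerical constants are placeholders).

    r_raw  : raw D1 receptor-per-neuron density, one value per area
    lam_da : cortical dopamine availability (scalar)
    Returns dendritic (SST -> dendrite) and somatic (PV -> soma) inhibitory weights
    and the NMDA modulation factor n for each area.
    """
    # Min-max rescale the D1 density so areas span [0, 1]
    r = (r_raw - r_raw.min()) / (r_raw.max() - r_raw.min())

    # Fraction of occupied D1 receptors: sigmoid of dopamine availability
    lam_occ = 1.0 / (1.0 + np.exp(-k_occ * (lam_da - c_occ)))

    # Occupied receptors shift inhibition from the soma to the dendrite
    g_dend = g_dend_min + lam_occ * r * (g_dend_max - g_dend_min)
    g_soma = g_soma_max + lam_occ * r * (g_soma_min - g_soma_max)

    # NMDA conductance enhancement saturates at high dopamine (Equation 11)
    n = np.exp(beta_n * (lam_occ * r - c_n))
    n = n / (1.0 + n)
    return g_dend, g_soma, n
```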
Interareal population interactions
The majority of interareal connections contain a mixture of axons projecting from deep and superficial layers. Long distance connections onto excitatory cells primarily target the distal dendrites (Petreanu et al., 2009; Table S4). Therefore, in the model we assume that long-distance connections target the dendrites of excitatory cells. CR/VIP cells receive the strongest long-distance inputs of all inhibitory cells, while CB/SST receives the weakest (Lee et al., 2013;Wall et al., 2016;Tables S5 and S6). This suggests that long-range connections effectively disinhibit the dendrite in the target area by exciting CR/VIP interneurons, while concurrently exciting the dendrite, to maximize the probability of information passing from the source area into the target area. Following Mejias and Wang (2021) we assume that feedback connections target inhibitory cells more strongly than feedforward connections. Excitatory cells in different cortical areas with the same receptive fields are more likely to be functionally connected (Zandvakili and Kohn 2015). This is reflected in our model as follows. In the source area, there are two excitatory populations, 1 and 2, each sensitive to a particular feature of a visual stimulus (such as a location in the visual field). Likewise in the target area, there are two populations 1 and 2, sensitive to the same visual features. We assume that 90% of the output of population 1 in the source area goes to population 1 in the target area, and the remaining 10% to population 2. The converse is true for population 2 in the source area (it targets 10% population 1, 90% population 2; Tables S4 and S6).
Disinhibitory circuit in the frontal eye fields
The frontal eye fields (areas 8m and 8l in the model) have a very high percentage of calretinin neurons and relatively few parvalbumin and calbindin neurons (Pouget et al., 2009). To account for this in the model, we increased the relative strength of long-range inputs to CR/VIP cells in areas 8m and 8l, as detailed in Table S6. These changes are critical for persistent activity in areas 8l and 8m, but otherwise do not greatly affect the behavior of the model. Without this change, the overlap between the simulated delay activity pattern and the experimental delay activity pattern (as in Figure 3A) is still extremely high (17/19 areas correct, chi-square = 12.31, p = 0.0004), and the activity pattern depends on both the long-range connectivity (p = 0.001) and the D1 receptor distribution (p = 0.008), but not the spine count (p = 0.19), and lesions to areas 8l and 8m have a smaller effect on distributed persistent activity. All other results are unchanged. We also increased the relative strength of local CR/VIP connections and reduced the relative strength of local PV connections in FEF, but found that this had no effect on model behavior, so the simulations in the paper are presented without the local changes in FEF.
Calculation of long-range currents
Long-range interactions are applied as follows, where z_[k] is the dendritic spine count for area k (as defined above), μ_{E,E} is the long-range connectivity strength onto excitatory cells (see Table S6), n^DA_[k] is the degree of dopamine modulation of NMDA currents for area k, κ_i is the NMDA/AMPA fraction for population i, w_[k,l] is the connection strength from area l to area k, and g^{E,E}_{i,j} sets the long-range strength from population j to population i (Tables S4 and S6).

Cortical dopamine availability

Dopamine neurons fire bursts in response to stimuli that predict reward in working memory tasks (Schultz et al., 1993). Following release in the cortex, dopamine levels remain elevated for seconds (Muller et al., 2014). This is approximately the period of one trial in our simulations. Therefore, for the majority of simulations we approximated this by setting dopamine to a constant value for each trial.
For Figure 6, the cortical model is the same as in previous figures, with the exception that dopamine availability in the cortex, λ_DA, changes dynamically and depends on the firing rates of the dopamine neurons, and g_NMDA = 6.41, g_AMPA = 25.
where τ_DA = 2 s and g_DA = 0.1. In addition, we removed the effect of dopamine on adaptation currents to simplify the learning process.
Reward-based learning
The fraction of cortex-to-VTA synapses in the up state, here denoted c_j for target stimulus j, is updated according to the outcome of the previous trial, using the simplified learning rule of Soltani and Wang (2006):

c_j(T + 1) = c_j(T) + α (1 − c_j(T)) if target j is selected and rewarded, and

c_j(T + 1) = c_j(T) − α c_j(T) if target j is selected and not rewarded.

T is the current trial and α = 0.2 is the learning rate.
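A direct translation of this update rule, with the up-state fractions stored in an array indexed by target (a naming choice of ours), is sketched below.

```python
import numpy as np

def update_synapses(c, chosen, rewarded, alpha=0.2):
    """Trial-by-trial update of the fraction of cortex-to-VTA synapses in the up state.

    c        : array of up-state fractions, one per target stimulus
    chosen   : index of the selected target on this trial
    rewarded : bool, whether the choice was rewarded
    Only the chosen target's synapses are updated (simplified Soltani & Wang, 2006 rule).
    """
    c = c.copy()
    if rewarded:
        c[chosen] += alpha * (1.0 - c[chosen])   # potentiation toward 1
    else:
        c[chosen] -= alpha * c[chosen]           # depression toward 0
    return np.clip(c, 0.0, 1.0)
```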
QUANTIFICATION AND STATISTICAL ANALYSIS
Correlation between D1 receptor density and other anatomical features

Many aspects of brain anatomy are spatially autocorrelated, with nearby brain areas displaying similar anatomy. This spatial autocorrelation is not accounted for in conventional statistical tests, which often assume independence of data points. Failing to account for the spatial autocorrelation can lead to spurious correlations between brain maps. To overcome this problem, we generated random surrogate brain maps with a spatial autocorrelation that closely matched the hierarchy map (Burt et al., 2020). This is done by first randomly permuting the values in the hierarchy map, and then smoothing and rescaling the permuted map to recover the lost spatial autocorrelation. The smoothing is performed by a local kernel-weighted sum of values of the k nearest neighbor regions, where k is chosen to best match the autocorrelation of the original hierarchy map (Burt et al., 2020). Each of the randomly generated surrogate maps is then correlated with the D1 receptor map. The spatially corrected p value is then the fraction of surrogate maps that show a stronger Pearson correlation (negative or positive) with the D1 receptor map than the hierarchy map.
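A simplified version of this permute-smooth-and-correlate procedure is sketched below; unlike Burt et al. (2020), it uses a fixed number of neighbors k and a simple mean-and-variance rescaling rather than optimizing k against the original map's autocorrelation.

```python
import numpy as np

def surrogate_maps(base_map, coords, n_surrogates=1000, k=10, seed=0):
    """Permuted-then-smoothed surrogates that roughly preserve spatial autocorrelation."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]       # k nearest regions (excluding self)
    surrogates = []
    for _ in range(n_surrogates):
        perm = rng.permutation(base_map)
        smoothed = perm[neighbors].mean(axis=1)             # local neighborhood smoothing
        smoothed = (smoothed - smoothed.mean()) / smoothed.std()
        surrogates.append(smoothed * base_map.std() + base_map.mean())
    return np.array(surrogates)

def spatial_pvalue(map_a, map_b, coords, **kwargs):
    """Fraction of surrogates of map_a that correlate with map_b at least as strongly."""
    r_obs = np.corrcoef(map_a, map_b)[0, 1]
    surr = surrogate_maps(map_a, coords, **kwargs)
    r_surr = np.array([np.corrcoef(s, map_b)[0, 1] for s in surr])
    return np.mean(np.abs(r_surr) >= np.abs(r_obs))
```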
To compare the D1R density between granular and agranular cortical areas, we used a non-parametric Wilcoxon rank-sum test. To compare D1R density between areas with internopyramidisation, externopyramidisation and equal layer III and layer V pyramid sizes, we used a non-parametric Kruskal-Wallis test.
Comparing the simulated and experimental patterns of delay activity

In Figures 3A and 3B we compare the activity pattern of the model to the experimental pattern and investigate its dependence on anatomical features. The experimental electrophysiology data was taken from a mega-analysis by Leavitt et al. (2017) of over 90 electrophysiology studies of delay period activity during working memory tasks. We first divided the cortex into persistent activity and non-persistent activity areas for both the experimental data and the simulation (Table S7). Areas were classified into the persistent activity group if at least 3 more studies observed persistent delay period activity than a lack of such activity. We excluded areas that have been assessed in fewer than three studies. Of the areas that have been studied in at least three studies, we classify an area as having persistent activity if more than 50% of studies have found persistent activity. However, the conclusions are not dependent on this threshold or the minimum number of studies (Table S8). Areas in the simulation were classified as having persistent activity if, for the last 500 ms of the trial, they had mean firing rates at least 5 Hz greater than the pre-stimulus baseline firing rates.
To shuffle anatomical connections, we shuffled connections within rows of the FLN matrix, so that the distribution of connections and connection strengths to each area remained constant, with the identity of the connections changing. The same reordering was applied to the SLN matrix. D1 receptor densities and spine counts were shuffled separately. Results were visualized using a custom version of a Raincloud Plot (Allen et al., 2019) to enable concurrent visualization of the distribution and individual simulation results. The p value is calculated as the fraction of simulations based on shuffled anatomical data that produce a delay activity pattern that overlaps with the experimental data as well as (or better than) the original simulation.
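The shuffling and p-value computation could be organized as in the sketch below, where `run_simulation` is a placeholder for the full large-scale model returning a boolean persistent-activity pattern; it is not part of the released code and stands in for whatever simulation wrapper is available.

```python
import numpy as np

def shuffle_within_rows(fln, sln, rng):
    """Permute entries within each row of FLN and apply the same reordering to SLN,
    keeping each area's distribution of incoming connection strengths fixed."""
    fln_s, sln_s = fln.copy(), sln.copy()
    for i in range(fln.shape[0]):
        order = rng.permutation(fln.shape[1])
        fln_s[i], sln_s[i] = fln[i, order], sln[i, order]
    return fln_s, sln_s

def shuffle_pvalue(run_simulation, fln, sln, experimental_pattern, n_shuffles=100, seed=0):
    """p value = fraction of shuffled-connectome simulations matching the experimental
    persistent-activity pattern at least as well as the original simulation."""
    rng = np.random.default_rng(seed)
    overlap = lambda pattern: np.mean(pattern == experimental_pattern)
    obs = overlap(run_simulation(fln, sln))
    null = []
    for _ in range(n_shuffles):
        fln_s, sln_s = shuffle_within_rows(fln, sln, rng)
        null.append(overlap(run_simulation(fln_s, sln_s)))
    return np.mean(np.array(null) >= obs)
```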
Lesioning of cortical areas
In Figures 3C-3H, we simulate the effects of a lesion to individual cortical areas. We do this by removing all input and output connections of the lesioned area in the connectivity matrices W_{E,E} and W_{I,E}. For the statistical analysis of the relationship between anatomical features and lesion effects, we removed areas V1 and V2 from the analysis, because these areas were crucial to the propagation of the visual stimulus but not to working memory per se (Figure 3; Figure S5). We used a stepwise linear regression approach.
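In code, such a lesion reduces to zeroing the lesioned area's row (inputs) and column (outputs) in the long-range connectivity matrices, as sketched below; matrix names follow the target-before-source convention used above.

```python
import numpy as np

def lesion_area(W_EE, W_IE, area_idx):
    """Simulate a lesion by removing all long-range inputs and outputs of one area."""
    W_EE, W_IE = W_EE.copy(), W_IE.copy()
    for W in (W_EE, W_IE):
        W[area_idx, :] = 0.0     # connections into the lesioned area (it is the target)
        W[:, area_idx] = 0.0     # connections out of the lesioned area (it is the source)
    return W_EE, W_IE
```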
The visual nonverbal memory trace is fragile when actively maintained, but endures passively for tens of seconds
Despite attempts at active maintenance in the focus of attention, the fragile nature of the visual nonverbal memory trace may be revealed when the retention interval between target memoranda and probed recall on a trial is extended. In contrast, a passively maintained or unattended visual memory trace may be revealed as persisting proactive interference extending across quite extended intervals between trials in a recent probes task. The present study, comprising five experiments, used this task to explore the persistence of such a passive visual memory trace over time. Participants viewed some target visual items (for example, abstract colored patterns) followed by a variable retention interval and a probe item. The task was to report whether the probe matched one of the targets or not. A decaying active memory trace was indicated by poorer performance as the memory retention interval was extended on a trial. However, when the probe was a member of the target set from the preceding trial, task performance was poorer than a comparison novel probe, demonstrating proactive interference. Manipulations of the intertrial interval revealed that the temporal persistence of the passive memory trace of an old target was impressive, and proactive interference was largely resilient to a simple ‘cued forgetting’ manipulation. These data support the proposed two-process memory conception (active–passive memory) contrasting fragile active memory traces decaying over a few seconds with robust passive traces extending to tens of seconds.
PI is manifested when responding to RP stimuli is slower and less accurate than responding to NRP stimuli, which we term the 'recent probe effect'.
Importantly, the presence of PI can provide insights into the continued availability of old, residual memories, which has implications for theories of forgetting. For example, temporal decay theory expects old items to be gradually forgotten, so extending the intertrial interval (ITI) within the recent probes task should allow RP stimuli to decay and PI to vanish. Temporal distinctiveness models (e.g., Brown, Neath, & Chater, 2007) also predict a reduction in PI over time. In temporal distinctiveness accounts, memories are forgotten through PI, which is especially likely in crowded temporal contexts (e.g., when competing to-be-remembered items occur in close temporal proximity). Consequently, isolating items in time should reduce PI, and over a long ITI there should be less likelihood of confusing events on the current trial with those from the previous trial.
Berman, Jonides, and Lewis (2009) manipulated the ITI within the recent probes task using verbal memoranda, but, in contrast to time-based theories relying on decay or temporal distinctiveness, found time-insensitive PI. Yet PI effects may
Methodology and analysis
All five experiments used variants of the recent probes task (see Fig. 1). Participants reported normal or corrected-to-normal vision and were tested individually. The task was to remember two visual targets over a brief RI and decide whether a single probe matched one of the targets. The three standard probe types were employed, but the two mismatching probes (RP and NRP) were of primary interest, with both task accuracy (the proportion of correct responses) and response times being recorded. NRPs were either novel or had not been seen for multiple trials, whereas RPs matched a target from the previous trial (but never the probe). A 'decay interval' was computed by assessing the amount of time from offset of a target item on trial N − 1 to onset of the probe on trial N.
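For concreteness, the per-trial decay interval and the PI measure could be derived from a trial log as in the sketch below; the column names ('trial', 'probe_type', 'correct', and the timing columns) are assumptions about how such data might be organized, not the study's actual variable names.

```python
import pandas as pd

def add_decay_interval(trials):
    """Decay interval: time from offset of a target on trial N-1 to probe onset on trial N.

    Expects one row per trial with hypothetical columns 'trial',
    'target_offset_time' and 'probe_onset_time' (in seconds).
    """
    trials = trials.sort_values("trial").copy()
    trials["decay_interval"] = (trials["probe_onset_time"]
                                - trials["target_offset_time"].shift(1))
    return trials

def proactive_interference(trials):
    """PI as the accuracy difference between NRP and RP trials (positive = PI present)."""
    acc = trials.groupby("probe_type")["correct"].mean()
    return acc.get("NRP", float("nan")) - acc.get("RP", float("nan"))
```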
Two phenomena were of importance. Firstly, the effect of probe type provided a measurement of PI: less accurate or slower responding on RP than on NRP trials would indicate PI. Secondly, any reduction of PI would manifest as an ITI × Probe Type interaction, with improved performance on RP trials at longer ITIs. Such an interaction would provide evidence for a release from PI, in line with decay and temporal distinctiveness theories, and suggest that PI is time sensitive. Conversely, McKeown et al.'s (2014) active-passive conception predicts time-invariant PI and an absence of an interaction. Yet the active-passive theory also expects performance to decline over the RI, as actively maintained memories are subject to decay.
To assess these effects, repeated-measures ANOVAs were used, and violations of sphericity were corrected through the Greenhouse-Geisser adjustment. Null effects are theoretically relevant in this study, as an absence of an ITI × Probe Type interaction may reflect time-insensitive PI, and so Bayesian repeated-measures ANOVAs were also performed with JASP (JASP Team, 2018, Version 0.9.0.1; Wagenmakers et al., 2018). Each analysis therefore combined frequentist and Bayesian approaches.
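The paper does not name the software used for the frequentist ANOVAs (the Bayesian analyses were run in JASP). Purely as an illustration of the design of these analyses, a comparable two-way repeated-measures ANOVA on simulated accuracy data could be run in Python with statsmodels; all column names and values below are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 13):                     # 12 hypothetical participants
    for iti in ("short", "long"):
        for probe in ("RP", "NRP"):
            base = 0.95 if probe == "NRP" else 0.91   # build in a small PI effect
            rows.append({"subject": subject, "iti": iti, "probe_type": probe,
                         "accuracy": float(np.clip(base + rng.normal(0, 0.02), 0, 1))})
df = pd.DataFrame(rows)

# ITI x probe type repeated-measures ANOVA on accuracy.
# Note: AnovaRM reports uncorrected F tests; with two-level factors a
# Greenhouse-Geisser correction is not needed anyway.
res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["iti", "probe_type"]).fit()
print(res)
```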
For each main effect, a Bayes factor (BF 10 ) was calculated, with values greater than 1 denoting support for the alternative hypothesis and values less than 1 denoting support for the null hypothesis (e.g., a BF 10 of 5 would indicate that the data are five times more likely under the alternative rather than null hypothesis; a BF 10 of 0.2 would indicate that the data are five times more likely under the null hypothesis). Interpretation of Bayes factors followed a recommendation from Jeffreys (1961; see also Dienes, 2014), where values exceeding 3 and 10 denote moderate and strong support for the alternative hypothesis, respectively (conversely, values less than 0.33 and 0.1 offer moderate and strong support for the null hypothesis, respectively). Bayes factors between 0.33 and 3 offer only limited or anecdotal support for either hypothesis, whereas values equaling 1 cannot differentiate the competing predictions. Here, such effects are considered inconclusive.
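A small helper (not from the paper) that maps a BF 10 value onto the bands described above might look as follows:

```python
def interpret_bf10(bf10: float) -> str:
    """Label a Bayes factor (BF10) using the Jeffreys-style bands described above."""
    if bf10 > 10:
        return "strong support for the alternative hypothesis"
    if bf10 > 3:
        return "moderate support for the alternative hypothesis"
    if bf10 >= 1 / 3:
        return "anecdotal / inconclusive"
    if bf10 >= 1 / 10:
        return "moderate support for the null hypothesis"
    return "strong support for the null hypothesis"

for bf in (5, 0.2, 1.0, 12, 0.05):
    print(bf, "->", interpret_bf10(bf))
```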
Assessment of interactions is more challenging, as the Bayesian analysis of complex experimental designs places greater emphasis on model comparison (for instance, comparing a model with an interaction against a model without an interaction). JASP offers a means of assessing the interaction based on Bayesian model averaging and the resulting BF Inclusion gives a single value for each interaction, which is calculated by considering all models with a specific factor and comparing prior and posterior inclusion probability. When assessing interactions, the BF Inclusion needs to be considered against models with the main effects alone, to see whether the interaction adds any value. The Bayesian analysis of interactions in this study uses BF Inclusion .
Experiment 1
Experiment 1 tested McKeown et al.'s (2014) passive memory conception of a persisting PI by varying the ITI, leading to decay intervals of approximately 6 s to 21 s. The experiment also aimed to measure time-based forgetting of the actively maintained within-trial memory by varying the within-trial RI. To provide a more robust assessment of time-dependent forgetting, the positive trials were also included within the analysis.

Participants

Following McKeown et al. (2014), the intention was to recruit at least 15 participants for the experiment. The final sample included 18 psychology students from the University of Wolverhampton (16 females and two males) between the ages of 18 and 38 years (M = 23.67 years, SD = 5.69 years).
Materials Stimuli included 260 images developed by McKeown et al. (2014), which had originally been taken from Snodgrass and Vanderwart's (1980) revised object databank (Rossion & Pourtois, 2004) and distorted into abstract and meaningless shapes (see Fig. 1). The experiment was run on a PC using E-Prime 2.0 software (Psychology Software Tools, Inc., www.pstnet.com/eprime). Stimuli were presented on an Iiyama ProLite P1905S 19-in. LCD monitor at a viewing distance of approximately 70 cm.
Design and procedure The study matched the arrangements of McKeown et al. (2014, Experiment 1) using a within-groups design. Trials commenced with a fixation cross presented in the center of the screen for 500 ms, followed by two targets. Targets were displayed for 500 ms, and participants were instructed to remember both. A single probe stimulus was presented for up to 2 s after an unfilled RI lasting 1 s or 6 s. The task was to determine whether the probe matched either of the targets (using the "S" key for matches and the "L" key for nonmatches). After their response or after 2 s had elapsed, there was an unfilled interval lasting 500 ms or 5.5 s. The fixation cross was then displayed to indicate the beginning of the next trial, creating ITIs lasting 1 s or 6 s.
Fig. 1 Schematic of the recent probes task. Each trial comprises a fixation cross (start of trial), two to-be-remembered targets, a retention interval (RI), and a probe; the blank interval plus the following fixation cross constitutes the intertrial interval (ITI). Here two trials are shown: trial N − 1 (gray boxes) and trial N (black boxes). Participants are requested to remember two target items over a delay, and then decide whether a single probe is a match for one of the targets. Nonmatch trials differ, with the probe either being novel (NRP) or a member of the target set from the previous trial (RP). Variables manipulated include the retention interval (RI) on a trial, the intertrial interval (ITI) and probe type. The decay interval runs from target offset on trial N − 1 to probe onset on trial N
The probe matched one of the targets on 50% of trials and the remaining trials were equally distributed between RP and NRP trials. On RP trials, the probe matched a target seen on the previous trial, whereas on NRP trials the probe could not match any object seen for at least 48 trials. In addition, the combination of targets was unique. There were 16 practice trials and 192 experimental trials (96 positive, 48 RP and 48 NRP). Experimental trials were equally distributed between the four RI/ITI combinations and presented within four blocks of 48 trials. Participants received feedback following a block.
The interaction between probe type and ITI was conventionally significant, F(2, 34) = 3.33, MSE = 0.003, p = .048, η p 2 = 0.16, BF Inclusion = 0.15, with accuracy increasing at longer ITIs on positive trials, but not RP or NRP (see Fig. 2, Panel A). From the Bayesian perspective this interaction was unsupported, and there was no justification for including this interaction in the model. This discrepancy between the frequentist and Bayesian analysis is influenced by the Bayes factor used for interactions (BF Inclusion ), which assesses whether the interaction adds value beyond the main effects alone. In this case, it did not, and the model was dominated by probe type. Indeed, at both ITIs, RP accuracy was around 4% lower than that recorded for NRP trials, showing persisting PI.
Next, response-time data were assessed with a 2 × 2 × 3 repeated-measures ANOVA (see Table 1). The analysis was only performed on trials featuring a correct response, and matching the accuracy data there was a significant effect of RI and strong support for the alternative hypothesis. The traditional ANOVA also showed one significant interaction between probe type and RI, F(2, 34) = 5.69, MSE = 3899.10, p = .007, η p 2 = 0.25, which was influenced by an increase in response times at longer RIs on positive but not negative trials. However, responses on RP trials were slower than those on NRP trials at both 1 s and 6 s, and the Bayes factor was inconclusive (BF Inclusion = 1.32). The model was instead dominated by the main effects of RI and probe type. There was minimal evidence for any other interactions and they were nonsignificant.
Discussion
Experiment 1 documented persistent PI where performance was less accurate and slower on RP trials, in comparison to NRP trials. Strong time-based forgetting of actively maintained information was also observed, with accuracy declining and response times increasing at the longer RI. This effect was largely limited to positive trials, where participants were less successful at recognizing a match between the probe and one of the recently presented targets. The remaining experiments focus on this enduring PI as evidence for a passively maintained memory trace.
Experiment 2
The manipulation of the RI in Experiment 1 meant that participants had to actively maintain representations over delays lasting up to 6 s. The effort involved in maintaining the target items may have strengthened the RP memory, heightening PI. In Experiment 2, a short, standardized RI was employed, reducing the time for active maintenance and permitting target items to be more rapidly forgotten once no longer relevant. This might alleviate PI, as less time is available to deploy active retention strategies during the RI. As in Experiment 1, the ITI was varied and if PI does decrease over time, a reduction in the recent probe effect should be observed at the longer ITIs.
Method
Participants The 22 participants (15 females and 7 males), between the ages of 18 and 48 years (M = 24.5 years, SD = 7.18 years), were either students or staff from the University of Wolverhampton.
Materials
The stimuli from Experiment 1 were used and the experiment was run on a PC using SuperLab 4.5 software (Cedrus Corporation, www.superlab.com). Stimuli were displayed on a HannsG HP191 19-in. LCD monitor at a viewing distance of approximately 80 cm.
Design and procedure The experiment employed a fully within groups design and manipulated the probe type and ITI. The arrangements were broadly similar to Experiment 1, but the fixation cross remained on screen for 350 ms, the targets were presented for 750 ms and the RI was reduced to 350 ms. The probe was presented for a maximum of 2.5 s and participants were asked to press the "C" key to indicate a match and the "N" key to indicate a nonmatch. NRPs were novel. The probe was followed by a blank interval lasting 300 ms, 5 s or 10 s, creating ITIs of 650 ms, 5.35 s and 10.35 s. Participants completed nine practice trials and 144 experimental trials (72 positive, 36 RP and 36 NRP). The different probe types were equally distributed across the three ITIs, trials were presented within three blocks of 48 trials and block order was randomized for each participant. Blocks contained all three ITI durations and participants could take a break between blocks. No feedback was provided.
Results and discussion
This experiment was primarily concerned with the PI effect, which is revealed in the comparison between NRP and RP trials. However, as positive trials can provide information about how the task was approached, responses to matching probes were subjected to a separate analysis (due to experimenter error, only 16 of the 22 participants had positive trials available for analysis). As shown in Fig. 3, correct responding to positive trials was generally high, but declined at the longest ITI.
A one-way repeated-measures ANOVA found a significant effect of ITI duration and extreme support for the alternative hypothesis, F(2, 30) = 32.37, MSE = 0.001, p < .001, η p 2 = 0.68, BF 10 = 308,269.05. Šidák post hoc tests showed performance at the 10.35 s ITI to be poorer than both 650 ms (p < .001) and 5.35 s (p < .001). The latter two ITIs did not differ (p = .969). Analysis of response times on positive trials (see Table 2) also uncovered a significant effect, with strong support for the alternative hypothesis, F(2, 30) = 15.01, MSE = 2554.19, p < .001, η p 2 = .50, BF 10 = 39.48. Šidák post hoc tests found quicker responding at the 650 ms ITI in comparison to both 5.35 s (p = .005) and 10.35 s (p = .001). The latter two ITIs did not differ (p = .303). Thus, outcomes on positive trials in both frequentist and Bayesian approaches were consistent.
Another 2 × 3 ANOVA then assessed the response time data (see Table 2). The effect of probe type was significant, F(1, 21) = 4.91, MSE = 5513.61, p = .038, η p 2 = 0.19, BF 10 = 1.28, with slower responding to RP (M = 796.19 ms) than NRP trials (M = 767.55 ms), but these data did not offer convincing support for either hypothesis and were inconclusive in the Bayesian analysis. Both the ITI effect and the interaction were again nonsignificant (Fs < 0.6, ps > .5), which was supported by the Bayesian analysis (ITI: BF 10 = 0.12; interaction: BF Inclusion = 0.13). In summary, the present experiment replicated the recent probe effect of Experiment 1, but primarily for task accuracy, and this PI effect did not seem to diminish over time.
Experiment 3
The time-invariant PI observed in the prior two experiments is notable given the lengthy decay intervals employed (Experiment 1: approx. 6-21 s; Experiment 2: 5-15 s). This effect may indicate that passively maintained memories do not decay, but to more severely test this notion, Experiment 3 extended the ITI even further, up to 32 s, creating a decay interval of 39 s (the other ITI, 8 s, led to a decay interval of 15 s). Experiment 3 (and the subsequent two experiments) also increased the sample size, to improve statistical power and increase the likelihood of detecting a reduction in PI. A previous study detecting a reduction in PI over time (Mercer & Duffy, 2015) employed a sample of 29 individuals, and effort was made to obtain a similar sample size in the final three experiments.
Method
Participants Thirty undergraduate psychology students (29 females and one male) from the University of Leeds (mean age = 20.73 years, SD = 4.23 years) completed the experiment.
Materials Stimuli matched Experiment 1 and the study was run on a PC using E-Prime 2.0 software. The stimuli were presented on a Dell 1708FP monitor at a viewing distance of approximately 70 cm.
Design and procedure The experiment used a fully within groups design and manipulated the probe type and ITI. Each trial began with a central fixation cross lasting 2 s followed by the two target stimuli, displayed for 500 ms. After a 2-s RI, the probe was displayed for up to 2 s. Participants responded "1" to indicate a match and "3" to indicate a nonmatch. The next trial began after an ITI of 8 s or 32 s.
Results and discussion
Responses to positive trials were not retained in Experiment 3 and so the analysis only focused on the negative probes. Data were examined using a 2 (probe type: RP vs. NRP) × 2 (ITI: 8 s vs. 32 s) repeated-measures ANOVA. For task accuracy (see Fig. 4), both main effects were significant, but convincing evidence for the alternative hypothesis was only recorded for the probe type. Accuracy for RP trials (M = 0.94) was lower than NRP trials (M = 0.98), F(1, 29) = 13.63, MSE = 0.003, p = .001, η p 2 = 0.32, BF 10 = 1349.97, and this outcome provided extreme support for the alternative hypothesis, demonstrating PI. The ITI effect was driven by a very modest increase in performance at the 32 s ITI (M = 0.97), in comparison to that at 8 s (M = 0.96), F(1, 29) = 4.44, MSE = 0.001, p = .044, η p 2 = 0.13, BF 10 = 0.57, but this effect was inconclusive.
The interaction was also significant, F(1, 29) = 5.64, MSE = 0.002, p = .024, η p 2 = 0.16, BF Inclusion = 1.51, and there was an improvement in accuracy on RP trials as the ITI was lengthened, highlighting a release from PI. However, there was no convincing support for the interaction from the Bayesian perspective, showing a discrepancy with the frequentist analysis. As noted above, the BF Inclusion score assesses the value of retaining the interaction within the model, by comparing it against models based on main effects alone. In this case, the model was dominated by the effect of probe type and RP performance remained lower than NRP trials at both ITIs.
A similar two-way repeated-measures ANOVA assessing the reaction time data yielded no main effects, no interactions and evidence more congruent with the null hypothesis. In summary, Experiment 3 found a robust recent probe effect for task accuracy. While PI was modestly alleviated after a 32 s ITI, the Bayesian analysis suggested little justification for including the interaction within the model, supporting the notion of enduring PI.

Fig. 4 Mean proportion of correct responses on RP and NRP trials according to ITI in Experiment 3. Error bars show ±1 SE
Experiment 4
Experiments 1-3 found that old visual representations persist and disrupt current task performance over lengthy intervals. Yet in some circumstances it would be helpful to more effectively manage and regulate PI. The present experiment tested this idea, being influenced by recent evidence suggesting that individuals can control forgetting. For example, Festini and Reuter-Lorenz (2014) combined the recent probes task with a directed forgetting procedure and found that PI for verbal stimuli was prevented when participants were instructed to forget one part of the target array after encoding. Williams, Hong, Kang, Carlisle, and Woodman (2013) reported a similar effect.
Retrospectively cueing an object during the RI also has positive effects on subsequent retention (e.g., Griffin & Nobre, 2003;Landman, Spekreijse, & Lamme, 2003). This 'retro-cueing' effect can be explained in a number of ways (see Souza & Oberauer, 2016), but one account states that the cued item is protected from decay, whereas uncued items are susceptible to time-based forgetting. The alternative 'removal hypothesis' states that uncued items are marked as irrelevant and then actively removed from the memory buffer (Souza & Oberauer, 2016).
The role of time in the retro-cueing effect was demonstrated by Pertzov, Bays, Joseph, and Husain (2013), who had participants remember simple visual stimuli over RIs of different durations. Including a valid retro-cue was beneficial and preserved the object over the RI, whereas uncued and invalidly cued items were subject to temporal forgetting. Pertzov et al. argued that validly cued objects are held in a privileged state, but this has a cost for uncued items, which are forgotten. Nevertheless, other studies report uncued objects do persist in memory (e.g., Gressmann & Janczyk, 2016;Schneider, Mertes, & Wascher, 2015;van Moorselaar, Olivers, Theeuwes, Lamme, & Sligte, 2015).
Experiment 4 incorporated a retro-cue into the recent probes task on half of the trials. Arrangements were similar to Experiments 1-3 except one condition featured a cue during the RI (the "CP" or "cue present" condition) denoting the target to be remembered, and so when the probe occurred, participants had to determine whether it matched the cued object. On CP-positive trials, the probe did match the cued object; on CP-NRP trials, there was not a match, and the probe was novel; but on CP-RP trials the probe matched the uncued target from the previous trial. So, CP-RP trials included a cue, but the probe itself had not been cued when displayed as a target. In the "CA" or "cue absent" condition, the cue was removed and both targets had to be remembered (and again there were three probe types: CA-Positive, CA-NRP, and CA-RP). The ITI was 800 ms or 8.3 s, creating decay intervals of 8.3 s and 15.3 s, respectively.
If participants can forget uncued items, CP-RP stimuli should produce less PI and suffer from time-based decay, whereas this should not occur in the CA-RP condition. In contrast, the active-passive conception predicts an enduring PI effect, which should not be eliminated by the presence of a retro-cue.
Method
Participants The final sample included 31 students from the University of Wolverhampton (26 females and five males) between the ages of 18 and 47 years (M = 24.81 years, SD = 8.38 years). As in Experiments 1 and 3, participants had 2 s to respond, but some struggled with this. Individuals with 10%+ missing data were excluded.
Materials This experiment involved numerous trials needing unique stimuli. To achieve this, a new set of visual objects was created. Each stimulus contained three lines of varying lengths and orientations along with a single shape (a circle, a square, a triangle, a diamond, a star, a cross, an "L" and an "X"). Each shape was used in the construction of 75 stimuli, creating 600 images, all of which were black and presented against a white background (see Fig. 5). In total, 576 of these stimuli were used on experimental trials (and 10 on practice trials). Images were randomly paired to form the targets.
Other stimuli included a pure tone warning signal (4.8 kHz) presented at approximately 65 dB and generated using Audacity (Version 2.0.3), and a black asterisk that served as the retro-cue (Calibri type size: 96). The cue was presented in the same location as the left or right target. The experiment was run on a PC using SuperLab 5 software and a Lenovo ThinkVision 24-in. LCD monitor from a viewing distance of approximately 70 cm.
Design and procedure A within groups design was used, with the presence of the cue, the ITI and the probe type being manipulated (see Fig. 5). Each trial commenced with a tone (lasting 300 ms) and a central fixation cross (lasting 100 ms and presented 200 ms after the tone onset). Targets were displayed for 500 ms and followed by a 2.5-s RI. On CA trials the delay was unfilled, but on CP trials the cue was presented for 100 ms in the position of one of the targets. The cue was shown 550 ms after the offset of the targets and the left and right targets were cued an equal number of times. The probe was shown for a maximum of 2 s and the next trial began after an interval of 500 ms or 8 s, creating ITIs of 800 ms and 8.3 s.
On CP trials, participants judged whether the probe matched the cued target (pressing "M" for match and "Z" for nonmatch). When a cue was present, the non-cued target would never be shown as a probe on that trial. This was intended to encourage participants to focus exclusively on the cued object (and cue validity may be important; Gunseli, van Moorselaar, Meeter, & Olivers, 2015). On CA trials, participants had to determine whether the probe matched either target. Once again, the probe could take three forms (positive, RP, and NRP), and NRP stimuli were novel. On a CA trial, the RP item could be either of the targets seen on the previous trial, whereas on CP trials the RP stimulus was always the object that was not cued.
Participants completed four practice trials (two with a cue and two without) and 256 experimental trials (64 trials for each cue/ITI combination, including 32 positive trials, 16 RP trials and 16 NRP trials). The trials were organized into four blocks (two CA and two CP) that contained both ITIs and all probe types. The trial arrangement within a block was fixed, but the order of blocks was randomly determined. A break was available after two blocks. No feedback was provided.
Results and discussion
Trials on which the participant did not respond within 2 s or pressed an invalid button (neither "M" nor "Z") were excluded (fewer than 2.5% of trials, on average).
Fig. 5 Diagram depicting two trials in Experiment 4: trial N − 1 (gray boxes) and trial N (black boxes). A postencoding retro-cue was presented on half of the trials, for 100 ms, and indicated the target that should be remembered. On the remaining half of trials, the retro-cue was removed and both targets had to be remembered. The three standard probe types were employed, but when a retro-cue was presented the RP item did not need to be remembered and could be discarded from memory

Firstly, responding on positive trials was examined (see Table 3) and accuracy was assessed using a 2 (cue: CP vs. CA) × 2 (ITI) repeated-measures ANOVA, with the cue effect being significant and offering extreme support for the alternative hypothesis. The main effect of ITI was not significant, F(1, 30) = 0.43, MSE = 0.01, p = .518, η p 2 = 0.01, BF 10 = 0.16, and the Bayesian analysis showed that these data were 6.17 times more likely under the null hypothesis.
The traditional ANOVA found an interaction between ITI and cue, F(1, 30) = 5.81, MSE = 0.01, p = .022, η p 2 = 0.16, BF Inclusion = 1.23, and ITI and probe type, F(1, 30) = 8.89, MSE = 0.01, p = .006, η p 2 = 0.23, BF Inclusion = 25.29. The former interaction was driven by improved accuracy at the longer ITI for CP, but not CA trials; however, the Bayesian analysis suggested limited justification for including this interaction within the model. Much better support was provided for the ITI and probe type interaction, which is shown in Table 4. A simple effects analysis revealed an improvement in accuracy on RP trials as the ITI was lengthened, F(1, 30) = 7.22, p = .012, but no differences between the two ITIs on NRP trials, F(1, 30) = 2.74, p = .108. Of particular relevance was the three-way interaction, as this would determine whether any time-based recovery from PI was particularly likely on cued trials. This was unsupported and non-significant, F(1, 30) = 5.81, MSE = 0.01, p = .48, η p 2 = 0.02, BF Inclusion = 0.31. Additionally, the probe type x cue interaction was nonsignificant and there was no evidence for retaining it based on the Bayes factor, F(1, 30) = 0.03, MSE = 0.004, p = .872, η p 2 = 0.16, BF Inclusion = 0.20.
Another 2 × 2 × 2 repeated-measures ANOVA assessing response times found just one significant main effect. Participants were significantly faster to respond on CP (M = 758.25 ms) than CA (M = 852.48 ms) trials, F(1, 30) = 41.68, MSE = 13,206.34, p < .001, η p 2 = 0.58, BF 10 > 1,000,000. This effect showed extreme support for the alternative hypothesis, but all other results were nonsignificant and compatible with the null hypothesis.
Experiment 4 found the recent probe effect for accuracy data, highlighting PI, and the presence of a retro-cue was beneficial, leading to faster and more accurate responding on both positive and negative trials, in line with past work (see Souza & Oberauer, 2016). Unlike Experiments 1-3, PI declined slightly over a longer ITI, but this did not seem to be reliably affected by the cue, with performance improving over the ITI on RP trials for both CP and CA conditions. Thus, the present results suggest that there is PI even when a cue offers a reliable instruction for the interfering item to be discarded, but PI did modestly diminish as time passed (although the ITI × Probe Type interaction was partly influenced by the NRP condition, where performance unexpectedly declined at the longer ITI in the CA condition). The last experiment attempted to replicate this interaction and further investigate the role of the retro-cue in reducing PI.
Experiment 5
Experiment 4 tested whether PI could be alleviated when a retro-cue instructed participants to forget the RP item. This idea was unsupported, but the RP stimulus itself was never cued. This allowed a distinction to be made between conditions in which the RP stimulus either had to be maintained over the RI (CA) or did not (CP). Yet a limitation with this design was that the RP stimulus had to be retained alongside the target in the CA condition, and so could not be exclusively prioritized.
In this final experiment, a retro-cue was presented on all trials, and participants were instructed to only remember the cued item and determine whether it matched the probe. The key manipulation concerned the type of RP stimulus. On uncued RP trials, the RP stimulus had not been cued when presented on the previous trial and therefore should not have been maintained over the RI, whereas on cued RP trials this stimulus had been cued (but not presented as the current probe). In this latter arrangement, the RP stimulus may be more enduring and exert a stronger interfering effect. Following previous experiments, the ITI was varied, and this experiment served two purposes: (1) it provided a more direct test of the role of active maintenance in PI; (2) it offered an attempt to replicate the ITI × Probe Type interaction reported in Experiment 4.
Method
Participants The final sample included 25 (predominantly female) students from the University of Wolverhampton. As in Experiment 4, some participants struggled to respond within the 2-s window (or consistently pressed invalid buttons) and were excluded if 10%+ trials were affected. This applied to five participants.
Materials Stimuli and equipment were identical to those of Experiment 4.
Design and procedure The study was a within groups design and manipulated the ITI (800 ms or 8.3 s) and probe type (positive, NRP, cued RP, and uncued RP). The procedure matched Experiment 4, except the retro-cue was used on every trial. This allowed two different RP trials to be created. On cued RP trials, the RP stimulus was presented as a target on trial N − 1 and subsequently cued during the RI. However, it was not presented as a probe until trial N. Uncued RP trials were similar, except the RP stimulus was not cued on trial N − 1; this matched the arrangement for the CP-RP trials in Experiment 4. Participants were asked to determine whether the probe matched the cued target on that trial. There were 192 experimental trials, with 96 trials for each ITI (48 positive, 16 NRP, 16 cued RP and 16 uncued RP). Trials were arranged into four blocks that followed a predetermined pattern, but the block order was random.
Results and discussion
Trials on which the participant did not respond within 2 s or pressed an invalid button were excluded (fewer than 2% of trials, on average). The first analysis examined responding on positive trials, comparing the short and long ITIs using traditional and Bayesian paired-samples t tests (with a two-tailed hypothesis). For task accuracy, the proportion of correct responses was slightly lower when the ITI was 800 ms (M = 0.78) than 8.3 s (M = 0.82). This was conventionally significant, but inconclusive from the Bayesian perspective, t(24) = −2.41, p = .024, d = −0.49, BF 10 = 2.30. For response times, participants were quicker at responding on trials with the short (M = 721.93 ms) than long (M = 738.95 ms) ITI, t(24) = −1.24, p = .226, d = 0.25, BF 10 = 0.42. This effect was nonsignificant, but inconclusive.
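To mirror this pairing of a conventional and a Bayesian paired t test in Python, the pingouin package reports a default Bayes factor alongside the t statistic. The data below are simulated and the snippet is only a sketch; the Bayesian tests reported here were run in JASP.

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(1)
# Hypothetical per-participant accuracy on positive trials at the short and long ITI.
acc_short = rng.normal(0.78, 0.05, 25)
acc_long = rng.normal(0.82, 0.05, 25)

# Paired t test; the output table includes a default JZS Bayes factor (BF10).
res = pg.ttest(acc_short, acc_long, paired=True)
print(res[["T", "dof", "p-val", "cohen-d", "BF10"]])
```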
Another two-way ANOVA assessing response times yielded only one reliable effect. Participants were slower to

Fig. 7 Mean proportion of correct responses on NRP, cued RP, and uncued RP trials according to ITI in Experiment 5. Error bars show ±1 SE

In summary, PI was present for accuracy data, but both types of RP stimuli damaged performance. Thus, whether the RP item had been cued (and actively maintained) or not cued (and discarded) did not affect PI. This experiment also found no support for a reduction in PI over time.
General discussion
The expression of PI was exploited in the present study to explore the persistence of old visual memories over time. Given demonstrations of rapid time-dependent forgetting in visual WM (e.g., Ricker & Cowan, 2010), it is reasonable to expect PI to vanish over longer intervals, and such an effect is predicted by decay and temporal distinctiveness theories. Conversely, McKeown et al.'s (2014) active-passive conception expects time-insensitive PI. The present data were more compatible with this form of enduring passive memory trace.
PI was manifested as a reduction in accurate responding on RP in comparison to NRP trials across all five experiments. This was found with both the frequentist and Bayesian analyses. Response times to RP stimuli were also slowed in Experiment 1, though generally the PI effect was confined to accuracy. Our demonstration of PI is compatible with previous studies (e.g., Cyr et al., 2017; Hartshorne, 2008; Makovski & Jiang, 2008; McKeown et al., 2014; Mercer & Duffy, 2015) and highlights its role in short-term forgetting. More significantly, the PI effect was largely robust over time, as assessed through an interaction between probe type and ITI length. In Experiments 2 and 5, this interaction was nonsignificant, and there was no justification for retaining it from the Bayesian perspective. In Experiment 1, the interaction was significant, but performance only changed over the ITI on positive trials; the disadvantage on RP compared with NRP trials endured over time. Furthermore, there was no support for retaining that interaction on the basis of the Bayes factor.
Somewhat better evidence for time-sensitive PI was found in Experiments 3 and 4. Experiment 3 used a very long ITI, allowing ample time for old memories to be forgotten, and the frequentist analysis suggested a modest recovery in performance at the longest ITI. Yet the Bayesian analysis was inconclusive and accuracy on RP trials was still lower than NRP after the 32-s ITI. The only case where both frequentist and Bayesian analyses supported the ITI × Probe Type interaction was Experiment 4. Here, accuracy on RP trials improved as the ITI was lengthened, indicating a recovery from PI over a longer delay. Yet this interaction was partly influenced by an unexpected decrease in the CA-NRP condition at the longer ITI, and it could not be replicated in Experiment 5. In summary, the combined evidence was consistent with robust and largely time-invariant PI.
Such PI appears different to the more limited, immediate effects reported in some prior studies (e.g., Makovski & Jiang, 2008), although the present results are congruent with Berman et al.'s (2009) investigation of PI, which estimated that a delay of 78 s would be required to eliminate the RP-NRP difference (for response times). Of course, the current study used abstract and unfamiliar visual stimuli that are likely to be harder to maintain through intentional maintenance strategies such as rehearsal.
Yet the present experiments uncover time-based effects that appear paradoxical: old and redundant items from previous trials persist over lengthy intervals, yet intentionally maintaining a single item over a short delay is difficult (note the drop in performance in Experiment 1 as the RI was extended). It should be noted that the two time-based effects are manifested through different responses: on positive trials, where there is time-dependent forgetting, the participant must determine whether there is an identical match between a probe and one of the targets. On negative trials, the participant must only reject the probe, and generally this is successfully accomplished. Table 5 shows the mean proportion of correct responding (and standard deviations) in the present experiments according to probe type. In all cases, responding to positive trials was less accurate than responding to negative trials, and this became more noticeable at longer RIs. The memory requirements on positive and negative trials may differ-only in the former case is a precise representation required to make a correct response. Furthermore, forgetting over the RI was greatly reduced on RP and NRP trials, in comparison to positive trials, as shown in Experiment 1 and further revealed in Table 5.
The notion of rapid forgetting of actively maintained information and slower loss of residual representations in the McKeown et al. (2014) model is consistent with the work of Logie and colleagues. In their first experiment Shimi and Logie (2019) used a change detection task, with participants remembering arrays of four or six objects. The array was repeated throughout the experiment, and this was beneficial: arrays of six items repeated multiple times were learnt, particularly within the first 40 trials. Their second experiment explored memory for six-item arrays only, but memory was tested using a visual reconstruction procedure. Clear evidence for learning was again demonstrated, especially within the first 20 trials. These findings suggest that some visual information from a trial must persist, and Shimi and Logie made a distinction between a short-lived memory that is highly vulnerable to interference from subsequent input, and a weaker, residual trace generated across trials. These ideas, emerging from a different paradigm, are compatible with the active-passive conception, and highlight the need to consider residual representations that do not neatly fit the description of a traditional short-term or long-term memory. Logie, Brockmole, and Vandenbroucke (2009) suggested that although visual short-term memory may be fragile, nevertheless feature bindings established through short-term memory can influence long-term learning. For such learning to happen, some information must survive beyond a trial. Interestingly, a robust PI for visual memoranda was recently reported in rhesus monkeys by Devkar and Wright (2016) over decay intervals between about 19 s and 58 s.
Future research would benefit from exploring these issues in more depth, particularly by interrogating the nature of passively held residual memories. The trace could be viewed as a lingering WM, reflecting the remaining contents of the immediately preceding trial, or it could be an LTM. Time-dependent forgetting over the RI may reflect a decaying WM, whereas PI is driven by a more robust LTM. While appealing, there are some reasons to doubt this interpretation. Individual target stimuli were unfamiliar to participants and briefly presented as a target once. Although this does not eliminate the possibility that new LTMs were rapidly formed, it is unclear why such memories could not be used to prevent time-dependent forgetting over the RI. Alternatively, the PI effect may be better interpreted as a decision-making phenomenon that occurs during retrieval. Oberauer, Awh, and Sutterer (2017) proposed that responses to the probe involve a competition between a familiarity signal from LTM and the available content of WM. Familiarity with the probe can be used to make a decision, which is beneficial on positive trials but detrimental on RP trials. Specifically, familiarity with the RP will lead to an incorrect decision. More direct experimentation capable of distinguishing these accounts will help better comprehend the nature of PI for visual stimuli. So, we conclude with this fundamental puzzle: A target that appears to decay rather rapidly within trials is nevertheless producing PI on future trials. The active-passive conception advanced by McKeown et al. (2014) addresses this puzzle. Here, it is proposed the attention-based maintenance that refreshes or reactivates the immediate memory trace actually has the parallel negative effect of introducing noise into the representation. This might occur if the neural bases of the trace were entered into, or exchanged between, a WM buffer for prioritized attention and a residual or passive store; the assumption in the model is that such translation is never perfect. When the trace is within prioritized focal attention it is available to guide recall responses (see, for example, Ricker & Cowan, 2010); when in the residual or passive form, it is not. Thus, items held within the passive store have a more enduring time course precisely because they escape the translation involved in bringing a recent memory trace into the focus of attention, following the termination of the trial on which that item occurred (i.e., throughout the subsequent ITI). In conclusion, there is a passive form of memory trace in visual memory that remains stable over time and is difficult to control. Conversely, actively maintained representations are subject to rapid forgetting.
A Novel Prognostic Score Based on Artificial Intelligence in Hepatocellular Carcinoma: A Long-Term Follow-Up Analysis
Objective T cell immunity plays an important role in anti-tumor effects and immunosuppression often leads to the development and relapse of cancer. This study aimed to investigate the effect of T cell numbers on the long-term prognosis of patients with hepatocellular carcinoma (HCC) and construct an artificial neural network (ANN) model to evaluate its prognostic value. Methods We enrolled 3,427 patients with HCC at Beijing Ditan Hospital, Capital Medical University, and randomly divided them into two groups of 1,861 and 809 patients as the training and validation sets, respectively. Cox regression analysis was used to screen for independent risk factors of survival in patients with HCC. These factors were used to build an ANN model using Python. Concordance index, calibration curve, and decision curve analysis were used to evaluate the model performance. Results The 1-year, 3-year, 5-year, and 10-year cumulative overall survival (OS) rates were 66.9%, 45.7%, 34.9%, and 22.6%, respectively. Cox multivariate regression analysis showed that age, white blood cell count, creatinine, total bilirubin, γ-GGT, LDH, tumor size ≥ 5 cm, tumor number ≥ 2, portal vein tumor thrombus, and AFP ≥ 400 ng/ml were independent risk factors for long-term survival in HCC. Antiviral therapy, albumin, T cell, and CD8 T cell counts were independent protective factors. An ANN model was developed for long-term survival. The areas under the receiver operating characteristic (ROC) curve of 1-year, 3-year, and 5-year OS rates by ANNs were 0.838, 0.833, and 0.843, respectively, which were higher than those of the Barcelona Clinic Liver Cancer (BCLC), tumor node metastasis (TNM), Okuda, Chinese University Prognostic Index (CUPI), Cancer of the Liver Italian Program (CLIP), Japan Integrated Staging (JIS), and albumin–bilirubin (ALBI) models (P < 0.0001). According to the ANN model scores, all patients were divided into high-, middle-, and low-risk groups. Compared with low-risk patients, the hazard ratios of 5-year OS of the high-risk group were 8.11 (95% CI: 7.0-9.4) and 6.13 (95% CI: 4.28-8.79) (P<0.0001) in the training and validation sets, respectively. Conclusion High levels of circulating T cells and CD8 + T cells in peripheral blood may benefit the long-term survival of patients with HCC. The ANN model has a good individual prediction performance, which can be used to assess the prognosis of HCC and lay the foundation for the implementation of precision treatment in the future.
INTRODUCTION
Primary liver cancer was the sixth most common cancer and the third leading cause of cancer-related deaths worldwide in 2020, with an estimated 906,000 new cases and 830,000 cancer-related deaths (1). The 5-year net survival was in the 10%-19% range in most areas around the world (2). With the aging and growth of the world's population, deaths due to liver cancer are increasing. It is predicted that the number of liver cancer deaths will reach 1,679,630 by 2040, an increase of 85.4% over 2020 (3). Hepatocellular carcinoma (HCC) accounts for 75%-85% of all primary liver cancers (4).
Currently, the commonly used treatments for HCC include surgical resection, liver transplantation, local ablation therapy (radiofrequency ablation, microwave ablation, cryoablation, percutaneous anhydrous alcohol injection), transarterial chemoembolization (TACE), and targeted therapy (5). Curative therapy should be selected as much as possible for early HCC, such as liver resection, liver transplantation, or ablation; the 5-year overall survival (OS) rate of HCC patients receiving curative therapy can reach 60%-70% (6,7). However, because liver cancer is mostly diagnosed in the intermediate and advanced stages, only a few patients can choose curative therapy. In a multicenter cohort study of 8,656 patients, only 10% of newly diagnosed HCC patients were recommended for resection (8). The availability of liver transplantation is also limited by the lack of donors. Therefore, most HCC patients can only receive local treatment, such as TACE, or palliative treatment, and their 5-year OS is reduced by more than half to less than 30% (9). The high mortality of HCC patients remains a key clinical problem; therefore, identifying prognostic indicators and constructing prediction models are needed to estimate outcomes.
Early intervention based on prediction systems and risk stratification is an effective strategy for improving the survival of HCC patients. At present, the staging systems for predicting and evaluating the prognosis of HCC patients include the tumor node metastasis (TNM) stage (10), Barcelona Clinic Liver Cancer (BCLC) stage (11), Okuda grade (12), Cancer of the Liver Italian Program (CLIP) score (13), Chinese University Prognostic Index (CUPI) (14), Japan Integrated Staging (JIS) (15), and albumin-bilirubin (ALBI) grade (16). The predictors of these prognostic models mainly focus on tumor burden, liver function, performance status, and so on. However, these factors mainly focus on the differences between the characteristics of tumors and cannot explain the interaction between the tumor and host immune response. Previous studies have reported that high densities of CD3 and CD8 immune cells in immunohistochemical sections of colorectal cancer (CRC) patients improve disease-free survival (DFS) and OS rates (17). Moreover, the type, density, and location of immune cells in CRC had a superior prognostic value and were independent of the TNM stage. Budhu et al. (18) revealed that the biological behavior of liver cancer is related to the unique immune response characteristics of the liver microenvironment, indicating that immune cells and immune responses may be related to the prognosis of patients with liver cancer. However, the current results on the relationship between outcomes and immune cells are inconsistent. Gabrielson et al. (19) demonstrated that the density of tumor-infiltrating CD3 and CD8 T cells could predict the recurrence of HCC in patients who underwent a hepatectomy (CD3, odds ratios (OR) = 5.8; CD8, OR= 3.9), and was independent of other predictive clinicopathological factors, such as vascular invasion and HCC cell differentiation. However, some studies have shown that tumor-infiltrating CD3, CD4, and CD8 T cells in HCC patients were not related to OS and DFS after resection, whereas high-density cytotoxic CD8 T cells (CTL) and low-density regulatory T cells (Tregs) were independent prognostic factors for improving OS and DFS (20). Most of these studies on immune cells and the prognosis of liver cancer are on patients after hepatectomy or liver transplantation; however, the relationship between immune cells and prognosis in unresectable patients is not clear.
Artificial neural networks (ANNs), as a form of machine learning, have been used for the prognostic evaluation of various tumors and have a great application prospect (21)(22)(23). Using machine learning to construct a prognostic system and stratify the risk of long-term survival of HCC patients is an effective strategy to implement precision therapy. This study aims to analyze the relationship between T cells and the prognosis of HCC and establish a prediction model for the long-term survival of HCC patients with immune indexes using ANNs, which can accurately identify populations at a high risk of death and carry out an early intervention to reduce patient mortality.
Patients
A total of 3,427 patients with first-diagnosed primary liver cancer who were hospitalized in Beijing Ditan Hospital, Capital Medical University, between January 2008 and June 2017 were enrolled retrospectively. This study was approved by the Ethics Committee of Ditan Hospital. The inclusion criteria were as follows: (1) patients diagnosed with primary liver cancer with or without chronic liver diseases and (2) their ages were between 18-75 years. We excluded patients with (1) cholangiocarcinoma (n = 213), (2) metastatic liver cancer (n = 96), (3) other types of tumors (n = 67), (4) lost to follow-up (n = 201), and (5) incomplete clinical data (n = 180). Finally, 2,670 patients were randomly divided into a training set (n = 1,861) and a validation set (n = 809). The diagnostic criteria for HCC are in accordance with the criteria of the Asia-Pacific clinical guidelines for HCC (24).
Clinical and Laboratory Parameters
We recorded the clinical information including the gender, age, family history of HCC, history of smoking and alcohol abuse, liver cirrhosis status, medical comorbidities (diabetes, hypertension, hyperlipidemia and coronary artery disease), and aetiology of HCC (HBV, HCV, alcohol abuse and others). We also obtained blood test results from the clinical laboratory including routine blood examination, liver function, serum lipid level, serum alpha fetoprotein (AFP) levels, C-reactive protein, creatinine, prothrombin activity, and international normalized ratio levels. Peripheral blood was drawn and stained with the MULTITEST CD45-PerCP/CD3-FITC/CD4-APC/CD8-PE TruCount four-color kit (BD Biosciences) in the clinical laboratory. We extracted the T cell, CD4 T cell, and CD8 T cell counts before the treatment. Tumor factors included tumor number, maximum tumor size, vascular invasion, and tumor metastasis based on the imaging data at enrollment.
Follow-Up and Endpoint
The CT or MRI scan, ultrasonography, or serum AFP tests were performed every 3 months. The definition of progression conformed with the mRECIST criteria (25). The occurrence of vascular metastasis or extrahepatic diffusion was also considered as progression. Survival time was defined as the time from admission to death or final follow-up on December 31, 2019.
Statistical Analysis
Statistical analysis was performed using IBM SPSS Statistics for Windows version 21.0. The t test or Mann-Whitney U test was used for quantitative data comparison. Fisher's exact test or the χ² test was used for qualitative data comparison. Cox univariate and multivariate analyses (forward, maximum likelihood ratio) were used to screen the risk factors of death in patients with liver cancer. The ANN model was created using Python. Finally, the ANN model was compared with existing routine prognosis systems: TNM stage (10), BCLC stage (11), Okuda grade (12), CLIP score (13), CUPI (14), JIS (15), and ALBI grade (16). The C index and the areas under the receiver operating characteristic (ROC) curve (AUC) and time-dependent ROC curve were used to test the discrimination of the models. To test the calibration degree of the model, the Hosmer-Lemeshow test was applied and a calibration curve was drawn. Decision curve analysis (DCA) was used to compare the clinical net benefit and performance improvement of this model with those of the above models. R version 3.3.2 was used for data analysis, and the rms, survival, survminer, rmda, pROC, ggplot2, and timeROC packages were used. All tests were considered to be statistically significant at p < 0.05.
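As a rough illustration of the discrimination measures listed above (the authors' analyses used SPSS, Python, and the R packages named in the text), Harrell's C index and a fixed-horizon AUC can be computed on simulated data as follows; the handling of censoring is deliberately simplified and all values are invented.

```python
import numpy as np
from lifelines.utils import concordance_index
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
risk = rng.normal(size=n)                      # hypothetical model risk scores
time = rng.exponential(60 * np.exp(-risk))     # survival times in months; higher risk -> shorter
event = rng.random(n) < 0.7                    # True = death observed, False = censored

# Harrell's C index: concordance_index expects scores that increase with survival,
# so a risk score is passed with a negative sign.
print("C index:", concordance_index(time, -risk, event))

# AUC for 1-year overall survival, treating observed death within 12 months as positive
# (this ignores patients censored before 12 months, which a full analysis would not).
died_1y = (time <= 12) & event
print("1-year AUC:", roc_auc_score(died_1y, risk))
```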
Patient Characteristics
We enrolled 2,670 patients between 2008 and 2017 and randomly divided them into training (n = 1,861) and validation (n = 809) groups. Among them, 2,249 (84.2%) were infected with hepatitis B virus (HBV), 15 of them were coinfected with hepatitis C virus (HCV), and 160 had complications from chronic alcohol consumption ( Table 1). In addition, 242 patients (9.1%) were infected with hepatitis C, 25
Overall Survival Analysis
The
Development of ANN Model
The results of univariate and multivariate Cox proportional hazard regression analyses are shown in Table 2. We identified age at diagnosis, alcohol abuse, tumor size ≥ 5 cm, tumor number ≥ 2, portal vein tumor thrombus (PVTT), Child-Pugh stage C, white blood cells, total bilirubin, lactate dehydrogenase, γ-glutamyl transferase, alkaline phosphatase, creatinine, AFP ≥ 400 ng/ml, and C-reactive protein as independent risk factors for overall survival in HCC patients. In addition, we found antiviral therapy, albumin, T cell count, and CD8 T cell count to be the protective factors. These parameters were included in the ANN model. As shown in Figure 2, our ANN model has 14 clinical or biochemical parameters as input neurons and two corresponding clinical outcomes as output neurons. Each neuron is connected by weighted links. To improve the performance of the multilayer perceptron (MLP), after several rounds of debugging and testing, we added three hidden layers. In the training set, the C index of the traditional Cox model was 0.712, which was significantly lower than that of the ANN model (P < 0.05) (Table 3). The results indicate that the ability of the ANN model to distinguish the survival outcome of liver cancer patients was significantly higher than that of the traditional Cox regression model. Similar results were obtained for the validation set. The AUC value of the ANN was significantly higher than that of the Cox model but there was no difference in the C index between the two models. Furthermore, we compared the ANN model with other classical models for prognosis evaluation of HCC, such as the BCLC, TNM, Okuda, CUPI, CLIP, JIS, and ALBI models, and found that the AUC value and C index of the ANN model in the prediction of OS and DFS outperformed them in both the training and validation sets (Table 4, Table S1). Considering the continuity of survival time of liver cancer, we found that the time-dependent AUC values of the ANN model were all higher than those of the other models in the training and validation sets, as expected (Figures 3A, B).
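The text specifies only that the network was built in Python with 14 input neurons, three hidden layers, and two output neurons; the layer widths, scaling, and training settings below are assumptions, so this scikit-learn sketch is only an illustration of that general architecture rather than the model actually used.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1861, 14))      # 14 clinical/biochemical inputs (placeholder values)
y = rng.integers(0, 2, size=1861)    # outcome at a fixed horizon: 1 = death, 0 = survival

# Three hidden layers, as described in the text; the sizes (32, 16, 8) are assumptions.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16, 8), max_iter=1000, random_state=0),
)
model.fit(X, y)
risk_score = model.predict_proba(X)[:, 1]   # predicted probability of death
print(risk_score[:5])
```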
Considering that different etiologies, liver functions, and treatment methods may affect the prognosis of HCC patients, we further analyzed the performance of these subgroups. Across subgroups defined by age, sex, etiology, AFP level, Child-Pugh grade, and era of diagnosis and treatments, we also compared the AUC values and C indexes for 1-year, 3-year, and 5-year survival and DFS, and found that the ANN model outperformed the other models (Table S2, Table S3, Table S4).
By drawing the calibration curve, we showed that the 1-year, 3-year, and 5-year OS probabilities predicted by the ANN model agreed well with the corresponding observed probabilities (Figures 3E-J). In the training and validation sets, the ANN model had a good fit slope in predicting 1-year, 3-year, and 5-year OS. In addition, compared with the BCLC, TNM, Okuda, CUPI, CLIP, JIS, and ALBI models, our model showed a greater net clinical benefit for overall survival in the DCA (Figures 3C, D). These results show that the ANN model has better clinical practicability than the other models.
Application of ANN Model for Risk Stratification
According to the 40th and 70th percentiles of the ANN model score, all patients were divided into three levels: low risk (stratum 1), medium risk (stratum 2), and high risk (stratum 3). In the training set, compared with the low-risk group, the hazard ratio (HR) values of OS for the medium-risk and high-risk groups were 3.01 (95% CI: 2.59-3.50; P < 0.0001) and 8.11 (95% CI: 7.0-9.4; P < 0.0001), respectively (Figure 4A); the HR values of PFS were 2.15 (95% CI: 1.90-2.45; P < 0.0001) and 4.98 (95% CI: 4.38-5.66; P < 0.0001), respectively (Figure 4B). In the validation set, compared with the low-risk group, the HR values of OS for the medium-risk and high-risk groups were 3.12 (95% CI: 2.50-3.89) and 6.13 (95% CI: 4.28-8.79), respectively (P < 0.0001) (Figure 4D). Whether in the training or validation set, the ANN model could effectively distinguish all patients according to their different death risks. We further drew Kaplan-Meier (KM) survival curves of the ANN model after risk stratification in the different etiology, liver function, inclusion time and treatment methods subgroups (Figures S1, S2). There was no difference between the medium- and low-risk patients (log-rank P value = 0.06) (Figure S1G) in Child-Pugh C (CTP C) grade. In the remaining sublayers, the ANN model could distinguish the patients well. The median survival time and HR values of OS in the different risk groups for all sublayers are shown in Table S5. The same results were obtained in the survival curves of the ANN model after risk stratification for the DFS (Figure S3) and early recurrence (Figure S4 and Table S6).
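A minimal sketch of this percentile-based stratification (the scores are simulated; only the 40th and 70th percentile cut points follow the description above):

```python
import numpy as np
import pandas as pd

scores = np.random.default_rng(0).random(2670)          # hypothetical ANN risk scores
low_cut, high_cut = np.percentile(scores, [40, 70])     # 40th and 70th percentiles

risk_group = pd.cut(scores,
                    bins=[-np.inf, low_cut, high_cut, np.inf],
                    labels=["low risk", "medium risk", "high risk"])
print(pd.Series(risk_group).value_counts())
```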
Prognostic Value of T Cell and CD8T Cell Counts in HCC Patients
We used 907 cells/μL as the cutoff value of T cell counts and 300 cells/μL as the cutoff value of CD8 T cell counts according to the maximum value of the Youden index. We divided all patients into two groups based on the cutoff values and assessed the overall survival, as visualized by the Kaplan-Meier survival curves. The median survival time of patients with T cell counts > 907 cells/μL was more than five times longer than that of patients with T cell counts ≤ 907 cells/μL (90 vs. 17.6 months) in the training set. The risks of death and progression in patients with a high frequency of T cells were significantly reduced (death risk: HR = 0.4, 95% CI: 0.35-0.45; progression risk: HR = 0.51, 95% CI: 0.48-0.57; P < 0.0001) (Figures 5A, B). The same results were obtained after a grouping based on the cut-off value of CD8 T cells (Figures 5C, D). We also estimated the discrimination and prognostic values of circulating T cells and CD8 T cells in different etiologies and treatment sublayers (Figures S5, S6).
The results suggested that an increase in T cell counts and CD8 T cell counts in HCC patients could improve the survival rate and prolong the survival time, especially in patients who underwent resection (HR value < 0.35, P < 0.001).
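The cutoff selection described above can be sketched with a standard ROC-based Youden index computation; the data below are simulated and the variable names are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
t_cells = rng.normal(900, 300, 500).clip(100)   # hypothetical circulating T cell counts
# Simulate outcomes so that lower counts carry a higher probability of death.
died = rng.random(500) < 1 / (1 + np.exp((t_cells - 900) / 200))

# Death is the positive class, so score with the negated count (lower count = higher risk).
fpr, tpr, thresholds = roc_curve(died, -t_cells)
best = np.argmax(tpr - fpr)                     # maximum of the Youden index J = sens + spec - 1
print("optimal cutoff:", -thresholds[best])
```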
DISCUSSION
Recently, machine learning has been successful in cancer detection, prognostic risk stratification, and clinical decision-making for breast, prostate, lung, and other cancers (22,23,26,27). Although artificial intelligence has been applied in various imaging diagnoses and prognosis evaluations after different therapies of HCC patients, it is rarely applied to the OS of HCC patients (28). In this study, a machine learning method was used to build an ANN prediction model suitable for individual applications, which can calculate the death probability of HCC patients. This model is a simple and easy-to-use calculator, integrating tumor characteristics of HCC patients (tumor size, number, portal vein tumor thrombus, AFP), liver function (albumin, total bilirubin, γ-GGT, LDH), inflammatory index, white blood cell counts, antiviral therapy, and immune indexes (T cell and CD8 T cell counts). The C index of the prediction model in this study is greater than 0.75 in the training and validation sets and the AUC value is greater than 0.8, which indicates that the ANN model is reliable.
Machine learning is the most common approach to artificial intelligence and can mimic human cognitive functions through machines or algorithms. ANNs can build probabilistic or statistical models and maximize the accuracy of predictions. ANNs learn from clinical data through repeated training, imitating the information-processing function of human brain synapses and thereby acquiring decision-making and simple judgment abilities similar to those of humans (29). Compared with conventional Cox or logistic regression analyses, ANNs have the advantages of nonlinear mapping and high accuracy. ANNs can adjust the weights between input and output values and minimize the error between actual and expected outputs. In this study, the AUC values of the ANN model were significantly higher than those of the Cox model for predicting the short- and long-term survival of HCC patients. Moreover, the time-dependent ROC curve also revealed that the ANN model outperformed other scoring systems, including BCLC, TNM, JIS, CLIP, CUPI, Okuda, and ALBI, in predicting HCC outcomes at any survival time. In a previous study (26), we used ANNs to develop a model that accurately predicted the progression-free survival of HBV-HCC patients; its AUC value and C-index were 0.866 and 0.782, respectively, which were superior to the above scoring systems. The ANN system could help doctors and patients make better clinical decisions, screen patients in a timely manner, and slow disease progression.
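To make the model-comparison idea concrete, a hedged sketch of comparing the discrimination of two risk scores with the concordance index (C-index) is shown below; the scores, data, and lifelines utility are illustrative assumptions rather than the study's analysis code.

```python
# Illustrative sketch: compare discrimination (C-index) of two risk scores,
# e.g. an ANN-derived death probability vs. a Cox linear predictor.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
n = 600
true_risk = rng.normal(size=n)
time = rng.exponential(np.exp(-true_risk) * 24)        # riskier patients die sooner
event = rng.random(n) < 0.7                            # ~70% of deaths observed

ann_score = true_risk + rng.normal(scale=0.5, size=n)  # stand-in for the ANN output
cox_score = true_risk + rng.normal(scale=1.0, size=n)  # stand-in for a Cox predictor

# concordance_index expects higher predicted values to mean longer survival,
# so pass the negative of each risk score.
print("ANN C-index:", concordance_index(time, -ann_score, event))
print("Cox C-index:", concordance_index(time, -cox_score, event))
```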
Therapies play a decisive role in the prognosis of HCC patients. Several studies have focused on machine learning approaches for predicting the response to, and prognosis after, different treatments (21,30,31). Liu et al. applied random forest feature selection, a support vector machine (SVM), and multitask deep learning to build a survival-sensitive risk stratification model in 243 HCC patients receiving TACE (30). Saillard et al. used deep learning algorithms to construct a model for predicting survival by analyzing whole-slide digitized histological slides from 194 HCC patients after resection (21). At present, most studies use tumor histopathology and radiomics-based features to construct survival prediction models for patients after certain treatments. However, most HCC patients cannot provide tumor histopathological sections because, by the time of diagnosis, they are at an intermediate or advanced stage and are no longer surgical candidates. In this study, only 8.5% of the patients underwent resection. Imaging features also show substantial heterogeneity across equipment, parameter settings, and researchers' extraction methods. Therefore, the ANN model using clinical and laboratory characteristics is not only noninvasive but also convenient and accurate. We also verified the predictive efficacy of the proposed model in different treatment subgroups and found that the AUC values for predicting 1-, 3-, and 5-year survival were all higher than those of other scoring systems in the resection, minimally invasive, and palliative groups. Moreover, the ANN model had good discriminatory power in the different treatment subgroups.

The immune system is an important mediator of antitumor effects. Several studies have shown that a high density of tumor-infiltrating lymphocytes is correlated with good clinical outcomes in different types of tumors (17-19). Unitt reported that a decrease in tumor-infiltrating lymphocytes (TILs) is an independent risk factor for HCC recurrence after liver transplantation (32). In addition, previous studies also found that a high density of CD3 and CD8 T cell infiltration in the tumor area can significantly reduce the recurrence rate of HCC patients after resection and improve overall survival (19). However, because of the limitations in tumor tissue acquisition, the relationship between immune cells and prognosis in patients with intermediate and advanced HCC who cannot undergo surgical resection remains unclear. Through this large cohort study, we found that increased circulating T cell and CD8 T cell counts were associated with an improved survival rate and prolonged survival time. This is consistent with the results for lung, colorectal, and other cancers (33-35).
The immune system is a double-edged sword in the development and progression of tumors. A healthy immune system can eliminate tumors by recognizing tumor antigens.
Since the concept of tumor immunoediting was proposed, many studies have found that tumors may escape immune elimination by reducing their antigenicity and immunogenicity, secreting inhibitory molecules such as transforming growth factor (TGF)-β and interleukin-10, and increasing the proportion of suppressor cells such as regulatory T cells and myeloid-derived suppressor cells (36). T cell exhaustion has become a new focus in tumor immunosuppression in recent years (37). Depleted T cells cannot effectively recognize tumor antigens; moreover, exhausted T cells, with high expression of inhibitory molecules such as PD-1, TIGIT, and TIM-3, gradually lose their proliferative and cytotoxic capacity and further promote tumor progression. Our previous study also found that high expression of PD-1 and TIGIT on the surface of T cells in HCC patients was associated with disease progression (38). This may explain why reduced T cell counts were associated with poor outcomes in this study.
Our study has several limitations. First, an ANN with a large number of parameters may overfit the training data and fail to generalize to other HCC patients. However, the large sample size of this study and fine-tuning of the hyperparameter sets can reduce the effects of overfitting to a certain extent. The ANN model exhibits excellent discrimination and good accuracy in the holdout validation set and several different subgroups, outperforming the routinely used predictive systems. Second, this is a single-center study, and most of the HCC patients had HBV infection. The ANN model should be validated in HCC patients with HCV infection, alcohol-related liver disease, or nonalcoholic fatty liver disease to determine its generalizability.
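As an illustration of the overfitting safeguards mentioned above (a holdout set and hyperparameter tuning), the sketch below uses a generic small feed-forward network with early stopping; the architecture, feature count, and scikit-learn implementation are assumptions and do not reproduce the study's model.

```python
# Illustrative sketch: a small feed-forward network with a holdout split and
# early stopping, one common way to limit overfitting in an ANN risk model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 14))                 # 14 clinical/laboratory features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)  # death label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), early_stopping=True,
                      validation_fraction=0.2, max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```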
CONCLUSION
In conclusion, this study used an artificial neural network to develop a prognostic model for predicting long-term overall survival. The ANN model has the advantages of convenience, accuracy, and noninvasiveness. This study identified high frequencies of circulating T cells and CD8 T cells as protective factors. Regular surveillance based on the ANN model indicators may help doctors make clinical decisions and prolong the survival time of HCC patients.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author(s).
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Ethics Committee of Beijing Ditan Hospital, Capital Medical University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
ZY and JH designed the research. XL and LY assisted with statistical analysis. LY, YH, and XHW were responsible for the patients' inclusion and follow-up. XL and XHW wrote the manuscript. XBW, YJ, and ZY participated in the revision of
|
2022-05-31T13:23:18.791Z
|
2022-05-31T00:00:00.000
|
{
"year": 2022,
"sha1": "6f3541ecb6e2794a0d1113fc88ad5f2de31201a4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2022.817853/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "6f3541ecb6e2794a0d1113fc88ad5f2de31201a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
268449355
|
pes2o/s2orc
|
v3-fos-license
|
Social determinants of health rather than race impact health-related quality of life in 10-year-old children born extremely preterm
Background Reducing healthcare disparities among children is extremely important given the potential impact of these disparities on long-term health-related quality of life (HRQL). Race and parental socioeconomic status (SES) are associated with child HRQL, but these associations have not been studied in infants born extremely preterm (EP), a population at increased risk for physical, cognitive, and psychosocial impairments. Achieving health equity for infants born EP across their life course requires identifying the impact of racism and SES on HRQL. Objective We aimed to evaluate the association between self-reported maternal race, SES factors, and HRQL among 10-year-old children born EP. Design/methods Participants were identified from an ongoing multicenter prospective longitudinal study of Extremely Low Gestational Age Newborns (ELGAN Study), born between 2002 and 2004, and evaluated at 10 years of age using the Pediatric quality of life (QoL) Inventory completed by their parent or guardian, assessing physical, emotional, social, school, and total (composite) QoL domains. Multivariable regression models were used to evaluate the relationship between QoL scores and self-identified maternal race, adjusting for SES factors (education level, marital status, and public insurance). Results Of 1,198 study participants who were alive at 10 years of age, 863 (72.0%) were evaluated at 10 years of age. Differences in mean 10-year QoL scores across racial groups were observed and were significant on univariate analysis. However, these associations attenuated when adjusted for the marital status, public insurance status, and education status of mothers. A comparison of children with English as the primary language spoken at home vs. any other language revealed a significant difference only in school QoL, in which non-English language was associated with more favorable school QoL scores. Conclusions Among 10-year-old children born EP, differences in parent-reported QoL were associated with maternal SES factors but not with race. Our results suggest that interventions designed to improve the SES of mothers may enhance the QoL of children born EP. Furthermore, these results underscore that race is a social construct, rather than a biological variable, as we work toward greater equity in care provision.
Introduction
Health disparities, defined by the Centers for Disease Control, are preventable differences in the burden of disease, injury, violence, or opportunities to achieve optimal health that are experienced by socially disadvantaged populations (1). The American Academy of Pediatrics confirms that as compared to White youth, Black and Latinx youth experience disparities in health that are "extensive, pervasive, and persistent, and occur across the spectrum of health" (2). At least 43% of children younger than 18 years old report a race and ethnicity other than (non-Hispanic) White, and these groups are expected to constitute more than half of all children in the United States by 2040 (3). Disparities within children's healthcare are extremely important to address because of their potential impact on long-term health-related quality of life (HRQL) across the lifespan. Advances in neonatal-perinatal medicine over the last few decades have improved the survival rates of infants born extremely preterm (EP; less than 28 weeks gestation) (4,5). Yet, these preterm survivors are at high risk for long-term physical, cognitive, and psychosocial impairments that can persist across the life course (5,6). Understanding the impact of race on HRQL in this patient population may allow providers to address inequity and support patients born EP across their lifespan. Addressing healthcare disparities necessitates investigating and addressing structural racism and the social determinants of health (SDoH).
It is known that preterm infants are at increased risk of a variety of adverse developmental and health outcomes (7-11). Both during neonatal hospitalization and during childhood, racial disparities have been identified among children born preterm, with generally less favorable outcomes among Black preterm children. Similarly, it is well described that children who are Black or Latinx experience health disparities when compared with White youth across a wide spectrum of conditions, although this finding is generally informed by studies focused on morbidity (e.g., obesity, asthma), mortality, and specific indicators (e.g., access to care) (2). While adverse health outcomes can impact global health, an alternative approach to disease- or outcome-specific studies is to evaluate overall health through the study of quality of life (QoL) for individuals born EP. HRQL incorporates patient and caregiver perspectives regarding health status across physical, social, emotional, and school domains, and the effects of physical and social factors on QoL. HRQL aligns directly with the World Health Organization's definition of health as "the complete state of physical, mental, and social wellbeing, not merely the absence of disease" (12).
Recent work from the Healthy Passages study of elementary-age school children shows that parental socioeconomic status (SES; defined using the parent-reported highest level of education completed by any member of the household and total household income), as well as factors such as family cohesion, parental nurturance, and other adult and peer support, are positively associated with child QoL across all racial/ethnic categories, and when adjusted for SES, many (but not all) HRQL differences across racial/ethnic categories become insignificant (3,13,14). The impact of race, investigated as a proxy for racism and SDoH, on HRQL in infants born EP has not been investigated. Prior studies that have evaluated SDoH in relation to cognition suggest that SDoH might mediate some, or all, of the associations between race and child outcomes (15,16). However, none of these studies have included extremely preterm individuals, who are more likely to encounter adverse SDoH and are more likely to exhibit adverse child health outcomes. We hypothesize that differences observed in HRQL between racial groups in the population of children born EP at age 10 years will not persist when adjusted for social factors associated with lower SES such as maternal education, marital status, Supplemental Nutrition Assistance Program (SNAP) eligibility, and public insurance status. In addition, since previous work by caregivers and patients indicates that compared with children in English-primary-language households, children in non-English-primary-language households experience worse health (17), we evaluated the hypothesis that this variable was associated with lower HRQL.
Cohort selection
This report follows the guidelines of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) (18). Between 2002 and 2004, women giving birth at 28 weeks of gestation or earlier at one of 14 academic medical centers throughout the United States were enrolled in the Extremely Low Gestational Age Newborn (ELGAN) study. Maternal consent was obtained upon hospital admission or near delivery. Each participating institution received institutional review board (IRB) approval for protocols used during the study. Approximately 85% of mothers approached for participation in the original ELGAN study consented to participate, resulting in a cohort of 1,249 mothers and 1,506 infants. Of these 1,506 infants, 1,198 (80%) of those enrolled survived to 10 years of age, and 863 (72%) of this group completed a quality of life survey at 10 years of age (19). Nineteen percent of this group did not participate, and this nonparticipation was the result of the study design; specifically, 232 of the 1,198 surviving cohort members did not meet the following inclusion criteria for the 10-year follow-up: (1) data on inflammation-related proteins in the neonatal period and (2) data on neurodevelopmental follow-up at 24 months of age (age-adjusted for degree of prematurity). The remaining cohort members were eligible for the 10-year follow-up but could not be located by study coordinators (n = 77), despite attempts to reach the family using phone, email, and social media, or did not complete the quality of life survey at 10 years (n = 26) (11).
Data were collected through maternal interviews and a review of mothers' medical records for multiple demographic variables. These included maternal race, which was self-reported by mothers at the time of enrollment from the following options: Asian, Black, Native American, White, Mixed, and Other.
Due to the multifaceted nature of SES (20), which no single variable captures adequately, especially for racial and ethnic minorities (21,22), we collected data on multiple contributing variables known to impact maternal financial status and opportunity. These included maternal age category (<21, 21-35, and >35 years), maternal education (ranging from less than high school education to higher than college education), maternal marital status (including married, not ever married and living together, not ever married and not living together, and separated/divorced/widowed), maternal insurance type (public or private), SNAP eligibility (yes/no), and primary language spoken at home (English or non-English).
Ten-year follow-up
At the 10-year follow-up visit, HRQL measures were evaluated by a parent or caregiver, who completed the Pediatric Quality of Life Inventory (PedsQL) 4.0 generic core scales (23). The PedsQL was designed to measure the core components of quality of life for the child across multiple domains (physical functioning, eight items; emotional functioning, five items; social functioning, five items; and school functioning, five items), and each item was scored on a 5-point Likert scale. These domain scores were summed and transformed to a linear 0-100 scale, with higher scores corresponding to a higher quality of life. The PedsQL total summary score incorporates all areas of functioning (23 items total) and provides a composite quality-of-life score for the child. The PedsQL has been demonstrated to be reliable and valid in a large population of enrollees in the Children's Health Insurance Program in California (24).
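A minimal sketch of PedsQL-style scoring is given below for orientation; the reverse-scoring rule (0 maps to 100, 4 maps to 0) and per-domain averaging follow the published scale only in outline, and the item grouping, missing-data handling, and synthetic responses are simplifications and assumptions.

```python
# Illustrative sketch of PedsQL-style scoring: 5-point Likert responses (0-4,
# higher = more problems) are reverse-scored to a 0-100 scale and averaged per
# domain. Exact scoring rules and missing-item handling are simplified here.
import numpy as np
import pandas as pd

responses = pd.DataFrame(
    np.random.default_rng(4).integers(0, 5, size=(5, 23)))   # 5 children, 23 items

domains = {"physical": range(0, 8), "emotional": range(8, 13),
           "social": range(13, 18), "school": range(18, 23)}

transformed = (4 - responses) * 25          # 0 -> 100, 4 -> 0
scores = pd.DataFrame({name: transformed.iloc[:, list(cols)].mean(axis=1)
                       for name, cols in domains.items()})
scores["total"] = transformed.mean(axis=1)  # composite across all 23 items
print(scores.round(1))
```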
Statistical analyses
An initial descriptive statistical analysis was completed to compare the mean values of age-10 HRQL measures among the self-identified maternal race groups. Upon noticing considerable differences among these groups, a univariate linear regression analysis was conducted to evaluate the associations of self-identified maternal race and of SDoH measures (maternal age, education, marital status, insurance type, and SNAP eligibility) with QoL scores. In this analysis, the self-reported maternal race groups analyzed were White, Black, and a combined group comprising the self-reported races of Asian, Native American, Mixed, and Other. Given the negative associations observed (some of which were statistically significant), a multivariable regression analysis was subsequently conducted that included all five factors (maternal age, education, marital status, insurance type, and SNAP eligibility) to better understand the associations between maternal race, SDoH factors, and child QoL at age 10 years. Statistical significance was defined as p < 0.05.
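The analysis strategy described above can be outlined in code as follows; this is a generic sketch with synthetic data and assumed variable names (e.g., total_qol, public_insurance), not the study's analysis script.

```python
# Illustrative sketch: univariate and SES-adjusted linear regression of a QoL
# score on maternal race (White as reference), mirroring the analysis described.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 800
df = pd.DataFrame({
    "race": rng.choice(["White", "Black", "Other"], n),
    "public_insurance": rng.integers(0, 2, n),
    "college_educated": rng.integers(0, 2, n),
    "married": rng.integers(0, 2, n),
})
df["total_qol"] = (80 - 8 * df["public_insurance"] + 5 * df["college_educated"]
                   + rng.normal(0, 12, n))

unadjusted = smf.ols("total_qol ~ C(race, Treatment('White'))", data=df).fit()
adjusted = smf.ols("total_qol ~ C(race, Treatment('White')) + public_insurance"
                   " + college_educated + married", data=df).fit()
print(unadjusted.params, adjusted.params, sep="\n\n")
```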
Results
Of 1,198 study participants who survived until discharge, 863 (72.0%) were evaluated at 10 years. Maternal prenatal data and neonatal characteristics are summarized in Table 1. With regard to maternal race and ethnicity, 63% of our subjects self-reported as White, 26% as Black, and approximately 10% as Hispanic. Differences in mean 10-year QoL scores across racial groups were observed (Table 2) and were significant on univariate analysis, with the self-reported maternal race of Black and the combined group comprising the self-reported races of Asian, Native American, Mixed, and Other being associated with lower scores (Table 3). However, the significance of these associations was attenuated when adjusting for SDoH factors, including marital status, age, education status, public insurance status, and SNAP eligibility (Table 4). For the maternal education and marital status variables, both of which consisted of five categories, a secondary set of binary variables (college education and married) was also defined to explore the associations of these factors without the small-sample-size effects of some categories. In the same way, the very small number of mothers whose marital status was "widowed" was analyzed together with mothers who were "separated or divorced," under the "separated or divorced or widowed" category. On multivariable analysis, only the SDoH factors remained significant. These included higher maternal education (more than college education), which was associated with higher HRQL scores in the school and total domains. A maternal status of separated, divorced, or widowed was associated with a lower school HRQL score, as compared to married or living with a partner. Public insurance (Medicaid) was associated with significantly lower HRQL scores across all domains. SNAP eligibility, another income-based national program that provides food stamp assistance to families with economic need, was associated with lower physical and total QoL scores. In addition, as compared to children with English as their primary language, those with another language as their primary language had more favorable school QoL scores (Table 5).
Discussion
We found that once adjusted for SES factors, the association between race and HRQL in children born EP at 10 years was greatly attenuated and no longer statistically significant. Given that both race and ethnicity are social constructs rather than truly biological variables, our findings highlight the fact that racial disparities in quality-of-life outcomes are likely explained by the social, economic, and healthcare resources available to racial and ethnic minorities in the United States. As hypothesized, lower SES factors (reliance on public insurance, SNAP eligibility, and maternal separated, divorced, or widowed status) were correlated with lower child QoL subscores (physical, emotional, social, school) as well as with the composite (total QoL) domain. Predictably, advanced maternal education (more than college) was associated with a higher school QoL score and a higher total QoL score. While other associations remained non-significant, we observed a dose-dependent relationship between increasing maternal education and higher school QoL scores, consistent with previous literature, which supports and strengthens the positive impact of maternal education on child health (25,26).
SNAP eligibility, an income-based program providing assistance primarily with food and also other necessary products for daily living, was associated with a lower physical QoL score and a lower total QoL score. This finding is consistent with prior work identifying associations between food insecurity and multiple adverse physical health outcomes in children, including increased rates of asthma, lower utilization of preventative medical care, and higher emergency department usage (27). Reliance on public insurance, an indicator of lower household income, was associated with lower QoL scores across all domains. This finding is consistent with findings from the Healthy Passages Study, in which lower child HRQL was associated with parental SES and other SDoH factors, and when adjusted for these social factors, race was not associated with child HRQL (3,13,14). Thus, the current study agrees with the findings of others that racial/ethnic minority status is frequently associated with lower SES, which, in turn, is associated with suboptimal health outcomes in children (20,28-31).
Previous work indicates that compared with children in English-primary-language households, children in non-English-primary-language households experience worse health and are more likely to be poor, to be uninsured or sporadically insured, and to have no usual source of medical care (32-34). In our cohort, children in households where a language other than English was spoken had HRQL scores that were no different from those of children in English-primary-language households. Importantly, we were surprised to find that non-English-speaking children had higher school HRQL scores. As we have tried to demonstrate that SDoH and low SES are associated with lower HRQL, it might have been expected that HRQL would be lower when children experience these social and economic adversities. One explanation for this finding may be a nuanced difference between the primary language spoken at home and limited English proficiency (LEP).
LEP (the parents' self-reported ability to speak English very well, well, not very well, or not at all) has been shown to exhibit a dose-dependent relationship with children's insurance coverage, parental educational attainment, and family income.
Lower LEP is associated with less insurance coverage, lower parental education, and lower income, while the primary language spoken at home is not as strongly associated (17). Our findings may be capturing the primary language spoken at home as a variable of cultural practice and parental choice rather than parental LEP. It is also plausible that for this subset of families, although resource-challenged, as first-generation immigrants they are highly motivated to support their children's wellbeing and educational achievements for the betterment of their future. Further investigation is needed to disentangle these relationships, but our data indicate that language at home is probably not driving the differences in HRQL observed in children born EP at age 10 years.

This study has implications for clinical care and future healthcare policy and research. Disparities in child HRQL among racial groups in a patient population born EP are associated with SDoH variables, and focusing on disparities in these variables may provide insights into mitigating racial disparities. Addressing socioeconomic disparities alone is not sufficient, as it does not correct the systems of oppression that underlie these disparities. Even among women of high SES (college education, private insurance, not receiving Women, Infants, and Children (WIC) benefits), disparities in preterm birth rates persist, with non-Hispanic Black women having higher rates of premature birth than non-Hispanic White women (35). Racial, ethnic, and socioeconomic disparities remain widespread in the United States, and addressing these disparities is necessary for providing safe and equitable care. Yet, improving access to healthcare can improve child HRQL, as demonstrated by the success of the California State Children's Health Insurance Program and the impact of the Earned Income Tax Credit on infant health (36-38). Disparities in child health are implicated in large social and financial costs across the life span (39), and optimizing child health is critical for maximizing health throughout life (40). Among this population of infants born EP, early and timely identification of at-risk mother-infant dyads who may benefit from comprehensive biopsychosocial interventions has the potential to improve patient HRQL well beyond the neonatal period (41). These focused interventions, with post-birth-hospitalization discharge home visits by nurses and social workers, have been found to reduce rates of infant mortality and rehospitalization following initial discharge among preterm infants (42). Education and support for infant stimulation initiated in the hospital and continued at home have also been shown to achieve the social interaction patterns essential for optimal development in a cohort of infants born prematurely to mothers with social-environmental risk factors (43). Furthermore, the benefits of early intervention services among children born prematurely are well documented, with demonstrated benefits observed throughout childhood and into adulthood (44-46). Addressing the financial insecurity of mothers caring for a vulnerable child is a necessity, with further research and implementation needed. A randomized controlled trial of 46 Medicaid-eligible mothers of preterm infants found that financial support (up to three weekly financial transfers of $200 each while their infant was hospitalized) increased skin-to-skin care and breast milk provision, indicating the role of financial support in facilitating mothers' engagement with caregiving behaviors (47).

TABLE 5
A comparison of HRQL measures between children with English as their primary language and those with another language as their primary language (columns: Characteristic, English, Non-English, p-value).
The strengths of our study include a relatively high follow-up rate in a large, prospective, multicenter cohort that was diverse with respect to sociodemographic attributes and geographic location, with patients enrolled from 14 institutions across the central and eastern United States. There are also several limitations. The observational design inherent to the study prevented us from directly investigating the causes of racial disparities. We lacked a sufficient sample size to address Latinx ethnicity within our study and recognize the extensive body of literature documenting healthcare disparities afflicting this patient population (2). Only EP individuals were included in the ELGAN Study, and therefore, the findings reported here might not apply to children born preterm beyond 28 weeks' gestation and closer to term. Lastly, 28% of the potential study sample was not evaluated at 10 years of age, potentially resulting in selection bias.
Conclusions
Among 10-year-old children born EP, differences in parent-reported QoL were associated with maternal SES factors but not with race. Our results suggest that interventions designed to improve mothers' SES may enhance the QoL of children born EP. Furthermore, these results underscore that race is a social construct, rather than a biological variable, as we work toward greater equity in care provision.
This study was supported by the National Institute of Neurological Disorders and Stroke (Grants 5U01NS040069-05 and 2R01NS040069-09) and the Office of the Director of the National Institutes of Health (UH3OD023348). Additionally, partial funding for open access publication of this paper was provided by Tufts University Hirsh Health Sciences Library's Open Access Fund.
TABLE 1
Baseline maternal and neonatal characteristics of the overall study cohort and by sex at age 10 years.
TABLE 2
Parent-reported HRQL for 10-year-old children born extremely preterm, by self-reported maternal race.
Data are means (standard deviation in parentheses).
TABLE 3
Relationships between self-reported maternal race, socioeconomic factors, and five child QoL scores at 10 years of age.
Data are mean (standard deviation). The p-value is based on the Wilcoxon rank sum test. The bold values indicate statistically significant p-values (<0.05).
TABLE 4
Relationships between self-reported maternal race, socioeconomic factors, and five child QoL scores at 10 years of age. SNAP, Supplemental Nutrition Assistance Program; CI, confidence interval. Data are mean differences between QoL scores (95% confidence intervals in parentheses) estimated using a multivariate linear regression model. The bold font indicates that associations are statistically significant with a p-value < 0.05.
|
2024-03-17T15:36:51.074Z
|
2024-03-14T00:00:00.000
|
{
"year": 2024,
"sha1": "6d50a4abc58a0d1e8db3a265c7edc051454e6351",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2024.1359270/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "25b50d14b943d578f07804e673267464e27b6307",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
17842082
|
pes2o/s2orc
|
v3-fos-license
|
Coexpression of the Type 2 Diabetes Susceptibility Gene Variants KCNJ11 E23K and ABCC8 S1369A Alter the ATP and Sulfonylurea Sensitivities of the ATP-Sensitive K+ Channel
OBJECTIVE In the pancreatic β-cell, ATP-sensitive K+ (KATP) channels couple metabolism with excitability and consist of Kir6.2 and SUR1 subunits encoded by KCNJ11 and ABCC8, respectively. Sulfonylureas, which inhibit the KATP channel, are used to treat type 2 diabetes. Rare activating mutations cause neonatal diabetes, whereas the common variants, E23K in KCNJ11 and S1369A in ABCC8, are in strong linkage disequilibrium, constituting a haplotype that predisposes to type 2 diabetes. To date it has not been possible to establish which of these represents the etiological variant, and functional studies are inconsistent. Furthermore, there have been no studies of the S1369A variant or the combined effect of the two on KATP channel function. RESEARCH DESIGN AND METHODS The patch-clamp technique was used to study the nucleotide sensitivity and sulfonylurea inhibition of recombinant human KATP channels containing either the K23/A1369 or E23/S1369 variants. RESULTS ATP sensitivity of the KATP channel was decreased in the K23/A1369 variant (half-maximal inhibitory concentration [IC50] = 8.0 vs. 2.5 μmol/l for the E23/S1369 variant), although there was no difference in ADP sensitivity. The K23/A1369 variant also displayed increased inhibition by gliclazide, an A-site sulfonylurea drug (IC50 = 52.7 vs. 188.7 nmol/l for the E23/S1369 variant), but not by glibenclamide (AB site) or repaglinide (B site). CONCLUSIONS Our findings indicate that the common K23/A1369 variant KATP channel displays decreased ATP inhibition that may contribute to the observed increased risk for type 2 diabetes. Moreover, the increased sensitivity of the K23/A1369 variant to the A-site sulfonylurea drug gliclazide may provide a pharmacogenomic therapeutic approach for patients with type 2 diabetes who are homozygous for both risk alleles.
Recent large-scale human genetic studies have made dramatic progress in identifying type 2 diabetes susceptibility genes, increasing the list from three genes (PPARG, KCNJ11, and TCF7L2) to nearly 20 genes in the last 2 years (1). Despite this rapid progress, what the precise causal variant is and how the variant increases susceptibility to type 2 diabetes is still unknown in the majority of cases. Even the widely accepted type 2 diabetes susceptibility gene KCNJ11 has not yet had its mutational mechanisms fully elucidated.
In pancreatic β-cells and the central nervous system, ATP-sensitive K+ (KATP) channels are composed of the Kir6.2 and SUR1 subunits encoded by the KCNJ11 and ABCC8 genes, respectively. KATP channels act as key transducers of metabolic signals to excitability in many cell types, including in the regulation of insulin secretion (2), and the KATP channel is the target for commonly used antidiabetic sulfonylurea drugs (3). The importance of the KATP channel in diabetes is highlighted by the fact that rare heterozygous activating mutations in KCNJ11 or ABCC8 cause diabetes with varying clinical severities (4-6).
One of the first reproducibly associated type 2 diabetes susceptibility signals identified was the common E23K (rs5219) variant of KCNJ11 (7,8). Functional studies were subsequently performed, but the results were inconsistent (9-11). Moreover, fine mapping in the region demonstrated the difficulty in identifying the causal variant when a second nonsynonymous variant (S1369A; rs757110) in the neighboring ABCC8 gene was shown to be in complete linkage disequilibrium with the E23K KCNJ11 variant (12). The implications of this were that 1) it was not possible from the genetic evidence to say which variant is actually the etiological variant and 2) individuals who carried the K risk allele of the E23K variant also carried the A risk allele of the S1369A variant. Consequently, functional studies to investigate the mutational mechanism need to include both variants.

RESEARCH DESIGN AND METHODS

Transfected cells were identified using fluorescent optics in combination with coexpression of a green fluorescent protein plasmid (Life Technologies, Gaithersburg, MD). Macroscopic KATP channel recordings were then performed 48-72 h after transfection. The inside-out patch-clamp technique was used to measure macroscopic KATP channel currents in transfected tsA201 cells as described in detail previously (13). Experimental compounds. MgATP and MgADP (Sigma, Oakville, Ontario) were prepared as 10 mmol/l stocks in ddH2O immediately prior to use. Glibenclamide, gliclazide, and repaglinide (Sigma, Oakville, Ontario) were prepared as 10 mmol/l stocks in DMSO and stored at -20°C. The DMSO concentration was maintained at 0.1% in all experimental solutions. Statistical analysis. Macroscopic KATP channel currents were normalized and expressed as changes in current relative to control (i.e., normalized KATP channel current = Itest/Icontrol). Single-channel analysis was performed using pClamp v. 10.0 software (Axon Instruments). Statistical significance was assessed using the unpaired Student's t test or one-way ANOVA with a Bonferroni post hoc test. P < 0.05 was considered statistically significant. Data are expressed as means ± SE.
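For orientation, the normalization and group comparison described above could be carried out along the following lines; the patch counts, current values, and SciPy calls are illustrative assumptions rather than the study's analysis code.

```python
# Illustrative sketch: normalize macroscopic currents to control (I_test / I_control)
# and compare groups with one-way ANOVA plus Bonferroni-corrected pairwise t tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
i_control = rng.normal(1.0, 0.1, (3, 8))         # 3 channel variants x 8 patches, control
i_test = i_control * np.array([[0.3], [0.5], [0.45]]) + rng.normal(0, 0.05, (3, 8))
normalized = i_test / i_control                   # normalized current per patch

f_stat, p_anova = stats.f_oneway(*normalized)
pairs = [(0, 1), (0, 2), (1, 2)]
p_pairwise = [stats.ttest_ind(normalized[a], normalized[b]).pvalue for a, b in pairs]
p_bonferroni = np.minimum(np.array(p_pairwise) * len(pairs), 1.0)
print(p_anova, p_bonferroni)
```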
RESULTS
Residue S1369 is proximal to the second nucleotide-binding domain in SUR1, which forms part of the MgATP- and MgADP-sensing region in SUR1 that is a key regulator of KATP channel activity and, hence, insulin secretion (3,14). However, the direct effects of the K23/A1369 variant on human KATP channel nucleotide sensitivities have not been investigated.
Therefore, to gain insights into the nucleotide regulation of K23/A1369 variant KATP channel activity, the MgATP and MgADP sensitivities of recombinant human KATP channels containing either the K23/A1369 or the E23/S1369 variants were compared. Our results indicate that the K23/A1369 variant decreases the MgATP sensitivity of the KATP channel (half-maximal inhibitory concentration [IC50] = 8.0 ± 0.8 vs. 2.5 ± 0.2 µmol/l for the E23/S1369 variant, P < 0.05; Fig. 1A and B). Extrapolation of the MgATP concentration-inhibition curve to physiological millimolar intracellular MgATP levels (1-5 mmol/l) predicted that the shift in IC50 may result in the K23/A1369 variant remaining slightly more active compared with the E23/S1369 variant (Fig. 1B, inset). Subsequent single-channel experiments confirmed this prediction, with the open probability of the K23/A1369 variant being significantly greater than that of the E23/S1369 variant at 1 mmol/l MgATP but not at 0 mmol/l MgATP (Fig. 1C-F). To determine whether one or both of the K23 or A1369 variants account for the reduced MgATP sensitivity, MgATP concentration-inhibition curves were constructed from quasi-heterologous KATP channels expressing either E23/A1369 or K23/S1369. These results indicate that it is the ABCC8 A1369 variant, not the KCNJ11 K23 variant, that confers the reduced MgATP sensitivity to the KATP channel complex (IC50 = 8.2 ± 1.6 vs. 3.2 ± 0.3 µmol/l for E23/A1369 vs. K23/S1369, respectively; Fig. 1G).
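To illustrate how such concentration-inhibition relationships are typically quantified, the sketch below fits a Hill-type curve to synthetic normalized currents to recover an IC50; the data points and the assumption of a Hill coefficient near 1 are illustrative only and do not reproduce the study's fitting procedure.

```python
# Illustrative sketch: fit a Hill-type concentration-inhibition curve to
# normalized KATP currents to estimate IC50 (synthetic data; not study code).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, h):
    """Fraction of current remaining at a given inhibitor concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** h)

conc = np.array([0.3, 1, 3, 10, 30, 100, 300])          # µmol/l MgATP (synthetic)
rng = np.random.default_rng(6)
i_rel = hill(conc, ic50=8.0, h=1.0) + rng.normal(0, 0.02, conc.size)

(ic50_fit, h_fit), _ = curve_fit(hill, conc, i_rel, p0=[5.0, 1.0])
print(f"IC50 ≈ {ic50_fit:.1f} µmol/l, Hill coefficient ≈ {h_fit:.2f}")
```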
The intracellular ATP-to-ADP ratio is a major determinant of KATP channel activity because MgADP antagonizes the inhibitory effects of ATP, and rare monogenic mutations in ABCC8 that reduce MgADP antagonism decrease channel activity and cause hyperinsulinism (14). Accordingly, the stimulatory effects of varying concentrations of MgADP were tested in the presence of 0.1 mmol/l MgATP. However, no significant differences were observed between the E23/S1369 and K23/A1369 KATP channel variants (Fig. 2A and B).
The KATP channel is the molecular target for sulfonylurea and glinide drugs that are commonly used to stimulate insulin secretion in type 2 diabetes. Interestingly, recent clinical data suggest that diabetic patients who are homozygous for the A1369 risk allele (A/A) are more responsive to gliclazide therapy (15). However, it is unknown whether this is due to a direct effect on the KATP channel, because the inhibitory profile of gliclazide and other drugs on the K23/A1369 variant KATP channel has not been determined.
Sulfonylurea and glinide drugs can be grouped according to their binding to the A, B, or AB sites in the KATP channel complex (3,16,17). The A site is located close to SUR1 transmembrane segments 14-16, and the S1237Y mutation in this region (Fig. 3A) abolishes A-site drug inhibition (18). Two regions of the KATP channel contribute to the B site: the intracellular loop between SUR1 transmembrane segments 5 and 6 and the NH2-terminus of Kir6.2 (16) (Fig. 3A). Figure 3B shows the structures of the glinide repaglinide (B site) and the sulfonylureas glibenclamide (AB site) and gliclazide (A site). The SUR1 residue S1369 is in close proximity to the A site (Fig. 3A). Therefore, the A1369 variant may contribute to altered KATP channel sensitivity to A-site drugs such as gliclazide. Gliclazide (300 nmol/l) inhibited the K23/A1369 variant to a greater extent than the E23/S1369 variant (Fig. 3C and D). Construction of gliclazide concentration-inhibition curves revealed that the K23/A1369 variant was 3.5-fold more sensitive to gliclazide inhibition than the E23/S1369 variant (IC50 = 52.7 ± 11.1 vs. 188.7 ± 32.6 nmol/l, respectively; Fig. 3E). Because the K23/A1369 KATP channel variant may also alter the potency of other drug classes, the effects of glibenclamide (AB site) and repaglinide (B site) were tested. In direct contrast to the observed effects of gliclazide, no significant differences in either glibenclamide (3 nmol/l) or repaglinide (10 nmol/l) inhibition were found between the K23/A1369 and E23/S1369 variant KATP channels (Fig. 3F). It is possible that gliclazide inhibition may be affected by intracellular MgADP. In the presence of 0.1 mmol/l MgATP and 0.1 mmol/l MgADP, 300 nmol/l gliclazide still elicited a significantly greater inhibition of the K23/A1369 KATP channel variant than of the E23/S1369 variant (Fig. 4A-C).
The data presented indicate that the K23/A1369 variant KATP channel is more sensitive to inhibition by gliclazide but not by glibenclamide or repaglinide. However, the relative individual contributions of the ABCC8 A1369 or KCNJ11 K23 variants to gliclazide sensitivity had not been determined. Therefore, gliclazide inhibition was measured in quasi-heterologous KATP channels containing either the E23/A1369 or K23/S1369 variant combinations. E23/A1369 KATP channels displayed significantly greater gliclazide inhibition than K23/S1369 KATP channels, similar in magnitude to that observed in the K23/A1369 variant KATP channel (Fig. 4D-F). Results from these experiments indicate that the enhanced gliclazide sensitivity of the K23/A1369 KATP channel variant is conferred by the ABCC8 A1369 risk allele.
DISCUSSION
Previous studies have investigated the properties of KATP channels containing the KCNJ11 K23 variant (9-11), although >95% of people with two copies of K23 are also homozygous for A1369 (12). Therefore, this study is the first to document the properties and pharmacology of the most commonly found KATP channel variant that contains both the K23 and A1369 risk alleles. Our study reveals novel differences in both the MgATP and sulfonylurea sensitivity of this variant KATP channel. With respect to MgATP sensitivity, the moderate rightward shift in the IC50 for MgATP inhibition seen in the K23/A1369 variant results in increased basal KATP channel activity at physiological MgATP levels. In direct contrast to the rare monogenic KATP channel mutations that cause neonatal diabetes and drastically decreased MgATP inhibition, a modest increase in K23/A1369 variant KATP channel activity may predispose to type 2 diabetes in combination with other factors. Indeed, we have previously shown that the K23 variant increases the sensitivity of the KATP channel to activation by intracellular acyl CoAs (11,13). KATP channels encoded by the KCNJ11 and ABCC8 genes are also expressed in pancreatic α-cells and hypothalamic neurons that centrally regulate glucose/energy homeostasis (19). Therefore, it is plausible that subtle increases in the activity of K23/A1369 variant KATP channels may alter glucagon secretion and centrally mediated glucose homeostasis, further contributing to the development of type 2 diabetes.
The molecular mechanism for the reduced ATP inhibition observed in KATP channels expressing the K23/A1369 variant proteins is of importance. Free ATP inhibits KATP channel activity via binding to the Kir6.2 subunit, whereas, paradoxically, MgATP can activate the channel via the intrinsic MgATPase activity of the nucleotide-binding folds in SUR1, resulting in production of MgADP that may stimulate channel activity (2). In direct contrast to a previous study on the KCNJ11 K23 variant (20), our results indicate that the stimulatory effects of MgADP are unaltered in the K23/A1369 variant KATP channel, suggesting that the molecular mechanism for decreased ATP inhibition does not involve altered MgADP sensitivity per se. Our results also show that the observed decrease in ATP inhibition in the K23/A1369 variant KATP channel results from a direct effect of the ABCC8 A1369 risk allele reducing ATP inhibition (9), perhaps via mild increases in the intrinsic KATP channel MgATPase activity. Indeed, several rare heterozygous mutations in ABCC8 that cause neonatal diabetes (R1380L and R1380C) act by increasing MgATPase activity (21). Interestingly, the ABCC8 S1369 residue is located in close proximity to the MgATPase catalytic site and residue R1380 in the SUR1 nucleotide-binding fold 2 (22).

(Displaced legend text from Figure 3: representative macroscopic current recordings showing the effect of the A-site sulfonylurea gliclazide (300 nmol/l) on the E23/S1369 and K23/A1369 variant KATP channels; E: concentration-response curves illustrating that the K23/A1369 variant KATP channel is significantly more sensitive to gliclazide inhibition (IC50 = 52.7 ± 11.1 vs. 188.7 ± 32.6 nmol/l for K23/A1369 vs. E23/S1369, respectively; n = 3-12 patches per gliclazide concentration); F: grouped data demonstrating that the K23/A1369 variant is significantly more sensitive to inhibition by gliclazide but not glibenclamide (means ± SE 0.47 ± 0.07 vs. 0.42 ± 0.05 for E23/S1369 vs. K23/A1369, respectively; P > 0.05, n = 11 patches) or repaglinide (0.40 ± 0.06 vs. 0.52 ± 0.05 for E23/S1369 vs. K23/A1369, respectively; P > 0.05, n = 11 patches). *P < 0.05. Glic, gliclazide; Glib, glibenclamide; Rep, repaglinide.)
Sulfonylurea and glinide drugs that inhibit KATP channels are in extensive clinical use to stimulate insulin secretion in patients with type 2 diabetes (3). Glibenclamide is an AB-site ligand and is the most widely used sulfonylurea, whereas gliclazide is an A-site ligand that selectively inhibits KATP channels containing the SUR1 isoform, potentially mitigating the cardiotoxicity that has been associated with glibenclamide monotherapy (23,24). Our results indicate that the K23/A1369 variant KATP channel is 3.5-fold more sensitive to gliclazide. These findings are the first to directly demonstrate altered sulfonylurea sensitivities of the K23/A1369 variant KATP channel and identify the ABCC8 A1369 risk allele as conferring this effect upon the K23/A1369 variant KATP channel. These results provide a molecular mechanism for the increased clinical efficacy of gliclazide in subjects with type 2 diabetes who are homozygous for the A1369 allele variant (15).
In conclusion, this study provides the first evidence that the ABCC8 S1369A variant alters properties of the KATP channel that may contribute to the increased risk for type 2 diabetes associated with the K23/A1369 risk haplotype. The increased gliclazide sensitivity observed in the K23/A1369 variant KATP channel (afforded by the ABCC8 A1369 risk allele) encourages the study of sulfonylurea pharmacogenomics in larger cohorts and supports a rationale for tailoring pharmacotherapy in the ~20% of type 2 diabetic patients who carry two copies of these risk alleles. No potential conflicts of interest relevant to this article were reported.
Parts of this study were presented in abstract form at
|
2016-05-04T20:20:58.661Z
|
2009-07-08T00:00:00.000
|
{
"year": 2009,
"sha1": "4cf37cfbfcfd91d49e172941911704ad0f093747",
"oa_license": "CCBYNCND",
"oa_url": "http://diabetes.diabetesjournals.org/content/diabetes/58/10/2419.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4cf37cfbfcfd91d49e172941911704ad0f093747",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
231731243
|
pes2o/s2orc
|
v3-fos-license
|
Direct imaging of anthrax intoxication in animals reveals shared and individual functions of CMG-2 and TEM-8 in cellular toxin entry
The virulence of Bacillus anthracis is linked to the secretion of anthrax lethal toxin and anthrax edema toxin. These binary toxins consist of a common cell-binding moiety, protective antigen (PA), and the enzymatic moieties, lethal factor (LF) and edema factor (EF). PA binds either of two specific cell surface receptors, capillary morphogenesis protein-2 (CMG-2) or tumor endothelial marker-8 (TEM-8), which triggers the binding, endocytosis, and cytoplasmic translocation of LF and EF. The cellular distribution of functional TEM-8 and CMG-2 receptors during anthrax toxin intoxication in animals is not fully elucidated. Herein, we describe a novel assay to image anthrax toxin intoxication in live animals, and we use the assay to visualize TEM-8- and CMG-2-dependent intoxication. Specifically, we generated a chimeric protein consisting of the N-terminal domain of LF fused to a nuclear localization signal-tagged Cre recombinase (LFn-NLS-Cre). When PA and LFn-NLS-Cre were co-administered to transgenic mice that ubiquitously express a red fluorescent protein in the absence of Cre activity and a green fluorescent protein in the presence of Cre activity, anthrax toxin intoxication could be visualized at single-cell resolution by confocal microscopy. By using this assay, we show that CMG-2 is critical for intoxication in the liver and heart, whereas TEM-8 is required for full intoxication in the kidney and spleen. Other tissues examined were largely unaffected by single deficiences in either receptor, suggesting extensive overlap in TEM-8 and CMG-2 expression. The novel assay will be useful for basic and clinical/translational studies of Bacillus anthracis infection and for identifying on- and off-targets for reengineered toxin variants in the clinical development of cancer treatments. Background Assays for imaging of anthrax toxin intoxication in animals are not available. Results Anthrax toxin-Cre fusions combined with fluorescent Cre reporter mice enabled imaging of anthrax toxin intoxication in animals. Conclusion Shared and distinct functions of toxin receptors in cellular entry were uncovered. Significance. A simple and versatile assay for anthrax toxin intoxication is described.
Anthrax is contracted through inhalation, ingestion, or cutaneous inoculation of endospores of the Gram-positive bacterium Bacillus anthracis. Spores germinate following their introduction to the body and disseminate to cause a systemic infection, which, if left untreated, is associated with high mortality rates. Upon the death of the host, Bacillus anthracis forms spores that are resistant to chemical insults, heat exposure, and dehydration and remain infectious for long periods (1,2).
The virulence of Bacillus anthracis results from the release of three proteins into the circulation: protective antigen (PA), lethal factor (LF), and edema factor (EF). These three proteins are individually nontoxic, and PA combines with either LF to form anthrax lethal toxin or with EF to form anthrax edema toxin. The systemic administration of anthrax toxin to animals closely mimics experimental infection with Bacillus anthracis, and vaccination against the toxin components is protective, indicating that anthrax is largely a toxin-mediated disease (1,2). Anthrax toxins exert their cytotoxic actions in a three-step activation process that involves: a) the binding of PA to the surface of target cells, b) the translocation of LF and EF to the cytoplasmic compartment of the target cells, and c) the enzymatic action of LF and EF on cytoplasmic substrates (1,2). Anthrax toxin intoxication is initiated by the binding of PA to either of two receptors, capillary morphogenesis protein-2 (CMG-2) or tumor endothelial marker-8 (TEM-8) (3,4).
Subsequently, PA is cleaved at the sequence 164RKKR167 by cell surface-localized furin or furin-like proprotein convertases (5). This endoproteolytic cleavage is absolutely required for toxin activation and triggers all subsequent steps of the activation process. The C-terminal 63 kDa fragment of PA (PA63) remains bound to the cell surface after endoproteolytic cleavage and undergoes a conformational change that leads to the formation of a PA63 heptamer or octamer, which subsequently binds up to four molecules of LF or EF with high affinity (6-8). The complex of PA63 and LF or EF is then endocytosed, and PA63 undergoes pH-induced conformational changes in the endosomal/lysosomal compartment to form a channel that facilitates the unfolding and translocation of LF and EF to the cytoplasm. EF is an adenylate cyclase proposed to generate supraphysiological intracellular levels of cyclic AMP (9). LF is a zinc-dependent metalloproteinase that can cleave and inactivate several mitogen-activated protein kinase kinases (10). Although essential for intoxication, the cellular distribution of CMG-2 and TEM-8 and the function of each receptor in intoxication in specific organs remain to be fully elucidated.
Notably, assays for direct visualization of anthrax toxin intoxication in vivo are not available, and the tissue and cellular targets for anthrax toxin during in vivo infection have been inferred only indirectly from analysis of tissues from intoxicated animals or from biochemical and genetic analysis of anthrax toxin targets (11,12).
LF is stable in circulation when administered alone and only becomes cell surface-associated after the binding of PA to CMG-2 or TEM-8 and its subsequent proteolytic cleavage to PA63. It has long been noted that LF residues 1-254 suffice to achieve translocation of a variety of "passenger" polypeptides and other molecules into the cytoplasm of the cells in a PA63-dependent manner (13,14).
These include other bacterial toxins and bacterial proteins (13, 15-19), fluorescent proteins (20), viral proteins (21-24), eukaryotic proteins (25-28), and radioisotopes (29-32). Thus, the fusion or conjugation of LF to imageable moieties could provide ideal agents for studying the cellular intoxication by anthrax toxin in vivo. A significant caveat to this approach, however, is the low number of LF molecules successfully translocated to the cytoplasm through the PA pore, which makes most imaging modalities poorly suited to study anthrax toxin intoxication in whole-animal systems (18,27). A second challenge to whole-animal imaging approaches is that most imaging modalities, such as radionuclides, enzymes such as horseradish peroxidase and β-galactosidase, and fluorescent proteins, likely would not discriminate between productive intoxication (i.e., PA-dependent entry into the cytoplasm) and non-productive interactions of the labeled toxin with cells, such as cell surface retention, fluid phase pinocytosis, and endosomal/lysosomal accumulation of intact or partially degraded toxin conjugates.
Spleen extracts from reporter mice carrying a Cre-activated β-galactosidase transgene have been shown to express increased β-galactosidase activity when infected with Salmonella enterica serovar Typhimurium carrying the type III secreted protein, SopE, fused to bacteriophage P1 Cre recombinase (33). Although single-cell resolution was not achieved, presumably due to a low number of cells being infected (33), the study provided evidence that bacterial protein-Cre fusion proteins may display sufficient enzymatic activity in animals to induce LoxP-dependent recombination.
In this study, we used a combined biochemical and genetic approach to image anthrax toxin intoxication in animals. Specifically, we generated a tripartite fusion protein that consists of the N-terminal domain of LF fused to a nuclear localization signal-tagged bacteriophage P1 Cre recombinase (LFn-NLS-Cre). When PA and LFn-NLS-Cre were co-administered to transgenic mice that ubiquitously express a red fluorescent protein (tdTomato) in the absence of Cre activity and a green fluorescent protein (eGFP) in the presence of Cre activity (hereafter mTmG mice), anthrax toxin intoxication could readily be visualized at single-cell resolution by using confocal microscopy of unfixed and unprocessed organs. By superimposing individual genetic deficiencies of either TEM-8 or CMG-2 in mTmG mice, we were able to directly establish the importance of each receptor in anthrax toxin intoxication in individual tissues.
When used in conjunction with modified PA variants that are activated by specific cell surface proteases, the assay may also be suitable for in vivo imaging of specific cell surface proteolytic activity in a variety of physiological and pathological processes.
Recombinant proteins
Plasmids for expressing proteins having LFn (LF aa 1-254) fused to Cre recombinase were constructed using the Champion pET SUMO vector (Invitrogen, Carlsbad, CA), which expresses proteins fused at the C-terminus of a His6-SUMO tag. DNA-encoding residues 1-254 of anthrax lethal factor originated from Hideo Iwai (51). The His6-SUMO polypeptide and the His6-tagged SUMO protease were subtractively removed by passage through Ni-NTA resin. The LFn-Cre proteins were further purified by chromatography on hydroxyapatite to achieve purities of >95%. The LFn-NLS-Cre protein selected for the animal imaging studies was obtained in yields of >20 mg/L of culture.
Cell culture assays
Efficacy of LFn-Cre protein translocation into cells was tested in CV1-5B cells (52,53). Cells were plated in 96-well plates in DMEM high glucose medium with 10% fetal bovine serum, cultured at 37˚C at 10% CO2, and used when they were at low and high confluency. PA was added at 1 µg/mL in all wells, and LFn-Cre proteins were diluted serially at 3.14-fold. After 40 h, cells were washed in phosphate buffered saline (PBS) containing 2 mM MgCl2 and fixed in PBS, 5 mM EGTA, 2 mM MgCl2, and 0.2% glutaraldehyde for 30 min. After again washing in PBS with 2 mM MgCl2, the cells were stained for β-galactosidase activity with PBS, 2 mM MgCl2, 0.1% Triton X-100, 0.1% NaN3, and 1 mg/mL chlorophenolred-β-D-galactopyranoside. Absorbance was measured at 570 nm after 20 min to quantify conversion of substrate by β-galactosidase (54).
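As an illustration of how such an absorbance readout could be analyzed, the following minimal Python sketch fits a four-parameter logistic dose-response curve to A570 values from a 3.14-fold dilution series to estimate a half-maximal effective concentration. It is not the analysis pipeline used in the study, and all concentrations and readings shown are invented for demonstration.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    # Four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical example: 3.14-fold serial dilutions (ng/mL) and A570 readings.
conc = 1000.0 / (3.14 ** np.arange(8))
a570 = np.array([1.20, 1.15, 1.02, 0.80, 0.52, 0.31, 0.22, 0.19])

params, _ = curve_fit(four_pl, conc, a570,
                      p0=[a570.min(), a570.max(), np.median(conc), 1.0],
                      maxfev=10000)
print("estimated EC50 (ng/mL):", params[2])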
Imaging anthrax toxin intoxication in mice
LFn-NLS-Cre and PA proteins alone or in combination in PBS were delivered intraperitoneally or via tail vein injection. The mice were tail vein-injected with 100 µL of 6 mg/mL Hoechst dye (Thermo Fisher Scientific, Waltham, MA) 4-6 h prior to termination of an experiment to visualize nuclei (57). Mice were euthanized by CO2 inhalation and perfused with ice-cold PBS using cardiac puncture. Organs were immediately removed and cut into ~1-2 mm thick slices using a scalpel. The organ slices were placed on a MatTek glass bottom microwell dish (MatTek Corporation, Ashland, MA) and imaged using a 20x 0.75 NA Air or 60x 1.27 NA Water objective (Nikon, Tokyo, Japan) on an A1R + MP confocal microscope system (Nikon, Tokyo, Japan). Large images were composed of stitched images with a 10% overlap using NIS-Elements software (Nikon, Tokyo, Japan).
Generation of LFn-Cre recombinase fusion proteins capable of PA-dependent cytoplasmic translocation
We have previously shown that PA-dependent translocation of a LF-β-lactamase fusion protein can be imaged in cultured cells by using a cell-penetrating β-lactamase quenched fluorescence resonance energy transfer substrate (18). The adaptation of this assay for imaging anthrax toxin intoxication in whole animals, while hypothetically feasible, is prohibited by the high cost of the β-lactamase substrate and, likely, by logistic problems associated with systemic delivery of the substrate to animals. We therefore explored the possibility of combining biochemical and genetic approaches to imaging anthrax toxin intoxication in whole animals. Specifically, we generated a series of proteins that consist of the PA- Cre was administered at a 40-fold higher concentration (Figure 1D).
Imaging anthrax toxin intoxication in mice
The above studies showed that LFn-Cre fusion proteins could translocate to the cytoplasm in a PA-dependent manner, that the Cre moiety (alone or as an intact fusion protein) thereafter was imported to the nucleus, and that it retained its recombinase activity after nuclear translocation. This indicated that To test if PA and LFn-NLS-Cre could mediate LoxP-dependent recombination in a whole-animal context, we first designed PCR primer sets that would selectively amplify, respectively, the nonrecombined and the recombined mTmG transgene (Figure 2A). Interestingly, a PCR product derived from the recombined transgene was readily detected in the heart, lungs, liver, kidney, spleen, lymph nodes, thymus, uterus, esophagus, trachea, tongue, and bone marrow of mTmG +/0 mice injected with LFn-NLS-Cre and PA but not in these organs from non-injected mTmG +/0 mice ( Figure 2B). This PCR product was not observed in the intestine, skin, and brain of mice injected with LFn-NLS-Cre and PA ( Figure 2B).
We next determined the ability to detect fluorescence in mTmG +/0 mice by confocal microscopy of unfixed whole organ slices, which could serve as a convenient readout for Cre activity. Wildtype mice were analyzed in parallel as a control for autofluorescence. To obtain semi-quantitative estimates of fluorescence intensities of the examined organs, in this and the following experiments, we generated composite images of the entire organ slice from confocal images acquired at low magnification. As expected, red fluorescence of variable intensity was observed in multiple organs of mTmG +/0 mice but not in the corresponding organs from wildtype mice imaged using the identical conditions (Supplemental Compatible with the PCR analysis, green fluorescence was weak or absent in intestine (Supplemental To determine when an eGFP signal is first detectable after the administration of PA and LFn-NLS-Cre, we injected mTmG +/0 mice with 75 µg of each protein and examined the heart, kidney, liver, lungs, and spleen by confocal microscopy at 6, 8, 10, and 12 h (Supplemental Figure 9). Whereas no signal was observed at any of these time points in the heart (Supplemental Figure 9 A'
Single-cell resolution imaging of anthrax toxin intoxication
Using the knowledge gained from the above experiments, we next tested the ability of the assay to image intoxication in individual cells in unprocessed organs. For this purpose, mice received three intravenous injections of 25 µg LFn-NLS-Cre and 25 µg PA at 0 h, 24 h, and 48 h. 72 h after the first injection, confocal images of red, green, and blue (nuclei) fluorescence of slices of the heart, kidney, liver, lungs, and spleen were acquired at high magnification ( Figure 3). This analysis showed that in tissues of these five organs, non-intoxicated and intoxicated individual cells were readily distinguishable by their red and their green or yellow membrane-confined fluorescence, respectively.
Effect of genetic elimination of CMG-2 and TEM-8 on anthrax toxin intoxication
We next interbred previously generated CMG-2-deficient (Cmg2 -/-) and TEM-8-deficient (Tem8 -/-) mice to mTmG +/0 mice to generate, respectively, Cmg2 -/-;mTmG +/0 and Tem8 -/-;mTmG +/0 mice. These respectively, anthrax lethal toxin and anthrax edema toxin (11,56). Assuming that both receptors were close to ubiquitously expressed in tissues, this was tentatively suggested to be a consequence of a more than 10-fold lower affinity of PA for TEM-8 than for CMG-2. These findings are compatible with the current imaging study, showing that TEM-8 was essentially unable to support the intoxication in the heart, liver, and lungs, despite repeated toxin exposure through multiple injections. It should be noted, however, that full intoxication in other organs, including the spleen and kidney, was dependent upon TEM-8 but not CMG-2. This unequivocally demonstrates that TEM-8 is a functional receptor for anthrax toxin in vivo despite its reported lower affinity for PA.
A curious discrepancy between the aforementioned genetic studies and our current imaging study pertains to the intestine: Genetic studies have definitively established intestinal epithelial cells as direct targets for anthrax edema toxin (11). Nevertheless, we were consistently unable to observe the intoxication in intestinal epithelium. Intestinal epithelial cells of the mTmG +/0 reporter mice have previously been shown to undergo recombination in vivo and express eGFP in response to constitutive or inducible Cre expression, showing that the mTmG transgene is not inherently refractory to Cre-mediated recombination in intestinal epithelium (55). An attractive explanation for this discrepancy pertains to the lack of toxicity of LFn-NLS-Cre/PA as compared to edema toxin. We speculate that edema toxin may initially be unable to access this barrier tissue, but because damage to other visceral organs progresses as a consequence of EF intoxication, endothelial and/or epithelial barrier breakdown may allow entrance to the intestinal epithelial cells.
The imaging assay described here is simple and robust, and, importantly, it does not require the handling of toxic proteins. Therefore, it should be amenable for and adaptable to diverse research settings.
The assay should be useful for answering a number of basic research questions regarding the pathogenicity of anthrax toxins, as well as assisting clinical/translational efforts aimed at optimizing the treatment of individuals accidentally or deliberately exposed to Bacillus anthracis.
Considerable effort is currently being expended on the development of modified anthrax toxins as novel agents for the treatment of human malignancies. Strategies employed include the reengineering of PA to bind tumor cell surface-enriched proteins (34,35) and the reengineering of PA to be proteolytically activated by proteases enriched in the tumor microenvironment, including matrix metalloproteinases (36)(37)(38)(39)(40)(41)(42)(43), urokinase plasminogen activator (38,40,42,(44)(45)(46)(47)(48)(49), and testisin (50). The assay described here is eminently suited for assessing the efficiency of LF delivery to tumor-relevant cell populations by these modified PAs, as well as to systematically delineate off-targets, which may be invaluable for dose and route of delivery optimization. Last, but not least, by using PA variants selectively cleaved by specific cell surface proteases (43,46,50), the assay may be used for in vivo imaging of specific cell surface proteolytic activity in diverse physiological and pathological settings.
|
2021-02-02T17:41:09.904Z
|
2021-01-22T00:00:00.000
|
{
"year": 2021,
"sha1": "36865125b56007d1d3c8f174aa5f93cb33ffad35",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/02/09/2021.01.22.427304.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "36865125b56007d1d3c8f174aa5f93cb33ffad35",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology"
]
}
|
264827269
|
pes2o/s2orc
|
v3-fos-license
|
Hand gesture recognition via deep data optimization and 3D reconstruction
Hand gesture recognition (HGR) is one of the most significant tasks for communicating with the real-world environment. Recently, gesture recognition has been extensively utilized in diverse domains, including but not limited to virtual reality, augmented reality, health diagnosis, and robot interaction. On the other hand, accurate techniques typically utilize various modalities generated from RGB input sequences, such as optical flow, which captures the motion data in images and videos. However, this approach impacts real-time performance due to its demand for substantial computational resources. This study aims to introduce a robust and effective approach to hand gesture recognition. We utilize two publicly available benchmark datasets. Initially, we performed preprocessing steps, including denoising, foreground extraction, and hand detection via connected component techniques. Next, hand segmentation is done to detect landmarks. Further, we utilized three multi-fused features: geometric features, 3D point modeling and reconstruction, and angular point features. Finally, grey wolf optimization selected useful features that were fed to an artificial neural network for hand gesture recognition. The experimental results show that the proposed HGR achieved recognition accuracies of 89.92% and 89.76% on the IPN hand and Jester datasets, respectively.
INTRODUCTION
Regarding human-computer interaction, technological breakthroughs in artificial intelligence and modern technology have created several efficient communication channels. Hand gesture recognition (HGR) is a technique in which a receiver recognizes physical motions produced by a person's fingers, hands, arms, head, and face. In recent times, numerous disciplines, including modeling, computing, biomedicine, and gadgets, have increased their focus on realistic human interaction and understanding in the smart city environment (Gholami & Noori, 2022). Gestures are the most intuitive of these inputs (Oyedotun & Khashman, 2017). A single inertial sensor is utilized in motion-based monitoring. These detectors are responsible for monitoring the hand's impulse, velocity, and position. The limited detection range of body-movement cameras in household appliances is a downside of employing these capabilities to control electronic appliances such as televisions, radios, and interior lighting (Gholami & Khashe, 2022). The second method employs sensing devices or camcorders to procure directions from hand movements; the sensor and camera data are processed with segmentation methods based on color, structure, appearance, placement, shape, and hand motion. Following this second method, our proposed model recognizes hand motions using sensing devices or webcams (Trong, Bui & Pham, 2019).
This research study involves a robust method for hand gesture detection and recognition. We consider two state-of-the-art datasets for the evaluation of the proposed HGR method. Initially, we perform some steps for data normalization and other related tasks, such as noise reduction and frame conversion. Hand shape detection is the second step of our proposed model. Next, we extract useful information through a feature extraction model; 3D reconstruction is applied to obtain more accurate values and a higher accuracy rate; data optimization is performed via a heuristic algorithm, namely grey wolf optimization; and finally, for recognition and classification, we apply an artificial neural network. The research study's objectives are presented in the following points: • The study aims to propose the development of an optimized communication channel for human-computer interaction by utilizing a hand gesture recognition (HGR) system.
• We compare and analyze the characteristics of static and dynamic gestures in relation to their effectiveness in communication and recognition.
• We investigate various methodologies employed in the acquisition and analysis of hand gestures, encompassing the utilization of image sensors, monitors, and finger-based systems.
• The study presents a comprehensive approach for extracting robust features in order to improve gesture recognition. The proposed method incorporates geometric, 3D point, and angular features.
• We adopted 3D modeling techniques to enhance the precision and accuracy of hand gesture information.
The present research article is structured into several sections providing comprehensive coverage of the research. 'Related Work' provides a thorough exposition of related methods, while 'Materials & Approach' outlines the proposed method in detail, involving pre-processing, hand detection, data mining, and classification methods. 'Experimental Results & Analysis' presents the experimental part, including details of the experiments, results, and evaluation against other state-of-the-art methods. Finally, 'Conclusion' presents the conclusion and provides some potential future insights.
RELATED WORK
Due to their decreasing cost and small size, IMUs have become a standard technology found in telephones, smart watches, and smart bands (Chung et al., 2019). The scientific community is becoming more interested in using IMUs for higher-level physical activity recognition due to the adoption of sophisticated and wearable technology. A stretchable variety of electronics for image stabilization was suggested by Chen et al. (2018). They developed a method that integrated an IMU onto a rubber stopper that could be fastened to the body. The molded case served as stress relief, protecting the sensors.
Additionally, the band is simple to attach and release to collect information from other locations, such as the forearms or legs. Furthermore, since the motion causes fluctuations in the rubber stopper, the detected signals were impacted by motion artefacts (i.e., noise). Due to the intensity and velocity associated with the movement, hand gestures are particularly prone to such noise. The detected motion artefact may be reduced if the IMU is immediately linked to the surface. However, creating an easily correctable sensor is generally expensive and time-consuming. An affordable, easily repairable six-axis IMU can be made using our methodology. Due to its promise in medical and human-computer interaction (HCI) fields, hand gesture detection by DL is a topic of active research. For example, Cole et al. (2020) developed an artificial neural network-based technique to distinguish cigarette motions from an apple watch's tri-axis magnetometer. The use of lenses to recognize static motions has been extensively researched. Different techniques extract motion detection information for motionless hands (Aldabbagh et al., 2020). The entire hand or only the digits can be used for feature extraction.
HGR of four indicators is a difficult task because the complete hand's appearance is heterogeneous and necessitates substantial pattern recognition for authentication. Numerous academics have put forth various approaches for recognizing gestures made with the entire hand. A method was put out by Cheng et al. (2016) that retrieved the hand's shape and used the center to determine compactness and finger location for gestures. Using a prediction model, nine different actions are classified as movements. Using Hu invariant moments along with skin color, angle, and other factors, Oprisescu, Rasche & Su (2012) identified the hand. The researchers employed a distance measure configuration method for categorization. In their system, Yun, Lifeng & Shujun (2012) segmented the hand during preprocessing. A localized shape pattern and block-based characteristics are extracted for a stronger depiction of the hand. These features are integrated to identify static hand motions, and a classification method is employed. Using YCbCr values, Candrasari (Agarwal, Raman & Mittal, 2015) located the hand. They used the discrete wavelet transform (DWT) for feature extraction and then successfully classified the data using the hidden Markov model (HMM) and k-nearest neighbor (KNN).
Jalal, Khalid & Kim (2020) used a user-worn glove to retrieve the hand, utilizing contour modelling. American Sign Language (ASL) and the numerals 0 through 9 were classified using an ANN. Chen, Shi & Liu (2016) used a color model to partition the hands and collected training hand positions. The approach suggested by Bhavana, Surya Mouli & Lakshmi Lokesh (2018) was divided into the following phases: preprocessing, hand segmentation using a cross-correlation method for detecting edges, feature vector computation using the Euclidean distance across contours, and finalization. Following that, a comparison between the Hamming distance and the spatial relationship is made to recognize gestures. A unique approach to hand gesture identification was put out by Yusnita et al. (2017), whereby the shoulder is subtracted employing location modification, and the hand is identified using skin-color features. Calculated gesture moments are used with SVM to classify the motions. To recover the hand using wristband-based contour characteristics and arrive at identifiable information, a straightforward feature-matching strategy was suggested. A functionality structure was suggested by Liu & Kehtarnavaz (2016) for assessing 3D hand posture. For feature extraction, they utilized convolutional layers, which were strengthened by a new long short-term dependence aware (LSTD) module that could recognize the correlation among various hand parts. The authors also incorporated a contextual integrity gate (CCG) to increase the trustworthiness of the feature representations of each hand part.
To compare their technology to other cutting-edge techniques, they employed evaluation metrics.
The localization of hand landmarks to extract features enabling gesture detection has been approached from many angles. Hand gesture detection is used extensively by investigators in the preponderance of currently used techniques. A technique for extracting significant hand landmarks from images was suggested by Ahmed, Chanda & Mitra (2016). They pinpointed the exact coordinates of those landmarks and then correlated those landmarks with their respective counterparts in 3D data to simulate the hand position. Regions of interest (ROI) can be generated using a method developed by Al-Shamayleh et al. (2018) using the local neighbor methodology. To identify fingertips, researchers applied active contour detection techniques. Pansare & Ingle (2016) created an innovative method to determine the fingernails and the palm's center. An adjustable hill-climbing approach is used on proximity networks to execute the fingertip-detecting process. The proportional lengths across fingers and valley locations are used to identify individual fingers.
MATERIALS & APPROACH
The proposed method is based on robust approaches for hand gesture detection and recognition. We consider two complex databases for our proposed method evaluation. Initially, we perform various prerequisite steps for data normalization and other related tasks, such as noise reduction and frame conversion. Hand shape detection is the second step of our proposed model. Next, we extract useful information through a feature extraction model; 3D reconstruction is applied to obtain more accurate values and a higher accuracy rate; data optimization is performed via a heuristic algorithm, namely grey wolf optimization; and finally, for recognition and classification, we apply an artificial neural network. The comprehensive method is presented in Fig. 1.
Pre-processing
In this subsection, we cover the pre-processing for the suggested technique, which begins with foreground extraction using change detection and connected-component-based approaches and strategies. The connected-component labelling approach is used for fragmenting the human hand silhouette and finding conspicuous skin pixels.
We adopt these techniques from different studies. For example, using Otsu's method and image segmentation, Petro & Pereira (2021) proposed a novel method for optimal color quantization and demonstrated its performance on numerous test images. Utilizing histogram-oriented thresholding, we differentiated the hand shape after extracting the skin elements. Using Otsu's method, numerous threshold values of (Eq. 1) were adapted, and the extreme color strength of the stochastic histogram h_so(x,y) is described in terms of IR, the weight; TO_ai, the threshold suggested by Otsu's technique; and Teh_ai_max, the main position of skin occurrence over the extracted histogram directory. This process is applied to every grey-scale subdivision of the given image, which is articulated as: I_ge(x,y) = (0,0,0) if g(x,y) = 0, and I_ge(x,y) = (I_r(x,y), I_g(x,y), I_b(x,y)) if g(x,y) = 1. (2)
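To make the thresholding step concrete, the following short Python sketch applies Otsu's method to a grey-scale frame and keeps the original colour values only where the resulting mask is foreground, in the spirit of Eq. (2). It is an illustrative sketch rather than the authors' MATLAB implementation, and the input file name, blur kernel, and library choice are assumptions.

import cv2
import numpy as np

frame = cv2.imread("frame_0001.png")            # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)        # light denoising before thresholding

# Otsu's method picks the threshold that minimises intra-class variance.
otsu_t, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep the original colour values only where the mask marks foreground.
segmented = cv2.bitwise_and(frame, frame, mask=mask)
print("Otsu threshold:", otsu_t)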
Hand detection
The human hand silhouette ridge identification process consists of two steps (Zhang et al., 2018): sequential edge identification and ridge data synthesis. In the binary border separation process, binary boundaries are recovered from the RGB silhouettes produced in the above-mentioned denoising phase. Employing distance transformation, proximity maps are generated on the boundaries. In the ridgeline data creation phase, local optima are acquired from the pre-estimated mapping to generate ridge data along the binary vertices. The mathematical representation of hand recognition is given in terms of α, the centroid point of the trajectories stored in the confusion table; β, the updated trajectories of the evaluated data; and γ, the distance between the present values of the confusion matrix and the new trajectories.
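A rough Python equivalent of this edge-plus-ridge step is sketched below: it extracts the silhouette boundary, builds a distance map, and keeps local maxima of that map as ridge candidates. The thresholds, kernel size, and input file are assumptions made for illustration, not values taken from the paper.

import cv2
import numpy as np

mask = cv2.imread("hand_silhouette.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary silhouette
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Binary boundary extraction followed by a distance map of the silhouette interior.
edges = cv2.Canny(mask, 100, 200)
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)

# Ridge points: local maxima of the distance map (a simple dilation-based test).
local_max = (dist == cv2.dilate(dist, np.ones((7, 7), np.uint8))) & (dist > 0.5 * dist.max())
ridge_points = np.column_stack(np.nonzero(local_max))
print("ridge candidates:", len(ridge_points))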
Hand points detection
The segmented hand is then utilized to detect landmarks. Numerous methods are offered for localizing hand landmarks, which aid in extracting the features for recognizing and identifying individual movements. The bulk of strategies are quite straightforward and restrict the precise location of landmarks. After collecting the acoustic waves of geodesic velocity using the fast-marching algorithm (FMA) on frames, landmark recognition is conducted. The color values of quads p are generated based on the outlines' outer boundaries b. Pixels with identical color values c are identified and then the mean is calculated; based on the pixels' average values, the landmark l is painted. For the innermost landmark, the value of bright green is determined and the distances to adjacent points are determined. Calculating the fingers yields an expression in which p_x and p_y have the same color on the external surface and c_(p_x,p_y) is the total number of pixels with that colour on the external surface. The resulting landmarks on the hand outlines are illustrated in Fig. 2 below.
Features extraction
This step presents the details of the feature abstraction techniques for hand gesture recognition over challenging databases. We employed three methods for the acquisition of features: geometric features, 3D point modeling and reconstruction, and angular point features.
Algorithm 1 provides the complete methodology for features abstraction.
Geometric features
The point-based approach is used to retrieve hand sign features, which comprise locations on the thumb, forefinger, middle finger, ring finger, and little finger (see Fig. 3). All the values are merged in numerous ways to provide a range of learning and recognition-related properties. These markers are positional, geometrical, and angle-based properties.
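As a simple illustration of such point-based features, the Python sketch below computes scale-normalised pairwise distances between a palm-centre landmark and five fingertip landmarks. The coordinates and the choice of normalisation are hypothetical placeholders standing in for the positional and geometric properties described above.

import numpy as np
from itertools import combinations

# Hypothetical 2D landmark coordinates: palm centre plus five fingertips.
landmarks = {
    "palm":   np.array([120.0, 150.0]),
    "thumb":  np.array([ 60.0, 110.0]),
    "index":  np.array([ 95.0,  40.0]),
    "middle": np.array([125.0,  30.0]),
    "ring":   np.array([150.0,  40.0]),
    "little": np.array([175.0,  60.0]),
}

features = []
for a, b in combinations(landmarks, 2):
    features.append(np.linalg.norm(landmarks[a] - landmarks[b]))  # pairwise distances

# Normalise by the palm-to-middle-finger length so the features are scale invariant.
scale = np.linalg.norm(landmarks["palm"] - landmarks["middle"])
features = np.array(features) / scale
print("geometric feature vector:", np.round(features, 3))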
3D point modeling and reconstruction
The first stage in this section of the three-dimensional hand reconstruction is accomplished using a mathematical model and ellipsoid approaches. Using the details from the connections, we can see that an ellipsoid connects the hand point to the following point and the index point to the inner elements. The rest of the human hand points are connected to the inner hand in a similar manner to how the thumb position is attached to it using an ellipsoid structure. Eq. (6) shows the formulation of the three-dimensional hand reconstruction and computational model.
where k_me denotes the computational model, l_a(e_x, e_y) is the first point, and x, y are its coordinate values. Figure 4 shows a detailed overview of the computational model and the 3D hand reconstruction.
Angular point features
The angular point descriptors are based upon the angular geometry of the human hand points. We consider all the extracted points and find the angular relationships between them. Equations (7)-(9) show the formulation of the angular point features.
where i, j, and k are the angles between the edge pairs b↔c, a↔c, and a↔b of the triangle formed, respectively. After this, we map the results into the main feature vector.
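A minimal Python sketch of this idea is given below: it computes the three interior angles of the triangle formed by a landmark triplet using the law of cosines. The specific points chosen (thumb tip, index tip, palm centre) and their coordinates are assumptions for illustration, not values from Eqs. (7)-(9).

import numpy as np

def triangle_angles(a, b, c):
    # Interior angles (radians) of the triangle with vertices a, b, c,
    # computed from the side lengths via the law of cosines.
    ab, bc, ca = np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)
    ang_a = np.arccos((ab**2 + ca**2 - bc**2) / (2 * ab * ca))   # angle at vertex a
    ang_b = np.arccos((ab**2 + bc**2 - ca**2) / (2 * ab * bc))   # angle at vertex b
    ang_c = np.pi - ang_a - ang_b                                # angles sum to pi
    return ang_a, ang_b, ang_c

# Hypothetical landmark triplet (e.g., thumb tip, index tip, palm centre).
a, b, c = np.array([60., 110.]), np.array([95., 40.]), np.array([120., 150.])
print([round(np.degrees(x), 1) for x in triangle_angles(a, b, c)])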
Data optimization: Grey Wolf Optimization (GWO)
The GWO algorithm is an intelligent swarm technique created by Rezaei, Bozorg-Haddad & Chu (2018), which imitates the wolf's governing system for cooperative exploration. The grey wolf is a member of the Canidae family and enjoys living in packs. Wolves have a strong hierarchy, with a male or female alpha as their commander. The alpha is mainly tasked with making decisions. The pack must accept the leader's instructions. Betas are senior wolves who assist the leader in making decisions. The beta serves as the alpha's consultant and administrator. Omega, the lowest-ranking grey wolf, must notify the majority of other dominating wolves. A wolf is a delta if it is neither an alpha, a beta, nor an omega. The omega is governed by the delta, which interfaces with the alpha and beta. Wolves' hunting strategies and social stratification are represented mathematically to develop GWO and achieve optimization. The GWO methodology is evaluated using standard test functions, which reveal that it is comparable to other swarm-based approaches in terms of identification and application. When prey is located, the iteration counter starts (t = 1). Subsequently, the alpha, beta, and delta wolves supervise the omegas to search for and eventually encircle the prey. Three vectors are estimated to define the encircling behaviour, where t denotes the current iteration, Ā is the position vector of a grey wolf, and Ā1, Ā2, and Ā3 are the position vectors of the alpha, beta, and delta wolves. Algorithm 2 shows a comprehensive description of the data optimization technique via grey wolf optimization.
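For concreteness, the sketch below implements the standard GWO position-update rule in Python and applies it to a toy objective standing in for the feature-selection error. The population size, iteration count, bounds, and objective are illustrative assumptions, not the settings used in the paper.

import numpy as np

def gwo(fitness, dim, n_wolves=10, iters=50, lb=0.0, ub=1.0, seed=0):
    # Standard grey wolf optimizer: alpha, beta and delta wolves guide the pack.
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        scores = np.array([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]
        a = 2.0 - 2.0 * t / iters                       # 'a' decreases linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a         # coefficient vectors
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])      # distance to the leader
                new_pos += (leader - A * D) / 3.0       # average of the three guided moves
            wolves[i] = np.clip(new_pos, lb, ub)
    scores = np.array([fitness(w) for w in wolves])
    return wolves[np.argmin(scores)]

# Hypothetical use: minimise a toy objective standing in for feature-selection error.
best = gwo(lambda w: np.sum((w - 0.3) ** 2), dim=5)
print("best weights:", np.round(best, 3))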
Classification: ANN
This section describes the classification method via an ANN. An ANN is a set of numerous perceptrons, or neurons, in every layer; when the required information is propagated along the forward channel only, the network is referred to as a feed-forward neural network (Abdolrasol et al., 2021).
Artificial neural networks (ANNs) have the capability to recognize hand gestures and can be trained to solve intricate problems that traditional computing systems or individuals typically encounter. Supervised training methods are frequently employed in practice, although there are instances where unsupervised training techniques or direct design methods are also utilized. As discussed in the literature, an artificial neural network has been utilized to detect gestures (Nguyen, Huynh & Meunier, 2013). The segmentation of images in this system was carried out by utilizing skin color as a basis. The features chosen for the artificial neural network (ANN) comprised adjustments in pixel values across cross-sectional planes, boundary characteristics, and scalar attributes such as aspect ratio and edge ratio.
In addition, the ANN method is effective for handling problems involving RGB data, textual information, and tabular data. The benefit of an ANN is its ability to learn a transfer function that translates any inputs to any result for any data. The artificial neurons endow the ANN with substantial qualities that enable the network to learn any complicated relation between output and input data, a property often known as universal approximation. Numerous academics use ANNs to tackle intricate relationships, such as the coexistence of mobile and WiFi connections in spectrum resources. We pass the important feature vector to the neural network for classification; Fig. 5 shows the model diagram of the ANN.
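The sketch below shows one plausible way to wire such a classifier in Python using scikit-learn: a standardised, fused feature matrix is fed to a small feed-forward network. The feature dimensionality, hidden-layer sizes, and the randomly generated data are placeholders, not the architecture or data reported in the paper.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical fused feature matrix (geometric + 3D + angular) and gesture labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))          # 300 samples, 40-dimensional feature vectors
y = rng.integers(0, 13, size=300)       # 13 gesture classes, as in the IPN experiments

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                  max_iter=500, random_state=0),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))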
EXPERIMENTAL RESULTS AND ANALYSIS
Validation methods
The leave-one-subject-out (LOSO) CV strategy has been adopted to evaluate the performance of the HGR framework on two distinct benchmark databases, namely the IPN hand and Jester databases. The LOSO approach is a form of cross-validation that holds out the data of an individual participant for each fold.
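As an illustration, the Python snippet below runs a leave-one-group-out evaluation in scikit-learn, using subject identifiers as the grouping variable. The synthetic features, labels, number of subjects, and classifier settings are assumptions for demonstration only.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier

# Hypothetical data: features X, gesture labels y, and a subject ID per sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 13, size=200)
subjects = rng.integers(0, 10, size=200)     # 10 hypothetical participants

logo = LeaveOneGroupOut()
scores = cross_val_score(MLPClassifier(hidden_layer_sizes=(64,), max_iter=300),
                         X, y, groups=subjects, cv=logo)
print("mean LOSO accuracy:", scores.mean())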
Datasets description
The IPN hand dataset (Benitez-Garcia et al., 2021) is a broad-scale video dataset of hand gestures. It includes pointing with one finger, pointing with two fingers, and other complex gestures. The IPN dataset consists of 640 × 480 RGB videos at 30 frames per second.
The Jester dataset (Materzynska et al., 2019) contains an extensive number of webcam-collected, tagged video clips of hand motions. Each video sequence is transformed into JPG frames at a rate of 12 frames per second. The database includes 148,092 videos. There are 27 different categories of hand gestures.
Experimental evaluation
MATLAB (R2021a) was utilized for all testing and training, with an Intel(R) Core i5-10210U quad-core CPU @ 1.6 GHz running 64-bit Windows 11 as the primary device. In addition, the device was equipped with 8 GB of RAM.
The next stage of this research was to assess the performance of the proposed system on two different databases. Therefore, we utilized the grey-wolf-optimized ANN for classification. Figure 6 presents the confusion matrix of the 13 hand gestures of the IPN hand database.
Evaluation with other state-of-the-art algorithms
In this section, we evaluated our system with other classifiers. Additionally, for the classification of HGR, we compared our proposed system with other sophisticated approaches such as AdaBoost and decision trees. Figure 8 shows the comparison of the IPN Hand and Jester databases over state-of-the-art methods.
Figures 9-10 show the comparison of the ANN with the AdaBoost and decision tree recognition accuracies on the IPN hand dataset. The results show that AdaBoost achieved 86.84% and decision trees attained 84.38% on the IPN hand dataset. Therefore, the results clearly show that the ANN outperformed both classifiers in terms of recognition accuracy on the IPN hand dataset.
In this experiment, AdaBoost achieved 87.46% and decision trees attained 84.23% on the Jester dataset. Therefore, the results clearly show that the ANN outperformed both classifiers in terms of recognition accuracy on the Jester dataset.
We also evaluated our proposed system with other performance metrics, including precision, recall, and F1 score. Table 1 presents the performance metric results on the IPN hand gesture dataset. Table 2 shows the performance metric results on the Jester dataset.
CONCLUSION
Hand gesture recognition corrects a defect in interaction-based systems. Our proposed HGR system incorporates rapid hand recognition, segmentation, and multi-fused feature abstraction to introduce a precise and effective hand gesture detection system. In addition, two benchmark datasets are used for experiments. First, we performed preprocessing and frame conversion steps. Then, the hand shape is detected. Next, we acquired important information using multi-fused extraction techniques. Next, 3D reconstruction is implemented to get accurate results. Further, we adopted GWO to acquire optimal features. Finally, ANN classification is utilized to classify the hand gestures for managing smart home devices. Extensive experimental evaluation indicates that our proposed HGR method performs well with various hand gesture posture aspect ratios and complex backgrounds. In our future research studies, we intend to investigate the incorporation of comprehensive model analysis, in combination with time complexity measurements.
Figure 1
Figure 1. The overall flow of the proposed system.
Figure 3
Figure 3. The results of geometric features over extracted hand point values: (A) extracted hand points and (B) overview of geometric features. Full-size DOI: 10.7717/peerjcs.1619/fig-3
Figure 4
Figure 4. Example results of 3D point modeling and reconstruction: (A) fast marching result and (B) 3D reconstruction of hand shape. Full-size DOI: 10.7717/peerjcs.1619/fig-4
Algorithm 2 Grey Wolf Optimizer (GWO)
Initialize the grey wolf population Y_i, i = 1, ..., n
Initialize a, A and C
Estimate the fitness of each search agent (SA)
Y_α = the best SA
Y_β = the second-best SA
Y_δ = the third-best SA
while t < maximum number of iterations do
    for each search agent do
        Randomly initialize r_1 and r_2
        Update the location of the current SA by Eq. (7)
    Update a, A and C
    Estimate the fitness of all SAs
    Update Y_α, Y_β and Y_δ
    t = t + 1
return Y_α
Figure 5
Figure 5 The architecture flow and map of ANN.
|
2023-11-01T15:13:16.607Z
|
2023-10-30T00:00:00.000
|
{
"year": 2023,
"sha1": "8fd3c8f0a626fde63a9df6a59fadd4ed6b6392c3",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fbdca69a4d99aaaf4917db1f2d971c2a68f488ee",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
}
|
240967802
|
pes2o/s2orc
|
v3-fos-license
|
Reducing the Dosing Frequency of Selective Digestive Tract Decontamination to Three Times Daily Provides Effective Decontamination of Potentially Pathogenic Micro-Organisms
This study evaluated the effectiveness of selective digestive tract decontamination (SDD) application three times daily (t.i.d.) compared to the standard four times daily (q.i.d.). Retrospective equivalence study with a before-and-after design on a tertiary ICU in which the SDD frequency was reduced from q.i.d. to t.i.d. All patients with ICU admissions ≥ 72h, and with ≥ 2 surveillance cultures collected on different dates were included in this study. We compared successful decontamination of Gram-negative bacteria (GNB). Furthermore, time to decontamination, ICU-acquired GNB bacteraemia and 28-day mortality were compared between the two groups.
Introduction
Infection is a major complication among patients in the intensive care unit (ICU) resulting in additional morbidity, higher risk of mortality and increasing health care costs. [1] Selective digestive tract decontamination (SDD) is a common measure to prevent infections on the ICU. The principle behind SDD is that by reducing the numbers of potentially pathogenic microorganisms (PPM) in the gut, the risk of ICU-acquired infections can be reduced. [2] Intestinal decontamination of Gram-negative bacteria (GNB) was associated with a three-fold reduction in ICU-acquired bacteraemia with GNB. [3] The positive effect of SDD on clinical outcomes, i.e. improved survival and less infectious complications while maintaining a low prevalence of antibiotic resistance, has been demonstrated by three cluster-randomized studies. [3][4][5] The SDD regime is applied in ventilated patients with an expected duration of artificial ventilation >48 hours and consists of a mixture of non-absorbable antimicrobials combined with intravenous cefotaxime during the first four days of ICU admission. SDD is applied q.i.d. and this frequency has remained unchanged since the first SDD studies in the 1980s. There have been no studies that evaluated the optimal dosing frequency of SDD in order to achieve GNB decontamination. Reducing the dosing frequency of SDD paste application from four to three times daily (t.i.d.) would lower antimicrobial consumption and related health care costs. More importantly, by reducing the dosing frequency the nightly administration of SDD paste application could be omitted, thereby preventing sleep interruption.
Facilitating uninterrupted sleep reduces the incidence of delirium and is essential for adequate immune, metabolic, and endocrine functioning. [6,7] In an attempt to increase the quality of sleep for ICU patients, the dosing frequency of SDD application was reduced from q.i.d. to t.i.d in 2017. This t.i.d. SDD dosing regimen has been the standard of care since. This provided us with the opportunity to perform a before-and-after study comparing the efficacy of the q.i.d. versus the t.i.d. SDD application frequency, in terms of GNB decontamination of the digestive tract and subsequent ICU-acquired GNB-infections.
Setting, Design and Population
We retrospectively studied electronic microbiology and patient data gathered in the period from November 2011 until July 2019 on the ICU of the Amsterdam University Medical Centre, location VU medical centre (VUmc), a 28 bed ICU in a 730-bed tertiary care centre in the Netherlands. Data were derived from an automated database combining laboratory data with pseudo-anonymous patient data from the Electronic patient dossier (EPD). This database was constructed for antimicrobial stewardship and infection prevention purposes and consists of microbiological data, admission and discharge data, Medicine Administration Records, Surgical interventions and a small amount of patient data: sex, date of birth, date of death. Data visualization was performed using TIBCO® Spotfire®. The non-absorbable antibiotics in the SDD regimen consist of application of paste (colistin 2%, tobramycin 2%, amphotericin B 2%) in the oral cavity and of suspension (colistin 100 mg, tobramycin 80 mg, amphotericin 500 mg) via the nasogastric tube. This regimen was applied q.i.d. until 26-05-2017, thereafter the same regimen was applied t.i.d. Patients admitted to the ICU also receive 4 days of intravenous cefotaxime q.i.d. 1 g, this practice did not change over the course of time.
All adult patients with an ICU admission of at least 72h and with at least 2 surveillance cultures drawn on two separate days were included in the analysis.
Microbiological Methods
Surveillance cultures were taken on admission to the ICU and thereafter once a week on Mondays for pharynx and anus, and on Mondays and Thursdays for sputum. All surveillance cultures were included in the analysis. Surveillance cultures within 72h of admission represented the flora at admission on the ICU and are further called baseline surveillance cultures.
Based on previous literature the following aerobic GNB were defined as PPMs: Klebsiella, Enterobacter, Citrobacter, Proteus, Morganella, Serratia, Acinetobacter and Pseudomonas species. [8] By assessing the prevalence of GNB in the blood cultures of the patients in our cohort we found that Stenotrophomonas maltophilia was also a frequently cultured pathogen in the study period and was therefore added as a PPM. Because we also want to assess the reduction of carriage of endogenous "normal" but potentially pathogenic flora we added Escherichia coli to the list.
In the Amsterdam UMC medical microbiology laboratories, antimicrobial susceptibility was tested using automated systems, gradient tests and/or using the disk diffusion method.
Decontamination was defined as the reduction of Gram-negative bacterial load to a level at which surveillance cultures are negative (rectal or faeces, pharyngeal and sputum). The number of days in which decontamination should occur to be considered successful, i.e. adequate to reduce infectious complications and mortality, has not been previously defined. In the study of de Smet et al --in which the relationship between SDD and reduction of mortality was confirmed --the frequency of GNB isolation from rectal swabs among patients receiving SDD was reduced from 56% at day 3 to 15% at day 14. [4] The SDD regimen used in the study of de Smet et al was identical to the q.i.d. regimen used in this study. Therefore we chose to define successful decontamination as a surveillance culture result negative for GNB within 14 days without positive follow-up surveillance culture for GNB during ICU-admission, i.e.
follow-up lasted until the moment of discharge from the ICU.
In case of new fever two blood cultures were drawn. ICU-acquired bacteraemia with GNB was defined as bacteraemia occurring at least 48 hours after ICU admission with growth of either Enterobacterales or glucose-nonfermenting Gram-negative rods, without documented bacteraemia with the same species in the first 48 hours of ICU admission. Polymicrobial bacteraemia was defined when one or more microorganisms were isolated from one or more blood cultures, and clinical evidence suggested they had arisen from a common source and were part of the same episode. If the source was unknown, all positive blood cultures occurring within 48 hours of each other are considered as a single bacteraemia.
28-day all-cause mortality is defined as death for any cause within 28 days after the date of admission to the ICU.
Due to the before-after study design, we anticipated that time dependent factors such as antimicrobial resistance could introduce bias. We described the combined prevalence of susceptibility for the components of SDD (tobramycin and colistin) in surveillance cultures at baseline.
Two groups were formed on the basis of the dosing frequency of SDD, q.i.d. versus t.i.d. Continuous variables are presented as median and interquartile range. Categorical variables are presented as percentages. For the difference in proportion of patients with successful decontamination of PPMs in both groups and for comparison of susceptibility of Gram-negative bacteria at baseline a Chi-square test was used. For the difference in time to decontamination of PPMs from surveillance cultures a Kaplan Meier curve was used. For equivalence testing the two one-sided test (TOST) procedure was used. The largest clinically acceptable effect for which equivalence can be declared was a mean difference of 10%. The equivalence limit was set to 0.1 (d_E = 0.1). All data available was used, no formal sample size was calculated. Odds ratio was calculated to compare 28-day all-cause mortality between the two groups. Data analysis was performed using R Statistical Software (version 3.6.1; R Foundation for Statistical Computing, Vienna, Austria). Table 2 shows the proportion of successful decontamination in the two groups. Successful decontamination, defined as GNB negative surveillance cultures within 14 days without any further GNB detection in surveillance cultures thereafter, was not significantly different between the two groups (Table 2). To show non-inferiority of the t.i.d. regime, equivalence test of the proportions of successful decontamination was performed (Figure 2). With an equivalence bound of 0.1 and a 98% confidence interval, the proportions of successful decontamination of GNB are equivalent in both groups. The time to decontamination of GNB is shown in figure 3. The log-rank test, to compare time to decontamination of GNB between the two cohorts, did not show any difference between the groups (p-value of 0.55).
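Although the analysis was performed in R, the TOST logic can be sketched in a few lines of Python: the snippet below runs the two one-sided z-tests for a difference in proportions against a ±0.1 equivalence margin. The counts used are hypothetical placeholders, not the study's actual numbers.

import numpy as np
from scipy.stats import norm

def tost_two_proportions(x1, n1, x2, n2, margin=0.1):
    # Two one-sided z-tests for equivalence of two proportions within +/- margin.
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    p_lower = 1 - norm.cdf((diff + margin) / se)   # H0: diff <= -margin
    p_upper = norm.cdf((diff - margin) / se)       # H0: diff >= +margin
    return diff, max(p_lower, p_upper)             # equivalence if the larger p < alpha

# Hypothetical counts of successfully decontaminated admissions in each group.
diff, p = tost_two_proportions(x1=990, n1=1236, x2=580, n2=722, margin=0.1)
print(f"difference = {diff:.3f}, TOST p-value = {p:.4f}")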
Results
We observed 27 episodes of ICU-acquired bacteraemia with GNB during the study period, 17/1236 before (1.4%) and 10/722 (1.4%) after adjustment of SDD application frequency, with an incidence of 0.9 episodes/1000 ICU days in both. Causative pathogens in intensive care unit-acquired bacteraemia are shown in table 3. 28-day all-cause mortality was 26.1% and 25.6% in the q.i.d. and t.i.d. groups; the odds ratio for death at day 28 in the t.i.d. group compared to the q.i.d. group was 0.99 (95% confidence interval [CI], 0.80-1.21).
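For reference, such an odds ratio and its Wald confidence interval can be computed from a 2x2 table as sketched below in Python; the cell counts shown are hypothetical values merely consistent with the reported mortality percentages, not the study's underlying data.

import numpy as np
from scipy.stats import norm

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    # a: deaths t.i.d., b: survivors t.i.d., c: deaths q.i.d., d: survivors q.i.d.
    orr = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)            # Wald standard error of log(OR)
    z = norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(orr) - z * se_log), np.exp(np.log(orr) + z * se_log)
    return orr, lo, hi

# Hypothetical 2x2 counts consistent with ~26% 28-day mortality in both groups.
print(odds_ratio_ci(a=185, b=537, c=323, d=913))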
To control for potential changes in resistance epidemiology between the two time periods, especially with regard to the incidence of bacteria that were susceptible to the SDD antibiotics, we compared the surveillance cultures at baseline (Table 4). 72% and 71% of all admissions (in the q.i.d. and t.i.d. groups, respectively) started with GNB-positive surveillance cultures at baseline. Susceptibility for tobramycin or colistin in Gram-negative bacteria in baseline surveillance cultures was 97.3% and 96.8% (for the q.i.d. and t.i.d. group) with a p-value of 0.61 using the chi-square test. We conclude that the baseline epidemiology at admission on the ICU is comparable, and that our results are not biased by an epidemiological shift in susceptibility rate between the two historical cohorts.
The incidence of VAP on our ward has been shown to be 3.3/1,000 ventilation days. [9] Prevalence measurements for the national 'PREZIES' survey of hospital infections are performed every three months. [10,11] Based on these surveys, the prevalence of VAP in the q.i.d. cohort (median 0.1% of admitted patients, range 0-19%) did not significantly increase after the change to the t.i.d. regime (median 0.1% of admitted patients, range 0-21%).
Discussion
Despite its common use, the optimal SDD dosing regime has not previously been evaluated in a clinical setting. The present study demonstrated that a t.i.d. application regimen provides equally effective selective digestive tract decontamination compared to the standard q.i.d. regime. SDD effectiveness was demonstrated within a large patient population (n = 1958) receiving either t.i.d. (n = 722) or q.i.d. (n = 1236) administration. Several outcome measures support our conclusion. First, the proportion of successful decontamination was equal in both groups. Second, the time to decontamination of GNB did not, at any time point, differ significantly. Finally, we found no significant differences in clinically relevant outcomes (i.e. ICU-acquired bacteraemia and 28-day all-cause mortality) between the two cohorts.
Although the goal of SDD is digestive tract decontamination and subsequent reduction of ICU-acquired infection, the four cluster-randomized controlled trials that have previously investigated the efficacy of SDD did not report the time to or success of decontamination. [3][4][5]12] The primary outcome of our study can therefore not directly be compared to these trials. However, the close association between gut (de-)colonization and ICU-acquired infection is well established. [13][14][15][16][17][18] Frencken et al showed that both rectal and respiratory tract colonization were associated with bacteraemia (cause-specific hazard ratios, 7.37 [95% CI, 3.25-16.68] and 2.56 [95% CI, 1.09-6.03], respectively). [14] Oostdijk et al found that respiratory tract decolonization and intestinal tract decolonization was associated with a 33% and 45% reduction in the occurrence of intensive care unit acquired Gram-negative bacteraemia, respectively. Moreover, Oostdijk et al reported a reduction of proportion of colonization in patients treated with SDD throughout intensive care unit stay from approximately 30% at day 1 to 15-20% at day 20. [13] The fact that decontamination rates found in our study are comparable to the results found in the large prospective study of de Smet et al (i.e. 85% of patients cultured after 14 days are decolonized from GNB), in which clinical effectiveness of SDD application was proven, is a clear indication that t.i.d. administration of SDD is clinically effective and safe. [4,13] Furthermore, the secondary clinical outcomes defined in our study (ICU-acquired bacteraemia and 28-day all-cause mortality), in which t.i.d. proved to be non-inferior to q.i.d, supports this conclusion.
Strengths of our study are the size of our study population and the detailed information about intestinal colonization during SDD. This study provides, for the first time, detailed insight into the underlying dynamics of culture results during SDD. We demonstrated equal microbiological and clinical effectiveness of less frequent dosing. This is reflected in a stable incidence of ICU-acquired GNB bacteraemia and 28-day all-cause mortality. Our study also has limitations. We used a monocentric retrospective approach, using a historical control group. We had no detailed clinical information on ventilator-associated pneumonias in individual patients. On a population level, however, we noted no significant change in prevalence of VAP since the introduction of the t.i.d. application regime. [11] Besides, Bergmans et al showed previously that decolonization of the respiratory tract results in a relative risk reduction of 67% in the incidence of VAP. This makes a difference in VAP incidence despite the equivalent decontamination rates in our cohorts unlikely. [19] The retrospective design precludes correction for hidden variables in the original data that might have confounded the results. Yet, potential confounders in the original uncontrolled data are likely to be present in both groups (t.i.d. versus q.i.d.). Specifically, we ruled out an epidemiological shift in susceptibility for the SDD antibiotics, which could otherwise have biased the results.
Our findings support a t.i.d. SDD application frequency in the ICU. This new regime was designed as a sleep-promoting intervention on the ICU. Many sleep-disturbing factors are present on the ICU, but clinical interventions are one of the most important disruptive factors and should therefore be avoided. [20] Furthermore any unnecessary antibiotic use should be avoided to reduce the harm that can result from antibiotic-associated adverse events. [21] During the t.i.d. SDD application period of three years tobramycin and colistin resistance did not change, which is in line with previous studies assessing antibiotic resistance during the use of SDD. [22][23][24]
Conclusion
Based on time to and success of decontamination of Gram-negative bacteria, incidence of ICU-acquired GNB bacteraemia (0.9/1000 ICU days) and 28-day all-cause mortality there is no difference between a t.i.d. and a q.i.d. SDD application regime. These study findings justify implementation of a t.i.d. SDD application regimen in ICUs where a standard (q.i.d.) regimen is in place.
Declarations
Funding: No financial support was used for this study.
|
2021-08-25T17:26:23.504Z
|
2021-02-12T00:00:00.000
|
{
"year": 2021,
"sha1": "f08e0e9efdc9e27540ff607c8fe6b585a51ca0dc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10096-021-04234-1.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f898919611884ffcb4c9dbc836c6a303f940b568",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
56043649
|
pes2o/s2orc
|
v3-fos-license
|
STRATEGY AS DISCURSIVE PRACTICE IN A BRAZILIAN PUBLIC UNIVERSITY: A LOOK UNDER THE PERSPECTIVE OF CRITICAL DISCOURSE ANALYSIS
The aim of this article is a critical discursive analysis of the "management plan" genre of a public institution of higher education, from 2012 to 2015, located in southeast Brazil. The aforementioned plan is inserted in the discursive practice of strategic management, specifically the institutional, bureaucratic management, and is used as an instrument for decision-making. The goal of this analysis will be to discuss the first step of the "management plan", named "organizational policies". We can see that, while elaborating declarative sentences, there is an evaluation of the statements regarding what is to be considered relevant to the institutions by means of the ideological discourse on neoliberal ideals and market behavior. The adoption of market-oriented managerial tools has been a constant in public administration. The public administration looks for bases of organizational practices in the private sphere. This mimicry is still present in the field, and the search for new managerial practices still crosses the imaginaries of the public managers. However, the increasing incorporation of a market-oriented, neoliberal logic, mainly in the adoption of strategic planning, can still be verified. The conclusion presented in this paper serves to foment the debate on the strategies formulated for the Brazilian public service and the methodological applicability of critical discourse analysis. This meets the emerging need to systematize and integrate distinct theoretical and methodological approaches in the field of organizational studies when strategy is studied as a social and discursive practice.
Introduction
By considering that identities, social changes, and discursive elaborations are in constant consolidation and change throughout social actions, we present in this article a critical discursive analysis of the "management plan" genre of a federal institution of higher education, from 2012 to 2015, located in southeast Brazil. The aforementioned plan is inserted in the discursive practice of strategic management, specifically the institutional, bureaucratic management, and is used as an instrument for decision-making. According to what is written within the institutional management plan, the socially built planning methodology was developed from dynamic and flexible phases in the institutional environment through several social and discursive practices, such as interviews, seminars, focus groups, lectures, individual orientations, and meetings. The development of the plan is in accordance with the molds defined by the resolution of the University Board of the institution, which seeks to standardize the implementation of strategic planning, ruled by the relevant legislation of Decree 3.860/01 and of Act 10.861, of 14 April 2004, of the National Assessment System for Higher Education (SINAES - "Sistema Nacional de Avaliação da Educação Superior") and of the regulatory provisions expressed in the document.
The goal of this analysis is to discuss the first step of the "management plan", named "organizational policies". After investigation, we were able to see that the public university, while building its identity, also naturalizes and institutionalizes the ideological and hegemonic discourses of capitalism. According to Fairclough (1989, p. 85), "the ideology is more effective when its action is less explicit"; we can see such "subtleties" within the discourses. Under the approach of Critical Discourse Studies (CDS), in particular the studies developed by Norman Fairclough (1989, 2001, 2003), who is considered the greatest representative of CDS (RESENDE and RAMALHO, 2013), we seek to partially analyze strategic planning, a management tool typical of private organizations but that, in this institutional setting, is named "management plan" and is currently in considerable use.
According to Johnson et al. (2007), several segments of the public sector provide services and products to paying customers, in the same way as commercial organizations do. However, the role of ideology in the development of strategy in that sector is probably greater than in the private sector. There is also considerable control or influence, direct and/or indirect, exercised from outside these institutions, especially by the government. A "commercial" undertaking controlled by the State can have its planning horizon determined more by political questions than by market conditions, in addition to facing obstacles to capital investment and financing sources. For these reasons, there was a large-scale privatization of companies that were run by the State - steel, telecommunications, railroad services, and several others (JOHNSON et al., 2007). Faced with these issues, what we ask is: how are the position and the identity of this federal institution of higher education ("Instituição Federal de Ensino Superior", IFES) in southeast Brazil being built in its management plan? Why are certain identity constructions still preferred for organizational policies instead of others? The premise of this research is that the organizational policies made for the implementation of the management plan of this IFES can be implicitly loaded with relations of power and ideology. Our objective is to identify and verify, through critical discourse analysis, how the discursive strategies presented in the organizational policies of the management plan, built collectively by the members of the IFES, can strengthen, transform, or naturalize the hegemonic discourse.
Brazilian Public Administration: Situational Aspects
According to Andion (2012), functionalist dominance is still very present in public administration. The author therefore defends a dialog between different approaches and practices to enrich the field. Even though public administration constitutes a multidisciplinary field, a rational orientation dominates, aimed at maximizing efficiency and effectiveness. The renowned work Reinventing Government, by Osborne and Gaebler (1994), although not considered a new paradigm by many scholars in the field of public administration, has influenced new practices in the public sector, taking entrepreneurial practices as a reference at the basis of the modern bureaucracy. Because of that, from the research of Denhardt (2012), it is possible to see the dominant and current perspective in public administration - the New Public Management - which has mimetically adopted practices of private enterprise. In this same perspective, Bresser-Pereira (2011) also presents managerial public administration, focused on the neoliberal practices of professionalization and commoditization in the management of public affairs. The focus on the public manager was shown to be simplistic for not considering the political dimension and the working conditions of Brazilian public management, since the power struggle leaves its political role vulnerable after every decision taken. In this context, the Brazilian government incorporated the New Public Management as a market-oriented insight for the operation of the public sphere, a trend that was responsible for the reform of the State in several countries (PAES DE PAULA, 2005; ABRUCIO, 2007).
The new model presented and spread across the country has been responsible for the implementation of several management practices common to private enterprise, even though it does not always fit the reality of the Brazilian public administration system. Amongst these organizational practices, the most widespread and encouraged examples are strategic planning and the balanced scorecard. Several public organizations have elaborated their organizational policies, building management plans for the conduct of their administrative activities and, consequently, of their future. Amongst them are the bodies of direct and indirect public administration, government enterprises, public agencies, regulatory agencies, and federal universities.
The Management Plan of the IFES
The Brazilian IFES are part of the federal public administration and present a series of particular characteristics that practically prevent the contributions of traditional theories of private administration from being successfully applied to them (ANDRADE, 2003). In this setting, the managers of the IFES - mostly professors - are the ones responsible for the administration of the university, adopting management practices, decisions, and actions that allow the achievement of the institutional goals. The managing professor ends up accumulating administrative and academic activities (MARRA and MELO, 2005). In the case of this IFES, the management plan is one of the instruments that compose the Institutional Planning System ("Sistema de Planejamento Institucional"), structured in four integrated and synergistic main processes: the Management Plan, the Institutional Development Plan, the Institutional Pedagogical Project, and the Environmental and Physical Development Plan.
Based on the strategic planning technique, the management plan seeks to foster proactive behavior, aiming at the achievement of the institutional goals and, especially, at the establishment of a dynamic between the policies of expansion and the development of the institution. For Meirelles (1995), strategic administration emerged as a part of strategic planning, currently considered one of the main management instruments. Strategic planning is one of the most important organizational practices. According to Machado-da-Silva and Vizeu (2007), there is no denying that strategy is an organizational practice. It is in these terms that many authors attribute the decisive moment of propagation and consolidation of the institutional field of entrepreneurial strategy practices to the trend of strategic planning (MOTTA, 2000). According to Matias-Pereira (2009), strategic planning is an essential practice in both private and public administration because of the benefits it provides to organizations.
For this article, however, we built a theoretical framework of strategic planning, resorting, for the analysis, to the texts that compose the organizational policies, which mark the first phase of the management plan. This stage includes the elaboration of the mission, the vision, and the institutional goals. According to Kotler (1980), the mission of the organization must be defined to satisfy a need of the external environment and not to offer some product or service to the market. For Certo and Peter (1993), in turn, the mission is the raison d'être of the organization. The vision is the interpretation of the external behavior pattern, perceiving the social and economic transformations of the globalized world and correlating all of this with the context of the specific businesses of the institution. It is about the ability to see, even if intuitively, an image of the future of the business. If the mission reflects the fundamental business, the vision must reveal what it will be in an envisioned future. If the mission shows the raison d'être of the company, the vision aims to project this essence into the future. After the definition of the mission and the vision for the future, the organization focuses on the construction of objectives to be able to act with guidance and fulfill its mission and vision. According to Oliveira (2010), the next phase of the organizational policies is the organizational objective, responsible for guiding its efforts. In this theoretical approach, the practice of strategy is no longer understood as an exclusive attribute of the positioning and performance of the organization, but is to be considered a social and discursive practice, i.e., something that people do. Thus, to understand the practice of strategy, it is necessary to analyze how patterns of action are associated with the characteristics of the social agents and of the organizational context (JARZABKOWSKI and SPEE, 2009; WHITTINGTON, 1996, 2004, 2009). To achieve the goal of this research, the organizational policies were analyzed through the CDS model.
CDS: An Analytical Model
According to Gomes et al. (2011), critical discourse studies are a theoretical-methodological framework, biased in favor of social and discursive practices, that serves to denaturalize practices said to be universal within the social organization, aiming to assess how the structures of discourse produce, legitimize, or even question relations of power. According to Resende and Ramalho (2013, p. 9), "discourse is a moment of social practice interconnected to other equally important moments". This statement reinforces the importance of discourse analysis, because discursive samples can reveal the internalization of other moments of the practice in the discourse, such as certain social and ideological relations. This research resorts to the discourse analysis model by Fairclough (2003), which recontextualizes Halliday's (1991) Systemic Functional Grammar (SFG). There are three main types of discourse meaning in Fairclough's (2003) new model: action, representation, and identification. To delimit the scope of the types of discourse meaning, the analysis considered only identification, which addresses the construction and negotiation of identities within the discourse, relating to the identity dimension of Fairclough's CDS model (RESENDE and RAMALHO, 2013).
Given the format and length of the article, we chose to work only with the identification meaning and with the "assessment" category of that meaning. We highlight that "every analysis is necessarily incomplete, partial and open to revision" (RAMALHO and RESENDE, 2011, p. 118) and that assessments are always subjective, partial, and, because of that, connected to particular identification processes. The identification meaning, which is related to the concept of "style", represents the discursive aspects of identities and serves to identify social actors in texts. Within this meaning, the assessment category is made up of evaluative statements and valuation presumptions. Evaluative statements are statements of what is considered desirable or undesirable, relevant or irrelevant. Amongst the evaluative elements, adjectives and evaluative verbs are grouped in semantic sets of varying intensity, as in the "good/great/excellent" continuum, and are subject to an intensity scale. Valuation presumptions, in turn, are not marked by transparency of assessment; in them, values are more deeply embedded in the texts, i.e., in the discursive genre.
The genre has discursive characteristics that are molded in the course of social events. Thus, it is possible to infer that genres are ways to act and to interact and are culturally located, which implies considering distinct discursive modes. It is therefore noteworthy that situated genres are specific actions of a particular network of practices: "a kind of language used in the performance of a particular social practice" (CHOULIARAKI and FAIRCLOUGH, 1999, p. 56). It is through genre that certain social representations, beliefs, and courses of action change or are confirmed; it is how identities are formed. It is important to emphasize that the individuals who have the right to produce any type of genre, i.e., who engage in a discursive practice, also have the possibility to mold concepts, objects, and subject positionings (HARDY et al., 2000).
Discursive Genre: Management Plan
From Fairclough's perspective, the "management plan" discursive genre is a constituent of a network of social practices, in a specific communicative and social structure, that shows relations of power marked internally, in this case, by the senior leaders of the educational institution. The IFES management plan, which is the object of this analysis, was built within a bureaucratic and institutional practice that produces certain discourses, meanings, and constructions of subjectivities; it is produced to be consumed not only by those interested in the publicized fact, but also by society in general. From its sentences, we can verify the premise of normalizing and regulating the behavioral actions required by the IFES that are published in its management plan. Regarding the level of abstraction, it can be inferred that the genre is characterized by stability and rigidity, typical of bureaucratic genres, with little flexibility in its construction, even if it is tacit knowledge that social practices have a potential for flexibility and change.
A critical discursive analysis of the management plan genre - or of it as a subgenre, if it is considered part of the Institutional Development Plan (PDI) - comes up against the reality of research in the organizational field: the growing interest in discursive practices in this context. This confirms a statement by Resende and Ramalho (2013) regarding the fact that CDS has attracted more researchers, not only in the field of Critical Linguistics, but also from other areas of knowledge, such as the Applied Social Sciences. According to Alvesson and Karreman (2000), the "linguistic turn", introduced to the social sciences in the 1980s, made discourse analysis an important element of organizational studies. Discourse applied to the construction of strategy has been of increasing interest in the last few years in studies that examine the linguistic nature of strategies and the ways language shapes strategic practices, especially when considering strategy as a social and discursive practice (FENTON and LANGLEY, 2011; ROULEAU and BALOGUN, 2011; SPEE and JARZABKOWSKI, 2011; VAARA, 2010; VAARA et al., 2010, 2004; MANTERE and VAARA, 2008; HARDY et al., 2000). During the last decade, there has been an increasing focus on the relation between discourse and organizations (CEDERSTRÖM and SPICER, 2013). The proliferation of research demonstrates the rich potential of approaching strategy through discursive practices. However, at the same time, it shows the necessity of systematizing and integrating diverse approaches to create a general vision of what can happen when discourse is applied to strategy. The approach may allow for new research problems at specific levels of analysis (VAARA, 2010).
Despite this increase, it is argued that the role of discourse in strategy remains theoretically underdeveloped and empirically little explored (BALOGUN et al., 2009). There are studies with a special focus on power (CARTER et al., 2008). However, to make it possible to understand how power influences the success or failure of singular strategic initiatives, more studies are necessary (LAINE and VAARA, 2007). It should be highlighted that power cannot be separated from discourse: discourse is an instrument, but also an effect, of power (FOUCAULT, 1980). Many studies have focused on strategies as discursive practices (MANTERE and VAARA, 2008; ROULEAU and BALOGUN, 2011; SPEE and JARZABKOWSKI, 2011; VAARA, 2010), especially on the everyday practices of strategy managers (JARZABKOWSKI, 2005; WHITTINGTON, 1996) and on the interpretative nature of strategy elaboration. Therefore, strategy is something that the members of an organization "do", not something that organizations "have" (HENDRY et al., 2010), since a considerable part of the "doing of strategy" occurs through language in the form of text and conversation.
The discourse shows how the relations of power shape the constitution of strategy. There are multiple power relations in any society; these relations of power permeate, characterize, and constitute the social body and cannot be established, consolidated, or implemented without the production, the accumulation, the circulation, and the operation of discourse (FOUCAULT, 1980). Discourses are interrelated collections of texts and practices that systematically form the objects of which they speak (HARDY and THOMAS, 2012).
However, it is important to highlight that studies of strategic practice have identified how strategists use discourse in the construction of strategy (LAINE and VAARA, 2007; ROULEAU, 2005; VAARA et al., 2004), discourses as narratives (VAARA and TIENARI, 2011), discourses as rhetoric (ERKAMA and VAARA, 2010), and discourses as metaphor. Other studies show discursive activities used to justify, legitimize, and naturalize actions (VAARA and TIENARI, 2002). What is noticed is that the ways authors mobilize particular discourses for strategic purposes differ (HARDY et al., 2000). Strategy is a discursive construction; researchers of strategy explore meanings as a practice and how these meanings play an important role in the way strategies are understood and implemented (FENTON and LANGLEY, 2011; ROULEAU and BALOGUN, 2011; VAARA et al., 2010). It is verified, according to Foucault (1980), that every process of building strategies, in addition to exercising power, is permeated by increasingly institutionalized discursive practices.
Power and Institutionalized Ideological Discourses
The social problem chosen for reflection is not the generic, regulatory, and controlling potential of the management plan genre, nor the typical attributions of a managing professor, but rather the attempt to impose a legitimate model of organizational guidelines in a regulatory and disciplining manner. It is verified that some fragments from the texts of the IFES management plan's organizational guidelines display social, political, and ideological effects. The declared mission is to exercise the integral action of teaching, research, and extension activities, aiming at: the universalization of quality public higher education, innovation, the promotion of institutional development and the promotion of sciences, languages, and arts, as well as the formation of citizens with a technical, scientific, and human vision, capable of facing challenges and attending to social demands. Regarding the vision for the future, there is the following formulation: consolidation as an institution of excellence in education, research, and extension, nationally and internationally recognized by the scientific community and society. Finally, we highlight some objectives: to consolidate and improve the model of management in multicampi universities; to expand scientific, intellectual, and cultural production; to improve communication between the university and society, with the support of media vehicles and digital media; to broaden the plan of student assistance, aiming at qualified formation and the reduction of inequalities, retention, and school evasion; to improve integrated management policy and people development; to consolidate the processes of planning and evaluation as decision-making instruments; to improve the administrative, organizational, financial, and economic efficiency of the university through the optimization of resources and of processes of acquisition, distribution, application, and control of goods and services.
According to Leclercq-Vandelannoitte (2011), the organization is a discursive construction under a Foucauldian lens, which exposes the potential of Foucault's theory for comprehending the meaning underlying this argument, in addition to answering its deficiencies. Organizations are dynamic in their constitution, and this evolutionary process is continuous and constantly negotiated through power/knowledge relations. Due to the linguistic turn of the 1980s in the social sciences (ALVESSON and KARREMAN, 2000), discourse analysis became an important element of organizational studies. This analysis is still present in studies and has been used to identify the implications of power/control relations in the discourses of organizational actors. Foucault's work often appears in analyses of organizational discourses and in studies about communication, especially to examine the effects of dominant discourses. This theoretical lens brings together the relations between technology and communication, discourse, power, knowledge, and discipline and, therefore, articulates the dynamics and political processes that combine symbolic and material elements in organizational constitution. The social world is organized by rules in specific forms through discursive practices. This conception of discourse appears in critical language studies concerned with domination within organizations. The political and social concerns of Foucault (1980) also led the author to recognize the relations of power registered in discourse. Foucault withdrew his incisive affirmation that discourse is governed by rules, autonomous, and a self-referential system, and presented genealogy as a complementary approach to explain the control, selection, classification, and distribution of discourse production through relations of power.
Discourses and Subjectivity
Individual identities, with their bonds of subjectivity, are built and rebuilt through discourses in the workplace. Jackson and Carter (1998) use the act of taming to show that work promotes obedience, meekness, and control of members in the organization. Most studies verify the power inserted in organizations through conversation networks based on current discursive practices. Thus, language is a form of social control and power. The discourses that reproduce relations of power are obtained naturally, and these relations can be opaque to the participants. People act in relation to stronger discourses with acceptance, resistance, or commitment (DOOLIN, 2002). In the analysis of discourse, Foucauldian thought helps to reveal the function of objective discursive formations, the impacts of dominant discourses, and the interactions between strong discourses and local discursive practices. Foucault's (1980) conceptual framework can be used to explore discourses related to a phenomenon: relations with structures, discipline, and control; and individual practices, such as reactions and resistances. Discourses are, simultaneously, sites of domination and of resistance, and they are involved in the deconstruction and reconstruction of organizations. Foucault (1980) rejects the "unified vision of the state for a network of institution, practices, procedures and techniques in which the power circulates as strategic relations" (WILLCOCKS, 2004, p. 257). Besides, he advances the concept of power-knowledge, which means that power produces knowledge. Foucault's (1980) thought enriches the ontological perspective because it insists on structures, as well as on the objective and subjective characteristics of social reality. Organizational discourses are combined in a specific space/time physical organization to produce "docile bodies" and, therefore, promote certain forms of control.
According to Ezzamel and Willmott (2008), the foundation of Foucauldian power is seen as an innovative complement to established management analyses. There is no relation of power that is not related to the constitution of a field of knowledge, nor is there any knowledge that does not presuppose and constitute, at the same time, relations of power. In rational analysis, strategy is conceived as the result of impersonal forces, available resources, or the calculation of the rational decision-maker. However, strategic analysis as a practice incorporates little consideration of how people become involved in practices that constitute professionals as subjects (WHITTINGTON, 1996). Foucauldian analysis, in contrast, covers how the elements of strategy are mobilized for the construction of practices and actors as strategies in discourse (KNIGHTS and MORGAN, 1991). Foucauldian analysis does not intend to capture and catalogue the detailed aspects of strategy elaboration, but rather how strategists think, talk, react, interact, feel touched, embellish, and politicize (JARZABKOWSKI, 2005). It is concerned with valuation as a strategy, i.e., with how discursive practice works to build the operationalized professional world.
Strategy as Social Practice
Strategic management can be considered a social practice and, in this sense, strategy involves routines, standards, and rules that both allow and limit the actions of the strategist/subject, just as they limit the possible field of action. Rather than taking an overall approach to strategy as something that companies have or do not have, strategy is seen as an activity that individuals carry out and through which they interact in physical and social contexts (WHITTINGTON, 2004). In developing a discursive version of the strategy-as-practice approach, some researchers underline the influence of discursive practices on the subjectivity and behaviors of organizational members. The seminal Foucauldian analysis considers strategy as a discourse, that is, a set of ideas and practices that condition our ways of relating to and acting on particular phenomena (KNIGHTS and MORGAN, 1991). Firstly, it is necessary to see strategic practices as part of a great power arena and, then, as a body of knowledge and discourse. In the first place, as part of a power field, strategic practices appear as a result of multiple conditions and random events. Secondly, as a field of knowledge, strategic management can be seen as a heterogeneous set of discursive and material practices. These practices are governed by specific functions that partially structure what can be read, said, and done about reality.
Discussions
We were able to verify a legitimating identity in the organizational guidelines, regarding not only the process of coercive isomorphism - when there are explicit orientations from hierarchically superior organs on the elaboration of the IFES management plan - but also legitimacy through the use of a management tool that is openly accepted in the private sector. We can see that, in elaborating declarative sentences, there is an evaluation of statements regarding what is to be considered relevant to the institution by means of an ideological discourse of neoliberal ideals and market behavior. Regarding one of the institutional goals, such as "to consolidate and improve the model of management in multicampi universities; to expand the scientific, intellectual, and cultural production; to improve the communication between the university and the society", we ask: which management model would this be if we were to consider a federal public institution with its social function? Would it be based on the New Public Management, inspired by neoliberalism?
In its mission, the institution is said to be able to "face the challenges and meet the demands of society"; however, what we verify in that very statement is that the institution chooses its priority in the discourse when it places development in the foreground, to the detriment of the sciences, leaving languages and arts in second place; likewise, it seeks to form citizens with a technical and scientific view, relegating humanistic formation to third place. For a public higher education institution to have "the development of languages and arts" in its mission while, at the same time, having no more than 11% of its graduate programs and 25% of its undergraduate programs in the fields of humanities and social sciences, there is a resonance with the governmental exchange program "Ciências sem Fronteiras" ("Sciences without Borders"), which excludes these areas of knowledge on the grounds that they do not meet the strategies of the current government. We still ask ourselves what would be considered strategic for the federal government and for the public university regarding the formation of its recent graduates. How can we define the preferable fields and strategies? Following this logic, are we to meet the needs of society or the current urgencies of the market? In its vision, the institution also demonstrates greater interest in being nationally and internationally recognized by the academic community, and only subsequently by society. It also works with a market-oriented idea of projecting visibility by means of its communication tools. It speaks of broadening scientific production first, and only then of fomenting intellectual and cultural production. The choice of the verb "ampliar" ("expand", "broaden") demonstrates the quantitative concern and the productivist logic, since terms such as "enhancement" or "quality" regarding this production are never mentioned.
What is seen in the management practice adopted by the analyzed IFES is an institutionalized tendency that sustains a usage oriented more by legitimacy than by performance. Such legitimacy disseminates already internalized and socially accepted models, and it would be no different with strategic planning. Public universities tend to seek an isomorphism that is more institutional than competitive, even though their organizational guidelines demonstrate a market-oriented logic rather than an institutional orientation. However, the fact is that the institution aims at guaranteeing the legitimacy of its practices. One of the mechanisms that promote its environmental changes is coercion, for the public sector is always subject to political influences and to constant pressures to meet a given institutionalized or ideological standard. Nevertheless, the coercive process does not come solely from legislation, but also from public bodies; in the case of public universities, from those organs that maintain some type of hierarchic relation or some degree of dependence. The adoption of market-oriented managerial tools has been a constant in public administration; many public administrators are pressured to adopt these tools with the objective of reinventing approaches or practices that are considered inefficient. However, this action is not always considered positive by researchers in the field, since it would be necessary to consider the particularities of Brazilian public organizations, which still carry much bureaucracy and patrimonialism; the implementation of managerial practices would call for a different logic.
The discourse stating that public universities are to obey the current legislation is the same applied to every public body; the need to fit certain operationalization standards and the rules that coordinate the organizational field are elements that ground and evince a coercive isomorphic process and its relations of power. This organizational homogenization is imposed, amongst other reasons, by regulatory governmental bodies. It is possible to determine that there is a search for normative conformity. Therefore, the institutional pressures to regulate public universities according to the standards and models established by the former Ministry of Federal Administration and State Reform (MARE) configure an isomorphic process in the Brazilian public system. Thus, if, on the one hand, public companies need to adapt to the regulation of the sector, induced by institutionalized norms and legislation, we can verify, on the other hand, that, in addition to power, this process is mediated by the mimicry present in Brazilian public administration as a whole.
The possibility of a homogenization process in the Brazilian public service serves as a parameter for copying managerial practices adopted by the private sector. The adoption of mimetic changes is always comprehended and mediated by managers (or those in equivalent positions) who find it necessary to adopt them. Thus, in spite of the copying, the mimicry is never perfect; it is always subject to alterations in the course of time. The short analysis of this framework - the organizational guidelines for the management plan - allowed us to verify that the discourse constructed from the social representation of the university seeks legitimacy first, and only then an official role in social inclusion. What was made evident through the management plan, especially regarding the guidelines, was an environment propitious for acting on and consolidating the identity of the university. The management plan is part of a network of social practices, which integrates a specific social and communicative structure and constitutes the relations of power institutionalized by the university.
It is possible to see that language acts in the management plan as a way of maintaining or consolidating certain sociopolitical representations of the institution. The choice of evaluative constructions, according to Fairclough (2003), is important not only for understanding constructions of action (genres), but also constructions of identity, since the text relates to attitudes and appreciations. There is a required construction with a high degree of obligation, demonstrating a desirable model of institution. The actions constructed and considered ideal for the guidelines, according to a neoliberal model, are desirable for the institution that defines its model through a market-oriented, neoliberal discursive construction. Every statement is produced as an assertion, i.e., there are no questions, exclamations, or imperatives that could allow for a dialogical construction; this results in a monological construction, which is affirmative and definitive and does not permit contestation. In other words, there are declarations that impose how institutionalized actions are to be constructed through typical hegemonic evaluations.
Conclusion and Recommendations
The organizational guidelines serve to justify the exercise of power by those who hold it. They produce identity meanings, which make civil servants relate to the actions to be executed. The institutionalized objectives follow the tendency of the New Public Management of using experts in the area of planning to strategically reach objectives and, consequently, goals and actions, for a public that is not "trained" in the field: the faculty members. The institutional environments of federal public universities are conditioned by a strict environment and aided specifically by determinations prescribed by regulatory organs of the Union. What we see is that the process of institutionalization of the IFES increasingly promotes substantial organizational changes of a highly regulatory character. However, it is worth noticing that this process of institutionalization has implications for a greater homogenization of norms and practices, seeking to reach and fit the established models and standards, as well as aiming at legitimacy within its organizational context. These changes lead to symbolic elements that attest to the meanings of a discourse in changes that surpass logical requirements, for they are alterations on an institutional level. These changes are not meant to meet the demands of the market, but to fit the conformities demanded by the supervisory and regulatory organs that accompany the activities of public administration in the country. It is precisely this regulatory adequacy that has determined the transformations in the practice of creating management plans for Brazilian IFES. These changes are not taking place on their own, but as an integral part of an institutional conjuncture of the organizational field shaped by the Brazilian state reform, implicitly carrying a neoliberal discourse. There are governmental interests that favor the homogenization of administrative practices in Brazilian public bodies, aiming at the standardization of planning and management processes.
These changes suggest a redesign of habits in order to apprehend new routines in the managerial practices of Brazilian IFES. The internalization of the copied practices may influence the policies of the organization involving the existing organizational structures. These practices are no guarantee of efficiency, for their organizational structures are highly institutionalized and influence their daily actions, therefore disregarding some necessary actions. Certain managerial practices are regulated as functioning models by external public bodies; in the case of the IFES, they are under the responsibility of the Instituto Nacional de Pesquisas Educacionais (INEP, "National Institute for Educational Research") and of the Ministry of Education (MEC) itself. This is also a conditioning factor for the institutional environment and for the governmental structures that coordinate their applicability. The government determines the norms and standards for the implementation of the management plan, thereby inducing isomorphism in its organizational changes. This regulatory adequacy seeks to transform the most well-known public institutions. Compliance with these federal norms is a significant source of change for Brazilian IFES.
The organizational guidelines of the management plan of the analyzed university confirm that ideology is not always visible, given its subtlety in the discursive construction. These discursive practices, because they are present in texts of the public administration, evince a greater presence of the ideological and hegemonic role and of the repressive control exercised by hierarchically superior public bodies. Given the still predominantly functionalist character of administrative activities in the public sector, the dialog between theories of Linguistics and Sociology is important for enriching the discussions and debates in the field of public administration. Public administration looks to the private sphere for the bases of its organizational practices. This mimicry is still present in the field, and the search for new managerial practices still crosses the imaginaries of public managers. However, the increasing incorporation of a market-oriented, neoliberal logic, mainly through the adoption of strategic planning, can still be verified. The discussion presented in this paper serves to foster debate on the strategies formulated for the Brazilian public service and on the methodological applicability of critical discourse analysis. This meets the emerging need to systematize and integrate distinct theoretical and methodological approaches in the field of organizational studies when strategy is studied as a social and discursive practice. These investigations remain little explored empirically and still show a need for development and theoretical discussion.
|
2018-12-05T05:46:22.417Z
|
2014-10-18T00:00:00.000
|
{
"year": 2014,
"sha1": "3c23713a10be4b94cf86eca88e8adc809e46fd69",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/par/article/download/37775/22677",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0ec2e4d9d1805b0f299bb3c00575a9d3de82a1fe",
"s2fieldsofstudy": [
"Education",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
}
|