Long-term clinical results of per-oral endoscopic myotomy (POEM) for achalasia: First report of more than 10-year patient experience as assessed with a questionnaire-based survey

Background and study aims: Since per-oral endoscopic myotomy (POEM) was introduced in 2010, it has become accepted as one of the standard treatments for esophageal achalasia worldwide. This study aimed to present long-term clinical results of POEM over 10 years and to evaluate the technique and outcomes at the institution where it was first used in clinical settings.

Patients and methods: Questionnaire-based surveys were sent to patients who underwent POEM at our institution from September 2008 to May 2010. Patient demographics and procedural outcomes were collected, and open-ended questions were posed about the postoperative course, including symptom improvement and recurrence, additional treatments, and post-POEM gastroesophageal reflux disease (GERD) symptoms. Achalasia symptoms and post-POEM GERD symptoms were evaluated with the Eckardt score and the GerdQ system, respectively.

Results: Thirty-six consecutive POEMs were performed in that period, and 10-year follow-up data were obtained from 15 patients (41.7%). Although four cases (26.7%) required additional pneumatic balloon dilatation (PBD), a reduction in Eckardt score was observed in 14 cases (93.3%). The GerdQ score was positive in one patient (6.7%). Proton pump inhibitors (PPIs) were taken by four patients (26.7%), and their symptoms were well controlled.

Conclusions: Clinical results of POEM over 10 years were favorable regardless of various factors. Symptoms improved even in patients who required additional treatments, suggesting that POEM plays a significant role in the treatment of achalasia.

Introduction

Ten years after the first report of per-oral endoscopic myotomy (POEM) for esophageal achalasia by Inoue et al [1], POEM has been established as one of the standard treatments for esophageal achalasia worldwide [2,3]. While evidence-based studies have proven POEM to be a safe and effective technique [4,5], concerns regarding gastroesophageal reflux disease (GERD) after POEM have emerged in recent years [6,7]. The advantages and disadvantages of POEM thus remain controversial; however, through an increasing number of reports and discussions, they have gradually become clearer. In addition, how to perform POEM more effectively and safely has been widely debated, and various evolutions of the indications and techniques have been evaluated [8,9].

Bearing in mind that the available literature on POEM has been limited to short- or mid-term results of up to approximately 5 years [10,11], assessing the long-term clinical results of POEM from the initial period after its introduction into clinical settings was deemed important. In this study, as the earliest institution to develop POEM and the largest referral center for POEM in Japan, we present our long-term clinical results of POEM over 10 years. The initial technique and outcomes are also examined and discussed.

Study design and patients

This was a single-center retrospective cohort study. A database of information from patients who received POEM at Showa University from September 2008 to May 2010 was reviewed using electronic charts.
Clinical surveys of patient symptoms over 10 years were conducted via phone calls or mail questionnaires. There were no exclusion criteria in this follow-up study; however, patients under 18 years of age and those with advanced sigmoid type achalasia were excluded from the indications for the POEM procedure over the study period.

Procedural technique of POEM

The POEM procedure was performed as previously described in our first report of 17 consecutive POEM patients [1]. Of them, seven cases (Cases 1-7) were also included in this study. The procedure was done with the patient under general anesthesia with positive pressure ventilation. A submucosal tunnel was created on the posterior (5 o'clock axis) or anterior (2 o'clock axis) side, from the level of the mid- or lower esophagus downward to about 1 to 3 cm into the proximal stomach, passing the esophagogastric junction (EGJ). In all cases, myotomy was carried out with selective circular muscle cutting. An adequate myotomy on the gastric side was confirmed by the endoscopic appearance, such as the insertion length of the endoscope, a prompt decrease in resistance when entering the stomach side through the EGJ, and recognition of the palisade vessels in the esophagus and the submucosal spindle vessels in the stomach. Confirmation by the double-scope technique [8] or other methods [9] was not performed at that time. All procedures were performed by one expert endoscopist, who was the first in the world to pioneer POEM.

Follow-up measurements

The surveys conducted during this study consisted of open-ended questions regarding the postoperative course, such as symptom improvement and recurrence, additional treatment, and GERD symptoms after POEM. Achalasia symptoms and post-POEM GERD symptoms were evaluated with the Eckardt score and the GerdQ system [12], respectively. Esophagogastroduodenoscopy (EGD), barium swallow, and 24-hour impedance-pH monitoring (MII-pH) were added in Case 11 (the 32nd POEM performed by our team) because the patient reported both an insufficient result from POEM and a positive GerdQ score after POEM.

Definition of outcome measurements

The degree of esophageal dilatation and the type of achalasia were classified by barium esophagogram. According to criteria from the Japan Society of Esophageal Diseases, the degree of esophageal dilatation was classified by the maximum diameter of the esophageal lumen into grade I (<3.5 cm), grade II (3.5-6.0 cm), and grade III (>6.0 cm), and the type of achalasia was classified by the shape of the esophageal lumen as straight (St), sigmoid (Sg), or advanced sigmoid (aSg) [13]. The primary endpoint of this study was symptom improvement, defined as a reduction in Eckardt score at 10 years after POEM. The secondary endpoint was GERD symptoms at 10 years after POEM. Positive GERD symptoms were defined as a post-POEM GerdQ score of 8 or more, according to a previous report [12].

Ethical considerations

This study was approved by the Ethics Committee of Showa University Koto Toyosu Hospital (IRB Registration No 20T7022). Written informed consent was obtained from all participants.

Statistical analysis

Median (minimum-maximum range) was used to report continuous and categorical variables. All analyses were performed using JMP Pro 14.0.0 (SAS Institute Inc, North Carolina, United States).
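To make the outcome definitions above concrete, the following is a minimal sketch (illustrative, not part of the study's analysis code) that applies the stated cut-offs: the dilatation grade by maximum luminal diameter, GerdQ positivity at a score of 8 or more, and symptom improvement as a post-POEM Eckardt score below the pre-POEM score. All function names and example values are hypothetical.

```python
# Sketch of the outcome definitions stated above; names and values are illustrative.

def dilatation_grade(max_diameter_cm: float) -> str:
    """Japan Society of Esophageal Diseases grading by maximum luminal diameter."""
    if max_diameter_cm < 3.5:
        return "I"
    if max_diameter_cm <= 6.0:
        return "II"
    return "III"

def gerdq_positive(gerdq_score: int) -> bool:
    """Positive GERD symptoms: post-POEM GerdQ score of 8 or more."""
    return gerdq_score >= 8

def symptom_improved(pre_eckardt: int, post_eckardt: int) -> bool:
    """Primary endpoint: a reduction in Eckardt score at 10 years after POEM."""
    return post_eckardt < pre_eckardt

# Hypothetical patient values:
print(dilatation_grade(4.2))   # -> "II"
print(gerdq_positive(7))       # -> False
print(symptom_improved(8, 2))  # -> True
```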
Results

Between September 2008 and May 2010, POEM was performed on 36 consecutive patients at our institution. We were able to follow up with 16 of them (44.4%) via phone calls or mail questionnaires. However, one of the 16 patients died of another disease 2 years after POEM. Therefore, 10-year follow-up data from a total of 15 patients (41.7%) were analyzed in this study (Fig. 1).

Patient demographics and procedural data are shown in Table 1. In the initial period, POEM was indicated for all typical achalasia except in patients under 18 years of age or with advanced sigmoid type achalasia. The median age was 51 years (range 18-75), and the median duration of symptoms was 4 years (range 1-32). Seven patients (46.7%) were male, three (20.0%) had grade III dilatation, and four (26.7%) had sigmoid type achalasia. Seven patients (46.7%) had received previous treatment with pneumatic balloon dilatation (PBD). In one case (Case 15), previous PBD had been done as an additional treatment after transthoracic Heller myotomy because of its insufficient effect. The median pre-POEM Eckardt score was 8 (range 3-12).

In the first case (Case 1), myotomy on the posterior side was selected, but in all later cases (Cases 2-15), myotomy was switched to the anterior side (2 o'clock). This was due to the presence of the spine behind the esophagus, which destabilized the appropriate motion of the endoscope tip. In addition, anterior myotomy avoids destroying the angle of His, with a view to preventing post-POEM GERD. Furthermore, because the procedure at that time focused on opening the EGJ and a consensus on the appropriate length of myotomy had not yet been reached, the length of myotomy on the esophageal side varied from 2 to 15 cm, with a minimum of 1 cm on the stomach side. In our experience with the first seven cases (Cases 1, 2, and 3 in this paper), the myotomy length on the esophageal side was relatively short. After subsequent reports that a 7-cm myotomy was generally recommended to achieve complete release of the lower esophageal sphincter (LES) in surgical Heller myotomy, myotomy on the esophageal side was extended to over 8 cm in the later cases [14]. The median procedural time was 120 minutes (range 90-240), and no adverse events occurred in this series.

Symptomatic results in each case are shown in Fig. 2 and Table 2. As shown in Fig. 2, although four patients (Cases 2, 3, 9, and 15) required additional treatments (marked with an asterisk in Fig. 2), symptom improvement was obtained in 14 of 15 patients (93.3%). Table 2 shows the details of the clinical course in each case. Cases 4, 11, and 13 had a post-POEM Eckardt score of 4, which is generally considered inadequate relief of achalasia symptoms. In Case 13, the patient had a temporary feeling of symptom recurrence 1 month after POEM, but in Cases 4 and 13, the patients were satisfied with the results at present without further treatment. In Case 11, the patient was satisfied and achalasia symptom improvement was noted, but an uncomfortable feeling of GERD symptoms was reported.

Additional PBD was required in Cases 2, 3, 9, and 15. Symptoms recurred 1 to 5 years after POEM. Additional PBD was effective in Cases 2, 3, and 9, with relief of symptoms to some extent. Meanwhile, symptom recurrence at 5 years after POEM was noted in Case 15; however, because the symptoms were mild, the patient did not seek any treatment. The patient's symptoms eventually worsened, prompting the patient to finally seek care, and an additional PBD was performed 9 years after POEM. Despite the additional PBD, no symptom improvement was noted, yet the patient was able to tolerate the symptoms afterwards.
Post-POEM GERD data are shown in Table 2. Four of 15 patients (26.7%) were taking proton pump inhibitors (PPIs). Of them, two patients (Cases 2 and 4) were taking PPIs at their own request to prevent GERD symptoms despite being asymptomatic. The patient in Case 8 took PPIs only when symptoms occurred, whereas the patient in Case 12 took PPIs daily. In both patients, the symptoms were well controlled. Meanwhile, one patient (Case 11) (6.7%) had a positive GerdQ score without any PPI intake. Repeat evaluation was therefore performed, including EGD, barium swallow, and MII-pH. EGD revealed inadequate opening of the EGJ and a large diverticulum just above the EGJ. Barium swallow also revealed retention of barium in the esophagus with a large diverticulum above the EGJ (Fig. 3). MII-pH showed that the percentage of time with pH below 4 was 0.0% and the DeMeester composite score was 0.9, indicating no GERD in this patient.

Discussion

In the present paper, the experience of the earliest institution to develop POEM and the largest referral center for the technique in Japan, including 10-year clinical results together with post-POEM GERD data, is presented. POEM has been reported to be a highly effective, minimally invasive treatment for achalasia and related esophageal motility disorders with short-term follow-up [4,15]; however, to our knowledge, this is the longest follow-up report and the first report of more than 10 years of clinical results with POEM.

Since the introduction of POEM into clinical practice, based on the evidence of an experimental report on the safety of endoscopic myotomy in a porcine model by Pasricha et al [16], indications were initially restricted to typical achalasia; hence, POEM was safely completed in all cases regardless of prior treatment. Based on primary reports on the efficacy and safety associated with short-term results of POEM, indications for the procedure have been expanded [15,17-19], and to date, modifications and discussions are still ongoing among endoscopists all over the world to achieve safer and better results [20,21].

The majority of patients in this study obtained symptom improvement 10 years after POEM, suggesting that the clinical efficacy of POEM is favorable. On the other hand, four patients (Cases 2, 3, 9, and 15) required additional PBD treatment during their clinical courses. In previous literature, the patient demographic factors associated with poor results of POEM were reported to be male sex, high pre-POEM Eckardt score, duration of achalasia symptoms over 10 years, prior treatments, a dilated esophagus, sigmoid type achalasia, and type III Chicago classification [22-25]. However, in the present study, tendencies toward these factors could not be elucidated because of the small sample size.

In addition, procedural factors such as myotomy length have also been reported to affect POEM results. According to the clinical practice guidelines for POEM in Japan [2], a 1- to 2-cm myotomy into the gastric side is recommended to secure a complete LES incision. In our series, a shorter 1-cm myotomy was made in the stomach in four cases (Cases 2, 3, 5, and 13). In two of these patients (Cases 2 and 3), additional PBD was required, and one patient (Case 13) had a feeling of symptom recurrence 1 month after POEM and a post-POEM Eckardt score of 4.
Our study could not draw concrete conclusions about the importance of the length of myotomy on the gastric side, but taking previous reports into account, we cannot totally rule out that these results were influenced by the shorter gastric myotomy length. Another interesting result was that, in most of the cases with symptom recurrence, the recurrence occurred a short time after POEM, and additional PBD resulted in improvement of symptoms. This suggests that even if the myotomy length is insufficient, the previous myotomy site may become the "starting point" for the dilatation, since it has already been cut, hence enhancing the effect of the additional PBD. To avoid compromising the effects of POEM due to insufficient myotomy length, a double-scope method has been used routinely in recent years at our center to intraoperatively confirm the adequacy of the myotomy length on the gastric side [8].

In Case 15, POEM was done as a "second myotomy" following the insufficient effect of the first transthoracic surgical Heller myotomy and PBD. Symptom improvement to some extent was noted. Although a slight exacerbation occurred after 5 years, symptoms were still tolerable for the patient. Nine years after POEM, additional PBD was carried out to improve the patient's symptoms; however, it did not provide relief. Since this case had several risk factors, such as sigmoid type achalasia, a long duration of symptoms, and prior treatment, complete relief of symptoms might not be obtainable only by making an opening in the LES. Still, POEM was considered to have a role in alleviating this patient's symptoms to some extent.

Despite the high efficacy and safety rates of POEM, the onset of GERD after POEM has become a concern and is now an important issue for discussion. Unlike Heller myotomy with antireflux surgery, POEM potentially preserves the adjacent structures surrounding the distal esophagus, which act as major natural barriers to reflux; nevertheless, post-POEM GERD has been reported at rates ranging from 14% to 57% [5,7,26,27]. In our series, four patients (Cases 2, 4, 8, and 12) were prescribed PPIs. Of these, two patients (Cases 2 and 4) took PPIs prophylactically despite being asymptomatic, and in the other two patients (Cases 8 and 12), symptoms were well controlled with PPIs. Based on available reports, factors that may increase the risk of post-POEM GERD include a myotomy length on the gastric side > 2.5 cm, female sex, low pre-POEM LES pressure, full-thickness myotomy, and posterior myotomy [6,27-30]. However, the present study did not reveal a tendency toward these factors affecting GERD symptoms.

Only Case 11 had a positive GerdQ score. In this case, although the length of myotomy in the stomach was 3 cm, the therapeutic effect was not sufficient, and the patient still had a post-POEM Eckardt score of 4. Considering the emergence of a diverticulum just above the EGJ after POEM and the poor outflow of barium on the esophagogram, the positive GerdQ score might not represent acid reflux from the stomach caused by post-POEM GERD but instead reflux of esophageal residue due to an insufficient therapeutic effect of POEM. This consideration was also supported by the results of MII-pH. We believe that additional treatment, such as PBD or a second POEM, is required in such cases after detailed examinations, including high-resolution manometry (HRM), and the indications must be carefully considered.

Fig. 3 Findings from EGD and barium swallow in Case 11. a EGD showed inadequate opening of the EGJ, mild erosive GERD, and a large diverticulum just above the EGJ. b Barium swallow also revealed retention of barium in the esophagus with a large diverticulum above the EGJ.

Certain limitations of this study must be acknowledged. Aside from its single-center retrospective nature, the sample size was relatively small. Contacting patients more than 10 years after treatment posed challenges, such as ascertaining their current whereabouts, which is why we were able to reach only 44.4% of them. Moreover, there is a lack of objective analysis. Our hospital was the only facility performing POEM 10 years ago, and most of the patients at that time came from far away to receive POEM. Because most of them were satisfied with their past results with POEM, we unfortunately could not obtain their consent to travel all the way to our facility just for follow-up examinations. Furthermore, the COVID-19 pandemic made it difficult to perform follow-up and provide the usual examinations. Finally, POEM procedures in this series had not yet been standardized 10 years ago, and methods to stabilize the therapeutic effect and safety were not yet established. This consequently introduced various biases into the clinical outcomes considered in this study. Therefore, a multicenter study with objective data is needed in the near future.

These are the longest-term clinical results with POEM to date. The clinical results of POEM were satisfactory regardless of various patient and procedural factors. Even in cases for which additional treatment was required, symptom scores improved to below their pre-POEM baselines, suggesting that the technique plays a significant role in the treatment of achalasia.

Conclusion

In conclusion, this study showed satisfactory long-term clinical results with POEM over 10 years. We believe this study provides important information and lessons that can be applied to future POEM treatment.
COMPARATIVE ANALYSIS OF LOCAL FRUIT SELLING BUSINESSES IN SAENAM VILLAGE AND SALLU VILLAGE, NORTH CENTRAL TIMOR

East Nusa Tenggara Province has land that tends to be dry, with several superior local fruit commodities such as oranges, mangoes, avocados, and jackfruit. One of the local fruit-producing centers in East Nusa Tenggara is North Central Timor (TTU) Regency. Most of the fruit needs in TTU Regency are supplied from West Miomaffo District, especially Saenam Village. Saenam Village produces 222 kg of local fruit while Sallu Village produces 345 kg, yet the sales volume of Saenam Village is higher than that of Sallu Village, with a difference of Rp 1,000,000. Given these conditions, this study focuses on the comparison of local fruit farming businesses in Saenam Village and Sallu Village. The purpose of this study is to compare and analyze the sales volume, relative market share, and business position of local fruit commodities by applying BCG analysis. The results showed that the fruit business in Saenam Village had a total sales volume of IDR 2,151,313 in 2018, IDR 1,915,296 in 2019, and IDR 2,175,685 in 2020. Meanwhile, Sallu Village had total sales of Rp 1,009,821 in 2018, Rp 899,584 in 2019, and Rp 887,281 in 2020. The market growth rate and relative market share were calculated based on total sales volume. The BCG analysis shows that Saenam Village is in the star quadrant, with a market growth rate of 1.31% and a relative market share of 2.57. Sallu Village has a market growth rate of -15.25% and a relative market share of 0.4, which places the fruit products of Sallu Village in the dog quadrant. The strategy that farmers in Saenam Village need to carry out is to expand fruit marketing. The strategy that farmers in Sallu Village can apply is to replace fruit gardens with vegetable gardens in order to increase income and use land more optimally.

Introduction

Local fruit is a unique horticultural commodity that can only be produced in a certain area. The province of East Nusa Tenggara (NTT) has land that tends to be dry, with several superior commodities such as oranges, mangoes, avocados, and jackfruit. The uniqueness of NTT citrus fruit is that it has few seeds and a distinctive fragrance, and it has been certified by the Ministry of Agriculture under number SK 124/Kpts/SR.120/D.2.7/12/2017 (Balitjestro (Balai Penelitian Tanaman Jeruk dan Buah Subtropika), 2019). Meanwhile, the mango, avocado, and jackfruit grown in NTT have a denser fruit texture and a stronger taste and aroma because they grow on dry land (Alim et al., 2022). One of the local fruit-producing centers in NTT is North Central Timor (TTU) Regency.

From Table 1 it can be concluded that local fruit production in North Central Timor Regency has fluctuated over the last 4 years: production was 8,096.9 tons in 2017, fell to 2,556.5 tons in 2018, and then rose again to 4,894.4 tons in 2019 and 8,775.2 tons in 2020. BPS stated that local fruit production in West Miomaffo District in 2020 was 470 tons, 400 tons, 430 tons, and 224 tons, respectively; these data show a decline in production from year to year. The main factors behind the decline in local fruit production in West Miomaffo District are pests and diseases (Seran & Kune, 2016). The same point was made by (Bay & Pakaenoni, 2021), who identified fruit fly attacks as one of the causes of the low quality of fruit in the local market.
Fruit fly attacks affect both the quality and the quantity of fruit. Quantity losses arise from the reduced economic value of the fruit, while quality losses occur when the fruit becomes rotten and develops black spots, making it unsuitable for consumption. On average, fruit farmers in West Miomaffo Sub-district pay little attention to the health of fruit trees, so the trees are easily attacked by pests and diseases, which reduces fruit production. Farmers' lack of attention to fruit trees is caused by the low price of fruit, which ranges from Rp 5,000 to Rp 18,000 per kg. In addition, a lack of marketing strategy is suspected to keep the value of local fruit sales volume low. Therefore, an alternative marketing strategy is needed to increase the volume of fruit sales, which will in turn affect the income of local fruit farmers. According to (Wiedjarnarko et al., 2015), cooperating with several parties can increase income. (Pratama & Nadapdap, 2019) offer one strategy that can be used to increase sales volume, namely a market expansion strategy built on market penetration, market development, and product development.

Based on preliminary observations in West Miomaffo District, Saenam Village recorded the largest local fruit sales value out of the 13 other villages. In 2018 Saenam Village sold local fruit worth IDR 2,151,314 with a total fruit production of 222 kg, while Sallu Village, as a competitor, sold local fruit worth IDR 1,009,822 with a total fruit production of 345 kg (Bai et al., 2021). This condition is interesting to study further because Saenam Village obtained a higher sales value than Sallu Village even though its fruit production was lower. The provisional assumption is that Saenam Village can market its fruit in several markets, whereas Sallu Village cannot. This study focuses on comparing the local fruit farming businesses, namely oranges, mangoes, avocados, and jackfruit, in Saenam and Sallu villages to determine sales volume, relative market share, and the business position of the fruit commodities, and to formulate good alternative marketing strategies that local fruit farmers can use to develop sales volume and the local fruit market.

Research Location and Time

This research was conducted in West Miomaffo District, specifically in Saenam Village and Sallu Village. This location was chosen because the two villages are local fruit producers in West Miomaffo District. Field data collection was carried out from April to June 2021, covering data from the last 3 years, namely 2018 to 2020.

Population and Sample

The research population was all fruit farmers in Saenam Village and Sallu Village. The total study population was 162 farmers in Saenam Village and 94 farmers in Sallu Village, consisting of representatives of each family in the local fruit farmer groups (FFG) that produce avocado, mango, orange, and jackfruit. The sample size was determined with the Slovin formula (Patarianto, 2015):

n = N / (1 + N·d²)

where n is the number of samples, N is the total population, and d is the margin of error (set at 10%, with a 90% confidence level). The sample was 115 fruit farmers in Saenam Village and 76 in Sallu Village.
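As a quick numerical check, the following is a minimal sketch (illustrative, not from the paper) of the Slovin computation. Note that with the populations stated above, the reported samples of 115 and 76 respondents correspond to a margin of error d of 0.05; with d = 0.10 the formula would give about 62 and 48 instead.

```python
def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * d^2), rounded to a whole respondent."""
    return round(population / (1 + population * margin_of_error ** 2))

# Populations stated in the text.
saenam_pop, sallu_pop = 162, 94

for d in (0.05, 0.10):
    print(d, slovin(saenam_pop, d), slovin(sallu_pop, d))
# d = 0.05 reproduces the reported samples: 115 (Saenam) and 76 (Sallu).
# d = 0.10 would give about 62 and 48.
```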
The number of samples was allocated per Farmer's Group Family (FGF) by proportional random sampling, with the following criteria: (1) average production of 2 to 4 tons/ha; (2) beginner and advanced farmer group classes; and (3) experts and stakeholders involved in local fruit marketing. There were 6 experts and stakeholders, where experts were marketing specialists or business actors who had been operating for at least 3 years, and stakeholders were village officials or policymakers. The details for each village were 1 village head, 1 extension worker, and 4 local fruit farmers.

Data collection

The data collected consisted of primary data and secondary data. Primary data were collected directly through interviews using a questionnaire as the research instrument. The questionnaire was structured and had been tested for validity and reliability.

Knowing the market growth rate and market share also reveals the position of the marketing strategy on the BCG matrix. The BCG matrix graphically describes the differences between divisions in terms of relative market share position and market growth rate (Maristia et al., 2020). It is defined as a method of evaluating a business relative to the growth rate of its market and the organization's share of that market (Sari & Sultan, 2019). The Boston Consulting Group matrix has four quadrant positions: Stars, Cash Cows, Question Marks, and Dogs. The matrix can also be used to place strategic products that can generate profits for the company. The Boston Consulting Group (BCG) analysis method is used in preparing a strategic business unit plan by classifying the company's profit potential (Subhan & Peratiwi, 2017).

The concept of this research is in line with the research conducted by (Joubert et al., 2011), entitled "The Cash Cows, Dogs, Stars and Problem Children of the South African Agricultural Sector". That research analyzed the growth of various agricultural sub-sectors in South Africa by comparing one sub-sector with another. Its results show that the average growth of the agricultural sub-sector in South Africa over 10 years was 5.64%, with one sub-sector in the cash cow quadrant, eight sub-sectors in the dog quadrant, fourteen sub-sectors in the star quadrant, and twenty-one sub-sectors in the question mark quadrant.

RESULTS AND DISCUSSION

This study focuses on the relative market share and sales volume of fruit in Saenam Village and Sallu Village. The two villages were compared based on the total sales volume of all commodities produced, and each fruit was also compared individually to obtain results that describe the state of local fruit farming.

Fruit Production Volume

On average, the volume of local fruit production in Sallu Village is higher than that of Saenam Village. The average fruit production data for each village are presented in Table 2. In addition to citrus fruits, avocado is the second leading commodity in Saenam Village; as can be seen in Table 2, the avocado sales volume increases from year to year because many farmers cultivate avocados, so avocado production increases and directly affects the volume of avocado sales.
As for mango and jackfruit, farmers in Saenam Village do not cultivate them specifically; these two plants have been growing in their yards or fields for a long time, so the sales volumes of mango and jackfruit tend to be smaller than those of oranges and avocados.

The largest local fruit sales volume in Sallu Village was obtained in 2018, at Rp 1,009,821, and sales continued to decline in 2019 and 2020, to Rp 889,584 and Rp 887,281, because of the limited sales made by local fruit farmers in Sallu Village. Fruit farmers in Sallu Village only sell local fruit at the Eban Market, for fear of losses if the fruit does not sell and to avoid the transportation costs of selling at Pasar Baru Kefa and Pasar Rakyat Atambua. In addition, fruit farmers in Sallu Village feel unable to compete with fruit farmers from other villages, so the volume of local fruit sales in Sallu Village is very low. The fruit farmers of Saenam Village, by contrast, do sell local fruit outside the Eban Market, namely at Pasar Baru Kefa and Pasar Rakyat Atambua. Despite having lower production than Sallu Village, local fruit farmers in Saenam Village have a higher average sales volume, because the average price per kg of local fruit in Eban Market is only Rp 1,300 for mangoes, Rp 1,600 for avocados, Rp 12,500 for oranges, and Rp 3,700 for jackfruit. These prices are much lower than the average fruit prices in Pasar Baru Kefa and Pasar Rakyat Atambua, where the average price per kg is Rp 4,000 for mangoes, Rp 9,000 for avocados, Rp 18,500 for oranges, and Rp 4,000 for jackfruit.

Research by (Ismini, 2010) and (Winardi, 2014) found that the number of marketing channels affects the income earned, while research by (Kasdi, 2016; Liviu & Adina Claudia, 2011; Setiawati et al., 2020) argues that the law of supply and demand applies in the market: if the goods produced cannot be sold and few people need them, income will not increase. These findings match the conditions experienced by Saenam Village and Sallu Village: Saenam Village produces less fruit than Sallu Village but obtains a larger sales volume because it can sell fruit in three different markets, while Sallu Village is only able to sell in one market and therefore has a low sales volume.

BCG analysis of Saenam Village and Sallu Village

The BCG matrix for Saenam Village and Sallu Village has a median market growth rate (MGR) of 1% and a median relative market share rate (RMSR) of 1. The upper, middle, and lower limit values are determined from the highest and lowest values obtained from the MGR (TPP) and RMSR (TPR) calculations (Dewi et al., 2016; Frida et al., 2018; Laosutsan et al., 2017; Rahayuningsih et al., 2013). Based on the results of the BCG analysis in Figure 2 (BCG matrix of fruit commodities in Saenam Village and Sallu Village), Saenam Village is in the star quadrant, with a market growth rate of 1.31% and a relative market share of 2.57, which means that overall the fruit products of Saenam Village have high market growth and market share. Sallu Village has a market growth rate of -15.25% and a relative market share of 0.4, which places the fruit products of Sallu Village in the dog quadrant, meaning that its market growth rate and market share are relatively low.
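To make the quadrant assignment explicit, the following is a minimal sketch (illustrative, not the authors' code) that applies the thresholds stated above, a median MGR of 1% and a median RMSR of 1, to the reported values; the MGR and RMSR figures themselves are taken as given from the paper, and treating values at or above the median as "high" is an assumption of this sketch.

```python
def bcg_quadrant(mgr_pct: float, rmsr: float,
                 mgr_median: float = 1.0, rmsr_median: float = 1.0) -> str:
    """Classify a business unit on the BCG matrix.

    High growth + high share -> Star; low growth + high share -> Cash Cow;
    high growth + low share -> Question Mark; low growth + low share -> Dog.
    """
    high_growth = mgr_pct >= mgr_median
    high_share = rmsr >= rmsr_median
    if high_growth and high_share:
        return "Star"
    if high_share:
        return "Cash Cow"
    if high_growth:
        return "Question Mark"
    return "Dog"

# Values reported in the paper.
print(bcg_quadrant(1.31, 2.57))    # Saenam Village -> "Star"
print(bcg_quadrant(-15.25, 0.4))   # Sallu Village  -> "Dog"
print(bcg_quadrant(-16.73, 1.72))  # Saenam jackfruit -> "Cash Cow"
```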
Based on Figure 2, the strategy Saenam Village can pursue to increase sales volume is to expand the market to Kupang Regency and Kupang City. In addition, fruit farmers are expected to make processed products from these fruits to increase their selling value and extend the shelf life of local fruit. The suggested strategy for Sallu Village is to replace fruit orchards with vegetable gardens so that farmers' income increases and land use is more optimal; another strategy is to work together with Saenam Village to jointly market fruit outside the Eban Market so that the market share of Sallu Village's fruit commodities becomes increasingly widespread. These two strategies are in line with the results of research conducted by (Wardani et al., 2021), which states that in agribusiness, production quality must be considered in short-term development strategies for leading plantation commodities, sustainably maintaining potential commodities and prospects through increasing production, productivity, and quality; strengthening partnerships; expanding markets; utilizing global market opportunities; and establishing production center areas.

BCG Analysis of Each Fruit Commodity

After determining the market growth rate and relative market share of total local fruit sales for each village, this study continues the BCG analysis for the sales of each local fruit commodity. The aim is to find out which local fruit commodity is the most superior in terms of market growth rate and relative market share, and then to formulate a sales strategy. The BCG matrix of each fruit commodity can be seen in Figure 3.

The BCG matrix shows that three local fruit commodities in Saenam Village are in the star quadrant, namely avocado, mango, and orange, while jackfruit is in the cash cow quadrant. The sales volume of mangoes is not large compared with avocados and oranges, but mango's market growth is much higher than jackfruit's: mango has an MGR (TPP) of 7.08% and an RMSR (TPR) of 1.45, whereas jackfruit has an MGR of -16.73% and an RMSR of 1.72. Mangoes in Saenam Village are recommended for market penetration and market development to generate profits, provided fruit farmers are serious about cultivating them and rejuvenating mango trees that have passed their productive period. This strategy is warranted because both the market growth rate and the relative market share are positive. Jackfruit, by contrast, has a negative market growth rate but a positive relative market share value; a suitable strategy for jackfruit is product development, processing jackfruit to increase its market growth. The strategy proposed by (Wardani & Solikah, 2019), which consists of market penetration, market development, and product development, fits Saenam Village. Market penetration focuses on selling existing products to increase the market, market development aims to sell products in new markets, and product development serves to introduce new products to existing markets.

All local fruit commodities in Sallu Village are in the dog quadrant of the BCG matrix, which means they have a low market growth rate and a relatively low market share. This shows that the local fruit of Sallu Village does not attract many buyers and is less competitive, because the fruit farmers in Sallu Village only market in one place, namely the Eban Market.
The marketing reach of local fruit farmers in Sallu Village is very limited. If the locally produced fruit does not sell well, it is used as animal feed instead. The strategy that fruit farmers in Sallu Village need to carry out is to expand the market and to process unsold fruits into value-added products such as snacks or drinks. However, if fruit cultivation is only considered a side business, it is better to replace the fruit plants with vegetable crops. Vegetables are the main commodity cultivated by the majority of farmers in Sallu Village, so this would optimize land use and increase farmers' income. Research conducted by (Wardani et al., 2021) produced a strategy for selecting commodities in terms of product value and in terms of market development objectives. This strategy is relevant to the condition of fruit commodities in Sallu Village, which needs to be reviewed to increase farmers' income and make land use more efficient.

CONCLUSION

Saenam Village has a high market growth rate and relative market share, as evidenced by its position in the star quadrant of the BCG matrix. Sallu Village is in the dog quadrant, which indicates that its market growth rate and market share are relatively low. The strategy Saenam Village can pursue is to increase sales volume by expanding the market. In addition, local fruit farmers are expected to make processed products from the fruits to increase their selling value and extend the shelf life of the fruit. The strategy for Sallu Village is to replace fruit gardens with vegetable gardens, which can increase farmers' income and optimize land use, or to work together with Saenam Village to jointly market local fruit outside Eban Market so that the market share of Sallu Village's local fruit grows.

Based on the results of the analysis, three local fruits in Saenam Village are in the star quadrant, namely avocado, mango, and orange, while jackfruit is in the cash cow quadrant. Mangoes in Saenam Village can generate profits if fruit farmers are serious about cultivating them and rejuvenating mango trees that have passed their productive period, because both the market growth rate and the relative market share are positive. Jackfruit has a negative market growth rate but a positive relative market share value, so the appropriate strategy for jackfruit is product processing to increase its market growth. All local fruit commodities in Sallu Village are in the dog quadrant of the BCG matrix, which means they have a low market growth rate and relative market share. The strategy that local fruit farmers in Sallu Village need to carry out is to expand the market and to process unsold fruits into value-added products such as snacks or drinks.
Phosphorylation of endothelial NOS contributes to simvastatin protection against myocardial no-reflow and infarction in reperfused swine hearts: partially via the PKA signaling pathway

AIM: The cholesterol-lowering drugs statins can enhance the activity of endothelial nitric oxide synthase (eNOS) and protect the myocardium during ischemia and reperfusion. The aim of this study was to examine whether protein kinase A (PKA) is involved in statin-mediated eNOS phosphorylation and cardioprotection.

METHODS: Six-month-old Chinese minipigs (20-30 kg) underwent a 1.5-h occlusion and 3-h reperfusion of the left anterior descending coronary artery (LAD). In the sham group, the LAD was encircled by a suture but not occluded. Hemodynamics and cardiac function were monitored using a polygraph. Plasma activity of creatine kinase and the tissue activities of PKA and NOS were measured spectrophotometrically. p-CREB, eNOS, and p-eNOS levels were detected using Western blotting. The sizes of the area at risk, the area of no-reflow, and the area of necrosis were measured morphologically.

RESULTS: Pretreatment of the animals with simvastatin (SIM, 2 mg/kg, po) before reperfusion significantly decreased the plasma activity of creatine kinase, an index of myocardial necrosis, and reduced the no-reflow size (from 50.4%±2.4% to 36.1%±2.1%, P<0.01) and the infarct size (from 79.0%±2.7% to 64.1%±4.5%, P<0.01). SIM significantly increased the activities of PKA and constitutive NOS and increased Ser133 p-CREB, Ser1179 p-eNOS, and Ser635 p-eNOS in ischemic myocardium. Intravenous infusion of the PKA inhibitor H-89 (1 μg·kg⁻¹·min⁻¹) partially abrogated the SIM-induced cardioprotection and eNOS phosphorylation. In contrast, intravenous infusion of the eNOS inhibitor L-NNA (10 mg/kg) completely abrogated the SIM-induced cardioprotection and eNOS phosphorylation during ischemia and reperfusion but did not affect the activity of PKA.

CONCLUSION: Pretreatment with a single dose of SIM 2.5 h before reperfusion attenuates myocardial no-reflow and infarction by increasing eNOS phosphorylation at Ser1179 and Ser635, partially via the PKA signaling pathway.

Introduction

Timely reopening of the occluded coronary artery after acute myocardial infarction rescues the ischemic myocardium and reduces mortality. However, impaired regional perfusion and microvascular or endothelial dysfunction within the previously ischemic myocardium after revascularization therapy also produce the no-reflow phenomenon, which may lead to increased infarct size, contractile dysfunction, a higher incidence of complications, and poor clinical outcome [1-4]. Prevention and treatment of the no-reflow phenomenon remain a worldwide challenge in the reperfusion era. According to the European Society of Cardiology (ESC) guidelines, the optimal revascularization time is within 3 h after the onset of acute myocardial infarction [11]. In almost all of the existing studies, statins were delivered several days before myocardial reperfusion. It is not clinically practical to pretreat patients undergoing acute post-infarct percutaneous coronary intervention with high-dose statins several days before the procedure.
In addition, recent studies have reported that the cyclic adenosine monophosphate (cAMP)/protein kinase A (PKA) pathway plays a role in cardioprotection during ischemic preconditioning and in the cardioprotection provided by Tongxinluo, a traditional Chinese medicine [12-14], but it is unclear whether PKA is associated with the cardioprotective effects of statins. Therefore, in this study, we tested the hypothesis that acute pretreatment with single-dose statins before reperfusion exerts a cardioprotective effect against myocardial no-reflow and infarction by enhancing eNOS activity in a PKA-dependent manner.

Materials and methods

Animal experimental protocols

The animal experimental protocols and procedures were approved by the Care of Experimental Animals Committee of Fu Wai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, China. All animals received humane care in compliance with the Guide for the Care and Use of Laboratory Animals published by the National Institutes of Health, USA.

As described previously [4,14], 6-month-old Chinese minipigs weighing 20 to 30 kg were anesthetized with a mixture of 700 mg ketamine hydrochloride and 30 mg diazepam administered intramuscularly and were continuously infused with this mixture (2 mg/kg per hour) intravenously to maintain anesthesia. Minipigs were assigned to 1 of 7 groups (n=7-8 in each group): control, simvastatin (SIM), SIM coadministered with H-89 (SIM+H-89), H-89, SIM coadministered with Nω-nitro-L-arginine (L-NNA) (SIM+L-NNA), L-NNA, and sham. All pigs except the sham group underwent a 1.5-h occlusion and 3-h reperfusion of the left anterior descending coronary artery (LAD). The LAD of the sham animals was encircled by a suture but not occluded. The control pigs underwent no intervention either before or after reperfusion.

SIM (2 mg/kg, Merck & Co, USA) was gavaged 2.5 h before myocardial reperfusion; the SIM dosage was determined based on the loading dose (80 mg) given before acute percutaneous coronary intervention and was converted to the pig dose according to body surface area [15]. H-89 (1.0 μg·kg⁻¹·min⁻¹, Alexis, USA), a PKA inhibitor, was infused intravenously and continuously throughout the procedure to inhibit PKA activity [13]. L-NNA (10 mg/kg, Aldrich, USA), an arginine derivative that nonselectively and competitively inhibits NOS, was infused intravenously and maintained until the end of reperfusion to inhibit eNOS activity [16]. Although L-NNA inhibits both constitutive NOS (cNOS) and inducible NOS (iNOS), its inhibitory effect on cNOS is 300-fold greater than that on iNOS, and this effect is rapidly reversible. Furthermore, the predominantly expressed isoform of cNOS in the myocardium is eNOS [17]. Therefore, we used L-NNA as a selective inhibitor of eNOS in the present study.

Hemodynamic and cardiac function studies

Heart rate (HR) was monitored by surface limb-lead electrocardiography. A 6F pigtail catheter was inserted into the right femoral artery through an arterial sheath for real-time measurement of mean arterial pressure (MAP), left ventricular end-diastolic pressure (LVEDP), and the maximum and minimum rates of left ventricular pressure development (dp/dtmax and dp/dtmin, respectively). Hemodynamic data were recorded on a polygraph (Biopac Systems, MP-150, USA) at baseline, after 1.5 h of ischemia, and after 3 h of reperfusion, and analyzed with Acqknowledge v3.8.1 software.
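As context for the body-surface-area dose conversion described in the protocols above, one common approach (not necessarily the exact method of reference [15]) is Km-factor scaling from the FDA guidance on dose estimation. The sketch below uses approximate Km values (about 6 for rats and roughly 27-35 for minipigs, depending on body size), so the numbers are indicative only.

```latex
% Body-surface-area dose scaling with Km factors (approximate values, illustration only):
% D_A = D_B * (K_{m,B} / K_{m,A})
\[
D_{\text{rat}} \;=\; D_{\text{pig}} \times \frac{K_{m,\text{pig}}}{K_{m,\text{rat}}}
\;\approx\; 2~\text{mg/kg} \times \frac{27\text{--}35}{6}
\;\approx\; 9\text{--}12~\text{mg/kg}
\]
```

This range is consistent with the approximate 10 mg/kg rat-equivalent dose cited later in the Discussion.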
Analysis of myocardial area at risk (AAR), area of no-reflow (ANR), and area of necrosis (AN)

Myocardial AAR, ANR, and AN were measured according to previously described methods [4,14]. In brief, the area of impaired perfusion was delineated by injecting a bolus of 4% fluorescent thioflavin S (1 mL/kg, Sigma, USA) into the left atrium. Approximately 30 s later, the LAD was religated at the original site, and the AAR was outlined by perfusing a bolus of 2% Evans blue dye (1 mL/kg, Sigma, USA) into the left atrium. The heart was then excised, and the blood was washed out. In ice-cold saline, the extra-left ventricular tissue was removed, and the left ventricle was cut transversely into six or seven slices parallel to the atrioventricular groove. The AAR, the area unstained by Evans blue, was traced and photographed in visible light. The ANR, the area not perfused by thioflavin S, was photographed under ultraviolet light (365 nm). The area between the AAR and the ANR was the area of reflow (AR). Tissue samples were then collected from the ANR, the AR, and the nonischemic area (NA) on the reverse side of the traced slices and immediately placed in liquid nitrogen for subsequent examination. Finally, the tissue slices were weighed and incubated in 1% triphenyltetrazolium chloride (TTC, pH 7.4) at 37°C for 15 min to identify the AN. The AAR was expressed as a percentage of the left ventricular mass (AAR/LV), and the ANR and AN were expressed as percentages of the AAR (ANR/AAR and AN/AAR, respectively), with the mass of each area determined gravimetrically.

Determination of plasma creatine kinase (CK) activity

Plasma CK activity, an index of myocardial necrosis, was measured spectrophotometrically at baseline, after 1.5 h of ischemia, and after 3 h of reperfusion according to the manufacturer's instructions (Nanjing JianCheng Bioengineering Institute, China).

Tissue PKA activity assay

PKA activity was measured as previously described using a nonradioactive PKA assay kit (Promega, USA) [14,18]. Tissue samples from the NA, AR, and ANR were homogenized on ice in PKA extraction buffer containing 25 mmol/L Tris-HCl (pH 7.4), 0.5 mmol/L EDTA, 0.5 mmol/L EGTA, 10 mmol/L β-mercaptoethanol, 1 μg/mL leupeptin, and 1 μg/mL aprotinin. The homogenate was centrifuged at 20,000×g for 5 min at 4°C, and the supernatant was assayed for PKA activity according to the manufacturer's instructions. The reaction products were separated on a 0.8% agarose gel at 100 V for 15 min. The phosphorylated species migrated toward the positive electrode, whereas the nonphosphorylated substrates migrated toward the negative electrode. The fluorescence intensity of the phosphorylated peptides, which reflects PKA activity, was quantified by spectrophotometry at 570 nm. One unit of kinase activity is defined as the number of nanomoles of phosphate transferred to the substrate per minute per milliliter.

Analysis of tissue NOS activity

Tissue samples from the NA, AR, and ANR were homogenized and centrifuged at 3,000 r/min for 10 min. The activities of total NOS (t-NOS), iNOS, and cNOS (the predominantly expressed isoform of cNOS in the myocardium is eNOS [17]) in the supernatant were measured spectrophotometrically at 530 nm according to the manufacturer's instructions (Nanjing KeyGen, China). Activities were expressed as units per milligram of myocardial protein (IU/mg protein).
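The area calculations above reduce to simple gravimetric ratios; the following is a minimal sketch (illustrative, with made-up slice masses) of how the reported percentages are formed.

```python
def area_percentages(lv_g: float, aar_g: float, anr_g: float, an_g: float) -> dict:
    """Express AAR as % of LV mass, and ANR and AN as % of AAR mass,
    as described in the methods (masses determined gravimetrically)."""
    return {
        "AAR/LV (%)": 100 * aar_g / lv_g,
        "ANR/AAR (%)": 100 * anr_g / aar_g,
        "AN/AAR (%)": 100 * an_g / aar_g,
    }

# Hypothetical masses in grams, chosen only for illustration.
print(area_percentages(lv_g=100.0, aar_g=28.0, anr_g=10.1, an_g=18.0))
# -> {'AAR/LV (%)': 28.0, 'ANR/AAR (%)': 36.07..., 'AN/AAR (%)': 64.28...}
```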
Statistical analysis

All data are expressed as the mean±SEM. Data from all stages were compared by repeated-measures analysis of variance followed by post hoc analysis with the Student-Newman-Keuls multiple comparisons test. Differences in single-measurement data, such as the no-reflow and infarct areas and the activities of PKA and NOS, were compared among groups by ANOVA followed by Duncan's post hoc test. P<0.05 was considered statistically significant.

Results

Cardiac performance in SIM-treated and -untreated hearts

Physiological examination revealed no significant differences in cardiac hemodynamics between any of the groups at baseline (P>0.05). However, under the conditions of ischemia and reperfusion, HR and LVEDP were increased in the untreated control hearts (P<0.05). These effects of ischemia and reperfusion were partially diminished when the animals received SIM pretreatment, as HR and LVEDP were decreased in the SIM group (P<0.05). The effects of SIM appeared to depend on the activation of PKA and eNOS, because combined treatment with SIM and the PKA inhibitor H-89 or the eNOS inhibitor L-NNA did not have the same effect as treatment with SIM alone (Table 1).

Table 1 abbreviations: SIM=simvastatin; L-NNA=Nω-nitro-L-arginine; HR=heart rate; MAP=mean arterial pressure; RPP=rate-pressure product; LVEDP=left ventricular end-diastolic pressure; dp/dtmax and dp/dtmin=maximum and minimum rates of left ventricular pressure development, respectively.

Sizes of no-reflow and infarction after ischemia and reperfusion

Pathological studies revealed that the area at risk (AAR) per left ventricle (LV) was comparable in the control, SIM, SIM+H-89, H-89, SIM+L-NNA, and L-NNA groups, averaging between 26.1% and 30.4% (P>0.05) (Figure 1A and 1B). SIM pretreatment significantly attenuated the area of no-reflow (ANR/AAR, 36.1%±2.1%) and the area of necrosis (AN/AAR, 64.1%±4.5%) compared with the control group (50.4%±2.4% and 79.0%±2.7%, respectively) (P<0.01). The PKA inhibitor H-89 alone reduced the no-reflow size (29.5%±4.2%) relative to the control group (P<0.01), but it partially abolished the SIM effect on no-reflow size and completely abolished the SIM effect on infarct size, as indicated by the increased no-reflow (40.4%±6.1%) and infarct (77.4%±1.2%) sizes in the SIM+H-89 group. The eNOS inhibitor L-NNA, however, completely counteracted the effects of SIM on myocardial no-reflow and infarction; the no-reflow and infarct sizes in the SIM+L-NNA group reverted to control levels (52.3%±2.8% and 83.9%±2.5%) (P<0.01). These data indicate that the cardioprotective effects of SIM against no-reflow and infarction are completely eNOS-dependent but only partially PKA-dependent.

After 1.5 h of ischemia and 3 h of reperfusion, plasma CK activity, a standard enzymatic marker of cardiac injury, was significantly increased in the control group (2.97±0.45 IU/mL; 4.73±0.14 IU/mL) compared with the sham group (1.05±0.09 IU/mL; 1.59±0.25 IU/mL) (P<0.01) but was lower in the SIM group (1.66±0.13 IU/mL; 3.53±0.29 IU/mL) than in the control group (P<0.01). However, the addition of H-89 blunted the SIM effect after 3 h of reperfusion, and L-NNA blunted the SIM effect after both 1.5 h of ischemia and 3 h of reperfusion (Figure 1C).

Myocardial PKA activity in the reflow and no-reflow areas after ischemia and reperfusion

Figure 2A shows that PKA activity was markedly induced in the reflow and no-reflow areas in the control group (9.57±0.56 IU/mL; 12.18±0.88 IU/mL) compared with the sham group (6.04±0.62 IU/mL) (P<0.01). Myocardial PKA activity in the reflow and no-reflow areas was further increased in the SIM group (12.24±0.76 IU/mL; 14.47±0.44 IU/mL) compared with the control group (P<0.05). However, SIM-induced PKA activity was inhibited by H-89 but not by L-NNA.
To evaluate the inhibitory effect of H-89 on the PKA signaling pathway, Western blotting analysis was performed to detect Ser133 p-CREB expression.

To investigate the mechanism by which PKA mediates eNOS activity, the expression of eNOS and p-eNOS (Ser1179 and Ser635) was detected by Western blotting (Figure 2B, 2C, and 2D). In the non-ischemic area (Figure 2B), the expression of eNOS and Ser635 p-eNOS was increased in the control group compared with the sham group (P<0.05); eNOS expression in the SIM and SIM+H-89 groups was decreased, and Ser635 p-eNOS expression in the H-89 group was increased, compared with the control group (P<0.05). In the reflow area (Figure 2C), the expression of eNOS and Ser635 p-eNOS was increased in the control group.

Discussion

The main findings of our study are as follows. First, a single-dose SIM pretreatment given just 2.5 hours before reperfusion reduced the sizes of the no-reflow and necrosis areas and activated the PKA pathway and the phosphorylation of eNOS at Ser635 and Ser1179 in the reflow and no-reflow myocardium. Second, the PKA inhibitor H-89 blocked the SIM-induced PKA activation and partially abolished the SIM-induced cardioprotection and eNOS phosphorylation, whereas the eNOS inhibitor L-NNA completely blocked the SIM-induced cardioprotection and eNOS phosphorylation without any influence on PKA activity, indicating that the cardioprotection of SIM after ischemia and reperfusion is in part mediated by the PKA/eNOS pathway.

Previous studies have reported that a 3-day pretreatment with atorvastatin or SIM at 10 mg/kg per day decreased infarct size in rat hearts, but this effect was not observed at 2 mg/kg [4,6,20,21]. Similarly, acute pretreatment with high-dose SIM (10 μmol/L) was shown to attenuate ischemia-reperfusion injury in isolated rat hearts, but chronic treatment with low-dose SIM did not [22]. These data suggest that chronic or acute pretreatment with high-dose statins can attenuate infarct size after ischemic reperfusion. The effect of SIM on infarct size in this study is consistent with previous studies because 2 mg/kg of SIM in pigs is approximately equivalent to 10 mg/kg in rats after correction for body surface area [15]. The infarct-limiting effect of statins in ischemia-reperfusion is mainly attributed to their pleiotropic effects via the PI3K/Akt/eNOS pathway, because the effects of statins can be abolished by inhibiting PI3K or eNOS [5-9]. In this study, we further found that acute pretreatment with single-dose SIM not only decreased the infarct size but also attenuated the no-reflow area, and we showed that the PKA pathway is another important mediator of the cardioprotection of SIM, acting by modulating the phosphorylation of eNOS at Ser1179 and Ser635. PKA seems to be activated by endogenous mechanisms, because the PKA inhibitor H-89 alone had almost the same effect as SIM on the no-reflow area in the presence of ischemia.
The mechanism underlying the idiopathic activation of PKA is considered to be that the decreased level of cyclic guanosine monophosphate (cGMP) inhibits the activity of phosphodiesterase III upon reperfusion, which in turn increases the cAMP concentration, subsequently leading to PKA activation [23]. The inhibition of PKA activity and Ser133 p-CREB by H-89 indicates that acute SIM treatment actually results in PKA activation. Although it is presently unclear how statins activate PKA in ischemic myocardium, one probable explanation is that statins may stimulate a cell surface receptor that activates small G proteins, resulting in the sensitization of adenylate cyclase, the accumulation of myocardial cAMP [24], and eventually the activation of PKA in the ischemic myocardium. It has been reported that 5'-nucleotidase and the adenosine A1, A2A, and A2B receptors are involved in atorvastatin-induced eNOS phosphorylation by stimulating phospholipase A2 and cyclooxygenase (COX) to generate prostacyclin (PGI2), leading to PKA activation and subsequent eNOS phosphorylation [5,25]. Enhancing Ser1177/1179 phosphorylation via the PI3K/Akt pathway is considered to be the main mechanism by which statins protect against ischemia-reperfusion injury [5-9]. However, several lines of evidence have shown that PKA also regulates the phosphorylation of eNOS at Ser1179, Ser635, and Ser617 in bovine eNOS (Ser1177, Ser633, and Ser615 in humans) [24, 26-29]. Ser633/635 phosphorylation is critical in maintaining NO synthesis after the initial sensitization by Ca2+ flux and Ser1177/1179 phosphorylation [30], and it is stimulated via the PKA pathway in response to shear stress and acute statin treatment in aortic endothelial cells [24,27], suggesting that PKA-mediated Ser633/635 phosphorylation may be another mechanism by which statins protect against myocardial no-reflow and necrosis after ischemia and reperfusion. Our study confirmed this hypothesis in reperfused swine hearts, demonstrating that the inhibition of PKA partially blocked the SIM-induced phosphorylation of eNOS at Ser1179 and Ser635 in the reflow and no-reflow myocardium, as well as partially abrogated the effects of SIM against myocardial no-reflow and infarction. Previous studies have reported that statin-induced eNOS activation involves the inhibition of the Rho GTPase and the modulation of Rho A membrane translocation [31,32] and that the transient preischemic activation of PKA by ischemic preconditioning reduces infarct size through Rho and Rho-kinase (ROCK) inhibition during sustained ischemia [13]. Therefore, it is plausible that the PKA-Rho pathway is involved in the regulation of eNOS activity by statins, but the specific mechanism should be studied further. Here, H-89 administered 30 min before ischemia attenuated the no-reflow area, possibly by phosphorylating eNOS at Ser635, but partly inhibited the protective effects of SIM when infused 30 min after SIM administration. This finding is somewhat contradictory to previous studies. In isolated rat hearts, H-89 (2 μmol/L) improved postischemic function and decreased infarct size when injected 3 min before 30 min of global ischemia-reperfusion, and it further reduced the infarct size when administered 3 min prior to ischemic or forskolin (a cAMP-elevating agent) preconditioning [33].
However, when delivered at 1.35 μg/kg per minute in dogs or at 10 μmol/L in isolated rat hearts approximately 30 min before preconditioning, H-89 completely blunted the infarct-limitation effect of preconditioning [12,34]. Therefore, the partial inhibition of SIM cardioprotection by H-89 in our study most likely occurred because H-89 was delivered later than SIM, and the contradiction between these studies might be explained by differences in H-89 dosage, experimental protocols, and animal species. Another mechanism underlying the bidirectional role of H-89 in ischemia and reperfusion may be the cross-talk between the PKA and PI3K/Akt pathways in the regulation of eNOS phosphorylation. It has been reported that H-89 inhibits Akt, ROCK II, and 5'-AMP-activated protein kinase (AMPK) [35] and that the PKA pathway interacts with the PI3K/Akt pathway in the regulation of gene expression [36]. Previous studies have shown that the forskolin-induced stimulation of PKA can inhibit Akt activity in human embryonic kidney cells [37], but epinephrine- or forskolin-induced stimulation of PKA enhanced eNOS phosphorylation at Ser1177 by activating the Akt pathway in aortic or coronary endothelial cells [28,38]. Interestingly, in endothelial cells, PKA is mainly involved in eNOS phosphorylation during the early phase of preconditioning, whereas both PKA and Akt are required for late preconditioning-induced eNOS activation, and Akt is a substrate of PKA [39]. Therefore, these reports indicate that PKA plays different roles in regulating eNOS phosphorylation in different cells and that cross-talk most likely exists between the PKA and PI3K/Akt pathways in the regulation of eNOS phosphorylation during ischemia and reperfusion. The inhibition of PKA may in turn cause the activation of the PI3K/Akt pathway and the subsequent phosphorylation of eNOS at Ser635. This might be another explanation for why H-89 only partially inhibited SIM-induced cardioprotection in our study; the exact mechanism may be elucidated in the future when more selective PKA inhibitors are available. In summary, the present study suggests that acute pretreatment with a single dose of SIM just 2.5 h before reperfusion can attenuate the size of the no-reflow and infarction areas by phosphorylating eNOS at Ser1179 and Ser635 in a partially PKA-dependent manner. The observation that H-89 partially abolished the cardioprotective effects of SIM yet decreased the no-reflow size when administered alone suggests a bidirectional role for PKA in cardioprotection during ischemia and reperfusion. Our results are helpful for understanding the mechanisms involved in statin-mediated protection against myocardial no-reflow and infarction and may lead to the development of new criteria for treating patients undergoing acute post-infarct percutaneous coronary intervention.
2016-05-15T05:50:14.767Z
2012-06-04T00:00:00.000
{ "year": 2012, "sha1": "1955396e341eb9d182ae413bdb1f9f1df543bca5", "oa_license": null, "oa_url": "https://www.nature.com/articles/aps201227.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "1955396e341eb9d182ae413bdb1f9f1df543bca5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247384688
pes2o/s2orc
v3-fos-license
Association between asthma or chronic obstructive pulmonary disease and chronic otitis media

We hypothesized that asthma/chronic obstructive pulmonary disease (COPD) might increase the risk of chronic otitis media (COM), as asthma or COPD affects other diseases. The aim of this research was to investigate whether the incidence of COM is affected by a diagnosis of asthma or COPD in patients compared to matched controls from the national health screening cohort. A COM group (n = 11,587) and a control group that was 1:4 matched for age, sex, income, and residence area (n = 46,348) were selected. The control group included participants who never received treatment for COM from the Korean National Health Insurance Service-Health Screening Cohort from 2002 to 2015. The crude and adjusted odds ratios (ORs) of previous asthma/COPD before the index date for COM were analyzed using conditional logistic regression. The analyses were stratified by age, sex, income, and region of residence. The period prevalence of asthma (17.5% vs. 14.3%, p < 0.001) and COPD (6.6% vs. 5.0%, p < 0.001) were significantly higher in the COM group than in the control group. In addition, the odds of asthma and COPD were significantly higher in the COM group than in the control group. Both asthma (adjusted OR 1.23, 95% confidence interval [CI] 1.16-1.31, p < 0.001) and COPD (adjusted OR 1.23, 95% CI 1.13-1.35, p < 0.001) increased the ORs for COM. This positive association between asthma/COPD and COM indicates that asthma/COPD might increase the incidence of COM.

In the respiratory tract of asthma and COPD patients, tissue remodeling occurs due to airflow limitation, hypoxia, and chronic inflammation, which affects interactions of the microbial community in the respiratory mucosa, including the Eustachian tube and middle ear 16,17. Alteration of the microbial community in chronic respiratory diseases such as asthma and COPD leads to a decrease in mucociliary clearance and oxygen availability in the adjacent upper respiratory mucosa. This is known to affect the occurrence of chronic rhinosinusitis (chronic inflammation of the nasal cavity and paranasal sinuses) and of otitis media in children, in which pathogens are introduced into the middle ear through the Eustachian tube 18-20. However, most studies on the association between asthma or COPD and otitis media have focused on eosinophilic otitis media or chronic suppurative otitis media with effusion in children rather than on COM in adults. Here, this study aimed to demonstrate whether asthma and COPD can be independent risk factors for COM using data obtained from a national health screening cohort.

Results
The period prevalence of asthma was 17.5%, and that of COPD was 6.6%, in the COM group during the follow-up period, both significantly higher than in the control group (p < 0.001). The distributions of age group, sex, income level, and type of residence region were comparable between the COM and control groups. The systolic and diastolic blood pressures of the COM group were significantly lower than those of the control group. However, there was no significant difference in cholesterol level, fasting glucose level, or the degree of obesity classified according to BMI between the two groups.
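As a back-of-the-envelope check of the period-prevalence comparison above, the counts can be reconstructed from the reported percentages and tested on a 2x2 table. Note that the study itself used McNemar's test to respect the matched design; the plain chi-square below is only an approximation on illustrative, back-calculated counts.

from scipy.stats import chi2_contingency

com_asthma = round(0.175 * 11587)    # ~2028 asthma cases in the COM group
com_none = 11587 - com_asthma
ctrl_asthma = round(0.143 * 46348)   # ~6628 asthma cases among controls
ctrl_none = 46348 - ctrl_asthma

chi2, p, dof, _ = chi2_contingency([[com_asthma, com_none],
                                    [ctrl_asthma, ctrl_none]])
print(f"chi2={chi2:.1f}, p={p:.2e}")  # p << 0.001, consistent with the text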
The numbers of past smokers (10.4%) and nonsmokers (74.6%) in the COM group were significantly higher, and the number of current smokers (15.0%) was significantly lower, than in the control group (16.4%). Moreover, the frequency of alcohol consumption was significantly lower in the COM group (27.3% drinking ≥ 1 time a week) than in the control group (28.9%). CCI scores related to comorbidities were calculated excluding pulmonary diseases, and the proportion of participants with a score of 1 or more was higher in the COM group (31.8% vs. 29.4% in the control group) (Table 1).

The adjusted OR of asthma for COM was 1.23 (95% CI 1.16-1.31, p < 0.001), which was significantly elevated regardless of age group, sex, income level, and residential area. In addition, the adjusted ORs of asthma were higher in the younger-than-60-years-old, female, low-income, and urban-living groups (Table 2). The adjusted OR of COPD for COM was 1.23 (95% CI 1.13-1.35, p < 0.001), which was significantly elevated for all ages, both sexes, all income levels, and both regions of residence, as observed for asthma. Moreover, the adjusted ORs of COPD were higher in participants aged 60 and over, women, those with a high income, and those living in rural areas (Table 3). The ORs of asthma with COM in model 2, adjusted for age group, sex, income, region of residence, obesity, smoking, alcohol consumption, CCI scores, total cholesterol, SBP, DBP, fasting blood glucose, and COPD, were significantly higher in all subgroups except the group with a CCI score of 1 (Supplementary Table S1). Additionally, the association of COPD with COM in the same model was consistent in all subgroups except the BMI < 18.5, BMI ≥ 23 to < 25, and borderline cholesterol level (≥ 200 to < 240 mg/dL) groups (Supplementary Table S2).

Discussion
The ORs for asthma and COPD were significantly higher in the COM group than in the control group. Additionally, the ORs of asthma and COPD in the COM group were considerably higher than those in the control group across all age groups, both sexes, all income levels, and both residential areas. Previous studies on the association between asthma and otitis media were conducted in children 21,22. Otitis media (OR 1.8; 95% CI 1.2-2.6) in the first year of life was related to the presence of asthma at 4 years of age in the Oslo Birth Cohort 21. In a nationwide population cohort of Korea, asthma and otitis media showed a reciprocal association (hazard ratio [HR] 1.46; 95% CI 1.40-1.52, p < 0.001 for otitis media; HR 1.43; 95% CI 1.36-1.50, p < 0.001 for asthma) in children 22. However, these studies postulated a relationship between otitis media and asthma in children, unlike our study on COM in adults, and no epidemiological association between the two diseases had been demonstrated in adults until now. Since asthma patients have increased airway hypersensitivity, the course of the disease is affected by several primary or secondary factors, which can be associated with COM. In particular, exposure to urban fine/ultrafine particles, which have a large surface area, are easily deposited in the airways, and can inhibit phagocytosis, affects not only the airway but also the Eustachian tube and middle ear 23. Urban particles cause mucosal thickening and inflammatory cell infiltration in the middle ear due to epithelial cell and vascular space widening and play a role in decreasing ENaC-α expression and increasing the level of MUC5AC expression 24.
In addition, factors such as respiratory infection and allergen exposure affect ventilation and mucociliary clearance of the Eustachian tube and/or middle ear, which is thought to be able to potentiate the onset of COM 25,26. To the best of our knowledge, no studies have reported an epidemiological association between COPD and otitis media. However, several causative factors of COPD can directly or indirectly affect the Eustachian tube and middle ear epithelium. Smoking and tobacco use were major determinants of COPD and had a significantly higher odds ratio for overall middle ear diseases (adjusted OR 1.15; 95% CI 0.99-1.33, p = 0.05), and the adjusted OR in men aged 40-60 years was 1.73 (95% CI 1.29-2.30, p < 0.001), which was particularly high compared to other age groups 27. Exposure of human middle ear epithelial cells to cigarette smoke solution upregulated TNFα, EGFR, and MUC5AC mRNA levels and caused histological changes such as cilia loss, prominent squamous metaplasia, and goblet cell depletion in the Eustachian tube 28,29. Additionally, the association between COPD and COM can be elucidated through several bacterial pathogens that are commonly involved in both diseases. Nontypeable Haemophilus influenzae and Moraxella catarrhalis are representative bacterial species that contribute to COPD exacerbation in smokers, and acute and recurrent otitis media caused by these species are prevalent in young children 30,31.

Table 1. General characteristics of participants. CCI Charlson comorbidity index; COM chronic otitis media; COPD chronic obstructive pulmonary disease; DBP diastolic blood pressure; SBP systolic blood pressure; SD standard deviation. *Chi-square test; significance at p < 0.05. †Wilcoxon rank-sum test; significance at p < 0.05. ‡Obesity (BMI, body mass index, kg/m2) was categorized as < 18.5 (underweight), ≥ 18.5 to < 23 (normal), ≥ 23 to < 25 (overweight), ≥ 25 to < 30 (obese I), and ≥ 30 (obese II). §CCI scores were calculated without pulmonary disease.

Streptococcus pneumoniae, which colonizes the human nasopharynx, also causes airway inflammation, compromises the mucociliary system of the airway epithelium, and promotes bacterial overgrowth in chronic respiratory diseases such as allergic asthma and COPD 32,33. In addition, recurrent infection and persistent inflammation in chronic respiratory diseases such as COPD and asthma change the composition of the microbial community not only in the adjacent airway, nasopharynx, and paranasal sinuses but also in the Eustachian tube and middle ear epithelium 34,35. Mucus plugging of the sterile airway in asthma and COPD causes hypoxia and necrosis of airway epithelial cells, leading to neutrophilic airway inflammation 36. Interleukin 1 receptor (IL-1R) plays a key role in activating neutrophilic inflammation and airway remodeling in the mucus-abundant airway 37 and in impairing antibacterial host defense mechanisms through Toll-like receptors (TLRs) and extracellular signal-regulated kinase (ERK) signaling pathways 38-40. This airway inflammation can promote swelling and narrowing of the Eustachian tube mucosa, compromise ventilation and mucociliary clearance, and eventually accelerate the accumulation of pathogens in the middle ear cavity in chronic cases 41. This study demonstrates an epidemiological association between asthma/COPD and COM using a nested case-control design based on a large, nationwide population database.
Additionally, expected confounding factors such as age group, sex, income level, and area of residence were matched in the control group at a 1:4 ratio to enhance the reliability of our results. The health check-up database, compiled by life cycle for the national population, contains not only demographic data but also objective data such as blood pressure and fasting glucose level, along with quantified information related to individual habits. Thus, it has the advantage of providing additional information about associations between diseases beyond medical claim codes and prescribed medications.

Although this study postulates that asthma/COPD increases the occurrence of COM, it has several limitations. First, causality between asthma/COPD and COM could not be sufficiently established due to the retrospective research design. Second, it is difficult to obtain descriptive information related to asthma, COPD, and COM, such as microbiological culture results, hearing thresholds, radiological findings, and pulmonary function test results, from the data included in the health screening cohort database. Although several confounding factors were matched to increase the statistical power of the association between diseases, the absence of specific information such as the severity and clinical progression of each disease means the possibility of potential misdiagnosis cannot be completely excluded. Also, unlike COPD, asthma can start in childhood, but our dataset does not capture the childhood histories of patients diagnosed with asthma during the follow-up period. This study includes a wide range of otitis media, from suppurative to non-suppurative; differences in pathogenesis might affect the results. Last, the number of medical encounters in patients with COM may be higher than in the control group due to the chronic clinical course of COM, but we did not correct for this. Therefore, this could be a confounder that influenced the timing of diagnosis of asthma and COPD. In future studies, a data analysis that takes the number of medical encounters into account should be devised to minimize confounding bias affecting the apparent incidence of diseases.

Methods
[…] 43. Age- and gender-specific distributions of the population in the cohort are described online (http://nhiss.nhis.or.kr). All Koreans over the age of 40 with a 13-digit resident registration number are requested to have a health evaluation biannually without cost. All medical records and treatments, as well as births and deaths, are managed under the Korean Health Insurance Review and Assessment system based on the 13-digit resident registration number. This cohort database from the NHIS includes (i) personal information, (ii) medical claim codes related to procedures and prescriptions, (iii) diagnostic codes using the International Classification of Disease-10 (ICD-10), and (iv) …
Table 3. Odds ratios (95% confidence interval) of COPD for COM with subgroup analyses according to age, sex, income, and region of residence. CCI Charlson comorbidity index; COM chronic otitis media; COPD chronic obstructive pulmonary disease; DBP diastolic blood pressure; SBP systolic blood pressure. *Conditional logistic regression; significance at p < 0.05. †Models were stratified by age, sex, income, and region of residence. ‡Adjusted for obesity, smoking, alcohol consumption, CCI scores, total cholesterol, SBP, DBP, fasting blood glucose, and asthma.

Participants who were diagnosed ≥ 2 times with malignant neoplasms of the meninges (C70), malignant neoplasms of the brain (C71), or malignant neoplasms of the spinal cord, cranial nerves, and other parts of the central nervous system (C72) were excluded from the COM group (n = 32) and the control group (n = 798). A participant who did not have blood pressure records was excluded from the COM group (n = 1). COM participants were 1:4 matched with control participants for age group, sex, income, and region of residence. To minimize selection bias, the control participants were sorted in random number order. The index date of each COM participant was set as the time of diagnosis of COM. The index dates of the control participants were set as the index date of their matched COM participant; therefore, each matched COM participant had the same index date as their control participants. During the matching process, 404,767 control participants were excluded. Cases with brain tumors and cases without medical records were excluded from both groups. Deaths before 2004 were also excluded because they did not belong to either group. Cases (n = 4974) diagnosed with COM between 2002 and 2003 that did not meet the definition of the index date were excluded from the control group. In addition, participants with an ICD-10 code for COM were excluded from the control group to ensure that no COM patients remained among the controls. Finally, 11,587 COM participants were 1:4 matched with 46,348 control participants for the study (Fig. 1).

Independent variables (asthma and chronic obstructive pulmonary disease)
Asthma was defined if participants were treated ≥ 2 times for asthma (ICD-10: J45) or status asthmaticus (J46) from 2002 through 2015, with asthma-related medications prescribed ≥ 2 times, as in our previous study 44. This method was modified from a previously validated study 45.

Statistical analyses
The general characteristics of the COM and control groups were compared using McNemar's chi-square test for categorical variables and the Wilcoxon signed-rank test for continuous variables. To analyze the odds ratios (ORs) with 95% confidence intervals (CIs) of asthma or COPD for COM, conditional logistic regression was used. In these analyses, crude and adjusted models were calculated. For asthma as the independent variable, the adjusted model was adjusted for obesity, smoking, alcohol consumption, CCI scores, total cholesterol, SBP, DBP, fasting blood glucose, and COPD. For COPD as the independent variable, asthma was used as the covariate instead of COPD in the adjusted model. The analyses were stratified by age, sex, income, and region of residence. For the subgroup analyses, we divided participants by age (< 60 years old and ≥ 60 years old), sex (males and females), income (low income and high income), and region of residence (urban and rural).
We analyzed crude and adjusted models using conditional logistic regression. Additionally, we performed subgroup analyses according to obesity, smoking, alcohol consumption, total cholesterol, blood pressure, fasting blood glucose, and CCI using unconditional logistic regression (Supplementary Tables S1 and S2). Model 1 (adjusted for age, sex, income, and region of residence) and model 2 (model 1 plus obesity, smoking, alcohol consumption, CCI scores, total cholesterol, SBP, DBP, fasting blood glucose, and COPD [or asthma]) were calculated for additional subgroup analyses. For the statistical analyses, SAS version 9.4 (SAS Institute Inc., Cary, NC, USA) was used. We performed two-tailed analyses, and significance was defined as p values less than 0.05.
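A minimal sketch of the matched-set analysis described above is given below: conditional logistic regression on synthetic 1:4 matched data using statsmodels (the study itself used SAS). The variable names and the simulated effect are illustrative, not the study's actual field names or estimates.

import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(1)
rows = []
for s in range(500):                      # 500 matched sets
    for case in (1, 0, 0, 0, 0):          # one case, four matched controls per set
        rows.append({
            "set_id": s,
            "com": case,
            "asthma": rng.binomial(1, 0.18 if case else 0.14),
            "sbp": rng.normal(125.0, 15.0),
        })
df = pd.DataFrame(rows)

# Conditioning on the matched set absorbs the matching factors (age, sex, income, region).
fit = ConditionalLogit(df["com"], df[["asthma", "sbp"]], groups=df["set_id"]).fit()
print(np.exp(fit.params))       # adjusted odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals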
2022-03-12T06:23:48.063Z
2022-03-10T00:00:00.000
{ "year": 2022, "sha1": "009ad8c9b57d7277d23006881199d8070d641b6e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-08287-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3300e1b2ed9b73e17c4fc99a40e90232e18fab54", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268575268
pes2o/s2orc
v3-fos-license
Factors Influencing Morbidity and Mortality Rates in Tertiary Intensive Care Units in Turkey: A Retrospective Cross-Sectional Study

Background and Objectives: The objective of this study was to determine the correlation between the prognosis of patients admitted to a tertiary intensive care unit (ICU) and the admitted patient population, intensive care conditions, and the workload of intensive care staff. Materials and Methods: This was a retrospective cross-sectional study that analyzed data from all tertiary ICUs (a minimum of 40 and a maximum of 59 units per month) of eight training and research hospitals between January 2022 and May 2023. We compared monthly data across hospitals and analyzed factors associated with patient prognosis, including mortality and pressure injuries (PIs). Results: This study analyzed data from 54,312 patients, of whom 51% were male and 58.8% were aged 65 or older. The median age was 69 years. The average number of tertiary ICU beds per unit was 15 ± 6 beds, and the average occupancy rate was 83.57 ± 19.28%. On average, 7 ± 9 pressure injuries (PIs) and 10 ± 7 patient deaths per unit per month were reported. The mortality rate (18.66%) determined per unit was similar to the expected rate (15-25%) according to the Acute Physiology and Chronic Health Evaluation (APACHE) II score. There was a statistically significant difference among hospitals on a monthly basis across various aspects, including bed occupancy rate, length of stay (LOS), number of patients per ICU bed, number of patients per nurse in a shift, rate of patients developing PIs, hospitalization rate from the emergency department, hospitalization rate from wards, hospitalization rate from external centers, referral rate, and mortality rate (p < 0.05). Conclusions: Although generally reliable in predicting prognosis in tertiary ICUs, the APACHE II scoring system may have limitations when analyzed on a unit-specific basis. ICU-related conditions have an impact on patient prognosis. ICU occupancy rate, work intensity, patient population, and number of working nurses are important factors associated with ICU mortality. In particular, data on the patient population admitted to the unit (emergency patients and patients with a history of malignancy) were most strongly associated with unit mortality.

Introduction
Tertiary intensive care units (ICUs) are highly specialized hospital environments equipped with many complex technologies [1]. They are locations where life-threatening diseases are treated and organ support is provided with invasive monitoring, thus preventing multiple-organ failure and reducing mortality [2]. The use of scoring systems to predict the risk of death and evaluate outcomes in critically ill patients is vital in modern medicine [3]. In ICUs, numerous scoring systems have been developed over the last two decades for general ICU patients or defined subgroups. Acute Physiology and Chronic Health Evaluation (APACHE) II is a scoring system developed for the general ICU population to predict the risk of in-hospital mortality, and it is the most widely used scoring system for determining the severity of disease in the ICU [1,4]. The use of the APACHE II scoring system for ICU patients is not limited to mortality data; it has also been shown to be predictive of the development of pressure injuries (PIs) in critically ill patients [5].
Risk scoring is a highly complex system for comparing outcomes in ICUs [1]. Multiple variables are required to calculate these scores, and although collecting the data is difficult, many studies have reported the helpful performance of the APACHE II scoring system [6-9]. Although patient-based values are taken into account in all of these scoring systems, no scoring system that takes into account the conditions of hospitals, ICUs, and employees has been developed, and there is no study on this in the literature.

Studies have reported differences in patient outcomes between high- and low-volume hospitals, using both hospital and individual surgeon volume as the unit of analysis [10]. Hospital volume reflects institutional characteristics such as infrastructure, number of beds, occupancy rate, bed-patient ratio, and nurse-patient ratio. Surgeon volume can be considered an indicator of the surgeon's technical or decision-making skills, which can affect patient outcomes [10,11]. The available evidence supports higher-volume hospitals for better outcomes, and this has been applied in quality and cost improvement policies over the years [10]. A literature review of 40 studies on the volume-outcome relationship in critically ill patients has shown that those admitted to high-volume hospitals have better outcomes [12]. This is particularly relevant given the current shortage of intensive care physicians and the general complexity of critical illnesses [12]. In our country, training and research hospitals (TRHs) are considered high-volume hospitals. Recently, newly established city hospitals (CHs) have been added to the existing hospitals. This policy is still supported in our country in terms of quality and cost.

The literature on the relationship between intensive care working conditions and mortality and morbidity is generally limited to studies examining nurse staffing levels and adverse patient outcomes [13]. Adequate nurse and staffing levels are inextricably linked to favorable patient outcomes both in general ward settings and in critical care areas, including the ICU [14]. Inadequate staffing levels, coupled with increasing demand for intensive care beds and decreasing budgets, can compromise patient safety [13]. Over the last decade, several studies have investigated the correlation between nurse staffing levels and patient outcomes, including mortality, complications, infection rates, PI development, falls, length of stay (LOS), and medication errors [13,15]. These studies have either focused on unit-level outcomes or aggregated their results to the hospital level, thus failing to provide a clear insight into the relationship between staffing levels and patient outcomes in the intensive care setting [13,15]. While some professional organizations have mandated a nurse-patient ratio of 1:1 [13], there is no clear international consensus on this issue. Additionally, data on tertiary ICUs, where patient care is particularly challenging, are limited in the literature.
The objective of this study was to investigate the potential relationship between patient population, intensive care conditions, and the workload of intensive care staff and mortality and PIs in patients admitted to tertiary ICUs. Additionally, this study aimed to assess the suitability of the APACHE score, which is commonly used for mortality prediction in general ICUs, for use in tertiary ICUs, where critically ill patients receive the highest level of care. To enhance the generalizability of our study, we analyzed the databases of eight TRHs in Ankara, the capital of the Republic of Turkey. All of these hospitals use the same patient tracking software system, have high patient volumes, provide healthcare services in all branches, and cover 74% of the tertiary ICU beds in the region.

Ethical Conduct
This retrospective observational descriptive study was initiated after receiving approval from the Yıldırım Beyazıt University Yenimahalle TRH Clinical Research Ethics Committee (date: 16 August 2023; approval number: E-2023-33). All procedures followed were in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national) and the Declaration of Helsinki as revised in 2013.

Participants
The data of all patients hospitalized in the tertiary ICUs of 8 TRHs with equal service conditions between January 2022 and May 2023 in Ankara were investigated. The only exclusion criterion was missing monthly data; ICUs with missing monthly data in the database were excluded from this study. CHs were also included in this study because they had TRH status. All data were collected on a monthly basis, and data from all tertiary ICUs were examined for 17 months. The data were collected across three daily shifts and entered into the system at the end of each month. Two of the hospitals were CHs, and six were TRHs. Ankara CH: 9 months (January-September 2022) with 18 units, 8 months (October 2022-May 2023) with 19 units; Etlik CH (ICU data were analyzed as of January 2023 because it was newly opened): 1 month (January 2023) with 14 units, 4 months (February-May 2023) with 19 units; Ankara TRH: 4 months (January-April 2022) with 8 units, 2 months (May-June 2022) with 7 units, 11 months (July 2022-May 2023) with 6 units; Dışkapı Yıldırım Beyazıt TRH: 8 months (January-August 2022) with 8 units, 1 month (September 2022) with 7 units, 1 month (October 2022) with 3 units, and 7 months (November 2022-May 2023) with 1 unit; Gülhane TRH and Atatürk TRH: 17 months with 5 units each; Yenimahalle TRH: 8 months (January-August 2022) with 2 units, 9 months (September 2022-May 2023) with 3 units; Dr. Abdurrahman Yurtaslan Oncology TRH: data from 1 tertiary care ICU were examined for 17 months. Complete data were sent from the ICUs and recorded in the database.

APACHE II is a model that incorporates 12 physiological variables of the patient [4]. It gives a single score up to a maximum of 71. It is applied within 24 h of ICU admission, and the worst (most abnormal) value for each physiological component within that period is recorded. Applying logistic regression converts the score into a probability of death, giving the individual in-hospital risk of death. A higher score in this model indicates greater disease severity due to its impact on mortality. The APACHE II score and in-hospital mortality rate were defined in a study conducted by Knaus et al.
in 1985. The relationship between the APACHE II score distribution and the in-hospital mortality rate is as follows: a score of 0-4 corresponds to a 4% mortality rate; 5-9 to 8%; 10-14 to 15%; 15-19 to 25%; 20-24 to 40%; 25-29 to 55%; 30-34 to 73%; and over 34 to 85% [4]. APACHE scores were accordingly divided into 8 groups: 0-4, 5-9, 10-14, 15-19, 20-24, 25-29, 30-34, and over 34.

Age, gender, and APACHE II score distributions of patients hospitalized in the tertiary care ICUs were taken directly from the system. Moreover, we obtained the following data from the database: the daily number of ICU beds; the number of inpatients; bed occupancy rate (monthly ratio of the number of daily inpatients to the number of ICU beds); patients' LOS (monthly average of the LOS of all inpatients); number of patients per bed (ratio of the number of monthly inpatients to the number of ICU beds); number of patients per nurse (monthly rate calculated by dividing the maximum number of intensive care patients admitted in one shift per day by the number of nurses working in that shift); rate of patients with PIs (monthly ratio of the number of patients with PIs to the total number of inpatients in the ICU); hospitalization rate from the emergency department (ratio of the monthly number of patients admitted to the ICU from the emergency department to the total number of inpatients); hospitalization rate from wards (ratio of the monthly number of patients admitted to the ICU from hospital wards to the total number of inpatients); hospitalization rate from external centers (ratio of the monthly number of patients admitted to the ICU from external centers to the total number of inpatients); rate of referred patients (ratio of the monthly number of patients referred to external centers to the total number of inpatients); and mortality rate (ratio of the monthly number of patients who died in the ICU to the total number of inpatients). The data from each hospital were compared.

Outcomes
The primary outcome was the relationship between the mortality rate in tertiary care ICUs and the admitted patient population, ICU conditions, and the workload of ICU staff. The second outcome was the relationship between the mortality rate observed in tertiary care ICUs and the mortality rate estimated by the APACHE II score. The third outcome was the relationship between PIs detected in tertiary ICUs and the patient population, ICU conditions, and the workload of ICU staff.
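The APACHE II band-to-mortality mapping quoted above can be written out directly as a lookup, which is how expected unit-level mortality was interpreted in this study; the function below simply encodes the published bands.

def apache2_expected_mortality(score):
    """Expected in-hospital mortality for an APACHE II score (Knaus et al., 1985)."""
    bands = [(4, 0.04), (9, 0.08), (14, 0.15), (19, 0.25),
             (24, 0.40), (29, 0.55), (34, 0.73)]
    for upper, rate in bands:
        if score <= upper:
            return rate
    return 0.85  # scores over 34

print(apache2_expected_mortality(17))  # 0.25, i.e., the 15-19 band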
Statistical Analysis
All data obtained and recorded during this study were analyzed using the jamovi statistical program, version 2.3.21.0 (Sydney, Australia), and we created graphical representations. We used the Shapiro-Wilk test to assess whether the data were normally distributed. Non-normally distributed or ordinal data are presented as medians with quartiles. Categorical variables are presented as the number and percentage of cases and were evaluated using chi-squared and Fisher's exact tests. Since this study included data from 8 hospitals, we analyzed continuous variables that did not follow a normal distribution using Welch's one-way ANOVA test or the Kruskal-Wallis test. Differences between hospitals were analyzed with the Games-Howell post hoc test or the Dwass-Steel-Critchlow-Fligner pairwise comparisons test. We used Spearman's correlation analysis to analyze the relationship between mortality rate and ICU data, as our data did not follow a normal distribution. When appropriate, we calculated 95% confidence intervals (CIs), and we considered p-values of less than 0.05 to be statistically significant. Significance values were adjusted with the Bonferroni correction for multiple comparisons, and when comparing the 8 hospitals, we considered p-values below 0.006 to be statistically significant.

Results
Data from 54,312 patients hospitalized in tertiary care ICUs in Ankara between January 2022 and May 2023 were included in this study. In the entire patient group, 51% were male, and the median age was 69 years. The APACHE II score of 29% of the patients was below 10. The median APACHE II score band across hospitals was 15-19, with an expected mortality of 15-25%. There was no statistically significant difference in the demographic characteristics of the patients across hospitals, as demonstrated in Table 1.
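The hospital-level comparison strategy described in the methods can be sketched as follows: an omnibus Kruskal-Wallis test across all eight hospitals, then pairwise tests judged against the Bonferroni-adjusted threshold of 0.05/8 ≈ 0.006 quoted above. Mann-Whitney U is used here as a simple stand-in for the Dwass-Steel-Critchlow-Fligner procedure, and the data are simulated monthly unit-level rates, not the study's actual values.

from itertools import combinations
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(2)
# 17 monthly observations of a hypothetical metric (e.g., mortality rate) per hospital.
hospitals = {f"H-{i + 1}": rng.normal(18 + (3 * i) % 5, 5, 17) for i in range(8)}

h_stat, p = kruskal(*hospitals.values())
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p:.4f}")

alpha = 0.05 / len(hospitals)  # Bonferroni-adjusted threshold, ~0.006
for a, b in combinations(hospitals, 2):
    _, p_pair = mannwhitneyu(hospitals[a], hospitals[b])
    if p_pair < alpha:
        print(f"{a} vs {b}: p={p_pair:.4f} (significant after correction)")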
The average number of tertiary care ICU beds per unit was 15 ± 6 beds, and the average occupancy rate was 83.57 ± 19.28%. The lowest occupancy rate was seen in September 2022, and there was no statistically significant difference between months in terms of bed occupancy rates (p = 0.618, Welch's one-way ANOVA). The average number of tertiary ICUs from which monthly data were collected was 48, with a minimum of 40 and a maximum of 59 units. The number of inpatients per unit per month was 65 ± 39 patients, and the average ICU stay was 5.76 ± 5.84 days. Moreover, the number of nurses working per unit per month was 28 ± 10 nurses (working across three shifts). An average of 7 ± 9 PIs per month were reported per unit (an average PI rate of 11.93 ± 15.29% per unit), and an average of 10 ± 7 patient deaths were reported (an average mortality rate of 18.66 ± 15.36% per unit). The mortality rate determined per unit (an average of 18.66%) and the expected mortality rate according to the APACHE II score (15-25%) were similar. Among hospitals on a monthly basis, there was a statistically significant difference in terms of bed occupancy percentage, average patient LOS, number of patients per intensive care bed, number of patients per nurse in a shift, percentage of patients developing PIs, emergency department admission percentage, ward admission percentage, external center admission percentage, referral percentage, and mortality percentage (all p < 0.001, Welch's one-way ANOVA; Table 2). In terms of the bed occupancy percentage, H-5 had the highest occupancy rate, with a median of 95%. The statistical difference in average LOS originates from hospitals H-2 and H-7, where the median LOS was the highest, at 8 days.
With regard to the number of patients per bed, H-5 showed a statistically significant difference, with the lowest number of patients per bed; H-5 was thus the hospital with the highest bed occupancy rate and the lowest number of patients per bed. When the number of patients per nurse in a shift was examined across hospitals, the average was 1.3 ± 0.4. However, H-3, H-6, and H-7 exhibited statistically significant differences compared to the other hospitals, with the number of patients per nurse per shift in these hospitals being higher than in the others. Considering the percentage of patients with PIs, H-3 and H-7 differed, with rates higher than in the other hospitals. H-7 was the hospital with the longest stays, the highest number of patients per nurse, and the highest number of patients with PIs. Examining the admission-source percentages of patients admitted to the tertiary ICUs, H-4 showed a statistically significant difference, exhibiting the highest number of patients admitted from the emergency department, while H-1 admitted the fewest patients from the emergency department. While H-4 and H-8 had the lowest numbers of admissions from external centers, H-1 had the highest. Observing the percentage of patients referred to external centers per unit, H-3, H-4, and H-6 exhibited statistically significant differences, with patient referral being lowest in these hospitals. In addition, the primary reason for referral to another hospital from the tertiary ICUs was the need for palliative care. Examining mortality rates among patients (Figure 1), the mortality rate was significantly higher at H-5 compared to the other hospitals. There was a statistically significant difference between H-1 and H-2 (p = 0.036; Games-Howell post hoc test), H-6 (p = 0.009; Games-Howell post hoc test), and H-8 (p = 0.017; Games-Howell post hoc test), with mortality at H-1 also being high. There was a statistically significant difference between H-7 and H-6 (p = 0.039; Games-Howell post hoc test) and H-8 (p = 0.044; Games-Howell post hoc test), with the mortality rate at H-7 being high. H-5, a dedicated oncology hospital, followed patients with a history of malignancy. The mortality rate and patients' history of malignancy were highly correlated.
The relationship between the mortality rate and ICU data is shown in Table 3. There was a positive correlation between the number of tertiary ICU beds, bed occupancy percentage, patient LOS, emergency department admission rate, external center admission rate, extended-stay patient rate, PI patient rate, and number of patients per nurse, on the one hand, and mortality rate, on the other (p-values of <0.001, <0.001, <0.001, <0.001, 0.004, <0.001, <0.001, and <0.001, respectively). There was a negative correlation between the ward admission rate and the mortality rate (p < 0.001).

The relationship between the proportion of patients with PIs and ICU data is shown in Table 4. There was a positive correlation between the number of tertiary ICU beds, bed occupancy rate, patient LOS, emergency department admission rate, extended-stay patient rate, and number of patients per nurse, on the one hand, and the rate of patients with PIs, on the other (p-values of <0.001, <0.001, <0.001, 0.004, <0.001, <0.001, and <0.001, respectively). There was a negative correlation between the ward admission rate and the external center admission rate, on the one hand, and PI incidence, on the other (p-values of <0.001 and <0.001, respectively).
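The correlation analyses behind Tables 3 and 4 amount to Spearman's rank correlation between each unit-level variable and the outcome rate. A minimal sketch on synthetic data is shown below; the column names and the injected association are illustrative only, not the study's data.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 48 * 17                                   # ~48 units observed over 17 months
occupancy = rng.uniform(40.0, 100.0, n)       # hypothetical bed occupancy (%)
mortality = 5.0 + 0.15 * occupancy + rng.normal(0.0, 5.0, n)

rho, p = spearmanr(occupancy, mortality)      # rank correlation, robust to non-normality
print(f"occupancy vs mortality: R={rho:.2f}, p={p:.2e}")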
Discussion
In this study, we examined the data of 54,312 patients. We analyzed the data from all tertiary ICUs in eight training and research hospitals with similar facilities, serving as last-resort centers for critically ill patients. We found that the median mortality estimated according to the APACHE II score (15-25%) and the average mortality per unit (18.66%) were similar. The gender, age, and APACHE scores of patients hospitalized in the ICUs were similar, with no statistical difference in the demographic characteristics of hospitalized patients between hospitals; however, ICU occupancy rate, patient LOS, number of patients per bed, number of patients per nurse, rate of patients with PIs, emergency department admission rate, ward admission rate, external center admission rate, referral rate, and mortality rate were statistically significantly different between hospitals. When we examined the data related to mortality, we detected that mortality increased with the number of tertiary ICU beds, bed occupancy percentage, patient LOS, emergency department admission rate, external center admission rate, extended-stay patient rate, PI patient rate, and the number of patients per nurse. We discovered that the mortality rate decreased as the ward admission rate increased. Moreover, we also examined the ICU conditions associated with the incidence of PIs, which increase the risk of infection, one of the most important drivers of mortality in critically ill patients. As the number of tertiary ICU beds, bed occupancy rate, patient LOS, emergency department admission rate, extended-stay patient rate, and number of patients per nurse increased, the rate of PI patients in the ICU also increased. The PI patient rate decreased as the ward admission and external center admission rates increased.

Several expert working groups supported by the National Institutes of Health and the Societies of Critical Care Medicine have recommended the regionalization of critical care medicine [7,8]. There is ample evidence showing that hospitals and physicians with high patient volumes experience better patient outcomes across a wide range of medical conditions and surgical procedures [8,16]. However, as there is not yet a study in the literature comparing these high-volume hospitals, there is no study on whether ICU conditions are related to mortality, despite the increased use of technology and improved healthcare conditions. Tertiary ICUs have many standard features, but the organization and delivery of intensive care services vary [8,17]. In our study, we examined tertiary ICU data from eight training and research hospitals. Although patient characteristics and facilities were similar, there was a statistically significant difference in patient mortality between hospitals. The mortality rate in the Dr.
Abdurrahman Yurtaslan Oncology TRH ICU (median 57.7%) was significantly higher than in the other hospitals. We attributed this high rate to the fact that this hospital's patient demographics differ from those of the other hospitals. Since there is no separate score for malignancy in the APACHE II system, the mortality rate was significantly higher in the oncology hospital despite similar prediction scores across hospitals, owing to its large patient population with a history of malignancy. The more recently defined APACHE IV score includes a separate component for malignancy, but the use of this scoring system is limited today. We concluded that the APACHE IV scoring system would, therefore, provide more accurate results in predicting ICU mortality.

Prognosis in critically ill patients is related to many risk factors, such as age, gender, disease severity, comorbidities, diagnosis, and response to treatment [18,19]. Unit-derived clinical outcomes have increased the need for outcome review and guidance on the effective use of services [20,21]. Scoring systems can be used to estimate expected mortality, adjusted for differences in diagnoses, physiological abnormalities, and outcomes of critically ill patients admitted to the ICU [6,22]. Therefore, global disease severity scoring systems have grown in popularity, allowing international comparison of intensive care outcomes. Although there are difficulties in using risk adjustment methods to compare outcomes across ICUs, many studies have reported that APACHE II is the most appropriate scoring system for critically ill patients [6,23]. In our study, we used the APACHE II scoring system for interhospital mortality classification. There was no statistical difference in APACHE II values between hospitals. When hospital- and unit-related results were evaluated, a statistical difference in mortality rates was detected between hospitals. This result clearly shows that conditions in the ICU have an effect on mortality that is independent of patient-related values. Iapichino et al., in a study conducted in ICUs regardless of level, found a direct relationship between mortality and intensive care occupancy rate [20]. In our study, we likewise found a direct relationship between mortality and ICU occupancy. Flabouris et al. found that patients admitted to the ICU from emergency departments and external centers had high hospital mortality rates and extended intensive care stays [24]. In our study, increased mortality was observed in patients admitted from the emergency department and from external centers. The strongest association with mortality was found in patients admitted from the emergency department. It was also found that mortality was significantly reduced in patients admitted from the wards. Although there are limited data in the literature examining the relationship between admission source and mortality, there is no scoring system that takes these data into account. As we found in our multicenter study with a large group of patients, we believe that these data should be taken into account and that prospective studies are needed in this regard.
Nurse staffing levels in the ICU differ from those on wards and in other hospital services for many reasons. The nurse-patient ratio is important in the ICU because of the need for nursing care and the continuous monitoring and supervision of patients [13]. For tasks that require more than one nurse, or in situations of sickness, it may be necessary to use floating or on-call nurses for support [14]. In addition, the total number of staff per bed working in ICUs is higher, because the same number of nursing staff must be available 24 h a day in ICUs, as opposed to the often-reduced staffing levels in other departments during night shifts. Although there is no clear consensus in the literature, a 1:1 nurse-patient ratio is recommended in ICUs [13]. There are also conflicting results in the literature about the mortality rates associated with the number of patients per nurse. Three studies [25-27] showed a statistically significant association between increasing nurse staffing levels and decreasing mortality rates, while four studies [28-31] found no statistically significant association. In our study, which covered a monthly average of 48 tertiary ICUs, the average number of patients per nurse across hospitals was 1.3, higher than the value recommended in the literature. Mortality increased linearly across hospitals as the number of patients per nurse increased. We believe that the number of nurses in tertiary ICUs is important for patient prognosis and that further studies are needed for international standardization.

Globally, mortality rates of patients admitted to ICUs have decreased over the last two decades [17]. This is remarkable considering the age of critically ill patients upon hospital admission, the number of comorbidities, and disease severity [32]. The mortality rate per ICU determined in our study was 18.6%, similar to previously published studies [33]. Additionally, the average ICU stay per unit was 5.76 ± 5.84 days, consistent with previous studies [34]. Although the average data in our study were compatible with the literature, there were statistically significant differences when evaluated among the hospitals. At the same time, a significant number of intensive care patients are transferred between clinics and hospitals. It is estimated that 11,000 patients (a referral rate of 6.5% per unit) are transferred between hospital ICUs in the United Kingdom annually [35]. A significant portion of interhospital transfers occurs simply due to insufficient resources (number of beds, nurses, and staff) rather than the need to access a specific service unavailable in the referring unit [36]. In our study, the referral rate per unit was 2.3%. The most common reason for referral was a lack of hospital resources for palliative care. Our referral pattern is compatible with the literature. We attributed the low referral rate in our study to the fact that the hospitals included in this study were training and research hospitals and, therefore, had the best facilities. In addition, when patient admission and referral rates were compared between hospitals, the tertiary ICUs of the CHs were the most self-sufficient units, as they had the lowest referral rates to external centers and the lowest number of patients admitted from external centers, despite their high occupancy rates. We attributed this to the recent opening of the CHs and the fact that they have a higher number of tertiary ICUs
due to their larger hospital areas. Given these referral rates, we believe that the number of hospital-based ICUs will continue to increase in the future due to demand.

PIs are among the most common health conditions worldwide, comprising local tissue damage caused by the compression of underlying tissues [37]. As reported by the Agency for Healthcare Research and Quality, approximately 2.5 million people are affected by PIs yearly, and more than 60,000 patients die from direct PIs each year in the United States alone [38]. The incidence of PIs is high in frail, elderly, bedridden, and malnourished patients, in patients nursed in prone positions, and in patients who are unable to care for themselves, particularly the critically ill [39]. Many studies have been conducted on ICU risk factors. However, the relationship between PIs and critical illness has not been fully elucidated [5]. There are many predictive scales for essential prognostic factors in intensive care patients, such as mortality and delirium prediction scales [6,40], but there is no PI risk assessment and prediction scale used explicitly for intensive care patients [41]. In 2022, an extensive systematic review and meta-analysis conducted by Wen Tang et al. demonstrated that the APACHE II scoring system was a significant determinant of PIs in the ICU and found a high correlation between a high APACHE score and the development of PIs in the ICU [5]. Our study examined the relationship between PIs and unit-based data. We found a linear relationship between the number of tertiary ICU beds, bed occupancy rate, patient LOS, emergency department admission rate, extended-stay patient rate, number of patients per nurse, and the rate of patients with PIs. A statistically significant inverse relationship existed between the ward admission rate, the external center admission rate, and PI incidence. There was a significant correlation between the number of patients per nurse and the development of PIs. In addition, ICU mortality increased as the number of patients with PIs increased. In light of these results, we can say that staff and unit conditions in tertiary ICUs are closely related to inpatient prognosis. Studies on unit-based data and PIs are limited in the literature, and we believe extensive prospective studies are needed.
Our study had several limitations. First, it examined data from tertiary ICUs of TRHs in Ankara, which operated under the same conditions and used the same database; data from university hospitals and private hospitals were unavailable. However, because the study covers 74% of the tertiary intensive care beds in the region, its results can be considered representative of the general population. Second, data on the number of doctors and other staff working in the ICUs were not available. The number of doctors could affect ICU mortality rates, and the number of other active staff members may play a role in the development of PIs. Third, the data were obtained from hospital tertiary ICU databases and their accuracy was deemed reliable, but individual patient records were not assessed. Given the 17-month follow-up period and the analysis of an average of 48 tertiary ICUs per month, the results were likely only minimally affected by potential data bias. Another limitation is that nursing workload was measured only as the number of patients per nurse; prospective observational studies using other scoring systems, such as the Nurse Activity Index, are needed for a more comprehensive evaluation. Additionally, this study did not account for the patients' case mix on admission to the ICUs or for whether they were receiving respiratory support, factors that could affect ICU workload and ultimately patient prognosis.

Conclusions

Tertiary care ICUs of training and research hospitals are units with high patient volumes, ample facilities and resources, and better patient outcomes. Although these units share many standard features, the organization and delivery of intensive care services vary: resource use and inpatient prognosis differ even among these units, which represent the most advanced level of care.

Many highly validated scoring systems are used in the literature to predict prognosis in ICUs, yet none of them include ICU and staffing conditions. Although these scoring systems perform well on aggregate data, they have shortcomings when examined on a unit basis. ICU conditions have a significant impact on patient prognosis: occupancy, work intensity, the patient population, and the number of nurses working in the ICU are important mortality factors. In particular, the patient population admitted to the unit (emergency admissions and a history of malignancy) is the factor most strongly associated with unit mortality. At the same time, the development of PIs, itself associated with ICU mortality, is closely related to ICU and staffing conditions. As the population grows and the need for intensive care continues to increase, ever larger and more technologically advanced centers are being established; international standardization of their working conditions is essential. Further prospective studies examining the effect of unit-related conditions on mortality are needed.

Table 1. Comparison of demographic characteristics of patients according to hospitals.
Categorical variables are expressed as frequency (n) or percentage (%) and were compared using Pearson's chi-square test or Fisher's exact test. † Continuous variables are expressed as median (Q1-Q3) and were compared with Welch's or Fisher's one-way ANOVA or the Kruskal-Wallis test. * Statistically significant p-values are shown in bold. N: total number of inpatients.

Table 2. Comparison of clinical data of hospitals. Continuous variables are expressed as median (Q1-Q3) and were compared with Welch's or Fisher's one-way ANOVA or the Kruskal-Wallis test. * Statistically significant p-values are shown in bold. Significance values were adjusted with the Bonferroni correction for multiple comparisons; when comparing eight hospitals, p-values below 0.006 were considered statistically significant.

Table 3. Relationship (correlation) between mortality rate and variables.

Table 4. Relationship (correlation) between the proportion of patients with pressure injuries and the variables. * Rate of patients staying in the ICU for more than 15 days. Relationships between significant values were evaluated with the Spearman correlation test. ** Statistically significant p-values are shown in bold. ICU: intensive care unit, R: correlation coefficient.
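For the multiple-comparison threshold quoted in the Table 2 footnote, the arithmetic is simply the nominal alpha divided by the number of comparisons; a one-line check:

```python
# Bonferroni adjustment from the Table 2 footnote: comparing eight hospitals
# at a nominal alpha of 0.05 gives a per-test threshold of 0.05 / 8.
alpha, n_comparisons = 0.05, 8
print(f"adjusted threshold = {alpha / n_comparisons:.4f}")  # 0.0062, i.e. ~0.006
```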
A review on friction stir welding of aluminium alloys and the effects of tool geometry

Friction stir welding (FSW) is a novel technique used to join similar and dissimilar alloys. Aluminium alloys of the 2XXX, 6XXX and 7XXX series are reviewed in this paper. In the FSW process, heat is generated by friction and the workpieces are joined without melting the aluminium alloys; the tool traverses the softened material to form the joint. Tool materials, tool geometry, welding speed, rotational speed and axial force are examined, and their effects on the welded surfaces are analysed. Friction stir welding can be widely used in the aviation, marine and vehicle body construction industries because of the mechanical properties it produces, whereas fusion welding tends to introduce defects in the materials to be joined. This review article explicates recent research on friction stir joining of aluminium alloys and its applications in various industries, including the weld quality of aluminium alloys, the evolution of microstructure in weld nuggets, and tool wear. The review concludes by unequivocally recommending aluminium alloys for future research, given their numerous industrial applications.

Introduction

Friction stir welding is a technique used to join aluminium alloys, invented by The Welding Institute (TWI), UK, in 1991. In the FSW technique, a rotating non-consumable welding tool generates heat between the workpieces through friction and causes deformation at the weld nugget. FSW has vast applications in the aircraft, shipbuilding, automobile body construction and railway industries, and fundamental studies on rotational speed, weld nugget formation, microstructure, mechanical properties and process parameters have therefore been carried out in recent years. In conventional fusion welding of aluminium, oxide layers form readily owing to the high coefficient of thermal expansion, high thermal conductivity and shrinkage during solidification; oxide formation at the weld nugget cannot be avoided, and the mechanical properties of the workpiece surface are not improved. To overcome this problem, a solid-state welding technique such as FSW is used to join Al alloys. Furthermore, FSW has certain advantages over conventional welding methods because it reduces distortion and residual stresses in the weld nugget.

Principle of the FSW Technique

Friction stir welding is a solid-state welding process in which a cylindrical, shouldered tool rotates on the surface of the workpieces to be joined. The workpieces are firmly clamped in the fixture of the FSW machine, and an axial force is applied to the welding surface as the tool rotates. A non-consumable tool joins the workpieces without melting them: the workpiece material is plasticized by the heat generated through friction.

Microstructure and Mechanical Properties of FSW Joints

In the automobile, aerospace and railway industries, weight reduction is important when fabricating heavy machine parts because of limits on the strength-to-weight ratio. Owing to this requirement, materials such as copper alloys, steels, wrought iron, titanium and magnesium alloys are being replaced by aluminium and its alloys. Thomas et al. [1] investigated the microstructure, hardness and tensile strength of the aluminium alloys AlMg4.5Mn0.7 (AA5083) and AlZn6Mg2Cu (AA7075).
During hot working, the microstructures of the two aluminium alloys are similar, while the hardness varies in alloy AA5083. The mechanical properties of the welded 6.0 mm AA5083 and AA7075 alloys reach 100% and 72% of the base material values, respectively. After welding, the grain structure of the aluminium alloys improves drastically owing to recrystallization of the microstructure. Liu et al. [2] investigated the microstructure of AA6061-T6 welded at rotational speeds from 300 to 1000 rpm and traverse speeds of 0.15 to 0.25 cm/s. Transmission electron microscopy (TEM) and light microscopy were used for microstructure analysis, and the results were divided into two phases; across the two phases, the residual hardness varied between 55 VHN and 65 VHN. In the weld nugget the grain size is approximately 10 μm, compared with approximately 100 μm in the base workpiece. Residual hardness improved in both phases after solidification, but the effect on microstructure at low temperature was not investigated. Z. H. Fu et al. [3] studied dissolution in the weld nugget and its effect on microstructure when joining aluminium alloys at low temperature, which eliminates a major problem of conventional welding processes; the welding was performed in the presence of an inert gas, which drastically reduces oxidation. By varying the rotational speed in friction stir welding, the hardness of the weld nugget was improved, as confirmed by microhardness testing. P. Cavaliere et al. [4] investigated the mechanical properties and microstructure of AA6056 at rotational speeds of 500, 800 and 1000 rpm and welding speeds of 40, 56 and 88 mm/min; the joints were characterized by microhardness testing. Residual microhardness is an important parameter during welding, and the grain structure also improves after solidification. Li et al. [5] studied the residual microstructures of AA2024 and AA6061. Superplastic flow was revealed by residual, equiaxed grains ranging from 1 to 15 μm, with a 40% reduction in residual microhardness in AA6061 and a 50% reduction in AA2024. However, post-weld treatment has not been examined by many researchers. Rhodes et al. [6] investigated the mechanical properties and microstructural behaviour of AA7075 after post-weld treatment at a speed of 5 m/min; they observed a low dislocation density, a recrystallized nugget and enhanced weld strength after welding. Although hardness and microstructure improve after solidification, the dispersion of other elements across the weld nugget was not examined after post-weld treatment. Dispersion of alloying elements such as Si, Cu and Mg is used to improve hardness during welding. Ghosh et al. [7] studied phase transformation in the weld zone and grain refinement of A356 and AA6061-T6 aluminium alloys, including the dispersion of Si particles in the weld nugget. The liquid-to-solid phase transformation plays a vital role during welding, but the effect of post-weld aging on the base metal in the initial stage was not examined. Yan et al. [8] examined the effect of the base metal tempers T7451 and T62 and studied the post-weld heat treatment of AA7050 aluminium alloy.
The mechanical properties of the 7075 base aluminium increase during the initial temper, and joint strength is also increased by post-weld aging; ultimate tensile strength improves for both similar and dissimilar metals, at both low and high welding speeds. Sakthivel et al. [9] examined the metallurgical and mechanical properties of similar and dissimilar aluminium alloy joints at welding speeds from 50 mm/min to 175 mm/min and found an ultimate tensile strength of 84 MPa at the low welding speed of 50 mm/min and 80 MPa at the higher welding speed of 175 mm/min; the strength relative to the parent metal was thus improved after welding by varying the welding speed. Superplastic behaviour of aluminium alloys also improves the grain structure after welding. S. Benavides et al. [10] investigated low-temperature FSW of aluminium 2024 and studied superplastic behaviour arising from the active formation of superfine equiaxed grains; at low temperature, hardness improved and equiaxed grains developed after solidification, and the extent of the heat-affected zone was recorded during welding. Priya et al. [11] studied the microstructure and metallurgical properties of AA6061 and observed that hardness increased in the weld nugget but was not improved in the HAZ (heat-affected zone). The formation of onion rings and the microstructure of the weld nugget surface have been observed by many researchers, with similar improvements in hardness after welding. Da Silva et al. [12] investigated the microstructural features and mechanical properties of dissimilar AA2024-T3/AA7075-T6 joints, performing hardness and tensile tests; SEM revealed that onion ring formation was negligible. Many researchers have studied aluminium alloys and their compositions, but most analyse the microstructure and metallurgical properties of only one or two similar or dissimilar alloys. Rajakumar et al. [13], by contrast, investigated six aluminium alloys, including the 7XXX-series alloys AA7075 and AA7039, performing microstructural analysis in the weld nugget and evaluating the ductility and tensile strength of the alloys after welding. Leitao et al. [14] examined the mechanical and metallurgical behaviour of AA5182-H111 and AA6016-T4; the grain size of AA5182-H111 was related to the tensile strength and ductility of the alloys, and tensile strength and mechanical properties varied between low and high welding speeds. O. F. Flores [15] studied the tensile characteristics of the weld nugget and observed that tensile strength decreased at the lowest spindle speed. Palanivel et al. [16] investigated the tensile strength of AA5083-H111 and AA6351-T6 joints and obtained defect-free welded surfaces by FSW. Sato et al. [17] studied aluminium alloy 1100 processed by accumulative roll bonding and found the grain structure relatively improved after FSW. Shujun Chen et al. [18] observed the effect on tensile strength in the weld zone and HAZ: the tensile strength of joints welded at 100-400 A improved by 2.74%-7.38%, and at 500 A and 600 A by 17.11%; within the HAZ and weld zone themselves, however, tensile strength was not improved after welding.
Researchers have analysed not only mechanical and metallurgical properties but also concentric formations, cavities and crown smoothness during welding. Lombard et al. [19] investigated the properties of friction stir welded AA5083-H321 aluminium alloy and observed concentric rings in the weld zone, with the nugget width on the order of the pin diameter. The authors of [20] investigated the mechanical and microstructural properties of friction stir welded joints as a function of shoulder geometry, recording that a smooth crown with a fillet and cavity is produced after FSW. The thermo-mechanically affected zone (TMAZ) is influenced by the welding speed, and mechanical properties are also affected by post-weld treatment. Singh et al. [21] studied the mechanical properties and microstructure of 7039 Al alloy joints welded at a rotary speed of 635 rpm and welding speeds of 8 and 12 mm/min; they recorded that the stir zone has coarser grains than the TMAZ, and that the yield and tensile strengths of the friction stir welded joints are relatively improved. Rajamanickam et al. [22] studied the mechanical properties of aluminium alloy AA2014 as a function of tool rotation; analysis of the tensile property data of the joints showed that tensile properties were influenced by both rotational and welding speed. The literature surveyed above analyses mechanical and metallurgical properties, concentric rings, cavities, the HAZ and the TMAZ of friction stir welds, but it considers only similar metals.

Analysis of Tool Geometry

Tool geometry affects the microstructure, heat generation and material flow, and tool life is an important characteristic that depends on the process parameters. Parameters such as axial force, rotation, tool tilt angle, shoulder and pin diameter, and welding speed have been studied, together with the mechanical characteristics of friction stir welded joints and tool life. The effects of the key parameters axial tool pressure (F), rotational speed (N) and traverse speed (S) on weld properties have been studied by several researchers. Acerra et al. [23] investigated AA7075-AA2024 joints and observed that heat generation increases as the shoulder diameter of the tool increases; the major defects in the welding zone and the elements of the coating blank were also analysed. Tool geometry, shoulder diameter and pin profile are thus important parameters that affect the microstructure and mechanical properties. Fonda et al. [24] studied the grain-structure morphology of aluminium alloys in the FSW process and observed that the FSW tool was free from damage after welding and that the grain structure was relatively improved near the welding tool; finer grains developed, and the texture followed the FCC structure during welding. The texture of FSW-welded aluminium alloys has thus been analysed, with an FCC texture forming after welding. Zhao et al. [25] showed that the pin and shoulder play a vital role in material flow, which controls the weld characteristics during joining; FSW weld defects and contours depend on the tool design and geometry.
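The observation above that heat generation rises with shoulder diameter can be made concrete with a textbook sliding-friction estimate for a flat shoulder, Q = (2/3)πμpωR³. The sketch below is illustrative only: the friction coefficient, contact pressure and speed are assumed values, and this is not the specific heat model used in any of the cited studies.

```python
# Sliding-friction estimate of FSW heat input from a flat tool shoulder:
# Q = (2/3) * pi * mu * p * omega * R^3 (integrating mu*p*omega*r over the
# shoulder face). All parameter values below are illustrative assumptions.
import math

mu = 0.4                            # friction coefficient (assumed)
p = 50e6                            # contact pressure, Pa (assumed)
rpm = 1000                          # tool rotational speed (assumed)
omega = 2 * math.pi * rpm / 60.0    # angular velocity, rad/s

for R_mm in (6.0, 8.0, 10.0):       # shoulder radii to compare
    R = R_mm / 1000.0               # convert mm to m
    Q = (2.0 / 3.0) * math.pi * mu * p * omega * R**3
    print(f"shoulder radius {R_mm:4.1f} mm -> heat input ~ {Q/1000:5.2f} kW")
```

The cubic dependence on shoulder radius is the point of the exercise: modest increases in shoulder diameter dominate the heat input, consistent with the trend reported by Acerra et al. [23].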
Material flow during welding is governed by the pin and shoulder diameters. Researchers have also used simulations such as computational fluid dynamics (CFD) and analysis of variance (ANOVA) together with finite element models (FEM) to analyse friction stir welded aluminium alloys. Buffa et al. [26] developed a simulation-based model of the tool geometry in the weld nugget: a three-dimensional finite element model (FEM) with cylindrical geometry was used to analyse the behaviour of the FSW tool over the weld and the resulting microstructure. Mechanical properties and tensile strength have also been analysed with ANOVA. Mohanty et al. [27] investigated tool speed and other geometric parameters for AA1100 aluminium alloys and used ANOVA to assess the mechanical properties; tensile strength increased with increasing rotational and welding speeds. Trivex and Triflute tools have been compared under varying traverse and axial forces, and the mechanical and metallurgical properties of the welded aluminium alloys compared. Colegrove [28] investigated material flow in the FSW process using CFD: Trivex and MX-Trivex tools were used to weld the alloys and the results were compared with Triflute tools. The traverse and axial forces decreased with the Trivex tool and increased with the Triflute tool; CFD analysis was also used for these comparisons. The morphology of welded aluminium alloys varies with tool geometry and with the transverse and longitudinal motions. Mahoney et al. [29] investigated properties as a function of transverse and longitudinal position, analysing tool geometry and microstructure after aging of 7075-T651 Al alloy; the yield and tensile strengths were found to be poor in the HAZ after post-weld treatment. Fujii Hidetoshi et al. [30] studied the microstructure and metallurgical properties after post-weld treatment using a triangular pin profile; the tool profile had less impact at low welding speed. Because strength is relatively decreased in the HAZ, researchers have also attempted to modify the tool shape, with the result that strength in the weld nugget improved after post-weld treatment. Microstructure and mechanical properties are relatively improved by varying the tool diameter, shoulder diameter and pin. Elangovan et al. [31] studied the effect of pin and shoulder diameter in FSW of AA6061 aluminium alloy, joining welds with different shoulder diameters and tool pin profiles; their observations covered macrostructure analysis and transverse tensile properties, and microstructure and mechanical properties improved with appropriate tool shoulder and pin profiles. Optimization techniques such as the Taguchi method have been used to analyse tool-geometry factors. Lakshminarayanan et al. [32] applied the Taguchi method to identify the factors affecting the tensile strength of friction stir welded RDE-40 aluminium alloy and observed that the tensile strength of the weld is strongly influenced by tool geometry; the tensile strength of the FSW-welded alloy was improved. Researchers have also used threaded and unthreaded tools to analyse material flow in the weld nugget after welding. Oliver Lorrain et al.
[33] investigated threaded pins and their industrial applications; because threaded pins lose their profile through tool wear, material flow was analysed for both threaded and unthreaded pin profiles. From the literature surveyed above, researchers have varied the tool shape, shoulder diameter and pin in order to improve the weld nugget, and have analysed the material flow after the FSW process.

Conclusion

In this review, friction stir welded aluminium alloys have been presented, and simulation techniques have been validated against their process parameters. Similar and dissimilar FSW-welded aluminium alloys are used in industrial applications owing to their mechanical properties. Over the past decades, a few researchers have examined dissimilar aluminium alloys and analysed their mechanical and metallurgical properties. In this paper, the grain structure of FSW joints, cavities and the formation of onion rings in the weld nugget zone of 2XXX, 6XXX and 7XXX series aluminium alloys were examined, and SEM analysis revealed the defects in the aluminium alloys. For the future, dissimilar aluminium alloys are recommended to enhance weld quality in industrial and commercial applications.
Bulk nanocrystalline Al alloys with hierarchical reinforcement structures via grain boundary segregation and complexion formation

Grain size engineering, particularly reducing grain size into the nanocrystalline regime, offers a promising pathway to further improve the strength-to-weight ratio of Al alloys. Unfortunately, the fabrication of nanocrystalline metals often requires non-equilibrium processing routes, which typically limit the specimen size and require large energy budgets. In this study, multiple dopant atoms in ternary Al alloys are deliberately selected to enable segregation to the grain boundary region and promote the formation of amorphous complexions. Three different fully dense bulk nanocrystalline Al alloys (Al-Mg-Y, Al-Fe-Y, and Al-Ni-Y) with small grain sizes were successfully fabricated using a simple powder metallurgy approach, with full densification connected directly to the onset of amorphous complexion formation. All the compositions demonstrate densities above 99% with grain sizes of <60 nm following consolidation via hot pressing at 585 °C. The very fine grain structure results in excellent mechanical properties, with nanoindentation hardness values in the range of 2.2-2.8 GPa. Detailed microstructural characterization verifies the segregation of all dopant species to grain boundaries as well as the formation of amorphous complexions, which suggests their influential role in aiding effective consolidation and endowing thermal stability in the alloys. Moreover, nanorods with a core-shell structure are also observed at the grain boundaries, which likely contribute to the stabilization of the grain structure and the high strength. Finally, intermetallic particles with sizes of hundreds of nanometers form. As a whole, the results presented here demonstrate a general alloy design strategy of segregation and a boundary evolution pathway that enables the fabrication of multiple nanocrystalline Al alloys with hierarchical microstructures and improved performance.

Introduction

Al alloys are a class of structural materials widely used in aerospace and gaining growing interest for automotive applications because of a combination of exceptional strength-to-weight ratio, high stiffness, and superior specific strength [1,2,3]. Conventional Al alloys typically have grain sizes in the micrometer range, and common alloying elements include Zn, Mg, and Cu. These alloying elements generally promote the formation of secondary phases to strengthen the materials [4,5], leading to yield strengths that can sometimes exceed 700 MPa. To further improve the strength, grain refinement has been identified as a promising route, since the strength-to-weight ratio can be increased without requiring more alloying elements, which are often heavier than the base Al. For example, Ma et al. [6] showed that, compared to an as-extruded coarse-grained 7075 alloy with a yield strength of 283 MPa, the yield strength of the same alloy composition but with an ultra-fine grain size is 583 MPa. With grain refinement, grain boundary strengthening is the predominant mechanism, as predicted by the Hall-Petch relationship. Consequently, further refinement of grain sizes down to the nanoscale regime may offer a chance for even better mechanical properties. Li et al. [7] sputter-deposited columnar nanotwinned Al-Fe alloy films with an average grain size of ~5 nm and obtained a yield strength exceeding 1.8 GPa, as measured through compression tests on micron-sized pillars.
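The grain-refinement strengthening invoked above follows the Hall-Petch relation, σ_y = σ₀ + k·d^(−1/2). The short sketch below shows the scaling; the constants are assumed round numbers for an Al alloy, chosen only for illustration and not fitted to the cited data.

```python
# Illustrative Hall-Petch estimate, sigma_y = sigma_0 + k * d**-0.5.
# sigma_0 and k below are assumed values in a plausible range for Al alloys;
# they are not fitted to the studies cited in the text.
sigma_0 = 20.0     # friction stress, MPa (assumed)
k = 0.12           # Hall-Petch coefficient, MPa*m^0.5 (assumed)

for d_nm in (5000, 500, 50, 5):    # grain size in nm
    d = d_nm * 1e-9                # convert to meters
    sigma_y = sigma_0 + k * d**-0.5
    print(f"d = {d_nm:>5} nm -> sigma_y ~ {sigma_y:6.0f} MPa")
```

With these assumed constants, the d = 5 nm case lands in the GPa range, the same order as the nanotwinned Al-Fe films of Li et al. [7], which is the qualitative point of the scaling.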
However, despite the promise of outstanding properties, the high density of grain boundaries in nanocrystalline metals and alloys also typically results in poor thermal stability, so that undesired grain growth can easily occur during material processing or in service. Given the low melting point of Al and its alloys, even modest temperature exposures can lead to deleterious microstructural evolution. For example, Ahn et al. [8] investigated the effect of degassing temperature on the microstructure of a nanocrystalline Al 5083 alloy and observed an increase in grain size from 50 nm to 118 nm after degassing at 500 °C for 2 h. Moreover, the grains further coarsened to a mean size of 214 nm after cold isostatic pressing and forging processes. As a result, synthesis of nanocrystalline alloys usually requires far-from-equilibrium processing routes, such as magnetron sputtering, electrodeposition, and pulsed laser deposition techniques, which commonly lead to specimens with small overall dimensions on the micron scale or smaller [9]. For instance, Devaraj et al. [10] employed magnetron sputtering to fabricate coarsening-resistant Al-Mg thin films, yet these materials only had a thickness of ~100 nm. In contrast, powder metallurgy methods can be easily scaled up and should require much less energy compared to techniques such as high-pressure torsion or equal channel angular pressing, which necessitate multiple plastic deformation cycles, each requiring high applied forces. Evidence is building in the literature to support the idea that the most effective method for stabilizing a nanocrystalline grain structure is the addition of dopant elements that can segregate to grain boundaries, which reduces the driving force for grain growth and/or provides kinetic stabilization of the grain structure. The grain boundary energy for a dilute solution is [11]:

γ = γ₀ − Γ(∆H_seg + kT ln X)

where γ₀ is the grain boundary energy of the pure material, Γ is the dopant excess at grain boundaries, ∆H_seg is the segregation enthalpy, kT is the thermal energy, and X is the global dopant content. In order for dopant atoms to segregate to and stabilize grain boundaries, appropriate dopant elements need to be chosen, with previous studies providing guidelines such as large atomic size mismatch with the solvent [12] and, correspondingly, low bulk solubility [13]. Chookajorn et al. [14] built on this idea to create a theoretical framework based on a thermodynamic model to evaluate and design stable binary nanostructured alloys. This model considers two key thermodynamic parameters: the enthalpy of mixing in the crystalline state for the grain interior and the dilute-limit enthalpy of segregation for the grain boundaries. These authors then identified a stable nanostructured alloy system, W-Ti, which was shown to retain a 20 nm grain size even after annealing at 1100 °C for one week, owing to the segregation of Ti to grain boundaries and consequent stabilization. Later work by Murdoch and Schuh [15] employed a Miedema-type model to estimate the grain boundary segregation enthalpy in binary alloy systems, where positive and negative enthalpy values indicated segregation and depletion of dopant atoms at grain boundaries, respectively. The enthalpy calculation was expanded to ~2500 binary alloys, which can serve as a rapid screening tool for the stability of nanostructured alloys. It is worth noting that grain boundary segregants can concurrently provide a kinetic drag term that slows grain growth [16].
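To see how the dilute-solution expression above drives γ toward zero as boundaries are doped, the following sketch evaluates it numerically. All parameter values are illustrative assumptions, not measurements for the alloys studied here.

```python
# Numerical sketch of the dilute-limit grain boundary energy:
# gamma = gamma_0 - Gamma * (dH_seg + k*T*ln(X)).
# Every value below is an assumption chosen only to show the trend that
# higher dopant content X lowers the boundary energy.
import math

k_B = 8.617e-5     # Boltzmann constant, eV/K
eV = 1.602e-19     # J per eV
gamma_0 = 0.5      # GB energy of the pure solvent, J/m^2 (assumed)
Gamma = 1.2e19     # dopant excess at the boundary, atoms/m^2 (assumed)
dH_seg = 0.25      # segregation enthalpy, eV/atom (assumed, positive = segregating)

T = 858.0          # K, roughly the 585 C hot-pressing temperature
for X in (0.005, 0.02, 0.05):   # global dopant content (atomic fraction)
    gamma = gamma_0 - Gamma * eV * (dH_seg + k_B * T * math.log(X))
    print(f"X = {X:5.3f} -> gamma ~ {gamma:5.3f} J/m^2")
```

Because ln X is negative, a larger global dopant content makes the bracketed term more positive and the boundary energy lower, which is the thermodynamic argument for doped-boundary stabilization.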
As a whole, alloying with segregating dopants is now generally accepted as the key process necessary for the creation of thermally stable nanocrystalline alloys. However, the vast majority of previous work has focused on binary systems where only a single segregating species was added. Recent evidence suggests complex segregation behavior in multicomponent systems is ultimately driven by the nature of the thermodynamic interactions between multiple segregating species, the kinetic ability to diffuse to the boundaries, and the nature of site competition at various boundaries [17,18,19,20]. Segregation of dopant atoms can not only alter the chemistry of grain boundaries but also change the local equilibrium structure of these interfaces. A grain boundary can be described as a phase-like entity, known as a complexion, and may transform between different structures as chemistry and/or temperature changes [21,22]. One type of complexion is an amorphous intergranular film, which occurs when a grain boundary undergoes a premelting transition, locally melting below the bulk melting point of the alloy and yet retaining an equilibrium thickness at the boundary. Amorphous complexions have been shown to significantly enhance the thermal stability of nanocrystalline alloys, as Grigorian and Rupert [23] demonstrated that Cu-Zr-Hf alloys with thick amorphous complexions exhibit nanosized grains even after two weeks of annealing at a temperature higher than 95% of the solidus temperature. In addition to improving thermal stability, amorphous complexions have been associated with enhanced diffusion and the phenomenon of activated sintering [24]. For example, in a W specimen that was doped with Ni, Gupta et al. [25] observed the presence of nanoscale amorphous Ni-enriched films at grain boundaries well below the bulk eutectic temperature, which gave rise to enhanced mass transport and consequently solid-state activated sintering. In a different system, Donaldson and Rupert [26] reported that the formation of amorphous complexions in a Cu-4 at.% Zr alloy consolidated from ball-milled powders coincided with a significant increase in the density of bulk samples. In the present study, our goal is to choose alloying elements that can segregate to grain boundaries and form amorphous complexions so that bulk Al-rich alloys with both nanosized grains and high density can be achieved. Three elemental combinations are selected: Al-Mg-Y, Al-Fe-Y, and Al-Ni-Y. Ternary systems are targeted due to recent studies which provide evidence of better thermal stability and thicker amorphous complexions, along with the much slower critical cooling rate needed to retain amorphous complexions compared to binary systems [23,27]. All of the compositions were successfully fabricated into fully dense bulk pellets with a diameter of ~1.4 cm and height of ~1 cm using a simple and scalable powder metallurgy approach. Moreover, the bulk samples showed full densification to >99% density and retained a grain size of ~50 nm under proper hot pressing conditions. Microstructural characterization reveals a hierarchical structure containing a nanocrystalline Al-rich grain structure and additional reinforcing phases with different structures and characteristic length scales. On a very fine scale, complete segregation of dopant atoms to grain boundary regions and formation of amorphous grain boundary complexions were observed, with the latter feature being connected to activated sintering and mechanical behavior.
Nanorods with a core-shell structure were observed to nucleate at grain boundaries as well, providing a reinforcing phase on the nanometer scale. The internal structure of the nanorods is amorphous, and clear partitioning of Y to an outer shell layer is observed. On a larger length scale, intermetallic particles with sizes of a few hundred nanometers were uniformly distributed within the microstructure. The end result is that the Al alloys reported here readily consolidate from powder form into bulk samples and possess a hierarchical nanostructure that both stabilizes the grains and results in high mechanical strength, all of which appear to be consequences of segregation-mediated complexion formation. The present study offers a simple approach to synthesize fully dense bulk nanocrystalline Al alloys via engineered grain boundaries.

Materials selection

In the present study, the goal is to fabricate bulk nanocrystalline Al-rich ternary alloys with both full density and small grain size by employing grain boundary segregation of dopant elements expected to stabilize the grain size and promote the formation of amorphous complexions. Therefore, appropriate selection of dopant species is a crucial step. First, positive segregation enthalpies are desired, indicating that dopant atoms tend to segregate to grain boundaries. In addition, limited solubility is beneficial as another sign that most of the dopant atoms will not dissolve in the Al matrix and instead will segregate to grain boundaries. Next, the pair-wise enthalpy of mixing should be negative so that atoms of different elements are likely to mix with each other, as this has been shown to be critical for the formation of amorphous complexions as proposed by Schuler and Rupert [28]. Finally, large atomic size mismatch is another important factor, in analogy to bulk metallic glasses, as a significant difference in atomic sizes among the constituent elements has been shown to be beneficial for a larger extent of dense randomly packed atomic configurations [29]. Due to the similar structural disorder in both metallic glasses and amorphous complexions, the criteria for metallic glasses can help serve as a guide for amorphous complexion design. Based on the above factors, three alloy systems of Al-Mg-Y, Al-Fe-Y, and Al-Ni-Y were chosen, with roughly 2 at.% used for each dopant element. Each system includes one alkaline-earth/transition metal and one rare earth metal to maximize the atomic size mismatch [30,31], while the selection of Mg, Fe, and Ni allows for a range of common metals to be studied. In addition, all dopant elements have limited solubility in Al [32,33,34,35], and Mg, Fe, and Ni have been shown to segregate to grain boundaries in Al alloys [36,37,38]. Although the segregation of Y to grain boundaries in Al has not been reported directly to date, grain boundary segregation of a similar rare-earth element, Ce, and complexion formation were observed in a ternary nanocrystalline Al alloy [36]. Computational studies have shown that pair-wise mixing enthalpies are negative in all of the systems [39,40,41].

Materials fabrication

To synthesize the nanocrystalline alloys, powders of the elemental constituents, including Al (Alfa Aesar, 99. [purity as reported]), were ball milled for 10 h in a SPEX SamplePrep 8000M high-energy ball mill using a hardened steel vial and milling media. All milling processes were conducted in a glovebox filled with Ar gas and with an oxygen level lower than 0.05 ppm, in order to avoid oxidation during milling.
A ball-to-powder weight ratio of 10:1 was used with 3 wt.% stearic acid as a process control agent to prevent excessive cold welding. As an initial test of each alloy's thermal stability, the as-milled powders were encapsulated under vacuum in quartz ampules and annealed at either 270 °C or 540 °C for 1 h in a tube furnace (MTI International GSL-1100X-NT). Each sample was then dropped into a water reservoir to quench as fast as possible. For fabrication of bulk specimens, the as-milled powders were loaded into a ~14 mm inner diameter graphite die and then consolidated into cylindrical bulk pellets using an MTI Corporation OTF-1200X-VHP3 hot press. This system comprises a vertical tube furnace with a vacuum-sealed quartz tube and a hydraulic press. The powders were first cold pressed for 10 min under an applied load [42]. The heating rate to reach the target THP was 10 °C/min. Finally, the pellets were naturally cooled down to room temperature, which typically took more than 4 h. For the investigation of microstructural features representative of the high temperature state, one Al-Ni-Y pellet that was originally hot pressed at 540 °C and then naturally cooled down to room temperature was subsequently annealed again at 540 °C for 10 min. This specimen was then very quickly quenched by being placed on an Al heat sink sitting in liquid nitrogen, so that the high temperature structure could be frozen in and studied. For the remainder of the paper, the hot-pressed and then naturally cooled pellets will be referred to as naturally cooled samples, while the annealed and subsequently quenched sample will be referred to as the quenched sample.

Materials characterization

The cylindrical pellets were cut into two equal semicylinders using a low-speed diamond saw so that the cross-sectional plane could be studied. All cross-sectional surfaces were first ground with SiC grinding paper down to 1200 grit and then polished with monocrystalline diamond pastes down to 0.25 μm. For both powders and bulk pellets, X-ray diffraction (XRD) scans were collected using a Rigaku Ultima III X-ray diffractometer with a Cu Kα radiation source operated at 40 kV and 30 mA and using a one-dimensional D/teX Ultra detector. Phase identification, grain size analysis, and weight fraction calculation from the XRD scans were all carried out using Rigaku PDXL software, which is an integrated powder X-ray analysis software package. Grain size measurements were performed using the Halder-Wagner method [43] with a LaB6 calibration file used as an external standard, and weight fraction calculations were performed via Rietveld refinement [44]. Mechanical properties of the bulk alloys were investigated by nanoindentation experiments using a Nano Indenter G200 (Agilent Technologies). For each sample, at least 50 indentations were performed with a maximum penetration depth of 400 nm and a constant indentation strain rate of 0.05 s⁻¹. The distance between the two nearest indents was 12 μm, approximately 30 times the penetration depth, so that interference could be avoided [48]. The final hardness value presented is an average of all tests for each sample, and the error bars represent the standard deviation.

Sample density and thermal stability

The chemistry and structure of the as-milled powders were first characterized, as well as the general stability of these alloy powders against grain growth (Table 1). Namely, after the annealing treatments, the Al-rich FCC grains in Al-Fe-Y and Al-Ni-Y both grow to 34 nm, while those in Al-Mg-Y increase to 39 nm.
Therefore, all three alloy systems show very small grain sizes for the as-milled powders and thermal stability that is comparable to or higher than other studies. For example, Hanna et al. [49] reported that cryomilled Al-5083 alloy powders that were stabilized using diamantane molecular diamonds exhibited an average grain size of 22 nm in the as-milled condition, which is twice as large as the as-milled Al-Fe-Y and Al-Ni-Y powders here. The grain size of the Al-5083 powders roughly remained the same or increased to ~52 nm after 1 h annealing at 300 °C or 500 °C, respectively, similar to the trend observed here but indicating more growth. The nanocrystalline powders were next consolidated into pellets at different THP, with the densification behavior pointing to activated sintering. Prior studies [25,51,52] confirmed such predictions by using high resolution TEM to reveal the existence of amorphous complexions well below the bulk eutectic temperature in various systems, which are responsible for activated sintering. In another study, Donaldson and Rupert [26] reported a dramatic increase in the density at 850 °C (~0.83 of the melting temperature of pure Cu) for a Cu-4 at.% Zr alloy, which coincides with the formation of amorphous complexions as observed using high resolution TEM. In the present study, the significant increase in pellet density for both Al-Fe-Y and Al-Ni-Y also occurs within a homologous temperature range from 0.77Tm to 0.82Tm and is most likely associated with amorphous complexion formation, a hypothesis which will be tested directly in Section 3. After verifying that all systems can be consolidated into fully dense bulk pieces, the next step is to evaluate the resulting grain size and its evolution during consolidation. The grain size distributions (Figure 3) are all well-described by a log-normal distribution, meaning that no abnormal grain growth was detected. Overall, Figures 2 and 3 show that all three alloys were successfully fabricated into bulk pieces with both small grain size (<60 nm) and high density (>99%), a combination of smaller grain size and higher density than in previous reports. Moreover, a much simpler processing route, one that requires less equipment (only a high-energy ball mill and a hot press) and less energy, and is innately scalable, is employed here. For example, it requires much less energy and pressure to hot press than to torsion strain under a pressure of ~5 GPa to a torsional strain of ~7, as reported in Ref. [55]. The key to the exceptional combination of density and grain size is the alloy selection and judicious processing pathways that promote grain boundary enrichment and the formation of amorphous complexions, which suggests that a simple yet effective processing route can be adopted by choosing proper dopants (Section 2.1). To verify the existence of these interfacial structures, the local grain boundary environment is investigated in the next section.

Grain boundary chemistry and structure

Although all dopant species were chosen because they were expected to segregate to grain boundaries, this hypothesis is tested by investigating the spatial distribution of dopants. The pellets with the lowest THP were first selected for study, before moving on to higher THP. Figure 5 shows the spatial distribution of the dopant elements. Y has a large atomic size mismatch with Al [60], meaning that Y atoms will prefer locations with a higher free volume, such as the grain boundary region.
The stronger segregation tendency of Mg compared with the other two metal elements is consistent with the calculations of Murdoch and Schuh [15] showing that the segregation enthalpy of Mg in Al is more positive than that of Fe or Ni in Al, and therefore Mg should be a stronger segregant in an Al matrix phase. In addition, the pair-wise mixing enthalpy between Mg and Y is negative (Table 2), suggesting that Mg atoms also prefer to be next to Y atoms. These synergistic effects in segregation and co-segregation tendencies for the alkaline-earth and transition metal elements in the ternary Al alloys could provide an explanation for the earlier onset temperature for activated sintering in Al-Mg-Y as compared with Al-Fe-Y and Al-Ni-Y. In the present study, the dopant species were selected to promote grain boundary segregation in service of forming amorphous complexions once the boundaries were chemically enriched. Therefore, the next step is to examine the grain boundary structure in detail, and the Al-Ni-Y pellets are chosen for more in-depth inspection. From Figure 1, the temperature required for amorphous complexion formation in Al-Ni-Y is hypothesized to be between 449 °C and 495 °C, corresponding to a homologous temperature range between 0.77Tm and 0.82Tm, as the density increased more than two orders of magnitude within this regime, so two THP values above the observation of activated sintering were selected (495 and 540 °C). Amorphous complexions were indeed observed in naturally cooled samples consolidated at these temperatures, with these features outlined by dashed lines in Figures 6(a) and (b), which show two representative amorphous complexions for each THP. Care was taken to ensure that the grain boundary region was viewed in an edge-on condition by confirming that the complexion thickness does not vary in both under- and over-focused imaging conditions [61]. These images clearly demonstrate the disordered nature of the complexions and the crystalline structure of the grains (denoted "G1" and "G2" in Figure 6) adjacent to the complexions. The thickness of the amorphous complexions shown here is ~2-3 nm, although variations from boundary to boundary have been reported in previous work [23,27,62] and would be expected in our alloys as well. However, a detailed description of the thickness distribution is not the focus of the present study. The existence of the amorphous complexions in the naturally cooled sample (cooled over ~4 h) further verifies the robust alloy selection in the present study, as fast quenching is typically required to kinetically freeze the high-temperature interfacial structures [62]. Although amorphous complexions are stable at high temperatures, they would be metastable at room temperature and therefore could transform back to an ordered state during a slow cooling process, similar to the crystallization behavior of metallic glasses. In the metallic glass community, early fabrication techniques involved rapid solidification methods with characteristic cooling rates in the range of 10³-10⁶ K/s so that crystallization could be avoided. The resistance to nucleation and growth of crystalline phases in an undercooled melt has been related to the glass forming ability of the material by Turnbull [63], and metallic glasses with good glass forming ability can remain amorphous even under very low cooling rates.
Examples of good metallic glasses include Zr41.2Ti13.8Cu12.5Ni10.0Be22.5 with a critical cooling rate of 10 K/s or less [64], Mg65Cu25Gd10 with a critical cooling rate of ~1 K/s [65], and Pd43Ni10Cu27P20 with a critical cooling rate of 0.4 K/s [66]. The finding of relatively stable amorphous complexions in the present Al-rich ternary alloy systems is consistent with the prior report of Grigorian and Rupert [27], where the critical cooling rate for the amorphous-to-ordered complexion transition in ternary Cu-rich (Cu-Zr-Hf) alloys was found to be much lower than that for a comparable binary (Cu-Zr) alloy. In the present study, the critical cooling rate for the Al-rich ternary systems is evidently even slower than that for the Cu-Zr-Hf in Ref. [27], since amorphous complexions still exist within the Al alloys even after a slow (~4 h) cooling process. In the quenched sample, complexions of ~9 nm and ~7 nm thickness were observed, and the 9 nm film is on the order of the thickest example observed in a Cu-Zr-Hf alloy [27]. The observation of thicker complexions after quenching confirms that they evolve during slow cooling [67].

Multi-scale precipitates

The amorphous complexions observed in the alloys are restricted to nanoscale-thickness films along the grain boundaries. Inspection with TEM shows that additional microstructural features with nanoscale dimensions and elongated shapes also exist, which will be termed nanorods here. Above a critical THP, nanorods begin to form in all alloy systems. The evolution of these nanorods in the Al-Ni-Y system is shown in Figure 8. At the lowest THP (Figure 8(a)), no nanorods are observed either in the grain interior or at grain boundaries. We note that the FCC grain size is approximately 20 nm, confirming the XRD result in Figure 2. As THP is raised to 495 °C (Figure 8(b)), nanorods begin to form at grain boundaries with a length of ~10-20 nm and a width of a few nanometers. When THP is further increased to 540 °C, the size of these nanorods increases to a length of 20-30 nm and a width of ~3-5 nm. The nanorods begin to form at grain boundaries and are first observed in these regions, but are not restricted by the adjacent grains. This can be seen in Figure 8(c), where the nanorod enclosed by the yellow square grows into an adjacent grain in the magnified view. This behavior is very different from that of amorphous complexions, which are restricted along grain boundaries, and a detailed comparison between the nanorods and amorphous complexions will be discussed below. Finally, at the highest THP of 585 °C (Figure 8(d)), the nanorods grow to ~50 nm in length, while the width stays below 10 nm. Therefore, with increasing THP, the size of these nanorods increases. As the nanorods grow, more and more of them extend into the grain interior. It should be noted that since no preferred crystallographic textures exist in the present alloy systems, these nanorods most likely orient randomly. In addition, the rod-like shape is the only one observed for dozens of features investigated across the multiple alloys and samples. High resolution TEM was next used to investigate the internal structure of the nanorods. The nanorods share several similarities with the amorphous complexions discussed earlier (Figures 6 and 7). First, both nucleate at grain boundaries once they are chemically enriched owing to segregation of the alloying species. Second, both the nanorods and amorphous complexions have disordered structures. Finally, the dimensions of both are smaller in the naturally cooled state than in the quenched state, suggesting that both structures prefer a larger equilibrium size at higher temperature.
However, there is also a major difference between the nanorods and amorphous complexions. Unlike the amorphous complexions, which remain strictly bounded by adjacent crystalline grains, some of the nanorods grow into the grain interior (Figure 8). Since the various grain boundary structures observed here presumably form due to grain boundary segregation of dopant elements, the distribution of dopant atoms is studied next. STEM-EDS was performed on the three alloys with THP = 540 °C, and the corresponding elemental mapping is presented in Figure 10. However, in contrast to the grain boundary dopant clusters reported in Ref. [68], the interior of the nanorods in the present study shows enrichment of elements different from those on the edge, as Al and C clearly segregate into the nanorod interior. The C in the system likely comes from the addition of stearic acid during ball milling and can be considered an unintentional impurity. Therefore, the nanorods have a core-shell structure, with the core composed of Al and C and the edges being Y-enriched. Due to their small dimensions and disordered structure, neither the amorphous complexions nor the core-shell nanorods are detected by XRD scans. However, XRD measurements do show that secondary phases emerge for all alloys during hot pressing, with their weight fractions as a function of THP shown in Figure 11. For all alloy systems, secondary phases are already present at the lowest THP; the relevant pair-wise mixing enthalpies are listed in Table 2. Overall, Al-Mg-Y has the least driving force to incorporate Mg into the intermetallic phase, since ∆Hmix(Al-Y) [39] is much more negative than ∆Hmix(Al-Mg) and ∆Hmix(Mg-Y) [39], while the pair-wise enthalpies between Al and the dopant elements are more comparable in the other two alloy systems. Moreover, the grain boundary segregation tendency of Mg is stronger than that of Fe and Ni (Figure 5), so fewer Mg atoms are available to potentially form intermetallic phases. Figure 11(b) presents TEM selected area diffraction patterns for the highest THP (585 °C), where the existence of all secondary phases from XRD is verified. The intermetallic grain sizes measured by XRD are on the order of tens of nanometers, comparable to the Al-rich FCC phase, but this technique can only accurately measure crystal size for nanoscale features, and it is possible that larger precipitates are present (where XRD peak broadening would be undetectable). To understand the overall size distribution and spatial distribution of these intermetallic phases, backscattered electron (BSE) imaging was employed. When THP increases, some intermetallic particles grow, with the largest diameter being approximately 5 µm. However, most of the intermetallic particles remain small, with sizes of a few hundred nanometers, and the shape of the intermetallic particles does not change with increasing THP. Figure 13 shows cumulative distribution functions of the intermetallic particle size for all three alloys. For Al-Mg-Y, the whole distribution shifts to larger particle sizes with increasing THP, while the curve corresponding to Al-Fe-Y also moves to the larger size regime but to a lesser extent. In contrast, for Al-Ni-Y, only the extreme large tail of the distribution changes as THP increases, indicating that only a few Al19Ni5Y3 particles grow while most of the particles remain the same size. Therefore, the intermetallic particle size is more stable in Al-Ni-Y than in Al-Mg-Y and Al-Fe-Y.
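The cumulative-distribution comparison described above (Figure 13) can be reproduced with a few lines of analysis code. The particle sizes below are synthetic placeholders; the study's actual distributions came from BSE image measurements.

```python
# Sketch of an empirical CDF comparison like Figure 13, using synthetic
# lognormal particle diameters (nm) standing in for BSE measurements at
# two hot-pressing temperatures. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
sizes_low_T = rng.lognormal(mean=np.log(200), sigma=0.5, size=500)
sizes_high_T = rng.lognormal(mean=np.log(300), sigma=0.6, size=500)

def ecdf(x):
    """Return sorted values and their empirical cumulative probabilities."""
    xs = np.sort(x)
    return xs, np.arange(1, len(xs) + 1) / len(xs)

for label, s in [("low THP", sizes_low_T), ("high THP", sizes_high_T)]:
    xs, ps = ecdf(s)
    print(f"{label}: median ~ {np.median(xs):.0f} nm, "
          f"95th percentile ~ {np.percentile(xs, 95):.0f} nm")
```

Comparing the tails of such distributions, rather than only the means, is what distinguishes the Al-Ni-Y behavior (only the extreme tail shifts) from the whole-distribution shifts seen in Al-Mg-Y.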
Even though the average sizes of these intermetallic particles are only a few hundred nanometers, they are much larger than the grain size of the Al-rich FCC phase. For nanocrystalline alloys, a kinetic approach to stabilizing grain size is to pin the grain boundaries with precipitates, often referred to as Zener pinning [71]. According to Zener pinning theory, the maximum diameter of pinning particles that are effective for stabilizing grains is d = (3/4)D·f, where D is the grain size and f is the volume fraction of pinning particles [72]. Since the volume fraction of pinning particles cannot be larger than 1, particles comparable in size to, or much smaller than, the matrix grains are required to limit grain growth effectively. For example, Praveen et al. [73] fabricated a nanocrystalline CoCrFeNi high entropy alloy by mechanical alloying and spark plasma sintering, which exhibited high thermal stability after annealing at 900 °C for 600 h. One of the dominant factors contributing to the good thermal stability in Ref. [73] was the Zener pinning effect from fine Cr-rich oxide precipitates with a size of approximately 30 nm, much smaller than the FCC matrix grains with an average size of 130 nm. In contrast, in the present study the intermetallic particles are much larger than the Al matrix grains, so it is unlikely that these intermetallic particles contribute significantly to stabilizing the nanosized grains.

Mechanical behavior of nanocrystalline Al with hierarchical reinforcements

A hierarchical microstructure (including amorphous complexions, nanoscale amorphous core-shell precipitates, and intermetallic particles with sizes of a few hundred nanometers) was observed within the bulk nanocrystalline alloys studied here, which we hypothesize will give rise to excellent mechanical properties. To test this, nanoindentation tests were employed, and Table 3 lists the measured hardness values. Prior work has reported that the strengthening effect from incoherent twin boundaries is equivalent to that from high-angle grain boundaries; therefore, the higher hardness of Al-Mg-Y could possibly come from nanoscale twin boundaries. While we did not observe nanotwins in our microstructures, it is difficult to unequivocally rule out their existence in limited quantities. In addition, the different intermetallic phase in each system may also contribute to the varying hardness values. A comparison of hardness values of Al alloys from both the present work and other studies from the literature is shown in Figure 14. In general, all three of our alloys fabricated via a simple powder metallurgy approach exhibit hardness comparable to or higher than the other Al alloys, which range from nanocrystalline to micron-sized grains synthesized using different techniques. In fact, the Al-Mg-Y alloy in this work is the absolute hardest at 2.77 GPa. Youssef et al. [76] employed an in-situ mechanical alloying consolidation technique to fabricate a single-phase nanocrystalline Al-5 at.% Mg with a reported average grain size of 26 nm and hardness of 2.30 ± 0.19 GPa. That hardness value is four times higher than that of a conventional polycrystalline Al 5083 alloy with an average grain size of 5.5 μm and hardness of 0.57 GPa, which was mainly attributed to grain size refinement in Ref. [76]. Compared to the present ternary Al-Mg-Y with THP = 585 °C, the hardness of the binary Al-Mg is significantly lower even though that material had a smaller grain size (as no high temperature was involved during the consolidation process).
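Returning to the Zener criterion quoted above, a quick calculation makes the argument quantitative and shows why the few-hundred-nanometer intermetallics cannot be the stabilizing agent for ~50 nm grains; the volume fractions below are assumed for illustration only.

```python
# Worked example of the Zener criterion from the text, d = (3/4) * D * f:
# the largest particle diameter that can still pin grains of size D.
# D matches the ~50 nm Al grains reported here; f values are assumptions.
D = 50e-9                        # matrix grain size, m
for f in (0.05, 0.10, 0.20):     # assumed pinning-phase volume fractions
    d_max = 0.75 * D * f
    print(f"f = {f:.2f} -> max effective pinning diameter ~ {d_max*1e9:.1f} nm")
# Even at f = 0.20, d_max is only ~7.5 nm, far below the few-hundred-nm
# intermetallics, consistent with the conclusion that those particles do
# not pin the nanoscale grains.
```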
The improved strength of the Al-Mg-Y presented in this study demonstrates the advantage of adding a rare earth element (or increasing the chemical complexity in general) to a nanocrystalline alloy, which leads to the formation of a hierarchically reinforced microstructure. Rajulapati et al. [77] synthesized nanocrystalline Al-W alloys containing up to 4 at.% W using ball milling followed by hot compaction at 573 K. The resulting alloy contained two phases, an Al phase and a W phase, both with average grain sizes of 34 ± 2 nm. The individual crystallites of the W phase were small in this material, but the W particle size (i.e., a region of material containing one or more aggregated W grains) ranged from a few tens of nanometers up to larger than 500 nm. The hardness of the Al-W alloy increased from 0.94 GPa with no W addition up to 1.23 GPa with 4 vol.% W, which was mainly attributed to Orowan hardening from the smaller W particles in the system. In the present study, the size of the core-shell nanorods is smaller than the Al matrix grain size, and the nanorods may therefore also contribute an Orowan hardening effect to the strength. In addition, in Ref. [68], clusters of dopant atoms (Zn, Mg, and Cu) at the grain boundary region were suggested to play an important role in strengthening a 7075 Al alloy; those enriched regions have dimensions similar to the present amorphous core-shell nanorods, again pointing to a strengthening effect of the nanorods. Moreover, in the present study, nanoindentation tests were also performed on the annealed and quenched Al-Ni-Y sample, which exhibits an average grain size of ~100 nm. The hardness is very close to the value measured for the naturally cooled counterpart, even though the mean grain size of the annealed sample is twice as large, indicating that grain size strengthening is not the dominant strengthening mechanism. As such, it is likely that the primary strengthening effect comes from the nanorod precipitates. In order to design solid-solution-strengthened Al alloys, Hung et al. [78] performed first-principles calculations to identify Al-Ce and Al-Co as promising candidates, and then synthesized the corresponding alloys using arc melting, producing non-equilibrium microstructures through laser surface glazing (these alloys are included in the comparison in Figure 14). Moreover, the total amount of dopant atoms used here is only 4 at.% for each alloy, considerably lower than that (>10 at.%) in Ref. [79], thereby leading to alloys with exceptional strength and thermal stability while preserving the low densities required for advanced light-weight Al alloys.

Conclusions

In the present study, three ternary alloys (Al-Mg-Y, Al-Fe-Y, and Al-Ni-Y) were successfully synthesized into bulk samples with both high density (>99%) and nanocrystalline grains (<60 nm) at a hot pressing temperature equal to 92% of the melting temperature of pure Al. This was accomplished by choosing appropriate dopant species that lead to both grain boundary segregation and amorphous complexion formation. The microstructures corresponding to four hot pressing temperatures for each alloy were examined in detail using XRD and TEM, while mechanical properties were tested using nanoindentation. The following conclusions are drawn:

1) All three alloy systems exhibit a superior combination of small grain size, full density, and bulk size that outperforms previous reports, suggesting the importance of the grain boundary regions in alloy selection.
The addition of a combination of an alkaline-earth/transition metal element (Mg, Fe, or Ni) and a rare-earth element (Y) significantly increases the atomic size mismatch between the three constituent elements, promoting the grain boundary segregation and amorphous complexion formation that enable thermal stability of the grain size and activated sintering. Moreover, the three alkaline-earth/transition metal elements show different segregation behaviors and temperature ranges of activated sintering, suggesting the important role of these metals in controlling grain boundary activity during consolidation.

2) A hierarchical microstructure was observed in all systems, consisting of nanometer-thick amorphous grain boundary complexions, amorphous core-shell nanorods with Y-enriched edges and Al+C-enriched interiors, and intermetallic particles with average sizes of a few hundred nanometers. The core-shell nanorods are observed to nucleate at grain boundaries; they share similarities with amorphous complexions but also possess their own unique properties, such as not being confined by adjacent grains.

3) The retention of the amorphous grain boundary complexions, even with a very slow cooling rate, shows that these features are much more resistant to transitioning back to the ordered grain boundary structures stable at room temperature than the amorphous complexions observed in other systems such as Cu-Zr, Cu-Hf, and Cu-Zr-Hf.

4) The hierarchical microstructure gives rise to exceptional hardness exceeding other reports for bulk Al alloys without sacrificing the light-weight property, since only a total of 4 at.% dopant atoms were added.

The results of this study generally shed light on a design pathway for fabricating large-scale nanocrystalline Al alloys strengthened by hierarchical reinforcements, which have great potential
Trauma, loss and other psychosocial drivers of excessive alcohol consumption in Karamoja, Uganda

This article investigates the trends, drivers and effects of alcohol consumption in Karamoja, a primarily pastoralist area of Uganda. Although locally brewed alcohol from sorghum and millet has an important and long-standing place in Karamojong tradition, the emerging trend of excessive consumption of hard liquor is a cause for concern among government and health officials, development practitioners and, especially, community members themselves. This article explores the varied reasons for this rise in hard liquor consumption, particularly in Karamoja's post-disarmament period. The article is based on data collected in mid-2018, as well as information gleaned from the authors' engagement in the region over the past decade. The peace and security ushered in by the disarmament exercises of the 2000s has, on the one hand, opened up the once isolated region politically and economically. Conversely, it has accelerated external interest in Karamoja's economic wealth, leading to further disenfranchisement of its people due to dispossession of land. Emerging from the trauma of the disarmament exercise, the drastic loss of livestock and livelihoods and the continuing neglect of pastoralism by the state, Karamoja's rural as well as peri-urban communities are undergoing a remarkable loss not only of their economic systems, but also of their socio-cultural identity. Acknowledging the specific trauma and loss experienced by individuals and communities provides a lens through which to better understand the excessive alcohol consumption. These psychosocial factors, along with the economic and political aspects, must be considered in efforts to address this continuing crisis in the region.

Introduction

The use of alcohol among Karamojong communities in northeastern Uganda is a topic of regular and heated discussion among policy-makers and public health advocates alike (Stites 2018). Karamoja, home to predominantly pastoral and agro-pastoral communities, has witnessed a surge in alcohol production, sale and, critically, consumption over the last decade - a cause for concern among nearly everyone who has an interest in the region, but especially for local communities. Narratives on the devastating effects of alcohol use in Karamoja abound in the national news media as well as in technical reports (Ariong 2018; Cau et al. 2018; Uganda Radio Network 2018). These narratives tend to attribute excessive or harmful consumption of alcohol, especially in recent years, to the socio-economic transitions that communities are experiencing in the aftermath of years of insecurity and inter-community conflict, and the establishment of a fragile peace. Conversations in policy and practice circles tend to draw attention to the excessive and harmful patterns of alcohol consumption in Karamoja as one of the primary issues driving the region into further poverty and related problems today. This article explores some of the purported reasons behind the trends in the growth of excessive alcohol consumption in Karamoja, particularly in the period following the disarmament campaign. Overall, alcohol consumption rates in Uganda are considerably higher than those in other sub-Saharan African countries (World Health Organization 2014), and, as such, this article does not claim that the prevalence of 'unsocial' 1 or harmful drinking behaviour is singularly a Karamoja problem.
However, the sudden rise in alcohol consumption and the precipitous decline in health indicators in Karamoja are, as we contend, a by-product of a long history of disenfranchisement and of the social and economic strains set in motion by forced disarmament. The widespread and, arguably, irreversible loss of livestock, especially cattle, robbed many households and communities not only of their primary source of livelihood but also of sacred entities which hold enormous socio-cultural and symbolic value (Dyson-Hudson 1966). The gradual loss of the pastoralist subsistence base - inextricably intertwined with ideas of wealth, status and identity (Anderson and Broch-Due 1999) - has been explored in the context of household resilience and asset poverty (Ayele and Catley 2018; Levine 2010), masculinity (Stites and Akabwai 2010), and increased militarism and raiding (Gray 2000, 2009). This article, in part, extends the analysis of the loss of the pastoralist identity and experiences of disenfranchisement and trauma as they relate to excessive, and often harmful, alcohol consumption. We examine the distinctive contextual upheavals within which this phenomenon has occurred for those living in rural and peri-urban areas.

The consumption of alcohol in Karamoja cannot be analysed only through a negative lens, given the role of drinking in creating and sustaining forms of solidarity and cooperation. In peri-urban and urban centres, the institution of drinking groups, comprised mainly of young and older men who get together in the evenings to drink local brews, has become an important feature of social life (Mosebo 2008, 2015). In these groups, young men pay a regular fee, which goes towards the purchase of the local brew. Drinking groups are not only a space for relaxation and socialization; they are also important forums in which ideas, knowledge and advice are shared between peers and inter-generational groups. Critically, a certain decorum is also to be maintained, and drinking too much or drinking alone is not considered proper. Mosebo's (2008) ethnographic work among drinking groups shows the 'integrative' aspects of drinking and how the drinking group 'can become a space of freedom, unity and order in lives lived in an area of chaos and disorder' (p. 2). 2 In addition, the production, repacking, transportation and sale of alcohol bring important economic benefits, especially for women, who use the income to cover household necessities (including food, school fees and medical care) as well as to invest in assets such as poultry and goats (Dancause et al. 2010). However, these somewhat positive aspects are overshadowed by some of the putative reasons for young people's harmful consumption of alcohol. We comment on the various stressors that young people in Karamoja's rural and peri-urban centres face and how delayed adulthood ('waithood'), unstructured time and an uncertain future might be shaping behaviours around alcohol consumption.

This article is based on research conducted in 2018 which sought to investigate changes in alcohol production, consumption and sale in Karamoja, with a view to informing programming and policy in the region by governmental and non-governmental actors. The study demonstrated, among other things, that the demonstrable increase in harmful alcohol consumption behaviour arose from the rapid rise in the availability of cheap and potent hard alcohol, combined with contextually specific livelihood stress.
Here, 3 we consider factors of economic stress, the gradual erosion of pastoralist identity, growing disenfranchisement and easy access to commercial liquor. While taking these factors into account, we explore in greater depth the psychosocial, emotional and trauma-related aspects that may also influence drinking in the region. This component has garnered little attention in the literature on Karamoja. We examine the drastic changes to Karamoja's economy and propose that the harmful, excessive and unsocial alcohol consumption in rural and peri-urban areas is, partly, an offshoot of livestock loss and continued marginalization. Widespread livestock poverty has led to negative diversification of livelihoods, characterized by low levels of income and poverty traps (Iyer and Mosebo 2017); it has also had psychosocial and trauma-related ramifications because of the critical socio-cultural and political importance of cattle, the primary pillar of Karamojong life (Dyson-Hudson 1966). Our findings show that the desire to escape (with the aid of alcohol) is neither extraordinary nor unexpected in the historical and contextual circumstances of Karamoja. Nonetheless, we advocate for the serious consideration of the psychosocial, in addition to the economic, needs of communities in interventions in the region.

1 'Unsocial' drinking is generally understood as drinking alone or as a solitary activity, without the company of others. In our observations in Karamoja, much of the drinking occurs in groups - small or large - and often as a social activity. In this context, by 'unsocial' drinking, we mean instances of alcohol consumption where hard liquor is the main choice of alcohol and is consumed alone (and not necessarily as part of a social gathering) and to excess.

2 Mosebo's study coincided with the tail end and aftermath of the disarmament exercises in Karamoja; the chaos and disorder referenced here allude to the associated upheaval that took place in the region.

3 Data for this article was collected in mid-2018, when the ban on 100-ml sachet alcohol was yet to be enforced. Where appropriate, the article reports changes to alcohol consumption behaviour in Karamoja since the time of data collection.

Context

The Karamoja region in northeastern Uganda is home to several communities, often collectively referred to as 'Karamojong' 4, who traditionally engage in pastoralist and agro-pastoralist livelihoods depending on the agro-ecological zone in which they reside. The westernmost agro-ecological livelihood zone receives the most rainfall regionally and affords conditions for opportunistic cultivation (Robinson and Zappacosta 2014). In contrast, the easternmost zone is characterized by high temperatures, high rainfall variability and low soil fertility, making pastoralism the most suitable and adapted livelihood system (Ellis and Swift 1988; Scoones 1995). Historically stigmatized and even ridiculed for their reliance on livestock for socio-political and economic ends, Karamoja's pastoralist communities have withstood a series of assaults on their lives and livelihoods from colonial and post-colonial governments (Dyson-Hudson 1962; Quam 1976). Today, despite several governmental and other interventions to support and promote crop production, pastoralism continues to thrive as the most viable strategy for food security and a critical mechanism for household insurance (Levine 2010; Stites et al. 2016).
After tumultuous decades (between the 1980s and 2000s) marked by widespread hunger, herd-decimating livestock diseases and armed inter-community raiding, Karamoja now finds itself in 'relative peace' 5 (Howe et al. 2015). The confiscation of weapons (discussed further below), community-led conflict transformation efforts and the interventions of governmental and non-governmental bodies in building social cohesion have resulted in generally improved security in the region; consequently, not only has pastoralism rebounded (to an extent and for some households), but businesses and trade from other regions of Uganda and across the border to Kenya have also proliferated (Stites et al. 2016). Of note is the growth in the extractive industries, primarily gold, limestone and marble. Ugandan and multi-national companies have flocked to Karamoja on the heels of the disarmament, exploiting the improved overall security of the area and the potential for mineral deposits. Artisanal and small-scale mining has emerged as a new economic activity, either as a form of diversification or as the primary livelihood strategy. Increased interest in Karamoja's mineral resources has also created a number of conflicts, pitting investors against community members, government representatives against elders and so forth (Hinton et al. 2011; Houdet et al. 2014; Rugadya 2020; Saferworld 2017).

Karamoja's agro-pastoralist communities have a strong and enduring tradition of brewing alcohol, which features prominently as the libation of choice in every important socio-cultural ceremony. Variations of these local brews with low alcohol content are known collectively as ngagwe. The production of local brews such as ekweete (maize/sorghum), kutukuto (maize/sorghum), ebutia (sorghum) and marua (millet) is an intensive multi-day process involving cleaning, drying, milling, roasting and fermenting the grains and, ultimately, filtering the brew (Namugumya and Muyanja 2009). Due to the central role women play in its production and in reaping the benefits from its sale, local beer has been termed the 'cattle of women' by some authors (Dyson-Hudson 1966). In addition, and particularly in times of food scarcity, traditional brews made of grains continue to enjoy the status of 'food' among communities. Historically produced in homes for ceremonies 6 (such as births, weddings and initiations) and to feed agricultural work groups, local brews took on a commercialized character in the early 2000s (Dancause et al. 2010; Stites and Akabwai 2010). As livestock numbers dwindled due to inter-group raiding and state-imposed disarmament, women were pushed to assume greater or even sole responsibility for providing for the household. The sale of brews thus became an important income-generating activity (Iyer and Mosebo 2017; Stites 2018).

The production and consumption of low alcohol content local brew in Karamoja is not a primary health concern, especially when compared to the hard liquor which has become ubiquitous in recent decades. This hard liquor is available in either the crude 'moonshine' variety or the commercially produced version sold in small sachets or, more recently, small bottles throughout Uganda. 7 Hard liquor is known as waragi, from 'war gin' 8, or etule in the vernacular. 9 Hard liquor is by no means new to the region; legendary stories of large-scale livestock raids are often laced with mentions of alcohol consumption.
However, whereas it was produced mainly locally in the past, much of today's 'moonshine' is manufactured in adjacent regions and (usually illegally) imported into Karamoja. This trade is untaxed and unregulated, and jerrycans of moonshine move throughout the region on trucks, motorbikes, bicycles and pack animals. The commercially produced variety comes from southern cities such as Kampala and Jinja. The sale of hard liquor provides income to retailers in trading centres and villages. Unlike brewing, hard liquor profits are not controlled solely by women, and men are heavily involved in the transport and sale of both the crude and commercial varieties. Field observations over the past few years have made apparent the extent of waragi's availability, reach and negative health impact (in both the commercial and moonshine varieties), extending from Karamoja's urban and peri-urban centres to the farthest villages.

Officials in some of the region's sub-counties and districts have attempted to address the widespread over-consumption of hard alcohol by prohibiting the production and import of the crude version. These efforts, combined with the national ban on the sale of liquor sachets, should have reduced the availability of hard alcohol in the region. However, consumption of hard alcohol in Karamoja appears to continue largely unabated. Coming across people in rural and urban areas walking around casually with a sachet, or inebriated to incapacitation at any hour of the day, was extremely common at the time of the research (mid-2018). This sachet has been referred to as 'the silent gun', slowly leaving a trail of bodies in its path (Iyer et al. 2018). The passing of a country-wide ban on sachet waragi in early 2019 brought visible changes to the drinking culture in Karamoja within months of our data collection. 10 In place of the 50-ml sachets sold for 500 Ugandan shillings (approx. 15 cents USD) are now bottles of at least 200 ml, sold at approximately three times the price.

4 The word Karimojong in the local language signifies the region of Karamoja. The following communities call the land area of Karamoja their home: Matheniko, Bokora, Pian, Jie, Dodoth, Tepeth, Labwor, Nyangea, Ik and Pokot. Even though we use the umbrella term Karamojong in the report, we recognize the distinction of the constituent communities and identify them separately where appropriate.

5 Livestock raids and general insecurity have occurred every year since the disarmament campaigns, but these have not reached the destabilizing proportions of the late 1990s and early 2000s. Recently (beginning in 2020), raiding has again become prevalent in Karamoja, even spilling over to other regions in Uganda. See for example https://www.monitor.co.ug/uganda/news/national/cattle-raids-inlango-karamoja-blamed-on-moroto-prison-break%2D%2D3286232.

6 Epurot, a fermented honey beer produced by Tepeth communities (and less frequently by Karamojong communities), continues to retain its primarily traditional/ceremonial use.

7 Alcohol sold in sachets of 50 ml was phased out between 2017 and May 2019 in favour of 200 ml packaging (Kajoba 2019). However, 50 ml sachet production continues illicitly today (Daily Monitor 2020).

8 https://www.independent.co.uk/news/world/africa/how-africas-partyanimals-drank-themselves-to-death-1792932.html

9 We use the terms waragi and etule interchangeably in the paper. Both signify 'hard liquor' or 'moonshine'. Local brews made of sorghum and millet are referred to as such.
The paucity of cash to feed the waragi habit at a price which is exorbitant for most people has effected a shift back to crude or illegally produced waragi, sold in large drums and smuggled into the region. Although the ban on the sale of sachets was meant to quell 'unsocial' drinking, excessive drinking remains widespread. As such, and despite national and local regulations, an examination of the reasons behind the continuing demand for hard alcohol, the changes in drinking behaviour and the drivers of excessive alcohol consumption remains highly relevant. While many reports and studies raise the issue of 'alcohol' or even 'alcoholism' in reference to food security, maternal and child health, or nutrition, no systematic study has examined local communities' perspectives on and experiences of these changes.

This article and the associated research study are primarily concerned with Karamoja's rural and peri-urban communities. Nevertheless, high alcohol consumption is not a distinctly rural and peri-urban phenomenon, and makeshift and established bars, drinking areas and groups proliferate in urban centres around Karamoja (Mosebo 2008). Several informants from the salaried class who participated in the study - and provided extensive commentary on the alcohol problem in rural areas - also consume alcohol, and we do not have evidence of unusually higher consumption among rural communities. However, this article is, at its core, concerned with the majority population of Karamoja, who are primarily rural pastoralists and agro-pastoralists and who bore the brunt of the forced disarmament exercises of the 2000s and the subsequent social and economic transformations (Stites and Akabwai 2009; Stites et al. 2007). They have also, simultaneously, faced new socio-economic and health-related challenges related to (but not solely because of) the misuse of alcohol. Through our study and years of observation in Karamoja, we know that urban and rural classes have different drivers of (excessive) alcohol consumption and that the findings detailed below may not apply to those living in fledgling cities in salaried and other jobs and/or who have been urbanites for several generations. We do touch upon some of the purported drivers of harmful alcohol consumption among people in peri-urban or urban centres, but our primary motivation is to examine some of the putative causes and consequences of excessive alcohol consumption among individuals and households that have experienced extensive livestock loss. We argue that these losses not only affect food security and nutrition, but also greatly influence the many social institutions and networks of solidarity that define Karamojong 'culture' and leadership (Ayele and Catley 2018).

Research objective and methods

The research study behind this article sought to investigate the following: the changes in alcohol consumption, production and sale in the period following the disarmament that began in 2006; the structural drivers of alcohol consumption in the post-disarmament period; the effects of changes in alcohol consumption on inter-personal relations; the effects of alcohol production, sale and consumption on household economy and livelihoods; and the effectiveness of local initiatives in addressing changes in alcohol consumption. This article briefly presents the central findings on consumption from this research and then moves beyond these initial queries to discuss some of the potential drivers behind the undeniably high rates of alcohol consumption.
Data collection took place between May and July 2018 and used a primarily qualitative methodology. A total of 503 individuals participated in 61 focus group discussions (FGDs) and 56 key informant interviews (KIIs). Approximately 40% of respondents were female and 60% were male. Interviewees and discussants included community members (male elders, women, male and female youth); government representatives at the district, sub-county and village levels; and representatives of non-governmental organizations (NGOs). The team worked in five districts of Karamoja - Moroto, Kotido, Kaabong, Nakapiripirit and Amudat - and sampled two sub-counties in each district. 11 We purposively selected the study sites to explore differences between rural and peri-urban locations. In addition, the research team visited recent settlements near mining and aloe vera processing locations, such as Naput and Kosiroi in Moroto District. We selected these sites due to their reputation for heavy and occasionally fatal alcohol use (Ariong 2018).

Facilitated participatory methods were used in a subset of nine FGDs to collect data on group perceptions regarding alcohol production, consumption and sale. These methods included diagramming, proportional piling and ranking, and the creation of calendars. Seasonal calendars illustrated how alcohol consumption, production and sale change over the course of the year. Daily calendars illustrated averages for the types and quantities of alcohol consumed and the portion of the household budget spent on alcohol. Proportional piling depicted the peak and low seasons for alcohol production, sale and consumption. One hundred twenty-seven participants (83 women and 44 men) took part in the nine gender-specific focus groups that included the participatory methods. The study team received ethical approval for human subject research for this study from the Institutional Review Board of Tufts University, the Mildmay Uganda Research and Ethics Committee (MUREC) and the Ugandan National Council for Science and Technology (UNCST).

Main findings

Key drivers of alcohol consumption in Karamoja

Shifting livelihood profiles

Karamoja has seen major shifts in livelihoods over the past two decades. Authors have documented the erosion of pastoral production systems as a result of livestock loss, conflict, frequent droughts, limitations on pastoral mobility, growing inequity of animal ownership, and anti-pastoral policies. Local people have had to shift their activities and strategies accordingly, and studies show the extent to which people are relying on wage labour and petty trade both to fulfil household needs and to manage risk (Fernandes 2013; Iyer and Mosebo 2017; Stites and Akabwai 2009; Stites et al. 2014). Although livestock-based livelihoods may be gradually recovering (Stites et al. 2016), the widespread loss of animals has had cascading effects on household food security, customary authority, gender roles and future prospects (Carlson et al. 2012; Stites and Akabwai 2010). In addition, the growing inequity in livestock ownership means that while some households may be thriving within or re-entering the pastoral economy, many others remain on the margins (Ayele and Catley 2018; Catley and Aklilu 2013; Marshak et al. 2019). Ultimately, this shift in livelihoods is also a key driver of the changes in alcohol consumption, production and sale.
The most direct connections between shifts in livelihoods and the increase in alcohol consumption, production and sale relate to the increase in urbanization and monetization. The growth of towns and the concurrent decline in livestock ownership mean that a growing number of people seek economic activities in towns, whether on a daily, occasional, seasonal or sporadic basis (Stites 2020). Cash is more widely available in both urban and rural households due to extensive informal employment in both areas (primarily in farm labour or mining in rural locations) (Iyer and Mosebo 2017). This is a change from the period prior to disarmament, when insecurity deterred trade and people relied more heavily on subsistence production. The availability of cash is thought by some to be responsible for the increase in alcohol consumption. Prior to disarmament, it was difficult both to find hard liquor in large quantities and to purchase it due to lack of money. Explaining this change, a Jie elder recounted 12 :

Those days people didn't like having money. People looked at people having money like these are some (other) kind of people. But nowadays people like having money. Everybody has money. That is why everyone is taking too much etule. People never say, "I lack money for drinking etule".

Although cash is a necessity today in Karamoja as elsewhere, low wages and the rising costs of living in Uganda deter productive savings. Young people are especially affected by the inability to save a substantial portion of their earnings in order to invest in education or training. As recounted by some youth, the problems of poverty force them to spend 'the coin' (signifying a small amount of money that is typically earned from daily labour) on liquor because they do not see many prospects for that 'little' money. 13 Whereas this rationale may be extended to other age groups, particularly those in middle age, youth appear particularly vulnerable to the urge to quickly spend small amounts of cash in hand. Nevertheless, there is an acute understanding of the ills of hard liquor consumption among community members, young and old. Observations on the effects of excessive consumption of liquor on health, inter-personal relationships, household economy, livelihoods and the community at large were reported animatedly by participants. It was not uncommon to hear that 'the land is now spoiled' because of the proliferation and excessive use of hard spirits. Deaths from excessive consumption of waragi were regularly reported by respondents. Respondents were careful to differentiate between local brews and hard spirits, with the former seen as having many fewer negative effects (Stites 2018). The reasons respondents gave for consuming the two types of alcohol also varied greatly. Whereas 'hunger' and social/traditional reasons are typically given for the consumption of local brew, the triggers for the consumption of hard spirits range from economic to social and psychological.

11 Moroto District: Rupa and Nadunget Sub-Counties (SC); Nakapiripirit District: Lorengedwat and Namalu SC; Kotido District: Kotido Town Council and Rengen SC; Kaabong District: Loyoro and Kapedo SC; Amudat District: Amudat and Loro SC.

Livelihood and employment-related stress

Respondents often cite stress related to the loss of livelihoods and the associated economic impacts as contributing to the excessive consumption of hard liquor. These struggles appear particularly acute for male and female youth (and generally greater for young men).
In urban and peri-urban settings, young people have few prospects for skilled or semi-skilled jobs without attaining at least a secondary education. However, the high cost of secondary and higher education is a critical barrier to educational attainment in Karamoja. Compared to average wage rates in the region, covering the costs of education is often an insurmountable task, even for urban households with multiple earners. 14 Businesses fill the gap in qualified applicants by bringing in workers from outside the region (Iyer and Mosebo 2017). Because of incomplete education, the inability to secure relatively better-paid semi-skilled or skilled jobs, and an exploitative and highly competitive labour market, many male and female youth in urban and peri-urban areas express hopelessness about their current and future prospects. This hopelessness, according to youth respondents, is one of the main reasons for alcohol consumption.

The problems are different for rural youth. According to some respondents, the confiscation of guns as part of the disarmament campaign has left a void in the lives of young men in rural areas. In addition to human rights violations which primarily targeted young men, the forced disarmament campaign resulted in widespread erosion of livestock holdings due to the protected kraal policy, whereby animals were kept in enclosures within or near military barracks, purportedly to minimize raids (Stites and Akabwai 2009, 2010). Animal mortality and morbidity were extremely high, young men and boys were prevented from engaging in the socially expected roles of herd management, and households suffered from the lack of regular access to milk and blood and the inability to sell an animal when needed for cash or to make a horizontal social exchange. Animal stocks did not recover in the years following the protected kraal policy, and ownership became increasingly inequitable. The loss of guns coupled with the decline in animal-based livelihoods brought a fundamental shift in gendered responsibilities at the household and community level. While young men in pastoral and agro-pastoral households had previously served critical roles as providers and protectors, in the post-disarmament period they found themselves with few clear roles or activities. As male youth in a focus group discussion in Namalu Sub-County in Nakapiripirit reported, following the loss of animals, rural male youth have little to do but 'sleep under the tree'. 15 This idleness contributes to drinking, as explained by a group of men interviewed in Nadunget Sub-County in Moroto:

During those days, the herders were wise enough. People used not to drink ngagwe a lot, the youth also never used to drink…but now there are no animals to care for, we are all just here wandering going to town, there is nothing to herd…what [animals] is there is just for the young kids to herd and the youth now have resorted to drinking. 16

Complaints about a lack of things to do, however, are not restricted to rural male youth. Young men in urban and peri-urban centres also reported a lack of activities to occupy themselves as a reason for drinking to 'pass time'. At the same time that young men in rural areas have seen their identities as protectors and providers erode in conjunction with the removal of weapons and the loss of animals, young women in rural areas have had to step in to provide for their families.
As towns and trading centres have expanded and movement has become safer in the wake of disarmament, women have increasingly engaged in the exploitation and sale of natural resources (including firewood, charcoal, thatch and wild vegetables), petty trade and services (including working at breweries and in hotels or restaurants and doing casual domestic work). This means that women are away from the homestead for extended periods (which may contribute to drinking by men), women are engaged in the cash economy (making more cash available for the purchase of liquor by household members) and women are moving regularly between rural and urban areas (where hard liquor is cheap and readily available). The shift in gendered responsibilities at the household level has also increased pressure upon women, who already faced extreme time burdens due to their domestic and reproductive duties. Women recount taking up drinking to relieve the stress of managing their households single-handedly in the absence of a contributing spouse and with little earnings. Compounding the issue are the generally low wage rates in sectors dominated by women, such as domestic work, and the high costs of education, food and non-food commodities. These problems are also faced by women who are widowed, divorced or abandoned. A woman in a focus group discussion in Kotido explained:

I have two children. I don't have a husband; now it's seven years without a husband. So, I am a mother, I am a father. There is no business I am doing, but the children must go to school, must eat, must dress, (treat) sickness when it is there, and rent. But all these things should be paid. But now, if now I sit only in one place without drinking, the thoughts will kill me (laughter). So I just drink, drink! The child wants food, the child wants soda - I am just there drunk! All these thoughts are not there (then). 17

A second woman in the same group added:

I have two children. Their father got another woman, and the other woman spoiled his mind. If I go to him sometime to give me some money to buy food for child, [sometimes] he does not give me money. So, I stay stressed. If someone gives me either 1000 or 500 (Uganda shillings), I go and get someone who's drinking. I also join drinking to forget the stress. …So, when you think of the prices of other things, it just makes you stop and drink. So, you just continue drinking to forget.

For both men and women, low wage rates and an inability to save translate into an inability to invest in productive assets, including education or livestock, and this in turn drives alcohol consumption. This is especially critical for male youth who aspire to establish a family. For most young men, establishing a livestock herd is the first step in preparing for marriage. Those without animals for bridewealth are unable to secure rights to a female partner. This has negative impacts upon a youth's status as a 'man' within his community, as was evident in a discussion with a group of young men in Kotido town. The views expressed by these young men were common among respondents of both genders in middle age and youth groups. Grouped under the term 'thoughts' (ngatameta), a great number of respondents listed stressors such as marred inter-personal relationships, the inability to provide for children, the lack or loss of employment, and problems in sufficiently meeting basic household needs. Forgetting these 'thoughts' was a motivating factor in excessive alcohol consumption.
Respondents are keenly aware that, due to its high alcohol content, waragi can help them 'pass out' and thus, at least temporarily, relieve stress. Importantly, the desire to obliterate thoughts is in stark contrast to the typical reasons for consuming local brews, which are mainly ceremonial: to socialize with kin and non-kin and to alleviate hunger. We conducted a participatory exercise in an attempt to quantify consumption levels of hard alcohol. Using a daily calendar exercise, male and female respondents indicated that many people drank local brew and waragi from sachets in the morning, kwete in the afternoon, and beer and waragi from sachets in the evening. Others explained that local brew was particularly popular in the morning as a warm substitute for porridge and then again at mid-day because it stays in the stomach 'like food'. On the other hand, the evenings see a mix of both local brew and hard liquor. At times, drinking is continuous and can last for more than six hours at a stretch, with expected impacts on functionality.

Dispossession and disenfranchisement

Although not explicitly phrased as such by informants during data collection, what we observe in Karamoja vis-à-vis increased alcohol consumption might also reflect a consequence of the disenfranchisement that has blighted the region for decades. This dispossession in recent history began with the coordinated and brutal disarmament exercises by the Uganda People's Defence Forces (UPDF). While disarmament exercises in Karamoja had been a recurring experience since early in the colonial era (Bevan 2008), the disarmament that began in 2006 was significantly more coordinated, prolonged and systematic (Human Rights Watch 2007; Stites and Akabwai 2009, 2010). It was also much more brutal. Data collected from communities in the months and years following the start of the disarmament, together with one of the authors' first-hand observations, illustrate the manner in which communities were cordoned off and men were rounded up, exposed to humiliating, painful and degrading treatment (such as lying naked in the hot sun for hours with bricks on their chests), and arbitrarily detained, including in unofficial cells and prisons. Young men detailed that they were afraid to walk to towns or visit markets because soldiers would attack them, beat them in public and accuse them of being raiders and having guns. A number of young men who had experienced detention reported sexual violence in the form of beatings to their genitalia. Numerous young men in different communities told one of the authors, without prompting, of having their scrotums twisted around sticks, which resulted in impotence for some. For their part, women detailed the experience of attempting to locate and secure the release of male relatives who had been detained. The UPDF did not document or record detentions, and women often had to visit multiple locations at significant cost in order to locate their men. Once found, the primary means of securing release was by turning in a weapon, but many households either did not have a gun or had already given theirs up. Women reported selling livestock in order to purchase a weapon, which they would then exchange for the release of their male relative. Not only did this disarmament result in trauma, humiliation and actual loss of lives, it also resulted in widespread loss of livestock and increased dependence on wage labour, petty trade and natural resource exploitation (Human Rights Watch 2007; Stites and Akabwai 2009).
In parallel, and further undermining local livelihood systems, is the on-going large-scale land dispossession in Karamoja, a result of substantial commercial interest in the region following the disarmament (Human Rights Watch 2014; Rugadya et al. 2010; Saferworld 2017; Wambede and Mukooli 2017). By one recent estimate, approximately 3.7 million acres (approx. 1.5 million hectares) of land in Karamoja have been parcelled out for sundry mining activities (Mutaizibwa 2019). Local communities have witnessed not only their land being appropriated for commercial interests that do not directly benefit them, but also mining concessions and activities which sometimes cut directly through critical rangeland areas, indispensable for pastoralist production. At the same time, policies of the central and district governments and international actors over the past decade have encouraged households and communities to sedentarize and adopt cultivation. For many, this has meant moving away from traditional homesteads to areas deemed more suitable for agrarian livelihoods. Such shifts by one sector of the population have severed systems of customary authority as well as up-ending the intergenerational transmission of knowledge and practice. Cultivation as a form of livelihood diversification may benefit those households with the resources to remain simultaneously engaged in animal husbandry, and some of these households are benefitting from the partial rebound in the pastoral economy (Stites et al. 2016). However, a significant number of households remain 'livestock poor', without animals to fall back on as insurance when harvests fail - a common event in a semi-arid region characterized by highly variable rainfall (Ayele and Catley 2018). The loss of livestock has critical implications for household resilience (Little et al. 2001), and diversification into wage labour and petty trade has not proven beneficial for staving off poverty for the majority of the population (Iyer and Mosebo 2017). The mix of destructive policy and practice has spelled further destitution for the pastoralist economy, with many 'moving out' of pastoralism altogether (Catley and Aklilu 2013). While destitution alone does not drive people to consume excessive amounts of alcohol, numerous respondents for this study spoke of the combination of poverty, despair and an inability to envision a life that looked any different from their current reality. People spoke of the challenge of saving enough money to make any real investments and, when faced with this dilemma, many saw no reason not to spend a day's wages on waragi or etule.

Effects on communities of alcohol production, consumption and sale

Effects on physical health and well-being

Respondents in the study were acutely aware of the physical consequences of excessive liquor use and were quick to recount negative health effects. People who drink excessive amounts of waragi reportedly lose weight, become frail, have reddened mouths and lips and look like 'AIDS victims'. 19 Respondents reported that etule affects the mind as well as the body. Fighting among family members at night and the inability to remember the fight the next morning is one such manifestation. 'Madness' is another, as an elder in Kotido explained 20 :

When you look at these people moving naked, these people of madness (ngicen), they are becoming so many. Because that thing (waragi) is confusing the brain. Some people drink until the blood becomes only etule.
The head goes from normal to something abnormal - then it becomes madness.

Local respondents also associated erratic behaviour with excessive drinking, as explained by men in Loyoro in Kaabong: 'alcohol is just bad…you see those days, someone drank alcohol, ran mad and started climbing over the mountain'. 21 A man in Moroto shared a similar account: a 'bad thing is when someone drinks alcohol; he/she becomes confused [and] mad and can just get up and run somewhere far away because of confusion'. 22

One of the more notable physical effects of drinking etule, according to community members and health and government officials alike, was the inability 'to reproduce'. Although scientific evidence is lacking, it is assumed that the heavy consumption of hard alcohol has contributed to male impotence. 23 In addition, the high level of inebriation is reportedly an obstacle to sexual relations within couples, further undermining strained inter-personal relations, discussed further below. Medical centre staff listed various health ailments believed to relate directly to waragi consumption. A senior nursing officer at Matany Hospital, the largest hospital in the region, cited alcohol as a factor in the rise of cardiovascular problems, cirrhosis of the liver and pancreatitis, as well as contributing to trauma, accidents and homicides. 24 Excessive alcohol consumption was also said to be a leading cause of suicide and depression among young drinkers. 25 A health officer in Nadunget Sub-County in Moroto noted that vision loss had been reported in some instances. 26 A sub-county official in Loyoro said that problems associated with 'heavy' alcohol consumption include 'paralysis, madness, complaints of barrenness, quarrelling, [and physical] fighting'. 27 The sub-county official added that children suffered in households with heavy drinking. 28 Some of these impacts are likely emotional, due to increased tensions and conflicts among family members, while some are physical. For example, a nurse at Matany Hospital discussed the problems of intoxicated adults caring for children: 'We have seen children dying as a result of alcohol consumption [by parents]…mothers sleeping on their babies because they are drunk'. 29 Drinking also has negative impacts on maternal health in the region. The health official in Loyoro mentioned that miscarriage due to excessive drinking was a problem in his area, 30 and the nurse at Matany reported a connection between heavy drinking and premature births. 31 Medical officials reported specific and negative physical health impacts for children, including those associated with alcohol consumption by children themselves. However, it is difficult to know either the extent or the type of alcohol consumed by children, as evident in the explanation from a health official in Lorengedwat Sub-County in Nakapiripirit:

Sometimes, children are brought to us when they are comatose as a result of alcohol consumption…however, we normally do not document whether they have been given local brew or waragi…we just have to fight to save their lives. 32

In line with the medical personnel cited above, most of the study participants agreed that the excessive consumption of waragi was contributing to mortality and morbidity among both men and women. Death rates are particularly high around mining sites, where miners are sometimes paid in hard alcohol (Ariong 2018; Eninu 2015).
Respondents pointed to convenient packaging, low price points and the widespread availability of waragi sachets as factors in the rise of heavy drinking, in addition to the desire to kill 'lingering thoughts'. 33

Effects on inter-personal relations

One of the most-cited consequences of excessive liquor consumption is destabilized inter-personal relationships. Drinking waragi is said to lead to fighting between spouses, between children and parents, and between individuals in general. Respondents mentioned increased rates of divorce, separation and extramarital relations. Relations between generations have also suffered. Elders say that they are losing authority as a result of youth's alcohol consumption, but observations indicate that many elders also drink heavily. As explained by a group of male elders in Nakapiripirit District:

The youth have lost respect for their parents. They are getting spoilt. You can't advise these young boys these days. They will want to fight you. They have even gone to the extent of abusing and insulting their parents. No respect at all. Sometime even we the parents are the ones who are [in] the wrong. We go and drink and start disturbing these young ones. At the end of the day we end up being beaten. 34

Alcohol is consumed by and adversely affects most demographic groups, but women and girls appear to bear the brunt of the inter-personal consequences. Numerous respondents cite the role of alcohol in domestic violence, and this was confirmed by medical workers in several locations. 35 Respondents in Tapac Sub-County in Moroto explained how alcohol was playing a role even in marriage negotiations. Families of grooms are said to ply the brides' fathers with waragi in hopes of facilitating quick wedding arrangements, sometimes with underage girls. 36 This has reportedly contributed to an increase in the number of adolescent girls running away from their natal homes.

Discussion

The loss of Karamojong herds and the pace of sedentarization and economic diversification in the region have previously been analysed for their impact on household resilience, food security, poverty and other indicators of human well-being. The drastic changes to livelihoods, changes to ideas of masculinity and identity, increased economic pressures on women, and the experience of dispossession from not only land but an entire way of life are some of the many transformative events that have characterized Karamoja. While the roots of these changes may have been in place for an extended period, the speed of change has increased dramatically over the past two decades. Likewise, while drinking alcohol is not new in the region, the extent and widespread nature of excessive and harmful alcohol consumption appear to have greatly increased in this same period. Many policy-makers, development practitioners and members of Karamoja's salaried classes view this heavy consumption as resulting from 'idleness' and an inability to move past a headstrong reliance on livestock and livestock-based livelihoods - livelihoods which, despite the copious evidence base, find few champions among those deciding the region's trajectory. 37 The psychosocial, emotional and trauma-related aspects of alcohol-related behaviour remain, to a large extent, unexamined. These, perhaps hidden, drivers find commonality in livelihood stress, which a number of respondents and observers - including community members, practitioners and policy-makers - hold responsible for the unchecked and deleterious alcohol consumption.
For young people, however, congregating around alcohol, games and other social forums is a way to seek solidarity, compassion and companionship (Jones 2020; Mosebo 2008, 2015). At the same time, alcohol use may also be analysed as an escape from their realities and from the complex and prolonged transitions to adulthood - sometimes referred to as 'waithood' - that many young people around Africa today face (Dawson 2014; Honwana 2014). The worries for the future, as explained to us by the Karamojong youth sampled for the study, were grouped together under the term ngatameta (or 'thoughts'); often, these thoughts revolve around completing an education, finding stable and suitable sources of income, and the inability to do so in a fraught and uncertain economic situation (Iyer and Mosebo 2017). Exorbitant school fees, a competitive market and the conditions of informal employment are some of the constraints faced by many young people in Karamoja (especially those living in peri-urban and urban areas who have had some form of education). Young people from rural areas face a different, yet related, economic precarity, vis-à-vis the generally decreased numbers of livestock, the main economic asset and social currency of the rural Karamojong economy. The lack of adequate veterinary facilities, the repercussions of the protected kraal policy, losses through raiding, increased inequality and an overall lack of interest on the part of policy-makers in supporting the mobility regime and land management policies required for pastoralism to thrive have had a significant impact on the livestock asset base, driving many Karamojong into poverty (Ayele and Catley 2018; Levine 2010; Stites et al. 2007).

For many rural communities, the decreasing livestock asset base is accompanied by the even more harrowing prospect of land loss, which appears to have sped up in the aftermath of the disarmament campaign. A combination of political and financial interests, increased security, and destitution has produced the scale of land grabs and other land-related conflicts that plague many communities today (Rugadya 2020; Rugadya et al. 2010). Large-scale land acquisitions by a host of private companies (mostly extractive industries) have cut pastoralists off from grazing areas, with negative impacts on the health of herds 38 (Rugadya et al. 2010). Conflicts implicating politicians and local leaders who have utilized their elite status to acquire large tracts of land have become increasingly common and contentious (Czuba 2017). At the same time, communal and customary tenure systems that have, for centuries, guaranteed access and arbitration around disputes through elder-led governance institutions are facing a grave threat. The shift of power from traditional institutions to mining companies and government agencies, and the erosion of traditional management systems, have increased the likelihood of future land conflicts (Rugadya 2020). The weakening of these socio-political institutions, the cornerstone of pastoralist communities, is by no means limited to the Karamojong; however, it is an accelerating phenomenon that has repercussions for such imperative issues as land and water governance, peace and security and, ultimately, well-being among community members.

In his study on changing food systems, diet and cultural identity among the agro-pastoral Suri people of southwestern Ethiopia, Abbink remarks on the phenomenon of 'self-destructive' (sic) consumption of alcohol.
A steep rise in alcohol use among Suri agro-pastoralists is explained along generational, cultural, gender and economic lines. Disarrayed family life and social organization, changing production systems and, crucially, state policy and large-scale changes such as land dispossession, resettlement and villagization schemes all contribute to increased use of hard liquor among the Suri. Abbink writes:

There is a certain 'alcoholization' of Suri society going on, an ambivalent and health-threatening process also seen elsewhere in Southern Ethiopia; the strong liquors have not yet been properly 'absorbed' into local drinking culture: local people usually consume it to excess, predictably stimulating drunken brawls, theft and robbery on market days…the abuse of alcoholic drinks has led to numerous brawls with a deadly outcome, to widowed women and young unmarried girls brewing local beer to gain cash to compensate for loss of family or husband's economic support, and to neglect of regular food production. There are elements of 'self-destruction' present here. Many male Suri also started to drink because of personal despair at having lost their cattle in raids by enemies or by government punitive action. (pp 133, 138)

Abbink's analysis of hard liquor consumption and 'alcoholization' as stemming from changing political, social and economic phenomena among the Suri resonates with this study's authors' experiences in Karamoja. The combined loss of livestock-based livelihoods and the removal of weapons has undermined the identity of men as the providers for and protectors of their families and communities. A similar 'personal despair' as described by Abbink affects men in Karamoja who have lost their prized animals and critical assets, but the damage to the male identity goes beyond this loss and has a significant effect on the underlying social fabric (Stites and Akabwai 2010). In some ways, the ubiquitous drinking seems designed to address this rupture; today hard liquor accompanies every significant and insignificant event and ceremony in Karamoja, from quotidian meetings under the village tree to initiation, birth and wedding ceremonies. Consumption of waragi or etule at all such events seems to emerge more out of necessity than from recreational pleasure. As such, we posit that what we are witnessing in Karamoja is an 'alcoholization' akin to that among the Suri. We surmise that this overconsumption of liquor stems from underlying psychosocial stress that has resulted from incremental, continuous and often drastic changes to people's economic, political and socio-cultural lives. Karamojong socio-cultural and economic identity as pastoralists has been under threat for decades; political repression, territorial containment, economic isolation and ecological disaster have given rise to an explicit loss of livestock assets and an implicit loss of cultural identity (Gray 2000). Alongside these drastic and lasting shifts is an on-going 'theft' of Karamoja's land and the communities' primary livelihood. 39 This 'theft', according to informants and our own observations, is an underlying reason for the growing 'alcoholization' of Karamojong society. Based on numerous conversations with key informants in the region, there appears to be a resignation within some segments of the population vis-à-vis their future as pastoralists, what support they can expect from the government, and what government policy means to their lives and livelihoods.
A critical component of the collective experience of trauma was the experience of forced disarmament at the hands of the UPDF. A number of studies have detailed the negative human rights, security and socio-political impacts of the disarmament campaign, particularly at the height of the campaign (2006-2009) and in its immediate aftermath (Bevan 2008; Human Rights Watch 2007; Stites and Akabwai 2010). Nonetheless, today the disarmament campaign is most often understood in light of its positive impacts on peace and security in the region, particularly by private sector investors and many government representatives. 40 We do not deny these developments and have written widely about some of the benefits of improved security on livelihoods, mobility and opportunity that were not possible prior to disarmament in the region (Stites and Akabwai 2009; Stites et al. 2016). However, when considering the rampant and widespread excessive consumption of alcohol, we feel compelled to revisit the role of disarmament in creating collective trauma.

Importantly, while young men were the primary targets of the (also young male) UPDF soldiers' efforts to flush weapons from communities, disarmament and the associated fear and suffering were experienced much more widely. The extensively employed cordon-and-search practice drove all residents-young, old, male, female-from their homes, often in darkness, and forced them to remain outside while huts and homesteads were ransacked and searched. The military used carefully targeted practices of community humiliation and threat in an effort to uncover weapons: male elders were reportedly stripped naked and put on public display; children were rounded up and marched through villages until their parents handed over guns.

Simultaneous to the experiences of trauma for individuals, households and communities, people were losing access to their most important assets-their livestock. Purportedly to minimize losses from raids while disarmament was on-going, the protected kraal system cut owners, herds and communities off from their animals. Extremely high rates of morbidity and mortality of livestock in the protected kraals and in their aftermath undermined the livestock-based livelihoods of numerous households in the region, many of whom never recovered their herds and stepped out of pastoral production entirely. As such, we see the direct impacts and experiences of trauma on people's bodies, families, assets and way of life. Drinking, thus, is an escape, a brief respite from present worries and future uncertainty. Excessive and 'self-destructive' alcohol use may be viewed as one manifestation of this trauma.

For young people in precarious educational and economic conditions, alcohol provides a lubricant for social interaction as well as an outlet for escape, giving them the opportunity to seek amusement and sociality (see also Jones 2020). However, as in other parts of Uganda and sub-Saharan Africa, Karamojong young people's expectations of progress (particularly for those who live in peri-urban or urban centres) are often incongruous with the realities of the market (Iyer and Mosebo 2017). Expanding educational requirements and shrinking economic prospects, brought on by a host of historical, economic and political factors, appear to have created a context in which a sense of 'progression' among youth has not kept pace with available opportunities. 41
This has led to an over-abundance of unstructured time, which, in turn, gives rise to stress-related thoughts 42 (Mains 2017). Unable to invest in productive assets or entrepreneurship opportunities, and facing the intense competition of a job market overrun by young people from outside the region, Karamoja's young people find themselves in an ever-widening period of 'waithood'.

Our study on the drivers of excessive alcohol consumption in Karamoja points to a multitude of explanations for what appears to be an increase in this behaviour in recent years. Drastic and continuing alienation from the pastoralist identity, growing poverty and disenfranchisement, and violence at the hands of the state are some of the reasons for growing resignation among Karamoja communities, particularly those who live in rural areas. Public servants and development practitioners continue to blame rampant alcohol abuse for the region's staggered 'progress' and halting 'development'. 43 From the livelihood, nutrition and health standpoints, interventions attempting to improve food security and asset wealth are supposedly severely hindered by the drunkenness of prospective beneficiaries. 44 Although such narratives tend to view alcohol use as a causal factor in violence and poverty, the 'alcoholization' of Karamojong communities is 'more symptomatic of cultural displacement' akin to a 'social pathology in other culturally and politically marginalized societies' (Gray 2000: 414). Simplistic explanations and moralistic approaches to the alcohol problem would necessarily overlook the underlying psychological and emotional drivers of alcohol use. While we do not attribute all alcohol use in Karamoja to these factors, we find that the drastic changes in alcohol use, besides being a result of the availability of alcohol and cash for purchase, are symptomatic of larger contextual and livelihood shifts and of unaddressed violence and collective trauma. Any resulting action to address these issues must directly link to the question of the loss of livelihoods, delayed and uncertain employment (for those in peri-urban and urban areas), and associated identities. Ultimately, the most positive interventions may come in the form of acknowledgement, by customary leaders, local officials and politicians, of the trauma and losses experienced. Discussions of disarmament today focus primarily on the positive results-the increased security, the improved mobility, the greater economic investment and the opening of the region to trade and commerce.

40 This one-sided view of disarmament as positive is troubling given its brutality and long-term repercussions. It is even more worrying when Karamoja's leaders actively advocate for a return to 'serious' disarmament to address the recent resurgence in insecurity in the region, as this indicates a failure to recall or recognize the extent of suffering experienced. See for example https://www.monitor.co.ug/uganda/news/national/karamoja-leaders-demand-fresh-disarmament-exercise-1869732.
41 Young men also cannot fulfil initiation rites or marry in the absence of livestock (Stites 2013).
42 As in Mains' analysis, we find that in Karamoja this over-abundance of time is distinctly a male phenomenon, as young women in rural, peri-urban and urban settings are tasked with the lion's share of household and other work.
43 In a similar way, Bodi pastoralists in Ethiopia are blamed for selling animals and using money for alcohol consumption (Gebresenbet 2019).
These outcomes have certainly benefitted the region; such results should be lauded while also recognizing-and even apologizing for-the brutality, trauma and damage to a socio-cultural identity that was part and parcel of disarmament and its aftermath. At present, too much of the discussion of the past two decades ignores the negative impacts and externalities, at the same time that perplexity is expressed about the rapid increase in alcohol abuse.
Synthesis of Decentralized Variable Gain Robust Controllers for Large-Scale Interconnected Systems with Structured Uncertainties

In this paper, we propose a decentralized variable gain robust controller which achieves not only robust stability but also satisfactory transient behavior for a class of uncertain large-scale interconnected systems. For the uncertain large-scale interconnected system, the uncertainties and the interactions satisfy the matching condition. The proposed decentralized robust controller consists of a fixed feedback gain controller and a variable gain one determined by a parameter adjustment law. In this paper, we show that sufficient conditions for the existence of the proposed decentralized variable gain robust controller are given in terms of LMIs. Finally, a simple numerical example is included.

Introduction

To design control systems, it is necessary to derive a mathematical model of the controlled system. However, there always exist some gaps between the mathematical model and the controlled system; that is, uncertainties between actual systems and mathematical models are unavoidable. Hence, for dynamical systems with uncertainties, so-called robust controller design methods have been well studied over the past thirty years, and there are a large number of studies for linear uncertain dynamical systems (e.g., see [1,2] and references therein). In particular, for uncertain linear systems, several quadratic stabilizing controllers and H-infinity ones have been suggested (e.g., [3-5]), and a connection between H-infinity control and quadratic stabilization has also been established [6]. In addition, several design methods of variable gain controllers for uncertain continuous-time systems have been shown (e.g., [7-9]). These robust controllers are composed of a fixed gain controller and a variable gain one, and the variable gain controllers are tuned by updating laws. In particular, in Oya and Hagino [8], the error signal between the desired trajectory and the actual response is introduced, and the variable gain controller is determined so as to reduce the effect of uncertainties.

On the other hand, decentralized control for large-scale interconnected systems has been widely studied, because large-scale interconnected systems arise in such diverse fields as economic systems, electrical systems, and so on. A large number of results on decentralized control systems can be found in Šiljak [10]. A framework for the design of decentralized robust model reference adaptive control for interconnected time-delay systems has been considered in the work of Hua et al. [11], and the decentralized fault-tolerant control problem has also been studied [12]. Additionally, there are many existing results on decentralized robust control for large-scale interconnected systems with uncertainties (e.g., [13-16]). In Mao and Lin [13], for large-scale interconnected systems with unmodelled interactions, the aggregative derivation is tracked by using a model-following technique with online improvement, and a sufficient condition under which the overall system, when controlled by the completely decentralized control, is asymptotically stable has been established. Gong [15] has proposed decentralized robust controllers which guarantee robust stability with a prescribed degree of exponential convergence. Furthermore, some design methods of decentralized guaranteed cost controllers for uncertain large-scale interconnected systems have also been suggested [17,18].
In this paper, on the basis of the existing results [8,9], we propose a decentralized variable gain robust controller for a class of uncertain large-scale interconnected systems; that is, this study is an extension of the previous studies in this field. For the uncertain large-scale interconnected systems, the uncertainties and interactions satisfy the matching condition. The proposed decentralized variable gain robust control input is defined as a state feedback with a fixed feedback gain matrix designed by using the nominal subsystem, together with an error signal feedback with a fixed compensation gain matrix and a variable one determined by a parameter adjustment law. In this paper, LMI-based sufficient conditions for the existence of the proposed decentralized variable gain robust controller are derived. One can see that the crucial difference between the existing results and our new one is that the derived LMI conditions in this paper are always feasible; that is, the proposed decentralized variable gain robust control system can always be designed. Additionally, the number of LMIs that need to be solved equals the number of subsystems and is less than in the conventional decentralized robust control with a fixed feedback gain matrix based on the Lyapunov stability criterion. Therefore, the proposed decentralized robust control scheme is very useful.

This paper is organized as follows. Section 2 presents the notation and useful lemmas used in this paper; in Section 3, the class of uncertain large-scale interconnected systems under consideration is presented, and the uncertain error subsystems and a decentralized variable gain robust control input are introduced. Section 4 contains the main results; that is, LMI-based sufficient conditions for the existence of the proposed decentralized variable gain robust controller are derived. Finally, simple illustrative examples are included to show the effectiveness of the proposed decentralized variable gain robust control system.

Preliminaries

In this section, the notation and the useful, well-known lemmas (see [19,20] for details) used in this paper are shown.

In the paper, the following notation is used. For a matrix X, the inverse of X and the transpose of X are denoted by X^{-1} and X^T, respectively. Additionally, He{X} and I_n mean X + X^T and the n-dimensional identity matrix, respectively, and the notation diag(X_1, . . ., X_M) represents a block diagonal matrix composed of the matrices X_i for i = 1, . . ., M. For real symmetric matrices X and Y, X > Y (resp., X >= Y) means that X - Y is a positive (resp., nonnegative) definite matrix. For a vector x in R^n, ||x|| denotes the standard Euclidean norm and, for a matrix X, ||X|| represents its induced norm. The symbols "≜" and "⋆" mean equality by definition and symmetric blocks in matrix inequalities, respectively.

Lemma 1. For arbitrary vectors x and y and matrices G and H of appropriate dimensions, the following inequality holds:

2 x^T G Δ(t) H y <= 2 ||G^T x|| ||H y||,

where Δ(t), of appropriate dimensions, is a time-varying unknown matrix satisfying ||Δ(t)|| <= 1.0.

Lemma 2 (Schur complement). For a given constant real symmetric matrix Ξ partitioned as Ξ = (Ξ_11, Ξ_12; Ξ_12^T, Ξ_22), the following items are equivalent:
(i) Ξ > 0;
(ii) Ξ_11 > 0 and Ξ_22 - Ξ_12^T Ξ_11^{-1} Ξ_12 > 0;
(iii) Ξ_22 > 0 and Ξ_11 - Ξ_12 Ξ_22^{-1} Ξ_12^T > 0.

Problem Formulation

Consider the uncertain large-scale interconnected system composed of N subsystems described by the state equation (2), where x_i(t) in R^{n_i} and u_i(t) in R^{m_i} (i = 1, . . ., N) are the vectors of the state and the control input for the i-th subsystem, respectively, and the vector x(t) = (x_1^T(t), . . ., x_N^T(t))^T is the state of the overall system.
The uncertain matrices in (2) are given by (3); that is, the uncertainties and the interaction terms satisfy the matching condition. In (2) and (3), the matrices A_i, B_i, and A_ij denote the nominal values of the system, and the matrices D_i, E_i, and E_ij, with appropriate dimensions, represent the structure of the interactions or uncertainties. Besides, the unknown parameters belong to the ellipsoidal sets of (4). In (4), the matrices Σ_i and Σ_j represent the sizes of the ellipsoids and are defined as diagonal matrices whose entries are positive constants; that is, the matrices Σ_i and Σ_j are symmetric positive definite. Besides, the unknown parameter vectors are composed of the scalar components appearing in (4).

Now, the nominal subsystem, obtained by ignoring the unknown parameters and the interactions in (2), is given by (6), where x̄_i(t) in R^{n_i} and ū_i(t) in R^{m_i} are the vectors of the state and the control input for the i-th nominal subsystem, respectively. First of all, in order to systematically generate the desired trajectory in the time response for the uncertain i-th subsystem of (2), we adopt the standard linear quadratic control problem for the i-th nominal subsystem of (6). Of course, one can adopt some other design method for deriving the desirable response (e.g., pole placement). It is well known that the optimal control input for the i-th nominal subsystem of (6) can be obtained as (7). In (7), X_i in R^{n_i x n_i} is the symmetric positive definite matrix which satisfies the algebraic Riccati equation

A_i^T X_i + X_i A_i - X_i B_i R_i^{-1} B_i^T X_i + Q_i = 0,   (8)

where the weighting matrices Q_i in R^{n_i x n_i} and R_i in R^{m_i x m_i} are positive definite and are determined in advance so that the desirable transient behavior is achieved.

From the above discussion, our design objective in this paper is to determine the decentralized variable gain robust control of (9) such that the resultant overall system achieves not only robust stability but also satisfactory transient behavior; that is, to design the fixed gain matrix F_i in R^{m_i x n_i} and the variable one L_i(x_i, e_i, t) in R^{m_i x n_i} such that asymptotic stability of the overall error system composed of the N error subsystems of (11) is guaranteed.

Decentralized Variable Gain Controllers

The following theorem gives sufficient conditions for the existence of the proposed decentralized control system.

Theorem 3. Consider the uncertain error subsystem of (11) and the control input of (9). If the LMIs of (12) and (13) have solutions Y_i and W_i, then the overall error system composed of the N error subsystems of (11) is robustly stable.

Proof. In order to prove Theorem 3, let us define the quadratic function V(t) of (23), composed of the quadratic functions V_i^x(t) of (24) and V_i^e(t) ≜ e_i^T(t) S_i e_i(t) of (25). Since the state vector x_i(t) can be written as x_i(t) = x̄_i(t) + e_i(t), we can obtain the relation of (26) for the time derivative of the quadratic function V_i^x(t) of (24). Therefore, from the definition of the matrices G_i in (22), the inequality of (28) can be rewritten as (29). Furthermore, one can easily see that the relation of (30) for the quadratic function V_i^e(t) of (25) is satisfied.

Firstly, we consider the case of P_i e_i(t) ≠ 0. In this case, substituting the variable gain matrix of (14) into (29) and performing some algebraic manipulations give the inequality of (31). Thus, from the definition of the quadratic function V(t) of (23), the relation of (32) can be obtained, and the inequality of (32) can be rewritten as (33). If the matrix inequalities of (34) and (35) hold, then the inequality of (36) for the time derivative of the quadratic function V(t) of (23) is satisfied, where ζ(t) ≜ (e_1^T(t), . . ., e_N^T(t), x̄_1^T(t), . . ., x̄_N^T(t))^T.
Next, we consider the case of P_i e_i(t) = 0. In this case, one can see from (26) and (30) that the time derivative of the quadratic function V(t) of (23) can be written as in (36). Namely, in the case of P_i e_i(t) = 0, the relation of (36) is also satisfied, provided that the inequalities of (34) and (35) hold.

From the above, the overall error system is clearly guaranteed to be stable, because all nominal subsystems are asymptotically stable.

Finally, we consider the inequalities of (34) and (35). By introducing the matrices Y_i ≜ P_i^{-1} and W_i ≜ S_i^{-1}, and pre- and post-multiplying both sides of the matrix inequality of (34) by Y_i, we obtain the inequality of (38). Thus, by applying Lemma 2 (Schur complement) to (35) and (38), we find that the inequalities of (35) and (38) are equivalent to the LMIs of (13) and (12), respectively. Therefore, the proof of Theorem 3 is accomplished.

Remark 4. The positive constants in the LMIs of (12) and (13) are included in the (2,2)-block only. Namely, the positive constants are independent of the LMI variables Y_i and W_i. Thus, the positive constants can be selected arbitrarily, and one can see that the LMIs of (12) and (13) are always feasible. By using the solution of the LMIs of (12) and (13), the fixed compensation gain matrix is determined, and the variable one is given by (14).

Remark 5. The proposed decentralized robust controller design method is applicable when some assumptions are satisfied. Namely, if the matching condition for the uncertainties and interactions is satisfied, then the LMIs of (12) and (13) are feasible; that is, the proposed decentralized variable gain robust controller is applicable. Besides, in the proposed controller design method, the number of LMIs that need to be solved is N. On the other hand, the number of inequalities that need to be solved in the conventional decentralized robust control with fixed gain matrices is 2^M, provided that the number of unknown parameters included in the overall system is M (see the appendix). Therefore, one can see that the proposed controller design method is very useful.

For the numerical example, by solving the LMIs of (12) and (13), the fixed gain matrices F_i in R^{1x2} can be computed. Note that the number of LMIs that need to be solved in the conventional decentralized robust control with a fixed gain matrix is 2^9 = 512 (see the appendix). Namely, it is difficult to design the conventional fixed-gain decentralized robust controller compared with the proposed design method of the decentralized variable gain robust control. The result of this example is shown in Figures 1, 2, 3, and 4. In this example, initial values are set for the uncertain large-scale system with the system parameters of (39)-(42) and for the nominal system.

[Figures 1-4: time responses of the state variables x_i^(1)(t) and x_i^(2)(t) of the three subsystems (i = 1, 2, 3) for the uncertain large-scale system and the nominal system.]

From these figures, the proposed decentralized variable gain controller stabilizes the uncertain large-scale system with the system parameters of (39)-(42) in spite of the uncertainties and interactions. Besides, the proposed variable gain robust control system achieves a good transient response, close to the desired transient behavior generated by the nominal subsystems. Therefore, the effectiveness of the proposed decentralized robust control system has been shown.
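To make the nominal design step around the Riccati equation (8) concrete, here is a minimal Python sketch (not from the original paper): it solves the algebraic Riccati equation for one nominal subsystem and forms the corresponding LQ feedback gain. The matrices A1, B1, Q1, and R1 are placeholder values for illustration only; the paper's example matrices (39)-(42) are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder nominal data for one 2-state, 1-input subsystem
# (illustrative values only, not the paper's example system).
A1 = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
B1 = np.array([[0.0],
               [1.0]])
Q1 = np.eye(2)   # state weighting, positive definite
R1 = np.eye(1)   # input weighting, positive definite

# Solve A^T X + X A - X B R^{-1} B^T X + Q = 0, as in Equation (8).
X1 = solve_continuous_are(A1, B1, Q1, R1)

# Optimal LQ gain for the nominal subsystem: u(t) = -R^{-1} B^T X x(t).
K1 = np.linalg.solve(R1, B1.T @ X1)
print("X1 =\n", X1)
print("K1 =", K1)

# The closed-loop nominal subsystem matrix should have stable eigenvalues.
print("closed-loop eigenvalues:", np.linalg.eigvals(A1 - B1 @ K1))
```

The LMIs of (12) and (13) themselves could be handled in the same spirit with a semidefinite programming toolbox, but their exact block structure is not reproduced here.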
Conclusions

In this paper, on the basis of the work of Oya and Hagino [8], we have suggested a decentralized variable gain robust controller for a large-scale interconnected system with uncertainties. For the uncertain large-scale interconnected systems, we have presented an LMI-based design method for the proposed decentralized variable gain robust controller. In addition, the effectiveness of the proposed decentralized robust controller has been shown by simple numerical examples. One can easily find that, since the parameters in the LMIs of (12) and (13) can be selected arbitrarily, these LMIs can be solved easily. Namely, the proposed design approach is very useful compared with the conventional decentralized robust control.

In future research, we will extend the proposed controller synthesis to broader classes of systems, such as large-scale systems with general uncertainties, uncertain large-scale systems with time delays, and so on.
Capitalizing on Cellular Technology—Opportunities and Challenges for Near Ground Weather Monitoring

The use of existing measurements from a commercial wireless communication system as virtual sensors for environmental monitoring has recently gained increasing attention. In particular, measurements of the signal level of commercial microwave links (CMLs) used in the backhaul communication network of cellular systems are considered as opportunistic sensors for precipitation monitoring. Research results have demonstrated the feasibility of the suggested technique for the estimating and mapping of rain, as well as for monitoring other-than-rain phenomena. However, further advancement toward implementation and commercial use is heavily dependent on multidisciplinary collaborations: communication and network engineers are needed to enable access to the existing measurements; signal processing experts can utilize the different data for improving the accuracy and the tempo-spatial resolution of the estimates; atmospheric scientists are responsible for the physical modeling; hydrologists, meteorologists, and others can contribute to the end uses; economists can indicate the potential benefits; etc. In this paper I will review state-of-the-art results and the open challenges, demonstrating the benefit to the public good from utilizing the opportunistic-sensing approach. I will also analyze the various obstacles on the way there.

Introduction

The relation between the rain intensity R (in mm/h) and the attenuation of a microwave wireless signal A traveling in the atmosphere is relatively simple:

A = a R^b l,   (1)

where A is in dB, l is the path length (in km) of the link, and a, b are constants depending on the frequency and the polarization of the signal, as well as on the drop size distribution (DSD) of the rain, which is considered to be typical of an area. This relation is a simplified model of complex physical relations [1,2] which has empirically been found to be a good approximation for the rain-induced signal attenuation, for microwave frequencies and for links of length of about 0.5-20 km. Equation (1), which becomes linear (b = 1) for a certain choice of signal parameters, was first suggested in Reference [3]. This raised the idea of using microwave links (MLs) for rainfall measurements in the early 90s [4-6], and it was experimentally tested in a multinational European project [7]. However, as the installation of dedicated MLs is costly, in combination with their limited coverage and questionable accuracy, this idea has not spread.

In 2006, Messer et al. [8] first demonstrated the idea of taking advantage of the existing, widely spread, cellular communication technology, and used the MLs which are part of its backhaul network for environmental monitoring. While the relation (1) still forms the basis of the technique, the use of commercial microwave links (CMLs) instead of dedicated MLs, as in Reference [9], brings new opportunities as well as challenges. The major opportunity is obvious: the availability of millions of potential virtual meteorological sensors almost everywhere on Earth, with no costs for installation, maintenance, or communication. Since 2006, interest in this technology has rapidly increased and many research groups around the world are contributing to it. However, the fact that it has not yet been commercialized is indicative of the challenges its implementation poses. In this paper, I will review the most advanced CML technology and its future directions as regards becoming an operational environmental monitoring system.
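As a minimal illustration of Equation (1), the following Python sketch (my own illustration, not part of the paper) inverts the power law to obtain a path-averaged rain-rate estimate from a rain-induced attenuation value. The coefficients a and b used here are placeholders; in practice they depend on the frequency and polarization of the link and on the local DSD.

```python
import numpy as np

def rain_rate_from_attenuation(A_dB, length_km, a=0.12, b=1.1):
    """Invert A = a * R**b * l to get the path-averaged rain rate R (mm/h).

    A_dB      : rain-induced attenuation in dB (baseline already removed)
    length_km : CML path length in km
    a, b      : power-law coefficients (placeholder values; in practice
                taken from tables for the link's frequency and polarization)
    """
    A_dB = np.asarray(A_dB, dtype=float)
    specific_attenuation = A_dB / length_km          # dB/km along the path
    return (specific_attenuation / a) ** (1.0 / b)   # mm/h

# Example: a 5 km link observing 3 dB of rain-induced attenuation.
print(rain_rate_from_attenuation(3.0, 5.0))
```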
Materials and Methods

The CML-based weather monitoring technology depends on the availability of materials: measurements of the received signal level (RSL) and the transmitted signal level (TSL) from the microwave backhaul network of a cellular communication system. In most countries, a cellular company owns the infrastructure, so the required measurements are owned by a private company. While the use of measurements of the transmitted/received signal levels poses no risk to either the communication services or the privacy of the users, most cellular providers are reluctant to provide a third party access to their intra-network. On the other hand, researchers interested in CML technology have approached cellular companies and succeeded in receiving measurements. The following sections explore these protocols.

The Passive Approach

Manufacturers of backhaul transmission networks have implemented tools in their systems which monitor and log the signal levels of all links in the network. The tool, known as the network management system (NMS), produces RSL and TSL indicators which are automatically logged by the network operators (i.e., the cellular providers). The passive approach relies on the use of the already existing NMS records as inputs for the CML weather monitoring technology. The major advantage of the passive approach is that it puts neither burdens nor risks on the cellular providers, so it is relatively simple to get them to share this data. However, as the NMS data is kept for network monitoring, and in particular for monitoring the actual link budget, the RSL (and TSL) signals in the NMS go through a highly nonlinear process. Typically, only the minimum and the maximum RSL (and TSL) values, from the measurements taken over a window of 15 min, are stored. Moreover, these values are quantized at a 0.1-1 dB resolution. Furthermore, as they are mostly used for analysis, the NMS records are rarely available in real time. An example of a typical RSL time series from an NMS record is depicted in Figure 1.

The Active Approach

Modern microwave communication networks are remotely managed. That is, the network operators can access and query the status of the different CMLs remotely. Specifically, most CML hardware is connected to the provider's intra-network, and the simple network management protocol (SNMP) can be used to submit queries to the CMLs and receive the requested information. The active approach is to use the SNMP to collect RSL measurements dedicated to weather monitoring. A recent publication details this methodology and establishes a set of open-source tools which can be used to actively access the CMLs of the main manufacturers and receive the instantaneous RSL (and TSL) samples [9]. With this approach, RSLs (and TSLs) are available in real time as instantaneous samples, at sampling intervals that can be as small as 10 s. Note that, to avoid unnecessary traffic load for the cellular provider, the active approach also requires adding a designated server for handling the RSL measurements. Also, in most cases, the RSL and TSL samples collected by this approach still suffer from quantization, as the quantization process is a property (and a limitation) of the sampling hardware itself. CML measurements collected by the active approach are the most suitable for environmental monitoring, both because of the excellent temporal resolution and because of the lack of the highly nonlinear min/max processing of the NMS. Moreover, the availability of real-time measurements is most attractive for applications such as now-casting and flood prediction. However, the active approach requires a high level of involvement from the cellular provider, including permission for a third party to cross its firewall. Most providers are reluctant to allow this, as they see it as a potential risk to their main business, i.e., communication. Table 1 summarizes the two approaches.
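To illustrate the information loss introduced by the NMS min/max logging of the passive approach, here is a toy Python sketch (my own illustration, with synthetic values): it quantizes an instantaneous RSL series and reduces it to 15-min minimum/maximum indicators.

```python
import numpy as np

def nms_min_max(rsl, samples_per_window=90, step_dB=0.5):
    """Emulate NMS logging: per-window min/max of quantized RSL samples.

    rsl : 1-D array of instantaneous RSL samples (dB), e.g., one per 10 s,
          so 90 samples cover one 15-min logging window.
    """
    q = np.round(np.asarray(rsl) / step_dB) * step_dB   # 0.5 dB quantization
    n_win = len(q) // samples_per_window
    q = q[:n_win * samples_per_window].reshape(n_win, samples_per_window)
    return q.min(axis=1), q.max(axis=1)

# Synthetic day: slowly varying baseline around -45 dBm plus a rain dip.
t = np.arange(8640)                       # 10-s samples over 24 h
rsl = -45.0 + 0.3 * np.sin(2 * np.pi * t / 8640)
rsl[3000:3400] -= 6.0                     # rain-induced extra loss
rsl_min, rsl_max = nms_min_max(rsl)
print(rsl_min.shape)                      # (96,) 15-min windows
```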
Methods

If instantaneous RSL/TSL measurements are available (as in the active approach), the total attenuation (in dB) of the CML's signal can be extracted by subtracting one from the other, and it can be described by [10]:

A_T(t, l) = A_0 + A_P(t, l) + A_H(t) + A_C(t) + N(t),   (2)

where t is the time index and, as in Equation (1), l is the length of the link. A_0 is the frequency-dependent propagation loss, which is constant over time. A_P is the attenuation caused by precipitation, if present; for rain, for example, A_P equals A of Equation (1). A_H is the attenuation due to hydrometeors other than precipitation, e.g., fog or water vapor; this component changes slowly over time, while being small compared with A_P, if present. N(t) represents the measurement noise, and A_C(t) is a component which represents the disruption of the signal by other objects, if present. Note that Equation (2) is a simplified description of the heavily studied physics of the attenuation of a signal propagating in the atmosphere [11,12], which has a great effect on the performance of communication systems [13].

Depending on the application, the first stage is to isolate A_P(t, l) (for precipitation monitoring) or A_H(t) (for monitoring other-than-rain atmospheric phenomena). This is commonly done by using side information, or by taking advantage of the built-in diversity of CML technology, as there are usually multiple CMLs of different lengths and frequencies in a given area (e.g., Reference [14]). The next step is to estimate the parameter of interest (e.g., rain rate) from the corresponding term, using the known relation between the signal attenuation and the phenomenon of interest. A flowchart of this possible process is presented in Figure 2.

Obviously, measurements collected by the passive approach are far from ready to use for this analysis. The instantaneous total attenuation A_T(t, l) required by Equation (2) cannot be extracted from the min/max indicators of the TSL/RSL. A systematic approach for extracting rain-rate estimates from extreme measurements provided by the passive approach is described in Reference [15] and is depicted in Figure 3.

[Figure 3: rain estimation from extreme measurements, where a power-law (PL) is adjusted to the maximum attenuation (after J. Ostrometzky [15]).]
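As a toy end-to-end illustration of this processing chain (my own sketch, not the paper's implementation), the following Python code computes the total attenuation of Equation (2) from TSL/RSL samples, removes a crude dry-weather baseline standing in for A_0 and the slowly varying non-rain terms, and inverts the placeholder power law of Equation (1) to obtain rain-rate estimates.

```python
import numpy as np

def estimate_rain_rate(tsl_dB, rsl_dB, length_km, a=0.12, b=1.1):
    """Toy processing chain for instantaneous (active-approach) samples.

    tsl_dB, rsl_dB : arrays of transmitted/received signal levels (dB)
    length_km      : CML path length (km)
    Returns an array of rain-rate estimates (mm/h).
    """
    A_total = np.asarray(tsl_dB) - np.asarray(rsl_dB)   # A_T(t, l), Eq. (2)

    # Crude baseline: a low percentile of the record stands in for the
    # constant loss A_0 plus the slowly varying non-rain terms.
    baseline = np.percentile(A_total, 10)
    A_rain = np.clip(A_total - baseline, 0.0, None)     # keep A_P >= 0

    # Invert Eq. (1): A_P = a * R**b * l  (placeholder coefficients).
    return (A_rain / (a * length_km)) ** (1.0 / b)

# Example with synthetic data: constant TSL, RSL dipping during a rain event.
tsl = np.full(100, 10.0)
rsl = np.concatenate([np.full(40, -30.0),
                      np.full(20, -34.0),   # 4 dB extra loss during rain
                      np.full(40, -30.0)])
print(estimate_rain_rate(tsl, rsl, length_km=5.0).max())
```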
Results

Since first introduced in 2006, research groups from different disciplines have started to study this technology, and dozens of peer-reviewed papers have been published. The references include many selected publications of studies which are CML measurement-based. In general, these papers can be divided into four groups:

1. Papers in which the capabilities of CML technology for environmental monitoring have been demonstrated (see Table 2).
Naturally, the foremost potential is attributed to the near-ground rainfall monitoring capability. Several papers demonstrated the CML as a rainfall sensor, and many CMLs as a sensor network capable of 2D rainfall mapping. Later, other papers demonstrated the use of CMLs for monitoring other-than-rain phenomena, including humidity, fog, dew, snow and sleet, and even wind and air pollution (indirectly).

Table 2. Demonstrated monitoring capabilities.
Atmospheric phenomenon: Reference
Rainfall sensing: [8,16-18]
Rainfall mapping: [8,19]
Humidity sensing: [20]
Fog sensing: [21-23]
Precipitation classification: [24]
Dew detection: [25]
Wind estimation: [26]
Air pollution detection: [27]

2. The next step was to study the accuracy of CMLs as virtual rainfall sensors. Since cellular networks have been designed to operate optimally for efficient telecommunication service and not for measuring rain (or other atmospheric variables), their opportunistic use for rain monitoring is challenging, since the network must be taken as is. Table 3 presents a summary of the major contributions to errors and uncertainties analysis. The analysis aims at quantifying the different sources of error and their effect on the resulting rain estimates. Generally speaking, the uncertainties can be put into two groups. The first is related to physical, atmospheric effects, e.g., the wet antenna, which causes attenuation that may read as a higher rain-intensity value in Equation (1) if not properly handled. The second group consists of errors caused by the opportunistic use of existing technology not aimed at atmospheric monitoring. This may include signal quantization and non-linear pre-processing (applied to the signal for efficient network management), as well as errors resulting from the non-optimal, given spatial spread of links and frequencies in the CML network when it is used for atmospheric monitoring.

Table 3. Errors and uncertainties analysis (sources of errors and references, e.g., [42]).

3. In Table 4, a list of papers suggesting algorithms for rainfall monitoring is presented. As the main opportunity of CML technology is in near-ground, bottom-up rain mapping, most algorithms are focused on this. The straightforward approach is to treat each CML as a local point measurement and to interpolate the local measurements to a grid, using standard spatial interpolation techniques (e.g., inverse distance weighting (IDW), Kriging, etc.); a minimal sketch of this interpolation step is given after Table 4. On the basis of this approach, open software tools were developed [43,44]. More advanced algorithms have been developed by signal processing experts, in which the tempo-spatial resolution of the rainfall maps, their accuracy, and their coverage have been improved by exploiting the spatial spread of the CML measurements. Different authors used different approaches, such as: an iterative approach in which the variability of rain along the links is exploited [19]; a compressed sensing approach [45,46]; a model-based, parametric approach; a tomographic approach [47]; and dynamic mapping [48,49]. The main future challenge is to improve CML rainfall maps by merging them with other types of measurements (mostly radar), where these exist (see Reference [50] for a review of this issue).

Table 4. Algorithms and tools.
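The following Python sketch (my own illustration) implements the straightforward IDW gridding mentioned in item 3 above, under the simplifying assumption that each CML is reduced to a point estimate located at its midpoint.

```python
import numpy as np

def idw_rain_map(link_xy, link_rain, grid_x, grid_y, power=2.0, eps=1e-9):
    """Inverse-distance-weighted rain map from per-link point estimates.

    link_xy   : (N, 2) array of link midpoint coordinates (e.g., km)
    link_rain : (N,) array of rain-rate estimates per link (mm/h)
    grid_x/y  : 1-D arrays defining the output grid
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    grid_pts = np.stack([gx.ravel(), gy.ravel()], axis=1)      # (G, 2)
    # Pairwise distances between grid points and link midpoints: (G, N)
    d = np.linalg.norm(grid_pts[:, None, :] - link_xy[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    rain = (w @ link_rain) / w.sum(axis=1)
    return rain.reshape(gx.shape)

# Example: three links with estimated rain rates, mapped onto a 10 km grid.
links = np.array([[2.0, 3.0], [5.0, 7.0], [8.0, 2.0]])
rates = np.array([1.5, 6.0, 0.0])
rmap = idw_rain_map(links, rates, np.linspace(0, 10, 50), np.linspace(0, 10, 50))
print(rmap.shape)  # (50, 50)
```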
Conclusions

The CML environmental monitoring technology was introduced and has developed into academic research. By negotiating with local cellular providers, multidisciplinary research groups all over the world have received access to CML measurements in their countries, mostly at no cost, and are studying different aspects of this technology. Most of these groups are now collaborating and sharing experience, tools and knowledge in different ways, so a new scientific community has been built. While the achievements of this community are impressive, the road ahead is challenging.

The Commercialization Challenge

Proving the feasibility of CML technology and having an active scientific community are most important for the sustainability and future advancement of this emerging technology. However, the next step in the journey is technology transfer for the public good. The established research is an important component, but it is not sufficient. A necessary condition for CML technology to be used is to ensure access to measurements. Fortunately, the changes in the communication markets push cellular companies to look for new business, so they are now more open to exploring the potential of creating revenue from CML technology. Another part of this equation is the market itself, in which measurements and (big) data of any kind become valuable assets. Multinational companies such as IBM™ and Google™ are now interested in weather, and CML technology is the best source of weather-related (big) data. Note, however, that sustainable access to the measurements is a key issue for the commercial use of CML technology, and a necessary pre-condition for its practical use.

Potential Use

The vast research reviewed in this paper indicates the great potential of CML technology in future environmental monitoring, once the availability of measurements is granted. Potential uses can be divided into three main families:

a. Covering blind spots. There are areas where almost no near-ground measurements are available. One such example includes country-wide areas in developing countries, such as in Africa [60,63]. Other examples are local, and include specific challenging landscapes such as slopes and urban areas, where traditional ground weather stations are known to be less reliable. Even with the limited accuracy of CML technology, in cases where there is no alternative, its potential is extremely important.

b. Improving monitoring accuracy. Even in areas where the coverage of conventional weather-monitoring facilities (e.g., gauges and radar) is good, the use of additional ground-level measurements can improve performance. The potential improvement highly depends on the topology of the network (e.g., its density) and on the temporal resolution of the available measurements.

c. Improving models. Complex meteorological and hydrological models, used for forecasting, are continuously improved by comparing their predictions to actual measurements. CML technology offers a new dimension of data to be assimilated into such models.

Limitations

As discussed in Reference [53], cellular networks have been designed and dimensioned to operate optimally for efficient telecommunication service and not for measuring rain (or other atmospheric variables). The design of microwave links as designated sensors for the observation of the lower atmosphere would be very different. This raises an intriguing scientific challenge for the research community. Signal processing and machine learning algorithms are being developed to overcome limitations in the measurements. Moreover, the question of the potential improvement in performance resulting from using CML measurements is an open, and important, one. If it can be theoretically proven that the potential performance improvement is significant, then the motivation to use CML technology will be higher.
Note, however, that performance can be defined in different ways, depending on the application. It can be the accuracy of measuring total rainfall in a given area and time slot, the accuracy of measuring the instantaneous rain rate at a given point, the accuracy of the spatial, 2-D representation of the rain in a given time slot or period, etc. The analysis and the results will depend on the CML network as well as on the characteristics of the atmospheric phenomenon under study (e.g., the spottiness of the rain). In Reference [42], for example, the achievable spatial resolution of rain mapping was studied and characterized as a function of the sparsity of the rain, as well as of the statistical features of the CMLs in the area.

A Test Case for Opportunistic Sensing of the Environment

CML technology has recently been mentioned as one of the first Internet of Things (IoT) applications [85], a field which is now getting much attention. While the conditions for CML technology becoming useful seem to be right, and we may soon see it in products as well as in public-good services, it also serves as a pioneering example of the new trend of capitalizing on existing technology by utilizing it for non-intended, opportunistic use [86-88]. Opportunistic sensing is believed to be the future of environmental monitoring, being a sustainable source of (big) environmental data. The analyses provided in this paper for the case of CML technology can serve as a test case for this emerging trend of opportunistic sensing, demonstrating the need for open bi-directional communication channels between the world of academia, where innovative ideas are initiated and studied, and contemporary industry, which serves as a source of opportunistic measurements as well as a platform for the utilization of new ideas.

To conclude: the uniqueness of CML technology stems from its special situation, standing between science and technology, between academia and industry. Future development of this technology and its potential use in practice depend on business challenges as well as on science and technology. The focus of this paper has been to review the most advanced developments in CML technology and to anticipate its future development, based on an analysis of the opportunities and challenges faced by researchers in this area over the years. Depending on the future evolution of CML technology, it will be important to provide a deep, critical scientific review of this technology, including meticulous scientific background on the different sources of the signal's attenuation, a comparative analysis of the different algorithms, etc.

Funding: This research is based on an integration of works and has received no specific external funding.
The Relationship Between Head Motion Synchronization and Empathy in Unidirectional Face-to-Face Communication

Embodied synchronization is widely observed in human communication and is considered to be important in generating empathy during face-to-face communication. However, the quantitative relationship between body motion synchronization and degree of empathy is not fully understood. Therefore, we focused on head motion to investigate phase and frequency differences in head motion synchronization in relation to degree of empathy. We specifically conducted a lecture-based experiment using controlled spoken text divided into two parts: high empathy and low empathy. During the lecture, we measured the acceleration of speakers' and listeners' head motions using an accelerometer, and calculated the synchronization between the time-series data from their acceleration norms. The results showed greater head motion synchronization during high empathy. During high empathy, the speakers' head motions began before those of the listeners' in the medium (2.5 to 3.5 Hz) and high (4.0 to 5.0 Hz) frequency ranges, whereas during low empathy the speakers' head motions tended to start later than those of the listeners' in the low (1.0 to 2.0 Hz) and medium (2.5 to 3.5 Hz) frequency ranges. This suggests that the degree of empathy is reflected in a different relationship between the phase and frequency of head motion synchronization during face-to-face communication.

INTRODUCTION

Non-verbal communication channels play an important role in sharing emotional information during human communication. For instance, synchronization of non-verbal behaviors occurs in various forms of social communication, such as between a mother and infant (Meltzoff and Moore, 1983; Bernieri et al., 1988), physician and patient (Koss and Rosenthal, 1997), teacher and student (Bernieri, 1988; Lafrance and Broadbent, 1988), and psychological counselor and client (Ramseyer and Tschacher, 2006; Koole and Tschacher, 2016). In addition, synchronization of non-verbal behaviors has psychologically positive effects (Tickle-Degnen and Rosenthal, 1990; Hall et al., 2012). For instance, body synchronization between a counselor and a client represents their mutual empathy and relates to their level of satisfaction with counseling (Komori and Nagaoka, 2008; Ramseyer and Tschacher, 2014). In a group activity, when a group of students is rhythmically synchronized, they feel rapport with their groupmates, a sense of belonging to the group, and a strong sense of unity (Lakens and Stel, 2011). Body movement imitation between a teacher and students in educational settings leads to higher levels of rapport and greater satisfaction with learning outcomes (Duffy and Chartrand, 2015). Bavelas et al. (1986) and Koehne et al. (2016) reported that greater levels of synchronization are linked to empathy. In addition to the social context, neuroscientific bases have been identified for non-verbal behavior synchronization, such as synchronization of brain activities among participants during successful communication (Stephens et al., 2010) and correlations between body movement and brain activity (Yun et al., 2012). However, the quantitative relationship between body motion synchronization and degree of empathy is not fully understood.
Therefore, to investigate this relationship in greater detail, we analyzed changes in body motion synchronization in relation to the degree of empathy during face-to-face communication, given that body motion is believed to change unconsciously with emotional states (Lakin, 2006; de Waal, 2007; Niedenthal, 2007; Richmond et al., 2008). Clarifying the relationship between changes in physical indicators and degree of empathy would improve interpretation of the cognitive relationship between body motion synchronization and degree of empathy from a physical aspect. Because body motions give different impressions depending on their speed and generation timing (Mehrabian and Williams, 1969; Miller et al., 1976), we specifically analyzed body motion using a set of physical indicators, including frequency and phase difference. We hypothesized that the phase and frequency relationships of body motion synchronization would change according to the degree of empathy during face-to-face communication.

Here, we focused on measuring participants' head motion acceleration changes while in a seated position. An experiment with a unidirectional (i.e., speaker to listener) face-to-face lecture task was conducted. To manipulate the listeners' state of empathy during the story, the lecture material was divided into two parts: "low empathy" and "high empathy." The high empathy part was intended to facilitate listeners' empathy with the story, in contrast to the low empathy part. After the lecture task, listeners evaluated their degree of empathy with each part of the story. We then statistically compared the incidence of head motion synchronization between speakers and listeners during the low and high empathy conditions. To detect and analyze the phase and frequency differences of head motion synchronization, time-series data for each participant's head motion acceleration were collected using an accelerometer; short-time Fourier analysis and correlation analysis were then used to examine the time-series acceleration norms.

Ethics Statement

Our experimental protocol was approved by the Ethics Committee of the Tokyo Institute of Technology, and participants were recruited from the Tokyo Institute of Technology. All participants were briefed about the experimental procedures and gave written informed consent prior to participation. The methods were carried out in accordance with the approved guidelines. Informed consent was obtained for publication of identifying images.

Participants

Forty-eight Japanese adults (22 males, 26 females) were recruited via public advertisements and grouped into 24 same-sex pairs (11 male pairs, 13 female pairs). In each pair, one participant was assigned as the speaker and the other as the listener. The participants in each pair had an age difference of no more than 5 years and did not know one another. After the experiment, we checked whether any participant had previously known the content of the material, and excluded one female pair on this basis; none of the other participants had ever read the material before. Interactions between participants before the experiment were not allowed, to avoid a familiarity effect between participants.

Lecture Material

The lecture material, entitled "The meaning of life that our predecessors considered academically," was adapted from Wikipedia's (2013) Japanese article "The meaning of life." The material was divided into two parts, low and high empathy.
The low empathy part related to philosophers' opinions about the meaning of life, including conceptual and complex sentences and words coined by the philosophers. In contrast, the high empathy part related to psychologists' and sociologists' opinions about the meaning of life using concrete, simple sentences; authors' names and jargon were deleted from the high empathy sections to make the material easier to understand. The lecture transcript is included in Appendix 1. Each of the two parts comprised about 650 Japanese characters. We prepared two versions of the lecture material to eliminate an order effect: version 1 presented low empathy followed by high empathy; version 2 used the reverse order (Appendix 1 is version 2). For both versions, an identical introduction before the lecture material and an identical conclusion at the end were added. Pairs were randomized to receive version 1 or version 2. After the experiment, listeners retrospectively watched a video of the lecture and, at 30-s intervals, rated their degree of empathy on a questionnaire using a visual analog scale (Figure 1). The results indicate that listeners felt more empathy during the high empathy condition than during the low empathy condition [t(22) = 6.55, *p < 0.01].

Figure 2 shows the experiment room setup. The speaker and the listener sat in chairs on either side of a table, 0.9 m apart. Environmental factors such as brightness, noise, temperature, and humidity were set at levels suitable for the experiment. A book stand was placed on the table so that the speaker could easily read the lecture material. Wireless accelerometers (WAA-006, Wireless Technology, Inc., Tokyo, Japan; sampling rate: 100 Hz) were attached to the speaker's and listener's foreheads with a rubber band to measure time-series data of their head motion accelerations. Because these devices are sufficiently small and light, they did not interfere with participants' natural movements. The positioning of the accelerometer was based on the kinematic observation that, in a sitting position, the head moves more frequently than other body parts.

Experiment Procedure

We conducted a lecture experiment using unidirectional face-to-face communication. During the experiment, speakers conveyed the lecture material to listeners (Figure 3). The speaker was instructed to read the lecture material in a natural way, which took about 5 min. During the experiment, time-series data on the head motion acceleration of both the speaker and the listener were measured by their accelerometers. The experimental procedures were as follows.

(1) An investigator explained the experiment to each participant pair in an experiment room. One person in each pair was randomly assigned to the speaker role, and the other to the listener role.
(2) The listener remained in the experiment room, while the speaker rehearsed the lecture material in a natural way for about 5 min in a separate room. The lecture material was set on a book stand. After the speaker had rehearsed the material, the investigator sat at a table opposite the speaker and checked whether their reading was clearly audible.
(3) The speaker returned to the experiment room. Both the speaker and the listener had an accelerometer and microphone positioned.
(4) The investigator cued the participants to begin the experiment and then left the experiment room.
(5) During the experiment, the speaker read aloud to the listener, referring to the material set on the book stand as needed.
(6) After the experiment, the speaker and the listener removed their devices.

Analysis of Head Motion Synchronization

We analyzed the time-series data on head motion acceleration of each participant to detect synchronization, using the method described by Thepsoonthorn et al. (2016). This method consists of three main steps.

Step 1: Short-time Frequency Analysis of Acceleration Norms of Head Motion

First, the norm $|a(t)|$ of each participant's triaxial acceleration $(a_x(t), a_y(t), a_z(t))$ was calculated as

$|a(t)| = \sqrt{a_x^2(t) + a_y^2(t) + a_z^2(t)}$,

where the time resolution was set to 0.01 s. Figure 4A shows an example of time-series data on the acceleration norm of a participant. Next, a short-time Fourier transform (STFT) was applied to the time-series acceleration data as

$F(\nu, t) = \int |a(\tau)|\, \omega(\tau - t)\, e^{-2\pi i \nu \tau}\, d\tau$,

where $\nu$ is the frequency in Hz, $\omega(t)$ is a Hamming window function, and $t$ is the central time of the window function. In this study, the window width was set to 1.28 s and the window shift to 0.1 s. Next, linear interpolation was conducted with respect to the frequency $\nu$. Figure 4B shows an example of the STFT within a frequency band of 1.0-5.0 Hz; the darker shade represents a higher amplitude spectrum value. For detection of head motion synchronization, the amplitude spectrum was extracted every 0.5 Hz, that is, at 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, and 5.0 Hz. Figure 4C shows the 3.0 Hz amplitude spectrum as an example.

Step 2: Detection of Head Motion Synchronization

The head motion phase difference between the speaker and listener was evaluated using Spearman's rank correlation to detect head motion synchronization. Synchronization between two participants during a tapping task is possible only for temporal intervals of tapping sounds within a range of 200-1800 ms (Fraisse, 1982). Therefore, in this study, the window width was set to 1.8 s, and the frame shift of the window was set to 0.1 s. In addition, phase differences within a range of −0.5 to +0.5 s were examined at a temporal interval of 0.1 s. This range was based on a report that, in a positive psychotherapeutic session between a client and therapist, the therapist's body motions occur with a 0.5-s delay (Komori and Nagaoka, 2011), and on a report that synchronization between infant movements and adult speech occurs at a phase difference of 0.05 ± 0.2 s (Kato et al., 1983). There were two criteria for detection of head motion synchronization. First, the amplitude spectrum of each participant's head motion had to exceed the 90% level of the amplitude spectrum over the whole experiment for that pair. Second, for windows satisfying the first criterion, Spearman's rank correlation had to be positive and statistically significant. Figure 4D shows an example of head motion synchronization, where synchronization is represented by the black vertical lines.

Step 3: Calculation of Mean Value and Test of Incidence of Head Motion Synchronization

After detection of head motion synchronization for each pair, the utterance lengths of the pairs were adjusted. The mean incidence of head motion synchronization at each phase and frequency was then calculated over all pairs (Figures 5, 6). Finally, each condition was divided into six areas of the same size, and a significance test of the mean values was conducted using Wilcoxon's signed-rank test for each area (Figure 7).
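To make Steps 1 and 2 concrete, the following is a minimal Python sketch of the analysis pipeline. It is not the authors' published code: the function names, the array layout, and the reading of the first criterion as a 90th-percentile amplitude threshold are our assumptions.

```python
# Minimal sketch of Steps 1-2, assuming triaxial acceleration sampled at
# 100 Hz as an (N, 3) NumPy array. Names and the percentile-based reading
# of the first synchronization criterion are illustrative assumptions.
import numpy as np
from scipy.signal import stft
from scipy.stats import spearmanr

FS = 100                              # accelerometer sampling rate (Hz)
WIN = int(1.28 * FS)                  # 1.28 s STFT window
HOP = int(0.10 * FS)                  # 0.1 s window shift
BANDS = np.arange(1.0, 5.01, 0.5)     # 1.0-5.0 Hz, every 0.5 Hz

def band_amplitudes(acc_xyz):
    """Step 1: acceleration norm -> STFT amplitude, interpolated to BANDS."""
    norm = np.linalg.norm(acc_xyz, axis=1)            # |a(t)|
    f, t, Z = stft(norm, fs=FS, window='hamming',
                   nperseg=WIN, noverlap=WIN - HOP)   # Hamming window
    amp = np.abs(Z)                                   # amplitude spectrum
    # linear interpolation in frequency onto the 0.5 Hz grid
    interp = np.vstack([np.interp(BANDS, f, amp[:, k])
                        for k in range(amp.shape[1])]).T
    return interp, t                                  # (n_bands, n_frames)

def detect_sync(a_spk, a_lis, lag, win=18, alpha=0.05):
    """Step 2: lagged Spearman correlation on one frequency band.

    a_spk, a_lis: amplitude series on the 0.1 s frame grid; `lag`, in
    frames of 0.1 s (-5..+5 for -0.5..+0.5 s), shifts the listener series.
    """
    if lag >= 0:
        s, l = a_spk[:len(a_spk) - lag], a_lis[lag:]
    else:
        s, l = a_spk[-lag:], a_lis[:lag]
    # first criterion: windows must reach the top-10% amplitude level
    th_s, th_l = np.percentile(a_spk, 90), np.percentile(a_lis, 90)
    sync = np.zeros(len(s), dtype=int)
    for t0 in range(len(s) - win + 1):
        ws, wl = s[t0:t0 + win], l[t0:t0 + win]       # 1.8 s window
        if ws.max() < th_s or wl.max() < th_l:
            continue
        rho, p = spearmanr(ws, wl)
        if rho > 0 and p < alpha:                     # second criterion
            sync[t0:t0 + win] = 1                     # mark synchronized
    return sync
```

Averaging the per-frame labels from `detect_sync` over frames, lags, bands, and pairs would yield incidence maps of the kind shown in Figure 5.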
RESULTS

Figure 5 shows the results of the head motion synchronization analysis, where the horizontal axis represents frequency and the vertical axis represents the phase difference of synchronization between the speaker and listener. On the vertical axis, positive values indicate that the listener's head moved an instant before the speaker's, while negative values mean the converse. In this analysis, we performed an STFT analysis to obtain the power spectrum of the acceleration norm of participants' head motions from the time-series data on head motion acceleration. We then calculated Spearman's rank correlation coefficient to identify head motion synchronization with a binary label, where "1" indicates synchronization and "0" means no synchronization. Finally, we calculated the mean incidence of head motion synchronization across the 23 pairs from the time-series data. In the figures, the incidence of head motion synchronization is illustrated by a continuous spectrum of colors from red to blue.

Figure 5A shows the incidence of head motion synchronization during the low empathy condition. Here, the incidence of head motion synchronization had large values in the region from ∼1.0 to ∼3.0 Hz in frequency and from 0.0 s to about −0.50 s in phase difference. Figure 5B shows the incidence of head motion synchronization during the high empathy condition. Here, the incidence of head motion synchronization had large values in the region from ∼1.5 to ∼5.0 Hz in frequency and from ∼0.2 s to about −0.50 s in phase difference.

FIGURE 5 | Incidence of head motion synchronization for low and high empathy conditions, illustrated by a continuous spectrum of colors from red to blue. (A) Mean incidence of head motion synchronization during low empathy (n = 23). (B) Mean incidence of head motion synchronization during high empathy (n = 23).

Figure 6 shows differences in the incidence of head motion synchronization between the low and high empathy conditions. Areas with relatively large positive differences in the incidence of head motion synchronization are evident in the high-frequency region. Next, we divided this figure into six representative areas to confirm the distribution tendency of synchronization during the high and low empathy conditions. From a physical viewpoint, frequency represents the speed of head motion, which is commonly divided into low, middle, and high ranges. Phase difference, on the other hand, represents the temporal order of participants' head motions and is therefore reasonably divided into positive and negative values. This grouping helps us interpret the differences in the phase and frequency relationships of head motion synchronization corresponding to different listener empathic states from the viewpoint of perception in face-to-face communication.

FIGURE 6 | Difference in incidence of head motion synchronization between low and high empathy conditions. The dashed line shows the six areas of comparison between the high and low empathy conditions.

Figure 7 shows Wilcoxon's signed-rank test for each area. A p-value < 0.01 was considered statistically significant (*p < 0.01).
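As a hedged sketch of this area-wise comparison (again, our illustration rather than the authors' code; the array names, shapes, and area indices are assumptions), each pair contributes one mean incidence per area and condition, and the paired per-pair values are compared with Wilcoxon's signed-rank test:

```python
# Sketch of the Step 3 comparison: per-pair mean synchronization incidence
# in one phase x frequency area, low vs. high empathy, compared with
# Wilcoxon's signed-rank test. `inc_low`/`inc_high` are assumed arrays of
# shape (n_pairs, n_lags, n_bands) holding mean sync incidence per cell.
import numpy as np
from scipy.stats import wilcoxon

def compare_area(inc_low, inc_high, lag_sl, band_sl):
    low = inc_low[:, lag_sl, band_sl].mean(axis=(1, 2))   # one value/pair
    high = inc_high[:, lag_sl, band_sl].mean(axis=(1, 2))
    return wilcoxon(low, high)                            # paired test

# Placeholder data: 23 pairs, 11 lags (-0.5..+0.5 s), 9 bands (1.0..5.0 Hz).
rng = np.random.default_rng(0)
inc_low = rng.random((23, 11, 9))
inc_high = rng.random((23, 11, 9))

# Example: "area b" (speaker-led, i.e., negative lags; medium 2.5-3.5 Hz),
# assuming lags ordered -0.5..+0.5 s and bands ordered 1.0..5.0 Hz.
stat, p = compare_area(inc_low, inc_high, slice(0, 5), slice(3, 6))
print(f"area b: W = {stat:.1f}, p = {p:.3f}")
```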
This result indicates that in the negative phase relationship areas, speaker-led head motion synchronization at medium (area b) and high (area c) frequencies was statistically more common during high empathy (area b: n = 23, Z = −3.32, *p < 0.01; area c: n = 23, Z = −3.38, *p < 0.01). In contrast, in the positive phase relationship areas, listener-led head motion synchronization at low (area d) and medium (area e) frequencies was statistically more frequent in the high empathy part (area d: n = 23, Z = −3.32, *p < 0.01; area e: n = 23, Z = −3.15, *p < 0.01).

DISCUSSION

To investigate the hypothesis that the phase and frequency relationships of body motion synchronization change according to the degree of empathy during face-to-face communication, we conducted a lecture task experiment using a controlled script divided into high and low empathy sections. During the randomly assigned speaker's lecture to a listener, we measured the acceleration of both the speaker's and the listener's head motions using an accelerometer and analyzed their synchronization from the time-series data on acceleration norms. The statistical analyses in Figure 7 show that head motion synchronization between the speaker and the listener occurred more often during the high empathy portion of the lecture than during the low empathy portion, in all areas except a and f. In other words, in the negative phase difference region, where the speaker's phase leads, there was a significant difference in the incidence of head motion synchronization within a medium frequency area (area b) and a high frequency area (area c), while in the positive phase difference region, where the listener's phase leads, there were significant differences in incidence within the low frequency area (area d) and the middle frequency area (area e). In addition, Figure 5B shows that during high empathy, speakers' head motions started earlier than listeners', although there were also cases in which speakers started about 0.1-0.2 s later than the listeners. Based on these results, we focus on the relationship between the phase and frequency relationships of head motion synchronization and the degree of empathy.

A listener's head motions are known to indicate their comprehension of communication content and to prompt the speaker to make the next utterance, thus encouraging smooth communication (Eibl-Eibesfeldt, 1972; Morris, 1977). In contrast, a speaker's head motions are often seen in conjunction with catching the listener's eye at the end of an utterance, to confirm that the listener has heard the utterance and understood its content (Hadar et al., 1983). Furthermore, head motions are a conversational behavior used to show politeness as part of a communication strategy (Ohashi, 2013). In our experiment, the high empathy condition consisted of more concrete, everyday, easily understood sentences than the low empathy condition. Therefore, the high empathy section induced head motion synchronization between the speaker and listener to facilitate smooth conversation. This is consistent with the report that conversational smoothness is experienced during synchronized conversation (Chartrand and Bargh, 1999). Regarding the relationship between the head motion synchronization phase difference and degree of empathy, Figures 5B, 7 show that, compared with low empathy, during high empathy speaker-led synchronization occurred more frequently within a middle frequency area (area b: 2.5 to 3.5 Hz) and a high frequency area (area c: 4.0 to 5.0 Hz).
This suggests that when a listener is in a highly empathic state, high frequency head motion might occur because it signals empathy with the story being read aloud by the speaker. Thus, if the speaker perceives the listener's high frequency head motions, they might perceive the listener's interest in addition to their understanding of the content. Furthermore, we found that listener-led synchronization occurred more frequently during the high empathy part of the lecture than during the low empathy part (see areas d and e in Figures 6, 7). This suggests that high empathy content may enhance listeners' predictive head motions, which send a positive signal about comprehension and interest and make the speaker comfortable. The speaker would probably experience the gap between utterances as comfortable because they could receive a positive signal that the listener understood, without needing a long interval for confirmation. Indeed, it has been shown that during telephone dialogs, participants' predictive movements were important to their temporal comfort (Campbell, 2007). In other words, when the listener's degree of empathy was high, it is conceivable that the listener predicted the timing of events such as sentence breaks, at which the speaker generated a head motion for confirmation, and made their own head motion according to this timing. As an example of this type of timing of non-verbal motion, the occurrence of blinking tends to synchronize with the end of an utterance (Nakano and Kitazawa, 2010). These reports show a similar tendency, in that listeners' predictive head motions gave speakers a positive impression in unidirectional communication, such as the lecture task in this study.

In this study, we investigated the relationship between head motion synchronization and degree of empathy, and inferred that the degree of empathy might be reflected in the phase and frequency relationships of head motion synchronization. However, there are some limitations to our study. (1) Our definition of empathy: although we used this term in relation to the lecture material, empathy has many definitions that encompass a wide range of emotional states, including empathy for another person. In future work, it will therefore be necessary to establish a new experimental setting to examine the listener's degree of empathy with the speaker. (2) Dependence on the content of utterances: similar experiments should be conducted with different lecture material to determine whether similar results are found. (3) Directionality of speech and the restrictions of a script: in daily life, dialog is typically bidirectional and usually unscripted; hence, descriptive understanding such as that used in this study becomes more challenging. (4) Culture dependence of non-verbal behavior: since non-verbal behavior is highly culture-dependent (Eibl-Eibesfeldt, 1972), it is desirable to investigate whether these results are universal across different cultures. (5) Selection of the physical movement for quantification: here, we analyzed frequency and phase difference based on head motion acceleration; however, considering that synchronization and empathy are cognitive states that emerge through complex human communication, other observable behaviors should also be considered. Overcoming these limitations would provide a deeper understanding of the specifics of non-verbal behavior in human communication and lead to more comfortable human-human interactions.
AUTHOR CONTRIBUTIONS TY designed the experiment, collected and analyzed head motion and empathy data, and wrote the paper. EO and YI designed the experiment and provided the linguistic structure data analysis. K-IO provided conceptual advice regarding the experiments and results. YM supervised the study and experimental design. All authors discussed the results and commented on the manuscript.
Integrin Inhibitors as a Therapeutic Agent for Ovarian Cancer

Ovarian cancer is a deadly disease, with a cure rate of only 30%. Despite aggressive treatments, relapse remains almost inevitable in patients with advanced-stage disease. In recent years, great progress has been made towards targeting integrins in cancer treatment, and clinical studies with various integrin inhibitors have demonstrated their effectiveness in blocking cancer progression. Given that the initial critical step of ovarian cancer metastasis is the attachment of cancer cells onto the peritoneum or omentum, and in light of the proven positive clinical results of anti-angiogenic therapy, targeting integrins is likely to be one of the most feasible approaches. This paper summarizes the current understanding of integrin biology in ovarian cancer metastasis and the various therapeutic approaches attempted with integrin inhibitors. Although no integrin inhibitors have shown favorable results so far, integrin-targeted therapies continue to be a promising approach worth exploring in further clinical investigation.

Introduction

Ovarian cancer is a highly metastatic disease characterized by widespread peritoneal dissemination and ascites, and it is the leading cause of death from gynecologic malignancies. It is often diagnosed at a late stage, after tumor cells have disseminated within the peritoneal cavity. Despite aggressive treatments consisting of surgical cytoreduction and chemotherapy, more than two-thirds of all patients succumb to the disease within 5 years [1]. The initial step of ovarian cancer metastasis is that cancer cells, detached from the ovarian surface epithelium, attach to the layer of mesothelial cells that line the inner surface of the peritoneum. Several integrins have been identified as important mediators of ovarian carcinoma metastasis to the mesothelium, suggesting that integrin inhibitors could be a new therapeutic strategy to prevent cancer cells from attaching within the peritoneal cavity. During the last 10 years, novel insights into the mechanisms that regulate cell survival, migration, and invasion have led to the development of novel integrin inhibitors for cancer treatment [2]. In this short review, we describe the critical roles of integrins during the metastatic process of ovarian carcinoma and discuss the potential of integrin inhibitors as a new therapeutic agent for the treatment of ovarian cancer.

Biology of Integrin

The role of integrins in cell migration and invasion is one of their most studied functions in tumor biology [3,4]. Integrins are cell-surface glycoprotein receptors consisting of a heterodimer of α- and β-subunits that are noncovalently associated. In mammals, integrins are extensively distributed throughout the body, and there are 18 α- and 8 β-subunits assembling into 24 functionally different heterodimers [5,6]. Each individual integrin subunit has a large extracellular domain, a single membrane-spanning domain, and a short noncatalytic cytoplasmic tail. The assembled integrin heterodimer can bind to a unique set of ligands. Natural integrin ligands include components of the extracellular matrix (ECM) such as collagen, laminin, fibronectin, and vitronectin. Many integrins bind their ligands by recognizing short amino acid sequences on exposed loops, such as Arg-Gly-Asp (RGD) (integrin α5β1) or Arg-Glu-Asp-Val (REDV) (integrin α4β1).
On ligation to the ECM, integrins recruit complex signaling events, alone or in combination with growth factor receptors. Integrin signaling regulates diverse functions in tumor cells, including migration, invasion, proliferation, and survival, through the activation of various pathways, such as integrin-linked kinase (ILK), mitogen-activated protein kinase (MAPK), protein kinase B (PKB/Akt), or nuclear factor kappa B (NF-κB) [7]. In recent years, great progress has been made towards targeting integrins in cancer treatment. Preclinical as well as clinical studies with various integrin antagonists have demonstrated their effectiveness in blocking tumor progression [3]. Almost all such Phase 1 clinical trials showed that the integrin inhibitors are nontoxic and well tolerated by patients, suggesting that they can be used concurrently with conventional cytotoxic chemotherapy or radiotherapy. Some reports showed that radiotherapy results in upregulation of integrin expression in several types of cancer, leading to cellular resistance to radiotherapy-induced cancer cell death [8,9]. Nam et al. demonstrated in preclinical work that targeting β1-integrin enhances the efficacy of radiation therapy in several cancers, including breast cancer [9]. Integrins are also involved in innate multidrug resistance, allowing tumor cells to survive chemotherapy (cell-adhesion-mediated drug resistance, CAM-DR) [8]. It has been proposed that CAM-DR is caused by the activation of β1-integrin-stimulated tyrosine kinases that suppress apoptosis induced by chemotherapy [10,11]. Integrin-targeted therapies added to conventional cytotoxic treatments thus have great potential to enhance the efficacy of overall treatment with minimal side effects.

Ovarian Cancer Metastasis and Current Treatment Options

In 2010, the American Cancer Society estimated that there were 21,880 cases of epithelial ovarian carcinoma and 13,850 disease-related deaths, identifying ovarian cancer as having the highest mortality rate of all gynecologic tumors. Sixty-three percent of all patients with ovarian carcinoma will succumb to their disease, making it the fifth leading cause of cancer death among women in the USA [12]. The high mortality of this tumor is largely explained by the fact that the majority of patients present at an advanced stage, with widespread metastatic disease within the peritoneal cavity. Only 20% of ovarian cancers are diagnosed while they are still limited to the ovaries, and patients at this early stage have an 85-90 percent 5-year survival [13]. In spite of several efforts made towards early screening of ovarian cancer, no effective screening methods have been established to reduce ovarian cancer incidence and mortality [14]. Current treatment strategies for advanced ovarian carcinoma consist of aggressive "cytoreductive" or "tumor-debulking" surgery, followed by a combination of platinum- and taxane-based chemotherapy. The surgical treatment goal is "optimal" surgical cytoreduction, generally defined as residual disease of 1 cm or less. No gross residual tumors should be left throughout the abdominal cavity, because several studies have convincingly shown that cytoreduction results in improved patient survival [15,16]. This effect of cytoreduction is indicative of a dramatic difference in the biological behavior of ovarian cancer as compared with other malignancies, because in most other cancers the removal of metastatic tumors does not necessarily lead to improved survival [13].
One of the main reasons for this difference is that ovarian cancer disseminates directly within the abdominal cavity and, unlike other malignancies, rarely disseminates through the vasculature, although metastasis in pelvic and/or paraaortic lymph nodes can occasionally be found [17]. Once the cancer cells have detached as single cells or clusters from the primary ovarian tumor, they are thought to metastasize through a passive mechanism, carried by the physiological movement of peritoneal fluid to the peritoneum and omentum. However, in spite of primary aggressive cytoreductive surgery and meticulously designed chemotherapy regimens, the overall cure rate of ovarian cancer patients remains approximately 30% [13]. Even when no apparent tumors remain throughout the peritoneal cavity after the initial surgery, invisible cancer cells are left behind and endure through postoperative chemotherapy. Small numbers of drug-resistant cells can persist for many months and remain dormant in the peritoneal cavity, only to grow progressively, leading to death of the patient despite aggressive treatment of the recurrent disease. There is thus a critical need for novel targeted therapies to overcome this situation. In particular, efficacious consolidation or maintenance therapy after cytoreduction surgery needs to be explored.

Novel molecularly directed therapies which aim to target tumor cells and the tumor microenvironment in ovarian tumorigenesis are rapidly emerging. Antiangiogenic agents have led the field so far. Preclinical and clinical studies have demonstrated the efficacy of antiangiogenic approaches against ovarian cancer, both alone and in combination with cytotoxic chemotherapy [18]. Bevacizumab, a humanized monoclonal antibody directed against VEGF, has been tested in several epithelial malignancies, including ovarian cancer. Several prospective Phase II trials have shown that bevacizumab in combination with chemotherapy (carboplatin-paclitaxel, cyclophosphamide, or topotecan) is efficacious in advanced ovarian cancer [19], and Phase III evaluation is currently ongoing. Although these results are promising and bevacizumab appears to be efficacious in a subset of ovarian cancer patients, resistance to bevacizumab is a major obstacle, even for patients in whom it was initially efficacious [18]. One potential alternative treatment option is targeting integrins, which regulate diverse functions in tumor cells including adhesion, migration, invasion, proliferation, and survival. In addition to tumor cells, integrins are also found on tumor-associated host cells, such as the vascular endothelium, fibroblasts, and bone marrow-derived cells. Targeting integrin signaling has the potential to inhibit the contribution of these cell types to cancer progression [3]. Several integrin-targeted therapeutic agents are emerging and are currently in clinical trials for cancer therapy, including ovarian carcinoma.

Integrin Biology in Ovarian Cancer

Most ovarian cancer cells are derived from the epithelial cells that cover the surface of the ovary [1]. Before ovarian carcinoma cells detach from the basement membrane, they often undergo an epithelial-mesenchymal transition (EMT), which loosens the intercellular adhesions between the cancer cells. EMT often starts with the loss of E-cadherin, one of the molecules crucial for adhesion between neighboring epithelial cells.
During the process of EMT, cancer cells acquire a more invasive phenotype and proliferate and spread throughout the abdominal cavity, carried by the physiological movement of massive ascites. Indeed, knockdown of E-cadherin was reported to induce up-regulation of the fibronectin receptor, α5β1-integrin, which promotes the adhesion of ovarian cancer cells to secondary metastasis sites, such as the omentum and peritoneum [17,20]. According to an immunohistochemical analysis of clinical samples conducted at the University of Chicago (Chicago, IL), about 40% (42 of 107) of advanced (Stages II-IV) ovarian cancer patients showed α5β1-integrin-positive staining. Among these positive cases, 10 (9%) were considered to show overexpression, and the median survival of patients with α5β1-integrin overexpression was significantly worse (26 months) than that of patients with low or negative integrin expression (35 months) [20]. Once the cancer cells have detached from the primary tumor, they float in the ascites as single cells or as multicellular spheroids. Casey et al. reported that a β1-integrin-stimulating antibody or exogenous treatment with fibronectin promoted spheroid formation in ovarian cancer cells, while blocking antibodies against α5- or β1-integrin inhibited it, indicating that interactions between α5β1-integrin and fibronectin mediate the formation of ovarian carcinoma spheroids and their adhesion to the ECM at secondary tumor growth sites [21]. The initial key step of ovarian cancer metastasis is the attachment of ovarian cancer cells onto the layer of mesothelial cells that covers the peritoneal cavity. Integrins have also been identified as important mediators between ovarian carcinoma and the mesothelium. Strobel and Cannistra reported that blocking antibodies against α5- and β1-integrin, as well as RGD peptide, inhibited the binding of ovarian cancer cells to mesothelial cells, suggesting that α5β1-integrin is the major receptor responsible for fibronectin-mediated ovarian cancer binding to the mesothelium [22]. These accumulating results strongly suggest that inhibition of α5β1-integrin is a potential new therapeutic target, at least for a subset of ovarian cancer patients [21]. Besides fibronectin, collagen and laminin are among the most abundant extracellular proteins in the mesothelium covering the peritoneum and the omentum. Primary ovarian carcinoma cells adhere preferentially to type I collagen, and this adhesion can be blocked with an α2β1-integrin antibody [23]. Other important adhesion molecules that mediate the interaction between cancer cells and mesothelial cells are α4β1-integrin and its adhesion receptor, vascular cell adhesion molecule-1 (VCAM-1) [24]. α4β1-integrin expressed on ovarian carcinoma cells binds to VCAM-1, which is present on mesothelial cells, and function-blocking antibodies directed against VCAM-1 and α4β1-integrin block migration and metastasis in a xenograft model [24]. The expression of αvβ6-integrin in ovarian cancer cell lines correlates with the invasive potential of the cells by inducing the secretion of proteinases such as urokinase plasminogen activator (uPA) and matrix metalloproteinases (MMPs) [25]. Inconsistent results have been reported regarding the role of the vitronectin receptor, αvβ3-integrin, in ovarian cancer metastasis.
Although it was initially thought to be expressed on aggressive ovarian cancer cells and to be correlated with ovarian cancer cell adhesive, migratory, and proliferative properties, recent data question this assertion and indicate that it is expressed on well-differentiated tumors and acts as a tumor suppressor in ovarian cancer [17]. Kaur et al. reported that αvβ3-integrin-expressing ovarian cancer cells showed impaired invasion, protease expression, and colony formation, and that patients with tumors expressing high levels of β3-integrin had significantly better prognoses [26]. Given that Reynolds et al. recently showed that nanomolar concentrations of RGD-mimetic αvβ3-/αvβ5-integrin inhibitors enhance tumor growth and tumor angiogenesis in preclinical xenograft models [27], therapies aimed at blocking αvβ3-integrin may have detrimental effects.

Clinical Trials Targeting Ovarian Cancer

Preclinical studies have shown that integrin antagonists inhibit tumor growth by affecting both tumor cells and tumor-associated host cells, especially the angiogenic endothelium. Integrin antagonists currently in clinical trials include monoclonal antibodies and Arg-Gly-Asp (RGD) peptide mimetics [3,31]. The candidate integrin inhibitors that could be applied to ovarian cancer treatment are summarized in Table 1 [20,26,29,30]. Volociximab, a chimeric monoclonal antibody directed against α5β1-integrin, inhibits angiogenesis and impedes tumor growth. Bell-McGuinn et al. reported Phase II data on platinum-resistant ovarian cancer patients treated with volociximab as monotherapy [32]. Of 14 patients who were evaluable for efficacy, only one had stable disease at 8 weeks, and the remaining 13 progressed on treatment, although weekly volociximab was well tolerated. Besides antibodies, synthetic peptides that mimic the structure of natural integrin-binding ligands are alternative candidates for integrin inhibitors [6]. ATN-161 is a non-RGD-based pentapeptide binding to α5β1- and αvβ3-integrins, derived from fibronectin by replacing an arginine residue of the primary sequence with a cysteine moiety [6]. It has been shown to inhibit tumor growth, angiogenesis, and metastasis in multiple animal models [28,33]. In Phase I safety trials, ATN-161 was well tolerated, and several patients exhibited stable disease, including one with ovarian carcinoma [34]. Since the 1990s, αvβ3-integrin has been identified as a target for antiangiogenic therapy, as it is expressed in proliferating vascular endothelial cells and regulates endothelial cell migration in sprouting vessels [24]. LM609, a mouse anti-human monoclonal antibody raised against αvβ3-integrin, showed considerable antiangiogenic activity in preclinical models [35]. As a result of these studies, etaracizumab (MEDI-522), a humanized version of LM609, was developed as one of the first integrin antagonists introduced into clinical trials. However, clinical trials found it to have limited effectiveness as a metastatic cancer treatment, probably owing to its targeting of a single integrin (αvβ3) [6,36]. The human αv-integrin-specific monoclonal antibody intetumumab (CNTO-95), which targets both αvβ3- and αvβ5-integrins, also showed antitumor and antiangiogenic effects in xenograft tumor models [37,38]. In a Phase I clinical trial, intetumumab was nontoxic, localized to tumors, and showed signs of antitumor activity [39].
A complete response on FDG-PET imaging was observed in one patient with ovarian carcinosarcoma, whose disease remained stable for 6 months while receiving intetumumab [40]. This antibody should be further evaluated in additional clinical trials. Among the various available RGD-mimetic peptides, cilengitide (c-[RGDf(NMe)V-]) has emerged as a promising agent. It binds both αvβ3- and αvβ5-integrins with high affinity and strongly inhibits their function [6]. Cilengitide has shown significant promise in patients with late-stage glioblastoma by extending patient survival with minimal side effects [41,42]. It is currently being tested in Phase II trials in patients with lung and prostate cancer, and Phase II and Phase III trials are underway for glioblastoma [3]. However, in moving toward ovarian cancer clinical trials with cilengitide, a serious concern needs to be addressed. As noted above, Kaur et al. suggested that increased αvβ3-integrin expression on ovarian cancer cells correlates with a favorable outcome and that inhibiting its activity could increase the severity of the disease [26]. Therefore, it is critical to further investigate and clarify the effects of anti-αvβ3-integrin therapy on ovarian cancer tumors and the surrounding endothelial cells before embarking on clinical therapeutic trials [18].

Conclusion

Recognition of the need for cytoreduction, along with the evolution of surgical techniques and the establishment of chemotherapy regimens through multiple clinical trials, allows a majority of ovarian cancer patients to achieve "disease-free" status after the initial treatment. One of the major disappointments with current ovarian cancer treatments is the failure to achieve a complete cure, even in optimally debulked or chemosensitive patients. The establishment of efficacious consolidation or maintenance therapies would be a powerful tool for improving the poor outcomes of patients with advanced-stage disease. The biological behavior of ovarian carcinoma is unique, differing from the classic and well-studied pattern of hematogenous metastasis found in most other cancers. Once ovarian cancer cells have detached as single cells or clusters from the primary ovarian tumor, they are carried by the physiological movement of peritoneal fluid and finally metastasize to the peritoneum and omentum, suggesting that the attachment of cancer cells onto the mesothelial cells covering the basement membrane is the initial key step in metastasis. Bevacizumab has already shown significant utility in ovarian cancer treatment, not only in combination with current chemotherapy but also as a single agent, indicating that antiangiogenic therapy has considerable promise. Given that targeting integrins can affect not only the diverse functions of tumor cells, including adhesion, migration, invasion, proliferation, and survival, but also the tumor microenvironment, especially the angiogenic endothelial cells, integrin inhibitors clearly have potential for clinical use in the near future. Unfortunately, although several clinical trials have been attempted against ovarian cancer, no integrin inhibitor has shown sufficiently promising efficacy to progress to further clinical investigation; agents targeting only a single integrin, such as αvβ3 or α5β1, have failed to show evident clinical benefit in metastatic cancer treatment. In cancer progression, more than one integrin pathway is involved.
For example, even if inhibition of the function of α5β1-integrin as a fibronectin receptor could be adequately achieved, other integrins, such as αvβ3 or α3β1, would eventually compensate for its function. Therefore, a combination of inhibitors targeting different integrin receptor pathways is likely to be more effective in the clinical setting and should be explored for future clinical application. Collectively, although many questions and challenges remain, integrin-targeted therapies continue to be a promising approach to improving the outcomes of women with ovarian cancer.
Induction of Stem Cell Factor/c-Kit/Slug Signal Transduction in Multidrug-resistant Malignant Mesothelioma Cells

Malignant mesothelioma (MM) is strongly resistant to conventional chemotherapy by unclear mechanisms. We and others have previously reported that cytokine- and growth factor-mediated signal transduction is involved in the growth and progression of MM. Here, we identified a pathway involving stem cell factor (SCF)/c-Kit/Slug in mediating the multidrug resistance of MM cells. When we compared gene expression profiles between five MM cell lines and their multidrug-resistant (MM DX) sublines, we found that MM DX cells expressed both SCF and c-Kit and had higher mRNA levels of Slug. Knockdown of c-Kit or Slug expression with their respective small interfering RNAs sensitized MM DX cells to the induction of apoptosis by different chemotherapeutic agents, including doxorubicin, paclitaxel, and vincristine. Transfection of c-Kit into parental MM cells, in the presence of SCF, up-regulated Slug and increased resistance to the chemotherapeutic agents. Moreover, MM cells expressing Slug showed a similar increase in resistance to the chemotherapeutic agents. These results indicate that induction of Slug by autocrine production of SCF and c-Kit activation plays a key role in conferring a broad multidrug-resistant phenotype on MM cells.

Malignant mesothelioma (MM) is an aggressive tumor caused primarily by asbestos exposure. MM is characterized by overexpression of several cytokines, growth factors, and tyrosine kinase receptors (1). Some of these are proto-oncogenes and key regulators of the proliferation, differentiation, and motility of MM cells (2). Whether cytokines and growth factors may control other mechanisms of MM progression remains to be determined. MM is poorly responsive to chemotherapy, and median patient survival is 8-18 months (3). MM cells exhibit resistance in vitro and in vivo to many anti-cancer agents, including doxorubicin and cisplatin, which are nevertheless widely used to treat MM (4). In most cases, this is a consequence of overexpression of the ATP-binding cassette (ABC) transporters, including MDR1 (ABCB1), MRP1 (ABCC1), and MRP2 (5). However, ABC transporters cannot completely explain the widespread MDR phenotype of MM cells (6,7), suggesting that other factors may be involved. The tyrosine kinase receptor c-Kit and its ligand, stem cell factor (SCF), are essential to the maturation of hemopoietic and primordial germ cell precursors and melanocytes during embryonic development (8,9). In addition, aberrant expression of c-Kit and/or SCF has been reported in several tumors, such as acute myeloid leukemia (10,11), small cell lung cancer (12), gynecological tumors (13), and breast carcinomas (14). Thus, constitutive activation of c-Kit is also involved in tumor progression. Multiple cellular functions are affected by c-Kit-dependent signals, including cell survival, proliferation, adhesion, and differentiation (15). SCF augments the derivation of melanocytes from embryonic stem cells (16). Moreover, c-Kit activation was found to suppress apoptosis of normal murine melanocyte precursors (17), soft tissue sarcomas of neuroectodermal origin (18), and normal and malignant human hemopoietic cells (19). However, there is very little documentation regarding the expression of the SCF/c-Kit system or Slug in human MM tissue. This antiapoptotic activity of SCF/c-Kit signals is mediated, at least in part, by the zinc finger transcription factor Slug (20), which functions as a transcriptional repressor (21).
In this study, we set out to gain insights into the control of MDR in MM cells. We determined whether cytokines, growth factors, or their receptors are activated and consequently protect MM cells against chemotherapeutic injury and, if so, dissected the molecular mechanisms underlying this cytoprotection. We identified the SCF/c-Kit signaling pathway as a component of the MDR program induced by chemotherapy and provide evidence that this signal acts in the MDR phenotype, in part, by modulating the activity of Slug.

* This work was supported by grants from the Italian Ministry of Research and from Associazione Italiana per la Ricerca contro il Cancro (to A. P.).

EXPERIMENTAL PROCEDURES

Antibodies and Reagents—All of the primary antibodies used, except the anti-FLAG monoclonal antibody (Sigma) and the monoclonal anti-MDR1 antibody (Calbiochem), were obtained from R&D Systems Inc. (Minneapolis, MN). The antineoplastic agents doxorubicin (adriamycin), paclitaxel (taxol), and vincristine, as well as verapamil, were purchased from Sigma. Recombinant human SCF was obtained from Sigma. The pBabePuro plasmid was a gift from Dr. G. J. Clark (NCI, National Institutes of Health, Bethesda, MD). All other reagents were obtained from Sigma unless otherwise specified.

Cell Lines—We used a panel of five human MM cell lines (22,23). MM1, MM3, MM4, and MM5 cells were derived from previously untreated patients and identified morphologically and by extensive phenotypic analysis as previously described (24). They were then expanded and used for the experiments. The H-Meso cell line (MM2) has been characterized previously (25). MM DX cells were selected from their parental cells by stepwise increases in doxorubicin concentration from 0.0001 to 0.3 μM, as detailed elsewhere (26). MM DX cells were then maintained in the presence of 2.5 μM doxorubicin. All cells were maintained in RPMI 1640 supplemented with 10% heat-inactivated fetal bovine serum, 1% L-glutamine, and 1% penicillin/streptomycin (complete medium) (all from HyClone, Rome, Italy) at 37°C and 5% CO2.

Proliferation Assays—Chemotherapy sensitivity was assessed by MTT assay as previously described (22). The cells were pulse-exposed to different concentrations of doxorubicin, vincristine, or paclitaxel for 24 h and then cultured for an additional 36 h in culture medium with 10% fetal bovine serum. After this time, the remaining cells were stained with MTT (Promega, Madison, WI) following the manufacturer's instructions. Absorbances were normalized, and LC90 values were calculated by nonlinear regression. Normalized data and nonlinear regression curves were plotted graphically as a percentage of viable cells. Alternatively, cells were incubated with chemotherapeutic drugs for 24 h, then collected after different times, stained with trypan blue, and counted.

Apoptosis—Cells (5 × 10^5) were plated 24 h before treatment and then exposed to the indicated chemotherapeutic agents for 24 h. At the end of treatment, cells were washed twice with phosphate buffer with calcium and magnesium (PBS), and apoptosis was determined using the Cell Death Detection ELISA kit (Roche Applied Science) according to the manufacturer's instructions.
Reverse Transcription and Real-time PCR—For real-time PCR, total RNA was extracted with the RNeasy mini kit (Qiagen, Milan, Italy) and reverse-transcribed, and the cDNA was quantified using the TaqMan system (Applied Biosystems, Milan, Italy). Values for each mRNA were normalized to the relative quantity of GAPDH mRNA in each sample. The PCRs were carried out in a Chromo4 sequence detector (MJ Research, Waltham, MA). The primer and probe sets used for amplification of the investigated genes were supplied by Applied Biosystems. Details of sequences and thermal cycle conditions are available upon request. To analyze the expression of Slug and Snail in cell lines, reverse transcription (RT) was performed as previously described (20). Thermocycling parameters for PCR and the sequences of the specific primers were as follows: Slug, 30 cycles at 94°C for 1 min, 56°C for 1 min, and 72°C for 2 min, sense primer 5′-GCCTCCAAAAAGCCAAACTA-3′, antisense primer 5′-CACAGTGATGGGGCTGTATG-3′; Snail, 30 cycles at 95°C for 2 min, 60°C for 2 min, and 72°C for 2 min, sense primer 5′-CAGCTGGCCAGGCTCTCGGT-3′, antisense primer 5′-GCGAGGGCCTCCGGAGCA-3′. Amplification of β-actin RNA served as a control to assess the quality of each RNA sample.

Immunoblotting, Immunoprecipitation, and ELISA—Proteins (50-75 μg/lane) were separated on 7.5-10% SDS-polyacrylamide gels and transferred to nitrocellulose. Transfers were blocked for 2 h at room temperature with 5% nonfat milk in TBS, 0.1% Tween 20 and then incubated overnight at 4°C with the primary antibody diluted (1:1000) in 5% bovine serum albumin in TBS, 0.05% Tween 20. The transfers were rinsed with TBS, 0.05% Tween 20 and incubated for 1 h at room temperature with horseradish peroxidase-conjugated goat anti-rabbit or goat anti-mouse antibody (Bio-Rad) diluted 1:5000 in 5% nonfat milk in TBS, 0.1% Tween 20. The immunoblots were developed with the SuperSignal reagent (Pierce). Protein concentration was determined by the standard Bradford protein assay (Bio-Rad). All Western blots were reprobed with a human β-actin antibody (sc-1616; 1:1000) from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA) to confirm equivalent loading of protein samples. Human SCF levels in cell lysates and culture supernatants were determined using an ELISA kit (R&D Systems) according to the manufacturer's instructions. For analysis of c-Kit receptor autophosphorylation, cells were rapidly lysed in Nonidet P-40 lysis buffer (1% Nonidet P-40, 20 mM Tris (pH 8.0), 137 mM NaCl, 1 μM NaF, 0.5 mM EDTA, 10% glycerol, 1 mM phenylmethylsulfonyl fluoride, 0.15 units/ml aprotinin, 20 μM leupeptin, and 1 mM sodium vanadate) on ice for 20 min. Cell lysates were cleared by centrifugation for 20 min at 14,000 × g, and the protein content was quantified as described above. One mg of total protein was immunoprecipitated with anti-c-Kit antibody and detected by Western blotting with an anti-phosphotyrosine antibody (Santa Cruz Biotechnology).

Fluorescence-activated Cell Sorting Analysis—Cells (1 × 10^5) were washed three times in PBS containing 0.5% bovine serum albumin (PBS buffer) and then incubated for 30 min at 4°C with appropriate amounts of anti-c-Kit monoclonal antibody (R&D Systems). Cells were washed twice with PBS and then incubated with fluorescein isothiocyanate-conjugated F(ab′)2 goat anti-mouse IgG. After three washes, samples were analyzed on a FACScan (BD Biosciences). A minimum of 10,000 events per sample was analyzed. As a negative control, an isotype-matched monoclonal antibody with irrelevant specificity was used.
Plasmids—The full-length c-Kit cDNA was cloned into the pcDNA3 plasmid vector and transfected into MM5 cells with LipofectAMINE reagent (Invitrogen) according to the manufacturer's instructions. Stably transfected cells were selected in 400 μg/ml Geneticin and further subcloned as previously described (27). The expression of c-Kit protein in the single-cell clones (G5 and G12) was characterized by Western blotting and fluorescence-activated cell sorting analysis. Full-length cDNAs for human Slug (GenBank accession number BC014890) and Snail (GenBank accession number NM005985) were amplified from RNA of MM cell lines by reverse transcription-PCR, and a COOH-terminal FLAG epitope tag was added by PCR as previously described (28). Constructs were subcloned into the pBabePuro plasmid. The identities of all plasmid inserts and vector boundary regions were confirmed by sequence analysis. Details of the constructs are available upon request. Transient transfection with the Slug and Snail constructs and culturing of transduced cells were performed by standard procedures. To generate Slug d-siRNA, full-length cDNA of human Slug with T7 promoters at both ends was made by PCR and subjected to in vitro transcription to produce large double-stranded RNA as previously described (29). Slug d-siRNA was then produced using an r-Dicer kit (Stratagene, La Jolla, CA) according to the manufacturer's instructions. For transfection, cells (10^5 cells/well) were plated in 6-well plates, grown in antibiotic-free medium overnight, and then transfected with the specific siRNA using Oligofectamine reagent (Invitrogen) according to the manufacturer's instructions. At 24 or 48 h after transfection, cells were trypsinized, replated, and incubated overnight before treatment.

Northern Blotting—Total RNA was isolated from cells using TRIzol reagent (Invitrogen, Milan, Italy). Electrophoretic separation and membrane transfer of RNA were carried out by standard methods. For use as probes, Slug, Snail, and GAPDH cDNA fragments were labeled with [32P]dCTP (Amersham Biosciences) by random priming with the RediPrime II kit (Amersham Biosciences). Prehybridization and hybridization were carried out in Rapid-Hyb buffer (Amersham Biosciences); the membrane was then washed, and the blot was exposed to BioMax MS film (Eastman Kodak Co.).

Statistical Analysis—All values were expressed as mean ± S.D. Comparison of results between different groups was performed by one-way analysis of variance and paired t test using StatView 5.0 (NET Engineering, Pavia, Italy).

RESULTS

Characterization of the Multidrug-resistant Malignant Mesothelioma Cells (MM DX Cells)—Five MM DX sublines were selected from their parental MM cell lines by exposure to stepwise increases in doxorubicin concentration. Their drug sensitivity profiles for doxorubicin, paclitaxel, and vincristine are shown in Table I. (Table I footnote: the cells were pulse-exposed to increasing concentrations of doxorubicin (0 to 2.5 μM), paclitaxel (0 to 0.5 μM), and vincristine (0 to 0.5 μM) for 24 h, then cultured for an additional 36 h in culture medium supplemented with 10% fetal bovine serum; the drug concentration cytotoxic for 90% of treated cells was determined with an MTT assay as described under "Experimental Procedures.") The concentration of drug lethal for 90% of treated cells (LC90) was calculated from dose-response curves for each drug tested using the MTT assay. MM DX cells with an LC90 value ≥3-fold greater than that of parental cells were considered resistant.
All five MM DX sublines expressed an elevated amount of MDR1 protein (also known as P-glycoprotein) (Fig. 1A). To investigate to what extent P-glycoprotein expression determines the MDR phenotype, we treated the MM DX sublines with verapamil, a calcium-channel blocker that competitively antagonizes P-glycoprotein activity. Concurrent application of doxorubicin and 5 μM verapamil, a concentration known not to have a cytotoxic effect, did not significantly modify the LC90 value of the MM1 DX or MM2 DX sublines, as determined by trypan blue exclusion (Fig. 1, B and C) and MTT assay (data not shown). These results indicated that MM DX cells had acquired an MDR phenotype that was independent of MDR1 expression.

Cytokines, Growth Factors, and Their Receptor Expression in MM DX and MM Cells—Differential cytokine and growth factor mRNA expression between MM DX and MM cells was determined by real-time PCR analysis. Four of the five MM DX sublines showed a 2-4-fold increase in the expression of SCF (Fig. 2A). By contrast, MM DX and parental cells expressed equal amounts of VEGF, TGF-β, insulin-like growth factor-1, PDGF, and FGF-2 (Fig. 2A), as well as TGF-α and hepatocyte growth factor (data not shown). Expression of SCF protein was confirmed by Western blotting analysis. Membrane-bound SCF showed a marked increase in the MDR sublines compared with the parental cells, whereas soluble SCF was not detectable in any cell type (Fig. 2B, inset). Accordingly, increased amounts of SCF were detected by ELISA in the cell lysates (Fig. 2B), but not in the culture supernatants.

The expression of tyrosine kinase receptors in both MM DX and MM cells was then evaluated by real-time PCR, and most of the MDR sublines expressed higher mRNA levels of the SCF receptor c-Kit than the parental cells (Fig. 3A). VEGF receptor-1 (Flt-1) and PDGF receptor-α mRNA increased in three of five MM DX sublines, whereas VEGF receptor-2 (KDR), FGF receptor-1, and PDGF receptor-β decreased in four of five MDR sublines compared with parental cells (Fig. 3A). In addition, MM DX cells did not modify mRNA levels of the EGF receptor (c-erbB2) or hepatocyte growth factor receptor (MET) compared with parental cells (data not shown). Consistently, c-Kit protein expression was higher in four of the five MM DX sublines than in the parental cells, the exception being the MM3 DX cells (Fig. 3B).

Down-regulation of c-Kit Expression or Function by Small Interfering RNA and Neutralizing Anti-c-Kit Antibody Restores Sensitivity to Anticancer Agents in MM DX Sublines—The coexpression of SCF and its receptor c-Kit in some of the MM DX cells suggested that this pathway could confer MDR to anticancer compounds through an autocrine/paracrine mechanism. To address this hypothesis, we knocked down c-Kit with small interfering RNA (c-Kit siRNA) duplexes. Exposure of MM1 DX or MM2 DX cells to c-Kit siRNA was associated with down-regulation of c-Kit expression (Fig. 4A). There was no detectable down-regulation of c-Kit in MM1 DX or MM2 DX sublines exposed to a control siRNA (CsiRNA) (Fig. 4A). Neither c-Kit siRNA nor CsiRNA inhibited MDR1 protein expression in the MM1 DX (Fig. 4B) or MM2 DX sublines (data not shown), suggesting no functional association between MDR1 and SCF/c-Kit signals. MM1 DX or MM2 DX sublines and their parental cells exposed to CsiRNA or c-Kit siRNA were then analyzed for nucleosome formation, a marker of apoptosis (30). MM1 DX, MM2 DX, or parental cells exposed to c-Kit siRNA alone exhibited no significant increase in nucleosome formation compared with controls (Fig. 4C).
By contrast, MM1 DX and MM2 DX cells exposed to c-Kit siRNA responded to doxorubicin, paclitaxel, or vincristine with an increase in apoptosis compared with that obtained with cells exposed to CsiRNA (Fig. 4C). Similar results were obtained by adding a neutralizing anti-c-Kit antibody (5 µg/ml), whereas an antibody with irrelevant specificity had no effect (data not shown). These findings indicate that blocking the autocrine activity of SCF or knocking down c-Kit expression increased sensitivity to genotoxic agents in the MDR sublines.

Activation of the c-Kit-dependent Pathway by SCF Confers MDR in MM Cells-To further determine the role of SCF/c-Kit signaling in mediating the resistance of MM cells, MM5 cells lacking endogenous c-Kit (see Fig. 3) were engineered to stably express a wild-type, full-length c-Kit receptor. We used the representative clones G5 and G12 in our studies, which had c-Kit protein levels comparable to those of the MM5 DX sublines (Fig. 5A). G12 or G5 cells were treated with SCF (100 ng/ml) for 10 min, and autophosphorylation of c-Kit was examined by immunoprecipitation with anti-c-Kit antibody followed by Western blot with anti-phosphotyrosine antibody. In the presence of SCF, autophosphorylation of c-Kit was detected in G12 cells, indicating c-Kit activation (Fig. 5B). Notably, MM5 DX cells also showed c-Kit autophosphorylation in the absence of SCF (Fig. 5B), suggesting that the autocrine loop of SCF is constitutively active in MM DX sublines. G5 or G12 cells were next cultured in the presence or absence of SCF, and their resistance to doxorubicin, paclitaxel, and vincristine was determined. Unlike control vector-transfected MM5 cells, G12 cells in the presence of SCF showed uniformly increased resistance to all three chemotherapeutic agents (Fig. 5C). Thus, the presence of both c-Kit and SCF in MM cells promotes MDR to cytotoxic drugs. Interestingly, c-Kit overexpression also increased resistance to doxorubicin in G12 cells in the absence of exogenous SCF by 30%, consistent with an autocrine role of MM5-derived SCF (see Fig. 2B). This effect, however, was not statistically significant.

Modulation of Slug Expression and Function by SCF/c-Kit Pathway in MM DX Cells-Expression of Slug and Snail in MM and MM DX cells was assessed by Northern blot analysis. Expression of Slug, rather than that of Snail, was strongly correlated with the MDR phenotype of MM cells (Fig. 6A). To assess whether SCF affected Slug expression in MM cells, we determined Slug and Snail mRNA levels in c-Kit-transfected cells in the presence of SCF using RT-PCR. G12 cells specifically expressed Slug upon SCF stimulation and in a time-dependent manner (Fig. 6B). By contrast, Snail was expressed at similar levels in SCF-unstimulated and -stimulated G12 cells (data not shown). In concert with these findings, knockdown of c-Kit expression in MM1 DX or MM2 DX cells by c-Kit siRNA was associated with decreased Slug expression, as compared with cells transfected with CsiRNA (Fig. 6C). We observed comparable results when we used the neutralizing anti-c-Kit antibody in place of c-Kit siRNA (results not shown), ruling out unspecific effects of the siRNA. To address whether Slug functionally contributes to the MDR phenotype of MM cells, we used recombinant human Dicer (r-Dicer) to generate random pools of siRNA (d-siRNA) from double-stranded RNA corresponding to the entire coding region of Slug (29). The Slug d-siRNA was transfected into MM1 DX sublines, and the expression of Slug and Snail was assessed by RT-PCR. The Slug d-siRNA decreased Slug expression by 80% (Fig. 6D).
Expression of Snail or β-actin was unaffected, and d-siRNA derived from an irrelevant laminin cDNA had no effect on either Slug or Snail expression (Fig. 6D). As observed above with c-Kit siRNA, transfection of Slug d-siRNA sensitized MM1 DX or SCF-treated G12 cells to apoptosis induced by all three drugs tested (Fig. 6E). As a control, Slug d-siRNA did not increase apoptosis in MM1 parental cells or in SCF-untreated G12 clones (data not shown). The results indicate that Slug, as well as SCF/c-Kit, can mediate the broad chemoresistance observed in MM DX sublines. To extend these data, constructs expressing full-length, FLAG epitope-tagged Slug and Snail were generated (Fig. 7A), and the effects of these proteins in mediating the resistance of MM cells were assessed. When compared with the effects of the empty expression vector, Slug expression reduced the sensitivity of MM cells to apoptosis induced by all three chemotherapeutic agents, whereas Snail had no effect (Fig. 7B). Taken together, these results indicate that the effect of SCF/c-Kit signal transduction on the MDR phenotype of MM cells was contingent on Slug activity.

Mechanisms of MDR have been intensively studied, since experimental models can be easily generated by in vitro selection with cytotoxic agents, and different types of MDR have been described. One way is to pump drugs out of cells by increasing the activity of efflux pumps, such as ATP-dependent transporters (31). In cases in which drug accumulation is unchanged, activation of detoxifying proteins, such as cytochrome P450 mixed-function oxidases, can promote drug resistance (32). Cells can also activate mechanisms that repair drug-induced DNA damage (31). Furthermore, endogenous mitogenic signals, such as those arising from oncogene activation, can suppress cell death through NF-κB signaling (33) and other prosurvival pathways, such as the phosphatidylinositol 3-kinase/AKT signaling cascade (34). Finally, disruptions in apoptotic signaling pathways (e.g. p53 or ceramide) allow cells to become resistant to drug-induced cell death (35). Selection of cancer cells in culture with natural-product anticancer drugs, such as paclitaxel, doxorubicin, or vinblastine, frequently results in MDR that is due to expression of the ABC transporter MDR1 (5). However, host and tumor genetic alterations, epigenetic changes, and the tumor environment all seem to contribute to the complex story of cancer drug resistance (36). Therefore, in any population of cancer cells that is exposed to chemotherapy, more than one mechanism of MDR can be present. In this study, we have investigated whether the MDR phenotype of MM cells is also mediated by environmental factors, such as cytokines and growth factors, that are overexpressed in MM (1, 2). We identified the tyrosine kinase receptor c-Kit and its ligand SCF as genes up-regulated in MDR sublines of MM cells. Moreover, induction of SCF and c-Kit is not simply associated with an MDR state; these molecules actively contribute to MDR. Indeed, knocking down c-Kit expression increases sensitivity to chemotherapeutic agents in MDR sublines, and forced expression of the SCF/c-Kit signal is sufficient to lead to MDR in parental cells. The biological events controlled by the SCF/c-Kit signaling pathway were first implicated in the generation and migration of hematopoietic stem cells (8, 9).
Moreover, mutations resulting in constitutive activation of c-Kit have been described in acute myeloid leukemia (10, 11), small cell lung cancer (12), gynecological tumors (13), breast carcinomas (14), and colonic tumors derived from the interstitial cells of Cajal, which are SCF-dependent (37, 38). Thus, activation of the c-Kit-dependent pathway is also involved in the malignant transformation of both leukemias and solid tumors. Recently, it was observed that an SCF/c-Kit-dependent pathway protects erythroid precursor cells from chemotherapeutic agents (19) and that exposure of cultured human keratinocytes and melanocytes to UVB light up-regulates SCF and c-Kit, promoting cell survival (17). These findings are congruent with our results and suggest that some cell types may acquire the ability to induce SCF/c-Kit signals as a protective mechanism against DNA-damaging agents. However, the stimuli responsible for the constitutive expression of SCF and c-Kit in MDR sublines remain unclear. Recent evidence indicates that drug exposure may induce not only resistance but also invasiveness in cancer cells (39). We observed that MM cells and MM DX sublines secreted similar levels of matrix metalloproteinases 2 and 9. Moreover, we observed higher protein levels of membrane-bound SCF in MDR sublines than in parental cells, whereas soluble SCF was not detectable in any cell type or in conditioned media (see Fig. 2B). Thus, it is unlikely that proteases are involved in SCF-induced MDR. Although c-Kit expression was found in a number of MM cell lines (40), little is known about the expression of SCF and c-Kit in primary MM tissue. One study evaluating c-Kit expression in formalin-fixed, paraffin-embedded MM tissue identified c-Kit in 7 of 33 samples examined (41). Another study observed c-Kit in a subset of 37 cases of archived MM with one type of anti-Kit antibody, but only in a single MM case with a second anti-Kit antibody (42). Since fresh tissue was not available for examination by complementary methods or optimized histochemical procedures, c-Kit expression in additional samples may have been beyond the limits of detection. In addition, no information was presented regarding the clinical staging of the MM tissue. Nevertheless, detection of c-Kit expression in at least some of the archival MM tissue studied supports an involvement of c-Kit-dependent pathways in MM biology. Our results show that the MDR conferred to MM cells by the SCF/c-Kit signaling pathway is mediated by a specific member of the Snail family of zinc finger transcription factors, namely Slug. The Snail transcription factors are implicated in epithelial-mesenchymal transitions during mammalian development, such as the generation and migration of mesoderm and neural crest cells (21). Recent findings show that Slug is expressed in t(17;19) leukemic cells (43), in rhabdomyosarcoma cells (44), and in breast cancer (28). Moreover, Slug mediates the oncogenic effect of the E2A-HLF fusion protein in leukemia (43) and is the molecular target of a c-Kit-dependent pathway that promotes the radioresistance of bone marrow cells (20). Thus, Slug may be a common component providing chemoresistance to tumor cells and, therefore, might constitute an attractive target in the treatment of MM.
FIG. 6. SCF/c-Kit signaling up-regulates Slug in MM DX cells, and Slug inhibition increases MM DX apoptosis induced by chemotherapeutic drugs. A, total RNA was collected from the indicated cells, separated on an agarose-formaldehyde gel, and subjected to Northern blotting. The same membrane was hybridized sequentially with probes to Slug, Snail, and GAPDH. The GAPDH blot confirms roughly equivalent loading of the RNA samples. B, G12 cells that constitutively express c-Kit were incubated for the indicated periods of time with SCF (100 ng/ml). After incubation, Slug, Snail, and actin expression were determined by semiquantitative RT-PCR. C, MM1 DX and MM2 DX cells were transiently transfected with CsiRNA or c-Kit siRNA as described in Fig. 4. After 48 h, expression of Slug was analyzed by RT-PCR. D, MM1 DX cells were transfected with two different pools of d-siRNA, one complementary to Slug and another to laminin C. After 48 h, total RNA was extracted and subjected to RT-PCR with specific primers for Slug and Snail. Expression of actin was used as an internal control. E, MM5 DX sublines and G12 clones, treated with SCF as described in Fig. 5, were transfected with the Slug d-siRNA pool. After 24 h, cells were pulse-exposed for 24 h to the chemotherapeutic agents followed by an additional 16-h drug-free culture period. Apoptosis was then determined by a cell death detection ELISA kit. *, p ≤ 0.01 versus mock, ANOVA, n = 4.

FIG. 7. Slug, but not Snail, increased resistance to apoptosis induced by chemotherapeutic drugs. A, empty vector, Snail, and Slug constructs were transfected into MM1 cells, lysates were prepared, and expression of FLAG epitope-tagged Slug and Snail proteins was determined by immunoblotting. The anti-FLAG signal indicates the expression of the Snail and Slug proteins, and the anti-actin signal confirms equivalent loading of the lanes. B, an empty expression vector or a vector encoding Snail or Slug was transfected into MM1 cells. After 48 h, cells were pulse-exposed for an additional 24 h to the chemotherapeutic agents as described in the legend to Fig. 5. Apoptosis was then determined by a cell death detection ELISA kit. *, p ≤ 0.01 versus control, ANOVA, n = 4.

However, the precise mechanisms of Slug effects on multidrug resistance remain to be clarified, although they were not a consequence of MDR1 expression, since Slug inhibition by c-Kit siRNA did not alter MDR1 expression and verapamil did not restore sensitivity to pharmacological treatment in MM DX sublines. Slug and other members of the Snail family bind to specific target genes and function as transcriptional repressors (21). We were unable to detect any significant influence of Slug on pro- or antiapoptotic genes of the Bcl-2 family, such as bax and bcl-2 (results not shown), which are frequently involved in tumor resistance to apoptosis (31). Interestingly, Slug represses E-cadherin expression in breast cancer (28), and loss of E-cadherin function occurs during tumor progression (45). The role of E-cadherin within this context is being investigated. In summary, we demonstrated for the first time that SCF, c-Kit, and Slug are up-regulated in MDR sublines. Autocrine production of SCF by tumor cells activates its tyrosine kinase receptor c-Kit and in turn induces slug gene expression, which mediates the resistance of the cells to chemotherapy. This novel function of the SCF/c-Kit/Slug pathway in cancer biology may provide a rationale for combining conventional chemotherapeutic drugs with a new generation of SCF/c-Kit/Slug signal transduction inhibitors for the treatment of MM.
Effect of receptors on the resonant and transient harmonic vibrations of Coronavirus

The paper is concerned with the vibration characteristics of the Coronavirus family. There are some 25-100 receptors, commonly called spikes, protruding from the envelope shell of the virus. Spikes, resembling the shape of a hot air balloon, may have a total mass similar to the mass of the lipid bi-layer shell. The lipid proteins of the virus are treated as a homogeneous elastic material, and the problem is formulated as the interaction of a thin elastic shell with discrete masses, modeled as short beams of conical cross-section. The system is subjected to ultrasonic excitation. Using the methods of structural acoustics, it is shown that the scattered pressure is very small and the pressure on the viral shell is simply the incident pressure. The modal analysis is performed for a bare shell, a single spike, and the spike-decorated shell. The predicted vibration frequencies and modes are shown to compare well with the newly derived closed-form solutions for a single spike and with existing analytical solutions for thin shells. The fully nonlinear dynamic simulation of the transient response revealed the true character of the complex interaction between the local vibration of the spikes and the global vibration of the multi-degree-of-freedom system. It was shown that harmonic vibration at or below the lowest resonant modes can excite large-amplitude vibration of the spikes. The associated maximum principal strain in a spike can reach large values in a fraction of a millisecond. Implications for possibly tearing spikes off the shell are discussed. Another important result is that after a finite number of cycles, the shell buckles and collapses, developing internal contacts and folds with large curvatures and strains exceeding 10%. For the geometry and elastic properties of the SARS-CoV-2 virus, these effects are present in the range of frequencies close to the ones used for medical ultrasound diagnostics.

Introduction

Nature has endowed viruses with a beautiful and dangerous feature: the crown. Many enveloped viruses, including influenza, HIV, and SARS, belong to this family. The crown is composed of densely packed receptors, commonly named spikes. They are not just for decoration. The receptors play an essential role in the reproductive cycle of the virus. They bind with their counterparts on the invaded cell and initiate the mechanism of injecting the deadly genome into the cell. For decades, the interface between the virus and the host cell has been a battlefield of modern science. Tremendous efforts have been made by the medical and biological communities to deactivate the interface by pharmacological means or by boosting the natural immune response, as is the case with the new vaccine. This is not the place to review the enormous literature on this subject. Instead, we will look at the effect of the receptors from the point of view of the rigorous laws of continuum mechanics and structural dynamics. In the case of the influenza A virus, the spikes are massive structures protruding through the apparently thin and smooth membrane. The estimated total mass of the spikes in the viral envelope is similar to the mass of the bare lipid bi-layer shell itself. The objective of the reported work is to identify the most probable damage scenario of the family of coronaviruses subjected to harmonic excitation.
In the language of mechanics, the problem is formulated as the interaction of the vibration of a thin elastic shell with a large set of attached, but much smaller, elastic objects randomly distributed over the surface. The specific geometry of the spikes and the presence of large concentrated masses render the resonant and transient response of the viral shell different from that of the spike-free liposome, which justifies the title of the paper. This paper does not report on any new constitutive models or other advances in theoretical and computational mechanics. On the contrary, it uses simple concepts of the mechanics and physics of solids, which are commensurate with the incomplete knowledge of the geometry and material properties of the viruses. Instead, the paper contributes to the solution of the most urgent problem facing the nation and the world. In this article, we construct a practical geometrical and computational model of the viral shell decorated with spikes, based on the limited information in the literature. The scattering problem of the shell with the acoustic harmonic wave is solved to determine the spatial and temporal variation of the pressures on the surface of the shell. Closed-form solutions are derived for the resonant vibrations of the shell and individual spikes in the realm of continuum mechanics. Then, a numerical simulation is performed on the static and dynamic response. A fully nonlinear simulation of the complex assembly of the viral envelope heavily armed with spikes follows the modal analysis. Finally, the possibility of permanently damaging the shell and/or spikes by a short burst of the ultrasound wave is discussed. The paper poses three questions pertinent to the present pandemic: (i) Under what conditions will the ultrasound pulse excite large-amplitude resonant vibrations of the viral envelope with the crown? (ii) Can the life cycle of the SARS-CoV-2 virus be disrupted by ultrasound excitation without damaging healthy cells? (iii) By what mechanism can the spikes and the viral shell be permanently damaged? The paper answers the above questions and builds the computational model step-by-step using the tools of modern acoustics, structural mechanics, and dynamics.

Neuman et al. (2006) and Beniac et al. (2006) were the first to determine the most probable geometry of the spikes. The relative contribution of the lipid bilayer shell and the S-receptor proteins of the influenza virus in the process of static indentation was studied experimentally by Schaap et al. (2012) under the Atomic Force Microscope (AFM). Compared with the bare viral membrane, the spike-decorated virus was found to be twice as stiff, due to the membrane-attached spikes. This conclusion was independently confirmed by Li (2012). Cryo-Electron Tomography (CT) is a powerful tool to scan and visualize the architecture of a virus. In the second half of 2020 alone, five papers were published disclosing various salient features of the SARS-CoV-2 virus, its spikes, and the way it is connected to the envelope shell. Wrapp et al. (2020), Srinivasan et al. (2020), Yao et al. (2020), Ke et al. (2020), and Turoňová et al. (2020) provided complete 3D models of the virus, now circulating in the media. Low-cycle fatigue in nanoindentation tests on the SARS-CoV-2 virus was reported by Kiss et al. (2021). The dynamic responses of liposomes have been extensively studied by the drug delivery industry.
Liposomes are tiny man-made smooth spherical vesicles that transport a given drug to the infected cell. There is an enormous literature on this subject accumulated over the past 30 years. Excellent review articles on this subject were published by Schroeder et al. (2009) and Sirsi and Borden (2014). A practical solution to the drug release issue is offered by poromechanics (Ma et al., 2018). The theory of sonoporation uses the concept of ultrasound-induced cavitation and the collapse of micro-bubbles in order to induce openings in the liposome shells. It is interesting that the new vaccine developed by Pfizer is delivered to the body by means of a liposome. Molecular Dynamics offers many tools to model and visualize the process of the assembly of the viruses. There are several degrees of resolution, from the atomic level, through the particle level, all the way to the coarse grain. A critical review of the vast literature on this subject (600+ papers) is available in the recent paper by Marrink et al. (2019), who is also behind the development of the popular Martini model (Arnarez et al., 2015). The images of the virus generated by molecular dynamics were very helpful for choosing the geometry of the virus in Section 2. Hu and Buehler (2020) constructed a multi-degree-of-freedom model of the spike at the scale of molecules connected by tiny springs. Their model predicts a spectrum of vibration modes within the spike itself. They found that the lowest mode is critical for successful binding with the receptors of the invaded cell. Parallel to molecular dynamics, homogenized continuum models of the virus were proposed, based on the information provided by nanoindentation tests under the Atomic Force Microscope (AFM) (De Pablo, 2020; Klug et al., 2006). This approach goes back to the work of the late Tony Evans at UCSB, who introduced the concept of area compressibility and bending stiffness of the viral shell (Evans et al., 1976). The continuum formulation and molecular dynamics used to model the geometry and properties of viruses are two complementary tools rather than competing theories. Each is based on a certain set of assumptions and has its advantages and limitations. Our team was encouraged by the approach taken by a group of researchers at Caltech led by Ortiz. They predicted, by purely analytical and numerical methods, a spectral gap between the resonant harmonic frequencies of healthy and cancerous cells (Heyden and Ortiz, 2016). Guided by this work, the first successful trial to kill breast, colon, and leukemia cancer cells was announced by the same team (Mittelstein et al., 2020). Several challenges were encountered in the course of the present research. The main difficulty comes from a lack of experimental data in the literature on the damage and fracture of the envelope and S-spike materials at the scale of continuum mechanics. Buehler and Ackbarow (2007) offer a description of the differences in fracture mechanisms at the scale of atoms and molecules, as compared with the ductile and brittle fracture of engineering materials. They emphasize the importance of covalent and hydrogen bonds at the atomic level. At the molecular scale, different forces operate. For example, the heads and tails in the lipid bi-layer membrane are held together by hydrophobic, Van der Waals, electrostatic, and other long- and short-range forces. This is not the avenue that is taken in this article.
We have to rely on the limited test configurations that can be performed at a scale of 10-100 nanometers. Several assumptions had to be made in lieu of experimental data. More discussion on this topic is presented in Section 13. It is ironic that much more useful information exists in the literature on critical interaction forces at the atomistic and molecular scale than at the scale of a continuum. Still, several techniques, such as the RVE method, exist in mechanics to homogenize discrete molecular models and determine the average elastic properties (Young modulus and Poisson ratio). To the best of our knowledge, this has not been done. The present paper summarizes the results of the feasibility study and indicates a plan for follow-up research. It is directed to all audiences and is intended to initiate a discussion and collaboration across several disciplines of microbiology, physics, and mechanics. The results obtained are surprising and at the same time very promising.

Geometry of the SARS-CoV-2

Topology. There is an abundance of photographic coverage of coronaviruses such as influenza and SARS-CoV-2 on the web. A photograph of the SARS-CoV-2 during the budding process and of the sister TGEV pig virus is shown in Fig. 1. A virus does not make an exact copy of itself in the reproduction cycle. In the coronavirus family, the size, shape, and distribution of the receptors vary considerably across the surface of the virus and from one virus to the other (Neuman et al., 2006). What a virus does extremely well is to copy exactly its RNA genome. It is difficult to determine the size and number of receptors (spikes) on the surface of the virus from the planar 2D, low-resolution photographs. Counting the number of spikes on the periphery of the viral shell of the TGEV virus in Fig. 1 returns numbers between 12 and 22, with an average of 16. Some assumptions must be made to develop the 3D model of the virus with receptors. There is a vast literature on the geometrical modeling of smooth capsid viruses; see for example (Twarock and Luque, 2019). Assuming perfect triangulation on a sphere, the total number of vertices (spikes) on the surface of the virus, N, can be determined with the help of the Euler formula of topology,

N - E + T = 2, with E = 3T/2, so that N = T/2 + 2, (1)

where T stands for the number of equilateral triangles on the surface of the sphere and E for the number of their edges. The side length of a triangle, l, can be found from the 2D photos by counting the number of spikes n on the circumference, l = 2πR/n. Then, the area of one triangle times the number of triangles T must be equal to the area of the virus, 4πR². The total number of spikes N on the sphere is then found from this simple geometrical consideration and Eq. (1),

N = 2 + 2n²/(√3 π). (2)

Eq. (2) appears to predict correctly the number of spikes on the sphere, even though the spikes are randomly distributed on the surface. For example, taking n = 16 for the TGEV virus, the total number of spikes is N = 96, in agreement with the value given by Srinivasan et al. (2020). Schaap et al. (2012), and others, reported a similar number for the influenza A virus. Most recently, Yao et al. (2020) and independently Ke et al. (2020) reported a much smaller number of spikes, N = 24-25, for the SARS-CoV-2. From the Cryo-EM photos in the above papers, the average number of sparsely distributed spikes on the periphery of the shell is n = 8. The Euler formula again correctly predicts the total number of spikes, N = 26.
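Equations (1) and (2) are easy to put to work. The snippet below, a sketch under the stated perfect-triangulation assumption, reproduces the spike counts quoted for the TGEV/influenza A (n = 16) and SARS-CoV-2 (n = 8) silhouettes:

```python
import math

def spike_count(n_perimeter):
    """Total number of spikes N on a sphere from the number n counted on the
    2D silhouette. Perfect triangulation gives T = 4*n^2/(sqrt(3)*pi)
    triangles, and the Euler formula (Eq. (1)) gives N = T/2 + 2 (Eq. (2))."""
    T = 4.0 * n_perimeter ** 2 / (math.sqrt(3.0) * math.pi)
    return T / 2.0 + 2.0

print(round(spike_count(16)))  # ~96 spikes (TGEV, influenza A)
print(round(spike_count(8)))   # ~26 spikes (SARS-CoV-2)
```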
In the present computational model, a densely populated virus with 96 spikes is considered, because the experimental validation will be performed on the TGEV pig virus. Guided by the above results, the distribution of spikes on the sphere was defined by making several cuts at different parallels, and an increasing number of spikes was evenly distributed over the circumference of each of the respective circles. In the present, densely populated virus, the numbers of spikes on the respective cuts are taken to be 1-6-8-10-14-20-14-10-8-6-1, giving a total of 98 spikes, see Fig. 2. Such a model preserves rotational symmetry, but perfect triangulation was intentionally not achieved.

Spike geometry. Determination of the exact shape of the spikes and of the way they are attached to the lipid bilayer and to the M protein below has been one of the main challenges in the course of this research. In most of the existing photos, only a shadow of the receptor is seen. Compared with the radius of the virus, the spikes are small but compact structures, resembling a hot air balloon in the side view. It is generally agreed that the spike is composed of three proteins twisted together, giving the appearance of a cloverleaf from above. The early photographs published by Beniac et al. (2006) and Neuman et al. (2006) were too crude to serve as prototypes of computational models. A more precise geometry of the SARS-CoV-2 spike is reconstructed based on the image provided by the Cryo-EM photographs and molecular dynamics simulation, shown in Fig. 3 (Wrapp et al., 2020). Still, the shape of the spike constructed by molecular dynamic simulation varies across the recent literature (Hu and Buehler, 2020; Srinivasan et al., 2020). Very recently, Turoňová et al. (2020) presented the most detailed pictures and models of the spike and its connection to the SARS-CoV-2 viral shell. They found that the connecting stalk is composed of three very flexible cables that can be modeled as a three-hinge system of rods. The bending stiffness is then very small, allowing the virus to more easily find and connect to the receptors of the infected cell. The present model is representative of the influenza and TGEV viruses and is different. The information on the total height of the spike, H = 16 nm (160 Å), is consistent across the literature. All other dimensions can then be scaled up from the image in Fig. 3. The spike is assumed to have the shape of a truncated cone with radius r1 at the bottom and r2 at the top. The top radius r2, shown in Fig. 3b, is different in the side view and the top view. Taking r2 = 8 nm, the volume of the cone is about 1072 nm³. This is almost three times larger than the tributary volume of shell material per spike, V = 4πR²h/4N = 320 nm³. The same radius estimated from the aerial view, r2 = 5.5 nm, reduces the volume to 506 nm³. These numbers already suggest that the spikes will dominate the vibration response of the virus. There is a short cylindrical part of the spike at the bottom, called the stalk in the biomedical literature. The radius r1 and the length of this bottom part control the bending stiffness of the spike and hence its natural vibration frequency. In the present paper, it is assumed that the spike is fixed into the shell. A modified model of SARS-CoV-2 with a smaller number of spikes and the 3-hinge connection at the base will be the subject of a follow-up publication (Nonn and Wierzbicki, 2021).
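To make the mass argument concrete, the sketch below compares the spike volume with the tributary shell volume per spike. The stalk radius r1 = 2 nm is our assumption (it is not stated explicitly above); the full-cone value with r2 = 8 nm reproduces the 1072 nm³ quoted in the text.

```python
import math

R, h = 50.0, 4.0           # shell radius and thickness, nm (Table 1 values)
H, r1, r2 = 16.0, 2.0, 8.0 # spike height, stalk and top radii, nm (r1 assumed)
N = 98                     # number of spikes in the computational model

v_full_cone = math.pi * r2 ** 2 * H / 3.0                    # ~1072 nm^3
v_truncated = math.pi * H * (r1**2 + r1*r2 + r2**2) / 3.0    # truncated cone
v_tributary = 4.0 * math.pi * R**2 * h / (4 * N)             # ~320 nm^3 per spike

print(v_full_cone, v_truncated, v_full_cone / v_tributary)
```

Either estimate puts several times more protein volume in a spike than in its tributary patch of shell, which is why the spikes dominate the modal analysis that follows.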
Mechanical properties. The mechanical properties of viruses available in the literature are by far incomplete for performing a rigorous analysis. The review article by Buehler and Yung (2009) provided a wealth of information on the deformation and failure of protein materials in general, with no reference to the spike S-protein. For the purpose of the present paper, we follow the assumption of mainstream research in microbiology that the viral envelope is an elastic continuum. This of course leaves several unanswered questions about the symmetry of elastic properties in tension and compression, viscosity and strain rate sensitivity, the multi-axial response, failure or rupture of the spike material, the effect of different properties of the shell and spikes, etc. The state of the art is that the Young modulus is determined by matching measured and numerically predicted force-displacement curves in nanoindentation tests under the Atomic Force Microscope (AFM). An excellent overview of methods and results on the mechanics of viruses can be found in the monograph edited by Mateu (2013), who also contributed several key chapters of the book. In the present paper, the specific values of the geometrical and loading parameters, Young modulus, and Poisson ratio of the influenza A virus, taken from Schaap et al. (2012), are given in Table 1.

Fig. 2. Reconstruction of the 3D model of the spike-decorated Influenza A virus from 2D photographs: top view (a) and side 3D image (b). The plane ultrasound harmonic wave is perpendicular to the axis of the sphere joining the North and South Poles.

Wave scattering

The plane longitudinal harmonic wave introduces a time-variable pressure on the surface of the virus. The total pressure is the sum of the incident pressure and the scattered pressure. In general, the pressure field of the plane incident acoustic wave generated by the ultrasound transducer can be written in a form separable in space and time. In the spherical coordinate system (r, θ, ϕ), the pressure is axisymmetric due to the spherical symmetry, depending only on the polar angle θ but not on the azimuth ϕ:

p_I = p_0 cos(ωt − kz), z = R(1 − cos θ), (3)

where p_0 is the incident pressure amplitude, t is time, ω is the circular frequency measured in rad/s, k is the wavenumber, and z = R(1 − cos θ) is the coordinate in the direction of the propagating plane acoustic wave. The circular frequency ω is related to the linear frequency f, measured in Hz, by ω = 2πf. The wavenumber is k = ω/c, where c is the sound speed in the medium. The present analysis starts with the simplest uncoupled model of a smooth rigid sphere. Then, a more complex model will be constructed and discussed. The reference solution for a rigid sphere is important because it is exact and does not depend on the material properties of the virus shell. There are two physical parameters controlling the total pressure acting on the virus, the radius of the sphere R and the wavenumber k, which form a single dimensionless parameter kR = 2πRf/c = 2πR/λ, where λ is the wavelength. The next step in complexity is the consideration of the receptors, or spikes, on the surface of the virus, still keeping the material rigid. Viruses, especially the SARS-CoV-2, are highly deformable, and determining the surface pressure generated by the interaction with the incoming wave at the nano-scale is a formidable problem that has not been studied in the literature. Finally, the effect of elasticity on the scattered pressure will be discussed.
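Since the whole scattering analysis hinges on the single parameter kR, it is useful to see its magnitude for the frequencies discussed later. The values below use R = 50 nm, our assumed Table 1 value consistent with R/h = 12.5 and h = 4 nm:

```python
import math

R = 50e-9                              # virus radius, m (assumed Table 1 value)
C = {"air": 343.0, "water": 1480.0}    # sound speeds, m/s

def kR(f_hz, medium):
    """Dimensionless scattering parameter kR = 2*pi*R*f/c = 2*pi*R/lambda."""
    return 2.0 * math.pi * R * f_hz / C[medium]

for f in (1e6, 110e6, 923e6):          # diagnostic MHz range up to breathing mode
    print(f"{f/1e6:7.0f} MHz  kR_air={kR(f, 'air'):.4f}  kR_water={kR(f, 'water'):.4f}")
```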
Scattering of the rigid shell

For a general body geometry, numerical approaches must be employed to solve for the scattered pressure. One efficient and effective numerical approach is the boundary element method, which has been well developed and widely applied to many problems of structural acoustics (Jensen et al., 2011). In this method, the solution of the boundary-value problem for the unknown scattered pressure field is formulated as boundary integrals in terms of the free-space Green function and the pressure and normal pressure gradient on the object surface. On the object surface, the normal pressure gradient is known from the boundary condition, while the scattered pressure is unknown. The unknown pressure on the object surface is solved from the boundary integral equation by numerical boundary element discretization. In principle, the application of this numerical method can provide an accurate evaluation of the effect of the spikes on scattering. The input parameters of the problem are the body geometry, the wavenumber k, and the amplitude of the ultrasound wave p_0. The output is the spatial and temporal variation of the incident, scattered, and total pressure acting on the body. The scattering problem considered in this work is linear. The amplitude of the scattered pressure p_s(t) is linearly proportional to the incident pressure amplitude p_0. In general, the total pressure field consists of cosωt and sinωt components, i.e. p_T = p_c cosωt + p_s sinωt, with p_c = p_Ic + p_Sc and p_s = p_Is + p_Ss. In the case of a rigid sphere or spherical shell, an analytic solution for the scattered pressure is known (Morse and Ingard, 1986). A subroutine was developed to determine the total pressure on the shell. As an illustration, Fig. 4 shows the distributions of the incident, scattered, and total pressures on the surface of the rigid sphere for a sample value of kR = 4. The three curves on each sub-figure respectively represent p_Ic(θ), p_Sc(θ), and p_c(θ) for the cosωt component and p_Is(θ), p_Ss(θ), and p_s(θ) for the sinωt component. At this value of kR, for which the acoustic wavelength is comparable to the sphere's radius R, the scattered pressures are comparable to the incident pressures in amplitude. The total pressures over the sphere surface, p_c(θ) and p_s(θ), are represented by a small number of Fourier-Legendre functions of cosθ, which enables a straightforward determination of the associated pressure excitations for the normal-mode vibrations of the deformable spherical object or shell. Fig. 5 shows the variation of the components p_c and p_s of the total pressure on the surface of the sphere for different values of kR. In the case of small kR, the scattering is weak, so the scattered pressure is negligible. The total pressure is dominated by the incident pressure p_I, with p_c(θ)/p_0 ≈ 1 and p_s(θ)/p_0 ≈ 0. This pressure distribution is effective for exciting the resonant motions of lower normal modes such as the breathing and bouncing modes. In the case of large kR, both p_c and p_s oscillate with the polar angle on the sphere surface with a phase shift; there is strong scattering from the front surface of the sphere but weak scattering from the back surface due to the shadowing effect. The resulting pressure distribution is effective in producing resonant motions of relatively higher normal modes of the deformable body. The present solution is valid for both media, air and water.
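The rigid-sphere reference solution quoted from Morse and Ingard can be reproduced in a few lines. The sketch below evaluates our reading of the standard partial-wave series for the total surface pressure on a rigid sphere (with an exp(-iωt) convention; the real and imaginary parts correspond to the two phase components p_c and p_s):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def surface_pressure(kR, theta, n_max=40):
    """Complex total pressure p/p0 on a rigid sphere hit by a plane wave:
    p(theta)/p0 = sum_n (2n+1) i^(n+1) P_n(cos theta) / ((kR)^2 h_n'(kR)),
    which follows from the rigid boundary condition and the Wronskian of the
    spherical Bessel functions (cf. Morse and Ingard, 1986)."""
    x, mu = float(kR), np.cos(theta)
    p = np.zeros_like(mu, dtype=complex)
    for n in range(n_max):
        hn_prime = (spherical_jn(n, x, derivative=True)
                    + 1j * spherical_yn(n, x, derivative=True))
        Pn = np.polynomial.legendre.Legendre.basis(n)(mu)  # Legendre P_n(mu)
        p += (2 * n + 1) * (1j ** (n + 1)) * Pn / (x**2 * hn_prime)
    return p

theta = np.linspace(0.0, np.pi, 7)      # pole-to-pole sample, as in Fig. 4
print(np.round(np.abs(surface_pressure(4.0, theta)), 3))  # the kR = 4 case
```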
Scattering in water is similar to that in air, since the solution of the scattering problem depends on the single parameter kR only. Since the sound speed in water is about four times larger than in air, the value of kR in water is about four times smaller than in air for the same sound frequency. As a result, it may be easier to induce the resonant motions of lower normal modes in water than in air. Another effect is associated with the added mass in the motion of the normal modes. The added mass in water is significant, while it is negligible in air, since the density of water is much larger than that of air (Lamb, 1931). The consideration of the added mass effect in the motion of the deformable body generally lowers the resonant frequencies of the normal modes. The correction factor for the natural vibration frequencies of the bare shell is given in Section 11. A similar analysis of the effect of water on the vibration frequency of a single spike is provided in Section 10.

Scattering into a deformable elastic shell

In the case of a deformable object, the action of the time-varying pressure, including the scattering effect, will excite vibrations of the surface of the object. This changes the Neumann boundary condition for the normal pressure gradient on the object surface. The coupled fluid-solid interaction problem then becomes quite complicated. The vibration of the object surface radiates sound waves propagating away from the object, causing acoustic energy dissipation that limits the amplitude of the vibration. The amplitude of the induced resonant vibration of a normal mode is generally given by the ratio between the pressure excitation and the acoustic impedance at the associated resonant frequency. Significant simplifications are obtained in the case of the uniform breathing vibration mode, to be derived in the next section, because the unit normal vector to the spherical surface remains the same. In the case of small kR values, a simple analytic solution for the acoustic impedance of the breathing (or bouncing) mode is obtained as the radiation of a monopole (or dipole) source (Lamb, 1931).

Effect of spikes on the scattering

Determination of the effect of the spikes on the process of scattering and on the surface pressure generated by the harmonic wave is a very challenging task. The shape of the spike-decorated model is too complicated to perform an exact scattering analysis. Instead, we assessed the scattering effect of a spike by means of two approximate models. In the first model, since the size of an individual spike is much smaller than the viral shell, the scattering from the spike is negligibly small in the cases with kR much less than 1. With this model, the incident pressure dominates the total pressure on each spike, and the resulting pressure on the viral shell is simply the known incident pressure. The second model is to treat the spike as a small rigid sphere that is connected to the viral shell by a thin rod. The scattering by the spike is then approximately represented by the scattering of the small sphere, and the pressure acting on the surface of the spike is self-equilibrated and uniform. The solution of the scattering by a sphere is analytically known. In particular, in the case of small values of kR, the pressure on the sphere is nearly uniform, exciting only radial extensions and compressions within the volume of the spike. This type of response is then fully uncoupled from the global vibration of the viral shell.
In either case, the spikes have only a second-order effect on the pressure distribution on the surface of the virus. At the same time, the spikes dominate the eigenvalue and transient response because of their large mass.

Fourier/Legendre expansion of the spatial pressure distribution

In the present formulation, the spatial and temporal distribution of the pressure is represented in multiplicative form. The temporal variation is the same as that of the incident harmonic wave. The spatial distribution of the pressure, p_c and p_s, depends on the single parameter kR. In the case of a sphere, p_c and p_s are given by a summation of the Fourier-Legendre series in the polar angle θ, with the amplitude coefficients decaying rapidly with the order of the Fourier-Legendre function. The first term in the series is independent of θ, representing a uniform pressure over the body surface. It excites the breathing vibration of the elastic spherical shell. The second term excites the bouncing vibration. For small values of kR, these two terms are dominant. For large values of kR, in addition to these two terms, higher terms also become important, which can excite higher normal-mode vibrations. Guided by the above mathematical formulation, a simple analytical fit of the numerically determined distribution of the total pressure acting on the surface of the shell is proposed (Eq. (4)). The exact numerically determined pressure distribution is shown by a solid black line in Fig. 6 for two values of the parameter kR. The prediction of the simplified Eq. (4) is denoted by the red line. The pressure given by Eq. (4) is seen to capture the first-order effects with good accuracy for kR less than or equal to order one. It will be introduced in Section 14 as a forcing term in the transient analysis of vibration.

Reynolds number, cavitation, and viscosity

It is instructive to check the magnitude of the Reynolds number. For large Reynolds numbers, there is a danger of cavitation. Very small Reynolds numbers bring the necessity of considering viscous effects. From the definition, the Reynolds number is Re = uR/ν, where u is the characteristic fluid particle velocity and ν is the kinematic viscosity of the fluid. The flow velocity is defined from the harmonic vibration. If A represents the amplitude of the resonant motion of the spherical shell, then u = ωA, where ω = kc and c is the sound speed. The Reynolds number can then be transformed into the form

Re = ωAR/ν = kcAR/ν. (5)

The associated Womersley number, which is the ratio of the transient inertial force to the viscous force, can also be calculated. The magnitudes of Re and of the associated Womersley number for the representative vibration amplitude of A = 10 nm are given in Table 2. The calculated numbers fall within a safe middle range. It can be concluded that the viscous and cavitation effects can be neglected in the estimate of the harmonic forces. Since the Reynolds number measures the viscous flow effect on the mean force on the shell, it does not affect the assumptions made in solving the (transient) harmonic motion problem. The above viscous effect should not be confused with the viscoelastic properties of the envelope, which bring up the issue of damping of the resonant vibration. That effect will be considered in a subsequent publication.
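Eq. (5) and the Womersley number can be checked directly. The sketch below uses assumed water properties (the entries of the paper's Table 2 are not reproduced here):

```python
import math

R = 50e-9      # shell radius, m (assumed)
A = 10e-9      # representative vibration amplitude, m (as in Table 2)
NU = 1.0e-6    # kinematic viscosity of water, m^2/s (~1.5e-5 for air)

def reynolds(f_hz):
    """Re = u*R/nu with u = omega*A, i.e. Re = 2*pi*f*A*R/nu (Eq. (5))."""
    return 2.0 * math.pi * f_hz * A * R / NU

def womersley(f_hz):
    """Womersley number: ratio of transient inertial to viscous forces."""
    return R * math.sqrt(2.0 * math.pi * f_hz / NU)

for f in (1e6, 110e6, 923e6):
    print(f"{f/1e6:7.0f} MHz  Re={reynolds(f):.3g}  alpha={womersley(f):.3g}")
```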
Resonant vibrations of the smooth shell

Determination of the spectrum of free frequencies f_m of shells of revolution has been the subject of extensive research since the classical treatise of Lamb (1882). The spectrum of natural frequencies consists of two infinite sets of modes spaced within a finite frequency interval. The upper frequency spectrum corresponds to purely radial motion. The lowest frequency of this branch, corresponding to the so-called breathing mode, is given by

f_0 = (1/2πR)√(2E/ρ(1 − ν)). (6)

When additional degrees of freedom in the form of tangential displacements are allowed, the membrane energy is considerably reduced, giving rise to the lower frequency spectrum. However, purely inextensional deformation modes can never be achieved within the membrane theory for the spherical shell. Inextensional modes develop only for structures with zero Gaussian curvature, such as a cylindrical shell. Baker (1961) presented the most general solution of the membrane response in the form

f_m = (g_m/2πR)√(E/ρ),

where g_m is the dimensionless frequency parameter of the vibratory mode of order m. Four cases of particular interest are compared in Table 3. The frequency corresponding to the breathing mode m = 0, involving uniform extension and compression of the shell, is 2-4 times higher than all the others. The rocking mode m = 1 is a rigid body translation and is not present. The bouncing mode, m = 2, and all higher modes do not depend on the Poisson ratio ν. This is an important result because the exact value of ν is unknown for the viral lipid bilayer. The dimension of the term √(E/ρ)/R is 1/s, so the frequency f is in Hz. It should be noted that, except for m = 0, all frequencies of the lower branch approach an asymptotic value. Taking the input parameters gathered in Table 1, the predicted frequencies are shown in the last column of Table 3. The comparison of the analytical prediction with the FE simulation is presented in Section 11.

Table 3. The frequency parameter g_m in Baker's (1961) solution (entries not reproduced here).

Stiffness and strain localization in the single spike

A simple beam bending analysis of the spike provides a wealth of information about the mechanics of the cone-like spikes; refer to Fig. 12. The authors were unable to find a solution for the cone-like cantilever in the literature. The solution for the Euler-Bernoulli beam theory is summarized below. The correction for the shear effect (Timoshenko beam) is discussed later. The geometry is fully characterized by the height H, the bottom (reference) radius r_1, and the top radius r_2, or the ratio of the radii η = r_2/r_1. The moment of inertia of a circular cross-section is I = πr⁴/4, where r is the current radius. Denote by I_0 = πr_1⁴/4 the reference moment of inertia of a prismatic beam of constant radius r_1. The spike is loaded by a force P at its tip and fully clamped at the bottom. The bending moment equilibrium equation is

EI(x) d²w/dx² = P(H − x), (7)

where w is the transverse deflection of the beam. For the fully clamped beam, both the deflection and the slope are zero at the base. In the conical geometry, the radius is a linear function of the x-coordinate,

r(x) = r_1[1 + (η − 1)x/H]. (8)

Care should be taken when solving Eqs. (7) and (8) because of a singularity at η = 1. After some cumbersome algebra, the solution of the ordinary differential equation yields the relation between the tip load P and the deflection δ, P = Kδ, where the stiffness of the structure is

K_1 = 3ηE_1I_0/H³. (9)

The subscript 1 indicates the solution for the stiffness of the spike. For a uniform-thickness beam, η = 1, and Eq. (9) reduces to the classical solution for a cantilever beam. The normalized deflection profile of the axis of the cone, w̄ = w/δ, follows in closed form (Eq. (10)), where r̄ = r/r_1. The plots of the normalized deflection profiles of the analytical and numerical solutions for the two limiting cases η = 1 (uniform cross-section) and η = 4 (truncated cone) are shown in Fig. 7.
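The two shell formulas above are easily evaluated. The sketch below uses the nominal properties assumed throughout (E = 30 MPa, ρ = 1300 kg/m³, ν = 0.45, R = 50 nm, our reading of Table 1, which reproduces the 923 MHz breathing frequency quoted later):

```python
import math

E, RHO, NU_P = 30e6, 1300.0, 0.45   # Pa, kg/m^3, Poisson ratio (assumed)
R = 50e-9                           # shell mid-surface radius, m

# breathing (m = 0) mode, Eq. (6)
f_breathing = math.sqrt(2.0 * E / (RHO * (1.0 - NU_P))) / (2.0 * math.pi * R)

def f_baker(g_m):
    """Lower-branch membrane mode in Baker's form, f_m = g_m*sqrt(E/rho)/(2*pi*R).
    The dimensionless parameters g_m come from Table 3 (not reproduced here)."""
    return g_m * math.sqrt(E / RHO) / (2.0 * math.pi * R)

print(f_breathing / 1e6)   # ~923 MHz, matching the value quoted in the text
print(f_baker(1.0) / 1e6)  # ~484 MHz per unit g_m, below the ~572 MHz asymptote
```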
In both cases, the predicted deflected shapes are very close to the numerical results. The present solution proves that deformations are localized in a short region of length approximately equal to the bottom radius of the spike. The spike then responds mostly by rotation around a generalized "elastic" hinge. A comparison of the prediction of Eq. (9) with the numerically calculated stiffness is presented in the table in Fig. 8. Two effects might be responsible for the 16% error in the case of η = 4. One is the transverse shear, and the other is the clamped boundary condition of the spike. The solution for the Timoshenko cone-like beam is too complex to be presented in closed form. At the same time, the effect of the boundary condition can be assessed. The above analysis is valid for the model with a stiff base connection, inducing flexural vibration of the spike. In the other limiting case of a flexible connection, applicable to the SARS-CoV-2 (Turoňová et al., 2020), the spike will undergo an axial tension-compression vibration mode. The local stiffness of the shell is large compared with the spike bending stiffness, but it is finite. Consider the tributary area of the spike as a square section of the shell, H × H, subjected to a local point bending moment. Disregarding the local curvature, the approximate solution provides the following expression for the local bending stiffness of the shell: K_2 = π²E_2h³/3H². For the present geometry, and E_1 = E_2, the solution is K_2 = 27K_1. The combined bending stiffness K of the two springs in series can be calculated from

1/K = 1/K_1 + 1/K_2. (11)

Comparing K with K_1, it is seen that the correction is small and lowers the total bending stiffness by 5%, still leaving an error of 11%. It can be concluded that the error comes mainly from the transverse shear strains not included in the Euler beam theory. Knowing the beam stiffness and the deflected shape, it is possible to calculate the maximum curvature at the hinge from Eqs. (9) and (10) in terms of the tip deflection,

κ_max = 3ηδ/H². (12)

The corresponding maximum strain at the extreme fiber, located at r = r_1 from the middle axis of the beam, is

ε_max = κ_max r_1 = 3ηr_1δ/H². (13)

For example, taking the amplitude of vibration δ = 3.65 nm from Fig. 8, the strain reaches 0.35. At the maximum amplitude of vibration, δ_max = 15 nm (see Section 13 for an explanation), the strain increases to 1.41. The prediction of Eq. (13) is close to, but underestimates, the values of the FE simulation, shown by the color-coded plots in Fig. 8. These values should be compared with the fracture strain of the S-protein spike, which is the subject of a future study. In any case, tearing the spike off the shell at such large tensile strains is a real possibility.

Natural period of vibration of a single spike

Vibration in vacuum (or air). Knowing the bending stiffness of the spike, a closed-form solution for the natural vibration frequency of the spike can easily be calculated. In the one-degree-of-freedom model, the vibration frequency is

f = (1/2π)√(K/M), (14)

where K is the equivalent spring constant, defined by Eq. (11), and M is the total mass of the spike. For a given H, two parameters of the spike, r_1 and r_2, control the frequency.

Fig. 7. The normalized shapes for the analytical and numerical solutions are almost identical for both cylindrical (η = 1) and conical (η = 4) beams.
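Equations (9), (12), and (13) can be verified numerically. The sketch below uses the nominal spike dimensions assumed in this paper (H = 16 nm, r1 = 2 nm, r2 = 8 nm, so η = 4; r1 is our inferred value) and reproduces the strain levels quoted above:

```python
import math

E1 = 30e6                        # spike modulus, Pa (assumed)
H, R1, R2 = 16e-9, 2e-9, 8e-9    # spike height, stalk and top radii, m
ETA = R2 / R1                    # = 4 for the truncated-cone spike
I0 = math.pi * R1 ** 4 / 4.0     # reference second moment of area

K1 = 3.0 * ETA * E1 * I0 / H ** 3      # tip stiffness, Eq. (9)

def max_strain(delta):
    """Extreme-fiber strain at the base hinge, Eq. (13):
    eps = kappa_max * r1, with kappa_max = 3*eta*delta/H^2 (Eq. (12))."""
    return 3.0 * ETA * R1 * delta / H ** 2

print(K1)                        # ~1.1e-3 N/m (the "1104" figure quoted below)
print(max_strain(3.65e-9))       # ~0.34 for delta = 3.65 nm (text: 0.35)
print(max_strain(15e-9))         # ~1.41 for delta = 15 nm
```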
The mass of the truncated cone is

M = (πρH/3)(r_1² + r_1r_2 + r_2²). (15)

The final expression for the frequency is

f = (1/2π)√(3ηE_1I_0/(MH³)). (16)

Taking the reference values of all parameters from Table 1, the closed-form solution predicts the frequency of the spike to be f = 123.8 MHz. By replacing the analytically derived stiffness K = 1104 nN/nm with the more exact numerically calculated stiffness K = 926 nN/nm, the one-degree-of-freedom model predicts the resonant vibration of the spike at 112 MHz, almost identical to the FE value of 110 MHz.

Vibration in water. The shape of a spike is too complicated to attempt an exact solution involving fluid-solid interaction. The concept of the added mass should be used instead. There are several publications on the vibration of cantilever beams in water that are based on this concept. Of particular interest is the article by Weigert et al. (1996), because it provides an experimental validation on beams of only 200 micrometers length. The correction factor is given by the simple expression

f_water/f_air = 1/√(1 + ρ_water/ρ_spike). (17)

The above equation was shown by Weigert et al. to reproduce the experimentally determined frequencies of the first eight modes with good accuracy. Assuming that the mass density of the spike, ρ_spike, is approximately equal to the mass density of water, ρ_water, the correction factor of the frequencies becomes f_water/f_air = 0.71. Thus, the fundamental frequency of the spike is reduced from 112 MHz to 79 MHz. This result is further discussed in Section 14.

Finite element simulation

The geometry of the viral envelope with spikes is complicated, and numerical methods must be used to determine the resonant and transient response. The closed-form solution (Baker, 1961) presented in Section 9 was based on the membrane theory for very thin shells. It was necessary to assess the limitations of this theory, and for that purpose five elements were used through the thickness of the shell for both models. Simulations of the full virus with spikes were done using a one-eighth 3D model with the appropriate symmetry, in order to reduce the computation time. The general-purpose finite element code Abaqus/Standard was used. Eight-node brick elements with reduced integration (C3D8R) were used across the thickness of the shell, and six-node wedge elements (C3D6) were used for the spikes. There were 9610 elements in one spike, 40,920 elements in the shell, and 159,202 elements in total. The explicit time integration algorithms must satisfy the Courant stability criterion. Abaqus automatically adjusts the time step to satisfy this criterion. For the smallest finite element size of a fraction of a nm and the wave speed of 150 m/s in the virus material, the time step is set to 0.0001 ns.

Natural frequencies for the shell without spikes in the air

The natural frequencies for the shell without spikes in the air are presented in Fig. 9. The eigenmodes and the associated eigenfrequencies of the Baker closed-form solution are depicted in the first row. The second and third rows show the Abaqus numerical results for the cross-sectional shape and the 3D view. The first mode m = 1 (first column) is a rigid body rocking motion. It will never be generated under the plane harmonic wave, and its frequency is zero. The second mode, m = 2, is the so-called bouncing mode, sometimes referred to as the wineglass four-node mode. Then, there are higher modes with increasing frequencies. The frequencies predicted by the analytical and numerical solutions are almost identical, and this high level of correlation is valid up to the ninth mode. Up to that point, the membrane action dominates the response.
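Combining Eqs. (9) and (15)-(17) gives the one-degree-of-freedom spike frequency directly. The sketch below reproduces the 123.8 MHz value and the added-mass reduction (our ρ = 1300 kg/m³ is an assumption; with ρ_spike = ρ_water the factor becomes the 0.71 quoted above):

```python
import math

E1, RHO = 30e6, 1300.0             # spike modulus (Pa) and density (kg/m^3), assumed
H, R1, R2 = 16e-9, 2e-9, 8e-9
ETA = R2 / R1
I0 = math.pi * R1 ** 4 / 4.0

K = 3.0 * ETA * E1 * I0 / H ** 3                         # Eq. (9)
M = RHO * math.pi * H * (R1**2 + R1*R2 + R2**2) / 3.0    # Eq. (15)

f_air = math.sqrt(K / M) / (2.0 * math.pi)               # Eq. (16)
f_water = f_air / math.sqrt(1.0 + 1000.0 / RHO)          # Eq. (17), added mass

print(f_air / 1e6)     # ~124 MHz, matching the 123.8 MHz quoted above
print(f_water / 1e6)   # ~93 MHz; with rho_spike = rho_water the factor is 0.71
```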
For still higher modes, the local curvature increases and bending action comes into play. The numerical model with eight elements through the thickness is able to predict both membrane and bending strains. The frequencies continue to increase, as opposed to the Baker solution, which predicts an asymptotic value of about 572 MHz. This feature was pointed out by Silbiger (1962) in his discussion of Baker's solution. So, where is the breathing mode? Searching through the higher modes, it was found between the 16th and 18th modes, which vibrate at frequencies of 895 MHz and 977 MHz, respectively; see Fig. 10. Abaqus detected the purely radial motion at 913 MHz as the 17th mode. In the Baker analysis, the breathing mode belongs to a different branch and vibrates at 923 MHz for a Poisson ratio of ν = 0.45. The inset in Fig. 10 illustrates the locally distorted mesh as a combination of local shear and bending strains. These two components, which are absent from the Baker solution, substantially increase the stiffness of the shell and its vibration frequency. It is concluded that the breathing mode is important mainly because of its mathematical simplicity. In reality, it must be an unstable mode. A small disturbance in the pure spherical geometry or uniform loading will cause a jump to a different mode. There is an analogy here with the buckling of a thin spherical shell, where the pre-buckling uniform compression bifurcates into higher modes.

Fig. 9. Comparison of Baker's analytical solution (top row) with the results of the numerical simulation (bottom rows). Both models were axisymmetric, smooth, and spike-less. A perfect correlation is observed in both mode shapes and natural frequencies.

Natural frequencies for the shell without spikes in water

Sonstegard (1969) extended Baker's modal analysis in the air to shells surrounded by water. The problem was solved using the energy method, including both membrane and bending strain terms. By analogy with Baker's analysis, there are two branches. The upper frequency spectrum comprises extensional, purely membrane modes. The lower frequency spectrum includes both membrane and bending action; these are called composite modes. It was found that the difference between the incompressible and compressible fluid was negligible. A comparison of the frequencies in air and water was shown in Fig. 3 of the above publication for the first eight frequencies. The reduction factor for the breathing mode n = 1 was approximately equal to 0.5. The corresponding reduction for the bouncing mode is only 0.7. Baker expressed his solution in dimensionless form, while Sonstegard presented an example only for a steel shell with an aspect ratio of 20, not the same as the present case of R/h = 12.5. The concept of the added mass provides a simple approximate method of calculating the effect of water on the natural frequencies. The validity of the added mass approach was discussed in Section 8. The ratio of the frequencies is (Jensen et al., 2011)

f_water/f_air = 1/√(1 + M_added/M_shell),

where M_added is the added mass and M_shell = 4πR²hρ_shell is the mass of the shell made of the protein material. The added mass in the breathing mode is M_added = 4πR³ρ_water/3. The added mass in the bouncing mode is half of it. Taking the present aspect ratio R/h = 12.5, the frequency ratio for the breathing mode becomes

f_water/f_air = 1/√(1 + Rρ_water/(3hρ_shell)) ≈ 0.5. (20)

Fig. 10. The breathing vibration mode is predicted by the numerical analysis. It sits rather strangely in the company of two higher modes, n = 17 and n = 19, dominated by bending and the through-thickness shear. Still, the agreement between the analytical and numerical solutions in terms of vibration frequencies is remarkable.
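The added-mass reduction factors for the shell follow from the ratio formula and Eq. (20). A short check, with the density values assumed here:

```python
import math

R_OVER_H = 12.5
RHO_W, RHO_S = 1000.0, 1300.0   # water and shell densities, kg/m^3 (assumed)

# M_added/M_shell = (4*pi*R^3*rho_w/3) / (4*pi*R^2*h*rho_s) = R*rho_w/(3*h*rho_s)
m_ratio_breathing = R_OVER_H * RHO_W / (3.0 * RHO_S)
m_ratio_bouncing = m_ratio_breathing / 2.0   # half the added mass

print(1.0 / math.sqrt(1.0 + m_ratio_breathing))  # ~0.49 (Eq. (20); text: ~0.5)
print(1.0 / math.sqrt(1.0 + m_ratio_bouncing))   # ~0.62 (Sonstegard quotes ~0.7)
```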
The above closed-form solution provides estimates comparable to the exact fluid/solid interaction solution by Sonstegard (1969). The magnitude of the Reynolds numbers, calculated in Section 8, proves that the added mass approach is applicable to the range of frequencies considered in the present paper.

Vibration analysis of a single spike

A full eigenvalue analysis of the viral shell with spikes reveals a complex interaction of the bending of spikes and vibration of the shell. To better interpret this interaction, the bending stiffness of a single spike and its local vibration must be determined. In the first step, the spike with a surrounding square section l × l of the shell is isolated, in a similar way as in the closed-form solution in the previous section. A horizontal force was applied to the end of the spike and the load-displacement relation was calculated using the Abaqus explicit code. Five different combinations of the elastic modulus of the spike E1 and the shell E2 were considered, defined in Fig. 11. The default value is E1 = E2 = 30 MPa.

Fig. 11. Stiffness of the spike, modeled as an inverted truncated cone fixed into the shell. Five different combinations of the elasticity modulus of spike and shell were considered. The numerical simulation confirms that the shell is an order of magnitude stiffer than the cantilever beam.

The bending stiffness of the spike increases linearly over the range E1 = 1.5-200 MPa, in perfect agreement with the closed-form solution. The numerical simulation confirms the stiffness predicted by the analytical solution. This means that the frequency of vibration of the spike given by the closed-form solution, Eq. (11), 123 MHz, should be correct. Indeed, the lowest vibration frequency in the numerical analysis of the system comes out at 111 MHz. The authors could not find in the literature information on the relative difference between the elastic properties of the bilayer lipid membrane and the protein. Therefore, a parametric study of the effect of the elastic properties of spikes and shell was also performed. The numerically predicted natural frequencies of spikes for three ratios E1/E2 in the first three modes are shown in Fig. 12. In the lowest range of frequencies (111, 121, and 140 MHz) the spikes vibrate around the stationary rigid shell. A strong interaction between the local spike and global shell vibrations is clearly seen for the intermediate range (378, 400, 512 MHz) and higher frequencies (1110, 1212, 1225 MHz). The above close-up images will be helpful in explaining the complex vibration of the spike-infested virus to be discussed in the next sub-section.

Natural vibration of the shell with spikes

In the last step of the eigenvalue analysis, the first seven vibration modes were selected from the output of the Abaqus code. Several features of the response can be distinguished from Fig. 13. (i) At the lowest frequency of 107 MHz, the spikes vibrate around a stationary shell, as evidenced by the first column. In the 3D view, six spikes swing back and forth with respect to the North and South Pole spikes (lower left). The next row of spikes vibrates out of phase with the first row. The frequency is very close to the frequency of 111 MHz predicted earlier for a single spike. (ii) There is a large spectral gap between the lowest vibration frequency and all six higher frequencies, which start at 312 MHz.
(iii) The mode shapes of the virus with spikes differ from those of the spike-less virus described in the previous sections, due to the large mass of the spikes. Still, there is a similarity of the second-lowest mode with the bouncing mode in Fig. 9. (iv) There is a complex interaction between local spike vibration and global vibration of the shell. The results of the eigenfrequency analysis of the virus with spikes prove that the spikes will swing around the neutral position at several well-defined frequencies.

Transient response at resonance

Input and output parameters. This section builds on the results of all previous sections of the paper and demonstrates the growth of the vibration amplitude under harmonic ultrasound excitation at or around the resonant frequencies. The shell-spikes system is subjected to the pressure varying in space and time according to Eq. (4). In both cases, the pressure acting on the virus shell will excite the breathing as well as the bouncing modes. The presence of spikes will disturb this nice symmetry and will generate all modes with different amplitudes. In the forcing term, given by Eq. (4), both the temporal and spatial variation of pressure depend on the imposed circular frequency ω or linear frequency f. The parameter kR is defined as kR = 2πRf/c = 2πR/λ, where c is the wave speed in the air, c = 343 m/s, and λ is the wavelength. The simulation will be done for three resonant frequencies, for a virus suspended in the air. The fourth case, kR = 0, corresponds to uniform pressure. The loading input for the numerical analysis is summarized in Table 4. The pressure is applied to the surface of the shell as well as to the top surface of the truncated-cone spikes. The pressure on the lateral surface of the spikes is self-equilibrated and does not contribute to the deformation of the system. The maximum pressure amplitude of the harmonic wave, p0, is taken to be 1.0 MPa in all calculations. The objective of this section is to provide a qualitative analysis of the complex interaction of the different vibration modes of the shell and of the coupling of the local spike vibrations with the global vibration of the shell. Zero displacements and velocities were taken as initial conditions.

Fig. 7. The spike can move radially or swing to the left and right.

Table 4. Spatial and temporal variation of the pressure on the surface of the shell.

The values of all ten input parameters for the geometry, material, and loading were given in Table 1, Section 2. The output of the numerical simulation is the vertical displacement at the base of the spike and the horizontal displacement at the top of the spike. In particular, the amplitude of the shell vibration at the North (and South) Poles is plotted. The 3D view is also shown in some cases. The default values of the input parameters are kR = 0, E1/E2 = 1.0, f = 110 MHz and p0 = p_cr. In order to reduce the number of combinations, the effects of kR and E1/E2 are discussed first.

Results of transient vibrations

Effect of water. The solution of the transient vibrations of the viral envelope with spikes in water, formulated as a fluid/solid interaction, would be too complicated to attempt. It was shown that the resonant frequencies of individual spikes (Section 11.1) and of the spike-free or "bald" shell (Section 11.2) in water are reduced by approximately 50%. The added mass approach for the modal analysis can be extended to get an approximate solution for transient problems. The magnitude of the inertia force needed to move the column of water around the shell is equivalent to increasing the mass density of the shell or decreasing the Young's modulus. To reduce the vibration frequency by a factor of two, as predicted by Eqs. (16)-(20), the mass density should be increased four times. Alternatively, the Young's modulus could be reduced by the same factor. The analysis of the effect of the Young's modulus ratio on transient vibration, shown in Fig. 14, confirms that, qualitatively, the response is similar to that of the shell in the air but shifted in time. Lowering the frequencies will bring the analysis closer to the safe range in ultrasound diagnostics, as discussed in the next sections.
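Before turning to the wavenumber effects below, the definition kR = 2πRf/c given above can be evaluated directly; the envelope radius is not restated in this excerpt, so the sketch assumes R = 50 nm, a typical SARS-CoV-2 value.

```python
import math

R = 50e-9                        # m, assumed envelope radius (not quoted here)
c_air, c_water = 343.0, 1480.0   # m/s

for f in (25e6, 50e6, 110e6, 340e6):
    kR_air = 2 * math.pi * R * f / c_air
    kR_water = 2 * math.pi * R * f / c_water
    print(f"f = {f/1e6:5.0f} MHz: kR_air = {kR_air:.3f}, kR_water = {kR_water:.4f}")
# Even at 340 MHz, kR stays well below unity, consistent with the nearly
# uniform pressure distribution assumed in the transient runs.
```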
Effect of the wavenumber. The wavenumber k, or the dimensionless parameter kR, controls the spatial distribution of the pressure. It was shown in the previous section that for small values of kR, less than 1.0, the scattered pressure is very small and can be neglected. For the viral shell, kR is always smaller than unity, both in the air and in water. Transient simulations run at kR = 0, 0.01, 0.1, and 1.0 yield almost identical results. At kR = 0 the pressure is uniformly distributed over the surface. Small imperfections are needed to facilitate the initiation of the second, bouncing vibration mode and the eventual transition to dynamic buckling. There are two types of imperfections. Large structural imperfections of the uniform-thickness shell are introduced by the concentrated masses of the spikes. The nonuniformity in the pressure loading, however small it might be, acts as an initial loading imperfection. In this respect, Eq. (3) should be used for the input forcing term, even for smaller values of kR.

Effect of the ratio E1/E2. It was shown in Sections 9 and 10 that the stiffness and natural frequency of a single spike depend on the relative values of the Young's moduli and mass densities of the spike and membrane proteins. Simulations were performed at three values of E1/E2 = 0.5, 1.0, and 2.0. The ratio controls the interaction between the spikes and the shell. The growth of the maximum displacement of the spike in time for the three values of E1/E2 and three resonant frequencies is shown in Fig. 14. The initial growth under the spike resonant frequency of 110 MHz is linear, and the growth rate, measured by the slope of the straight-line fit, is inversely proportional to the frequency, as predicted by Heyden and Ortiz (2016). At the beginning of the vibration process, the horizontal swinging of the tops of the spikes is almost an order of magnitude larger than the vertical displacement at the spike/shell interface, shown in the second row. The numerical simulation was run up to 0.05 µs, or 50 ns. For an undamped system, the amplitude will grow to infinity at a resonant frequency. Each column in Fig. 14 corresponds to a different ratio of elastic moduli. Large horizontal amplitudes are reached at any of the three resonant frequencies, an indication of a strong and complex coupling of the local spike vibration and the global shell vibration. The 3D visualization of the vibration modes is presented in Fig. 15 for one-quarter of the shell, where one can clearly see this coupling process. It can be concluded that at later times the response becomes independent of E1/E2. Therefore, all other simulations in this section were done for the default value E1 = E2 = 30 MPa. It should be noted that the effect of E1/E2 indirectly includes a possible difference in mass densities between the spike protein and the lipid bi-layer envelope. Strictly speaking, the resonant frequency depends on the ratio E/ρ rather than on the elastic modulus alone.
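The linear early-time growth noted above, with a rate inversely proportional to frequency, is the standard undamped-resonance result x(t) = (F0/2mω0) t sin(ω0 t). The sketch below evaluates the envelope slope for the three driving frequencies used later; the unit force and mass are illustrative, not paper-specific values.

```python
import math

# Envelope slope of an undamped oscillator m*x'' + k*x = F0*sin(w0*t)
# driven exactly at resonance: amplitude(t) = F0/(2*m*w0) * t.
def envelope_slope(F0, m, w0):
    return F0 / (2.0 * m * w0)

m, F0 = 1.0, 1.0                   # illustrative units
for f in (25e6, 50e6, 110e6):
    w0 = 2 * math.pi * f
    print(f"f = {f/1e6:5.0f} MHz: slope = {envelope_slope(F0, m, w0):.2e}")
# Doubling the frequency halves the growth rate, matching the trend
# attributed above to Heyden and Ortiz (2016).
```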
Effect of pressure amplitude. The parametric study is then reduced to the effects of the pressure p0 and the frequency f, because of their relation to the application of ultrasound transducers used for medical imaging. Those two parameters are strongly inter-related, which makes the analysis interesting but difficult to present in a clear way. According to O'Brien (2007), the range of the intensity generated by the present generation of transducers is 0.3-800 W/cm². This corresponds to the range of pressure p0 = 0.1-5 MPa. For resonant vibration, a low pressure amplitude will delay but not prevent reaching large amplitudes in the undamped system. The increasing vibration amplitude at the North and South Poles will eventually lead to global buckling and collapse. As a reference, we took the critical buckling pressure of a perfect thin spherical shell loaded quasi-statically (Hutchinson, 2017),

p_cr = 2E(h/R)²/√(3(1 − ν²)).

Fig. 16. (c) Deformation sequence under 0.5 MPa pressure at the extended time scale of the initial 0.005 µs. The numerical simulation stops when the dimple amplitude causes internal contacts, giving rise to large tensile strain at the high-curvature crease.

For the present geometry and elastic properties (refer to Table 3), the critical buckling pressure is p0 = 0.25 MPa. At that pressure, the uniformly compressed shell buckles into the dimple-like shape. Under quasi-static loading, the shell should not buckle at 0.1 MPa. But it does under harmonic excitation, in accord with the Lyapunov concept of dynamic buckling, involving the growth of the amplitude in the presence of imperfections. The numerical simulation was run at four values of the pressure, p0 = 0.01, 0.1, 0.5, and 1.0 MPa. The growth of the North Pole amplitude of the shell is shown in Fig. 16a. Under the lowest value of the pressure, 0.01 MPa, the amplitude is constant in the considered time interval t0 = 0.3 µs. The response changes at the pressure amplitude of 0.1 MPa, giving rise to the growth of the amplitude of the bouncing mode, which culminates in buckling and total collapse at t = 0.31 µs, Fig. 16a. At the higher pressure of 0.5 MPa, the shell buckles almost instantaneously, shown as a blue curve in Fig. 16b. Snapshots of the deforming shell at 3.5 ns, 4.5 ns, and 5.0 ns are shown in Fig. 16c for the quarter model. The period of vibration at the frequency of 110 MHz is T = 9 ns. It is interesting to compare the response with the time variation of the forcing term, depicted in Fig. 16b by a solid black line. The collapse thus occurs during the first half-cycle of vibration. The shell collapses and crumples until internal contact is established to reach the next equilibrium state (Hutchinson and Thompson, 2017). To further reinforce the present conclusion, the cross-sectional view of the damaged shell is shown in Fig. 17. The color-coded profile of the maximum principal strain (on the left) has an average of 0.75, with one point exhibiting a strain of 1.14. The spikes were removed for clarity. In the 3D view and the cross-sectional view (right figure), one can see large deflections of the spikes, some of them touching the deformed shell. The above scenario might be modified because the viral shell is not empty. The average mass density of the interior is, however, much smaller than that of the lipid bi-layer shell. The internal pressure caused by the "implosion" of the viral shell may in fact help to eject the deadly RNA into the surroundings.
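Two of the quoted numbers can be cross-checked: the classical buckling pressure and the plane-wave intensity-to-pressure conversion. The water impedance ρc ≈ 1.5e6 kg/(m²·s) used below is a textbook value, not taken from the paper.

```python
import math

# Classical quasi-static buckling pressure of a perfect spherical shell.
E, nu, h_over_R = 30e6, 0.45, 1.0 / 12.5
p_cr = 2 * E * h_over_R**2 / math.sqrt(3 * (1 - nu**2))
print(f"p_cr = {p_cr / 1e6:.2f} MPa")          # ~0.25 MPa, as quoted

# Plane-wave relation I = p0^2 / (2*rho*c) in water.
rho_c = 1.5e6                                   # kg/(m^2 s), water impedance
for I_wcm2 in (0.3, 800.0):                     # W/cm^2
    p0 = math.sqrt(2 * rho_c * I_wcm2 * 1e4)    # 1 W/cm^2 = 1e4 W/m^2
    print(f"I = {I_wcm2:6.1f} W/cm^2 -> p0 = {p0 / 1e6:.2f} MPa")
# Recovers the quoted 0.1-5 MPa pressure range of imaging transducers.
```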
Effect of harmonic frequency. The modal analysis in Section 9 has identified three reference vibration modes. A single spike vibrates with respect to the stationary shell at 110 MHz. The resonant vibration of spikes in the entire shell is at 107 MHz, refer to Fig. 13. The resonant frequency of the spike-free shell in the bouncing mode is 340 MHz; in the shell with spikes, it reduces to 312 MHz. Transient simulations were performed at these three distinct frequencies, in the time interval 0-0.05 µs, and the results are shown in Fig. 14. Any of these frequencies induces all vibration modes, consistent with what we know about coupling effects in two-degree-of-freedom and multi-degree-of-freedom systems. The amplitude is initially increasing linearly and the rate of growth is inversely proportional to the frequency, as predicted by Heyden and Ortiz (2016). The maximum deflections of 15 nm are reached very early, within 0.05 µs. The base of the spikes moves vertically 1 nm around the neutral position in the initial 0.05 µs time interval but eventually grows much bigger if the simulation is carried out three times longer. Abaqus simulations always stopped automatically when "excessive" distortion of an element was detected. To bring the present analysis closer to practical applications, simulations were run at f = 110 MHz, f1 = 50 MHz and f2 = 25 MHz. The comparison of the spike amplitude versus time at these three frequencies is shown in Fig. 18. In all three cases, the amplitude reaches high and comparable values. The most interesting result is shown on the right of Fig. 18. Here, the simulation was run until Abaqus terminated due to excessive deformation at the collapse of one or several finite elements. This happens at 0.35 µs for 110 MHz, at 0.13 µs for 50 MHz, and at 0.1 µs for 25 MHz. Such an ordering is counter-intuitive. The present results demonstrate that large, potentially damaging vibration of spikes, as well as the collapse of the viral shell, are present at frequencies and powers routinely used in medical imaging diagnostics.

Implication for the life cycle of the coronavirus

Ultrasound is known to cause shattering of a wineglass at about 620 Hz (Skeldon et al., 1998; Prikhodko et al., 2011), breaking of kidney stones, and opening of the lipid shell of liposomes for drug delivery. Can the application of the same technique disrupt the reproductive cycle of SARS-CoV-2? At this stage of research, it is still premature to say definitely yes.

Fig. 17. The 3D view and the maximum principal strains in the collapsed viral shell. Spikes were removed from the color-coded plot for clarity.

Damage of spikes or viral shell? There are two competing mechanisms in the transient vibration of the lipid membrane with a crown: large-amplitude flexural (or axial) vibrations of individual spikes, and buckling and collapse of the entire shell. In either case, large tensile strains are developed, potentially resulting in the formation of cracks and rupture. The release of RNA from the viral envelope could then disrupt the life cycle of the virus. The results of the parametric study, summarized in Figs. 14 and 18, proved that strains are localized in the high-curvature zones of the local buckles and folds, reaching 10%. At the same time, the spikes vibrate at an ever-increasing amplitude. At the maximum amplitude of 15 nm, the tensile strain at the base of the spike can get as high as 1.66. Both mechanisms could be equally damaging to the virus.
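For a tip deflection δ, the standard tip-loaded cantilever estimate of the maximum base strain is ε ≈ 3δc/L², with c the section half-depth. The spike dimensions of Table 2 are not reproduced in this excerpt, so the values below are hypothetical placeholders used only to show that the quoted strain of 1.66 is of the expected order.

```python
# Order-of-magnitude check of the bending strain at the base of a spike.
delta = 15e-9   # m, maximum tip deflection quoted above
L = 10e-9       # m, HYPOTHETICAL spike length (Table 2 not reproduced here)
c = 4e-9        # m, HYPOTHETICAL section half-depth at the base

eps = 3 * delta * c / L**2      # prismatic tip-loaded cantilever estimate
print(f"base strain ~ {eps:.2f}")   # O(1), same order as the quoted 1.66
```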
The similarity of the quasi-static and dynamic dimple formation, discussed extensively by Hutchinson (2016) and Hutchinson and Thompson (2017), was demonstrated. These results bring us to the most important conclusion of the paper: large, potentially damaging deformation of the viral shell with spikes can be achieved at the frequencies and powers routinely used in ultrasound imaging and diagnostics. Our findings effectively eliminate safety concerns of applying ultrasound vibration at frequencies of 1-50 MHz to or around humans. Additionally, it was shown that large vibration amplitudes and strains are generated under the relatively small pressure amplitude p0 = 0.1 MPa. The authors were unable to find consistent information in the literature on the critical fracture properties of the glycoprotein spike material, treated as a continuum. Several authors provided estimates of the indentation depth of the envelope at failure under AFM tests, but not the fracture strain (Klug et al., 2006). Luque (2011) estimated the fracture strain of the capsid viral shell to be 0.05. The spike S-protein must likewise have its own limit on elongation. In any case, the strains in the viral shell and spikes may far exceed these values. In the absence of any fracture theory of the lipid and protein material and the relevant experimental data, a quantitative fracture analysis was not carried further in the present study. Still, the possibility of disrupting the reproduction cycle of coronaviruses is clearly laid out. Another important piece of information is that the amplitude of resonant vibration reaches large values within only 0.3 microseconds under a broad range of frequencies. It will thus be possible to program the pulse of harmonic waves generated by the transducers to sweep over a broader range of frequencies in a fraction of a second. This will remove one of the shortcomings of the present state of the art, where the geometry and material properties are not precisely known.

Conclusions and outlook

An attempt is made in this paper to formulate the response of the virus to ultrasound excitation as a problem in mechanics and dynamics. It is the small size of the viral envelope, and the even smaller dimensions of a spike, that make the formulation and solution difficult but at the same time interesting. The general procedure applies to the virus both suspended in the air and floating in body fluids, and the effect of water was shown to substantially lower the frequency spectrum. Spikes with their large masses act as randomly distributed seeds for the process of buckling and total collapse of the viral shell. It was shown that the pressure distribution on the viral shell depends on a single parameter kR and is dominated by the first two terms in the Taylor-Legendre series expansions. Higher components are also present, especially for large frequencies. A great deal of effort was devoted to the determination of natural vibration modes and frequencies, using both analytical and numerical methods. The availability of closed-form solutions is important due to uncertainties in the geometry and mechanical properties of the viral material. In this way, the effect of elastic parameters, mass densities, spike geometry, etc. can be readily assessed.

Fig. 18. Evolution of the lateral deflections of spikes for three harmonic frequencies. Initial growth is shown on the left; simulation at a longer time, until buckling, on the right. The deflections are plotted for the spike marked with a blue triangle.

The eigenvalue analysis
confirmed the existence of three distinct resonant frequencies: 111 MHz for the individual spike, 340 MHz for the bouncing mode, and 910 MHz for the breathing mode of the spike-free shell. The fully nonlinear simulation of the forced vibration revealed the true character of the viral response to harmonic waves. Even though the system consists of some hundred degrees of freedom (98 spikes), a general pattern emerges. First, harmonic excitation of the system at any of the three distinct natural frequencies induced coupled vibrations of both the shell and the spikes. Secondly, large amplitudes are reached within a fraction of a microsecond as initial imperfections grow in time. The amplitude of the bouncing mode increases, leading to the buckling and collapse of the viral shell. The analysis of the color-coded strain fields proved the development of large tensile strains at low frequencies, spanning the safe range of 25-50 MHz. This conclusion is valid for the set of input parameters defining vibration of the air-suspended viral shell. According to the analysis in Sections 10 and 11, natural frequencies of the vibration of the viral shell in water (body fluids) are 50% smaller. Before moving into preventive or therapeutic applications, a validation study must be performed. The present research has identified two ways of reaching this goal. One is to provide more precise information for the input of the finite element simulation. Research should focus on the effect of damping on the amplitude of resonant vibration, in order to calculate more precisely the needed power of the ultrasound pulse and the pressure amplitude. The analysis of the vibration of the viscoelastic structure will be straightforward but has not been performed yet. A critical step will be to develop a fracture theory of the lipid bi-layer shell and the spike material. It will then be relatively easy to predict the failure sequence of spikes. A possible sequence of events is that a crack would first initiate at the bottom of the spike and propagate until the spike is torn off. Alternatively, the shell itself could rupture first. The general-purpose FE codes have reached a high degree of accuracy and in many fields of engineering and design are replacing physical testing. The task of providing improved mechanical properties of the viral envelope with spikes rests on the shoulders of experimental microbiology. That also includes measuring the more exact geometry of the spike-membrane interface. Our team decided to take a more direct approach to validate the present prediction. It is now feasible to conduct in vitro experiments with the frequencies (range: 1-50 MHz), ultrasonic pressures (p0 range: 0.1-5 MPa; corresponding intensity range: 0.3-800 W/cm²) and pulse durations (typically 1 µs, which is greater than the shell collapse times in Fig. 18) generated by laboratory-based ultrasound equipment available commercially (O'Brien, 2020). We have teamed up with the experimental lab of Professor Pedro de Pablo of the Universidad Autónoma de Madrid (De Pablo, 2020). In vitro tests on the TGEV pig virus (non-infectious to humans) are being planned to determine under what combination of ultrasound parameters and exposure times damage to the virus will be detected under the atomic force microscope.
Considering the enormous efforts of the scientific community around the globe to develop new weapons for the fight against the present pandemic, our results should be looked at from a proper perspective. The acquired immunity provided by the vaccines recently developed by Pfizer and Moderna would be an ideal solution to fight SARS-CoV-2. But it would be just temporary, because the emergence of new mutations or strains would require the development of new vaccines, as occurs seasonally with the influenza virus, with a time investment of about one year. In this paper, we presented a new concept of using ultrasound and mechanical resonance to target SARS-CoV-2 and other enveloped viruses that does not have this time limitation. Currently, we have only outlined the promising first step of this ambitious project, which will require more profound interdisciplinary research.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Efficient Bayesian inference of instantaneous reproduction numbers at fine spatial scales, with an application to mapping and nowcasting the Covid‐19 epidemic in British local authorities

INTRODUCTION

The spatio-temporal pattern of Covid-19 infections, as for most infectious disease epidemics, is highly heterogeneous as a consequence of local variations in risk factors and exposures. Consequently, the widely quoted national-level estimates of reproduction numbers are of limited value in guiding local interventions and monitoring their effectiveness. It is crucial for national and local policy-makers, and for health protection teams, that accurate, well-calibrated and timely predictions of Covid-19 incidences and transmission rates are available at fine spatial scales. Obtaining such estimates is challenging, not least due to the prevalence of asymptomatic Covid-19 transmissions, as well as difficulties of obtaining high-resolution and high-frequency data. In addition, low case counts at a local level further confound the inference of Covid-19 transmission rates, adding unwelcome uncertainty. In this paper we develop a hierarchical Bayesian method for inference of transmission rates at fine spatial scales. Our model incorporates both temporal and spatial dependencies of local transmission rates in order to share statistical strength and reduce uncertainty. It also incorporates information about population flows to model potential transmissions across local areas. A simple approach to posterior simulation quickly becomes computationally infeasible, which is problematic if the system is required to provide timely predictions. We describe how to make posterior simulation for the model efficient, so that we are able to provide daily updates on epidemic developments. The results can be found at our web site https://localcovid.info, which is updated daily to display estimated instantaneous reproduction numbers and predicted case counts for the coming weeks, across local authorities in Great Britain. The codebase updating the web site can be found at https://github.com/oxcsml/Rmap.
We hope that our methodology and web site will be of interest to researchers, policy-makers and the public alike, to help identify upcoming local outbreaks and to aid in the containment of Covid-19 through both public health measures and personal decisions taken by the general public.

DATA

Our model is applied to publicly available daily counts of positive test results reported under the combined Pillars 1 (NHS and PHE) and 2 (commercial partners) of the UK's Covid-19 testing strategy.¹ The data are available for 312 lower-tier local authorities (LTLAs) in England, 14 NHS Health Boards in Scotland (each covering multiple local authorities) and 22 unitary local authorities in Wales, for a total of n = 348 local areas. The data are daily counts of lab-confirmed (PCR swab) cases presented by specimen date, starting from 30 January 2020. The original data are from the respective national public health authorities of England², Scotland³ and Wales⁴, and we access them through the DELVE Global Covid-19 Dataset⁵ (Bhoopchand et al., 2020). Due to delays in processing tests, we ignore the last 7 days of case counts.

1 https://www.gov.uk/government/publications/coronavirus-covid-19-scaling-up-testing-programmes
2 https://coronavirus.data.gov.uk
3 https://publichealthscotland.scot/our-areas-of-work/sharing-our-data-and-intelligence/coronavirus-covid-19-data-and-guidance/
4 https://phw.nhs.wales/topics/latest-information-on-novel-coronavirus-covid-19/
5 https://github.com/rs-delve/covid19_datasets

METHOD

Our method is based on an approach to infectious disease modelling using discrete renewal processes. These have a long history, and have served as the basis for a number of recent studies estimating instantaneous reproduction numbers (Cori et al., 2013; Flaxman et al., 2020; Fraser, 2007; Wallinga & Teunis, 2004). See Bhatt et al. (2020) and references therein for historical and mathematical background, as well as Gostic et al. (2020) for important practical considerations. Following Flaxman et al. (2020), we model latent time series of incidence rates via renewal processes, and separate observations of reported cases using negative binomial distributions, to account for uncertainties in case reporting, outliers in case counts, and delays between infection and testing. We introduce a number of extensions and differences addressing issues that arise for applications to modelling epidemics at local authority level rather than regional or national levels. Firstly, we introduce dependencies between reproduction numbers across neighbouring localities, in order to smooth estimates of reproduction numbers and share statistical strength across localities and time. We do this using a spatio-temporal Gaussian process (GP) prior for the log-transformed reproduction numbers. Secondly, we model transmissions across localities using a spatial meta-population model. Our meta-population model incorporates commuter flow data from the UK 2011 Census in order to capture stable patterns of heterogeneous cross-infection rates among local authorities, linked to typical commuter patterns. Human mobility patterns may also reflect the introduction of non-pharmaceutical interventions (NPIs), though our model does not explicitly use real-time mobility data so cannot estimate the direct or indirect effects of NPIs. The model is implemented in the Stan probabilistic programming language (Carpenter et al., 2017), which uses the No-U-Turn Sampler (NUTS) (Hoffman & Gelman, 2014) for posterior simulation.
A number of modelling design choices as well as inference approximations are made to improve mixing and computational efficiency. These are described in Appendix B.

Model overview

In this section we give an overview of our model, which we refer to as EpiMap. The model consists of three layers: a latent Gaussian process over the log reproduction numbers, a meta-population model for the epidemics across local areas, and an observation model relating the size of the epidemic to the observed number of positive tests on each day and in each area. We first introduce some notation. We are interested in estimating the instantaneous reproduction numbers, R_i,t, across local areas in the United Kingdom (indexed by i) and across time (indexed by t). For each local area i and day t, the observed daily Pillars 1 + 2 case counts are denoted C_i,t. Let the unobserved daily infection (incidence) counts be X_i,t. Starting with the observation model, we model the number of reported cases using a delay distribution and an over-dispersed negative binomial observation model:

C_i,t ∼ NegBin(V_day_of_week(t) E_i,t, ψ_i), with E_i,t = Σ_s D_s X_i,t−s, (1)

where D_s is the probability that an infected person gets tested and tests positive s days after infection, and E_i,t is the expected number of positive test cases on day t in area i. NegBin(μ, ψ) is the negative binomial distribution with mean μ and dispersion parameter ψ, while V_day_of_week(t) models day-of-week variations in reported cases. Section 3.1.2 gives more details. Assuming a homogeneously mixing population in each area, with interactions across areas modelled using a cross-coupled meta-population model, we model the number of new infections in each area as follows. Conditional on the history of infections, let

Z_i,t = Σ_s W_s X_i,t−s (2)

be the infection load on day t caused by previous infections in area i, if each primary case produces one secondary case. W_s describes the generation distribution, and is the probability that a secondary infection occurs s days after the primary infection. See Section 3.1.2 for more details on how we parameterise W_s. These secondary infections can occur in area i, or in another area, for example due to individuals working in an area different from where they live. We model this with a time-dependent flux matrix F(t)_ji, which is interpreted as the probability that a primary case living in area j infects a secondary case living in area i on day t. The resulting cross-coupled infection load in area i is:

Z̃_i,t = Σ_j F(t)_ji Z_j,t. (3)

We describe the meta-population model in further detail in Section 3.1.3, including how the flux matrices are parameterised. We model the number of new infections on day t as

X_i,t ∼ NegBin(R_i,t Z̃_i,t, ψ), (4)

where R_i,t Z̃_i,t is the force of infection in area i on day t, and ψ is a dispersion parameter which allows for over-dispersion. We expect this to be a better model for Covid-19 than using a Poisson distribution in (4), due to super-spreading events. Note that if we had used a Poisson distribution then the secondary infections resulting from a primary infection would have been modelled as conditionally iid given the primary infection. The use of a negative binomial distribution instead introduces a positive correlation among the secondary infections. In order to make the posterior simulation computationally efficient using Stan, we approximated this with a positivised Gaussian distribution; see Appendix B.2.
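To make the generative structure concrete, here is a minimal single-area simulation of Equations (1)-(4) with no cross-area flux (F = I). The stepped R trajectory and the exponential delay and generation distributions are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def negbin(mean, psi):
    """NegBin with mean `mean` and variance mean*(1 + psi), as in the paper."""
    mean = max(float(mean), 1e-8)
    c = mean / psi                       # inverse-dispersion under c = mu/psi
    return rng.negative_binomial(c, c / (c + mean))

T = 120
W = np.exp(-0.2 * np.arange(1, 15)); W /= W.sum()    # toy generation distribution
D = np.exp(-0.15 * np.arange(2, 20)); D /= D.sum()   # toy infection-to-test delay
R = np.where(np.arange(T) < 60, 1.3, 0.8)            # toy stepped R_t
V = np.ones(7)                                       # flat day-of-week effects

X = np.zeros(T); X[:20] = 10.0                       # seed infections
C = np.zeros(T, dtype=int)
for t in range(20, T):
    Z = sum(W[s - 1] * X[t - s] for s in range(1, 15))    # Eq. (2)
    X[t] = negbin(R[t] * Z, psi=0.5)                      # Eq. (4), with F = I
    E = sum(D[s - 2] * X[t - s] for s in range(2, 20))    # expected positives
    C[t] = negbin(V[t % 7] * E, psi=1.0)                  # Eq. (1)
print(C[-14:])
```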
Latent GP

With low case counts, inferring R_i,t over small local areas can lead to high uncertainty. A standard Bayesian hierarchical modelling approach is to borrow strength across different local areas and across different time points. We use GPs to do so; namely, for area i and time t we model:

log R_i,t = S_i,t + U_i,t, (5)

where S_:,: is a GP with a separable Matern(1/2) kernel:

Cov(S_i,s, S_j,t) = σ²_spatial exp(−|y_i − y_j|/ℓ_spatial) exp(−|s − t|/ℓ_time), (6)

and U_i,: are independent copies of a GP with Matern(1/2) kernels:

Cov(U_i,s, U_i,t) = σ²_local exp(−|s − t|/ℓ_time). (7)

Here, y_i and y_j are the geographical centres of areas i and j, respectively, s and t are daily indices for each Monday, and we assume that the instantaneous reproduction numbers are constant within each week (taken to be Monday to Sunday). Note that our prior covariances in Equations (6) and (7) enjoy a Kronecker structure across the space and time dimensions, which allows for efficient computations (see Section B.1). In the temporal case, which is one-dimensional, the GP prior with the Matern(1/2) kernel is equivalent to an AR(1) process with zero mean. We also considered Matern(3/2), Matern(5/2) and squared-exponential covariance kernels, which produced similar inferences. The hyperparameters of the spatio-temporal GP are: scale parameters σ_spatial and σ_local, and length-scale parameters ℓ_spatial and ℓ_time. We place independent truncated normal priors N⁺(0, 0.5) over the scale parameters. For the length-scale parameters, we found that if we inferred these along with the rest of the random variables in the model, the posterior distribution places mass on large spatial length scales and short temporal length scales. This has an undesirable over-generalisation effect, and we believe this behaviour is due to model misspecification with respect to the length-scale parameters. Instead, we selected these using an initial cross-validation run optimising for performance of forecasted case counts three weeks into the future, and selected ℓ_spatial = 10 km and ℓ_time = 200 days.

Observation and infection model

Weekly variations are modelled using multiplicative factors in (1), with a uniform prior over positive vectors of length 7 that sum to 7. Following Flaxman et al. (2020) we use an over-dispersed negative binomial observation model (1), with a broad half-normal prior for the dispersion parameters, ψ_i ∼ N⁺(0, 5) iid. The neg_binomial_2 parameterisation in Stan uses a mean parameter μ, an inverse-dispersion parameter c, and variance μ + μ²/c. We use a different parameterisation, and set c = μ/ψ, where ψ is a dispersion parameter. This gives a variance of μ(1 + ψ) and probability mass function:

P(Y = y) = Γ(y + μ/ψ) / (y! Γ(μ/ψ)) (1/(1 + ψ))^(μ/ψ) (ψ/(1 + ψ))^y.

This parameterisation naturally emphasises the infinite divisibility of the negative binomial; that is, if Y_1, …, Y_m are independent negative binomial random variables with means μ_1, …, μ_m and the same dispersion parameter ψ, then Σ_{i=1}^m Y_i is also negative binomially distributed with mean Σ_{i=1}^m μ_i and dispersion ψ, a sensible choice in cases where we believe counts are sums of independent random events. The infection-to-test delay distribution D_s is a convolution of two delay distributions: an incubation period distribution, and a symptom-onset-to-test distribution. Following Bi et al. (2020), we use a LogNormal(μ, σ²) distribution for the incubation period, where μ has a 95% confidence interval (CI) of (1.44, 1.69) and mode 1.57, and σ has a 95% CI of (0.56, 0.75) with mode 0.65. This results in a median of 4.8 days and a 90% confidence interval of (1.64, 14.04) days for the incubation period, and we assume an additional 2-day delay to get tested. Similarly, we parameterise the generation distribution W_s as a Gamma distribution whose shape parameter has mode 2.29 with (1.77, 3.34) 95% CI, and whose rate parameter has mode 0.36 with (0.26, 0.57) 95% CI.
This corresponds to the serial interval parameter distributions from Bi et al. (2020); we note that the serial interval is often used as an accessible proxy for the unobserved generation distribution (Cori et al., 2013). For both D_s and W_s, we aggregate predictions and inferences from 10 bootstrapped runs of our model, each with independently sampled LogNormal and Gamma parameters respectively. This is equivalent to a nested Monte Carlo approximation to a cut or modular model (Carmona & Nicholls, 2020; Jacob et al., 2017; Plummer, 2015). We found this to be crucial to avoiding overconfident predictions for R_t estimates.

Meta-population model

Our final extension relaxes the assumption, common in many infectious disease models, that the epidemic is evolving in a homogeneously mixing population in an area, with no significant transmissions from other areas. While this might be sensible for large regions or countries, it is not a sensible assumption for modelling multiple small areas with likely a significant number of cross-area transmissions. To address these transmissions, we describe a simple cross-coupled meta-population extension, given by Equations (3) and (4). In the following we describe how to parameterise the flux F_ji, which describes the chance that, if a primary case living in area j infects a secondary case, the secondary case will live in area i. One sensible choice, if the data were available, would be to use real-time data on the actual volume of travel between each pair of areas. Such data are unfortunately not publicly available, and in any case the relationship between the volume of travel and the number of transmissions is not straightforward due to heterogeneity in the population. We use commuting flow data from the 2011 Census to parameterise a weekly varying flux matrix. First, the data give, after some preprocessing, a matrix M such that for each pair of areas i and j the number of individuals who live in area j and commute to work in area i is M_ji. Let P_j be the population of area j. We take M_jj to be the population who commute within their own area or who do not commute, so Σ_i M_ji = P_j. We consider three types of transmissions: an individual living in area j infecting another individual in area j (e.g. household transmissions), an individual living in area j and working in area i infecting one living in area i, and an individual living in area i being infected while working in area j. These three types of transmissions can be described using three flux matrices:

F_id,ji = δ_ji, F_fwd,ji = M_ji/P_j, F_rev,ji = M_ij / Σ_k M_kj,

where δ_ji = 1 if j = i and 0 otherwise. Then, we parameterise the overall flux matrix during week t using a convex combination of F_id, F_fwd and F_rev,

F(t) = α_t F_id + (1 − α_t)(ρ F_fwd + (1 − ρ) F_rev),

with α_t ∈ (0, 1) governing the amount of mixing across areas in week t (roughly the proportion of the population working from home), and ρ ∈ (0, 1) governing the amount of home-to-work versus work-to-home transmissions. We use a uniform prior over ρ and a weekly AR(1) prior for the log-odds of α_t, specifically α_t = 1/(1 + exp(−μ_α + A_t)), where A_t is a zero-mean AR(1) process with a weakly informative prior on its time scale centred around 4 weeks.
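A compact sketch of the flux construction, using a random matrix in place of the Census commuting data; the normalisation of F_rev follows the reconstruction above and should be read as one consistent choice rather than the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.integers(0, 100, size=(n, n)).astype(float)  # stand-in commuting matrix
np.fill_diagonal(M, 1000.0)      # M_jj: non-commuters plus within-area commuters
P = M.sum(axis=1)                # populations, since sum_i M_ji = P_j

F_id = np.eye(n)
F_fwd = M / P[:, None]           # F_fwd[j, i] = M_ji / P_j
F_rev = (M / M.sum(axis=0)).T    # F_rev[j, i] = M_ij / sum_k M_kj

alpha_t, rho = 0.45, 0.5         # values used in the simulation study below
F_t = alpha_t * F_id + (1 - alpha_t) * (rho * F_fwd + (1 - rho) * F_rev)
assert np.allclose(F_t.sum(axis=1), 1.0)   # each row is a probability vector
```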
EMPIRICAL EVALUATIONS

In this section, we report some empirical evaluations of our model, EpiMap. We compared two variants of EpiMap: one which models each local area separately from the rest (hence with no meta-population model nor spatial GP component), and the other the full model. For the full model we have found that the inferences are sensitive to the length scale of the spatial GP, and so we compared the full model with varying spatial length scales and with no spatial GP component. To account for uncertainty in the serial interval and incubation period distributions, we ran EpiEstim with 10 instantiations of these distributions, with parameters drawn iid from the posterior distributions reported in Bi et al. (2020), and averaged the posterior predictive distributions over these. This procedure can be interpreted as nested Monte Carlo for a cut distribution, where we specified the prior for these parameters but disallow the model from updating the prior to a posterior (Plummer, 2015). We also compared against EpiEstim (Cori et al., 2013) and EpiNow2. We compared these methods on simulated data and on predicting future case counts in British local authorities. We also report estimates of R_t at regional and national levels.

Simulation data

One sanity check of our method is to fit the models to simulated data for which we know the underlying R_t, and check how well our models can recover it. In this section we do just this, and compare the results with a number of other common methods. The simulation model we use is exactly the generative model we described. We use the median distribution parameters given by Bi et al. (2020) for the serial interval and incubation period. We assume the delay distribution is the incubation period distribution plus a fixed reporting delay of 2 days. The data are simulated by taking initial real case data from Oxford and the four surrounding LTLAs up to 14 March 2020, and from that point simulating new cases using the model. The main unspecified parameter is the R_t in each region over time. An R_t curve was manually designed in order to give a double-peak epidemic similar in nature to the pattern seen across the United Kingdom, with case numbers in the regions roughly similar. The same R_t curve was shared across the LTLAs. Additionally, we use 50:50 flux proportions of the forward and reverse commuter flow data, with a constant α_t of 0.45. These choices of parameters are somewhat arbitrary and were chosen to give qualitatively sensible epidemic curves. To these simulated data we fit the two variations of our model, with the full model using a temporal length scale of 200 days and a range of spatial length scales between 1 and 100 km. The results can be seen in Figure 1. Plots showing the full sweep of spatial length scales for EpiMap can be found in Section C.1.

Predicting future case counts

Next, we evaluate the methods' predictions of future case counts by comparing them to true case counts. In addition to measuring predictive performance, we also assess the models' uncertainty calibration by comparing the coverage probability of their prediction intervals with the actual, achieved (empirical) coverage. We first picked four well-separated dates: 12 October 2020, 23 November 2020, 21 December 2020 and 18 January 2021. For each date, we used the 15 preceding weeks of data for inference and evaluated predictions of case counts for the subsequent 3 weeks. These assessment periods were chosen to cover a range of situations, from relatively stable transmission rates (during the lockdown in January) to drastic changes in transmission rates due to NPIs (during the December period). Note that since the methods do not model drastic changes arising from changing NPIs, we expect them to perform poorly during such periods.
In addition to the variants of EpiMap, EpiEstim and EpiNow2, we also included two simple baselines: 'zero', which predicts zero cases for all dates and LTLAs, and 'last case count', which predicts using the case count on the last day of the 15-week inference period for each LTLA. Figure 2 shows log(RMSE + 1) between predicted and true case counts. More precisely, the RMSE is separately computed for each LTLA's predictions over the test period, then we average the resulting log(RMSE + 1) across LTLAs. The log transformation is used so that results are not dominated by areas with much higher case counts. EpiMap variants usually perform best or competitively at predicting the true case counts. The positive impact of modelling cross-area dependencies is observed, since EpiMap (single area) tends to slightly underperform the other variants of EpiMap. Moreover, the predictive performance of EpiMap is dependent on, though not very sensitive to, the choice of ℓ_spatial. Note that for the start date 21 December 2020, all models perform worse relative to other dates. This is because of significant changes in the dynamics of Covid-19 spread due to changing NPIs over the Christmas period, information that is not incorporated into any of these models. Figure 3 assesses the quality of the uncertainty estimates produced by the models using reliability curves. Each model outputs percentiles of the posterior predictive distribution of case counts. Let ĉ_p be the pth percentile produced by a model for a given date and LTLA. Ideally, we expect that the percentage of dates and LTLAs for which the true case count c is less than or equal to ĉ_p is approximately p. In other words, the actual, empirical coverage of the pth percentile (y-axis of Figure 3) will ideally be equal to the target coverage p (x-axis of Figure 3), yielding a reliability curve close to y = x. We observe that EpiMap's uncertainty estimates generally capture the underlying case count distribution well, though with some variation across start dates and model configurations. EpiNow2 usually performs similarly to the well-performing configurations of EpiMap. EpiEstim's uncertainty estimates are overconfident, as indicated by the flatter-shaped curves. For the first three start dates, EpiMap (single area) and models with small ℓ_spatial yield better uncertainty estimates. For 21 December 2020, the concave shape of the reliability curves indicates that models are overestimating case counts, which is consistent with the fact that stricter NPIs curbed case counts while the models predicted case counts would increase assuming no changes in spread dynamics. For 18 January 2021, larger ℓ_spatial perform best, likely because the prevailing national lockdown in that period meant that spread dynamics were more uniform across areas. Additional results are in Appendix C.2, including loss and reliability curves stratified by week during the 3-week prediction period and individual LTLA losses.
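Both evaluation summaries are straightforward to compute from posterior predictive draws; the sketch below uses synthetic arrays, and the shapes and names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ltla, n_days, n_samples = 348, 21, 500
true = rng.poisson(30, size=(n_ltla, n_days))
pred = rng.poisson(30, size=(n_samples, n_ltla, n_days))  # predictive draws

# Loss: per-LTLA RMSE over the test period, then the mean of log(RMSE + 1).
rmse = np.sqrt(((pred.mean(axis=0) - true) ** 2).mean(axis=1))
print("mean log(RMSE + 1):", np.log(rmse + 1).mean())

# Reliability curve: empirical coverage of each predictive percentile.
ps = np.arange(5, 100, 5)
chat = np.percentile(pred, ps, axis=0)          # (len(ps), n_ltla, n_days)
coverage = (true[None] <= chat).mean(axis=(1, 2))
for p, cov in zip(ps, coverage):
    print(f"target {p:2d}% -> empirical {100 * cov:5.1f}%")
```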
Regional estimates

While our model operates at the level of local authorities, we can estimate R_t's at coarser spatial scales by aggregating inferences across multiple local areas. Specifically, given a region r consisting of a set of areas and a time period w, we estimate

R_r,w = (Σ_{i∈r} Σ_{t∈w} R_i,t Z̃_i,t) / (Σ_{i∈r} Σ_{t∈w} Z̃_i,t).

This definition is consistent with R_i,t when r = {i} and w = {t}, and interprets R_r,w as a summary statistic of the average number of secondary infections per primary infection over the region and time period. Figure 4 shows the posterior distributions of R_r,w for the London NHS region, England, Scotland and Wales, and for each week in the December 2020 to March 2021 period, produced by the full EpiMap model with a spatial length scale of 20 km, using data available on 15 March 2021. Corresponding plots for other English NHS regions can be found in Appendix C.3. Figure 4 shows sensible credible intervals both during the modelled 15-week time period and the subsequent 3-week forecasts. In this example, we see that our model projects an increasingly uncertain size of the epidemic in Scotland in the near future, with a non-negligible probability of R_t being above 1 in Scotland and Wales on 15 March 2021, whereas other regions are projected to have stable or shrinking epidemics.

DISCUSSION

We have proposed a hierarchical Bayesian approach to model epidemics at fine spatial scales, which incorporates movement of populations across local areas as well as spatio-temporal borrowing of strength. Empirical results suggest that our model can be a useful tool for policy-makers to locate future epidemic hotspots early, in order to direct resources such as surge testing as well as targeted local transmission reduction measures. As with other methods that infer the extent of epidemics through identified cases alone, the main limitations of this work are due to the provenance of the Pillars 1 + 2 case data. Firstly, there can be substantial selection bias in the population who get tested, leading to discrepancies between reported cases and the true size of the epidemic. In addition, the amount of testing may change over time, for example due to localised testing or limited supplies of testing kits, potentially leading to spurious temporal patterns (Omori et al., 2020). Finally, case data are only reported for the combined Pillars 1 and 2 of the UK's testing regime. These correspond to different sectors of society at different points of an infection, with different delay distributions between infection and getting tested. Moreover, the proportion of tests under each pillar has been changing systematically since Pillar 2 testing began. Our model is the result of a number of modelling choices, and can be improved in a number of ways. Firstly, our aim is to track local reproduction numbers and provide nowcasting of epidemic development in local areas, rather than understanding how NPIs affect transmission rates. This led to our choice of a nonparametric GP prior for the reproduction numbers, rather than a generalised linear model relating transmission rates to NPIs. It is possible to extend our model to model the effect of NPIs as in Flaxman et al. (2020). It also led to our choice not to explicitly model the susceptible population, since it impacts the model only via lowered transmission rates. Secondly, our model uses only Pillars 1 + 2 case data, which as noted above have biases that are not well understood. This affects our confidence in the inferred local transmission rates and forecasts. Further, in our model we assumed that positive test cases correspond one-to-one to infections, which in fact does not hold due to asymptomatic infections. We can correct for these biases by incorporating less biased data such as hospitalisation and death counts, as well as less granular but better understood estimates of prevalence obtained from randomised surveys such as REACT (Riley et al., 2020) and the ONS infection survey (Pouwels et al., 2020).
In order to model cross-area dependencies, we also used commuting flow data from the 2011 Census. However, these data do not necessarily reflect commuter flows accurately during the pandemic, especially since the data are static. We used a simple approach to parameterise a time-dependent flow matrix via α_t, which captures the overall amount of travel in each week. Nonetheless, our model is likely to improve if this limitation is addressed by using more accurate, real-time commuter flow data. Finally, with the increasing importance of the roles of vaccines and variants, it is interesting to consider how these can be incorporated into our model. This will require a number of extensions, including separating the population into age bands and modelling the susceptible population. These extensions will incur significantly higher computational costs, and additional work will have to be performed with respect to software and implementational efficiency. Our hierarchical Bayesian model is sensitive to a number of hyperparameters, particularly those specifying the generation interval and incubation period distributions, and the spatial and temporal length scales of the latent GP. These are hard to specify in a fully Bayesian manner. For example, the posterior strongly prefers spatial length scales that are too long due to model misspecification. Until there are good, fully Bayesian approaches to dealing with such situations, we have kept to a more pragmatic approach of using cut models and cross-validation (see Section 3.1.2). Our hierarchical model introduces stochasticity at all three layers of the model to capture different aspects of the unfolding epidemic. As a reviewer noted, there can be complex interplays between these layers, for example resulting in non-identifiable parameters. The various components of the model have been chosen to avoid the worst of these, but we have not performed a systematic study of the impacts of these choices. This will be an illuminating piece of future research.

APPENDIX A. ADDITIONAL MODEL VARIATIONS

In addition to the final model described in the main paper, we have also considered a number of model variations which did not result in improved performance, so we did not include them.

A.1 Global effects term in GP prior

We have also explored adding an additional global effects term to the spatial part of the kernel in (6), replacing the spatial kernel by K_spatial + σ²_global. This has the effect of adding another GP term f_global,: ∼ GP(0, σ²_global K_time) to (5) that is shared across all areas i = 1, …, n. However, this has the effect of over-generalising estimates of R_i,t from the high-incidence areas (for which the likelihoods constrain inference of R_i,t sufficiently) to the low-incidence areas (for which they do not).

A.2 Modelling infectiousness and susceptibility separately

We have also explored a somewhat more elaborate meta-population model. Note that in (3) and (4) the number of transmissions occurring in an area i depends only on R_i,t and not on the R_j,t of the areas j that are 'sending' infections to area i. We can extend this to a model where the predicted mean count depends on properties of both the area that 'receives' an infection and the area that 'sends' it, replacing the mean in (4) by R_i,t Σ_j F(t)_ji R′_j,t Z_j,t, where R′_j,t can be interpreted as an infectiousness level of area j, and R_i,t as a susceptibility of area i, with the overall transmission rate being a function of both, as well as of the fluxes.
While this extension is more complex and flexible, it is not clear whether both the infectiousnesses and susceptibilities are well identified from case count data. Empirically, we have not found it to perform differently from the simpler meta-population model (3) and (4). We used the same GP prior for both the infectivities $R'_{i,t}$ and susceptibilities $R_{i,t}$ in these experiments. Given the lack of statistical gains and the additional computational costs, we decided to use the simpler model (3)-(4).

B.3 Regional inference

In order to track the daily evolution of the epidemic in real time, it is preferable for the posterior simulations to run overnight. However, the model described in Section 3 is quite complex, and full posterior simulation for the whole of Great Britain using Markov chain Monte Carlo (MCMC) has significant computational costs. In this section we describe a two-stage procedure to reduce the computational costs to a manageable level. During the first stage, the epidemic time courses of individual local areas are approximately inferred by ignoring cross-area dependencies in both the meta-population infection model and the GP prior. This first stage can be easily parallelised across the 348 areas and completed quickly. In the second stage, we split Great Britain into nine regions (seven NHS regions in England, plus Wales and Scotland), and modelled each region independently using the model described in Section 3. In order to account for transmissions to and from other regions, we fix the latent epidemic process for areas in other regions to the posterior median inferred during the first stage. To reduce the approximation error due to modelling each region separately rather than the whole of Great Britain, we include in each region model a number of areas outside the region, such that for all areas within that region at least 80% of the off-diagonal flux probabilities (the corresponding rows in $F^{\mathrm{fwd}}$ and $F^{\mathrm{rev}}$) are included in the model.

Figures C1 and C2 show case count and $R_t$ predictions for all models and variants of EpiMap. Figures C3 and C4 are analogous to Figures 2 and 3, respectively, showing losses and uncertainty calibration stratified by week during the 3-week prediction period. Figure C5 shows the log(RMSE + 1) for individual LTLAs, stratified by week and compared between models. As expected, the quality of uncertainty estimates degrades in the later weeks compared to earlier weeks, as indicated by reliability curves that are further from the ideal diagonal. Once again, the relative ordering of EpiMap variants typically remains unchanged for different weeks; however, differences in uncertainty calibration between models tend to be exacerbated.

C.3 Regional estimates

Figure C6 shows the regional estimates for cases and $R_t$ for the remaining NHS regions in England, in the same setting as Figure 4.
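A toy sketch of this two-stage procedure is given below. The per-area "fit" is a stand-in (the real model runs MCMC), and the area indices and flux values are made up; only the structure — parallel stage-1 fits, region expansion under the 80% flux rule, and fixing outside areas to stage-1 estimates — mirrors the text.

```python
import numpy as np

def fit_single_area(counts: np.ndarray) -> float:
    """Stage 1 stand-in: per-area fit ignoring cross-area dependencies.
    Here a plain mean plays the role of a posterior median."""
    return float(np.mean(counts))

def expand_region(region_areas, flux: np.ndarray, coverage: float = 0.8):
    """Greedily add outside areas until at least `coverage` of each member
    area's off-diagonal flux lies inside the model (the 80% rule)."""
    included = set(region_areas)
    for a in region_areas:
        total = flux[a].sum() - flux[a, a]
        kept = 0.0
        for b in np.argsort(-flux[a]):      # largest fluxes first
            if kept >= coverage * total:
                break
            if b != a:
                included.add(int(b))
                kept += flux[a, b]
    return sorted(included)

rng = np.random.default_rng(0)
counts = rng.poisson(50, size=(6, 10))       # 6 toy areas, 10 weeks of counts

# Stage 1: cheap and embarrassingly parallel across all areas.
stage1 = [fit_single_area(c) for c in counts]

# Stage 2: model one region jointly, fixing outside areas at stage-1 values.
flux = rng.dirichlet(np.ones(6), size=6)
region = expand_region([0, 1], flux)
fixed = {a: stage1[a] for a in range(6) if a not in region}
print("region model covers areas:", region, "| fixed outside areas:", fixed)
```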
"Security Union" and the digital sphere: unpacking securitization processes

Since 2016, the EU has been boosting its agenda on security in a geopolitical context that comprises multiple challenges, namely the fight against terrorism, migration pressure, relations with Russia, Brexit, and the redefinition of the Euro-Atlantic partnership. This article sets out the drivers of the EU's perspective on security, in particular in the context of the "Security Union" framework and the emergence of the digital sphere as a defence matter.

I. Introduction

The European Union (EU) has been evolving as a security actor, with significant transformations since the Lisbon Treaty of 2009. In defence matters, we are witnessing a "brave new world" for the Union, in the sense that there has been a noticeable acceleration in the last two years and results are expected from 2018 onwards. We argue that the creation of the "Security Union" commissioner under Juncker's leadership promotes an agenda for security that operationalizes the security nexuses that define the Union's external action. Additionally, the broadening of the security agenda in terms of internal and external threats blurs the lines between institutions dealing with justice and home affairs and those dealing with external affairs. This raises the issue of the consistency of the Union's policies. This article aims to give, firstly, an overview of the Union's understanding of security by presenting the security nexuses at play and securitization processes. Secondly, the analysis unpacks how the "Security Union" developments reflect this understanding and further presents key developments in the defence realm that tackle the digital/cybernetic dimension of threats.

II. Balancing values and interests: the meaning of "security" for the EU

The EU, as a security actor, has been prominently analysed under two prisms: security nexuses and processes of securitisation. These two conceptual frameworks are informative of the Union's vision of what security means and what is constitutive of threats. This section unpacks the main contributions of the two approaches to understanding the rationales that drive the "Security Union" agenda and the EU's external action at large. The EU's political values shape an approach through which the EU promotes transformation in third countries, namely in its enlargement policy and neighbourhood policy. EU values include respect for human dignity and human rights, freedom, democracy, equality and the rule of law. In the context of enlargement and relations with neighbouring countries, including Russia, the values and principles are defined as follows: the rule of law; good governance; respect for human rights, including the rights of minorities; promoting good neighbourly relations; and principles of market economy and sustainable development. 1 However, the Union also has strategic interests, as advanced by Member States and institutions, that also shape its external policies. The postulate is that Brussels prioritizes a normative approach when it emphasizes the rule of law, democracy, and human rights. When framing its decisions in terms of security, it adopts a strategic approach. This dichotomy represents the values-security nexus and produces tensions among EU actors in the promotion of external policies. The second security nexus is known as "internal-external" and results from processes of securitization. The relationship between "inside" and "outside" has long been regarded as central in the EU's security policy.
2 Securitization is a discursive process through which a securitising agent is successful in portraying an issue as an existential threat to a referent object and in demanding exceptional measures to tackle that threat. "Securitisation theory is premised on a constructivist notion of security, in the sense that 'security is a quality actors inject into issues by securitising them'". 3 Bigo 4 has developed the analysis of the internal-external nexus, namely concerning the issue of migration. This literature is part of a global approach to the EU's security "actorness" 5 that is accompanied, in parallel, by the Union's own narrative on the nexuses: internal-external, security-development, civilian-military, public-private. 6 The thinking about the security nexus is, thus, also driven by the concept of "securitization", which highlights the role of the externalisation of internal security in the legitimation of the EU's role. 7 Additionally, the merging of internal and external security has prompted the creation of an external dimension of the EU area of Justice and Home Affairs that seeks to promote the rule of law in neighbouring countries. Each specific policy field needs to be analysed in order to understand how the EU displays simultaneously normative and strategic intents, as opposed to assuming a strict dichotomy to define its actions. 8 The literature concerning the EU as a normative/security power is, thus, related to the thinking about the internal-external security nexus. As Trauner underlines, the comprehensive coherence of EU foreign policy is at stake depending on the balance between values and priorities. "One of the major challenges for the EU has been to ensure that the mainstreaming of internal security objectives in the EU's external relations does not undermine the normative aspirations of EU foreign policy-making". 9 He underlines the relegation of values in favour of security concerns, specifically in the area of Justice and Home Affairs.

III. "Security Union" and the digital/cybernetic dimension of security

We argue here that the "Security Union" policy area is framed under the two above-mentioned nexuses and processes of securitization. This understanding gives rise to the EU's agenda for security and the means that it ought to develop. One of the main objectives of the European Commission is to "address the existing shortcomings of EU information systems for security and border management." Additionally, it incorporated the aim "to counter radicalisation and the cyber threat". The idea that security under the Justice and Home Affairs portfolio has to be integrated into a comprehensive approach is, thus, materialized in the "Security Union", which merges internal and external threats. As far as the emergence of the digital dimension is concerned, a process of securitization has brought home the idea that we are more vulnerable because there is no security setting for how we relate to the world. 11 The EU's view is about creating a European agenda for security in which information systems need to be defended and resilient. This agenda is fast evolving and widening. For instance, in the prism of the "external border", the dimension of combating hybrid threats was introduced in April 2016 with a "Joint Framework". 12 The Union is progressing towards a definition of these threats that comprises "nonconventional forms, such as radicalisation leading to terrorist attacks, chemical attacks, cyber-attacks or disinformation campaigns."
They "combine conventional and unconventional, military and nonmilitary activities that can be used in a coordinated manner by state or non-state actors to achieve specific political objectives" that resume to endanger European societies and EU values. 13 Beyond emerging agendas and technical issues, such as the creation of interoperability of EU information systems for borders and security 14 , there is a geopolitical context that explains why the Union is producing this specific set of policies to address external threats. The geopolitical situation in its immediate vicinity has turned the fight against terrorism into a priority and the migration pressure a security issue, resulting from a process of securitisation. Additionally, the degradation of relations with Russia in the aftermath of the annexation of Crimea in March 2014 has highlighted the digital/cybernetic threats in the context of methods of hybrid warfare. 15 The creation of the East Stratcom Task Force, in 2015, at the European External Action Service exemplifies the above-mentioned understanding. The Task Force received funding from the EU budget for the first time, for the 2018-2020 period. 16 The body aims to raise awareness and understanding of disinformation and improve the Union's own performance concerning its news and communication and support to journalism in Eastern Europe. Taking into account the nexus between internal and external threats and the balance between security and the normative concerns, above-mentioned, EU policies to address security needs can be found in several dimensions of the European process of integration. The broader framework of the "comprehensive approach" sustains this understanding. This approach was formulated in 2016 and is further complemented by the EU Global Strategy of the same year. 17 The bottom line is the will to use the Union's tools in a more coherent way, including an inter-institutional perspective. In this broader context, an emphasis on defence is taking place with, for instance, advancements in the military sphere such as Permanent Structured Cooperation (PESCO) and a tightening of EU-NATO cooperation. PESCO results from the provisions of the Lisbon Treaty and was adopted by 25 Member States in December 2017. Among the 17 projects that are being developed, several include the cyber domain. 18 The cooperation with NATO highlights the cyber dimension as well and EU-NATO joint work is instrumental in the EU's view. Since December 2016, initiatives include the participation of the Union in NATO's cyber exercises, the exchange of military concepts, interoperability and staff-to-staff contacts. 19 In the words of the Head of Cabinet to European Commissioner for Security Union, both the end of the peace dividend and of the financial crisis explain today developments such as PESCO, as compared to the post-Treaty of Lisbon period. 20 Additionally, the cyber threat is so massive that it demands collective action and responsibility. The costs of developing tools in the cyber domain are high and PESCO can be a facilitator because it demonstrates the linkage of digital to the field of defence that is increasingly complex, comprehensive, and integrated. IV. Conclusion The EU is confronted with many security challenges that require multiple forms of defence and resilience. The steps towards digital interoperability in several domains such as Justice and Home Affairs or tackling cyber threats are part of processes of securitization that comprise two elements. 
On the one hand, the Union increasingly views threats as being internal and external in nature. Consequently, on the other hand, the policies and instruments to address these threats have to engage all the portfolios of EU actors. The first accelerator of the incorporation of the digital dimension into the EU's security policies is, thus, the European Commission's new ambition to bring security and defence to the core of the EU. The second factor is the new external environment, which includes challenges such as migration, Russia, the distancing of the United States, a traditional ally with growing isolationist proclivities, and Brexit.

17 High Representative of the European Union for Foreign Affairs and Security Policy, Joint communication to the European Parliament and the Council, The EU's comprehensive approach to external conflict and crises, Brussels, 11.12.2013, JOIN(2013) 30 final (2016).
18 Council of the European Union, "Defence cooperation: Council adopts an implementation roadmap for the Permanent Structured Cooperation (PESCO)", (2018), http://www.consilium.europa.eu/en/press/press-releases/2018/03/06/defence-cooperation-council-adopts-an-implementation-roadmap-for-the-permanent-structured-cooperation-pesco/.
19 NATO, "Statement on the implementation of the Joint Declaration signed by the President of the European Council, the President of the European Commission, and the Secretary General of the North Atlantic Treaty Organization", (2016), https://www.nato.int/cps/ua/natohq/official_texts_138829.htm.
20 James Morrison, Address…
Efficient CRISPR-Cas9 Gene Disruption System in Edible-Medicinal Mushroom Cordyceps militaris

Cordyceps militaris is a well-known edible medicinal mushroom in East Asia that contains abundant and diverse bioactive compounds. Since traditional genome editing systems in C. militaris were inefficient and complicated, here we show that a codon-optimized cas9 was expressed using the newly reported promoter Pcmlsm3 and terminator Tcmura3. Furthermore, with the help of the negative selection marker ura3, a CRISPR-Cas9 system comprising the Cas9 DNA endonuclease, sgRNA presynthesized in vitro, and a single-stranded DNA template efficiently generated site-specific deletions and insertions. This is the first report of a CRISPR-Cas9 system in C. militaris, and it could accelerate the genome reconstruction of C. militaris to meet the need for rapid development in the fungal industry.

To clarify the synthetic mechanisms of the bioactive compounds in C. militaris, various metabolic pathways have been predicted by high-throughput sequencing (Zheng P. et al., 2011). To further construct engineered strains with better productivity, the establishment of an efficient genome-editing method is vital. However, reports of genomic editing in Cordyceps are still rare (Zheng Z. et al., 2011; Yang et al., 2016). Traditional genome-editing technologies, such as methods based on homologous recombination (HR) and Agrobacterium-mediated random gene insertion, were insufficient and too complicated to satisfy the requirement for accurate and repeatable editing. The clustered regularly interspaced short palindromic repeats (CRISPR) system is an innovative and efficient genome-editing tool with gene knockout, insertion, and replacement abilities. Among all types of CRISPR systems, the type II CRISPR system, which consists of only two components — the CRISPR-associated endonuclease Cas9 from Streptococcus pyogenes and the single-guide RNA (sgRNA), a fusion of crRNA and tracrRNA (Cong et al., 2013) — is the most popular one. With guidance to specific sites by the sgRNA, genome sequences are cut, and double-stranded breaks (DSBs) are generated by Cas9. Subsequently, gene knockout is caused by nonhomologous end-joining (NHEJ), while gene replacement is obtained via HR. Among the vast applications of the type II CRISPR system in a wide range of species, the CRISPR-Cas9 system has also been implemented in filamentous fungi such as Aspergillus nidulans (Katayama et al., 2015; Nødvig et al., 2015), Trichoderma reesei (Liu et al., 2015) and Neurospora crassa (Matsu-ura et al., 2015). However, unlike these filamentous fungi, industrial mushrooms such as those in the genus Cordyceps possess a much more complicated two-stage growth cycle and a well-known high ratio of fruiting-body degeneration, which leads to more difficulties in the application of a CRISPR system. In this study, the application of a CRISPR system in the genus Cordyceps is reported for the first time. With the help of the newly discovered promoter Pcmlsm3 and terminator Tcmura3, the cmcas9 gene was successfully transformed into cells of Cordyceps militaris. A stable Cas9-expressing strain was verified by a fluorescent GFP tag and western blot analysis. sgRNA and donor single-stranded DNA (ssDNA) synthesized in vitro were further transformed into the cmcas9-expressing C. militaris strain. As a result, the target gene ura3, which was first used in
S. cerevisiae as a selective marker (Boeke et al., 1984), was successfully disrupted by the CRISPR-Cas9 system.

Strains and Culture Conditions

Escherichia coli strain DH5α (Weidi Bio, Shanghai, China) was used for vector construction. Agrobacterium tumefaciens AGL-1 (Weidi Bio, Shanghai, China) and the pCAMBIA0390 plasmid (Cambia, Queensland, Australia) were used for mediating the fungal transformation. C. militaris CM10 was purchased from Lucky Mushroom Garden (http://www.bjjixunyuan.com, Beijing, China) as the host for gene disruption. C. militaris CM10 was cultured on potato peptone dextrose agar (PDA) at 25 °C. The materials for the nucleic acids are shown in Table 1.

Quantitative Real-Time PCR (qPCR)

Total RNA was extracted from 100 mg of frozen mycelial pellets using an E.Z.N.A. Fungal RNA Miniprep kit (OMEGA Bio-Tek Inc., GA, USA). The qPCR template cDNA was synthesized from 1 µg of total RNA by using TransScript All-in-One First-Strand cDNA Synthesis SuperMix for qPCR (TransGen Biotech, Beijing, China). All primers used for quantitative real-time PCR (qPCR) are listed in Table 3. Each qPCR reaction consisted of a total volume of 20 µl, containing 50 ng of cDNA, 160 nM each of the relevant primers, and TransStart Tip Green qPCR SuperMix (TransGen Biotech). All qPCR was carried out on a QuantStudio 3 Real-Time PCR System (Thermo Fisher Scientific, MA, USA), following the reaction parameters from the TransStart Tip Green qPCR SuperMix instruction book (TransGen Biotech). Using the elongation factor 1-alpha (tef1) gene (NW_006271969.1) as an internal control for each sample (Lian et al., 2014), relative mRNA levels were calculated by the delta-delta Ct method (Livak and Schmittgen, 2001).

Western Blot

Protein was extracted from the mycelia with radioimmunoprecipitation assay (RIPA) buffer (Sigma-Aldrich, USA) for western blotting. The supernatants were collected after centrifugation at 10,000 × g for 5 min and mixed with 5× loading buffer (60 mmol/L Tris-HCl, 2% SDS, 25% glycerol, 5% 2-mercaptoethanol, and 0.2% bromophenol blue) at a ratio of 1:4. Samples were then heated at 95 °C for 5 min and separated on a 5% SDS-polyacrylamide gel. The separated proteins were then transferred to a 0.22 µm polyvinylidene difluoride (PVDF) membrane (Perkin Elmer, MA, USA). The primary antibody used for western blotting was Guide-it™ Cas9 Monoclonal Antibody (Takara, Beijing, China). The secondary antibody was HRP-conjugated Goat Anti-Mouse Secondary Antibody (ABmart, Shanghai, China).

C. militaris mycelia were cultured in 100 ml of potato dextrose broth medium at 25 °C without shaking and harvested by centrifugation after 7 days. After washing twice with sterile water and 0.7 M KCl, the mycelia were digested with 0.7 M KCl containing 2% lywallzyme (Guangdong Culture Collection Center, Guangzhou, China) at 30 °C with gentle shaking (85 rpm) for 3 h. Protoplasts were filtered and washed in 0.7 M KCl twice. After subsequent washing in STC buffer (1 M sorbitol, 25 mM CaCl2, 10 mM Tris-Cl pH 7.5), the protoplasts were resuspended in STC buffer at a density of 10^8 protoplasts per milliliter. Each 50 µl aliquot of protoplasts was mixed with 2 µg of sgRNA and/or 3 µg of ssDNA and then placed on ice for 5 min. A volume of 500 µl of PEG buffer (25% PEG 4000, 25 mM CaCl2, 10 mM Tris-Cl pH 7.5) was slowly added. The whole mixture was incubated at room temperature for 20 min, then mixed with 500 µl of STC buffer and incubated at room temperature for 5 min.
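As a side note, the delta-delta Ct calculation cited in the qPCR subsection above (Livak and Schmittgen, 2001) reduces to the short computation below; the Ct values are illustrative placeholders, not measured data, with tef1 playing the role of the internal control.

```python
def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Livak (delta-delta Ct) method for relative mRNA levels."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalise to tef1
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)                              # fold change vs control

# e.g. a hypothetical target gene measured against tef1 in two strains
print(relative_expression(22.0, 20.0, 28.0, 20.0))      # -> 64.0-fold
```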
Protoplasts were harvested by centrifugation, resuspended in STC buffer, and then incubated on PDA with 0.8 M mannitol at 25 °C. After 3 days of regeneration, PDA medium containing 5-FOA or 5-fluorocytosine, but with only half the usual agar, was used to cover the surface directly. Once the mycelia had grown out of the medium, individual colonies were picked and tested by PCR verification using the primer pair Ura3F/R (Table 3).

Identification of a Resistance Marker for C. militaris

To identify a resistance marker for genetic manipulation, hygromycin was first used as a selection agent, following a previous study in C. militaris (Mullins et al., 2001). However, hygromycin did not have a toxic effect on CM10 or the other C. militaris strains in our purchased stock (at testing doses up to 4 g L^-1). Several other drugs commonly used in the genetic manipulation of fungi, such as G418, phleomycin, nourseothricin, Basta, 5-fluorocytosine and 5-FOA, were tested. Only Basta and 5-FOA showed obvious lethality to the conidia or mycelia of C. militaris CM10. The minimum inhibitory concentrations of Basta and 5-FOA for CM10 were determined by cultivating CM10 on PDA plates with different drug concentrations for a month. The growth of conidia was completely inhibited at a Basta concentration of 0.4 g L^-1, and the growth of mycelia was completely inhibited at a 5-FOA concentration of 0.1 g L^-1 within a month. Therefore, the resistance gene for Basta (blpR) and 5-FOA selection were chosen for further study. The Basta resistance gene blpR was ligated into p390-cmcas9 for positive selection in the construction of the genome-editing system.

Cotransformation of a Single Vector With the sgRNA-cmcas9 Cassette

Since there were only a few available resistance markers in C. militaris, the sgRNA and the cas9 gene were initially ligated and expressed as a cassette. Since identification of an RNA polymerase III promoter in C. militaris failed, a composite RNA molecule with an sgRNA scaffold and hammerhead (HH) and hepatitis delta virus (HDV)-type ribozymes, predicted to process in a self-catalyzed manner and thereby release the sgRNA from a larger transcript as previously reported (Gao and Zhao, 2014), was adopted, together with an RNA polymerase II promoter and terminator from Aspergillus nidulans trpC (Zheng Z. et al., 2011), for the synthesis of the sgRNA. These molecular elements were all assembled into p390-blpR-cmcas9-gfp to build p390-blpR-sgRNA-cmcas9-gfp (the sequence of the sgRNA cassette is shown in Table S6) and subsequently cotransformed into C. militaris CM10. After resistance detection, PCR amplification, and gene sequencing of ura3, more than 100 transformants with cmcas9 gene integration were obtained. Nevertheless, no strains with mutations in the ura3 gene were identified among these transformants. A putative transformant was then picked at random to verify the transcript of the sgRNA by qPCR (Figure 2A). The sgRNA cassette had been transcribed, but it appeared to fail to work cooperatively with Cas9.

Stable Expression of the cas9 Gene in C. militaris

Cotransformation of the sgRNA-cas9 cassette failed to function in C. militaris. To clarify the failure of the sgRNA-cas9 cassette, stable expression of the cas9 gene became the priority target. After the construction and transformation of the expression vector p390-blpR-cmcas9-gfp, 85 transformant strains were harvested using the herbicide Basta (400 µg ml^-1) as a selection agent.
After PCR verification, agarose gel electrophoresis was performed to verify the transformants. Fifty-five transformants were proven to carry the target sequence of the cmcas9 gene, because they showed obvious bands consistent with the expected size on an electrophoresis gel (Figure 3A). Five random transformants were tested by western blot, and protein bands of the relevant size (186.5 kDa) were seen in the experimental samples, whereas none were detected in the negative control (Figure 3B). The relatively high positive rate may be due to the successful selection mechanism of Basta and its resistance gene blpR, as well as the modification of the Agrobacterium tumefaciens-mediated transformation (ATMT) protocol. One transformant chosen for fluorescence microscopy observation showed no obvious morphological change but significant fluorescence in its cells (Figure 4), indicating that the Cas9-GFP fusion protein was successfully expressed. qPCR was then performed to test the transcription level of cmcas9. As shown in Figure 2B, cmcas9 was expressed at a relatively high transcript level compared to the housekeeping gene tef1. The high transcript level may have been due to the strong efficiency of the gpd promoter and the codon optimization of cas9. This transformant also showed no significant difference in growth rate, indicating its suitability for further study. The transformant C. militaris::blpR-cmcas9-gfp was renamed C. militaris C9. Since molecular expression elements are relatively rare in edible-medicinal mushrooms such as C. militaris, two new elements, the native promoter from the housekeeping gene cmlsm3 and the terminator from cmura3, were identified. The usability of the promoter Pcmgpd (Gong et al., 2009), which had been predicted to be applicable to genome editing, and of the Basta resistance gene blpR as a positive selection marker was also demonstrated.

Establishment of a CRISPR-Cas9 Gene Disruption System in C. militaris

Since the existence and function of sgRNA were relatively hard to detect, free sgRNA was chosen for the construction of the CRISPR-Cas9 system. To test the efficiency of the CRISPR-Cas9 system, a target site was chosen in the ura3 gene locus, because URA3 converts 5-FOA into a toxic compound and leads to cell death. As negative controls, mycelial protoplasts of CM10 and C9 were treated with 5-FOA, and 30 and 22 colonies appeared, respectively. However, no ura3 mutations were obtained in the absence of the relevant sgRNA. After transforming sgRNA-gUra3-1 and ortUra3-1 into mycelial protoplasts of C. militaris, we obtained a total of 51 colonies that grew out of the 5-FOA medium. Six mutant transformants of the gUra3-1 target site were obtained out of the 51 putative transformants and are shown in Figures 5C-E (the sequencing chromatogram in the box shows the reliability of the sequencing results). As shown in Figure 5C, a 2-bp deletion (red line) occurred in the gUra3-1 target (underlined sequence), located 4 bp upstream of the PAM site (AGG). Two kinds of replacement (1 bp replaced by 17 bp, and 14 bp replaced by 4 bp) occurred, located 3 bp upstream of the PAM site (Figure 5D). A 1-bp insertion occurred, located 5 bp upstream of the PAM site (Figure 5E). It has been reported (Wiedenheft et al., 2012) that the 12 bp nearest the PAM site are the most important signal for sgRNA binding, so such mutations will abolish further recognition and cutting by CRISPR-Cas9.
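To make the PAM-site geometry discussed above concrete, the sketch below scans a sequence for NGG PAM motifs and reports the 20-nt protospacer 5' of each PAM together with its 12-nt PAM-proximal seed. The input sequence is a made-up example, not the ura3 locus, and this is not the tool used in the study.

```python
import re

def find_targets(seq: str):
    """Return candidate Cas9 targets: 20-nt protospacers ending at an NGG PAM."""
    seq = seq.upper()
    hits = []
    for m in re.finditer(r"(?=[ACGT]GG)", seq):   # overlapping NGG matches
        pam_start = m.start()
        if pam_start >= 20:
            proto = seq[pam_start - 20:pam_start]
            hits.append({"protospacer": proto,
                         "seed_12bp": proto[-12:],  # most critical for binding
                         "pam": seq[pam_start:pam_start + 3]})
    return hits

demo = "ATGCGTACGTTAGCATCGATCGTACGATCGATCGAGGTTACG"  # made-up sequence
for hit in find_targets(demo):
    print(hit)
```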
Although we transformed the guide sgRNAs along with their single-stranded repair templates, these four mutations showed that the target site was cut by Cas9 and repaired by NHEJ rather than HR. Interestingly, a special 39-bp insertion was also generated in the target site gUra3-1, located 4 bp upstream of the PAM site; these 39 bp were similar (87.2%) to part of the homology repair template ortUra3-1 (71 bp), suggesting that the double-stranded break may have been repaired by a mechanism other than NHEJ or HR in C. militaris. All the disruptions resulted in 5-FOA resistance (Boeke et al., 1984), proving the usability of orotidine 5′-phosphate decarboxylase (ura3) as a negative selection marker in C. militaris. Moreover, although it has been reported that mutations in URA3 cause auxotrophy for uridine, the URA3 mutants of C. militaris could still grow normally in medium without added uridine. This finding implied that the ura3 gene is not as indispensable for cell growth in C. militaris as it is in S. cerevisiae. To further test the efficiency of multi-site disruption, sgRNA-gUra3-1 and sgRNA-gUra3-2 were cotransformed into C. militaris C9 (CM10::blpR-cmcas9-gfp) as well. Rather than the expected double-site mutation, the mutations in the transformants showed that the gUra3-2 target site in the host carried a 96-bp insertion. As shown in Figure 5F, the insertion was located 5 bp upstream of the PAM site (TGG) and was similar to sgRNA-gUra3-1 (84.3%), while the gUra3-1 site was not mutated.

DISCUSSION

In this research, a Cas9-expressing strain of C. militaris was constructed by using newly identified molecular elements and ATMT. By transforming an sgRNA simply synthesized in vitro, genome editing can be applied easily and flexibly in the traditional edible mushroom C. militaris. Available selection markers are still lacking in medicinal ascomycete fungi such as C. militaris, posing a real obstacle to the advancement of genome editing. Although high doses of Basta and 5-FOA could be used in this study, neither could avoid high rates of false positives during transformant selection. As a two-stage sac fungus, C. militaris is well known for a high ratio of fruiting-body degeneration in farming, which means a high dose of antifungal drugs might accelerate its degeneration as it evades the toxic effect. Since gene integration leads to more stable expression in this degeneration-prone sac fungus, the transformation of a single vector with an sgRNA-cas9 cassette was considered first. A previous report (Gao and Zhao, 2014) showed that sgRNA could be generated from an RNA polymerase II promoter together with two specific ribozymes. These HH and HDV ribozymes have been used to generate sgRNA in plants, filamentous fungi (Mitchell et al., 2017) and even in the ascomycete fungus Alternaria alternata (Wenderoth et al., 2017). As previously reported, an sgRNA cassette was constructed with these two ribozymes and driven by a verified trpC promoter and terminator. Since this strategy failed to edit the genome in C. militaris, qPCR was performed, which verified that the sgRNA was expressed successfully. This showed that the sgRNA cassette failed to generate functional sgRNA in this study, suggesting that the HH- or HDV-type ribozymes failed to function in C. militaris. Therefore, a strategy of generating a Cas9-expressing C. militaris strain and then transforming the presynthesized sgRNA was adopted.
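For reference, similarity figures such as the 87.2% quoted above come down to a percent-identity calculation of the kind sketched below; this toy version assumes the two sequences are already aligned end-to-end without gaps, whereas the real comparison would use a proper alignment. The sequences are made up.

```python
def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shared length, as a percentage.
    Assumes a gap-free, end-to-end alignment."""
    n = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a[:n], b[:n]) if x == y)
    return 100.0 * matches / n

print(round(percent_identity("ACGTACGTAC", "ACGTTCGTAC"), 1))  # -> 90.0
```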
The target sites gUra3-1 and gUra3-2 were located by the transformed sgRNA-gUra3-1 or sgRNA-gUra3-2 and then recognized by the editing protein Cas9-GFP. Subsequently, the DSBs were generated and repaired by the DNA repair mechanisms of C. militaris. Six kinds of ura3 mutations were obtained on the 5-FOA plates when Cas9, sgRNA and/or donor DNA were present. In the CRISPR-Cas9 editing system, double-stranded breaks caused by Cas9 are generated near the PAM site and can be repaired by NHEJ or HR (Cong et al., 2013). In this study, all of the mutations that occurred were located 3-5 bp upstream of the PAM site of ura3, which is consistent with the Cas9 cleavage pattern. Therefore, we believe that the mutations were made by the CRISPR system. However, the mutations of three transformants revealed an interesting repair mechanism. When transforming sgRNA-gUra3-1 along with the corresponding micro-oligonucleotide repair template ortUra3-1 into C. militaris, the expected outcome was an insertion of the nucleotide sequence "TAGATAGATAG" at the specific site, which contains a stop codon and would result in premature termination during translation of the ura3 mRNA. Microhomology-mediated end-joining has been used to repair DSBs by mediating local sequence-homology recombination according to micro-oligo templates in mammalian cells (Wang et al., 2013; Bae et al., 2014) and eukaryotic parasites (Peng et al., 2014). However, the unexpected 39-bp insertion, a DNA sequence similar to ortUra3-1 (Figure 5E), showed that the DSB was repaired by capturing the free oligonucleotide fragment and pulling it into the break. In addition, when cotransforming the two sgRNAs into the cells, a 96-bp inserted sequence generated in gUra3-2 was similar to sgRNA-gUra3-1 (Figure 5F). One of the sgRNA molecules was accidentally captured and inserted into the DSB at the target site. These two kinds of mutations revealed that C. militaris has a unique repair mechanism that might repair DSBs by direct ligation or free-nucleotide insertion. This random mismatch-repair mechanism might be one of the reasons why C. militaris suffers a high ratio of fruiting-body degeneration. In this study, we aimed to induce DSB repair by homology-directed repair rather than by random deletion or insertion. Single-stranded templates are more efficient than double-stranded ones in mammalian cells and some eukaryotic parasites (Peng and Tarleton, 2015), so micro single-stranded oligonucleotides rather than double-stranded DNA were used as repair templates when utilizing the CRISPR system. However, it was hard to succeed with this microhomology-mediated end-joining strategy in C. militaris, which may be due to low homologous recombination efficiency or other unknown reasons. C. militaris is the primary species capable of producing high levels of the valuable drug cordycepin. With this CRISPR technique, the cordycepin production pathway will be the first target for editing. By disrupting repressor genes or reconstructing the promoter of cordycepin synthetase, the yield of cordycepin can be further raised, promoting the development of the related industry. To summarize, this is the first report of the successful development of a CRISPR system in the traditional edible-medicinal mushroom C. militaris, which raises hopes for molecular breeding to increase the production of bioactive compounds and to delay strain degeneration.
AUTHOR CONTRIBUTIONS

B-XC performed vector construction and RNA synthesis and was the major contributor to drafting the work; TW helped revise the manuscript critically for important intellectual content; H-BT performed strain transformation and PCR verification; FY and L-ZK performed strain cultivation; Z-WY supervised B-XC in sequencing data analysis; L-QG and J-FL agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors approved the final version of the manuscript for publication.
An Analysis of Indian Entertainment Industry – Past, Present, and Future

Purpose: The entertainment industry (casually known as show business) is included in the tertiary sector of the economy and embraces fields such as theater, films, fine arts, dance, music, television, radio, media, sports, cultural events, etc. This industry is continuously evolving with innovations and new ideas. It is developing dynamically in terms of revenue and volume, employing creative and technical people. It witnessed explosive growth post-liberalization, which led to internationalization and expansion of the market. This study explores the evolution, growth, threats, challenges, future trends, and impact of FDI on the Indian entertainment industry. It also uncovers the impact of internationalization and the industry's potential for providing employment.

Methodology: This study is based on secondary data, including Google searches, online journals, reports, and news articles.

Findings: Initially, entertainment started with storytelling, a way to pass on culture, traditions, values, and history. The introduction of television brought a big change in entertainment, and now online entertainment is the trend; the method of delivering entertainment has thus expanded progressively. The Indian government has increased the permitted share of FDI, and many international collaborations have helped the Indian entertainment industry grow nationally and internationally. There is a lot of employment opportunity in the entertainment industry, both for people who work on screen and for those behind the screen. Due to the pandemic, a few classes of workers in the industry are suffering, while others are surviving on online platforms. Television has retained its position as the largest entertainment segment, while digital media overtook the print entertainment segment, and online gaming overtook the filmed entertainment segment. In the future, online forms of entertainment will dominate and OTT platforms will boom.

Research limitations/implications: Very few sources are available for reference, a few concepts are not well covered, and much of the information is outdated; hence, the collection of relevant information was a challenge.

Originality/value: This paper brings into focus the imperative for due consideration by the Government and other regulating bodies to adopt incentive measures to boost the Indian entertainment industry, a sunrise industry, and also to work on the personal and financial safety and stability of all the industry's stakeholders, especially during abnormalities like the present pandemic.

Paper type: This industry analysis is an exploratory study.

INTRODUCTION :

Industry analysis is a type of exploratory research in which different companies in a selected industry sector are studied by means of systematic analysis [1][2][3][4]. The entertainment industry (casually known as show business) is a part of the service industry, is included in the tertiary sector of the economy, and embraces fields such as theater, films, fine arts, dance, music, television, radio, media, sports, cultural events, etc. This industry is continuously evolving with innovations and creative ideas. It is developing dynamically in terms of revenue and volume, employing creative and technical people. It witnessed explosive growth post-liberalization, which led to internationalization and expansion of the market.
This study explores the evolution, growth, threats, challenges, future trends, and impact of FDI on the Indian entertainment industry. It also uncovers the impact of internationalization and the industry's potential for providing employment. Storytelling has been a very important part of entertainment from the earliest times. The bygone style of communicating happenings and experiences was with the help of images, words, sounds, and gestures. Telling a story is one of the ways individuals have passed on their cultural traditions, values, and history across generations. Even now, stories are shared in the early styles, for instance while camping, or when sharing stories of another culture with a tourist. Storytelling passed first from mouth to ear, was later committed to writing, and is now enjoyed in the form of novels and films [5][6].

The Entertainment Industry embraces different segments (sub-industries), such as:
- Television Industry: This industry is diversified and includes thousands of programs in multiple official Indian languages [7].
- Print Industry: This industry produces reproductions of written materials or images in multiple copies [8][9].
- Digital Media Industry: This industry focuses on the broadcasting or communication of information (text, audio, video, and graphics) through a screen. Digital media can be viewed, created, distributed, modified, listened to, watched, and preserved on digital electronic gadgets [10][11].
- Filmed Entertainment Industry (motion pictures) (cinema): This industry comprises the technological and commercial aspects of filmmaking [12][13].
- Animation and VFX Industry: This industry provides visual effects and can mix real shooting with animated images to achieve certain effects [14].
- Live Events Industry: This industry consists of events or functions organized or sponsored at which the performance of music is often the main focus, though it is not restricted to music [15].
- Online Gaming Industry: This industry consists of online games, i.e., electronic games played over an electronic device using the internet [16][17].
- Out-of-Home Media Industry: This industry is also called outdoor advertising because the advertising is experienced outside of the home [18][19].
- Radio Industry: This industry includes the public service or commercial service providers involved in the broadcasting of radio stations or ancillary services [20].
- Music Industry: This industry is made up of companies and independent artists who earn income by creating, performing, and selling recordings [21].
- Sports Industry: This industry has public, for-profit, and commercial organizers involved in producing, facilitating, promoting, or organizing activities focused on sports [22][23].

Entertainment may be public or private, involving spontaneous and unscripted performances or formal, scripted performances, as in the case of theatre or concerts. Several forms of entertainment have been experimented with over the years and keep advancing with changes in culture, fashion, and technology. Even though every individual's attention is regulated and controlled by different factors, because everyone has unique preferences, many varieties of entertainment are recognizable and familiar [14]. Nowadays, information, communication, and computation technology (ICCT) plays an important role in the entertainment industry [24].

OBJECTIVES :

(1) To know the evolution and expansion of the Indian Entertainment Industry.
(2) To unravel the potential of the Indian Entertainment Industry for providing employment.
(3) To find the threats to and challenges of show business.
(4) To study the impact of government and FDI on the internationalization of Indian show business.
(5) To analyze the future trends in the Indian Entertainment Industry.

REVIEW OF LITERATURE :

The paper published by Barathi, C., Balaji, C. D., & Meitei, C. I. (2011) [25] on Trends and Potential of the Indian Entertainment Industry - an in-depth analysis, identified the major factors that have helped the Indian M&E industry to evolve: favorable demographics, technology up-gradation, growing literacy, support from the government, increasing affluence, and increasing interest. Development in this sector has helped the rapid growth of television channels and the increasing popularity of social media.

The paper published by Barat, S. (2017) [26] on the Entertainment Industry and India, Inc., Asian Cinema, focused on the performance of the Indian entertainment industry. The article discusses how one of the largest entertainment industries globally survived decades of mismanagement to acquire incredible popularity and marketing platforms from unique entities, both national and international. The study concludes with numerous interesting ideas for future research.

The paper published by Sini. V. Pillai. (2018) [27] on Data Analytics Changing the Pace of Entertainment Industry revealed that the M&E industry of India is developing at a growing rate, with companies finding it difficult to match the speed. The challenges arise from constraints to reduce costs while trying to increase revenues. These challenges have paved the way for this industry to implement big data. Media companies are early adopters of big-data technologies.

The paper published by Kishnani, N. (2018) [28] on A Review on Internationalization of Indian Entertainment Industry shows that entertainment attracts audiences during leisure time and is an income producer for many. The Indian M&E industry is expanding and growing swiftly; the sector, comprising television, music, print, radio, films, animation and gaming, and Internet advertising along with Visual Effects (VFX), is expanding rapidly and employs creative and technical people. The industry boomed after liberalization and internationalization, entering multinational markets.

THE EVOLUTION AND GROWTH IN THE INDIAN ENTERTAINMENT INDUSTRY :

Since ancient times, humans have relished entertainment as an escape from the regular cares of livelihood. This industry has many firms that produce, manufacture, and distribute different entertainment content to the public. It has various sub-groups that produce and sustain the entertainment industry as a whole. These entertainment sub-sectors are responsible for the distribution of a variety of entertainment to the audience, including music, exhibition, live, electronic, and mass-media entertainment. Musical entertainment comprises orchestras and hall/open-air performances of composers, musicians, and vocal artists. Live entertainment contains comedy performances, circus, musical theater programs, sports events, the performing arts, and concerts. Exhibition entertainment contains trade shows, amusement parks, and fairs. Mass media comprise the internet, film, and broadcasting. Electronic entertainment consists of digital media, social media, video games, and streaming services [29][30][31].
The first motion picture (cinema/film) was witnessed in India in the late 1910s. Indian cinema has since evolved from black-and-white to color films, from multiple reels to a single showreel, from no graphics to animation, and from theatre to OTT platforms. The need for talent, skill, and creativity in this industry will outstrip supply given the speed of growth. The industry alone requires 170K to 180K trained or employable persons entering it yearly for the next five years (Report of the Confederation of Indian Industry and Boston Consulting Group). With regard to the number of films created, the Indian film sector is among the largest globally. There are more than 900 production houses, with 72 corporate houses in the film production business [25], [32].

Television was launched in India in 1959; for a few years it was restricted to the state-owned telecaster Doordarshan. TV is expected to remain the largest segment of the entertainment industry. In 2017, this sector in India grew from US$ 8.7 billion to US$ 9.7 billion. Currently, this sector has approximately 48 paid-for broadcasters, more than 60,000 cable operators, 6,000 MSOs (Multi-System Operators), and 7 DTH (direct-to-home) operators, in addition to the public service broadcaster, Doordarshan. There are now more than 900 TV channels registered with the Ministry of Information and Broadcasting, along with 1,18,239 registered publications (periodicals and newspapers) and up to 2,500 multiplexes [33][34].

Animation has established itself as an important component of the entertainment industry, growing at a CAGR of 16.4% since 2016 and expected to reach US$ 878 million soon. Industry specialists anticipate that Animation and VFX will reach US$ 1.9 billion in India by 2021. The VFX and Animation segments benefit from rising demand from domestic content corporations (which have made more than 1,600 hours of original content for OTT, 1,800 films, and more than 2,00,000 hours of entertainment for TV) in addition to international content firms producing larger volumes of content for both growing and developed markets [35][36]. In the gaming sector, another segment of the Indian entertainment industry, the market value was around 90 billion rupees in FY 2020. This is anticipated to grow to more than 143 billion rupees by 2022 [16].

India is one of the most swiftly growing mobile markets in the world, and more than half of its internet users access the internet only from mobile; almost 60% of Indians used the internet for the first time on their mobiles, whereas in other countries the first devices used were desktops and laptops. The launch of Reliance Jio's 4G services in 2016 and the succeeding launches by other service providers were a joint force in the expansion of India's data story, which drove up data usage. A recent KPMG report reveals that over 900 million Indians will use mobile sets capable of streaming and downloading videos [37-38]. Digital innovation has redefined the entertainment industry and driven innovative changes across global markets through its advertising, exhibition, distribution, and reception strategies [39]. The Indian media and entertainment industry has enormous scope for development in every one of its segments owing to increased incomes and a better standard of living. Media draws audiences from different socio-economic groups through distinct modes like films, radio, TV, animation and visual effects (VFX), music, out-of-home (OOH), gaming, digital marketing, and print.
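As a quick check on the growth figures quoted above, the CAGR arithmetic can be sketched as below; the numbers reproduce the gaming-segment projection in the text (about Rs 90 billion in FY 2020 to more than Rs 143 billion by 2022).

```python
def cagr(begin_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate, as a percentage."""
    return ((end_value / begin_value) ** (1.0 / years) - 1.0) * 100.0

# Gaming segment: ~Rs 90 bn (FY 2020) to ~Rs 143 bn (FY 2022), two years.
print(round(cagr(90.0, 143.0, 2.0), 1))  # ~26.1% per year
```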
OTT content is transforming from niche to mass, attracting vast audiences. The improved quality of big screens and investment in the creation of original content are attracting huge audiences. Live streaming is a focus point for OTT competitors; the sports genre is attracting a lot of viewership and is beneficial for monetization. Video is driving data traffic [40][41]. Music is growing on digital platforms, with listeners hitting around two hundred million, owing to the launch of audio streaming services and their superior implementation. OOH growth, steered by metro-station naming rights, airport advertising, and Indian Railways, has increased incomes. Print readership fell slightly, and revenue thus declined. The radio sector saw a fall in revenue because of the dip in economic activity, which affected retail advertisers' spending on advertisements [42-43].

The Indian media and entertainment sector witnessed a 24 percent decline in 2020, as the television and print segments were adversely impacted by the pandemic. Only the online gaming and digital media segments witnessed growth in 2020; online gaming was the fastest-growing sector, with growth of 31% [34], [44]. A KPMG report shows that the Indian M&E sector is gauged to see a notable 20% reduction in total revenue in FY2021 because of deep cuts in the film, television, and print sectors as an impact of COVID. The digital segment, OTT, and online gaming are showing positive growth, as digital consumption has increased in this pandemic situation. As per the revised estimation, India could reach a billion digital users by 2028 rather than by 2030 as stated earlier. It is predicted that all the sectors may bounce back and grow by 33% in FY 2022. In FY2022, India's M&E sector is expected to reach Rs 1,86,600 crore in revenue [45].

EMPLOYMENT OPPORTUNITIES IN THE ENTERTAINMENT INDUSTRY :

The industry is rising as a vital segment for India's development. It has been, and will continue to be, a high contributor to our country's economic growth. It has additionally been a paramount employment creator and is known to encourage creative and innovative talents. This industry gives jobs under five predominant sub-sectors: Film; Gaming and Animation; Music, Television and Radio; Print; and Advertising and Digital. The gamut of jobs offered under each of these heads is driven by creative forces and requires a certain amount of innovative, revolutionary, and inventive thinking. This aspect of the industry is attracting a pool of creative individuals, most of them youngsters [46].

According to the CII-BCG report (2017), in the next five years this industry is anticipated to double, growing in the range of 11 to 12 percent, and is assured to provide 7 to 8 lakh additional new jobs. With evolving business models, consumer demands, and digital and technological changes, the industry needs a unique and talented workforce. The industry alone requires 1,40,000-1,60,000 trained or employable persons yearly for the following 5 years. The report reveals that the industry had a direct economic impact of Rs 1,35,000 crore and employed more than 1 million individuals in 2017. The overall economic impact of this industry on the economy was Rs 4,50,000 crore, adding 2.8% to India's GDP.
The total employment openings — direct, indirect, and induced — spawned by this industry stood near 4 million in the year 2017 [47]. Anil Pant, MD & CEO of Aptech Ltd, stated in December 2020 that the M&E industry in India is presently at the stage where the IT sector was two decades ago: on the verge of a boom, providing large-scale placements. Over the coming decade, this industry is set to grow at a 20-25% CAGR and is anticipated to generate up to 3 lakh jobs every year [48].

There has been an enlargement of jobs within the media and entertainment sector across the spectrum. Current media offer a gamut of job opportunities, all driven by innovative forces and requiring a certain amount of revolutionary thinking. This industry is an umbrella zone for several interesting and potential-filled career avenues for aspirants. Being extensively technology- and skill-based, these jobs are enormously growth-oriented [49-50]. Moreover, these new employment possibilities imply a need to revamp existing skill sets. Business skills such as strategic planning and operational planning can benefit an organization's vision, execution, and progress toward goals. Additionally, transdisciplinary abilities such as good listening skills, proactivity, and thinking out of the box can help one stand out in the crowd. Today's highly competitive job situation continually calls for students to be prepared. The industry is especially driven by strong consumption in rural areas and small cities, with the help of regional media and flourishing new media agencies. This industry is very versatile, and the career opportunities in this discipline are wide. It will keep growing at double speed, with more choices and higher pay. Most firms in India are in the private sector, while quite a few are under the government. Apart from government organizations, media firms, industrial houses, and startups are launching magazines, newspapers, and TV stations. The advertising industry is another significant employer for media professionals. Some of the top advertising agencies in India include J Walter Thompson, O&M, Mudra Communication Pvt. Ltd., FCB Ulka Advertising Ltd., and so on. For individuals keen on marketing and advertising for organizations on the web, social media jobs provide a good direction. Social media is now becoming one of the highest-salaried fields in India [28], [51].

INTERNATIONALIZATION OF THE INDIAN ENTERTAINMENT INDUSTRY :

The cut-throat competition after liberalization made way for MNCs to enter India. This made Indian companies explore international and global markets, suppliers, and technology. Internationalization has given unique opportunities for innovation and growth. The Government's permitting of FDI investments, together with digitalization and growth in networks, has helped the two-way exchange of programming globally and enhanced the customer experience. The growth of the tourism, hospitality, and information technology industries has helped expand the brand value of our country. The various segments of the entertainment industry have aided international finance and internationalization. FDI and co-production contracts with many countries have increased the export potential of the entertainment sector. Digitization has given a further boost to internationalization and expanded the market globally, making this industry a star of the Indian economy.
The uniqueness and cultural diversity have attracted many viewers, producers, and distributors globally [28], [51][52][53]. The Indian entertainment industry is flourishing at an impressive pace, with the government permitting up to 100% FDI, digitalization, rapid growth in cable networks, and foreign media and production. Films, TV, and sports attracted sizeable investments from Hollywood majors (Universal, DreamWorks, Sony, Disney, Fox, and others) seeking to capitalize on economic activities. The inflow of Foreign Direct Investment (FDI) in the information and broadcasting (I&B) sector (along with print media) stood at US$ 9.33 billion for the period April 2000 - June 2020. In November 2020, multiplex chain operator PVR Cinemas and India Accelerator (IA) agreed to guide and support start-ups in this sector. In October 2020, Zee5 partnered with Kellton Tech to build a cloud-native content management system (CMS) that can provide relevant, real-time content everywhere. GB Labs launched 'Unify Hub' in October 2020 to supply services and products that help boost the scale of production and post-production for artists in this sector. Zee Entertainment Enterprises Limited introduced its first lifestyle channel, ZEE Zest, in October 2020. The UK-based Langhard, a film investment, production, and distribution company, partnered with content company Divinity Studios in September 2020. Langhard will invest in the development, commissioning, production, and distribution of more than twenty films and web series. Kolkata Knight Riders collaborated with Meraki Sport and Entertainment in September 2020. Dream Sport, the parent firm of Dream11 (an Indian fantasy sports app), raised US$ 225 million in September 2020, increasing its valuation to ~US$ 2.5 billion; ChrysCapital, TPG Tech Adjacencies (TTAD), Tiger Global Management, and Footpath Ventures led the financing round. Zee Entertainment Enterprises Ltd launched Zee Plex, a new pay-per-view film distribution service, in September 2020 to show new films on OTT and DTH platforms and meet the increasing demand for watching movies during the pandemic. BenQ (a display technology device manufacturer) launched the brand-new Home Entertainment Projector TH585 in September 2020 to fulfill the increasing demand for watching content at home and serve the rising OTT demand in India. ALTBalaji collaborated with Chingari (a short-video app) to proliferate its reach in the 'Hindi Speaking Market' (HSM) and expand its market. Nickelodeon India partnered with Nickelodeon International in September 2020 to co-produce a new series; this alliance aims at blending eastern and western storytelling [42]. The Indian Government has supported this industry's growth with various initiatives, such as digitizing the cable distribution sector to attract more institutional funding and raising the Foreign Direct Investment (FDI) limit from 74% to 100%. On September 2, 2020, the Government of India revealed its plans to set up an Animation, Visual Effects, Gaming, and Comic (AVGC) Centre for Excellence in collaboration with IIT Bombay [42]. The Media & Entertainment sector in India is estimated to see a notable decline of 20% in overall revenue in FY2021 as an effect of COVID.
Digital segment, OTT, and online gaming are showing positive growth, as digital consumption has increased in this pandemic situation. It is predicted that all the sectors may bounce back and grow by 33% in FY2022 [42]. Some important regulating bodies in the Media & Entertainment industry in India are the following. Ministry of Information and Broadcasting: its bureau is accountable for the formulation and enactment of the framework, policies, laws, rules, and regulations related to the Indian film, press, broadcasting, and information industry. Central Board of Film Certification (CBFC or Censor Board): it scrutinizes whether a film is apt for an Indian audience. Telecom Regulatory Authority of India (TRAI): it regulates the tariffs payable by service providers in the broadcasting sector and by TV channel subscribers [54-55]. The Indian Government is supporting the M&E industries, and the FDI limits in different segments have been liberalized:
- FDI up to 100 percent is allowed in publications of scientific and technical magazines, specialty journals, and periodicals.
- Movies, animation, gaming, and VFX are allowed up to 100 percent FDI through the automatic route.
- The limit for direct-to-home (DTH) satellite and digital cable networks was raised from 74 to 100 percent.
- FDI of up to 26 percent is allowed in an Indian firm dealing with the publication of newspapers and periodicals; FDI of up to 26 percent is likewise allowed in Indian editions of foreign magazines.
In the current budget (2021), the government announced an increase in FDI from 26 to 49 percent, and a few benefits were given for startups [54][55]. The Telecom Regulatory Authority of India (TRAI), with the Ministry of Information and Broadcasting, Government of India, is trying to ameliorate the broadcasting sector. The National Centre of Excellence for the Animation, Gaming, Visual Effects, and Comics industry will be established by the Indian government in Mumbai. The Canadian and Indian governments have agreed upon an audio-visual co-production deal to interchange and explore culture and creativity. By 2024, the AVGC and audio-visual services sectors are expected to be among the fastest-growing sectors. To attract institutional funding, the Government of India has taken many initiatives: it has increased the FDI limit to 100% in many segments, digitized most of the sectors, and granted the film industry industry status for easier access to institutional finance. CHALLENGES AND THREATS OF THE ENTERTAINMENT INDUSTRY : 1. The high tax burden on the Indian media and entertainment industry. 2. Financial management, because large capital investments are required and there is a lack of financial support. 3. The cost of promotion is very high, as there is cut-throat competition. 4. Piracy, the 'killer disease', which reduces the income of the owners. 5. Low screen penetration; more investment in theatres and multiplexes is required for the Indian population, but due to the current situation they are not so active. 6. The sustainable growth of the industry is a challenge. 7. Low concentration on, and importance given to, the Indian entertainment industry by the Indian Government. 8. Lack of transparency in the Indian entertainment industry. 9. Compliance with laws and regulations of the government and other regulatory bodies is the biggest challenge.
10. Licensing requirements, copyright, and piracy issues. 11. Threats to media and entertainment channels and peril to the lives of people employed in this industry; worries about data privacy; inequity, despotism, and harassment at the workplace; and environmental issues and their control using green technology are also major challenges [25], [42], [57]. FUTURE TRENDS AND OPPORTUNITIES :
- Online and offline content appear to fulfill two different desires of the public; TV (offline) even now has a special place for Indian viewers. Rural viewers and the older generation prefer the big screen and can relate much better to TV content.
- In the coming days, OTT suppliers will design content based on viewer-friendly preferences and recommendations, much like Kindle.
- Online content is preferred over offline content, as it gives the viewer greater management and control; in the future, content can be more attractive and participative, and will be more involving.
- Price may be an obstacle to availing online streaming content. Different rates could be given as options based on consumers' needs.
- The Indian Media & Entertainment sector is predicted to grow swiftly; the industry has the potential to reach US$ 100 billion by 2030.
- Growth can be prognosticated in retail advertisement.
- The Media and Entertainment sector in India is anticipated to flourish by 25 percent in 2021.
- By 2024, India is expected to emerge as the world's sixth-largest OTT (over-the-top) streaming market.
- By 2030, the Indian media and show business is anticipated to reach US$ 100 billion and may use cloud-based storage of information [36][40] [58].
FINDINGS :
- Initially, entertainment started with storytelling, a way to pass on culture, traditions, values, and history. The introduction of television brought a big change in entertainment, and now online entertainment is the trend; in this way the method of delivering entertainment has expanded progressively.
- The Indian government has increased FDI limits, and many international collaborations have helped the Indian entertainment industry grow nationally and internationally.
- There are many employment opportunities in the entertainment industry, for people who work both on screen and behind the screen. Due to the pandemic, a few classes of workers in the industry are suffering, but others are surviving on the online platforms. On the whole, the industry has created a lot of employment opportunities.
- Television has retained its position as the largest entertainment segment, while digital media overtook the print entertainment sector and online gaming overtook the filmed entertainment segment.
- In the future, the online form of entertainment will prevail and the OTT platform will boom.
SUGGESTIONS :
- The Indian government should give a little more concentration and importance to the entertainment industry, as it contributes a larger portion of the economy.
- The Indian government should try to reduce the tax burden and give some subsidies for development.
- The government and other regulating bodies should try to make licensing, copyright, and other legal procedures simpler.
- The government, regulating bodies, and the industry should try to find ways to reduce piracy and increase transparency.
- Customize online entertainment content based on customers' tastes, preferences, and financial conditions.
- The government, together with the entertainment industry, must try to work on the personal and financial safety and stability of all the stakeholders of the entertainment industry, especially during this pandemic situation.
CONCLUSION : The Indian entertainment business has evolved, grown, and is unique not only in terms of volume and variety but also in terms of the sheer number of viewers. Many positive and negative changes are taking place due to the pandemic. A few sectors of the entertainment industry are badly affected by COVID, such as the television, print, film, out-of-home media, and sports industries, whereas other segments, mainly the digital media, animation, VFX, and online gaming industries, are benefitting. Overall, the sub-industries are attempting to survive, innovate, and develop. The industry is getting global recognition and is also providing employment opportunities [59]. The Indian M&E business has many segments: television, filmed entertainment, print, animation, digital media, VFX, live events, out-of-home media, radio, online gaming, music, and sports; all are expanding at high speed and are expected to grow at a significantly higher rate.
The trouble with social computing systems research Social computing has led to an explosion of research in understanding users, and it has the potential to similarly revolutionize systems research. However, the number of papers designing and building new sociotechnical systems has not kept pace. We analyze challenges facing social computing systems research, ranging from misaligned methodological incentives, evaluation expectations, and double standards, to relevance compared to industry. We suggest improvements for the community to consider so that we can chart the future of our field. Introduction The rise of social computing is impacting SIGCHI research immensely. Wikipedia, Twitter, Delicious, and Mechanical Turk have helped us begin to understand people and their interactions through large, naturally occurring datasets. Computational social science will only grow in the coming years. Likewise, those invested in the systems research community in social computing hope for a trajectory of novel, impactful sociotechnical designs. By systems research, we mean research whose main contribution is the presentation of a new sociotechnical artifact, algorithm, design, or platform. Systems research in social computing is valuable because it envisions new ways of interacting with social systems, and spreads these ideas to other researchers and the world at large. This research promises a future of improved social interaction, as well as useful and powerful new user capabilities. Traditional CSCW research had no shortage of systems research, especially focusing on distributed teams and collaboration [1][17][30]. In some ways, systems research has already evolved: we dropped our assumptions of single-display, knowledge-work-focused, isolated users [10]. This broader focus, married with a massive growth in platforms, APIs, and interest in social computing, might suggest that we will see lots of new interesting research systems. Unfortunately, the evidence suggests otherwise. Consider submissions to the Interaction Beyond the Individual track at CHI 2011. Papers submitted to this track that chose "Understanding Users" as a primary contribution outnumbered those that selected "Systems, Tools, Architectures and Infrastructure" by a ratio of four to one this year [26]. This 4:1 ratio may reflect overall submission ratios to CHI, represent a steady state and not a trend, or equalize out in the long term in terms of highly-cited papers. However, a 4:1 ratio is still worrying: a perceived publication bias might push new researchers capable of both types of work to study systems rather than build them. If this happens en masse, it might threaten our field's ability to look forward. In this paper we chart the future of social computing systems research by assessing three challenges it faces today. First, social computing systems are caught between social science and computer science, with each discipline de-valuing work at the intersection. Second, social computing systems face a unique set of challenges in evaluation: expectations of exponential growth and criticisms of snowball sampling. Finally: how can academic social computing research compete or cooperate with industry? Where possible, we will offer proposed solutions to these problems. They are not perfect; we hope that the community will take up our suggestions, improve them, and encourage further debate. Our goal is to raise awareness of the situation and to open a conversation about how to fix it.
Related Work This paper is not meant to denote all social or technical issues with social computing systems. For example, CSCW and social computing systems suffer from the socio-technical gap [2], critical mass problems [25], and misaligned incentives [15]. These, and many others, are critical research areas in their own right. We are also not the first to raise the plight of systems papers in SIGCHI conferences. All systems research faces challenges, particularly with evaluation. Prior researchers argue that reviewers should moderate their expectations for evaluations in systems work: • Evaluation is just one component of a paper, and issues with it should not doom a paper [23][27]. • Longitudinal studies should not be required [22]. • Controlled comparisons should not be required, if the system is sufficiently innovative or aimed at wicked problems [14][22][29]. Not all researchers share these opinions. In particular, Zhai argues that existing evaluation requirements are still the best evaluation strategy we know of [35]. Others have also discussed methodological challenges in HCI research. Kaye and Sengers related how psychologists and designers clashed about study methodology in the conversation about discount usability analysis methods [18]. Barkhuus traced the history of evaluation at CHI and found fewer users in studies and more papers using studies in recent years [3]. Novelty: Between A Rock and A Hard Science Social computing systems research bridges the technical research familiar to CHI and UIST with the intellectual explorations of social computing, social science, and CSCW. Ideally, these two camps should be combining methodological strengths. Unfortunately, they can actively undermine each other. Following Brooks [8] and Lampe [20], we split the world of sociotechnical research into those following a computer science engineering tradition ("Builders") and those following a social science tradition ("Studiers"). Of course, most SIGCHI researchers work as both, including the authors of this paper. But these abstractions are useful to describe what is happening. Studiers: Strength in Numbers Studiers' goal is to see proof that an interesting social interaction has occurred, and to see an explanation of why it occurred. Social science has developed a rich set of rigorous methods for seeking this proof, both quantitative and qualitative, but the reality of social computing systems deployments is that they are messy. This science vs. engineering situation creates understandable tension [8]. However, we will argue that the prevalence of Studiers in social computing (reflected in the 4:1 paper submission ratio) means that Studiers are often the most available reviewers for a systems paper on a social computing topic.
Social computing systems are often evaluated with field studies and field experiments (living laboratory studies [10]), which capture ecologically valid situations. These studies will trade off many aspects of validity, producing a biased sample or large manipulations that make it difficult to identify which factors led to observed behavior. When Studiers review this work, even well-intentioned ones may then fall into the Fatal Flaw Fallacy [27]: rejecting a systems research paper because of a problem with the evaluation's internal validity that, on balance, really should not be damning. Solutions like online A/B testing, engaging in long-term conversations with users, and multi-week studies are often out of scope for systems papers [22]. This is especially true in systems with small, transient volunteer populations. Yet Studiers may reject a paper until it has conclusively proven an effect. Social computing systems are particularly vulnerable to Studier critique because of reviewer sampling bias. Some amount of methodological sniping may be inevitable in SIGCHI, but the skew in the social computing research population may harm systems papers here more than elsewhere. In particular, it is more likely that the qualified reviewers on any given social computing topic will be Studiers and not Builders: there are relatively few people who perform studies on tangible interaction, for example, but a large number of those researching Facebook are social scientists. Builders: Keep It Simple, Stupid - or Not? Given the methodological mismatch with Studiers, we might consider asking Builders to review social computing systems papers. Unfortunately, these papers are not always able to articulate their value in a way that Builders might appreciate either. Builders want to see a contribution with technical novelty: this often translates into elegant complexity. Memorable technical contributions are simple ideas that enable interesting, complex scenarios. Systems demos will thus target flashy tasks, aim years ahead of the technology adoption curve, or assume technically literate (often expert) users. For example, end user programming, novel interaction techniques, and augmented reality research all make assumptions about Moore's Law, adoption, or user training. Social computing systems contributions, however, are not always in a position to display elegant complexity. Transformative social changes like microblogging are often successful because they are simple. So, interfaces aimed ahead of the adoption curve may not attract much use on social networks or crowd computing platforms. A complex new commenting interface might be a powerful design, but it may be equally difficult to convince large numbers of commenters to try it [19]. Likewise, innovations in underlying platforms may not succeed if they require users to change their practice.
Caught In the Middle Researchers are thus stuck between making a system technically interesting, in which case a crowd will rarely use it because it is too complex, and simplifying it to produce socially interesting outcomes, in which case Builder colleagues may dismiss it as less novel and Studier colleagues may balk at an uncontrolled field study. Here, a CHI metareviewer claims that a paper has fallen victim to this problem (paraphrased; all issues pointed out by metareviewers are paraphrased from single reviews, though they reflect trends drawn from several): The contribution needs to take one strong stance or another. Either it describes a novel system or a novel social interaction. If it's a system, then I question the novelty. If it's an interaction, then the ideas need more development. For example, Twitter would not have been accepted as a CHI paper: there were no complex design or technical challenges, and a first study would have come from a peculiar subpopulation. It is possible to avoid this problem by veering hard to one side of the disciplinary chasm: recommender systems and single-user tools like Eddi [6] and Statler [32] showcase complexity by doing this. But to accept polarization as our only solution rules out a broad class of interesting research. A Proposal for Judging Novelty The combination of strong Studiers and strong Builders in the subfield of social computing has immense potential if we can harness it. The challenge as we see it is that social computing systems cannot articulate their contributions in a language that either Builders or Studiers speak currently. Our goal, then, should be to create a shared language for research contributions. Here we propose the Social/Technical yardstick for consideration. We can start with two contribution types. Social contributions change how people interact. They enable new social affordances, and are foreign to most Builders. For example: • New forms of social interaction: e.g., shared organizational memory [1] or friendsourcing [7]. • Designs that impact social interactions: for example, increasing online participation [4]. • Socially translucent systems [13]: interactive systems that allow users to rely on social intuitions. Technical contributions are novel designs, algorithms, and infrastructures. They are the mechanisms supporting social affordances, but are more foreign to Studiers. For example: • Highly original designs, applications, and visualizations designed to collect and manage social data, or powered by social data (e.g., [6], [33]). • New algorithms that coordinate crowd work or derive signal from social data: e.g., Find-Fix-Verify [5] or collaborative filtering. The last critical element is an interaction effect: paired Social and Technical contributions can increase each other's value. ManyEyes is a good example [34]: neither visualization authoring nor community discussion is hugely novel alone. The combination, however, produced an extremely influential system.
In this section, we identify emerging evaluation requirements and biases that, on reflection, may not be appropriate. These reflections are drawn from conversations with researchers in the community. They are bound to be filtered by our own experience, but we believe them to be reasonably widespread. Expecting Exponential Growth Reviewers often expect that research systems have exponential (or large) growth in voluntary participation, and will question a system's value without it. Here is a CHI metareviewer, paraphrased: As most of the other reviewers mentioned, your usage data is not really compelling because only a small fraction of Facebook is using the application. Worse, your numbers aren't growing in anything like an exponential fashion. There are a number of reasons why reviewers might expect exponential growth. First, large numbers of users legitimize an idea: growth is strong evidence that the idea is a good one and that the system may generalize. Second, usage numbers are the lingua franca for evaluating non-research social systems, so why not research systems as well? Last, social computing systems are able to pursue large-scale rollouts, so the burden may be on them to try. We agree that if exponential growth does not occur, authors should acknowledge this and explore why. However, it misses the mark to require exponential growth for a research system. One major reason this is a mistake is that it puts social computing systems on unequal footing with other systems research. Papers in CHI 2006 had a median of 16 participants: program committees considered this number acceptable for testing the research's claims [3]. Just because a system is more openly available does not mean that we need orders of magnitude more users to understand its effects. Sixteen friends communicating together on a Facebook application may still give an accurate picture. Another double standard is a conflation of usefulness and usability [21]. Usefulness asks whether a system solves an important problem; usability asks how users interact with the system. Authors typically prove usefulness through argumentation in the paper's introduction, then prove usability through evaluation. Evaluations will shy away from usefulness because it is hard to prove scientifically. Instead, we pay participants to come and use our technology temporarily (assuming away the motivation problem), because we are trying to understand the effects of the system once somebody has decided to use it. This should be sufficient for social computing systems as well. However, reviewers of social computing systems papers will look at an evaluation and decide that a lack of adoption empirically disproves any claim of usefulness. ("Otherwise, wouldn't people flock to the system?") However, why require usefulness (voluntary usage) in social computing systems, when we assume it away (via money) with other systems research? A final double standard is whether we expect risky hypothesis testing or conservative existence proofs of our systems' worthiness [14]. A public deployment is the riskiest hypothesis testing possible: the system will only succeed if it has gotten almost everything right, including marketing, social spread, graphic design, and usability. Systems evaluations elsewhere in CSCW, CHI, and UIST seek only existence proofs of specific conditions where they work. We will not argue whether existence proofs are always the correct way to evaluate a systems paper, but it is problematic to hold a double standard for social computing systems papers.
The second major reason it is a mistake to require exponential growth is that a system may fail to spread for reasons entirely unrelated to its research goals. Even small problems in the social formula could doom a deployment: minor channel factors like slow logins or buggy software have major impact on user behavior [31]. Rather than immediate success, we should expect new ideas to spawn a series of work that gets us continually closer. Achieving Last.fm on the first try is very unlikely; we need precursors like Firefly first. In fact, we may learn the most from failed deployments of systems with well-positioned design rationale. Get Out of the Snow! No Snowball Sampling Live deployments on the web have raised the specter of snowball sampling: starting with a local group in the social graph and letting a system spread organically. CHI generally regards snowball sampling as bad practice. There is good reason for this concern: the first participants will have a strong impact on the sample, introducing systematic and unpredictable bias into the results. Here, a paper metareviewer calls out the sampling technique (paraphrased): The authors' choice of study method, snowball sampling their system by advertising within their own social network, potentially leads to serious problems with validity. Authors must be careful not to overclaim their conclusions based on a biased sample. However, some reviewers will still argue that systems should always recruit a random sample of users, or make a case that the sample is broadly representative of a population. This argument is flawed for three reasons. First, we must recognize that snowball sampling is inevitable in social systems. Social systems spread through social channels; this is fundamental to how they operate. We need to expect and embrace this process. Second, random sampling can be an impossible standard for social computing research. All it takes is a single influential user to tweet about a system and the sample will be biased. Further, many social computing platforms like Twitter and Wikipedia are beyond the researcher's ability to recruit randomly. While an online governance system might only be able to recruit citizens of one or two physical regions, leading to a biased sample, it is certainly enough to learn the highest-order-bit lessons about the software. Finally, snowball sampling is another form of convenience sampling, and convenience sampling is common practice across CHI, the social sciences, and systems research. Within CHI, we often gather the users in laboratory or field studies by recruiting locals or university employees, which introduces bias: half of CHI papers through 2006 used a primarily student population [3]. We may debate whether convenience sampling in CHI is reasonable on the whole (e.g., Barkhuus [3]), but we should not apply the criteria unevenly. Moreover, we should keep in mind that convenience and snowball sampling are widely accepted methods in social science to reach difficult populations. A Proposal for Judging Evaluations Because our methodological approaches have evolved, it is time to develop meaningful and consistent norms about these concerns. An exhortation to take the long view and review systems papers holistically (e.g., [14][22][29]) can be difficult to apply consistently. So, in this section we aim for more specific suggestions.
Separate Evaluation of Spread from Steady-State We argued that exponential growth is a faulty signal to use in evaluation. But there are times when we should expect to see viral spread in an evaluation: when the research contribution makes claims about spread. We can separate two types of usage evaluations: spread and steady-state. Most social systems go through both phases: (1) an initial burst of adoption, then (2) upkeep and ongoing user interaction. Of course, this is a simplification: no large social system is ever truly in steady-state due to continuous tweaks, and most new systems go through several iterations before they first attract users. But, for the purposes of evaluation, these phases ask clear and different questions. An evaluation can focus on how a system spreads, or it can focus on how it is used once adopted. Paper authors should evaluate their system with respect to the claims they are making. If the claims focus on attracting contributions or increasing adoption, for instance, then a spread evaluation is appropriate. These authors need to show that the system is increasing contributions or adoption. If instead the paper makes claims about introducing a new style of social interaction, then we can ignore questions of adoption for the moment and focus on what happens when people have started using the system. This logic is parallel to laboratory evaluations: we solve the questions of motivation and adoption by paying participants, and focus on the effects of the software once people are using it (called compelled tasks in Jared Spool's terminology). Authors should characterize their work's place on the spread/steady-state spectrum, and argue why the evaluation is well matched. They should call out limitations, but evaluations should not be required to address both spread and steady-state usage questions. Make A Few Snowballs As we argued, it is almost impossible to get a random sample of users in a field evaluation. Authors should thus not claim that their observations generalize to an entire population, and should characterize any biases in their snowballed sample. Beyond this, we propose a compromise: a few different snowballs can help mitigate bias. Authors should make an effort to seed their system at a few different points in the social network, characterize those populations and any limitations they introduce, and note any differences in usage. But we should not reject a paper because its sample falls near the authors' social network. There may still be sufficient data to evaluate the authors' claims relatively even-handedly. Yes, perhaps the evaluation group is more technically apt than the average Facebook user; but most student participants in SIGCHI user studies are too [3]. Treat Voluntary Use As A Success Finally, we need to stop treating a small amount of voluntary use as a failure, and instead recognize it as success. Most systems studies in CHI have to pay participants to come in and use buggy, incomplete research software. Any voluntary use is better than many CHI research systems will see. Papers should get extra consideration for taking this approach, not less.
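As a rough illustration of the "few snowballs" suggestion, the following Python sketch (using networkx) seeds a breadth-first snowball sample at several distinct points of a synthetic social graph and compares each sample's mean degree with the population's; the graph model, sample sizes, and seeds are illustrative assumptions, not anything prescribed by this paper.

import random
import networkx as nx

def snowball(graph, seed, size):
    # Breadth-first snowball sample of up to `size` nodes starting from `seed`.
    visited, frontier = [seed], [seed]
    while frontier and len(visited) < size:
        next_frontier = []
        for node in frontier:
            for neighbor in graph.neighbors(node):
                if neighbor not in visited and len(visited) < size:
                    visited.append(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return visited

random.seed(0)
g = nx.barabasi_albert_graph(n=5000, m=3, seed=0)   # stand-in for a social network
for s in random.sample(list(g.nodes), 3):           # "a few different snowballs"
    sample = snowball(g, s, 200)
    mean_degree = sum(g.degree(n) for n in sample) / len(sample)
    print(f"seed {s}: sample mean degree = {mean_degree:.1f}")
population_mean = sum(d for _, d in g.degree()) / g.number_of_nodes()
print(f"population mean degree = {population_mean:.1f}")

Comparing the per-seed statistics against the population value is one concrete way to characterize each snowballed population and the limitations it introduces, as suggested above.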
Research At A Disadvantage with Industry Social computing systems research now needs to forge its identity between traditional academic approaches and industry. Systems research is used to being ahead of the curve, using new technology to push the limits of what is possible. But in the age of social computing, academic research often depends on access to industry platforms. Otherwise, a researcher must function as an entire start-up (marketing, engineering, design, QA) and compete with companies for attention and users. It is not surprising that researchers worry whether startups are a better path (see Landay [22] and associated comments). If they stay in academia, researchers must satisfy themselves with limited access to platforms they didn't create, or chance attracting a user population from scratch and then work to maintain it. What path should we follow to be the most impactful? One challenge is that, in many domains, users are largely in closed platforms. These platforms are typically averse to letting researchers make changes to their interface. If a researcher wanted to try changing Twitter to embed non-text media in tweets, they should not expect cooperation. Instead, we must re-implement these sites and then find a complicated means of attracting naturalistic use. For example, Hoffman et al. mirrored and altered Wikipedia, then used AdWords advertisements cued on Wikipedia titles to attract users [16]. Wikidashboard also proxied Wikipedia [33]. Many would go farther and argue that social computing systems research is a misguided enterprise entirely. Brooks argues that, with the exception of source control and Microsoft Word's Track Changes, CSCW has had no impact on collaboration tools [8]. Welsh claims that researchers cannot really understand large-scale phenomena using small, toy research environments. Academic-industry partnerships represent a way forward, though they do have drawbacks. Some argue that if your research competes with industry, you should go to an industry lab (see Ko's comment in Landay [22]). Industrial labs and academic collaboration have worked well with organizations like Facebook, Wikimedia, and Microsoft. However, for every academic collaboration, there are many nonstarters: companies want to help, but cannot commit resources to anything but shipping code. Industrial collaboration opens up many opportunities, but it is important to create other routes to innovation as well. Some academics choose to create their own communities and succeed, for example Scratch [28], MovieLens (http://www.movielens.org/), and IBM Beehive [12]. These researchers have the benefit of continually experimenting with their own platforms and modifying them to pursue new research. Colleagues' and our own experience indicate that this approach carries risks as well. First, many such systems never attract a user base. If they do, they require a large amount of time for support, engineering, and spam fighting that pays back little. This research becomes a product. Such researchers express frustration that their later contributions are then written off as "just scaling up" an old idea.
Even given the challenges, it is valuable for researchers to walk this line because they can innovate where industry does not yet have incentive to do so. Industry will continue to refine existing platforms [11], but systems research can strike out to create alternate visions and new futures. Academia is an incubator for ideas that may someday be commercialized, and for students who may someday be entrepreneurs. It has the ability to try, and to fail, quickly and without negative repercussion. Its successes will echo through replication, citations, and eventual industry adoption. Conclusion Social computing systems research can struggle to succeed in SIGCHI. Some challenges relate to our research questions: interesting social innovations may not be interesting technically, and are meaningless to social scientists without proof. In response, we laid out the Social/Technical yardstick for valuing research claims. Other challenges are in evaluation: a lack of exponential growth and snowball sampling are incorrectly dooming papers. We argued that these requirements are unnecessary, and that different phases of system evaluation are possible: spread and steady state. Finally, we considered the place of social computing systems research with respect to industry. As much as we would like to have answers, we know that this is just the beginning of a conversation. We invite the community to participate and contribute their suggestions. We hope that this work will help catalyze the social computing community to discuss the role of systems research more openly and directly.
Risk of Complications in HIV-TB Co-Infection: A Hospital-Based Pair-Matched Case–Control Study Human immunodeficiency virus (HIV) is known to increase the morbidity and mortality among people with tuberculosis (TB). (1,2) Various systemic complications due to HIV among cases of TB have been reported. (3-8) But there is a paucity of information quantifying the risk of systemic complications due to HIV among cases of TB. A pair-matched case–control study comparing cases of HIV-TB co-infection with TB would help in quantifying the risk. Thus, this study was undertaken in one of the tertiary care referral centers of coastal South India. Introduction Human immunodeficiency virus (HIV) is known to increase the morbidity and mortality among people with tuberculosis (TB). (1,2) Various systemic complications due to HIV among cases of TB have been reported. (3-8) But there is a paucity of information quantifying the risk of systemic complications due to HIV among cases of TB. A pair-matched case-control study comparing cases of HIV-TB co-infection with TB would help in quantifying the risk. Thus, this study was undertaken in one of the tertiary care referral centers of coastal South India. Materials and Methods A hospital record-based pair-matched case-control study was undertaken, after obtaining the required approval from the institutional ethics committee, in two tertiary care hospitals of Kasturba Medical College, Mangalore, Karnataka State. Operational definitions TB was confirmed bacteriologically (i.e., smear-positive for acid-fast bacilli) and HIV status was confirmed by the criteria established by the National AIDS Control Organization (NACO), India. (9) Cases were patients with HIV-TB co-infection, irrespective of age and gender. Controls were patients with TB. All those cases and controls whose records were available with the hospital for the last 1 year were included in the study. Sample size Using the formula N = (Zα + Zβ)² (p1q1 + p2q2) / (p2 − p1)², p1 and p2 were assumed to be 56% and 10%, respectively. (10,11) With a power of 0.01 and a precision set at 0.05, we got 14 cases. With a ratio of four controls for each case, we had to get 56 pair-matched controls. Sampling The list of cases fulfilling the eligibility criteria was made. The cases were then categorized according to age, sex, and the following confounding factors: smoking, alcoholism, diabetes mellitus, hypertension, chronic obstructive pulmonary disease (COPD), and malignancies. A similar list of controls for the same period was prepared. By simple random sampling, 14 cases were obtained. From a similar list for controls, 56 controls pair-matched for age, gender, and confounding factors were obtained. Matching Age was matched in 5-year intervals. Gender was also matched. The following confounding factors were matched: smoking, alcoholism, diabetes mellitus, hypertension, COPD, and malignancies. Study instrument A semi-structured proforma, devised after pretesting, was used to collect the basic details of the patients (name, age, sex, occupation, socioeconomic status), the categorization of cases and controls, and their co-morbidities and complications. Analysis Odds ratios were computed to quantify the risk from the matched-set counts, where "nj(+)" denotes the number of matched sets in which the case is (+) and exactly "j" of the controls are (+). Similarly, "nj(−)" denotes the number of matched sets in which the case is (−) and exactly "j" of the controls are (+). (12)
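To make the two computations above concrete, here is a short Python sketch covering the stated sample-size formula and a matched odds-ratio calculation. The Z values (two-sided α = 0.05, power = 0.80) are illustrative assumptions, since the paper's wording on power is ambiguous, and the 1:4 Mantel–Haenszel estimator shown is the standard estimator consistent with the nj(+)/nj(−) notation of reference (12), applied here to hypothetical matched-set counts rather than the study's data.

import math

# Sample size: N = (Z_alpha + Z_beta)^2 * (p1*q1 + p2*q2) / (p2 - p1)^2
z_alpha, z_beta = 1.96, 0.84        # assumed: two-sided alpha = 0.05, power = 0.80
p1, p2 = 0.56, 0.10                 # proportions assumed in the paper, from (10,11)
q1, q2 = 1 - p1, 1 - p2
n = (z_alpha + z_beta) ** 2 * (p1 * q1 + p2 * q2) / (p2 - p1) ** 2
print(f"cases required: {math.ceil(n)}")   # ~13, close to the 14 cases used

# Mantel-Haenszel odds ratio for 1:M matched sets (here M = 4):
# n_plus[j]  = sets with an exposed case and exactly j exposed controls
# n_minus[j] = sets with an unexposed case and exactly j exposed controls
def matched_odds_ratio(n_plus, n_minus, m=4):
    numerator = sum((m - j) * n_plus[j] for j in range(m))
    denominator = sum(j * n_minus[j] for j in range(1, m + 1))
    return numerator / denominator

n_plus = [3, 2, 1, 1, 0]    # hypothetical counts, j = 0..4
n_minus = [5, 1, 1, 0, 0]   # hypothetical counts, j = 0..4
print(f"matched OR = {matched_odds_ratio(n_plus, n_minus):.2f}")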
Distribution of confounding factors Among the confounding factors matched were COPD (1) and oral cancer (1). The factors were matched; controls with oral cancer were of different stages. The morbidity profile along with the odds ratios is presented in Table 1. Discussion We could not find comparable studies. In a retrospective cohort study, (2) it was found that the case fatality rate was higher among HIV-TB co-infection than among TB cases after adjusting for confounding factors. We found that a higher proportion of the cases were males and in the middle-aged group. Higher morbidity among the middle-aged group and males has been reported from the USA. (13) A higher proportion among males may be because of underdetection among females. (14) Economic compulsions and stigma associated with HIV may be a barrier to the detection of HIV-TB co-infection. (15) Case-control studies with small numbers and only hospital-based controls make interpretations difficult. Being a tertiary care center, the referred patient profiles may not be representative, creating a bias. Because of the paucity of information, this study gives inputs to plan the further prospective cohort studies required to arrive at relative and attributable risks. Table 1: Morbidity profile of cases and controls along with the odds ratios (columns: systems involved, cases, controls, odds ratio*). *Odds ratios were calculated as mentioned under "Analysis." (12)
IMPLEMENTATION OF REGULATORY ACTS ON THE REFORM OF THE COAL INDUSTRY OF UKRAINE IN THE CONTEXT OF THE USE OF ENVIRONMENTAL TECHNOLOGIES The article presents the results of monitoring regulations on the reform of the coal industry and coal regions of Ukraine in the context of the use of environmental technologies. It is established that the legal acts use different terms for the reform of the coal industry and coal regions of Ukraine in the context of the use of environmental technologies, namely: modernization, transformation, restructuring, reconversion, re-industrialization, and decarbonisation. These definitions had the following substantive content: reforming the coal-mining complex; reforming property relations while improving the industry management system; reforming the coal industry's scientific institutions; reconstruction of facilities; reconstruction of coal enterprises in order to avoid inefficient decisions in projects; reconstruction of the mine fund; development of the industrial and social sphere of the coal enterprises of the branch; enterprise reconstruction; technical re-equipment and modernization of coal-mining enterprises; reconstruction and technical re-equipment; optimization of the structure of state-owned enterprises of the coal industry; reconversion (re-industrialization) of coal regions; technical measures for updating the mine fund; prospects for further development and a break-even level of production and economic activity of Ukrainian mines; reorganization of enterprises in terms of concentrating all personnel, financial, and material resources on prospective mines; decarbonisation of the national and global economies; decarbonisation of energy. It is established that the complex of legal acts regulating reform of the coal industry in Ukraine includes a number of programs and normative documents that are mostly declarative and do not provide for overcoming the social problems in the coal regions of Ukraine. However, today's global challenges call for the introduction of the concept of just transformation: a model of development that envisages a decent life and fair earnings for all workers and communities affected by the process of active energy transition toward a decarbonised economy and the use of environmental technologies. The problems of reforming the coal industry and coal regions of Ukraine in the context of the use of environmental technologies have been raised over the last twenty years, as evidenced by the succession of regulatory acts on the reform of the coal industry and coal regions of Ukraine in this context. Among modern regulations, the following should be mentioned: the Energy Strategy of Ukraine for the period up to 2035 "Security, Energy Efficiency, Competitiveness" [1], the Concept for Reforming and Development of the Coal Industry for the period up to 2020 [2], the Concept of the "green" energy transition of Ukraine by 2050 [3], the Law of Ukraine "On the Main Principles (Strategy) of the State Environmental Policy of Ukraine up to 2030" [4], and others [5][6][7][8][9][10]. However, all these regulations use different terms for the reform of the coal industry and coal regions of Ukraine in the context of the use of environmental technologies, namely: modernization, transformation, restructuring, reconversion, and re-industrialization, which needs to be explored carefully.
FORMULATION OF THE ARTICLE OBJECTIVES (FORMULATION OF THE TASK) Generalization of the current regulatory acts on the reform (modernization, transformation, restructuring, reconversion, re-industrialization) of the coal industry and coal regions of Ukraine in the context of the use of environmental technologies. Presentation of the main research material with full justification of the scientific results obtained. The existing set of regulations for coal industry reform includes a number of programs and regulations that are mainly declarative in nature and do not clearly define the process of reforming the coal industry and overcoming social problems. In order to carry out a thorough analysis of the substantive content of the normative legal acts of Ukraine regarding the reform of the coal industry and coal regions of Ukraine, their monitoring should be conducted (Table 1). This allows the particular emphases of coal industry reform in Ukraine to be highlighted at different periods. Table 1 lists, for each act, the year, the name of the legislative act, its goal, and its measures for modernization, transformation, restructuring, reconversion, and re-industrialization of the coal industry and coal regions of Ukraine. 2001, Ukrainian Coal Program [5]. Goal: improving the efficiency of the coal industry and achieving the amount of coal needed to meet the needs of the national economy. Measures: reforming the coal-mining complex; reforming property relations while improving the industry management system; reviving the role of science in the development of the coal industry and reforming the scientific institutions of the coal industry; creating conditions for defining a mine (cut, or technological complex in the coal-mining sub-sector) as the main link for reforming property relations in the coal industry; developing and approving the procedure for creating new jobs for workers made redundant in the process of reforming the industry, and identifying the sources and amounts of their financing based on the strategy of socio-economic development. Sectoral Agreement. The Agreement is a legal document containing mutual obligations of the Parties to the Agreement, aimed at ensuring the efficient operation of the industry and meeting the economic and social interests and needs of workers. The Ministry of Energy and Coal Industry undertakes: 7.1. With the involvement of industry science, to carry out technical and economic expertise of technical and working projects for the construction and reconstruction of the coal industry enterprises' facilities, in order to avoid inefficient decisions in the projects. 7.2. In order to ensure the stable functioning and development of the industry, when calculating the cost of production and the amount of government support, regardless of ownership, under the conditions specified by law, to provide: a) capital investments for the reconstruction of the mine fund and the development of the industrial and social sphere of the coal industry enterprises; b) for the unevenness of the mining and geological conditions of enterprises; c) the cost of implementing the social and labor guarantees provided for by the applicable law and this Agreement.
Complex solution of the problems of the coal industry's functioning, implementation of systematic measures to utilize its potential for growth in coal production, increase efficiency, and transfer the coal industry to a non-subsidized and self-supporting mode of activity, while simultaneously solving the environmental and social problems of mining regions and creating favorable conditions in mining regions for the privatization of mines. Effective reform of the coal industry, namely: optimization of the non-core assets of coal-mining enterprises; enhancing the investment attractiveness of coal-mining enterprises; defining the mechanism of social protection for dismissed workers and solving environmental problems; accelerating the pace of mine preparation for privatization; determining concrete measures to reduce the cost of finished commodity coal products; bringing commodity coal prices to an economically sound level. This option will allow measures to be taken for the elimination of loss-making mines, bring coal enterprises to a break-even level of work with the production potential to ensure the energy security of the country, and create conditions for real attraction of private investments in the development of coal-mining enterprises with their subsequent privatization. The main risk in applying this approach is the dependence of the period of technical re-equipment of prospective mines, and of liquidation or conservation of coal-mining enterprises, on stable and full financing. During the reform of the state-owned enterprises of the coal industry, it is envisaged to divide the mine fund into the following groups: prospective mines, which have a significant amount of industrial coal reserves and the ability to reach a break-even level of work as soon as possible; and non-prospective mines, which are divided into two sub-groups: mines subject to conservation (in the absence of a buyer during privatization, and provided a feasibility study shows their activity can be restored in the short run at a profitable level without state support), i.e., mines with low technical and economic indicators, a high level of mine fund wear, a considerable amount of capital investment necessary to bring the mine to a break-even level of work, and a great volume of coal reserves; and mines to be eliminated, i.e., mines that are running out of residual industrial reserves or are unable to break even. In order to optimize the structure of state-owned enterprises of the coal industry, measures were envisaged by order of the Ministry of Energy and Coal Industry.
A set of measures to mitigate the social impact of the coal industry restructuring will be implemented in close connection with the social conversion programs of mine closure / conservation areas, which must also be prepared and implemented with large-scale international assistance. In line with European best practices, such programs include the organization of public works on infrastructure reconstruction, job creation, advisory and financial support for business initiatives, the creation of business incubators, and the introduction of temporary special economic activity regimes in mine closure areas CONCLUSIONS TO THIS RESEARCH AND PROSPECTS FOR FURTHER EXPLORATION IN THIS AREA Thus, the monitoring of the current regulations on the reform of the coal industry and coal regions of Ukraine made it possible to conclude that for the last twenty years in Ukraine the legal and regulatory support included such terms as: reforming, modernization, transformation, restructuring, reconversion, re-conversion decarburization. These definitions had the following meaningful content: reforming the coal-mining complex; reforming property relations with improving the industry management system; reforming the coal industry's scientific institutions; reconstruction of objects; reconstruction of the coal industry enterprises in order to avoid inefficient projects in the projects; development of industrial and social sphere of coal enterprises of the branch; enterprise reconstruction; technical re-equipment and modernization of coal mining enterprises; reconstruction and technical re-equipment; optimization of the structure of state-owned enterprises of the coal industry; reconversion (re-industrialization) of coal regions; technical measures for updating the mine fund; prospect of further development and break-even level of production and economic activity of Ukrainian mines; reorganization of enterprises in terms of concentration of all personnel, financial and material resources on prospective mines; decarburization of national and global economies; decarburization of energy. However, today's global challenges require the introduction of the concept: just transformation is a model of development that provides a decent life and fair earnings for all workers and communities affected by the process of active energy transition (elimination of production capacity, enterprises, etc.). An important principle of effective equitable transformation is a broad social dialogue between all stakeholders: the representatives of public authorities and local self-government, civil society, science, media and business. The state, when developing the necessary national support strategies, should understand the needs of people living in these territories and dependent on monoproduction. In their turn, the representatives of the regions should clearly define the list of their needs and specific models of their satisfaction. Prospects for further exploration in this area should be developed in the development of the state program of fair transformation of coal regions of Ukraine. 
Seven key strategic components are identified:
- maximizing energy efficiency;
- maximum deployment of renewable energy sources (hereinafter, RES) and electrification;
- transition to environmentally friendly transport;
- introduction of a "circular" (closed-loop) economy;
- development of "smart" networks and communications;
- expanding bioenergy and natural carbon sequestration technologies;
- absorption of other CO2 emissions through carbon capture, storage and reuse technologies.

The development of RES combined with energy efficiency measures makes them the most powerful decarbonisation tool for the national and global economies. The decarbonisation of energy will be accompanied by its decentralization and the development of distributed generation, which will lead to a rapid increase in the number of energy objects and connections and in the complexity of energy systems. Managing such systems will require a fundamentally new technology platform: the creation of "smart" networks built on digital technologies and information and communication systems. Decarbonisation in the energy production and supply sectors will contribute to reducing losses in the transportation of natural gas, electricity and heat, which will require significant modernization of backbone and distribution networks, localization of energy supply, and more. Under decarbonisation conditions, a necessary step will be to forecast infrastructure development needs and to optimize the existing transportation, distribution and storage of petroleum products, gas, electricity and heat.

Decarbonisation and greening of transport. State aid for fuel extraction may be permitted only for energy decarbonisation measures and/or measures that will contribute to the strategic goals of energy security and the achievement of Ukraine's energy independence, with a mandatory assessment of compliance with EU law and the principles of the EU acquis.
Assessment of the sustainable redesign of existing buildings in Greece in the context of an undergraduate course: Application of passive solar systems in existing, typical residences

Residential buildings in Greece form an important part of the existing building stock. Most of them were built prior to the first Thermal Insulation Code (1981) and are thus characterised by poor energy performance and increased heating and cooling consumption. The 6th-semester undergraduate course of the NTUA School of Architecture, "Special Topics on Environmental and Bioclimatic Design", attempts to educate students in assessing the thermal characteristics and environmental performance of existing buildings and then proposing and quantitatively evaluating the effect of low-tech and low-cost interventions with the use of energy simulation software (Design Builder®). The paper presents the teaching methodology for the application of passive solar systems, with and without thermal insulation of the building shell and openings, to existing, typical residences built after 1920, which are found mostly in suburban areas and settlements all around Greece, and the assessment of the diurnal thermal performance during the heating period. The results of the study are two-fold and involve, primarily, the teaching outcome of the course and, secondarily, the assessment of simple bioclimatic interventions on existing buildings' energy performance and thermal comfort conditions during the cold period of the year.

Introduction

One of the basic prerequisites of sustainable design is the preservation and upgrade of existing buildings. As Elefante [1] so eloquently put it: "the Greenest Building Is... One That Is Already Built". The demolition of existing buildings and their replacement with new, environmentally friendly ones is neither an economically, nor an ecologically or socially sound alternative. In Greece, more than half of all residential buildings (55%) have no thermal insulation, as they were constructed prior to the first Thermal Insulation Code (1981). Of the total residential buildings, 2.5% were built before 1919, 5.0% in the period up to the 2nd World War, and 48.0% between 1946 and 1980 [2]. Consequently, the refurbishment of these buildings is a "one-way street", as it not only helps preserve the energy initially embodied in them, but can also contribute to important energy savings for heating, cooling and lighting and, at the same time, significantly improve overall comfort conditions and quality of life. Single-family residences (monokatikies) form an important part of the existing building stock and are far more energy-intensive than multi-storeyed residence buildings (polikatoikies) [3]. As far as residences in suburban and rural areas are concerned, their plot arrangement and design is, in most of …

Course description

The discussed course is a 6th-semester elective course that follows a building technology core course (5th semester) on climate-responsive design and further elaborates on the topics of sustainability, climatology, thermal comfort, daylighting, renewable energy sources, eco-friendly systems and energy refurbishments. The incorporation of all these parameters and systems in the design (or refurbishment) process is a key issue for the final architectural and constructional result. Throughout the semester, the topics covered by the lectures are correlated with simple calculations and software tools that help the students quantitatively assess the various aspects of passive and low-energy systems.
The main aim is to consolidate the acquired knowledge on sustainable, environmental and climate-responsive design by introducing the students to thermal simulation analysis and parametric energy performance design. The selected simulation and modelling tool is the Design Builder® software [4].

Teaching goals

The main purpose of the course is to familiarize students with the impact of small-scale/minimal interventions on the thermal properties and energy performance of the building envelope. While the key component variables that are investigated (thermal insulation and relatively simple passive elements) are standard and small-scale, the study presents new potential as it tries to attract students' attention to existing and architecturally rather uninteresting buildings, which nevertheless represent a numerically important part of the existing building stock and in some cases may possess qualitative characteristics that can enhance their thermal performance. Based on Bloom's Taxonomy of Learning Domains [5], at the end of the course students are able to:
- acknowledge the main issues related to bioclimatic and sustainable design;
- understand that passive heating and cooling systems require, apart from qualitative, also quantitative (experimental) documentation of their efficiency;
- apply analytical and/or experimental procedures for the calculation of the optimum energy performance of passive heating and cooling systems;
- quantitatively analyse the contribution of bioclimatic design systems to the improvement of thermal comfort conditions and the energy performance of buildings;
- synthesise all the parameters involved within the framework of architectural design;
- evaluate the systems of bioclimatic and environmentally friendly design with the use of appropriate software tools.

Based on the above, the course tries to make the issues of the environmental crisis that are immediately linked to the built environment (conventional energy consumption for heating, cooling and ventilation, as well as the preservation and upgrade of existing buildings and materials) an integral part of the architectural education process. Only then can the long-standing phenomenon of these issues being a "marginal issue in academic discourse" [6] be overturned. As a result, the educational goal is dual: firstly, to emphasize the need to maintain and upgrade the existing building stock, irrespective of its architectural and/or heritage value, and secondly, to educate students on the necessity of assessing simple passive design strategies through a combination of quantitative and qualitative analysis of the basic environmental parameters (temperature, relative humidity, heat gains and losses, etc.). For this reason, the performance modelling of given, typical (conventional) residences was favored over more complex architectural design projects.

Teaching methodology

The course is set up as a semester workshop, based on the facts that knowledge is acquired through practice [7], that students themselves prefer coursework, and that the quality of learning is proven to be higher in assignment-based courses [8].
Furthermore, the semester project is split into different assignment-based sections that correspond to the different scenarios that are investigated (see Section 3.2, Parametric investigation). The work of each sub-project is presented in class and assessed, and feedback is sent back to the students, acknowledging its powerful influence on the learning process [8], as well as its help for students in preparing for or improving their work prior to the final summative assessment at the end of the semester [9].

Course-work structure

The coursework is formulated as an array of consecutive steps and work packages. The students work alone or form groups of two or three. Each student or group is assigned a location from each climatic zone of Greece (A, B, C and D) to perform a thorough climatic analysis with the Climate Consultant® software [10]. Inevitably, only cities with an available weather data file (.epw) are applicable. Depending on the total number of attending students, several students or groups may be assigned the same city (climatic zone), so that this part of the climatic analysis is done as teamwork.

After the completion of the first stage of the semester project, the typical buildings that will be analyzed are selected and assigned. During the academic year 2016-2017, two typical one-storey houses were selected to form the base-case models, one with a pitched and one with a flat roof. The selection was based on the fact that these two building types are representative and widely found in various parts of Greece. The simplest possible design typology and form were selected for educational purposes, despite the fact that more complex variations of these building types also exist. The building characteristics of the base-case model were typical: a conventional construction system with a reinforced concrete frame bearing structure (U-value = 3.00 W/m²K for 25-cm-thick elements), brick masonry infill walls (U-value = 1.30 W/m²K for a double-leaf brick wall with a 5-cm air gap in between), and timber- or aluminum-framed, single-glazed windows (U-value = 6.0 W/m²K). Additionally, heavyweight construction with 60-cm-thick stone masonry walls (U-value = 2.0 W/m²K) was also studied, as it is commonly found in suburban and rural areas around the country.

With the completion of the design of the base-case model(s), which also completes the second stage/work package, each student and/or team moves on to perform thermal simulation analyses of the parametric investigation scenarios described in the following section (3.2) with the use of the Design Builder® software. The number of different parameters that are simulated depends on the number of students participating in the course. The results of the simulations and analyses for all the scenarios are compared with each other in order to draw comparative conclusions on various parameters such as energy efficiency and diurnal and, most importantly, inter-seasonal performance. Comparisons of the efficiency of the same system in different climatic zones, as well as of different systems in the same climatic zone, are encouraged, but are unfortunately not always feasible. The course is concluded with the students giving a presentation of their semester project and receiving final comments and feedback, in order to submit their final report, which consists of the presentation file.
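To make the role of the envelope U-values concrete, the steady-state fabric heat loss can be estimated as Q = Σ U·A·ΔT. The minimal sketch below compares the base case with an insulated variant using the U-values quoted above; the element areas, the post-insulation U-values and the temperature difference are illustrative assumptions, not values from the course material:

```python
# Minimal sketch: steady-state fabric heat loss Q = sum(U * A) * dT.
# Base-case U-values are those quoted in the text; the areas, the
# insulated U-values and dT are illustrative assumptions.

BASE_CASE = {            # element: (U-value [W/m2K], area [m2], assumed areas)
    "concrete frame": (3.00, 25.0),
    "brick infill":   (1.30, 60.0),
    "windows":        (6.00, 12.0),
}
INSULATED = {            # assumed post-intervention U-values
    "concrete frame": (0.45, 25.0),
    "brick infill":   (0.50, 60.0),
    "windows":        (2.80, 12.0),   # double glazing (assumed)
}

def fabric_loss(elements, delta_t):
    """Transmission heat loss in W for an indoor-outdoor dT in K."""
    return sum(u * a for u, a in elements.values()) * delta_t

dT = 15.0  # assumed winter design temperature difference [K]
q0 = fabric_loss(BASE_CASE, dT)
q1 = fabric_loss(INSULATED, dT)
print(f"base case : {q0:7.0f} W")
print(f"insulated : {q1:7.0f} W  ({100 * (q0 - q1) / q0:.0f}% reduction)")
```

Even this back-of-the-envelope comparison reproduces the qualitative result the simulations confirm later: insulation dominates the fabric-loss reduction, while the small glazing area limits what window replacement alone can achieve.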
The continuous shifting from individual work to teamwork and back is deliberate, based on the strategy that each member is individually accountable. In order for each group to advance the project and complete the assignment, cooperative input is needed [8], while the individual learning of the group members is assimilated within the group work.

Parametric investigation

The parameters that were investigated in the study are presented in Table 1. For various applicable combinations of these parameters, the following four scenarios were defined and simulated (a bookkeeping sketch of the resulting run matrix appears just before the results that follow):
- Scenario 0: base case (as is).
- Scenario 1: application of thermal insulation on walls, ground floor and roof, and double glazing on the windows.
- Scenario 2: application of passive solar and/or passive cooling system(s):
  - 2a: application of passive systems to the uninsulated house of Scenario 0;
  - 2b: application of passive systems to the insulated house of Scenario 1.

The parametric investigation derived from the basic climatic characteristics of each climatic zone (A, B, C and D) and the corresponding basic bioclimatic design principles. In the present paper, only the results involving the application of passive solar systems (C.1 in Table 1) are presented. The main emphasis was on interventions on the building shell, mainly on its thermo-physical properties (Figure 2), with the application of external thermal insulation and double-glazed windows being the first and most common strategy, followed by shading and then by less "popular" strategies, such as solar spaces or Trombe-Michel walls. More drastic interventions, such as alterations to the window-to-wall ratio, and advanced solutions, such as cool roofs and/or phase-change materials, which are discussed in relevant studies [11], were discussed in class but omitted from the semester project for reasons of simplicity and time limitations.

Assumptions

As the primary goal of the coursework is to familiarize the students with the simulation of the passive performance of simple bioclimatic interventions, a series of assumptions were made (Table 2), so as to minimize the number of defining parameters and to ensure that the achieved results are mostly due to the proposed interventions on the building shell. All the calculations were normalized by floor area, both for the whole building (roof zone included) and for the building without the roof zone (occupied zones only).

Coursework results concerning the heating period

The students followed the proposed coursework structure (Section 3.1) and parametric investigation (Section 3.2) and initiated the project work with the base-case (as-is) scenario. As expected, this first stage of the coursework took the largest amount of time. This was due to several reasons: the students used the first model to familiarize themselves with the modelling tools of the software, as well as with all the necessary settings and the available analyses and simulations, and tried to understand how it works in order to continue with the input of different parameters and test the proposed strategies. The heating period calculations were generated from the 'Heating Period' tab in Design Builder®, whereas the diurnal and seasonal variations of the various data were generated from the 'Simulations' tab.
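As a side note on scale, the parametric investigation above implies a full run matrix of climatic zones, constructions and scenarios. A minimal bookkeeping sketch, with the zone, construction and scenario labels taken from the text (the enumeration itself is only an illustration, not part of the course material):

```python
# Minimal sketch: enumerate the simulation runs implied by the
# parametric investigation (climatic zone x construction x scenario).
from itertools import product

zones = ["A", "B", "C", "D"]                       # Greek climatic zones
constructions = ["conventional RC frame", "stone masonry"]
scenarios = ["0 base case", "1 insulated",
             "2a passive solar, uninsulated", "2b passive solar, insulated"]

runs = list(product(zones, constructions, scenarios))
print(f"{len(runs)} simulation runs in total")
for zone, wall, scen in runs[:4]:                  # show the first few
    print(f"zone {zone} | {wall} | scenario {scen}")
```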
For the cold period of the year, the results of the simulations actually validated what the students had qualitatively assumed during the related core course of the 5th semester, as well as what they had heard during the course lectures. For the base case (Scenario 0) (Figure 3), they concluded that:
- Both the conventional and the stone-wall construction without insulation have comparable thermal losses, mainly dependent on their U-values. Furthermore, the diurnal and inter-seasonal internal air temperature variations in the two different constructions present many similarities, due to the fact that the conventional construction is actually quite heavyweight itself.
- The roof zone presents considerable heat losses, which affects the inter-zonal heat losses of its underlying rooms.
- The uninsulated ground floor has a positive contribution to the winter thermal balance (due to the ground temperature being higher than the air temperature).

The application of thermal insulation (Scenario 1) to the base-case model led to the following remarks:
- Adding thermal insulation considerably reduces fabric heat losses.
- For the openings, the replacement of single glazing with double glazing offers a relatively small reduction in thermal losses, because the typical houses had a relatively small total glazing area.
- For the roof, thermally insulating the ceiling slab practically halves these losses.
- For the ground floor, the positive effect of the higher ground temperature is offset by the application of thermal insulation to the slab on ground.

The application of passive solar systems (Scenario 2) to the uninsulated house (Scenario 2a) and to the insulated model (Scenario 2b) was assessed by the students through the comparison of representative software outputs of the same parameters (Figure 4), as well as with their own, original graphs (Figure 5). They concluded that:
- Overall, the contribution of the investigated passive solar systems to the whole-building air temperature is limited, in both the non-insulated and the insulated models. The insulated models have about 2.0 °C higher air temperature compared with the uninsulated ones. This is probably due to the high thermal mass of the buildings.
- At the beginning of winter, when external air temperatures remain high, passive solar systems without shading may have a negative contribution and lead to overheating. On the contrary, at the beginning of spring, passive solar systems, in combination with the addition of thermal insulation, can result in a free-running building, even without internal gains (occupancy, lighting and equipment).

[Figure 6. Design Builder® models of the applied passive solar systems (Scenario 2): Trombe-Michel wall, sunspace, and attached sunspace with an opaque roof. Authors' own work.]

Lessons learned

The presented research involves the academic year 2016-2017, when the proposed teaching methodology was applied for the first time; it has been applied, with modifications and amendments, until today. The splitting of the assignment into distinct work packages and the intermediate presentations at the end of each work package contributed to its timely completion. Furthermore, the workshop function of the course ensured that all the students participated in the class, cooperated with each other and with the tutors as the semester project progressed, and succeeded in submitting complete and comprehensive final reports and presentations.
The drawbacks are mainly related to the limited semester time and the students' limited experience with research and simulation software. One is that the semester time limitation prevented students from fully exploiting the features of the software; as a result, the more "ambitious" goals of the project, which mainly involved comparisons of the efficiency of the same system in different climatic zones, as well as of different systems in the same climatic zone, were only superficially achieved, in the form of oral observations during the final presentation. Another is the students' lack, at the middle of the 5-year study period, of basic knowledge of research and comparison methods, such as the simple fact that all related graphs should be plotted with a locked y-scale in order to directly and easily spot the differences caused by the different interventions. As a result, many of the graphs, even within the same project, had different scales. Based on this observation, in the years that followed, the students received detailed and precise instructions on y-axis ranges and were encouraged to talk to each other in order to select ranges that would accommodate the results of all four climatic zones.

Conclusions

The results of the study are two-fold and involve mainly the teaching outcome of the course, as well as the assessment of simple bioclimatic interventions on existing buildings' energy performance and thermal comfort conditions. In relation to the teaching outcome of the course, and similarly to previous research [12], it was clear that, as building simulation tools are not fully adapted to standard design studio practice, the students take considerable time to learn how to use the simulation software, and especially how to set up the model layout and configuration. As a result, it proved difficult to run all the necessary simulations and reach qualitative and synthetic conclusions within the time constraint of the regular 13-week semester. Moreover, it is clear that students at this level, given their limited experience in such matters, have not yet developed the necessary criteria and knowledge to exploit all of the software's possibilities and to always reach accurate observations. The applied teaching methodology (coursework done throughout the semester, with the different stages regularly presented in class, assessed and provided with feedback) (Section 3) has proven, since 2014-2015 (the year the course was introduced), to be largely accepted by the students and to contribute to the timely completion of the project and the fulfilment, as far as possible, of the course's teaching goals. Finally, the number of participating students is another critical parameter for the successful development of the course, as its nature dictates no more than 20-25 participants.

Concerning the results of the study, it is clear that the application of commonly used passive solar systems to existing buildings of high thermal mass makes a rather small contribution to the interior air temperatures during the coldest days of the year, their effect being more pronounced in the lower, night-time temperatures, which is rather important. On the other hand, window replacement with more efficient ones, especially in spaces with large glass areas, is more effective.
Furthermore, it is worth noting that most of these low-impact interventions, which basically refer to the building envelope, do not interfere with the users' activities, and may thus demonstrate an improvement in the building's thermal performance and energy use, especially depending on the geometry of the building.
Improving efficiency of semitransparent organic solar cells by constructing semitransparent microcavity

Semitransparent organic solar cells have recently become attractive because they harvest photons in the near-infrared and ultraviolet ranges while passing light in the visible region. Semitransparent organic solar cells with a Glass/MoO3/Ag/MoO3/PBDB-T:ITIC/TiO2/Ag/PML/1DPCs structure have been studied in this work, and the effects of a microcavity with 1-dimensional photonic crystals (1DPCs) on the solar cell performance, such as the power conversion efficiency, the average visible transmittance, the light utilization efficiency (LUE), the color coordinates in the CIE color space, and CIELAB, are investigated. An analytical calculation, including the density of excitons and their displacement, is used to model the devices. The model shows that the presence of the microcavity can improve the power conversion efficiency by about 17% in comparison with its absence. Although the transmission decreases slightly, the microcavity does not change the color coordinates much. The device can transmit high-quality light with a near-white sensation to the human eye.

Organic solar cells (OSCs) have attracted much attention due to their advantages such as low cost, easy fabrication, flexibility and, recently, their potential applications in semitransparent solar cells (ST-SCs), although their stability remains challenging [1-7]. ST-SCs, combining the benefits of light-to-electricity conversion and light transparency, have emerged as one of the most prominent energy-harvesting technologies. These technologies can be used for agrivoltaics (as the roofs of greenhouses) and as windows for buildings. The performance of ST-SCs, depending on their application, is generally determined by their capability to convert the incident light into electricity while allowing light to be transmitted through the device. ST-SCs require not only a transparent active layer but also transparent electrical contacts, electron transport layer (ETL) and hole transport layer (HTL) over a wide spectral range (from IR to UV), along with efficient collection of the photo-generated charge carriers. For front electrodes, transparent conductive oxides [8], thin metal films [9-11], conductive polymers [12,13], graphene [14,15] and nanotube films [16] have previously been used, and for back electrodes mostly organic active bulk-heterojunction layers are used [17]. However, using a transparent active layer and electrodes decreases the device efficiency [9]. To overcome this problem, one method is to coat one-dimensional photonic crystals (1DPCs) on the top electrode of the ST-SC [17-22]. Although the fabrication of 1DPCs with a few to a dozen layers is challenging and adds fabrication cost, the advantage of this light-trapping structure is considerable, since the 1DPCs can increase the efficiency of ST-SCs by reflecting all the photons with energies below the photonic bandgap (PBG) back for re-absorption in the active layer. It should be mentioned that the active layer of an ST-SC cannot completely absorb the light reflected by the 1DPCs, since the active layer is very thin. For conventional opaque OSCs, some methods have previously been developed and applied for light trapping. The methods are based on optical spacers, surface plasmons and optical microcavities [23-30].
Using an optical microcavity in combination with optical spacers can confine a large number of photons within the device and increase light absorption in the active layer [25,30]. To increase the absorption of the photons reflected from the 1DPC in the active layer of ST-SCs, and considering that an improvement in absorption is usually accompanied by a reduction in transparency, it is even more challenging to develop light-trapping structures that can improve photon absorption without reducing the transparency of semitransparent devices.

This paper aims to develop a light-trapping structure to enhance photon absorption in ST-SCs using a semitransparent microcavity. For this purpose, the active layer is sandwiched between the MoO3/Ag/MoO3 multilayer electrode and the Ag electrode, which is capped by 1DPCs. The proposed structure without the 1DPC has previously been studied by our group in terms of different structural parameters to improve efficiency and transparency, and an efficiency of about 4% with 45% transparency was reported. In this paper, theoretical calculations based on the Transfer-Matrix Method (TMM) [31,32] show that the semitransparent microcavity can improve the power conversion efficiency of ST-SCs. The effects of varying the number of 1DPC pairs on the power conversion efficiency of ST-SCs and on their transparency are also investigated. Another commonly reported figure of merit for ST-SCs is the light utilization efficiency (LUE). Taking into account both PCE and AVT, a direct comparison of LUE values can hold viable information, in contrast to a direct comparison of AVT values without knowledge of the PCE [33]. Achieving widespread adoption of semitransparent organic solar cell technology requires combined optimization of PCE and AVT. While electronic displays require AVT > 80% (LUE > 5%), architectural tinted-glass requirements typically start closer to 50%. PCE values of 5-10% (LUE > 2.5%) are required in BIPV (Building-Integrated PV) applications to reduce electricity costs. However, 2-5% PCE (LUE > 1.5%) is sufficient for low-power mobile electronic devices. TPVs with similar PCE but lower AVT (LUE > 1%) can self-power smart windows or complement passive window coatings [34,35]. Recently, a high-LUE transparent solar cell (LUE = 5.46%) with an efficiency of 9.1% and an AVT of 60% has been reported. Another reported work, with LUE = 2.2%, concerns a wavelength-selective DSSC device (PCE = 6.1%, AVT = 36%) with a low-cost diphenylamine-based dye and a highly transparent iodine-free electrolyte. Among inorganic translucent devices, CIGS cells are the ones with the best performance, at LUE = 1.3%. Other inorganic semitransparent devices have been demonstrated with Cu2ZnSn(S,Se)4 and Sb2S3-based solar cells, with still low performance (LUE < 1%) [36].

Device structure and theoretical model. The modeled ST-SCs consist of Glass/MoO3(I)/Ag/MoO3(II)/active layer (PBDB-T:ITIC)/TiO2/Ag/LiF/1DPCs structures, where MoO3/Ag/MoO3 acts as the transparent top electrode and the inner MoO3 layer (30 nm) acts as the HTL. A sol-gel-processed ZnO layer is used as the ETL, and ITO is the transparent bottom electrode. MoO3(I)/Ag also acts as the input antireflection layer, with a thickness assumed to be 10/6 nm, and Ag/LiF/1DPCs act as the output antireflection layer, with the Ag thickness set to 10 nm and different numbers of 1DPC pairs used to construct the semitransparent microcavity and thereby improve the optical performance of the ST-SCs.
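The WO3/LiF mirror layers described in the next paragraph follow the quarter-wave rule given there as Eq. 1. A minimal sketch of that thickness arithmetic, where the refractive indices are typical visible-range literature values assumed here only for illustration:

```python
# Minimal sketch: quarter-wave layer thicknesses for a WO3/LiF 1DPC,
# assuming the design rule n * d = lambda / 4 (Eq. 1 below).
# The refractive indices and centre wavelength are assumptions.

n_wo3, n_lif = 2.1, 1.39      # assumed refractive indices
lam = 650.0                   # assumed PBG centre wavelength [nm]

d_wo3 = lam / (4.0 * n_wo3)
d_lif = lam / (4.0 * n_lif)
print(f"d_WO3 = {d_wo3:.1f} nm, d_LiF = {d_lif:.1f} nm")
```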
The thickness of the electron transport layer (TiO2) is set to 10 nm and the thickness of the active layer (PBDB-T:ITIC) is assumed to be 100 nm. The phase-matching layer (PML), a LiF film, is sandwiched between the Ag electrodes and set to 84 nm. The 1DPCs are assumed to be composed of different numbers of pairs of WO3/LiF, as illustrated in Fig. 1. The layer thicknesses of the WO3 and LiF are determined by Eq. 1 [37], the quarter-wave condition:

n_WO3 · d_WO3 = n_LiF · d_LiF = λ/4    (1)

where n_WO3 and n_LiF denote the refractive indices of the WO3 and LiF layers, d_WO3 and d_LiF denote the thicknesses of the WO3 and LiF layers, respectively, and λ is the center wavelength of the photonic bandgap of the 1DPCs. A schematic view of the glass/MoO3(I)/Ag/MoO3(II)/active layer/TiO2/Ag/PML/1DPCs stack, which according to thin-film optics [30] acts as a microcavity, is shown in Fig. 1. The transmission of the input (MoO3(I)/Ag/MoO3(II)) and output (TiO2/Ag/PML/1DPCs) mirrors of the constructed microcavity over the passband of the 1DPCs (300-1000 nm) is shown in Fig. 2. As shown in the figure, the passband confirms the device transparency after using the microcavity. Knowing the reflectances (R1 and R2) and transmissions (T1 and T2) of the cavity mirrors, the transmission of the device can be calculated using thin-film optics and is expressed by Eq. 2 [38], where β = 2πkd/λ and k denotes the extinction coefficient of the active layer.

To model the device transparency, the transmittances of all the individual layers, and of all of them together, are calculated using the TMM [25]. It should be recalled that the transparency properties of a device are determined both by the average visible transmittance (AVT) and by the transmittance characteristics in the visible wavelength range (370-740 nm), taking into account the photopic response of the human eye, V(λ). To calculate the device performance parameters, such as the short-circuit current (Jsc), open-circuit voltage (Voc), fill factor (FF) and power conversion efficiency (PCE), we used the drift-diffusion model, in which, in addition to the density of excitons, their displacement is taken into account. The calculation of the AVT value is extensively explained in our previous publications [39]. We have used Eq. (3) to consider the effects of PCE and AVT simultaneously through the LUE [33]:

LUE = PCE × AVT    (3)

To qualify for implementation in practical applications, such as architectural window glass and mobile surfaces, aesthetics are just as significant as PCE for TPV devices. Aesthetic quality can be quantitatively estimated from three main figures of merit: the AVT, the color rendering index (CRI), and the CIELAB color coordinates (a*, b*). The calculation of the AVT, CRI and color coordinates requires the transmittance spectrum of the OTPV as input data. CIELAB is a device-independent, 3-dimensional color space that enables precise measurement and comparison of all perceivable colors using three color values. In this color space, numerical differences between values correspond to the amount of change humans perceive between colors, as defined by the International Commission on Illumination (abbreviated CIE) in 1976. It expresses color as three values: L* for perceptual lightness, and a* and b* for the four unique colors of human vision: magenta, green, blue and yellow. In this paper, we report the CIELAB color space parameter set (a*, b*), which indicates the relative color with respect to a reference illumination source [34].
The coordinates are computed as

L* = 116 f(Y/Yn) - 16,  a* = 500 [f(X/Xn) - f(Y/Yn)],  b* = 200 [f(Y/Yn) - f(Z/Zn)]

where X, Y, Z are the tristimulus values of the test object color stimulus considered and Xn, Yn, Zn are the tristimulus values of a specified white object color stimulus. Generally, the specified white object color stimulus should be light reflected from a perfect reflecting diffuser illuminated by the same light source as the test object [36].

Result and discussion

To calculate the performance parameters of the ST-SCs, two different structures (Devices A and B) have been taken into account; the obtained results are compared and the key parameters are shown in Table 1. Device A (an ST-SC with a microcavity) has a structure of Glass/MoO3 (10 nm)/Ag (6 nm)/… (Fig. 3). The transmission of Device A is reduced at all wavelengths, but this reduction is smaller in the eye-sensitive region (orange area in Fig. 3). The reduction of the transmission spectrum means that the absorption spectrum of Device A is improved, and this improvement is attributed to the resonance effects of the semitransparent microcavity.

In the next step, we investigated the effects of the number of photonic crystal layers on the cells' efficiency and transparency. We calculated the transmission of devices with structures including 4 to 20 pairs of photonic crystal layers; the related results are shown in Fig. 4. The results show that the number of photonic crystal layers does not have a great impact on the overall transparency, and therefore on the performance parameters of the solar cell, although the transmission behavior at different wavelengths differs slightly for different numbers of photonic crystal layers. The effects of these pairs of layers on the PCE, AVT and LUE of the ST-SCs are investigated and depicted in Fig. 5 and Table 2. The calculations show that a photonic crystal with 8 pairs of layers in the structure changes the values of PCE and AVT: the PCE improved from 7.52% to 8.72% and the AVT decreased from 25.9% to 22.24%. On increasing the number of layers further, there is not much change in PCE and AVT. Therefore, the PCE can be increased with the microcavity even with a minimum number of pairs of layers (at least two pairs). Also, from Fig. 5 and Table 2, it can be seen that there is no significant difference between the reported LUE values for different numbers of 1DPC pairs; however, there is an optimum in the case of 8-12 pairs of 1DPCs.

The CIE color space, including the coordinates of the ST-OSC for both devices (with and without microcavity), is shown in Fig. 6. The color coordinates of both devices are very close to each other and located beside the so-called "white dot" in the CIE chromaticity diagram. Using the microcavity changes the color coordinates slightly, but the device can transmit high-quality light with a near-white sensation to the human eye, with only a very small change in the original color of an object.

It should be pointed out that the PCE and AVT data in this paper are calculated using the model reported in our previous article [37]. As usual, the practical device PCE and AVT may be smaller than the calculated PCE and AVT data proposed in this paper. However, the optical modeling in this paper can provide in-depth knowledge of how to apply a semitransparent microcavity to simultaneously improve the photon absorption and transparency of semitransparent OSCs.
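The LUE arithmetic can be checked directly from the values quoted above; a minimal sketch applying Eq. 3 (LUE = PCE × AVT) to the two device configurations:

```python
# Minimal sketch: LUE = PCE * AVT (Eq. 3) applied to the PCE/AVT values
# quoted above for the device without and with the 8-pair microcavity.

devices = {"no microcavity": (7.52, 25.9),    # (PCE %, AVT %)
           "8-pair 1DPC":    (8.72, 22.24)}

for name, (pce, avt) in devices.items():
    lue = pce * avt / 100.0        # LUE in %, since both inputs are in %
    print(f"{name:15s} PCE={pce:5.2f}%  AVT={avt:5.2f}%  LUE={lue:.2f}%")
```

The output shows the LUE staying essentially flat (about 1.9% in both cases) while the PCE rises by roughly 16%, consistent with the observation that the microcavity trades a little transparency for absorption without changing the combined figure of merit.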
Predictive and descriptive results obtained from this research are considered very helpful for the design, fabrication and experimental study of 1DPC-based semitransparent OSC devices. Figure 7 represents the CIELAB color space parameter set (a*, b*) of the ST-OSCs in two cases: with and without the microcavity. As expected, and similarly to the CIE color coordinate diagram, for both cases the values of a* and b* are close to each other and located near the white point.

In Fig. 8a, we compare the LUE versus AVT for our cases, without and with 8 pairs of 1DPCs, with some reported experimental data. As shown in the figure, the LUE does not change, and there is a very small change in AVT. Also, the calculated LUE for the optimum structure (8 pairs of 1DPC layers) is in acceptable agreement with similar ST-OPVs. Figure 8b shows the PCE versus AVT for our cases, without and with 8 pairs of 1DPCs, in comparison with some reported experimental data. As depicted in the figure, the PCE is changed [36].

Conclusion

In this paper, a light-trapping structure based on a semitransparent microcavity is constructed for ST-SCs by sandwiching an active layer between the (MoO3/Ag/MoO3) multilayer electrode and a thin Ag electrode capped by 1DPCs. We have investigated the effects of different numbers of pairs of 1-dimensional photonic crystals (1DPCs) on the ST-SC's properties, such as the PCE, AVT, LUE, the color coordinates in the CIE color space, and CIELAB. As a result, the presence of the microcavity can improve the power conversion efficiency by about 16% in comparison with its absence. Although the transmission decreases slightly (a 14% decrease), using the microcavity does not change the color coordinates much, and the device can transmit high-quality light with a near-white sensation to the human eye.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Extraction and microencapsulation of tuna virgin fish oil with mangrove fruit extract fortified into extrusion cereals

Long-chain omega-3 fatty acids (LCn-3FA), originally from fish, have been recognized to play an important role in improving health status. Fortifying these LCn-3FAs into foodstuffs enhances their nutritional and functional value; however, their vulnerability, particularly to oxidation, requires extra protection. The goals of the research were to produce tuna-eye extra virgin fish oil (EVFO) microcapsules rich in DHA, protected by mangrove fruit extract as a natural antioxidant, and to obtain an extruded cereal with EVFO microcapsules. EVFO was extracted using cool centrifugation and microencapsulated with maltodextrin and arabic gum, and the cereal was formulated as an extruded flake product. The proportion of EVFO extracted from tuna eyes was 10.3596±0.73%. Mangrove fruit extract had strong antioxidant activity, with an IC50 of 53.28 ppm. The best microcapsules were obtained from the arabic gum-maltodextrin coating material with the addition of 4000 ppm of mangrove fruit extract; they had an efficiency value of 93.71% and a round shape with a size of 6.26 μm. Cereal with a 35-g serving size and the addition of 3.6% EVFO microcapsules contributed approximately 134 kcal, 100 mg DHA, and 125 mg omega-3, greatly increasing its nutritional value.

Introduction

Omega-3 fatty acids play an important role in fetal brain development, infant motor skills and visual acuity, and children's lipid metabolism and cognitive development. The sources of fish oil rich in long-chain omega-3 fatty acids that are currently being used extensively are the by-products of the tuna industry [1]. The biggest part of tuna by-products is the head (19.7%) [2], and tuna eyes contain high levels of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), at 7% and 35% [3]; however, their vulnerability requires additional protection to inhibit oxidation. Fish oil extraction is generally carried out with the wet rendering method [4], including cooking, pressing and centrifugation, followed by adding synthetic antioxidants such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA), which have been reported to be potentially carcinogenic. Cold extraction is an important method that is expected to maintain quality, reduce production costs and reduce the use of chemicals (environmentally friendly) [5]. Fish oil protection can be provided by adding Rhizophora mucronata mangrove extract. R. mucronata extracts have shown very strong antioxidant activity, with IC50 values of 6.69-58.61 mg/mL [6-8]. Microencapsulation can also be used to protect omega-3 from oxidation. Microencapsulation technology has been developed as a technique for coating fish oil with materials that can form a shield against exposure to oxygen. Microencapsulation techniques convert liquid fish oil into a powder so that it is easily fortified into food products [9]. The Food and Agriculture Organization (FAO, 2010) recommends a consumption of 250 mg omega-3 per day, and this requirement can be fulfilled through a healthy breakfast cereal fortified with omega-3. The westernization of food habits in the middle-class population, the tourist population, and high public health awareness have contributed significantly to the increasing demand for breakfast cereals, especially in the Asia Pacific.
Research on virgin fish oil microcapsules rich in omega-3 with the antioxidant protection of mangrove fruits for breakfast cereal fortification has not previously been conducted; the results obtained from this study should provide valuable information for recovering EVFO, for selecting and extracting mangrove fruits as an antioxidant source, and for formulating EVFO-fortified breakfast cereal as a functional food.

Objectives

The goals of this research were to recover and characterize tuna-eye EVFO and the natural antioxidant of mangrove fruits, to characterize the EVFO microcapsules protected by natural antioxidants from mangrove fruit crude extracts, and to examine the quality changes of breakfast cereal products fortified with EVFO microcapsules.

Method

The research procedures comprised the preparation, extraction and characterization of virgin tuna-eye oil and mangrove fruit antioxidant; the microencapsulation and characterization of EVFO microcapsules; and the formulation and fortification of EVFO into breakfast cereal. Preparation of the tuna eye was done by separating the eye meat from hard parts such as the lens and sclera. The meat and liquid of the tuna eye were mixed and crushed using a homogenizer. The paste was then extracted by centrifugation (11200 ×g, 30 minutes, 4 °C) to separate the oil from other eye components, such as eye meat, blood and water. Characterization of the EVFO included weighing the proportions of each tuna-eye part and analyzing the EVFO quality and fatty acid composition. Mangrove fruit extraction was completed by boiling the mangrove fruits for 30 minutes at a mangrove-to-water ratio of 1:5 (w/v). Characterization of the mangrove fruit antioxidant covered phytochemical analysis and antioxidant activity analysis using the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) method.

The microcapsule formulations consisted of arabic gum-maltodextrin coating with 2000 ppm mangrove antioxidant (A01), arabic gum-maltodextrin with 4000 ppm mangrove antioxidant (A02), sodium caseinate with 2000 ppm mangrove antioxidant (B01), and sodium caseinate with 4000 ppm mangrove antioxidant (B02). Microcapsules A01 and A02 were made by homogenization (18928 ×g, 10 minutes) and dried with Tinlet = 160 °C, Toutlet = 95 °C, an airflow rate of 73 m3/hour and a feed rate of 5.3 g/minute. Microcapsules B01 and B02, created by homogenization (448 ×g, 1 hour), were dried with Tinlet = 120 °C, Toutlet = 80 °C, an airflow rate of 73 m3/hour and a feed rate of 5.3 g/minute. Characterization of the EVFO microcapsules comprised …

Breakfast cereal production was conducted using a drum dryer with ingredients consisting of bran flour : sorghum flour : tapioca = 2:7:1. The process of making the breakfast cereal went through a boiling stage until thickening, steaming (70-80 °C, 15 minutes), and drying. The breakfast cereal was served with the addition of soto flavorings. Fortification was conducted by adding 6% (w/w) of the selected microcapsules. The characterization of the EVFO breakfast cereal covered hedonic and paired comparison testing (30 panelists), physical characteristics, chemical characteristics, and the percentage contribution to the recommended daily intake (RDI).
Results

The results of this research comprise the organoleptic freshness and chemical composition of the tuna eye; the visualization of the separation and the proportion of tuna-eye fish oil; the characteristics of the extra virgin fish oil (EVFO); the visual, morphometric, phytochemical and antioxidant-activity characteristics of the mangrove extract; the characteristics of the EVFO microcapsules; the sensory characteristics of the fortified EVFO cereals; and the nutrition and energy contributions of the cereal products toward the recommended daily intake (RDI).

Organoleptic freshness and chemical composition of tuna eye

The tuna eyes used consisted of three types, namely eyes (A), eyes (B) and eyes (C). All three types of tuna eyes were organoleptically tested by 30 panelists based on SNI 2729:2013 on fresh fish to determine their freshness level. The organoleptic results for the tuna eyes are shown in Figure 1. The A eyes used had an average weight of 162.86±40.07 g with an average diameter of 8.07±0.34 cm. The proportions of the eye parts were 92% meat and liquid, while the lens and sclera made up 7-8%. The extracted eyes belonged to the medium-curved type. Tuna eyes A contained 73.32±1.40% water, 1.03±0.07% ash, 4.03±0.14% protein, 3.58% carbohydrate, and 18.03±0.58% fat.

Visualization of separation and proportion of tuna eye fish oil

Cold centrifugation formed four layers, with the oil in the top layer. The proportion of tuna-eye fish oil was 10.3596±0.73%. The visualization of the separation and the tuna fish oil are exhibited in Figure 2.

Characteristics of extra virgin fish oil (EVFO)

The quality characteristics of the tuna-eye fish oil include free fatty acids (%FFA), acidity value, peroxide value, p-anisidine value, and total oxidation. The %FFA, acidity value and p-anisidine value meet the provisions of CODEX 329-2017, the standard for fish oils. The quality characteristics of the EVFO are summarized in Table 2.

Characteristics of EVFO microcapsules

The characteristics of the EVFO microcapsules comprise the efficiency value, peroxide value, microstructure, and fatty acid performance. The A01 and A02 formulas used a coating of arabic gum and maltodextrin, while B01 and B02 used a sodium caseinate coating. The efficiency values of the tuna-eye fish oil microcapsules are summarized in Table 3. The peroxide value is the amount of peroxide, in milliequivalents of active oxygen, contained in 100 g of a compound. Detection of peroxide gives the initial evidence of rancidity. Analysis of the peroxide value was conducted for the tuna-eye fish oil microcapsules with added natural antioxidants, complemented by a control antioxidant (ascorbic acid) and a formulation without added antioxidants. The peroxide values of the tuna-eye fish oil and the microcapsules are presented in Table 4. The peroxide value gives a measure of the extent to which the EVFO and microcapsule samples had undergone primary oxidation; nevertheless, the addition of mangrove fruit extracts was effective enough to inhibit the secondary oxidation process. Microcapsule microstructure analysis was performed using a Scanning Electron Microscope (SEM). The observations indicate that the size of the microcapsules was 6.26 µm. The microstructure of the tuna-eye fish oil microcapsules can be seen in Figure 3. Performance analysis of the microcapsule and tuna EVFO fatty acids was performed using Gas Chromatography (GC). The EVFO fatty acid content was dominated by polyunsaturated fatty acids (PUFA). The fatty acid performance of the microcapsules and the tuna-eye fish oil is presented in Table 5.
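The text does not spell out how the efficiency values in Table 3 were computed; a commonly used definition is the fraction of oil actually retained inside the capsules, EE = (total oil - surface oil) / total oil × 100. A minimal sketch under that assumption, with illustrative masses chosen only to land near the reported best value of 93.71%:

```python
# Minimal sketch: microencapsulation efficiency under the commonly used
# definition EE = (total oil - surface oil) / total oil * 100.
# Both the formula (as the one behind Table 3) and the masses below are
# assumptions for illustration; the paper's values come from its own assays.

def encapsulation_efficiency(total_oil_g: float, surface_oil_g: float) -> float:
    """Percentage of oil retained inside the capsule wall."""
    return 100.0 * (total_oil_g - surface_oil_g) / total_oil_g

total_oil = 1.000      # g of oil loaded per basis of powder (assumed)
surface_oil = 0.063    # g of extractable surface oil (assumed)
print(f"EE = {encapsulation_efficiency(total_oil, surface_oil):.2f}%")
```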
Sensory characteristics of fortified EVFO cereals

Sensory analysis was completed for the cereal fortified with tuna-eye fish oil microcapsules and for control products. The tested characteristics include appearance, color, aroma, taste, texture, and aftertaste. The cereal sensory results are displayed in Figure 4.

Nutrition and energy contributions of cereal products toward the recommended daily intake (RDI)

The fortified cereal product contributed 134 kcal per 35-g serving. Cereal fortified with 3.6% tuna-eye fish oil microcapsules contained 125 mg omega-3, 100 mg DHA, and 1 mg AA. The nutritional value of the cereal fortified with extra virgin fish oil is presented in Table 6.

Free fatty acids are formed by the hydrolysis of triglycerides during storage and extraction. Hydrolysis occurs when oil comes into contact with water and heat [12]. The free fatty acid value is directly proportional to the acidity value. The acidity value indicates the quality of the fish oil during processing or storage [13]. The three tuna-eye fish oils had low free fatty acid values, indicating that the oils were of good quality. The tuna fish oils had peroxide and total oxidation values that did not meet the CODEX (2017) standard. The high peroxide and total oxidation values in fish oil are caused by the high content of long-chain unsaturated fatty acids, especially EPA and DHA, which are very susceptible to oxidation [14]. Increases in the peroxide value also occur during storage, so the longer the storage time, the higher the peroxide value [15]. Oil B had the lowest peroxide value because the raw material was stored at -20 °C in vacuum packaging. Vacuum packaging prevents the formation of ice crystals, which can damage tissue, and limits contact with oxygen, so that oxidative reactions are inhibited [16].

The IC50 value of the mangrove fruits measured by the ABTS method was 52.96±0.42 ppm, which indicates very strong antioxidant activity. Previous research reported that the IC50 value of R. mucronata mangrove fruit extract was 25.07 ppm [7]. The high antioxidant activity of the water-extracted mangrove fruit correlates with the results of the phytochemical tests. Phenol hydroquinone compounds and flavonoids were positively identified, and they have long been recognized as potent antioxidants able to inhibit oxidation in oxidation-sensitive materials, thereby extending shelf life [17].

Microcapsules A02, with the arabic gum-maltodextrin coating material and the addition of 4000 ppm mangrove fruit antioxidant, had the highest microencapsulation efficiency. Maltodextrin plays a role in the formation of viscoelastic films in microcapsules and can therefore improve their efficiency [18], whereas arabic gum has good properties as an encapsulation agent, namely high solubility, low viscosity, and good emulsification characteristics [19]. The combination of the two was able to form microcapsules with a higher efficiency value than the sodium caseinate coating material. Sodium caseinate was not able to protect the fish oil efficiently because it could not form a dense skin in the early stages of drying. The spray drying process can also cause protein denaturation and β-lactoglobulin aggregation [20]. The peroxide value of the encapsulated EVFO increased. The increase in the peroxide value of the tuna-eye fish oil microcapsules is due to oil-air contact during the homogenization process [21]. The use of high temperatures during the drying process can also increase the peroxide value [22].
Microcapsules with the addition of 4000 ppm mangrove fruit antioxidant had peroxide values that were not significantly different from the positive control (ascorbic acid). The antioxidant activity of mangroves is caused by the presence of polyphenols, which function as antioxidants through a radical-binding mechanism [23]. The fatty acid results showed that the omega-3 content of the tuna fish oil microcapsules decreased by 22.87%; the EPA and DHA contents also decreased, by 26.01% and 27.63%, respectively. The decrease in the percentage of fatty acids occurs during the microencapsulation process. Most of the tuna EVFO consists of n-3 components. Omega-3 fatty acids are polyunsaturated compounds with many double bonds, making them very susceptible to heat [24]. The homogenization and spray drying steps in the microencapsulation process used high temperatures; consequently, some breaking of the double bonds of the long-chain fatty acids occurred and the microcapsules became oxidized [25]. Bulk density is also influenced by the drying method. Drying with a drum dryer at high temperatures breaks the branch chains of starch, so that the amylose content increases and the solubility value becomes higher [26]. Water absorption is influenced by particle size: smaller particles have higher water absorption because of their larger surface area, and so they have more …
Price differentials of oral triptans in eight European Union countries

Pietro Folino-Gallo, Fabio Palazzo, Giuseppe Stirparo, Sergio De Filippis, Paolo Martelletti

Abstract: Triptans are presently a milestone in the treatment of migraine patients. Because of their effectiveness and safety, they have radically improved migraine treatment, but their use has meant a substantial increase in spending on medicines. We thus compared retail prices of triptans in eight European Union member states to establish the existence and the amount of price differentials. We found wide price differentials between countries (from 83% to 140%) and within countries, where they attained 191% in Belgium. The least and most expensive products differed from country to country. These differentials mean that the most cost-effective triptans differ from country to country, and this can be an important source of variation in the treatment of migraineurs. A better-harmonised European system of pricing could limit these unethical variations.

Introduction

Since their introduction in the early 1990s, selective 5-HT1B/1D agonists (triptans) have radically improved migraine management because of their effectiveness, tolerability and safety. Their use, despite the high cost, has increased considerably in recent years. Five different compounds (sumatriptan, naratriptan, zolmitriptan, rizatriptan and almotriptan) are listed in the 2002 WHO-ATC list [1] and are thus clinically available in at least one market. Eletriptan and frovatriptan are two newer triptans, and a total of seven different compounds will soon be available for the treatment of migraine patients. Because of the large number of clinically available triptans, physicians need comparative clinical information to select those products with the highest likelihood of clinical success, as well as comparative cost information to prefer, when possible, those products with the best cost-efficacy ratio. Even if several studies have demonstrated the cost-effectiveness of triptans, their use leads to substantial costs, and there is a need, in a capped-budget era, to limit expenditures without affecting quality of care. Despite several reports comparing the clinical efficacy of triptans and a meta-analysis comparing 53 clinical trials [2], no comparative data at the European level about triptan prices have been published, and there is a gap in information at this level. Moreover, large price discrepancies between the European countries have been described for a variety of medicines. For all these reasons we thought it was of interest to compare the costs of triptans and to study their price differentials.
Materials and methods Data were obtained from the EURO-Medicines database, a European Union-funded project aimed at collecting information about medicines available in European countries, whose data are now available on the Internet (www.euromedicines.org). Details of the methodology and data sources used for collecting and analysing these data were provided elsewhere [3]. Our comparison refers to prices in 2000 and is limited to solid oral triptan formulations, which are licensed in all the EU countries and are the more commonly used. Thus, suppositories and parenteral forms (injection and spray) were excluded from the comparison. Prices were compared using the cost per single unit (tablet, capsule, wafer), calculated by dividing the retail (pharmacy) price of the pack by the number of single units. The retail price includes the ex-factory (industry) price, the wholesale margin, the pharmacist margin and value added tax (VAT); the extent of these different components differs from country to country. Retail prices were calculated in local currency and converted into euros using the fixed conversion rate for the 5 countries within the European monetary area (Belgium, France, Germany, Italy and the Netherlands), or the September 2000 exchange rate for the 3 additional EU countries (1 euro equals 7.43 Danish crowns, 8.48 Swedish crowns and 0.63 British pounds). Incremental costs within countries were calculated taking the cheapest price in each country as a reference. Results The lowest and highest prices, the countries with the lowest and highest prices and the percentage differences between countries are reported in Table 1. The percentage differences between countries range, for the same compound, from 83% for rizatriptan (5 mg and 10 mg) to 140% for zolmitriptan (2.5 mg). The incremental costs per single unit of the triptans and of their different strengths are reported in Table 2. Naratriptan 2.5 mg is the least expensive triptan in Belgium and in the UK (together with zolmitriptan 2.5 mg in the UK). There are some important differences between these two countries: (1) the price per tablet of naratriptan in the UK (€6.35) is independent of pack size, while there is a 73% difference in Belgium according to the number of tablets in the pack; (2) the price of a zolmitriptan 2.5-mg unit is the same as that of a naratriptan 2.5-mg unit in the UK, while it is higher by 12%-86% in Belgium; (3) the price of rizatriptan 5 mg, compared with naratriptan 2.5 mg, is 11% higher in the UK and 97% higher in Belgium. Zolmitriptan 2.5 mg is the least expensive triptan in Denmark, France, Italy, the UK and Sweden (Table 2). There are no major differences in the cost of a single zolmitriptan 2.5-mg unit between different pack sizes in the UK and Italy, while these prices vary by 19%, 24% and 26% in Sweden, France and Denmark, respectively. The cost of a sumatriptan 50-mg tablet, as compared with zolmitriptan 2.5 mg, is similar in Italy (1%) but differs by 18%-24% in the UK. The cost of a rizatriptan 5- or 10-mg unit is higher than that of zolmitriptan 2.5 mg by 11% in the UK, 1%-37% in Italy and 29%-39% in Denmark.
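To make the comparison concrete, here is a minimal sketch, in Python, of the unit-price and differential calculations described above. The pack price, pack size and the example differential are hypothetical placeholders; only the €6.35 UK tablet price and the 7.43 DKK-per-euro rate are taken from the text.

```python
# Minimal sketch of the price comparisons described above. All pack
# prices and the example differential are illustrative placeholders.

def unit_price(pack_price: float, units: int) -> float:
    """Retail price per single unit (tablet, capsule, wafer)."""
    return pack_price / units

def to_euro(local_price: float, local_per_euro: float) -> float:
    """Convert a local-currency price, given units of local currency per euro."""
    return local_price / local_per_euro

def pct_differential(cheapest: float, dearest: float) -> float:
    """Percentage excess of the most expensive over the cheapest price."""
    return (dearest - cheapest) / cheapest * 100.0

# Hypothetical 6-tablet pack priced at 450 Danish crowns (1 EUR = 7.43 DKK):
per_tablet_eur = to_euro(unit_price(450.0, 6), 7.43)
print(round(per_tablet_eur, 2))                 # ~10.09 EUR per tablet
print(round(pct_differential(6.35, 12.35), 1))  # ~94.5% differential
```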
Discussion From our data it appears that wide differentials in the prices of triptans exist at the European Union level: the differences between countries for one compound range from 83% to 140%, and they are not all in the same direction, since the least expensive product in one country can be the most expensive in another. Moreover, the extent of variation within countries, excluding sumatriptan 100 mg, ranges from 24% in the UK and France to 97% in Belgium. Including sumatriptan 100 mg, the extent of variation becomes much wider, attaining 100% and 191% in the UK and Belgium, respectively. Selecting the right triptan for an individual patient is a complex choice: it must take into consideration at least the clinical characteristics of the single patient (frequency and severity of migraine attacks, risk factors, contraindications to treatment), the efficacy of each triptan (pain-free response at 2 h, sustained pain-free rates, and recurrence rates), and the costs of treatment. The wide price differentials mean that the most cost-effective triptan can differ in Europe from country to country, and this is a source of variation in the treatment of migraineurs. A better-harmonised European system of pricing could limit these unethical variations.
2016-05-04T20:20:58.661Z
2003-03-01T00:00:00.000
{ "year": 2003, "sha1": "2903ddaba82de2e3b52b70830509f0c633d89c0a", "oa_license": null, "oa_url": "https://thejournalofheadacheandpain.biomedcentral.com/track/pdf/10.1007/s101940300013", "oa_status": "BRONZE", "pdf_src": "Anansi", "pdf_hash": "2903ddaba82de2e3b52b70830509f0c633d89c0a", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
159017708
pes2o/s2orc
v3-fos-license
Non-Linearities in International Prices We consider multiple sources of non-linearity at the same time within a structural model that accounts for previously omitted variables and allows estimation of product-level convergence rates both within and outside the band of no trade. Accounting for the role of theoretically-implied variables and their non-linear interactions in the convergence process, we find that good-level convergence rates are systematically faster as compared to convergence estimates from reduced-form models. Contrary to conventional wisdom, we find that good-level price differentials exhibit mean-reverting behavior even within the bands of no trade, and that rates of mean-reversion within or outside the no-trade band are strongly related to goods' economic characteristics. Furthermore, while implied trade costs dramatically increase as we move from within-country comparisons to comparisons across countries, inconsistent with our priors, services have somewhat comparable trade costs to tradable goods. Finally, wage differentials are negatively associated with the speed of price adjustment, and this effect is stronger for city pairs that are farther apart. Introduction We consider multiple sources of non-linearity at the same time within a structural threshold auto-regressive (TAR) model that accounts for previously omitted variables and allows estimation of product-level convergence rates within and outside the band of no trade, which do not suffer from the type of misspecification and omitted-variables bias present in previous work based on reduced-form TAR models. The null hypothesis in the latter non-linear models is that inside the no-trade band, price deviations from the LOP are persistent, while above or below it arbitrage takes place, inducing deviations from the LOP to mean-revert. Other possible reasons for non-linearities were ignored in these previous studies. Our model encompasses the main elements of conventional TAR models that allow the dynamics of relative prices to differ above and below the band of inaction. Relative to reduced-form models, however, our model has two notable distinctions. First, although conventional models focused on transport costs as the major source of the inaction band, we show that the band is also generated by additional factors, including differences in local distribution costs and wages across locations. Second, the behavior of price differentials within the band hinges upon differences in local distribution costs and wages, and hence does not necessarily follow a random walk process. The combination of the first and second features implies that local factors play an important role in the dynamics of relative prices through the channel of wages and distribution costs, consistent with a view that market segmentation is driven by local factors as well as international trade costs. After all, assuming final goods are comprised of a traded and a non-traded input, as per the retail pricing model in Crucini et al. (2005), the determinants of goods' prices should be related to their traded and non-traded components, influenced respectively by trade costs and by factors such as local input costs and productivity. It thus follows from basic economic theory that variables such as wages are relevant for examining whether poorer countries behind the technology frontier tend to exhibit faster price convergence via the non-traded component of final prices, along the lines of the Balassa-Samuelson framework.
Basic economic theory, say gravity models, also suggests variables like physical distance are relevant for examining the role of trade costs in price convergence, via the traded-inputs channel. (Footnote 1: Another possibility is that international movements of factors of production induce convergence in wages, as in Zachariadis (2012), where immigration is shown to matter for price convergence. Footnote 2: These traded and non-traded components via which price convergence occurs can interact with each other, as shown in Glushenkova, Kourtellos and Zachariadis (2018). For example, lower trade costs appear to be conducive to price convergence only for countries that have the non-traded Balassa-Samuelson catch-up process operating in full force given low initial incomes. We thus consider such non-linearities in addition to TAR-type ones.) The specific theoretical framework within which we approach the empirical analysis of non-linear price adjustment is an extension of the one-good, two-country endowment economy model of Sercu and Uppal (2003) that incorporates a nontradable good, local distribution costs and a labor input. This framework allows us to incorporate the role of additional factors and their non-linear interactions in the convergence process, in a theory-consistent manner. (Footnote 3: Several theory papers suggest international price processes are non-linear. Dumas (1992) and Sercu et al. (1995) argue threshold non-linearities arise due to transaction costs in international arbitrage that create a "band of inaction" within which the marginal cost exceeds the marginal benefit of arbitrage, whereas outside this no-arbitrage band, arbitrage acts as a convergence force towards the LOP. These transaction costs have been interpreted by Dixit (1989) and Krugman (1989) as "market frictions" capturing sunk costs of international arbitrage, where traders enter only if large enough opportunities arise.) In line with the above, our empirical analysis differs from the existing empirical literature in that we estimate convergence speeds both outside and inside the thresholds for individual goods and services (both tradeable and non-tradeable), allowing for the theoretically-implied role of factors like wage differentials and distance, as well as for their non-linear interactions. By contrast, previous empirical studies had typically focused on price adjustment of tradables outside the band. Importantly, accounting for the role of theoretically-implied variables in the convergence process, we find that good-level convergence rates are systematically faster as compared to those estimated using reduced-form models. Our evidence suggests that the omission of variables which may affect price dynamics, and the resulting misspecification of econometric models, may lead to downward bias in the reverting speeds of price differentials. Contrary to conventional wisdom, we find that good-level price differentials exhibit mean-reverting behavior even within the bands of no trade. Furthermore, rates of mean-reversion within or outside the no-trade band are strongly related to goods' economic characteristics. In addition, consistent with conventional wisdom, implied trade costs dramatically increase as we move from within-country comparisons to comparisons across countries.
Moreover, inconsistent with our priors, services have somewhat comparable trade costs to tradable goods. Finally, wage differentials are negatively associated with the speed of price adjustment, suggesting a role for consumers' search intensity and firms' pricing-to-market in producing persistent price deviations, in line, e.g., with Alessandria and Kaboski (2011), where costly consumer search makes local wages matter for price-setting behavior. We also see that this effect of wages is stronger for city pairs that are farther apart. The next section describes the theoretical framework from which the empirical specification used in the third section derives. The fourth section presents our empirical findings, while the last section briefly concludes. Methodology In this section, we provide an empirical framework with which we analyze non-linear adjustment of relative prices. We extend the one-good, two-country endowment economy model of Sercu and Uppal (2003) to incorporate a nontradable good, local distribution costs and a labor input. As in the latter paper, we assume complete financial markets and focus on two types of goods-market frictions, local and international transaction costs. Our approach is meant to provide a tractable framework to carry out explicit analysis of price adjustment, with emphasis on the interplay of different factors driving non-linearity in international relative prices. Appendix A discusses the details of the approach that we use to derive the dynamics of relative prices, given by equation (1). The (logarithm of the) international relative price of retail goods, $q_{i,j,t}$, is defined by the ratio of the retail price of country j to the retail price of country i in period t, with $\theta_1 = \frac{\gamma\alpha - \alpha + \delta}{1 - \alpha + \alpha\gamma}$ and $\theta_2 = \delta(\alpha + \gamma - \alpha\gamma)$. The parameter $\alpha$ is the expenditure share of the tradable good and $\gamma$ is the inverse of the intertemporal elasticity of substitution. $w_{i,j,t} = w_{j,t} - w_{i,t}$ is the difference of the (logarithm of) real wages in country j relative to country i at time t. The parameter $\tau$ is the bilateral trade cost associated with bringing the tradable good from its point of loading abroad to the point of unloading in the importing country. $\eta_{i,j} = \eta_j - \eta_i$, where $\eta_i$ and $\eta_j$ represent local costs that are entangled in the movement of the tradable good from its point of production or unloading to the point of retailers in countries i and j, respectively. Equation (1) encompasses the main elements of a conventional TAR model, where the dynamics of relative prices differ above and below the band of inaction. However, whereas conventional models focused on transport costs as the major source of the inaction band, our model implies that the band is in part generated by additional factors such as cross-country differences in distribution costs and wages. Furthermore, in our model, the behavior of price differentials within the band hinges upon differences in local distribution costs and wages, and hence does not necessarily follow a random walk process. The combination of the first and second features implies that local factors play an important role in the dynamics of relative prices via the wages and distribution-costs channels, consistent with the view that market segmentation is driven by both local factors and international trade costs.
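To fix intuition before the estimable equation is derived, the following is a toy simulation of the band-of-inaction dynamics just described: inside the band the price differential mean-reverts only weakly, through the local-cost channel, while outside it arbitrage pulls it back quickly. All parameter values are illustrative assumptions, not quantities from the model or its estimates.

```python
import random

# Toy simulation of band-of-inaction dynamics; parameters are assumptions.
tau = 0.10          # trade cost: half-width of the no-trade band
lam_out = -0.5      # strong reversion outside the band (arbitrage)
lam_in = -0.05      # weak reversion inside the band (local-cost channel)
q = 0.0             # log relative price differential
random.seed(0)

path = []
for t in range(200):
    shock = random.gauss(0.0, 0.05)
    if q > tau:                      # above the upper threshold: trade occurs
        q += lam_out * (q - tau) + shock
    elif q < -tau:                   # below the lower threshold
        q += lam_out * (q + tau) + shock
    else:                            # inside the band: slow mean reversion
        q += lam_in * q + shock
    path.append(q)

print(max(path), min(path))  # excursions beyond +/-tau are quickly pulled back
```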
Explicitly deriving the determinants of price adjustment, we obtain the estimable equation shown below:

$$\Delta q_{i,j,t} = \begin{cases} \lambda_1^{out,u}\,(q_{i,j,t-1} - a_{i,j}^{u}) + \beta_a^{u}\, a_{i,j}^{u} + \beta_w^{out,u}\, w_{i,j,t} + e_{i,j,t}^{out} & \text{if } q_{i,j,t-1} > a_{i,j}^{u} \\ \lambda_1^{in}\, q_{i,j,t-1} + \beta_1\, \eta_{i,j} + \beta_w^{in}\, w_{i,j,t} + e_{i,j,t}^{in} & \text{otherwise} \\ \lambda_1^{out,l}\,(q_{i,j,t-1} - a_{i,j}^{l}) + \beta_a^{l}\, a_{i,j}^{l} + \beta_w^{out,l}\, w_{i,j,t} + e_{i,j,t}^{out} & \text{if } q_{i,j,t-1} < a_{i,j}^{l} \end{cases} \qquad (2)$$

where we introduced $a_{i,j}^{u} := \eta_{i,j} + \tau$ and $a_{i,j}^{l} := \eta_{i,j} - \tau$ for notational brevity and added $e_{i,j,t}^{out}$ and $e_{i,j,t}^{in}$ as the error terms for the $(i,j)$ pair. Estimating the above-derived equation (2) allows us to examine theory-implied non-linearities in international price-reversion behavior. The parameters $\lambda_1^{out}$ and $\lambda_1^{in}$ to be estimated are of particular interest. The first measures the speed at which price differentials between markets revert back to the band once they cross the thresholds; the second, $\lambda_1^{in}$, relates to the speed of convergence within the band of no trade. Model (2) must obey certain restrictions, such as $\beta_a^{l} = \beta_a^{u}$ and $\beta_w^{out,l} = \beta_w^{out,u}$, due to the fact that party i's export is party j's import and vice versa. With these restrictions imposed, the general TAR model that we consider is equation (3), where $w_{i,j,t}$ can be a collection of variables such that $w_{i,j,t} = -w_{j,i,t}$. We consider price comparisons within the U.S. (UU), between the U.S. and the European Union (UE), and between the U.S. and other countries (UO), for goods and services separately, by modelling the trade-cost and wage terms as follows. We estimate five variants of the model:

(M0) $\tau = \delta_0$, no $w_{i,j,t}$, no $\lambda_2^{out} q_{i,j,t-2}$.
(M1) $\tau = \delta_0 + \delta_1 \ln(dist_{i,j})$, $w_{i,j,t}$ only, no $\lambda_2^{out} q_{i,j,t-2}$.
(M2) $\tau = \delta_0 + \delta_1 \ln(dist_{i,j})$, $w_{i,j,t}$ and $w_{i,j,t} \cdot \ln(dist_{i,j})$, no $\lambda_2^{out} q_{i,j,t-2}$.
(M3) $\tau = \delta_0 + \delta_1 \ln(dist_{i,j})$, $w_{i,j,t}$ only, $\lambda_2^{out} q_{i,j,t-2}$ included.
(M4) $\tau = \delta_0 + \delta_1 \ln(dist_{i,j})$, $w_{i,j,t}$ and $w_{i,j,t} \cdot \ln(dist_{i,j})$, $\lambda_2^{out} q_{i,j,t-2}$ included.

Here $dist_{i,j}$ is the geographical distance between i and j. Specification (M0) is the simplest TAR model and excludes the terms for distribution costs and wages. Specification (M1) is a structural TAR model with neither interaction effects nor any higher-order price-adjustment terms included. Model (M2) includes interaction effects but excludes any higher-order adjustment terms. Models (M3) and (M4) add a second-order AR term to (M1) and (M2), respectively.
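As a rough illustration of how a three-regime equation like (2) can be taken to data, the sketch below runs regime-wise least squares with the thresholds treated as known constants. This is a deliberate simplification: the paper's estimator involves pair-specific thresholds built from distribution costs and trade costs, cross-equation restrictions, and the extra terms in (M2)-(M4).

```python
import numpy as np

# Simplified regime-wise least squares for a three-regime TAR in the
# spirit of equation (2), assuming the thresholds a_u and a_l are known.
def fit_tar(q: np.ndarray, w: np.ndarray, a_u: float, a_l: float) -> dict:
    dq, q_lag, w_t = np.diff(q), q[:-1], w[1:]
    results = {}
    for name, mask, center in [
        ("upper", q_lag > a_u, a_u),
        ("inner", (q_lag <= a_u) & (q_lag >= a_l), 0.0),
        ("lower", q_lag < a_l, a_l),
    ]:
        if mask.sum() < 3:          # not enough observations in this regime
            results[name] = None
            continue
        X = np.column_stack([q_lag[mask] - center, w_t[mask], np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(X, dq[mask], rcond=None)
        results[name] = coef        # [lambda, beta_w, intercept]
    return results

rng = np.random.default_rng(0)
q = np.cumsum(rng.normal(0.0, 0.05, 500))   # placeholder relative-price series
w = rng.normal(0.0, 0.02, 500)              # placeholder wage differentials
print(fit_tar(q, w, a_u=0.1, a_l=-0.1))
```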
Data The source of our micro price data is the Worldwide Cost of Living Survey collected by the Economist Intelligence Unit (EIU). The survey covers 300 individual retail goods and services across 140 cities in 91 countries semiannually over the period 1990-2013. Bergin, Glick, and Wu (2013), Andrade and Zachariadis (2016), and Glushenkova, Kourtellos and Zachariadis (2018) also use these semi-annual EIU data, or subsets of them, to study issues of price adjustment. The online appendix of Andrade and Zachariadis (2016) discusses issues related to sample selection and the reliability of this dataset in great detail. As explained there, this dataset is suitable to address the key questions at hand regarding price dispersion and price adjustment across countries. First, these survey prices are quite comparable across cities as they are usually specific in terms of both quality and quantity, e.g., aspirin (100 tablets), Coca Cola (1 liter), and tennis balls (six, Dunlop). Moreover, these price data are collected in a consistent manner by a single agency. Finally, since the data are absolute prices for goods and services rather than indexes, we are able to evaluate the absolute magnitude of cross-sectional LOP deviations and the resulting price adjustment of each item. Prices for most tradeable goods are sampled from two different outlets, a supermarket/chain store and a mid-priced/branded store, and are separately reported in the survey. We examine the dynamics of relative prices for both types of outlets, but report results from the supermarket/chain store due to its higher comparability across locations. By doing so, we avoid the same goods appearing more than once in our analysis. Later, we compare the convergence speeds of these two types of outlets to check whether prices from low-price outlets (supermarket/chain stores) exhibit different reverting behavior than prices in mid-priced stores. Main results Our study differs from previous work in that we estimate convergence speeds for goods and services inside the band, allowing for the effect of wage differentials, in addition to estimating rates of convergence outside the band. Previous studies focused on price adjustment of tradables outside the band, in reduced-form settings. We outline the main results arising from our structural approach to the data below and present more details in the subsections that follow. Accounting for theoretically-implied variables within a structural TAR model, we find that good-level convergence rates are systematically faster as compared to those implied by the reduced-form TAR models previously considered. As expected, price shocks are relatively short-lived for non-services and for city pairs within a country (the U.S. in particular). Contrary to conventional wisdom, the process of price differentials does not necessarily follow a random walk when trade does not occur. Estimated trade costs vary widely across individual goods and services and across locations. Trade costs for services are comparable to those for tradeable goods and are increasing in distance. Finally, non-linearities in the form of interactions between the traded and non-traded channels play a role in the convergence process of price differences across the world. Accounting for the interaction between wages and physical distance, convergence rates become somewhat slower as compared to the case where this non-linearity is ignored, but still faster than rates of convergence from reduced-form TAR models. Convergence rates In this subsection, we describe the results arising from our structural estimation of convergence rates in more detail. The average (across goods or services) speed outside the band, $\lambda_1^{out}$, at which price differentials between markets revert back to the band once they cross the thresholds, and the mean convergence speed within the band, $\lambda_1^{in}$, along with the corresponding half-lives, are reported in Tables 1 and 2, respectively. In each case, we present separate results for comparisons of locations within the US (UU), between the US and European Union countries (UE), and between US and other country locations (UO), separately for goods and services. The next few findings from Tables 1 and 2 constitute our main contribution in terms of novel empirical evidence. First, in all cases considered, the structural TAR models without (M1, M3) and with (M2, M4) interaction effects imply a faster convergence speed $\lambda_1^{out,T}$ for tradeable goods than the standard reduced-form TAR model (M0) that has typically been estimated in previous work.
(This can be inferred by comparing the first column of results in Table 1 with the second to fifth columns of results shown there.) Second, the structural TAR models with interaction terms (M2 and M4) imply somewhat slower convergence speeds than those without interaction terms (M1 and M3), as can be seen by comparing column two with column three and column four with column five in Table 1. This suggests that accounting for interactions between the traded and non-traded components, as in Glushenkova, Kourtellos and Zachariadis (2018), provides us with lower estimates of price convergence as compared to our models which exclude these interactions. We also find, somewhat surprisingly, that implied convergence speeds for services ($\lambda_1^{out,S}$) for comparisons within the US are usually comparable to those for goods ($\lambda_1^{out,T}$) for comparisons (UE and UO) across countries, as can be seen in Table 1. In particular, models M0, M1, M2 and M3 show that convergence speeds for services within the U.S. are comparable to, and sometimes faster than, those for goods (tradeables) across countries, which tells us that price differentials of services within the U.S. are arbitraged away as quickly as those of tradables between the U.S. and other countries. This suggests that the role labor mobility across US cities plays for price convergence within the US is comparable in force to the role played by trade in final goods across international locations. Our last potentially important finding is that, as shown in Table 2, convergence speeds inside the band implied for goods ($\lambda_1^{in,T}$) are faster than one would have expected based on the findings and assumptions in previous work. In particular, one would expect that price differentials follow a random walk process within the band, where no adjustment takes place, which is why a body of literature imposes the assumption that $\lambda_1^{in} = 0$. This view is not necessarily correct based on our estimates in Table 2. For example, in the case of comparisons between US and EU cities, our benchmark structural TAR model M1 suggests a half-life of 4.6 years within the band, as shown in Table 2. This compares to a half-life of 2.5 years outside the band, as shown in Table 1 for the same model and set of bilateral comparisons. As reported in Table 3, on average, the implied share of traded inputs for goods amounts to 70%. This means that price differentials within the band of inaction can be less persistent than expected, as price differentials of traded inputs contained in the final good tend to be arbitraged away. It then comes as no surprise that the convergence speed $\lambda_1^{in,T}$ within the US is estimated to be faster than for international comparisons (UE and UO). The implied half-life for tradeable goods inside the band for our benchmark structural TAR model M1, shown in Table 2, is 4.6 years for UE comparisons and 3.3 years for UO comparisons, as compared respectively to 5.5 years and 4.2 years for the half-life of services outside the band for model M1 in Table 1. We note that the share of traded inputs in services is only 33%, and thus there is little error-correction force driven by traded inputs. Instead, price differentials of services within the band will follow a process determined mostly by changes in local demand and supply, so that half-lives for services within the band of inaction can be huge, as shown in Table 2. For instance, our benchmark structural TAR model M1 implies a half-life of over 14 years within the band for price comparisons between the U.S. and EU countries for services.
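The half-lives quoted above can be recovered from estimated convergence speeds with a back-of-the-envelope AR(1) approximation; the sketch below shows the mapping, using an illustrative λ rather than any estimate from the tables.

```python
import math

# Map a convergence speed lambda (as in dq_t = lambda * q_{t-1} + ...)
# to a half-life, assuming an AR(1) approximation. The survey is
# semiannual, so two periods make a year.
def half_life_years(lam: float, periods_per_year: int = 2) -> float:
    rho = 1.0 + lam                       # AR(1) persistence of q
    return math.log(0.5) / math.log(rho) / periods_per_year

# Illustrative value: a lambda of about -0.072 per half-year implies a
# half-life of roughly 4.6 years, the order of magnitude cited above.
print(round(half_life_years(-0.072), 1))
```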
Our last set of findings from Tables 1 and 2 serves to confirm previously established stylized facts and, in doing so, to ensure the relevance of our data and methodology. First, as we can see in Table 1, implied convergence speeds for goods ($\lambda_1^{out,T}$) are faster than for services ($\lambda_1^{out,S}$) in all cases, irrespective of the statistical model or set of price comparisons considered. That is, it takes longer for price differentials of services, as compared to those of tradeable goods, to adjust. Second, as we can also see in Table 1, implied convergence speeds for goods ($\lambda_1^{out,T}$) within the U.S. (UU) are higher than those between countries (UE and UO) for all statistical models considered, implying a higher degree of market integration and arbitrage within a country. Furthermore, this holds for goods as well as services, suggesting that the mechanisms bringing prices closer faster within a country do not just relate to trade in goods but also, perhaps, to how fast factors of production move within a country as compared to across countries. Third, implied convergence speeds for goods ($\lambda_1^{out,T}$) outside the "bands of inaction" shown in Table 1 are faster than those inside those bands ($\lambda_1^{in,T}$ and $\lambda_1^{in,S}$) shown in Table 2 for the structural TAR models estimated here. This means price differentials outside the band are relatively short-lived as compared to those within the band, indicating the presence of TAR-type non-linear adjustment of price differentials. Reassuringly, the slowest convergence speeds we find are within the bands of inaction for services, irrespective of the statistical model and the bilateral set of comparisons being considered. These findings square well with conventional wisdom, providing compelling evidence that our model and estimation methodology correctly capture the reverting properties of Law-of-One-Price deviations. Next, to help understand the role potentially played by the tradability of final goods, but also by the share of non-traded inputs embodied in any final good, Table 4 reports correlations between (absolute values of) convergence speeds for individual goods and their characteristics, namely the degree of tradability and the share of non-traded inputs. Because the tradability and non-traded input share variables are measured by industry and are more aggregated than the retail price data we have at hand, we assign each good-specific estimate of convergence speed to an industry and then choose the median for each industry to use as that industry's measure of convergence speed. Noting that the number of goods in each industry varies widely, we use these numbers as weights in computing the Pearson correlation coefficients. In Table 4, we show that convergence speeds are positively associated with tradability but negatively related to the non-traded input share both outside and inside the bands, which supports our assertions. The λs used are from the benchmark model, M1. Although not reported here, convergence speeds estimated from other model specifications (M2, M3, M4) give similar results, namely substantially large and economically significant correlations. For price comparisons between the U.S.
and the EU, the correlations equal 42% (65%) between tradeability and the estimated convergence rates outside (inside) the band, and minus 85.4% (90.3%) between the latter convergence rates and non-traded input shares. Overall, these strong, statistically significant correlations suggest that the heterogeneity in convergence rates we estimate across individual goods and services is meaningful, in that it relates sensibly to their economic characteristics. In order to further understand the role of goods' characteristics for price convergence, we consider and contrast sub-categories of tradable goods. First, we consider perishable versus non-perishable goods. Perishable goods (for example, fresh chicken) decay more easily within a short period of time than non-perishable goods (frozen chicken), and hence are less likely to be traded. However, if price differentials are large enough to induce trade, the nature of perishability makes arbitrage more active (i.e., urgent) and therefore leads to faster price reversion towards the band for perishable goods. This is exactly what we find in Table 5: implied convergence speeds outside the band for perishable goods are faster than those for non-perishable ones. Second, we consider goods sold at supermarkets versus goods sold at high-price or brand stores. Consumers who shop at supermarkets tend to price-shop for frequently purchased goods, while firms have more incentive to charge different markups across brand stores. Therefore, one would expect more persistent price differences across locations for goods sold at brand outlets, indicative of faster price reversion of goods at supermarkets. This is what we observe in Table 5: convergence is faster for supermarkets compared to brand stores and other mid-priced outlets. Implied trade costs In addition to helping us obtain theory-consistent estimates of convergence, our structural estimation approach also provides additional meaningful parameter estimates, which we discuss in this and the next subsection. For instance, the estimated parameter $\delta_0$ of the structural models M1, M2, M3 and M4 specified in section 2 captures the component of trade costs not explained by distance. This parameter can then be related to things like the border, taxes, and pricing-to-market. The parameter $\delta_{dist}$, on the other hand, relates to the impact of distance on trade and could be thought of as the component of trade costs related to distance. We can see several features in Tables 6 and 7, where we report mean values of $\delta_0$ and $\delta_{dist}$, respectively. First, there is variation in estimated trade costs both across different types of items and across different bilateral comparisons, e.g., within versus across countries. This points to the importance of product-specific and location-specific factors in characterizing international market frictions. Consistent with conventional wisdom, both $\delta_0$ and $\delta_{dist}$ dramatically increase as we move from within-country comparisons (UU) to comparisons across countries (UE and UO) for the structural models we consider, M1, M2, M3 and M4. (For the reduced-form model, M0, $\delta_0$ also goes up, but less dramatically.)
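A tiny helper makes the decomposition concrete: under the specification $\tau = \delta_0 + \delta_{dist} \ln(dist_{i,j})$ used in M1-M4, the implied trade cost, and hence the no-trade band, widens logarithmically with distance. The coefficients below are hypothetical placeholders, not the paper's estimates.

```python
import math

# Implied trade cost under tau(i, j) = delta_0 + delta_dist * ln(dist_ij).
# delta_0: non-distance component (border, taxes, pricing-to-market);
# delta_dist: distance component. Values here are assumptions.
def implied_trade_cost(delta_0: float, delta_dist: float, dist_km: float) -> float:
    return delta_0 + delta_dist * math.log(dist_km)

for dist in (500.0, 2000.0, 8000.0):
    print(int(dist), "km ->", round(implied_trade_cost(0.05, 0.02, dist), 3))
```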
Inconsistent with our priors, we see in Table 6 that services have on average comparable trade costs to tradable goods. One would expect higher non-distance-related trade costs, $\delta_0$, for services. However, the last feature we uncover poses a challenge to the traditional view that $\delta_0$ is necessarily higher for services. We find that $\delta_0$ is not systematically higher for services as compared to tradable goods. In fact, for bilateral comparisons between the US and Europe for our structural TAR models M1 to M4, $\delta_0$ is always lower for services as compared to tradables. In Table 7, we see that the impact of distance on trade, $\delta_{dist}$, is not systematically higher for tradable goods as compared to services. A possible reason for this is that many service industries exhibit geographic concentration in production and therefore have trade costs similar to manufacturing industries, pertaining to the apparent role we estimate for $\delta_{dist}^{S}$. A second reason could be that, considering there exists a significant degree of home bias in the consumption of services, it would be natural for trade costs of services to depend significantly upon distance, i.e., a relatively high $\delta_{dist}^{S}$. Finally, based on Table 7, we note that perishable goods exhibit a larger $\delta_{dist}$ than non-perishable goods for within-country price comparisons (UU), but a comparable $\delta_{dist}$ in an international context. This is probably because perishable goods are processed to be non-perishable when they are traded over long distances (internationally). Wage differentials High wage differentials are likely to hinder price adjustment by preventing price differentials from being arbitraged away. This is because higher income differentials are associated with larger differences in local costs and with a higher ability of firms to price-to-market. Our interest in wage differentials is motivated by basic features of consumer purchasing behavior. Consumers spend a considerable amount of time in search-related activities such as shopping. This search intensity is related to the opportunity cost of time, so that high-income consumers tend to search less per purchase than low-income consumers. Thus, a change in the relative wage across locations changes the relative cost of consumer search, so that consumers in a relatively high-income region search less intensively than consumers in a low-income region. This effectively makes firms vary their markups to these markets accordingly. This pricing-to-market leads to larger price dispersion across locations, as in Alessandria and Kaboski (2011) or Alessandria (2009). In this sense, if wages are greatly dispersed across cities in a particular region, the prices of a good will be widely dispersed as well.
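Under a specification with the interaction term, the net effect of a wage differential combines the level and interaction coefficients, effect = $\beta_w + \beta_{w \cdot dist} \ln(dist)$; the sketch below evaluates it at the average UO estimates reported in the next paragraph.

```python
import math

# Net wage-differential effect once the distance interaction is included:
#   effect = beta_w + beta_w_dist * ln(dist).
# The averages quoted in the next paragraph (about -0.014 and 0.005 for
# UO comparisons under M4) give a positive net effect at 2,000 km.
def wage_effect(beta_w: float, beta_w_dist: float, dist_km: float) -> float:
    return beta_w + beta_w_dist * math.log(dist_km)

print(round(wage_effect(-0.014, 0.005, 2000.0), 3))  # ~0.024
```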
Based on the above, our prior is that adjustment of price differentials will be slower the larger income differentials are. In this sense, $\beta_w^{out}$ in equation (3) is expected to be positive. This is exactly what we see in Table 8. Thus, wage differentials are negatively associated with the speed of price adjustment, implying a potential role for consumers' search intensity and firms' pricing-to-market interacting to produce persistent price deviations. In the case of comparisons between the US and other countries (UO), the interaction terms show that this effect is stronger for city pairs that are farther apart. This implies that markets are more segmented for city pairs that are farther apart (i.e., $\beta_{w \cdot dist}^{out} > 0$) the larger income differentials are. This becomes evident as we move from within-country comparisons (UU) or US-Europe comparisons (UE) to UO. As is evident in Table 8, the effects of wage differentials are all positive once we incorporate the interaction terms. In the case of UO price comparisons, M4 predicts an average $\beta_w^{out}$ and $\beta_{w \cdot dist}^{out}$ of -0.014 and 0.005, respectively, indicating that the net effect of wage differentials for two cities 2,000 kilometers apart will be $-0.014 + 0.005 \times \ln(2000) \approx 0.024$. Conclusion We consider a structural threshold auto-regressive model to estimate product-level price convergence rates that do not suffer from the misspecification and omitted-variables bias present in previous work. Using a detailed dataset of retail prices, we estimate convergence rates both within and outside the no-trade band for individual goods and services, within and across countries. Accounting for the role of theoretically-implied variables in the convergence process, we find that good-level convergence rates are systematically faster as compared to those implied by estimating reduced-form models. The heterogeneity of these convergence rates across product items relates strongly to their economic characteristics. Individual rates of convergence are positively associated with tradability but negatively related to the non-traded input share, both outside and inside the bands, while goods that are more perishable exhibit faster reversion than non-perishables. Furthermore, consistent with conventional wisdom, implied trade costs dramatically increase as we move from within-country comparisons to comparisons across countries. Inconsistent with our priors, services have somewhat comparable trade costs to tradable goods. In addition, we estimate in our context that non-linearities in the form of interactions between the traded and non-traded channels play a role in the convergence process of price differences across the world. Accounting for the interaction between wages and physical distance, we typically find that good-level convergence rates are somewhat slower than in the case where this form of non-linearity is ignored, but still faster than estimates from reduced-form TAR models. Moreover, wage differentials are negatively associated with the speed of price adjustment, which suggests a role for consumers' search intensity and firms' pricing-to-market interacting to produce persistent price deviations, along the lines of Alessandria and Kaboski (2011), where costly consumer search makes local wages matter for the price-setting behavior of firms.
where $X_{i,t}$ is the amount of exports from country i measured before trade and distribution costs, while the same shipment divided by the cost factors $(1+\eta)(1+\tau)$ is the amount of imports into country j measured after trade and distribution costs. The appearance of the cost factors in the denominator is the essence of the iceberg cost assumption: a proportion of the shipped traded good is lost before it arrives at the importing destination. As in Crucini et al. (2005), retailers combine tradable goods with nontradable services using a Cobb-Douglas function to place the retail goods in outlets, which yields the following expression for the retail price:

$$P_{j,t} = \left(P_{j,t}^{T}\right)^{\delta} \left(P_{j,t}^{N}\right)^{1-\delta} \qquad (12)$$

Assuming that financial markets are frictionless and complete, the model is solved as a central-planner problem whose objective is to maximize aggregate utility by choosing the amount of trade (equation (13)), subject to constraints (5)-(10). When financial markets are complete, the ratio of marginal utility of consumption between countries is linked to international relative prices. From a standard Lagrangian problem of a central planner, the (logarithm of the) international relative price of retail goods, defined by the ratio of the retail price of country j to the retail price of country i, is then given by equation (14), where $\eta_{i,j} = \eta_j - \eta_i$, $w_{i,j,t} = w_{j,t} - w_{i,t}$, $\theta_1 = \frac{\gamma\alpha - \alpha + \delta}{1 - \alpha + \alpha\gamma}$ and $\theta_2 = \delta(\alpha + \gamma - \alpha\gamma)$. All lowercase letters denote logarithms of the corresponding variables. Equation (14) shows that trade and distribution costs, along with wage differentials, determine the band of inaction around which trade patterns and the resulting international relative prices are characterized. When gains from trade are sufficiently large to cover goods' market frictions, arbitrage takes place and the price in the importing country is higher than in the exporting country by the weighted average of goods' market frictions and wage differences. Even in the absence of distribution costs (i.e., $\eta_j = \eta_i = \tau = 0$), wage differences will still drive a natural wedge between prices across locations. As a result of distribution costs, international relative prices do not move in tandem with wage differences within the band. In the extreme case where all market frictions are eliminated and labor markets are perfectly integrated (i.e., $w_{i,t} = w_{j,t}$), the central planner sets the optimal relative consumption equal to unity and corrects any deviations from unity by re-allocating goods, so that international relative prices are equal to unity and the LOP unambiguously holds. Restrictions on parameters Since $q_{i,j,t} = -q_{j,i,t}$, equation (2) can be written as

$$-\Delta q_{j,i,t} = \begin{cases} -\lambda_1^{out,u}\,(q_{j,i,t-1} + a_{i,j}^{u}) + \beta_a^{u}\, a_{i,j}^{u} + \beta_w^{out,u}\, w_{i,j,t} + e_{i,j,t}^{out} & \text{if } -q_{j,i,t-1} > a_{i,j}^{u} \\ -\lambda_1^{in}\, q_{j,i,t-1} + \beta_1\, \eta_{i,j} + \beta_w^{in}\, w_{i,j,t} + e_{i,j,t}^{in} & \text{otherwise} \\ -\lambda_1^{out,l}\,(q_{j,i,t-1} + a_{i,j}^{l}) + \beta_a^{l}\, a_{i,j}^{l} + \beta_w^{out,l}\, w_{i,j,t} + e_{i,j,t}^{out} & \text{if } -q_{j,i,t-1} < a_{i,j}^{l} \end{cases}$$

or, equivalently,

$$\Delta q_{j,i,t} = \begin{cases} \lambda_1^{out,u}\,(q_{j,i,t-1} - (-a_{i,j}^{u})) + \beta_a^{u}\,(-a_{i,j}^{u}) + \beta_w^{out,u}\,(-w_{i,j,t}) - e_{i,j,t}^{out} & \text{if } q_{j,i,t-1} < -a_{i,j}^{u} \\ \lambda_1^{in}\, q_{j,i,t-1} - \beta_1\, \eta_{i,j} + \beta_w^{in}\,(-w_{i,j,t}) - e_{i,j,t}^{in} & \text{otherwise} \\ \lambda_1^{out,l}\,(q_{j,i,t-1} - (-a_{i,j}^{l})) + \beta_a^{l}\,(-a_{i,j}^{l}) + \beta_w^{out,l}\,(-w_{i,j,t}) - e_{i,j,t}^{out} & \text{if } q_{j,i,t-1} > -a_{i,j}^{l} \end{cases}$$

In addition, by observing that $\eta_{i,j} = -\eta_{j,i}$, that $a_{i,j}^{u} = \eta_{i,j} + \tau = -a_{j,i}^{l}$ and, similarly, $a_{i,j}^{l} = -a_{j,i}^{u}$, and that $w_{i,j,t} = w_{j,t} - w_{i,t} = -(w_{i,t} - w_{j,t}) = -w_{j,i,t}$, we can see that equation (2) is equivalent to the following equation (15).
Here, we implicitly assumed $e_{i,j,t}^{out} = -e_{j,i,t}^{out}$ and $e_{i,j,t}^{in} = -e_{j,i,t}^{in}$, which can be justified when we assume $e_{i,j,t}^{out} = e_{j,t}^{out} - e_{i,t}^{out}$ and $e_{i,j,t}^{in} = e_{j,t}^{in} - e_{i,t}^{in}$. Since the indices i and j are nominal, equation (15) must hold when we relabel i as j and j as i, i.e.,

$$\Delta q_{i,j,t} = \begin{cases} \lambda_1^{out,u}\,(q_{i,j,t-1} - a_{i,j}^{l}) + \beta_a^{u}\, a_{i,j}^{l} + \beta_w^{out,u}\, w_{i,j,t} + e_{i,j,t}^{out} & \text{if } q_{i,j,t-1} < a_{i,j}^{l} \\ \lambda_1^{in}\, q_{i,j,t-1} + \beta_1\, \eta_{i,j} + \beta_w^{in}\, w_{i,j,t} + e_{i,j,t}^{in} & \text{otherwise} \\ \lambda_1^{out,l}\,(q_{i,j,t-1} - a_{i,j}^{u}) + \beta_a^{l}\, a_{i,j}^{u} + \beta_w^{out,l}\, w_{i,j,t} + e_{i,j,t}^{out} & \text{if } q_{i,j,t-1} > a_{i,j}^{u} \end{cases} \qquad (16)$$

Comparing equations (2) and (16), we can see that the following restrictions must hold: $\beta_a^{u} = \beta_a^{l}$ and $\beta_w^{out,u} = \beta_w^{out,l}$, as imposed in section 2.1. The variables $w_{i,j,t}$ have the property that $w_{i,j,t} = -w_{j,i,t}$. If we are to include a new variable $z_{i,j,t}$ in the model that has the property $z_{i,j,t} = z_{j,i,t}$, such as a constant, then the coefficients of that variable, namely $\beta_z^{out,u}$, $\beta_z^{out,l}$ and $\beta_z^{in}$ in (16), must satisfy the following restrictions (negative symmetric coefficients): $\beta_z^{out,u} = -\beta_z^{out,l}$ and $\beta_z^{in} = 0$. Applying these restrictions, we have the model (3) in section 2.1. The models we estimated are (M0)-(M4), as described in section 2.1. We estimated all five models for each good category; the good index is omitted for notational simplicity. The definitions and notation of this subsection are based on (M4); the $x_{i,j,t}^{out,u}(\theta)$, $x_{i,j,t}^{out,l}(\theta)$, $x_{i,j,t}^{in}(\theta)$, $\beta^{out}$, and $\beta^{in}$ below may be defined appropriately for other models, but the change of definitions should be straightforward. We define $x_{i,j,t}^{out,u}(\theta)$, $x_{i,j,t}^{out,l}(\theta)$, and $x_{i,j,t}^{in}(\theta)$ as regressor vectors, the argument indicating that they are functions of $\theta$; model (3) is then written in terms of these. Table 1. Convergence speed of LOP deviations: Outside the band. Table 2. Convergence speed of LOP deviations: Inside the band. Notes: We report the mean (averaged across goods) speed at which price differentials between markets revert back to the band once they cross the thresholds (Table 1), and the mean speed at which price differentials converge within the band (Table 2). M1 stands for our benchmark structural TAR model. M2 adds interaction effects to M1. M3 is the AR2 version of M1. M4 is the AR2 version of M2. UU signifies comparisons of prices within the US. UE signifies comparisons of prices between the US and European Union countries. UO signifies comparisons of prices between the US and other countries. Table 3. Tradability and non-traded input shares. Notes: We report Pearson correlation coefficients computed using weighted means, weighted variances and weighted covariances. Table 5. $\lambda^{out}$ by sub-category of goods.
Table 8. Averages of $\beta_w^{out}$ and $\beta_{w \cdot dist}^{out}$. Notes: We report averages of wage-related coefficients across goods. M0 stands for the reduced-form TAR model. M1 stands for our benchmark structural TAR model. M2 adds interaction effects to M1. M3 is the AR2 version of M1. M4 is the AR2 version of M2. UU signifies comparisons of prices within the US. UE signifies comparisons of prices between the US and European Union countries. UO signifies comparisons of prices between the US and other countries.
2019-05-20T13:05:01.749Z
2018-05-01T00:00:00.000
{ "year": 2023, "sha1": "e358812971f0201f51f81249d990dcdb7958264e", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/caje.12672", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f2e66c735562a71f350f677551470fcc3c2a23bc", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
259034258
pes2o/s2orc
v3-fos-license
Reviving India's economy through Goods and Services Tax (GST): A conceptual study GST is a game changer for its far-sweeping impact on businesses and end-consumers. Manufacturers, traders and service providers across India have been placed under one unified tax umbrella. They no longer need to work with the tedious array of distinct taxes they were previously required to comply with. Despite its many advantages, in the current scenario the end consumer continues to face price hikes in basic amenities. The bill was brought in to implement one tax across the country, but it has turned out to be a significant drawback and a heavy burden for the poor and the middle class. The current picture of GST is not as rosy as it was portrayed, and the principle on which GST works, i.e., one nation, one tax, is itself open to question. Introduction There is no denying that the yield from indirect taxes is the bedrock of the government's revenue. Indirect taxation consists of all those taxes which are indirectly collected from consumers on the consumption of goods and services by the union and state governments through intermediaries at different stages. Indirect tax is the base and foundation of the Central Government's revenue. Goods and Services Tax (GST), an indirect tax, is a comprehensive and embracive tax on the manufacture, sale and consumption of goods and services throughout the country. It is a single indirect tax for the whole nation, making India a unified common market. In the case of indirect tax, the liability for the tax can be passed on to someone else. This means that when the wholesaler pays value-added tax (VAT) on his sale, he can pass the liability on to the retailer, and when the retailer pays the tax, he can pass the liability on to the customer. So, in effect, the consumer pays the price of the item plus the VAT levied at various levels. This whole system leads to double taxation (Banik & Das, 2018) [5]. The system directly hits the final consumer, who has to bear the entire burden, largely borne by poor people who spend a major portion of their income on the consumption of goods. To ease this burden on the consumer and to release them from the load of all these taxes, the Government of India introduced the concept of GST, i.e., Goods and Services Tax. GST was launched as the 101st amendment of the Indian Constitution at midnight of June 30 to July 1, 2017, applicable to the whole of India. The Central Hall of Parliament witnessed the mega launch of GST by the President and Prime Minister of India (Srivastava & Srivastava, 2017) [17]. Applauded and glorified by the Central Government, GST came into existence as the most awaited and conscientious tax rejuvenation after 17 years of heated arguments (since 2000, when it was first proposed) between the Centre and the states. Being an indirect tax, GST replaces the plethora of indirect taxes which were previously imposed by the Union and State Governments. GST is a landmark amendment in the history of the Indian taxation system. The amendment is expected to increase GDP by a percentage point or even more (Nayyar & Singh, 2018) [16]. Objective of the Study The objectives of the present study are as follows: to illuminate the concept of GST; to understand the cascading effect of the former tax structure; and to highlight the pros (benefits) and cons (flaws and adverse effects) of GST. Research Methodology Being conceptual in nature, the present study is based on secondary data extracted from journals, articles, newspapers, and magazines.
Considering the objectives of the study, a descriptive research design has been adopted to allow a more accurate and rigorous analysis (Tabassum & Yameen, 2022) [19]. Available secondary data has been used extensively for the study. Classification of Indian Taxation India has a well-structured and simplified taxation system, wherein authoritative segregation has been done among the Central Government, the different State Governments, and the Local Bodies. The Department of Revenue under the Government of India's Ministry of Finance is solely responsible for the computation of tax. This department levies taxes on individuals or organizations for income, customs duties, service tax and central excise. However, the agriculture-based income tax is imposed by the respective State Government. Local bodies have the power to compute and levy tax on properties and other utility services like drainage, water supply and many others. The past 15 years have witnessed tremendous reform of the taxation system in India. Apart from the rationalization of tax rates, many steps for the simplification of the different taxation laws have been taken during this period. However, the process of tax rationalization is still in progress (Nayyar & Singh, 2018) [16]. Cascading Effect: Issue of the Present Multi-staged Tax System One of the primary goals of a taxation regime is always the avoidance of "taxation over taxes", or the "cascading effect" of incident taxes. The cascading effect of taxes is one of the significant distortions of the Indian taxation regime. This cascading, caused by a variety of charges by the Union and State Governments, has raised the tax burden on Indian products and made them less competitive in the international market (Tyagi et al., 2019) [21]. The gargantuan size of corporate taxes owes much to this taxation structure and has led to the adoption of tax-evasion practices. A common person finds himself throttled in the Gordian knot of multiple tax rates, laws and elaborate processes, and often fails to comply with these complex legislations. The extra tax paid due to taxation of the already-taxed amount is finally borne by the end-user or consumer, i.e., the common man, and badly affects him in addition to inflation (Tyagi et al., 2019) [21]. The federal structure of our democracy allows the states and the Centre to levy taxes separately, which has mainly caused the cascading of taxes. While excise duty, service tax and central sales tax are levied by the Central Government, VAT, entry tax, state excise, property tax, agriculture tax, luxury tax and more are charged by the State Governments. There are many transactions which come under the ambit of two or more of these taxes, and the value of the second tax is arrived at by adding the value of the first tax to the value of the transaction. The prevalent complex, multi-staged and cascading tax structure negates the advantage of the availability of cheap labour and other factors of production. It brings the market price (post taxes) to par with or above international prices. Hence, the manufacturing industry of India is not able to compete with other countries due to the high market prices of its products (Bhattacharjee & Bhattacharya, 2018) [6]. Goods & Service Tax (GST): An Integrated Tax System GST is a comprehensive and concise indirect tax on the manufacture, sale and consumption of goods and services throughout India. The introduction of GST is a significant step in transforming the Indian indirect tax system.
The simplicity of the tax will lead to easier administration and enforcement. Amidst the economic crisis across the globe, India has posed a beacon of hope with ambitious growth targets, supported by a strategic undertaking named GST that is expected to provide the much-needed stimulant for economic growth in India by transforming the existing base of indirect taxation towards the free flow of goods and services throughout the nation. GST will turn India into one common national market, leading to greater ease of doing business. From the consumer's perspective, the greatest benefit would be a reduction in the overall tax burden, currently estimated at 25-30%, free movement of goods from one state to another, and a reduction in paperwork to a large extent (Singh & Sarva, 2016) [18]. The Frame of Reference for GST in India In general, there are two main models of GST, i.e., unitary and dual. In the former, the Union Government collects GST; in the latter, both the Union and State Governments collect GST. India has chosen to adopt the dual system of GST. A four-tier structure has been formulated for running the GST regime in India, which includes CGST (Central GST), SGST (State GST), UTGST (Union Territory GST) and IGST (Integrated GST). CGST and IGST are the concern of the Union Government, while the other two are matters for the states and Union Territories. IGST is applicable to inter-state sales and will facilitate smooth transfers between the states and the Union (Kumar, 2016) [14]. Demystifying the Tax Slab The rates have been codified in the four-tier tax-slab structure under the GST regime, which covers every product, commodity and service sold or bought in India. Interestingly, a 0% tax rate has also been introduced, under which essential commodities of everyday use, such as food grains, rice, wheat, etc., are included. This will help the rural population to a great extent and help in controlling inflation. There is a special rate of 0.25% on rough, precious, and semi-precious stones and 3% on gold (Gade et al., 2022) [9]. However, three items do not fall under the purview of GST: alcohol for human consumption; petroleum products such as petroleum crude, motor spirit (petrol), high-speed diesel, natural gas and aviation turbine fuel; and electricity, which has been kept outside the ambit of GST (Gupta, 2014) [10]. Reasons for Adopting the Dual GST System India is a federal country where the Centre and the States have been assigned the powers to levy and collect taxes through appropriate legislation. Both levels of government have distinct responsibilities, according to the division of power prescribed in the Constitution, for which they need to raise resources. A dual GST is, therefore, in keeping with the Constitutional requirement of fiscal federalism (Khan & Shadab, 2012) [12]. GST is a consumption-based tax, where the revenue of State Governments depends on the consumption of goods and services within the State. Consequently, some states may suffer revenue losses. With a view to preventing such losses, the Union Government has assured that if any state suffers lower income generation due to the introduction of GST, that portion of the revenue loss will be compensated by the Union Government for five years from the date of implementation of the GST Act (Gupta, 2014) [10]. GST in Countries Other than India Presently, there are around 160 countries which follow a GST or VAT system in some form or other. France was the first country to implement GST, to reduce tax evasion (Dey, 2020).
Similar to the Indian context, only Canada has a dual GST system. Throughout the world, the rate of GST generally ranges between 15% and 20%; it may be higher or lower in a few countries. India does not follow an ideal GST but its own version, termed the Indian GST; nevertheless, the Indian GST has the highest rate of tax compared with the rest of the world. The economy of a country grows when its people and their businesses grow, and there is an increase in the Government's revenue in the long run. GST has been introduced to foster morality and to deliver corruption-free trade throughout the country. The following points describe the benefits of GST for all: 1. GST shall ease the way of doing business and attract foreign capital and foreign investment into the country. 2. GST shall reduce the overall transaction cost of businesses, which will affect business positively. 3. GST will curb the circulation of black money. This can only happen if the "kacha bill", the informal bill normally used by traders and shopkeepers, is kept in check. A unified tax regime will lead to less corruption, which will indirectly benefit the common man. 4. The reduction in the total cost of different goods can also be passed on to customers. 5. The reduced cost will induce people to purchase more, and this will enhance the demand for products and boost their sales, thereby benefiting the overall economy. 6. Enhanced production means a reduction in the unemployment rate. It is also expected that GST will contribute to producing more jobs in the field of accounts and commerce, which may reduce unemployment in the country to a small extent (Vasanthagopal, 2011) [22]. 7. If more employment is generated in the country, this leads to higher demand. This virtuous circle of demand, production, employment and demand again implies that the GDP of the country can be expected to show an increasing trend. Shortcomings and Negative Impact of GST The basic idea of GST in India is to mitigate the ills of the multiple-tax system and to provide a uniform tax system. GST has brought in a "One Nation, One Tax" system, but its effect on various industries, businesses and consumers differs from one to another, and this has made GST a deadly weapon for the common man (Tabassum & Yameen, 2023) [20]. A comparative look at the rates in Asia and Europe shows that India has the highest tax rates and that, by splintering a so-called unified structure, we have made it a whole lot more confusing. Therefore, we cannot judge the present model of GST just by its few advantages; we should also consider its disadvantages when assessing the decision to implement GST in India. The present GST tax system needs to be fixed: it lacks economic vision, as the present GST Act has various loopholes with adverse effects on the Indian economy and the country's people, whether from the working or the business class. Contrary to the Single Tax Concept The present GST system disaffirms the concept of one nation, one tax. The principle on which GST works, i.e., one nation, one tax, is not suitable for India. Previously we had 32 taxes, including 29 state VAT taxes, one sales tax, one excise duty and one service tax. After implementing GST, we now have 31 taxes, including 29 SGST taxes, 1 CGST and 1 IGST, which again gives the country a complicated tax structure and contradicts the principle of a single tax in the nation (Abbasi, 2018) [1].
On the other hand, the Constitutional provisions and judgments on GST do not allow imposing a single GST tax system in India, for the following reasons. According to Article 246A, both Parliament and the states can levy taxes on the supply of goods and services; therefore, not only Parliament but each state can have its own GST. Article 279A of the Constitution says the GST Council has only recommendatory powers, so it is up to the state governments whether to implement its recommendations. In this way, a state government can levy its own GST and distort the entire GST system of the country. On 11 November 2016, a nine-judge bench of the Supreme Court of India held, in the entry tax case, that every state is as sovereign as Parliament in its powers to levy taxes. This gives the states a free hand to levy their own GST (Abbasi, 2018) [1].

Multi-tier and Complicated Tax Structure
The current model of GST, with numerous exemptions and a four-tier rate structure of 5%, 12%, 18% and 28%, apart from a compensation cess, exempt items and different rates for gold (3%) and rough diamonds (0.25%), is very different from the original plan of a single tax rate. The Asia-Pacific Tax Complexity Survey conducted by Deloitte found India's tax structure to be the second most complicated: Indian tax laws are perceived to be the second most complex in the Asia-Pacific region, and well over half of the respondents believe that complexity in the regime has increased in the last three years. The International Monetary Fund has lauded India's efforts to lower the compliance burden under the Goods and Services Tax but stated that the GST regime must have fewer rates and that efforts should also be made to lower the tax slabs (Kumar & Choudhary, 2017) [13].

Filing Returns Has Become More Complex
The GST system depends on the online submission of taxes. As a result, it overburdens the online system of the Ministry of Corporate Affairs; the site repeatedly hangs and crashes, which makes tax filing more burdensome than before (Debnath, 2016) [7]. The existing online infrastructure is not robust enough. Under the previous tax system in our country, one had to file tax twice a year, but the system has now become so complicated that one must file GST returns three times a month, on the 10th, 15th and 20th of the month respectively, exclusively through the online system. Put simply, one must file 36 GST returns and 12 TDS returns; with one annual return on top, 36 + 12 + 1 = 49 filings are made in a year, which is a tedious process and overburdens both the tax department of India and businessmen. If a person owns outlets in 13 states of the country, he has to file 49 × 13 = 637 returns in the same year, an unwieldy requirement; a rough tally of this arithmetic is sketched below. It has been more than five years since GST was rolled out, and there are still unclear GST provisions and procedures. During the initial phase, the government took initiatives to help people understand GST, but those efforts appear to have gone cold (Modi, 2017) [15].
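A minimal sketch of the filing arithmetic just described, under the text's own assumptions (three monthly returns, twelve TDS returns and one annual return per state registration); the constants and the function are illustrative, not a description of the actual GST return calendar.

```python
# A rough tally of the annual filing burden described above; the counts per
# return type follow the text's assumptions and simplify the real GST calendar.

RETURNS_PER_MONTH = 3     # filed on the 10th, 15th and 20th, per the text
TDS_RETURNS_PER_YEAR = 12
ANNUAL_RETURNS = 1

def filings_per_year(states: int = 1) -> int:
    """Total returns a business must file per year, one registration per state."""
    per_state = RETURNS_PER_MONTH * 12 + TDS_RETURNS_PER_YEAR + ANNUAL_RETURNS
    return per_state * states

print(filings_per_year())           # 49
print(filings_per_year(states=13))  # 637
```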
Leads to the Problem of Tax Evasion and Corruption
The implementation of the present GST system in the country also increases the problem of tax evasion, which results in a massive loss to the economic condition of the country, due to the provision in the Act stating that business entities with an annual turnover of less than Rs. 20 lakhs are exempted from GST registration (Kumar & Choudhary, 2017) [13]. This provision is the biggest loophole, which can increase the problem of tax evasion, as a simple example shows. Suppose a businessman owns a firm with an annual turnover of Rs. 80 lakhs, falling within the taxpaying category according to the norms of GST. Instead of paying taxes, he divides his business into four firms of Rs. 20 lakhs each and makes his wife, son, daughter and himself the directors of the four firms. By showing the business as four parts, each with an annual turnover of Rs. 20 lakhs, he is no longer liable to pay GST, even though the four firms exist only on paper, and he thus shields his firm's annual turnover of 80 lakh rupees from GST. People will carry out tax evasion in many such forms, resulting in huge economic loss to our country.

Higher Tax Burden for Manufacturing SMEs
The GST regime is not easy for small businesses in the manufacturing sector (Agarwal, 2022) [2]. Under the excise laws, only manufacturing businesses with a turnover exceeding Rs. 1.50 crores had to pay excise duty, whereas under GST the turnover limit was reduced to Rs. 20 lakhs, which increased the tax burden for many manufacturing SMEs. However, SMEs with a turnover of up to Rs. 75 lakhs can opt for the composition scheme, pay only 1% tax on turnover instead of regular GST, and enjoy lesser compliance. The decision between the higher regular taxes and the composition scheme will be a tough one for many SMEs (Krishna & Jaiswal, 2017) [12]; a simplified comparison of the two options is sketched after this section.

Disruption to Small Businesses
Small businesses are unable to afford the cost of the computers and accountants required to implement GST (to make bills and file tax returns). The 28% GST rate on products like marble, plywood and automobile parts is too much for common people, and buyers are willing to purchase from unregistered dealers to avoid paying the high GST, especially on products in the 28% slab (Agrawal, 2017) [3]. Assigning a maximum retail price (MRP) to handmade products like local shoes, Banarasi sarees, etc., is very difficult: most small artisans are illiterate and therefore unable to write the MRP on their products or do the associated paperwork, and dealers are unclear about how to rate such products. Small businesses whose low annual turnover exempts them from GST are still afraid to supply, as they have no proof of their exemption; buyers demand bills even from GST-exempt shops, which have no evidence to offer. Many dealers are still buying from unregistered wholesalers in cash, without bills and without paying any tax (Asmuni et al., 2017) [4].
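To make the composition-scheme trade-off above concrete, the sketch below compares a small manufacturer's liability under the two options. The 18% regular rate, the assumed share of input tax credit and the turnover figure are illustrative assumptions only; real liability depends on the actual rate schedule and available credits.

```python
# A simplified comparison of the two options described above for a small
# manufacturer; the 18% regular rate, the input-credit fraction and the
# turnover figure are illustrative assumptions, not statutory values.

def regular_gst(turnover: float, rate: float = 0.18, input_credit: float = 0.60) -> float:
    """Net GST under the regular scheme: tax on sales minus credit on inputs."""
    return turnover * rate * (1.0 - input_credit)

def composition_levy(turnover: float, rate: float = 0.01) -> float:
    """Composition scheme: a flat 1% on turnover, with no input credit."""
    return turnover * rate

turnover = 6_000_000  # Rs. 60 lakhs, within the Rs. 75 lakh composition limit
print(regular_gst(turnover))       # 432000.0
print(composition_levy(turnover))  # 60000.0
```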
Conclusion
GST is a significant reform, though it has had some teething issues. There is no doubt that GST is aimed at increasing the taxpayer base by bringing SMEs and the unorganized sector under its purview. This will make the Indian market more competitive and create a level playing field between large and small enterprises; further, Indian businesses will be able to compete better with those of other countries. Notwithstanding that the present structure is not perfect, this model could be a grand success had it been planned with the welfare of poor people in mind. Despite all its shortcomings, GST is expected to benefit the two pillars of the growing economy, India's business climate and the country's financial system, both of which play a crucial role in shaping the economy of any country. However, all will not be smooth sailing: a policy change of such a huge nature is sure to be faced with teething troubles. Therefore, a strict check on profiteering activities needs to be kept so that final consumers can enjoy the real benefit of GST. The study concludes that GST is not a perfect tax system; it is merely a drain on consumers' pockets, as the present picture of GST is not as rosy as it was portrayed.
Optimized dark matter searches in deep observations of Segue 1 with MAGIC

We present the results of stereoscopic observations of the satellite galaxy Segue 1 with the MAGIC Telescopes, carried out between 2011 and 2013. With almost 160 hours of good-quality data, this is the deepest observational campaign on any dwarf galaxy performed so far in the very high energy range of the electromagnetic spectrum. We search this large data sample for signals of dark matter particles in the mass range between 100 GeV and 20 TeV. For this we use the full likelihood analysis method, which provides optimal sensitivity to characteristic gamma-ray spectral features, like those expected from dark matter annihilation or decay. In particular, we focus our search on gamma rays produced from different final state Standard Model particles, annihilation with internal bremsstrahlung, monochromatic lines and box-shaped signals. Our results represent the most stringent constraints on the annihilation cross-section or decay lifetime obtained from observations of satellite galaxies, for masses above a few hundred GeV. In particular, our strongest limit (95% confidence level) corresponds to a ~500 GeV dark matter particle annihilating into tau+tau-, and is of order ~1.2x10^{-24} cm^3 s^{-1}, a factor ~40 above the thermal value.

1 Introduction
Dark matter (DM) is an, as yet, unidentified type of matter, which accounts for about 85% of the total mass content and 26% of the total energy density of the Universe [1]. Despite the abundant evidence on all astrophysical length scales implying its existence, the nature of DM is still to be determined. Observations require the DM particles to be electrically neutral, non-baryonic and stable on cosmological time scales. Furthermore, in order to allow the small-scale structures to form, these particles must be "cold" (i.e., non-relativistic) at the onset of structure formation. However, a particle fulfilling all those requirements does not exist within the Standard Model (SM); thus, the existence of DM unequivocally points to new physics. A particularly well motivated class of cold DM candidates are the weakly interacting massive particles (WIMPs, [2]). WIMPs are expected to have a mass in the range between ∼10 GeV and a few TeV, interaction cross-sections typical of the weak scale, and they naturally provide the required relic density (the "WIMP miracle"; see, e.g., [3]). Several extensions of the SM include WIMP candidates, most notably Supersymmetry (see, e.g., [4]), as well as theories with extra dimensions [5], minimal SM extensions [6], and others (for a review see, e.g., [3]).
The search for WIMPs is conducted on three complementing fronts, namely: production in particle accelerators, direct detection in underground experiments and indirect detection. The latter implies searches, by ground- and space-based observatories, for the SM particles presumably produced in WIMP annihilations or decays. Accelerator and direct detection experiments are most sensitive to DM particles with mass below a few hundred GeV. Positive results for signals of ∼10 GeV DM reported by some direct-search experiments [7–10] could not be confirmed by other detectors, and are in tension with results obtained by XENON100 [11, 12] and LUX [13]. On the other hand, the rather heavy Higgs boson [14] and the lack of indications of new physics at the Large Hadron Collider strongly constrain the existence of a WIMP at the electroweak scale. Therefore, the current status of these experimental searches strengthens the motivation for DM particles with masses at the TeV scale or above, the mass range best (and sometimes exclusively) covered by the Imaging Air Cherenkov Telescopes (IACTs). For this reason, IACT observations in the very high energy domain (E ≳ 100 GeV) provide extremely valuable clues to unravel the nature of DM. Such searches are the primary scope of this work.

Among the best-favored targets for indirect DM detection with gamma-ray observatories are dwarf spheroidal galaxies (dSphs). The dSph satellites of the Milky Way are relatively close-by (less than a few hundred kpc away), and in general less affected by contamination from gamma rays of astrophysical origin than some other DM targets, like the Galactic Center (GC) and galaxy clusters (see, e.g., [15, 16]). Furthermore, given the low baryonic content and large amounts of DM expected in this kind of galaxy, dSphs are considered highly promising targets for indirect DM searches. Over the last decade, a number of dSphs have been observed by the present generation of IACTs: MAGIC [17–19], H.E.S.S. [20–22] and VERITAS [23, 24], as well as by the Large Area Telescope (LAT) on board the Fermi satellite [25].

In this work we present the results of deep observations of the dSph galaxy Segue 1 with MAGIC. Discovered in 2006 in the imaging data of the Sloan Digital Sky Survey [26], Segue 1 is classified as an ultra-faint dSph, of absolute magnitude M_V = −1.5 (+0.6/−0.8). With a mass-to-light ratio estimated to be ∼3400 M⊙/L⊙ [27], this is the most DM-dominated object known so far. Furthermore, given its relative closeness (23 ± 2 kpc), lack of backgrounds of conventional origin, high expected DM flux and its favorable position in the Northern hemisphere and outside of the Galactic plane (RA, DEC = 10.12 h, 16.08°), Segue 1 has been selected as an excellent target for DM searches with MAGIC.
We present the results of a three-year-long (2011–2013) observational campaign on Segue 1 carried out with MAGIC in stereoscopic mode. With almost 160 hours of good-quality data, this is the deepest exposure of any dSph by any IACT so far. No significant gamma-ray signal is found. The gathered data are used to set constraints on various models of DM annihilation and decay, providing the most sensitive indirect DM search on dSphs for the WIMP mass range between a few hundred GeV and 10 TeV. In particular, we improve our previous limits, obtained from ∼30 hours of Segue 1 observations in single-telescope mode [19], by one order of magnitude. This improvement is achieved through the increased sensitivity of the MAGIC stereo system, the deep exposure, and the use of the full likelihood analysis [28], a method optimized for searches of characteristic, DM-induced gamma-ray spectral features.

This paper is structured as follows: first, we describe the observations of Segue 1 with MAGIC, the data reduction and the standard analysis procedures (section 2). Then, in section 3, we describe the basics of the full likelihood method, used for the analysis of our data, and the particular choices we have made regarding the likelihood construction. In section 4 we give details on the expected photon flux from DM annihilation and decay, with accent on the spectral shapes of the considered models and the calculation of the astrophysical term of the flux. Section 5 presents our results, the upper limits on the annihilation cross-section and the lower limits on the DM particle lifetime for the studied channels, and in section 6 those are put into context and compared with constraints from other gamma-ray experiments. Lastly, section 7 summarizes this paper and our conclusions.

2 Observations and conventional data analysis
The Florian Goebel Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) Telescopes are located at the Roque de los Muchachos Observatory (28.8° N, 17.9° W; 2200 m a.s.l.) on the Canary Island of La Palma, Spain. The system consists of two 17 m diameter telescopes with fast imaging cameras of 3.5° field of view. The first telescope, MAGIC-I, has been operational since 2004, and in 2009 it was joined by MAGIC-II. The trigger threshold of the system is ∼50 GeV for standard observations; the integral sensitivity above 250 GeV, for a 5σ detection in 50 hours, is ∼0.7% of the Crab Nebula flux, with an angular resolution better than 0.07° (39% containment radius, [29]). Observations of Segue 1 were performed between January 2011 and February 2013. During this period, the MAGIC Telescopes underwent a series of important hardware changes, aimed at the homogenization and improvement of the performance of both instruments [29]. First, at the end of 2011, the readout electronics of the telescopes were upgraded to Domino Ring Sampler version 4 (DRS4)-based readouts, thus reducing the dead time of 0.5 ms (introduced by the previously used DRS2-based readout electronics in MAGIC-II) to 26 µs [30]. Second, by the end of 2012, the camera of MAGIC-I was exchanged with a replica of that of MAGIC-II, with uniform pixel size and extended trigger area with respect to the old camera [31].
Figure 1: Segue 1 significance skymap above 150 GeV, from 157.9 hours of observations. Nominal positions of Segue 1 and η Leo are marked with the square and the star, respectively. Also shown is the wobbling scheme used in the different samples: for periods A, B1 and B2 the wobbling was performed with respect to the 'dummy' position (triangle), and around Segue 1 for period C, while the centers of the OFF regions were used for background estimation at the position of the source (full circles for periods A, B1 and B2, and empty circles for period C). Positions of the camera center are found at an equal distance from Segue 1 and the respective OFF positions, and are not shown here. See the main text for more details.

Due to the upgrade, the response of the telescopes varied throughout the Segue 1 campaign. Consequently, we have defined four different observational samples, such that for each of them the performance and response of the instruments are considered stable. Sample A corresponds to the beginning of the campaign in 2011, before the upgrade started. Samples B1 and B2 refer to the first half of 2012, when both telescopes were operating with the new, DRS4-based readouts. B1 corresponds to the commissioning phase after the first upgrade, and is affected by several faulty readout channels, which were fixed for period B2. Finally, sample C was obtained at the end of 2012 and the beginning of 2013, with the final configuration of the system, including the new camera of MAGIC-I. Table 1 summarizes the relevant observational parameters of each of the four periods. Each sample has been processed separately, with the use of contemporaneous Crab Nebula observations (for optimization of the analysis cuts and validation of the analysis procedures) and dedicated Monte Carlo productions (for evaluation of the response of the instruments).
The data taking itself was performed in the so-called wobble mode [32], using two pointing (wobble) positions. The residual background from Segue 1 (ON region) was estimated for each of the two pointings (W1 and W2) separately, using an OFF region placed at the same relative location with respect to the center of the camera as the ON region, but from the complementary wobble observation. To ensure a homogeneous ON/OFF acceptance, special care was taken during the data taking to observe both wobble positions for similar amounts of time within similar azimuthal (Az) ranges. For periods A, B1 and B2, in order to exclude the nearby η Leo, a star of apparent magnitude 3.5, from the trigger area of the old MAGIC-I camera, the wobbling was done with respect to a 'dummy' position located 0.27° away from Segue 1 and on the opposite side with respect to η Leo (figure 1). With the wobbling done at an offset of 0.29°, in a direction approximately perpendicular to the one joining η Leo and Segue 1, the pointing positions were ∼0.4° and ∼1° away from the potential source and the star, respectively. For period C, with the new MAGIC-I camera, the star could no longer be excluded from its extended trigger area, so the standard observational scheme was used instead: the wobble positions were chosen at opposite sides of Segue 1, at a 0.4° offset and in a direction parallel to the one used for the A, B1 and B2 samples. Data reduction was performed using the standard MAGIC analysis software MARS [33]. Data quality selection was based mainly on stable trigger rates, reducing the total sample of 203.1 hours of observations to 157.9 hours of good-quality data (see table 1 for the total and effective observation times (t_eff) of each of the four samples). For the data analysis, after the image cleaning, we retain only events with a size (total signal inside the shower image) larger than 50 photo-electrons in each telescope and a reconstructed energy greater than 60 GeV. Furthermore, for every event, we calculate the angular distance θ between the reconstructed arrival direction and the ON (or OFF) center position, as well as the signal-to-noise maximization variable hadronness (h) by means of the Random Forest method [34], and select events with θ² < θ²_cut and h < h_cut. The values of θ²_cut and h_cut are optimized on the contemporaneous Crab Nebula data samples, by maximizing the expected integral flux sensitivity (according to Li & Ma [35]) in the whole energy range. For all four of the considered data samples, we obtain θ²_cut = 0.015 deg², whereas for h_cut we get 0.30 for samples A and B1, and 0.25 for samples B2 and C; a schematic sketch of this event selection is given below.
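As an illustration only, the snippet below applies the quoted cuts to a mock event table. The array layout and the variable names (size1, size2, energy, theta2, hadronness) are hypothetical stand-ins; the actual selection is performed inside the MARS analysis chain, not in Python.

```python
import numpy as np

# A minimal sketch of the event selection described above; all variable names
# and the mock data are assumptions for illustration.

THETA2_CUT = 0.015   # deg^2, common to all four samples
H_CUT = {"A": 0.30, "B1": 0.30, "B2": 0.25, "C": 0.25}

def select_events(ev: dict, sample: str) -> np.ndarray:
    """Boolean mask implementing the cuts quoted in the text."""
    return (
        (ev["size1"] > 50) & (ev["size2"] > 50)       # >50 phe in each telescope
        & (ev["energy"] > 60.0)                        # reconstructed E > 60 GeV
        & (ev["theta2"] < THETA2_CUT)                  # signal region around ON/OFF
        & (ev["hadronness"] < H_CUT[sample])           # gamma/hadron separation
    )

rng = np.random.default_rng(0)
n = 1000
events = {name: rng.uniform(0, hi, n) for name, hi in
          [("size1", 200), ("size2", 200), ("energy", 500),
           ("theta2", 0.1), ("hadronness", 1.0)]}
print(select_events(events, "B2").sum())  # number of surviving mock events
```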
Additionally, the effect that η Leo has on the data was also taken into account. Namely, pixels illuminated by the star produce background triggers at a rate larger than those due to atmospheric showers. To avoid the saturation of the data acquisition with such events, the thresholds of the affected pixels are automatically increased/decreased with the rotation of the star in the camera. As a consequence, a region is created around the η Leo position where the efficiency for shower detection is reduced with respect to the nominal one. This causes differences between the spatial distributions of events with respect to the ON and OFF positions, which could introduce inhomogeneities in the instrument response function across the field of view. In our Segue 1 observations, this effect is only significant for sample C (because the star is closer to the camera center). We restore the ON/OFF symmetry by removing those events with reconstructed energy lower than 150 GeV and for which the center of gravity of the shower image, in either one of the cameras, lies less than 0.2° away from the position of η Leo (or from a position at the same relative location with respect to the OFF region as the star is from the ON region). The optimization of these energy and angular distance cuts was done by imposing agreement between the ON and OFF θ²-distributions for events with 0.35 < h < 0.60 (i.e., excluding those selected as signal candidates). The corresponding Monte Carlo production was treated with the same star-related cuts. Figure 1 shows the skymap centered at Segue 1: no significant gamma-ray excess can be seen at its nominal position. The same conclusion is drawn from the overall θ²-distribution (figure 2). Consequently, we proceed to calculate the differential and integral flux upper limits for the gamma-ray emission from the potential source by means of the conventional analysis approach (for details and formalism, see [28]), currently standard for IACTs. However, as it has been shown that the conventional method is suboptimal compared to the full likelihood analysis for spectral emissions with known characteristic features (as expected for gamma rays of DM origin, [28]), those upper limits are quoted for completeness and cross-checking purposes only and can be found in Appendix A. The actual analysis of the Segue 1 data proceeds with the full likelihood method, and more details on it are provided in the following section.

3 Full likelihood analysis
As shown by Aleksić, Rico and Martinez in [28], the full likelihood approach maximizes the sensitivity to gamma-ray signals of DM origin by including the information about the expected spectral shape (which is fixed for a given DM model) in the calculations. The sensitivity improvement obtained by the use of this method is predicted to be a factor ∼(1.5–2.5) with respect to the conventional analysis (depending on the spectral form of the searched signal).

In this work, we present the results of the full likelihood analysis applied to our Segue 1 observations with MAGIC. We follow the formalism and nomenclature defined in [28] (for the Reader's convenience, also included below), and address our specific choices for the different terms entering the likelihood in more detail.
The basic concept behind the method is the comparison of the measured and expected spectral distributions; that is, we have to model the emission expected from the ON region. For a given DM model, M, the spectral shape is known (see section 4), thus the only free parameter is the intensity of the gamma-ray signal (θ). The corresponding likelihood function has the following form:

𝓛(M(θ); N_EST | N_OBS, E_1, …, E_N_OBS) = (N_EST^N_OBS / N_OBS!) e^(−N_EST) ∏_{i=1}^{N_OBS} 𝒫(E_i; M(θ)),

with N_OBS (= N_ON + N_OFF) and N_EST denoting the total number of observed and estimated events, respectively, in the ON and OFF regions. 𝒫(E_i; M(θ)) is the value of the probability density function for event i with measured energy E_i, obtained by normalizing the differential event rate:

𝒫(E_i; M(θ)) = P(E_i; M(θ)) / ∫_{E_min}^{E_max} P(E; M(θ)) dE,

where E_min and E_max are the lower and upper limits of the considered energy range. P(E; M(θ)) represents the differential rate of the events, such that:

P(E; M(θ)) = P_OFF(E) + P_ON(E; M(θ)),

where P_OFF(E) and P_ON(E; M(θ)) are the expected differential rates from the OFF and ON regions. In this work, P_OFF(E) is determined from the data (see below), whereas P_ON(E; M(θ)) is calculated as:

P_ON(E; M(θ)) = (1/τ) P_OFF(E) + ∫ dE′ [dΦ(M(θ))/dE′] R(E; E′).

True energy is denoted with E′; dΦ(M(θ))/dE′ is the predicted differential gamma-ray flux, and R(E; E′) is the response function of the telescope. Lastly, τ refers to the normalization between the OFF and ON regions. Thus, in practice, for the construction of the full likelihood function we need the instrument response to gamma-ray events, a model of the background differential rate, and an assumed signal spectrum (a schematic numerical sketch of this construction is given after the following list):

• the instrument response function (R(E; E′)) can be described by the effective collection area (A_eff) and by the energy dispersion function. The latter is well approximated, for a given E′, by a Gaussian, whose mean and width will be referred to as the energy bias and resolution, respectively. For each of the four observational periods, the response functions are independently determined from the corresponding Monte Carlo simulations;

• the model of the background differential rate (P_OFF(E)) is obtained, for each period and each wobble pointing, directly from the Segue 1 observations at the complementary wobble position. For each pointing we select four model regions that have similar exposure as the OFF region, and we define them to be of the same angular size and at the same angular distance from the camera center as the corresponding OFF region, and adjacent to it. Then, by means of the Kolmogorov–Smirnov (K-S) statistic, we compare the energy distribution of events from each of the modeling zones (individually and combined) to that from the OFF region. The modeling region(s) providing the highest K-S probability is (are) selected, and its (smoothed) measured differential energy distribution is used as the background model in the full likelihood (for more details, refer to [36]). The statistical and systematic uncertainties introduced by this procedure in our final results are estimated by comparing the limits obtained using the selected modeling region(s) with those that we would obtain if the average of all four zones was used instead. Our constraints on DM properties are found to vary by up to 10% over the considered range of DM particle masses;

• the signal spectral function (dΦ(M(θ))/dE′) is known and fixed for a given DM model. In this work, we consider several channels of photon production from DM annihilation or decay: secondary photons from SM final states, gamma-ray lines, virtual internal bremsstrahlung (VIB) photons and gamma-ray boxes. More details on these final states are provided in section 4.1.
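The construction above can be mimicked in a few lines. The sketch below is a deliberately simplified, self-contained toy, assuming a power-law background, a Gaussian line-like signal at 1 TeV and mock event energies; it keeps only the event-wise PDF term of the likelihood (dropping the Poisson term and the OFF dataset) and is not the MAGIC implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy version of the spectral-shape likelihood: fit the signal intensity as
# the only free parameter against mock measured energies. All ingredients
# (background index, line position and width, event sample) are assumptions.

E_MIN, E_MAX = 0.1, 10.0  # TeV
rng = np.random.default_rng(1)
energies = E_MIN * (E_MAX / E_MIN) ** rng.random(500)  # mock background events

def pdf(E, s_frac):
    """Normalized mixture: (1 - s_frac) * background + s_frac * signal."""
    bkg = E ** -2.0
    bkg /= (E_MIN ** -1.0 - E_MAX ** -1.0)          # analytic norm of E^-2
    sig = np.exp(-0.5 * ((E - 1.0) / 0.15) ** 2)    # line-like feature at 1 TeV
    sig /= 0.15 * np.sqrt(2 * np.pi)
    return (1 - s_frac) * bkg + s_frac * sig

def neg_log_like(s_frac):
    return -np.sum(np.log(pdf(energies, s_frac)))

# Bound the free parameter to the physical (non-negative) region, as in the text.
res = minimize_scalar(neg_log_like, bounds=(0.0, 0.5), method="bounded")
print(f"best-fit signal fraction: {res.x:.3f}")
```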
For each of the two pointing positions of each of the four defined observational periods, an individual likelihood function is constructed using the corresponding background model, plus the signal spectral function convoluted with the appropriate response of the telescopes. The global likelihood, encompassing the entire Segue 1 data sample, is obtained as the product of those individual ones (eq. (5.1) in [28]). It is maximized with the gamma-ray signal intensity as a free parameter, while N_EST and τ of each individual sample are treated as nuisance parameters with Poisson and Gaussian distributions, respectively. The free parameter is bounded to the physical region during the likelihood maximization: that is, the signal intensity is not allowed to take negative values. We note that the results obtained this way are conservative (i.e., they may have a slight over-coverage, see [37]), since negative fluctuations cannot produce artificially constraining limits.

The full likelihood calculations are performed for the 95% confidence level (CL) and one-sided confidence intervals (∆log 𝓛 = 1.35) using the TMinuit class of ROOT [38].

4 Expected dark matter flux
Before proceeding to the results of the full likelihood analysis of our Segue 1 sample, let us first comment on how the limits on the DM-induced gamma-ray signal are translated into limits on DM properties.

The prompt gamma-ray flux produced in the annihilation or decay of DM particles is given as the product of two terms:

dΦ(∆Ω)/dE′ = (dΦ^PP/dE′) × J(∆Ω).   (4.1)

The particle physics term, dΦ^PP/dE′, solely depends on the chosen DM model: it is completely determined for the given theoretical framework and its value is the same for all sources. The astrophysical term, J(∆Ω), on the other hand, depends on the observed source (its distance and geometry), the DM distribution in the source region and the properties of the instrument and the analysis.

In the case of annihilating DM, the particle physics term takes the form:

dΦ^PP/dE′ = (1/4π) (⟨σ_ann v⟩ / 2m_χ²) (dN/dE′),   (4.2)

where m_χ is the DM particle mass, ⟨σ_ann v⟩ is the thermally averaged product of the total annihilation cross-section and the velocity of the DM particles, and dN/dE′ = Σ_{i=1}^{n} Br_i dN_i/dE′ is the differential gamma-ray yield per annihilation, summed over all the n possible channels that produce photons, weighted by the corresponding branching ratios Br_i. All the information regarding the spectral shape is contained in the dN/dE′ term.

The astrophysical factor (J_ann), on the other hand, is the integral of the square of the DM density (ρ) over the line of sight (los) and the solid angle covered by the observations (∆Ω), i.e.:

J_ann(∆Ω) = ∫_∆Ω dΩ ∫_los ρ²(l) dl.   (4.3)

Figure 3: (Left) Gamma-ray spectra from DM annihilation into the considered SM final states (when applicable, the FSR is included in the spectrum); the modeling is done according to the fits provided in [39]. (Right) Spectral distribution from annihilation into leptonic three-body final states (solid lines), with the contributions from FSR and VIB photons (dashed and long-dashed lines, respectively); the assumed mass-splitting parameter value is µ = 1.1.

For the case of decaying DM, the particle physics term depends on the lifetime of the particle, τ_χ, and its form is obtained by replacing the ⟨σ_ann v⟩/2m_χ contribution with 1/τ_χ in eq. (4.2). As for the astrophysical term (J_dec), it scales linearly with the DM density (ρ² → ρ in eq. (4.3)).

We express the results of our DM searches as upper limits on ⟨σ_ann v⟩ (for the annihilation scenarios) or as lower limits on τ_χ (for decaying DM). In the full likelihood analysis, ⟨σ_ann v⟩ or τ_χ plays the role of the free parameter. A back-of-the-envelope evaluation of eq. (4.2) for representative numbers is sketched below.
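The sketch below evaluates the order of magnitude of the annihilation flux formula, collapsing the spectral integral into an assumed effective photon yield per annihilation; all inputs except J_ann (the Segue 1 value quoted later in the text) are illustrative assumptions.

```python
import math

# Back-of-the-envelope evaluation of eq. (4.2) times J; N_GAMMA stands in for
# the integral of dN/dE' above threshold and is an assumed round number.

SIGMAV = 3e-26          # cm^3 s^-1, the canonical thermal cross-section
M_CHI_GEV = 1000.0      # assumed 1 TeV DM particle
N_GAMMA = 10.0          # assumed photons per annihilation above threshold
J_ANN = 1.1e19          # GeV^2 cm^-5, the Segue 1 value quoted in the text

# Phi = (1 / 4*pi) * (sigmav / (2 m_chi^2)) * N_gamma * J
flux = (1 / (4 * math.pi)) * SIGMAV / (2 * M_CHI_GEV**2) * N_GAMMA * J_ANN
print(f"integrated flux ~ {flux:.2e} photons cm^-2 s^-1")  # ~1e-13
```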
4.1 Considered spectral shapes
As already mentioned in section 3, in this work we search for DM annihilating or decaying into different final states. In particular, we consider the following channels: b b̄, t t̄, µ⁺µ⁻, τ⁺τ⁻, W⁺W⁻ and ZZ. The resulting spectra of secondary photons are continuous and rather featureless, with a cutoff at the kinematical limit E′ = m_χ (figure 3-left). In our analysis, we use the parametrization presented in [39]. When applicable, the final state radiation (FSR) contribution is included in those fits.

We also analyze final states leading to sharp spectral features. First, we consider direct annihilation into two photons (χχ → γγ), or a photon and a boson (χχ → γZ/h). Although loop-suppressed (∼O(α²)), such processes are of great interest, as they would result in a sharp, monochromatic line-like feature in the photon spectrum, a feature whose detection would represent the smoking gun for DM searches. Such a line is described as:

dN/dE′ = N_γ δ(E′ − E_0),   (4.4)

where E_0 = m_χ and N_γ = 2 for annihilation into γγ, while E_0 = m_χ(1 − m²_{Z/h}/4m_χ²) and N_γ = 1 for annihilation into γZ/h. In the latter case, there is also a contribution to the gamma-ray spectrum originating in the fragmentation and decay of the Z and Higgs bosons. As for the case of decaying DM, line production is also a possibility. In this work, we consider the case of two-body decay into one monoenergetic photon (χ → γν) for fermionic DM particles. The spectral function of the resulting line is obtained by simply making the substitution m_χ → m_χ/2 in eq. (4.4).

Another scenario resulting in sharp spectral features involves the emission of VIB photons: if the DM particle is a Majorana fermion which couples via a Yukawa coupling to a charged fermion (f) and a scalar (η), the photon produced in the internal bremsstrahlung process (χχ → f f̄γ) will have a very characteristic spectrum, displaying a salient feature close to the kinematic endpoint and resembling a distorted gamma-ray line. The exact expression for the differential gamma-ray spectrum of the 2→3 process is given by eq. (2.8) in [40]. The total spectral distribution also receives a contribution from the FSR of the nearly on-shell fermions produced in the 2→2 annihilation (χχ → f f̄), as well as from the fragmentation or decay of the fermions produced both in the 2→2 and in the 2→3 processes (figure 3-right). The contribution from FSR becomes more and more important as the mass splitting µ (≡ m²_η/m²_χ) between η and the DM particle increases, eventually erasing the strong gamma-ray feature from internal bremsstrahlung. It can be verified that the gamma-ray feature stands out in the total spectrum when µ ≲ 2, which is the case we assume in our analysis.

Lastly, a sharp feature might arise in scenarios where a DM particle annihilates into an on-shell intermediate scalar φ, which subsequently decays in flight into two photons: χχ → φφ → γγγγ. In the rest frame of φ, photons are emitted isotropically and monoenergetically; therefore, in the galactic frame, the resulting spectrum will be box-shaped (for the exact expression for dN/dE′, see eq. (2) in [41]).
The center and width of such a feature are completely determined by the masses of the scalar (m_φ) and the DM particle: the box is centered at E_c = m_χ/2 and its width is ∆E = √(m_χ² − m_φ²). For m_φ ≈ m_χ, almost all of the DM particle energy is transferred to the photons, and the resulting spectral shape is intense and similar to a monochromatic line. On the other hand, for m_φ ≪ m_χ the box becomes wide and dim in amplitude; still, it extends to higher energies and thus its contribution to the signal spectrum is not negligible.

4.2 The astrophysical factor J for Segue 1
The choice of density profile plays a crucial role in the calculation of the astrophysical factor J, as it has direct implications for the expected photon flux. This is particularly true for DM annihilation (eq. (4.3)): as J_ann is proportional to ρ², cored central distributions (described by a constant density value close to the center) will yield lower fluxes than cusped ones (described by a steep power law in the central region), for the same total DM content. This dependence is less pronounced for decaying DM, since J_dec ∝ ρ. Motivated by numerical simulations, we model the DM density distribution assuming the Einasto profile [42], with scale radius r_s = 0.15 kpc, scale density ρ_s = 1.1×10⁸ M⊙ kpc⁻³ and slope α = 0.30 [19, 43]. A numerical sketch of the corresponding line-of-sight integration is given below.

The value of J is determined by the DM distribution within the integrated solid angle ∆Ω (eq. (4.3)), and hence by the analysis angular cut θ²_cut (figure 4-left). In addition, in order to compute the residual background in the ON region, we measure the number of events acquired in the OFF regions, defined by the same θ²_cut and at an angular distance from the position of Segue 1 of Φ = 0.58° (for samples A, B1 and B2) and Φ = 0.80° (for sample C). The OFF regions may contain non-negligible amounts of DM-induced events (figure 4-right), which are accounted for in the analysis as background, hence reducing the sensitivity for the detection of signal events in the ON region. We take this into account by using the difference in J between the ON and OFF regions as the astrophysical factor, that is: J(∆Ω) = J_ON(∆Ω) − J_OFF(∆Ω). This correction is negligible (less than 1%) for annihilation, but has an effect of ∼10% for decay, since in this case the abundant, although less concentrated, quantities of DM at large Φ contribute relatively more to the total expected flux than in the case of annihilation. For the angular cut θ²_cut = 0.015 deg² and the used OFF positions, the astrophysical factor for annihilating DM is J_ann = 1.1×10¹⁹ GeV² cm⁻⁵, and the corresponding values for decay are J_dec = 2.5×10¹⁷ GeV cm⁻² for periods A, B1 and B2, and J_dec = 2.7×10¹⁷ GeV cm⁻² for period C. The dominating systematic uncertainty on J, resulting from the fit of the Segue 1 DM distribution to the Einasto profile, is about a factor of 4 at the 1σ level for J_ann, and about a factor of 2 for J_dec [43]. These uncertainties affect our ⟨σ_ann v⟩ and τ_χ limits linearly. A discussion comparing the J uncertainties for different classes of objects is included in section 6.2.
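The snippet below sketches the line-of-sight integration of ρ² for an Einasto profile with the parameters quoted above. The grid resolution, the sharp halo cutoff R_MAX and the omission of the solid-angle integration and unit conversion are simplifying assumptions for illustration.

```python
import numpy as np

# Schematic l.o.s. integration of rho^2 for the Einasto profile quoted above;
# R_MAX and the grid are illustrative choices, not the values used in [43].

RHO_S, R_S, ALPHA = 1.1e8, 0.15, 0.30   # Msun/kpc^3, kpc, dimensionless
D = 23.0                                 # kpc, distance to Segue 1
R_MAX = 1.0                              # kpc, assumed outer extent of the halo

def rho(r):
    """Einasto density profile."""
    return RHO_S * np.exp(-(2.0 / ALPHA) * ((r / R_S) ** ALPHA - 1.0))

def los_integral_rho2(theta_deg, n=20001):
    """Integral of rho^2 along the line of sight at angular offset theta."""
    theta = np.radians(theta_deg)
    l = np.linspace(D - R_MAX, D + R_MAX, n)                # kpc from observer
    r = np.sqrt(D**2 + l**2 - 2.0 * D * l * np.cos(theta))  # distance to center
    f = np.where(r < R_MAX, rho(r) ** 2, 0.0)
    return 0.5 * np.sum(f[1:] + f[:-1]) * (l[1] - l[0])     # trapezoid rule

# Integrating this over the solid angle within theta_cut (and converting
# Msun^2 kpc^-5 to GeV^2 cm^-5) would yield J_ann as defined in eq. (4.3).
print(f"l.o.s. integral at theta = 0: {los_integral_rho2(0.0):.3e} Msun^2 kpc^-5")
```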
5 Limits for dark matter annihilation and decay models
Here we present the results, in the context of indirect DM searches, of 157.9 hours of selected data from the Segue 1 observations with MAGIC, analyzed with the full likelihood approach. The results are introduced in the following way: for each of the considered DM models, a limit is set (a 95% CL upper limit on ⟨σ_ann v⟩ or a lower limit on τ_χ) by using the combined likelihood for the whole data sample. This constraint is then compared to the expectations for the null hypothesis (no signal), as well as for signals of 1σ and 2σ significance, estimated from fast simulations (comparison with negative signals is meaningless in this work, as the free parameter is constrained to take only positive values, see section 3). In order to make the results as model-independent as possible, in all cases the branching ratio is set to Br = 100%. Considering the energy range in which the MAGIC telescopes are sensitive to gamma rays, we search for DM particles of mass m_χ between 100 GeV and 10 TeV for annihilation scenarios, and between 200 GeV and 20 TeV for decaying DM. Furthermore, all of the results are produced without assuming any additional boosts, either from the presence of substructures [44] or from quantum effects [45].

Figure 5: Upper limits on ⟨σ_ann v⟩ for DM annihilation into b b̄, t t̄, µ⁺µ⁻, τ⁺τ⁻, W⁺W⁻ and ZZ, from the Segue 1 observations with MAGIC. The calculated upper limit is shown as a solid line, together with the null-hypothesis expectations (dashed line), and expectations for a 1σ (shaded gray area) and 2σ (shaded light blue area) significant signal.

5.1 Secondary photons from final state Standard Model particles
Figure 5 shows the upper limits on ⟨σ_ann v⟩, together with the null hypothesis, 1σ and 2σ expectations, for annihilation into six different final states: quarks (b b̄, t t̄), leptons (µ⁺µ⁻, τ⁺τ⁻) and gauge bosons (W⁺W⁻, ZZ). All bounds are consistent with the no-detection scenario. For a more comprehensive overview, the ⟨σ_ann v⟩ upper limits for the considered final states are shown together in figure 6-left. A clear dependence between the shape of the expected photon spectrum and the derived constraints can be noticed: the strongest bound corresponds to the τ⁺τ⁻ channel (⟨σ_ann v⟩ ∼ 1.2×10⁻²⁴ cm³ s⁻¹), as it produces photons whose spectrum is the hardest at the energies for which the sensitivity of MAGIC is at its peak. Similar considerations apply to the decaying DM scenarios: again, the most constraining lower limit on τ_χ from the Segue 1 observations is obtained for the τ⁺τ⁻ channel, and is of the order of τ_χ ∼ 2.9×10²⁵ s.

5.2 Gamma-ray lines
Figure 7 shows the upper limits on ⟨σ_ann v⟩ for direct annihilation of DM particles into two photons. For the considered m_χ range, the constraints are set between 3.6×10⁻²⁶ and 1.1×10⁻²⁴ cm³ s⁻¹. In almost the entire considered mass range, the upper limits are within 1σ of the null hypothesis; the largest deviation is observed at m_χ ∼ 1.3 TeV, where the signal is slightly larger than 2σ. The probability that this is caused by random fluctuations of the background is relatively large (∼5%) and hence not enough to be considered a hint of a signal. On the other hand, should the excess be caused by gamma rays from DM annihilation or decay, a detection at a 5σ significance level would require about 1000 hours of observations with a sensitivity comparable to the one achieved here.
Upper limits on ⟨σ_ann v⟩ for DM annihilation into a photon and a Z boson span, for the considered DM masses, the range between 7.8×10⁻²⁶ cm³ s⁻¹ and 2.3×10⁻²⁴ cm³ s⁻¹. In the calculation of these limits we do not take into account the contribution of secondary photons originating from the fragmentation and decay of the Z, as the bound from the resulting continuous contribution is expected to be negligible compared to that from the line (figures 5 and 7). Furthermore, due to the finite width of the Z boson, the gamma-ray line is not strictly monochromatic. The calculation of the consequent corrections to the ⟨σ_ann v⟩ upper limits is beyond the scope of this paper; however, we note that, given the energy resolution of MAGIC, the line broadening due to the Z width (Γ ∼ 2.5 GeV) is not expected to be of relevance in the considered m_χ range.

Figure 8: Upper limits on ⟨σ_ann v⟩ for the µ⁺µ⁻γ channel as a function of m_χ, from the Segue 1 observations with MAGIC (solid line), and as expected for the case of no signal (dashed line), or for a signal of 1σ or 2σ significance (gray and light blue shaded areas, respectively). The value of the mass splitting parameter µ is 1.05 (left) and 2.00 (right).

We also search for lines produced in DM decay. The derivation of such constraints is straightforward from the results of the annihilation scenario, so we limit our discussion of the matter to the comparison with bounds from other experiments, in section 6.2.1.

5.3 Virtual internal bremsstrahlung contribution
Here we consider annihilation into leptonic channels with a VIB contribution, and set limits on the total 3-body annihilation cross-section. Figure 8 shows the ⟨σ_ann v⟩ upper limits for the µ⁺µ⁻γ channel, for different values of the mass splitting parameter, chosen such that the VIB contribution is significant with respect to the continuous one (µ = 1.05 and 2.00, respectively). No positive detection can be claimed. The comparison of these exclusion curves is better illustrated in figure 9-left: we see that, for several mass-splitting values (µ = 1.05, 1.50 and 2.00), the obtained limits are rather similar. This can be understood considering how the spectral shape of the VIB signal is practically independent of µ for such small mass-splitting parameter values. Still, it can be noticed that the most degenerate case, µ = 1.05, provides the strongest limit, of ⟨σ_ann v⟩ ∼ 8.4×10⁻²⁶ cm³ s⁻¹ for m_χ ∼ 250 GeV. For illustration purposes, on the same plot we also show the ⟨σ_ann v⟩ constraints calculated for annihilation into the muon final state, but without the VIB contribution. In this particular case, the presence of VIB photons in the spectrum leads to almost two orders of magnitude more stringent bounds.

Analogous conclusions are reached for annihilation into the τ⁺τ⁻γ final state (figure 9-right). The strongest limit in this case corresponds to ⟨σ_ann v⟩ ∼ 8.3×10⁻²⁶ cm³ s⁻¹, for m_χ ∼ 250 GeV and µ = 1.05. The relative contribution of the VIB 'bump' is less significant in this case than for the µ⁺µ⁻γ channel, as the continuous gamma-ray spectrum of τ leptons has a harder spectral slope.

5.4 Gamma-ray boxes
Here we consider the case of DM annihilation resulting in the production of four photons (χχ → φφ → γγγγ). As for the DM decay scenarios, they are not covered here, given that the transformation of an upper limit on ⟨σ_ann v⟩ into a lower limit on τ_χ is trivial, obtained by making the replacement ⟨σ_ann v⟩/8πm_χ² → 1/(4πm_χτ_χ) in eq. (4.2), and m_χ → m_χ/2 in the expressions for the box center and width.
Figure 10 shows the ⟨σ_ann v⟩ exclusion curves for the extreme degeneracies m_φ/m_χ = 0.1 and m_φ/m_χ = 0.99. In both cases, the strongest constraints are similar, of the order of ⟨σ_ann v⟩ ∼ (5.1–5.4)×10⁻²⁶ cm³ s⁻¹, for m_χ ∼ 250 GeV and ∼400 GeV, respectively.

Figure 11-left shows the upper limits on ⟨σ_ann v⟩ for the already mentioned extreme values of m_φ/m_χ (= 0.10, 0.99), as well as for some intermediate cases (m_φ/m_χ = 0.50, 0.70). As can be seen, with the exception of the most narrow box scenario, all constraints are essentially the same, and only a factor of a few weaker than for the most degenerate configuration. This is understood given that the wide boxes compensate their dimmer amplitudes (with respect to the m_φ ≈ m_χ cases) by extending to higher energies, where the sensitivity of the telescopes is better.

For a more general view of the importance of box-shaped features, figure 11-right shows the upper limits on ⟨σ_ann v⟩ from the most degenerate box model (m_φ/m_χ = 0.99) and from the line searches previously shown (figure 7). The obtained bounds are of the same order of magnitude, although the direct comparison between the two exclusion curves is not immediate: the line is centered at E_γ = m_χ and is normalized to 2 photons per annihilation, while the box-shaped feature is centered at E_γ = m_χ/2 and is normalized to 4 photons. This is reflected as a shift of the exclusion curves in the x and y coordinates.

For a more comprehensive overview, the most constraining bounds for all of the final state channels presented in this section are summarized in table 2.

6 Discussion
In this section, we discuss the advantages brought by the exploitation of the full likelihood analysis method, compare our results with other relevant experimental constraints and link them to the expectations from theoretical models.

6.1 Sensitivity gain from the full likelihood method
There are two aspects of the full likelihood analysis applied in this work that carry advantages for DM searches with IACTs: i) a sensitivity improvement due to the use of specific spectral signatures, such as those coming from DM annihilation and decay; and ii) the combination of results from different data samples, e.g. obtained under different experimental conditions, becomes a trivial operation.

To quantify the improvement in sensitivity, we compute the improvement factors as defined in [28], i.e. the average ratio of the widths of the confidence intervals for the signal intensity, calculated by the full likelihood and the conventional analysis methods, respectively, assuming a common CL. The confidence intervals are computed using fast Monte Carlo simulations of background events, with the same statistics and PDF as in the actual experimental conditions.

Table 3 shows the improvement factors obtained for signals from the b b̄ and τ⁺τ⁻ annihilation channels and m_χ of 100 GeV, 1 TeV and 10 TeV. The calculations with the conventional method are done over both the optimized and the full energy range. The obtained improvement factor values are in agreement with the predictions made in [28], and imply a very significant boost in the sensitivity for DM searches: an improvement factor of f is equivalent to f² times more observation time.

Table 3: Sensitivity improvement factors for the b b̄ and τ⁺τ⁻ annihilation channels from the use of the full likelihood method over the conventional one, when the latter is calculated for the full (no opt.) or optimized (opt.) energy integration range.
Compared to our previous results on DM signals from Segue 1 [19], these results represent an overall improvement of about a factor of 10. This has been accomplished by the increase in the observation time by a factor of ∼5.3 (i.e., a factor ∼2.3 better sensitivity), the operation of MAGIC as a stereoscopic system (a factor ∼2 better sensitivity with respect to single-telescope observations [46]), plus the improvement factor coming from the full likelihood analysis.

Furthermore, the full likelihood method allows a trivial merger of results obtained with different instruments or from different observational targets. As discussed in section 2, our data sample is divided into four periods with different instrumental conditions. In addition, for each period we use two different pointing positions, with slightly different background models. We have built dedicated individual likelihood functions for each of the eight different sub-periods, and merged them into a global likelihood following eq. (5.1) in [28], for our final results. The limits on ⟨σ_ann v⟩ (assuming annihilation into b b̄ with Br = 100%) obtained from each of the eight considered sub-samples, compared to the global limit, are shown in figure 12.

6.2 Comparison with the results from other gamma-ray experiments
In the previous section we have estimated the ⟨σ_ann v⟩ upper limits and τ_χ lower limits for various channels of DM annihilation/decay. Here, we put those constraints into context and compare them with the currently most stringent results from other gamma-ray observatories.

6.2.1 Secondary photons from final state Standard Model particles
Annihilation
Figure 13 shows our ⟨σ_ann v⟩ upper limits for DM annihilation into the b b̄, µ⁺µ⁻, τ⁺τ⁻ and W⁺W⁻ channels, compared with the corresponding constraints from: i) the joint analysis of 4 years of observations of 15 dSphs by Fermi-LAT [25]; ii) 112 hours of GC Halo observations with H.E.S.S. (assuming a generic q q̄ final state, [47]); and iii) ∼48 hours of Segue 1 observations with VERITAS [24]. Note, however, that the VERITAS results have been questioned in reference [28], where it is discussed why the VERITAS limits could be over-constraining by a factor of two or more. Lastly, we also show the limits from ∼30 hours of Segue 1 observations with MAGIC in single-telescope mode [19].

As seen from figure 13, for DM particles lighter than a few hundred GeV (depending on the specific channel), the strongest limits on ⟨σ_ann v⟩ come from Fermi-LAT; for higher m_χ values, the most constraining bounds are derived from the H.E.S.S. observations of the GC halo. We note, however, that the H.E.S.S. result depends heavily on the assumed DM profile, as it is sensitive to the difference in the DM-induced gamma-ray fluxes between the signal and background regions, rather than to the absolute flux. In fact, by using a Navarro-Frenk-White (NFW) profile [48] instead of the Einasto one, the H.E.S.S. limit becomes a factor of ∼2 less constraining, or even weaker for very cored profiles with similar fluxes from the relatively close ON and OFF regions (figure 1 in [47]). This is particularly relevant considering possible uncertainties as, e.g., the effect of baryonic contraction in the GC, which could have an important effect on the final DM profile [49].
Our ⟨σ_ann v⟩ limits from 157.9 hours of Segue 1 observations with MAGIC are the strongest limits from dSph observations with IACTs and, for certain channels, also the most constraining bounds from dSph observations in general (table 2). For annihilation into the b b̄ and W⁺W⁻ final states, the MAGIC constraints complement those of the Fermi-LAT dSph observations for m_χ > 3.3 TeV and 2.8 TeV, respectively. For the leptonic channels, on the other hand, our limits become the most constraining above m_χ ∼ 300 GeV and ∼550 GeV, for annihilation into µ⁺µ⁻ and τ⁺τ⁻, respectively.

Decay
Over the last couple of years, a lot of attention has been given to decaying DM as a possible explanation of the flux excesses of high-energy positrons and electrons measured by PAMELA [50], Fermi-LAT [51], H.E.S.S. [52] and AMS-02 [53]. The needed DM particle lifetime in such a case, τ_χ > 10²⁶ s, is much longer than the age of the Universe, so that the slow decay does not significantly reduce the overall DM abundance and, therefore, there is no contradiction with astrophysical and cosmological observations. For some well-motivated candidates of this kind, the appropriate relic density through thermal production is achieved, naturally leading to a cosmological history consistent with thermal leptogenesis and primordial nucleosynthesis [55].

The currently strongest constraints on τ_χ from gamma-ray observatories are derived from the Fermi-LAT diffuse gamma-ray data: the 2-year-long measurements at energies between ∼1 and 400 GeV [56] exclude decaying DM with lifetimes shorter than 10²⁵–10²⁶ s (depending on the channel) for m_χ between 10 GeV and 10 TeV. VERITAS also provides lower limits on τ_χ from ∼48 hours of Segue 1 observations [24] (albeit the already mentioned caveat regarding the reliability of those results applies in this case as well), excluding values below 10²⁴–10²⁵ s (depending on the channel) for m_χ ≃ 1–10 TeV. Contrary to the annihilation case, the H.E.S.S. observations of the GC Halo are not competitive in the case of decay, as the expected gamma-ray fluxes are very similar in the signal and background regions. On the other hand, in [57] Cirelli et al. have shown that ∼15 hours of observations of the Fornax cluster with H.E.S.S. [58] could lead to τ_χ lower limits of the order of ∼(10²⁴–10²⁶) s for m_χ between 1 and 20 TeV.

Figure 14 shows our lower limits on τ_χ for the decay of a DM particle into quark-antiquark, lepton-antilepton and gauge boson pairs. Our strongest limits correspond to m_χ = 20 TeV, and lie between ∼5.9×10²⁴ s and ∼2.9×10²⁵ s. Compared to the bounds from Fermi-LAT, for the lightest DM particles, the limits from this work are three to four orders of magnitude weaker; on the other hand, for more massive scenarios (m_χ > 1 TeV), the MAGIC bounds are a factor ∼30 to a factor ∼3 less constraining (for the muon and tau lepton final states, respectively). In all of the considered scenarios, our results are more stringent than those of VERITAS (table 2). For the leptonic channels, we also show the regions that allow fitting the Fermi-LAT, PAMELA and H.E.S.S. cosmic-ray measurements [59], at 95% and 99.99% CL: our exclusion curves are factors ∼30 and ∼2 away from constraining those fits, for the µ⁺µ⁻ and τ⁺τ⁻ final states and masses m_χ = 2.5 TeV and 5 TeV, respectively.
In general, observations of galaxy clusters are better suited than dSphs for constraining τ_χ, due to the linear dependence of J_dec on the density ρ and the great amount of DM present in that kind of object. This is reflected in the fact that the predicted limits for ∼15 hours of observations of Fornax are of the same order as the ones we obtain from ∼160 hours of Segue 1 data [57].

Figure 15: Upper limits on ⟨σ_ann v⟩ for direct DM annihilation into two photons (χχ → γγ), from the Segue 1 observations with MAGIC (red line), compared with the exclusion curves from GC region observations by Fermi-LAT (3.7 years, blue line, [62]) and H.E.S.S. (112 hours, green line, [63]). Also shown is the ⟨σ_ann v⟩ value corresponding to the 130 GeV gamma-ray line (violet triangle, [64]).

6.2.2 Gamma-ray lines
The importance of the detection of gamma-ray lines from DM annihilation or decay cannot be overestimated: not only would a line be firm proof of DM existence, it would also reveal important information about its nature. This is why this feature has been so appealing, and many searches for a hint of it have been conducted so far, in galaxy clusters [60], Milky Way dSph satellites [61], and in the GC and Halo [62, 63]. In addition, it is worth mentioning the recently claimed hint of a line-like signal at ∼130 GeV in the Fermi-LAT data of the GC region [40, 64]: if the observed signal originates from direct DM annihilation into two photons, the WIMP particle should have a mass of m_χ = 129 ± 2.4 (+7/−13) GeV and an annihilation rate (assuming the Einasto profile) of ⟨σ_ann v⟩_γγ = (1.27 ± 0.32 (+0.18/−0.28))×10⁻²⁷ cm³ s⁻¹. Although this result could not be confirmed (nor disproved) by the Fermi-LAT Collaboration [62], the potential presence of such a feature has stirred the scientific community, and numerous explanations of its origin have appeared (for a review, see [65]).

Annihilation
The currently strongest upper limits on spectral lines from DM annihilation are provided by the 3.7 years of observation of the Galactic Halo by Fermi-LAT [62], and by 112 hours of observations of the GC Halo region by H.E.S.S. [63]. The Fermi-LAT upper limits on ⟨σ_ann v⟩ extend from ∼10⁻²⁹ cm³ s⁻¹ at m_χ = 10 GeV to ∼10⁻²⁷ cm³ s⁻¹ at m_χ = 300 GeV, while the H.E.S.S. bounds range between ∼10⁻²⁷ cm³ s⁻¹ at m_χ = 500 GeV and ∼10⁻²⁶ cm³ s⁻¹ at m_χ = 20 TeV.

Figure 16: Upper limits on ⟨σ_ann v⟩ for DM annihilation into a photon and a Z boson, from the Segue 1 observations with MAGIC (red line), compared with the exclusion curve from 2 years of GC region observations with Fermi-LAT (blue line, [66]). Also shown is the ⟨σ_ann v⟩ value corresponding to the 130 GeV gamma-ray line (violet triangle, [64]).

Figure 15 shows our limits from the line search, assuming DM annihilation into two photons, compared to the described bounds from Fermi-LAT and H.E.S.S. The strongest constraint from MAGIC is obtained for m_χ = 200 GeV, with ⟨σ_ann v⟩ ∼ 3.6×10⁻²⁶ cm³ s⁻¹, which is about one order of magnitude higher than the Fermi-LAT limit, and a factor ∼30 above the sensitivity needed for testing the hint of a line at 130 GeV. For higher m_χ values, the H.E.S.S. limits are more constraining than ours by a factor ∼50 (as expected). We note, however, that similar considerations as those discussed in section 6.2.1 apply when comparing the results of line searches from dSphs and the Galactic Halo.
Results from line searches when DM particles annihilate into a photon and a Z boson are shown in figure 16: the strongest bound from this work corresponds to ⟨σ_ann v⟩ ∼ 7.8×10^{-26} cm^3 s^{-1}, for m_χ ∼ 250 GeV. Compared to the constraints from 2 years of Fermi-LAT observations of the GC region [66], in the overlapping energy range our limits are one-two orders of magnitude weaker. Also shown is the ⟨σ_ann v⟩ estimate for the γZ explanation of the line-like feature at 130 GeV; the MAGIC upper limit is a factor ∼30 away from this value.

Decay

We also use our search for spectral lines to constrain the properties of decaying DM. If the DM particle is a gravitino in R-parity breaking vacua, with τ_χ ∼ 10^{27} s or larger, it can decay into a photon and a neutrino, producing one monochromatic gamma-ray line at E_γ ≃ m_χ/2. Searches for features of such origin have been conducted by Fermi-LAT, in 2 years of observations of the GC region [66], setting lower limits on τ_χ of a few ×10^{29} s up to m_χ ∼ 600 GeV, whereas, as explained in section 6.2.1, the H.E.S.S. observations of the GC Halo are not sensitive to decaying DM searches.

Figure 17 shows our results compared to those from Fermi-LAT. Although the considered m_χ range extends well beyond the energies required for decay into W or Z bosons (which would consequently fragment into photons with a continuous spectrum), here only the monochromatic emission is considered. The MAGIC limits are almost three orders of magnitude less constraining than those of Fermi-LAT, but cover a complementary range of masses. This is expected since, as discussed before, dSphs are suboptimal targets for DM decay signals. Our strongest bound is of the order of τ_χ ∼ 1.5×10^{26} s for m_χ ∼ 8 TeV. The case of the decay of scalar DM into two photons is not considered here, as it is trivial to derive the τ_χ lower limits for that scenario: the gamma-ray signal would be the same as for the γν channel, only twice as strong.

Implications for models

Generating the correct WIMP relic density requires a DM annihilation cross section at the time of freeze-out of ⟨σ_ann v⟩ ≃ 3×10^{-26} cm^3 s^{-1}. This value then constitutes a model-independent upper limit on ⟨σ_ann v⟩ today, and sets the minimal sensitivity required to observe a DM annihilation signal. The constraints derived in this paper from deep observations of Segue 1 lie, in most cases, two orders of magnitude above this canonical value. Nevertheless, for some channels, notably χχ → τ^+τ^-, the very characteristic photon spectrum allows us to derive more constraining bounds. In particular, for m_χ ∼ 500 GeV the limit on ⟨σ_ann v⟩ for tau final states lies just a factor ∼40 away from the thermal cross section. However, it should be borne in mind that a signal is expected at ⟨σ_ann v⟩ ≃ 3×10^{-26} cm^3 s^{-1} only when the annihilation cross section is s-wave dominated. Some well motivated DM scenarios suggest p-wave dominated annihilation (see, e.g., [67]), which is then suppressed today by the velocity squared of the DM particles. If this is the case, the expected ⟨σ_ann v⟩ can be five-six orders of magnitude smaller than the canonical value. Scenarios of this class include those where the DM particle is a Majorana fermion that annihilates into a light fermion, for example χχ → μ^+μ^- or τ^+τ^-.
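The suppression factors invoked here and in the following paragraph are simple to check numerically; the snippet below is illustrative arithmetic only, and the velocity v ∼ 10^{-3} c used for the p-wave estimate is an assumed typical galactic-scale value.

```python
# Quick numerical check of the suppression factors quoted in the text,
# relative to the thermal value <sigma_ann v> ~ 3e-26 cm^3 s^-1.
import math

sigmav_thermal = 3e-26            # cm^3 s^-1, canonical freeze-out value
alpha = 1.0 / 137.036             # fine-structure constant

print(f"p-wave (v ~ 1e-3 c) : v^2      = {(1e-3)**2:.0e}")
print(f"one-loop gamma gamma: alpha^2  = {alpha**2:.1e} "
      f"-> {sigmav_thermal * alpha**2:.1e} cm^3 s^-1")
print(f"VIB three-body      : alpha/pi = {alpha / math.pi:.1e} "
      f"-> {sigmav_thermal * alpha / math.pi:.1e} cm^3 s^-1")
```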
The expected rate of annihilations producing spectral features is also typically smaller than the canonical value. The direct production of two photons (χχ → γγ) occurs at the one-loop level, hence the expected rate for a thermally produced WIMP is necessarily suppressed by a factor α², giving an annihilation cross section which is, in most scenarios, O(10^{-30}) cm^3 s^{-1}. Therefore, even if the limits on ⟨σ_ann v⟩ for direct photon production are close to the thermal value, a sensitivity increase of at least two-three orders of magnitude is required in order to possibly observe a signal. For annihilations with a VIB contribution, the rate is suppressed compared to the canonical value by the extra electromagnetic coupling and by the three-body phase space, amounting to a factor of ∼α/π. Hence, observation of a gamma-ray signal from the final states μ^+μ^-γ or τ^+τ^-γ requires a sensitivity of ⟨σ_ann v⟩ ∼ O(10^{-28}) cm^3 s^{-1}, which is three orders of magnitude below the limits obtained in this paper. Lastly, the rate of annihilations producing gamma-ray boxes is a priori unsuppressed, since ⟨σ_ann v⟩ for the process χχ → φφ can be as large as the thermal value, and the branching fraction φ → γγ can be sizable, even 1. For this class of spectral features, the limits are then only a factor of a few above the values of the cross section where a signal might be expected.

It should be kept in mind, however, that these results are somewhat conservative: no flux enhancements, due to possible boost factors, have been considered. In general, the uncertainties entering the expected fluxes are large enough that potential surprises cannot be excluded.

Summary and conclusions

We have reported the results of indirect DM searches obtained with the MAGIC Telescopes using observations of the dSph galaxy Segue 1. The observations, carried out between January 2011 and February 2013, resulted in 157.9 hours of good-quality data, thus making this the deepest survey of any dSph by any IACT so far. In addition, this is one of the longest observational campaigns ever, with MAGIC or any other IACT, on a single, non-variable object. That imposes important technical challenges on the data analysis, for which suitable and optimized solutions have been successfully designed and implemented.

The data have been analysed by means of the full likelihood method, a dedicated analysis approach optimized for the recognition of spectral features, like the ones expected from DM annihilation or decay. Furthermore, with this method, the combination of data samples obtained with different telescope configurations has been performed in a straightforward manner. This has resulted in sensitivity improvements by factors ranging between 1.7 and 2.6, depending on the DM particle mass and the considered annihilation/decay channel.
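A minimal sketch of the combination idea behind the full likelihood method is shown below: several data sets, e.g. taken with different telescope configurations, share a single signal parameter, and their log-likelihoods simply add. The Poisson counting model, the 2.71 threshold for a one-sided 95% CL limit, and all counts are illustrative assumptions, not the MAGIC implementation.

```python
# Toy joint likelihood: data sets with known expected backgrounds share
# one signal strength s; -2 ln L terms add across data sets.
import math

def neg2_logl(s, datasets):
    """-2 ln L(s), up to an s-independent constant."""
    total = 0.0
    for n_obs, b_exp, eff in datasets:
        mu = b_exp + s * eff              # expected counts in this data set
        total += 2.0 * (mu - n_obs * math.log(mu))
    return total

def upper_limit_95(datasets, s_max=200.0, steps=20000):
    """One-sided 95% CL limit: s where -2 ln L rises by 2.71 above its
    minimum over s >= 0 (profile-likelihood prescription)."""
    grid = [s_max * i / steps for i in range(steps + 1)]
    vals = [neg2_logl(s, datasets) for s in grid]
    i_min = vals.index(min(vals))
    for i in range(i_min, len(grid)):
        if vals[i] - vals[i_min] >= 2.71:
            return grid[i]
    return None

# two hypothetical configurations: (observed, expected background, exposure)
datasets = [(105, 100.0, 1.0), (52, 50.0, 0.6)]
print(upper_limit_95(datasets))   # combined limit on the shared signal s
```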
No significant gamma-ray excess has been found in the Segue 1 sample. Consequently, the observations have been used to set constraints on the DM annihilation cross section and lifetime, assuming various final state scenarios. In particular, we have computed limits for the spectral shapes expected for secondary gamma rays from annihilation and decay into the SM pairs (bb̄, tt̄, μ^+μ^-, τ^+τ^-, W^+W^- and ZZ), for monochromatic gamma-ray lines, for photons produced by VIB processes, and for the spectral features from annihilation into gamma rays via intermediate scalars. The calculations have been done in a model-independent way, by assuming a branching ratio of 100% to each of the considered final states. 95% CL limits on ⟨σ_ann v⟩ and τ_χ have been obtained for m_χ in the 100 GeV - 10 TeV and 200 GeV - 20 TeV ranges, for annihilation and decay scenarios, respectively. The most constraining limits are obtained for DM annihilating or decaying purely into τ^+τ^- pairs: ⟨σ_ann v⟩ < 1.2×10^{-24} cm^3 s^{-1} for m_χ ≃ 500 GeV and τ_χ > 3×10^{25} s for m_χ ≃ 2 TeV.

Studying different targets is of particular importance for indirect DM searches. On one hand, a certain confirmation of the DM signal, especially if it is a featureless one, can only come from observations of at least two sources. On the other hand, diversity among observational targets is necessary, as searches in different objects are affected by different uncertainties. For instance, although most aspects of the general cold DM halo structure are resolvable by numerical approaches, the current knowledge and predictive power regarding its behaviour are limited by the complex interplay between the DM and baryonic components. It is still a long way until the effects baryons have on the DM distribution are fully understood. This is particularly relevant for targets like the GC and Halo, or galaxy clusters, since their significant luminous content can influence the evolution of the DM component. Furthermore, there are also uncertainties coming from the presence of substructures in the halo, and possible enhancements of the cross section due to quantum effects, that directly influence the value of the total expected flux. These uncertainties are large (O(10) or more) and their impact may differ between halos and on different scales. Thus, diversification of the observational targets is the optimal strategy for discovery.

Altogether, the results from this work represent an important step forward in the field of DM searches, significantly improving our previous limits from dSph galaxies and complementing the existing bounds from other targets.

Figure 18: Differential flux upper limits from 157.9 hours of the Segue 1 observations with MAGIC, assuming a point-like source and a power law-shaped signal emission with different spectral slopes Γ. As a reference, the Crab Nebula differential flux (solid line, [46]) and its 10% and 1% fractions (long-dashed and dashed lines, respectively) are also drawn.

Figure 2: Segue 1 θ²-distribution above 60 GeV, from 157.9 hours of observations. The signal (ON region) is presented by red points, while the background (OFF region) is the shaded gray area. The OFF sample is normalized to the ON sample in the region where no signal is expected, for θ² between 0.15 and 0.4 deg². The vertical dashed line shows the θ² cut value.
Figure 3: Gamma-ray spectrum for DM annihilation into different final states. (Left) Secondary photons (when applicable, the FSR is included in the spectrum); modeling is done according to the fits provided in [39]. (Right) Spectral distribution from annihilation into leptonic three-body final states (solid lines), with the contributions from FSR and VIB photons (dashed and long-dashed lines, respectively). The assumed mass-splitting parameter value is μ = 1.1.

Figure 4: Astrophysical factor for Segue 1 for annihilating (J_ann, solid line, left axis) and decaying (J_dec, dashed line, right axis) DM, assuming the Einasto density profile. (Left) As a function of the angular cut (θ²_cut, see section 2), for observations centered at the nominal position of Segue 1; the vertical dotted red line corresponds to the value used in this analysis, θ²_cut = 0.015 deg². (Right) As a function of the angular distance from Segue 1 (Φ), for a fixed angular cut θ²_cut = 0.015 deg²; the vertical dotted red lines correspond to the values of the distance between the ON and OFF regions relevant in this analysis.

Figure 5: Upper limits on ⟨σ_ann v⟩ for different final state channels (from top to bottom and left to right): bb̄, tt̄, μ^+μ^-, τ^+τ^-, W^+W^- and ZZ, from the Segue 1 observations with MAGIC. The calculated upper limit is shown as a solid line, together with the null-hypothesis expectations (dashed line), and the expectations for a 1σ (shaded gray area) and 2σ (shaded light blue area) significant signal.

Figure 6: Upper limits on ⟨σ_ann v⟩ (left) and lower limits on τ_χ (right), for secondary photons produced from different final state SM particles, from the Segue 1 observations with MAGIC.

Figure 7: Upper limits on ⟨σ_ann v⟩ for direct annihilation into two photons, as a function of m_χ, from the Segue 1 observations with MAGIC (solid line) and as expected for the case of no signal (dashed line), as well as for a signal of 1σ or 2σ significance (gray and light blue shaded areas, respectively).

Figure 9: Upper limits on ⟨σ_ann v⟩ for μ^+μ^-γ (left) and τ^+τ^-γ (right) final states, as a function of m_χ, for different values of the mass-splitting parameter μ. Also shown are the exclusion curves for the annihilation without the VIB contribution (dashed line).

Figure 10: Upper limits on ⟨σ_ann v⟩ for wide- (m_φ/m_χ = 0.1, left) and narrow- (m_φ/m_χ = 0.99, right) box scenarios, as a function of m_χ, from the Segue 1 observations with MAGIC (solid line), and as expected for the case of no signal (dashed line), as well as for a signal of 1σ or 2σ significance (gray and light blue shaded areas, respectively).

Figure 11: Comparison of upper limits on ⟨σ_ann v⟩, as a function of m_χ, from the Segue 1 observations with MAGIC, for different ratios of the scalar and DM particle masses (left), and of the narrow-box scenario (m_φ/m_χ = 0.99) with a monochromatic gamma-ray line (right).

Figure 12: Upper limits on ⟨σ_ann v⟩ for the bb̄ annihilation channel, from individual wobble positions and different Segue 1 observational periods. Also shown is the limit from the combined likelihood analysis.

Figure 17: Lower limits on τ_χ for DM decay into a neutrino and a photon, from the Segue 1 observations with MAGIC (red line), compared with the exclusion curve from 2 years of the GC region observations with Fermi-LAT (blue line, [66]).
Figure 19: Integral flux upper limits from 157.9 hours of the Segue 1 observations with MAGIC, assuming a point-like source and a power law-shaped signal emission with different spectral slopes Γ. Dashed lines indicate the expectations for the null-hypothesis case.

Table 1: Basic details of the Segue 1 observational campaign with MAGIC. Refer to the main text for additional explanations.

Table 2: Summary of the strongest limits and corresponding m_χ, obtained from the Segue 1 observations with MAGIC, for various final states from DM annihilation (ANN) and decay (DEC). When applicable, it is stated for which range of considered m_χ these limits become the most constraining from dSph observations, among the published results.

Table 4: Differential flux upper limits from 157.9 hours of Segue 1 observations with MAGIC, in four energy bins and for several power law-shaped spectra.
2014-02-06T15:05:28.000Z
2013-12-05T00:00:00.000
{ "year": 2013, "sha1": "e986b36d24e0da415b9639b086cbce78e9b7b568", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1475-7516/2014/02/008/pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "2ceeface83cd9a0ee6bfa13ec6f78a6c2a721272", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
54452577
pes2o/s2orc
v3-fos-license
Spinal cord injury without radiological abnormality (SCIWORA) manifested as self-limited Brown-Sequard syndrome

Introduction The combination of SCIWORA and Brown-Sequard syndrome in a single patient is a rare condition. In SCIWORA there is usually a delay in the onset of neurologic deficits, which can potentially lead to misdiagnosis. Therefore, the clinician should have a good understanding of the course of the disease to make an accurate diagnosis and provide appropriate treatment. Case report We report the case of a 20-year-old female with a chief complaint of severe neck pain and delayed limb weakness. The mechanism of injury was a fall in which the head hit the ground in a left lateral flexion position. Physical examination one hour after the injury showed zero motor power in the right limbs and a contralateral deficit of pain and temperature sensation. We diagnosed the patient with incomplete spinal cord injury at the C4 level with an associated Brown-Sequard syndrome. We applied a soft collar for neck immobilization and gave an NSAID for analgesia together with methylprednisolone. We found dramatic improvement within 10 hours of the injury, with motor power improving from 0 to 5 and normal sensory function. The patient was then discharged with a good functional outcome and no sequelae. Conclusion Incomplete cervical spinal cord injury without radiological abnormality manifesting as Brown-Sequard syndrome is a rare and potentially confusing condition. A better understanding of the course of the disease may help the clinician make the right diagnosis and plan management. Introduction Spinal cord injury without radiological abnormality (SCIWORA) is defined as the occurrence of acute traumatic myelopathy despite normal plain radiographs and normal computed tomography (CT) studies. It occurs predominantly in the pediatric population, with an incidence ranging from 4% to 66% of all spinal cord injuries (10%-20% of all pediatric spinal trauma) [1]. In young children, the pathogenesis of SCIWORA may be related to the mismatch in elasticity between the tissues of the vertebral column and the spinal cord [1-3]. The mechanism of injury can be direct or indirect spinal cord traction or compression, or vascular or ischemic injury [1,3]. SCIWORA has a large spectrum of neurologic deficits, ranging from mild and transient to complete spinal cord injury. The neurologic deficits can appear in delayed form, from hours to days after the injury [1,3,6]. Brown-Sequard syndrome is an anatomic hemisection of the cord, resulting in ipsilateral loss of motor function and proprioception and a contralateral deficit of pain and temperature sensation. It is a rare condition, as the trauma must damage the nerve fibers on just one half of the spinal cord. Fortunately, the prognosis for significant motor recovery is good, and the most important prognostic variable relating to neurologic recovery is the completeness of the lesion [5]. The combination of SCIWORA and Brown-Sequard syndrome is a rare condition that can lead to misdiagnosis. Therefore, clinicians should have good awareness and understanding of the disease to make the right diagnosis and provide appropriate therapy. Case report Incomplete cervical spinal cord injury without radiological abnormality was diagnosed in a victim of a sports injury at the authors' hospital. A 20-year-old female presented with a chief complaint of severe neck pain and delayed limb weakness after suffering a sports injury in a martial arts competition.
The mechanism of injury was a fall in which the head hit the ground first in a left lateral flexion position. There was no history of loss of consciousness. After the accident the patient felt neck pain, and about one hour later she could not move her right limbs. On physical examination, her general condition was good and the vital signs were within normal limits. The motor function of the right upper and lower extremities from the C5-S1 levels was decreased to zero with no muscle tone, and the sensory function on the left (contralateral) side below the C4 level was decreased, with loss of pain and temperature sensation. The ASIA scores were 56 for sensory function and 50 for motor function, and the functional score was 4 on the Nurick scale, meaning the patient was unable to walk without assistance. We performed a thorough diagnostic work-up including X-ray, CT scan and MRI investigations (Figs. 1 and 2; no fracture or dislocation is seen, and the alignment and pretracheal soft tissue also look normal). On plain X-ray and CT scan there was no visible fracture or dislocation and the alignment was good. On MRI there were no signs of abnormality. We therefore diagnosed the patient with incomplete spinal cord injury at the C4 level with an associated Brown-Sequard syndrome manifestation. We observed the patient and managed her with a soft collar for neck immobilization, an NSAID for analgesia, and methylprednisolone 30 mg/kg over 15 min followed by maintenance at 5.4 mg/kg/h for the next 23 h. We found dramatic improvement within 10 hours of the injury, with motor power improving from 0 to 5 in both affected extremities and normal sensory function on the contralateral side. The Nurick scale improved to 1, meaning the patient had no difficulty in walking. We observed the patient for the next 24 h and provided a rehabilitation program. The patient was then discharged with a good functional outcome and no sequelae. Discussion Spinal cord injury without radiographic abnormality (SCIWORA) was first introduced as a term by Pang and Wilberger and was first reported by Burke in 1974. It is used to define clinical symptoms of traumatic myelopathy with no radiographic or CT abnormality. The symptoms have a broad spectrum, from mild and transient paresthesia in the fingers to quadriplegia. The symptoms can appear at the moment of injury but can also be delayed for up to several days after the injury. The main therapeutic treatment is external immobilization of the spine for up to 12 weeks [6]. Incomplete cervical spinal cord injury without radiological abnormality can be confusing and frustrating because there may be a delay in the neurologic deficit and the course of the disease is very dramatic. In this case, SCIWORA manifested as Brown-Sequard syndrome, which is a rare condition. The neurologic status and the imaging studies of the patient were initially normal, but the patient then worsened very quickly, greatly worrying the patient and her family. Then, without any surgical intervention, there was surprisingly fast recovery and an excellent functional outcome. The delay in the neurologic deficit requires full and thorough observation. We used a soft collar for neck immobilization, a high dose of intravenous corticosteroid, an intravenous NSAID agent, and rehabilitation as the management approach. The prognosis is related to the severity of the spinal cord dysfunction. Outcome after incomplete injuries in older children is excellent [1].
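As a purely illustrative arithmetic aid (not clinical guidance), the total steroid exposure implied by the protocol reported above can be computed for a given body weight. The 60 kg weight in the example is hypothetical, since the patient's weight is not reported.

```python
# Illustrative calculation of the methylprednisolone protocol described
# above: 30 mg/kg bolus over 15 min, then 5.4 mg/kg/h for 23 h.
# Hypothetical weight; for illustration only, not clinical guidance.

def total_methylprednisolone_mg(weight_kg, bolus_mg_per_kg=30.0,
                                maintenance_mg_per_kg_h=5.4, hours=23.0):
    bolus = bolus_mg_per_kg * weight_kg
    maintenance = maintenance_mg_per_kg_h * weight_kg * hours
    return bolus, maintenance, bolus + maintenance

bolus, maint, total = total_methylprednisolone_mg(weight_kg=60.0)
print(f"bolus {bolus:.0f} mg, infusion {maint:.0f} mg, total {total:.0f} mg")
```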
The patient's age and the initial clinical presentation after the injury may be the clinical predictors of her very good recovery. Conclusion Incomplete cervical spinal cord injury without radiological abnormality manifesting as Brown-Sequard syndrome is a rare and potentially confusing condition, because of the delay in the neurologic deficit and the very dramatic course of the disease. A better understanding of the course of the disease may help the clinician make the right diagnosis and plan management, with better explanation and education for the patient and the distressed family.
2018-12-16T18:46:01.148Z
2018-11-26T00:00:00.000
{ "year": 2018, "sha1": "7734b8bf074fbc493f69d941d6c1cc2c8ef446cc", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.tcr.2018.11.007", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7734b8bf074fbc493f69d941d6c1cc2c8ef446cc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
149270757
pes2o/s2orc
v3-fos-license
Why the Lebanese NGOs didn't Succeed in Reforming the Citizenship Law?

Nayla Madi Masri

Allow me on behalf of thousands of Lebanese women married to non-Lebanese men to raise my voice high so that it reaches their Excellencies, Ministers and MPs… A woman says: "why does the Lebanese government grant a Lebanese man the right to pass his nationality to his children and wife, while it deprives a Lebanese woman from this right?" Where is the logic? Doesn't this undermine blatantly her citizenship rights and the principle of equality? Doesn't this undermine the rights of children, men, women, and the family combined? Isn't this regarded as a violation of human rights and unfair discrimination between men and women? This shouldn't be the case given that the Lebanese Constitution acknowledges the principle of equality among citizens, as do all international agreements ratified by Lebanon, namely the Convention on the Elimination of all Forms of Discrimination Against Women (CEDAW).

I write these lines while my mind is flooded with memories of cases of households of Lebanese women married to non-Lebanese men, whom I met while conducting a study on the status of Lebanese women married to non-Lebanese men within the framework of the National Committee for the Follow up on Women's Issues (NCFUWI) and the United Nations Development Program (UNDP) project on Lebanese women's rights and the citizenship law (Charaffedine, 2010). The study revealed that between 1995 and 2008, 18,000 women suffered from the repercussions of the citizenship law, that is, around 80,000 women, children, and men affected overall. It is noteworthy that all the cases are similar irrespective of social status, nationality of the husband, confession, and geographical location.

Women are blatantly discriminated against when it comes to the injustice inflicted by the citizenship law, which undermines their complete and effective citizenship. This law does not only prohibit them from exercising their fundamental rights as female citizens, but it preemptively confiscates the rights of their children as human beings. The citizenship law has become obsolete and no longer meets the ambitions and needs of Lebanese society. Hence, this ever-present issue has to be addressed from all angles, be they historical, legal, or social. Here civil society plays an important role.
No one can deny that the NGOs in Lebanon have undertaken great efforts to amend this discriminatory law of the land. Why have they not succeeded so far in reforming the citizenship law? What are the difficulties and challenges they are confronting?

According to Fahima Charafeddine (2005), the Lebanese women's movement is an integral part of the Arab women's movements and was established in the early stages of the twentieth century, with the first women's union founded in 1921. This raises a big question mark about the achievements made by Lebanese women's organizations. As part of Lebanese civil society, women's NGOs have managed to highlight the citizenship law issue as a prominent social issue that constitutes a blatant violation of human rights. Non-governmental organizations have succeeded in generating media interest in the issue. Thus, the audiovisual and print media have allotted air time and columns to discuss this problem through presenting live testimonies and hosting specialists in the field, such as lawyers, social workers, and psychologists, to talk about the issue. Studies and campaigns were also prepared, such as the project on Lebanese women's rights and the citizenship law funded by UNDP and implemented by NCFUWI between 2008 and 2010, and the campaign "Nationality is my right and the right for my family" conducted by the Collective for Research and Training on Development - Action (CRTD.A) and other NGOs. Thus, interest in the topic has also grown among civil society in general. Lobbying, research, training sessions, and demonstrations have been dedicated to this issue. The question remains: why hasn't this problem been solved yet? Why couldn't Lebanese women's organizations fulfill their promises to amend the citizenship law? What are the major excuses given by Lebanese politicians and legal experts who successfully prevent any changes in the present law?

In a political and sectarian system based on sharing power between confessions, numbers play an essential role in defining constituencies and determining their future. In Lebanon the last census was carried out in 1932 under the French mandate. All further census taking was blocked as a result of an attempt to cover up demographic changes. The lack of progress with respect to the citizenship law must be seen in this context. Sectarian logic has prevented a debate based on facts. Thus, the citizenship law became one of the doors that the confessions want to keep closed. The political sectarian system has prevented the possibility of changing these laws until now. Despite the efforts of the NGOs to reform the present law, this problem remains a main challenge preventing the Lebanese state from fulfilling its commitments under the international instruments it has signed.

There are undoubtedly a number of factors that affect women's status in Lebanon. The two primary components are the inherent sectarianism and the patriarchal nature of society, as mentioned before. There are 19 formally recognized religious sects in Lebanon. We believe that these factors have not only affected women's status, but have directly impeded the success of the women's advocacy movement in Lebanon. Lebanese women have been very successful in making significant strides in society, particularly in comparison to other MENA countries; however, many obstacles still lie ahead. The patriarchal culture in Lebanon defines and intensifies many of these challenges, making it all the more difficult to eliminate laws and traditions that are based on male dominance.

Women's NGOs in Lebanon, though successful in bringing about a number of positive changes, have not united to create an effective, core movement capable of agreeing on key reform issues and working together to achieve those reforms. This has primarily been a result of sectarian divisions and competition for donor funds directed towards the improvement of women's status. These obstacles will need to be overcome in order to develop a more unified women's advocacy movement in Lebanon.
Sectarian divisions have had a number of effects on the women's rights movement.As noted, the Lebanese people tend to identify more with their sectarian affiliation than with their national identity.This tendency works against women's advancement in several ways.First, even among women, the interests and priorities of the sect are held above issues of gender rights, giving women's issues less importance than other issues.Furthermore, each sect has differing views on reform, limiting women's ability to form a critical mass that transcends sectarian divisions and supports a national feminist agenda.In turn, this has led to weak coordination among women's NGOs and subsequently limited their ability to mobilize significant public support for improved women's status. The allocation and availability of donor funds has also proven to be an obstacle to the formation of a unified coalition for women's rights.Though there is an increasing number of NGOs committed to improving the status of women in Lebanon, these NGOs remain largely in competition with one another for donor funding.Additionally, NGOs often veer away from their original causes to accommodate donor priorities and secure additional funds.Donors also contribute to the movement's fragmentation through a lack of coordination among one another in the programs they fund, leading to a duplication of efforts among groups that already have difficulty collaborating. It is also noteworthy that protest activities organized to support women's issues do not attract a large number of participants.One reason for this could be that NGOs lack the ability to mobilize public support as we mentioned above.This is due to many factors, mainly lack of national coordination and strategy, lack of a media strategy, as well as the political affiliations of some civil society associations and their members. The heterogeneous nature of Lebanese society often means that the simplest issues become issues of high politics.As Marguerite Helou (2010) has pointed out, issues related to the survival of communities, their identity, share of power, and loyalty to the group supersede any other loyalties.This type of culture acts against women's advancement in two ways at least.First, issues of gender equality are usually pushed down on the list of group priorities (especially in sectarian religious cultures) and second, the political behavior of women, as that of men, tends to serve the interests and priorities of their sect over gender and human rights issues thus consecrating a culture that works against them. In summary, it should be reaffirmed that the civil society organizations (NGOs, political parties, syndicates etc.) can play an active role in spreading the principles of human rights.They have the ability to form pressure groups to reform the discriminatory laws across all sectors of society. 
In order for this to take place in the Lebanese context, the following are necessary:

- a unified goal, so as to form one pressure group to lobby for change;
- work oriented towards the benefit of recipients, and not merely personal benefit and profit;
- education on democratic governance and the meaning of citizenship; by necessity, this would include discussions of the rights and obligations of citizens and the role of citizens in a democratic context;
- engagement of different media outlets to promote articles about these issues and make them available to as wide a public as possible;
- cooperation with other stakeholders to enhance civic oversight of state performance in the areas of human rights, with the aim of building partnerships between parliamentarians and international and national organizations working on human rights issues; specific indicators measuring performance on these issues can be established, and these would also help embed international human rights and democratic standards in everyday practices;
- a law that grants expatriates the right to Lebanese citizenship if they meet a collective set of criteria (such as being born in Lebanon, having lived in Lebanon for a set number of years, or having a permanent residence); under this new law, Lebanese women would also be granted the right to pass on their nationality to foreign-born spouses and children.

A prerequisite for the above is that leading women's NGOs, and civil society in general, sever their ties to political parties and religious institutions. Furthermore, an overall political will must develop to achieve real progress in the field of nationality legislation, and citizenship law with respect to women in particular.

Moreover, we believe that lobby groups should be formed to advocate on the basis of issues and not partisan politics. NGOs should coordinate among each other to prioritize the common public interest and not their own particular interests. We should bear in mind that affiliation with human rights transcends any identification with politics, in order to boost human rights in the country, especially the human rights of women. In conclusion, reforming the citizenship law is not a political option, but rather a human rights necessity.

Nayla Madi Masri is a women's rights activist. Email: naylamadi@yahoo.fr
2019-05-11T13:06:01.002Z
1970-01-01T00:00:00.000
{ "year": 1970, "sha1": "c449f80825a02019cf70bc85dd518527fd4ab730", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.32380/alrj.v0i0.56", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "c449f80825a02019cf70bc85dd518527fd4ab730", "s2fieldsofstudy": [ "Political Science", "Law" ], "extfieldsofstudy": [ "Sociology" ] }
250627248
pes2o/s2orc
v3-fos-license
Holographic description of entanglement entropy of free CFT on torus

The Ryu-Takayanagi conjecture provides a holographic description of the entanglement entropy for strongly coupled holographic CFTs in the semi-classical limit. It proposes that the entanglement entropy is given by the area of the minimal homologous surface in the dual bulk. We show that the common terms of the entanglement entropy for free massless fermions or bosons on a torus in the high-temperature expansion can be described by the sum of the signed areas of extremal surfaces in the BTZ spacetime. The resulting EE and the corresponding bunch of extremal surfaces have preferable properties compared with those from the Ryu-Takayanagi conjecture.

Introduction

The holographic principle of gravity proposes that the degrees of freedom of d+1-dimensional gravity are those of a d-dimensional theory without gravity. This perspective originates from the Bekenstein-Hawking formula, according to which the black hole entropy is the area of its event horizon. The AdS/CFT correspondence [1-3] provides a toy model to investigate the holographic properties of gravity. The Ryu-Takayanagi conjecture [4,5] is a generalization of the Bekenstein-Hawking formula. It is believed to give a gravity dual of the entanglement entropy (EE) of strongly coupled holographic CFTs in the semi-classical limit. Let ρ_A denote the reduced density matrix of a space-like region A in the CFT; then the EE S_A of the region A is defined as

S_A = −Tr ρ_A log ρ_A.    (1.1)

For a CFT in the semi-classical limit, the Ryu-Takayanagi conjecture provides the EE of a region A as

S_A = Area(γ_A)/(4G),    (1.2)

where G is the gravitational constant and γ_A is the global minimal surface in the dual bulk spacetime that is homologous to the sub-system on the AdS boundary [6]. This conjecture has played a fundamental role in studying this issue and has shed light on the relationship between the AdS/CFT correspondence and information theory. It has also brought many information-theoretical analyses [7-10] of the AdS/CFT correspondence. Thus, it is perhaps the most fundamental key to grasping the AdS/CFT correspondence. We will focus on the EE of one interval for free massless fermions or bosons on the torus [11,12]. We will see that the common terms in them can be described as the sum of the signed areas of extremal surfaces in the BTZ spacetime. Although a free CFT does not have a gravity dual, it is interesting to describe the EE of free CFTs in a holographic way, because it presents a geometrical point of view on the EE of free CFT and allows us to compare the EEs of holographic CFTs and free CFTs. In addition, we will see that the configuration of the extremal surfaces and their signs has a geometrical consistency between the CFT side and the gravity side. Surprisingly, the resulting EE is smaller than the holographic EE given by the Ryu-Takayanagi formula, and the corresponding bunch of extremal surfaces has preferable holographic properties.

The construction of this paper is as follows. In section 2, we review the replica trick and discuss the geometrical consistency between the replica manifold on the CFT side and the gravity side. In section 3, we examine the extremal surfaces in the BTZ spacetime. In section 4, we point out that the EE of one interval for a free massless fermion on the torus is described by the sum of the signed areas of all the extremal surfaces that extend from the edges of the interval on the AdS boundary. Finally, section 5 is the conclusion.
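As a quick sanity check of definition (1.1) against the replica construction reviewed in the next section, the identity S_A = −∂_n log Tr ρ_A^n at n = 1 can be verified numerically for an arbitrary density matrix. The snippet is illustrative only and is not part of this paper's computation.

```python
# Numerical check: -d/dn [log Tr rho^n] at n=1 equals the von Neumann
# entropy -Tr rho log rho, for a random density matrix. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = a @ a.conj().T
rho /= np.trace(rho).real               # positive, unit-trace

p = np.linalg.eigvalsh(rho)             # eigenvalues of rho
p = p[p > 1e-14]

s_von_neumann = -np.sum(p * np.log(p))

eps = 1e-6                              # central difference at n = 1
log_tr = lambda n: np.log(np.sum(p**n))
s_replica = -(log_tr(1 + eps) - log_tr(1 - eps)) / (2 * eps)

print(s_von_neumann, s_replica)         # agree to high precision
```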
Replica manifolds and deficit angle consistency

We will investigate the geometry of the replica manifold and find that it has conical singularities at the edges of the region. Eq. (1.1) can be rewritten by the replica trick [13,14] as

S_A = lim_{n→1} S_A^{(n)}, with S_A^{(n)} = 1/(1−n) log Tr ρ_A^n.    (2.1)

For convenience, we will consider the Renyi entropy S_A^{(n)} in the following discussion. Let Z be the partition function of a CFT on a manifold B, and Z_A^{(n)} the partition function of the same CFT on the replica manifold B_A^{(n)}; then Tr ρ_A^n is written in terms of the partition functions as Tr ρ_A^n = Z_A^{(n)}/Z^n. For example, consider B = C ∪ {∞} and A = [u, v]. Instead of performing the path integral on the corresponding replica manifold B_A^{(n)}, we can evaluate the Renyi entropy of A as the following 2-point correlation function:

S_A^{(n)} = 1/(1−n) log ⟨T_n(u) T̄_n(v)⟩_B,    (2.2)

where T_n and T̄_n are the primary twist operators with conformal weight h = c(n²−1)/(24n).

To investigate the geometry of the replica manifold B_A^{(n)} with B = C ∪ {∞} and A = [u, v], consider the scale transformation of the Renyi entropy. For a moment we abbreviate the lower index A and specify quantities defined on B^{(n)} by the upper index (n). Applying the Ward-Takahashi identity for the scale transformation to eq. (2.2), we obtain a relation (eq. (2.3)) in which ℓ is a scale of the system, and g_μν, g and R are the metric, the determinant of the metric and the Ricci scalar, respectively. In view of eq. (2.3), the replica manifold B_A^{(n)} should be singular at ∂A = {u, v}, and we can regard the Ricci scalar as having delta-function support there. Thus, there exists a conical deficit with angle Δφ = π(n − 1/n) on the replica manifold at ∂A.

We can also confirm that there exists a conical singularity with angle Δφ = π(n − 1/n) at ∂A by considering a CFT on C/Z_N [15], where Z_N = Z/N and N denotes a positive integer. Consider a free massless scalar field on C/Z_N with central charge c. Let N = 1/n and take the sub-system A = [0, ∞); then the partition function can be evaluated in terms of Λ and ε, the IR and UV cutoff lengths, respectively. Compared with eq. (2.4), we can understand that there exists a conical singularity with angle Δφ = π(n − 1/n) at a single edge of the interval A.

Focus now on the gravity dual of the above replica manifold. Since B_A^{(n)} has the conical singularities discussed above, the replica bulk manifold M_A^{(n)} should have conical singularities in a geometrically consistent manner. As a cosmic string with string tension (n²−1)/(8nG) produces singularities with deficit angle π(n − 1/n) around it for n ∼ 1, it is natural to consider that the replica bulk manifold M_A^{(n)} contains a cosmic string with string tension (n²−1)/(8nG) [16]. This seems the unique way of constructing the replica bulk manifold M_A^{(n)} consistently with respect to the deficit angles between ∂M_A^{(n)} and B_A^{(n)}. Notice, however, that we can also consider introducing a cosmic string with the opposite-signed string tension −(n²−1)/(8nG), in which case there are more configurations of cosmic strings satisfying this boundary condition than before. Actually, we will see that the Renyi entropy of the free CFT with n ∼ 1 is almost described by many cosmic strings, with both signs of the string tension, satisfying the deficit angle consistency.

Extremal surfaces in BTZ spacetime

The cosmic strings for n ∼ 1 mentioned in the previous section behave as 1-dimensional extremal surfaces. We will examine the extremal surfaces in the BTZ spacetime and their areas. The metric of the (Euclidean) BTZ spacetime can be written as

ds² = (r²/L² − M) dt² + (r²/L² − M)^{−1} dr² + r² dθ²,    (3.1)

where M is the mass parameter of the black hole and L is the AdS radius, and the t and θ coordinates are periodic. As this is a 2+1-dimensional spacetime, we derive the space-like geodesics that extend from ∂A : (t, r, θ) = (0, ∞, ±θ_A) into the bulk on the t = 0 time-slice by minimizing the length of a line.
Here we integrated from r = r_0 to r = r_max to avoid the IR divergence, and we assumed r_max ≫ r_0 and ignored the sub-leading terms. For the later discussion, consider the geodesics that extend only from (0, ∞, +θ_A). We can immediately obtain them by replacing θ → θ + θ_A in eq. (3.3). In this case, each of their lengths is also expressed as eq. (3.4) with θ_A = 0 substituted. Some of them are depicted in figure 1. Notice that m represents the number of times that the corresponding geodesic passes through the opposite side of the black hole with respect to the interval [−θ_A, θ_A] on the AdS boundary. The minimal radius r_0 of the geodesic approaches the black hole horizon radius √M L as m → ∞. The BTZ black hole spacetime is considered as the dual gravity spacetime of a 1+1-dimensional CFT on the torus in the AdS/CFT correspondence. Let us consider a CFT defined on an S¹ of circumference C, with UV cutoff length ε and inverse temperature β. The bulk parameters are translated into these CFT quantities as follows. The black hole mass and the IR cutoff satisfy β/C = 1/√M and ε = L/(2π r_max). The Brown-Henneaux formula [17] translates G into the central charge c of the CFT as c = 3L/(2G). The length of an interval is ℓ := Cθ_A/π ∈ [0, C]. Then eq. (3.4) can be rewritten as eq. (3.5). In the next section, we will see that the EE of one interval for the free massless field on a torus is almost described by an appropriate sum of ±s(mC ± ℓ).

Holographic description for EE on torus

Consider a 1+1-dimensional CFT on the circle of circumference C = 1 at inverse temperature β. The common term of the EE of an interval A = [−ℓ/2, ℓ/2], ℓ ∈ (ε, 1−ε), on this system in the high-temperature expansion is given by eq. (4.1) [11,12]. This EE is described as the following sum of the signed lengths of extremal surfaces from eq. (3.5), eq. (4.2). The corresponding extremal surfaces are depicted in fig. 2. From an algebraic viewpoint, we cannot determine the configurations of each surface. In particular, the surfaces corresponding to −s(1) seem not to need to extend from each ±ℓ/2. However, from the conical deficit angle consistency discussed in section 2, it is natural to describe the surface configuration as depicted in fig. 2.

In what follows, we comment on this holographic description of the free field EE, compared with the Ryu-Takayanagi conjecture in the BTZ black hole spacetime. The Ryu-Takayanagi conjecture predicts the corresponding EE as eq. (4.3), where S_BH = cπ/(3β) corresponds to the black hole entropy. We will focus on a few geometrical differences between the configurations of extremal surfaces corresponding to them. First, the most striking difference between eq. (4.3) and eq. (4.2) is the horizon contribution: the black hole horizon emerges as a result of the difference between the surface wrapped m+1 times around the black hole and the one wrapped m times, taking m to infinity. Both sides of eq. (4.5) are equivalent as surfaces, not just in value. Finally, we should consider that eq. (4.2) may describe the holographic EE for the holographic CFTs. The EE given by eq. (4.2) is smaller than that from the Ryu-Takayanagi formula, eq. (4.3). To consider the homologous condition, pay attention to the topology of the surface configuration in fig. 2. The total winding number of the surfaces with m ≥ 1 around the black hole is 0. Thus, when we regard all the surfaces in fig. 2 as a single surface, it is homotopy equivalent to the sub-region A on the AdS boundary. Therefore, eq. (4.2) may be the true holographic EE.
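For orientation, the single-geodesic contribution s(ℓ) implied by the dictionary of section 3 takes the standard thermal-CFT form; the expression below is a reconstruction under the stated identifications (β/C = 1/√M, ε = L/(2π r_max), c = 3L/(2G)) rather than a quotation of the paper's display:

\[ s(\ell) \;=\; \frac{\mathcal{L}_{\rm geo}(\ell)}{4G} \;=\; \frac{c}{3}\,\log\!\left[\frac{\beta}{\pi\epsilon\,C}\,\sinh\!\left(\frac{\pi\ell}{\beta}\right)\right]. \]

The emergence of the horizon term noted above then follows as a one-line limit:

\[ \lim_{m\to\infty}\Big[s\big((m{+}1)C\big)-s(mC)\Big] \;=\; \frac{c}{3}\,\lim_{m\to\infty}\log\frac{\sinh\!\big(\pi(m{+}1)C/\beta\big)}{\sinh\!\big(\pi m C/\beta\big)} \;=\; \frac{\pi c\,C}{3\beta}, \]

which reproduces S_BH = cπ/(3β) for C = 1.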
Conclusion

We provided a holographic description of the entanglement entropy given by eq. (4.1) as the sum of the signed areas of the extremal surfaces satisfying the deficit angle consistency. It has preferable properties for the HEE for the following reasons. First, it does not have the entanglement shadow region that the Ryu-Takayanagi conjecture has. Second, the resulting holographic EE, eq. (4.2), gives a smaller EE than that from the Ryu-Takayanagi conjecture, and the bunch of surfaces corresponding to eq. (4.2) is homotopy equivalent to the sub-system on the AdS boundary. Thus, eq. (4.2) can be a candidate for the holographic EE of the holographic CFT.

Figure 1: The extremal surfaces in the BTZ black hole spacetime with M = 1 (and L fixed). The red and blue lines are the extremal surfaces described by eq. (3.5) with 0 ≤ m ≤ 2, extending from θ_A = ±π/3 and θ_A = −π/3, respectively. Each disk represents a time slice of the spacetime compactified in the radial direction. The outer circles represent the AdS boundary, and the black hole region is not drawn.
2020-11-03T02:00:53.511Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "c713de16f3a507d45b44e60eb0497ad13f45af58", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c713de16f3a507d45b44e60eb0497ad13f45af58", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256039648
pes2o/s2orc
v3-fos-license
Gravitational free energy in topological AdS/CFT

We define and study a holographic dual to the topological twist of N = 4 gauge theories on Riemannian three-manifolds. The gravity duals are solutions to four-dimensional N = 4 gauged supergravity, where the three-manifold arises as a conformal boundary. Following our previous work, we show that the renormalized gravitational free energy of such solutions is independent of the boundary three-metric, as required for a topological theory. We then go further, analyzing the geometry of supersymmetric bulk solutions. Remarkably, we are able to show that the gravitational free energy of any smooth four-manifold filling of any three-manifold is always zero. Aided by this analysis, we prove a similar result for topological AdS5/CFT4. We comment on the implications of these results for the large N limits of topologically twisted gauge theories in three and four dimensions, including the ABJM theory and N = 4 SU(N) super-Yang-Mills, respectively.

Introduction and outline

The AdS/CFT correspondence conjectures an equivalence between certain quantum field theories (QFTs) and quantum gravity with appropriate boundary conditions [1-3]. In [4] we proposed to formulate a "topological" version of AdS/CFT, where the boundary theory is a topological QFT (TQFT). In the dual gravity description this amounts to studying a more specific class of boundary conditions, which induce a Witten-type topological twist [5] of the dual QFT on the conformal boundary. Such TQFTs typically have a finite number of degrees of freedom, and in some instances can be solved completely.¹ Of course, these theories are often also of independent mathematical interest, since observables are topological/diffeomorphism invariants. A key motivation for studying AdS/CFT in this set up is that the field theory is potentially under complete control: observables are mathematically well-defined and exactly computable. One can then focus on the dual gravitational description. In principle this is defined by a quantum gravity path integral, with boundary conditions determined by the observable one is computing. However, we have no precise definition of this, and in practice an appropriate strong coupling (usually large rank N) limit of the QFT is described by supergravity.

[Footnote 1: For example, the Donaldson-Witten twist of N = 4 SU(N) super-Yang-Mills is relevant for the set up in [4]. For N = 2 the topological correlation functions have been computed explicitly for simply-connected spin four-manifolds of simple type in [6]; they may be written in terms of Abelian Seiberg-Witten invariants.]
This classical limit is to be understood as a saddle point approximation to the quantum gravity path integral, where one instead finds classical solutions to supergravity with the appropriate boundary conditions. But in general even this is quite poorly understood: which saddle point solutions should be included? For example, in addition to smooth real solutions, should one allow for certain types of singular and/or complex solutions, e.g. as in [7-9]? When the dual theory is a TQFT, in principle all observables are exactly computable in field theory, for many classes of theories defined on different conformal boundary manifolds. The AdS/CFT correspondence can then potentially help to clarify the answers to some of these questions, since the semi-classical gravity result must match the TQFT description. Of course, one is tempted to push this line of argument further and speculate that this is a promising setting in which to try to formulate a topological form of quantum gravity on the AdS side of the correspondence. Such a theory should be completely equivalent to the dual TQFT description. At present this looks challenging, to say the least, but there is an analogous construction in topological string theory. Here U(N) Chern-Simons gauge theory (a Schwarz-type TQFT) on a three-manifold M_3 is equivalent to open topological strings on T*M_3 [10]. There is a large N duality relating this to a dual closed topological string description. For example, for M_3 = S^3 the closed strings propagate on the resolved conifold background, with N units of flux through the S^2 [11]. Here both sides are under computational control, and relate a TQFT to a topological sector of quantum gravity (string theory). This duality shares many features with AdS/CFT,² and might hint at how to attack the above problem.

In [4] we began much more modestly, setting up the basic problem in N = 4 gauged supergravity in five dimensions. With appropriate boundary conditions this defines the Donaldson-Witten topological twist of the dual N = 2 theory on the conformal boundary four-manifold, and we focused on the simplest observable, namely the partition function. Under AdS/CFT in the supergravity limit, minus the logarithm of the partition function is identified with the holographically renormalized supergravity action. We refer to this as the gravitational free energy in this paper, and the main result of [4] was to show that this is indeed a topological invariant, i.e. it is invariant under arbitrary deformations of the boundary four-metric. The computation, although in principle straightforward, was technically surprisingly involved. Since four-manifolds are also notoriously difficult, in this paper we set up an analogous problem in one dimension lower. The relevant bulk supergravity theory is a Euclidean version of N = 4 SO(4) gauged supergravity in four dimensions. As well as the metric, the bosonic content of the theory contains two scalar fields and two SU(2) gauge fields. Here Spin(4) = SU(2)_+ × SU(2)_- is the spin double cover of SO(4), and the fermions transform in the fundamental 4 representation of this R-symmetry group. The topological twist in particular identifies the boundary value of one of these two SU(2) R-symmetry gauge fields with the spin connection of the conformal boundary three-manifold (M_3, g). There is then a consistent truncation in which the other SU(2) gauge field is identically zero in the bulk.
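Schematically, the mechanism behind this identification is the standard one for Witten-type twists. Writing the Killing spinor variation with both the spin connection ω and the R-symmetry connection A (coefficients and representation indices suppressed, so this is a sketch rather than the precise variation of section 3),

\[ \delta\psi_\mu \;\sim\; \left(\partial_\mu + \tfrac{1}{4}\,\omega_\mu{}^{ab}\gamma_{ab} - A_\mu\right)\epsilon \;\xrightarrow{\ A\,=\,\omega\ }\; \partial_\mu\,\epsilon, \]

setting the boundary SU(2) gauge field equal to (the SU(2) part of) the spin connection cancels the two connection terms on suitable spinor components, so that covariantly constant supersymmetry parameters exist on any (M_3, g).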
Such Witten-type twists of N = 4 gauge theories in three dimensions have been studied in [12]. In the first part of the paper we establish a result analogous to that in [4], namely that the gravitational free energy of such solutions is indeed invariant under arbitrary deformations of the boundary three-metric on (M_3, g).

We next analyse in more detail the geometry of supersymmetric solutions to the four-dimensional bulk supergravity theory. This geometry is characterized by what we call a twisted identity structure. In an open set where the bulk spinor is non-chiral and the SU(2) R-symmetry gauge bundle is trivialized, the spinor defines a canonical orthonormal frame {E^a}, a = 1, ..., 4. However, {E^I}, I = 1, 2, 3, rotate as the vector 3 under the SU(2) gauge group, while E^4 is invariant, so that globally this frame is twisted by the R-symmetry bundle. We show that a supersymmetric solution to the bulk supergravity equations equivalently satisfies a certain first order differential system for this twisted identity structure. Using these equations, remarkably we are able to show that the bulk on-shell action is always a total derivative. By carefully analysing the global structure of the canonical twisted frame, and how this behaves where the bulk spinor becomes chiral or zero, this is shown to be globally a total derivative for any smooth solution. This is true on any four-manifold Y_4 that fills any three-manifold boundary M_3 = ∂Y_4. Moreover, on applying Stokes' theorem the bulk integral then always precisely cancels the boundary terms (including the holographic counterterms) in the action, with the net result being that the gravitational free energy of any smooth solution is zero! Aided by this analysis, we return to the topological AdS_5/CFT_4 set up of [4], and prove a precisely analogous result. Of course, here not every four-manifold bounds a smooth five-manifold.

At first sight these results are somewhat disappointing: the classical free energy is zero for smooth fillings, irrespective of their topology. Zero is a topological invariant, and not a very interesting one. However, if one believes that smooth real saddle points are the dominant saddle points in gravity, this is then a robust prediction for the large N limits of various classes of topologically twisted SCFTs, in both three and four dimensions. For example, since N = 4 gauged supergravity in four dimensions [13] is a consistent truncation of eleven-dimensional supergravity on S^7 (or S^7/Z_k) [14], as we discuss later in the paper this leads to a prediction for the large N limit of the partition function of the topologically twisted ABJM theory, on any three-manifold M_3. On the other hand, with the exception of the SU(N) Vafa-Witten partition function on M_4 = K3 discussed in section 8, to date none of these large N limits have been computed in field theory: such computations now become very pressing! It might be that these match our supergravity results for smooth solutions, but if not then one necessarily has to consider more general saddle points, allowing e.g. for appropriate singularities and/or complex saddle points. Notice that although our computation of the classical gravitational free energy will in general break down for such solutions, the result that this quantity is independent of boundary metric deformations is a priori a more general result.
We have also so far only focused on the partition function, while in principle one should also be able to compute topological correlation functions using similar holographic methods. We leave a fuller discussion of some of these issues to section 8.

The outline of the paper is as follows. First, in section 2 we review the topological twists of three-dimensional supersymmetric field theories, as they are perhaps less well known than their four-dimensional relatives, and discuss the gravity dual to the ABJM theory. In section 3 we introduce the relevant four-dimensional N = 4 Euclidean gauged supergravity. Surprisingly the supersymmetry transformations of this theory, as formulated in [14], do not appear in the literature, and we hence first fill this gap. After holographically renormalizing the action, in section 4 we identify the conformal boundary Killing spinor equations which admit a topological twist as a particular solution on any oriented Riemannian three-manifold (M_3, g). The bulk spinor equations are then expanded in a Fefferman-Graham-like expansion. In section 5 we prove that the gravitational free energy is independent of the metric g on M_3, following a similar computation in [4]. In section 6 we show that a supersymmetric solution to the bulk equations of motion equivalently satisfies a first order differential system of equations for the twisted identity structure described above. Using this we prove that the gravitational free energy of any smooth real solution is zero. In section 7 we return to the AdS_5/CFT_4 correspondence in [4], and prove an analogous result. We conclude in section 8 with a more detailed discussion of some of the issues mentioned above.

2 3d TQFTs and topological twists

We begin in section 2.1 by reviewing topological twists of three-dimensional supersymmetric QFTs. In section 2.2 we focus on the ABJM theory, its gravity dual, and the consistent truncation of eleven-dimensional supergravity on S^7/Z_k to four-dimensional N = 4 gauged supergravity.

2.1 Twisting N = 4 theories

One perspective on the topological twist is that it involves a modification of the global symmetry group of the theory, obtained by combining the spacetime symmetries with the R-symmetry of the theory. Concretely, one looks for group products such that a supercharge would transform as a singlet under an appropriate diagonal subgroup. In three dimensions every orientable manifold is spin.^3 Therefore, the frame bundle of any orientable three-manifold may be lifted to a Spin(3) ≅ SU(2)_E bundle, which constitutes the (Euclidean) spacetime symmetry. One may twist using either SU(2)_+ or SU(2)_-, obtaining generically inequivalent TQFTs. The inequivalence of the two twists is not immediate from the supercharges: they transform as (2, 2, 2) under SU(2)_E × SU(2)_+ × SU(2)_-, so taking diagonal combinations of SU(2)_E with either factor of the R-symmetry group leads to (1, 2) ⊕ (3, 2). Nevertheless, the twisted fields transform differently in the two twists, as can be seen from the scalars. For instance, consider the scalars in a hypermultiplet q: after the two twists, they would transform as (2, 1) or (1, 2), respectively. On the other hand, because of the exchange of SU(2)_+ and SU(2)_-, the scalars in the twisted hypermultiplet transform in the opposite way.
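The branching (2, 2, 2) → (1, 2) ⊕ (3, 2) quoted above is just the SU(2) tensor product 2 ⊗ 2 = 1 ⊕ 3 applied to the diagonal subgroup. As a quick independent check (ours, not from the paper), one can verify the corresponding character identity numerically:

# Numerical check (ours) that 2 x 2 = 1 + 3 for SU(2), using characters
# chi_n(t) = sin(n t/2)/sin(t/2) for the n-dimensional irrep. This is the
# group theory behind (2,2,2) -> (1,2) + (3,2) in the text.
import numpy as np

def chi(n, t):
    return np.sin(n * t / 2) / np.sin(t / 2)

t = np.linspace(0.1, 3.0, 7)   # sample rotation angles
lhs = chi(2, t) ** 2           # character of 2 (x) 2
rhs = chi(1, t) + chi(3, t)    # character of 1 (+) 3
assert np.allclose(lhs, rhs)
print("2 x 2 = 1 + 3 confirmed at sampled angles")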
The same goes for vector multiplets and twisted vector multiplets: the scalars in a vector multiplet form a triplet under SU(2)_+ and a singlet under SU(2)_-, so they distinguish between the two twists, but the opposite is true of the scalars in the twisted vector multiplet.

In a three-dimensional N = 4 super-Yang-Mills (SYM) theory, with a vector multiplet, the two twists are inequivalent. The first twist may also be recovered by dimensionally reducing the four-dimensional N = 2 Donaldson-Witten twist. The resulting model is sometimes referred to as the super-BF or super-IG model, and the partition function reproduces the Casson invariant of the background three-manifold [15][16][17]; and conjecturally, via renormalization group flow, the Rozansky-Witten invariants [18,19].^4 The second twist, instead, is intrinsically three-dimensional (it is not known to arise from the reduction of any four-dimensional theory) and supposedly provides a mirror-symmetric description of the Casson invariant [12]. There exists a third topologically twisted three-dimensional SYM theory with two twisted scalar supercharges, which may be obtained by a partial twist of three-dimensional N = 8 SYM, or via dimensional reduction of the half-twist of four-dimensional N = 4 SYM. It is closely related to the Casson model, but differs from it by the matter content [20].

(^4 More precisely, the Casson invariant arises when the gauge group G ≅ SU(2), for three-manifolds M_3 with the same homology groups as S^3. It was originally defined in terms of the combinatorics of SU(2) representations of π_1(M_3). However, the Casson invariant naturally generalizes to the Lescop invariant, which is defined on any oriented three-manifold. Moreover, the TQFT Casson model suggests an extension of this invariant to any gauge group G.)

In three dimensions it is also possible to couple Chern-Simons theory to free hypermultiplets to obtain N = 4 supersymmetry [21], and twist the resulting theory [22,23]. As in the previous case, if there are only untwisted or twisted hypermultiplets in the theory the two twists are inequivalent, and usually referred to as an A-twist and B-twist, respectively. However, in a theory with both hypers and twisted hypers, the difference between the two twists amounts to the exchange between the untwisted and twisted matter. Therefore, one may consider a twist by a single factor in Spin(4)_R and exchange the "quality" of the hypermultiplets, obtaining theories, sometimes called AB-models, which have both types of hypermultiplets. For concreteness, after the twist, an AB-model contains the matter content summarized in (2.1): the bosonic fields are two scalars and a spinor, whilst the fermionic fields are a scalar, a one-form and two spinors. Chern-Simons-matter theories with N > 4 contain an equal number of untwisted and twisted hypermultiplets, so the symmetry between the A and B twists is automatically implemented. In the first case it is clear that a twist with SU(2)_E does not lead to any scalar supercharge, while for the second twist one reduces to the AB-model [23].

It is not completely clear what the observables of the topologically twisted Chern-Simons-matter theories compute. In [23] it was argued that the A-model is related via the novel Higgs mechanism [34] to the super-BF theory obtained by twisting N = 4 SYM, and thus computes the Casson invariant of the background three-manifold.
Similarly, the mathematical content of the observables of the topological models of [22] is also currently unclear.

The group-theoretic point of view on the topological twist considered above is not the only possible viewpoint. One may also describe the topological twist in the context of background rigid supersymmetry. For instance, in four dimensions the conditions for the background geometry to support a supersymmetric field theory have been studied by coupling to a non-dynamical supergravity [35], and via holography [36]. In the first case it has been shown that the topological twist arises as a particular case where the SU(2) connection corresponding to the gauged R-symmetry cancels part of the spin connection in the Killing spinor equation, thus allowing a scalar supercharge [37,38]. In the second case it has been shown that the geometric structure of the bulk supergravity solutions reduces at the boundary to a quaternionic Kähler structure, which appears on any orientable Riemannian four-manifold [4]. Three-dimensional field theories with N = 2 have been extensively studied in the context of rigid supersymmetry, both from holography [36,39] and by coupling to supergravity [40]. However, the same cannot be said for N = 4 theories. As already mentioned, we will find very concretely that the topological twist corresponds to identifying the boundary value of one SU(2) factor of the gauged R-symmetry with the spin connection. This allows us to construct a solution to the Killing spinor equation obtained from three-dimensional N = 4 conformal supergravity, in analogy with the standard approach.

2.2 The ABJM theory and its supergravity dual

The AdS/CFT correspondence has been especially influential in the context of three-dimensional field theories. In particular the AdS_4 × S^7 near-horizon geometry describing a stack of N M2-branes provided strong evidence for the existence of a strongly-coupled maximally supersymmetric conformal field theory with N^{3/2} degrees of freedom. After initial work by Bagger-Lambert-Gustavsson [26][27][28][29], the worldvolume theory of N M2-branes probing C^4/Z_k was eventually found ten years ago by Aharony-Bergman-Jafferis-Maldacena [24]. The ABJM theory in flat spacetime R^{1,2} is conjectured to be holographically dual to M-theory on AdS_4 × S^7/Z_k.

In order to study the gravity dual of the field theory defined on different manifolds M_3 in the large N limit, one may consider a consistent truncation of eleven-dimensional supergravity on S^7, or S^7/Z_k, to an effective four-dimensional bulk supergravity theory. Such a consistent truncation has been found in [14], where it is shown that any solution to the four-dimensional N = 4 supergravity theory of Das-Fischler-Roček [13] uplifts to an eleven-dimensional solution. In particular this supergravity theory has a Spin(4) ≅ SU(2) × SU(2) gauged R-symmetry, where the massless gauge fields arise, as usual in Kaluza-Klein reduction, from a corresponding isometry of the internal space. Specifically, the uplifting/reduction ansatz in [14] identifies the SU(2) × SU(2) isometry as acting in the 2 of each factor in C^4 ≡ C^2 × C^2, where the internal space S^7 is the unit sphere in C^4. This description makes it clear that one may also replace the internal space by S^7/Z_k, where the Z_k acts on the coordinates of C^4 via the diagonal action z_i → e^{2πi/k} z_i. This manifestly commutes with the SU(2) × SU(2) ⊂ SU(4) action on C^4 above.
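The commutativity used in this last step is elementary, since the Z_k generator acts as an overall phase while SU(2) × SU(2) acts block-diagonally on C^2 × C^2. A small numerical illustration (ours, with randomly generated SU(2) blocks):

# Check (ours) that the diagonal Z_k phase on C^4 commutes with the
# block-diagonal SU(2) x SU(2) action on C^2 x C^2.
import numpy as np

def random_su2(rng):
    a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
    n = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
    a, b = a / n, b / n
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

rng = np.random.default_rng(0)
k = 5
zk = np.exp(2j * np.pi / k) * np.eye(4)   # generator of Z_k
g = np.zeros((4, 4), dtype=complex)
g[:2, :2] = random_su2(rng)               # SU(2) on the first C^2
g[2:, 2:] = random_su2(rng)               # SU(2) on the second C^2
assert np.allclose(zk @ g, g @ zk)
print("Z_k phase commutes with SU(2) x SU(2)")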
There is another notable geometric symmetry, namely the Z_2 that acts by exchanging the two copies of C^2 in C^4, and thus exchanges the two SU(2) isometries. This symmetry is then inherited by the four-dimensional N = 4 gauged supergravity theory. According to the holographic dictionary, symmetries of the eleven-dimensional solution correspond to symmetries of the field theory. In particular the SU(2) × SU(2) isometry of the internal space, which becomes a Spin(4)_R gauged R-symmetry of the consistently truncated four-dimensional theory, corresponds to the Spin(4)_R R-symmetry of the field theory dual. The Z_2 that acts as an outer automorphism, exchanging the group factors in Spin(4)_R ⊂ Spin(6)_R, is indeed a symmetry of the N = 6 ABJM theory, since the latter has an equal number of untwisted and twisted hypermultiplets, in N = 4 language, and therefore its matter content is symmetric under the exchange of SU(2)_+ and SU(2)_- [41].

In the rest of the paper we will work entirely within the Das-Fischler-Roček four-dimensional N = 4 gauged supergravity theory. Any solution to this theory, for a bulk asymptotically locally hyperbolic four-manifold Y_4, automatically uplifts on S^7/Z_k to give a gravity dual to the ABJM theory defined on the conformal boundary M_3 = ∂Y_4. In particular we note that the effective four-dimensional Newton constant is given by (2.4), and scales as N^{3/2}.

3 Holographic supergravity theory

We begin in section 3.1 by defining a real Euclidean section of N = 4 gauged supergravity in four dimensions and determine the fermionic supersymmetry transformations. A Fefferman-Graham expansion of asymptotically locally hyperbolic solutions to this theory is constructed in section 3.2, for arbitrary conformal boundary three-manifold (M_3, g). Using this, in section 3.3 we holographically renormalize the action.

3.1 Euclidean N = 4 gauged supergravity

As outlined so far, holographic duals to three-dimensional SCFTs with a Spin(4) = SU(2)_+ × SU(2)_- R-symmetry should be solutions of a four-dimensional N = 4 SU(2) × SU(2) gauged supergravity. As discussed in the previous subsection, the Das-Fischler-Roček [13] theory has a supersymmetric AdS_4 vacuum and was shown in [14] to be a consistent truncation of eleven-dimensional supergravity on S^7/Z_k.

In Lorentzian signature the bosonic sector of this N = 4 supergravity theory comprises the metric G_μν, two real scalars φ, ϕ which together parametrize an SL(2, R) coset, and two triplets of SU(2) gauge fields A^I_μ, Ã^I_μ (I = 1, 2, 3). The associated field strengths are the usual non-abelian curvatures of A^I and Ã^I, and we have taken equal gauge couplings g for each of the SU(2) factors in the non-simple gauge group. It is convenient to introduce the scalar field X ≡ e^{φ/2} and define X̃ ≡ X^{-1} q, where q^2 ≡ 1 + ϕ^2 X^4. The bosonic action and equations of motion in Lorentzian signature appear in [14]. However, as we are interested in holographic duals to TQFTs defined on Riemannian three-manifolds, we require a Euclidean signature version of this theory. After a Wick rotation the action takes the form given in (3.2), where R = R(G) denotes the Ricci scalar of the metric G_μν, and * is the Hodge duality operator acting on forms. The equations of motion which follow from this action are (3.3)-(3.7), where we denote (F^I)^2 ≡ Σ_I F^I_μν F^{Iμν}, and the Bianchi identities define the SU(2) covariant derivatives. In general, equations (3.3)-(3.7) are complex, and solutions will likewise be complex. However, note that taking the axion ϕ to be purely imaginary effectively removes all factors of i.
Note also that the action and equations of motion are invariant under the Z_2 symmetry that reverses the signs of g and the two SU(2) gauge fields. There is a second Z_2 symmetry, which corresponds to the field theory outer automorphism exchanging the group factors in Spin(4)_R ≅ SU(2)_+ × SU(2)_-. This second Z_2 symmetry acts on the supergravity fields as X → X̃, ϕX^2 → −ϕX^2, A^I → Ã^I and Ã^I → A^I. Whilst not manifest in the action and equations of motion, it can be made so upon rewriting the scalar kinetic terms in (3.2) as 2 X X̃ dX ∧ *dX̃ − ½ d(ϕX^2) ∧ *d(ϕX^2).

In the Lorentzian theory the fermionic sector contains four gravitini, ψ^a_μ, and four dilatini, χ^a, which together with the spinor parameters ε^a all transform in the fundamental 4 representation of the Spin(4) global R-symmetry group, which we label by a = 1, ..., 4. The supersymmetry transformations are not given in [14], and the form of the action is different to that appearing in the original literature [13]; in particular the parametrization of the scalars and their coupling to the gauge fields is different. We cannot, therefore, simply take the supersymmetry transformations given in [13]. Of course, the two actions represent the same theory, but presumably in different symplectic duality frames, and possibly with different gauge fixed SL(2, R) scalar coset representatives. Instead of translating between the different presentations in Lorentzian signature and then Wick rotating to the Euclidean, we have instead derived the conditions for preserving supersymmetry by a different method. We started with a general ansatz for the gravitino and dilatino variations and then acted on the dilatino with the Dirac operator, adding additional field dependent multiples of the dilatino variation in order to recover a subset of the bosonic equations of motion (3.3)-(3.7). This essentially shows that the dilatino field equation (in a bosonic background) maps to some of the bosonic field equations. Computing the integrability condition on the spinor parameter, which can be rephrased in terms of the free Rarita-Schwinger equation for the gravitino, and adding further dilatino variations recovers the remaining bosonic equations of motion. Hence the fermionic field equations map to bosonic ones, i.e. the theory is supersymmetric. At the end of this analysis we find the transformations (3.10), (3.11). Here the gauge covariant derivative acting on the supersymmetry parameter involves both SU(2) gauge fields, and η^I_{ab}, η̄^I_{ab} are respectively the self-dual/anti-self-dual 't Hooft symbols. In addition, Γ_μ, μ = 1, ..., 4, are generators of the Euclidean spacetime Clifford algebra, satisfying {Γ_μ, Γ_ν} = 2 G_μν, and we define Γ_5 ≡ −Γ_1234. Note that the Z_2 symmetry that reverses the signs of g and the two SU(2) gauge fields is also a symmetry of these supersymmetry equations, provided one combines it with Γ_μ → −Γ_μ.

For completeness, we note that the transformations satisfy the integrability conditions (3.13) and (3.14). In deriving these conditions we have not needed to specify the type of spinor we are using. Later, in section 4, we will deal with a truncation of this theory in which one triplet of gauge fields is set to zero and the spinors are taken to be symplectic-Majorana.

3.2 Fefferman-Graham expansion

In this section we determine the Fefferman-Graham expansion [42] of asymptotically locally hyperbolic solutions to this Euclidean supergravity theory. This is the general solution to the bosonic equations of motion (3.3)-(3.7), expressed as a perturbative expansion in a radial coordinate near the conformal boundary.
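Before turning to the full coupled system, the logic of such an expansion can be seen in a toy example (ours): a single scalar with m^2 = −2 on a fixed AdS_4 background, for which solving order by order in z leaves exactly two undetermined coefficients, the analogues of the boundary source X_1 and the subleading coefficient X_2 below.

# Toy Fefferman-Graham expansion (ours): a scalar with m^2 = -2 on a
# fixed AdS_4 background ds^2 = (dz^2 + dx^2)/z^2. For z-dependent
# profiles the equation of motion is  z^2 f'' - 2 z f' + 2 f = 0.
# Solving order by order in z kills all coefficients except those of
# z and z^2, the analogues of the boundary source and the VEV.
import sympy as sp

z = sp.symbols('z', positive=True)
a = sp.symbols('a1:7')  # candidate coefficients a1..a6
f = sum(a[n - 1] * z**n for n in range(1, 7))

eom = sp.expand(z**2 * sp.diff(f, z, 2) - 2 * z * sp.diff(f, z) + 2 * f)
for n in range(1, 7):
    print(f"order z^{n}: {eom.coeff(z, n)} = 0")
# Output: (n-1)*(n-2)*a_n = 0, so only a1 (source) and a2 (VEV) survive.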
We take the form of the metric to be [42]

G_μν dx^μ dx^ν = (1/z^2) (dz^2 + g_ij(z, x) dx^i dx^j) , (3.15)

where the AdS radius ℓ = 1, and in turn we have the expansion (3.16) of g(z, x) in powers of z. Here g_0 ij = g_ij is the boundary metric induced on the conformal boundary M_3 at z = 0. It is convenient to introduce the inner product ⟨α, β⟩ between two p-forms α, β via α ∧ *β = ⟨α, β⟩ vol, where vol denotes the volume form, with associated Hodge duality operator *. The volume form for the four-dimensional bulk metric (3.15), and hence the determinant, may then be expanded in a series in z around that for g_0, where we denote t^(n) ≡ Tr[(g_0)^{-1} g_n] and indices are always raised with g_0. The remaining bosonic fields are likewise expanded in power series in z, and we have chosen a gauge in which all dz terms in the gauge field expansions are set to zero.

We now substitute the above expansions into the equations of motion (3.3)-(3.7) and solve them order by order in the radial coordinate z in terms of the boundary data g_0 = g, X_1, ϕ_1, A^I and Ã^I. For the Einstein equation (3.7) we will need the Ricci tensor of the metric (3.15), in which ∇ is the covariant derivative for g. Examining first the axion equation (3.4) at the first two orders gives conditions which can be solved by setting g = ±1/√2. These equations fix the gauge coupling in terms of the AdS_4 length scale, which we have set to unity; at even higher order we find further relations. Moving on to the dilaton equation (3.3), we find conditions which are again solved by g = ±1/√2, together with relations for the subleading scalar coefficients. Next the A^I gauge field equation (3.5) yields conditions involving the curvature of the boundary gauge field; notice that a^I_1, and hence a^I_2, is partially undetermined. Similarly, the other gauge field equation (3.6) gives analogous conditions, with F̃^I ≡ dÃ^I + ½ g ε_IJK Ã^J ∧ Ã^K. The non-trivial information from the ij component of the Einstein equation (3.7), using (3.25), is an expression for g_2 ij which is a matter-modified version of the boundary Schouten tensor. From this expression we immediately deduce the trace of g_2 ij. The zz component of the Einstein equation in (3.7), together with (3.24), determines the trace of the highest order component in the expansion of the bulk metric.

3.3 Holographic renormalization

Having solved the bulk equations of motion to the relevant order, we are now in a position to holographically renormalize the Euclidean N = 4 gauged supergravity theory. The bulk action (3.2) is divergent for an asymptotically locally hyperbolic solution, but can be rendered finite by the addition of appropriate local counterterms. We begin by taking the trace of the Einstein equation (3.7). Substituting the result into the Euclidean action (3.2) with g = ±1/√2, we arrive at the bulk on-shell action (3.36), where Y_4 is the bulk four-manifold, with boundary ∂Y_4 = M_3. In order to obtain the equations of motion (3.3)-(3.7) from the original bulk action (3.2) on a manifold with boundary, one has to add the Gibbons-Hawking-York term (3.37). Here more precisely one cuts Y_4 off at some finite radial distance, or equivalently non-zero z > 0, and (M_3, h) is the resulting three-manifold boundary, with trace of the second fundamental form K. Recall from (3.15) that h_ij = g_ij/z^2. The combined action I_o-s + I_GHY suffers from divergences as the conformal boundary is approached. To remove these divergences we use the standard method of holographic renormalization [43][44][45]. Namely, we introduce a small cut-off z = δ > 0, and expand all fields via the Fefferman-Graham expansion of section 3.2 to identify the divergences. These may be cancelled by adding local boundary counterterms.
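The mechanism can again be seen in miniature (ours) using the toy scalar introduced above: the boundary term of the on-shell action diverges as 1/δ, and a local counterterm quadratic in the field removes the divergence, leaving a finite piece proportional to source times VEV. All normalizations here are those of the toy model, not of (3.38).

# Miniature of holographic renormalization (ours): for the toy scalar,
# the on-shell boundary term diverges as 1/z, and is cancelled by the
# local counterterm (1/2) sqrt(h) phi^2, leaving a finite remainder.
import sympy as sp

z, a1, a2 = sp.symbols('z a1 a2', positive=True)
f = a1 * z + a2 * z**2                                 # on-shell profile

bdy = sp.Rational(1, 2) * z**-2 * f * sp.diff(f, z)    # sqrt(G) G^zz f f'/2
ct = sp.Rational(1, 2) * z**-3 * f**2                  # (1/2) sqrt(h) f^2

print(sp.expand(bdy - ct))   # -> a1*a2/2 + a2**2*z/2: finite as z -> 0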
We find the counterterm action (3.38), which, as is standard, we have written covariantly in terms of the induced metric h_ij on M_3 = ∂Y_4. The total renormalized action is then the sum I_o-s + I_GHY + I_ct in the limit δ → 0, which by construction is finite.

The choice of counterterms (3.38) defines a particular renormalization scheme. For this theory there are other local, gauge invariant counterterms that one can construct from the boundary fields, that have non-zero (and finite) limits as δ → 0. It is straightforward to check that there are no such finite counterterms constructed without using the scalar fields; but including the latter we may write down finite counterterms proportional to the boundary integrals of ϕ^3, (X − 1)^3, ϕ R(h), etc. There are also local but non-gauge invariant terms that one might consider. For example, boundary Chern-Simons terms for the SU(2) gauge fields, and the boundary gravitational Chern-Simons term. However, such terms would change the gauge invariance of the theory, and we shall hence not consider them further.^7 In principle we should use a supersymmetric holographic renormalization scheme, but in the absence of a prescription for this we shall use the minimal scheme with counterterms (3.38) in the remainder of the paper, cf. the discussion in [46][47][48][49]. In any case, for the topological twist boundary condition the boundary values ϕ_1, X_1 of ϕ and X will be zero, and the above-mentioned finite gauge invariant counterterms are all zero.

(^7 The topological twist will later identify one boundary SU(2) gauge field with the boundary spin connection of (M_3, g), so that these Chern-Simons terms are the same. Moreover, since any oriented three-manifold is parallelizable there is always a globally defined frame. Choosing such a frame then allows one to interpret the gravitational Chern-Simons term as a global three-form on M_3. However, its integral depends on the choice of framing.)

Given the renormalized action we may compute the vacuum expectation values (VEVs) given in (3.40). Here, as usual in AdS/CFT, the boundary fields g_ij, X_1, ϕ_1, A^I_i and Ã^I_i act as sources for operators, and the expressions in (3.40) compute the VEVs of these operators. Using the above holographic renormalization we may write (3.40) as limits of quantities defined on the cut-off hypersurface. Here K_ij is the second fundamental form of the cut-off hypersurface (M_3, h_ij), and *_h denotes the Hodge duality operator for the metric h_ij. A computation then gives finite expressions for the VEVs, including (3.42) and (3.45). Notice that each of these expressions contains terms that are not determined, in terms of boundary data, by the Fefferman-Graham expansion of the bosonic equations of motion: in particular the g_3 ij term in the stress-energy tensor T_ij, the scalars X_2, ϕ_2 that determine respectively Ξ, Σ, and a^I_1, ã^I_1 appearing in the SU(2) R-symmetry currents.

As a quick check/application of these formulae, consider a boundary Weyl transformation δσ under which δg_ij = 2 g_ij δσ, the scalars X_1, ϕ_1 have Weyl weight 1, so that δX_1 = X_1 δσ, δϕ_1 = ϕ_1 δσ, and the gauge fields have Weyl weight 0. Then it is a simple exercise to show that the variation of the renormalized action vanishes, which is consistent with the fact that there is no conformal anomaly in three-dimensional SCFTs.

4 Supersymmetric solutions

In this section we study supersymmetric solutions to the Euclidean N = 4 supergravity theory.
We begin in section 4.1 by deriving the Killing spinor equations on the conformal boundary from the bulk supersymmetry equations, and then compare them to the component form equations of off-shell three-dimensional N = 4 conformal supergravity. In section 4.2 we describe how the topological twist arises as a special solution to these Killing spinor equations, which exists on any Riemannian three-manifold (M_3, g). Finally, in section 4.3 we expand solutions to the bulk spinor equations in a Fefferman-Graham-like expansion.

4.1 Boundary spinor equations

We begin by introducing the charge conjugation matrix C for the Euclidean spacetime Clifford algebra. By definition Γ_μ* = C^{-1} Γ_μ C, and one may choose Hermitian generators Γ_μ† = Γ_μ together with the conditions C = C* = −C^T, C^2 = −1. We may then define spinors in Euclidean signature to satisfy the symplectic-Majorana condition (4.1), with Ω = σ_3 ⊗ iσ_2. It is straightforward to check that when Ã^I = 0, and provided the axion ϕ is purely imaginary with all other bosonic fields being real, the supersymmetry variations (3.10), (3.11) are compatible with this symplectic-Majorana condition. We will be interested in solutions that satisfy these reality conditions, and henceforth work in the truncation of the bulk supergravity theory in which the triplet of SU(2) gauge fields Ã^I_μ is set to zero. For completeness we record that the truncated bulk supersymmetry conditions are (4.2), (4.3).

We next expand the bulk Killing spinor equations (4.2), (4.3) to leading order near the conformal boundary at z = 0. We will consequently need the Fefferman-Graham expansion of an orthonormal frame for the metric (3.15), (3.16), together with the associated spin connection. A convenient choice of frame for the metric (3.15) is E^ẑ = dz/z, E^î = e^î/z, where e^î is a frame for the z-dependent metric g. The latter then has the expansion (3.16), but for the present subsection we shall only need that e^î agrees at leading order with a frame for the boundary metric g_0 = g. The non-zero components of the spin connection at this order are correspondingly expressed in terms of the boundary spin connection ω^{jk}_i. We take as the generators of the Clifford algebra an explicit representation built from the usual Pauli matrices σ_ī.

The bulk Killing spinor is then expanded as in (4.8), with the upper/lower signs corresponding to taking g = ±1/√2. We can then satisfy the leading order equation by taking ε^a to have a definite chirality under Γ_ẑ and ξ^a to have the opposite chirality. Recall that there is a Z_2 symmetry of the action, equations of motion, and supersymmetry equations, that sends g → −g, A^I → −A^I, Γ_μ → −Γ_μ. Using this, without loss of generality we set g = −1/√2 from now on, so that ε^a has positive Γ_ẑ chirality and ξ^a negative chirality, and we write them as in (4.12). The leading order term in the i-component of the gravitino equation is then seen to be identically satisfied. The next order gives the boundary Killing spinor equation (KSE), (4.13), where the covariant derivative is with respect to the Levi-Civita spin connection of the boundary metric g_0 ij = g_ij, and σ_i = σ_ī e^ī_i, so that {σ_i, σ_j} = 2 g_ij. Note that after redefining the conformal spinor parameter as ξ̃^a_R = ξ^a_R − ¼ ϕ_1 ε^a_L, the boundary KSE becomes (4.14). This is the equation which results from setting to zero the gravitino supersymmetry variation of off-shell 3d N = 4 conformal supergravity [50]. Turning to the bulk dilatino equation (4.3), the leading order term is equivalent to the chirality property of ε^a.
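The stated properties of C and the Clifford generators, together with the (anti-)self-duality of the 't Hooft symbols used in the supersymmetry variations, can be verified numerically. The sketch below is ours: the particular tensor-product representation of the Γ_μ and the 't Hooft conventions (η^I_{ij} = ε_{Iij}, η^I_{i4} = δ_{Ii}, with the sign of the i4 entries flipped for η̄^I) are assumptions, since the paper's explicit choices are not reproduced above.

import numpy as np
from itertools import permutations

# One convenient Euclidean representation: Gamma_i = s1 (x) sigma_i,
# Gamma_4 = s2 (x) 1, so that Gamma_5 = -Gamma_1234 = s3 (x) 1.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)
Gamma = [np.kron(s[0], s[i]) for i in range(3)] + [np.kron(s[1], I2)]
G5 = -Gamma[0] @ Gamma[1] @ Gamma[2] @ Gamma[3]
C = np.kron(s[2], 1j * s[1])          # candidate charge conjugation matrix

for m in range(4):
    for n in range(4):
        acomm = Gamma[m] @ Gamma[n] + Gamma[n] @ Gamma[m]
        assert np.allclose(acomm, 2 * (m == n) * np.eye(4))
assert np.allclose(G5, np.kron(s[2], I2))
assert np.allclose(C, C.conj()) and np.allclose(C.T, -C)
assert np.allclose(C @ C, -np.eye(4))
for G in Gamma:
    assert np.allclose(G.conj(), np.linalg.inv(C) @ G @ C)

# 't Hooft symbols: eta (self-dual) and eta-bar (anti-self-dual)
eps3 = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    eps3[p] = np.linalg.det(np.eye(3)[list(p)])   # permutation signature
eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps4[p] = np.linalg.det(np.eye(4)[list(p)])

def thooft(sign):
    eta = np.zeros((3, 4, 4))
    eta[:, :3, :3] = eps3
    for I in range(3):
        eta[I, I, 3] = sign
        eta[I, 3, I] = -sign
    return eta

for sign in (+1, -1):
    eta = thooft(sign)
    dual = 0.5 * np.einsum('abcd,Icd->Iab', eps4, eta)
    assert np.allclose(dual, sign * eta)

print("Clifford, charge conjugation and 't Hooft conventions verified")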
At the next order we obtain two conditions, (4.15) and (4.16), corresponding to the left and right-handed components. After the redefinition of the conformal spinor parameter, and Hodge dualizing one term, these read (4.17) and (4.18). These equations are not equivalent, and matching them to the single algebraic condition arising from setting a three-dimensional dilatino variation to zero is not therefore entirely straightforward. The Weyl multiplet of off-shell N = 4 conformal supergravity contains two auxiliary scalar fields S_1, S_2 of Weyl weight 1 and 2 respectively, and generically six gauge fields. The vanishing of the dilatino supersymmetry transformation [50] when one triplet of gauge fields is turned off is, schematically, of the form (4.19). Clearly (4.17) is of this form once we identify S_1 ∼ ϕ_1, S_2 ∼ ½ ϕ_1^2 + X_1^2 − 2X_2. However, (4.18) does not match so neatly, as *a^I_1 is not a field strength. Moreover, our spinor expansion should recover a single equation, and so it is perhaps some linear combination of (4.17) and (4.18) that reproduces (4.19). In any case, it is not clear that the leading order dilatino equation should match this particular off-shell formulation of N = 4 conformal supergravity.

4.2 Topological twist

Recall the boundary Killing spinor equation (4.13). To solve this equation with a topological twist, we begin by setting the boundary scalar ϕ_1 and the conformal spinor parameter ξ^a_R to zero. We then identify the boundary SU(2) gauge field with the spin connection, as in (4.21). The constant spinor which solves the Killing spinor equation is then of the form (4.22), where w is any complex number; here the matrices (σ_b) entering the solution are the extended Pauli matrices defined in (4.23). It is useful to note that the 't Hooft symbol action on ε^a_L may be exchanged for the Pauli matrix action, (4.24).

We have solved the leading order KSE. Turning to the algebraic spinor equations, we note that, in general, the conformal spinor parameter ξ^a_R can be solved for by taking the σ^i trace of the KSE (4.13). Substituting this generic expression for ξ^a_R into (4.15) and rescaling by √2 leads to a condition involving R = R(g), the boundary Ricci scalar. Specializing to the field configuration which solves the boundary KSE above, this simplifies and thereby fixes X_2. The other algebraic relation (4.16) now reads (4.28). Here recall that a^I_1 is (proportional to) the VEV of the remaining SU(2) R-symmetry current. One can use (4.24) to swap the 't Hooft symbol for a Pauli matrix, together with the usual relation

σ_ī σ_j̄ = δ_ij + i ε_ijk σ_k̄ . (4.29)

The resulting equation takes the algebraic form (4.30), where (σ_b) are the extended Pauli matrices (4.23), and the coefficients c_b are real. In particular here we use that ϕ_2 is purely imaginary. Using the solution (4.22), one can easily check that as long as w ≠ 0, equation (4.30) implies that c_b = 0 for all b = 1, 2, 3, 4. We thus conclude the equations fixing X_2, ϕ_2 and a^I_1. Note here the trace over frame indices and SU(2)_R indices in the expression for ϕ_2: this makes sense globally, since the topological twist identifies the gauge bundle with the spin bundle. Having identified indices we may view (a^I_1)_ī as a two-tensor.

4.3 Supersymmetric expansion

In this section we continue to expand the bulk spinor equations to higher order in z. From this we extract further information about some of the fields which are not fixed, in terms of boundary data, by the bosonic equations of motion. We will continue to use the boundary conditions appropriate to the topological twist. The frame, spin connection and spinor expansions beyond the leading order given in section 4.1 will be needed, so we first give details of these.
The frame is expanded to the required order, where in particular e^î is a frame for the boundary metric, and we have used a local SO(3) rotation to gauge fix the order z^2 term. The additional spin connection components we will need are recorded in (4.33). The bulk spinor then has an analogous expansion, in which the ε^a are constant with positive chirality under Γ_ẑ.

The remaining orders of the bulk dilatino equation, together with the remaining gravitino expansions, yield a tower of conditions, among them (4.37), (4.38), (4.42) and (4.43). From the topological twist condition (4.21), the boundary gauge field strength is the curvature of the boundary spin connection. Substituting this and the expressions for X_2, a^I_1 and ϕ_2 into (4.37), (4.38) allows us to identify several subleading coefficients, and we also find the relation (4.47). We will not solve (4.42), as knowledge of a^I_2 or ω^(2) is not relevant for our purposes. Turning now to (4.43), using previous results we can re-express this particular equation in a form whose real part yields the remaining term in the Fefferman-Graham expansion of the bulk metric, g_3 ij, recorded in (4.49).

5 Metric independence

Our aim in this short section is to show that, for any supersymmetric asymptotically locally hyperbolic solution to the Euclidean N = 4 supergravity theory, with the topologically twisted boundary conditions on an arbitrary Riemannian three-manifold (M_3, g), the variation with respect to the arbitrary boundary metric of the holographically renormalized action is identically zero.

An arbitrary deformation of the renormalized action can be written as in (5.1), in which the one-point functions pair with the variations of the corresponding sources, including the independently specifiable boundary field X_1 which, recall, has Weyl weight 1. In order for δX_1 to be relatable to δg_ij, X_1 must be a scalar function built from the boundary curvature tensors R_ijkl, R_ij and R. However, from these tensors we cannot construct a Weyl weight 1 object. Consequently we choose to set X_1 = 0 as part of the topological twist boundary conditions. To evaluate δA^I_i we require the variation of the boundary spin connection in terms of the boundary metric. Using this, the variation of the action for the topological twist boundary conditions reduces to an integral over the boundary, where we have introduced vol_3 ≡ √(det g) d^3x. Dropping the total derivative, which is zero for the closed three-manifolds we are considering, and inserting the expressions for the stress-energy tensor and SU(2) current from (3.42) and (3.45), gives a variation governed by an effective stress-energy tensor T_ij. Note that because we have identified spacetime and R-symmetry indices, the covariant derivative in T_ij acts on both the I and i indices of (a^I_1)_i. Inserting the expression for g_3 ij from (4.49) when X_1 = 0, and expanding the field strengths, gives an explicit expression for T_ij. Here covariant derivatives of (a^I_1)_i in the first line are understood to act with respect to the index outside the bracket only, in contrast to the action on the second line. By carefully expanding, using the definition of the spin connection as the connection of the frame bundle, and recalling from section 4.2 that when X_1 = 0 the tensor (a^I_1)_i is symmetric in the I and i indices, we find delicate cancellations and ultimately that T_ij = 0. Notice this is true for an arbitrary background closed three-manifold (M_3, g), and that while the Fefferman-Graham expansion does not determine (a^I_1)_i, nevertheless the expression for T_ij is identically zero.

We close this section by commenting on more precisely when the derivation in this section holds, and in particular when the formula (5.1) holds. The latter computes the variation δS of the on-shell action.
A variation of the boundary fields induces a corresponding variation of the bulk fields. Since the background solution that we are varying about solves the bulk equations of motion, crucially the bulk contribution to the resulting variation of the on-shell action is zero (by definition, this bulk integrand multiplies the bulk equations of motion). Thus δS is necessarily a boundary term, and for smooth saddle point solutions dual to the vacuum, one expects the only boundary to be the conformal boundary ∂Y_4 = M_3. Equation (5.1) is the resulting boundary expression. However, this computation would also hold if the bulk solution is singular, or has internal boundaries, provided these do not contribute a corresponding surface term in the interior, in addition to (5.1). The internal boundary conditions for fields are clearly then relevant, but if one is going to allow internal singularities/boundaries of this type in a putative saddle point, the absence of these additional surface terms is a fairly clear constraint.

6 Geometric reformulation

In this section we first reformulate the bulk supersymmetry conditions (4.2), (4.3) in terms of a local identity structure. We then use this structure in section 6.2 to determine the renormalized on-shell action for any smooth filling with topological twist boundary conditions.

6.1 Twisted identity structure

Recall that the bulk spinor is originally a quadruplet of Dirac spinors, and we halved the number of degrees of freedom by requiring that it solve the symplectic-Majorana condition (4.1). Therefore, the quadruplet of spinors takes the form (6.1), where ε_1,2 are Dirac spinors on the four-manifold Y_4 and the charge conjugate is ε^c = C ε*. Notice that a Weyl condition imposed with Γ_5 acting on the spinor indices is not compatible with the topological twist. One sees this from the expressions (4.8) and (4.12): the leading order term in the expansion of the bulk spinor is chiral if and only if it is zero. However, we may instead act with Γ_5 on the R-symmetry indices of the spinor and impose the condition (6.2). This condition is compatible with the gravitino and dilatino equations (4.2) and (4.3), since Γ_5 commutes with the self-dual 't Hooft symbols. Projecting onto the subspaces with positive or negative "internal chirality" in (6.2) further reduces the bulk spinor to a single Dirac spinor ζ.

Using the single Dirac spinor ζ, we may define the (local) differential forms S ≡ ζ̄ζ, P ≡ ζ̄Γ_5ζ, together with one-form bilinears K and V^I, where a bar denotes Hermitian conjugation. Globally, the full bulk spinor is a section of Spin(Y_4) ⊗ E, where E is a real rank 4 vector bundle associated to the principal SU(2)_R bundle. By considering the change between local trivializations of the spinor under the SU(2)_R ⊂ Spin(4), one can check that S and P are global smooth functions. Moreover, K is a global one-form, while the V^I take values in the rank 3 vector bundle V associated to SO(3)_R = SU(2)_R/Z_2. In order to have a globally well-defined bulk spinor ε^a, we have to lift the SO(3)_R bundle acting on V to an SU(2)_R bundle acting on E. Moreover, we should define the spinor bundle in the first factor, thus lifting the orthonormal frame bundle of the tangent bundle to a Spin(4) frame bundle. In both cases, the obstruction to the lifting is the second Stiefel-Whitney class of the real vector bundles, that is, w_2(V), w_2(Y_4) ∈ H^2(Y_4, Z_2). However, because the full bulk spinor is a section of Spin(Y_4) ⊗ E, we only need w_2(Y_4) = w_2(V) in order for the tensor product of the "virtual" bundles to be defined.
We say that the bulk spinor is a Spin^{SU(2)} spinor, as originally introduced in [51]. This is a non-Abelian generalization of the perhaps more familiar Spin^c spinors that are required, for instance, in Seiberg-Witten theory.

Geometrically, a single Dirac spinor in four dimensions defines a local identity structure on the four-manifold, or equivalently a local orthonormal frame. In order to construct it, we split the bulk spinor into its components with positive and negative chirality under Γ_5, ζ = ζ_+ + ζ_-, and define the associated bilinears, where S_± ≡ ζ̄_± ζ_±. An orthonormal frame {E^a} can then be defined from these bilinears, and we choose the orientation induced by the volume form E^{4123}. We also define the function θ through P = S cos θ. We may then re-express the local differential forms above in terms of the frame. This canonical frame degenerates at θ = 0, π, where the spinor has positive/negative chirality, and also when S = 0, where the spinor vanishes. The subset of Y_4 with these points excluded will be denoted Y_4^(0).

Starting with the bulk Killing spinor equations (4.2) and (4.3), we may find a set of Killing spinor equations for ζ. Choosing negative internal chirality in (6.2), they read (6.10), (6.11). From these equations, one can use standard spinor bilinear manipulations to obtain the differential conditions (6.12)-(6.16) for the frame and the fields. Here the covariant derivative acting on E^I is the SU(2)-covariant derivative built from the gauge field. We may in particular combine these equations to obtain an expression (6.17) for ϕ, where α ∈ iR, and we have used that everything in this last equation is globally defined to integrate, assuming that Y_4 is path-connected.

The system of equations (6.12)-(6.16) is in fact necessary and sufficient for a supersymmetric solution to the bulk equations of motion. There are several steps involved in showing this. Firstly, we note that for a Dirac spinor ζ the set {ζ, ζ^c, Γ_μζ, Γ_μζ^c} spans the spinor space. Thus contracting the dilatino equation (6.11) with the Hermitian conjugate of each element of this set gives a collection of equations which are equivalent to the dilatino equation. In turn, these equations can be shown to be equivalent to (6.15) and (6.16). On the other hand, since we have a (local) identity structure, the intrinsic torsion is determined by the exterior derivatives in (6.12)-(6.14). It follows that (6.12)-(6.16) are equivalent to the Killing spinor equations. One next considers the truncated integrability conditions derived from (3.13) and (3.14). From these it is straightforward to show that the Killing spinor equations imply the equations of motion, while the Bianchi identity for F^I has to be imposed additionally. In particular the proof of this uses the fact that the bulk spinor ζ is Dirac. The upshot is that the complete system of equations to solve is given by the first order differential system (6.12)-(6.16).

It is interesting, especially in light of the computation of the on-shell action in the next section, to consider the expansion of the bilinear equations near the boundary. Using the Fefferman-Graham coordinate z, the bulk spinor ζ has the expansion (6.18), where χ is a constant 2-component spinor with c ∈ R (compare with (4.22) with c = −w). Without loss of generality, we may set c = 1 in the following, and the norm of the spinor then takes a simple form. We also find the near-boundary expansions of the remaining bilinears and fields. The vanishing of ϕ_1 allows us to fix the constant α in (6.17): expanding the latter equation leads to ϕ_1 = α/2, so under the assumption of the topological twist, α = 0.
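Two of the linear-algebra facts used above can be checked numerically in the explicit representation introduced earlier (the checks are ours, and the precise normalization of K in the paper may differ): the spanning property of {ζ, ζ^c, Γ_μζ, Γ_μζ^c}, and the degeneration of the frame for chiral spinors, for which the one-form bilinear vanishes while S does not.

# Checks (ours) in the representation used earlier:
# (i) {zeta, zeta^c, Gamma_mu zeta, Gamma_mu zeta^c} spans C^4;
# (ii) for a chiral spinor, K_mu = zeta^dag Gamma_mu zeta = 0 while
#      S = zeta^dag zeta != 0, matching the degeneration at theta = 0, pi.
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)
Gamma = [np.kron(s[0], s[i]) for i in range(3)] + [np.kron(s[1], I2)]
C = np.kron(s[2], 1j * s[1])

rng = np.random.default_rng(2)
zeta = rng.normal(size=4) + 1j * rng.normal(size=4)    # generic spinor
zc = C @ zeta.conj()                                   # charge conjugate
span = np.stack([zeta, zc] + [G @ zeta for G in Gamma]
                + [G @ zc for G in Gamma])
assert np.linalg.matrix_rank(span) == 4                # spans C^4

chiral = np.zeros(4, dtype=complex)
chiral[:2] = rng.normal(size=2) + 1j * rng.normal(size=2)  # Gamma_5 = +1
K = np.array([np.vdot(chiral, G @ chiral) for G in Gamma])
S = np.vdot(chiral, chiral)
assert np.allclose(K, 0) and abs(S) > 0
print("span check and chiral degeneration check passed")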
In a neighbourhood of the conformal boundary, the bulk frame takes a simple form. Near the boundary, the leading order of the equations (6.12)-(6.16) is trivial apart from (6.14), which corresponds to the condition that the e^I satisfy the first Cartan structure equation

de^I + ω^I_J ∧ e^J = 0 . (6.23)

Here the spin connection ω^I_J arises from the topological twist boundary condition for the gauge field (4.21). In some sense (6.23) is a redundant equation, simply stating that the frame defined by supersymmetry is compatible with the boundary metric. As in the AdS_5/CFT_4 example, the bulk differential equations are tautological on the boundary, where they simply define a (twisted) frame for the three-manifold M_3. The analogous statement in AdS_5/CFT_4 is that the bulk differential system at the boundary defines a quaternionic Kähler structure on the supersymmetric background, which one can construct on any four-manifold [4].

6.2 On-shell action

Thanks to these results, we can now greatly simplify the expression for the on-shell action. We start with the expression (3.36) and set F̃^I = 0, obtaining (6.24). Then, using (3.3) and (3.4), we may exchange the gauge field contribution for an exact term, (6.25). Notice that, using the equations for the orthonormal frame and (6.17), we can write the relation (6.26), and this, using the expression (6.17) for ϕ, is exactly (modulo a numerical factor) the potential term in the on-shell action (6.25). Therefore, the on-shell action is exact: the integrand is dΥ for a three-form Υ containing a term proportional to X^{-1} *K together with two further forms involving only X and ϕ. The global arguments discussed above imply that the four-form in the action is globally well-defined on Y_4. In what follows we assume that the subset of Y_4 where the spinor becomes chiral or zero has measure zero.

As in section 3.3, we cut off the bulk Y_4 at some small radius z = δ > 0, so that ∂Y_4 = M_δ ≡ {z = δ} ≅ M_3. Using Stokes' theorem, we may then write the on-shell action as integrals over the conformal boundary M_3 ≅ M_δ, and over the boundaries T_ε of the small tubular neighbourhoods of radius ε > 0 surrounding the subsets Y_4 \ Y_4^(0) where the frame degenerates. Let us consider first the contribution from the conformal boundary: using the expansion of the spinor (6.18) and of the fields (6.21), it is easy to determine the behaviour of Υ near the conformal boundary. To this we should add the contributions from the Gibbons-Hawking-York term (3.37) and the counterterms (3.38), which take a simple form in a neighbourhood of the boundary. Once we take into account the change in sign of the on-shell terms, due to the orientation of the bulk compared to the orientation of the boundary, the contribution to the renormalized action from the conformal boundary is zero in the limit δ → 0.

Therefore, the renormalized gravitational action only receives contributions from the subsets where the frame degenerates, where the limit ε → 0 collapses the small neighbourhood around the degeneration locus. However, this gives zero for a smooth solution, that is, a supergravity solution with a smooth metric and smooth bosonic fields. Clearly the last two forms in Υ, which only involve X, ϕ, are well-defined if the bosonic fields are smooth. In particular since X = e^{φ/2}, this means that X > 0 (indeed, bounded below by a positive constant since Y_4 is compact). The last two terms in Υ therefore provide zero contribution when integrated over a subset of vanishing measure. The only non-trivial contribution could arise from X^{-1} *K. Consider first the subset where the spinor is chiral but non-vanishing.
While changing between local SU(2)_R gauge patches of definition for ε^a, ζ transforms into a linear combination of ζ and ζ^c; note, however, that in four dimensions Γ_5 ζ = ±ζ if and only if Γ_5 ζ^c = ±ζ^c. Therefore, spacetime chirality is a well-defined global concept for the Spin^{SU(2)} spinor. If the spinor is chiral but non-vanishing, one of S_± = 0 and the bilinears K and V^I vanish, so X^{-1} *K is zero there, and the integral is zero. Secondly, consider the subset where the spinor is vanishing. One might worry that K is not well-defined here, as S = 0. However, note that we may write the relation (6.33); using (6.12) we then in turn obtain a corresponding differential relation. We may thus use ρ > 0 as a radial coordinate near the locus where the spinor vanishes at ρ = 0, and more precisely define T_ε = {ρ = ε > 0}. It follows that X^{-1} *K is the product of a bounded function X^{-1} sin θ (as long as X > 0 is smooth), and the volume form E_4 ⌟ vol_4 induced on T_ε from the four-dimensional bulk metric. The integral hence vanishes in the limit ε → 0, where the volume of the tubular neighbourhood T_ε vanishes. We conclude that the renormalized action for any smooth supergravity solution is zero. In particular, we have made no assumptions at all here on the topology of M_3, or of its path-connected filling Y_4 with ∂Y_4 = M_3.

7 Revisiting topological AdS_5/CFT_4

Inspired by the evaluation of the on-shell action in the previous section, here we revisit the 5d/4d correspondence of [4]. After some brief background in section 7.1 recalling the work in [4], we show in section 7.2 that smooth supersymmetric five-dimensional bulk gravity fillings likewise have zero action. Note that this section is entirely independent of the rest of the paper, despite sharing considerable overlap in notation. We trust this will not cause confusion.

7.1 Background

In [4] we defined a holographic dual to the Donaldson-Witten topological twist of N = 2 gauge theories on a Riemannian four-manifold (M_4, g). The duals are described by a class of asymptotically locally hyperbolic solutions to (Euclidean) N = 4^+ gauged supergravity in five dimensions. Working in a truncation where a certain doublet of two-forms is set to zero, and with appropriate boundary conditions, we showed that the holographically renormalized on-shell action is independent of the boundary metric.

The action for the truncated Euclidean N = 4^+ gauged supergravity is given in (7.1). Here R = R(G) denotes the Ricci scalar of the five-dimensional metric G_μν, * is the Hodge duality operator acting on forms, and F = dA, F^I = dA^I − ½ ε^I_JK A^J ∧ A^K. The equations of motion which follow from this action are (7.2)-(7.5), with F^2 ≡ F_μν F^μν and (F^I)^2 ≡ Σ_{I=1}^{3} F^I_μν F^{Iμν}. Note that the one-form A here plays a similar role to the axion ϕ in the bulk four-dimensional supergravity of sections 3-6. In particular in [4] the field A was likewise taken to be purely imaginary, with all other bosonic fields real.
A Fefferman-Graham expansion of the bosonic fields, together with imposing boundary conditions appropriate to the 4d N = 2 Donaldson-Witten topological twist, leads to the boundary conditions and expansions needed in what follows.

7.2 On-shell action

In [4] we showed that the on-shell action could be rewritten using the Einstein equation as (7.11). However, by additionally using the scalar field equation (7.2) twice and (7.4) to rewrite the Chern-Simons term, we arrive at a simpler expression. Now with some simple manipulation of the differential system (7.9)-(7.10) we can show the identity (7.13), and immediately conclude that the on-shell action is (locally) exact, (7.14). In addition to A being a global one-form, with F a global two-form, we assume that X > 0 is a smooth global function on Y_5. Further, note that J^I ∧ J^I ∝ *K, and K is fixed by (7.9) in terms of X, A and S. Hence K is globally defined as long as the spinor norm S = ζ̄ζ ≠ 0. We should hence more precisely define Y_5^(0) to be the subset of Y_5 on which the spinor is non-vanishing and the gravity solution is smooth. In summary, the on-shell action is globally exact, and we may use Stokes' theorem to conclude (7.15). Here there are two types of boundary in ∂Y_5^(0): firstly ∂Y_5 ≅ M_4 is the UV conformal boundary, and as in the previous section there is also the boundary of a tubular neighbourhood T_ε around the locus where ζ = 0.

The above on-shell action must be supplemented by the standard Gibbons-Hawking-York term at the UV boundary, I_GHY, as in section 3.3. In addition, the divergences may be cancelled by adding local boundary counterterms. The divergences identified by expanding (7.11) and I_GHY are cancelled by adding a counterterm action I_ct, integrated over the UV boundary ∂Y_5 ≅ M_4 of Y_5. As the on-shell actions given by (7.11) and (7.15) are equivalent, I_ct must also cancel divergences arising from the latter when supplemented by the common Gibbons-Hawking-York term. The total renormalized action is then

S_ren = lim_{δ→0} (I_o-s + I_GHY + I_ct) , (7.17)

where δ is a cut-off for the radial coordinate z. In order to calculate the UV contribution to S_ren of the term (1/3) X^{-2} J^I ∧ J^I in I_o-s, we require the Fefferman-Graham-like expansion of the spinor ζ to one more order in z than given in [4]. Continuing the line of reasoning there (and in section 4.3) we eventually compute the required expansion.^10 The last line, seemingly, cannot be written in terms of the lowest order constant spinor χ, whose norm is one; however, it will not play a part. From this expansion and the definition of the bilinears in (7.8) we determine the restrictions of the relevant two-forms to the boundary at constant z = δ. On forming the exterior product there are several simplifications; in particular the antisymmetric indices of da_2 and Da^I_2 are traced over and do not contribute. This can also be shown by expanding the equation K ∧ J^I ∧ J^I = vol_5.

We are finally in a position to evaluate the UV contribution to the renormalized on-shell action (7.17). We find the result (7.20). At first sight the log δ term is problematic as it diverges. However, the topological condition ∫_{∂Y_5} (E + P) *_4 1 = 0 is required in order for A to be a global one-form, or equivalently to have a non-zero partition function for the boundary TQFT [4]. Moreover, the Ricci scalar is a globally defined function on ∂Y_5, and consequently for boundaryless four-manifolds, i.e. ∂(∂Y_5) = ∅, the second term vanishes on using Stokes' theorem. The same argument applies to the finite piece of S_ren^UV, as the bulk scalar X, and hence X^2, is a global smooth function.
It follows that the UV contribution to the renormalized action is zero for smooth fillings. As in the previous section, that now leaves us with the contribution from the small tubular neighbourhood T_ε. The contributions from the second and third forms in the integrand are zero for smooth solutions, again since X > 0 is smooth, and A is assumed to be a global smooth one-form on Y_5. Thus the integrals tend to zero as the volume enclosed by T_ε tends to zero. On the other hand, the first term may be rewritten so that its vanishing may be argued in exactly the same way as at the end of section 6.2, using equation (7.9). We conclude that the renormalized action for any smooth supergravity solution is zero. In particular, since Y_5 is assumed to have boundary ∂Y_5 = M_4, together with the topological constraint mentioned after equation (7.20) one necessarily has Euler number and signature of M_4 equal to zero: χ(M_4) = 0 = σ(M_4). Apart from this, no other topological assumption is made about M_4 or its filling in the above computation.

8 Discussion

In this paper we have defined and studied a holographic dual to the topological twist of N = 4 gauge theories on Riemannian three-manifolds and verified that the renormalized gravitational free energy is independent of the boundary three-metric, thus providing an additional construction of topological AdS/CFT beyond [4]. We have also reformulated the bulk supersymmetry equations in terms of a twisted identity structure, and used this structure to prove that the gravitational free energy of all smooth bulk fillings, irrespective of their topology, is zero. Let us again emphasize that the latter result does not make the former result of section 5 redundant: the computation of the variation of the gravitational free energy holds for smooth solutions, but a priori it is more general. Metric-independence will still hold for singular solutions, provided the additional surface terms around the singularities are zero. In fact, if one allows singular saddle point solutions at all, this should be a clear constraint. In addition we have revisited the AdS_5/CFT_4 correspondence and similarly showed that smooth fillings there also have zero gravitational free energy.

The results presented here and in [4] raise a number of interesting questions and directions for future research. In general the classical supergravity limit of the AdS/CFT correspondence identifies

− log Z_QFT = S_ren . (8.1)

Here on the right hand side we have the least action solution to the given filling problem in the bulk supergravity, while the left hand side is understood to be the leading term in the corresponding strong coupling (typically large rank N) limit of the QFT partition function. For example, uplifting the four-dimensional N = 4 gauged supergravity solutions to M-theory on S^7/Z_k leads to the effective four-dimensional Newton constant in (2.4), which scales as N^{3/2}. The latter multiplies the holographically renormalized on-shell action S_ren on the right hand side of (8.1). On the other hand, in this paper we have shown that this gravitational free energy is always zero, for any smooth supergravity filling of any conformal boundary three-manifold M_3. We have already noted that every oriented three-manifold is spin, but another important topological fact is that every such three-manifold bounds a smooth four-manifold (which may be taken to be spin). There is thus no topological obstruction to finding such a bulk filling of M_3.
Of course, an important assumption here is that there exist smooth fillings that solve the supergravity equations, with prescribed conformal boundary (M_3, g). We have recast the supergravity equations as the first order differential system (6.12)-(6.16), and thus existence and uniqueness theorems for solutions to these equations will play an important role. Given that such solutions are supersymmetric and are dual to a topologically twisted theory, one naturally expects better behaviour than for the non-supersymmetric Einstein filling problem, typically studied by mathematicians. In any case, assuming that such smooth fillings are the dominant saddle points in (8.1), the results of this paper imply that the large N limit of the topologically twisted ABJM partition function is o(N^{3/2}), for any three-manifold M_3. This should be contrasted with the non-twisted partition function on (for example) S^3, where both sides of (8.1) agree and equal (π√(2k)/3) N^{3/2} in the large N limit [52]. It thus remains an interesting open problem to compute the large N limit of the topologically twisted ABJM theory, on a three-manifold M_3, and compare with our holographic result. Moreover, if the leading classical saddle point indeed contributes zero, the next obvious step is to try to compute the subleading term, as a correction to the supergravity limit. Since by construction everything is a topological invariant, this may well be possible.

Similar remarks apply to the Donaldson-Witten twist studied holographically in [4]. Here the bulk five-dimensional N = 4^+ gauged supergravity solutions uplift on S^5 to solutions of type IIB supergravity, where now the effective five-dimensional Newton constant scales as N^2.^11 On the other hand, the Donaldson-Witten twisted partition function has been computed, for general rank gauge group G = SU(N), on M_4 = K3 in [53,54]. This follows from the fact that on the hyperKähler K3 manifold the Donaldson-Witten and Vafa-Witten twists are equivalent (and in fact equivalent to the untwisted theory). However, |σ(K3)| = 16 and a smooth filling by Y_5 does not exist in this case, so there is no obvious classical gravity solution to compare to. Nevertheless, the partition function is given (for N prime) in [53,54] by an expression (8.2) in terms of q = exp(2πiτ), with τ = θ/(2π) + 4πi/g_YM^2 the usual complexified gauge coupling, ω = exp(2πi/N), and G(q) = 1/η^24(τ), with η the Dedekind eta-function. Taking the 't Hooft coupling λ = g_YM^2 N fixed and large, the N → ∞ limit is dominated by the first term in (8.2), resulting in the leading order behaviour

log Z(K3) ∼ 8π^2 N^2 / λ . (8.3)

(^11 One may also uplift to solutions of M-theory, which are dual to N = 2 theories of class S with N^3 scaling, but we won't discuss this further here.)

As mentioned above, in general the classical gravitational free energy is order N^2, which for smooth fillings of M_4 we have shown is multiplied by zero for the holographic Donaldson-Witten twist. However, there is no such smooth filling of M_4 = K3, so it is not clear what the dual classical solution should be. Perhaps one should allow for certain singular Y_5, and/or fill the boundary S^5 × K3 with a topology that is not simply an S^5 bundle over Y_5. These would lie outside the class of smooth solutions to the consistently truncated five-dimensional N = 4^+ gauged supergravity we have studied. That said, a perhaps naive interpretation of (8.3) is that the leading classical O(N^2) term is indeed zero, with the N^2/λ term being a subleading string correction to this.
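The two explicit large N benchmarks quoted here, the untwisted ABJM S^3 free energy and the leading K3 behaviour (8.3), are easy to tabulate. A small sketch (ours) making the scalings concrete; the formulas are those quoted in the text:

# Tabulate (ours) the two large-N benchmarks quoted in the text:
#   untwisted ABJM on S^3:  F = -log Z = (pi*sqrt(2k)/3) * N^(3/2)  [52]
#   twisted SU(N) on K3:    log Z ~ 8*pi^2*N^2/lam                  (8.3)
# The twisted ABJM prediction of this paper is that the N^(3/2)
# coefficient vanishes for smooth fillings.
import math

def abjm_free_energy(N, k):
    return math.pi * math.sqrt(2 * k) / 3 * N ** 1.5

def k3_log_Z(N, lam):
    return 8 * math.pi ** 2 * N ** 2 / lam

for N in (10, 100, 1000):
    print(N, abjm_free_energy(N, k=1), k3_log_Z(N, lam=100.0))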
This particular example clearly deserves much further study. More generally, there are a wide variety of possible topologically twisted theories in diverse spacetime dimensions. One could ask whether zero action/gravitational free energy for smooth supergravity solutions dual to TQFTs is a general property. Perhaps it is specific to cases in which the preserved supercharge Q in the TQFT satisfies Q² = 0, which is generally not the case. The apparent simplicity of our results suggests there should be a more elegant way to set up the holographic problem. Recall that in field theory, invariance of the TQFT partition function with respect to metric deformations crucially relies on the stress-energy tensor being Q-exact. We have shown the corresponding result holographically, but in a less direct manner. It is natural to conjecture that a topological sector of gauged supergravities, in this holographic setting, may be similarly described using a boundary BRST symmetry [55-59].
Optimal equity capital requirements for large Swiss banks

Ten years after the worst financial crisis of the post-war period, Switzerland has established a Too-Big-To-Fail (TBTF) framework. Under this framework, the two large Swiss banks are subject to substantial capital requirements. It is not obvious whether the TBTF capital requirements are sufficient to prevent banks from plunging the country into a financial crisis once again. We estimate the social costs and benefits of higher capital requirements for the two large Swiss banks and derive socially optimal capital ratios from the cost-benefit trade-off. Our results show that Swiss TBTF capital requirements still fall short of socially optimal capital ratios.

1 Introduction

This paper seeks to contribute to the discussion of the optimal equity capital requirements for banks from society's perspective. In Junge and Kugler (2013), we limited ourselves to a comparison of the social costs and benefits and concluded that long-run benefits exceed long-run costs by a substantial multiple. 1 In this paper, we present an attempt to determine the optimal leverage and capital ratios for Switzerland's global systemically important banks (G-SIBs).

The economic debate about the appropriate minimum level of regulatory capital requirements for banks from society's perspective is highly controversial. At one end of the spectrum, Admati and Hellwig (2013, p. 179) argue that there are no social costs associated with higher equity capital requirements and propose a leverage ratio requiring equity capital on the order of 20 to 30% of total assets. At the other end, banking industry representatives continue to emphasize that higher equity capital requirements reduce the availability of credit and retard economic growth. 2 The conflict over the appropriate minimum level of bank capital also blocked the finalization of Basel III at the beginning of 2017. Some members of the Basel Committee on Banking Supervision (BCBS) emphasized that only strongly capitalized and highly liquid banks can support economic growth, while others argued that the pendulum of the Basel III revisions had already swung too far and undermined the economic recovery. 3

In October 2015, Switzerland amended its Too-Big-To-Fail (TBTF) legislation and decided to raise the required going concern leverage ratio for Switzerland's G-SIBs, Credit Suisse and UBS, to 5%. 4 This decision was based on the recommendation of the "Group of Experts on the Further Development of the Financial Market Strategy in Switzerland" that Switzerland should be among the countries with the most stringent capital requirements. 5 Benchmarking Swiss capital requirements against foreign standards is one possible approach, as is an orientation toward international competitiveness. 6 However, relevant as these considerations are, they should be complementary in nature, as they do not address the key question of whether the new TBTF capital requirements are appropriate from society's point of view. An optimal level of bank equity capital should instead be determined by an aggregate welfare objective, taking into account that higher equity capital requirements benefit the economy by reducing the likelihood of banking crises while simultaneously imposing economic costs in the form of lower potential output.
Along these lines, we extend Junge and Kugler (2013) and seek to determine the long-run steady-state optimal leverage and capital ratios for the two Swiss G-SIBs. This approach is in accordance with a major strand of economic research on bank capital and regulatory requirements. After the financial crisis of 2007/2008, the approach was applied by, among others, the BCBS (2010a), Kashyap et al. (2010), and Miles et al. (2011 and 2013). 8

Our article is arranged as follows. In Section 2 we present an updated estimate of the size of the Modigliani-Miller (M-M) effect for Switzerland's G-SIBs, extending the sample period of Junge and Kugler (2013) by 5 years up to 2015. Based on the M-M effect, we calculate in Section 3 the banks' overall cost of funds and the social cost of higher equity capital requirements using a translog production function. In Section 4 we re-estimate the effect of banking crises using a novel and extensive data set from 1892 to 2016 and combine this with the analysis of Junge and Kugler (2013) to obtain a social benefit curve for additional equity capital requirements. In Section 5 we compare the social cost and benefit associated with higher equity capital requirements and determine optimal leverage and capital ratios under different capital definitions. Finally, Section 6 concludes.

2 Empirical evidence of the Modigliani-Miller theorem of capital structure irrelevance for Swiss G-SIBs

As shown by Modigliani and Miller, a company's overall cost of funds is unaffected by the mix of equity and debt under perfect capital markets and in the absence of taxes and subsidies. An increase in equity, which is more expensive than debt, will simply be offset by a new mix of equity and debt with lower required rates of return on equity and debt. 9 In this case, the banks' overall funding costs will not change and therefore the lending of banks will remain unaffected. However, if the idealized conditions of the M-M theorem are not perfectly met, the M-M offset is incomplete and an increase in equity will raise funding costs and consequently bank lending rates. The key empirical question is to what extent this mechanism holds for the Swiss G-SIBs.

The estimation of the size of the M-M offset was first explored by Kashyap et al. (2010) as well as Miles et al. (2011 and 2013) and was applied to Swiss data by Junge and Kugler (2013). The framework is derived from the Capital Asset Pricing Model (CAPM) and the M-M theorem. Assuming that bank debt is risk free, the following linear relationship between systematic equity risk and leverage is obtained: 10

β_equity = β_asset × Lev,    (1)

where β_equity is the systematic equity risk of the bank, β_asset is the systematic risk on the bank's assets, and Lev = (E + D)/E is the bank's leverage with its equity (E) and debt (D) components. According to Eq. (1), a reduction in leverage (i.e., a relative increase in equity) leads to a proportional decline in systematic equity risk. For example, assume a bank initially has a leverage of 40 and an equity market beta of 2. If equity is doubled, and hence leverage is halved to 20, equity beta declines from 2 to 1. As pointed out in a recent study by Clark et al. (2015), 11 Eq. (1) is an appropriate specification for TBTF banks that benefit from implicit government guarantees and from deposit insurance in general. In this situation, the market perceives the debt of TBTF banks as risk free and the adjustment to changes in leverage will be channeled through equity, as stated in Eq. (1). 12 In contrast, for smaller, non-TBTF banks, the debt mechanism of adjustment cannot be ignored and the present framework is less appropriate.

Equation (1) can be tested directly by running a regression of β_equity on leverage and testing the hypothesis that the intercept is equal to zero (a 100% M-M offset). Alternatively, we can generalize Eq. (1) by considering the log-linear model β_equity = β_asset × Lev^b and test the 100% M-M hypothesis that b is equal to 1. The intercept term of this regression is now log(β_asset) and should have a negative sign. Both tests are performed below.
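The logic of the two tests can be illustrated with a few lines of code. The following is a minimal sketch on synthetic data, not the paper's estimation code: it ignores the panel structure and the fixed effects, and the simulated 50% offset and all variable names are assumptions chosen purely for illustration.

```python
# Illustrative sketch of the two M-M regression tests on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Simulate leverage between 15x and 45x, with an equity beta generated by a
# *partial* M-M offset of ~50% (log-linear exponent b = 0.5) plus noise.
lev = rng.uniform(15, 45, size=200)
beta_eq = 0.4 * lev**0.5 * np.exp(rng.normal(0, 0.05, size=200))

# Linear test: regress beta on leverage; a full M-M offset predicts a
# zero intercept, since Eq. (1) reads beta_equity = beta_asset * Lev.
X_lin = np.column_stack([np.ones_like(lev), lev])
(a_hat, b_hat), *_ = np.linalg.lstsq(X_lin, beta_eq, rcond=None)

# Log-linear test: log(beta) = log(beta_asset) + b*log(Lev); a full
# M-M offset predicts a slope b of exactly 1.
X_log = np.column_stack([np.ones_like(lev), np.log(lev)])
(loga_hat, slope_hat), *_ = np.linalg.lstsq(X_log, np.log(beta_eq), rcond=None)

print(f"linear intercept (0 under full M-M): {a_hat:.3f}")   # positive here
print(f"log-linear slope (1 under full M-M): {slope_hat:.3f}")  # ~0.5 here
```

With a partial offset built into the data, the linear fit shows a positive intercept and the log-linear slope comes out well below one, which is exactly the pattern reported for the Swiss G-SIBs below.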
In our 2013 study, we employed quarterly data from 1999 to 2010 and estimated an M-M offset of 55% (log-linear) for the two Swiss G-SIBs. But much has happened since 2010. In response to the Swiss TBTF legislation, both banks have more than doubled their common equity (CET1) levels. In mid-2015, Credit Suisse reported a CET1 ratio of 10.3% of risk-weighted assets and UBS a ratio of 14.4%, which can be compared to a benchmark of 4.5% of RWA at the end of 2010. 13 In addition, both banks enhanced their liquidity ratios and are in the process of implementing the TBTF resolution requirements.

Table 1 reports the results of linear and log-linear regressions of equity beta (estimated in the framework of the CAPM) on lagged bank leverage. Here, leverage is defined as the ratio of balance sheet assets to BIS Tier 1 capital; balance sheet assets and BIS Tier 1 capital are collected from Bloomberg and the banks' quarterly reports at group level. Lagged bank leverage is used as a regressor in order to avoid potential endogeneity problems. The panel characteristic of the data is taken into account by fixed bank effects as well as a fixed or random time effect. The random time effect model is adopted in order to obtain an efficiency gain in estimation when the Hausman test shows no significant correlation of the regressor and the time effects.

Table 1 shows the estimates for the full sample and a sample split in 2010. For the full sample, we have to adopt the two-way fixed effects model because bank as well as time effects are highly statistically significant and appear to be correlated with the residuals (according to the Hausman test). The estimates of the log-linear model are highly statistically significant, with a slope coefficient of 0.534, which is very close to the value reported by Junge and Kugler (2013). This estimate is significantly below 1 and therefore points to a partial M-M offset. As to the linear regression, we notice a positive and significant intercept and a significant slope coefficient of 0.0175, which implies an elasticity (M-M offset) of 0.46 evaluated at the means of beta and leverage. 14 A significant intercept again rejects the hypothesis of a full M-M offset and confirms the existence of a partial M-M offset. The estimates for the first sub-period until 2010 are very close to those of the full sample. For the second sub-sample, the slope coefficient is larger than in the first sub-sample, namely 0.0292, implying an elasticity (M-M offset) of 0.533 at the means, whereas the directly estimated log-linear elasticity is 0.649. The first sub-sample estimate is within one standard error of the second sub-sample estimate, and we therefore find no sign of a structural break in the regressions. Note that we could use the random time effect specification in the second sub-period according to the Hausman test.
Moreover, the sizably lower adjusted R-squared in the random effects model is to be expected, as the time dummy variables in the fixed effects model contribute to the R-squared, whereas in the random effects model these effects are in the error term.

In summary, the results of Table 1 not only confirm our earlier findings, they also show that the M-M offset for the Swiss TBTF banks is robust across sub-periods and sizeable, amounting to about 50% of what is predicted under full M-M validity. This applies equally to the linear and the log-linear specification of the regression. Particularly important is the stability of the size of the M-M offset, given the changes in regulatory and economic conditions for the Swiss G-SIBs after 2010. This evidence is in line with other studies that find M-M offsets in the range of 40 to 70%. 15

3 Social cost of additional capital requirements

Bank funding costs

As already mentioned, if the M-M offset is incomplete, as in the case of the Swiss G-SIBs, higher equity requirements will increase the funding cost of banks. The banks will pass on the additional cost to borrowers, and bank lending rates will rise. This in turn raises the economic costs of capital formation and leads ultimately to a permanent drop in GDP.

In our model, the banks' funding cost is the weighted average cost of capital, WACC. As we assume that debt has a zero beta, the cost of debt is equal to the risk-free rate R_f. Given these assumptions, the banks' WACC is:

WACC = (E/(D + E)) × R_Equity + (1 − E/(D + E)) × R_f,    (2)

where R_Equity is the expected return on equity and R_f the risk-free rate. E/(D + E) is the leverage ratio (LR). Since we estimated the size of the M-M effect as a function of leverage (rather than the leverage ratio), we rearrange Eq. (2) in terms of leverage. For this, we replace E/(E + D) by 1/Lev, where Lev stands for leverage:

WACC = (1/Lev) × R_Equity + (1 − 1/Lev) × R_f.    (3)

In line with the leading empirical studies of the M-M offset, 16 we apply the CAPM in order to include the results of our regressions between leverage and β_equity in Eq. (3). The CAPM states that the required return on equity, R_Equity, equals the risk-free rate plus the (bank-specific) beta, β_equity, times the equity market risk premium, R_p; with the linear beta regression this gives

R_Equity = R_f + (â + b̂ × Lev) × R_p,    (4)

where â is the constant and b̂ is the coefficient on leverage from our beta regressions (see Table 1). Substituting Eq. (4) into Eq. (3) yields:

WACC = R_f + b̂ × R_p + (â × R_p)/Lev.    (5)

Equation (5) shows that WACC is an inverse function of leverage and depends on the regression estimates â and b̂. These coefficients are based on pre-Basel III definitions of leverage, i.e., on the ratio of balance sheet assets to BIS Basel II Tier1 capital. In order to express WACC in terms of the definitions of the Basel III Accord, we need to convert the pre-Basel III definition of leverage accordingly. Assuming that C_con is the conversion factor between the pre-Basel III and the Basel III definition of leverage, we write Eq. (5) as follows:

WACC = R_f + b̂ × R_p + (â × R_p)/(C_con × Lev_Basel_III).    (6)

Equation (6) includes all the elements needed to calculate the overall funding cost of the Swiss G-SIBs, and can be rewritten in terms of the leverage ratio as:

WACC = R_f + b̂ × R_p + (â/C_con) × R_p × LR_Basel_III.    (7)

Thus, WACC is a linear function of the leverage ratio. Since â and b̂ are positive, higher capital requirements imply a higher cost of capital. The conversion factor C_con ensures that the leverage ratio is expressed in terms of the Basel III Look-through (fully-applied) leverage ratio. Appendix 1 explains in detail the different definitions of the leverage ratio and the derivation of the conversion factors.

In the base case of the calculations developed below, â = 0.8269 and b̂ = 0.01754 are the estimated regression coefficients over the sample from 2001Q2 to 2015Q2. The conversion factor C_con is 0.713 (= 0.77/1.08, see Appendix 1). For the risk-free money market rate R_f, we use the repo reference rate of the SNB, which was about 1% during this period. For the equity market risk premium, R_p, we assume a lower (5%) and an upper (10%) level to take account of the well-known fact that equity risk premiums vary greatly in size over time. All parameters and their values used in our analysis are summarized in the tables of Appendix 3.
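A minimal numerical sketch of Eq. (7) with the base-case parameters (variable names are our own; values are those just quoted) reproduces the WACC sensitivities reported in Table 2:

```python
# Sketch: WACC as a linear function of the Basel III leverage ratio, Eq. (7).
a_hat, b_hat = 0.8269, 0.01754   # beta regression coefficients, 2001Q2-2015Q2
C_con = 0.713                    # pre-Basel III -> Basel III Tier1 conversion
R_f = 0.01                       # risk-free rate (SNB repo rate, ~1%)

def wacc(lr_basel3, risk_premium):
    """Overall funding cost at a given Basel III Tier1 leverage ratio."""
    return R_f + b_hat * risk_premium + (a_hat / C_con) * risk_premium * lr_basel3

# Increase in WACC for a 1-percentage-point higher leverage ratio
for rp in (0.05, 0.10):
    d = wacc(0.043, rp) - wacc(0.033, rp)
    print(f"risk premium {rp:.0%}: +{d * 1e4:.1f} bps")  # ~5.8 / ~11.6 bps
```

The printed values of about 5.8 and 11.6 bps match the Basel III Tier1 figures discussed next.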
Table 2 shows the increase in WACC for the Swiss G-SIBs caused by a 1 percentage point increase in the leverage ratio. Two basic scenarios are compared: (i) the estimated M-M offset on WACC resulting from the linear regression 2001Q2 to 2015Q2, and (ii) the WACC impact under the assumption that the required return remains invariant to leverage, i.e., there is no M-M offset. (Notes to Table 2: (1) the calculation is based on the estimated M-M offset resulting from the linear regression 2001Q2 to 2015Q2 (see Table 1); (2) calculated under the assumption that the required return remains invariant to leverage.) Moreover, all calculations show the WACC before and after conversion to the final Basel III standards as of 1 January 2018. Thus, results are expressed in terms of the Basel III Tier1 Look-through (fully-applied) and CET1 Look-through (fully-applied) definitions of the leverage ratio. They are about 40% (for Tier1) and 80% (for CET1) higher than the corresponding pre-Basel III WACCs. Increases in the leverage ratio lead to proportional changes in WACC. A 1 percentage point increase of the leverage ratio raises the Basel III Tier1-based (Look-through) WACC by only 5.8 bps (assuming an equity premium of 5%) and by 11.6 bps (assuming an equity premium of 10%). The corresponding WACC increases for Basel III CET1 are higher, amounting to 7.4 and 14.9 bps, respectively.

The responsiveness of GDP to the banks' cost of capital

The starting point is the simple approach adopted by Miles et al. (2011 and 2013), which is based on a production function for GDP with capital (K) and labor (L) inputs and technological progress represented by a time trend, Y = f(K, L, t). If factor prices are equal to marginal products, the elasticity of output with respect to the price of capital can be written simply as a function of the substitution elasticity σ_KL,t and the elasticity of output with respect to capital, S_K,t (equal to the income share of capital). The subscript t reflects the possibility that the elasticity of output, E_Y,PK,t, with respect to the price of capital, P_K,t, as well as σ_KL,t and S_K,t, can change over time:

E_Y,PK,t = (dY_t/dP_K,t) × (P_K,t/Y_t) = −σ_KL,t × S_K,t/(1 − S_K,t).    (8)

Equation (8) is based on growth theory and therefore provides an estimate of the long-run steady-state impact of an increased price of capital on output. In line with neoclassical growth theory, a permanent increase in the price of capital alters the equilibrium capital stock and leads to a permanent decline in the level of output as measured by GDP. This is an economy-wide framework, which includes all goods and services produced in the economy and allows us to calculate the economic cost of higher capital costs in terms of lost output. 17
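For completeness, here is one way to arrive at the form of Eq. (8) used above. This is a sketch under the stated assumptions of competitive factor pricing, with labor held fixed in the long run; the intermediate identity for the second derivative is the standard local-CES expression:

$$P_K = F_K,\qquad \frac{F_{KK}\,K}{F_K} = -\,\frac{1-S_K}{\sigma_{KL}} \;\;\Longrightarrow\;\; d\ln K = -\,\frac{\sigma_{KL}}{1-S_K}\; d\ln P_K,$$

$$d\ln Y = S_K\, d\ln K \;\;\Longrightarrow\;\; E_{Y,P_K} = \frac{d\ln Y}{d\ln P_K} = -\,\frac{\sigma_{KL}\, S_K}{1-S_K}\,.$$

With a substitution elasticity of about 0.43 and a capital cost share of roughly 0.4 (the latter backed out from the reported elasticity rather than quoted directly in the text), this yields a value of about −0.31, in line with the estimates discussed below.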
In Junge and Kugler (2013), we adopted the CES production function with constant σ and S_K and estimated an elasticity of substitution between capital and labor for the real (non-financial) sector of approximately 1, as in the special case of the Cobb-Douglas production function. This is surprising given the estimates for other advanced countries, which are usually clearly lower than one. Moreover, new statistics for Switzerland's capital stock and the income distribution between capital and labor for the period 1995 to 2014 have recently been published, which provide an opportunity to re-examine the case in a more flexible translog framework. The translog framework is based on a second-order Taylor approximation of an unspecified logarithmic production function. This allows for a time-varying rate of substitution and production elasticity of capital, and includes the Cobb-Douglas function as a special case. 18

The estimation of the translog production function is reported in detail in Appendix 2. It results in an elasticity of substitution varying between 0.42 and 0.44 during the period 1995-2014. Together with the time series of the capital cost share (S_K,t) and the elasticity of substitution (σ_KL,t), we are able to calculate a time-varying estimate for the elasticity of production with respect to the price of capital as given in Eq. (8), i.e., E_Y,PK,t. This crucial parameter for our analysis varies in absolute value between 0.27 and 0.34, with a mean and median of approximately 0.31. This implies that the median level of GDP decreases permanently by 0.31% if the cost of capital of non-financial corporations increases by 1%. Interestingly, this parameter reaches its absolute maximum before the financial crisis and decreases in absolute value after 2008, implying a weaker reaction of GDP to capital cost changes in recent years (see Appendix 2, Fig. 6). This result is driven by a decrease in the cost share of capital in production, which is probably a consequence of the increased importance of human capital and skilled labor in production over the last 10 years.

The translog production elasticity of 0.31 lies clearly below the estimate of 0.43 used in Junge and Kugler (2013). However, compared to other advanced countries, the production elasticity of 0.43 appears high. Miles et al. (2011 and 2013) and Clark et al. (2015) apply a production elasticity of 0.25 on the basis of empirical studies related to the UK and USA. 19 The advantage of our new translog estimate is that it is more plausible than the CES estimate and of the same order of magnitude as the UK and US estimates.

As a next step, we need to determine the capital costs for Swiss companies in line with the assumed market risk premiums of 5% and 10%. To this end, we estimate the equity beta of Swiss non-financial companies, β_Corp, from the market-model regression

r_Corp_SPI,t = α + β_Corp × r_SPI,t + ε_t,    (9)

where r_Corp_SPI,t is the log return on the corporate sector of the SPI index (i.e., excluding financial and insurance companies) and r_SPI,t is the log return on the SPI. Not surprisingly, the beta for Swiss non-financial companies, β_Corp, turns out to be slightly above 1, namely 1.1. Next, we apply the CAPM and calculate the capital costs for Swiss non-financial companies, P_K, under the same assumptions that are used to calculate the return on equity for the banks. Given the two risk premiums (5% and 10%), we determine a lower (6.5%) and an upper (12%) estimate of the capital cost for Swiss non-financial companies.
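The two production-side inputs can be checked numerically. The sketch below uses the reported substitution elasticity and corporate beta; the capital cost share of 0.42 is an assumption backed out from the reported elasticity of 0.31, since the underlying series (Fig. 5) is not reproduced here.

```python
# Sketch: production elasticity via Eq. (8) and corporate capital cost via CAPM.
sigma_kl = 0.43          # translog substitution elasticity (range 0.42-0.44)
s_k = 0.42               # capital cost share consistent with E ~ 0.31 (assumed)
e_y_pk = sigma_kl * s_k / (1.0 - s_k)
print(f"elasticity of GDP w.r.t. capital cost: {e_y_pk:.2f}")  # ~0.31

# CAPM capital cost for Swiss non-financial companies (beta ~ 1.1)
beta_corp, R_f = 1.1, 0.01
for rp in (0.05, 0.10):
    print(f"P_K at premium {rp:.0%}: {R_f + beta_corp * rp:.1%}")  # 6.5% / 12.0%
```

The two printed capital costs of 6.5% and 12.0% reproduce the lower and upper estimates just quoted.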
As the Swiss TBTF legislation applies to the Swiss G-SIBs, only these institutions are under pressure to increase lending rates. 20 Consequently, economy-wide lending rates will increase only by a certain proportion, determined by the role of the G-SIBs in credit supply. Since in our approach the impacts of higher WACCs are channeled through the Swiss corporate sector, the relevant market share is the share of G-SIBs in external financing of the Swiss corporate sector. This share is 10.8%. 21 Finally, we assume that any rise in WACC is, at least partially, passed on to the banks' customers, whereby we distinguish between two simple scenarios: a 100% and a 50% pass-through (PT).

The size of the PT depends on the market power of the two G-SIBs, reflected in their competitive position and their ability to bind their customers. A 100% PT, combined with the assumption that borrowers will not substitute away from UBS and CS to other Swiss banks, suggests a perfectly inelastic demand for credit from the two large Swiss banks. Given the fierce competition among Swiss banks, this seems rather unlikely at first sight. However, on closer inspection, one notices that banks have certain means to lock in their customers. In credit markets, banks retain their customers through contingent lending arrangements in the form of credit lines or revolving loans. Credit lines and revolving loan agreements are of considerable value for corporates, as they allow them to choose when and how much to borrow as well as to repay loans in line with their business needs. This provides corporates with a great deal of flexibility. A look at the statistics (Table 3) shows that the two Swiss G-SIBs are especially generous in providing credit lines to small and large businesses. The credit-line-to-utilization ratios of the two G-SIBs are always higher than those of the other banks. This holds on average and for any company size (Table 3). Next, a bank loan is often part of a wider business relationship between the main (house) bank and a company. This relationship includes, for instance, accepting deposits, payment services, check clearing, investment advice, cross-selling, and a range of other services. 22 Typically, these arrangements are close and long-lasting and not easily questioned. They are quite common in Switzerland, Germany, and Austria. It fits this picture that 75% of Swiss small and medium enterprises (SMEs) have a financing relationship with only one bank, and another 19% with two, as shown in a recent survey. 23 The same survey finds that only 2 to 3% of Swiss SMEs have changed their house bank over the past 12 months or might consider changing it in the next 12 months. It is therefore not surprising that the survey concludes that Swiss SMEs are generally satisfied with their house bank and see no reason to change.

From these considerations, we conclude that, despite fierce competition, the two Swiss G-SIBs have limited scope to raise their bank lending rates without borrowers leaving them en masse. In light of a capital cost increase of between 12 and 22 bps (100% PT), we believe that the assumption of an inelastic demand for credit from the two large banks is acceptable. However, in order to investigate the impact of an incomplete pass-through, we will include in our calculations a PT of 50% and report its impact on the optimal leverage ratio. The ingredients of the above discussion can be summarized by the GDP multiplier (GDPM) in Eq. (10).
GDPM_t = E_Y,PK,t × SEF × PT / P_K,t    (10)

Equation (10) states that the responsiveness of output depends on the elasticity of production with respect to the price of capital, E_Y,PK,t, on the share of external financing of the Swiss corporate sector by the G-SIBs, SEF, on the percentage of pass-through, PT, and on the price of capital for Swiss non-financial companies, P_K,t. As an example, take an increase of WACC by 11.6 bps (see Table 2, Basel III Tier1) with 100% PT. At a given SEF of 10.8%, the cost of capital for non-financial firms rises by 1.25 bps above its current cost P_K,t of 1200 bps. This is an increase of 0.104% (1.25/1200 = 0.104%) and translates into a permanent fall in output of 3.2 bps, given the elasticity E_Y,PK,t of 0.31 (0.31 × 0.104% = 3.2 bps).

Given Eqs. (7) and (10), the GDP cost of higher leverage ratios, LR_Basel_III, is:

GDP cost = GDPM × [WACC(LR_Basel_III) − WACC(LR_Basel_III_0)].    (11)

After defining a base level LR_Basel_III_0 as a point of departure for the increases of the leverage ratio, LR_Basel_III, we can simplify the equation as

GDP cost = GDPM × (â/C_con) × R_p × (LR_Basel_III − LR_Basel_III_0).    (12)

Equation (12) is a linear, upward-sloping function of the leverage ratio and measures the GDP cost of additional capital requirements in comparison to a given base level of LR. We use this equation to calculate the GDP impact of higher capital requirements. For the base level LR_Basel_III_0, here and for the GDP benefit curve developed in the next section, we select 3.3%, which is approximately the mean value of the Basel III converted leverage ratio for Tier1 over the period 2013 to 2015. Inserting the already mentioned parameter values (see Appendix 3, Tables 14 and 15 for a detailed presentation of the parameters and their values) into Eq. (12) allows us to calculate the social economic cost resulting from a 1 percentage point increase in the leverage ratio. Table 4 presents the results, assuming a 100% pass-through.

The social economic costs related to higher capital requirements for the Swiss G-SIBs are very small. A 1 percentage point increase in the TBTF leverage ratio for Basel III Tier1 capital leads to a permanent annual output loss of 0.03% of GDP. In terms of the Basel III CET1 leverage ratio, the impact is slightly stronger, with a permanent fall in the level of real GDP of 0.04%. Using an annual discount rate of 5%, the estimates imply a fall in the present value of all future GDP of between 0.6 and 0.8%. 24 Thus, the recent decision by Switzerland to lift the TBTF Basel III Tier1 LR from 3 to 5% implies a social economic cost of about 0.06% per annum, whose present value is equal to 1.2% of current output. Note that the size of the market risk premium does not matter very much for the results in Table 4. It influences the economy as a whole (both the banking and the corporate sector simultaneously), leaving the relative cost between the banking and the corporate sector largely unaffected. 25 The last two columns of Table 4 report the social economic costs if there were no M-M offset. They are nearly twice as high as the results which include the M-M effect, thereby once more highlighting that the M-M effect matters. Finally, if we assume a pass-through of 50%, then all reported values are simply halved and, as we will see below, the optimal leverage ratio becomes larger.
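A minimal sketch of the cost side (variable names are our own; parameter values are those quoted from Appendix 3) reproduces the worked example and the Table 4 figures:

```python
# Sketch: the social-cost line, Eq. (12), with base-case parameters.
E_Y_PK = 0.31              # production elasticity w.r.t. the price of capital
SEF = 0.108                # G-SIB share in external financing of Swiss corporates
PT = 1.0                   # pass-through of higher WACC to lending rates
a_hat, C_con = 0.8269, 0.713

def gdp_cost(lr, lr0=0.033, risk_premium=0.10, p_k=0.12):
    """Permanent GDP loss (as a fraction) from raising the Basel III Tier1
    leverage ratio from lr0 to lr, combining Eqs. (7), (10) and (12)."""
    d_wacc = (a_hat / C_con) * risk_premium * (lr - lr0)   # change in WACC
    return E_Y_PK * SEF * PT * d_wacc / p_k                # Eq. (10) times d_wacc

# A 1-percentage-point increase, for both risk premium scenarios (Table 4)
print(f"{gdp_cost(0.043, risk_premium=0.10, p_k=0.12):.4%}")   # ~0.03%
print(f"{gdp_cost(0.043, risk_premium=0.05, p_k=0.065):.4%}")  # ~0.03%
```

As noted in the text, the two risk premium scenarios give nearly identical GDP costs, because the premium moves both the banks' and the corporates' cost of capital.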
4 Social benefits of additional capital requirements

In this section, we provide an updated estimate of the effects of banking crises on the growth path of Swiss GDP. It is based on the analysis of Junge and Kugler (2013), who use data from 1881 to 2010. In the interim, however, better historical data for real GDP from 1892 to 1947 have become available, so we no longer have to deflate nominal GDP by the consumer price index in order to obtain a proxy for real GDP for the years before 1948. Moreover, we now have nearly 10 years of data since the last financial and banking crisis, and we should therefore obtain a much more reliable estimate of its effect on GDP.

Switzerland has experienced four fully fledged banking crises since 1881, namely in 1911, 1931, 1991, and 2007. 26 In addition, we account for the recessions of the two world wars (1916 and 1942) as well as the oil price shock of 1974. In order to estimate the long-run impact of these crises, we use a deterministic time trend model for log GDP, taking into account the effects of major shocks by including level-shift dummy variables (equal to 0 before the event and 1 after) for all major adverse shocks:

log GDP_t = α + β × t + Σ_i δ_i × D_i,t + u_t,    (13)

where the D_i,t are the level-shift dummies for the seven shocks listed above. The dummies do not capture the short-run effect of a crisis but only its permanent effect on GDP. Thus, the results are robust, and minor differences of plus or minus 1 year in dating the crises do not matter. The transitory cyclical deviations from the trend are captured by the residual u_t of Eq. (13), which we expect to be strongly autocorrelated but stationary.

Before turning to the results of this model, let us briefly mention that the residuals of this deterministic trend model do appear to be stationary. Indeed, the residuals are identified as following an AR(1) process with a coefficient of 0.65, and a Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test does not reject the null hypothesis of stationarity at any reasonable significance level (KPSS = 0.0758, 10% critical value = 0.119). However, the standard critical values are not valid for residuals of trend-break models. In order to obtain the appropriate critical values, we ran 1000 bootstrap replications taking into account the AR(1) property of the residuals. By this exercise, we obtained 10%, 5%, and 1% critical values equal to 0.145, 0.169, and 0.212, respectively. Thus, the stationarity hypothesis is clearly in line with the data, as the calculated KPSS statistic is lower than the appropriate 10% critical value of 0.145.
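The bootstrap idea can be sketched in a few lines. This is a simplified stand-in for the paper's procedure (it simulates a pure AR(1) with the reported coefficient rather than re-estimating the trend-break model on each replication, and the sample length of 125 years is an assumption based on 1892-2016):

```python
# Sketch: bootstrap critical values for the KPSS statistic under an AR(1)
# null with coefficient 0.65, using 1000 replications as in the text.
import warnings
import numpy as np
from statsmodels.tsa.stattools import kpss
from statsmodels.tools.sm_exceptions import InterpolationWarning

warnings.simplefilter("ignore", InterpolationWarning)
rng = np.random.default_rng(42)
n, phi, reps = 125, 0.65, 1000

stats = np.empty(reps)
for i in range(reps):
    e = rng.normal(size=n)
    u = np.empty(n)
    u[0] = e[0] / np.sqrt(1 - phi**2)   # draw from the stationary distribution
    for t in range(1, n):
        u[t] = phi * u[t - 1] + e[t]
    stats[i] = kpss(u, regression="c", nlags="auto")[0]

# Bootstrap critical values: upper percentiles of the simulated statistics
for q in (90, 95, 99):
    print(f"{100 - q}% critical value: {np.percentile(stats, q):.3f}")
```

The autocorrelation inflates the KPSS statistic relative to the i.i.d. case, which is why the bootstrapped critical values exceed the standard tabulated ones.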
The empirical results for this model and annual Swiss data from 1892 to 2016 are presented in Table 5. First of all, consider the coefficient estimate for the time trend. It is 0.0344, which implies a potential GDP growth rate of nearly 3.5%, instead of the historical average of 2.34%. This drop in measured GDP growth was brought about by permanent shifts of the GDP growth path caused by the crises reflected in our dummy variables. We see that, in particular, the occurrence of banking crises has a strong and highly statistically significant permanent negative impact on the level of GDP. For instance, the largest negative impact of approximately 30% is associated with the crisis of the early 1990s (estimate of δ_3 = −0.298). In general, the results are qualitatively similar to those obtained in Junge and Kugler (2013) using the old data set. However, there are some quantitative differences. We find a statistically different impact for the four banking crises according to the corresponding highly significant F-statistic reported in Table 5: the banking crises of 1931 and 1991 clearly had a stronger effect than those of 1911 and 2007.

This appears plausible, as in 1931 and 1991 the banking crises occurred in connection with a strong depression, or at least a strong recession, whereas in 1911 and 2007 the crises originated in the banking sector. For the non-banking crises, we also find negative permanent effects, but their impacts are lower and of lesser statistical significance, and there is only weak evidence that these crises had different effects, i.e., WW I appears to have had a stronger impact on Swiss GDP than WW II and the oil crisis. Nevertheless, we use the restriction that the three non-banking crises have the same effect, along with the assumption of different effects for the two pairs of banking crises. The F-statistic for these restrictions is approximately 2, which has a marginal significance level of 9%, so the restrictions cannot be rejected at the usual 5% significance level. The estimation results for this restricted model are provided in the last column of Table 5. The estimated long-run impact of a "pure" banking crisis is −0.196, whereas the recession-triggered banking crises have a larger impact estimate of −0.285. As the effect of the other crises on the growth path of GDP is estimated to be −0.109, we get a new "indirect" net estimate of a banking crisis of −0.176 (the difference between −0.285 and −0.109), which is statistically highly significant. Interestingly, this new indirect estimate of the net effect of a banking crisis is almost exactly equal to the one reported in Junge and Kugler (2013). Moreover, the long-run impact on GDP of pure banking crises is estimated as −0.196. This "direct" estimate is very close to the indirect estimate, and we conclude that we can safely use the benefit function reported in Junge and Kugler (2013), which is displayed in Fig. 1, for the determination of the optimal leverage ratio.

The estimated impact of banking crises on Swiss GDP (Fig. 1) is based on a probit estimate of the dependence of the annual probability of the occurrence of a banking crisis on leverage. The explanatory variables of this model are the leverage of the Swiss large banks, the interest rate spread (mortgage rate minus savings rate), real GDP growth, and inflation. 27 For this purpose, we decomposed the first three variables into a transitory or cyclical component and a permanent or trend component using the HP filter. Inflation was decomposed into an expected component (using an AR(2) model to predict inflation) and an unexpected component (the residual of the AR(2) model). All regressors were lagged 1 year in order to avoid simultaneity problems. We should mention that leverage is here defined as total assets divided by total book equity. This definition was chosen for data reasons, since it was the only definition of leverage for which we had the long time series needed for our analysis. For leverage and the interest rate spread, only the cyclical component was statistically significant: an increase in cyclical leverage (in the cyclical interest rate spread) leads to an increase (decrease) in the probability of a banking crisis. These findings appear reasonable: a strong short-run increase in leverage and a cyclical decline in the interest rate spread are indicators of overexpansion, with fierce competition in the banking sector, and are typical of the euphoria paving the way to a bubble. An increase in trend GDP growth (10% significance) and in expected inflation (5% significance) reduces the probability of a banking crisis. These results were in line with our a priori expectations.
An increase in trend growth indicates that loans become less risky, while the effect of expected inflation reflects the incomplete adjustment of bank (sight) deposit rates to inflation, which tends to widen bank margins. There is no direct significant effect of the trend component of leverage on the probability of a banking crisis, but there is an indirect impact resulting from the relationship between the variability of the cyclical component and the trend component of leverage. Indeed, the application of an EGARCH model provided a statistically highly significant effect of trend leverage on the variance of the cyclical leverage component. The crisis probability function was then estimated as the mean of 50,000 Monte Carlo replications simulating the effect of the trend component of leverage on the probability of a banking crisis. This function is finally multiplied by the estimated GDP loss of a banking crisis in order to obtain the function displayed in Fig. 1. The details of the model estimation are reported in Junge and Kugler (2013).

For further analysis, we follow the approach of Cline (2016) and approximate the function displayed in Fig. 1 by a power function:

Expected GDP loss = A × (B_con × Lev_Basel_III)^ρ.    (14)

This function provides a very close fit (R-squared = 0.998) to the data of Fig. 1; the exponent ρ is estimated to be 2.54 and the constant A to be 1.56E−04. The exponent describes the convex slope of the function, and the constant A reflects the expected GDP loss at a leverage of 1, i.e., an asset/capital ratio of 1. 28 Moreover, the function is now expressed in terms of the Basel III leverage: the conversion factor B_con = 0.676 (= 0.73/1.08, see Appendix 1) turns the accounting-based leverage multiple of balance sheet assets/book equity used in the estimation of the probability function into a Basel III compliant expression. This function is transformed in terms of the leverage ratio LR = 1/Lev:

Expected GDP loss = A × (B_con/LR_Basel_III)^ρ.    (15)

The change in expected benefits compared to a base leverage ratio LR_Basel_III_0 is therefore given by the following equation:

Benefit = A × (B_con/LR_Basel_III_0)^ρ − A × (B_con/LR_Basel_III)^ρ.    (16)

This function is displayed in Fig. 2, where the starting value of the leverage ratio is set to 3.3%, the approximate mean value of the Basel III converted leverage ratio, expressed in terms of Basel III Tier1, over the period 2013 to 2015. A 1 percentage point increase of the leverage ratio from 3.3 to 4.3% yields a GDP benefit of 0.16%. This is clearly above the impact on GDP cost of 0.03% (see Table 4) and in line with the conclusion of Junge and Kugler (2013) that the benefits exceed long-run costs by a substantial multiple. However, after a certain level the marginal benefit of additional capital declines and falls short of marginal cost. For example, a 1 percentage point increase of the leverage ratio from 7 to 8% yields only a 0.01% GDP benefit and hence falls below the GDP cost. This behavior stems directly from our estimation of the annual crisis probability and reflects the fact that extreme crisis events are rare and require significantly more capital. The sharply shaped benefit curve is an observation that has also been made in other studies. We have already mentioned Cline (2016), but Miles et al. (2011 and 2013), Brooke et al. (2015), and a recent IMF paper (Dagher et al. 2016) also estimate similarly shaped benefit curves. The common feature is that the marginal benefits of additional capital are material at first, but rapidly decline after a certain level of bank capitalization.
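The benefit curve is easy to evaluate numerically. The sketch below (function names are our own; parameter values as just quoted, with A in percent of GDP per footnote 28) reproduces the two marginal benefits discussed above:

```python
# Sketch: the fitted benefit curve, Eqs. (14)-(16).
A, rho, B_con = 1.56e-4, 2.54, 0.676   # A in percent of GDP

def expected_loss(lr):
    """Expected annual GDP loss (percent) at Basel III leverage ratio lr, Eq. (15)."""
    return A * (B_con / lr) ** rho

def benefit(lr, lr0=0.033):
    """Eq. (16): reduction in expected loss relative to the 3.3% base level."""
    return expected_loss(lr0) - expected_loss(lr)

print(f"+1pp from 3.3%: {benefit(0.043):.2f}% of GDP")                        # ~0.16%
print(f"+1pp from 7.0%: {expected_loss(0.07) - expected_loss(0.08):.2f}%")    # ~0.01%
```

The convexity of Eq. (15) in 1/LR is what drives the rapidly diminishing marginal benefit: the first percentage point of extra capital removes most of the expected crisis loss.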
5 Comparing social cost and benefits and the determination of the optimal leverage ratio

Using the cost line Eq. (12) and the benefit curve Eq. (16), we calculate the social marginal cost (MC) and marginal benefit (MB) and determine the optimal leverage ratio for the Swiss G-SIBs. The optimal leverage ratio occurs where the two marginal effects are equal (MC = MB). The derivative of the social cost line with respect to the required Basel III leverage ratio is:

MC = GDPM × (â/C_con) × R_p.    (17)

All terms in Eq. (17) are constants, and hence the derivative with respect to the leverage ratio is a constant. The derivative of the benefit curve Eq. (16) is:

MB = A × ρ × B_con^ρ × LR_Basel_III^−(ρ+1).    (18)

Equation (18) states that increases in the leverage ratio reduce the marginal benefit. The shape of the benefit function is concave and reflects the diminishing benefit of increases in the leverage ratio. Solving MC = MB for the optimal LR* yields:

LR* = [A × ρ × B_con^ρ / MC]^(1/(ρ+1)).    (19)

Table 6 reports the base case of the optimal LR* for Swiss G-SIBs in terms of the Basel III Tier1 and CET1 leverage ratios, and Fig. 3 provides the graphical presentation in terms of Basel III Tier1 (optimal leverage ratio LR*, Basel III Tier1, risk premium 5%). The base case varies with respect to two parameters: the capital definition (Basel III Tier1 or CET1) and the risk premium (5% or 10%). 29 The base case suggests that the optimal leverage ratio is about 6% for Basel III Tier1 capital requirements and about 4.4% for CET1 capital requirements. Thus, the Swiss regulatory TBTF minimum leverage ratios fall short of the optimal level by about 1 percentage point.

This result can be translated into risk-weighted capital ratios. Since the Swiss TBTF framework establishes a fixed linear relationship between the leverage ratio and the capital ratio for Swiss G-SIBs, 30 capital ratios are easily calculated and compared to other studies of optimal capital ratios (see Table 7). Going over Table 7 leads to three conclusions. First, the optimal capital ratios for Swiss G-SIBs are about 2.5 percentage points higher than the required Swiss TBTF capital ratios of 14.3% (Basel III Tier1) and 10% (CET1). Second, a similar picture emerges for the large banks of other countries: the optimal capital ratios are always above the minimum equity requirements of the BCBS. Third, results vary across studies. With the exception of Brooke et al. (2015), all studies estimate optimal capital ratios above 15%. This difference mainly results from a different understanding of loss-absorbing bank capital. As in all the other studies, Brooke et al. (2015) estimate the optimal capital ratios on the basis of going concern equity capital, because bank equity is the only reliable loss absorber in financial crises; other forms of capital, in particular hybrid capital, failed in the financial crisis of 2007/2008. However, Brooke et al. (2015) make a general downward adjustment of their optimal capital ratios to reflect the regulatory requirement that banks have to meet gone concern capital requirements in addition to going concern bank equity. 31 Gone concern capital is equity-like capital, such as subordinated debt subject to bail-in. Its intention is to provide capital for failed banks in order to recapitalize, restructure, or wind them down without using taxpayer funds. Moreover, from a conceptual point of view, it is not appropriate to mix going and gone concern capital considerations, as doing so would imply not only a different cost curve but also a benefit curve that includes two objectives: (i) the reduction of crisis probability due to higher equity capital and (ii) the benefits of an orderly resolution due to gone concern capital.
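The base case can be reproduced directly from Eqs. (17)-(19). The sketch below uses the Basel III Tier1 parameters with a 5% risk premium (variable names are our own; A is converted from percent to a fraction so that cost and benefit are in the same units):

```python
# Sketch: solving MC = MB for the optimal leverage ratio, Eq. (19).
A_frac, rho, B_con = 1.56e-6, 2.54, 0.676     # A as a fraction of GDP
E_Y_PK, SEF, PT = 0.31, 0.108, 1.0
a_hat, C_con = 0.8269, 0.713
R_p, P_K = 0.05, 0.065                        # 5% premium, 6.5% capital cost

MC = E_Y_PK * SEF * PT * (a_hat / C_con) * R_p / P_K   # Eq. (17), a constant
# MB(LR) = A * rho * B_con**rho * LR**-(rho+1); setting MC = MB gives Eq. (19):
LR_star = (A_frac * rho * B_con**rho / MC) ** (1.0 / (rho + 1.0))
print(f"optimal Basel III Tier1 leverage ratio: {LR_star:.1%}")  # ~6%
```

Because MB falls off as LR^-(ρ+1), the solution is fairly insensitive to moderate changes in MC: halving the marginal cost (for instance via a 50% pass-through) raises LR* by a factor of only 2^(1/3.54), or roughly 20%.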
Interestingly, dynamic general equilibrium models exhibit results similar to those presented in Table 7. Clerc et al. (2015) set up a dynamic stochastic general equilibrium model which allows an explicit welfare analysis of macroprudential policies. Similar to the approaches reported above, increases in capital requirements imply both a reduction in the supply of loans due to higher interest rates and a lower average default rate of banks. The authors point out that the steady-state solution of their model is consistent with the results of Miles et al. (2011) and BIS (2010). 32 The dynamic equilibrium model of Martinez-Miera and Suarez (2014), which also includes an explicit welfare analysis, points in the same direction. They find a socially optimal CET1 capital ratio of 14%.

It is reassuring to see that alternative approaches yield similar results. Nevertheless, it is useful to assess the uncertainty attached to the estimations and the choice of parameters. We thereby follow the methodology of Cline (2016) and provide alternative parameter values for key variables, calculating optimal LRs* for all possible combinations. The parameter values considered are reported in Table 15 in Appendix 3, together with a detailed account of the selection of the different parameter values. In general, we use plus/minus two standard errors around an estimated parameter value where this approach is possible. The table also includes alternative M-M offsets and pass-through scenarios. Combining the values reported in Table 15 results in 648 combinations of parameter values. 33

Figure 4 presents a histogram of these calculations. The lowest optimal LR* is 3.72%, which is obtained by assuming a zero M-M offset (â = 1), the higher risk premium (10%), a larger share of external financing of firms affected by a capital cost increase (18.5%), a high elasticity of GDP with respect to capital costs (0.34), a lower GDP loss severity (10%), and a downward-adjusted exponent of the benefit curve (minus 2 standard errors). The median of the optimal LR* is 6.67%, which is slightly above the benchmark case. The maximum LR* is equal to 13.69%, which emerges by assuming an M-M offset of 67%, a low risk premium (5%), a reduced share of non-financial corporates' financing provided by the G-SIBs (5.4%), a low elasticity of GDP with respect to capital costs (0.27), a high GDP loss severity (28.5%), and an upward-adjusted exponent of the benefit curve (plus 2 standard errors). It is worth mentioning that the asymmetry of the frequency distribution is strongly driven by the M-M offset and the pass-through. Finally, the CET1 histogram exhibits a pattern similar to that shown for Basel III Tier1 in Fig. 4. Its median is equal to 4.86%, which is above the minimum TBTF standard of 3.5% and somewhat above the base-case optimal LR* of 4.43% for CET1. When we vary the parameter values, we arrive at a minimum optimal CET1 leverage ratio of 2.91% and a maximum of 9.99%, respectively.

6 Conclusions

This paper extends the analysis of Junge and Kugler (2013) on the effects of increased equity capital requirements on Swiss GDP to the determination of an optimal leverage ratio. In addition, we improved our model by using the flexible translog production function (instead of CES), updated our estimates, and used newly available historical GDP data to estimate the effect of a banking crisis on real GDP. The study finds that the optimal leverage ratios for Swiss G-SIBs are approximately 6% in terms of Basel III Tier1 and 4.5% in terms of CET1.
The corresponding optimal risk-weighted capital ratios are 17% and 13%, respectively. On this basis, the revised minimum TBTF requirements for the Swiss G-SIBs fall short of the optimal leverage and capital ratios by about 20%.

The paper also addresses the large range of uncertainty surrounding the estimates. Although variations in the key parameters can result in big changes in the estimated optimal capital requirements, ranging from 3.7 to 13.7%, the median of the distribution is 6.7%, which is slightly above our benchmark estimate of 6.1%. The minimum value is mainly due to the assumption of no M-M offset, a one-to-one pass-through of interest rate adjustments by the G-SIBs, and a relatively low GDP loss from a banking crisis. By contrast, the maximum of 13.7% for the leverage ratio is based on a 67% M-M offset, a 50% interest rate pass-through, and a high GDP loss from a banking crisis.

Our estimates of optimal equity requirements are smaller than the Admati and Hellwig (2013) proposition of 20 to 30%. Their argument is based on a full M-M offset, under which higher equity capital requirements would not increase the banks' overall funding costs and hence would not impact GDP. In our model, this assumption, strictly speaking, leads to an undetermined optimal leverage ratio (with zero marginal cost, Eq. (19) has no finite solution). Optimal leverage ratios of the order of 20 to 30% imply a nearly complete M-M offset and/or a very low interest rate pass-through, resulting in a downward shift of the marginal cost curve. Besides these two determinants of the optimal leverage ratio, the limiting factor for additional increases in capital requirements stems mainly from the GDP benefit curve. Its shape implies that the marginal benefits of additional capital decline sharply at leverage ratios clearly below 20-30%.

Finally, given the uncertainty around our estimates, we are the first to caution against a too-literal interpretation of the "optimal" equity capital requirements. Rather, our investigation of the trade-off between social cost and social benefit of higher equity capital requirements should be taken as an important complementary alternative to other approaches to bank capital determination. At any rate, our investigation addresses the central question of the optimal level of bank equity capital. The issue, however, is far too complex to be treated by one approach alone. Instead, different approaches, including international benchmarking exercises and competitiveness considerations as applied by the Swiss Group of Experts, should be used to determine the appropriate level of bank equity.

Switzerland's domestic systemically important banks (D-SIBs) do not come within the scope of our examination because these banks are not publicly traded on the Swiss stock exchange and therefore cannot be included in the methodological approach pursued in this paper. The three Swiss D-SIBs are Raiffeisen Gruppe, Zürcher Kantonalbank, and PostFinance. The Swiss TBTF legislation requires that D-SIBs meet a lower minimum going concern leverage ratio of 4.5%. Note that, in contrast to Credit Suisse and UBS, the three Swiss D-SIBs have comfortable levels of capital.

Notes

8 Another strand of the literature (e.g., Clerc et al. (2015), Martinez-Miera and Suarez (2014)) uses dynamic general equilibrium models to estimate the trade-off between social benefits and social costs of changes in capital requirements.

9 As a numerical example, assume a bank with a CHF 100 balance sheet financed by CHF 97 in debt and CHF 3 in equity. The return on debt is assumed to be 5% and the return on equity 25%.
The overall funding costs are 5.6% [= (5% × 0.97) + (25% × 0.03)]. If the bank decides to raise equity to CHF 5 and reduce debt to CHF 95, the bank is less risky than before. Under 100% M-M validity, the required rate on equity will drop from 25 to 21.75% and the required return on debt from 5 to 4.75%. The overall funding cost, however, remains unchanged at 5.6% [= (4.75% × 0.95) + (21.75% × 0.05)], i.e., there is a 100% M-M offset.

10 See in particular Miles et al. (2011 and 2013) for a presentation of the theoretical basis of Eq. (1).

11 These authors point out that Equation (1) is a variant of the Hamada framework. See Hamada (1969).

12 Bank CDS spreads seem to indicate that the debt of G-SIBs is not risk free. This is a correct observation. However, it should be taken into account that market-based spreads are risk-neutral and overstate real-world default probabilities. But more importantly, Eq. (1) is based on the idea that equity is more risky the higher the leverage, and vice versa.

13 The definition of CET1 was introduced with the announcement of the Basel III framework at the end of 2010. A rough estimate of the CET1 capital ratio of the two Swiss G-SIBs can be derived from the Comprehensive Quantitative Impact Study (QIS) of the BCBS (2010b) (December 16, 2010) and Junge and Kugler (2013), footnote 21.

14 For the linear regression, the M-M offset is the elasticity of beta with respect to leverage, i.e., the slope coefficient times the mean of leverage divided by the mean of beta, evaluated at the sample means.

15 Cline (2015) finds that the M-M offset amounts to 45% for US banks, and an ECB (2011) study suggests a range of M-M offsets between 41 and 72% for 54 large international banks.

16 Leading empirical studies estimating the M-M offset are Kashyap et al. (2010), Miles et al. (2011 and 2013), and Clark et al. (2015). As an alternative, one could test directly the relationship between the required return on bank equity and bank leverage. However, good time series on expected earnings are essential for this, which we could not obtain.

17 One may argue that the importance of mortgage lending in Switzerland justifies a separate treatment of higher capital requirements on residential housing. Such an approach would be indispensable in any disaggregated macroeconomic model in which different kinds of capital (plant and equipment as well as non-residential and residential buildings) are distinguished. However, in the aggregated macroeconomic model used in this paper there is only capital, which is provided by the business sector. Therefore, we implicitly assume that all construction investments are conducted by the business sector. Although this is an overstatement, it is not utterly wrong when we recall that rented, not owned, housing is the dominant form in Switzerland.

18 For an introduction to the estimation of translog production functions, see Berndt (1991, Chapter 9).

19 See in particular Smith (2008), Barnes et al. (2008), and Jones (2003).

20 In principle, the Swiss TBTF legislation applies to both the Swiss G-SIBs and the D-SIBs. However, currently, the D-SIBs are out of scope because details of their regulation are still open. Moreover, the Swiss D-SIBs are well capitalized.

21 The share of all Swiss banks in external financing of Swiss companies is 35% and has been stable for years (see Trend (2013)).

23 SECO (2017).

24 We use the discount rate of 5% only in order to facilitate the comparison of our results with the results from other studies. In particular, the BCBS tends to present estimates of social economic cost using a discount rate of 5%. The appropriate social discount factor for Switzerland should be much lower.
25 In Eq. (12), the market risk premium enters the numerator through the change in the banks' funding costs (Eq. (6)) and the denominator through P_K, with little overall impact on the result.

26 Ritzmann (1973) is a comprehensive reference for the history of Swiss banks. SNB (2007) provides some information on the history of banking crises in Switzerland, including the crisis of 1991.

27 The sample period runs from 1906 to 2010.

28 Note that the constant A = 1.56E−04 refers to expected loss (crisis probability × 17.7%).

29 A complete list of the parameters and variants is shown in Appendix 3, Table 14.

30 The link between the leverage and the capital ratios for Swiss G-SIBs is the RWA density, which is the average risk weight per unit of exposure for any given bank (RWA/LRD), where LRD is the leverage ratio denominator (see Appendix 1). In order to ensure a coherent interaction between the leverage and the capital ratios, the Swiss TBTF framework requires an RWA density of 35% for G-SIBs. Hence, the capital ratio (CR) is easily determined from the leverage ratio and the RWA density: CR = LR / RWA density.

31 See Brooke et al. (2015), chart 9, and the critique of Vickers (2016), pointing out that the approach of Brooke et al. (2015) is misguided.

32 See Clerc et al. (2015), page 38.

33 We have four parameters that can take three values and three parameters that can take two values, resulting in a total of 3^4 times 2^3 combinations.

34 The calculation of the Basel III leverage ratio, and in particular its denominator, is described in detail in BCBS (2014), "Basel III Leverage Ratio Framework and Disclosure Requirements", January 2014, and in FINMA (2015b), "FINMA Circular 2015/3 Leverage Ratio".

35 The quarterly financial reports of CS and UBS in 2015 provide an impression of the different treatments of derivatives and SFTs under US accounting rules and IFRS. For example, in the case of CS (US-GAAP), the adjustment of derivatives to LRD leads to a significant increase in the LRD exposure (CHF 124bn, Q3 2015), whereas in the case of UBS (under IFRS) it produces a sharp reduction of LRD (CHF 137bn, Q3 2015).

8 Appendix 1
8.1 Regulatory capital definitions and conversion methods

This Appendix presents the various definitions of leverage ratios used to calculate the economic costs and benefits of higher equity capital requirements and explains how they can be converted into a common leverage ratio in line with the definitions of the Basel III Accord. Based on this conversion, we are able to express our results in terms of the Basel III definition of the leverage ratio. The estimation of the M-M offset and WACC before any conversion applies the Basel II BIS Tier1 capital definition as the numerator and the banks' balance sheet assets as the denominator of the leverage ratio. The annual probability of banking crises and the economic benefit before conversion are estimated using Book Equity as capital and Balance Sheet Assets as the denominator in the leverage ratio.

In order to compare the results of the various definitions of the leverage ratio, the latter must be made compatible with a common Basel III basis. The new Basel III definition requires that the numerator consist of loss-absorbing equity capital, i.e., predominantly CET1 and a proportion of AT1. This is a markedly stricter definition than the Basel II BIS Tier1 capital definition. In particular, the Basel III definition excludes any hybrid capital items, which were found in the financial crisis to be poor at absorbing losses.
The definition of the denominator of the Basel III leverage ratio (LRD) also goes beyond the definition of balance sheet assets: it additionally includes off-balance sheet items and treats the calculation of securities financing transactions (SFTs) and derivatives in its own way.34 In order to convert the different capital and asset definitions to the Basel III standards, we used the leverage ratios reported by CS and UBS under both a Basel II and a Basel III approach for a common reporting period. Tables 8, 9, 10, 11, and 12 present the results. All conversion factors refer in each case to the look-through or fully-applied equity capital definition of Basel III. They capture the equity capital position of the banks assuming the full application of Basel III, excluding the phase-in adjustment of the transition period from 2014 up to 2018. The conversion factors relating CET1 to Basel II BIS Tier1 and CET1 to Book Equity are shown in Tables 8 and 9, respectively, and were calculated on the basis of a common (pre-phase-in) reporting period from Q4 2011 to Q4 2013:

Basel III CET1 Look-through = 0.60 × Basel II BIS Tier1
Basel III CET1 Look-through = 0.52 × Book Equity

In the same way, we calculated the conversion factors between Basel III Tier1 Look-through and Basel II BIS Tier1 and Book Equity (see Tables 10 and 11). The conversion factors were determined as follows:

Basel III Tier1 Look-through = 0.77 × Basel II BIS Tier1
Basel III Tier1 Look-through = 0.73 × Book Equity

Again, the calculations are based on the quarterly financial reports of CS and UBS. However, for the relationship of Basel III Tier1 Look-through to Basel II Tier1 and Book Equity, we used the data between Q4 2013 and Q3 2015 because the banks did not disclose Basel III Tier1 Look-through calculations prior to Q4 2013. Finally, we calculated the conversion factor between Balance Sheet Assets and the Basel III LRD over the period from Q4 2014 to Q3 2015 (Table 12). This is the earliest period available in which the two big banks recorded both LRD and Balance Sheet Assets. The individual conversion factors of the two banks are rather different and largely reflect the differences in the accounting standards of the two banks: CS balance sheet calculations follow US-GAAP, while the UBS calculations are based on IFRS standards. Given the differences in the treatment of derivatives and SFTs between US-GAAP and IFRS accounting rules, it is no surprise that the conversion factor of CS is considerably larger than the UBS conversion factor.35 The combined CS and UBS conversion factor is:

Basel III LRD = 1.08 × Balance Sheet Assets

It may be objected that the sample is too small to be able to calculate reliable conversion factors. However, there are reasons to believe that the calculated conversion factors are robust. First, a great proportion of on-balance sheet items are treated in the same way across US-GAAP, IFRS, and LRD, which limits the scope for unfounded measurement deviations. Second, thanks to pro-forma LRD calculations of UBS back to Q4 2012, we can calculate the conversion factor for this earlier period. It typically hovered in a small corridor slightly below 1, with an average conversion factor of 0.98. This suggests that the sampled conversion ratios between LRD and the accounting measurements IFRS and US-GAAP, respectively, are reliable.
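As a usage note, the conversion factors above compose mechanically. The sketch below (Python) shows one way to apply them; the dictionary layout and the worked numbers are illustrative assumptions, while the factors themselves are those just derived.

```python
# Sketch: put a leverage ratio measured under a non-Basel III capital/asset
# definition onto the common Basel III basis using the appendix's factors.

CONVERSION = {
    ("CET1", "BIS_Tier1"):   0.60,   # Basel III CET1 = 0.60 x Basel II BIS Tier1
    ("CET1", "BookEquity"):  0.52,
    ("Tier1", "BIS_Tier1"):  0.77,
    ("Tier1", "BookEquity"): 0.73,
}
LRD_FACTOR = 1.08                    # Basel III LRD = 1.08 x Balance Sheet Assets

def to_basel3_lr(capital: float, assets: float, numerator: str, source: str) -> float:
    """Convert a (capital, balance-sheet-assets) pair to a Basel III leverage ratio."""
    return (CONVERSION[(numerator, source)] * capital) / (LRD_FACTOR * assets)

# Example: book equity of 50 on balance sheet assets of 1000 gives a Basel III
# CET1 leverage ratio of 0.52 * 50 / (1.08 * 1000) ~ 2.4%.
print(f"{to_basel3_lr(50, 1000, 'CET1', 'BookEquity'):.4f}")
```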
9 Appendix 2
9.1 Estimation of the translog production function, Switzerland 1995-2014

The translog analysis is usually done in the dual framework of cost-share equations. The term dual means in this context that all the information needed to obtain the relevant parameters of the production function is contained in the corresponding cost function, and vice versa. In a model with two production factors K and L, their corresponding shares in total production costs (S_K and S_L) are represented as linear functions of the (log) factor prices (P_K and P_L):

S_{K,t} = δ_K + γ_{KK} log(P_{K,t}) + γ_{KL} log(P_{L,t}) + γ_{Kt} t   (20)
S_{L,t} = δ_L + γ_{LK} log(P_{K,t}) + γ_{LL} log(P_{L,t}) + γ_{Lt} t   (21)

For theoretical reasons, the γ matrix is symmetric (γ_{KL} = γ_{LK}), as substitution of capital by labor is symmetric. As the left-hand variables are shares, the slope coefficients add up to zero (γ_{KK} + γ_{KL} = 0; γ_{KL} + γ_{LL} = 0; γ_{Kt} + γ_{Lt} = 0), whereas the intercepts add up to 1 (δ_K + δ_L = 1). Given these restrictions, we only have to estimate one equation. Using the restriction γ_{KK} = −γ_{KL}, we can write the first equation of the system above as:

S_{K,t} = δ_K + γ_{KK} log(P_{K,t}/P_{L,t}) + γ_{Kt} t   (22)

Note that this model collapses to the Cobb-Douglas case if both γ coefficients are zero: we then arrive at a constant cost share of capital which is independent of factor prices and equal to the intercept term δ_K. Correspondingly, the labor cost share is constant and equal to δ_L = 1 − δ_K. If the elasticity of substitution is below 1, we have a positive γ_{KK} coefficient, and if technical progress is biased in favor of capital, γ_{Kt} is positive. The elasticity of substitution is calculated as

σ_{KL,t} = (γ_{KL} + S_{K,t} S_{L,t}) / (S_{K,t} S_{L,t}) = (−γ_{KK} + S_{K,t} S_{L,t}) / (S_{K,t} S_{L,t})   (23)

which is, of course, 1 for the Cobb-Douglas function. As we can see from the equation above, the opposite case of a zero elasticity of substitution implies a maximum positive parameter value of γ_{KK} = S_K S_L. Figures 5 and 6 display the data used in our estimation. Figure 5 shows the development of the cost share of capital, defined as net operating surplus plus depreciation (or capital consumption) divided by the sum of capital costs and compensation of employees (labor costs). The factor prices were calculated by dividing capital income by the capital stock and total labor income by the number of hours worked. The estimation results for the capital share equation (22) are given in Table 13. In order to avoid simultaneity problems, we estimated the model with a lag of one for factor prices. Table 13 shows a positive γ_{KK} estimate which is statistically significantly different from zero and implies a substitution elasticity which is clearly lower than one in absolute value. However, no evidence in favor of non-neutral technical progress is found: the deterministic trend coefficient is small and statistically not different from zero. Therefore, we estimated the model without a time trend, which gives a slightly higher γ_{KK} estimate in absolute value. Inserting the time-varying factor shares displayed in Fig. 5 results in an elasticity of substitution estimate varying between 0.42 and 0.44 during the period 1995-2014. Given this time series of the capital cost share (S_{K,t}) and the elasticity of substitution (σ_{KL,t}), we are able to calculate a time-varying estimate for the elasticity of production with respect to the price of capital as given in Eq. (9). It varies between −0.34 and −0.27, with a mean and median approximately equal to −0.31.
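For readers who wish to replicate this step, a minimal sketch follows (Python with statsmodels). The series names and the use of plain OLS are our assumptions; the exact procedure behind Table 13 may differ.

```python
# Sketch of the capital cost-share regression (Eq. (22), without trend) and the
# implied elasticity of substitution (Eq. (23)). Inputs are annual arrays for
# the capital share S_K and factor prices P_K, P_L; names are illustrative.
import numpy as np
import statsmodels.api as sm

def estimate_translog(s_k: np.ndarray, p_k: np.ndarray, p_l: np.ndarray):
    """Regress S_K,t on log(P_K,t-1 / P_L,t-1); returns (delta_K, gamma_KK)."""
    x = np.log(p_k[:-1] / p_l[:-1])      # one-period lag against simultaneity
    y = s_k[1:]
    res = sm.OLS(y, sm.add_constant(x)).fit()
    delta_k, gamma_kk = res.params
    return delta_k, gamma_kk

def elasticity_of_substitution(gamma_kk: float, s_k: np.ndarray) -> np.ndarray:
    """sigma_KL,t = (-gamma_KK + S_K S_L) / (S_K S_L), with S_L = 1 - S_K."""
    s_l = 1.0 - s_k
    return (-gamma_kk + s_k * s_l) / (s_k * s_l)
```

A positive gamma_KK mechanically yields sigma below 1, consistent with the estimate of 0.42-0.44 reported above.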
As shown in Fig. 6, the elasticity of output with respect to the price of capital reached its absolute maximum before the financial crisis and has decreased in absolute value since 2008, implying a weaker reaction of GDP to capital cost changes in recent years.
2023-02-23T15:18:57.355Z
2018-08-22T00:00:00.000
{ "year": 2018, "sha1": "e3312fafd4e6a16a617b085b560b600b819c947f", "oa_license": "CCBY", "oa_url": "https://sjes.springeropen.com/track/pdf/10.1186/s41937-018-0025-z", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "e3312fafd4e6a16a617b085b560b600b819c947f", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [] }
246331815
pes2o/s2orc
v3-fos-license
Arctic mixed-phase clouds sometimes dissipate due to insufficient aerosol: evidence from observations and idealized simulations

Mixed-phase clouds are ubiquitous in the Arctic. These clouds can persist for days and dissipate in a matter of hours. It is sometimes unknown what causes this sudden dissipation, but aerosol-cloud interactions may be involved. Arctic aerosol concentrations can be low enough to affect cloud formation and structure, and it has been hypothesized that, in some instances, concentrations can drop below some critical value needed to maintain a cloud. We use observations from a Department of Energy ARM site on the north slope of Alaska at Oliktok Point (OLI), the ASCOS field campaign in the high Arctic Ocean, and the ICECAPS-ACE project at the NSF Summit Station in Greenland (SMT) to identify one case per site where Arctic boundary-layer clouds dissipated coincidentally with a decrease in surface aerosol concentrations. These cases are used to initialize idealized large eddy simulations (LES) in which aerosol concentrations are held constant until, at a specified time, all aerosols are removed instantaneously - effectively creating an extreme case of aerosol-limited dissipation which represents the fastest a cloud could possibly dissipate via this process. These LES simulations are compared against the observed data to determine whether cases could, potentially, be dissipating due to insufficient aerosol. The OLI case's observed liquid water path (LWP) dissipated faster than its simulation, indicating that other processes are likely the primary driver of the dissipation; for this case in particular, we conclude that the observed dissipation was not driven entirely by a lack of aerosol particles. The ASCOS and SMT observed LWP dissipated at similar rates to their respective simulations, suggesting that aerosol-limited dissipation may be occurring in these instances. We also find that the microphysical response to this extreme aerosol forcing depends greatly on the specific case being simulated. Cases with drizzling liquid layers are simulated to dissipate by accelerating precipitation when aerosol is removed, while the case with a non-drizzling liquid layer dissipates quickly, possibly glaciating via the Wegener-Bergeron-Findeisen (WBF) process. The non-drizzling case is also more sensitive to INP concentrations than the drizzling cases. Overall, the simulations suggest that aerosol-limited cloud dissipation in the Arctic is plausible and that there are at least two microphysical pathways by which aerosol-limited dissipation can occur.

Prior to this aerosol removal time, a temperature nudging scheme is used to maintain a stable cloud. At each time step, each grid point is linearly nudged back to the initial temperature profile with a time scale τ = 1 h. Nudging values are computed based on the current domain-average temperature profile, so all grid points at a given height z are nudged the same amount. The result in all simulations is a cloud that is quasi-steady in thickness and water content. After the removal of aerosol from the model, the temperature nudging scheme is turned off and the thermodynamics of the system are allowed to evolve naturally, so that the post-aerosol environment evolves freely. Large-scale subsidence is applied throughout the simulation by imposing a horizontal divergence of 2 × 10^-6 s^-1 at every model level, with a boundary condition of w_sub = 0 at the surface.
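A minimal sketch of these two model settings follows (Python/NumPy). The array shapes, names, and the explicit update are illustrative assumptions, not the LES code itself.

```python
# Sketch of the temperature nudging (tau = 1 h, applied to the domain-mean
# profile) and the constant-divergence subsidence described above, for a
# toy field theta of shape (n_columns, n_levels).
import numpy as np

TAU_NUDGE = 3600.0        # nudging time scale, s (tau = 1 h)
DIVERGENCE = 2e-6         # imposed horizontal divergence, s^-1

def nudge_temperature(theta, theta_init, dt, active=True):
    """Relax each level toward the initial profile; all points at a given
    height are nudged by the same amount (based on the domain mean)."""
    if not active:        # nudging is switched off at the aerosol-removal time
        return theta
    return theta + dt * (theta_init - theta.mean(axis=0)) / TAU_NUDGE

def subsidence_velocity(z):
    """w_sub implied by constant divergence, with w_sub = 0 at the surface."""
    return -DIVERGENCE * np.asarray(z)
```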
1 Introduction

The Arctic has been shown to be extremely sensitive to a warming climate, with data showing the Arctic warming anywhere from 1.5 to 4.5 times the global mean warming rate (Holland and Bitz, 2003; Serreze and Barry, 2011; Cohen et al., 2014; Previdi et al., 2021). Clouds, in general, directly affect the surface energy budget and can act as net-warming or net-cooling influences, depending on their specific physical characteristics. Of particular note in the Arctic environment are low-level, boundary layer stratocumulus clouds, which cover large fractions of the Arctic throughout the year (Shupe, 2011). They have been found to be a net-warming influence on the surface, except for a short period in the summer when they act as a net-cooling influence (Intrieri et al., 2002; Shupe and Intrieri, 2004; Sedlar et al., 2011). These clouds tend to be mixed-phase, meaning they simultaneously contain liquid and ice water. Shupe et al. (2006) found that mixed-phase clouds accounted for 59% of the clouds identified during a year-long campaign on an icepack in the Beaufort Sea, with the remaining 41% consisting of mostly ice-only clouds.

In this study, we investigate whether or not aerosol-limited dissipation occurs on a case-by-case basis. While likely infrequent, this method of cloud dissipation is worth examining in more detail because of how sensitive the Arctic environment is to low-level cloud cover, and because of the highly uncertain changes in Arctic aerosol concentration (both natural and anthropogenic) in a warming climate (e.g. Schmale et al., 2021). We examine three observed cases of potential aerosol-limited dissipation across three different environments (northern Alaskan coast, high Arctic pack ice, and the Greenland ice sheet) and use large eddy simulations (LES) to simulate a "worst-case scenario" of aerosol-limited dissipation: immediately removing all aerosols from a simulated cloudy environment and comparing changes in cloud properties to observations, which should indicate whether or not these cases should continue to be investigated as examples of this phenomenon.

All three cases feature boundary layer mixed-phase clouds. Figure 1 shows the three case locations on a map. Details of each case are summarized in Table 1. At OLI, surface aerosol concentrations were observed to decrease from >50 cm^-3 to <20 cm^-3 in a span of 4 hours. Many such periods exist, and the results were examined manually to select cases where aerosol-limited dissipation may have been a factor in transitioning from a cloudy to a cloud-free environment. One such case (Fig. 2) occurred on the 12th of May, 2017. At 09:00 UTC, the CPC measured a transition in aerosol concentration from ∼100 cm^-3 to <10 cm^-3 in the span of about one hour (Fig. 2c). Aerosol data from OLI were particularly noisy, with a clear trend of concentrations ∼100 cm^-3 but with intermittent spikes upwards of 1000-10000 cm^-3 (not shown). To smooth out the data and best show what we consider to be a representative aerosol concentration timeseries, we filtered out values >1000 cm^-3 and downsampled the result from one-second to one-minute averages. Data from the Ka-band ARM Zenith Radar for this case are shown in Fig. 2b.

The ASCOS case, which has been studied extensively before, dissipated coincidentally with a decrease in surface aerosol concentration from >100 cm^-3 to <10 cm^-3 (Sedlar et al., 2011; Mauritsen et al., 2011; Sotiropoulou et al., 2014). During ASCOS, aerosol concentrations were collected using a differential mobility particle sizer (DMPS; Birmili et al. 1999; Tjernström et al. 2014) measuring size distributions of particles between 3 nm and 10 µm.
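The despiking and averaging applied to the OLI CPC record, as described above, amounts to two pandas operations. The sketch below is illustrative; the input format (a 1 Hz series with a DatetimeIndex) is an assumption, while the 1000 cm^-3 cutoff and one-minute averaging are as stated in the text.

```python
# Sketch of the OLI CPC data cleaning: drop spikes above 1000 cm^-3, then
# downsample the 1-second samples to 1-minute means.
import pandas as pd

def clean_cpc(series: pd.Series) -> pd.Series:
    """series: 1 Hz aerosol concentration (cm^-3) indexed by time."""
    despiked = series.where(series <= 1000)   # filter out values > 1000 cm^-3
    return despiked.resample("1min").mean()   # 1-second -> 1-minute averages
```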
Helicopter flights measuring aerosols during this time found that concentrations were below 10 cm^-3 (for aerosols <14 nm) for the entirety of the boundary layer (Stevens et al., 2018) during a flight in the dissipation period at 20:13 UTC. After cloud dissipation, winds which were previously calm were observed to blow more consistently from the northeast (not shown). At the same time, the surface temperature drops by over 6 °C, though it is unclear whether there was a change in airmass or whether the temperature dropped because the cloud was no longer present as a warming influence on the surface. Surface pressure analysis (Fig. 3d) shows the extension of northern high pressure directly over the location of Oden at this time, suggesting a possible change in airmass. Like the OLI case, RH values are generally high throughout the boundary layer. Unlike OLI, there is a dry layer at 400 m. A change in θ and RH profiles at 400 m indicates weak decoupling at this level. This case has previously been investigated as existing in a potentially tenuous regime (Mauritsen et al., 2011; Loewe et al., 2017; Stevens et al., 2018; Tong, 2019). Unlike OLI and ASCOS, below-cloud RH decreases towards the surface, reaching ∼50% directly above the surface inversion.

Model Description

Simulations are performed with the Colorado State University Regional Atmospheric Modeling System (RAMS) (Pielke et al., 1992; Jiang et al., 2001; Jiang and Feingold, 2006). Radiation parameterization is provided by the Harrington scheme (Harrington, 1997), and turbulence is parameterized by a Deardorff level 2.5 scheme, which parameterizes eddy viscosity as a function of turbulent kinetic energy (TKE). RAMS uses a double-moment bulk microphysics scheme (Walko et al., 1995; Meyers et al., 1997; Saleeby and Cotton, 2004) that predicts the mass and number concentration of eight hydrometeor categories: cloud droplets, drizzle, rain, pristine ice, aggregates, snow, hail, and graupel. Each of these hydrometeor categories is represented by a generalized gamma distribution. The scheme simulates nucleation (cloud and ice), vapor deposition, evaporation, collision-coalescence, melting, freezing, secondary ice production, and sedimentation. Cloud droplets are activated from aerosol particles using lookup tables (Saleeby and Cotton, 2004) built based on Köhler theory and cloud droplet growth equations formulated in Pruppacher and Klett (1997). Water vapor is depleted from the atmosphere upon activation by assuming that newly activated droplets have a diameter of 2 µm. Ice crystals are heterogeneously nucleated by the parameterization in DeMott et al. (2010), with the number of ice nuclei (L^-1) given by:

n_in = a (273.16 − T_k)^b (n)^(c (273.16 − T_k) + d)

where n_in is the ice nuclei number concentration, T_k is the air temperature in Kelvin, and a, b, c, d are constants. The variable n in the original DeMott parameterization is the number concentration of aerosol particles with diameters larger than 0.5 µm (see Table 1).

The observations were used to generate an initial sounding and to specify the aerosol concentration for each simulation. We use a simplified aerosol treatment in which number concentrations are fixed to a single value throughout the domain. A list of experiments and initial aerosol/ice nuclei concentrations is found in Table 1. There was little change in the liquid water for n = 1, 5, or 10 L^-1. There were moderate differences in ice water content, and as there are ice water path (IWP) retrievals for both the OLI and ASCOS cases, we picked a value of n that yielded simulated IWP values closest to observations. For the SMT case, no ice measurements were available.
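The DeMott et al. (2010) expression above is simple to evaluate. In the sketch below (Python), the constants a, b, c, d are the values published in DeMott et al. (2010) for this formula; the unit conventions (aerosol in standard cm^-3, result in standard L^-1) should be double-checked against the original paper before use.

```python
# Sketch of the DeMott et al. (2010) ice-nucleating particle parameterization:
# n_IN = a * (273.16 - T)^b * n_aer05^(c * (273.16 - T) + d).
def demott2010_nin(t_k: float, n_aer05: float) -> float:
    """Ice nuclei number concentration (L^-1) at air temperature t_k (K), given
    the concentration n_aer05 (cm^-3) of aerosols with diameter > 0.5 micron."""
    a, b, c, d = 5.94e-5, 3.33, 2.64e-2, 3.33e-3
    dt = 273.16 - t_k                  # supercooling in kelvin
    return a * dt**b * n_aer05 ** (c * dt + d)

# Example: at -10 C with 100 cm^-3 of large aerosol, n_IN is of order 0.4 L^-1.
print(f"{demott2010_nin(263.16, 100.0):.2f}")
```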
Both simulated ice and liquid were sensitive to the choice of n, so a value of 0.1 L^-1 was used; this value is consistent with currently unpublished INP data from Summit Station (available upon request) and resulted in a simulated liquid water path that was closest to observations.

3 Results

Figure 5 shows domain-averaged liquid water (color shading) and ice (dashed contours at 0.01 and 0.001 g kg^-1). All three simulations show typical Arctic mixed-phase clouds in which a layer of supercooled liquid water is situated at cloud top with ice precipitating below. In OLI and ASCOS, the liquid layer is well above the ice layer (∼200 m from cloud top to the 0.001 g kg^-1 ice contour), whereas in SMT the ice extends nearly to cloud top.

Figure 6 shows the domain-mean liquid water path (LWP) for the OLI, ASCOS, and SMT simulations and the corresponding observed LWP. Observed LWP data were taken from microwave radiometers at OLI (Gaustad, 2014), ASCOS (Westwater et al., 2001), and SMT (Cadeddu, 2010). This figure shows that, in all cases, the simulated LWP decreases to near zero within hours of the aerosol removal time (09z in OLI, 06z in ASCOS and SMT). Both the OLI and ASCOS simulations show a slow LWP response to aerosol removal, with LWP approaching 0 g m^-2 in about 4-5 hours. The SMT simulation, on the other hand, has a very pronounced LWP response to aerosol removal, with LWP approaching zero within 2 hours. With instantaneous aerosol removal, the simulations should theoretically represent the fastest possible dissipation of a cloud due to insufficient aerosol. Where this simulated LWP response is slower than observations - such as OLI - it is likely that a lack of aerosol is not in fact the primary driver of dissipation. Where the simulated LWP response is more similar to observations (ASCOS and SMT), it is more likely that these are indeed cases of aerosol-limited dissipation.

[Figure 6: observed versus modeled LWP time series for (a) OLI, (b) ASCOS, and (c) SMT; the time axes span roughly 08z-17z for OLI and 05z-14z for ASCOS and SMT.]

Each case will now be discussed in detail. Since the time of aerosol removal was determined rather subjectively, and because the aim of this paper is not to compare directly with observations but instead to compare timescales, all further results will be discussed in the context of hours before/after aerosol removal, instead of UTC, to better compare cases with one another.

OLI

It is evident from Figure 6a that the OLI cloud dissipation was not due to a lack of available aerosol. While the observed LWP decreased from 100 g m^-2 to <10 g m^-2 in ∼1 hour, the modeled LWP took 4-5 times as long. While the OLI case may not be a real-world example of aerosol-limited dissipation, examining its simulated response to aerosol removal in comparison with the other cases still yields valuable insights into this phenomenon.

Domain-average 2D and column-integrated liquid and ice budgets, radiative heating, and vertical momentum flux for OLI are shown in Figure 7. After a 1-hour spin-up period (not shown), the cloud settles into quasi-equilibrium with approximately constant liquid precipitation reaching the surface and consistently positive integrated cloud droplet growth by condensation, which occurs primarily at cloud base, where supersaturation is largest, and at cloud top. In the cloud interior there is slight net liquid evaporation and net ice depositional growth due to an active WBF process.
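The timescale comparison used throughout this section can be made concrete with a small helper (Python/pandas). The 10 g m^-2 threshold and the input format are illustrative assumptions.

```python
# Sketch: hours from the aerosol-removal time until LWP first falls below a
# threshold, applicable to both observed and simulated series.
import pandas as pd

def dissipation_time(lwp: pd.Series, t_removal, threshold: float = 10.0):
    """lwp: LWP (g m^-2) with a DatetimeIndex; t_removal: pd.Timestamp."""
    after = lwp[lwp.index >= t_removal]
    below = after[after < threshold]
    if below.empty:
        return None                   # cloud never dissipates in the record
    return (below.index[0] - t_removal).total_seconds() / 3600.0
```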
The growth of ice and liquid is balanced by persistent precipitation of both liquid and ice hydrometeors throughout the pre-aerosol-removal time period. Riming makes up only a small part of the liquid and ice budgets. Radiative cooling (Fig. 7c) is strongest at cloud top as expected, which drives the in-cloud circulations. [Figure 7 caption: The liquid budget (a) shows condensational growth of cloud and rain, and removal by precipitation (precip) and riming. The ice budget (b) shows growth of all ice species by condensation (cond) and riming, and removal by precipitation (precip).]

After the removal of aerosol, a large increase in liquid precipitation and a smaller relative increase in ice precipitation occur. Removing aerosol inhibits the nucleation of new cloud droplets, meaning that any supersaturation must be condensed onto existing droplets rather than being used to create new droplets. This results in a rapid increase in droplet sizes (not shown) and an enhanced collision-coalescence process, leading to increased liquid precipitation. The precipitation is initially strongest near cloud base and contributes to a rise in cloud base. Since new droplets cannot be nucleated (and the available liquid to condense upon is being precipitated), supersaturation levels increase (not shown). Approximately three hours after aerosol removal, cloud condensation falls off sharply. Figure 7(d-e) shows that, after aerosol removal, there is an increase in ice growth which maximizes after liquid is mostly removed (Fig. 6a). However, at this point the cloud-top radiative cooling has ceased, circulations weaken, and the ice begins to slowly decay as well.

Figure 5a shows that, after aerosol removal, the OLI simulation dissipates with a rising cloud base and a lesser rising of the cloud top. However, radar observations (Fig. 2b) show a cloud that dissipates with a lowering cloud top. After aerosol removal (when temperature nudging is turned off) in the OLI simulation, the entire boundary layer cools and stabilizes (not shown). As a result of this stabilization, turbulence generated by cloud-top cooling is not able to extend as far down as before, resulting in a rising cloud bottom. It is not clear what is causing the cloud top to lower in the observed case, but this difference in cloud shape during dissipation, combined with the much faster observed LWP response compared to simulations, indicates that the observed dissipation is likely due to larger-scale factors such as the possible weak frontal passage described in section 2.1.1. We also speculate that the liquid water profile added to the model initialization results in cloud-top LWC
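The collision-coalescence argument above rests on simple arithmetic: at fixed liquid water content, the mean droplet radius scales as N^(-1/3), so a falling droplet number implies rapidly growing droplets. The sketch below (Python) makes this explicit; the liquid water content and droplet numbers are illustrative, not the simulated values.

```python
# Sketch: mean droplet radius at fixed LWC as droplet number N falls after
# aerosol removal; the population moves quickly toward drizzle-sized drops.
import numpy as np

RHO_W = 1000.0                                   # liquid water density, kg m^-3

def mean_radius(lwc_kg_m3: float, n_per_m3: float) -> float:
    """Mean-volume radius (m) of a monodisperse droplet population."""
    return (3.0 * lwc_kg_m3 / (4.0 * np.pi * RHO_W * n_per_m3)) ** (1.0 / 3.0)

lwc = 0.2e-3                                     # 0.2 g m^-3 of liquid water
for n_cm3 in (100.0, 10.0, 1.0):                 # droplet number falling
    r_um = mean_radius(lwc, n_cm3 * 1e6) * 1e6
    print(f"N = {n_cm3:6.1f} cm^-3 -> r = {r_um:5.1f} um")   # ~8, 17, 36 um
```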
2022-01-28T16:37:28.779Z
2022-01-25T00:00:00.000
{ "year": 2022, "sha1": "a6d1ac7c7aa608766efa1ac93fad2b66dd372883", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/preprints/acp-2022-36/acp-2022-36.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "8d8fa7129160331157a5758f4d25cfb14925c825", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
119170783
pes2o/s2orc
v3-fos-license
Continuous formulation of the Loop Quantum Gravity phase space

In this paper, we study the discrete classical phase space of loop gravity, which is expressed in terms of the holonomy-flux variables, and show how it is related to the continuous phase space of general relativity. In particular, we prove an isomorphism between the loop gravity discrete phase space and the symplectic reduction of the continuous phase space with respect to a flatness constraint. This gives for the first time a precise relationship between the continuum and holonomy-flux variables. Our construction shows that the fluxes depend on the three-geometry, but also explicitly on the connection, explaining their non-commutativity. It also clearly shows that the flux variables do not label a unique geometry, but rather a class of gauge-equivalent geometries. This allows us to resolve the tension between the loop gravity geometrical interpretation in terms of singular geometry, and the spin foam interpretation in terms of piecewise flat geometry, since we establish that both geometries belong to the same equivalence class. This finally gives us a clear understanding of the relationship between the piecewise flat spin foam geometries and Regge geometries, which are only piecewise-linear flat: while Regge geometry corresponds to metrics whose curvature is concentrated around straight edges, the loop gravity geometry corresponds to metrics whose curvature is concentrated around not necessarily straight edges.

Introduction

The classical starting point of Loop Quantum Gravity (LQG) [1,2] is a Hamiltonian formulation of general relativity in terms of first order connection and triad variables. The basic fields parametrizing the phase space are chosen to be the su(2)-valued Ashtekar-Barbero connection A [3], and its canonically conjugate densitized triad field E, both being defined over spatial hypersurfaces foliating the spacetime manifold. The theory comes with a set of first class constraints, namely the vector constraint generating diffeomorphisms of the spatial hypersurface, the scalar constraint generating time reparametrizations, and the Gauss constraint generating internal SU(2) gauge transformations. As a first step towards the construction of the quantum theory, one defines a smearing of the classical Poisson algebra formed by the canonical pair (A, E) by introducing oriented graphs. Given a graph Γ embedded in the spatial manifold, the continuous variables A(x) and E(x) are replaced by a pair (h_e, X_e) associated to each edge e. The variable h_e ∈ SU(2) corresponds to the holonomy of the connection along the edge e, and X_e ∈ su(2) represents the "electric" flux of the densitized triad field across a surface dual to e. At the quantum level these new variables form the so-called holonomy-flux algebra [4], which is a cornerstone of the entire construction of LQG. The Hilbert space H_Γ of representations associated with this algebra is the so-called spin network Hilbert space. It captures only a finite number of degrees of freedom in the theory. One recovers the continuous kinematical Hilbert space by taking the projective limit of graph Hilbert spaces H_Γ. The main challenge is then to formulate a consistent and semi-classically meaningful version of the Hamiltonian constraint acting on the spin network basis. In this construction, two very different procedures are realized at once.
There is a discretization procedure in which the continuous fields are replaced by discrete holonomies and fluxes associated with graphs, and in the same stroke, these variables are promoted into quantum operators. The main idea we want to take advantage of is that the processes of discretization and quantization are totally independent, a point that has been unappreciated until now. In this work we would like to disentangle these two steps. We propose to study only the process of discretization using graphs, without delving into the quantization of the theory. This means that we first associate to a given graph a finite-dimensional holonomy-flux phase space generated by (h_e, X_e) ∈ T*SU(2). The phase space of loop gravity on a graph is obtained as a direct product over the edges of SU(2) cotangent bundles. The main point of the present paper is to understand the exact relationship between this finite-dimensional discrete phase space and the continuous phase space of gravity. We show explicitly that an element of the discrete phase space represents a specific equivalence class of continuous geometries. The advantage of considering classical loop gravity is threefold. First, it provides a truncation of the classical phase space of gravity in terms of finite-dimensional holonomy-flux phase spaces, whose quantizations are given by spin network states. Second, it allows us to shed some light on the geometrical interpretation of the holonomy-flux variables, and the type of geometry that they represent. For instance, we will see that both the singular geometry of LQG and the piecewise flat geometry of spin foam models are represented by the same flux data, as two representatives of the same equivalence class. As we will see in the end, our result also allows us to understand more precisely the relationship between the spin foam geometrical interpretation and Regge geometry. Namely, it shows that twisted geometries [5] described by fluxes can be understood as piecewise flat geometries which are not necessarily piecewise-linear flat, as is the case for Regge geometry [6,7]. Finally, this approach is designed to allow us to address at the classical level one of the most challenging questions of LQG: Is it possible to express a proper gravitational dynamics in terms of holonomies and fluxes? If there is a clear positive answer to this question at the classical level, then the quantization of loop gravity will be reduced to the treatment of quantization ambiguities in a finite-dimensional system. If, on the other hand, we get a negative answer at the classical level, then no quantization in terms of holonomy-flux variables can express the quantum gravitational dynamics. It is therefore of utmost importance to eventually understand the classical dynamics of general relativity in terms of the holonomy-flux representation. Let us stress that the classical picture of the loop gravity phase space that we develop here is, when quantized, related to the picture first proposed by Bianchi in [8]. In this precursor work, it is argued that the spin network Hilbert space can be identified with the state space of a topological theory on a flat manifold with defects. Our analysis makes the same type of identification at the classical level and emphasizes the fact that the frame field determines only an equivalence class of geometries. The idea that the discrete data labels only an equivalence class of geometries has already been advocated in [9] on a general basis.
Our approach gives a precise understanding of which set or equivalence class of continuous geometries is represented by the discrete geometrical data. We begin in section I by defining the continuous phase space of gravity in terms of the connection and triad variables A and E, and recall some facts about the process of symplectic reduction. In section II we introduce the discrete classical spin network phase space associated to a graph. In particular, we explain how to obtain the discrete data (h_e, X_e) starting from the continuous fields A and E, showing that the fluxes cannot depend only on E but need to involve the connection in their definition. This construction explains why the flux variable carries information about both intrinsic and extrinsic geometry, in agreement with what has been pointed out already in [5]. In section III, we prove that the discrete holonomy-flux phase space can be obtained as a symplectic reduction of the continuous phase space. This shows that the discrete data corresponds to an equivalence class of continuous three-geometries related by gauge transformations. In section IV we show that, given a particular gauge choice, the discrete data can be used to reconstruct a configuration of the continuous fields. We will show in particular that it is possible to represent a given equivalence class of geometries by either a singular gauge choice, in agreement with the LQG interpretation of polymer geometry, or a flat gauge choice corresponding to the geometrical interpretation of spin foams. Finally, in section V we discuss the notion of cylindrical consistency and cylindrical operators, and explain how it is possible to relate operators constructed on the discrete and the continuous phase spaces.

I. CONTINUOUS PHASE SPACE OF GRAVITY

The loop approach to quantum gravity relies on the well-known idea that the phase space of Lorentzian or Riemannian general relativity can be parametrized in terms of an su(2)-valued connection one-form A_a^i and a densitized triad field Ẽ^a_i, both fields being defined over a base three-dimensional spatial manifold Σ (which here we assume to be isomorphic to R^3). The su(2) Ashtekar-Barbero connection A_a^i is related to the spacetime so(3,1) spin connection ω_µ^{IJ} and to the geometrodynamical variables of the ADM phase space via

A_a^i = Γ_a^i + γ K_a^i,   (1.1)

where γ ∈ R − {0} is the Barbero-Immirzi parameter, Γ_a^i is the Levi-Civita spin connection, and K_a^i the extrinsic curvature one-form. The densitized triad and the three-dimensional frame field e_a^i are related by

Ẽ^a_i = (1/2) ε^{abc} ε_{ijk} e_b^j e_c^k.   (1.2)

These variables form the Poisson algebra

{A_a^i(x), Ẽ^b_j(y)} = γ δ_a^b δ^i_j δ^3(x, y).   (1.3)

The classical configuration space of the theory is the space A of smooth connections on Σ. The phase space is the cotangent bundle P ≡ T*A, and carries a natural symplectic potential. In the following we will denote by E^i_{ab} (without tilde) the Lie algebra-valued two-form related to the densitized vector Ẽ^a_i through

E^i_{ab} = ε_{abc} Ẽ^{ci}.   (1.4)

The symplectic potential of the cotangent bundle is given by

Θ(δ) = ∫_Σ Tr(E ∧ δA),   (1.5)

where we denote by Tr the natural metric on su(2), which is invariant under the adjoint action Ad SU(2) of the group. The phase space P also carries an action of the gauge group SU(2) and of spatial diffeomorphisms. In fact, since P is 18·∞^3-dimensional, the (first class) constraints of the canonical theory have to be taken into account in order to obtain the physical phase space with 4·∞^3 degrees of freedom. This can be achieved through the process of symplectic (or Hamiltonian) reduction, which we now describe.
Let P be a symplectic manifold, which is seen as the classical phase space of the theory of interest, and G a group of transformations. Suppose that the infinitesimal group transformations are generated via Poisson bracket by a Hamiltonian H. Then the Marsden-Weinstein theorem [10,11] ensures that the symplectic reduction of P by the group G, denoted by the double quotient P//G, is still a symplectic manifold and carries a unique symplectic form. The reduced phase space is given by imposing the constraints and dividing the constraint surface by the action of gauge transformations. This is written as

P//G ≡ H^{-1}(0)/G.   (1.6)

For notational simplicity, we will denote the group of transformations G and the associated Hamiltonian H with the same letters. In the case of four-dimensional gravity, the physical phase space is obtained from the kinematical (unconstrained) phase space P by performing three symplectic reductions. The first one is defined with respect to the group of SU(2) gauge transformations G ≡ C^∞(Σ, SU(2)). Since the action of this gauge group on P is Hamiltonian, we can define the gauge-invariant phase space T*A//G. More precisely, the Hamiltonian generating these transformations is the smeared Gauss constraint

G(α) = ∫_Σ Tr(α d_A E),   (1.7)

where d_A denotes the covariant differential and α is a Lie algebra-valued function. Its infinitesimal action on the phase space variables is given by

δ_α A = d_A α,   δ_α E = [E, α].   (1.8)

The other relevant symplectic reduction is defined with respect to the group of spatial diffeomorphisms, and enables one to construct the diffeomorphism-invariant phase space T*A//(G × Diff(Σ)). Here, the action of the group of diffeomorphisms on the phase space variables is given by

δ_ξ A = L_ξ A,   δ_ξ E = L_ξ E,   (1.9)

where L_ξ is the Lie derivative along the vector field ξ^a. This group is generated through Poisson brackets with the Hamiltonian

D(ξ) = ∫_Σ Tr(E ∧ L_ξ A).   (1.10)

Finally, the physical phase space can be obtained from the gauge and diffeomorphism-invariant phase space by performing a symplectic reduction with respect to the scalar constraint. The latter is given by

S(N) = ∫_Σ N (Ẽ^a_i Ẽ^b_j / √|det Ẽ|) [ε^{ij}_k F_{ab}^k − 2(γ^2 − σ) K_a^i K_b^j],   (1.11)

where the smearing variable is the lapse function N, and σ = ∓1 in Lorentzian or Riemannian signature, respectively. Notice that for a (anti) self-dual connection (γ = ±i in the Lorentzian case, or ±1 in the Riemannian case) the second term vanishes and the constraint simplifies greatly.

II. SPIN NETWORK PHASE SPACE

In loop gravity, one does not work directly with the continuous kinematical Hilbert space, but instead with the projective limit of Hilbert spaces associated to embedded oriented graphs Γ [4,12]. The Hilbert space associated with one graph is the so-called spin network Hilbert space. It represents a truncation of the full Hilbert space to a finite number of degrees of freedom. What we would like to emphasize here is that spin network Hilbert spaces can be obtained as the quantization of finite-dimensional phase spaces associated to embedded oriented graphs Γ. Each of these truncated phase spaces forms the so-called holonomy-flux algebra. This fact has already been recognized in the literature [9] and is at the basis of most of the recent semi-classical analyses of LQG [13][14][15]. Our main point is that the process of truncating the theory to a finite number of degrees of freedom and the process of quantizing this truncated theory are separate constructions which have to be studied individually. Here we would like to adopt the point of view that the continuous kinematical phase space P can be described as the projective limit of phase spaces P_Γ associated to embedded oriented graphs Γ.
In particular, we would like to understand the relationship between these finite-dimensional phase spaces P_Γ and the continuous phase space variables. An oriented graph Γ is defined as a one-cellular complex [16] consisting of a set E_Γ of oriented edges e (one-dimensional submanifolds of Σ) and a set V_Γ of vertices v. The end points of the oriented edges are the vertices, and we denote by s, t the two functions assigning a source vertex s(e) and a target vertex t(e) to each edge e. We also denote by e^{-1} the edge e with a reversed orientation. The kinematical spin network phase space P_Γ associated with such a graph is isomorphic to a direct product over the edges of SU(2) cotangent bundles:

P_Γ ≡ ×_{e ∈ E_Γ} T*SU(2).   (2.1)

Explicitly, this phase space is labeled by couples (h_e, X_e) ∈ SU(2) × su(2) of Lie group and Lie algebra elements for each edge e ∈ Γ. This data depends on a choice of orientation for each edge, and under an orientation reversal we have

h_{e^{-1}} = h_e^{-1},   X_{e^{-1}} = −h_e^{-1} X_e h_e.   (2.2)

Since we have chosen here to trivialize T*SU(2) with right-invariant vector fields, this last relation means that under orientation reversal of the edge we obtain the left-invariant ones. The variables (h_e, X_e) satisfy the Poisson algebra

{h_e, h_{e'}} = 0,   {X_e^i, h_{e'}} = δ_{e,e'} τ^i h_e,   {X_e^i, X_{e'}^j} = −δ_{e,e'} ε^{ij}_k X_e^k,   (2.3)

where we have used notations such that X_e ≡ X_e^i τ_i. As shown in [18,19], the symplectic potential and symplectic two-form for this Poisson structure are given respectively by

Θ_Γ = Σ_{e ∈ E_Γ} Tr(X_e dh_e h_e^{-1}),   Ω_Γ = dΘ_Γ.   (2.4)

On the spin network phase space P_Γ, we can define the action of the gauge group G_Γ ≡ SU(2)^{|V_Γ|} at the vertices V_Γ of the graph. Given an element g_v ∈ SU(2), finite gauge transformations are given by

h_e −→ g_{s(e)} h_e g_{t(e)}^{-1},   X_e −→ g_{s(e)} X_e g_{s(e)}^{-1},   (2.5)

where s(e) (resp. t(e)) denotes the starting (resp. terminal) vertex of e. This action on the variables h_e and X_e is generated at each vertex by the Hamiltonian

G_v = Σ_{e | s(e)=v} X_e + Σ_{e | t(e)=v} X_{e^{-1}},   (2.6)

which can be understood as a discrete Gauss constraint. Since this action is Hamiltonian, we can define the gauge-invariant phase space by symplectic reduction

P_Γ // G_Γ,   (2.7)

where, as explained above, the double quotient means imposing the Gauss constraint at each vertex v and then dividing out the action of the SU(2) gauge transformation (2.5) that it generates. The question we would like to address is: What is the relationship between the continuous phase space P described in the previous section and the spin network phase space P_Γ? More precisely, we would like to know whether it is possible to reconstruct from the discrete data P_Γ a point in the continuous phase space P. In order to describe the relationship between the discrete and continuous data, we need a map from the continuous to the discrete phase space. We can then study its kernel and see to what extent it can be inverted. This is the object of the next sections.

A. From continuous to discrete data

In order to construct the discrete data, let us first choose an embedding f_Γ : Γ −→ Σ of the graph Γ into the spatial manifold Σ. Given this embedding, it is well understood in the discrete picture that the group elements h_e represent holonomies of the Ashtekar-Barbero connection A_a^i along the edges e. It is necessary to work with such objects because an important step toward the quantization of the canonical theory is the smearing of the Poisson algebra (1.3). Since the connection A_a^i is a one-form, it is natural to smear it along paths e. Now, we could just take the integral of A along e as a smearing, but this would not respect the gauge transformations. What is needed is a smearing that intertwines the notions of continuous and discrete gauge transformations.
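As a numerical aside, the covariance of the discrete Gauss constraint under the vertex gauge action, G_v −→ g_v G_v g_v^{-1}, is easy to check on toy data. The sketch below (Python/NumPy) is our own illustration and not from the paper; the matrices and the trivial flux configuration are toy assumptions.

```python
# Toy check that the closure G_v = sum of fluxes at a vertex is preserved by
# a gauge rotation g_v acting as X_e -> g_v X_e g_v^{-1}.
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2_exp(v):
    """g = exp(-i v.sigma / 2) for a real 3-vector v."""
    theta = float(np.linalg.norm(v))
    if theta == 0.0:
        return np.eye(2, dtype=complex)
    n = sum((vi / theta) * s for vi, s in zip(v, PAULI))
    return np.cos(theta / 2) * np.eye(2, dtype=complex) - 1j * np.sin(theta / 2) * n

# Three su(2)-valued fluxes at a vertex, closed by construction (G_v = 0).
X1 = -0.5j * (0.3 * PAULI[0] + 0.1 * PAULI[2])
X2 = -0.5j * (-0.2 * PAULI[0] + 0.4 * PAULI[1])
X3 = -(X1 + X2)
g = su2_exp(np.array([0.4, -0.2, 0.7]))               # gauge rotation at v
gauged = [g @ X @ g.conj().T for X in (X1, X2, X3)]   # X_e -> g_v X_e g_v^{-1}
print(np.allclose(sum(gauged), 0.0))                  # True: G_v = 0 preserved
```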
It is well known that such a gauge-covariant smearing is given by the notion of parallel transport along e, encoded in the holonomy

h_e[A] ≡ →exp ∫_e A = →exp ∫_0^1 dt ė^a(t) A_a^i(e(t)) τ_i,   (2.8)

where ė^a denotes the tangent vector to the path and →exp denotes the path-ordered exponential. Let us recall some fundamental properties of the holonomy functional. The holonomy is invariant under reparametrizations of the path e, and the holonomy of a path corresponding to a single point is the identity. If we consider the composition e = e_1 • e_2 of two paths which are such that s(e_2) = t(e_1), the holonomy satisfies

h_{e_1 • e_2} = h_{e_1} h_{e_2}.   (2.9)

If we reverse the orientation of a path, we have

h_{e^{-1}} = h_e^{-1}.   (2.10)

These properties come from the fact that the holonomy is a representation of the groupoid of oriented paths [20]. Under SU(2) gauge transformations, the holonomy transforms as

g ⊲ h_e = g_{s(e)} h_e g_{t(e)}^{-1},   (2.11)

which shows that the finite gauge transformation g ⊲ A = gAg^{-1} + gdg^{-1} of the connection becomes a discrete gauge symmetry acting on the vertices defining the boundary of the edge e. Finally, under the action of a diffeomorphism Φ ∈ Diff(Σ), the holonomy transforms as

h_e[Φ*A] = h_{Φ(e)}[A].   (2.12)

The exact meaning of the "momentum" variable X_e is less clear. Roughly speaking, we usually build a flux operator by smearing the field Ẽ^a_i along a surface F_e dual to an edge e [2]. But if one wants this integrated flux to have a covariant behavior under gauge transformations, it is essential for the integration along F_e to involve some notion of parallel transport. Indeed, the naive definition of the flux

X_{F_e} ≡ ∫_{F_e} E   (2.13)

is not covariant under gauge transformations, i.e.

g ⊲ X_{F_e} = ∫_{F_e} g E g^{-1} ≠ g_v X_{F_e} g_v^{-1}.   (2.14)

This is an important point which has often been ignored in the LQG literature, the only noticeable exceptions being [21], and more recently [5,22]. For the holonomy, the only reason we consider the parallel transport operator instead of the simple integral of A along e is to have a discretization covariant under gauge transformations. It is as important to preserve this covariance for the flux as it is for the holonomy. The way around this problem is to define a flux operator which also depends on the connection through its holonomy. Given an oriented edge e ∈ Γ and a point u on this edge, we choose a surface F_e intersecting e transversally at u = F_e ∩ e. We also choose a set of paths π_e assigning to any point x ∈ F_e a unique path π_e going from the source s(e) to x. Such a path starts at the source vertex of the edge e, goes along e until it reaches the intersection point u = F_e ∩ e, and then goes from u to any point x ∈ F_e while staying tangential to the surface F_e. More precisely, we have π_e : F_e × [0,1] −→ Σ such that π_e(x, 0) = s(e) and π_e(x, 1) = x. With the set of data (F_e, π_e), one can define the flux operator

X_e ≡ X_{(F_e, π_e)} = ∫_{F_e} h_{π_e}(x) E(x) h_{π_e}(x)^{-1},   (2.15)

where

h_{π_e}(x) ≡ →exp ∫_{π_e(x)} A.   (2.16)

Notice that by definition, the source of the path π_e is s(e), and its target is the point x ∈ F_e. Therefore, under the gauge transformations the flux operator becomes

g ⊲ X_e = g_{s(e)} X_e g_{s(e)}^{-1}.   (2.17)

One can see from the definition of the paths π_e that under an orientation reversal of the edge we have π_{e^{-1}} = e^{-1} • π_e, and therefore

h_{π_{e^{-1}}}(x) = h_e^{-1} h_{π_e}(x).   (2.18)

Moreover, the surface F_e possesses a reversed orientation F_{e^{-1}} = −F_e, and thus we have

X_{e^{-1}} = −h_e^{-1} X_e h_e,   (2.19)

which proves that our mapping is consistent with (2.2). Notice also that any two fluxes which differ only by the choice of paths and surfaces, but possess the same intersection with e, are in the commutant of the holonomy algebra:

{X_{(F_e, π_e)} − X_{(F'_e, π'_e)}, h_e} = 0.   (2.20)

Finally, one can see that the mapping we have described reproduces the Poisson algebra (2.3).
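The holonomy and its composition rule (2.9) can be checked numerically. The sketch below (Python/NumPy with SciPy) is our own illustration, not from the paper: it approximates the path-ordered exponential by a product of exponentials over small segments of a toy connection.

```python
# Sketch: holonomy of a toy su(2) connection along a discretized path, with
# segments composed so that h_{e1 . e2} = h_{e1} h_{e2} as in Eq. (2.9).
import numpy as np
from scipy.linalg import expm

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
TAU = [-0.5j * s for s in PAULI]          # anti-Hermitian su(2) generators

def connection(x):
    """Toy su(2)-valued one-form: three components A_a = A_a^i tau_i."""
    return [np.sin(x[0]) * TAU[0] + x[1] * TAU[2],
            x[2] * TAU[1],
            np.cos(x[1]) * TAU[0]]

def holonomy(path):
    """Approximate path-ordered exponential along a list of 3D points."""
    h = np.eye(2, dtype=complex)
    for x0, x1 in zip(path[:-1], path[1:]):
        mid = 0.5 * (np.asarray(x0) + np.asarray(x1))
        dx = np.asarray(x1) - np.asarray(x0)
        h = h @ expm(sum(Ai * di for Ai, di in zip(connection(mid), dx)))
    return h

t = np.linspace(0.0, 1.0, 200)
e1 = [(s, 0.0, 0.0) for s in t]           # edge from (0,0,0) to (1,0,0)
e2 = [(1.0, s, 0.0) for s in t]           # edge from (1,0,0) to (1,1,0)
print(np.allclose(holonomy(e1 + e2[1:]), holonomy(e1) @ holonomy(e2)))  # True
```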
To verify that the Poisson algebra (2.3) is reproduced, one can choose the surface F_e such that the intersection point u = F_e ∩ e approaches s(e); one then obtains

{X_e^i, h_e} = τ^i h_e.

Finally, we know that the requirement of consistency with the Jacobi identity imposes that the fluxes do not commute among each other. This property, which seems inconsistent if one thinks of X_e as depending purely on the (commuting) densitized triad field, is perfectly understandable once we realize that the flux depends also on the connection, and this explains the "mystery" behind the non-commutativity of the fluxes [23]. This is consistent with the understanding of the spin network phase space in terms of twisted geometries [5], where it appears clearly that the flux operators also contain information about the holonomies, and cannot be thought of as being purely geometrical. In other words, the flux operators are not commuting because they capture information not only about the intrinsic geometry, but also about the extrinsic curvature. The map that we have described depends on three types of data. It depends on a choice of embedding f_Γ of Γ into Σ, a choice of surface F_e transverse to the edge e at u, and a choice of path π_e going from s(e) to a point x ∈ F_e. Once this data is given, we can construct a map

I : P −→ P_Γ,   (A, E) −→ (h_e, X_e),

which has the key property of intertwining gauge transformations on the continuous and discrete phase spaces, is compatible with the orientation reversal of the edges, and respects the Poisson structure of T*SU(2).

B. From discrete to continuous data

Now we would like to investigate to what extent it is possible to invert the map from continuous to discrete data I : P −→ P_Γ. In other words, to what extent does the discrete data determine the continuous data? Can we reconstruct a unique representative of the continuous data starting from the discrete one, or describe a specific equivalence class? At first sight, this seems like an impossible task. Indeed, if one first focuses on the connection, one needs to choose an embedding f_Γ to construct the holonomies, so there is no way the discrete group elements will determine the connection unless we know this embedding. Moreover, one clearly sees that the flux operator is not uniquely defined by the electric field E. There are several ambiguities in its definition. There are many possible choices of surfaces F_e that are transverse to the edge e, and also many possible paths that one can choose on F_e. Different choices lead to different mappings from the continuous data to the discrete data. This means that giving a flux X_{(F_e, π_e)} (which we will call X_e for simplicity) does not allow one to reconstruct a continuous field E, which constitutes a fundamental ambiguity. This state of affairs is fine if one treats the discrete data as some approximate description of continuous geometry which only takes physical meaning in some continuous limit. This is the usual point of view [9], and it implies that operators expressed in terms of the fluxes X_e do not have a sharp semi-classical geometric interpretation. In this work we would like to be more ambitious and interpret the discrete data as potential initial value data for the continuous theory of gravity. The challenge is to show that one can reconstruct continuous fields (A, E) explicitly from the knowledge of the discrete data (h_e, X_e). How can this be possible in light of all the ambiguities that we have listed above? In order to make some progress in this direction, let us first remark that there are configurations of fields for which the ambiguities disappear.
This is the case in particular for a flat connection. Suppose that we focus on a region C_v of simple topology (isomorphic to a three-ball) around a vertex v ∈ C_v, and that in this region the connection A is flat. In this case, the expression (2.15) for the flux becomes independent of the system of paths π_e, since the flatness of the connection implies that there exists an SU(2) element a(x) such that A = a d a^{-1} and h_{π_e}(x) = a(v) a(x)^{-1}. Indeed, we have

X_e = ∫_{F_e} h_{π_e}(x) E(x) h_{π_e}(x)^{-1} = a(v) (∫_{F_e} a^{-1} E a) a(v)^{-1},   (2.24)

and the dependence on the system of paths has disappeared. Moreover, one can see that the Gauss law expresses the fact that X^i_{F_e} = X^i_{F'_e}: if F_e and F'_e have the same oriented boundary, their union encloses a volume V ⊂ C_v and we have that

X_{F_e} − X_{F'_e} = ∫_{∂V} a^{-1} E a = ∫_V d(a^{-1} E a) = ∫_V a^{-1} (d_A E) a = 0.   (2.25)

In the next section, we are going to make this statement more precise, and study the case of a partially flat connection.

III. PARTIALLY FLAT CONNECTION

In this section, we formulate and prove the equivalence between the continuous phase space of partially flat geometries and the discrete spin network phase space. In order to do so, we first need to introduce some notions of topology.

Definition 1. A cellular decomposition ∆ of a three-dimensional space Σ is a presentation of Σ as a disjoint union of cells (open submanifolds of dimensions zero to three) such that: i) Σ is the union of the closures of the cells; ii) The boundary of the closure of an n-dimensional cell is contained in a finite union of cells of lower dimension.

The n-skeleton ∆_n of a cellular decomposition is the union of cells of dimension less than or equal to n. Clearly, the n-skeleton of a cellular decomposition is also a cellular decomposition. In particular, the one-skeleton ∆_1 of a cellular decomposition is a graph. Let us now suppose that we have a graph Γ embedded in Σ. We need to introduce the notion of a cellular decomposition dual to Γ.

Definition 2. A cellular decomposition ∆ of a three-dimensional space Σ is said to be dual to the graph Γ if there is a one-to-one correspondence v −→ C_v between vertices of Γ and three-cells of ∆, and a one-to-one correspondence e −→ F_e between edges of Γ and two-cells of ∆, such that: i) Each vertex v of Γ is contained in the interior of its dual three-cell C_v; ii) The two-cells F_e intersect Γ transversally at one point only, and the intersection belongs to the interior of the edge e of Γ.

In other words, a cellular decomposition dual to Γ is such that each vertex of Γ is dual to a three-cell, and each edge of Γ is dual to a two-cell. Finally, let us consider a pair (Γ, Γ*) of graphs embedded in Σ.

Definition 3. We say that an embedded graph Γ* is dual to the embedded graph Γ, or that (Γ, Γ*) forms a pair of dual graphs, if there exists a cellular decomposition ∆ dual to Γ whose one-skeleton ∆_1 is Γ*.

From now on, we consider that (Γ, Γ*) is a pair of dual embedded graphs, and we denote by ∆ the cellular decomposition dual to Γ with a one-skeleton ∆_1 given by Γ*. Notice that if we take any diffeomorphism Φ_o on Σ which does not act on Γ* or the vertices of Γ, we obtain an equivalent cellular decomposition Φ_o(∆). Given such a pair of dual graphs, we are going to construct a certain phase space P_{Γ,Γ*}, and prove that it is the continuous analogue of the discrete spin network phase space P_Γ. In fact, we are going to show that there is a symplectomorphism between P_{Γ,Γ*} and P_Γ.

A. The reduced phase space P_{Γ,Γ*}

To define the reduced phase space P_{Γ,Γ*}, we first construct a group F_{Γ*} × G_Γ of gauge transformations acting on P.
For this, let us consider an infinite-dimensional Abelian group of transformations F_{Γ*} parametrized by Lie algebra-valued one-forms φ^i ∈ Ω^1(Σ, su(2)) which have the property that they vanish on Γ*:

φ|_{Γ*} = 0.   (3.1)

This group action is Hamiltonian and generated by the curvature constraint

F(φ) ≡ ∫_Σ Tr(φ ∧ F(A)),   (3.2)

whose action on the continuous phase space P is given by

δ_φ A = 0,   δ_φ E = d_A φ.   (3.3)

This constraint enforces the flatness of the connection outside of the one-skeleton graph Γ*. The second group, G_Γ, is the group of gauge transformations parametrized by Lie algebra-valued functions α^i ∈ Ω^0(Σ, su(2)) which have the property that they vanish on the vertices of Γ:

α(v) = 0 for all v ∈ V_Γ.   (3.4)

This group action is also Hamiltonian. It is generated by the smeared Gauss constraint

G(α) = ∫_Σ Tr(α d_A E),   (3.5)

whose infinitesimal action on the phase space variables is given by

δ_α A = d_A α,   δ_α E = [E, α].   (3.6)

From the various Poisson brackets we see that the Hamiltonians (3.2) and (3.5) form a first class algebra. We are interested in the phase space obtained from P by symplectic reduction with respect to F_{Γ*} and G_Γ, which we denote by

P_{Γ,Γ*} ≡ P // (F_{Γ*} × G_Γ) = C_{Γ,Γ*} / (F_{Γ*} × G_Γ),   (3.8)

where

C_{Γ,Γ*} ≡ {(A, E) ∈ P | F(A) = 0 on Σ\Γ*, d_A E = 0 on Σ\V_Γ}   (3.9)

is the constrained space. This is the infinite-dimensional space of flat SU(2) connections on Σ̃ ≡ Σ\Γ*, and fluxes satisfying the Gauss law outside of V_Γ. Once we divide this constrained space by the action of the two gauge groups introduced above, we obtain the finite-dimensional orbit space P_{Γ,Γ*}. We are going to prove that P_{Γ,Γ*} is the continuous analogue of the discrete spin network phase space P_Γ. Let us start by constructing a three-dimensional cellular decomposition of the region. Since we have chosen Γ* to be the one-skeleton ∆_1 of the cellular decomposition ∆ of Σ, the cellular decomposition of Σ̃ is simply given by ∆̃ ≡ ∆\∆_1. Explicitly, the decomposition ∆̃ can be written as

∆̃ = (⋃_{v ∈ V_Γ} C_v) ∪ (⋃_{e ∈ E_Γ} F_e).   (3.10)

To solve the curvature constraint, let us define on a three-cell C_v a group-valued map a_v(x) : C_v −→ SU(2) as the path-ordered exponential

a_v(x) ≡ →exp ∫_x^v A,   (3.11)

where the integration can be taken over any arbitrary path from the point x ∈ C_v to the vertex v because the connection is flat and C_v is simply connected. By construction, this map is such that a_v(v) = 1. This allows us to reconstruct on C_v the flat connection A as

A = a_v d a_v^{-1}.   (3.12)

The second constraint to satisfy is the Gauss law outside of the vertex v which lies inside the cell C_v. Because the connection is flat, the covariant derivative of the electric field E can be written as

d_A E = a_v (dX_v) a_v^{-1},   (3.13)

where we have introduced the Lie algebra-valued two-form field

X_v ≡ a_v^{-1} E a_v.   (3.14)

Therefore, we see that the Gauss law implies that the two-form X_v is closed outside of v, since

d_A E = 0  ⟺  dX_v = 0.   (3.15)

The electric field can now easily be reconstructed, since we have

E = a_v X_v a_v^{-1}.   (3.16)

One can conclude that a general solution of the two constraints F(A) = d_A A = 0 and d_A E = 0, on C_v and C_v − {v} respectively, is given in terms of a Lie algebra-valued closed two-form X_v and a group element a_v : C_v −→ SU(2), the connection and flux fields being given by (3.12) and (3.16). Now we can extend this solution to the whole space Σ̃ by gluing consistently the solutions on each cell. We have labeled the three-dimensional cells C_v with vertices of the graph Γ. Consequently, the two-dimensional cells F_e, labeled by edges e = (v_1 v_2) of Γ connecting two vertices (such that s(e) = v_1 and t(e) = v_2), are obtained by intersecting two three-dimensional cells as

F_e = C̄_{v_1} ∩ C̄_{v_2},   (3.17)

where the bar denotes the closure of the cell. We assume that the two-dimensional cells F_e are oriented, and that their orientation is reversed when we change the orientation of the edge e.
Demanding that the connection and flux fields be continuous across the two-dimensional cells amounts to assuming that there exists, for each F_e, an SU(2) element h_e such that

a_{v_2}(x) = a_{v_1}(x) h_e,   X_{v_1} = h_e X_{v_2} h_e^{-1}   for x ∈ F_e.   (3.18)

Notice that the first equality can be written as

h_e = a_{v_1}(x)^{-1} a_{v_2}(x),   (3.19)

where x is any point on the two-cell F_e, and once again the definition does not depend on x because the connection is flat. By construction, one can see that under an orientation reversal we have h_{e^{-1}} = h_e^{-1}. This construction shows that the constrained space C is isomorphic to the data (a_v, X_v, h_e), subject to the conditions (3.18). We are now interested in the quotient of this constraint space by the gauge group F_{Γ*} × G_Γ. Elements of this gauge group are pairs (φ(x), g_o(x)), where φ is a Lie algebra-valued one-form which vanishes on Γ*, and g_o is an element of SU(2) (obtained by exponentiation of α) fixed to the identity of the group at the vertices V_Γ. The action of F_{Γ*} × G_Γ on the pair (A, E) ∈ P translates on the constraint surface C into an action on the data (a_v, X_v, h_e) given by

a_v(x) −→ g_o(x) a_v(x),   X_v −→ X_v + d(a_v^{-1} φ a_v),   h_e −→ h_e.   (3.20)

Following (2.15), let us compute the flux X_e across a surface dual to an edge e which is such that s(e) = v. It is given by

X_e = ∫_{F_e} h_{π_e}(x) E(x) h_{π_e}(x)^{-1} = ∫_{F_e} a_v^{-1} E a_v = ∫_{F_e} X_v,   (3.21)

where we have used the fact that a_v(v) = 1. We see that the observables which are invariant under this gauge transformation are simply given by the holonomies h_e and the fluxes X_e.

B. The symplectomorphism between P_{Γ,Γ*} and P_Γ

Now we come to our main result, which is the symplectomorphism between the continuous phase space P_{Γ,Γ*} and the discrete spin network phase space P_Γ. Let us construct a map between the constrained continuous data in C_{Γ,Γ*} (see (3.9)) and discrete data on the spin network phase space P_Γ, and denote it by

I : C_{Γ,Γ*} −→ P_Γ.   (3.22)

For this, we define for every three-cell C_v a group-valued map a_v : C_v −→ SU(2) such that a_v(v) = 1, and a Lie algebra-valued two-form X_v ∈ Ω^2(C_v, su(2)) closed outside the vertices of Γ. Given these fields, we can reconstruct on C_v the connection and the two-form field using

A = a_v d a_v^{-1},   E = a_v X_v a_v^{-1}.   (3.23)

The map I is then defined by

h_e = a_{s(e)}(x)^{-1} a_{t(e)}(x),   (3.24a)
X_e = ∫_{F_e} X_{s(e)},   (3.24b)

where, in the definition of h_e, x is any point on the two-cell F_e, and once again the definition does not depend on x because the connection is flat. To compute the holonomy h_e, we have used the group elements a_{s(e)}(x) and a_{t(e)}(x) to define the connection on the two cells dual to the vertices s(e) and t(e), respectively. It is possible to use equation (3.24b) to write down the relationship between the discrete and continuous Gauss laws. We already know from (3.15) that the Gauss law is equivalent to the requirement that the two-form X_v be closed outside of the vertex v. We can now write that

G_v = Σ_{e | s(e)=v} X_e + Σ_{e | t(e)=v} X_{e^{-1}} = ∫_{∂C_v} X_v,   (3.25)

which relates the continuous and discrete constraints. This shows that the violation of the continuous Gauss constraint is located at the vertices of Γ, and given by a distribution determined by the discrete Gauss constraint:

d_A E = Σ_{v ∈ V_Γ} (a_v G_v a_v^{-1}) δ_v.   (3.26)

Since the map I is invariant under the gauge transformations F_{Γ*} × G_Γ, we can write it as a map [I] : P_{Γ,Γ*} −→ P_Γ. We will now show that this map is not only invertible, but also a symplectomorphism.

Proposition 1. The map [I] : P_{Γ,Γ*} −→ P_Γ defined by (3.24) is a symplectomorphism, and is invariant under the action of diffeomorphisms connected to the identity preserving Γ* and the set V_Γ of vertices of Γ.

We are going to prove this proposition in the remainder of this work.
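Before the proof, a small numerical aside (our own illustration, not from the paper): the gluing formula (3.24a) is manifestly invariant under the gauge action (3.20), since a common factor g_o(x) cancels between the two cell maps. The toy check below (Python/NumPy) uses random SU(2) matrices standing in for a_{v_1}(x), a_{v_2}(x), and g_o(x) at a single point x on a face.

```python
# Toy check that h_e = a_{v1}(x)^{-1} a_{v2}(x) is unchanged under
# a_v(x) -> g(x) a_v(x) with g equal to the identity at the vertices.
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix built from a normalized quaternion."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    a, b, c, d = q
    return np.array([[a + 1j*b, c + 1j*d], [-c + 1j*d, a - 1j*b]])

rng = np.random.default_rng(0)
a_v1_x = random_su2(rng)          # a_{v1}(x) at a point x on the face F_e
a_v2_x = random_su2(rng)          # a_{v2}(x) at the same point
g_x = random_su2(rng)             # gauge transformation at x (g = 1 at vertices)

h_e = np.conj(a_v1_x.T) @ a_v2_x
h_e_gauged = np.conj((g_x @ a_v1_x).T) @ (g_x @ a_v2_x)
print(np.allclose(h_e, h_e_gauged))   # True: h_e is gauge invariant
```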
Before giving the proof, let us stress that this result implies the existence of an inverse map which allows one to reconstruct from the discrete data an equivalence class [A(h_e), E(h_e, X_e)] of continuous configurations satisfying the curvature and Gauss constraints (i.e. configurations in the constrained space C). Explicitly, this equivalence class is defined with respect to the equivalence relation (3.27), where once again φ is a Lie algebra-valued one-form vanishing on Γ*, and g_o is an element of SU(2) fixed to the identity of the group at the vertices V_Γ. By construction, we see that the map I intertwines the notion of gauge transformations, i.e. satisfies I(g ⊲ A, g ⊲ E) = g ⊲ I(A, E).

Evidently, Proposition 1 implies a similar proposition for the gauge-invariant phase spaces. Indeed, if one defines the corresponding quotients by the full gauge group, where G = C^∞(Σ, SU(2)) is the group of full SU(2) gauge transformations, we have the symplectomorphism P^G_{Γ,Γ*} = P^G_Γ between the continuous and discrete gauge-invariant phase spaces. This follows directly from Proposition 1, and the fact that G factorizes as G_Γ times the group of discrete gauge transformations acting at the vertices v ∈ V_Γ only. Notice that when we act with the full group G of SU(2) transformations, the holonomies h_e and the fluxes X_e clearly become gauge-covariant. Indeed, since the group element g is not fixed to the identity at the vertices v anymore, we have g ⊲ a_v(x) = g(x) a_v(x) g(v)^{-1}, and therefore the definition (3.18) tells us that we have g ⊲ h_e = g_{v_1} h_e g_{v_2}^{-1}, where e is an edge of Γ connecting the vertices v_1 and v_2.

C. The symplectic structures

In this subsection we use the map (3.24) to prove the equivalence of the symplectic structures on the continuous and discrete spaces P_{Γ,Γ*} and P_Γ. We know that the spaces P and P_Γ are symplectic manifolds, their symplectic structures being given by (1.5) and (2.4) respectively. Since the space P_{Γ,Γ*} has been obtained from P by symplectic reduction, the Marsden-Weinstein theorem ensures that it also carries a symplectic structure. We are now going to show that the symplectic structures on the spaces P_{Γ,Γ*} and P_Γ are in fact identical.

Let us start with the symplectic potential coming from the first order formulation of gravity, in which ⋆ denotes the Hodge duality map in the Lie algebra su(2). We first use the cellular decomposition ∆ to evaluate this symplectic potential on the set of partially flat connections and write it as a sum of edge contributions. This is exactly the symplectic potential associated to |E_Γ| copies of the cotangent bundle T*SU(2). It shows that the symplectic structure of the spin network phase space is equivalent to that of first order gravity evaluated on the set of partially flat connections. In particular, since the symplectic forms are invertible by definition, this proves that the continuous phase space P_{Γ,Γ*} is indeed finite-dimensional and isomorphic to P_Γ.

D. Action of diffeomorphisms

Now we prove the second point of Proposition 1, which concerns the invariance of the symplectomorphism under a certain class of diffeomorphisms. The isomorphism I : P_{Γ,Γ*} → P_Γ depends on a choice of cellular decomposition ∆ dual to Γ with one-skeleton ∆_1 = Γ*. Diffeomorphisms Φ ∈ Diff(Σ) act naturally on the continuous phase space P_{Γ,Γ*} by A → Φ*A and E → Φ*E. Let us start by choosing a particular diffeomorphism Φ_o which preserves the graph Γ* and the vertices V_Γ inside the cells C_v, and is connected to the identity⁴.
Because the connection is flat on Σ̃, the holonomy h_e(A) is independent of the choice of path between s(e) and t(e) as long as any two paths are in the same homotopy class of Σ̃. The edges e and Φ_o(e) are in the same homotopy class if Φ_o is connected to the identity and does not move Γ*. Then it is clear that the holonomy is unchanged, h_{Φ_o(e)}(A) = h_e(A). Similarly, the action of Φ_o on the group element a_v(x) maps it to a_v(Φ_o(x)). This implies that the two-form X_v defined by (3.14) satisfies the corresponding transformation property. Now, since Φ_o does not move the graph Γ*, we have that ∂F_e = ∂(Φ_o(F_e)) ∈ Γ*, and therefore F_e ∪ Φ_o(F_e) encloses a volume, which furthermore does not contain any vertices of Γ. Thus, by virtue of (2.25) and (3.21), we have that X_{Φ_o(F_e)}(A, E) = X_{F_e}(A, E).

Infinitesimal diffeomorphisms generated by a vector field ξ^a act on the connection and on the electric field through the Lie derivative, where ι denotes the interior product (a hedged version of these standard identities is sketched after this passage). Now, if the data (A, E) is on the constraint surface C, the curvature vanishes outside of Γ*, while d_A E vanishes outside of the set V_Γ of vertices. Therefore, if we consider a vector field ξ^a which vanishes on Γ* and on V_Γ, we see that the action of diffeomorphisms is a combination of flat transformations (3.3) and gauge transformations (3.6) with field-dependent parameters of transformation. Such diffeomorphisms therefore act trivially on the gauge-invariant variables (h_e, X_e).

IV. GAUGE CHOICES FOR THE ELECTRIC FIELD

Now that we have established the isomorphism between P_Γ and the continuous phase space P_{Γ,Γ*}, we have a correspondence between discrete geometries and an equivalence class of continuous geometries related according to (3.27) by group gauge transformations and translations. Up to group gauge transformations, the holonomy uniquely determines a choice of connection. For the E field, however, the story is different, since even after we perform a group gauge transformation there is still a huge ambiguity E → E + d_A φ on the continuous electric field determined by the fluxes. It is clear that in order to construct a continuous field configuration starting from the discrete data, one has to specify which continuous field representative to pick in the particular equivalence class determined by the discrete data. In other words, a choice of a representative in this equivalence class is a choice of gauge. More precisely, we have the following definition:

Definition 4. A choice of gauge is a map T from the discrete data to the continuous phase space, which is the inverse of I in the sense of (4.2). We say that a gauge fixing is diffeomorphism-covariant if Φ*T is equal to the map T defined on the graphs Φ(Γ) and Φ(Γ*), for any diffeomorphism Φ : Σ → Σ.

In other words, choosing a gauge amounts to giving a prescription for reconstructing continuous fields A(h_e) and E(X_e, h_e) starting from the discrete data, such that (4.2) holds, i.e. I(T(h_e, X_e)) = (h_e, X_e). Note that a gauge fixing T is a right inverse for I, while the reverse is not true. The map T ∘ I is not the identity; it just maps a continuous configuration (A, E) that solves the Gauss and curvature constraints into another gauge-equivalent configuration which satisfies the gauge choice.

As we have already seen, at the continuous level a flat connection on Σ̃ is determined on every cell C_v by a group element a_v(x). Locally, it is always possible to perform a gauge transformation that sends this element to the identity of the group, and thereby construct a trivial connection.
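The infinitesimal-diffeomorphism decomposition invoked above can be made explicit with the standard Cartan formulas; the sign of the bracket term depends on conventions and is an assumption of this sketch:

```latex
\mathcal{L}_\xi A \;=\; \iota_\xi F(A) \;+\; \mathrm{d}_A(\iota_\xi A),
\qquad
\mathcal{L}_\xi E \;=\; \iota_\xi \mathrm{d}_A E \;+\; \mathrm{d}_A(\iota_\xi E) \;+\; [E,\iota_\xi A].
```

On the constraint surface, F(A) = 0 outside Γ* and d_A E = 0 outside V_Γ, so for ξ vanishing there only the pieces d_A(ι_ξ A) and d_A(ι_ξ E) + [E, ι_ξ A] survive: a gauge transformation with field-dependent parameter α = ι_ξ A combined with a translation with φ = ι_ξ E, as stated in the text.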
If we pick two neighboring cells C_{v_1} and C_{v_2} such that the vertices v_1 and v_2 bound the edge dual to the face F_e = C_{v_1} ∩ C_{v_2}, the relevant gauge-invariant information about the connection is encoded in the transition group element h_e. For the electric field, there is more gauge freedom, since the variable E can be acted upon by both F_{Γ*} and G_Γ. Therefore, there is a priori a huge ambiguity in the choice of gauges that one can choose to reconstruct the continuous data. This means that knowledge of the fluxes does not uniquely determine the geometry of space, but only a family of geometries that are gauge-equivalent under translations of the type E → E + d_A φ.

However, there is a powerful way in which we can restrict the gauge choices that are available. This can be done by asking that a gauge choice transforms covariantly under the action of diffeomorphisms. A diffeomorphism Φ of Σ acts on the continuous data in the usual manner (A, E) → (Φ*A, Φ*E). The same diffeomorphism also acts on the discrete data (h_e, X_{F_e}) as (h_{Φ(e)}, X_{Φ(F_e)}). Note that here we have made explicit the fact that the flux field X_e depends on Γ* via the choice of a surface F_e whose boundary is supported on Γ*. A gauge choice is said to be covariant if this action of the diffeomorphisms commutes with the gauge map T.

If we restrict ourselves to gauge choices that are covariant under the action of diffeomorphisms, the ambiguity in the gauge choices is dramatically reduced, and there are only a few choices available. In the following we present two such gauge choices⁵: first, the singular gauge choice, in which the electric field E vanishes outside of Γ, and then the flat gauge, in which E is flat outside of Γ*. It is remarkable that these two gauge choices correspond to the two main interpretations of the fluxes used in the literature. In loop quantum gravity one usually interprets the E field as having support only on Γ, whereas in the spin foam literature one usually interprets the E field as being flat outside of Γ*. Our analysis shows that these two pictures are not contradictory, but that they correspond to two different covariant gauge choices underlying the same discrete data.

Now we want to emphasize that the restriction on the gauge choices coming from the requirement of covariance under diffeomorphisms is the analog of the so-called uniqueness theorem for the quantum representation of the holonomy-flux algebra [24]. This theorem states that there is a unique diffeomorphism-covariant gauge choice, which corresponds to the singular gauge in which E has support on the graph Γ only and vanishes on Γ*. In this singular gauge, which we refer to as the LQG gauge, the electric field E vanishes outside of the graph Γ dual to the triangulation ∆. This can be written as E|0⟩ = 0, where the vacuum state |0⟩ is the state of no geometry. Indeed, in LQG excitations of quantum geometry have support on the graph Γ only. Therefore, in all the regions of ∆ outside of Γ, there is simply no geometry, and the electric field vanishes. We are going to give below an explicit construction of the continuous singular electric field.

The key observation is that there is another legitimate choice of representative configuration in the equivalence class (3.27) of continuous geometries which respects the diffeomorphism symmetry. As we already said, it is given by the flat gauge. At the quantum level, this corresponds to a choice of a vacuum state |0_F⟩ in which the curvature vanishes.
This is the flat, or spin foam, gauge, in which we have F(A)|0_F⟩ = 0. This diffeomorphism-invariant vacuum is missed by the LOST theorem due to additional technical hypotheses. It is interesting to note that such a vacuum state appears naturally in our context and that it corresponds to the spin foam description. It can be seen as the dual of the singular gauge, in the sense that it defines a flat geometry within the cells C_v, with a non-vanishing electric field E on the dual graph Γ*. As we will see in more detail, the availability of this gauge clearly shows that it is possible to define a locally flat geometry without necessarily having a triangulation with straight edges and flat faces. In Regge geometries [6], the extrinsic curvature is concentrated along the one-skeleton ∆_1 of the triangulation, but in the present construction, the edges of Γ* are not necessarily straight. Here we have drawn a parallel between a choice of gauge at the classical level and a choice of a vacuum state at the quantum level. It would be interesting to develop this analogy further.

Before giving more details about the gauge choices for the electric field, let us make a comment about the gauge group F_{Γ*}. We have seen previously that the flatness constraint generates a transformation of the electric field, given by (4.4). Since we have constructed the gauge group F_{Γ*} with the condition φ(x) = 0 for x ∈ Γ*, one may question whether this imposes a restriction on the gauge choices that we can obtain using the transformation (4.4). The following lemma ensures there is no such restriction.

Proof. Consider an element ρ ∈ Ω^2(Σ, su(2)), along with an edge e* in Γ* parametrized by a coordinate x. In a neighborhood of the edge, we can find coordinates y and z which are perpendicular to x. Using φ(e*) = 0 and therefore ∂_x φ|_{e*} = 0, we can compute the exterior derivative explicitly. Choosing φ_z = 0, we can find a solution in the neighborhood of e*, in which the constants ρ_1, ρ_2 and ρ_3 are the components of ρ evaluated on the edge. There are of course other possible solutions, but we only give one to establish the proof.

In the remainder of this section, we are going to study in more detail the singular and flat gauges for the electric field. Our goal is to study the gauge freedom for the basic variables on the continuous phase space, and to construct explicitly the electric field as a functional of the discrete variables h_e and X_e.

A. Singular gauge

The singular gauge is a gauge in which the electric field E vanishes outside of the graph Γ. In this section, we show by an explicit construction that it is always possible to make such a gauge choice. More precisely, we construct explicitly continuous fields A(h_e) and E_S(h_e, X_e) which are such that E_S(x) = 0 if x ∉ Γ, and which satisfy the property I(A, E_S) = (h_e, X_e) under the action of the map (3.24). In order to prove this, let us first introduce a (1,1)-form ω(x, y), i.e. a one-form in x and a one-form in y. This form satisfies a key property, which is summarized in the following lemma.

Proof. First, it is straightforward to show that ∂_i ω_i(x) = 0 for x ≠ 0. Moreover, it is possible to show by a direct computation in spherical coordinates that the flux of ω through a sphere S_ε of radius ε equals 1. Since this integral is also equal to the integral of ∂_i ω_i over the ball B_ε of radius ε, we obtain that ∂_i ω_i(x) = δ(x).
By a direct computation, using (4.14), we can now get the stated property. The lemma is therefore established by introducing α(x, y) ≡ ω_i(x − y) s^i.

Given this lemma, it is now a straightforward task to construct a singular flux field. For this, we first construct a flat connection A on Σ̃ following the construction of subsection III A, and then we define the singular flux field E_S in terms of ω(x, y) in (4.15). The integral entering this definition is a one-dimensional integral over the edge e parametrized by the variable y, which implies that the term inside the parenthesis is a one-form in x.

The proof that this flux satisfies all the desired requirements is straightforward. First, it is obvious that the Gauss law d_A E_S = 0 is satisfied on Σ\Γ*, since d_A² = F(A) = 0 on this space. Moreover, using the previous lemma and the definition of the holonomy, we can compute explicitly the covariant derivative (4.16), in which the distributional term δ(x, y) of (4.17) appears. The last two terms in (4.16) can be reorganized into terms associated with the vertices, in which the holonomy going from the vertex v to the point x appears. Now the last term vanishes due to the discrete Gauss law (2.6). Therefore, we finally find that the singular electric field is given by (4.19). This electric field obviously vanishes outside of Γ, and is such that X_e(A, E_S) = X_e. It is interesting to note that the integral of the two-form α(x, y) along S is simply the solid angle of S as viewed from y, divided by 4π.

B. Flat cell gauge

In this subsection, we prove that it is always possible to perform a gauge transformation which leads to a flat geometry within each cell. Since the connection A is flat on Σ̃, a flat geometry on Σ̃ is determined by the choice of a frame field (i.e. an invertible su(2)-valued one-form e = e^i τ_i) which satisfies the torsion-free condition d_A e = 0 in Σ̃ = Σ\Γ*. Indeed, since d_A e = γ[K, e], the vanishing of the torsion and the invertibility of e together imply that K = 0, and therefore the SU(2) Ashtekar-Barbero connection is simply equal to the spin connection A = Γ(e). The flatness of the SU(2) connection therefore imposes the flatness of the spin connection, which implies that the metric determined by e is flat.

To construct such a flat geometry, we need to obtain an electric field E so that d_A e = 0 within each cell C_v. However, E as defined does not necessarily imply this condition. Recall that we have an equivalence class of electric fields yielding the same integrated flux, related by transformations generated by the flatness constraint F_{Γ*}. Given the gauge transformation in (3.3), is there a choice of φ which leads to a flat geometry? To answer this question, we need to define the action of the constraint F_{Γ*} on the triad field. The action on the electric field is known, and one can show that it leads to a corresponding transformation property for the triad field, in which ⋆_e denotes the Hodge star operator determined by e. Considering this, let us define a map M_e : Ω^1(Σ, su(2)) → Ω^1(Σ, su(2)) encoding this transformation. This map is clearly a homomorphism. Now, since e^b_j e^i_a/2 − e_{aj} e^{bi} is invertible, the kernel of this map is the space Z^1(Σ, su(2)) of twisted-closed one-forms, that is, the space of all su(2)-valued one-forms ω such that d_A ω = 0. Then, using the fundamental theorem of homomorphisms, we have that im(M_e) ≅ Ω^1(Σ, su(2))/ker(M_e) = Ω^1(Σ, su(2))/Z^1(Σ, su(2)) (4.24).

Proof. Let us define a map m_e : Ω^1(Σ, su(2))/Z^1(Σ, su(2)) → im(M_e) by m_e([φ]) = M_e(φ).
This map is well defined, since for some α ∈ Z^1(Σ, su(2)) we can write [φ] ∋ φ' = φ + α, and φ and φ' are mapped to the same point. The map m_e is one-to-one, as a short computation shows.

Now let us go back to our problem, and suppose that we have an electric field E which does not define a flat geometry. A non-flat triad satisfies d_A e ≠ 0. This means that within C_v, we can always choose a φ such that M_e(φ) = −e + β, where β ∈ Z^1(Σ, su(2)), and therefore obtain d_A(e + M_e(φ)) = 0 (4.29), which defines a flat geometry. This shows the existence of a gauge choice with a flat frame field e, which is what we desired. Note that since there is an equivalence class of flat triads related by diffeomorphisms, the choice of φ is not unique.

We are now interested in reconstructing the flux elements X_e starting from the flat frame field e. To do so, let us first define the gauge-transformed frame field e_v, obtained by rotating e with the holonomy that starts at the vertex v in the cell (and does not depend on the path). This definition will enable us to define the flux (2.15) with the appropriate smearing. The torsion-free condition d_A e = 0 implies that e_v is twisted-closed, and hence twisted-exact on C_v. This means that the frame field can be written as e_v = dx_v, where the zero-form x_v provides a set of flat coordinates on the cell C_v. Then, the covariant discrete flux elements are simply given by an integral over F_e, where we have used the fact that a_v(v) = 1.

C. Regge geometries and cotangent bundle

The previous calculation shows that we can think of the phase space P_Γ as the phase space of piecewise (metric) flat geometries on Σ\Γ*. Such geometries possess an invertible locally flat metric, with curvature concentrated on the one-skeleton of the cellular decomposition. This description is reminiscent of Regge geometries. However, it is known that the phase space of loop gravity is bigger than the phase space of Regge geometry [7]; Regge geometries appear only as a constrained subset. This fact has triggered the search for the proper geometrical interpretation of the loop gravity phase space, for instance in terms of twisted geometries [5].

We can now clearly understand the key difference between the phase space of loop gravity and that of Regge geometries. The loop gravity phase space corresponds to piecewise flat geometries on Σ\Γ*, while the Regge phase space corresponds to piecewise-linear flat geometries on Σ\Γ*. This means that the geometry is flat, but also that the edges of Γ* and the faces of the two-complex are straight lines and flat planes. It is this additional restriction which allows us to identify a loop gravity configuration with a Regge configuration.

To understand how this comes about, let us go back to the formula for the fluxes that we have derived in the previous subsection, in which x_v is the flat coordinate in the cell C_v. One sees that if the two-cells are chosen to be flat, then dx^k_v is constant over F_e and the expression simplifies drastically, since the fluxes can be expressed as a cross product of discrete frame fields (see the sketch after this passage). This condition implies that the fluxes can be constructed entirely in terms of a discrete piecewise flat geometry à la Regge, and that they satisfy the so-called gluing constraints [7]. This means that a set of fluxes satisfying the gluing constraints corresponds to a Regge geometry and can be implemented as a piecewise flat geometry on Σ\Γ* with the additional constraint that the edges of Γ* are straight with respect to the flat structure⁶.
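To make the cross-product remark concrete, the sketch below integrates a flux two-form over a flat triangle, under the standard identification X^i = (1/2) ε^i_{jk} e^j ∧ e^k, which is an assumption here. For the flat frame e = dx, the pullback of X to the triangle is constant, and the integral collapses to half the cross product of the edge vectors (the triangle's vector area). Function names and the Monte Carlo setup are illustrative.

```python
import numpy as np

def flux(frame, p0, p1, p2, N=20000, seed=1):
    # Monte Carlo integral over the triangle (p0, p1, p2) of the two-form
    # X^i = 1/2 eps^i_{jk} e^j ^ e^k; with x(s,t) = p0 + s u + t v, the pullback
    # is (e(u) x e(v))_i ds dt, and the parameter simplex has area 1/2
    rng = np.random.default_rng(seed)
    u, v = p1 - p0, p2 - p0
    st = rng.random((N, 2))
    st = np.where(st.sum(axis=1, keepdims=True) > 1.0, 1.0 - st, st)  # uniform on the simplex
    acc = np.zeros(3)
    for s, t in st:
        E = frame(p0 + s*u + t*v)          # 3x3 matrix e^i_a at the sample point
        acc += np.cross(E @ u, E @ v)
    return 0.5 * acc / N

flat = lambda x: np.eye(3)                 # flat frame e^i_a = delta^i_a, i.e. e = dx
p0, p1, p2 = np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.3, 1.2, 0.7])

print(flux(flat, p0, p1, p2))              # [0., -0.35, 0.6]
print(0.5 * np.cross(p1 - p0, p2 - p0))    # the same: half the cross product of the edge vectors
```

For a position-dependent frame the same routine still computes the flux, but the result is no longer a cross product of two fixed vectors, which is one way to see what the Regge (piecewise-linear) restriction adds.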
The phase space of full loop gravity then corresponds to piecewise flat geometries where this additional restriction is not imposed. In other words, the edges of Γ* do not have to be straight even if the curvature is concentrated on them. Our construction shows that this additional restriction is not necessary.

The meaning of the additional Regge restriction becomes even clearer when expressed in terms of the extrinsic curvature. What happens in Regge geometry is that both the intrinsic and extrinsic geometry are concentrated on the straight edge. However, the extrinsic geometry K^i_a is not allowed to fluctuate freely, since the condition of straightness of the edge amounts to demanding that the extrinsic curvature has non-zero components only in the direction parallel to the edge, i.e. K^i_a = ė_a K^i, where ė_a is the vector tangent to the edge. The Gauss law further imposes that K^i is also parallel to the edge. Therefore, in Regge geometry, we can access (up to a rotation) only one component of the extrinsic curvature tensor (the deficit angle). This unnecessary restriction is relaxed in the loop gravity phase space, since the extrinsic curvature tensor is now allowed to fluctuate freely, as it should. From this point of view, we see that the phase space of loop gravity possesses extra freedom which allows us to fully capture the dynamics. We hope to come back to these points in the future.

The result of our construction is that after a choice of gauge, we can express the elements of P_Γ as a connection A and an su(2)-valued frame field one-form e which are solutions to the flatness and torsion-free conditions F(A) = 0 and d_A e = 0. Since δF(A) = d_A δA, this is nothing but the cotangent bundle of the space of flat SU(2) connections on Σ\Γ*, that is T*M_{Γ*}, where M_{Γ*} denotes the moduli space of flat connections modulo gauge transformations. This means that at the quantum level we can represent the quantization of holonomies and fluxes in terms of operators acting on holonomies of flat connections. This interpretation has already been proposed by Bianchi in [8]. It is interesting to note that this is reminiscent of the geometry considered by Hitchin in [25].

D. Diffeomorphisms and gauge choices

We have seen in subsection III D that diffeomorphisms Φ_o connected to the identity that do not move Γ* and the vertices of Γ leave the construction of the holonomy-flux algebra invariant. We have also seen in the beginning of this section that the singular gauge and the flat gauge are diffeomorphism covariant. In general, the construction of h_e and X_{F_e} depends both on Γ, via the choice of e, and on Γ*, via the choice of a two-cell F_e. Now, because of the flatness of the connection, the holonomy does not really depend on the choice of edge e, but solely on the choice of the homotopy class of e, which itself is left unchanged by diffeomorphisms that are connected to the identity.

It is interesting to note that the choice of the singular gauge is invariant under a diffeomorphism that does not move Γ, whereas the choice of the flat gauge is invariant under diffeomorphisms that do not move Γ*. Indeed, in the singular gauge the electric field depends on the choice of the edges e ∈ Γ, and we have Φ*E = E if Φ(Γ) = Γ. Moreover, under an infinitesimal diffeomorphism ξ, the variation of the flux involves the holonomy h_{π_e}(x) going from the source vertex of the edge e to the point x in F_e. We clearly see that this expression vanishes for all ξ when the electric field is in the singular gauge.
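The solid-angle remark made in the singular-gauge construction can be checked numerically. The sketch below assumes the Green's-function form ω_i(x) = x_i/(4π|x|³), which matches the stated property ∂_i ω_i = δ(x); its flux through any sphere around the origin is 1, and its flux through a triangle equals the triangle's solid angle divided by 4π, computed here with the Van Oosterom-Strackee formula. The explicit form of ω and all names are assumptions of this sketch.

```python
import numpy as np

def omega(x):
    # assumed form: omega_i(x) = x_i / (4 pi |x|^3), so that d(omega) = delta(x)
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return x / (4.0 * np.pi * r**3)

def sphere_flux(R, n=400):
    # flux of omega through the sphere of radius R (midpoint rule in (theta, phi))
    th = (np.arange(n) + 0.5) * np.pi / n
    ph = (np.arange(2 * n) + 0.5) * np.pi / n
    TH, PH = np.meshgrid(th, ph, indexing='ij')
    nrm = np.stack([np.sin(TH)*np.cos(PH), np.sin(TH)*np.sin(PH), np.cos(TH)], axis=-1)
    dA = (np.pi / n)**2 * R**2 * np.sin(TH)
    return np.sum(np.einsum('ijk,ijk->ij', omega(R * nrm), nrm) * dA)

def triangle_flux(p0, p1, p2, N=200000, seed=2):
    # Monte Carlo flux of omega through a flat triangle not containing the origin
    rng = np.random.default_rng(seed)
    st = rng.random((N, 2))
    st = np.where(st.sum(axis=1, keepdims=True) > 1.0, 1.0 - st, st)
    u, v = p1 - p0, p2 - p0
    pts = p0 + st[:, :1] * u + st[:, 1:] * v
    return 0.5 * np.mean(omega(pts) @ np.cross(u, v))   # 0.5 = area of the parameter simplex

def solid_angle(p0, p1, p2):
    # Van Oosterom-Strackee formula for the solid angle subtended at the origin
    n0, n1, n2 = map(np.linalg.norm, (p0, p1, p2))
    num = np.dot(p0, np.cross(p1, p2))
    den = n0*n1*n2 + np.dot(p0, p1)*n2 + np.dot(p0, p2)*n1 + np.dot(p1, p2)*n0
    return 2.0 * np.arctan2(num, den)

print(sphere_flux(0.1), sphere_flux(1.0), sphere_flux(10.0))   # ~1, 1, 1: independent of radius

p0, p1, p2 = np.array([1.0, 0.2, 0.3]), np.array([0.4, 1.1, 0.2]), np.array([0.3, 0.4, 1.2])
print(triangle_flux(p0, p1, p2), solid_angle(p0, p1, p2) / (4.0 * np.pi))   # ~equal
```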
In the flat gauge, by contrast, the flux does not depend on Γ, and the construction is therefore invariant under diffeomorphisms leaving Γ* invariant. This shows that there is an interesting duality between the two gauges: while the singular gauge respects diffeomorphism invariance with respect to Γ, the flat one respects diffeomorphism invariance with respect to Γ*.

V. CYLINDRICAL CONSISTENCY

In this section, we analyze to what extent the knowledge of a collection of operators on P_Γ for all Γ determines a continuous operator. Given a collection of operators O_Γ ∈ P_Γ, we introduce the notion of cylindrical consistency; the resulting proposition gives us a powerful criterion to check whether a continuous operator can be represented as a collection of operators associated with P_Γ.

For instance, we can analyze the status of geometrical operators such as area and volume. We know the continuous expression for the area operator (standard expressions for both operators compared here are recalled, with caveats, after this passage). One can easily see that even when we restrict this operator to the constraint surface, F(A) = 0 outside Γ* and d_A E = 0, this operator is not invariant under the translations E → E + d_A φ. Therefore, this operator is not expressible purely in terms of holonomies and fluxes associated with the graph Γ. However, in loop quantum gravity, the area operator is expressed as an operator acting on the graph Γ purely in terms of the fluxes. Our proposition therefore shows that the LQG area operator does not come from the continuous area operator. This means that we have A(S)|_C − A_LQG(S) ≠ 0 (5.5). So in that sense, the LQG operator is not a proper approximation of the continuous area operator. This is puzzling, since the LQG area operator has been used extensively and derived in many ways. This result thus raises the question of the exact relationship between these two operators. To what extent does the LQG operator capture information about the continuous area operator?

Now that we have the exact relationship between the discrete and continuous phase spaces, we can investigate this question a bit further. First, let us recall that the continuous and LQG area operators are not unrelated. In fact, for any product h_Γ of holonomies supported on the graph Γ, the difference between them commutes with h_Γ. So even if A|_C − A_LQG does not vanish, it belongs to the commutant of the holonomy algebra.

The second key remark is that if we have a non-gauge-invariant operator like A(S), we can promote it to an operator invariant under F_{Γ*} by picking a gauge. This can be done by working with A_T(S) ≡ A(S)(E(X_e)) instead of A(S)(E), where T is a gauge choice as described in section IV. Such an operator is by construction invariant under F_{Γ*}, since it depends only on the fluxes. Moreover, the difference between two operators that differ by a choice of gauge belongs to the commutant of the holonomy algebra.

The relationship between the LQG area operator and the continuous operator is now clear: the LQG area operator is the continuous area operator in the singular gauge. This explains why it can be expressed purely in terms of fluxes. What is less clear is to what extent the knowledge of an operator in a given gauge allows one to reconstruct the continuous operator. It is also clear that if one chooses another gauge, like the flat gauge of spin foam models, we are going to construct a different family of area operators associated with graphs, which will differ from A_LQG by an element of the commutant of the holonomy algebra.
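For orientation, these are the standard classical expressions usually assumed for the two operators being compared; they are not quoted from this paper, and the smearing conventions are assumptions:

```latex
A(S) \;=\; \int_S \sqrt{\,n_a E^a_i \, n_b E^b_i\,}\;\mathrm{d}^2\sigma,
\qquad
A_{\mathrm{LQG}}(S) \;=\; \sum_{e \,\cap\, S \,\neq\, \emptyset} \sqrt{X_e^i X_e^i}\,.
```

The first expression depends on the continuous representative E and changes under E → E + d_A φ; the second depends only on the fluxes, i.e. only on the gauge equivalence class, which is the content of the distinction drawn above.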
It is likewise not clear which family of operators (if any) we should use to capture in the most efficient way information about the continuous volume operator.

VI. DISCUSSION AND CONCLUSION

In this paper, we have shown that the discrete phase space of loop gravity associated with a graph Γ can be interpreted as the symplectic reduction of the continuous phase space of gravity with respect to a constraint imposing the flatness of the connection everywhere outside of the dual graph Γ*. This allows us to give a clear interpretation of the discrete flux variables as labeling an equivalence class of continuous geometries. The point of view that the discrete data represent a set of continuous geometries has already been advocated in [9]. Our approach gives a precise understanding of which set, or equivalence class, of continuous geometries is represented by the discrete geometrical data (h_e, X_e) on a graph. It provides a classical understanding of the work by Bianchi [8], who showed that the spin network states can be understood as states of a topological field theory living on the complement of the dual graph. It also allows us to reconcile the tension between the loop quantum gravity picture, in which geometry is thought to be singular, and the spin foam picture, in which the geometry is understood as being locally flat. We now see that both interpretations are valid and correspond to different gauge choices in the equivalence class of geometries represented by the fluxes. It gives us a new understanding of the geometrical operators used in loop quantum gravity as gauge-fixed operators, and allows us to investigate further the relationship between these operators and the continuous ones. Finally, it opens the way to a classical formulation of loop gravity. We can now face the question of whether the dynamics of classical general relativity can be formulated in terms of these variables. We plan to come back to this issue of defining a classical loop gravity in the future.

Acknowledgments

It is a pleasure to thank Valentin Bonzom, Eugenio Bianchi, Johannes Brunnemann and Kirill Krasnov for discussions and comments.
2013-04-03T15:41:00.000Z
2011-10-21T00:00:00.000
{ "year": 2011, "sha1": "33410e2740b78c7a65b763fd13fed34598c7bcbd", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1110.4833", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "33410e2740b78c7a65b763fd13fed34598c7bcbd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
2889670
pes2o/s2orc
v3-fos-license
Antiangiogenic Resistance: Novel Angiogenesis Axes Uncovered by Antiangiogenic Therapies

The mechanisms of tumor growth and progression involve the activation of different processes such as neovascularization and angiogenesis. These processes involve tumoral cells and stromal cells. Hence, inhibiting angiogenesis affects tumor growth and proliferation in patients with different types of cancer. Nevertheless, tumoral cells and stromal components are responsible for the resistance to antiangiogenic therapies. The majority of tumors respond to this type of therapy; however, some tumors may be indifferent to antiangiogenic therapies (intrinsic resistance) and other tumors become resistant during treatment (acquired resistance). Different strategies have been proposed to prevent resistance. Preclinical studies and clinical trials are focused on this therapeutic approach in order to prevent or delay tumor resistance to antiangiogenic therapies.

Angiogenesis in tumor development

A main characteristic of cancer cells is the loss of control over cellular division; they are able to grow as a neoplastic lesion composed of tumor cells and stroma. Both types of cells structurally and functionally contribute to tumor development. Nevertheless, the neoplastic lesion cannot form a tumor mass beyond a certain limiting size, generally 1-2 mm³, due to a lack of proper diffusion of oxygen and other essential nutrients. Tumors then induce blood vessel growth (angiogenesis) by up-regulating the expression and secretion of various pro-angiogenic growth factors such as vascular endothelial growth factor (VEGF), fibroblast growth factors (FGFs), angiopoietins (Ang), placental growth factor (PlGF), and some integrins, and concomitantly down-regulating several antiangiogenic factors [1]. Furthermore, there is evidence that the angiogenic process precedes the formation of the tumor, suggesting that angiogenesis may represent the rate-limiting step not only for tumor growth, but also for the occurrence of malignant tumors [2]. In addition, angiogenesis coincides with increased circulating tumor cells, facilitating metastatic spread. Thus, tumor cells cooperate with other cell types of the tumor microenvironment to achieve the essential feature of angiogenesis. Immune cells, inflammatory cells, hematopoietic cells and stromal fibroblasts contribute to activating the endothelial cells of tumor angiogenesis by secreting various types of inducers [3]. Interestingly, tumors often show an inflammatory phenotype, described by Dvorak in 1986 as "wounds that never heal," which could tip the balance in favor of angiogenesis and thus promote the formation of new vasculature able to oxygenate and nourish the growing tumor mass. The imbalance created by sustained production of pro-angiogenic factors, together with the persistent lack of vasculature-stabilizing factors, leads to the formation of an immature and dysfunctional vascular system that cannot keep pace with the rapid growth of the tumor mass. Therefore, the vascular tree in a tumor is typically chaotic, with dead-end vascular branches and areas of inverted and intermittent blood flow, which impairs vascular function and leads to regions of lowered perfusion and hypoxia. Accordingly, different types of tumors have poorly oxygenated areas (hypoxic regions) and present upregulation of different transcription factors such as HIFs and of hypoxia-dependent genes (carbonic anhydrase, glucose transporters, and others) [4].
Different processes such as glycolytic metabolism, oxygen consumption, survival, angiogenesis, migration and invasion can be modulated through HIF-1; moreover, HIF stabilization has an important repercussion on the behavior of the cells and on their gene expression profile [5,6]. Hypoxia also actively participates in the activation of tumor angiogenesis, being responsible for regulating the inducer and inhibitor factors that contribute to angiogenesis. It is in fact capable of regulating the expression of molecules that disrupt endothelium and pericyte coverage, such as angiopoietin-2, which further contributes to the start of sprouting (developing vascular branches). Furthermore, the mobilization of multiple types of stem cells from the bone marrow and the recruitment of immune cells to the tumor microenvironment are positively modulated by tumor hypoxia [7]. Interestingly, recent advances in molecular biology techniques and the study of families with hereditary renal cancer (the Von Hippel-Lindau, hereditary papillary, Birt-Hogg-Dube, and hereditary leiomyomatosis and renal cancer syndromes) have permitted the recognition of genes and proteins involved in the pathogenesis of some tumor entities, giving the ability to select the most appropriate therapy for a given disease [8,9]. In particular, inactivation of the VHL gene (a tumor suppressor gene) in patients with RCC involves hyperactivation of HIF1α signaling due to lack of degradation even under normoxia, resulting in an accumulation of HIF, which promotes transcription of its downstream effectors such as VEGF, GLUT1, TGF-α and PDGF [10,11]. Therefore, therapy against VEGF and the use of molecules inhibiting the receptor that binds this ligand have been applied in many types of tumors [12,13].

Anti-angiogenic strategies

The neoplastic dependence on tumor angiogenesis and the stromal contribution to the formation of new vessels suggested new therapeutic targets to control tumor growth. Based on their mechanism of action, we classify anti-angiogenic drugs into two groups:

Direct anti-angiogenic drugs: those that prevent vascular endothelial cells from proliferating, migrating or avoiding cell death in response to a spectrum of pro-angiogenic proteins, including VEGF, FGF, IL-8 and platelet-derived growth factor (PDGF), among others.

Indirect anti-angiogenic drugs: those that secondarily prevent the expression, or block the activity, of tumor proteins that activate angiogenesis. Their target is a signaling pathway in the tumor cells responsible for the synthesis or secretion of pro-angiogenic molecules. The typical example is the mTOR inhibitors, which target a tumor cell survival pathway and secondarily decrease VEGF expression, thus indirectly exerting an anti-angiogenic effect.

In this review we will only cover the direct anti-angiogenic drugs, which are typically directed to inhibit pro-angiogenic signaling pathways. For its role as the main promoter of angiogenesis, vascular endothelial growth factor (VEGF) is the main target of the anti-angiogenic drugs currently approved [1].

VEGF as a prototypical angiogenesis target

Monoclonal antibodies: these have a direct and an indirect action. The direct action is to block the ligand (VEGF) or its receptors (VEGFRs), which blocks its signaling function. The indirect action is mediated by the immune system (complement system activation, cytotoxic lymphocytes and macrophages) and contributes to the destruction of the tumor cell.
This class includes the first anti-angiogenic drug that demonstrated a clear clinical effect, increasing survival in metastatic colorectal cancer. The most well-known example is bevacizumab, an antibody against the human VEGF ligand [15].

Selective inhibitors of kinase activity: these compete with ATP for binding to the catalytic domain of the protein, thereby blocking the kinase activity of VEGFRs. These drugs were also initially tested as anti-proliferative agents for tumor endothelial cells, which started the development of a large number of inhibitors that act on different pathways and cell types apart from VEGFRs (a promiscuous tyrosine-kinase inhibition profile). Currently, sunitinib and sorafenib are the most widely used drugs of this class, since they demonstrate the best anti-angiogenic activity [16].

Other novel targets recently proposed

FGF-FGFR inhibitors: the fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling axis plays an important role in normal organ, vascular, and skeletal development. Deregulation of FGFR signaling through genetic modification or overexpression of the receptors (or their ligands) has been observed in numerous tumors, and the FGF/FGFR axis also plays a key role in driving tumor angiogenesis. Preclinical data show that inhibition of FGFR signaling can result in anti-proliferative and/or pro-apoptotic effects, both in vitro and in vivo, thus confirming the validity of the FGF/FGFR axis as a potential therapeutic target [17]. Several drugs against different pro-angiogenic targets have been developed for their anti-angiogenic effect in preclinical and clinical studies [1].

Sema-like ligands: semaphorins (SEMAs) are a superfamily of secreted or membrane-associated glycoproteins implicated in the control of axonal wiring and involved in angiogenesis and cancer progression. Proliferation, cell survival, cell adhesion and tumor invasiveness can be positively or negatively modulated in tumoral cells by SEMAs [18]. These can also alter cell migration and proliferation in stromal components [19,20]. Increased expression of Sema3E produced a decrease in tumor burden, neutralizing tumor angiogenesis, but also increased the metastatic capacity of tumors. In tumor cells, however, the Uncl-Sema3E-PlxnD1 complex fails to elicit the ErbB2-mediated pro-invasive and pro-metastatic pathway [18]. With these results, they proposed Uncl-Sema3E as a novel anti-angiogenic and anti-metastatic therapeutic approach.

Angiopoietin-2 inhibition: the angiopoietins are protein growth factors that promote angiogenesis and help stabilize the development of blood vessels from pre-existing blood vessels. Ang1 and Ang2 are required for the formation of mature blood vessels, as demonstrated by mouse knockout studies [22]. Moreover, Ang2 is critically associated with tumor angiogenesis and progression. It has been described that Ang2 regulates tumor angiogenesis in cooperation with VEGF as well as Ang1 through Tie2-dependent pathways. On the other hand, Ang2 stimulates tumor angiogenesis, invasion, and metastasis through Tie2-independent pathways involving integrin-mediated signaling. Therefore, Ang2 is currently an attractive therapeutic target, as has been corroborated by recent studies using a neutralizing anti-Ang2 antibody [23].

CONTROVERSIAL CLINICAL RESULTS

Pre-clinical studies often report positive results about the benefit of anti-angiogenic treatment, but the results of clinical trials vary depending on the cancer type and the anti-angiogenic therapy used.
Phase III studies have indeed shown the benefits of bevacizumab or sunitinib, as well as of other VEGF-targeted therapies, either as single agents or in combination with chemotherapy. Blocking the formation of new blood vessels with anti-angiogenic therapy is currently used to treat certain types of cancers, including metastatic renal cancer [12,13]. Metastatic renal cell carcinoma, which is characterized by VEGF-dependent growth, is controlled by antiangiogenic therapy, confirming the positive effect of this kind of therapy, as supported by various clinical trials [24][25][26][27]. Nevertheless, many authors agree on the observation that anti-angiogenic treatments are more effective in increasing progression-free survival (PFS) than in prolonging overall survival (OS). However, based on obvious clinical benefits, with a remarkable increase in PFS although in the absence of a robust, statistically significant increase in OS, VEGF pathway inhibitors are the FDA-approved mainstay of therapy in RCC [28][29][30]. This discrepancy between PFS and OS feeds the controversy over how best to measure the clinical benefit of treatment, because anti-angiogenic therapies typically exert an effect in terms of increased necrosis, as observed in imaging studies. As mentioned before, antiangiogenic treatments have an effect on cavitation and loss of viable tumor burden, causing an impact on tumor growth with no alteration in the parameters of RECIST (Response Evaluation Criteria in Solid Tumors) [31,32].

Tumor development is deeply affected by tumor type; in particular, tumors differ in their own angiogenic characteristics and in the pro-angiogenic capacity arising from tumor-stroma specific interactions. Inactivation of the VHL tumor suppressor is highly frequent in RCC [33], and for that reason angiogenesis is presumably highly dependent on VEGF. Similarly, hepatocellular carcinomas (HCC) are particularly angiogenic, growing in the liver by displacing the normal parenchyma; their dependence on angiogenesis is presumed to be the key to the efficacy of antiangiogenic therapy. On the contrary, colorectal cancer (CRC) shows considerably fewer clinical benefits, and VEGF-targeted therapy is therefore administered in combination with chemotherapy. Metastatic foci of CRC, typically growing in the liver, often replace the liver parenchyma rather than displacing it, through FAS ligand-induced death of the hepatocytes. This leads to the co-option of existing blood vessels instead of dependence on sprouting angiogenesis [30,31,34]. The adaptability of tumors to classical chemotherapy and radiation emerges also with anti-angiogenic therapy [35,36]. Thus, anti-angiogenic therapies have proven to be beneficial in many patients, but these clinical benefits are overshadowed by apparent acquired resistance to anti-angiogenic therapies. Moreover, some patients do not respond to these therapies at all, demonstrating upfront refractoriness to therapy, or intrinsic resistance.

Resistance to antiangiogenic therapy

The initial assumption was that antiangiogenic therapy would not cause resistance, because it was specifically directed against endothelial cells, which show no genetic instability [37]. However, experimental and clinical evidence has shown that the benefit of this therapy has been mild and transitory [13].
The majority of tumors respond to therapy, but it is important to differentiate between refractoriness and intrinsic or acquired resistance [31].

Intrinsic resistance (IR) to anti-angiogenic therapy. In this type of resistance the tumor is indifferent to antiangiogenic therapy and there is no response to treatment (Figure 2). Some patients treated with bevacizumab, sorafenib, and sunitinib show this type of resistance [38,39]. It has been shown that tumors are capable of expressing, from the beginning of their progression, multiple pro-angiogenic factors, so that anti-VEGF therapy is not fully effective, as it is only able to partially block the process of angiogenesis [40]. Another molecular mechanism that may be involved in intrinsic resistance is deregulation of the HIF pathway. In tumors with activation of HIF, such as renal tumors, high levels of pro-angiogenic molecules controlled by this factor are consistently found, thereby reducing the effect of anti-angiogenic therapy [12,13]. Other mechanisms involve independence from the classical angiogenesis process through alternative routes of tumor revascularization, including sprouting, co-option of pre-existing vessels, vasculogenic mimicry, mosaic vessels, and mobilization of latent vessels [41].

Could the differential angiogenic features of each tumor have a repercussion on their upfront sensitivity or resistance to anti-angiogenic therapy? Interestingly, in astrocytomas, a class of highly oxygen-dependent brain tumors, development is mediated by changes in the way tumors acquire their blood supply. Low-grade astrocytomas grow by co-opting pre-existing normal brain vessels, whereas on progression from grade III to grade IV, the so-called glioblastoma multiforme (GBM), an enhanced demand for oxygen and nutrients activates an angiogenic program [42]. Bevacizumab was approved by the United States Food and Drug Administration (FDA) for the treatment of recurrent GBM based on several studies demonstrating efficacy, in terms of increased PFS and OS, in combination with conventional chemotherapy [30]. Unfortunately, tumor resistance occurs, with new distant foci of progression or diffuse in-situ infiltration, associated or not with local tumor recurrence, as shown by fluid-attenuated inversion recovery (FLAIR) and magnetic resonance imaging (MRI) analysis [30,43,44].

Acquired resistance (AR) to anti-angiogenic therapy. In addition to the traditional resistance to some drugs, which is acquired through mutations that affect the target of the drug or alterations of the entry mechanisms of the compound [45], acquired resistance to antiangiogenic therapies is more indirect and evasive. Typically, alternative mechanisms emerge that lead to activation of angiogenesis even when the target of the drug remains inhibited [46]. Tumors have long been shown to have remarkable plasticity and adaptability to classical chemotherapy and radiation, which contributes to resistance to anti-angiogenic therapy [3,47,48]. However, the specific mechanisms of acquired resistance to anti-angiogenic therapies are unique, and many of these mechanisms show reversibility after anti-angiogenic therapy has been stopped (Paez-Ribes and Casanovas, unpublished observations), indicating that these types of resistance could reflect adaptations to therapy rather than the mutations or gene amplifications characterizing acquired resistance to other therapeutic strategies.
In fact, clinical evidence of this reversibility has been described in metastatic renal cell carcinoma treated repeatedly with VEGFR inhibitors [13,18]. Several different mechanisms of acquired resistance to anti-angiogenic therapy have been described, among which are (Fig. 2):

- Overexpression of alternative pro-angiogenic factors: initially described pre-clinically in a transgenic mouse model of neuroendocrine tumors (RIP-Tag2).
- Recruitment of stromal pro-angiogenic cells: hypoxic conditions induced by anti-angiogenic treatment promote the recruitment of large numbers of bone marrow-derived cells (BMDCs) at the boundaries of the tumor. These cells have the ability to promote tumor revascularization [50].
- Vessel coverage by pericytes: pre-existing tumor vessels that have a high degree of pericyte surface coverage remain functional and exhibit no regression [2][51][52][53]. This suggests that endothelial cells have the ability to recruit pericytes, which are able to secrete VEGF and other factors promoting their survival [2,54,55].
- Vascular mimicry: defined as the formation of microvascular channels by the aggressive tumor cells themselves, which would allow the transport of oxygen and nutrients [41].

Interestingly, there are some parallelisms between the mechanisms that lead to IR and AR. The difference lies in the intrinsic characteristics of each tumor: tumors with AR require some time in order to generate these molecular changes and become resistant to this therapy, whereas tumors with IR are insensitive to this therapy because they overexpress these factors from the beginning. Furthermore, resistance to antiangiogenic therapies for cancer implicates both tumor cells and stromal components, but their contribution differs in each cancer subtype. One crucial step for the development of the neoplastic lesion is the interaction between tumor cells and the tumor microenvironment; moreover, tumor-stromal cell collaboration is also involved in tumor responses to therapeutic inhibition of the VEGF pathway [30]. Tumor and stromal cells thus contribute to the inefficacy of the therapy in tumors with intrinsic resistance, similarly to tumors with acquired resistance. Most tumors present different cell-dependent mechanisms of resistance that involve modification of stromal components, such as the recruitment of infiltrating cells, including cancer-associated fibroblasts (CAFs) and tumor-associated macrophages (TAMs), or the production of alternative pro-angiogenic factors [30]. One of the main modifications induced by anti-angiogenic treatment in tumors is the increase of hypoxia and HIF-1 stabilization. Interestingly, neoplastic cells can react to hypoxia by becoming tolerant and modifying their metabolic characteristics to resist low levels of oxygen. Alternatively, tumor cells can engage in an escape from the hypoxic environment, alone or sustained by their stromal neighbors [30].

A perspective

Hence, overcoming antiangiogenic resistance is a key step in the generation of novel antiangiogenic drugs. A number of strategies have been postulated to prevent resistance, such as multi-pathway inhibitors or combinations of anti-angiogenic therapies that inhibit different pathways and could thereby avoid resistance. Moreover, the plasticity in response to treatment observed in pre-clinical studies suggests a new therapeutic hypothesis: that sequential treatment with an anti-angiogenic drug followed by a non-anti-angiogenic drug (i.e.
another targeted therapy or chemotherapy) could resensitize patients to another anti-angiogenic drug as a third line of treatment. Obviously, many studies are warranted to unravel the pre-clinical basis and clinical potential of these strategies, and to finally determine their clinical benefit for patients. Therefore, as tumor cells, stroma and their interactions together initiate tumorigenesis, sustain neoplastic growth, and allow for metastatic spread and therapeutic resistance, these two neoplastic partners should both be considered in the development of new therapeutic approaches. In this sense, clinical studies that investigate and address these approaches in the coming years are warranted.
2018-04-03T03:22:18.110Z
2016-11-30T00:00:00.000
{ "year": 2016, "sha1": "80ce6bc7c11052fcc31519aeb98c3d9df52832e6", "oa_license": "CC0", "oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/171639/1/Antiangiogenic%20resistance_Novel%20Axes.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "56cf39a04d555dcb4933858b79dd5510a1989256", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265549027
pes2o/s2orc
v3-fos-license
A Rare Case of Spontaneous Steinstrasse

Spontaneous steinstrasse ("stone street") is a collection of stones within the ureter and is a rare and understudied event. Factors such as infection, altered kidney function, and degree of obstruction are used to define the most adequate therapeutic option. Treatment can be either conservative or surgical; the decision between them depends on the clinical presentation. This paper reports a rare case of a 59-year-old patient with spontaneous steinstrasse examined at a urology clinic. Surgical intervention was required because of altered kidney function. The patient is currently undergoing follow-up for metabolic investigation.

Introduction

Steinstrasse or "stone street" is an aggregation of stone particles in the ureter. On x-ray, such collections have the appearance of a cobbled street, hence the term steinstrasse, which means "street of stone" in German. Steinstrasse occurs in up to 15% of cases after extracorporeal shockwave lithotripsy (ESWL) [1], and 6% of these cases require intervention [2]. The incidence is related to factors such as the size of the calculi, their location [3], and the energy imposed during ESWL [4]. The main complication of this event is ureteral obstruction, which can occur in up to 23% of cases [5], leading to the loss of kidney function [4].

Post-ESWL steinstrasse is classified into three types. Type 1 is characterized by multiple small fragments. Type 2 has fragments measuring 5 mm or more and small proximal fragments. Type 3 has multiple fragments measuring 5 mm or more [6].

Spontaneous steinstrasse is a spontaneous accumulation of small stones without a preceding surgical intervention, a rare and understudied event [7]. Factors such as infection, altered kidney function, and degree of obstruction are used to define the most adequate therapeutic option. Management can be either conservative or surgical; the decision between them depends on the clinical presentation. This paper presents a rare case of a patient with spontaneous steinstrasse examined at a urology clinic.

Case Presentation

A 59-year-old male patient, hypertensive, visited a urology clinic with the complaint of recurring renal colic on the right side, with no previous urological procedures. Ultrasonography of the kidneys and ureters performed two months earlier had identified a calculus measuring 1.4 cm in the right ureteropelvic junction and a branched calculus measuring 3.3 cm in the left kidney.

The patient was sent to the emergency room. Computed tomography revealed renal lithiasis: a calculus measuring 9 mm in the right lower calyx and multiple calculi in the right kidney, the largest of which measured 1.2 cm in the lower calyx; multiple small stones situated along the right ureter (some overlapping), in greater quantity at the crossing of the iliac vessels; moderate dilation of the right collecting system; and a density of all calculi ranging from 420 to 505 HU (Figure 1).

FIGURE 1: Computed tomography (coronal axis) showing right-side steinstrasse and ipsilateral ureteral dilation (arrow).
Laboratory exams revealed altered kidney function, with serum creatinine of 1.78 mg/dL and urea of 59 mg/dL, and discrete hyperkalemia (5.4 mEq/L), with no associated infection based on the urine exam (Table 1).

The patient underwent two sessions of ureteroscopy with a rigid ureteroscope and laser lithotripter, with a six-week interval because of the stone burden, without complications, resulting in complete resolution of the ureteral calculi. The patient is currently undergoing follow-up at a nephrology clinic for metabolic investigation of the calculi.

Discussion

Cases of spontaneous steinstrasse are rare, and different factors contribute to the indication of the best therapeutic option to adopt. In the present case, the patient had type 3 steinstrasse and altered kidney function. Treatment for this condition can be conservative or surgical, and the decision is directly related to the clinical presentation. In the present case, surgical intervention was performed because of the altered kidney function.

The literature describes an association between spontaneous steinstrasse and nephrocalcinosis with renal tubular acidosis [8]. In the present case, the patient had bilateral nephrolithiasis but no indication of tubular acidosis or nephrocalcinosis. Currently, the patient remains under metabolic investigation and urological follow-up because of the nephrolithiasis.

Analyzing 958 patients with renal stones who underwent ESWL, Kim et al. verified that 63.6% of cases had spontaneous resolution [9]. However, the therapeutic approach to patients with spontaneous steinstrasse requires more clinical studies, as the rarity of cases makes the standardization of conduct difficult.

Although conservative management is a therapeutic option, patients with persistent symptoms and ureteral obstruction are preferably treated surgically, as in the present case. Thus, when conservative treatment (spontaneous elimination of calculi) is not satisfactory, management should include temporary urinary diversion and monitoring for infection. With the resolution of this condition, definitive treatment is instituted: surgical removal of the steinstrasse.

Conclusions

Spontaneous steinstrasse is an uncommon event, for which the therapeutic approach lacks scientific evidence. The most adequate therapeutic option depends on the patient's clinical condition and the size of the calculi. Patients with persistent symptoms and ureteral obstruction are preferably treated surgically.
In focus in HCB

The Editorial of the first issue of Histochemistry and Cell Biology in 2022 starts with "Most popular articles published in HCB in 2019" and continues to highlight one Review about dynamic changes of histone methylation in mammalian oocytes and early embryos and three Original Articles reporting (1) that fat causes necrosis and inflammation in human steatotic liver, (2) the effect of a P2X7 receptor antagonist in a rat model of ulcerative colitis, and (3) NANOS3 downregulation in Down syndrome hiPSCs. We wish you good reading!

After all, greasy hair is not that bad

Cholesterol is important for hair growth and cycling, and dysregulation of cholesterol homeostasis has been implicated in various hair disorders (Palmer et al. 2020). In their present work, Palmer and colleagues (2021) applied immunofluorescence to determine the cellular expression and localization of various cholesterol transport proteins (ABCA1, ABCG1, ABCA5, and SCARB1) in human hair follicles throughout the hair cycle. In addition, filipin was used as a stain for free cholesterol. Cultured outer root sheath (ORS) keratinocytes were used for Western blot and gene expression analyses and for cholesterol efflux assays. The following is an excerpt from the multitude of beautifully illustrated results reported by the authors.

The ubiquitous cholesterol efflux transporter ABCA1 showed a distinct staining pattern, with higher expression in the epithelial compartment compared with the mesenchymal connective tissue sheath (CTS) in anagen hair follicles (Fig. 1). The ORS of the isthmus displayed the highest staining intensity, with a polarized distribution in the basal ORS. The immunostaining within the inner root sheath (IRS) was indistinct, whereas it was membranous in the hair shaft cuticle. Differential distribution patterns were also observed during catagen.

3-Hydroxy-3-methylglutaryl-coenzyme A reductase is the enzyme responsible for the rate-limiting step in cholesterol synthesis. During anagen, intense immunostaining for the enzyme was found in the matrix, dermal papilla, and ORS (being highest within the isthmus), with lower levels in the IRS and the hair shaft. Immunostaining in the mesenchymal connective tissue sheath was low to absent. It was concluded that the widespread expression of 3-hydroxy-3-methylglutaryl-coenzyme A reductase across the hair cycle points to the capability of hair shafts for de novo cholesterol synthesis. This conclusion was supported by the filipin staining for free cholesterol. In a nutshell, the authors demonstrated the capacity of human hair follicles for cholesterol transport and trafficking.

A primary cilia EMT response in bladder cancer…

Epithelial-to-mesenchymal transition (EMT) is a well-known process involved in multiple aspects of tumor progression (Zhang and Weinberg 2018).
EMT is driven by a variety of signaling pathways, including Hedgehog (Hh). Hh signaling is, interestingly, dependent upon a primary cilia-based mechanism (Bangs and Anderson 2017), and has been shown to be involved in the carcinogenic mechanisms of various types of cancer (Eguether and Hahne 2018). Iruzubieta et al. (2021) have now investigated the potential role of primary cilia-driven Hh signaling in the progression of bladder cancers. Urothelial tumors, the most common form of bladder cancer, are classified into non-muscle invasive bladder cancers (NMIBC) and muscle invasive bladder cancers (MIBC) (Humphrey et al. 2016). In their study, utilizing tissue samples from normal urothelium and from both subclasses of urothelial cancer, Iruzubieta et al. (2021) used immunohistochemistry and immunofluorescence staining employing antibody markers for epithelial cell and mesenchymal cell phenotypes, Hh signaling pathway proteins, and cilia (Fig. 2). Furthermore, they performed a detailed transmission electron microscopic analysis of urothelial cells and tumors to characterize the ultrastructural features of the cells.

Fig. 2: Acetylated tubulin (red immunofluorescence) marks the ciliary axoneme, while pericentrin (green immunofluorescence) labels centrioles and, consequently, basal bodies; DAPI counterstaining (blue). From Iruzubieta et al. (2021).

Their immunohistochemical results demonstrated the occurrence of EMT in both types of bladder cancer, as well as the presence of primary cilia in cells from both normal and bladder tumor samples. The electron microscopy results detailed the ultrastructural features of the tumor cells and described for the first time the presence of primary cilia in healthy normal and cancerous bladder cells. Overall, their study added further details concerning the possible roles of the Hh signaling pathway and primary cilia in the process of urothelial cancer progression.

Implanting the idea of steroid hormone influence on epithelial cell polarity

The implantation of the human embryo into the endometrium represents a striking instance of non-cancerous tissue invasion. Indeed, just prior to embryo implantation, the human endometrium undergoes a complicated remodeling process, involving alterations in the polarity of epithelial cells related to the redistribution of junctional complex proteins, including desmosomal and adherens junction proteins. In this regard, in earlier published work, Buck and colleagues used immunofluorescence microscopy to investigate the localization and distribution of endometrial epithelial junction proteins during the human menstrual cycle (Buck et al. 2012), and further demonstrated the great utility of creating and using endometrial spheroids as a model system for studying human embryo implantation (Buck et al. 2015). In their current investigation, Buck et al. (2021) continue their use of the endometrial spheroid model to investigate the effect of steroid hormones and human choriogonadotropin on the polarity-inducing localization of cellular adhesion proteins. They created spheroid cultures from the Ishikawa human endometrial cell line, treated them with ovarian steroids or human choriogonadotropin, and then performed multilabel immunostaining followed by wide-field light microscopic imaging.
They found that treatment of the spheroids with progesterone, medroxyprogesterone acetate, or human choriogonadotropin resulted in a redistribution of the desmosomal plaque protein Dsp-1 to the basolateral membrane, while the zonula occludens protein ZO-1 remained in the apical membrane (Fig. 3). Likewise, the same hormone treatments resulted in a redistribution of the extracellular matrix adhesion protein α6-integrin to the lateral membrane; staining of human tissue samples from different stages of the menstrual cycle confirmed this redistribution of α6-integrin. Thus, these results extend and confirm the hypothesis that a hormone-induced decrease in epithelial cell polarity is required for the receptivity of the endometrium for embryo implantation. Moreover, the authors demonstrate the great value and utility of using cellular spheroids as 3D tissue models in human reproductive research, as has also been shown recently for other tissues such as lung (Cunniff et al. 2021) and diseases such as cancer (Huch and Koo 2015).

A chloride channel-associated protein keeps keratinocytes quiet

An insult that damages the skin barrier requires a quick response to restore its structure and function. Among the various repair components and mechanisms, different chloride channels may be involved, since they play a role in keratinocyte migration, proliferation, and differentiation (Dong et al. 2015; Guo et al. 2016; Pan et al. 2015) as well as in tumor suppression (Zhang et al. 2013). The activity of the chloride channels seems to be regulated by various chloride channel accessory proteins (Patel et al. 2009) that are also present in epidermal keratinocytes (Braun et al. 2010; Connon et al. 2004). Through their regulatory effect, they can modulate cell proliferation and apoptosis, and via their integrin-binding domains, they can promote cell adhesion and control migration and invasion. However, distinct functional species-related differences among the various chloride channel accessory proteins have been reported. Therefore, Hämäläinen et al. (2021) analyzed the expression and possible function of the rat calcium-activated chloride channel-associated protein rCLCA2 in cultured rat epidermal keratinocytes and correlated their findings with the mouse homolog in mouse skin (Fig. 4). They observed high and stable expression of rCLCA2 mRNA and protein in cultured rat epidermal keratinocytes and in organotypic cultures throughout the different stages of epidermal maturation. Through siRNA-mediated silencing, the authors showed that rCLCA2 facilitates UV-induced apoptosis; this condition did not, however, significantly influence keratinocyte migration in a scratch wound assay. Furthermore, they observed that a single UV irradiation resulted in a modest down-modulation of rCLCA2 mRNA lasting at least 7 days. In addition, the number of apoptotic cells caused by UV irradiation was reduced by rCLCA2 silencing.
Canadian food ladders for dietary advancement in children with IgE-mediated allergy to milk and/or egg

Food ladders are clinical tools already widely used in Europe for food reintroduction in milk- and egg-allergic children. Previously developed milk and egg ladders have limited applicability to Canadian children due to dietary differences and product availability. Herein we propose a Canadian version of cow's milk and egg food ladders and discuss the potential role that food ladders may have in the care of children with IgE-mediated allergies to cow's milk and/or egg, either as a method of accelerating the acquisition of tolerance in those who would outgrow the allergy on their own, or as a form of modified oral immunotherapy in those with otherwise persistent allergy.

To the editor,

Cow's milk and egg are among the most common food allergies in young children. IgE-mediated milk and egg allergies are not only significant causes of food-induced anaphylaxis in children but are also associated with numerous other adverse medical and psychosocial outcomes, including nutritional and growth concerns and impaired quality of life for both children and their caregivers [1-4]. Although milk and egg allergies have historically been regarded as having a good prognosis, with many children outgrowing their food allergy in childhood, recent studies suggest that the rate of resolution may be slowing over time, with only 50% resolution by 5-6 years of age and increasing persistence of these allergies into adolescence or adulthood [5-7]. High baseline sIgE levels, such as those greater than 10 kUA/L for egg and milk, may predict persistence of allergy [5-7].

Management of food allergies has historically been limited to avoidance with periodic reassessment. However, there is increasing recognition that children with egg and milk allergy may tolerate baked/processed forms of milk and egg, and that ongoing ingestion of these forms may help with resolution of their food allergy [8, 9]. Conformational changes in immune-activating epitopes that occur during the baking or heating process alter the allergenicity of milk and egg and may allow for tolerance [10].

Milk and egg ladders (henceforth called food ladders) are tools designed to guide patients through a home-based, gradual, stepwise introduction of increasingly allergenic forms of milk and egg in a demedicalized setting. Originally designed in the United Kingdom for the management of non-IgE-mediated food allergies, food ladders extrapolate from previous evidence that the vast majority of egg- and milk-allergic children are able to tolerate extensively heated forms of these allergens, such as in baked goods [11-15]. Regular ingestion of tolerated forms of milk and egg may induce accelerated tolerance, allowing liberalization of the diet to more allergenic forms of the food over time. Food ladders are now widely used in Europe for this purpose and are included in the British Society for Allergy & Clinical Immunology's guidelines for the management of egg allergy [16]. According to a 2017 survey of 114 healthcare professionals from around the globe, 68% of respondents reported that they utilized milk ladders [17]. In Canada, food ladders appear to be increasingly adopted by allergists. Despite this increasing use, there is a paucity of published research on food ladders.
Ball and Luyt studied the role of milk ladders in 86 milk-allergic children with a history of mild reactions to milk [18]; ultimately, 91% of children in their study were able to tolerate the majority of dairy products within 4-6 months. While 43% experienced minor adverse reactions, there were no cases of anaphylaxis. To our knowledge, there are no studies published to date describing egg ladder use.

European versions of food ladders have limited applicability to the Canadian diet, as they include foods that may be seldom consumed in many Canadian households. Hence, we developed the Canadian Food Ladders using foods more typically consumed by Canadian children [19, 20] (Figs. 1, 2). There are four "Steps" in each ladder, with the least allergenic forms of milk or egg in Step 1, progressing to the most allergenic forms in Step 4. Children are typically introduced to their relevant allergen at Step 1, starting with a grain- or pea-sized amount of food. If tolerated, the child should consume the food on a daily basis. The serving size offered is gradually increased as tolerated over several days to weeks until an age-appropriate amount is reached. We advise that children continue to consume age-appropriate serving sizes of foods at that step on a daily basis for at least 1-3 months before advancing to the next step in the ladder. If IgE-mediated allergy symptoms occur with the introduction of a new food on the ladder, the child should return to consuming previously tolerated foods for at least 1 month before again cautiously attempting to advance on the ladder. Parents should be counselled by the allergist overseeing their child's food ladder use on how to recognize and manage allergic reactions. If a child is confirmed to be fully tolerant to foods on a higher step of the ladder, they need not start at Step 1; rather, they may start at the step corresponding to foods currently tolerated. Caregivers are advised that children can progress as slowly through the food ladder as tolerated and desired, as even consuming baked goods regularly (Step 1) has been shown to promote tolerance [8, 9]. A schematic sketch of this advancement logic is given below.
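As a purely schematic illustration of the advancement rules just described (not clinical guidance, and not part of the published ladders), the stepwise logic can be sketched as follows; the step labels and day counts are illustrative placeholders only.

```python
# Schematic sketch of the food-ladder advancement logic described above.
# Step descriptions and durations are illustrative placeholders only.

LADDER = {
    1: "extensively heated (baked) forms - least allergenic",
    2: "less extensively heated forms",
    3: "lightly cooked forms",
    4: "uncooked/whole forms - most allergenic",
}

MIN_DAYS_AT_STEP = 30   # "at least 1-3 months" -> lower bound of ~1 month
SETBACK_DAYS = 30       # return to tolerated foods for at least 1 month

def next_action(step, days_at_step, reaction):
    """Suggest the next move on the ladder under the rules above."""
    if reaction:
        # IgE-mediated symptoms: drop back to previously tolerated foods.
        back = max(step - 1, 1)
        return (f"return to Step {back} foods for >= {SETBACK_DAYS} days, "
                f"then cautiously re-attempt Step {step}")
    if days_at_step < MIN_DAYS_AT_STEP:
        return f"continue daily age-appropriate servings at Step {step}"
    if step < 4:
        return f"advance to Step {step + 1}: {LADDER[step + 1]}"
    return "ladder complete; continue regular ingestion"

print(next_action(step=1, days_at_step=45, reaction=False))
```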
The Canadian Food Ladders are intended for use in preschool-aged children with a history of only mild IgE-mediated reactions to milk and/or egg. The use of food ladders is likely safest in preschool-aged children, based on safety data extrapolated from studies on oral immunotherapy revealing higher rates of anaphylaxis in older children compared with preschoolers [21, 22]. Contraindications to the use of food ladders include a previous life-threatening episode of anaphylaxis or asthma that remains inadequately controlled on medium-dose inhaled steroid therapy. Relative contraindications include both medical and socioeconomic factors, such as a recent severe asthma exacerbation, a language barrier, or cognitive impairment. Food ladders should be initiated at the recommendation of an allergist, and patients using a food ladder should receive regular follow-up (we suggest at least every 6 months).

Data regarding the efficacy of oral immunotherapy for food allergies have been promising, with excellent safety and effectiveness in the preschool-aged group [21, 23]. We propose that food ladders be considered a modified form of oral immunotherapy for preschoolers with very high baseline sIgE levels, or for older children, representing phenotypes that would be unlikely to outgrow their allergy via strict avoidance. Similar to oral immunotherapy, food ladders consist of the regular administration of small doses of food allergen and likely lead to similar immune changes that assist in establishing tolerance. In addition, food ladders have the added benefit of allowing children to gradually expand their diet, whether by promoting tolerance or by following the natural course of resolution of their food allergy in a home setting, while potentially using fewer healthcare resources than other models of oral immunotherapy delivery (especially given the lack of need for oral food challenges or multiple visits for conventional oral immunotherapy with the unheated food). Similar to other models of oral immunotherapy, food ladders have the potential to alleviate food allergy-related anxiety and improve quality of life for families and their children with milk and egg allergy.

However, while food ladders are a promising tool for facilitating dietary expansion for children with milk or egg allergies, further research is needed to improve confidence with their use. Further safety and efficacy data are needed, particularly for the egg ladder, where this evidence is mainly extrapolated from baked egg ingestion and oral immunotherapy studies. Additionally, with further study, this concept may ultimately prove safe and appropriate for older children and adults. And although we propose that food ladders be considered a modified form of oral immunotherapy, long-term data are needed to establish whether their use truly increases reaction thresholds and protects against potential accidental exposures. Finally, qualitative data from patients and evaluation of the impact of food ladder use on quality of life and food allergy-related anxiety are also needed.

Conclusion

Food ladders offer a flexible and proactive approach to the management of lower-risk egg- or milk-allergic children. They have the potential to facilitate gradual dietary expansion and accelerate the resolution of allergy. For children with persistent allergy beyond the preschool age, we propose that food ladders be considered a modified form of oral immunotherapy. While food ladders are not appropriate for use in all children with egg and milk allergies, they are a promising tool, with evidence supporting efficacy and safety extrapolated from studies on oral immunotherapy as well as from limited studies of milk ladder use.
Simultaneous detection of atmospheric HONO and NO2 utilising an IBBCEAS system based on an iterative algorithm

We present an improved incoherent broadband cavity-enhanced absorption spectroscopy (IBBCEAS) system based on an iterative retrieval algorithm for the simultaneous detection of atmospheric nitrous acid (HONO) and nitrogen dioxide (NO2). The conventional IBBCEAS retrieval algorithm depends on the absolute change in the light intensity, which requires high light source stability and stable transmission of the light intensity through all optical components. The new algorithm has an iterative module to obtain the effective absorption optical path length, and the concentrations of HONO and NO2 are then determined by differential optical absorption spectroscopy (DOAS) retrieval; thus, the method is insensitive to fluctuations in the absolute light intensity. The robustness of the system is verified by simulating the influence of relative changes in the light intensity on the spectral retrieval results. The effect of nitrogen purging in front of the cavity mirrors, which shortens the actual cavity length, was measured and corrected using NO2 gas samples. Allan deviation analysis was conducted to determine the system stability; it indicated that the detection limits (2σ) of HONO and NO2 are 0.08 and 0.14 ppbv, respectively, at an integration time of 60 s. Furthermore, Kalman filtering was used to improve the measurement precision of the system. The measurement precision at an integration time of 3 s can be improved 4.5-fold by applying Kalman filtering, which is equivalent to the measurement precision at an integration time of 60 s without Kalman filtering. Atmospheric HONO and NO2 concentrations were observed by the IBBCEAS system based on an iterative algorithm and were compared with values measured by conventional IBBCEAS.

Introduction

As nitrous acid (HONO) can absorb solar radiation between 300 and 400 nm to form the hydroxyl radical (OH) and nitric oxide (NO), it has been demonstrated that HONO contributes significantly to the OH budget during the daytime (Harrison et al., 1996). Recent studies have shown that the contribution of HONO to OH production plays an important role not only in the morning but also throughout the day (Spataro et al., 2013; Alicke, 2002). It has been reported that the contribution of HONO photolysis to OH production can reach 60 % during the day (Michoud et al., 2012; Lu et al., 2013). However, the exact mechanisms leading to HONO formation are still under discussion. Existing gas-phase sources cannot explain the high concentrations of HONO observed during the daytime (Zhou et al., 2002), which indicates strong unknown daytime sources of HONO (Acker et al., 2006; Kleffmann, 2005). Therefore, fast and accurate measurement of the HONO concentration is a prerequisite for studying the atmospheric chemical behaviour of HONO and its contribution to regional oxidation. However, the lifetime of HONO may be only a few minutes, with daytime concentrations that can be as low as a few hundred parts per trillion by volume (pptv, 10^-12) (Laufs et al., 2017; Hou et al., 2016). Therefore, the rapid and accurate detection of HONO has become a challenge.

HONO detection methods are mainly classified into two categories: one is based on wet chemical techniques, and the other on spectroscopic methods.
The wet chemical methods mainly include denuder absorption-ion chromatography (Denuder-IC; Neftel et al., 1996), gas and aerosol collector (GAC) systems (Dong et al., 2012), stripping coil-ion chromatography (SC-IC; Xue et al., 2019; Cheng et al., 2013), and long-path absorption photometry (LOPAP; Chen et al., 2014; Heland et al., 2001; Kleffmann et al., 2006). These methods have low detection limits, reaching several parts per trillion by volume. However, they need to be calibrated to obtain accurate HONO concentrations, and their maintenance is cumbersome, requiring frequent replacement of the chemical solutions.

Spectroscopic methods are based on the Beer-Lambert law and quantify the concentration of HONO by measuring its absorption spectrum in a specific wavelength region, which is not easily affected by chemical interference. Spectroscopic methods can be divided into conventional absorption methods and cavity-enhanced methods (Fiedler et al., 2003). Conventional absorption methods mainly include differential optical absorption spectroscopy (DOAS; Tsai et al., 2018; Qin et al., 2009), Fourier transform infrared spectroscopy (FTIR; Stockwell et al., 2014), and infrared quantum cascade laser (QCL) absorption spectroscopy (Cui et al., 2019). In these methods, the absorption spectrum of the gas is obtained by passing the beam through multi-pass cells or a long open path, and the optical path length is the key factor affecting the sensitivity of the system. The cavity-enhanced methods are based on high-finesse optical cavity-enhanced absorption spectroscopy, mainly including cavity ring-down spectroscopy (CRDS; Wang and Zhang, 2000) and incoherent broadband cavity-enhanced absorption spectroscopy (IBBCEAS; Jordan et al., 2019; Duan et al., 2018; Gherman et al., 2008; Nakashima and Sadanaga, 2017; Donaldson et al., 2014; Scharko et al., 2014; Wu et al., 2014). IBBCEAS methods have a higher spatial resolution and are easier to deploy on different platforms. Compared with CRDS techniques based on a single wavelength, IBBCEAS can achieve simultaneous measurements of multiple gases using a broadband light source. In IBBCEAS, the light beam is reflected back and forth in a high-finesse optical cavity formed by two high-reflectivity mirrors; an optical cavity of several tens of centimetres can thereby extend the effective absorption path length to several kilometres, improving the detection limit of the system. In recent years, IBBCEAS has been demonstrated for HONO field measurements in remote (Duan et al., 2018; Tang et al., 2019) and urban regions (Crilley et al., 2019; Wu et al., 2014; Min et al., 2016; Nakashima and Sadanaga, 2017).

Although a large number of HONO intercomparisons between different instruments have been carried out in previous studies (Crilley et al., 2019; Duan et al., 2018; Xue et al., 2019; Kleffmann et al., 2006; Stutz et al., 2010), deviations in HONO measurements between different methods still exist. In a recent field observation, the correlation between different instruments was found to be high (r² > 0.97); unfortunately, the absolute concentration difference was observed to reach 39 % (Crilley et al., 2019). Therefore, the factors affecting measurement accuracy need to be discussed further. Conventional IBBCEAS retrieves the HONO concentration by measuring the absolute change in the light intensity.
It therefore depends heavily on the stability of the instrument and is sensitive to environmental factors, such as temperature and vibration. Recently, some researchers have equated the optical cavity of IBBCEAS to a multi-pass cell and determined the gas concentration according to the DOAS retrieval algorithm (Herman et al., 2009; Hoch et al., 2014; Horbanski et al., 2019; Meinen et al., 2010; Platt et al., 2009; Thalman and Volkamer, 2010). Because DOAS retrieval uses the narrowband differential absorption characteristics of a trace gas to quantify the gas concentration, an IBBCEAS system based on DOAS retrieval is insensitive to broadband changes in light intensity. The key point of this technique is to determine the effective absorption path length so that the gas concentration can be determined by the DOAS retrieval. The most common method to correct the effective absorption path length uses the measured optical density (Hoch et al., 2014). However, in this way, as with conventional IBBCEAS, the retrieved gas concentration is affected by fluctuations in the intensity of the light source. There have also been attempts to calculate the effective absorption optical path using the known absorption of O4, but the measurement accuracy has been limited (Thalman and Volkamer, 2010; Herman et al., 2009). Recently, Horbanski et al. (2019) used an iterative method to calculate the effective absorption optical path and developed a nitrogen dioxide (NO2) instrument, demonstrating the effectiveness of the method. Compared with NO2, the HONO concentration ranges from parts per trillion by volume to several parts per billion by volume, and its spatial and temporal distributions are highly variable, which brings challenges to the accurate measurement of atmospheric HONO. Therefore, the measurement of HONO requires the IBBCEAS instrument to be highly stable, and the application of an iterative algorithm helps to improve the accuracy of the HONO measurement.

This paper describes an improved IBBCEAS system for the simultaneous detection of atmospheric HONO and NO2. The concentrations of HONO and NO2 are determined by multiple iterations combined with the DOAS retrieval algorithm. By using an iterative algorithm, the instrument eliminates the influence of broadband changes in light intensity and also mitigates the effects of light source instability and mechanical vibration. Moreover, Kalman filtering is an effective post-processing technique for gas concentration measurements (Wu et al., 2010; Leleux et al., 2002). Kalman filtering was first applied to real-time laser absorption spectroscopy measurements of CO2 and NH3 at the part per million by volume level (Leleux et al., 2002). In this work, we applied the Kalman filtering technique to trace gas concentration measurements, thereby improving the measurement precision of the system. To our knowledge, this is the first use of the Kalman filtering technique for HONO and NO2 measurements. The capability of our instrument to make fast, high-sensitivity measurements of HONO and NO2 is of great significance with respect to understanding the sources of HONO and studying its role in atmospheric chemistry.
2 System and principle

Theory of IBBCEAS

The basic idea of the IBBCEAS system based on an iterative algorithm is to use a high-finesse optical cavity to increase the effective absorption light path, thereby improving the detection sensitivity of the instrument, and to use the DOAS retrieval algorithm to determine the gas concentration (Hoch et al., 2014; Herman et al., 2009; Meinen et al., 2010; Platt et al., 2009; Thalman et al., 2015; Thalman and Volkamer, 2010). For this system, the cavity-enhanced optical density D_CE(λ) is defined as follows (Horbanski et al., 2019):

D_CE(λ) = ( Σ_i c_i σ_i(λ) + ε_b(λ) ) · L_eff(λ),    (1)

where c_i is the concentration of the gas species i, σ_i(λ) is the absorption cross section of the gas at wavelength λ, ε_b(λ) is the broadband extinction caused by Rayleigh scattering and Mie scattering, and L_eff(λ) is an effective path length. The DOAS evaluation determines the gas concentrations based on

ln( I_0(λ) / I(λ) ) = Σ_i c_i σ'_i(λ) · L_eff(λ) + Σ_k a_k λ^k.    (2)

Here, I_0(λ) is the intensity of light passing through the cavity without gas absorption, I(λ) is the intensity of light passing through the cavity with gas absorption, σ'_i(λ) is the differential part of the absorption cross section of the gas, and the polynomial term in Eq. (2) represents the broadband spectral structure in the measurement spectrum other than the differential absorption part. In traditional DOAS, the optical path length L_eff is a constant determined by the physical distance, whereas in IBBCEAS it is not constant and depends on the optical density (Platt et al., 2009). The gas concentrations of IBBCEAS measurements can be obtained using the DOAS evaluation. Here, the wavelength-dependent effective absorption optical path L_eff(λ) is calculated by an iterative algorithm. The implementation of the algorithm is described in Sect. 3.3.

Figure 1. The optical layout of the IBBCEAS system. MFC represents the mass flow controller, and HR mirror represents the high-reflectivity mirror.

Optical layout

The IBBCEAS system in this study was based on the research of Duan et al. (2018). The optical layout of the IBBCEAS system is shown in Fig. 1. The light source of the instrument is a near-ultraviolet light-emitting diode (LED; LED Engin) with a centre wavelength of 368 nm. In order to ensure the stability of the LED light intensity, the LED is mounted on a Peltier device with a heat sink, and the temperature of the LED is stabilised at 20 ± 0.1 °C via a thermistor temperature sensor and a PID (proportional integral derivative) controller. The light from the LED is coupled into a 68 cm long optical cavity through an achromatic lens (Edmund Optics). The optical cavity is composed of two high-reflectivity mirrors and a perfluoroalkoxy alkane (PFA) tube with an inner diameter of 22 mm. The high-reflectivity mirrors (LAYERTEC) are installed in adjustment frames at both ends of the optical cavity. The light transmitted through the optical cavity is filtered by a bandpass filter (BG3, Newport), focused by an off-axis parabolic mirror (Edmund Optics), and finally coupled into one end of an optical fibre (600 µm, Ocean Optics). The other end of the optical fibre is connected to a spectrometer (QE65000 Pro, Ocean Optics), which collects the corresponding spectral signal. In order to prevent reflectivity degradation during the measurement due to the adsorption of aerosol or organic species onto the mirror surfaces, the surfaces of the two high-reflectivity mirrors are purged with high-purity nitrogen (99.999 %).
The flow rates of the purge are controlled by two mass flow controllers at 0.1 L min^-1. The sampling tube of the instrument consists of a PFA tube with an outer diameter of 6 mm. A 0.2 µm polytetrafluoroethylene (PTFE) filter membrane is connected to the inlet of the sampling port to prevent particles from entering the optical cavity. A diaphragm pump draws ambient air into the instrument through the sampling tube at a flow rate of 6 L min^-1. The ambient air enters the system and is divided into two paths via a three-way PFA joint: one airflow is discharged from the air outlet, and the other enters the cavity at a flow rate of 1.2 L min^-1 set by a mass flow controller. This sampling gas path shortens the residence time of the atmospheric air in the sampling tube: it increases the total sampling flow rate while keeping the cavity flow rate constant, thereby reducing the secondary generation and loss of HONO in the sampling tube. The software control interface is programmed in LabVIEW to ensure the orderly operation of the mass flow controllers and the spectrometer during instrument operation.

Determination of mirror reflectivity

As the absorption optical path of the gas in the optical cavity is related to the reflectivity of the mirrors, it is necessary to determine the reflectivity of the high-reflectivity mirrors before the gas concentration is retrieved. Following the method of Washenfelder et al. (2008), the Rayleigh scattering difference between nitrogen and helium is used to determine the wavelength-dependent reflectivity of the cavity mirrors R(λ):

R(λ) = 1 − d_0 · ( (I_N2(λ)/I_He(λ)) · ε_Ray,N2(λ) − ε_Ray,He(λ) ) / ( 1 − I_N2(λ)/I_He(λ) ).    (3)

Here, d_0 is the cavity length, ε_Ray is the extinction caused by Rayleigh scattering, and I_N2 and I_He are the light intensities when the cavity is filled with nitrogen and helium respectively. The accuracy in determining the mirror reflectivity affects the accuracy of the subsequent gas concentration measurements. Therefore, in order to achieve high measurement accuracy, it is necessary to stabilise the gas temperature inside the optical cavity. High-purity nitrogen (99.999 %) and high-purity helium (99.999 %) are introduced into the optical cavity in turn, and the corresponding spectra are recorded once they become stable after gas filling. The wavelength-dependent mirror reflectivity is calculated by substituting the ratio of the recorded nitrogen and helium spectrum intensities into Eq. (3). The resulting dependence of the mirror reflectivity on wavelength is shown in Fig. 2. The red line is the spectrum measured when the cavity is flushed with nitrogen, and the black line is obtained when the cavity is filled with helium. It can be seen that the reflectivity of the mirrors is ~0.99980 at a wavelength of 368.2 nm.

Figure 2. Dependences of the transmission intensity when the cavity is filled with N2 and He gas respectively, and the calculated cavity mirror reflectivity.
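As a numerical illustration of Eq. (3), the sketch below retrieves the mirror reflectivity from simulated N2 and He transmission spectra; the Rayleigh extinction values and the simulated intensities are placeholders, not the data behind Fig. 2.

```python
import numpy as np

# Sketch of the N2/He mirror-reflectivity calibration, Eq. (3).
# Rayleigh extinctions and spectra below are synthetic placeholders.

d0 = 0.68                                            # physical cavity length, m
wavelength = np.linspace(363e-9, 388e-9, 500)        # m
eps_ray_n2 = 5.0e-5 * (368e-9 / wavelength) ** 4     # N2 Rayleigh extinction, m^-1 (illustrative)
eps_ray_he = eps_ray_n2 / 60.0                       # He scatters far less (illustrative ratio)

# Simulate transmitted intensities for mirrors with R_true = 0.9998:
# cavity transmission scales roughly as 1 / (1 - R + eps * d0).
R_true = 0.9998
I_n2 = 1.0 / (1 - R_true + eps_ray_n2 * d0)
I_he = 1.0 / (1 - R_true + eps_ray_he * d0)

ratio = I_n2 / I_he
R = 1 - d0 * (ratio * eps_ray_n2 - eps_ray_he) / (1 - ratio)   # Eq. (3)

idx = np.argmin(np.abs(wavelength - 368.2e-9))
print(f"retrieved reflectivity at 368.2 nm: {R[idx]:.6f}")     # ~0.999800
```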
Calibration of the effective cavity length

Considering the mixing between the actual atmospheric gas in the cavity and the nitrogen purge gas within a narrow space in front of the cavity mirrors during measurements, the effective cavity length becomes shorter than the physical length. Calibration experiments are needed to determine the effective cavity length (d_eff). We determine the value of d_eff by measuring the effective concentration of an NO2 gas mixture with and without purge gas flowing to the cavity mirrors. The NO2 gas sample is made by mixing a cylinder gas with a nominal 10 ppm of NO2 and zero air in a Teflon (FEP) gas bag, which has low adsorption properties. The NO2 gas mixture in the FEP gas bag is then injected into the IBBCEAS system and measured using the iterative algorithm. The inlet flow rate of NO2 was 1 standard litre per minute, and the purging flow rate at both ends of the optical cavity was 0.1 L min^-1. The purge gas is switched on and off intermittently during operation of the instrument, and the spectrum at the corresponding time is recorded. The NO2 concentration is retrieved from the spectra measured during this period. Figure 3 shows the change in the NO2 concentration with the purge on and off. The effective concentration of NO2 becomes higher after the purge is switched off, and the concentration of NO2 returns to the previous value once the purge is switched on again. This indicates that the mirror reflectivity and the NO2 gas mixture remained unchanged before and after the purge switching, and that the concentration of NO2 was relatively stable during the purge-on and purge-off periods. The average concentration of NO2 measured with the purge on is 62.64 ± 0.32 ppb, and the average concentration measured after the purge is switched off is 70.92 ± 0.19 ppb. For a cavity with a physical length (d_0) of 68 cm, d_eff can be calculated according to Eq. (4):

d_eff = d_0 · ( c_purge on / c_purge off ).    (4)

The calculated d_eff is 60.06 cm.
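Equation (4) amounts to a one-line correction; the snippet below simply reproduces the purge-on/purge-off numbers quoted above.

```python
# Effective cavity length from the purge-on/purge-off NO2 retrievals, Eq. (4).
d0 = 68.0            # physical cavity length, cm
c_purge_on = 62.64   # ppb, retrieved with the mirror purge flowing
c_purge_off = 70.92  # ppb, retrieved with the purge off

d_eff = d0 * c_purge_on / c_purge_off
print(f"effective cavity length: {d_eff:.2f} cm")   # 60.06 cm
```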
Determination of the effective absorption optical path and concentration retrieval

For traditional DOAS fitting, the retrieval result is not affected as long as there is no change in the narrowband structure, as the DOAS system is insensitive to variations in the absolute light intensity. The optical absorption path length of a conventional DOAS system is constant. In the IBBCEAS system, however, the effective optical absorption path length is strongly wavelength dependent, which is related to the wavelength dependence of the mirror reflectivity and to the intra-cavity absorption. For conventional IBBCEAS, the wavelength-dependence problem is solved by converting the optical density to the extinction, but this method depends heavily on the absolute stability of the light intensity. For the IBBCEAS system based on an iterative algorithm, the absorption optical path is not equal to the average optical path L_0(λ); thus, L_0(λ) needs to be corrected. Previous studies have tried to calculate the effective absorption optical path using the known absorption of gases such as O4. However, the measurement accuracy is limited when the relative variation in the wavelength dependence of the path length is corrected at a single wavelength: with the peak wavelength of O4 in the retrieval band at 380 nm, where the O4 absorption cross section is 3 times weaker than at 477 nm, errors may occur in the retrieval of the O4 slant column density. According to Horbanski et al. (2019), a scalar correction factor is not suitable for strong differential absorbers, because their absorption band distortion can only be corrected by a wavelength-resolved correction factor. Recently, Horbanski et al. (2019) proposed an iterative method to calculate the wavelength-dependent effective absorption optical path length; through multiple iterative retrievals, the effective absorption optical path can finally be determined.

Because a filter membrane is installed at the front end of the sampling port in this work, only gas absorption and Rayleigh scattering are considered in the concentration retrieval. According to Platt et al. (2009), the cavity-enhanced optical density D_CE(λ) of the system can be defined as

D_CE(λ) = ln( I_tot0(λ) / I_tot(λ) ),    (5)

where I_tot and I_tot0 are the light intensities detected with and without gas absorption in the optical cavity respectively; the total transmission can be considered as the sum of the transmissions of all individual intra-cavity sub-beams of consecutive mirror reflections, the nth sub-beam having undergone 2n reflections and being attenuated by R(λ)^(2n) e^(−(2n+1)α(λ)d_0), with α(λ) = Σ_i c_i σ_i(λ) + ε_Ray(λ). Using the Beer-Lambert law and summing this geometric series, Eq. (5) can be changed to

D_CE(λ) = α(λ) d_0 + ln( (1 − R(λ)² e^(−2α(λ)d_0)) / (1 − R(λ)²) ).    (6)

According to the description of Platt et al. (2009), the relationship between the effective optical path length L_eff(λ) and the optical density D_CE(λ) is as follows:

L_eff(λ) = D_CE(λ) / α(λ).    (7)

By substituting Eq. (6) into Eq. (7), L_eff(λ) can be calculated:

L_eff(λ) = d_0 + (1/α(λ)) · ln( (1 − R(λ)² e^(−2α(λ)d_0)) / (1 − R(λ)²) ).    (8)

Therefore, if we know the optical density, we can correct the effective absorption optical path, and we can continuously approach the real effective absorption optical path using multiple iterative retrievals. Horbanski et al. (2019) carried out a detailed derivation; here, a brief introduction of the steps is given:

1. Assuming that the concentrations of HONO and NO2 are known, the optical density is calculated according to Eq. (6).

2. Combined with this optical density and the mirror reflectivity obtained from Eq. (3), the effective absorption optical path is calculated according to Eq. (8).

3. Using the DOAS method to fit D_CE,meas(λ) with L_eff(λ)·σ'_i(λ), new HONO and NO2 concentration values are obtained.

The HONO and NO2 concentrations obtained in Step 3 are substituted into Step 1 to recalculate the optical density; thus, Steps 1-3 are repeated until the changes in the HONO and NO2 concentrations with respect to their values in the previous iteration become less than an allowable tolerance. A stop condition for the iteration is that the concentration difference between two retrievals is less than the fit error. The final retrieval results of the HONO and NO2 concentrations are then obtained. The retrieval steps are shown in Fig. 4. All of the data processing is based on the DOASIS software (Kraus, 2006). The algorithm takes the high-resolution cross sections of HONO (Stutz et al., 2000), NO2 (Voigt et al., 2002), and O4 (Greenblatt et al., 1990) as input and convolves these high-resolution cross sections with the instrument function of 0.49 nm FWHM (full width at half maximum). The fitting range of the spectrum is from 363 to 388 nm. Figure 5 shows the change in the effective optical path length during an iterative retrieval, where L1, L2, L3, L4, and L5 are the absorption optical paths from the zeroth to the fourth iteration respectively. The effective absorption optical path length finally converges as the number of iterations increases. Based on the effective absorption optical path length at the final iteration, the real concentration can be obtained from the DOAS fitting. Figure 6 shows the final HONO and NO2 concentrations obtained using the iterative algorithm in the retrieval of an actual atmospheric spectrum. The HONO and NO2 concentrations obtained by the final fitting are 0.78 and 29.18 ppbv respectively.
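A compact sketch of the iteration (Steps 1-3) is given below. The DOAS fit is reduced to a linear least-squares fit of the differential structures, and the cross sections, concentrations, and mirror parameters are synthetic placeholders; the actual retrieval uses DOASIS with convolved literature cross sections of HONO, NO2, and O4.

```python
import numpy as np

# Schematic sketch of the iterative L_eff / DOAS retrieval (Steps 1-3).
lam = np.linspace(363, 388, 400)       # wavelength grid, nm
R = np.full_like(lam, 0.9998)          # mirror reflectivity
d0 = 68.0                              # physical cavity length, cm

# Placeholder "differential" cross sections, cm^2 molecule^-1 (two gases).
sigma = np.stack([np.exp(-((lam - 368.0) / 2.0) ** 2) * 1.0e-19,
                  np.sin(lam / 3.0) ** 2 * 5.0e-20])
c_true = np.array([5.0e11, 2.0e13])    # molecules cm^-3 (synthetic)
alpha_true = sigma.T @ c_true          # extinction, cm^-1

def D_CE(alpha):
    """Forward model, Eq. (6): optical density for extinction alpha."""
    return alpha * d0 + np.log((1 - R**2 * np.exp(-2 * alpha * d0)) / (1 - R**2))

D_meas = D_CE(alpha_true)              # "measured" optical density

c = np.zeros(2)                        # initial concentration guess
for _ in range(20):
    alpha = np.maximum(sigma.T @ c, 1e-15)   # Step 1: extinction from current c
    L_eff = D_CE(alpha) / alpha              # Step 2: Eqs. (7)-(8), wavelength-resolved
    A = sigma.T * L_eff[:, None]             # Step 3: linear "DOAS" fit of D_meas
    c_new, *_ = np.linalg.lstsq(A, D_meas, rcond=None)
    if np.allclose(c_new, c, rtol=1e-6):
        break
    c = c_new

print("retrieved / true concentrations:", c / c_true)   # converges toward [1, 1]
```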
Detection limit and uncertainty of the system

The Allan deviation is often used to characterise the sensitivity and stability of a system. Ideally, the detection sensitivity of the system can be improved by averaging successive measurements or by integrating the signal over a longer time period. However, actual measurement processes are affected by instrument drifts and other noise contributions, so the system can only achieve its optimal detection sensitivity before slowly varying factors become dominant. The Allan deviation can therefore be used to describe the overall performance and stability of the system. Zero air was introduced into the optical cavity at a flow rate of 0.1 L min^-1, and 10 000 spectra were continuously recorded. The integration time of each spectrum was 3 s, and each spectrum was fitted using the DOAS algorithm to obtain the HONO and NO2 concentrations. The Allan deviation was then calculated according to Eq. (9):

σ_Allan²(τ) = ( 1 / (2(m − 1)) ) · Σ_{k=1}^{m−1} ( y_{k+1}(τ) − y_k(τ) )²,    (9)

where m is the number of time intervals and y_k(τ) is the average concentration during the kth interval of length τ. Figure 7 shows the variation of the Allan deviation of the system with integration time. At an integration time of 60 s, the detection limits (2σ) of HONO and NO2 are 0.08 and 0.14 ppbv respectively. Table 1 shows the detection limits for HONO and NO2 measured by different IBBCEAS instruments reported in the literature; our system has a higher detection sensitivity than most other instruments. The Allan deviation decreases continuously with averaging time over several hours, which shows that the instrument's performance is very stable.

The uncertainty of the system may be determined by Gaussian error propagation. It is mainly composed of contributions from the uncertainties in the absorption cross sections of the spectral features, the mirror reflectivity, the effective cavity length, and the temperature and pressure in the cavity: the uncertainty of the mirror reflectivity is 5 %, the uncertainty of the effective cavity length is 3 %, the uncertainty of the temperature and pressure in the cavity is 1 %, and the uncertainty of the fit retrieval is 4 %. According to the literature, the uncertainties of the NO2 and HONO absorption cross sections are 4 % (Voigt et al., 2002) and 5 % (Stutz et al., 2000) respectively. Therefore, the total uncertainty of the instrument is about 8.1 % for NO2 measurements and about 8.7 % for HONO measurements.
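Equation (9) is straightforward to implement; the following sketch computes the Allan deviation of a synthetic white-noise series standing in for the 3 s zero-air retrievals (our illustration, not the analysis code behind Fig. 7).

```python
import numpy as np

# Allan deviation of a concentration time series, Eq. (9).
def allan_deviation(y, bin_size):
    """sigma(tau) from non-overlapping averages of bin_size samples."""
    m = len(y) // bin_size                      # number of averaging intervals
    if m < 2:
        return np.nan
    means = y[: m * bin_size].reshape(m, bin_size).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(1)
y = rng.normal(0.0, 0.18, size=10_000)          # synthetic 3 s retrievals, ppbv

for k in (1, 5, 20, 100):                       # bin sizes, in units of 3 s spectra
    print(f"tau = {3 * k:4d} s   sigma = {allan_deviation(y, k):.3f} ppbv")
```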
Effect of light intensity fluctuation

In order to verify the insensitivity of the IBBCEAS system based on an iterative algorithm to broadband changes in the light intensity, we carried out a light intensity fluctuation experiment. By adjusting the angle of the off-axis parabolic mirror, the original lamp spectrum was attenuated to 95 %, 90 %, 85 %, 80 %, and 75 % of its intensity. After each adjustment of the lamp intensity, the actual atmospheric HONO concentration was measured. Three methods were then used to retrieve the measured atmospheric spectra: in method 1, the original lamp spectrum combined with the iterative algorithm; in method 2, the original lamp spectrum combined with the conventional IBBCEAS retrieval algorithm; and in method 3, the lamp spectrum recorded after each change combined with the conventional IBBCEAS retrieval algorithm. In order to ensure that the light intensity was not affected by external environmental factors during the measurement process, and to avoid the influence of externally caused light intensity fluctuations on the measurement results, the measurement time of each cycle was kept as short as possible.

Figure 8 shows the retrieval results for the three methods under different relative light source intensities. The retrieval results of method 2 are strongly influenced by the fluctuation in the light intensity, whereas the difference between the retrieval results of methods 1 and 3 is relatively small. Figure 9 shows the fitting results of method 2 and method 3 after the light intensity changes. When the light intensity changes, the root mean square (RMS) of the fitting residual increases 2-fold. Although the HONO concentration can still be fitted after the light intensity change in method 2, the attenuation of the lamp spectrum is misattributed to HONO absorption, which leads to an overestimated result. Method 3 uses the lamp spectrum recorded after the change in light intensity for the retrieval, which ensures the absolute stability of the light intensity; however, such an updated lamp spectrum cannot be obtained in time during actual measurements. Method 1 always uses the original lamp spectrum for the retrieval, which shows that it is independent of fluctuations in the intensity of the light source.

Figure 9. Spectral fitting results of (a) method 3 and (b) method 2 after the light intensity change. The RMS of the fit residuals in method 2 is 4.94 × 10^-9; the RMS of the fit residuals in method 3 is 2 × 10^-9.

Ambient measurement and comparison with conventional IBBCEAS

Atmospheric HONO and NO2 observations were carried out in the suburbs of Hefei City in Anhui Province (31.89° N, 117.17° E) from 29 September to 1 October 2019. The IBBCEAS system based on an iterative algorithm was placed in a room kept at an approximately constant temperature by an air conditioner, and the sampling port was outside the room. A conventional IBBCEAS instrument (Duan et al., 2018), developed and reported by our research group previously, and the IBBCEAS system based on an iterative algorithm were used to measure the concentrations of HONO and NO2. The length of the air sampling tubes of both instruments is about 2 m. The time series of the HONO and NO2 concentrations measured by the two instruments are shown in Fig. 10; the integration time was 1 min. The highest HONO concentration of 3.12 ppbv appeared on the evening of 29 September, and the average HONO value during the measurement period was 0.96 ppbv. The average concentration of NO2 was 15.45 ppbv, and the maximum value was 49.55 ppbv. Figure 11 shows the correlation between the measurement results from the conventional IBBCEAS and the new IBBCEAS system based on an iterative algorithm. The correlation coefficients (R²) of the HONO and NO2 results obtained by the two instruments are 0.94 and 0.99 respectively. The differences between the two IBBCEAS systems are 1 % and 7 % for the HONO and NO2 measurements respectively, which is within the measurement uncertainties of the instruments discussed in Sect. 3.4. Although there have been many intercomparisons of HONO measurements, differences between instruments have persisted. Crilley et al. (2019) reported that the IBBCEAS instrument and wet chemical methods have good consistency in field observations, but the absolute concentration difference is 12 %-39 %.
The cause of the difference was not clearly identified; therefore, more HONO comparison experiments are needed in the future.

Figure 10. The time series of the (a) NO2 and (b) HONO concentrations measured by the two instruments. All data are 1 min averages. The red data points are the results of retrieval using the iterative algorithm, and the blue data points are the results of retrieval using the conventional IBBCEAS algorithm.

Figure 11. The correlation between the measurement results of the two IBBCEAS systems: (a) correlation between the HONO concentrations determined by the two IBBCEAS systems; (b) correlation between the NO2 concentrations determined by the two IBBCEAS systems.

The application of Kalman filtering to measurement results

The Allan deviation describes the relationship between the integration time of the system and its stability. According to the Allan deviation results, the optimum integration time of the system can be obtained so that the system achieves its best detection sensitivity. Kalman filtering can further improve the measurement precision of the system (Leleux et al., 2002; Fang et al., 2017; Wu et al., 2010). Compared with a simple moving average, Kalman filtering can deal with the "lag" effect and abnormal peak values (Leleux et al., 2002). The basic idea of Kalman filtering is to obtain a prediction for the present time from the state at the previous time and to fuse this prediction with the observed state measured by the sensor to obtain the estimate of the current state (Wu et al., 2010; Leleux et al., 2002). This can be expressed as follows:

x̂_k = x̂_k⁻ + K_k ( z_k − x̂_k⁻ ),    (10)

where x̂_k is the updated estimate at time step k, x̂_k⁻ is the prediction based on time step k − 1, K_k is the Kalman gain, and z_k is the measurement at time step k. In this work, the variance of the previous 10 concentration measurements is used in the Kalman filtering. Figure 12 shows the measurements of the HONO and NO2 concentrations under zero-air conditions, with and without the Kalman filtering applied. The measurement fluctuation at a 3 s integration time was 0.33 and 0.18 ppbv for NO2 and HONO respectively. After applying Kalman filtering, the effect of the fluctuation is reduced, and the measurement precision at a 3 s integration time becomes 0.04 and 0.07 ppbv for NO2 and HONO respectively. The results show that the measurement precision is improved by a factor of 4.5, which is comparable to the measurement precision at an integration time of 60 s. Therefore, Kalman filtering can enhance the measurement precision and reduce measurement noise. Figure 13 shows the measurement of the HONO and NO2 concentrations with and without Kalman filtering under ambient conditions. In order to capture sharp changes in the NO2 concentration, the filter's gain parameter was set to 40. The filtered results follow the changes in the measured concentration, effectively reducing the influence of noise on the concentration results and improving the measurement precision of the system.
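The scalar filter of Eq. (10) can be sketched in a few lines. Estimating the measurement noise from the variance of the previous 10 retrievals follows the description above; the process-noise constant q and the initialization are our illustrative choices.

```python
import numpy as np

# Minimal scalar Kalman filter for a concentration series, Eq. (10):
#   x_hat_k = x_hat_k_minus + K_k * (z_k - x_hat_k_minus)
def kalman_smooth(z, q=1e-4):
    x_hat = z[0]                       # initial state estimate
    p = np.var(z[:10])                 # initial estimate variance
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        r = np.var(z[max(0, k - 10):k]) if k >= 2 else p   # measurement noise
        p = p + q                      # predict forward
        K = p / (p + r)                # Kalman gain
        x_hat = x_hat + K * (zk - x_hat)   # Eq. (10) update
        p = (1 - K) * p
        out[k] = x_hat
    return out

rng = np.random.default_rng(2)
z = 1.0 + rng.normal(0.0, 0.18, size=2_000)   # noisy 3 s HONO retrievals, ppbv
x = kalman_smooth(z)
print(f"raw scatter {z.std():.3f} ppbv -> filtered {x.std():.3f} ppbv")
```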
Conclusions

Here, we have developed an IBBCEAS system based on an iterative algorithm for the simultaneous measurement of atmospheric HONO and NO2. The effective absorption optical path length is obtained by the iterative algorithm, and the concentrations of HONO and NO2 are then determined by DOAS retrieval. The iterative algorithm is insensitive to broadband changes in the light intensity and is robust. The reflectivity of the high-reflectivity mirrors is characterised by the difference in the observed Rayleigh scattering between nitrogen and helium; the reflectivity of the mirrors is measured to be 0.99980 at a wavelength of 368.2 nm. The effect of the cavity-mirror-protecting nitrogen purge on the effective cavity length is calibrated using a stable NO2 concentration. The detection sensitivity of the system was analysed by Allan deviation analysis; the detection limits of the system are 0.08 ppbv (2σ) for HONO and 0.14 ppbv (2σ) for NO2 at an integration time of 60 s. The IBBCEAS system based on an iterative algorithm is in good agreement with the conventional IBBCEAS system when applied to a "proof-of-concept" atmospheric measurement of more than 2 days. The total uncertainty of the system is about 8.1 % for NO2 measurements and about 8.7 % for HONO measurements. We also utilised a Kalman filtering technique to improve the measurement precision of the IBBCEAS system, which helps to realise high-precision measurements of atmospheric HONO and NO2. After applying Kalman filtering, the measurement precision at an integration time of 3 s reaches the precision obtained at an integration time of 60 s without Kalman filtering. The system has good application prospects for follow-up research on atmospheric HONO on several different platforms, such as vehicle, balloon, and airborne platforms.

Data availability. The data used in this study are available from the corresponding author upon request (mqin@aiofm.ac.cn).

Author contributions. MQ, PX, JL, WL, and WX contributed to the conception of the study. KT, JD, and WF built the IBBCEAS instrument. KT, JD, FM, HZ, and KY performed the experiments. KT performed the data analyses and wrote the paper. MQ and YH edited and developed the paper.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (grant nos. 41875154 and 91544104) and the National Key R&D Program of China (grant nos. 2017YFC0209400, 2016YFC0201000, and 2017YFC0209900).

Financial support. This research has been supported by the National Natural Science Foundation of China (grant nos. 41875154 and 91544104) and the National Key R&D Program of China (grant nos. 2017YFC0209400, 2016YFC0201000, and 2017YFC0209900).

Review statement. This paper was edited by Keding Lu and reviewed by three anonymous referees.
2020-10-30T08:05:51.033Z
2020-12-03T00:00:00.000
{ "year": 2020, "sha1": "a1ca6916f7eea9ffe79343e972eb201017f5b34b", "oa_license": "CCBY", "oa_url": "https://amt.copernicus.org/articles/13/6487/2020/amt-13-6487-2020.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "728392f834d2c4294e116150446098a8bde49db5", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Materials Science" ] }
235490110
pes2o/s2orc
v3-fos-license
$\alpha$-Stable convergence of heavy-tailed infinitely-wide neural networks We consider infinitely-wide multi-layer perceptrons (MLPs) which are limits of standard deep feed-forward neural networks. We assume that, for each layer, the weights of an MLP are initialized with i.i.d. samples from either a light-tailed (finite variance) or heavy-tailed distribution in the domain of attraction of a symmetric $\alpha$-stable distribution, where $\alpha\in(0,2]$ may depend on the layer. For the bias terms of the layer, we assume i.i.d. initializations with a symmetric $\alpha$-stable distribution having the same $\alpha$ parameter of that layer. We then extend a recent result of Favaro, Fortini, and Peluchetti (2020), to show that the vector of pre-activation values at all nodes of a given hidden layer converges in the limit, under a suitable scaling, to a vector of i.i.d. random variables with symmetric $\alpha$-stable distributions. Introduction Deep neural networks have brought remarkable progresses in a wide range of applications, such as language translation and speech recognition, but a satisfactory mathematical answer on why they are so effective has yet to come. One promising direction, with a large amount of recent research activity, is to analyze neural networks in an idealized setting where the networks have infinite widths and the so-called step size becomes infinitesimal. In this idealized setting, seemingly intractable questions can be answered. For instance, it has been shown that as the widths of deep neural networks tend to infinity, the networks converge to Gaussian processes, both before and after training, if their weights are initialized with i.i.d. samples from the Gaussian distribution [Nea96, LBN + 18, dGMHR + 18, NXB + 19, Yan19]. (The methods used in these works can easily be adapted to show convergence to Gaussian processes when the initial weights are i.i.d. with finite variance.) Furthermore, in this setting, the training of a deep neural network (under the standard mean-squared loss) is shown to achieve zero training error, and the analytic form of a fully-trained network with zero error has been identified [JHG18, LXS + 19]. These results, in turn, enable the use of tools from stochastic processes and differential equations for analyzing deep neural networks in a novel way. They have also led to new high-performing data-analysis algorithms based on Gaussian processes [LSP + 20]. We extend this line of research on infinitely-wide deep neural networks by going beyond finite-variance distributions as initializers of network weights. We consider deep networks whose weights in a given layer are allowed to be initialized with i.i.d. samples from either a light-tailed (finite variance) or heavy-tailed distribution in the domain of attraction of biases. These parameters are initialized randomly, and get updated repeatedly during the training of the network. We adopt the common notation Y Θ (x), and express that the output of Y depends on both the input x and the parameters Θ = (W, B). Note that since Θ is set randomly, Y Θ is a random function. This random-function viewpoint is the basis of a large body of work on Bayesian neural networks [Nea96], which studies the distribution of this random function or its posterior conditioned on input-output pairs in training data. Our work falls into this body of work. We analyze the distribution of the random function Y Θ at the moment of initialization. 
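To make this random-function view concrete, here is a small numerical sketch (not taken from the paper): it initializes a deep MLP with symmetric Pareto-tailed weights, which lie in the domain of attraction of a symmetric α-stable law, scales each layer by n^{1/α} in place of the normalizing sequence a_n defined in the setup below, uses a bounded activation, and inspects the empirical distribution of last-layer pre-activations at initialization. The widths, α = 1.5, tanh activation, and all-ones input are illustrative assumptions, and biases are dropped for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_pareto(alpha, size):
    """Symmetric Pareto-tailed variables with P(|W| > t) = t**(-alpha), t >= 1:
    these lie in the domain of attraction of a symmetric alpha-stable law."""
    return rng.choice([-1.0, 1.0], size) * (rng.pareto(alpha, size) + 1.0)

def last_layer_preactivations(x, hidden=(500, 500), alpha=1.5, n_out=2000):
    phi = np.tanh                               # bounded activation, as assumed
    # layer 1 acts on the deterministic inputs
    y = sym_pareto(alpha, (hidden[0], len(x))) @ x / len(x) ** (1.0 / alpha)
    # deeper layers: stable-scaled sums of phi of the previous pre-activations
    for n_next in list(hidden[1:]) + [n_out]:
        n = len(y)
        W = sym_pareto(alpha, (n_next, n))
        y = W @ phi(y) / n ** (1.0 / alpha)     # biases omitted for simplicity
    return y

samples = last_layer_preactivations(np.ones(3))
# Heavy tails of the alpha-stable limit show up in the extreme quantiles, and
# the empirical characteristic function is close to exp(-(sigma*|t|)**alpha)
print(np.quantile(np.abs(samples), [0.5, 0.9, 0.99]))
print(np.exp(1j * 1.0 * samples).mean().real)
```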
Our analysis is in the situation where Y Θ is defined by an MLP, the width of the MLP is large (so the number of parameters in Θ is large), and the parameters Θ are initialized by possibly using heavy-tailed distributions. The precise description of the setup is given below. (Weights and Biases) The MLP is fully connected, and the weights on the edges from layer ℓ − 1 to ℓ are given by W (ℓ) = (W (ℓ) ij ) ij∈N 2 . Assume that W (ℓ) is a collection of i.i.d. symmetric random variables such that for each layer ℓ, (2.1.a) they are heavy-tailed, i.e. for all t > 0, for some α ℓ ∈ (0, 2], where L (ℓ) is some slowly varying function, or (2.1.b) E|W (ℓ) ij | 2 < ∞. (In this case, we set α ℓ = 2 by default.) Note that both (2.1.a) and (2.1.b) can hold at the same time. Even when this happens, there is no ambiguity about α ℓ , which is set to be 2 in both cases. Our proof deals with the cases when α ℓ < 2 and α ℓ = 2 separately. (See below, the definition of L 0 .) We permit both the conditions (2.1.a) and (2.1.b) to emphasize that our result covers a mixture of both heavy-tailed and finite variance (light-tailed) initializations. Let B (ℓ) i be i.i.d. random variables with distribution µ α ℓ ,σ B (ℓ) . Note that the dis- ij . This is because the biases are not part of the normalized sum, and normalization is, of course, a crucial part of the stable limit theorem. For later use in the α = 2 case, we define a function L (ℓ) by ij | > y) dy converges to a constant, namely to 1/2 of the variance, and thus it is slowly varying. For case (2.1.a), it is seen in Lemma A.1 that L (ℓ) is slowly varying as well. For convenience, let We have dropped the superscript ℓ from L 0 as the dependence on ℓ will be assumed. (Layers) We suppose that there are ℓ lev layers, not including those for the input and output. The 0-th layer is for the input and consists of I nodes assigned with deterministic values from the input x = (x 1 , . . . , x I ). We assume for simplicity that The layer ℓ lev + 1 is for the output. 2.3 (Scaling) Fix a layer ℓ with 2 ≤ ℓ ≤ ℓ lev + 1, and let n be the number of nodes at the layer ℓ − 1. We will scale the random values at the nodes (pre-activation) by a n (ℓ) := inf{t > 0 : Then, a n (ℓ) tends to ∞ as n increases. For future purposes we record the well-known fact that, for a n = a n (ℓ), 2 lim n→∞ na −α ℓ n L 0 (a n ) = 1. 2.4 (Activation) The MLP uses a nonlinear activation function φ(y). We assume that φ is continuous and bounded. The boundedness assumption simplifies our presentation, and in Section 4, we relax this assumption so that for particular initializations (such as Gaussian or stable), more general activation functions such as ReLU are allowed. 2.5 (Hidden Layers) For layer ℓ with 1 ≤ ℓ ≤ ℓ lev , there are n ℓ nodes for some n ℓ ≥ 2. We write n = (n 1 , . . . , n ℓ lev ) ∈ N ℓ lev . For ℓ with 1 ≤ ℓ ≤ ℓ lev + 1, the pre-activation values at these nodes are given, for an input x ∈ R I , recursively by i (x; n) := 1 a n ℓ−1 (ℓ) for each n ℓ−1 ∈ N and i ∈ N. We often omit n and write Y (ℓ) i (x). When computing the output of the MLP with widths n, one only needs to consider i ≤ n ℓ for each layer ℓ. However, it is always possible to assign values to an extended MLP beyond n which is why we have assumed more generally that i ∈ N. This will be important for the proofs as explained in Remark 2 below. i (x; n) depends on only the coordinates n 1 , . . . , n ℓ−1 , but we may simply let it be constant in the coordinates n ℓ , . . . , n ℓ lev . 
This will often be the case when we have functions of n in the sequel. 1 None of our methods would change if we instead let x i ∈ R d for arbitrary finite d. 2 For case (2.1.b), t 2 L 0 (t) becomes continuous and so na −α ℓ n L 0 (an) is simply 1. To see the convergence in case (2.1.a), first note that as P(|W For the reverse inequality, note that by (1) and the definition of an, for n large enough we have P |W 1+ǫ an ≥ 1/n, and by the definition of slowly varying that, ij | > an 1/n . . 2.6 (Limits) We consider one MLP for each n ∈ N ℓ lev . We take the limit of the collection of these MLPs in such a way that min(n 1 , . . . , n ℓ lev ) → ∞. (Our methods can also handle the case where limits are taken from left to right, i.e., lim n ℓ lev →∞ · · · lim n1→∞ , but since this order of limits is easier to prove, we will focus on the former.) Convergence to α-stable Distributions Our main results are summarized in the next theorem and its extension to the situation of multiple inputs in Theorem 5.1 in Section 5. They show that as the width of an MLP tends to infinity, the MLP becomes a relatively simple random object: the outputs of its ℓ-th layer become just i.i.d. random variables drawn from a stable distribution, and the parameters of Theorem 3.1. For each ℓ = 2, . . . , ℓ lev + 1, the joint distribution of (Y (ℓ) i (x; n)) i≥1 converges weakly to i≥1 µ α ℓ ,σ ℓ as min(n 1 , . . . , n ℓ lev ) → ∞, with σ ℓ inductively defined by σ α2 2 := σ α2 B (2) + c α2 |φ(y)| α2 ν (1) (dy), ℓ = 2, 1 (x). That is, the characteristic function of the limiting distribution is, for any finite subset L ⊂ N, Remark 1. The integrals in Theorem 3.1 are well-defined since φ is bounded. For (possibly) unbounded φ, these integrals are well-defined as well under suitable assumption on φ. See Section 4. Remark 2. Before embarking on the proof, let us make an important remark. For each n = (n 1 , . . . , n ℓ lev ), the MLP is finite and each layer has finite width. A key part of the proof is the application of de Finetti's theorem at each layer, which applies only in the case where one has an infinite sequence of random variables (for a given layer, our sequence is such that there is one random variable at each node). As in [FFP20], a crucial observation is that for each n = (n 1 , . . . , n ℓ lev ), we can extend the MLP to an infinite-width MLP by adding an infinite number of nodes at each layer that compute values in the same manner as nodes of the original MLP, but are ignored by nodes at the next layer. Thus, the finite-width MLP is embedded in an infinite-width MLP. This allows us to use de Finetti's theorem. Heuristic of the proof. The main takeaway of the theorem is that, even though the random variables (Y (ℓ) i (x; n)) i∈N are dependent through the randomness of the former layer's outputs (Y (ℓ−1) j (x; n)) j∈N , as the width grows to infinity, this dependence vanishes via an averaging effect. Let us briefly highlight the key technical points involved in establishing this vanishing dependence on a heuristic level. By de Finetti's theorem, for each n there exists a random distribution ξ (ℓ−1) (dy; n) such that the sequence (Y (ℓ−1) j (x)) j is conditionally i.i.d. with common random distribution ξ (ℓ−1) . By conditioning on ξ (ℓ−1) , we obtain independence among the summands of as well as independence among the family (Y i (x)) i . Let α := α ℓ , n := n ℓ−1 , and a n := a n ℓ−1 (ℓ). 
With the help of Lemma A.2, the conditional characteristic function of Y where b n is a deterministic constant that tends to one. Assuming the inductive hypothesis, the random distribution ξ (ℓ−1) converges weakly to µ α ℓ−1 ,σ ℓ−1 as n → ∞ in the sense of (3), by Lemma A.6. Since L 0 is slowly varying, one can surmise that the conditional characteristic function tends to which is the characteristic function of the stable law we desire. To make the above intuition rigorous, the convergence of (4) is verified by proving uniform integrability of the integrand with respect to the family of distributions ξ (ℓ−1) over the indices n. Namely, by Lemma A.4, the integrand can be bounded by O(|φ(y)| α±ǫ ) for small ǫ > 0 and uniform integrability follows from the boundedness of φ. The joint limiting distribution converges to the desired stable law by similar arguments. Proof of Theorem 3.1. We start with a useful expression for the characteristic function conditioned on the random variables {Y where σ := σ α ℓ B (ℓ) and the argument on the right-hand side is random. Case ℓ = 2: Let us first consider the case ℓ = 2. Let n = n 1 , α = α 2 , a n = a n1 (2), and t = 0. We first show the weak convergence of the one-point marginal distributions, i.e., we show that this is a straight-forward application of standard arguments, which we include for completeness. Denote the common distribution of Y (1) j (x), j = 1, . . . , n by ν (1) . Taking the expectation of (5) with respect to the randomness of {Y an t = 1. Otherwise, setting b n := na −α n L 0 (a n ), for fixed y with φ(y) = 0 we have that, as n → ∞, By Lemma A.4 applied to G(x) := x −α L 0 (x) and c = 1, for any ǫ > 0, there exist constants b > 0 and n 0 such that for all n > n 0 and all y with φ(y) = 0, where | · | α±ǫ denotes the maximum of | · | α+ǫ and | · | α−ǫ . Since φ is bounded, the right-hand side of (6) is term-by-term integrable with respect to ν (1) (dy). In particular, the integral of the error term can be bounded, for some small ǫ and large enough n, by (Set |φ(y)| α L 0 ( an |φ(y)| ) = 0 when φ(y) = 0.) Thus, integrating both sides of (6) with respect to ν (1) (dy) and taking the n-th power, it follows that From the bound in (7), we have, by dominated convergence, that as n → ∞ Since b n = na −α n L 0 (a n ) converges to 1 by (2), we have that Thus, the distribution of Y (2) i (x) weakly converges to µ α,σ2 where as desired. Next we prove that the joint distribution of (Y Taking the expectation over the randomness of {Y This proves the case ℓ = 2. ..,n is no longer i.i.d.; however, it is still exchangeable. By de Finetti's theorem (see Remark 2), there exists a random probability measure where ω ∈ Ω is an element of the probability space. As before, we start by proving convergence of the marginal distribution. Taking the conditional expectation of (5), given ξ (ℓ−1) , we have for some/any i, j. Using Lemma A.2 and Lemma A.4 again, we get Note that these are random integrals since ξ (ℓ−1) (dy) is random, whereas the corresponding integral in the case ℓ = 2 was deterministic. Also, each integral on the right-hand side is finite almost surely since φ is bounded. By the induction hypothesis, the joint distribution of (Y (ℓ−1) i (x)) i≥1 converges weakly to the product measure i≥1 µ α ℓ−1 ,σ ℓ−1 . We claim that To see this, note that First, consider the first term on the right-hand side of the above. 
By Corollary A.7, the random measures ξ (ℓ−1) converge weakly, in probability, to µ α ℓ−1 ,σ ℓ−1 as n → ∞ in the sense of (3), where n ∈ N ℓ lev . Also, by Lemma A.4, we have ≤ b|φ(y)t| α±ǫ for large n. For any subsequence (n j ) j , there is a further subsequence (n j k ) k along which, ω-a.s., ξ (ℓ−1) converges weakly to µ α ℓ−1 ,σ ℓ−1 . To prove that the first term on the right-hand side of (12) converges in probability to 0, it is enough to show that it converges almost surely to 0 along each subsequence (n j k ) k . Fix an ω-realization of the random distributions (ξ (ℓ−1) (dy, ω; n)) n∈N ℓ lev such that convergence along the subsequence (n j k ) k holds. Keeping ω fixed, view g(y n ) = |φ(y n )t| α±ǫ as a random variable where the parameter y n is sampled from the distribution ξ (ℓ−1) (dy, ω; n). Since φ is bounded, the family of these random variables is uniformly integrable. Since ξ (ℓ−1) (dy, ω; n) converges weakly to µ α ℓ−1 ,σ ℓ−1 along the subsequence, the Skorokhod representation and Vitali convergence theorem [RF10,p. 94] guarantee the convergence of the first term on the right-hand side of (12) to 0 as n tends to ∞. Now, for the second term, since lim n→∞ |φ(y)t| α L 0 an |φ(y)t| L 0 (a n ) = |φ(y)t| α for each y and φ is bounded, we can use dominated convergence via (13) to show that the second term on the right-hand side of (12) also converges to zero, proving the claim. Having proved (11), we have  Thus, the limiting distribution of Y Recall that characteristic functions are bounded by 1. Thus, by taking the expectation of both sides and using dominated convergence, we can conclude that the (unconditional) characteristic function converges to the same expression and thus the (unconditional) distribution of Y (ℓ) i (x) converges weakly to µ α,σ ℓ . Finally, we prove that the joint distribution converges weakly to the product i≥1 µ α,σ ℓ . Let L ⊂ N be a finite set and t = (t i ) i∈L . Conditionally on {Y Taking the expectation with respect to {Y a similar argument to that of convergence of the marginal distribution shows that completing the proof. Relaxing the Boundedness Assumption As we mentioned earlier in Remark 1, the boundedness assumption on φ can be relaxed, as long as it is done with care. To show the subtlety of our relaxation, we first present a counterexample where, for heavy-tailed initializations, we cannot use a function which grows linearly. Despite the above remark, there is still room to relax the boundedness assumption on φ. If φ is bounded, then the above is obviously satisfied. It is not clear whether there is a simpler description of the family of functions that satisfy this assumption (see [Ald86]); however, let us argue now that this is general enough to recover the previous results of Gaussian weights or stable weights. In [dGMHR + 18] (as well as many other references), the authors considered Gaussian initializations with an activation function φ satisfying the so-called polynomial envelop condition. That is, |φ(y)| ≤ a + b|y| m for some a, b > 0 and m ≥ 1 and W ∼ N (0, σ 2 ). In this setting, we have a n ∼ σ n/2 and α = 2 for all ℓ, and c n,j = c where the variance is uniformly bounded over n if we assume (15). For θ > 1, the ν := m(2 + ǫ 0 )θ-th moment of S n can be directly calculated, which is known to be . This is uniformly bounded over n, and hence |φ(S n )| 2+ǫ0 is uniformly integrable over n. This shows that φ satisfying the polynomial envelope condition meets (UI1) and (UI2) assuming (15). 
Let us now see that c n,j satisfies condition (15) in both the Gaussian and symmetric stable case. For ℓ = 3, c n,j = φ(Y Proof of Theorem 3.1 under (UI1) and (UI2). We return to the claim in (11) to see how conditions (UI1) and (UI2) are sufficient, even when φ is unbounded. We continue to let n := n ℓ−2 . Choose a sequence {(n, n)} n , where n = n(n) depends on n and n → ∞ as n → ∞ in the sense of (3). Note that (i) to evaluate the limit as n → ∞, it suffices to show the limit exists consistently for any choice of a sequence {n(n)} n that goes to infinity, and (ii) we can always pass to a subsequence (not depending on ω) since we are concerned with convergence in probability. Therefore, below we will show a.s. uniform integrability over some infinite subset of an arbitrary index set of the form {(n, n(n)) : n ∈ N}. Joint Convergence with Different Inputs In this section, we extend Theorem 3.1 to the joint distribution of k different inputs. In this section, we show that the k-dimensional vector (Y i (x k ; n)) converges, and represent the limiting characteristic function via the spectral measure Γ ℓ . For simplicity, we use the following notation: • ·, · denotes the standard inner product in R k . • For any given j, let the law of the k-dimensional vector Y k,s for 1 ≤ s ≤ k, and the projection onto the two coordinates, i-th and j-th, is denoted by ν • For α < 2, we denote the k-dimensional symmetric α-stable distribution with spectral measure Γ by SαS k (Γ). For those not familiar with the spectral measure of a multivariate stable law, see Appendix B. Proof. Let t = (t 1 , . . . , t k ). We again start with the expression Here ψ B and ψ W are characteristic functions of the random variables B Case ℓ = 2: As before, let n = n 1 , α = α 2 , a n = a n1 (2). As in Theorem 3.1, (Y (1) j ( x)) j≥1 is i.i.d, and thus As before, ψ W 1 a n t, φ(y) The main calculation needed to extend the proof of Theorem 3.1 to the situation involving x is as follows. Assuming the uniform integrability in Section 4, we have, for some b > 0 and It thus follows that Therefore, Let · denote the standard Euclidean norm. Observe that for α < 2, Thus, by Theorem B.2, we have the convergence Y where M 2 is given by (19), which is equal to the characteristic function of N (M 2 ). Extending the calculations in (8), the convergence (Y (ℓ) i ( x; n)) i≥1 w → i≥1 SαS k (Γ 2 ) follows similarly. Proof. If L is bounded, then since L is increasing, L(x) converges as x → ∞. Thus L is slowly varying. If L is not bounded, then by L'Hôpital's rule, The next four lemmas are standard results for which we give references for their proofs. In particular, the next lemma is a standard result concerning the characteristic function of heavy-tailed distributions [Pit68, Theorem 1 and Theorem 3] (see also [Dur19, Eq. 3.8.2]). Lemma A.3. If L is slowly varying, then for any fixed ǫ > 0 and all sufficiently large x, Moreover, the convergence as t → ∞ is uniform in finite intervals 0 < a < x < b. An easy corollary of the above lemma is the following result, which we single out for convenience [Pit68, Lemma 2]. Lemma A.4. If G(t) = t −α L(t) where α ≥ 0 and L is slowly-varying, then for any given positive ǫ and c, there exist a and b such that In particular, for sufficiently large t > 0, we have The next lemma regards the convolution of distributions with regularly varying tails [Fel71, VIII.8 Proposition]. Lemma A.5. 
For two distributions F 1 and F 2 such that as x → ∞ with L i slowly varying, the convolution G = F 1 * F 2 has a regularly varying tail such that Recall that de Finetti's theorem tell us that if a sequence X = (X i ) i∈N ∈ R N is exchangeable then for some π which is a probability measure on the space of probability measures Pr(R). The measure π is sometimes called the mixing measure. Our final lemma characterizes the convergence of exchangeable sequences by convergence of their respective mixing measures. It is intuitively clear. However, its proof is not completely trivial. As far as we know, this lemma has not appeared in the literature before. Lemma A.6. For each j ∈ N∪{∞}, let X (j) = (X (j) i ) i∈N be an infinite exchangeable sequence of random variables with values in R (or more generally, a Borel space). Let π j be the mixing measure on Pr(R) corresponding to X (j) , from (25). Then the family (X (j) ) j∈N converges in distribution to X (∞) if and only if the family (π j ) j∈N converges in the weak topology on Pr(Pr(R)) to π ∞ . In the lemma, the topology on Pr(Pr(R)) is formed by applying the weak-topology construction twice. We first construct the weak topology on Pr(R). Then, we apply the weak-topology construction again this time using Pr(R), instead of R. In the proof of Theorem 3.1, we use the special case when the limiting sequence X (∞) is a sequence of i.i.d. random variables. In that case, by (25), it must be that π ∞ concentrates on a single element ν ∈ Pr(R), i.e. it is a point mass, π ∞ = δ ν , for some ν ∈ Pr(R). More specifically, we have the following corollary. Proof of Lemma A.6. First suppose (π j ) j∈N converges to π ∞ . We want to show that (X (j) ) j∈N converges in distribution to X (∞) . By [Kal02,Theorem 4.29], convergence in distribution of a sequence of random variables is equivalent to showing that for every m > 0 and all bounded continuous functions f 1 , . . . , f m , we have as j → ∞. Rewriting the above using (25) we must show that as j → ∞, But this follows since ν → R m m i=1 f i (x i ) ν ⊗m (dx) is a bounded continuous function on Pr(R) with respect to the weak topology. We now prove the reverse direction. We assume (X (j) ) j∈N converges in distribution to X (∞) and must show that (π j ) j∈N converges to π ∞ . In order to show this we first claim that the family (π j ) j∈N is tight. By [Kal17, Theorem 4.10] (see also [GVdV17,Theorem A.6]), such tightness is equivalent to the tightness of the expected measures ν ⊗N π j (dν) j∈N . But these are just the distributions of the family (X (j) ) j∈N which we have assumed converges in distribution. Hence, its distributions are tight. Let us return now to proving (π j ) j∈N converges to π ∞ . Suppose to the contrary that this is not the case. Since the family (π j ) j∈N is tight, by Prokhorov's theorem there must be another limit point of this family,π = π ∞ , and a subsequence (j n ) n∈N such that π jn w →π as n → ∞. By the first part of our proof, this implies that (X (jn) ) n∈N converges in distribution to an exchangeable sequence with distribution ν ⊗Nπ (dν). However, by assumption we have that (X (j) ) j∈N converges in distribution to X (∞) which has distribution ν ⊗N π ∞ (dν). Thus, it must be that ν ⊗Nπ (dν) = ν ⊗N π ∞ (dν). By the uniform integrability assumption, the right-hand side can be made arbitrarily small by increasing M . Appendix B. 
Multivariate Stable Laws This section contains some basic definition and properties of multivariate stable distributions which may help familiarize some readers. The material in this section comes from the monograph [ST94] and also [Kue73]. Definition B.1. A probability measure µ on R k is said to be (jointly) stable if for all a, b ∈ R and two independent random variables X and Y with distribution µ, there exist c ∈ R and v ∈ R k such that aX + bY If µ is symmetric, then it is said to be symmetric stable. Similar to the one-dimensional case, there exists a constant α ∈ (0, 2] such that c α = a α +b α for all a, b, which we call the index of stability. The distribution µ is multivariate Gaussian in the case α = 2. Theorem B.2. Let α ∈ (0, 2). A random variable X taking values in R k is symmetric stable if and only if there exists a finite symmetric measure Γ on the unit sphere S k−1 = {x ∈ R k : for all t ∈ R k . The measure Γ is called the spectral measure of X, and the distribution is denoted as SαS k (Γ). In the case k = 1, the measure Γ is always of the form cδ 1 + cδ −1 . Thus, the characteristic function reduces to the familiar form Ee itX = e −|σt| α .
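The one-dimensional characteristic function E e^{itX} = e^{−|σt|^α} above can be sampled directly with the standard Chambers-Mallows-Stuck transform; the formula below is that classical construction for the symmetric case, not a result of this paper. A minimal sketch, checking the sampler against the characteristic function:

```python
import numpy as np

rng = np.random.default_rng(2)

def sas(alpha, sigma=1.0, size=10**6):
    """Chambers-Mallows-Stuck sampler for the symmetric alpha-stable law
    with characteristic function E exp(itX) = exp(-|sigma * t|**alpha)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    if alpha == 1.0:                       # symmetric 1-stable is Cauchy
        return sigma * np.tan(U)
    E = rng.exponential(1.0, size)
    X = (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
         * (np.cos((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))
    return sigma * X

# Empirical characteristic function vs exp(-|sigma*t|**alpha)
alpha, sigma = 1.5, 2.0
X = sas(alpha, sigma)
for t in (0.3, 1.0):
    print(t, np.exp(1j * t * X).mean().real, np.exp(-abs(sigma * t) ** alpha))
```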
2021-06-22T01:15:56.167Z
2021-06-18T00:00:00.000
{ "year": 2021, "sha1": "f079db0a369b3904263fa573558cc5fff795b636", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f079db0a369b3904263fa573558cc5fff795b636", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
258829413
pes2o/s2orc
v3-fos-license
Effect of an Educational Program on Nurses' Knowledge and Practice of Oxygen Therapy Introduction: Past studies have shown that healthcare professionals may lack awareness and knowledge regarding oxygen therapy, and its implementation often has several obstacles. This study was carried out to investigate the effect of an educational program about oxygen therapy on nurses' knowledge and practices. Methods: This cross-sectional, quasi-experimental study was conducted in 2022 at the pediatric department of Nishtar Hospital, Multan, where 160 nurses from primary and secondary health centers attended an educational program delivered in the pediatric department. The pre-test-post-test approach was used to evaluate the effectiveness of the structured educational program. The independent variable was the educational program, and the dependent variable was the nurses’ knowledge and practice about oxygen toxicity. Data analysis was performed using SPSS version 23 (IBM Corp., New York, USA). The data were tabulated as means and standard deviations for numerical values and frequency percentages for categorical values. The student's t-test and the chi-square test were applied to investigate any associations among variables. Results: The average test scores before and after the implementation of the educational program were 10.75±2.65 and 17.52±2.04, respectively. The average post-test score was greater than that of the pre-test, and the difference was statistically significant (p<0.001). Conclusion: The study found that after the implementation of the educational program, the knowledge and practices of nurses regarding oxygen therapy improved significantly, with the majority showing a positive attitude toward the program. Introduction Oxygen therapy (OT) is a vital form of medical treatment in many different scenarios and is one of the primary treatments for patients with chronic respiratory distress [1]. Despite the significance of oxygen therapy, past studies have suggested that there may be a need for more awareness and knowledge among healthcare professionals concerning oxygen therapy and several obstacles in its implementation [2]. The use of oxygen as a therapeutic drug in critically ill patients has been in practice for many decades, regardless of the environment in which it was delivered [3]. The calculated OT required amount for patients with hypoxemia is often underestimated, and incorrect calculations may lead to fatal conditions [4]. In critically ill patients, oxygen should be administered safely and appropriately, which depends on a complete understanding of the purpose and benefits of its delivery method. The most common harmful effect due to oxygen delivery is the toxicity that can occur in cases where oxygen is delivered at a concentration above 50% for more than 24 hours [5]. Mainly the eyes, respiratory system, and central nervous system are affected by oxygen toxicity in the human body, with high-risk populations including deep-sea divers, premature infants, hyperbaric operation theater patients, and patients exposed to high levels of oxygen [6]. Therefore, efforts should be made to warn against the toxic effects of OT under continuous use. Furthermore, pulse oximetry and arterial blood gases (ABGs) should be monitored continuously, as harmful pulmonary changes can be irreversible [7]. Nurses should routinely inspect the mucous membrane and skin of the mouth to assess signs of any physical damage due to the tubing or oxygen toxicity [8]. 
This inspection could involve the detection of color changes, inflammation, ulceration, secretions, and other potential issues. It is essential to detect problems early, as this can prevent further complications and ensure that necessary treatments are given promptly. Regular inspections are critical for achieving optimal health and well-being [9]. It is evident that consistent educational programs held on a yearly basis can have a beneficial influence on 1 the knowledge and execution of OT among healthcare practitioners [10]. By providing the necessary information and training, these programs can effectively aid physicians and nurses in the proper implementation of OT, subsequently leading to enhanced patient care and outcomes [11]. As a result, the objective of our study is to assess the effect of an educational program on the practices and knowledge of OT among nurses in our region. Materials And Methods The program was delivered in the emergency department of the hospital's pediatric unit. The program's primary objective was to improve nurses' knowledge regarding OT, and the secondary objective was to enhance their understanding and description of the anatomy of the respiratory system. This work used a cross-sectional quasi-experimental research design with a pre-test-post-test approach to evaluate the effectiveness of a structured educational program. The development of the educational program and the assessment tool used in this study resulted from the researcher's review of related literature [1,8,9] to assess nurses' knowledge regarding oxygen toxicity. The two-part tool was created to provide nurses with standardized educational information and their assessment process to ensure accuracy and consistency. The first part of the tool consisted of a questionnaire. This questionnaire was designed to assess nurses' knowledge of how to properly evaluate the patient's oxygen toxicity levels, including their signs, symptoms, risk factors, and treatments. This part also focused on nurse demographics such as age, sex, workplace, qualification, and experience. All nurses were administered a validated pre-test questionnaire before starting the educational program. The questionnaire consisted of 20 questions and took approximately 15-20 minutes to complete it. Knowledge assessment was done two times through the same questionnaire: once at the beginning of the study (also named the pre-test assessment) and again immediately after the implementation of the educational program (also named the post-test assessment). After pre-test completion, the educational program was delivered, which involved several vital protocols, including proper hand hygiene, equipment provision, patient verification, respiratory assessment, and patient response evaluation. It also involved the correct technique of administering OT, considering age variations to adjust oxygen flow rates, placing the patient in a comfortable position after administering OT, taking safety precautions when leaving the room, documenting patient findings and essentially reporting any areas of concern. The program educated the nurses about the function of the respiratory system, the definition of oxygenation, indications of OT, oxygen humidification, interpretation of pulse oximetry, OT at home, patients' health education, and the nurses' role in oxygen delivery or toxicity. The second part of the tool was a scoring system that used the responses to the questionnaire to assign a numeric value to the nurses' education level. 
In this tool, a correct answer is scored as "1" and an incorrect answer as "0". The maximum total score is 20, and performance is categorized as either unsatisfactory (score below 12, i.e., below 60% of the total) or satisfactory (score of 12 or above). The Institutional Review Board of the Institute of Mother and Child Care (I-MACCA), Multan, Pakistan, approved the research proposal with letter number CR/0622/0005 dated 12.06.2022. Written consent was obtained from nurses willing to participate in the study after the researcher explained its nature and purpose. The researcher also assured the nurses that the collected data would be used only for research purposes. Additionally, the nurses were assured of their confidentiality and anonymity and of the right to refuse or withdraw from the study at any time and for any reason.

The data collected from the pre-test and post-test were analyzed to determine the effectiveness of the educational program. Descriptive and inferential statistical methods were used to find the differences between the pre-test and post-test scores. Data analysis was performed using SPSS version 23 (IBM Corp., New York, USA). The independent variable was the educational program, and the dependent variables were the nurses' knowledge and their intended practices regarding oxygen toxicity. The data were tabulated as means and standard deviations for numerical values and frequency percentages for categorical values. The Student's t-test and the chi-square test were applied to determine any associations among variables.

Results

Overall, 160 nurses were included in our study, of which 46 (28.8%) were charge nurses and 114 (71.3%) were staff nurses. The mean test scores and the significance of each question before and after the educational program are shown in Table 1. The differences in pre-test and post-test scores for questions 8, 12, 13, and 14 were found to be statistically significant (i.e., p<0.050). The overall average test scores before and after the implementation of the educational program were 10.75±2.65 and 17.52±2.04, respectively. The average post-test score was greater than the pre-test score, and the difference was statistically significant (p<0.001). Furthermore, the differences in the mean test scores of staff nurses and charge nurses, both before and after the implementation of the educational program, were statistically significant (p<0.001), as shown in Table 2. The average post-test scores of staff and charge nurses were 18.03±1.78 and 16.23±2.08, respectively; thus, the average post-test score of staff nurses was greater than that of charge nurses, and the difference was statistically significant (p<0.001), as shown in Figure 1.

Discussion

Oxygen therapy is a necessary treatment for various medical conditions, and nurses play an essential role in assessing its need and ways of administering it [12]. However, there is little evidence that nurses are provided with a regular annual training program to inform and support nursing practices related to OT. To address this knowledge gap, research is needed to better understand the needs of OT patients, identify best practices for the successful management of OT, and develop evidence-based guidelines for nurses to follow when providing OT [13]. Our study is the first in our region to explore the effects of such an educational program.
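The published summary statistics permit a rough arithmetic check of the significance claim. Since per-nurse scores are not available, the paired analysis the authors ran cannot be recomputed; the sketch below instead applies a Welch two-sample approximation to the reported means and standard deviations, which is a conservative stand-in (a paired test on the same data would be at least this significant).

```python
import math

# Reported summary statistics (n = 160 nurses tested before and after)
n = 160
pre_mean, pre_sd = 10.75, 2.65
post_mean, post_sd = 17.52, 2.04

# Welch two-sample statistic computed from summary data
se = math.sqrt(pre_sd**2 / n + post_sd**2 / n)
t = (post_mean - pre_mean) / se
df = (pre_sd**2 / n + post_sd**2 / n) ** 2 / (
    (pre_sd**2 / n) ** 2 / (n - 1) + (post_sd**2 / n) ** 2 / (n - 1))
p = math.erfc(abs(t) / math.sqrt(2))   # normal approximation, fine for df ~ 300
print(f"t = {t:.1f}, df = {df:.0f}, p = {p:.1e}")   # p << 0.001
```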
In our study, in parallel with our healthcare settings, all nurse participants were female, though the faculty has recently recognized male nursing as well. The majority of participants had nursing experience totaling 5-10 years. In a study conducted in 2012, Eastwood et al. [14] found that most nurses employed in emergency and intensive care units had bachelor's degrees in nursing. Lack of participation in any previous annual training program may be because hospitals usually do not have a staff development program related explicitly to oxygen toxicity. This result is supported by Rochester [15], who found that nurses working in intensive care units need additional education (like training courses and refresher courses) to provide the best possible care to patients receiving OT and prevent oxygen toxicity. Chen et al. [16] conducted a study in 2018 and found that most nurses surveyed had never attended any training course on OT, highlighting the need for more comprehensive education programs to tackle this and other healthcare-related issues. Such programs should include detailed information on how to safely and effectively administer OT and the most up-to-date guidelines about medical protocols and standards of practice. As demonstrated in a 2016 study conducted by Markocic et al. [17], understanding the potential for oxygen toxicity is critical for nurses to properly assess and identify any issues that could arise from OT. Healthcare professionals, such as nurses, are critical in administering OT to patients in critical situations. As Aloushan et al. [1] pointed out in 2019, a lack of knowledge can negatively impact patients' health outcomes. As such, nurses must be given adequate education to ensure they can adequately assess and administer OT safely and effectively. According to Urden et al. [18], nursing professionals' knowledge of OT is often only moderate. To ensure that these professionals are adequately trained, in-service educational programs are needed to improve their knowledge and practice. These programs should focus on improving nurses' understanding of OT, including oxygen delivery systems, oxygen saturation monitoring, and patient assessment; they should also cover the importance of OT in treating patients with respiratory conditions such as chronic obstructive pulmonary disease (COPD) and pneumonia, as well as OT in premature infants [19]. Furthermore, nurses should be taught how to recognize and manage the potential risks associated with OT, such as hypoxemia and hyperoxemia. Through these programs, nurses can understand OT comprehensively, enabling them to provide the best possible care to their patients [20]. The study underscores the need for a regular annual training program for nurses that should assess their knowledge and practices regularly. This assessment process can help identify areas of improvement or gaps in the nurses' knowledge and performance while guiding further training or remedial actions to ensure they are well-equipped to provide the best patient care. More longitudinal research to check if such programs result in long-lasting effects is really necessary. One of the primary limitations of this study focusing on the effect of an educational program on nurses' knowledge and practice of oxygen therapy was using a single methodology. The study exclusively relied on self-reported surveys to measure the effectiveness of the educational program, like Temsah et al.'s study [21], but it may be subject to recall bias and social desirability bias. 
Moreover, the study also lacked a control group, which made it difficult to discern whether any changes in knowledge or practice genuinely resulted from the educational program or other factors. Additionally, the study's sample size could have been more extensive, which may reduce the generalizability of the findings. Finally, the study was conducted in a single clinical setting, which may not represent the broader range of practice contexts in which OT is administered, thereby limiting the generalizability of the study's findings to other settings. Conclusions Implementing an educational program on OT showed significant direct improvements in the knowledge and intended practices of nurses. The program focused on providing nurses with up-to-date information on the correct use of OT and the importance of monitoring its administration. The results demonstrated that a structured educational program can enhance nurses' competence and confidence when administering OT, improving patient care and outcomes. It is essential for healthcare facilities to prioritize continuing education opportunities for nurses to ensure that they are equipped with the knowledge and skills necessary to provide optimal patient care. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. IRB-Institute of Mother and Child Care issued approval CR/0622/0005. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-05-22T15:04:22.208Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "c44960beb701ab2a94f639d44d77eebc4c7c9982", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7759/cureus.39248", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c032861cccecad2a904d593bc88631201aebbed8", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
119351927
pes2o/s2orc
v3-fos-license
Emergent Dirac carriers across a pressure-induced Lifshitz transition in black phosphorus The phase diagrams of correlated systems like cuprates or pnictides high-temperature superconductors are characterized by a topological change of the Fermi surface under continuous variation of an external parameter, the so-called Lifshitz transition. However, the large number of low-temperature instabilities and the interplay of multiple energy scales complicate the study of this phenomenon. Here we first identify the optical signatures of a pressure-induced Lifshitz transition in a clean elemental system, black phosphorus. By applying external pressures above 1.5 GPa, we observe a change in the pressure dependence of the Drude plasma frequency due to the appearance of massless Dirac fermions. At higher pressures, optical signatures of two structural phase transitions are also identified. Our findings suggest that a key fingerprint of the Lifshitz transition in solid state systems, and in absence of structural phase transitions, is a discontinuity of the Drude plasma frequency due to the change of Fermi surface topology. The phase diagrams of correlated systems like cuprates or pnictides high-temperature superconductors are characterized by a topological change of the Fermi surface under continuous variation of an external parameter, the so-called Lifshitz transition. However, the large number of lowtemperature instabilities and the interplay of multiple energy scales complicate the study of this phenomenon. Here we first identify the optical signatures of a pressure-induced Lifshitz transition in a clean elemental system, black phosphorus. By applying external pressures above 1.5 GPa, we observe a change in the pressure dependence of the Drude plasma frequency due to the appearance of massless Dirac fermions. At higher pressures, optical signatures of two structural phase transitions are also identified. Our findings suggest that a key fingerprint of the Lifshitz transition in solid state systems, and in absence of structural phase transitions, is a discontinuity of the Drude plasma frequency due to the change of Fermi surface topology. PACS numbers: The Lifshitz transition, the change of the Fermi surface topology under variation of an external parameter 1 , is a fundamental phenomenon in strongly correlated systems like the YBa 2 Cu 3 O 6+y 2 , Bi 2 Sr 2 CaCu 2 O 8+δ 3 and Ba(Fe 1−x Co x ) 2 As 2 4 superconductors, and is suspected to play a relevant role in determining their electronic properties. In these materials, it may induce a band flattening and increase the density of states close to the Fermi level, thus promoting high-temperature superconductivity 5 . However, these compounds also exhibit a variety of low-temperature phase transitions that can mask the thermodynamic and transport properties of a pure Lifshitz transition. As a result, the dynamical charge and current fluctuations across a Lifshitz transition are still poorly understood. Several different Lifshitz transitions have been observed (or theoretically predicted) in elemental black phosphorus, as a function of doping 6,7 , electric field 8 , and pressure 9,10 . BP is an attractive material for electronic applications due to its very high electron mobility (10 4 cm 2 V −1 s −1 ) and the presence of a direct, tunable infrared band gap 11 . Its A17 orthorhombic structure is extremely anisotropic, with grooves oriented along the so-called zig-zag direction 12 (see Fig.1a). 
The band structure is parabolic along both the interlayer and the zig-zag directions, while along the armchair direction the dispersion is almost linear, thus allowing possible Dirac cones 13,14 . Upon pressurization, both the structure and the electronic properties undergo dramatic changes and a Lifshitz transition occurs at a pressure P L = 1.5 GPa (see Fig.1b). At low pressures, the electronic band gap gradually closes until the valence and conduction bands touch at the Z point, before intersecting each other without hybridizing. Around P L , four-fold degenerate Dirac points are formed, which are then evolving in both electron and hole-like Fermi pockets when the pressure is further increased 15 . At P > P L , the orthorombic (A17) structure becomes rhombohedral (A7) around 5 GPa and then simple cubic (sc) around 10 GPa [16][17][18][19] , where superconductivity also occurs 20,21 . The occurrence of a pressure-induced Lifshitz transition in an elemental semiconductor provides a unique setting for the study of its electrodynamics in the absence of other low-temperature instabilities. Here, we address this topic by driving BP across the pressureinduced Lifshitz transition and studying its optical response with synchrotron-based infrared spectroscopy and first principles density functional theory (DFT) calculations. We identify for the first time the optical signatures of a pressure-induced topological Lifshitz transition in an elemental semiconductor. At a transition pressure P L = 1.5 GPa, BP evolves from a semiconductor to a Dirac semimetal by building up a plasma of massless charge carriers. At higher pressures, we observe the optical fingerprints of the two structural phase transitions occurring in the semimetal phase. storage ring, with a Bruker 70v interferometer mated to a broadband infrared microscope. BP samples were obtained from two different providers (Smart-Elements and HQ Graphene) and cut for use in a Diamond Anvil Cell (DAC) with CsI as the pressure transmitting medium, yielding identical experimental results. The samples, oriented along the basal ac plane (see Fig1a), were kept in contact with the diamonds in order to ensure a flat interface. Pressure was gauged through the ruby fluorescence technique 23 . The samples were mounted in two different DACs equipped with 1 mm and 0.4 mm culet diamonds respectively. The former allowed reliable measurements down to 100 cm −1 for pressures up to 2.2 GPa, while the latter allowed up to 10.4 GPa. Light was polarized along the most conductive direction, the c (armchair) axis crystallographic direction. From the reflectivity data at the sample-diamond interface we retrieved the optical conductivity through Kramers-Kronig transformations 24,25 . All the measurements reported in this study were performed at room-temperature (RT). We report in Fig. 2 the infrared signatures of the pressure-induced Lifshitz transition in the orthorhombic phase of BP, the key experimental observation of this work. Above 200 cm −1 , the low pressure (< 1 GPa) reflectivity R(ω) is approximately 0.1, and remains flat in the whole measured range, up to 8000 cm −1 . At 130 cm −1 , we detect a sharp peak assigned to the B 1u phonon mode 26 . The real part of the optical conductivity σ 1 (ω) is very low and consistent with semiconducting behavior at ambient pressure. As pressure is increased, both R(ω) and σ 1 (ω) are gradually enhanced. 
However, between 1.3 and 1.6 GPa, we observe an abrupt blue-shift of the reflectivity plasma edge that can be ascribed to the Lifshitz transition observed in ARPES 6 . As pressure is further increased and the phonon becomes screened, a Drude-like absorption term appears due to the delocalization of charge carriers. More details about the pressure dependence of the B 1u phonon are reported in the supplementary information. A reliable, model-independent figure of merit for the pressure-induced metallization is the low frequency spectral weight 24 , SW = 120 π Ω2 Ω1 σ(ω)dω, integrated between Ω 1 = 100 cm −1 and Ω 2 = 200 cm −1 . Ω 1 is the low frequency limit of our data, while Ω 2 is chosen in order to fully include the low frequency Drude term. As visible in Fig. 2c, the pressure dependent SW follows a sigmoid trend centered at P L = 1.5 GPa. This behavior maps exactly onto the resistivity measurement from Ref. 9 (see Fig. 2c), and was previously associated with the Lifshitz transition. In order to quantify the carrier density changes across the transition, we performed a Drude-Lorentz fitting of the data. The plasma frequency (ω p ) associated with the free-carrier Drude term is reported in Fig. 2d as a function of pressure. It is worth noting that a small, but sizeable RT conductivity in the order of few 10's (Ω · cm) −1 is observed also at ambient pressure, i.e. in the semiconducting phase, and can be ascribed to the presence of thermally-activated carriers. Notably, the plasma frequency increases linearly with pressure, up to P L . At the Lifshitz transition pressure P L , ω p increases more steeply at a rate of 900 cm −1 /GPa, almost three times the 260 cm −1 /GPa slope observed below P L . The discontinuity in the plasma frequency slopes can be as- cribed to the simultaneous presence of different fermions contributing to the conduction. Below P L only massive, thermally activated carriers contribute to the conduction. As the Dirac cone is formed above P L , also Dirac-like fermions contribute to the Drude conductivity, thus leading to a combination of massive (Schrödinger-like) and massless (Dirac) carriers. We evaluate the massive carrier contribution (ω p,S ) at all pressures by extrapolating above P L the linear behavior of ω p from below P L . As a consequence, the massless Dirac contribution to the plasma frequency ω p,D can be calculated at all pressures from ω 2 p = ω 2 p,S +ω 2 p,D . The resulting pressure dependent ω p,D is reported in Fig. 2d. The massless Dirac plasma frequency ω p,D can be microscopically calculated as 27 where g s and g v are the spin and valley degeneracies (g s =2 and g v =2 in BP 28 ). By making use of the experimental density of carriers determined from Hall effect measurements 29 , we can use Eq. 1 to estimate the pressure-dependent Fermi velocity v F (reported in Fig. 3c), and we find it to be around 2÷4 · 10 6 m/s in good agreement with theoretical calculations 15 . By further increasing pressure well above the Lifshitz transition, BP undergoes two distinct structural phase transitions, from orthorombic (A17) to rhombohedral (A7), and from rhombohedral to simple cubic (sc), at about 5 and 10 GPa respectively 16,17,20 . Recent experiments hint to the presence of large regions of phase coexistence between the various structural phases [30][31][32][33] . According to x-ray diffraction data from Ref. 32, performed on the same batch of samples used in this work, the A7 phase starts to appear above 5 GPa and coexists with A17 up to 10 GPa. 
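For readers who want to reproduce the two quantities used above, the following sketch evaluates the low-frequency spectral weight SW = (120/π) ∫ σ1(ω) dω between 100 and 200 cm⁻¹ and the quadrature decomposition ω_p² = ω_p,S² + ω_p,D². The Drude parameters below are made up for illustration; the only non-arbitrary ingredient is the common optics convention in which σ1 is in (Ω cm)⁻¹, all frequencies are in cm⁻¹, and the full-range spectral weight equals ω_p².

```python
import numpy as np

def drude_sigma1(omega, omega_p, gamma):
    """Drude sigma_1 in (Ohm cm)^-1 with all frequencies in cm^-1; in this
    convention (120/pi) * integral_0^inf sigma_1 d(omega) = omega_p**2."""
    return omega_p**2 * gamma / (60.0 * (omega**2 + gamma**2))

# Low-frequency spectral weight between Omega_1 = 100 and Omega_2 = 200 cm^-1
omega = np.linspace(100.0, 200.0, 2001)
sigma1 = drude_sigma1(omega, omega_p=1500.0, gamma=300.0)  # illustrative values
SW = 120.0 / np.pi * np.sum(0.5 * (sigma1[1:] + sigma1[:-1]) * np.diff(omega))
print(f"SW(100-200 cm^-1) = {SW:.0f} cm^-2")

# Isolating the Dirac contribution from the total and Schroedinger parts,
# using omega_p**2 = omega_pS**2 + omega_pD**2 as in the text:
omega_p, omega_pS = 2200.0, 1200.0                         # illustrative, cm^-1
omega_pD = np.sqrt(omega_p**2 - omega_pS**2)
print(f"omega_p,D = {omega_pD:.0f} cm^-1")
```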
Above this pressure, the A7 phase disappears, while the sc phase gradually sets in. We report in Fig. 3a-b, the optical properties across the A17-A7 phase transition to investigate how the mixed Drude responds to a structural phase transition. Under increasing pressure, the infrared reflectivity is enhanced and its plasma edge monotonically blue-shifts (see Fig. 3e), while the optical conductivity increases. The optical gap, located roughly at ∼ 2000 cm −1 (see the ambient pressure σ 1 (ω) reported in Fig. 3b for reference) is filled up. In this pressure range, the optical conductivity can be described by the combination of a Drude term and a mid-infrared (MIR) band (see supplemental material). The Drude plasma frequency grows linearly with pressure (black circles in Fig. 3e) with the same 900 cm −1 /GPa slope observed at 1.5 GPa. With increasing pressure, the MIR band progressively coalesces into the Drude and be-comes a second zero-frequency oscillator with a scattering rate γ ∼ 6000 cm −1 , i.e. much larger than the one associated to the massless carriers (γ ∼50÷500 cm −1 ). Remarkably, when summed in quadrature (ω p,Drude+MIR ), the two Drude terms exhibit a linear increase with the same slope of 900 cm −1 /GPa discussed above (see Fig. 3e). Let us note here that in this pressure range the decomposition of ω p in terms of ω p,D and ω p,S becomes relatively unimportant because of the dominance of the ω p,D contribution (see Fig. 3e). When entering the high pressure A7-sc mixed phase, the optical properties drastically change (Fig. 3c-d). The reflectivity edge shifts to 8000 cm −1 , resulting in a greatly enhanced σ 1 (ω). The two-bands electronic structure clearly identified in the A17-A7 mixed phase appears now to be merged into one single Drude term with ω p ∼ 15000 cm −1 , roughly corresponding to the ω p,Drude+MIR term introduced before to describe the A17-A7 phase. Our experimental findings can be benchmarked against first principles density functional theory (DFT) calculations of the structural, electronic, and optical properties under pressure. A first-principles description of the BP electronic properties across the semiconductormetal transition is challenging for local DFT exchangecorrelation functionals, which predict a metallic ground state at ambient conditions. In order to reproduce the small band gap at ambient conditions, we used the Tran-Blaha 34 meta-GGA exchange-correlation potential in DFT calculations 35 , which is quite reliable in describing small gap sp systems. The calculated zero-pressure band gap is 2000 cm −1 for the experimental, ambientpressure lattice parameters, in agreement with the ambient pressure optical conductivity (see Fig. 3b), and previous experimental reports 6,14 . More details about the DFT calculations are reported in the supplementary information. By using experimental structural information as a function of pressure from Ref. 17, we calculated the plasma frequencies reported and compared with the experiment in Fig.3e. Although the qualitative behavior is well reproduced, the results consistently overestimate the experimental values by a factor of 3. The onset of the Lifshitz transition is theoretically predicted at 2.1 GPa, close to, but slightly higher than the 1.5 GPa experimental value. This small discrepancy is likely due to defects and to the anomalous temperature dependence of the band structure 14,36 . 
Above 3 GPa the theoretical plasma frequency calcu-lated within the A17 structural phase increases linearly with pressure up to 8.5 GPa as experimentally observed (Fig. 3e). Considering the structural phase transition in the A7 phase, we found that the theoretical plasma frequencies (green triangles in Fig. 3e) are significantly enhanced with respect to the A17 case. Interestingly, the experimental values can match this leap if one considers the ω p,Drude+MIR plasma frequency (green circles in Fig. 3e) instead of ω p . The qualitative agreement between experimental data and theoretical calculations within this large pressure range and in different structural phases indicates that the midinfrared band can be attributed to partially localized charge carriers emerging in the A7 phase and coalescing in a Drude term at higher pressures. This spectral feature, likely related to strong interactions, is intriguing and deserves further study. In conclusion, we presented the first direct optical identification of a pressure-induced Lifshitz transition in elemental BP (P L = 1.5 GPa). The key spectral feature associated with this transition is a discontinuity in the pressure-dependent carrier density that can be attributed to the emergence of a plasma of massless Dirac carriers. The character of the Lifshitz transition has been confirmed through comparison with DFT calculations that provided an excellent description of the experimental plasma frequencies. The Dirac plasma frequency increases linearly with pressure, well into the A17-A7 mixed phase up to about 8 GPa. The onset of the A7 structural phase triggers the delocalization of a significant portion of charge carriers which become indistinguishable from the Dirac carriers when entering into the sc phase above 9 GPa. Our work in a clean, controlled elemental system will serve as a useful guide to identify optical signatures of a Lifshitz transition in more complicated systems, like hole-doped cuprates and iron pnictides, and will lead to a deeper understanding of this fascinating physical phenomenon.
Inter-Level Feature Balanced Fusion Network for Street Scene Segmentation Semantic segmentation, as a pixel-level recognition task, has been widely used in a variety of practical scenes. Most existing methods try to improve network performance by fusing the information of high and low layers. Simple concatenation or element-wise addition, however, leads to unbalanced fusion and low utilization of inter-level features. To solve this problem, we propose the Inter-Level Feature Balanced Fusion Network (IFBFNet) to guide inter-level feature fusion towards a more balanced and effective direction. Our overall network architecture is based on the encoder–decoder architecture. In the encoder, we use a relatively deep convolution network to extract rich semantic information. In the decoder, skip-connections are added to connect and fuse low-level spatial features, gradually restoring a clearer boundary expression. We add an inter-level feature balanced fusion module to each skip-connection. Additionally, to better capture boundary information, we added a shallower spatial information stream to supplement more spatial detail. Experiments have demonstrated the effectiveness of our module. Our IFBFNet achieved a competitive performance on the Cityscapes dataset with only finely annotated data used for training and improved greatly on the baseline network. Introduction Semantic segmentation is the task of predicting the corresponding category of each pixel in an image. It is a very popular direction in computer vision and can be applied to many practical tasks, such as city-scene understanding [1][2][3][4][5], satellite topographic surveying [6,7], and medical image analysis [8][9][10]. The point-to-point network structure designed by Fully Convolutional Networks (FCN) for semantic segmentation [11] performs well in this dense pixel-prediction task, and this structural foundation accelerated the development of semantic segmentation. So far, many excellent networks have been developed [12][13][14][15]. In FCN [11], the multilayer convolution and pooling structure results in a 32-fold reduction in the resolution of the final feature compared to the input image. This design loses a lot of spatial information, resulting in inaccurate predictions, especially on the edge details of the picture. To solve this problem, many networks have tried various methods. For example, the atrous convolution applied in DeepLabV3 [16] increases the receptive field while not reducing the size of the feature map. There is also a parallel atrous-convolution structure (Atrous Spatial Pyramid Pooling, ASPP) [2,16] that improves segmentation results when added to most segmentation networks. Additionally, the encoder-decoder [1,8] network structure is a common countermeasure to the above loss of spatial structure information. In an encoder-decoder network, the backbone of a classification network is often used as the encoder [11,13,17], responsible for encoding the input pictures into feature mappings with low resolution but rich semantic information. The decoder, usually designed as a series of convolution and up-sampling operations, then restores the resolution of the low-resolution feature to obtain a pixel-level category prediction of the same size as the original image.
Because the direct up-sampling of low-resolution feature maps still lacks the detailed spatial information lost in the encoder, decoders often incorporate low-level features into the up-sampling path to capture fine-grained information. A typical structure of this design is DeepLabV3plus [2]. Carefully analyzing most existing segmentation networks, it is not difficult to find that there are usually two ways to integrate low-level features in decoders: concatenation and element-wise addition. Element-wise addition sums features of the same size and number of channels; the summation implicitly imposes a prior on how the channels of the two levels correspond before the subsequent convolution. The new feature obtained this way can reflect some information unique to the original features, but some information contained in the original features is lost in the process. Concatenation is the splicing of feature mappings of the same size along the channel dimension; after splicing, each channel corresponds to its own convolution kernel. Addition saves computation compared with concatenation, but, if the amount of computation is not a concern, the concatenation process loses no information. Therefore, to obtain better predictions, concatenation-based fusion is more commonly used in semantic segmentation [2,8,18]. However, this also leads to some problems. Decoders usually combine deep features with shallow features, and these two kinds of features carry very different types of information. Convolution outputs after direct concatenation are not well integrated, resulting in low information utilization. As shown in Figure 1, the lower features contain simpler spatial and line information, while the higher features include rich semantic information. The convolution output after concatenation shows that the spatial information in the semantic features has been optimized greatly. However, the overall features are still mixed, resulting in a situation where neither the semantic nor the spatial characteristics are prominent. This would undoubtedly result in inaccurate, fuzzy predictions for pixel-level segmentation tasks. (Figure 1: feature visualization on Cityscapes [19]; from left to right are input images, low-level features, deep-level features, and features after concatenation fusion.) Inspired by the above observation, we designed an inter-level feature balanced fusion module to fuse the inter-level features in a more balanced and efficient manner, which solves the problem of inefficient feature utilization and the weak purposefulness of the regular inter-level fusion methods of concatenation or element-wise addition. The core idea of this module is inspired by the differences between spatial and semantic features. In the process of two-level feature fusion, the correlation between the two levels of features is calculated in the channel dimension after channel-wise concatenation. Channel-wise weights are then assigned to the spatial feature and to the semantic feature, respectively, by means of normalized scores. During backpropagation [20], these weighting parameters are continuously updated and optimized. Compared with previous feature-fusion methods, ours explicitly accounts for the fact that the information carried by the features being fused is different (a minimal sketch of the two conventional fusion modes follows below).
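As a hedged illustration (this is not the paper's code; the shapes and channel counts are made up for the example), the two conventional fusion modes compare as follows in PyTorch:

```python
import torch
import torch.nn as nn

low = torch.randn(1, 64, 128, 128)    # shallow feature: spatial detail
high = torch.randn(1, 64, 128, 128)   # deep feature, already upsampled to match

# Element-wise addition: cheap, but the two signals are blended channel by
# channel, so level-specific information can be lost.
fused_add = low + high                              # -> (1, 64, 128, 128)

# Concatenation: lossless stacking along channels; a following convolution
# mixes the levels, at the cost of more channels and computation.
fused_cat = torch.cat([low, high], dim=1)           # -> (1, 128, 128, 128)
mix = nn.Conv2d(128, 64, kernel_size=3, padding=1)  # learnable mixing
out = mix(fused_cat)                                # -> (1, 64, 128, 128)
```

Neither mode models which level should dominate each channel, which is the gap the balanced fusion module described later is meant to close.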
Our feature balanced fusion module lets the network learn how to integrate the features, guiding the interaction between the two kinds of information into balance so that each plays its role, and finally contributing to better segmentation predictions. Overall, our network is based on the encoder-decoder structure, which uses a deep convolutional backbone to extract rich semantic information, followed by an Atrous Spatial Pyramid Pooling (ASPP) [16] structure that extracts multi-scale features to obtain rich contextual information without reducing the feature resolution. The decoder is designed with a skip-connection structure, which restores the high-level feature resolution while fusing with the low-level features, adds spatial information to the semantic information, and gradually restores the segmentation boundary. To obtain more spatial information, we also designed a shallow spatial-stream branch and fused it with the features of the preceding level. In each fusion structure, we applied the feature balanced fusion module to balance the fusion of features from different parts. Our contributions can be summarized as follows:
1. An inter-level feature balanced fusion module was designed to solve the problem of feature imbalance caused by traditional concatenation or element-wise addition, which makes the fusion more balanced and the utilization of features more effective.
2. A shallow spatial stream with only three convolution layers was designed and added into the network; it is fused with the main semantic features before the prediction is output in the decoder. This further enriches the spatial information.
3. Our IFBFNet achieved a competitive performance of 81.2% mIoU on the Cityscapes dataset with only finely annotated data used for training, improving significantly over baselines.
Semantic Segmentation Before FCN [11] was introduced, CNN convolution layers were typically followed by several fully connected layers; FCN replaces the fully connected layers with ordinary convolution layers, finally outputting a feature mapping of the same size as the input. FCN's proposal of such a point-to-point fully convolutional network for semantic segmentation triggered a wave of research in this direction, and researchers have been committed to improving the accuracy of pixel-level prediction ever since. The research directions can be roughly divided into three groups: pyramid modules, encoder-decoder structures, and attention mechanisms. Pyramid Module For the pixel-level prediction task of semantic segmentation, small objects are in a very awkward situation within whole-image segmentation: they often suffer segmentation errors or rough contours, or are even ignored completely. To solve the problem of small-object segmentation, multi-scale pyramid modules have become the main solution, consisting of multi-scale pooling [17,21] or dilated convolutions of different rates [14,16,22]. To obtain a good segmentation prediction, our goal is to minimize the overall stride of the network, preventing the feature maps from becoming too small and losing too much spatial information. However, reducing the stride results in a significant reduction of the final feature's receptive field [23][24][25]. These two requirements seemed contradictory until the advent of ASPP [16]. It solves this problem by expanding the feature receptive field without sacrificing the spatial resolution of the feature (a short sketch of this property follows below).
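As a minimal, hedged demonstration of that property (the channel counts are illustrative): with padding equal to the dilation rate, a 3 × 3 atrous convolution keeps the spatial resolution unchanged while its effective receptive field grows with the rate.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 64, 64)
for rate in (1, 6, 12, 18):
    # padding == dilation keeps H x W fixed for a 3x3 kernel
    conv = nn.Conv2d(256, 256, kernel_size=3, padding=rate, dilation=rate)
    print(rate, conv(x).shape)  # always (1, 256, 64, 64); only the field grows
```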
ASPP samples the given input features in parallel with atrous convolutions at different dilation rates, which is equivalent to capturing the context of an image at multiple scales. In our network, ASPP was chosen for multi-scale feature extraction, because it ensures that high-level semantic features maintain a sufficiently large receptive field without losing too much spatial information at the same time. Encoder-Decoder The encoder-decoder network architecture has achieved excellent results in semantic segmentation tasks [1,8,15,26], such as segmentation of urban landscapes [2,17] and medical images [8][9][10]. At present, most popular, well-performing segmentation networks are based on this framework, modifying it or adding modules. It uses an encoder to learn rich and dense semantic features, and then uses a decoder to incrementally increase the resolution of the features to achieve segmented output. Meanwhile, low-level spatial features are incorporated into the decoding process to supplement the spatial information lost by the bilinear-interpolation up-sampling of high-level features. The encoder typically employs backbone networks commonly used for image classification [17,21,25], such as the ResNet and VGG series [27,28]; because they have strong feature-extraction capabilities, they are well suited to semantic segmentation tasks that require rich deep features. Therefore, improving the performance of an encoder-decoder segmentation network mainly depends on the structural design of the decoder and of the connections between the encoder and the decoder; if the design is ingenious, the segmentation score is easily improved. For example, in the design of DeepLabV3plus [2], the encoder features are upsampled, then concatenated with the low-level features, and the final output is obtained by upsampling the concatenated features. This design greatly improves network performance. AResU-Net [29] takes the UNet design as a reference: at each level, the features obtained by the encoder are upsampled and concatenated with the features of the level above, and this operation is repeated until an output containing multi-level information is obtained. Inspired by these works, our network also adds skip-connections [1,8,11,30,31] to produce clearer boundaries. Attention Mechanism Attention mechanisms are designed to mimic the human visual system, selectively focusing on more significant areas rather than dealing equally with the entire scene. Attention not only tells us where the focus is, but also enhances the representation of interests. Our goal is to improve performance by using attention mechanisms: focusing on important features and suppressing unnecessary ones. A lot of work has used this idea to accomplish various computer vision tasks. SENet [32], an excellent image-classification network, introduced a compact module that calculates channel attention weights by squeezing and exciting the feature mapping. In SSA-CNN [33], the object-detection box is used as segmentation ground truth to learn segmentation features, which then serve as an attention map that is fused with the detection features for detection. OCNet [34] and DANet [35] use a self-attention mechanism to explore the context.
In the segmentation network proposed by Chen et al. [36], different-scale features are automatically fused according to weights calculated by an attention model. Inspired by these efforts, we designed a feature balanced fusion module to learn fusion attention weights between the different levels of semantic segmentation and to guide them towards more efficient fusion rather than a simple concatenation of channel dimensions. Comprehensive experimental results show that this strategy does make the fusion of deep semantic features and shallow spatial features more balanced. Approach In this section, we introduce the details of our network structure design, which is divided into the encoder-decoder, feature balance, and spatial stream. Our network is mainly composed of three parts: encoder, decoder, and spatial stream. The overall framework of the network is shown in Figure 2. (Figure 2: the overall structure of IFBFNet. There are three parts: encoder, decoder, and spatial stream. The encoder is composed of a backbone network and ASPP to extract rich high-level semantic information. In the decoder, we added an inter-level feature balanced fusion module into each skip-connection structure. The spatial stream supplements more low-level spatial information.) Our Encoder-Decoder Based on previous encoder-decoder networks [1,8,15,26], our network also uses the encoder-decoder architecture. We use ResNet101 [27], a CNN commonly used for image classification with strong feature-extraction capability, as the encoder. As shown in Figure 2, we divide ResNet into four stages; S1, S2, S3 and S4 in the figure represent stage 1, stage 2, stage 3 and stage 4, respectively. The traditional ResNet101 extracts features from the input pictures and finally reduces the resolution of the feature map to 1/32 of the original input size. This large resolution reduction is harmful for the task of outputting a pixel-level prediction of the same size as the input image (H × W × 3). To maintain the resolution of the feature map extracted from the backbone network, we added atrous convolution to ResNet. Specifically, we set the dilations to (1, 2) and the stride sizes to (2, 1) in the last two stages of ResNet to obtain a feature map (H/16 × W/16 × 2048) at 1/16 of the input image size. To obtain context information at multiple scales, we reduced the feature channels extracted by the backbone from 2048 to 256 (H/16 × W/16 × 256) to limit the subsequent computations of ASPP, as shown in Figure 2. The decoder consists of several skip-connections and an upsampling structure designed to restore spatial characteristics. After ResNet and ASPP, we upsample the feature (H/16 × W/16 × 256) by a factor of four and concatenate it with the first low-level feature of the backbone. We call this first low-level layer stage one, shown as S1 (H/4 × W/4 × 256) in Figure 2. To optimize spatial detail information while preserving most of the semantic information, we use a convolution with a kernel size of 3 × 3, padding of 1 × 1, and stride of 1 × 1 to reduce the number of feature channels of stage one (S1) to 64 before concatenation. After the concatenation of the two-level features, we reduce the channel dimension to 256 (H/4 × W/4 × 256) using two 3 × 3 convolutions. The feature is then input into an inter-level feature balanced fusion module to optimize the feature expression (a sketch of this decoder step is given below).
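The following is a hedged PyTorch-style sketch of the decoder fusion step just described; it is our reading of the stated layout, not the released code, and the activation choices are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStep(nn.Module):
    """Sketch of the first decoder fusion: upsample the ASPP output 4x,
    compress the stage-1 feature to 64 channels with a 3x3 convolution,
    concatenate, and mix with two 3x3 convolutions down to 256 channels."""
    def __init__(self, high_ch=256, low_ch=256, low_reduced=64, out_ch=256):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_ch, low_reduced, 3, padding=1)
        self.mix = nn.Sequential(
            nn.Conv2d(high_ch + low_reduced, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, high, low):                       # high: H/16, low: H/4
        high = F.interpolate(high, size=low.shape[2:],
                             mode='bilinear', align_corners=False)
        low = self.reduce_low(low)
        return self.mix(torch.cat([high, low], dim=1))  # -> (N, 256, H/4, W/4)
```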
To complement the spatial features of S1 and make the prediction-map boundary clearer, we add a spatial-stream branch, concatenate it with the balanced feature above, and then change the channel number of the concatenated feature to 256 using a 1 × 1 convolution. After that, the feature map is processed by an inter-level feature balanced fusion module to obtain the optimized features. Finally, we use two 1 × 1 convolutions to reduce the feature channel dimension from 384 to 128, and then to the number of categories, and then upsample the feature map by a factor of four to obtain a prediction with the same size as the original input image (H × W × N_class). Inter-Level Feature Balanced Fusion Module In many network architectures [2,8,18,37], it is not difficult to find that, in the process of restoring the boundary of the prediction, the common method is to concatenate the low-level features with the high-level ones in the channel dimension. Consider two inputs X and Y of the same spatial size with C_1 and C_2 channels, written channel-wise as X_1, X_2, ..., X_{C_1} and Y_1, Y_2, ..., Y_{C_2}. Because feature concatenation acts on the channel dimension, differently from simple feature addition, the subsequent convolution is computed separately for each channel. Processed by convolution kernels K_1, K_2, ..., K_{C_1+C_2}, the concatenated feature map Z_concat can be expressed (reconstructed here from the surrounding description, since the typeset equation was lost in extraction) as
Z_concat = Σ_{i=1}^{C_1} K_i * X_i + Σ_{i=1}^{C_2} K_{C_1+i} * Y_i,   (1)
where * represents convolution and K_i stands for the ith convolution kernel. However, this simple channel-wise concatenation of high-level and low-level features means that the subsequent feature is merely the result of splicing each channel. To some degree, this ignores the differences between the levels of features and their contributions to the output; as a result, neither level's features play their best role. For example, the high-level information contains complex semantic information, while the low-level features mainly represent shape, line, color, and texture. The result is that simple concatenation can express neither the shallow information nor the deep information well. Inspired by the above, we propose a feature balanced fusion strategy that can guide the fusion of the two-level features towards a more balanced direction and improve the utilization of features. Unlike previous methods, after concatenating the two feature levels Feat_h ∈ R^{C_h×H×W} and Feat_l ∈ R^{C_l×H×W}, we use an average pooling operation to compress the spatial information of the fused features into a one-dimensional concentrated expression W ∈ R^{(C_h+C_l)×1×1}, then use a 1 × 1 convolution to calculate the correlation information between the two features, and finally use sigmoid normalization to obtain the balance weights W_h ∈ R^{C_h×1×1} and W_l ∈ R^{C_l×1×1} of the two-feature fusion. Reconstructed from this description (the typeset equations were likewise lost), the calculation is
W_h ||_c W_l = sigmoid( Conv_{1×1}( AvgPool( Feat_h ||_c Feat_l ) ) ),   (2)
where ||_c represents splicing along the channel dimension. Then, the weights are multiplied with the original two levels of features to obtain the fused feature map after weight optimization:
Feat_balanced = (W_h ⊙ Feat_h) ||_c (W_l ⊙ Feat_l),   (3)
where ⊙ denotes channel-wise multiplication. At the same time, in order to preserve some of the primitive information, we added a residual structure at the end, adding the unprocessed features to the optimized feature by element-wise addition. The specific structure of the inter-level feature balanced fusion module is shown in Figure 3. Overall, after concatenating features at different levels in the channel dimension, we compress the feature-map space into 1 × 1 dimensions and normalize the relationship between the two levels of features to generate their balanced weights. In the per-channel form of Equation (1), the balanced feature can then be written (again a reconstruction) as
Z_balanced = Σ_{i=1}^{C_1} K_i * (W_{h,i} X_i) + Σ_{i=1}^{C_2} K_{C_1+i} * (W_{l,i} Y_i).   (4)
By comparing the feature expressions calculated by Equations (1) and (4), it is clear that our feature balanced fusion method yields a more adaptive feature-fusion calculation, which can lead to a richer and more accurate feature expression (a PyTorch-style sketch of the module follows below).
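The module can be sketched in PyTorch roughly as follows. This is a hedged reconstruction from the description above and the reference to Figure 3, not the authors' code; channel widths and the exact placement of the residual path are assumptions.

```python
import torch
import torch.nn as nn

class BalancedFusion(nn.Module):
    """Inter-level feature balanced fusion (sketch): concatenate the two
    levels, squeeze spatially with global average pooling, model cross-level
    correlation with a 1x1 convolution, normalize with a sigmoid to obtain
    per-channel balance weights, re-weight each level, and keep a residual
    path to the raw concatenated features."""
    def __init__(self, ch_h, ch_l):
        super().__init__()
        self.ch_h = ch_h
        self.pool = nn.AdaptiveAvgPool2d(1)                # W in R^{(Ch+Cl)x1x1}
        self.corr = nn.Conv2d(ch_h + ch_l, ch_h + ch_l, 1) # cross-level correlation
        self.gate = nn.Sigmoid()

    def forward(self, feat_h, feat_l):
        cat = torch.cat([feat_h, feat_l], dim=1)
        w = self.gate(self.corr(self.pool(cat)))           # Eq. (2): balance weights
        w_h, w_l = w[:, :self.ch_h], w[:, self.ch_h:]      # split per level
        balanced = torch.cat([feat_h * w_h, feat_l * w_l], dim=1)  # Eq. (3)
        return balanced + cat                              # residual structure
```

Because the weights come from a single global pooling and one 1 × 1 convolution, the extra computation is negligible compared with the decoder convolutions, which matches the paper's claim that the module is cheap.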
Spatial Stream Generally speaking, for semantic segmentation tasks, the depth of the segmentation network [17,24] is very large, because only in this way can a sufficiently large receptive field be obtained. Specifically, these networks mostly encode the input images by continuous down-sampling and convolution, which yields rich semantic information and reasonably good predictions. However, in this process, the resolution of the output is compressed many times, and thus the predicted boundary details still need to be improved. We can see from related work [17,31,38] that the maintenance of spatial information does have an impact on prediction accuracy. Considering the importance of spatial information, on top of the deep semantic-information extraction network, we also designed a shallow spatial stream to supplement the spatial information lost to down-sampling in the deep path. The specific structure of the spatial-stream branch is shown in Figure 4. It contains only three convolution layers and is very simple. The first two layers both use a stride of 2, with kernel sizes of 7 × 7 and 3 × 3, respectively, so simple spatial information is extracted by convolution kernels of different scales. The last convolution layer no longer reduces the feature size: a 1 × 1 convolution is used to change the number of channels of the spatial stream, allowing flexible adjustment of the total amount of spatial information. The whole spatial stream only reduces the input to a quarter of its size, and the network structure is shallow. This design retains most of the spatial relations while extracting line and color information, which is exactly what we need. Figure 4 shows the structure diagram of the spatial stream: the input image is turned into a quarter-size feature map containing spatial information. In this process, we visualize the features of each layer; it can be seen that as the convolution layers increase, rich spatial information is extracted (a sketch of this branch is given below).
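A hedged sketch of this branch follows; the kernel sizes, strides, and channel counts come from the description above, while the padding and norm/activation layers are assumptions.

```python
import torch.nn as nn

class SpatialStream(nn.Module):
    """Three-layer spatial stream (sketch): two strided convolutions
    (7x7 then 3x3, stride 2 each) bring the input to 1/4 size, and a final
    1x1 convolution sets the number of spatial channels (128 gave the best
    result in the paper's ablation)."""
    def __init__(self, in_ch=3, mid_ch=64, out_ch=128):
        super().__init__()
        self.stream = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=1),  # -> (N, out_ch, H/4, W/4)
        )

    def forward(self, x):
        return self.stream(x)
```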
Loss Function The loss function used in our experiments is the Online Hard Example Mining (OHEM) loss [39]. The core idea of this algorithm is to filter the input samples according to their loss: it keeps the hard examples, i.e., the samples that have the greatest influence on classification, and applies only the filtered samples to training with Stochastic Gradient Descent (SGD) [40]. We treat the input picture as a sequence of pixel points [x_1, x_2, x_3, ..., x_N], where N is the number of pixels. For a pixel point x_i (i from 1 to N), the cross-entropy is (Equation (5), reconstructed here since the typeset formula was lost in extraction)
CE(x_i) = −log p_{x_i},   (5)
where p_{x_i} is the probability that pixel x_i is predicted to be the correct category. The corresponding loss function, following the loss in BiseNet [41], is
loss = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{C} y_{ij} log p_{ij},   (6)
where C is the number of categories and y_{ij} is a one-hot indicator containing only 0 and 1 elements: it is 1 if category j is the category of sample i, and 0 otherwise. As for p_{ij}, it indicates the probability that the ith predicted sample belongs to category j. For the OHEM loss, entropy values are calculated for the pixel sequence [x_1, x_2, x_3, ..., x_N] according to Equation (5). Then a reordered sequence of pixel points is obtained by sorting the entropy values from largest to smallest. We remove the last quarter of small-loss pixel points and train on the first three-quarters with the larger losses. The corresponding OHEM loss (again a reconstruction) is
loss_OHEM = −(1/M) Σ_{i=1}^{M} Σ_{j=1}^{C} ŷ_{ij} log p̂_{ij},  with M = 3N/4,   (7)
where ŷ_{ij} and p̂_{ij} are, respectively, the one-hot indicators after the pixel points are reordered and the predicted probability of class j. Drawing on previous work [13,37,41], we also use an auxiliary loss function in network training. We designed a bypass output branch that consists of two convolution layers, namely a 3 × 3 convolution followed by a 1 × 1 convolution. The first convolution layer reduces the number of channels from 256 to 64, and the second layer directly reduces the number of channels to the number of label categories. Both convolution layers have a stride of 1, so the size of the stage-one feature is not changed. To supervise this coarse segmentation prediction with the ground truth, we also upsample the features by a factor of four to obtain the final coarse segmentation map we need. Therefore, our loss function consists of two parts: the loss l_out calculated on the network output, and the auxiliary loss l_aux of the coarse output branch. To optimize the loss function better, we give the auxiliary loss a weight following PSPNet [17]:
loss = l_out + λ · l_aux.   (8)
Using such a joint loss function with an auxiliary term to supervise network learning makes our network easier to optimize (a sketch of the OHEM and joint losses follows below).
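A hedged PyTorch sketch of Equations (5)-(8) as read from the description above (the keep fraction follows the "first three-quarters" rule; the ignore_index value is an assumption):

```python
import torch
import torch.nn.functional as F

def ohem_loss(logits, target, keep_frac=0.75, ignore_index=255):
    """OHEM loss (sketch): compute per-pixel cross-entropy, sort it from
    largest to smallest, and average only the hardest keep_frac of pixels."""
    # per-pixel cross-entropy over the whole batch, flattened
    ce = F.cross_entropy(logits, target, reduction='none',
                         ignore_index=ignore_index).flatten()
    n_keep = max(1, int(keep_frac * ce.numel()))
    hard, _ = torch.topk(ce, n_keep)   # largest losses = hard examples
    return hard.mean()

def total_loss(main_logits, aux_logits, target, lam=0.4):
    """Joint objective of Equation (8): main loss plus a weighted auxiliary
    loss from the coarse side branch (lambda = 0.4 in the paper)."""
    return ohem_loss(main_logits, target) + lam * ohem_loss(aux_logits, target)
```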
Experiment To verify the effectiveness of our proposed modules, we carried out a number of experiments on the Cityscapes dataset [19]. Our network achieved a competitive performance and improved greatly on the baseline network. Additionally, we performed some visual contrast experiments to demonstrate the effectiveness of our module. The Cityscapes dataset, jointly provided by three German units including Daimler, contains stereo-vision data of more than 50 cities for urban scene understanding, including 19 classes for urban scene analysis and pixel-level segmentation. It contains 2975 finely labeled images for training, 500 for validation, and 1525 for testing, plus an additional 20,000 coarsely labeled images for training. It is worth noting that all our experiments were conducted on the finely annotated Cityscapes set. Implementation Details Our network is based on PyTorch. Following the learning-rate settings of previous work [13,16,17], we adopted the poly learning-rate policy, where the learning rate of the current iteration is obtained by multiplying the base rate by the factor (1 − iter/max_iter)^0.9. We used a Stochastic Gradient Descent (SGD) optimizer [40] to optimize the network parameters. For the Cityscapes dataset, we set the initial learning rate of the network to 0.01, the weight-decay coefficient to 0.0005, and the momentum to 0.9. In the network training process, we set the learning rate of the coarse-segmentation output branch and the feature balanced fusion module parts to 10 times the base rate, and that of the remaining parts to the base rate. The loss function is shown in Equation (8), and λ was set to 0.4 to achieve the best effect. The OHEM [39] loss function was used to purposefully improve the learning of difficult samples. In training, we replaced all BatchNorm layers with InPlaceABN-Sync [42]. For data augmentation, the input pictures were randomly cropped to 876 × 876 during training and flipped horizontally. All experiments were performed on two Nvidia GTX 1080Ti GPUs. The total number of training iterations was set to 81k, and the first 1k iterations were a warmup phase. Experimental Results Applying the proposed methods and some common training techniques, and following the implementation rules described in Section 4.1, after training only on the finely annotated training set, our network reached 81.2% mIoU on the Cityscapes validation set. Compared to the basic network DeepLabV3plus [2], we thus achieved an improvement of nearly 1.3%. Table 1 shows in detail the per-class improvements we achieved relative to other advanced networks. Comparing the class IoUs, it is not difficult to find that our scores improve considerably in most classes. Ablation Study In this section, we outline a series of comparative experiments performed to demonstrate the validity of our proposed modules. (Table 1: comparison of our IFBFNet with DeepLabV3plus and other state-of-the-art networks on the Cityscapes validation set in terms of class IoUs and mean IoU; the blackened figures mark the highest values.) Baseline Network Specifically, we set up two baseline networks as the basis of our experiments; one is ResNet101 [27], a very basic feature-extraction network, and the other is ResNet101 with ASPP added at the end. We made several experimental comparisons based on these two baselines. Baseline1: ResNet101-Dilated. To make ResNet101 more suitable for semantic segmentation tasks, we set the dilation of the last two stages of ResNet to [1,2] and the stride to [2,1]. Thus, the output stride of the network was set to 16, so that a feature mapping of 1/16 of the image size was finally obtained. We pre-trained the network on the ImageNet dataset and then fine-tuned the parameters on the Cityscapes dataset; after pre-training on ImageNet, subsequent training on specific segmentation datasets converged faster. Baseline2: ResNet101+ASPP. As detailed in Section 2.2, ASPP is a module that has achieved tremendous success in semantic segmentation tasks due to its delicate design. To verify that our proposed module can also work with other modules to improve network performance, we also set up this ResNet101 + ASPP baseline. Its specific structure is composed of a global average pooling and three 3 × 3 dilated convolutions (dilation rates of 6, 12, and 18), which extract context information at multiple scales. The features from the four branches are concatenated, and a 1 × 1 convolution is then used to reduce the concatenated channel dimension to 256 (a sketch of this baseline module follows below).
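The described ASPP baseline can be sketched as follows; this is a hedged rendering of the stated structure (GAP branch plus three dilated 3 × 3 branches, concatenated and projected to 256 channels), with norm/activation layers omitted as assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """ASPP (sketch): an image-level (global average pooling) branch plus
    three 3x3 atrous convolutions with dilation rates 6, 12 and 18; the four
    branches are concatenated and projected to 256 channels with a 1x1 conv."""
    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(in_ch, out_ch, 1))
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        img = F.interpolate(self.image_pool(x), size=x.shape[2:],
                            mode='bilinear', align_corners=False)
        return self.project(torch.cat(feats + [img], dim=1))
```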
Inter-Level Feature Balanced Fusion Network. The structural design of the inter-level feature balanced fusion module is introduced in detail in Section 3.2. To make the best use of this module where appropriate, we applied it to each two-level feature-fusion process in the network. Since we use a global average pooling operation to compute the inter-layer weights, this part of the calculation is not large (relative to [34,35,37]). We use the inter-level feature balanced fusion module once to balance the differences between the features from ASPP and stage one's low-level feature of the backbone network; the strategy is applied a second time in the feature-map fusion between the specially designed spatial branch and the preceding backbone features. Inter-Level Feature Balanced Fusion Network with Spatial Stream. The spatial stream was designed to be relatively simple so as to obtain more spatial information from the input image; thus it consists of only three convolution layers, with the number of intermediate feature-map channels mimicking the first two layers of the ResNet structure. We set the number of intermediate channels to 64. However, because we were not sure how much spatial information to pass through, we performed a set of comparative experiments: we built spatial-stream branches with different output channel counts to obtain different amounts of spatial information. To obtain the optimal result, we finally set the number of spatial-stream output channels to 128. Ablation Study for Inter-Level Feature Balanced Fusion Module First, we used Baseline1 (the atrous ResNet101 and ResNet50) as the baseline network for the next series of experiments, with the corresponding output directly up-sampled. The results of the experiment are shown in Table 2, where we compare the two backbone networks of different depths. To improve network performance, we added the ASPP module at the end of the network, which improved the ResNet50 series by 2.80% and the ResNet101 series by 2.46% compared with the baseline network. To verify the effectiveness of our proposed inter-level feature balanced fusion module, we added layer skip-connections to the baseline network ResNet + ASPP. One skip-connection fusion method is the common concatenation, corresponding to "Skip-Connection" in Table 2; the other is our feature balanced fusion mode. From the experimental data in Table 2, it can be seen that adding layer skip-connections to ResNet50 and ResNet101 improved performance by 2.06% and 2.40%, respectively. The inter-level feature balanced fusion method further improved over the plain fusion method by 2.43% and 1.19% on ResNet50 and ResNet101, respectively. Ablation Study for Spatial Stream In this section, we further analyze the importance of a spatial information stream for enhancing the experimental results. In Table 2, we find that adding the spatial stream improved ResNet50 and ResNet101 by 0.34% and 0.11%, respectively. To obtain the most appropriate spatial-stream design, we adjusted the amount of spatial information carried by the stream through a series of final output channel counts (16, 32, 48, 64, 128, 256), keeping all other settings the same. Six sets of comparative tests were performed on the Cityscapes validation set.
As shown in Table 3, we found that the best prediction results are obtained when the number of final output channels is 128. Ablation Study for Improvement Strategies As in [16,35,37], we also used similar improvement strategies, as shown in Table 4. (1) OHEM (online hard example mining): we focused on the training of difficult samples during the training process, as reflected in the loss function in Section 3.4. We classified samples whose predicted probability for the correct class was below the 0.7 threshold as difficult samples. With OHEM, the performance on the Cityscapes validation set improved by 1.48%. (2) We added auxiliary loss supervision identical to the loss function in Equation (8). Because the weight of the auxiliary loss relative to the main loss is uncertain, and a proper setting can lead to a better result, we conducted a set of comparative experiments: keeping all other parameters consistent, we adjusted the coefficient of the auxiliary loss from 0.1 to 1.0. The experimental results are shown in Table 5. After comparing the experimental data, we determined that setting the auxiliary-loss weight to 0.4 gave the best performance on the Cityscapes validation set. (Table 5: comparative experiments on the weight λ in Equation (8), which controls the size of the auxiliary loss; the blackened numbers represent the best-performing group in this set of data.) Visualization of Inter-Level Feature Balanced Fusion Module In the previous parts, we introduced the network structure design and the comparison of experimental data. To show the role of our inter-level feature balanced fusion module in the network more vividly, we input an image and visualize the feature maps of different levels as the network processes it. The visualization results are shown in Figure 5. We visualized the low-level features of the network (in our case, the first stage of the backbone), shown in the second row of the figure; they mainly contain shallow spatial information and some line outlines. The third row shows the deep features, which contain rich semantic information, and the fourth row shows the result of fusing the two-level features by the ordinary concatenation method. The result of inter-level feature balanced fusion is shown in the last row. The comparative results demonstrate that our fusion can better integrate the two parts of features from different levels, carrying clear line-contour information while containing rich semantic information. Comparing with the State-of-the-Art On the Cityscapes test set, we further compared our proposed network with recently published methods. We input the original test-set images into the trained model to obtain test-set predictions meeting the requirements of the official test protocol. In this inference process, we applied multi-scale prediction and flipping strategies, which should improve the performance of our network (a sketch of this test-time procedure follows below). We packaged and uploaded the results to the official test script, waited for more than ten minutes, and obtained the corresponding results, as shown in Table 6. Compared with other previous methods, our network achieved better performance. Compared with the replicated DeepLabV3plus [2], the mIoU of our network on the Cityscapes test set exceeded it by 1.1%. In the training process, we used only the finely annotated data of the Cityscapes dataset to train our network, which further demonstrates the effectiveness of the proposed method.
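A hedged sketch of that test-time augmentation (the scale set is an assumption; the paper only states that multi-scale and flipping were used):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def tta_predict(model, image, scales=(0.75, 1.0, 1.25, 1.5)):
    """Multi-scale + horizontal-flip inference (sketch): run the model at
    several scales and on mirrored inputs, resize the logits back to the
    original resolution, and average them."""
    n, _, h, w = image.shape
    logits_sum = 0.0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear',
                          align_corners=False)
        for flip in (False, True):
            xin = torch.flip(x, dims=[3]) if flip else x
            out = model(xin)
            if flip:
                out = torch.flip(out, dims=[3])   # un-mirror the prediction
            logits_sum = logits_sum + F.interpolate(
                out, size=(h, w), mode='bilinear', align_corners=False)
    return logits_sum / (2 * len(scales))
```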
Table 6. Per-class results on the Cityscapes testing set. Our network outperformed existing approaches and achieved 79.6% mean IoU. "-" indicates that a method did not give the corresponding result. The blackened numbers represent the best-performing group in this set of data. In Figure 6, we provide a comparison of predictions between our network and the baseline network. We visualize some sample test results on the Cityscapes validation set for the baseline network and IFBFNet, marking the areas where the results differ significantly with yellow lines. Looking at these images, we find that IFBFNet significantly improves the prediction of object boundaries, and small objects, such as power poles, are also predicted well, maintaining their correct shape. Conclusions In this paper, IFBFNet is proposed for scene segmentation; it addresses the problem that inter-level feature fusion in semantic segmentation yields mixed and unclear information. Specifically, we adopt an adaptive feature-fusion strategy that lets the network learn by itself to reach a state in which low-level information and high-level complex semantic information are integrated more efficiently. At the same time, we add a shallow spatial information stream to increase the amount of spatial information. A series of ablation experiments and visualizations of intermediate features have shown that the inter-level feature balanced fusion method achieves a more balanced and clearer feature representation, as well as a more accurate segmentation result through the increased flow of spatial information. Our IFBFNet achieved very competitive results on the challenging Cityscapes dataset, improving significantly over baselines. In the future, we will further explore more application scenarios for the inter-level feature balanced module. The module we designed is mainly applied to the adaptive integration of different levels of information within a network structure, and it can be extended to scenarios that must take into account both spatial information and high-level semantic information. The proposed design applies to the integration of two levels; for more complex situations, such as fusing features from three or more levels, the same design can be followed by adding corresponding input branches and weight blocks, though the effectiveness of such a structure still needs to be verified in future work. Author Contributions: D.L. and Q.Z. completed the main work, including proposing the idea, coding, training the model and writing the paper. L.Z. and C.F. reviewed and edited the paper. H.J. and Y.L. were responsible for guiding the experimental design and data analysis. All authors participated in the revision of the manuscript. All authors have read and agreed to the published version of the manuscript.
Monoclonal gammopathy-associated pauci-immune extracapillary-proliferative glomerulonephritis successfully treated with bortezomib Extracapillary-proliferative glomerulonephritis is a rare complication of multiple myeloma. Partial remission of kidney involvement with cyclophosphamide therapy has previously been described. We report the case of a 60-year-old male patient diagnosed with rapidly progressive glomerulonephritis associated with IgG kappa monoclonal gammopathy. His kidney biopsy revealed pauci-immune extracapillary-proliferative glomerulonephritis without cryoglobulinaemia. Treatment with the proteasome inhibitor bortezomib induced rapid clinical and histological remission of his kidney disease. The patient's renal function remained stable on bortezomib maintenance therapy. Our findings suggest that bortezomib is a promising therapeutic approach to ameliorate severe kidney damage in monoclonal gammopathy- and myeloma-associated pauci-immune extracapillary-proliferative glomerulonephritis. Background Renal manifestations of multiple myeloma are clinically and histologically diverse. The most common form of renal involvement, accounting for 33 to >60% of cases, is cast nephropathy, characterized by an overabundance of toxic light chains in the tubular system. Light-chain deposition disease, primary amyloidosis (often complicated by nephrotic syndrome), proximal and distal tubular dysfunction, and renal vein thrombosis due to hyperviscosity or type 1 cryoglobulinaemia are seen less commonly. Rapidly progressive glomerulonephritis is an unusual complication of multiple myeloma that has rarely been reported in the literature [1][2][3][4][5]. It is characterized by severe glomerular damage, often involving >50% of the glomeruli in a renal biopsy [6]. Because of the associated rapid decline of renal function, often within weeks, prompt initiation of therapy is crucial to prevent additional damage. In general, the most common cause is pauci-immune glomerulonephritis, with a mean age at presentation of 60 years [6][7][8]. This form of extracapillary-proliferative glomerulonephritis is closely correlated with circulating pathogenic anti-neutrophil cytoplasmic antibodies (ANCAs), present in 80-90% of patients [8]. Immune complex-mediated glomerulonephritis and anti-glomerular basement membrane nephritis are less frequently diagnosed in this setting. Standard treatment of extracapillary-proliferative glomerulonephritis includes induction therapy with cyclophosphamide and steroids. In severe kidney and pulmonary disease, plasmapheresis to remove circulating antibodies may be beneficial. Mycophenolate mofetil or azathioprine is usually employed for maintenance immunosuppression. Rituximab as a B-cell-depleting therapy has also been successfully used [9,10]. In multiple myeloma, bortezomib therapy in combination with dexamethasone recently became a first-line therapy for patients with myeloma-induced renal insufficiency [11]. Several studies documented a significant improvement in kidney function, usually within the initial two to three cycles of treatment [12]. We report a case of monoclonal gammopathy-associated pauci-immune extracapillary-proliferative glomerulonephritis successfully treated with the proteasome inhibitor bortezomib. Case report A 60-year-old male patient was referred with an increase in serum creatinine to 140.8 µmol/L (1.6 mg/dL), haematuria of 290 erythrocytes/µL and a spot urine protein-creatinine ratio (UPCR) of 1.9 g/g. He had also developed hypertension during the previous months.
His medical history was uneventful, and physical examination did not show any pathologies. He subsequently underwent renal biopsy, showing pauci-immune extracapillary-proliferative glomerulonephritis (4 of 14 crescents) with mild tubular atrophy and interstitial fibrosis (20%), as well as lymphohistiocytic infiltration (Figure 1A). Polymorphonuclear leukocytes were present in the glomerulus and in the interstitium. Immunofluorescence revealed minimal mesangial deposits of C3 and complement complex C5b-9 as well as IgM. Immunofluorescence for IgA, IgG and fibrinogen was negative. No amyloid deposits or myeloma casts were identified. Serum C3 and C4 complement levels were normal. Serum ANCAs and cryoglobulins were negative. No anti-glomerular basement membrane antibodies were present. The patient received four monthly cycles of cyclophosphamide i.v. along with steroids. Informed consent was obtained from the patient prior to submission of this manuscript. During therapy, serum creatinine stabilized at 132.0 µmol/L (1.5 mg/dL), proteinuria (UPCR) decreased to 0.5 g/g and haematuria improved to 32 erythrocytes/µL. Maintenance treatment with azathioprine was started, and the patient returned to the care of his nephrologist. After 10 months, the patient was referred to our service again with newly diagnosed macrohaematuria, an increased UPCR of 5 g/g and an elevated serum creatinine of 169.8 µmol/L (1.93 mg/dL). A thorough investigation revealed an increased serum β2-microglobulin of 3.5 mg/L and a pathological serum IgG kappa/lambda free light-chain ratio of 5.2 (normal value: 0.26-1.65) due to elevated serum IgG kappa light chains of 74.0 mg/L (normal range: 3.3-19.4 mg/L). A bone marrow biopsy showed an increase of monoclonal kappa-positive plasma cells to 10%. On re-evaluation of the initial renal biopsy, no evidence of light-chain deposition disease, fibrillary glomerulonephritis or amyloidosis was present. A new renal biopsy was performed, again showing pauci-immune extracapillary-proliferative glomerulonephritis (4 of 10 crescents) with mild interstitial fibrosis. In this biopsy, there was also no evidence of classical myeloma-associated kidney disease. Because of the relapse of rapidly progressive glomerulonephritis after cyclophosphamide therapy and leucopenia during azathioprine treatment, we decided to administer two doses of 1 g of rituximab i.v. within 4 weeks and maintained the patient on a reduced dose of azathioprine in combination with cyclosporine A [1,10]. Unfortunately, the patient showed rapid deterioration of his renal function within the following 8 weeks, to a serum creatinine level of 303.6 µmol/L (3.45 mg/dL), an increase of proteinuria to 9 g/g and an increase of haematuria. The serum kappa/lambda free light-chain ratio also increased, to 9.1. Because of the underlying plasma cell dyscrasia and rapidly worsening kidney function, we decided to start the patient on the proteasome inhibitor bortezomib (1.3 mg/m² body surface i.v. on Days 1, 8, 15, 22) in combination with dexamethasone, based on the treatment recommendations for multiple myeloma. After the first cycle of bortezomib/dexamethasone, serum creatinine decreased to 140.8 µmol/L (1.6 mg/dL), and urinalysis showed reduced proteinuria of 2 g/g. A control biopsy was performed, revealing residual sclerosed crescents, completely sclerosed glomeruli, mild interstitial fibrosis and tubular atrophy (Figure 1B). No signs of active extracapillary proliferation were detected.
After the second cycle of bortezomib/dexamethasone, the patient showed clinical remission, with serum creatinine levels of 103.8 µmol/L (1.18 mg/dL), minimal proteinuria of 0.48 g/g, no haematuria and well-controlled hypertension. Maintenance therapy with monthly bortezomib was initiated, and the patient showed stable serum creatinine values as well as stable proteinuria over 8 months of follow-up. Discussion Glomerulonephritis with crescents, although rare, is a well-documented complication of multiple myeloma. This association was first described by Kaplan and Kaplan in 1970, who presented a 49-year-old patient with renal failure due to extracapillary-proliferative glomerulonephritis, nephrotic syndrome and an IgG paraprotein [13]. Meyrier et al. [2] described three cases of extracapillary-proliferative glomerulonephritis in which plasma cell dyscrasia was identified in two patients and Waldenstrom's macroglobulinaemia in one patient as the underlying cause of renal disease. Renal function was stabilized by melphalan and steroids in the first patient and by steroids in combination with plasmapheresis in the third patient. Rapidly progressive glomerulonephritis has also been reported in patients with primary and secondary amyloidosis [1,4,14,15]. The presence of paraproteins without characteristics of multiple myeloma (hypercalcaemia, anaemia, bone disease) is referred to as 'monoclonal gammopathy of undetermined significance' (MGUS). Recently, Leung et al. [16] suggested introducing the term 'monoclonal gammopathy of renal significance' (MGRS) if renal damage is present in these patients. In this report, we describe a patient with pauci-immune extracapillary-proliferative glomerulonephritis due to IgG kappa monoclonal gammopathy, in whom bortezomib and dexamethasone treatment significantly improved renal function. This patient therefore suffers from MGRS. Histologically, after bortezomib therapy, the crescents were sclerosed, mild interstitial fibrosis and tubular atrophy had developed, and the proliferative glomerulonephritis was halted. The improvement of renal function after treatment with bortezomib was very rapid, suggesting a direct effect of bortezomib on the proliferative and inflammatory glomerular lesions. Bortezomib is a highly selective inhibitor of the 26S proteasome. This drug is known to inhibit protein degradation, especially in high-turnover tumour cells, interfering with cell-cycle regulation and cell proliferation [17]. Given the rapid improvement of the patient's kidney function, we hypothesize that bortezomib exerted its beneficial effects not only through control of plasma cell proliferation and paraprotein secretion, but also through direct inhibition of cell proliferation in the kidney. In a mouse model of ANCA-associated necrotizing crescentic glomerulonephritis, bortezomib was able to prevent renal disease [18]. Interestingly, no additional hallmarks of myeloma-associated kidney involvement were present in this patient's renal biopsy. Histologically, no deposition of light chains or protein casts was detected, and there was no evidence of amyloidosis. Crosthwaite et al. [4] recently described a patient with primary AL amyloidosis and IgG kappa multiple myeloma who developed rapidly progressive glomerulonephritis in the setting of renal amyloidosis.
In that patient, bortezomib and dexamethasone treatment led to a decrease of the paraproteinaemia, but one month after initiation of therapy his serum creatinine started to rise and he became dialysis-dependent, suggesting that bortezomib treatment might be less effective if renal amyloidosis is present. In our patient, the diagnosis of the underlying disease was not made until the second episode of rapid deterioration of kidney function, when a thorough workup revealed elevated serum IgG kappa light chains. The fact that the monoclonal gammopathy was not discovered earlier indicates that this disease was not initially considered as a cause of his progressive glomerulonephritis. Therefore, we suggest that monoclonal gammopathy and multiple myeloma should be ruled out, especially in patients with ANCA-negative rapidly progressive glomerulonephritis. In summary, this case demonstrates that in pauci-immune extracapillary-proliferative glomerulonephritis, the presence of monoclonal gammopathy and multiple myeloma, although a rare differential diagnosis, should be considered. Bortezomib therapy is a promising therapeutic approach to reverse severe kidney damage in myeloma-associated pauci-immune extracapillary-proliferative glomerulonephritis.
Intrauterine Vacuum-Induced Hemorrhage-Control Device for Rapid Treatment of Postpartum Hemorrhage Intrauterine vacuum-induced hemorrhage control may provide an effective treatment option for postpartum hemorrhage that has the potential to prevent severe maternal morbidity and mortality. … sequelae. Transfusion of 1-3 units of red blood cells was required in 35 participants, and five participants required 4 or more units of red blood cells. The majority of investigators reported the intrauterine vacuum-induced hemorrhage-control device as easy to use (98%) and would recommend it (97%). CONCLUSION: Intrauterine vacuum-induced hemorrhage control may provide a new rapid and effective treatment option for abnormal postpartum uterine bleeding or postpartum hemorrhage, with the potential to prevent severe maternal morbidity and mortality. (Obstet Gynecol 2020;00:1-10) DOI: 10.1097/AOG.0000000000004138 Postpartum hemorrhage is the leading cause of maternal mortality worldwide and is responsible for 25% of maternal deaths from obstetric causes. 1 Moreover, the problem is growing, particularly in the United States, where rates of severe maternal morbidity and transfusions have increased 2 despite commensurately increasing utilization rates of first- and second-line postpartum hemorrhage treatment modalities. 3 Many important efforts have been developed to address these trends, notably comprehensive safety bundles inclusive of recognition and prevention of abnormal postpartum bleeding, readiness with improved training and transfusion protocols, and robust quality reporting, [4][5][6] and yet there have been few innovative approaches to treat abnormal postpartum bleeding or postpartum hemorrhage before morbidity occurs. Uterine atony causes up to 80% of all postpartum hemorrhages. [7][10][11] In an atonic uterus, vessels are not constricted and hemorrhage ensues, prompting first-line therapy. When medical management alone is deemed unsuccessful, tamponade is currently the next treatment option added to control uterine atony. Tamponade directly compresses the vascular bed to impede bleeding as a temporizing measure. By using outward pressure on the uterine walls for 12-24 hours, 12,13 the uterus may then involute and regain normal tone. 14 Although tamponade has been demonstrated to be effective in controlling hemorrhage in 87% (95% CI 84-90%) of atony-related cases, 15 the mechanism of action of using outward pressure to control bleeding from uterine atony is counterintuitive if the ultimate goal is uterine contraction. [16][17] The frequency of complications attributed to uterine balloon tamponade use was up to 6.5% in the recent meta-analysis by Suarez et al. 15 Most protocols 18,19 recommend using tamponade or packing after at least 1,000 mL of blood have been lost and, with ongoing bleeding, up to 1,500 mL. Up until this point, there have been few other options appropriate for early use in the management of abnormal bleeding unresponsive to uterotonics alone, or in a patient who has limited options for uterotonics owing to contraindications. Beyond these modalities, other treatment options consist of increasingly invasive procedures.
The Jada System (a novel intrauterine vacuum-induced hemorrhage-control device) was specifically designed to offer rapid treatment by applying low-level intrauterine vacuum to facilitate the physiologic forces of uterine contractions to constrict myometrial blood vessels and achieve hemostasis. The device was evaluated in a prior feasibility study outside the United States that showed promise as a rapid treatment for abnormal postpartum uterine bleeding or postpartum hemorrhage. 20 The study reported herein was conducted in the United States to evaluate the effectiveness and safety of the intrauterine vacuum-induced hemorrhage-control device to control abnormal postpartum uterine bleeding or postpartum hemorrhage in a larger patient population. ROLE OF THE FUNDING SOURCE Funding provided by Alydia Health, Inc. Alydia Health, Inc., provided the study design and protocol, supported data collection and study monitoring, conducted analysis, and provided input and support for publication. The authors had access to the study protocol, analytic plan, and study report required to understand and report research findings. The authors take responsibility for the presentation and publication of the research findings, have been fully involved at all stages of publication and presentation development, and are willing to take public responsibility for all aspects of the work. All individuals included as authors and contributors who made substantial intellectual contributions to the research, data analysis, and publication or presentation development are listed appropriately. The role of the Sponsor in the design, execution, analysis, reporting, and funding is fully disclosed. The authors' personal interests, financial or nonfinancial, relating to this research and its publication have been disclosed. METHODS This was a prospective, observational, multicenter treatment study (clinicaltrials.gov NCT02883673). The aim of the study was to evaluate the effectiveness and safety of the intrauterine vacuum-induced hemorrhage-control device for the control of postpartum hemorrhage. The intrauterine vacuum-induced hemorrhage-control device is made of medical-grade silicone, with an elliptical intrauterine loop on the distal end and, on the proximal end, a vacuum connector that allows connection using standard tubing to an in-line graduated canister and regulated vacuum source (Fig. 1). In this study, the regulated vacuum source included standard wall suction and, in some cases, a transportable vacuum source. The inner surface of the intrauterine loop has 20 vacuum pores that facilitate creation of vacuum within the uterine cavity. The outer surface is covered by a shield that overhangs the vacuum pores to protect maternal tissue from the vacuum and to prevent the vacuum pores from clogging with tissue or blood clot. The intrauterine loop and other components are soft and smooth to limit the chance of tissue damage during insertion, treatment, and removal of the device.
A manual sweep of the uterine cavity is customarily performed to evaluate for retained products and to assess the integrity of the uterine cavity; in the case of ongoing bleeding, it is performed again before device placement to clear any organized clot from the uterus before treatment. The device is then introduced through the cervix into the uterine cavity with direction either by the user's hands or with the assistance of standard instrumentation such as sponge forceps. The goal of placement is to place the intrauterine loop within the uterine cavity, with the donut-shaped cervical seal just outside the external cervical os at the top of the vagina, which limits vacuum application to the uterus only. The cervical seal is filled with sterile fluid (60-120 mL), and low-level vacuum (80±10 mm Hg) is applied using a regulated vacuum source with an in-line canister. Pooled blood is evacuated from the uterus as the uterus collapses, which can be observed directly when the abdomen remains open during cesarean delivery or by abdominal palpation or real-time ultrasound scan after vaginal delivery. The volume of blood initially evacuated from the uterus and any ongoing blood loss is quantified in the canister during treatment. Control of abnormal bleeding or postpartum hemorrhage was defined in the protocol as the first report that abnormal bleeding had been stopped. Control is considered definitive when there is an absence of recurrence without need for additional escalation of treatment. The intrauterine vacuum-induced hemorrhage-control device remains in place (Fig. 2), with the vacuum applied for at least 1 hour after control of hemorrhage. With the uterine cavity collapsed and bleeding controlled, the continued application of vacuum allows time for physiologic or medication-induced myometrial contractions that collapse the uterine cavity and occlude vessels. Control is evaluated by direct observation of blood flow through the system while feeling for a firm uterus. This contracted state, which mirrors the natural process after delivery, is designed to provide sustained control of bleeding. After active therapy is completed, the vacuum is disconnected and the cervical seal emptied. The device is left in place for a minimum of 30 minutes to allow close observation for any return of atony or abnormal bleeding necessitating further treatment before removal. Finally, to remove the device, one hand is placed on the abdomen to support the uterine fundus while the other hand slowly withdraws the device through the vagina. The device is not intended to be left within the uterus for more than 24 hours. Prophylactic antibiotic administration was not specifically required as part of the study protocol but could be prescribed based on the clinical judgment of the investigator and their local postpartum hemorrhage-management guidelines.
Women were eligible for participation in the study if they were 18 years of age or older, able to consent, delivered at 34 weeks of gestation or later, had normal uterine anatomy (women with uterine leiomyomas not excluded) and normal placentation, and had atony-related pre-device placement estimated blood loss of 500-1,500 mL after vaginal delivery or 1,000-1,500 mL after cesarean delivery (device placed transvaginally after hysterotomy closure) unresponsive to treatment with uterotonics and uterine massage. Initial quantitative blood loss was not required, because many sites were not universally calculating real-time quantitative blood loss. However, if quantitative blood loss was available before placement of the vacuum-induced hemorrhage-control device, it was captured and used instead of estimated blood loss. Blood loss criteria for inclusion were developed acknowledging that the reVITALize21 definition for postpartum hemorrhage was published in 2014, with the American College of Obstetricians and Gynecologists Practice Bulletin22 subsequently updated for consistency to a cumulative blood loss of 1,000 mL or greater.21,22 However, both reVITALize and the American College of Obstetricians and Gynecologists highlight that a blood loss of 500-999 mL should trigger increased supervision and potential interventions as clinically indicated. Large state-wide perinatal-quality collaboratives continue to cite blood loss greater than 500-999 mL as abnormal,18,19 and care teams often initiate treatment in this abnormal range to minimize ongoing blood loss.3 Exclusion criteria included retained placenta without easy manual removal, uterine rupture, purulent infection, coagulopathy, or blood loss greater than 1,500 mL at time of device placement. Additional medications could be continued during or after treatment according to standard care at each clinical site, provided maximum dosing was not exceeded.
Enrollment occurred from February 2018 to January 2020 at 12 hospitals across the United States. Race and ethnicity were categorized on the study case report forms according to National Institutes of Health standards and were abstracted from medical record review, reliant on patient self-report. Women were approached by trained research staff in the prenatal setting or the labor and delivery unit for consent. Informed consent was obtained before the diagnosis of postpartum hemorrhage to ensure the participants were not consented while in a state of duress. Women who gave consent were enrolled if they reached the estimated blood loss inclusion requirement and had suspected uterine atony that was determined to be refractory to initial treatment with uterine massage, prescribed uterotonics, and possibly tranexamic acid. If the participant underwent cesarean delivery, a minimal cervical dilation of 3 cm was required to attempt placement of the intrauterine vacuum-induced hemorrhage-control device. Only investigators who were trained on device placement and study procedures were permitted to place the device. Training for investigator participation included both a didactic session on the study protocol and use of the device and hands-on simulation using a task trainer uterine model to ensure proficiency using the device. The training included content on the protocol requirement to visualize or palpate uterine collapse after connection of the vacuum during the steps of using the device as an outcome of interest on the study. During each enrollment, a quick reference guide was included with the device, in addition to the instructions for use. These served as real-time references and visual aids to clearly outline procedural steps. A second study-trained individual was present to re-review inclusion and exclusion criteria with the investigator before the procedure to ensure the patient still met eligibility before device placement and to collect required study data for each participant enrolled. The primary effectiveness endpoint was the proportion of participants successfully treated for abnormal postpartum uterine bleeding and postpartum hemorrhage, defined as avoidance of other open surgical or nonsurgical interventions after intrauterine vacuum-induced hemorrhage-control device use in the setting of uterine atony. Nonsurgical, second-line treatment included uterine balloon tamponade therapy, uterine packing, or uterine artery embolization; open surgical interventions included exploratory laparotomy or re-operation, vascular ligation, uterine compression sutures, or hysterectomy. The primary safety endpoint was the incidence, severity, and seriousness of device-related adverse events. Adverse events were collected from enrollment to the 6-week follow-up visit, and all investigator reports of adverse events were reviewed by an independent obstetrician medical monitor. Secondary endpoints included time to control of hemorrhage, need for further nonsurgical treatment or surgical treatment after device placement for arrest of atony-related postpartum hemorrhage, treatment with blood transfusion after device placement and total units transfused, and assessment of usability at the conclusion of treatment as reported by the investigator placing the device based on a 5-point Likert scale (Strongly Agree, Agree, Neutral, Disagree, and Strongly Disagree).
Categorical data were summarized using frequency tables, presenting participant counts and relative percentages. Continuous variables were summarized as mean, SD, median, interquartile range, minimum, and maximum as appropriate. A 95% CI was calculated for the treatment success rate. Statistical analysis was performed by an independent statistician (Advanced Research Associates) using SAS 9.4. The study was performed under an Investigational Device Exemption from the U.S. Food and Drug Administration. Institutional review board approval was obtained at each clinical site before commencement of study enrollment. Two analysis cohorts are presented in this article: an enrolled cohort (n=107) and an intention-to-treat (ITT) effectiveness cohort (n=106). The primary effectiveness analysis was performed on the ITT cohort, and the primary safety analysis was performed on the enrolled cohort. RESULTS Of 107 participants enrolled with primary postpartum hemorrhage or abnormal postpartum uterine bleeding, 106 received any study treatment with the device connected to vacuum. The participant disposition chart is shown in Figure 3. Demographics, obstetric history, and delivery details are presented in Tables 1-3. The mean maternal age was 29.7±5.5 years. Race for the majority of participants was reported as White (57%) or Black or African American (24%). The majority of enrolled participants (64%) met criteria for obesity at admission (body mass index [BMI, calculated as weight in kilograms divided by height in meters squared] 30 or higher). Eighty-five percent of the deliveries were vaginal, with a mean gestational age of 38.1±2.0 weeks. Fifteen participants (14%) delivered neonates with macrosomia (4 kg or more), and 11 (10%) participants were enrolled after delivering twins. The primary cause of abnormal postpartum uterine bleeding or postpartum hemorrhage in all participants was uterine atony. Thirty-four participants (32%) also had delivery-associated lower genital tract lacerations that either had already been repaired or were repaired during treatment with the intrauterine vacuum-induced hemorrhage-control device. The median (interquartile range) estimated blood loss before treatment was 870 mL (700-1,000 mL) for vaginal delivery and 1,300 mL (1,050-1,425 mL) for cesarean delivery. A 6-week postpartum health assessment was obtained for 103 of the 107 (96%) enrolled participants. A total of eight device- or procedure-related adverse events were reported in the study. These events included endometritis (n=4), disruption of a vaginal laceration repair (n=1), presumed endometritis (n=1), bacterial vaginosis (n=1), and vaginal candidiasis (n=1). All of the events resolved with treatment and without serious adverse sequelae. No cases of uterine rupture, lower genital tract laceration, or uterine incision dehiscence related to device use were reported.
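As noted above, a 95% CI was calculated for the treatment success rate. The paper does not state which interval method the SAS analysis used; the following is a minimal sketch, assuming a Clopper-Pearson exact interval, of how such an interval can be reproduced for the study's 100/106 success count.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 100, 106  # successes / ITT cohort size reported by the study
lo, hi = clopper_pearson(k, n)
print(f"success rate = {k/n:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")
# -> roughly 94.3% with CI (88.1%, 97.9%), consistent with the reported 94% (88-98%)
```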
The treatment success rate for the intrauterine vacuum-induced hemorrhage-control device was 94% (100/106, 95% CI 88-98%) in the ITT cohort. Five participants in the ITT cohort required additional surgical or nonsurgical treatment for atony-related bleeding; one participant did not require additional treatment for atony-related bleeding and instead received a suture for an initially unrecognized cervical laceration. The five participants requiring additional atony-related treatment included a participant treated with uterine balloon tamponade for recurrence of atony with bleeding 2.5 hours after device treatment had ended, when re-treatment with the device was not allowed per protocol (n=1); a participant with intraoperative B-Lynch compression suture treatment added in conjunction with the study treatment (n=1); uterine balloon tamponade used after the vacuum regulator was determined to be dysfunctional (n=1); a B-Lynch compression suture followed by hysterectomy (n=1); and a hysterectomy (n=1). In the other 100 participants in the ITT analysis cohort, the device successfully controlled the hemorrhage. To objectively measure both the procedure performance and the use of resources for treatment, analyses were performed on time to uterine cavity collapse, time to hemorrhage control, and total procedure time (Table 4 and Fig. 4). In successful use of the intrauterine vacuum-induced hemorrhage-control device to control hemorrhage, the initial collapse of the uterus reported by investigators occurred in a median of 1 minute (interquartile range 1-2) from the time of vacuum connection, which was either palpated abdominally, demonstrated on ultrasound scan, or visualized intraoperatively (at cesarean delivery). In 82% of participants in whom the device controlled abnormal bleeding, the control occurred within 5 minutes, with a median time of 3 minutes (interquartile range 2-5). Including the required minimums of 60 minutes for vacuum treatment time and 30 minutes of observation without the vacuum connected, the median time of vacuum treatment was 144.0 minutes (interquartile range 85.8-295.8), with a total device indwelling median time of 191.0 minutes (interquartile range 132.8-365.8). The duration of hospital stay from delivery to discharge was similar to standard delivery hospitalization lengths of stay, with a median stay of 2.2 days (interquartile range 2.0-2.7), with 73% of participants staying 2 days or less. The median length of stay for cesarean birth was higher at 3.0 days (interquartile range 3.1-4.4) compared with 2.0 days (interquartile range 1.9-2.4) for vaginal birth, a difference that is consistent with expected longer stays after cesarean birth. Forty participants (38%) in the ITT analysis cohort received any blood product. Thirty-five participants (33%) received 1-3 units, and five (5%) received 4 or more units of red blood cells. No participant developed coagulopathy. Although there was clinically significant blood loss before use of the intrauterine vacuum-induced hemorrhage-control device, blood evacuation or loss during treatment was measurable in the tubing or canister and low at a median of 110 mL (interquartile range 75-200). Investigators who used the device for the study provided an independent assessment of device usability as a part of data collection during each case. Almost all users recommended the device for the treatment of postpartum hemorrhage (97%) and reported that the device was easy to use (98%) (Fig. 5).
DISCUSSION In this single-arm observational study, we have shown that the intrauterine vacuum-induced hemorrhage-control device has the potential to be used to rapidly and effectively control abnormal postpartum uterine bleeding and postpartum hemorrhage. In this cohort, control occurred within minutes, the indwelling time for the device was short, and treatment was definitive for the majority of patients. The device had a low rate of adverse events during this study, all of which were expected risks and resolved with treatment without serious clinical sequelae. Investigators, all first-time users of the device, found the system easy to use, which suggests that, after device education and with availability of a quick reference guide outlining steps, there is a minimal learning curve for use. The intrauterine vacuum-induced hemorrhage-control device demonstrates the potential to mechanically achieve the goals of normal uterine physiology or pharmaceutical uterotonics when they are not working alone, contracting the uterus when this does not occur spontaneously immediately postpartum. The use of low-level vacuum (70-90 mm Hg) to contract the myometrium and decrease uterine size is in contrast to traditional mechanical methods used for tamponade, which work by creating outward pressure, causing uterine distention. With tamponade systems there can be complexities to effective placement and maintenance of treatment, because the balloon can rupture if overfilled; therefore, it is recommended to use the minimal amount of uterine distension to accomplish control of bleeding.14 Tamponade commonly requires the use of vaginal packing to keep the balloon in place, but, when used, a positive tamponade test must first be performed to ensure that packing does not obscure ongoing bleeding.14 The active nature of intrauterine vacuum treatment and the mechanism of action creates immediate observability and allows for monitoring of any ongoing blood loss, controlling hemorrhage in a definitive manner. Effectiveness is initially observed by the palpable change in uterine tone and visible cessation of blood flow. The ongoing active evacuation of any blood and clot from the uterine cavity using low-level vacuum allows real-time quantification of blood loss throughout treatment, and vaginal packing is not required. Blood collected during treatment can be used in resuscitation efforts through cell salvage.23 A review of available treatment options for postpartum hemorrhage reveals a significant unmet need. Atony-related postpartum hemorrhage that is nonresponsive to available uterotonics will require additional treatment. The intrauterine vacuum-induced hemorrhage-control device reported herein offers an additional treatment option, with the potential to be used early in ongoing bleeding, that is rapid, easy to deploy, effective, and has a reassuring safety profile without serious complications. With 87% effectiveness reported for balloon tamponade devices in a recent meta-analysis,15 the 94% effectiveness of the vacuum device observed in this single-arm treatment study is promising. Treatment with more invasive procedures, such as uterine artery embolization and surgical interventions, which may not be available in all obstetric units and which carry more risk and cost, may potentially be avoided in a significant number of women when health care professionals have access to more treatment options.
This study has multiple strengths, including the prospective design with a rigorously defined protocol, analysis powered to evaluate effectiveness in the included cohort and safety for common adverse outcomes, and training for investigators and research staff. However, the study is not without limitations, which include that this study was not randomized by design. There are challenges, although not insurmountable, to a randomized controlled trial for atony-related abnormal postpartum bleeding and postpartum hemorrhage. Such a design should be considered in the future, with careful evaluation of the most appropriate comparator for the device and optimal timing for device use within treatment algorithms. As the first large study of this device, enrollment was limited to participants with 1,500 mL or less estimated blood loss for safety reasons, so further study is needed in more severe postpartum hemorrhage cases. Additionally, the majority of cases described herein were vaginal deliveries, which could limit the generalizability of the results. With additional research on this device, safety will be further assessed in a greater number of cases. Finally, patient-reported satisfaction or family-reported outcomes were absent from this study. For example, the majority of participants included had some form of neuraxial labor analgesia, which raises the question of how the procedure will be tolerated in patients without analgesia. Additional potential benefits of this approach include that using a definitive treatment as soon as it is determined that uterotonics and massage alone are not working, with subsequent rapid cessation of bleeding, may decrease overall blood loss and associated need for blood transfusion or quantity of transfused product. The short duration of time the device is indwelling may limit rates of device-related complications such as infection while also reducing resource utilization and cost by decreasing time spent in the labor and delivery unit. Finally, we can reasonably assume that this short duration and more physiologic approach to treatment with the device may be better aligned with shared postpartum treatment goals, including enhancing maternal recovery and facilitating maternal-newborn bonding. In conclusion, the intrauterine vacuum-induced hemorrhage-control device offers a therapeutic modality that may be considered early in the treatment of abnormal postpartum uterine bleeding or postpartum hemorrhage. Given the speed with which the device has been demonstrated to control abnormal bleeding and postpartum hemorrhage, it is likely to offer benefit to the patient and family, the clinical team, and the health care system overall. This study demonstrates that the intrauterine vacuum-induced hemorrhage-control device might fill an essential treatment need as we strive to decrease rates of severe maternal morbidity and mortality and improve maternal outcomes. Fig. 2. Placement of intrauterine vacuum-induced hemorrhage-control device with low-level vacuum connected (A) and uterine contraction (B). Images courtesy of Alydia Health. Used with permission. Table 1. Demographics. BMI, body mass index. Data are mean±SD or % (n). * Race and ethnicity categories were collected according to National Institutes of Health standards; "Other" as a category was included by patient self-report. † Two participants were missing a height or weight for calculation of BMI and are excluded from this analysis. Table 2. Obstetric and Medical History.
Table 3. Delivery Characteristics. † Ten of the multiple births were vaginal deliveries, and one multiple birth was a cesarean delivery. ‡ Other type of anesthesia includes combined spinal epidural (11), epidural or general (2), nitrous oxide (2), and nalbuphine (1). § One hundred eighteen neonates were delivered to the 107 participants, including 11 sets of twins and 96 singletons.
2020-09-10T10:11:47.697Z
2020-09-07T00:00:00.000
{ "year": 2020, "sha1": "ba9cbadb55b47e8a8127d0ac192f6131a4631664", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/aog.0000000000004138", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b2ae67cd9f5d243f4907f9912a02de83eb68c480", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
8138910
pes2o/s2orc
v3-fos-license
Electronic Health Record for Intensive Care based on Usual Windows Based Software Background and objectives: In Intensive Care Units, the amount of data to be processed for patient care, the turnover of the patients, and the necessity for reliability and for review processes indicate the use of Patient Data Management Systems (PDMS) and electronic health records (EHR). To respond to the needs of an Intensive Care Unit and not to be locked in with proprietary software, we developed an EHR based on usual software and components. Methods: The software was designed as a client–server architecture running on the Windows operating system and powered by the Access database system. The client software was developed using the Visual Basic interface library. The application offers the users the following functions: capture of medical notes, observations and treatments, nursing charts with administration of medications, scoring systems for classification, and the possibility to encode medical activities for billing processes. Results: Since its deployment in September 2004, the EHR has been used to care for more than five thousand patients with the expected software reliability, and has facilitated data management and review processes. Communications with other medical software were not developed from the start and are realized through basic communication-engine functionality. Further upgrades of the system will include multi-platform support, use of a typed language with static analysis, and a configurable interface. Conclusion: The developed system, based on usual software components, was able to respond to the medical needs of the local ICU environment. The use of Windows for development allowed us to customize the software to the preexisting organization and contributed to the acceptability of the whole system. INTRODUCTION The electronic health record (EHR) is becoming increasingly important in modern health care systems. Numerous clear advantages over paper records have been demonstrated. Any form of medical record needs to be accurate, consistent, legible, complete, and simply presented (1). The use of information technology in health care records allows the user to improve the quality of information, convey accurate information quickly, meet specific needs, access a patient's data whenever it is needed, and enable the rapid extraction of data to improve overall patient care (2,3). These advantages of the EHR apply also in intensive care, where the amount of data to be processed, the turnover of the patients, and the necessity for reliability and review processes indicate the use of Patient Data Management Systems (PDMS) (4). Commercial PDMS exist for intensive care, but they lock the users into proprietary software, as opposed to Usual Windows Software (UWS), which allows sharing of software resources and experience (5) and can also be used in intensive care. Open-source software could also be an alternative solution (6), but it is more sensitive to software or operating-system upgrades, which can complicate data transmission or recovery. To respond to the needs of our intensive care unit and benefit from UWS resources, we developed an EHR system (and PDMS) based on Windows-based software and components. The aim of that development was also to avoid being locked in with proprietary software and dependent on costly licenses. Software Design The software used by the system was designed as an architecture running on the Windows operating system (Windows Server) and accessing the Access relational database (Office 2000 version).
The development of the system was based on the workflow and data flow observed in our unit and on the procedures and documentation already in use. The data were mapped onto the relational database. The client software was developed in Visual Basic using a bottom-up approach. The software offers the following functions (Figure 1): • Medical note capture with patient history, observations and treatments, • Nursing charts for vital signs, in–out balance, ventilation parameters and settings, • Functionalities for administration of medications, • Scoring systems for patient classification (APACHE II (7), SAPS II (8)), • Automatic reporting at the end of the Intensive Care stay, • Encoding of medical activities for administrative and billing processes. Interoperability between all modules of the software is realized through access to the Access database rather than through local memory in the interface. The software was developed to be compatible in all its components and is interfaced with Windows for reporting. To track runtime errors and achieve sufficient software reliability, we systematically tested all the software elements. Implementation and software use All the above functionalities were implemented, but only the medical part of the application was used. For every new patient, on admission, data have to be entered in the software using classical medical observations: relevant medical history, previous treatment, and a chronological history of the actual problems justifying the ICU admission, as well as clinical findings and complementary examinations with a specific description of the medical diagnosis. For every ICU day, clinical data on patient evolution can be described with the most important elements of the day: biological and bacteriological results, respiratory and hemodynamic status, and results of the last performed complementary examinations. Daily treatment and therapeutic strategy may be prescribed and updated using an integrated care-provider order entry. Description of the ICU population is necessary to evaluate patient prognosis and severity of illness. Comparison between patients requires patient evaluation and stratification for study or clinical purposes. It is therefore necessary for ICU management to score the patients with well-accepted ICU scores. For that purpose, the APACHE II and SAPS scores can easily be determined for every patient on admission and during the ICU stay, permitting stratification of the ICU population and giving important information for patient follow-up and ICU management. Reporting at the end of the ICU stay is necessary for communication with referring hospital specialists or general practitioners. This allows transferring important follow-up information and enhances collaboration between the different patient care teams: in the ICU, in the other hospital wards, and also at home when the patient leaves the hospital. The automatic help on reporting at the end of the hospitalization enhances collaboration and information transmission among all who are involved in the care of acute patients. Encoding of medical activities for administrative and billing processes can be done with the system.
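The paper does not show how the client modules query the Access database; the following is a minimal sketch of such an access path from a modern script, using the standard Microsoft Access ODBC driver via pyodbc. The file path, table name (`vital_signs`) and column names are hypothetical, since the actual schema is not published in the text.

```python
import pyodbc  # ODBC bridge; the Microsoft Access ODBC driver must be installed

# Hypothetical database path and schema, for illustration only.
conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\icu\ehr.mdb;"
)
cur = conn.cursor()

# Fetch the latest vital signs for one patient to populate a nursing-chart view.
cur.execute(
    "SELECT recorded_at, heart_rate, map_mmhg, spo2 "
    "FROM vital_signs WHERE patient_id = ? ORDER BY recorded_at DESC",
    ("P-0001",),
)
for recorded_at, heart_rate, map_mmhg, spo2 in cur.fetchmany(10):
    print(recorded_at, heart_rate, map_mmhg, spo2)
conn.close()
```

This mirrors the design choice described above: every module reads and writes through the shared database rather than through in-process state.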
Complete administrative information is important for the general management of care institutions, where ICU structures are known to use considerable personnel and financial resources: the software allows obtaining, within a minimal time, various information to answer the questions of the medical authorities inside or outside the hospital. Scientific studies are also facilitated by the possibility of extracting from the database complete data about a specific population and its ICU evolution. Hardware The minimum hardware configuration for the system consists of Windows PCs using the available intrahospital network connections, such a configuration allowing backup and replication through the existing hospital systems. The hardware we used consists of classic Intel PCs with uninterruptible power supplies and a connection to the hospital network to assure the integrity of the database by replication. Communications with other medical software We planned to access the data from the general network of the hospital, available in every ward. This did not require complex and specific communication software to be developed and implemented. The data can then be transferred to the ICU database and to the database of the institution. The Mirth HL7 communication engine could also be used to send clinical data from the database to other medical software, or to export a clinical summary at the end of the intensive care stay, but macros within the Windows-based software were also an easily used solution. Identification, authorization and security Identification and authorization are to be done at the database level with encrypted passwords. Several levels of authorization must include read-only access, write access with or without prescription privileges, and administration of medications. Identification must be repeated at every important access, such as note writing, prescriptions, and administration of medications. Except for local access on the private network, database access must be done through secure connections. VNC on Windows does not provide such secure access, and supplementary software is needed. License The code developed for the system was published under the general Windows license already existing, purchased, and widely used in the hospital. Upgrade Before extending this use to more ICUs in our institution, the system does not need to be upgraded, which is a big advantage. Principal upgrades include multi-platform support including the Windows operating system, and separation of clinical data from patient identification and administrative data in separate databases for each new ICU, to facilitate extraction of anonymous data for clinical review, research, and privacy preservation. New developments will use more secure components compatible with the Windows upgrade policy. RESULTS The EHR was used in our unit, from September 2004, for the care of more than five thousand patients. The system is accessible at desks and offices through Windows PCs. Its design allowed access to the database's functionalities with a high availability level (no interruption over the years). The system is used on an everyday basis for staff discussion using a central display and for every patient's notes and treatments. Indicators were also developed to follow the activity of the unit and are used at regular intervals for evaluation, as are database queries to answer specific clinical questions. Use of the system at every bedside represents a future development.
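As a sketch of the HL7 interfacing mentioned above: HL7 v2 messages are pipe-delimited segments, and a clinical-data export could be assembled as plain text before being handed to an engine such as Mirth. The segment fields and codes below are illustrative assumptions, not the unit's actual mapping.

```python
from datetime import datetime

def build_oru_message(patient_id: str, name: str, obs: dict) -> str:
    """Assemble a minimal HL7 v2 ORU^R01 message (illustrative field mapping)."""
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|ICU_EHR|HOSPITAL|MIRTH|HOSPITAL|{ts}||ORU^R01|{ts}|P|2.3",
        f"PID|1||{patient_id}||{name}",
    ]
    # One OBX segment per observation, e.g. {"HR": "82", "MAP": "75"}.
    for i, (code, value) in enumerate(obs.items(), start=1):
        segments.append(f"OBX|{i}|NM|{code}||{value}|||N")
    return "\r".join(segments)  # HL7 v2 separates segments with carriage returns

print(build_oru_message("P-0001", "DOE^JOHN", {"HR": "82", "MAP": "75"}))
```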
The use of Windows resources was, however, effective for customizing the solution to the ICU medical requirements and contributed to the acceptability of the software. The Access database largely contributed to the overall efficacy and robustness of the system. The use of the Visual Basic language permits small response times and avoids a complicated debugging process in this critical environment. For that reason, the Windows environment has to remain stable to avoid version-related problems and runtime errors. The system was well accepted locally for medical activity and was not hard to interface with the information system of the hospital, but it has to be completed to include all the work at the bedside. Despite more and more technical procedures in ICU patients, more frequent elderly patients, and higher severity confirmed by ICU scores, the mortality in our ICU seems to diminish. The use of the EHR could be an interesting complementary element to implement in the ICU to enhance quality and security in severely ill patients. DISCUSSION We developed the present software to respond to the needs of our intensive care unit, with the hope that this would enhance quality in our unit. In a review of Clinical Informatics in Critical Care, G. Daniel Martich describes several reasons to implement information systems in intensive care (5). The first one is that information systems could reduce medical errors, first of all medication errors. The second reason is that information overload is present at the point of care in intensive care units; clinical informatics at the bedside can help to better manage this load. Other reasons are described, such as the necessity to achieve and assess compliance with guidelines and accreditation rules. To be effective in improving outcome, databases should be developed and controlled at the level where change is to occur (10). Several studies have demonstrated the effectiveness of using relational databases to improve care of intensive care unit patients, and especially infected patients (11,12). We decided to base our development work on Windows-based systems for three main reasons: first, to benefit from the large library and resources; second, to avoid being locked into proprietary software; and third, to be able to adapt the software to the manual procedures preexisting in our unit. Economic reasons were also present. These reasons are similar to those described by Douglas Carnal. That author described in 2000 that collaboration over the Internet is changing development methods and that open-source software (OSS) will be a significant part of medical software's future (6). However, Martich's review of Clinical Informatics in Critical Care in 2004 does not mention any OSS used in intensive care, and we found only one OSS specific to intensive care well described in the literature (13). 292 medical projects are available for download on sourceforge.net, but only one is directly related to intensive care, and it is available only in German. A community of users of OSS in intensive care clearly remains to be created. Several factors limit adoption of OSS, such as limited support and sometimes insufficient quality (14). For our application, based on usual and widely used Windows software, we used external support for the hardware and internal support for the software. We tried to respect, during development and implementation, a level of quality corresponding to the needs of our environment. Intensive care environments require systems with high availability.
The system described here was able to respond to these requirements by the use of dedicated and duplicated servers running on the existing network of our institution. Software development and testing for intensive care need to achieve high reliability. The VB language used to develop the software is fortunately, by itself, a safe and widely used language. For that reason, we did not have to systematically test the software with a suite of simulation-based debugging and profiling tools such as Valgrind. The use of static analysis of the code with specific tools such as Splint (15) early in the development process and before compilation, or the use of safer languages such as Ada or SparkAda (16), which are recommended when OSS is developed and are used to develop secure systems, was not necessary to obtain a better solution. Ada and SparkAda, with static analysis, are also not to be used for further developments with OSS. Finally, to better respond to the needs of the work at the bedside and better integrate nursing work, the system will be upgraded with flexible, evolving, and configurable interfaces. Initially, the lack of a module designed to communicate with other medical software and applications was a limitation of the system. The development of communications using the HL7 standard with the Mirth HL7 communication engine can solve this problem and facilitate the integration of the PDMS with other medical software and applications, but simple macros used within the Windows software could also resolve this problem without complicated interventions. CONCLUSION The developed system, based on usual software components, was effective and able to respond to the medical needs of the local ICU environment. The use of Windows-based systems allowed us to customize the software to the preexisting medical organization of the unit at low cost and contributed to the acceptability of the whole system. The system needs, however, further design and development to better integrate the work at the bedside and communication with other medical software, devices, and applications.
2016-05-04T20:20:58.661Z
2015-07-30T00:00:00.000
{ "year": 2015, "sha1": "8a70a7430435db24fa2df8b52021c48008421a39", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc4584085?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8a70a7430435db24fa2df8b52021c48008421a39", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
250668912
pes2o/s2orc
v3-fos-license
Impact of sterile neutrinos in lepton flavour violating processes We discuss charged lepton flavour violating processes occurring in minimal extensions of the Standard Model via the addition of sterile fermions. We firstly investigate the possibility of their indirect detection at a future high-luminosity Z-factory (such as FCC-ee). Rare decays such as Z → ℓ1∓ ℓ2± can indeed be complementary to low-energy (high-intensity) observables of lepton flavour violation. We further consider a sterile neutrino-induced charged lepton flavour violating process occurring in the presence of muonic atoms: their (Coulomb enhanced) decay into a pair of electrons, µ− e− → e− e−. Our study reveals that, depending on their mass range and on the active-sterile mixing angles, sterile neutrinos can give significant contributions to the above mentioned observables, some of them even lying within present and future sensitivity of dedicated cLFV experiments and of FCC-ee. Introduction Several extensions of the Standard Model (SM) add sterile neutrinos to the particle content in order to account for neutrino masses and mixings. These models are further motivated by anomalous (oscillation) experimental results, as well as by certain indications from cosmology (see [1,2] and references therein). The existence of these sterile states may be investigated on many fronts, among them at high-energy colliders. Motivated by the design study for a high-luminosity circular e+e− collider (called FCC-ee) [3], we investigate the prospects for searches for sterile neutrinos by means of rare charged lepton flavour violating (cLFV) Z decays [4]. Moreover, sterile neutrinos can be (indirectly) searched for also at high-intensity facilities, via numerous possible manifestations of cLFV: in addition to radiative and three-body decays, rare muon transitions can take place in the presence of nuclei, since when a µ− is stopped in matter, it can be trapped, thus forming a "muonic atom". Here we consider the Coulomb-enhanced decay of a muonic atom into a pair of electrons, µ− e− → e− e− [5]. SM extensions via sterile states The effective "3+1 model" A simple approach to address the impact of sterile fermions on rare cLFV Z decays consists in considering a minimal model where only one sterile Majorana state is added to the three light active neutrinos of the SM. This allows for a generic evaluation of the impact of the sterile fermions on these processes. In this simple toy model, no assumption is made on the underlying mechanism of neutrino mass generation. The addition of an extra neutral fermion to the particle content translates into extra degrees of freedom: the mass of the new sterile state m4, three active-sterile mixing angles θi4, and three new CP phases (two Dirac and one Majorana). Inverse Seesaw The Inverse Seesaw (ISS) mechanism [6] is an example of a low-scale seesaw realisation which in full generality calls upon the introduction of at least two generations of SM singlets. Here, we consider the addition of three generations of right-handed (RH) neutrinos νR and of extra SU(2) singlet fermions X to the SM particle content. Both νR and X carry lepton number L = +1. The ISS Lagrangian is given by $\mathcal{L}_{\rm ISS} = -Y_\nu^{ij}\, \bar{\nu}_{R i} \tilde{H}^\dagger L_j - M_R^{ij}\, \bar{\nu}_{R i} X_j - \tfrac{1}{2} \mu_X^{ij}\, \bar{X}^c_i X_j + \mathrm{h.c.}$, where i, j = 1, 2, 3 are generation indices and H̃ = iσ2 H*. Lepton number U(1)_L is broken only by the non-zero Majorana mass term µX, while the Dirac-type RH neutrino mass term MR does conserve lepton number. (Footnote 1: Based on a work done in collaboration with Abada A, Monteil S, Orloff J and Teixeira A.)
In the (νL, νcR, X)^T basis, and after the electroweak symmetry breaking, the (symmetric) 9 × 9 neutrino mass matrix $M = \begin{pmatrix} 0 & m_D & 0 \\ m_D^T & 0 & M_R \\ 0 & M_R^T & \mu_X \end{pmatrix}$ is obtained, with mD = Yν v the Dirac mass term, v being the vacuum expectation value of the SM Higgs boson. Under the assumption that µX ≪ mD ≪ MR, the diagonalization of M leads to an effective Majorana mass matrix for the active (light) neutrinos [7], $m_\nu \simeq m_D (M_R^T)^{-1} \mu_X M_R^{-1} m_D^T$. The remaining six (mostly) sterile states form nearly degenerate pseudo-Dirac pairs. In our analysis, and for both hierarchies of the light neutrino spectrum, we scan over the following range for the sterile neutrino mass: 10^−9 GeV ≲ m4 ≲ 10^6 GeV, while the active-sterile mixing angles are randomly varied in the interval [0, 2π]. All CP phases are also taken into account. Constraints on sterile neutrino extensions of the SM The introduction of sterile fermion states, which have a non-vanishing mixing to the active neutrinos, leads to a modification of the leptonic charged current Lagrangian, $-\mathcal{L}_{\rm cc} = \tfrac{g}{\sqrt{2}}\, U_{ji}\, \bar{\ell}_j \gamma^\mu P_L \nu_i\, W^-_\mu + \mathrm{h.c.}$, where U is the leptonic mixing matrix, i = 1, ..., nν denotes the physical neutrino states and j = 1, ..., 3 the flavour of the charged leptons. In the standard case of three neutrino generations, U corresponds to the unitary matrix U_PMNS. For nν > 3, the mixing between the left-handed leptons, which we denote by Ũ_PMNS, corresponds to a 3 × 3 sub-block of U, which can show some deviations from unitarity. One can parametrise [8] the Ũ_PMNS mixing matrix as U_PMNS → Ũ_PMNS = (1 − η) U_PMNS, where the matrix η encodes the deviation of Ũ_PMNS from unitarity [9,10], due to the presence of the extra neutral fermion states. One can also introduce the invariant quantity η̃, defined as η̃ = 1 − |Det(Ũ_PMNS)|, particularly useful to illustrate the effect of the new active-sterile mixings (corresponding to a deviation from unitarity of Ũ_PMNS) on several observables. The non-unitarity of Ũ_PMNS will induce a departure from the SM expected values of several observables. In turn, this is translated into a vast array of constraints which we will apply in our analysis (see details and references in [4]). We require compatibility with: the ν-oscillation data best-fit intervals [11]; unitarity bounds on the (non-unitary) matrix η [12]; electroweak precision observables; LHC data on invisible Higgs decays; laboratory searches for monochromatic lines in the spectrum of muons from π± → µ±ν decays; searches for neutrinoless double beta decay; and leptonic and semileptonic decays of the pseudoscalar mesons K, D, Ds, B. Other than the rare decays occurring in the presence of nuclei, the new states can contribute to several charged lepton flavour violating processes such as ℓ → ℓ′γ and ℓ → ℓ1ℓ1ℓ2. We compute the contribution of the sterile states to all these observables, imposing compatibility with the current experimental bounds. Finally, cosmological observations [2] put severe constraints on sterile neutrinos with a mass below the GeV.
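To make the ISS structure above concrete, here is a minimal numerical sketch (not from the paper): it builds the 9 × 9 mass matrix for illustrative, randomly chosen mD, MR and µX textures respecting µX ≪ mD ≪ MR, diagonalizes it, and exhibits the three seesaw-suppressed light states alongside the heavy pseudo-Dirac pairs. The scales are arbitrary examples, not the scan ranges used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scales (GeV) respecting the ISS hierarchy mu_X << m_D << M_R.
mD = 1e1 * rng.random((3, 3))                     # Dirac block m_D = Y_nu * v
MR = 1e3 * (np.eye(3) + 0.1 * rng.random((3, 3)))
muX = 1e-7 * rng.random((3, 3))
muX = 0.5 * (muX + muX.T)                         # Majorana block must be symmetric

Z = np.zeros((3, 3))
# 9x9 symmetric mass matrix in the (nu_L, nu_R^c, X) basis
M = np.block([[Z,    mD,   Z],
              [mD.T, Z,    MR],
              [Z,    MR.T, muX]])

# For this real symmetric illustration, |eigenvalues| give the physical masses
# (complex textures would require a Takagi factorization instead).
masses = np.sort(np.abs(np.linalg.eigvalsh(M)))
print("light states (seesaw-suppressed):", masses[:3])
print("heavy, nearly degenerate pseudo-Dirac pairs:", masses[3:])

# Compare with the seesaw approximation quoted in the text
m_light = mD @ np.linalg.inv(MR.T) @ muX @ np.linalg.inv(MR) @ mD.T
print("seesaw-formula eigenvalues:", np.sort(np.abs(np.linalg.eigvalsh(m_light))))
```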
Together with the charged-current LFV couplings, these interactions will induce an effective cLFV vertex Z ℓ1∓ ℓ2±. Decay of muonic atoms to e− e− pairs A new cLFV process was recently proposed in [16]. It consists in the flavour violating decay of a bound µ− in a muonic atom into a pair of electrons, and has been identified as potentially complementary to other cLFV muon decays: µ− e− → e− e−. In the above transition, the initial states are a µ− and a 1s atomic e−, bound in the Coulomb field of a nucleus [16]. Although the underlying sources of flavour violation giving rise to this observable are the same as those responsible for other non-radiative µ−e transitions (such as µ → eee), the µ− e− → e− e− decay in a muonic atom offers significant advantages. For instance, the rate of this process can be enhanced due to the Coulomb attraction from the nucleus, which increases the overlap of the 1s electron and muon wavefunctions. The muonic atom decay rate is thus enhanced by a factor ∼ (Z − 1)^3, which can become important for nuclei with large atomic numbers. The µ− e− → e− e− process could be investigated by the COMET collaboration [17] (possibly being part of its Phase II programme). Results cLFV Z decays The prospects for the observation of cLFV Z decays in the ISS are summarised in the left plot of Fig. 1 by considering the values of BR(Z → ℓ1∓ ℓ2±) in the (η, m4−9) parameter space of this specific realisation, where m4−9 is the average of the absolute masses of the mostly sterile states, $m_{4-9} = \tfrac{1}{6}\sum_{i=4}^{9} |m_i|$. We identify as grey points the solutions which fail to comply with (at least) one of the constraints listed in Section 3. These results indicate that this ISS realisation can account for sizeable values of cLFV Z-decay branching ratios, within the reach of FCC-ee (whose expected sensitivity is O(10^−13)). As to the minimal extension of the SM by one sterile neutrino, the "3+1 model" can also account for values of BR(Z → ℓ1∓ ℓ2±) within the sensitivity of a high-luminosity Z-factory, such as the FCC-ee. Larger cLFV Z decay branching fractions (as large as O(10^−6)) cannot be reconciled with current bounds on low-energy cLFV processes. Indeed, sterile neutrinos also contribute via Z penguin diagrams to cLFV 3-body decays and µ−e conversion in nuclei, which severely constrain the flavour violating Z ℓ1∓ ℓ2± vertex (see also [15]). Moreover, the recent MEG result on µ → eγ also excludes important regions of the parameter space. These constraints are especially manifest in the case of Z → eµ decays, since the severe limits from BR(µ → 3e) and CR(µ−e, Au) typically preclude BR(Z → eµ) ≳ 10^−13. In the right plot of Fig. 1 we illustrate the complementary rôle of a high-luminosity Z-factory with respect to low-energy (high-intensity) cLFV dedicated experiments. We display the sterile neutrino contributions to BR(Z → µτ) versus BR(τ → µγ). We depict in red the points that survive all other bounds but are typically disfavoured by standard cosmology arguments. Finally, blue points are in agreement with all imposed constraints. We further highlight in dark yellow solutions which allow for a third complementary observable within future sensitivity, which is the effective neutrino mass in 0ν2β decays. [Fig. 1 caption: (Left) BR(Z → ℓ1∓ ℓ2±) in the ISS, on the (η, m4−9) parameter space, for a NH light neutrino spectrum, from larger (dark blue) to smaller (orange) values; cyan denotes values of the branching fractions below 10^−18; the FCC-ee expected sensitivity is O(10^−13). (Right) BR(Z → µτ) vs BR(τ → µγ) in the "3+1 model"; the green vertical lines denote the current bounds (solid) and future sensitivity (dashed), and dark-yellow points denote an associated |mee| within experimental reach.] Decay of muonic atoms to e− e− pairs As emphasised in the discussion of Section 4, the rate of the process can be significantly enhanced in large-Z atoms (in particular the contributions from contact interactions). We thus compare the prospects for two different nuclei; this is illustrated in Fig. 2 for Aluminium (Z = 13, dark blue) and Uranium (Z = 92, cyan). Grey points correspond to the violation of at least one experimental bound: the most stringent constraints arise, as expected, from µ → eee (and also from CR(µ−e, Au)). The Coulomb enhancement is clearly visible: should this process be included in COMET's physics programme, the cLFV muonic atom decay should be within reach of COMET's Phase II, even for light nuclei such as Aluminium (in the regime m4 ≳ 200 GeV); for heavier atoms, such as Uranium, branching ratios above 10^−15 render this process experimentally accessible (a similar situation occurs for Lead nuclei, albeit suppressed by a factor ∼ 7/9 when compared to Uranium). Conclusions We have considered two extensions of the SM which add to its particle content one or more sterile neutrinos. We have explored indirect searches for these sterile states at a future circular collider like FCC-ee, running close to the Z mass threshold. We have considered the contribution of the sterile states to rare cLFV Z decays in these two classes of models and discussed them taking into account a number of experimental and theoretical constraints. Among these, low-energy LFV observables like cLFV 3-body decays and µ−e conversion in nuclei impose strong constraints on the sterile neutrino induced BR(Z → ℓ1∓ ℓ2±). Our analysis emphasises the underlying synergy between a high-luminosity Z factory and dedicated low-energy facilities: regions of the parameter space of both models can be probed via LFV Z decays at FCC-ee, at low-energy cLFV dedicated facilities, and also via searches for 0ν2β. Notably, FCC-ee could better probe LFV in the µ−τ sector, in complementarity with the reach of low-energy experiments like COMET. We have further investigated the impact of sterile fermions on cLFV observables which occur in the presence of "muonic atoms", such as the (Coulomb enhanced) decay of muonic atoms into e− e− pairs. The experimental relevance of this observable is manifest even for the simple "3+1" toy model: sterile neutrinos with masses m4 ≳ 800 GeV lead to BR(µ− e− → e− e−, Al) within the reach of COMET, and the contributions would be further enhanced for heavier atoms, such as Lead or Uranium, thus improving the experimental potential.
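As a quick back-of-the-envelope check of the (Z − 1)^3 Coulomb enhancement quoted above (an illustration, not a computation from the paper):

```python
# Relative Coulomb enhancement factor (Z - 1)^3 of the muonic-atom decay rate,
# normalized to Aluminium, for the nuclei discussed in the text.
nuclei = {"Al": 13, "Pb": 82, "U": 92}

ref = (nuclei["Al"] - 1) ** 3
for name, Z in nuclei.items():
    factor = (Z - 1) ** 3
    print(f"{name}: (Z-1)^3 = {factor:,} ({factor / ref:,.0f}x Al)")

# (81/91)^3 ~ 0.71, in the same ballpark as the ~7/9 ~ 0.78 Pb/U suppression
# quoted in the text (which may fold in effects beyond the bare scaling).
print("Pb/U ratio:", (nuclei["Pb"] - 1) ** 3 / (nuclei["U"] - 1) ** 3)
```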
2022-06-28T02:09:38.369Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "e05998545b315556c6ad7027d64188a97b70358f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/718/6/062013", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e05998545b315556c6ad7027d64188a97b70358f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237374474
pes2o/s2orc
v3-fos-license
Faster decay of neutralizing antibodies in never infected than previously infected healthcare workers three months after the second BNT162b2 mRNA COVID-19 vaccine dose Objectives This study aimed to describe the longitudinal evolution of neutralizing antibody titres (NtAb) in three different cohorts of healthcare workers (HCWs), including vaccinated HCWs with and without a previous SARS-CoV-2 infection and previously infected unvaccinated HCWs. COVID-19 was mild or asymptomatic in those experiencing infection. Methods NtAb were tested before BNT162b2 mRNA COVID-19 vaccination (V0), 20±2 days after the first dose (V1_20), and 20±3 days (V2_20) and 90±2 days (V2_90) after the second dose in vaccinated HCWs, and after about 2 months (N_60), 10 months (N_300) and 13 months (N_390) from natural infection in unvaccinated HCWs. NtAb were measured by authentic virus neutralization with a SARS-CoV-2 B.1 isolate circulating in Italy at HCW enrolment. Results Sixty-two HCWs were enrolled. NtAb were comparable in infected HCWs with no or mild disease at all the study points. NtAb of uninfected HCWs were significantly lower with respect to those of previously infected HCWs at V1_20, V2_20 and V2_90. The median NtAb fold decrease from V2_20 to V2_90 was higher in the uninfected HCWs with respect to those with mild infection (6.26 vs 2.58, p=0.03) and to asymptomatic HCWs (6.26 vs 3.67, p=0.022). The median NtAb at N_390 was significantly lower than at N_60 (p=0.007). Conclusions In uninfected HCWs completing the two-dose vaccine schedule, a third mRNA vaccine dose is a reasonable option to counteract the substantial NtAb decline occurring at a significantly higher rate compared with previously infected, vaccinated HCWs. Although low, NtAb were still at a detectable level after 13 months in two-thirds of previously infected and unvaccinated HCWs. Introduction The BNT162b2 COVID-19 vaccine is known to induce a rapid production of neutralizing antibodies (Vicenti et al., 2021b); however, there are very limited data on their long-term kinetics. Favresse et al. (2021b) described a robust humoral response 90 days after the first dose of vaccine both in previously seropositive and seronegative subjects, but a significant antibody decrease with respect to the higher level reached occurred within this period. Interestingly, the administration of a third dose of the BNT162b2 vaccine, about two months from the second dose, to solid-organ transplant recipients significantly improved the immunogenicity of the vaccine (Kamar et al., 2021). Taken together, these data suggest that a three-month period following the first dose of vaccine may be crucial to identify subjects requiring dedicated vaccine schedules. The main aim of this study was to analyse early (about 3 weeks) and late (about 12 weeks) changes in neutralizing antibody titres (NtAb) after the second BNT162b2 SARS-CoV-2 vaccination dose in healthcare workers (HCWs) with a previous mild or asymptomatic infection with respect to uninfected HCWs. Moreover, it describes long-term humoral responses of previously infected HCWs who were unvaccinated. https://doi.org/10.1016/j.ijid.2021.08.052 1201-9712/© 2021 The Author(s). Published by Elsevier Ltd on behalf of International Society for Infectious Diseases. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Materials and Methods The study population included HCWs who were vaccinated following asymptomatic or mild infection according to the WHO classification (World Health Organization, 2020), a control group of vaccinated HCWs who had never tested positive in any of the tests performed for hospital surveillance, and a small cohort of previously infected, unvaccinated HCWs. SARS-CoV-2 infection was diagnosed in the Veneto area in the period 01 March-30 May 2020, and participants had a negative molecular test since the end of the quarantine prescribed by the hospital surveillance program. All of the vaccinated participants received the BNT162b2 mRNA vaccine, and the second dose was administered three weeks after the first dose. In vaccinated participants, NtAb were tested at four time points: V0 (before receiving the first dose), V1_20 (20±2 days after the first dose), V2_20 (20±3 days after the second dose) and V2_90 (90±2 days after the second dose). In the unvaccinated participants who acquired natural SARS-CoV-2 infection, the study points that were analysed included N_60 (baseline, after about 2 months from diagnosis), N_300 (after a median of 291 days) and N_390 (after a median of 394 days). The study was approved by the local Ethics Committee and all participants gave written informed consent. For SARS-CoV-2 virus neutralization, two-fold serial dilutions (starting at 1:10 dilution) of heat-inactivated sera were incubated with 100 TCID50 of SARS-CoV-2 virus (lineage B.1) at 37°C, 5% CO2 for 1 h in 96-well plates. At the end of incubation, 10,000 pre-seeded Vero E6 cells per well (ATCC catalog no. CRL-1586) were treated with serum-virus mixtures and incubated at 37°C, 5% CO2. Each run included an uninfected control, a virus back titration to confirm the virus inoculum, and a known SARS-CoV-2 neutralizing serum yielding a median (interquartile range [IQR]) titre of 69 (59.3-69.9) in five independent runs. After 72 hours, cell viability was determined with the commercial kit CellTiter-Glo 2.0 (Promega, Wisconsin, USA) following the manufacturer's instructions. The serum neutralization titre (ID50) was defined as the reciprocal value of the sample dilution that showed a 50% protection from the virus cytopathic effect. Sera with ID50 titres ≥10 were defined as SARS-CoV-2 positive and neutralizing; sera with ID50 <10 were defined as negative and scored as 5 for statistical analysis (Vicenti et al., 2021a). Continuous variables were expressed as the median (IQR), whereas categorical variables were indicated as absolute number and frequency. The Mann-Whitney U test, Wilcoxon signed rank sum test, Chi-squared test and Fisher's exact test were applied as appropriate. Statistical analyses were performed using MedCalc® Statistical Software version 20.009 (MedCalc Software Ltd, Ostend, Belgium; https://www.medcalc.org; 2021) and the limit of significance for all analyses was established at p < 0.05. Results Sixty-two HCWs were enrolled. A complete set of data was available for 23 previously infected and vaccinated HCWs (14 with asymptomatic infection and nine with mild symptoms), 13 uninfected, vaccinated HCWs, and nine previously infected unvaccinated HCWs. A description of the study population, including vaccinated HCWs, is reported in Table 1 and Figure 1 and Figure 2. NtAb values were comparable in HCWs with asymptomatic infection and with mild disease at all four study points.
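A minimal sketch of how an ID50 can be read off a two-fold dilution series like the one described above (a simple threshold reading under the stated scoring convention; the laboratory's exact readout rules beyond the definition are not given in the text, and the protection fractions below are made up):

```python
def id50(dilutions, protected):
    """Return the reciprocal of the highest dilution with >= 50% protection
    from the cytopathic effect; 5 encodes a negative serum (ID50 < 10),
    following the scoring convention described in the text."""
    # dilutions: reciprocal values, e.g. [10, 20, 40, ...]; protected: fractions 0-1
    passing = [d for d, p in zip(dilutions, protected) if p >= 0.5]
    return max(passing) if passing else 5

# Two-fold series starting at 1:10, with illustrative protection fractions
dils = [10, 20, 40, 80, 160, 320]
frac = [1.0, 0.95, 0.80, 0.55, 0.30, 0.10]
print(id50(dils, frac))  # -> 80
```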
NtAb values of the uninfected HCWs were significantly lower with respect to those of the previously infected HCWs at V1_20, V2_20 and V2_90. In the uninfected vaccinated group, seven of 13 HCWs had undetectable NtAb at V1_20, and all but one had detectable NtAb at V2_90. The median NtAb fold decrease from V2_20 to V2_90 was higher in uninfected HCWs with respect to those with mild infection (6.26 vs 2.58, p = 0.03) and to asymptomatic HCWs (6.26 vs 3.67, p = 0.022). A similar pattern in the longitudinal evolution of NtAb was observed when the whole cohort of 40 previously infected participants (27 asymptomatic and 13 with mild disease), including 17 without an intermediate time point, was considered: seven (17.5%) had no detectable NtAb at V0, and the fold change between V2_20 and V2_90 was 3.67. A detailed description is given in the Supplementary Table.

Discussion

Real-world data have demonstrated that SARS-CoV-2 vaccines prevent infection in more than 90% of cases ( Butt et al., 2021 ; Dagan et al., 2021 ). Accordingly, a 74% reduction in the proportion of cases and an 81% reduction in the proportion of symptomatic cases have been reported in HCWs, based on the Italian government's open data directory ( Mateo-Urdiales et al., 2021 ). Neutralizing antibody responses are a correlate of protection, and their measurement through live virus assays is the reference method to investigate the magnitude and duration of immunity following vaccination or natural infection.

This study evaluated the long-term neutralizing response to SARS-CoV-2 in different HCW groups representative of three possible scenarios: (i) mild or asymptomatic infection followed by two-dose vaccination about 10 months after diagnosis; (ii) absence of infection, as confirmed by regular monitoring, followed by two-dose vaccination; and (iii) past mild or asymptomatic infection not followed by vaccination for a very long time period. This last condition is no longer permitted in Italy, since previously infected HCWs must now be vaccinated, and it therefore constitutes a unicum.

In agreement with published data ( Vicenti et al., 2021b ), previously infected HCWs vaccinated with BNT162b2 demonstrated a strong humoral response after the first dose, with a modest further increase 20 days after the second dose.

Table 1. Description of the main characteristics and neutralizing antibody titres at V0, V1_20, V2_20 and V2_90 in the overall study population and in HCWs with asymptomatic infection or mild disease. Data shown as median and range; p refers to the differences between HCWs with asymptomatic or mild disease and to the differences between the control group and HCWs with asymptomatic or mild disease; bold: significant p values; na: not applicable; HCWs: healthcare workers; ID50: reciprocal value of the sample dilution that showed 50% protection from the virus cytopathic effect; V0: before receiving the first dose; V1_20: 20 ± 2 days after the first dose; V2_20: 20 ± 3 days after the second dose; V2_90: 90 ± 2 days after the second dose.

Figure 1. Neutralizing antibody titres in previously infected (n = 23) and uninfected healthcare workers (n = 13) tested at V0, V1_20, V2_20 and V2_90. Data are reported as the longitudinal course and as individual ID50 values at each study time; the same symbols indicate the same HCWs at different time points.

The current data provide additional support for the long-term efficacy of the vaccine booster given a very long time after infection, with a median interval of ten months. The kinetics of NtAb decay in these HCWs compared with those following a single vaccine dose need further study: a significant antibody decline 3 months after the first dose in both seronegative and seropositive individuals who received two doses was recently described ( Favresse et al., 2021b ). However, the current results were obtained 3 months after the second vaccination, in a well characterized population with very long-term humoral memory: the interval between infection and the first vaccine dose (median of 292 days, IQR 267-300) was much longer with respect to that reported by Favresse et al. (2021a) (mean 99 days, range 34-337 days). Of note, five (21.7%) of 23 previously infected HCWs had no detectable NtAb before vaccination; nevertheless, they experienced a strong response at V2_90, comparable to that of HCWs with detectable NtAb at V0, supporting the persistence of long-lasting B-cell memory after natural infection ( Wang et al., 2021 ). Further, the current results support the observation that infected vaccinated HCWs experience a significantly slower NtAb decay than uninfected vaccinated HCWs 3 months after the second dose, indicating the ability of the immune system to mount very high NtAb after repeated antigen stimulation.

Although natural infection and vaccination may lead to different responses, and residual and recall immune responses may confer different degrees of immune protection, the role of humoral immunity in the first phase of viral infection is well established as a major early obstacle to viral spread, counteracting viral replication and evolution as well as immune escape. Thus, measuring circulating antibody levels, their durability, specificity, and recall kinetics is crucial for understanding and predicting the durability of protection. As recently reported, among vaccinated HCWs, breakthrough infections were correlated with the NtAb detected within the week immediately before ( Bergwerk et al., 2021 ). The advantage of high NtAb against infection by viral variants has also been reported.

Table 2. Description of neutralizing antibody titres in previously infected and uninfected vaccinated HCWs (tested at V0, V1_20, V2_20 and V2_90) and in unvaccinated HCWs (tested at N_60, N_300 and N_390).

Figure 2. Neutralizing antibody titres in previously infected (n = 23) and uninfected healthcare workers (n = 13) tested at V0, V1_20, V2_20 and V2_90, and in unvaccinated HCWs (n = 9) tested at N_60, N_300 and N_390. Data are reported as individual ID50 values and as the median value at each study time; the same symbols indicate the same HCWs at different time points. N_60: baseline, about 2 months after diagnosis; N_300: after a median of 291 days from diagnosis; N_390: after a median of 394 days from diagnosis.

Based on the data reported here, a third mRNA vaccine dose is reasonable in uninfected HCWs, to further boost immunity and recapitulate the effect observed in previously infected vaccinated HCWs. However, the identification of specific target groups and the definition of the best timing for vaccination need further analysis, with a more prolonged follow-up in a larger cohort.

The current study included a group of previously infected unvaccinated HCWs who were tested thrice after SARS-CoV-2 infection: about 2 months later, after a median of 291 days (defined as corresponding to V0 in vaccinated HCWs), and after a median of 394 days (defined as corresponding to the last time point of vaccinated HCWs). The lack of a study time corresponding to N_60 for vaccinated HCWs is regrettable; however, the main aim of this study was the long-term humoral response, and N_390 and V2_90 clearly correspond. The description of a control group with a protracted follow-up is believed to be a useful contribution to understanding the durability of NtAb in this particular category of subjects: middle-aged, previously healthy, without comorbidities, and with mild or asymptomatic infection. These individuals are now almost all vaccinated.

Strengths of the study were: the availability of authentic virus neutralization with a SARS-CoV-2 isolate; the homogeneity of the study population with previous asymptomatic or mild disease; the long time interval from infection to vaccination; the long follow-up after the second dose of vaccination; and the different HCW cohorts tested. The main limitation was the number of evaluated participants, but the data were reported as soon as they became available.

In conclusion, the data obtained by analysing the response to SARS-CoV-2 infection in this multifaceted population may help with the clinical use of NtAb to inform health policy, that is: to evaluate the decision to administer a third dose of the vaccine and how to monitor unvaccinated personnel.
Isorhamnetin Reduces Glucose Level, Inflammation, and Oxidative Stress in High-Fat Diet/Streptozotocin Diabetic Mice Model

Background: Isorhamnetin is a flavonoid found in medicinal plants. Several studies have shown that isorhamnetin has anti-inflammatory and anti-obesity effects. This study aims to investigate the anti-diabetic effects of isorhamnetin in a high-fat diet/streptozotocin (HFD/STZ)-induced mouse model of type 2 diabetes.

Materials and Methods: Mice were fed with HFD followed by two consecutive low doses of STZ (40 mg/kg). HFD/STZ diabetic mice were treated orally with isorhamnetin (10 mg/kg) or metformin (200 mg/kg) for 10 days before sacrificing the mice and collecting plasma and soleus muscle for further analysis.

Results: Isorhamnetin reduced the elevated levels of serum glucose compared to the vehicle control group (p < 0.001). Isorhamnetin abrogated the increase in serum insulin in the treated diabetic group compared to the vehicle control mice (p < 0.001). The homeostasis model assessment of insulin resistance (HOMA-IR) was decreased in diabetic mice treated with isorhamnetin compared to the vehicle controls. The fasting glucose level was significantly lower in diabetic mice treated with isorhamnetin during the intraperitoneal glucose tolerance test (IPGTT) (p < 0.001). The skeletal muscle protein contents of GLUT4 and p-AMPK-α were upregulated following treatment with isorhamnetin (p < 0.01). LDL, triglyceride, and cholesterol were reduced in diabetic mice treated with isorhamnetin compared to vehicle control (p < 0.001). Isorhamnetin reduced MDA and IL-6 levels (p < 0.001), increased GSH levels (p < 0.001), and reduced GSSG levels (p < 0.05) in diabetic mice compared to vehicle control.

Conclusions: Isorhamnetin ameliorates insulin resistance, oxidative stress, and inflammation, and could represent a promising therapeutic agent to treat T2D.

Introduction

Diabetes is considered one of the most common metabolic disorders worldwide. According to the International Diabetes Federation (IDF), around 537 million adults are living with diabetes, and this number is projected to rise to 643 million by 2030 [1]. Diabetes is characterized by hyperglycemia, affecting carbohydrate, fat, and protein metabolism.

Results

The Hypoglycemic Effect of Isorhamnetin

Serum glucose was significantly higher in the vehicle control diabetic group compared to the non-diabetic group ( Figure 1A, n = 6, p < 0.001). Treatment with isorhamnetin significantly reduced serum glucose levels compared to the vehicle control group in the presence of T2D ( Figure 1A, n = 6, p < 0.001). Similarly, the serum glucose level in the diabetic group was significantly reduced with metformin treatment compared to the vehicle control ( Figure 1A, n = 6, p < 0.001). Unexpectedly, insulin levels were significantly increased in the vehicle control diabetic group compared to the non-diabetic group ( Figure 1B, n = 6, p < 0.001); however, treating diabetic mice with isorhamnetin significantly reduced insulin levels compared to the vehicle control group ( Figure 1B, n = 6, p < 0.001). Treating diabetic mice with metformin also significantly reduced insulin levels compared to the vehicle control group ( Figure 1B, n = 6, p < 0.001). No difference was observed between isorhamnetin and metformin in terms of glucose and insulin.
To study the effect of isorhamnetin on insulin resistance, HOMA-IR was measured. The presence of T2D was confirmed by HOMA-IR, which was significantly increased in the vehicle control diabetic group compared to the non-diabetic group ( Figure 1C, n = 6, p < 0.001). Interestingly, isorhamnetin restored HOMA-IR in the diabetic group to a level comparable to the non-diabetic group. The same effect on HOMA-IR was observed when diabetic mice were treated with metformin, and HOMA-IR was not different between the isorhamnetin and metformin groups. Moreover, the blood glucose level during IPGTT was significantly lower in the isorhamnetin and metformin groups compared to vehicle control ( Figure 2, n = 6, p < 0.001). Furthermore, water and food consumption was higher in the diabetic groups compared to the non-diabetic group ( Figure 3A,B, respectively). No difference was found between the isorhamnetin and metformin groups and the vehicle control group in water and food consumption ( Figure 3A,B, respectively).

To determine the mechanism by which isorhamnetin improves blood glucose and insulin resistance, GLUT4 and p-AMPK-α protein expression in skeletal muscle tissue were measured. GLUT4 protein expression was significantly downregulated in the presence of T2D ( Figure 4A, n = 6, p < 0.05); however, treating diabetic mice with isorhamnetin significantly upregulated GLUT4 expression compared to the vehicle control diabetic group ( Figure 4A, n = 6, p < 0.01). Metformin was also able to upregulate GLUT4 expression compared to the vehicle control diabetic group ( Figure 4A, n = 6, p < 0.001). Similarly, p-AMPK-α expression was significantly downregulated as a result of T2D ( Figure 4B, n = 6, p < 0.001), and treating diabetic mice with either isorhamnetin or metformin significantly upregulated p-AMPK-α expression compared to the vehicle control diabetic group ( Figure 4B, n = 6, p < 0.01 and p < 0.05, respectively). No difference was observed in GLUT4 and p-AMPK-α expression between the isorhamnetin and metformin groups.

Figure 1. Isorhamnetin reduced glucose (A) and insulin (B) levels in diabetic mice; HOMA-IR (C) was significantly reduced with isorhamnetin treatment. Mice were fed with HFD for 8 weeks followed by two low doses of STZ injection (40 mg/kg); after diabetes was confirmed, mice were treated with 10 mg/kg isorhamnetin or 200 mg/kg metformin for 10 days, mice were then sacrificed, and serum was collected for ELISA analysis. One-way ANOVA followed by Tukey post hoc, *** p < 0.001. VC: vehicle control; Iso: isorhamnetin; Met: metformin.

Figure 2. Isorhamnetin reduced the glucose level during the intraperitoneal glucose tolerance test (IPGTT). Isorhamnetin significantly reduced glucose in diabetic mice during the IPGTT. Mice were fed with HFD for 8 weeks followed by two low doses of STZ injection (40 mg/kg); after diabetes was confirmed, mice were treated with 10 mg/kg isorhamnetin or 200 mg/kg metformin for 10 days, then fasted overnight before intraperitoneal injection of 0.5 g/kg glucose, and the glucose level was determined at 0, 30, 60, and 120 min. Two-way ANOVA followed by Tukey post hoc, *** p < 0.001. VC: vehicle control; Iso: isorhamnetin; Met: metformin.

Figure 3. Water and food consumption was higher in the diabetic groups. Water consumption (A) was significantly higher in the diabetic groups compared to the non-diabetic group. Food consumption (B) was higher in the diabetic groups compared to the non-diabetic group. Two-way ANOVA followed by Tukey post hoc. VC: vehicle control; Iso: isorhamnetin; Met: metformin.

Figure 4. Isorhamnetin upregulated GLUT4 (A) and p-AMPK-α (B) expression in skeletal muscle. Mice were fed with HFD for 8 weeks followed by two low doses of STZ injection (40 mg/kg); after diabetes was confirmed, mice were treated with 10 mg/kg isorhamnetin or 200 mg/kg metformin for 10 days, mice were then sacrificed, and the soleus muscle was isolated and homogenized before western blotting was performed. One-way ANOVA followed by Tukey post hoc, * p < 0.05, ** p < 0.01, *** p < 0.001. VC: vehicle control; Iso: isorhamnetin; Met: metformin.
Figure 5. Isorhamnetin reduced serum triglyceride, cholesterol, and LDL (C) levels in diabetic mice. Mice were fed with HFD for 8 weeks followed by two low doses of STZ injection (40 mg/kg); after diabetes was confirmed, mice were treated with 10 mg/kg isorhamnetin or 200 mg/kg metformin for 10 days, mice were then sacrificed, and serum was collected for ELISA analysis. One-way ANOVA followed by Tukey post hoc, *** p < 0.001. VC: vehicle control; Iso: isorhamnetin; Met: metformin.

The Effect of Isorhamnetin on GSH, GSSG, MDA and IL-6

Serum GSH was significantly reduced in vehicle control diabetic mice compared to non-diabetic mice ( Figure 6A, n = 6, p < 0.001); however, treating diabetic mice with either isorhamnetin or metformin produced a significant increase in GSH compared to the vehicle control diabetic group ( Figure 6A, n = 6, p < 0.05 and p < 0.01, respectively). Moreover, serum GSSG was significantly increased in vehicle control diabetic mice compared to non-diabetic mice ( Figure 6B, n = 6, p < 0.05); however, isorhamnetin and metformin were able to reduce GSSG levels in diabetic mice compared to vehicle control ( Figure 6B, n = 6, p < 0.05). On the other hand, serum MDA ( Figure 6C, n = 6, p < 0.001) and IL-6 ( Figure 6D, n = 6, p < 0.001) concentrations were significantly increased in the presence of T2D. Interestingly, isorhamnetin significantly reduced serum MDA and IL-6 levels in diabetic mice in comparison to the vehicle control group ( Figure 6C,D, n = 6, p < 0.001). The same effect was observed when diabetic mice were treated with metformin ( Figure 6C,D, n = 6, p < 0.001).

Figure 6. The anti-inflammatory effect of isorhamnetin. Isorhamnetin significantly increased GSH (A) and reduced GSSG (B), MDA (C), and IL-6 (D) levels in diabetic mice. Mice were fed with HFD for 8 weeks followed by two low doses of STZ injection (40 mg/kg); after diabetes was confirmed, mice were treated with 10 mg/kg isorhamnetin or 200 mg/kg metformin for 10 days, mice were then sacrificed, and serum was collected for ELISA analysis. One-way ANOVA followed by Tukey post hoc, * p < 0.05, *** p < 0.001. VC: vehicle control; Iso: isorhamnetin; Met: metformin.

Discussion

T2D is characterized mainly by high blood glucose associated with insulin resistance [21], which can cause several complications including cardiovascular disease, nephropathy, retinopathy, and neuropathy [22]. In our study, the results demonstrated that isorhamnetin significantly reduced glucose levels, comparably to the gold standard, metformin, suggesting that isorhamnetin represents a promising emerging hypoglycemic agent. Furthermore, the HFD/STZ combination resembles the characteristics of T2D, such as early-stage hyperinsulinemia, hyperglycemia, hyperlipidemia, and β-cell dysfunction [23].

Our results also showed increased systemic insulin in the vehicle control group, which is aligned with several studies reporting that hyperinsulinemia exists because the peripheral tissues lose their insulin sensitivity, ultimately resulting in hyperglycemia; this leads to an increase in insulin secretion in the early stages of T2D as part of the compensatory mechanism that aims to counteract insulin resistance [24][25][26][27]. Interestingly, isorhamnetin was effective at bringing blood insulin levels back to normal, non-diabetic levels; the same result was seen when metformin was administered to diabetic mice. Isorhamnetin enhanced insulin sensitivity in diabetic mice, in tandem with the decrease in serum insulin observed following treatment with isorhamnetin. HOMA-IR was calculated to further understand the effect of isorhamnetin on insulin resistance. Previous studies established that HFD, which is enriched in saturated fatty acids, impairs cellular glucose uptake and induces insulin resistance [28]. Notably, saturated fatty acids enhance lipid accumulation in muscles, thereby inducing insulin resistance [29]. Palmitate, for example, as a saturated fatty acid, promotes the secretion of cytokines such as IL-6 and TNF-α, which can lead to insulin resistance and glucose intolerance [30].
Meanwhile, HFD downregulates the expression of GLUT4, which induces glucose intolerance [31]. It has been reported that activated AMP-activated protein kinase (AMPK) plays a significant role in regulating cellular energy metabolism, and its malfunction is associated with insulin resistance and other metabolic disorders. Metformin alters the AMP/ATP ratio, which activates AMPK through phosphorylation and improves glucose utilization [32]. HFD, however, is associated with a decrease in the phosphorylation of AMPK, thus reducing glucose uptake. Metformin has been shown to enhance the expression of GLUT4 and the activation of AMPK through phosphorylation, thus increasing glucose uptake by cells. The mechanism of action of isorhamnetin in reducing hyperglycemia could be similar to that of metformin, which needs to be investigated further. The mammalian target of rapamycin (mTOR) is a serine/threonine protein kinase with an established role in insulin resistance, and AMPK directly phosphorylates Raptor, a component of mTORC1, to repress mTORC1 [33]. A recent study showed that isorhamnetin decreased the expression of mTOR [25], which might be another mechanism by which isorhamnetin improves insulin sensitivity; this needs to be studied in such a model of diabetes.

In order to investigate the hypoglycemic mode of action of isorhamnetin, p-AMPK and GLUT4 levels in skeletal muscle were measured. About 45-50% of the body's mass is made up of skeletal muscle, which also transports 80% of the body's glucose [34]. In skeletal muscles, AMPK regulates the transcription of the GLUT4 gene. GLUT4 is a critical glucose transporter, transporting extracellular glucose into insulin-sensitive cells to maintain blood glucose homeostasis [35]. Moreover, reduced GLUT4 skeletal muscle protein expression and inhibition of GLUT4 translocation result in insufficient glucose transport and hence insulin resistance. Activation of the AMPK-GLUT4 pathway enhances insulin sensitivity, which improves glucose control in T2D [36]. In addition, the role of AMPK in the prevention of T2D has previously been investigated in combination with the regulation of insulin signaling and GLUT4 activity [37]. Our results demonstrate that isorhamnetin is capable of increasing the expression of both p-AMPK and GLUT4 in skeletal muscle, suggesting that isorhamnetin can improve glucose uptake through the AMPK-GLUT4 pathway.

On the other hand, in addition to hyperglycemia, T2D is associated with dyslipidemia. High postprandial TGs, total cholesterol, and LDL define diabetic dyslipidemia [38]. These lipid alterations are key factors leading to T2D-associated complications [39]. In particular, dyslipidemia is a significant risk factor for macrovascular diabetes complications, and numerous studies have linked dyslipidemia to microvascular complications associated with T2D, such as diabetic retinopathy, diabetic nephropathy, and diabetic neuropathy [38]. Our findings revealed that isorhamnetin reduced TGs, total cholesterol, and LDL in our T2D model, suggesting that isorhamnetin may reduce the cardiovascular disease risk associated with T2D; this should be explored in future studies. Many clinical and experimental studies show that there is a strong link between oxidative stress and the development of T2D and its complications [40].
Oxidative stress is defined as an imbalance between oxidants and antioxidants, due to the production of reactive oxygen species (ROS) and a reduction in antioxidant defense mechanisms, including GSH (a non-enzymatic antioxidant) [41]. ROS can damage lipids, causing lipid peroxidation, such as peroxidation of low-density lipoprotein (oxLDL) or of polyunsaturated fatty acids (oxPUFAs). Additionally, ROS induce the release of MDA, a highly reactive compound that interacts with proteins and nucleic acids and causes damage to various tissues and cells [42]. MDA has been used as a biomarker of lipid peroxidation and as an indication of free radical damage in the blood [43]. Our findings show that isorhamnetin considerably reduces plasma MDA levels and increases GSH levels, implying that isorhamnetin could be effective as an antioxidant agent in T2D by reducing lipid peroxidation or increasing free radical scavenging activity.

IL-6 has complex and often conflicting activities. It promotes an anti-inflammatory (M2-like) state in macrophages. Consistent with these observations, others have reported that IL-6 functions to limit atheroma formation and that it is secreted in response to physical exercise, mediating its insulin-sensitizing actions [44,45]. On the other hand, IL-6 also acts as a pro-inflammatory cytokine involved in the acute phase reaction to tissue injury. It has a contributory role in a number of inflammatory and autoimmune diseases, and its secretion by the adipose tissues of obese organisms contributes to metabolic dysfunction, including insulin resistance, and promotes atherosclerosis [46,47]. Subclinical chronic inflammation has been implicated as an independent risk factor for the development and progression of T2D and its complications [8]. In particular, the multifunctional cytokine interleukin 6 (IL-6) has been linked to the pathogenesis of T2D. Increased levels of systemic IL-6 are a strong predictor of T2D and are thought to play a role in the development of inflammation, insulin resistance, and β-cell dysfunction [48]. In addition, mounting data show that IL-6 impairs insulin signaling in hepatocytes and inhibits glucose-stimulated insulin release from pancreatic β-cells [48]. Moreover, many studies suggest that anti-inflammatory activity plays a key role in preventing the development of T2D and reducing the incidence of diabetes complications [48].

As stated in the introduction, inflammation and oxidative stress play a key role in the development of T2D and its complications; therefore, reducing inflammation and oxidative stress will improve outcomes in T2D. A large body of evidence has shown that activation of AMPK reduces inflammation and oxidative stress via different mechanisms, which has a protective effect in diabetes [49][50][51]. Our findings showed that isorhamnetin has anti-inflammatory and antioxidant activities in the HFD/STZ-induced diabetes model, in line with previous reports [13,52,53], suggesting that AMPK activation by isorhamnetin could reduce inflammation and oxidative stress in this model.

In summary, isorhamnetin has the ability to reduce serum glucose levels and normalize insulin in the diabetic group compared to the vehicle control group. Additionally, the fasting glucose level was lower in the isorhamnetin group during the IPGTT. In addition, isorhamnetin reduced the HOMA-IR value in diabetic mice compared to vehicle control.
These effects could be explained by the upregulation of p-AMPK-α levels, which leads to an increase in the translocation of GLUT4 to the cell surface, promoting glucose uptake in the skeletal muscles and thus improving insulin sensitivity. Furthermore, isorhamnetin reduced LDL, cholesterol, and triglyceride levels in diabetic mice compared to vehicle control, which might occur through the activation of AMPK, which has been shown to promote glucose and fatty acid catabolism and to prevent protein and fatty acid synthesis [54]. The reduction in LDL, cholesterol, and triglycerides could be a key contributor to reducing insulin resistance, which might explain the improvement of insulin sensitivity observed in our study [55,56]. Moreover, isorhamnetin reduced MDA, GSSG, and IL-6 levels and increased GSH levels, suggesting that isorhamnetin could have antioxidant and anti-inflammatory activities in T2D. These findings are in line with previous studies demonstrating that isorhamnetin reduced MDA levels and increased intracellular GSH levels by enhancing the activity of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT) in an STZ-induced type 1 diabetes rat model [57]. Another study also reported that isorhamnetin inhibited the NF-κB signaling pathway, which led to reduced levels of inflammatory mediators such as IL-6, ICAM-1, and TNF-α [52,58]. Taken together, the antioxidant and anti-inflammatory activities of isorhamnetin could act through the activation of antioxidant enzymes and the inhibition of the NF-κB signaling pathway, which needs to be studied further in the future.

Limitations of this study include the following aspects: (i) isorhamnetin was administered for a short period of time; (ii) GLUT4 expression was assessed using immunoblotting, reflecting its total amount, whereas immunohistochemistry may be a better technique to assess its activity and translocation to the cell membrane; (iii) we only measured GLUT4 expression in the skeletal muscles, and this should also be performed in liver and adipose tissue; and (iv) the anti-inflammatory and antioxidant activities of isorhamnetin should be studied further in T2D. Nevertheless, our findings indicate the crucial role of isorhamnetin in improving typical features of T2D, which is the first report to date.

Induction of T2D and Experimental Design

Six-week-old male C57BL/6 mice were maintained under standard conditions, including 12 h light/dark cycles and a temperature of 22 ± 2 °C [59]. T2D was induced by feeding the experimental mice HFD (60% fat, D14292, Research Diets, Inc., New Brunswick, NJ, USA) for 9 weeks, followed by an intraperitoneal injection of STZ (40 mg/kg). At week 10, mice were administered another intraperitoneal low dose of STZ (40 mg/kg) to complete the induction of diabetes. The HFD/STZ-induced diabetes model is a well-established model of diabetes in which HFD feeding leads to obesity, hyperinsulinemia, and altered glucose homeostasis due to insufficient compensation by the beta cells of the pancreatic islets. A single high dose of STZ causes sudden and significant destruction of pancreatic cells; however, progressive multiple low doses of STZ after HFD, as in the model of this study, cause less destruction of pancreatic cells, which portrays the same characteristics and mimics the pathogenesis and clinical features of T2D in humans [26,60].
One week after STZ injection, plasma glucose was measured, and mice with a plasma glucose concentration of >200 mg/dL were considered to have developed T2D and were selected for the subsequent experiments. Mice were randomly divided into four groups (n = 6 each) as follows: (i) the normal control group (non-diabetic, ND) received a normal diet; (ii) the vehicle control (VC) diabetic group was treated with dimethyl sulfoxide (DMSO, Panreac Quimica SA, Barcelona, Spain) only; (iii) a diabetic group was treated with 10 mg/kg isorhamnetin (Sigma-Aldrich, Hamburg, Germany); and (iv) a diabetic group was treated with 200 mg/kg metformin (Merck, Frankfurt, Germany). The dose of 10 mg/kg isorhamnetin was chosen based on a previous study of the effect of isorhamnetin on GLUT4 levels in an HFD-induced obesity mouse model [61]. In that study, isorhamnetin was administered at three different concentrations (10, 100, 1000 mg/kg) for 90 min and significantly upregulated GLUT4 at doses of 10 and 100 mg/kg. Therefore, in our study, the lowest dose (10 mg/kg) was chosen to avoid the toxic effects that might occur with a longer period of treatment. All treatments were given orally once per day. After 10 days of treatment, mice were fasted overnight (16 h) and then sacrificed using a CO2 chamber, and blood and skeletal muscle (soleus muscle) were collected for ex vivo analysis. Sacrificing the mice after 10 days of treatment with isorhamnetin was based on previous studies in which isorhamnetin treatment was given for 10 days in a T1D STZ-induced model and in models of other diseases [62][63][64].

Measurement of Serum Glucose, Insulin, and Lipids

Serum glucose was determined using a commercial kit (glucose assay kit, MyBioSource, San Diego, CA, USA). Serum insulin was measured by ELISA using a commercial kit (mouse insulin ELISA kit, MyBioSource, USA). Triglyceride (TG, triglyceride assay kit), low-density lipoprotein (LDL, LDL assay kit), and cholesterol (total cholesterol assay kit) were determined using commercially available kits (MyBioSource, USA) according to the manufacturer's instructions.

Homeostasis Model Assessment of Insulin Resistance (HOMA-IR)

This model represents the interaction between fasting plasma insulin and fasting plasma glucose, and is a useful tool for determining insulin resistance. According to the International Diabetes Federation, the HOMA-IR cut-off level is less than 1 in healthy individuals, 1.55 in men with diabetes, and 2.22 in women with diabetes [65]. Studies have shown that normal HOMA-IR is 1.9 in healthy mice and 21.6 in STZ mice [66]. In the current study, we used the following formula to compute HOMA-IR:

HOMA-IR = Fasting glucose (mg/dL) × Fasting insulin (µIU/mL) / 405 [67]

The constant 405 is a normalizing factor representing the product of the normal fasting plasma insulin level (5 µIU/mL) and the normal fasting plasma glucose level (81 mg/dL) [68].
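As a worked example of the HOMA-IR formula above, the short sketch below encodes the calculation; the function name and the numeric inputs are illustrative assumptions, not values measured in this study.

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uIU_ml: float) -> float:
    """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (uIU/mL) / 405.

    The constant 405 normalizes the product so that a reference healthy
    subject (81 mg/dL glucose x 5 uIU/mL insulin) scores exactly 1.0.
    """
    return fasting_glucose_mg_dl * fasting_insulin_uIU_ml / 405.0

print(homa_ir(81, 5))    # reference healthy values -> 1.0
print(homa_ir(220, 18))  # illustrative diabetic-range values -> ~9.78
```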
Intraperitoneal Glucose Tolerance Test

Mice were given an intraperitoneal injection of glucose (0.5 g/kg) after being fasted for 18 h. Blood glucose levels were measured from the tail vein at 0, 30, 60, and 120 min using a glucometer (Accu-Chek Performa, Roche Diagnostics, Basel, Switzerland).

Reduced glutathione (GSH, GSH assay kit), oxidized glutathione (GSSG, GSSG assay kit), and IL-6 (IL-6 ELISA kit) levels were measured in the serum using commercially available kits (MyBioSource, USA). The plasma MDA level was determined using a commercial thiobarbituric acid (TBA) assay kit (MyBioSource, USA) according to the manufacturer's instructions.

Western Blotting

Skeletal muscle tissues (soleus muscle) were homogenized in radioimmunoprecipitation (RIPA) lysis buffer containing a protease inhibitor cocktail (Santa Cruz Biotechnology, Dallas, TX, USA) using a tissue homogenizer. Homogenates were centrifuged at 12,000× g for 20 min at 4 °C, and the supernatant was collected. Total protein was quantified using a bicinchoninic acid assay kit (Bioquochem, Asturias, Spain). Equal amounts of protein were separated on a sodium dodecyl sulfate-polyacrylamide gel and then transferred to a nitrocellulose membrane (Thermo Fisher Scientific, Waltham, MA, USA). The membrane was blocked for 1 h at room temperature using 3% bovine serum albumin (BSA) before incubating overnight with either phosphorylated AMPK-α1 (p-AMPK-α1, Abcam, Cambridge, UK) or GLUT4 (MyBioSource, USA) primary antibodies (1:1000 dilution). The membrane was washed three times with washing buffer (Tween-20/Tris-buffered saline) before incubation with a goat anti-rabbit secondary antibody (MyBioSource, USA, 1:5000 dilution) for 1 h at room temperature. Following incubation, the membrane was washed three times before being submerged in ECL substrate (Thermo Scientific, USA) for one minute, followed by imaging with a chemiLITE Chemiluminescence Imaging System (Cleaver Scientific, Rugby, UK). To ensure equal protein gel loading, β-actin was used as a loading control (MyBioSource, USA, 1:10,000 dilution). The intensity of the bands was measured using ImageJ software and normalized to β-actin.

Statistical Analysis

All analyzed parameters were tested for normality using the Kolmogorov-Smirnov test. Data are represented as mean ± SEM. Differences between groups were calculated using one-way analysis of variance (ANOVA) followed by Tukey post hoc tests, using GraphPad Prism software (version 9.3.1). Differences were considered significant at p < 0.05.

Conclusions

In conclusion, our results demonstrated that isorhamnetin could be a very useful hypoglycemic agent for the treatment of T2D due to its multifactorial effects, including (i) the reduction in insulin resistance, (ii) an increase in glucose uptake by the skeletal muscle, (iii) improvement in the lipid profile, (iv) reduction in oxidative stress and inflammation, and (v) the activation of the GLUT4-AMPK pathway. The effects and mechanisms demonstrated by isorhamnetin were very similar to those of metformin.
The Effect of Mineral Admixtures and Fine Aggregates on the Characteristics of High-Strength Fiber-Reinforced Concrete

Introduction: The article discusses the effect of a complex of active mineral additives consisting of silica and fly ash, and of a fine aggregate including finely ground natural white quartz sand as a partial replacement of river sand, on the mechanical properties of high-strength concrete containing steel fiber.

Materials and methods: A high-strength concrete containing Dramix® 3D 65/35 steel fiber in the amount of 100 kg per 1 m³ of concrete mixture was developed, in which 22% to 100% of the river sand was replaced by finely ground white natural sand with a particle size of 5 to 1800 µm. The concrete contained a complex of active mineral additives as a partial replacement of cement within a multicomponent binder, consisting of low-calcium fly ash from thermal power plants and silica, at 20, 30, or 40% fly ash and 5 to 15% silica by weight of the binder.

Results: The research results have shown that 100% replacement of river sand with finely ground natural white sand, in concrete containing 20% by mass of fly ash and from 5 to 15% by weight of silica in the multicomponent binder, increases its strength properties: the compressive strength of the concrete after 28 days was in the range from 118.5 to 128 MPa, and the flexural and splitting tensile strengths ranged, respectively, from 18.8 to 25.4 MPa and from 10.2 to 11.9 MPa, which is higher than the strength of concrete samples containing river sand.

Conclusions: The achieved results demonstrate the efficiency of using finely ground natural white sand as an alternative to river sand for producing high-strength concrete, thus helping to save the river sand resources of Vietnam. The use of fly ash and micro silica, which are power-generation and metallurgy wastes, as part of a multicomponent binder to partially replace cement reduces the carbon footprint of binder production and also benefits environmental protection against industrial waste pollution.

Introduction

High-strength concrete is characterized by high compressive strength, comparable to that of such high-strength natural rocks as basalt [1][2][3]. The use of high-strength concrete in construction can significantly reduce the cross-section of structures, making them lightweight and delicate, as well as reduce the costs of raw materials and of transporting finished products. The unique combination of the properties of such concretes makes it possible to obtain thin profiles and shells, and complex and varied geometric shapes of products and structures, without losing their strength and durability. Over the past two decades, there has been a growing interest in high-strength concretes, and their scope is expanding, from the repair and restoration of reinforced concrete structures and architectural elements to oil and gas industry applications and the construction of various hydraulic and underground structures [4]. Today, the most popular use of high-strength concrete is in the construction of bridges [5][6][7][8].
It is of interest to study the possibility of obtaining high-strength concrete containing steel fiber by using the developed complex of active mineral additives consisting of silica and fly ash, as well as a fine aggregate including finely ground, still little used, natural white quartz sand to partially replace the scarce river sand in Vietnam, and to investigate the mechanical properties of the resulting high-strength concrete. In addition, the use of fly ash and silica, which are energy and metallurgy wastes, as part of a multicomponent binder for the partial replacement of cement will help reduce the carbon footprint of binder production and will also have a beneficial effect on environmental protection from industrial waste pollution.

Pilot Study

High-performance concretes have a high particle packing density, contributing to low porosity, high mechanical strength, and impermeability. Figure 1 shows the compositions of concretes with different strength values. The composition of high-performance concretes, in addition to cement, fine aggregates, and additives, also includes steel fiber and finely dispersed nanofillers, which, in terms of achieving high resistance to compressive and tensile loads, replace coarse aggregates in their composition [24]. The most widely used theoretical model for creating the maximum dense packing of particles in the design of UHPC is the model by Anderson and Andreasen [12]. However, it considers particles only under dry conditions, which may not reflect the actual packing of UHPC particles, since the effect of water and other liquids is not taken into account [25]. Therefore, the wet particle packing density method was introduced to obtain the "real" maximum packing of particles [26]. In this paper, UHPC design methods are considered based on both dry particle packing and wet particle packing models.
Materials

The following raw materials were used in the work. The binder was based on sulfate-resistant Portland cement; the chemical composition of the cement and mineral additives is presented in Table 1. Active mineral additives with pozzolanic reactivity, facilitating structure densification: micro silica Sikacrete® PP1 (Sika Limited, Nhơn Trạch, Vietnam), with a particle size of <0.1 µm and a true density of 2150 kg/m³, compliant with TCVN 8827-2011 [29] and GOST R 56592 [30], as well as class F fly ash from Pha Lai Thermal Power Plant (Chi Linh, Vietnam), compliant with TCVN 10302-2014 [31] and GOST 25818 [32]. Fine aggregates: quartz sand from the Huong River, with a true density of 2650 kg/m³ and a particle size modulus of 2.91; natural white sand with an average particle size of 0.1-1.8 mm, compliant with the requirements of TCVN 7570-2006 [33], TCVN 10796-2015 [34], and GOST 8736-2014 [35]; and finely ground quartz powder (quartz flour), obtained by grinding white sand to an average particle size of 5-63 µm, which acts as a microfiller. Particle size distribution curves for these materials are shown in Figure 2. Steel fiber Dramix® 3D 65/35, whose characteristics are given in Table 2, compliant with TCVN 12392-1:2018 [38]. Drinking water was used for the preparation of the concrete mixtures. The use of a water-reducing superplasticizer (HRWR) is necessary to obtain high-strength concrete, and its dosage can vary: Gerlicher et al. [23] determined a dosage of 35 kg/m³ of superplasticizer to be suitable for producing a concrete mix with a 360 mm cone flow.

1. Packing method for dry particles

Fuller [39] and Andersen [40] developed the first continuous model by introducing the target particle size distribution P(D). Taking into account the effect of the minimum particle size on packing, Funk and Dinger [41] developed a modified model and introduced D_min. To obtain high-strength concrete, its framework must be dense and durable, and when designing high-strength concrete a continuous model is preferable, making it possible to obtain a denser framework of particles [42]. In this study, this modified model was used to design the specific matrix in accordance with Equation (1):

$$P(D_i) = \frac{D_i^{\,q} - D_{\min}^{\,q}}{D_{\max}^{\,q} - D_{\min}^{\,q}} \times 100\% \qquad (1)$$

where P(D_i) is the percentage of particles that can pass the sieve with size smaller than D_i; D_i is the average particle size, mm; D_max is the maximum particle size, mm; D_min is the minimum particle size, mm; and q is the particle size distribution modulus.

Using Equation (1), it is possible to design different concrete compositions by taking different values of q, which determines the percentage ratio between small and large particles in the concrete mixture. Brouwers and Radix [43] suggested that the distribution modulus is within the range of 0-0.25 for concrete with a high content of binders [44]. For high-strength concrete, the value of q was proposed within the range of 0.21-0.25 [45]. Using the modified model, Yu et al. developed concrete mixture compositions for obtaining high-performance concretes [46], with the distribution modulus set at 0.23. Given that large numbers of small particles are used to obtain the matrix of high-strength concrete, and based on the recommendation in [44], the value of q in this study was fixed at 0.23.

The mass fractions of each individual raw component in the concrete mixture are adjusted to achieve the optimal fit (the smallest difference) between the formulated mixture (Figure 3, dotted thick line) and the target curve (Figure 3, solid thick line), using an algorithm based on the method of least squares, as shown in Equation (2) [47]:

$$\mathrm{RSS} = \sum_{i=1}^{n}\left[P_{\mathrm{mix}}(D_i) - P_{\mathrm{target}}(D_i)\right]^{2} \qquad (2)$$

where RSS is the residual sum of squares (at defined particle sizes); P_mix is the composed mixture; P_target is the target granulometric composition calculated by Equation (1); and n is the number of points (between D_min and D_max) used to calculate the deviation.

The quality of the fit of the resulting cumulative particle size distribution curve is evaluated using the coefficient of determination R², as shown in Equation (3), since this factor provides the correlation between the gradation of the target curve and the composed concrete mix:

$$R^{2} = 1 - \frac{\sum_{i=1}^{n}\left[P_{\mathrm{mix}}(D_i) - P_{\mathrm{target}}(D_i)\right]^{2}}{\sum_{i=1}^{n}\left[P_{\mathrm{mix}}(D_i) - \overline{P}_{\mathrm{mix}}\right]^{2}} \qquad (3)$$

where $\overline{P}_{\mathrm{mix}} = \frac{1}{n}\sum_{i=1}^{n} P_{\mathrm{mix}}(D_i)$ is the average value of the entire distribution.

However, the dry particle packing method does not account for the effects of water and HRWR. In practice, water and superplasticizer play an important role in particle packing and therefore affect the properties of UHPC [48].
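To make the fitting procedure of Equations (1)-(3) concrete, the sketch below computes the modified Andreasen and Andersen target curve and scores a trial mixture against it. It is a minimal illustration, not the authors' design tool; the sieve sizes and the perturbed trial gradation are assumptions, and in practice the component mass fractions would be varied by an optimizer to minimize RSS.

```python
import numpy as np

def target_psd(d, d_min, d_max, q=0.23):
    """Modified Andreasen & Andersen target curve, Equation (1).
    Returns the cumulative percentage passing for sieve sizes d (mm)."""
    d = np.asarray(d, dtype=float)
    return 100.0 * (d**q - d_min**q) / (d_max**q - d_min**q)

def fit_quality(p_mix, p_target):
    """RSS (Equation 2) and R^2 (Equation 3) between a composed mix
    gradation and the target curve, both in cumulative % passing."""
    p_mix = np.asarray(p_mix, dtype=float)
    p_target = np.asarray(p_target, dtype=float)
    rss = np.sum((p_mix - p_target) ** 2)
    r2 = 1.0 - rss / np.sum((p_mix - p_mix.mean()) ** 2)
    return rss, r2

# Illustrative sieve sizes (mm) spanning micro silica (~0.1 um) to sand (1.8 mm):
sizes = np.array([0.0001, 0.001, 0.01, 0.1, 0.6, 1.8])
target = target_psd(sizes, d_min=0.0001, d_max=1.8, q=0.23)
# p_mix would come from the volume-weighted PSDs of the actual components;
# here a slightly perturbed target stands in for a trial mixture.
trial = target + np.array([0.0, 1.5, -2.0, 1.0, -0.5, 0.0])
print(fit_quality(trial, target))
```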
2. Packing method for wet particles

The high friction of dry particles prevents their packing density from increasing [49]. The presence of water reduces the friction force, and if the particles are in a water-saturated or supersaturated state, this friction force can be eliminated [50]. Moreover, adding superplasticizer affects the thickness of the water film formed on the surface of the solid particles [48]. Therefore, when using the wet particle packing model, a higher packing density is achieved compared to the dry packing model (Figure 4), since taking the moisture content of the particles into account allows a model that is more accurate and closer to reality [51]. The high wet packing density improves the macro-meso-micropore structure, resulting in an increase in the compressive strength of UHPC [48].

The wet particle packing model was proposed to account for the effect of water and superplasticizer. The following steps are necessary for achieving the dense packing of wet particles: (1) setting the initial value of W/B; (2) measuring the required amounts of water and cementing materials and mixing them; (3) placing the resulting mixture into a cylindrical mold and determining the mass of the cement paste; (4) calculating the solids concentration (φ) and void ratio (u) using Equations (4)-(6); (5) repeating the above steps at a lower W/B ratio until the maximum packing density is reached. Here, M and V are the mass and volume of the cement paste in the cylindrical mold (the mold has a 62 mm diameter and 60 mm height); ρ_w is the water density; ρ_α, ρ_β, and ρ_γ are the densities of the solid components of the various binders; and R_α, R_β, and R_γ are the volumetric ratios of the solid components of the various binders. The authors of [25] show an example of determining the minimum void ratio (u) and the maximum solids concentration (φ). The optimum W/B can be determined by considering the maximum wet packing density. However, the highest packing density of particles does not always result in the expected properties of UHPC. For example, high particle packing density does not provide high fire resistance of concrete, since relatively high porosity is preferable for reducing pore pressure in a high-temperature environment. The developed compositions of high-strength concretes using industrial waste are presented in Table 3.
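As a rough illustration of step (4), the sketch below computes the solids concentration and void ratio from the measured paste mass and mold volume. Since Equations (4)-(6) are not reproduced in the text, the formulas follow the wet packing method of Wong and Kwan, which the procedure above mirrors; the volumetric water/binder ratio u_w and all numeric values are assumptions for illustration only.

```python
import math

def wet_packing(M, V, u_w, rho_w, rho_solids, R_solids):
    """Solids concentration (phi) and void ratio (u) of a cement paste.

    M          -- mass of paste filling the mold (g)
    V          -- mold volume (cm^3), here a 62 mm x 60 mm cylinder
    u_w        -- water/binder ratio by volume (assumed input)
    rho_w      -- water density (g/cm^3)
    rho_solids -- densities of the binder components (g/cm^3)
    R_solids   -- volumetric fractions of the binder components (sum to 1)
    """
    # Mass of paste per unit volume of solids: water film plus solid phases.
    mass_per_solid_vol = rho_w * u_w + sum(
        rho * R for rho, R in zip(rho_solids, R_solids))
    V_solid = M / mass_per_solid_vol      # solid volume contained in the mold
    phi = V_solid / V                     # solids concentration
    u = (V - V_solid) / V_solid           # void ratio
    return phi, u

# Illustrative values only: a cement + fly ash + micro silica blend.
V_mold = math.pi * (6.2 / 2) ** 2 * 6.0   # ~181 cm^3 for the 62 x 60 mm mold
phi, u = wet_packing(M=330.0, V=V_mold, u_w=0.5, rho_w=1.0,
                     rho_solids=[3.15, 2.2, 2.15], R_solids=[0.7, 0.2, 0.1])
print(f"phi = {phi:.3f}, u = {u:.3f}")
```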
Preparation of Steel Fiber Concrete Mixture and Molding of Samples

In order to obtain UHPC, the following sequence of steps was followed in preparing the concrete mixture: (1) fine aggregates were mixed in a dry state for 2 min to prevent agglomeration and to evenly distribute fine particles; (2) binders, including cement, fly ash, and micro silica, were then added, followed by 5 min of mixing; (3) steel fibers were then gradually added and stirred for another 3 min until the fibers were completely dispersed; (4) stirring continued for about 10 min while the solution of superplasticizer previously mixed with water was gradually added to the resulting mixture; (5) the workability of the prepared concrete mixture was determined by the cone flow, which should be in the range from 280 to 310 mm without delamination, after which the molds were filled with the concrete mixture. The consistency of the concrete mixture was determined according to TCVN 12209-2018 [52] and GOST R 57812-2017/EN 12350-5:2009 [53]. The molds with the samples, covered with a damp cloth, were stored for 24 h in the open air at a temperature of 20 +/- 5 degrees C; the samples were then removed from the molds and placed in a tank with fresh water at a temperature of 20 +/- 2 degrees C, where they hardened until reaching the test ages of 3, 7, 14, 21, 28, and 120 days. The samples prepared in this way had the form of cubes of 100 x 100 x 100 mm size for compressive strength tests, cylinders of 100 x 200 mm for axial tensile tests, and prisms of 100 x 100 x 400 mm for flexural tensile tests.

Determination of the Strength Characteristics of the Developed Concretes

The test results are illustrated in Figures 5-7. The results of the experiment show the efficiency of combining active mineral additives (micro silica and fly ash) with natural white sand, finely ground quartz powder, and steel fiber in a concrete mixture, making it possible to obtain concretes with high strength characteristics, such as compressive strength at 3, 7, 14, 21, 28, and 120 days of hardening, as well as axial tensile strength and flexural tensile strength at the age of 28 days.
Compressive Strength at Different Curing Ages

We consider the change in compressive strength of the studied concretes with and without natural white quartz sand, obtained as the result of hardening of the concrete mixtures of the twelve developed compositions. These compositions containing steel fiber (Table 3), which vary in the content of fly ash and micro silica in the multicomponent binder as a partial replacement of sulfate-resistant Portland cement, were tested at different hardening ages, as shown in Figures 5 and 6. Using fly ash reduced the compressive strength values as compared to the test composition (UC) at the early age of concrete hardening, which can be explained by the decrease in cement content in the binder composition.
It can be seen that an increase in fly ash content is accompanied by an increase in the compressive strength of concrete samples at later ages: the compressive strength at the age of 7 days decreased by 13-39%, while, in contrast, at the hardening age of 120 days the strength increased by 5-18% as compared to the test concrete composition, for multicomponent binders in which 20-40% by mass of fly ash replaced part of the sulfate-resistant Portland cement. This indicates a gradual increase in the strength of such concretes over longer hardening periods. This trend can probably be explained by the cumulative effect of cement hydration in combination with the pozzolanic reaction of the fly ash used, leading to a gradual increase in the content of low-basic calcium hydrosilicates in the cement stone of the concrete [54][55][56]. From the diagram in Figure 5, it can be seen that using natural white sand instead of river sand had a significant impact on the development of early compressive strength of the designed concretes. The results showed that for concretes containing river sand as a fine aggregate, the compressive strength at the age of 3 days was from 64 to 73.5 MPa, whereas for concretes with white sand the three-day compressive strength reached 95-110 MPa, meaning the compressive strength of concrete increased by 29 to 73% at the 3-day hardening age. The strength increase at the longer age (120 days) was significantly less and ranged from 5 to 29%. Figure 5 also shows the relationship between the micro silica content in the multicomponent binder composition and the compressive strength increase of the developed concretes with white and river sand at different hardening ages. An increase in concrete strength was observed with increasing micro silica content, which can be explained by the increased formation of low-basic calcium hydrosilicates (CSH) due to the pozzolanic reaction between free calcium hydroxide (CH) and amorphous micro silica (SiO2):

Ca(OH)2 + SiO2 + H2O -> C-S-H

The C-S-H product of the pozzolanic reaction not only improves the adhesion between the cement stone and the surface of fine aggregate grains, but also results in compaction of the concrete structure. Thus, the pozzolanic reaction has a double effect: an increase in the compressive strength of concrete and a decrease in the total pore volume in its structure. In addition, micro silica particles < 0.1 micrometers can fill voids created by free water in the matrix. A rapid increase in compressive strength of concretes containing river sand was observed at ages from 3 to 28 days. The increase in 28-day strength relative to 3-day strength at micro silica contents of 7.5, 10, 12.5, and 15% by mass amounted to 56.3%, 43%, 56.9%, and 56%, respectively. After 28 days, these concretes showed a slight increase in strength, which by 120 days reached values of 106, 108, 109, and 117 MPa, respectively. The highest compressive strength (137 MPa at the age of 120 days) was obtained with a micro silica content of 7.5% by mass of the multicomponent binder. However, a decrease in concrete compressive strength was observed with further increases in micro silica content.
The obtained results can be explained by the fact that, when a larger amount of micro silica is added in place of sulfate-resistant cement, part of it remains unreacted, without entering into the pozzolanic reaction that forms the low-basic calcium hydrosilicates strengthening the cement stone of the concrete. In addition, the amount of these hydrosilicates is reduced due to the decrease in cement content in the multicomponent binder. Thus, the obtained experimental results showed that the optimal content of micro silica in the concrete mixture is 7.5% by mass of the multicomponent binder, other things being equal.

Axial and Flexural Tensile Strengths at 28-Day Age

The axial tensile strength of the developed concretes using white and river sand as fine aggregate was determined after 28 days of hardening of the samples (Figure 7). The results show that this strength increases as the micro silica content in the multicomponent binder rises from 5 to 12.5% by mass, and then decreases with its further growth to 15% by mass. It is worth noting that white sand formulations show higher axial tensile strength than river sand formulations. The highest axial tensile strength of 11.9 MPa was shown by the concrete containing white sand with 7.5% SF and 20% FA by mass, which is 78% higher than that of the test concrete sample containing river sand. The flexural tensile strength of 28-day-old concrete samples was 9.5-29.2 MPa (Figure 7). When part of the sulfate-resistant Portland cement in the multicomponent binder was replaced simultaneously by 7.5% SF by mass and up to 40% fly ash by mass, the flexural tensile strength of concretes with river sand ranged from 9.5 to 12.3 MPa, which is lower than that of the other developed concrete compositions. The test results of concrete samples of nine compositions with white and river sand, containing 20% fly ash by mass in the multicomponent binder and micro silica in amounts of 5, 7.5, 10, 12.5, and 15% by mass, showed that, other things being equal, using white sand makes it possible to obtain concrete with a flexural tensile strength of 21.4 to 29.2 MPa, up to 3 times higher than that of the concretes using river sand.

Conclusions

Based on the results of the conducted research, the following conclusions can be drawn:

1. With the use of local Vietnamese raw materials, high-strength fiber-reinforced concretes with high potential for use in different constructions can be obtained. The conducted studies revealed that the highest compressive strength, equal to 137 MPa at 120-day age, as well as the highest axial tensile and flexural tensile strengths, equal to 11.9 MPa and 29.2 MPa, respectively, at 28-day age, were shown by the concrete composition with natural white quartz sand, containing in the multicomponent binder, based on sulfate-resistant Portland cement, 7.5% micro silica by mass combined with 20% by mass of fly ash from thermal power plants. Thus, it was determined that the optimal contents of micro silica and low-calcium fly ash in the multicomponent binder are 7.5% and 20% by mass, respectively.

2. It has been established that using natural white sand with grain sizes from 5 to 1800 micrometers increases the strength characteristics of concrete as compared to river sand. Therefore, it is efficient to use white sand for producing concretes with high strength characteristics.
The above use will also contribute to protecting the local river sand resources from depletion, which is relevant for Vietnam.

3. The use of local waste from the power and metallurgy industries, in the form of fly ash from thermal power plants and micro silica, as a partial replacement for Portland cement in multicomponent binder compositions reduces the carbon footprint of cementing component production and helps to protect the environment from industrial waste pollution. In addition, it is beneficial in terms of saving power resources and reducing the cost of concrete.

Institutional Review Board Statement: Not applicable.
A New Stress Field Model for Semiclosed Crack under Compression Considering the Influence of T-Stress

Compression is a typical stress condition for cracks in deep-water structures, where the cracks tend to close from a nonclosed state, due to a certain gap that exists between the surfaces on both sides of the cracks. Stress field models around the crack have been established in previous studies, while the crack surfaces are simply assumed to be in a nonclosed or full-closed state. In fact, the cracks inside deep-water structures are usually in a semiclosed state, putting the reliability of calculation results at risk. To reflect the actual state of the crack, a comprehensive stress field model around the semiclosed crack is established based on the complex potential theory, and the stress intensity factor KII at the crack tip, related to the closure amount of the crack surfaces, the deep-water pressure, the friction coefficient in the closed region, and the crack inclination angle, is derived. The analytical solution of the stress field around the semiclosed crack contains three T-stress components, i.e., Tx, Ty, and Txy. The rationality and effectiveness of the proposed stress field model are verified by the isochromatic fringe patterns around the crack obtained from photoelastic experiments. It is shown that the proposed model can reasonably predict the evolution of the stress field with the closure amount of the crack under constant and variable stress conditions.

Introduction

In general, the most common construction material used for offshore oil production platforms is steel [1][2][3]; however, more than 50 reinforced concrete platform bases have been utilized by the Norwegians and British in the North Sea [4]. As shown in Figure 1, the reinforced concrete oil platform typically consists of a large gravity base and several concrete legs supporting the upper steel structure. The foundations of offshore wind turbines can also be made of concrete materials [5], as demonstrated in Figure 2. These are massive structures due to the deep-water pressures involved and the wave loading that has to be resisted. However, during the manufacture and construction of these concrete structures, discontinuity defects similar to faults or microcracks are inevitably generated owing to the uneven distribution of early thermal stress or shrinkage of cementitious materials [6,7]. Generally, these defects tend to occur and appear in the form of cracks, as can be seen in Figures 1 and 2. It would be no exaggeration to state that cracks have a great influence on the mechanical performance of a concrete structure. Crack tips under load are susceptible to stress concentration effects, which in turn cause a significant reduction in structural capacity through crack initiation, propagation, and interconnection; on the other hand, the presence of cracks leads to corrosion of the internal reinforcement by reducing the durability of the concrete, further weakening the capacity of the structure and often resulting in serious and disastrous consequences [8]. To realistically evaluate the effect of internal cracks on structural capacity under deep-water pressure conditions, it is essential to have a clear understanding of the stress field around the cracks. Most previous studies on crack fracture behavior have focused on the stress field and crack initiation of tension cracks (mode I), and theoretical research on this type of crack has become mature [9,10].
However, the internal cracks in deep-water structures are hardly subjected to stress states similar to those of pure mode I cracks. In fact, these cracks, under the action of deep-water pressure, self-weight, and upper load, are usually in a multiaxial compressive stress state, and the surfaces on both sides of the crack interact due to closure, including mutual compression and friction effects [11,12]. Different closure amounts may cause the two surfaces to interact differently, and these differences may be important to the stress field depending on the magnitude of the influence. For the analysis of crack problems, the stress field proposed by Williams [13] can well describe the stress distribution near the crack tip. The Williams expression contains not only the singular stress term of order r^(-1/2) but also the nonsingular stress terms (generally called T-stress) and higher-order terms of O(r^(1/2)). In the past, only the singular stress term in the expansion was adopted by scholars when studying the stress field at the crack tip [14,15], ignoring the nonsingular term and the higher-order terms. However, numerous studies [16][17][18] have shown that T-stress has a significant effect on the tip stress field, which can be summarized as follows: as r approaches 0, the singular stress term in the expansion plays the main controlling role, and as r gradually increases, the value of the singular stress term decreases rapidly while the proportion of the nonsingular stress term gradually increases. This reveals that the nonsingular stress term cannot be ignored under the latter condition. The study of the stress field near the crack considering T-stress under tensile conditions has a relatively mature theoretical basis, but the effects of T-stress in a compressive stress state have rarely been considered when studying the stress field around the crack [19,20]. Many scholars have studied the influence of defects such as inclusions and cracks on the fracture behavior of materials using experimental and numerical analysis. Fan et al. [21] conducted uniaxial compression tests on cuboidal sandstone containing a nonpenetrating crack to study the cracking mechanism of defects; the results show that a crack first initiates at the tip of the crack on the front surface of the specimen, while a new crack initiates at the middle of the crack on the back surface. Zhang et al. [22] quantitatively studied the influence of two conditions, with and without inclusions, on the mechanical mechanisms of rock crack evolution, and the impact of inclusions on the mechanical properties of the rock during compressive loading was researched as well. Through a combination of experiments and numerical simulations, Yang et al. [23] investigated the relationship between wing crack expansion and peak strength for specimens containing a main crack and prefabricated wing cracks, concluding that the length of the prefabricated wing crack had a negligible effect on the peak strength of the specimen. For the study of cracks in the closed state, Liu [24] established the stress field expression around a crack that is full-closed under uniaxial compression; Fan et al. [25] and Feng et al. [26] studied the initiation behavior and the evolution of tangential stress for rock material under compression. Nevertheless, all of the above studies are based on an ideal model with the cracks being in a full-closed state.
It is worth noting that a certain gap usually exists between the crack surfaces, and under the action of compression the crack is actually always in such a practical state; that is, the crack surfaces gradually close from the nonclosed state. To reflect the actual state of cracks in view of the shortcomings of previous research results, this paper first derives a more comprehensive and detailed stress field model containing T-stress around a crack under a compressive stress state, based on the complex potential theory of Muskhelishvili [27], considering not only the mutual compression and frictional effects between the surfaces on both sides of the crack but also the closure amount of the crack surfaces. The rationality of the stress field model is then verified by comparing the isochromatic fringe patterns obtained from previous photoelastic experiments with the contour lines predicted by the proposed model. Finally, the evolution of the stress fields with closure amount under constant stress and variable stress conditions is analyzed and compared, respectively. The aim of this paper is to accurately describe the stress fields of semiclosed cracks; furthermore, the results are also intended to provide theoretical guidance for accurate and quantitative analysis of the distribution of stress fields around internal cracks in deep-water structures, so as to provide a design reference for structures.

Stress Fields of Nonclosed and Full-Closed Cracks under Compression

Crack propagation strongly depends on the asymptotic stress field near the crack tip. It is well known that the asymptotic stress field at the crack tip in a two-dimensional elastic medium can be described by the leading singular and secondary constant terms as follows [28]:

sigma_ij = K / sqrt(2*pi*r) * f_ij(theta) + T * delta_1i * delta_1j + O(r^(1/2)), (1)

with the corresponding fields for the nonclosed and the full-closed crack given by Equations (2) and (3), respectively. However, the establishment of these two stress fields is based only on ideal conditions; that is, the crack is under a nonclosed or full-closed state. The actual situation is that the crack tips gradually close as compression proceeds. Furthermore, the above models are only applicable to the stress field in a very small area around the tip, and their reliability cannot be guaranteed for predicting the stress field at distances far from the tip. Therefore, given the actual state of a general crack, it needs to be stated that the stress fields expressed by Equations (2) and (3) still have great limitations in application, and the influence of the semiclosed crack on the stress field must be properly understood for their use in concrete fracture mechanics. A numerical illustration of the competition between the singular and T-stress terms is sketched below.
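The relative weight of the singular and T-stress terms in Equation (1) can be made concrete numerically. The sketch below uses the standard mode II angular functions from the textbook Williams solution, with placeholder values of KII and T; it is a generic illustration of the r-dependence argued above, not a reproduction of the paper's specific Equations (2) and (3).

import math

def mode2_singular(K2, r, theta):
    # Leading singular mode II terms of the Williams expansion (textbook form):
    # returns (sigma_x, sigma_y, tau_xy) at polar position (r, theta).
    t = theta
    c = K2 / math.sqrt(2.0 * math.pi * r)
    sx = -c * math.sin(t / 2.0) * (2.0 + math.cos(t / 2.0) * math.cos(3.0 * t / 2.0))
    sy = c * math.sin(t / 2.0) * math.cos(t / 2.0) * math.cos(3.0 * t / 2.0)
    txy = c * math.cos(t / 2.0) * (1.0 - math.sin(t / 2.0) * math.sin(3.0 * t / 2.0))
    return sx, sy, txy

K2 = 1.0   # placeholder stress intensity factor, MPa*sqrt(m)
T = -5.0   # placeholder T-stress acting on sigma_x, MPa
theta = math.radians(45.0)
for r in (1e-5, 1e-4, 1e-3, 1e-2):
    sx, _, _ = mode2_singular(K2, r, theta)
    share = abs(T) / (abs(sx) + abs(T))  # share of the T-term in sigma_x
    print(f"r = {r:.0e} m: singular sigma_x = {sx:9.1f} MPa, T-share = {share:.2f}")
# As r -> 0 the singular term dominates; with growing r the constant T-term
# accounts for an increasing share of the stress, as argued above.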
Boundary Conditions and Model Assumptions. Consider an elastic infinite plate containing a semiclosed crack of length 2a subjected to two far-field stresses sigma1 and sigma3 at infinity, as shown in Figure 5. The angle between the crack and the vertical stress sigma1 is alpha, and the angle between the crack and the horizontal stress sigma3 is beta. The crack tends to close under the action of compression, as illustrated in Figure 6. The two surfaces above and below the closed region of the crack tip produce mutual compression and friction, where the closed length and nonclosed length are denoted by delta_a and b, respectively. The expression delta_a/a is defined as the closure amount in this paper. Due to the existence of a very small gap between the two surfaces of the crack, the crack is no longer strictly linear during the process of tip closure. Comparatively, when the gap between the crack surfaces is much smaller than the length of the crack, it can be assumed that the crack is still straight. The compressive stress in the closed region is sigma_n, which can be expressed as a function of the form sigma_n = f(x) * sigma_N (Equation (4)), where f(x) is a function of x and sigma_N is the maximum compressive stress on the closed region, which can be taken as the normal component of the far-field stresses resolved on the crack plane (Equation (5)). The shear stress on the surface of the closed region is given by the resolved shear component (Equation (6)). Driven by the shear stress, a frictional resistance tau_f can be generated in the closed region. When the shear stress tau_s is less than the frictional resistance tau_f on the crack surface, the friction can prevent the crack in the closed region from slipping; when the shear stress tau_s is greater than the frictional resistance tau_f, the surfaces on both sides of the closed region slide relative to one another, and the frictional resistance on the surface of the closed region is tau_f = mu * sigma_n (Equation (7)), where mu represents the friction coefficient. Therefore, the condition for relative sliding of the crack surfaces in the closed region is tau_s > tau_f = mu * sigma_n (Equations (8) and (9)); a numerical check of this condition is sketched at the end of this subsection.

Stress Function. According to Muskhelishvili, the complexity of a problem in the plane theory of elasticity can be simplified very significantly by finding Phi(z) and Omega(z), which must satisfy the problem's boundary conditions [27]. The stress field at the tip of the crack can always be expressed through these two complex functions (Equations (10) and (11)), where z = x + iy and z_bar = x - iy, and sigma_y+, tau_xy+ and sigma_y-, tau_xy- denote the boundary values of stress on the upper and lower surfaces of the crack, respectively. For the crack shown in Figure 6, the boundary values on the upper and lower surfaces in the closed region can be written accordingly. From Equations (11), (12), and (14), the boundary conditions on the surfaces in the closed region are obtained; adding and subtracting them yields Equations (18) and (19), where P(t) and Q(t) are known functions on the surface in the closed region. Equations (18) and (19) are typical Riemann-Hilbert problems, which generally take the form F+(t) = g * F-(t) + f: assuming F(t) = Phi(t) + Omega(t), Equation (18) is obtained when g = -1 and f = 2(sigma_n - i*tau_f); assuming F(t) = Phi(t) - Omega(t), Equation (19) is obtained when g = 1 and f = 0. The two general solutions of the problem can be obtained respectively, and the two stress functions can be solved, where X(z) = sqrt(z^2 - a^2) and P(z) = C0 * z; the two stress functions can then be rewritten accordingly. It is assumed that the crack surface in the closed region is subjected to a uniformly distributed compressive stress, as shown in Figure 7; from Equations (4) and (5), the corresponding boundary values and the P(t) in the stress function follow, and the general expressions of the two stress functions can be obtained.
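The sliding condition stated above reduces to elementary two-dimensional stress resolution. The short sketch below checks tau_s > mu * sigma_n for a crack inclined at alpha to sigma1, using the standard transformation formulas with compression taken as positive; note that it assumes a uniform contact stress over the closed region, a simplification of the distributed sigma_n = f(x) * sigma_N above, and the numeric values are illustrative.

import math

def crack_plane_stresses(sigma1, sigma3, alpha_deg):
    # Resolve the far-field stresses onto a crack plane inclined at alpha to
    # the sigma1 direction (standard 2D stress transformation, compression
    # positive): normal stress sigma_n and shear stress tau_s on the plane.
    a = math.radians(alpha_deg)
    sigma_n = sigma1 * math.sin(a) ** 2 + sigma3 * math.cos(a) ** 2
    tau_s = (sigma1 - sigma3) * math.sin(a) * math.cos(a)
    return sigma_n, tau_s

def slides(sigma1, sigma3, alpha_deg, mu):
    # Relative sliding in the closed region requires tau_s > mu * sigma_n,
    # the condition of Equations (8) and (9) (uniform-contact simplification).
    sigma_n, tau_s = crack_plane_stresses(sigma1, sigma3, alpha_deg)
    return tau_s > mu * sigma_n

# Illustrative check: 30-degree crack, friction coefficient 0.3.
print(slides(sigma1=100.0, sigma3=20.0, alpha_deg=30.0, mu=0.3))  # True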
3.3. Stress Intensity Factor. The relationship between the stress intensity factors KI and KII for mode I and mode II of an inclined crack in the infinite plane can be expressed as in [19,31]. The term sigma_N(1 - i*mu) in the stress function is given by Equation (35); thus, Equation (34) can then be expressed through Equation (36). From Equation (36), the stress intensity factors KI and KII ahead of the crack tip are separately expressed in Equations (38) and (39), in which W is a parameter depending on the nonclosed length b. Figure 8 shows the relationship curve between the parameter W and b; it can be seen that W takes values in the range 0 <= W <= 0.5 when b takes values in the range 0 <= b <= a, and the term (1 - 2W) in Equation (38) falls between 0 and 1, indicating that the mode I stress intensity factor KI at the crack tip is a nonpositive value under compression. In addition, since the crack faces of the material cannot interpenetrate, it can be considered that the crack tip does not have the characteristics of mode I under the compression condition, that is, KI = 0. Furthermore, when b = 0, W = 0.5 and KI = 0, revealing that the crack surface is full-closed and the singular term related to the mode I crack disappears, which is consistent with previous research results [24][25][26]. For this case, the expression for KII is given by Equation (40), which is the same as the stress intensity factor for the full-closed crack tip obtained by Fan et al. [25]. For the case when b = a and W = 0, the crack surfaces are closed but without interaction, and the stress intensity factor reduces to a term proportional to sin 2*beta (Equation (41)). Figure 9 shows the comparison of the variation of the normalized stress intensity factor as a function of crack inclination angle obtained from the proposed theory and the previous theories [11,12,25]. As can be seen, for all three models the stress intensity factor decreases with increasing crack inclination angle. Besides, the stress intensity factor obtained from the semiclosed model in this paper lies between those of the nonclosed model and the full-closed model for the same crack inclination angle, and different crack inclination angles corresponding to the maximum values of the stress intensity factor can be determined from the three different closure models. For the semiclosed crack, the condition for relative sliding of the crack surfaces is defined by Equations (8) and (9); combining this with the value range of W in Equation (39), the relation of Equation (42) always holds. To sum up, the mode II stress intensity factor KII at the tip of a semiclosed crack is a parameter related to the closure amount, the confining pressure of deep water, the friction coefficient in the closed region, and the inclination angle of the crack.
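For the two limiting cases just quoted, classical closed-form results exist: a just-touching frictionless crack (b = a, W = 0) has KII = tau_s * sqrt(pi*a), with tau_s the resolved shear stress, and a fully closed frictional crack (b = 0) has KII = (tau_s - mu*sigma_n) * sqrt(pi*a), the McClintock-Walsh-type expression that the text identifies with Fan et al. [25]. The sketch below evaluates these two envelopes, between which the semiclosed KII of Equation (39) interpolates; it deliberately does not reproduce the paper's W(b) weighting, so it should be read as a bracketing illustration under the stated classical assumptions, with illustrative numbers.

import math

def k2_envelopes(sigma1, sigma3, a, alpha_deg, mu):
    # Bracketing mode II stress intensity factors (compression positive).
    # Upper envelope: surfaces touching without interaction (b = a, W = 0).
    # Lower envelope: full-closed frictional contact (b = 0), McClintock-Walsh
    # form; values driven negative by friction are truncated to zero because
    # the crack is then locked.
    al = math.radians(alpha_deg)
    tau_s = (sigma1 - sigma3) * math.sin(al) * math.cos(al)
    sigma_n = sigma1 * math.sin(al) ** 2 + sigma3 * math.cos(al) ** 2
    k2_open = tau_s * math.sqrt(math.pi * a)
    k2_closed = max(tau_s - mu * sigma_n, 0.0) * math.sqrt(math.pi * a)
    return k2_open, k2_closed

# Sweep over inclination angle (stresses in MPa, half-length a in m).
for alpha in (15, 30, 45, 60, 75):
    hi, lo = k2_envelopes(sigma1=100.0, sigma3=15.0, a=0.01,
                          alpha_deg=alpha, mu=0.3)
    print(f"alpha = {alpha:2d} deg: K_II between {lo:6.3f} and {hi:6.3f} MPa*sqrt(m)")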
Analytical Solution of Stress Fields. The analytical solution of the stress fields near the crack tip subjected to the loading in Figure 5 can be derived from the two stress functions Phi(z) and Omega(z). The coordinate system defining a double-ended crack in the complex z-plane is shown in Figure 10. Considering the coordinate origin at the center of the crack, the complex variable z is defined as z = x + iy; thus, P represents a point in the z-plane where the elastic stresses sigma_x, sigma_y, and tau_xy are determined at (theta1, r1). According to the polar coordinate system in which the crack is located, the relevant variables are defined in Equations (43) and (44), and the stress functions are expressed in terms of the stress intensity factor in Equations (45) and (46).

[Figure 9: Comparison of the variation of the normalized stress intensity factor as a function of crack inclination angle obtained from the proposed and the previous theories; legend: Li [12] (delta_a/a = 0), proposed model (delta_a/a = 0.2), Tang [11] and Fan [25] (delta_a/a = 1).]

Inserting Equations (43) and (44) into Equations (45) and (46), the expressions of the two stress functions in the polar coordinate system are obtained. Therefore, the other functions Phi'(z), Phi'(z_bar), and Omega(z_bar) in Equation (11) can be rewritten accordingly. By Equation (10), the relationship between the stress components sigma_x and sigma_y of the stress field around the crack can be written down (Equation (52)); similarly, applying the proper functions given above to Equation (11) gives the relationship between the stress components sigma_y and tau_xy (Equation (53)). Extracting the real and imaginary parts of Equation (53), the stress components sigma_y and tau_xy are obtained (Equations (54) and (55)), and combining Equations (52) and (54) yields the stress component sigma_x (Equation (56)). With respect to Equations (54), (55), and (56), one can see that each stress component contains a subterm related to the stress intensity factor KII and r, while the other subterm is unrelated to KII and r; the latter subterm is the T-stress, which can be expressed by the three T-stress components Tx, Ty, and Txy, respectively (Equations (57)-(59)). For the specific case that w -> 1 as b -> 0, the three T-stress components are the same as those of the full-closed crack obtained by Tang [11] and Fan et al. [25]. Figure 12 compares the variation of the three normalized T-stress components as a function of crack inclination angle obtained from the proposed theory and the previous theories [11,25,32]. By comparison, it can be seen that the three stress components obtained from the semiclosed model are larger than those obtained from the full-closed model at the same inclination angle, and the Tx obtained in this paper is smaller than that obtained from the nonclosed model. In summary, it is clear from Equations (57)-(59) that the T-stress in the stress field is a component related to the closure amount, the confining pressure of deep water, the friction coefficient in the closed region, and the inclination angle of the crack. So far, the expressions of the three individual stress components sigma_x, sigma_y, and tau_xy in the stress field around the semiclosed crack under compression have been derived (Equations (61)-(63)).

Verification of the Stress Field Model

Quantitative visualization has gradually become an essential experimental tool for understanding the stress field evolution that governs mechanical and fracture behaviors in various engineering applications. The photoelastic method enables visualization of the stress field near the crack; therefore, the results of photoelastic fringe patterns have been used to fit the parameters in the analytical solution of the stress field in numerous studies [33][34][35].
In two-dimensional photoelastic theory, the difference of the principal stresses can be expressed through the isochromatic fringe order N, the material fringe value f, and the thickness h of the plane model:

sigma1' - sigma2' = N * f / h. (64)

For the plane stress problem, the principal stresses can be written in terms of the stress components:

sigma1', sigma2' = (sigma_x + sigma_y)/2 +/- sqrt(((sigma_x - sigma_y)/2)^2 + tau_xy^2). (65)

Inserting Equations (61)-(63) into (65) and combining with (64), one can obtain the theoretical isochromatic fringes around the crack. It should be noted that the principal stresses sigma1' and sigma2' herein are different from the far-field stresses sigma1 and sigma3 applied to the plate containing the crack. Hoek and Bieniawski [36] conducted a photoelastic experiment on a glass plate containing a single nonclosed inclined crack under uniaxial compression; a typical isochromatic pattern obtained around the crack in the plate is illustrated in Figure 13. Lee et al. [37] studied the evolution of isochromatic fringes around the crack in a Homalite-100 plate under compression and compared the experimental results with numerical simulation results so as to determine the distribution of the stress field. Since it is difficult to achieve a full-closed state on both sides of the crack during specimen production at the beginning of compression, we assume that the crack surfaces are in a semiclosed state. The stress field model proposed in this paper can predict the isochromatic fringes of the principal stress difference around the crack under various closure amounts; therefore, the rationality of the proposed model can be verified by comparing the theoretical isochromatic fringes obtained from the model with the isochromatic fringe patterns obtained from the photoelastic experiments. Figure 14 presents the comparison of the isochromatic fringe patterns from the photoelastic experiment and the theoretical prediction. The comparison in Figure 14 reveals that when the crack surfaces are nonclosed or full-closed, the discrepancy between the theoretical isochromatic fringe patterns and the experimental results is considerable, while when the crack surfaces are semiclosed, the theoretical results are in better agreement with the experimental results, especially at relatively low closure amounts of the crack surfaces. In order to study the effect of crack location and orientation around a tunnel on the stress intensity factor, Wang et al. [38] conducted a series of uniaxial compression tests on samples made of transparent epoxy resins containing a single crack with different inclination angles and analyzed the photoelastic characteristics near the crack tip. The distribution of isochromatic fringe patterns, only at the semiclosed crack tip, was investigated to further verify the rationality of the model proposed in this paper. The outcome of a comparison between the experimental results of Wang et al. [38], the model estimation in this paper, and the model estimation obtained by Fan et al. [25] is shown in Figure 15; in the two model estimations, the closure amount and the friction coefficient are taken as 0.2 and 0.3, respectively. It is revealed that there are some differences between the experimental results and the isochromatic fringe patterns predicted by Fan et al. [25], who considered the crack to be in a full-closed state, while relatively good agreement exists between the experimental results and the predicted results of the model in this paper. The fringe-order computation underlying these theoretical patterns is sketched below.
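Equations (64) and (65) are the standard stress-optic law and plane-stress principal stress formulas, so the theoretical fringe order at any point follows directly from the stress components of Equations (61)-(63). A minimal sketch, with placeholder values for the model thickness, material fringe value, and stress components:

import math

def principal_stress_difference(sx, sy, txy):
    # Principal stress difference sigma1' - sigma2' for plane stress, Eq. (65).
    return 2.0 * math.sqrt(((sx - sy) / 2.0) ** 2 + txy ** 2)

def fringe_order(sx, sy, txy, h, f):
    # Stress-optic law, Equation (64): sigma1' - sigma2' = N * f / h,
    # hence N = h * (sigma1' - sigma2') / f.
    return h * principal_stress_difference(sx, sy, txy) / f

# Placeholder values: 6 mm thick model, fringe value 7 kN/m, stresses in kPa.
N = fringe_order(sx=-1200.0, sy=-3400.0, txy=800.0, h=0.006, f=7.0)
print(f"theoretical fringe order N = {N:.2f}")
# Evaluating N over a grid of points and contouring it at integer values
# reproduces the theoretical isochromatic fringe pattern compared in Figure 14.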
As may be seen from the above comparisons, the predictions of the proposed semiclosed stress field model are accurate in representing the morphology of the isochromatic fringe patterns, which represent the stress field distribution both along the crack and at its tips.

Evolution of the Stress Field as the Crack Transitions from a Nonclosed to a Full-Closed State

The Influence of Closure Amount under Constant Stress. It is assumed that internal cracks in deep-water structures are subjected to a constant stress sigma1 caused by self-weight or upper loads and a constant stress sigma3 caused by the deep-water pressure; the particular influence of different closure amounts on the evolution of the stress fields is investigated in this section. The three stress components are normalized by dividing both sides of Equations (61) to (63) by sigma1; taking Equation (61) as an example, the normalized form is given by Equations (66) and (67), in which lambda represents the water confining pressure coefficient, expressed as lambda = sigma3/sigma1. A finite domain with dimensions of 70 x 70 mm^2, containing a crack of 20 mm length and 30-degree inclination angle, was selected as the studied area. The friction coefficient mu of the crack surfaces in the closed region was taken as 0.3, and the water confining pressure coefficient lambda was taken as 0.15. Figures 16-18 show the variation of the contour maps of the three normalized stress components around the crack with the closure amount, respectively. From these figures, it can be concluded that the closure amount generally exhibits significant and visible effects on the stress fields around the crack. The crack tip shows a more obvious stress concentration effect at lower closure amounts in comparison with higher ones. For the stress components sigma_x and sigma_y under the same boundary conditions, the tensile stress (positive sign) area near the two crack tips gradually diminishes as the closure amount increases. For the stress component tau_xy, the whole observed area is a region of compressive stress; it can be observed that the low compressive stress area on both sides of the crack gradually transitions to relatively higher compressive stress as the closure amount increases, while, on the contrary, the high compressive stress area gradually transitions to relatively lower compressive stress. For a quantitative analysis of the magnitude of the stress field, Figure 19 shows the variations of the three stress components on a circle with a radius of 0.05 mm around the crack tip. As can be observed from the figure, the variation of the closure amount has no influence on the angles corresponding to the peak values of the stress components. For the stress components sigma_x and tau_xy, the absolute values of the extreme values decrease significantly with the increase of the closure amount, and the corresponding reductions in magnitude for sigma_x and tau_xy when the closure amount changes from 0 to 1 are about 22.7% to 40.5% and 0% to 28.9%, respectively. For the stress component sigma_y, the increase of the closure amount makes the absolute value of the maximum decrease by 79.6% and the absolute value of the minimum increase by 14.9%.

The Influence of Closure Amount under Variable Stress. Besides the constant stress case, this section deals with the influence of the closure amount on the evolution of the stress field under variable stress.

[Figure 20: loading path; initial state sigma1 = sigma3 (lambda0 = 1), then sigma1 = sigma3 + q1, sigma3 + q2, ..., sigma3 + qn, with 1 > lambda1 > lambda2 > ... > lambdan > 0.]
Considering the conventional loading path shown in Figure 20, the confining pressure coefficient is lambda = 1 in the initial state and gradually decreases as the stress sigma1 is gradually applied. Due to the existence of the confining pressure, the coefficient approaches 0 but never reaches it. It is assumed that the closure amount varies linearly with the increase of vertical stress, that is, delta_a/a = sigma1/sigma1_max, where sigma1_max is the maximum vertical stress applied during loading. Let sigma3 = 20 MPa and sigma1_max = 100 MPa; thus, the closure amount is 0.2 at the initial state of loading, gradually becomes larger, and tends to 1 as the stress sigma1 is gradually applied. Combining this with the confining pressure coefficient, it can be summarized that the confining pressure coefficient lambda gradually decreases while the closure amount delta_a/a gradually increases as the loading proceeds. Equations (8) and (9) illustrate that relative sliding of the crack surfaces in the closed region requires a particular condition. Consequently, Figure 21 shows the relationship between the stress intensity factor KII and the confining pressure coefficient lambda. As can be seen from the figure, KII <= 0 when 0.68 <= lambda <= 1, indicating that the shear stress on the closed surfaces is less than the frictional resistance, so that no relative sliding of the crack surfaces occurs and the stress field model is not applicable. Nevertheless, the stress intensity factor is greater than 0 when lambda is in the range 0.2 <= lambda < 0.68, and the stress field model is applicable since relative sliding of the crack surfaces occurs. Similarly, a finite domain with a size of 70 x 70 mm^2, containing a crack of 20 mm length and 30-degree inclination angle, was selected as the observed area. The distributions of the three stress components around the crack were investigated at confining pressure coefficients of 0.6, 0.4, 0.3, and 0.2; the corresponding vertical stresses were 33.33 MPa, 50 MPa, 66.67 MPa, and 100 MPa, and the corresponding closure amounts were 0.33, 0.50, 0.67, and 1.00, respectively. The variations of the contour plots of the three stress components sigma_x, sigma_y, and tau_xy around the crack during the loading process are shown in Figures 22, 23, and 24, respectively. The distributions of the three stress components exhibit distinctly different characteristics at various confining pressure coefficients, as can be seen from Figures 22-24. For the three stress components, the distribution of the contour plots in the observed area has a uniform character, and a slight stress concentration effect occurs at the crack tip. As the loading proceeds, the stress concentration effect in the area near the tip becomes relatively more obvious in magnitude compared with the area far from the tip. For the stress components sigma_x and sigma_y, the high stress and low stress regions are distributed on the two sides of the crack tip, respectively. As the stress sigma1 is gradually applied, the high stress region gradually transforms into relatively low stress, while the low stress region gradually transforms into relatively high stress; finally, most of the observed area is occupied by low stress. For the stress component tau_xy, the low stress is distributed on both sides of the crack surface, while the high stress is distributed at the two tips. As the loading proceeds, the stresses on both sides of the crack surface and in the area near the tip tend to decrease.
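The loading path described at the beginning of this subsection can be tabulated directly: with sigma3 = 20 MPa and sigma1_max = 100 MPa, the assumed linear closure law delta_a/a = sigma1/sigma1_max reproduces the coefficient-closure pairs listed above, as this short sketch shows.

sigma3 = 20.0       # confining pressure (MPa)
sigma1_max = 100.0  # maximum vertical stress applied during loading (MPa)

# States analysed in the text, indexed by the confining pressure coefficient.
for lam in (1.0, 0.6, 0.4, 0.3, 0.2):
    sigma1 = sigma3 / lam           # from lambda = sigma3 / sigma1
    closure = sigma1 / sigma1_max   # assumed linear law delta_a/a = sigma1/sigma1_max
    applicable = 0.2 <= lam < 0.68  # sliding occurs, per Figure 21
    print(f"lambda = {lam:.2f}: sigma1 = {sigma1:6.2f} MPa, "
          f"closure = {closure:.2f}, model applicable: {applicable}")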
To conclude, it can be considered that the stress around the crack surface and near the tip, whether in a region of high stress or low stress, tends to transform into lower stress as the loading progresses. As the closure amount delta_a/a increases and the confining pressure coefficient decreases during the loading process, the variation of the stress components on a circle of 0.05 mm radius around the crack tip is displayed in Figure 25. At the initial state, the magnitudes of the three stress components changed little on the circle around the tip due to the large confining pressure coefficient, although the closure amount was relatively small. As the loading proceeds, the stress components fluctuate significantly with the angle theta. Specifically, it can be clearly observed that when the closure amount changes from 0.33 to 1 while the confining pressure coefficient changes from 0.6 to 0.2, the maximum absolute values of the extreme values of the three stress components are 8.7, 6.3, and 9.1 times those of the initial state, respectively. The above analysis sufficiently demonstrates that the closure amount of the crack has a significant effect on the evolution of the stress field, and it is necessary to consider the change in closure caused by the boundary stress during compressive loading, which will certainly be helpful for a better and more accurate understanding of the fracture behavior of cracks inside the structure.

Conclusions

A prediction model for stress fields around the semiclosed crack in deep-water structures is innovatively developed in this study, in which the compressive and frictional effects between the crack surfaces, as well as the closure amount in the closed region, are comprehensively considered. The following conclusions can be drawn:

(1) The stress fields around the semiclosed crack under compression are derived based on the boundary conditions; they include both singular terms containing the stress intensity factor KII and nonsingular terms containing the three T-stresses (Tx, Ty, and Txy). These terms are critically related to the deep-water pressure, the friction coefficient, and the closure amount in the closed region. Furthermore, the fact that the KI singularity does not exist at the crack tip under compression is proved theoretically.

(2) According to comparisons between the isochromatic fringe patterns obtained from the experiments and the model proposed herein, the predicted results are in excellent agreement with the experimental ones, demonstrating that the proposed model can predict the actual stress field of the semiclosed crack more accurately and reasonably than the previous models.

(3) The closure amount of the crack surfaces is one of the key factors determining the stress fields around the crack. With the increase of the closure amount, each stress component around the crack always tends to change toward lower stress. Under the condition of constant stress, the degree of stress concentration at the tip is negatively correlated with the closure amount; however, under variable stress, a positive correlation is presented between the closure amount and the degree of stress concentration at the tip.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Impact of Wetland Development and Degradation on the Livelihoods of Wetland-dependent Communities: a Case Study from the Lower Gangetic Floodplains

'Wise use' of wetland ecosystem services has implications for achieving sustainable development goals. Globally, almost 87% of wetlands have been lost since the 1700s, with losses projected to be much higher by 2050 in developing economies due to urbanisation. Little is known about how this loss might impact people's wetland dependency at local scales in peri-urban landscapes. To understand people's perceptions about ecosystem services from the peri-urban Dankuni wetland in Eastern India and associated ecosystem changes, we conducted thirty-seven semi-structured interviews in a single village. Wetland-dependent people identified 18 ecosystem services of the Dankuni wetland. The ecosystem services included 12 provisioning services and two each of regulatory, cultural and supporting services. Farming and the use of wetland products including molluscs, fuelwood, fodder, fibre and fish were found to subsidize living costs and provide diverse livelihood options to local residents. However, encroachment of the wetlands by factories and blockage of the wetland's riverine connection were reported as the main factors degrading the wetland. As a result, the life and livelihood of people, especially of landless widows and older residents, were severely impacted. Respondents believed that it was possible to rejuvenate the wetland by restoring its riverine connections but stressed that vested interests supported its degradation. Their perceptions strongly impress upon the need for greater government accountability in wetland protection and the integration of local knowledge along with locally suited political action in wetland restoration programmes. In this context, we strongly advocate for the implementation of laws that allow for wetland protection under a socio-ecological framework.

Introduction

Ecologically productive wetlands provide a range of ecological functions, or ecosystem services, that far outweigh those provided by terrestrial ecosystems (Gardner and Finlayson 2018). Wetland ecosystem services can be analyzed using the framework detailed in the Millennium Ecosystem Assessment (MEA) (2005). For instance, they can be categorized under provisioning services such as food security (McCartney et al. 2010), supporting services such as regulating water and sediment quality, pollutants and nutrients (Chalov et al. 2017), regulating services such as mitigating climate change (Mitsch et al. 2013; Fennessy et al. 2018), and cultural services such as providing cultural and spiritual inspiration (Pedersen et al. 2019). From an economic perspective, the global value of these wetland-based ecosystem services is worth USD 47.4 trillion per year (Davidson et al. 2019). Moreover, by maintaining the most fundamental nexus between water, food and energy (Russi et al. 2013), wetland functions critically link human livelihoods with sustainable development (McCartney et al. 2010; Gardner and Finlayson 2018). It is globally acknowledged that these ecosystems provide the ecological infrastructure to meet a range of international policy objectives that advocate for the sustainable use of natural resources. Hence, the need to integrate their 'wise use' into national policies in order to achieve the transition to resource-efficient, sustainable economies is imperative (Ramsar Convention Secretariat 2016).
In fact, wetlands are central to achieving the Sustainable Development Goals (SDGs), with 'improving water quality', 'sustainable management of resources' and 'efficient resource consumption' identified as universal priority targets (Jaramillo et al. 2019), thus directly catering to SDG 2 (zero hunger), 4 (quality education), 6 (clean water and sanitation), 8 (decent work and economic growth), 11 (sustainable cities and communities), 12 (responsible consumption and production) and 13 (climate action). This is why balancing wetland conversion, sustainable utilization within the context of maintaining both human well-being and ecosystem services, and conservation approaches has been emphasized, particularly in developing countries, in which wetlands are being rapidly degraded (Mahmood et al. 2013; Darwall and Freyhof 2016; Mao et al. 2018; Finlayson et al. 2019; Kumar et al. 2020). Wetlands are model socio-ecological systems, i.e., systems with social and ecological subsystems, characterised by sustained human-environment interactions (Berkes 2017; Langan et al. 2018). The high dependence of local communities on wetland ecosystems has been documented throughout the world and especially in developing economies (Wondie 2018; Owethu and Buschke 2019; Camacho-Valdez et al. 2020; Aryal et al. 2021). Moreover, local ecological knowledge could be indispensable in addressing knowledge gaps on the status of wetlands in countries where detailed wetland inventories are missing. This might create space for developing integrative and inclusive conservation strategies by taking informed decisions on wetland resource allocation when faced with competing uses such as diversion for development purposes (Baird and Flaherty 2005; De Groot et al. 2012; Camacho-Valdez et al. 2013; Adusumilli 2015; Chaikumbung et al. 2016). There is growing advocacy in the current literature to overcome human-nature dualism and create management regimes in which the voices of local resource users are prominently represented, especially where inefficient administration, non-transparency, weak systems of regulation and corruption exist (Baird and Flaherty 2005; Sithirith 2015; Holl 2020; Kumar et al. 2020). In this respect, recent estimates suggesting an 87% decline in global wetland area since pre-industrial times (Walpole and Davidson 2018) are both ecologically and socially alarming. Infrastructure-construction-led wetland conversion and industrial waste disposal in wetlands were identified as important proximate causes of global wetland degradation (Van Asselen et al. 2013; Gardner and Finlayson 2018), particularly in developing countries. Over 80% of untreated wastewater is released into wetlands globally (WWAP 2012; UN-Water 2015), with lower-middle-income countries treating 28% and low-income countries treating 8% of their wastewater (Sato et al. 2013). Although wetland loss in developing countries was historically lower than in developed countries, future rates of loss are projected to be much higher in the former, particularly in Asia, with a predicted increase in urban population of 1.4 billion by 2050 (United Nations 2008; Hettiarachchi et al. 2015), thereby leading to increased risk of environmental disasters and livelihood loss (Ghosh and Sen 1987; Azarath et al. 1988; Smardon 2009). Water pollution is also increasingly worsening the condition of all Asian rivers and wetlands (Davidson 2014; WWAP 2017).
The Ramsar Convention on Wetlands of International Importance especially as Waterfowl Habitat has foregrounded wetland governance in the global environmental policy domain, but the Asian scenario highlights the convention's inadequacy in dealing with threats that originate from urban development policies shaped largely by broader political-economic forces of developmentalism (Hettiarachchi et al. 2015). In India, wetlands are protected under a national wetland law, the Wetlands (Conservation and Management) Rules, 2017 (Ministry of Environment, Forest and Climate Change 2017), under the overarching Environment Protection Act, 1986; yet various types of wetlands, for example marshlands, are categorized as 'wastelands' under national development policies (National Remote Sensing Centre 2010). Even though a law supersedes a policy, rapid urbanization at the cost of wetlands continues and has become the leading cause of the loss of ecosystem services in the Gangetic Plains (Das and Das 2019). For example, the East Kolkata Wetland (EKW), a Ramsar site located within the Lower Gangetic Floodplains, has contracted significantly due to encroachment of built-up area in the metropolitan city of Kolkata, resulting in reduced productivity and wild fish stocks (Kundu and Chakraborty 2017). The social costs of urban development for communities dependent on wetland ecosystem services in this floodplain remain less studied, other than in the EKW. The Damodar-Hugli interfluve in the Lower Gangetic Floodplains has a wetland complex called Dankuni (Sinha et al. 2013), with dominant marshland vegetation, in which one of the authors (TA) conducted population surveys of the Fishing Cat (Prionailurus viverrinus), a wetland-dependent wild cat species (Adhya et al. 2011). The Fishing Cat is a high-rated 'Evolutionarily Distinct and Globally Endangered' (EDGE) species, i.e., a priority species for research and conservation, as well as a 'Vulnerable' species according to the IUCN Red List assessment (Mukherjee et al. 2016; Tensen 2018). It also deserves the highest protection measures in India as per the Indian Wildlife Protection Act, 1972. The study emphasized the deleterious impact of industries and roads on Fishing Cat habitat. Later, in 2012, a Public Interest Litigation (PIL) was filed at the regional high court by non-government organisations, as much of the wetland encroachment was happening without adequate land and environmental clearances (Adhya 2015). Further, marshlands are recognized as 'wetlands' under the Wetlands (Conservation and Management) Rules 2017 (Ministry of Environment, Forest and Climate Change 2017), which prohibit the alteration of their ecological character for development purposes. Local residents like farmers, fishermen and inhabitants of villages surrounding the wetland complex, who were both directly and indirectly dependent on the wetlands for their provisioning and regulating services (e.g., flood prevention), had also registered protests against the degradation, with one prominent activist being murdered (Adhya 2015). However, irrespective of litigations, protests and the existence of environmental laws, the degradation of the Dankuni wetland complex continues. This is partly facilitated by the non-transparent practice of declaring tracts of wetlands as agricultural land and thereafter converting them for industrial or real-estate construction purposes, as has been acknowledged in the draft West Bengal Wetlands and Waterbodies Conservation Policy, 2012.
Moreover, an inventory of the state's wetlands is yet to be prepared, as mandated by the Wetlands (Conservation and Management) Rules, 2017. This indicates the prevalence of serious apathy towards non-charismatic, non-protected, peri-urban wetlands, and the absence of the will to enforce regulations. Such wetlands are rather considered as easy land banks that can be converted for development purposes. Thus, through a study of the direct and indirect dependencies of local residents on its ecosystem services, we wanted to understand the importance of this wetland, now threatened by development, to people and the environment. Some of the interviewees were also directly involved in protests and campaigns against the wetland degradation occurring throughout the marshes, which encompass a number of villages with similar demography. We considered this study critical to understanding the implications for the sustainable use of this wetland, the maintenance of its ecological character and the persistence of threatened species like the Fishing Cat. With this background, the specific objectives of the study were to investigate: a) local residents' dependency on the Dankuni wetland complex, b) their perceptions of changes in the condition of the wetland, c) their perceptions of changes in their livelihoods and living due to wetland changes, d) their perceptions of political-economic forces as drivers of change, and e) how, according to them, the threats to the ecosystem could be addressed.

Study Site

We chose to conduct the study in a single village, Jhakari (22.75N, 88.29E to 22.75N, 88.30E), located on the fringes of the Dankuni wetland complex (see Fig. 1). The Dankuni wetland complex is approximately 30 km² in area and is perhaps one of the last contiguous marshy stretches in the Damodar-Hugli interfluves of the Lower Gangetic Floodplains. It is traversed by one of the busiest railway tracks of the region and is bounded by a national highway to the east. The wetland complex is dominated by both tall and short emergent vegetation, which are visual cues of marshlands. It experiences seasonal inundation and flooding during the monsoon (June-September), especially due to its connection with the river Ganges, and starts drying up post-monsoon (October onwards). By summer (March-May), surface water is retained only in some depressions. The wetland is a popular birding site as it provides refuge to both resident and migratory birds throughout the year (Hazra et al. 2012). Apart from this, freshwater fishes, snakes, turtles, amphibians and various kinds of insects, especially damselflies, have been reported by nature enthusiasts, the exact numbers of which remain to be ascertained. Mammals like the Small Indian Mongoose, Palm Civet, Small Indian Civet, Golden Jackal, Jungle Cat and Fishing Cat are also present. During the dry season (December to May), cultivation takes place in some parts of the wetland (Hazra et al. 2012). Some areas of the wetland were converted into small-scale aquaculture farms in the last two decades. Local residents also collect grasses, wild flowers and stalks, plant parts, molluscs and wild fish from the wetlands. Jhakari is a typical peri-urban village located on the fringe of the Dankuni wetland complex, neither completely rural nor fully urban. It is well connected by road and rail infrastructure and is located 35 km from the metropolitan city of Kolkata.
The human population of the village consists of approximately 1000 families and the structure of the village society is heterogeneous, with people belonging to both Scheduled and General Castes. Most people are Hindus, and a small section of Muslims, who are economically poor and landless, is also present.

Methods

Fieldwork was carried out in 2019-2020 and thirty-seven semi-structured interviews (7 female and 30 male interviewees) were conducted through snowball sampling. The interviews were conducted in the local Bengali language. A villager was first approached and asked whether he/she was native to the village and could devote some time to the interview, after which the motivation of the study was explained. It was also stated that their names would remain anonymous. Upon gaining consent, semi-structured interviews were conducted with the questions structured around the objectives described above (see Supplementary Material). No guideline or reference questionnaire was used to design the study questionnaire, as it was formed based on the knowledge gained during one of the authors' (TA) informal interactions with the local community when she conducted her survey on the Fishing Cat here. With the respondents' consent, the oral testimonies were recorded using a voice recorder. Information was collected on their dependency on wetlands, starting with leading questions such as whether they farm, fish or collect anything from the wetland that helps them in their daily lives and provides them a livelihood. Secondly, they were asked whether they perceived any change to the wetlands since their childhood as well as recently. During the course of the conversation, we tried to understand how the change might have affected their livelihood and living. Thirdly, we examined their perceptions of threats to the wetland. Lastly, we asked 'if anything can be done to address the threats', 'if so, what' and 'if any interventions were taken by local residents or the government'. Most women declined to participate in the interviews as either they were extremely busy with their daily household chores or they felt that the male member of the household was more knowledgeable and better placed to appear for an interview.

Later, the audio-recorded interviews were transcribed and translated. The transcripts were divided into broader themes such as 'ecosystem services', which was further subdivided into 'provisioning', 'regulating', 'cultural' and 'supporting'. Under the theme of 'provisioning' we noted all the statements where people described the usage of different materials obtained from the wetland. Under the 'regulating' and 'cultural' themes, we noted all the statements where people described the wetland's role as a regulator of other environmental factors, especially water, and as a site for developing affective social ties, respectively. Under the 'supporting' theme, we documented the statements describing the wetland's production of a conducive environment for the growth of the materials that are derived by people. The other themes were 'impacts', 'reasons for degradation' and 'addressing threats', with the latter containing information on efforts to address threats to the wetland. Other relevant information was noted in a 'comments' section.
In the results section we have extensively used people's testimonies, but instead of using actual names, we have used the initials of their full names as anonymised markers.

Wetland Dependency of the Study Village

We identified 18 ecosystem services from the oral testimonies (see Table 1), out of which 4 provided important sources of livelihood. Twelve products, including plants and animals, edible and non-edible, were identified by the respondents as obtained from the wetland. Interviewees commonly cultivated edible crops such as paddy, onion, ladies finger, spinach, coriander, beans, cow pea and Indian pea. Among these, paddy was the most important resource for local consumption as well as livelihood. The harvested rice is generally kept for consumption and the excess rice is sold at 1000-1300 Indian Rupees (INR)/bag, with each bag containing 60 kg of rice. 50 year old GB said, "The paddy we harvest is good in quality, much more healthy and tasty than the rice provided at government ration shops, which is often mixed with dirt and small stones," indicating that he felt a certain pride in harvesting his own rice. Another farmer, 43 year old NT, further added that the rice from fair price shops costs around 32-33 INR/kg but his children refuse to eat it because of its poor quality. On the other hand, premium quality rice was reportedly too costly to afford. Other crops are also sold: onion @ 30 INR/kg, coriander @ 70 INR/kg, spinach @ 70 INR/kg, beans @ 40 INR/kg, okra @ 53 INR/kg and Indian pea @ 60 INR/kg. KK, a 30 year old farmer, seemed happy with the significant profit returns from his piece of land after harvesting these crop types: "I spent 20,000 INR for harvesting onion and okra but earned 80,000 INR. After harvesting the paddy, I planted jute in the same land. I spent 20,000 INR in labour again but earned 50,000 INR. For paddy, if I spend 10,000 INR in labour, I get back 26,000 INR."

Bamboo (Bambusa balcooa) and mud were frequently used wetland products for constructing houses. Bamboo was also used for making fish-catching traps, broomsticks and baskets. These products provided additional sources of livelihood and were particularly useful when cultivation failed. 29 year old DD's crops failed in 2019 but he survived by selling fish-catching traps for 250 INR and baskets for 50 INR. For widows from Jhakari who were landless, collecting and selling molluscs from the wetland provided the major source of livelihood, through which they could provide for their children. AK, a 63 year old widow, reminisced about the physically challenging yet economically rewarding work of searching for molluscs in waist-deep water in the wetland during the day, in the company of other women. They then sold them for 80 INR/kg via middlemen, who gave back 30-50 INR/kg. 60 year old SK rather preferred selling them door-to-door or by herself in the market and reportedly earned 200 INR/kg. Even after their children grew up and started earning, these widows did not stop this work. They reported that earnings from selling molluscs helped them remain economically independent as well as provided means to support their families when needed. As AK explained with a tone of clear confidence, "My sons look after me but I choose not to ask for money from them to cater to my needs and cravings." She also added that molluscs are packed with nutrition and are good for eyesight, as she had been told by doctors. Almost all respondents fished in the recent past (until about 5 years ago).
Many of them are still involved in fishing, and fish are caught by various methods. SK caught fish with her bare hands rather than with fish traps or nets and reported that Singee (Heteropneustes fossilis) and Magur (Clarias batrachus) are difficult to get but, if caught, fetch up to 1000 INR/kg. Smaller fish like Koi (Cyprinus rubrofuscus) can be sold for 200-300 INR/kg at the nearby market. Fuelwood (Kath-shola / Aeschynomene indica, jute-sticks and bamboo), fodder, edible plants and plant fibre are useful wetland resources in the village. BT, a 50 year old widow, was especially dependent on fuelwood collected from the swamp to run her small eatery, which catered to farmers in the field and to factory workers. Villagers also depended on swamp grass to feed their cows. Leafy vegetables and stalks of aquatic plants were widely collected for local consumption. Poorer people such as widowed women and older residents sold them too. Kath-shola (Aeschynomene aspera) was also used for making marriage gear for brides and grooms and for decorating religious deities. KK (30) reported the selling price to be 300 INR/bundle, while SD (49) said that people earn 400-500 INR/bundle, rising to 1000 INR/bundle during summer. Water from the swamp was reportedly used for irrigating farm plots. "If there is no swamp, then water for agriculture will not be available anymore. The surroundings are drying up day by day," said 32 year old PM. Older residents also benefited from the wetlands as they could undertake less laborious tasks and were still able to support their families: "I cultivate a little bit of onion and take the cattle out to graze. I also collect edible plants from the swamp. At my age, that's as hard as I can work to support myself and the family. But I will not be able to do this if the swamp is lost and will have to depend on others for food."

Regulating Services

From the oral testimonies, we identified two regulatory services of the wetlands: flood control and water purification. PM (32) explained, "If the wetlands degrade, the waters in monsoon will flood our houses." HT (70) said, "People who went to work in the swamp did not need to carry water with them because the swamp water was available. It was like filtered water."

Cultural Services

Residents shared that the wetlands provided them with the opportunity to reflect on life, while younger children enjoyed recreational activities. 49 year old SD shared how children in the village created doll houses and dolls made out of mud taken from the wetland, whereas young adults used the wetland space as a social bonding site. The wetlands also inspired reflection among interviewees, evident in their words: 32 year old ST, for instance, stated how the swamp brings prosperity to them and gives them a chance to cultivate the "food of self-respect".

Supporting Services

From the oral testimonies, we identified two supporting services: nutrient retention and the sustenance of fish stocks. 35 year old PS, for example, stated how the waterlogged lands became fertile after the recession of the floods, thus facilitating farming. 30 year old KK, on the other hand, explained how the wetland created a conducive environment for fish to breed in.

Change in the Wetland Quality and its Human Cost

Respondents perceived a steady degradation in the wetland's quality over the last 20 years, with the trend worsening rapidly during the last three to four years.
From their testimonies, it is apparent that this change has affected their livelihoods and living in significant ways. 58 year old ST stated that the swamp was four times bigger two decades back compared to its present extent. The majority of the respondents reported a reduction in the quantity of products obtained from the wetland. "Tides used to come into the swamp through the canal which connected it with the Ganges, bringing in a variety of fish like Bele, Koi, Singhi, Punti and their eggs," said 50 year old GB while describing how the wetland functioned in the past. Large prawns also used to be available, which fetched significant monetary returns. Post-monsoon, the flood waters in the swamp would slowly recede through the canal into the Ganges, exposing nutrient-rich soil fit for harvesting. "Paddy like amon, neramon and beta, pulses, okra, potato, cauliflower, onion, leafy vegetables, gourds and water melons could be cultivated in the past. Bags full of food used to be harvested during April," reminisced 50 year old GB. The availability of molluscs and edible wetland plants, as well as of Aeschynomene, which provides fibre, reportedly decreased substantially over the years.

According to the respondents, the swamp water had also degraded in quality due to the loss of its connection with the Ganges over the last 15-20 years. This had increased the duration of waterlogging and created unhygienic conditions. "The swamp water is rotting. People get skin diseases now," said senior citizen DB, while teenager P remembered how clean the water in the canal used to be during his childhood, when it was connected to the Ganges: "The water was so clear that if one dropped a coin, it would be visible right till it hit the bottom. I used to dive into the canal from the bridge to take a bath. The water has now become blackish, especially since the last three to four years." The respondents' testimonies suggest that fish diversity and abundance had decreased substantially over the years, especially Nandus nandus, Ophisternon bengalense, Glossogobius giuris and Mystus vittatus, as 65 year old DB shared. "I still remember how my grandmother used to come back with baskets full of crabs in the past, which she used to sell for 2-3 INR/kg, whereas now we sell them for 100 INR/kg," reminisced 49 year old SD, implying that crabs had become scarce. Fish spawn were reported to be dying in the wetland as the outflow of the swamp water got blocked. Aquatic weeds and leeches had instead proliferated in recent times and the water had become unsuitable for irrigation. 52 year old U shared, "We don't drink water from the swamp anymore."

The degradation in the quality of the wetland had changed the way of life in the village. GB, who is 50 years old now, shared that as a younger person he did not have to go outside for work, as they could get expensive fish like Heteropneustes fossilis for free from the swamp, as well as pulses and a variety of vegetables. "Those days are gone now. If I go fishing now, I will catch fish worth less than 250 INR, whereas if I work as a hired labourer I will earn at least 250-300 INR/day. Fish from the swamp was so integral to our diet. The fish catch used to be huge. Even after feeding the whole family, the remainder could be sold for 300-500 INR/day during monsoon." Similar sentiments were echoed by other respondents, and many reported leaving agriculture and fishing due to lower economic returns, thus being forced to work as hired labourers.
Due to lower productivity, many had sold off their land to overcome financial shocks. "Businessmen buy these lands at lesser prices (8,00,000-10,00,000 INR/bigha), taking advantage of unemployed people who are in need of cash, and thus sell them at a huge cost (2,00,000-3,00,000 INR/katha)," shared PS (35). Some land-owners also decided to convert parts of their water-logged land to small-scale aquaculture. During 2019-2020, the region received excessive rainfall due to cyclonic depressions over the Bay of Bengal, which further aggravated the situation. Due to the blockage of the channel connecting the wetland to the river, excess rain water could not flow out, prolonging the waterlogged situation. The standing water usually starts receding from the land by the end of October, yet GB's land remained waterlogged till mid-January: "In October the land must remain muddy but it has to lose its moisture after that if the onion yield is to be good. It is too late now." The calamity even disrupted age-old cultural practices, as reported by 38 year old RB: "By this time okra plants become upright and even start flowering. It takes at least 15-20 days for the seeds to germinate. As soon as they do, we celebrate Makar Sankranti, a festival that marks the onset of the new harvest, a 250 year old tradition. But where is the harvest to celebrate this year?"

Drivers of Change

Respondents unanimously identified the presence of factories as the main factor causing the degradation of the Dankuni wetland. According to most respondents, the emergence of garment and fertilizer factories and warehouses in the last 15-20 years coincided with the beginning of degradation and wetland shrinkage, as they were constructed by filling up parts of the wetland. HT, a senior citizen, explained, "The swamp has shrunk to a fourth of what it used to be due to the construction of factories. 25 years back, this was all swamp," he said, pointing to the horizon and beyond on both sides. 34 year old KT detailed, "The first factory was constructed in the wetland 15 years back. Gradually, other factories also emerged." ST added, "They dump sand, soil, ash and debris into the wetland everyday and this blocks the passage of water out of the swamp." PM further explained, "The discharged solid waste materials have accumulated in the canal connecting the wetland to the Ganges, making it significantly shallow and the waters stagnant." The respondents mentioned that the sluice gate of the main canal was not functioning, due to which wastewater released by the factories stayed locked inside the wetland. Large buffalo shelters constructed along the canal also added to the problems. "The cattle waste is discharged into the swamp. One can find syringes and broken pieces of glass in the swamp nowadays," said 50 year old TB.

Perceptions on Threat Mitigation

Almost all respondents said that the only way to restore the health of the wetlands is by dredging the canal that connects the wetland to the river. They also reported that at various times they had informed local politicians and administrative officials of the dire situation of the wetlands, but to almost no avail. "They created a small outlet after we agitated but that is not enough. The sluice gate of the main canal has to be repaired. It is broken and clogged with debris. Some of us went there to clean it up but did not succeed," said KK, himself a senior citizen. Respondents felt that the weed-cleaning drives conducted by the panchayat did not yield the desired results.
Lamenting failed mitigation actions, 70 year old HT said, "There are so many factors due to which the swamp is dying. Just cleaning water hyacinth is not enough. We are unable to farm but our MLAs and MPs do not bother. Nowadays factories run the government, so the government will work to benefit them." The respondents also thought that the onus of protecting the swamp lay with the local community as well. In this respect, 32 year old ST stressed, "If the owners protest in unison, the government will have to respond. But if they sell off their land instead, how will the situation be rectified?" However, the views of 38 year old RB differed in this matter: "People do not raise their voices as this kind of destruction is being done by very powerful people at the helm who have a lot of money. They will squash us like insects." In addition, lands belonging to community members were often sold off without their knowledge. Similar processes were at work in other villages surrounding the wetland, which led to its degradation on the one hand and impacted the livelihoods and living of dependent communities on the other.

Discussion

Our study showed that villagers residing beside a peri-urban wetland, Dankuni, in the rapidly urbanising Lower Gangetic Floodplains, perceived that the wetlands enhanced the quality of their lives, subsidized their living and provided livelihoods. However, the establishment of factories and the pollution discharged from them into the wetland, along with political apathy towards rectifying the same, reportedly led to diminishing returns/services and had serious negative social implications, including loss of livelihood, increased disaster risk and exposure to financial vulnerabilities. We recorded eighteen ecosystem services of the Dankuni wetlands, including twelve provisioning services such as edible crop farming (for example, paddy and vegetables), non-edible crop farming (jute) and the collection of wetland products such as fish, molluscs, edible wild plants, fibre, water, mud, bamboo, fuelwood and fodder. The wetland was found to subsidize the living costs of respondents. For instance, they could procure cattle fodder, fuelwood, fibre, edible plants, house-building materials and molluscs without any investment. Apart from this, the wetland also provided regulatory services (like flood control and water purification) and supporting services (like nutrient retention and providing refuge for fish stocks) which sustained the provisioning services, as well as cultural services that enhanced the quality of their lives. Landless widows and older residents seemed to be solely dependent on the wetlands for subsistence and livelihood. This dependence of women and older residents on common natural resources (such as wetlands) has in fact been well documented elsewhere (Ahmed et al. 2008;Mundoli et al. 2017;Sinthumule 2021).

However, the oral testimonies clearly suggested that there has been significant erosion in all ecosystem services of the wetland over the last 15-20 years and especially in the last three to four years. This degradation coincided with the emergence of factories during the same time span, which according to locals were constructed by filling up the wetlands, thereby blocking the flow of water between the wetland and the river. Moreover, wastewater generated from the factories and debris from buffalo shelters had contributed to blocking the canal connecting the wetland to the Ganges.
A similar process was observed by one of the authors (TA) in Chilika, a Ramsar site on the Indian eastern coast, where buffalo shelters were first constructed on embankments leading to the lagoon, presumably to create blockages. Thereafter, portions of the lagoon were cut off to create illegal aquaculture farms. Constructing buffalo shelters could therefore be a ploy to obstruct vigilance and facilitate wetland conversion. Shrinking and the subsequent degradation of the wetland's water quality rendered it unfit for drinking and irrigation purposes. Fish abundance and diversity were especially affected for the same reasons, and the progressive clogging of the wetlands has decreased their capacity to regulate floods in the face of erratic and excessive rainfall, which affected farming, as was experienced by respondents during 2019-2020. In an era of accelerated climatic shifts, rainfall patterns are poised to become erratic and could therefore cause more urban floods (O'Donnell and Thorne 2020). This has implications for disaster risk management in the urban and peri-urban areas surrounding the Dankuni wetlands. Unsustainable use of wetlands has been known to impair wetland functions and permanently damage socio-ecological systems elsewhere in the world (Vilardy et al. 2011;Jaramillo et al. 2018). Similar storylines are evolving out of most South Asian countries, in which water sources and river health have been severely compromised due to unplanned development (Pal and Talukdar 2018;Reis et al. 2017;Sarkar et al. 2021), decreasing their ecosystem values.

In fact, the degradation of the wetland was perceived by villagers at Jhakari as an attack on a self-sustaining ecosystem which gives, or had given, them prosperity, autonomy and prestige, with the lands of many being simply snatched away without their consent or knowledge. People felt that they were increasingly being pushed towards relying on external actors such as urban markets and an unpredictable climate. From being self-dependent, they were being pushed to work as urban labourers for financial security. Urbanisation has been known to cause the loss of these affective dimensions and to silence the voices of the marginalized sections of society (Unnikrishnan et al. 2016;Mundoli et al. 2017). As a result, as in other parts of India (Mahanta and Das 2012), increasing rural-to-urban dependency may be witnessed because of the vicious cycle of deteriorating wetland health and, consequently, of people's diminishing care for and ownership towards it. Some villagers had sold off their land due to the diminishing ecosystem services of the degraded wetlands.

Even though we did not directly explore the relevance of the SDGs to wetland dependence for the local community at our study site, the testimonies foreground vernacular forms of people's understanding of sustainability. Through their descriptions of various ecosystem services, people referred to various conceptualisations of the SDGs, such as SDG 1, no poverty (living cost subsidy); SDG 2, zero hunger (food material provisions); SDG 3, good health and wellbeing, and SDG 6, clean water and sanitation (emotional wellbeing and good water quality); SDG 8, decent work and economic growth (decent work of collecting materials with self-respect); and SDG 14, life below water (fish nursery). Thus, maintaining the ecological health of wetlands could lead to sustainable cities and communities (SDG 11). Jaramillo et al.
(2019) identified the improvement of water quality and the adoption of the 'wise use' of wetlands as central to achieving a range of SDGs covering environmental health, equity, human well-being and justice. Respondents at Jhakari believed that the water quality could be improved if the canal connecting the wetland to the river Ganges were dredged, which would allow the polluted water of the swamp to flow out and tidal waters to flow in. Globally, wetland restoration with inputs from local communities has been encouraged because of their better understanding of the ecosystem, given their closer association with it. Although local knowledge can facilitate successful community-based conservation and restoration of ecosystems, including wetlands (Amano et al. 2018;Kongkeaw et al. 2019;Walle and Nayak 2020), it is hardly encouraged because of the absence of cross-sectoral policy integration for environmental protection, along with the prevalence of rampant corruption, the subsequent suppression of local voices and the negligence of environmental laws enhancing socio-ecological vulnerability, a common practice in developing countries (Hettiarachchi et al. 2015;Sen and Nagendra 2020). People's testimonies identified probable political actions which can be useful to centre-stage socio-ecological concerns and improve ecological health, as has been shown in Apipalakul et al. (2015) and Roose and Panez (2020). The testimonies also suggested technical interventions for rejuvenating the wetland. However, the decade-long litigation battle at the Dankuni wetlands to prevent the illegal filling up of wetlands, coupled with the presence of a syndicate between local politicians, administrators and factory owners to rapidly develop the wetland, as perceived by the interviewees, foregrounds the need to increase government accountability and uphold the core tenets of sustainability to counter (mal)development: 'precautionary' and 'polluter pays'. It is pertinent to stress here that existing Indian laws allow for the involvement of local communities in conserving socio-ecological spaces. For instance, the Indian Biological Diversity Act, 2002, has provisions to declare areas as Biodiversity Heritage Sites, the criterion being that traditional practices will sustain threatened species and maintain the ecological functions of an ecosystem. Globally, there is consensus that wetlands should be conserved within a socio-ecological framework (Kumar et al. 2020). In this regard, the remaining wetlands outside protected areas in India can be conserved by developing management plans in collaboration with local residents, researchers, administrators and politicians as per the provisions in the above act.

Conclusion

The Lower Gangetic Floodplains, which include the Dankuni wetland, are replete with a variety of wetlands that sustain the livelihoods and culture of many communities in both India and Bangladesh. However, our case study from Jhakari shows that such peri-urban wetlands are being developed rapidly, thus compromising sustainable and resilient futures. Popular opinion often points towards rising population as the sole cause of ecosystem changes. However, taking a cue from the people of Jhakari, we need to hold (mal)development models accountable for such negative changes. People's prescription of increasing governmental accountability points towards vernacular conceptualisations of the 'precautionary' and 'polluter-pays' principles, which are core tenets of sustainability.
The social and ecological costs of such a pursuit of development reinforce the sense that sustainability remains a rhetoric. Wetland ecosystems are crucial for achieving many SDGs but continue to be sacrificed at the altar of development. Even though our study is situated in a single peri-urban village, Jhakari, the lessons we draw from people's lives there resonate with different communities whose lives and livelihoods have been threatened by ecosystem degradation across Asia. People's accounts of ecosystem degradation, its repercussions and the actions needed to restore ecosystem functions therefore need to be taken seriously as evidence in academic research, institutional mechanisms and on-ground actions. Such integration of local knowledge and locally suited action will help in building place-based sustainability models. The need to enforce Indian wetland protection laws cannot be stressed enough. More importantly, we strongly suggest that Indian laws that allow for building socio-ecologically sensitive, collaborative and constructive conservation models, encompassing local residents, scientists, policy makers and administrators, be explored with immediate effect for the protection of wetlands in rapidly developing landscapes.
Gramáticas de la (¿post?) violencia: identidades, guerras, cuerpos y fronteras

Deserving victimhood: kinship, emotions and morality in contemporary politics

This paper is about the place of family values, kinship relations and feelings of compassion for victims in national public space. Setting out from a description of various public affairs concerning the relatives of the disappeared in Argentina, I show the key role played by blood ties and family values in forming a legitimate political representation. While the claim of blood ties with victims had been instituted as a legitimate form of political representation ever since the return to democracy, over the last decade or so sentiments towards victims have become incorporated into the State, enabling the latter to be imagined as a victim too. Here I explore diverse assessments of these affective dispositions, the critical place attributed to suffering in forging forms of governmentality, and the significant role played by the State in the unequal distribution of feelings of compassion.

Introduction

We do not mourn mass murder unless we have already identified with the victims, and this only happens once in a while, when the symbols are aligned (Alexander 2002: 4)

After thirty-six years of relentless searching, on August 5th, 2014, Estela Barnes de Carlotto, president of the Grandmothers of the Plaza de Mayo Association (Asociación Abuelas de Plaza de Mayo),1 finally found her own grandson, Ignacio, son of her daughter Laura, who had disappeared during the dictatorship and was killed after giving birth.2 While the appearance of each of the previous 113 grandchildren had also been marked by press conferences and the publication of news reports, this case broke with all the Association's previous routines and became an extraordinary event and a media sensation: Estela made the front pages of the country's leading national newspapers, Estela and Ignacio were pictured on the cover of a well-known current affairs magazine along with other 'famous' personalities of the year,3 and they subsequently appeared on numerous television shows broadcast to large national audiences. Estela, accompanied by her grandson Ignacio Montoya Carlotto, was received by then president Cristina Fernández de Kirchner and by the head of the Catholic Church, Pope Francis. Just a few months later, the first books on the case were already being published (Seoane & Caballero 2015;Folco 2015).
On the same day and at the same time as the conference at which Ignacio was presented to the public, María Victoria Moyano Artigas, Recovered Grandchild 53, was taking part in a union protest when she was brutally arrested by security forces from the national gendarmerie. She was initially attacked with pepper spray; her car window was smashed, the door wrenched open, and she was dragged out of the car by force and hauled off to a police station with other union leaders and activists. Given the simultaneity of these events, the scant public attention devoted to María Victoria's arrest becomes particularly significant. No figure from the national government mentioned the incident and the national press provided little coverage, in contrast to the innumerable pages devoted to the appearance of Ignacio, Grandchild 114. These acts of repression by the State against a recovered granddaughter aroused neither feelings of compassion for her misfortune, nor moral indignation over the infringement of political rights under the rule of law.

1 Referred to hereafter as Grandmothers (Abuelas).
2 During the dictatorship, the State armed forces illegally appropriated an estimated 500 children born to disappeared women, or kidnapped along with their parents, and handed them over to other families to be raised under different identities. In most cases, these families belonged to the armed forces (Regueiro 2010). After the return to democracy, a 'grandparentage index' (índice de abuelidad) was developed to prove the genetic relationship between alternate generations, even in the absence of living parents.
3 They appeared in a cover photo alongside artists, show business celebrities and media people. See the magazine Revista Gente, No. 2577, December 2014.

How can we explain this uneven distribution of compassion towards victims of State terrorism who objectively share the same condition? What processes allow some victims to acquire titles of nobility in Argentinean politics? Both grandchildren were born in clandestine camps, their parents had been disappeared and both had been raised under a fictitious identity by appropriator families. Yet, despite these similarities, they failed to elicit the same feelings of compassion or to share the same public legitimacy.

This article is about the place of family values, kinship relations and feelings of compassion for victims in contemporary politics. It explores the unequal distribution of feelings of compassion towards victims, assuming that the contrast between public indifference over Victoria's arrest and the public concern and emotional outpouring over Ignacio's appearance provides an insight into the structure of moral feelings responsible for establishing frontiers and hierarchies within this universe of social relationships. As Sarti recognizes in her analysis of the social construction of the category of victim, paying attention to those left in the shadows enables us to comprehend this as a moral rather than legal process (Sarti 2011: 54).
Following her suggestion, my intention here is to explore in depth the establishment of borders among social actors ostensibly sharing the same circumstances. I plan to analyse the social conditions that allow the recognition of 'good victims' and 'good families' as opposed to those who, subject to the same historical conditions, are deemed illegitimate victims, or at the very least 'bad' victims who deserve public indifference. As we shall see, these borders and hierarchies involve critical political issues linked to recent transformations in the relationship between the Argentine State and the human rights movement.

Setting out from this brief sketch and based on the description of various public episodes, I show the key role played by blood ties and family values in forming a legitimate representation of victims that allows them to participate in contemporary politics. With this aim in mind, I focus on describing: a) the attributes that qualify human rights leadership in post-dictatorship politics; b) the qualities of grandchildren that enable them to be included within the boundaries of their biological families and, in the process, within the national community; and c) the recent transformation in the prominence of victims since the coming to power of Kirchnerism (2003-2015). While the claim of blood ties with victims was instituted as a form of legitimate political representation after 1983 and Argentina's return to democratic governance (Vecchioli 2005), over the last fourteen years sentiments concerning victims have been incorporated within the State itself, allowing the latter to be imagined as a family of victims too. Emotions and feelings about victims and their relatives have assumed a critical place in the State's integration. How then can we account for the significance of kinship categories for political action? What does the language of kinship ties express about the relations between the State and civil society?

This article converges with a recent literature that recognizes the increasing amount of space occupied by victims and humanitarian sentiments in contemporary politics and the role that compassion, sentiments and imagination play in shaping forms of governmentality (Alexander 2002;Fassin 2011;Sarti 2011). Its implications for the present case will be covered at length over the course of this article. To accomplish these aims, I draw on authors who link family, kinship and politics (Lenoir 2003;Das 1996;Anderson 1991;Bourdieu 2001) in order to understand the place of blood relations within the State. Following the inspiration of Veena Das (1996), who based her ethnography of Indian national society on diverse forms of public speech, here I analyse discourse and imagery as signifying practices and performative acts that contain a series of moral imperatives and emotional and rhetorical resources. A common memory between victims and the State is established through the latter, while symbolic borders are instituted within this moral community. By engaging in a microscopic analysis of these performative acts, I reflect on broader issues linked to the political transformations of Argentina's post-dictatorship era.
Taking affects to be the very substance of politics (Stoler 2007) rather than an epiphenomenon (Laszczkowski & Reeves 2015), this article inscribes itself in current debates on the place of compassion towards victims in modelling contemporary politics. As Hirschman suggests, "sentiment's history is an inspired way to trace the changing form and content of what constitutes the subject and terrain of politics" (Hirschman 1977: 16). Among the vast literature produced on the topic, Jenkins (1991) and Stoler (2007) are especially useful for understanding our case, since they analyse how States "culturally standardize the organization of feeling" (Jenkins 1991: 140) and the political consequences produced by "technologies of rule" based on sentiments (Stoler 2007). Setting out from these premises and recognizing that affections are the moralizing self-presentation of the State (Stoler 2007), I explore the diverse assessments of these affective dispositions, the critical place assigned to the suffering of the victims of State terrorism, and the significant role played by the State in the distribution of sensibilities surrounding victims and their relatives.

The academic literature on the Argentinean dictatorship encompasses many crucial issues, such as the reconstruction of the repressive modalities used by the Armed Forces (Calveiro 2008), the emergence of the human rights movements (Catela 2001), the development of transitional justice mechanisms (Teitel 2002) and the social construction of emblematic memories (Crenzel 2011). In this article I shall dialogue with this well-established literature by seeking to move beyond such canonical topics and analyse the prominence of the relatives of the victims in the ways of doing and imagining politics in the post-dictatorship period. By adopting such an approach, I seek to fill a gap in the existing literature, placing victims and their relatives at the heart of the State and shedding new light on the ways in which the human rights cause, its protagonists and the sentiments and values associated with them are intertwined with State practices and shape current politics.
My analysis is based on a comprehensive survey of public documentary sources on the restitution of children of the disappeared produced between 2004 and 2017, as well as earlier ethnographic field research on a number of different State agencies responsible for dealing with the relatives of victims.

Blood ties in the political space

The appearance of Grandchild 114 proved to be an extraordinary event: a unique occurrence that attracted the involvement of many people with no direct connection to the case, turning into an emotional drama shared by wider Argentinean society. This situation became evident at the press conference, a routine event held to celebrate the appearance of each new grandchild. On this occasion, though, the Grandmothers' head office was too small to accommodate the large crowd that turned up spontaneously to celebrate the encounter. Instead, they accompanied the announcement from the streets, cheering, singing and blowing car horns. After the conference, Estela came out onto the balcony to thank those expressing their solidarity outside. Over the following days, a poster made by the Grandmothers celebrating the event appeared in the streets of Buenos Aires: "114 recovered grandchildren. Each of them a prize of peace and love".

Newspaper reports emphasized the collective emotion that led to the recovery of Ignacio's identity; he immediately became "everyone's grandson" and "a whole symbol" of the unwavering struggle of the Grandmothers and of the value of love and caring expressed by their endless searching. Media reports emphasized the feelings stirred by the event: surprise, excitement, joy, affection. These sentiments extended beyond those directly involved, since Ignacio was considered "an exciting story for everyone." Ignacio's appearance was celebrated not just as a victory for the human rights associations but as a collective achievement, a cause for national celebration: "It is a triumph for all Argentineans"; "All the people of this country are joyful about this moment and share their happiness with us".7 The feeling of joy was compared to winning a football World Cup, a national passion. Not for nothing was Ignacio baptized "the Messi of the grandchildren", in reference to the Argentine soccer player and world superstar Lionel Messi.

Academic experts were also unanimous in highlighting this same emotional dimension: "Why had such a unique case […] touched millions of people as though it were someone in our own family? Who did not share her [Estela's] joy?" (Ludueña 2014: 6). "A 'physical happiness' mysteriously revealed to thousands of people on receiving the news until precipitating a few hours later into a collective emotion" (Taitan 2014: 15). In evaluating the event, distinctions between the victims were emphasized: "tears welled up easily, the result of feelings of empathy stirred by the heart-breaking story, but also as a result of the long search that had finally resulted in this moving encounter. He is a grandson like the others recovered and the others still missing, but a grandson whose grandmother is a symbol. And so, he is not just one more grandchild" (Crenzel 2014).
The events that followed Ignacio's recovery show the leading role played by the families of victims in public space. The press conference at which Estela announced Ignacio's appearance was transformed into a full-blown civic ceremony through which the sacred value of blood ties was reaffirmed as a central element in political life, in the sense that Durkheim attributes to rites as moments when "it is possible to discern […] the emotional mobilizations that it displays, in which are invoked norms, values, representations, beliefs […] that define this group" (Durkheim 2012: 233). This is an outcome not only of the disappearance of thousands of people during the last military dictatorship, but also of the powerful confluence between the moral values expressed in family bonds and the collective activism sustained over a forty-year period. Ever since its emergence, the human rights movement has been distinguished by the fact that a large proportion of its activists are publicly identified by their claim of a blood tie with victims.8 A simple review of the names given to the different groups making up the human rights movement reveals the foundation of a political community that publicly expresses its self-recognition through the language of kinship: Relatives of the Detained and Disappeared for Political Reasons (1976), Mothers of the Plaza de Mayo (1977) and Grandmothers of the Plaza de Mayo (1977).

Victim activism creates a community of peers through the extension of the individual ties of consanguinity between victims and their families to all those taking part in this struggle, as reflected in the testimony of one mother: "the fact of having someone who has disappeared, just that alone, leads to the establishment of a sisterhood among us" (Mellibovsky 1990: 93). In the Grandmothers organization, recovered grandchildren have replaced and continued the work of deceased grandmothers (Bublik 2013: 151), enabling the creation of a community of peers based on a non-transferable bond that imbues all relations within this imagined community with this shared quality (Das 1996;Anderson 1991).9 Among this group, maternity and filiation are the primary and fundamental sources of human rights activism. As claimed by both Mothers and Grandmothers, they were willing to do anything to find their children and grandchildren and continue to do so because, from their viewpoint, their struggle cannot be broken by any force, since it arises from the "maternal instinct" (Filc 1997;Vecchioli 2005). From these primordial maternal bonds derive the strength, courage and resistance demonstrated throughout forty years of activism.

7 Claudia Carlotto, "Estamos Felices," Página/12 newspaper, August 7th, 2014. Available at http://www.pagina12.com.ar/diario/elpais/1-252432-2014-08-07.html. Accessed on March 12th, 2015.
8 Under democracy, this principle remained in force after the classic channels of mediation between civil society and State had been restored, as evident in the creation of Children for Identity and Justice against Oblivion and Silence (HIJOS 1995) and Siblings of Disappeared for Truth and Justice (Herman@s 2003).
9 The case of the Centre for Legal and Social Studies (CELS 1979) exemplifies the socially constructed nature of the bond of familiarity: it was founded by a group of relatives of disappeared persons who made law their principle of public recognition.
The public discourses that arose from the appearance of the grandson of Estela, the main leader of the Grandmothers, reveal the effectiveness of this basic principle as a fundamental element in constituting this political community, its legitimacy and the level of sentimentalization of public space. They expose the centrality of family values and kinship in the shaping of political communities and the humanitarian sentiments mobilized in the production of collective support. Political practices, which claimed the particularity of blood in the creation of exclusive moral communities, acquired a national appeal through emotions and humanitarian sentiments towards victims and their relatives, both recognized as national symbols.

The grandmother, an example of love

Ignacio's appearance cast a spotlight on the profile and trajectory of a mother and grandmother with whom many Argentineans could readily identify. Although Estela is widely known as the leader of one of the most prestigious human rights associations, the qualities emphasized in the days that immediately followed her encounter with Ignacio primarily focused on her role as a mother and grandmother. She was described innumerable times as an "everyday mother" and an "example of love." She herself helped cultivate this family profile by recounting her personal history repeatedly, sticking to a formulaic narrative that evoked the prototypical history of an urban middle-class family mother: a native of Buenos Aires, born 86 years ago into a Catholic home of immigrants, married to her first and only boyfriend, a graduate of an industrial college and owner of a local painting business. Estela was a teacher and director of a public school on the outskirts of the city of La Plata. With no vocation for politics, her life was devoted to raising her four children, her husband and the school.

Among the stock of anecdotes, pre-eminence was given to her delight in cooking and sewing, her family nickname, Ñata, the Sunday family meal surrounded by children and grandchildren, her coquettish air, her simplicity and her austere life (Seoane & Caballero 2015: 40). These qualities were further enhanced by the publicity given to photographs from her family album, showing her dressed in white on the day of her wedding, holidaying on the beach with her small children, or wearing her teacher's gown in her workplace. This account was repeated incessantly in radio, television and printed reports; moreover, it was a reiteration, in exactly the same terms and using the same photos and anecdotes, of reports and news items published over previous years by different press outlets, revealing the construction of a stereotyped form of presenting her public biography.
These qualities, applicable to any urban middle-class family mother, are by themselves insufficient to engender the kind of collective outpouring of compassion expressed in the days following Ignacio's appearance. These very same qualities were shared by the paternal grandmother, Hortensia Ardura de Montoya, mother of the disappeared Oscar, Ignacio's father. Like Estela, she was a teacher and director of a public school and devoted herself to raising her family. But unlike the maternal grandmother, Hortensia received little public attention and few newspaper reports were dedicated to portraying the paternal side of the family. This difference reveals a hierarchy founded not on biological relationships with the victims but on the possibility of reconverting personal suffering into political engagement. Hortensia lives in a small Patagonian town more than 1500 km from Buenos Aires. Even though she searched for her disappeared son, Hortensia never engaged in human rights activism and remained in her distant small town. Estela, by contrast, had started to take part in what was then called the Association of Argentinean Grandmothers with Disappeared Grandchildren (Abuelas Argentinas con nietitos desaparecidos) immediately after the disappearance of her daughter, a political engagement that she would maintain over a span of forty years, chairing the association since 1989. These contrasting cases show that the 'Grandmother' condition is not acquired automatically as a result of a biological bond with an appropriated grandchild but through engagement in activism. The existence of relatives who do not belong to human rights associations indicates that being the bearer of this bond of maximal proximity to the victims is a property socially constructed and objectified by a group of people who identify themselves in public space through the use of kinship categories.

Human rights activism by itself is not enough to occupy the highest positions in the hierarchy of prestige. In the days that followed the encounter with Ignacio, other qualities were highlighted to distinguish Estela from other human rights leaders, qualified, by contrast, as violent and fanatical. Estela was famed for her 'serenity,' 'strength,' 'bravery' and 'admirable solidarity' (Baltasar Garzón apud Folco 2015: 10), her composure in the face of extreme situations, combined with her peaceful, serene and soft tone, accompanied by calm 'maternal' gestures. But while Estela was portrayed in every media report as a person harbouring no feelings of rancour or vengeance, the opposite qualities were attributed to Hebe de Bonafini, also a mother of disappeared children and the leader of the Mothers of the Plaza de Mayo Association (Asociación Madres de Plaza de Mayo) since 1986. Estela, too, always distinguishes herself from Bonafini in every public intervention, appealing not to rational arguments but to the language of feelings: My language is not aggressive, it is conciliatory, it opens doors, it does not close them [...] as a mother, I respect her: she is a mother who suffers and searches, but her form of acting, her methodology, her objectives are not those of the Grandmothers […] As an institution, [we are] characterized by not holding onto resentments or hate, nor payback or vengeance […] We don't share anything in common, we don't agree with the forms [they use] nor with various remarks that are contrary to our objective […] she said that 'there's no need to search for the grandchildren because they're already contaminated, they're beyond saving'.
Far from wishing to further cement this distinction by assessing the accuracy of each description, or by reducing these differences to basic psychological traits, my intention here is to make them sociologically comprehensible and, in so doing, reveal the moral economy that organizes human rights activism, allowing us to understand the collective emotions aroused by some of their leaders. In other words, the qualities performed by Estela explain her consecration at the top of a moral hierarchy.

The expressions described here show the emotional logic that connects the moral status of this kind of activism, one based on family values and biological ties, to the way in which sentiments are expressed in political practice. The sentiments revealed in demanding recognition for victims inform the legitimacy of this political practice. Hatred, vengeance, resentment and violence are deemed to be illegitimate properties for engagement in human rights. Instead, love, serenity, solidarity and affection - all of which Estela encapsulated - anchor this engagement to idealistic political expectations concerning our collective life. Relatives thus became significant players in political life so long as they conformed to these moral expectations. Contrasting personal characteristics render visible the tensions within this universe, as well as the pretensions to demarcate the symbolic boundaries within the community of victims. As Stoler points out, moral condition is crucial as it serves as the basis of citizenship (Stoler 2007: 8).

Genetic imprints as moral forces

As the days passed, the news reports on the encounter between Estela and Ignacio continued to draw public attention, not just because of the grandmother's fame, but also because of the qualities of her newly-found grandson transmitted through the media. In the account that circulated publicly, Ignacio was born in 1978 while his mother, Laura Carlotto, was being held captive in a clandestine detention centre. A few hours after his birth, the baby was illegally handed over for adoption and raised in a rural area (Olavarría), just some 350 km from Buenos Aires, by a peasant couple unaware of Ignacio's true origin. When Ignacio learned of his adopted status in 2014, he approached the Grandmothers with his suspicion that he might be a child of the disappeared.

Public surprise not only stemmed from the relatively short geographic distance that had separated grandson and grandmother. There were also an enormous number of coincidences between Ignacio and his biological family that were repeatedly mentioned in public. For the grandmother Hortensia: "He is just like his father, he is indisputably the son of my son. When I looked at him, I saw my son, because he is a carbon copy". 12 In the view of his maternal aunt, Claudia, Guido bore no physical resemblance to Laura. According to her, "Ignacio is a carbon copy of his father […] On the other hand, he has a sense of humour very similar to our own and that reminds me of my sister". 13 In the words of the grandson himself, "I saw the photos and he looks a lot like an older me. It was astounding". 14
Photo 1: Press conference at the Grandmothers Association (source: La Nación newspaper)

These similarities are not limited to the biological. In a happy coincidence with his father, Oscar, Ignacio is a fan of River Plate Football Club and a musician too - in fact a composer and the director of the municipal school of music. The grandson's own declarations emphasize the importance of this resemblance, interpreted as a product of genetic imprinting: "I'm a musician just like my father and paternal grandfather were, and a speaker just like my mother was". 15 The bond with his biological family gave him an insight into his own gift for music: "The most astonishing thing is that I couldn't explain where my musical vocation had come from", 16 since the education he had received "had led me to something else" (he had trained as a master builder). In recovering his identity, blood was a central explanatory key: "there perhaps resides one of the most important answers". 17 That question had always been left unanswered, like an outstanding debt: why did you become a professional musician? "If you think about where I came from […] this contradiction always jarred for me: I was raised in the rural world, yet I took a path so peculiar and foreign to that environment - not just my pursuit of music, but jazz in particular, living in a quest for the new, some kind of avant-garde, a spirit of constant searching that I could never explain". 18

More surprising still, Ignacio was an ardent supporter of the human rights movement. In fact, he had composed a song, 'To the Memory,' participated in 'Musicians for Identity,' a series of musical shows organized by the Grandmothers four years earlier, and celebrated the appearance of Grandchild 106 on his Twitter account two years earlier. In his own words, what struck him was not just the physical resemblance but:

…the calls to do things that there was no reason for me to do: like being a musician, or playing every Memory Day and not knowing why - I'm not an activist or anything of the type - or writing 'To the Memory' and feeling it was a big part of who I am (op. cit.).

The overlaps between the histories of Ignacio and his disappeared parents allowed these actors to determine that they belong to the same family since "there are a lot of coincidences, intangible things that are obviously genetic in nature". 19 In the narrative describing the recovery of Grandchild 114, the biographical data is presented in a form that highlights the irrefutable existence of 'a genetic memory', 20 an interpretation that prevailed not only in the media and among social and political leaders, but also in expert analyses, as María Eugenia Ludueña alludes when she asks: "Just how influential are the imperturbable contents of our genes? What had Laura said/transmitted to him as she felt the baby growing and gently kicking in her belly?" (Ludueña 2014).

It is worth emphasizing that this focus on the strength of blood ties is not unique to this particular case.

12 H. Ardura, La Nación newspaper, August 6th, 2014. Available at http://www.lanacion.com.ar/1716158-hablo-la-otra-abuela-de-guido-y-lloro-por-la-aparicion-de-su-nieto. Accessed on August 6th, 2014.
The same also appears as a recurring element in many of the restitutions. Ten years earlier, the grandchild Juan Cabandié had remarked:

…the dictatorship's sinister plan was unable to erase any record of the memory transmitted through my veins […] The fifteen days during which my mother breastfed and named me were sufficient for me to tell my friends - before I knew who my family was, before I knew my history - that I wanted to call myself Juan, just as my mother had called me during imprisonment in ESMA [a former clandestine detention camp]". 21

As in the case of Ignacio, filiation and blood appear as life-shaping elements that help explain the person's own biographical trajectory and, in this case, his vocation for political activism:

I couldn't find any explanation […] I thought differently to him [his abductor]. Why do I go on protests […]? I wondered why I dedicated eight years of my life every Saturday to visiting a poor village or the homes of orphaned children to provide recreational activities. This left me wondering. Given the context in which I was raised, how did I end up doing that? Where did I get it all from? It was inside me […] it's in my blood.

The testimonies cited above, which extol the similarity between the experiences of parents and children, express precisely a notion of the legitimate family, conceived as "a mode of group belonging founded on a community of shared condition, habitation and blood; in sum, a homogenous grouping whose internal cohesion is based on the 'similitude' of the actors who form part of it" (Lenoir 2003: 19).

Blood ties and filiation are represented as the legitimate bonds par excellence, evoking a worldview that transcends political and social positions, as well as any objective differences that may exist among the recovered children. Deployed by social movements, the State and the victims themselves, these narratives are structured around consecrated values and representations of the family, linked to the rhetoric of blood, origins, truth and genetics (Gatti 2011). Biological ideas concerning kinship imprint public narratives: "the blood circulating through their veins" is imposed as a principle that not only accounts for biological ties but also functions as a means to interpret destinies, career paths and personal preferences. This is the quality highlighted in the narrative shaping Ignacio's restitution.

The recovered grandson, a good boy without resentment

The transformation of Ignacio's recovery into a collective celebration was also achieved by combining Estela's qualities with another key ingredient: the moral qualities of the grandson. His relatives described him as "a good boy, someone they had raised well," "a healthy boy." According to Estela, "he was raised in the rural world by a good family to whom he was delivered in good faith, without them being aware of his origin. They too had been victims of a 'deceit.'" 22 In Ignacio's account: "If there is love, as there was in my childhood, and love as there was in the search [for me], it's easier." 23 These warm, loving feelings extend to his own past in a perfect match with his grandmother's qualities: "I have no resentment, I feel highly privileged, perhaps uniquely so, because until a few months ago I had a phenomenal life […] for me it is a moment of joy". 24
This set of adjectives - "good child, good family" - positively qualifies the history of the Grandmothers' struggle and the genetic memory that circulates through the grandchildren's veins, and makes explicit the idea of the family as a key space of moral education: some grandchildren were raised by "good families" unconnected to the dictatorship, and their restitution is a cause of celebration. While DNA tests provide the genetic evidence that enables the grandchildren to be returned to their biological families, it is their behaviour in response to the DNA findings that provides the proof needed for them to be returned to the warm embrace of their families, conceived now as a moral space. It becomes clear how, in the cosmology of the victims, concepts of the family as an institution founded on biological ties coexist with ideas of the family as a moral space. The conditions surrounding Ignacio's restitution evince the moral qualities of the families involved and their capacity to elicit strong feelings of empathy, emotion and redemption. The transformation of Ignacio's restitution into a collective celebration was made possible by the fact that both grandmother and grandson exemplified consecrated notions of family and victimhood.

All these categories and uses of language need to be understood in the context of the efforts made by other abducted grandchildren to prevent their identity from being recovered, to avoid becoming linked to their biological families or, at the very least, to minimize its symbolic effects, given that they still consider their appropriators to be their real parents. These conflicts include refusals to take DNA tests or to use the name of their biological family, leading in some cases to court litigations. The compulsory nature of the DNA test used to prove filiation has become controversial from the viewpoint of some kidnapped grandchildren, since the knowledge of their true biological identity entails a) the immediate detention of their abductors for their responsibility in the crime of identity suppression, and b) the restitution of their original family names.

In this context, the words of Claudia, Ignacio's aunt, help explain the risks involved in any restitution: "the people who raised Ignacio had nothing to do with the repression […] I was really worried […] he had been raised by some shitty military type who would have filled his head with rubbish." 25 This risk was identified by Estela too: "Each case of restitution has its own particularity […] [it depends on] the child's response. When they come of their own volition it is fine, but when they don't, it usually turns out badly. In other words, it's very nuanced." 26 This nuance was likewise recognized and emphasized by Argentina's president at the time, Cristina Kirchner: "Estela was lucky. Imagine if her grandson had been raised with hatred". 27
Ignacio himself stated the same when he emphasized: "My upbringing was fantastic, raised by a couple who showered me with love […] I had an extraordinarily happy life and to this happy and extraordinary life was added the marvel of being part of this history". 28 Among academic experts, Ignacio's perceived qualities were emphasized and interpreted as an act of "double justice": Carlotto had found her grandson and "the grandson is this one" - that is to say, a grandson whose moral qualities corresponded to the values that distinguished his biological family:

This doesn't mean that had her recovered grandchild (like so many others) been someone bearing the indelible marks of violent abduction, or possessing ideologies and ways of life closer to those of the military, the encounter would have been impossible and his appearance less celebrated. But, finding Ignacio and discovering that he was filled 'with the truth that touches him,' wanting to trace his filial roots to his missing parents […] more than resentment […] it calms and cherishes (Abdo Férez 2014).

The presence of "indelible marks of appropriation" and the upbringing in a family of perpetrators seem to converge on the paradigmatic case of Eva Donda, daughter of disappeared parents, raised under a false identity by her own paternal uncle, Adolfo Donda, a navy lieutenant and one of the principal perpetrators of human rights abuses at ESMA, a former clandestine detention centre. As in the cases of Ignacio and Juan, Eva's mother was disappeared after giving birth. Eva's father was Adolfo Donda's brother. He too disappeared. Taken away by her uncle and raised as his own daughter, Eva refused to carry out the DNA test voluntarily, 29 still defends her abductor, in prison since 2006, and participates actively in the Association of Relatives and Friends of Victims of Terrorism in Argentina (Asociación de víctimas y familiares de víctimas del terrorismo en Argentina). This association campaigns for the end of trials for crimes against humanity and proposes a national reconciliation policy. According to Eva, her feelings of filial love for her abductor justify her current status as a 'victim.' Hence, she asserts:

…all of us are victims […] my [biological] parents also did violent things. Today they would be imprisoned for terrorist acts […] I wish for my uncle to be released. He's my paternal figure; he's my children's grandfather (Arenes & Pikielny 2016: 48-52).
In other cases, complaints have focused on the use of the biological family name. This refusal was settled judicially in another paradigmatic case: Hilario Bacca, born in ESMA and identified by a DNA test as Federico Cagnola Pereyra, Grandson 95. Defining himself as a "son of the heart" of his abductors, he obtained his identity as a result of a compulsory DNA test carried out after legal intervention, which involved a raid on the family's home and the subsequent trial and conviction of his abductors. According to Hilario, after the intervention of the Grandmothers association, "the martyrdom of my life began": he was transformed into "a victim" for the courts, "a number" for the Grandmothers and a "war trophy" for the people. Through an unexpected use of the category 'disappeared,' he accused the courts of making him 'disappear' and denounced:

Prosecutors and Grandmothers believe that every time I'm named as Hilario Bacca a crime is committed […] I feel that I'm being persecuted and [subject] to the same kind of [abusive] procedures that Liliana and Eduardo [his biological parents] experienced during the dictatorship. 30

After nine years of legal disputes, the courts allowed him to continue using the name given by his adoptive parents. The Grandmothers rejected the judicial decision because it "violates the rights of the Cagnola and Pereyra families and constitutes an affront to the memory of his biological parents." For the association, it amounted to "a legalization of the dispossession that […] their families suffered at the hands of State terrorism". 31 Estela's own grandson adopted the surnames of his biological family but continues to use the first name Ignacio given to him by his foster parents, refusing to register as Guido, the name given at birth by his mother before she was assassinated. According to Estela, this attitude "hurts me because the whole world searched for him as Guido. His mother gave him the name in memory of her father, Guido, my husband". 32

From the viewpoint of the biological families of the grandchildren, the bad families force them to remain 'captives,' the 'slaves' of their abductors, even after their true biological identity has been confirmed. 33 This is because being brought up among perpetrators leaves imprints: the love that they feel for the people who raised them. These grandchildren remain morally excluded from the community of legitimate victims until they accept the truth of their identity. Only then will they experience 'freedom' (Capiello 2014). It is this context that enables us to comprehend the semantic field in which the story of Ignacio's successful and miraculous recovery is narrated, including the incorporation into his biological family and into a nation that, conceived through blood ties and family values, celebrates his restitution.
As Fonseca has analysed among children with Hansen's disease, the DNA exam is conceived as "valid proof" within a system of concrete technologies that mediate what people know and feel. Photos, names, tastes and so on co-produce ways of reckoning personal identity and family ties (Fonseca 2015: 80). In the cases described here, acquiring the status of a legitimate victim as the grandchild of someone disappeared depends on displaying the required proofs: not just the DNA test, but the cultivation of moral and political virtues (Fassin 2011).

30 "El nieto 95 denuncia desprotección del Estado y lucha por llevar el nombre que tuvo por 37 años." In Perfil newspaper, http://www.perfil.com/sociedad/El-nieto-95-denuncia-desproteccion-del-Estado-y-lucha-por-llevar-el-nombre-que-tuvo-por-37-anos--20151118-0008.html. Accessed on November 20th, 2015.
31 Op. cit.

The state as a relative of victims

The public staging of the restitution of Ignacio's identity was also a radical novelty in terms of the place that relatives and the State could legitimately occupy within Argentina's national political space. Ignacio's restitution was enacted as a family affair that included the State itself, since blood ties were extended to its most important representatives, testifying to the profound changes occurring within the human rights movement since the arrival of Kirchnerism to power (2003).

Since the return to democracy (1983), all the devices developed by the State to manage the suffering of victims have demonstrated a) recognition of its responsibility for past human rights violations, as well as b) the legitimacy granted under democracy to the demands of relatives of the disappeared. State policies have covered a wide spectrum of actions, ranging from the creation of a truth commission (CONADEP 1983) to a civil criminal trial against the perpetrators of State terrorism (1985), the creation of a National Genetic Database (1987), a National Human Rights Secretariat (1991) and a National Commission for the Right to Identity (Conadi 1992), along with the adoption of international treaties in defence of human rights as part of domestic law (1994), among many other actions (Sutton 2015). In this process, the State has insisted on the need to specify the criteria used to identify those subjects wishing to be considered beneficiaries of these different policies: the victims. Since the beginning, the official category of victim was inseparably linked to recognition of another specific group: their relatives. For the State, therefore, a disappeared person is someone who "…in the vast majority of cases was ripped alive from the bosom of his family, kidnapped from his own home…" (Law 23.466/86). The same attributes used to define the disappeared also reciprocally define the identity of relatives and justify restorative policies: "[the family], the core of our social organization, […] has been severely attacked with the kidnapping and later disappearance of one or more of its members. We must repair the damage caused." 34 This analysis reveals the consecration of a public rhetoric that excludes any reference to the political identities of the actors involved by privileging family ties instead.
Through these laws, guided by feelings of empathy and compassion, the family was recreated as a new victim. Although State terrorism had been suffered "…to a greater or lesser extent by the entire Argentine people, there was and is another victim atrociously assaulted: the victim's family" (ibid). These families were defined by the moral damage experienced with the disappearance of one of their members, and by the situation of economic helplessness in which they were left without the support of the disappeared provider. This appeal to the family is based on a belief shared by the State and by members of human rights associations concerning the positive value of kinship and the place that the family is considered to occupy within the nation. Paraphrasing Benedict Anderson (1991), kinship creates an imagined community, but not a fictitious one, since its terms are intelligible to all its members.

In the process of giving social existence to the disappeared, the State helped turn those claiming to be relatives into new victims. Through the approval and regulation of this array of laws, the State created and officially recognized a new kind of social category: the "relative of the victim." Families of the victims became a responsibility of the nation, a nation now devoted to protecting them. These laws are effectively acts of institution (Bourdieu 1994) through which the identities of the Argentinean nation were redefined. Those who succeeded in being recognized as a target of these policies - and thus included within the nation - were those who received a politically neutral but morally powerful identity: the victims and their families. These devices played a fundamental role in the crystallization and sacralisation of a way of imagining the nation as a family of victims (Vecchioli 2005; Filc 1997).

Over the years, any sign of proximity between human rights associations and the State was condemned as a threat to the purity of such activism and its place of moral significance. Conversely, proximity to these associations cast suspicion on the political impartiality of former presidents Alfonsín and Menem, both of whom faced various attempted coups d'état led by military sectors opposed to the trials for crimes against humanity. Alfonsín was a lawyer and a founding member of a human rights association, while Menem had been kept imprisoned throughout the dictatorship. Alfonsín's commitment to human rights culminated in a major civil trial that condemned those responsible for State terrorism. At the beginning of democracy, human rights and partisan activism, 'blood' and 'politics,' were considered antithetical.

Since the arrival of Néstor Kirchner to the presidency in 2003, though, the distance that once characterized the relationship between human rights associations, political parties and the State has been erased. A significant number of recovered grandchildren entered the electoral lists of the Coalition for Victory, aiming to promote the human rights cause. This new combination of legitimate attributes was masterfully expressed by the recovered grandson Pietragalla, who claimed "I am the congressman of the Grandmothers". 35

The very same afternoon that the courts informed Estela that her grandson had appeared, Cristina Kirchner, president at the time, phoned to congratulate her. Asked about this conversation, Estela described it as a "mother-daughter communication" during which both women cried with emotion (Guinzberg 2014).
Likewise, the informal reunion at the presidential residence a few days later was described as an intimate family encounter.

Photo 2: Cristina Kirchner, Ignacio Carlotto Montoya and Estela de Carlotto at the presidential residence (source: @CFKARGENTINA)

Unlike former presidents Alfonsín or Menem, the Kirchners had no historical roots in the human rights movement or any involvement as political prisoners during the dictatorship. In their place, fictitious kinship relationships were recreated, and blood ties were extended to reach human rights activists. In this new family setup, Cristina is represented as Laura's sister and Estela as the mother of both. The authors of the book The Grandson accentuated their similarities, contributing to the crystallization of this family: "The president has a resemblance to Laura. The hair, the age, the way of speaking..." (Seoane & Caballero 2015: 16). Photos circulated extensively confirming this resemblance.

Photo 3: Cristina Kirchner - Laura Carlotto (source: La Nación newspaper)

For Estela, the similarities were extended and deepened because Oscar, Ignacio's father, was a Patagonian native like Néstor, while Laura was a native of La Plata like Cristina. They had all lived in La Plata during the 1970s, where they were university students and political activists. A newspaper report added: "A similar history to the presidential couple, who also met each other while both were studying Law at the University of La Plata." 36 Kinship ties were recreated on the basis of the truth revealed by blood ties and on their life trajectories, presented as identical: a life devoted to and consecrated by - in the cases of the deceased former president Néstor Kirchner and the disappeared parents of Ignacio - the commitment to fight for a cause, now recreated as a cause shared by them all. This communion appears further accentuated when we note that all of Estela de Carlotto's sons and daughter were active, in one form or another, in the structure of the Kirchner government: Claudia as a director of the National Commission for the Right to Identity (CONADI), Guido as Human Rights Secretary for the Province of Buenos Aires, and Remo as a national deputy for the Coalition for Victory and president of the Human Rights and Guarantees Commission.

The novelty expressed here was not the incorporation of the Carlotto family into the structure of the State. In fact, the central criterion for the recruitment of officials in those State areas responsible for human rights policies has been their status as either victims (former political prisoners, exiles, survivors), relatives of a victim, or their past commitment to the fight for human rights. 37 The novelty, rather, was the symbolic integration of the Carlottos and Kirchners into the same family as a single moral and political space. With this aim in mind, the biographies of the family members were recreated and imagined as convergent. The use of the plural testifies to this belonging to the same family constellation and the work involved in its construction. As Kirchner put it: "It's amazing, isn't it? We were all so nearby yet none of us had ever met." 38
The recreation of the State as a family of victims began with the first speech made by president Néstor Kirchner to the 58th UN Assembly in 2003, when he addressed the global community through an appeal to primordial ties: "we are the sons and daughters of the Mothers and Grandmothers of the Plaza de Mayo," 39 an expression that foregrounds not only the constructed nature of this relation of maximal proximity with the victims and their families, but also the critical importance that these primordial relations acquired thereafter in the constitution of the State itself. Numerous different occasions make explicit this work of creating and recreating these imaginary bonds: ranging from State rituals involving senior officials and party activists, as well as leaders of the associations of the relatives of victims, to the celebrations of Mother's Day in which Cristina published photos of herself in the company of Estela and Hebe de Bonafini, the president of the Mothers of Plaza de Mayo Association.

The use of the family model cannot be seen simply as a discursive strategy employed by each of these two groups to maximize their demands or their potential for political support. My intention is to transcend explanations of social action as based on bare cost-benefit calculations and highlight the ways in which people's actions are informed by moral values. Thus, the appeal to family values and ties in the cases analysed here is based on a belief shared by the State and members of civil society organizations concerning the strength and positive value of kinship and the place occupied by the family in national society. Unlike more black-and-white viewpoints that judge the human rights movement to have been 'coopted' by Kirchnerism, my analysis proposes that the novelty has resided not in the inclusion of activists from human rights organizations in the structure of the State - a process initiated in 1983 - but in the reconversion of grandchildren into professional politicians and in the inclusion of consanguine relationships with victims as part of the very constitution of the State - as revealed in Néstor Kirchner's claim that "we are the sons and daughters of the Mothers of Plaza de Mayo" or by the grandchild's assertion that "I am the congressman of the Grandmothers." State, victims and relatives thus became symbolically integrated within the same moral unit as members of the same family. This tightening of ties was not conflict-free. Maria Victoria Moyano's arrest while participating in a union demonstration reveals the existence of frontiers that define who may be considered a legitimate victim, a grandchild who deserves feelings of compassion. Even though Ignacio and Maria Victoria are classified as 'siblings' within the Children association (Hijos por la Identidad y la Justicia, contra el Olvido y el Silencio, HIJOS), María Victoria merited neither the same emotion when she was recovered, nor any feeling of compassion following her arrest in 2014. 40

38 "Retrato de un encuentro íntimo." (op. cit.)
Kirchner en la ONU" Cristina Fernández de Kirchner September 25 th , 2003, accessed on March 8 th , 2015 http://www.cfkargentina.com/discurso-de-nestor-kirchner-en-la-onu-2003/40 Although the union protest was covered by a few national media sources, attention was focused on the aggressive attitude of the gendarmerie, led by Berni, a prominent national leader of Kirchnerism.During that pre-electoral period, critics focused on Berni in order to show how Cristina Kirchner's government was not fully democratic.Almost no mention was made of Victoria's arrest. vis-à-vis the State.Through a sentimentalization of public space, a unique and exclusive case, such as Ignacio's restitution described earlier, is able to garner support among those not directly involved, turning the State and the national community into a single family united by blood ties and mutual empathy.As Alexander has emphasized, "this only happens once in a while, when the [right] symbols are aligned" (Alexander 2002:4). Final considerations In this article I have taken advantage of the recent literature on humanitarianism, specifically its emphasis on the constructed quality of feelings of compassion towards victims in contemporary politics, in order to understand the different moral status of victims of State terrorism in Argentina.I have presented paradigmatic cases as privileged instances that communicate the key role played by blood ties and family values in forming legitimate political representation and the significant place granted to the suffering of the victims in the configuration of the State.These public scenes reveal the way love, kindness, blood, compassion and empathy constitute the fabric of practices and values of humanitarian patterns of government, the setting in motion of a variety of mechanisms of State administration, the intervention of expert knowledge, and the mobilization of humanitarian sentiments. The State runs and is reproduced by bureaucratic devices but also by affective engagements, by practices that extends kinship and emotions in order to achieve the condition of being a relative of victims. We are faced by a process through which affairs of the State are conceived as though they were family affairs. It is vital to recognize that while the academic literature's emphasis has been on identifying how the State intervenes decisively in the configuration of families and domestic relations, the situations described here show that appeals to the family -and the families of victims in particular, evoking all the values, emotions and sentiments with which they are associated -can become a plausible form of doing politics and, indeed, a means of establishing a hegemonic position within the State's field of power. 
From the return to democracy onwards, the State was a crucial actor in the consolidation of consanguinity and filiation as 'natural' principles of adhesion to a collective cause, as revealed in the diverse policies developed to remedy the consequences of human rights violations. These involve a wide array of mechanisms for managing the suffering of the victims, ranging from the creation of the National Genetic Database (1987) and the National Commission for the Right to Identity (Conadi 1992) to the sanctioning of a law that permits DNA to be obtained via court order (26.549/2009), among many other devices that perform a crucial role in cementing this form of imagining the nation as a family. The appeal to the family is based on a belief shared by the State and by those belonging to the human rights associations concerning the positive value of kinship and the place that the family is held to occupy in the nation. The extraordinary events that followed Ignacio's restitution express the leading role played by the families of victims of State terrorism in Argentina's political space, the powerful confluence between the setting in motion of a variety of devices of State administration, the intervention of expert knowledge (geneticists), and a collective mobilization sustained over a forty-year period. The long-term cultivation of sensibilities that began in 1983 and intensified after 2003 eventually exploded with the recovery of Ignacio, a national event that unleashed expressions of fervour and sympathy. All these social forces contributed to institute, consecrate and simultaneously naturalize this singular form of building a collective cause, narrated and enacted in the public space via the language of kinship and family values. The cumulative work of inculcating the appropriate affective dispositions vis-à-vis the families of victims and the place that suffering should occupy in the public agenda erupted following the discovery of Estela's grandson.

The appearance of Ignacio consecrates the legitimacy of these ties, as well as the legitimacy of a State that recognizes their centrality, especially in the case of the disappeared grandchildren who, given their status as absolute victims, are able to remain above any kind of public controversy. The family photo at the presidential residence reflects the successful incorporation of the demands of the human rights movement as State policy and how the State is conceived as an extended family that unites the Kirchners, Carlottos, Grandmothers, Mothers and Children on the basis of ties that are fictitious yet still founded strategically on the truth revealed by blood ties and on the near-identical life trajectories of all the political actors involved. Grandson 114's appearance was experienced in public space as a heroic feat through which the consecrated image of the family magically materialized in front of the eyes of all the ritual's participants and spectators. The case shows how feelings of pain, compassion, empathy and redemption, along with the appeal to a 'blood community,' as traditional principles of adhesion, not only remain active within the framework of modern States and the global community, they have also become a key site of contemporary politics (Fassin 2011).
As Stoler reminds us, the language of feelings is not a way of 'masking' the true, dispassionate and malefic interests of the State. It is a substantial part of politics, a form through which the State presents itself as a moral space. It involves sentiments and moral values that lend motivation and meaning to the bureaucratic structures of the Nation State and the transnational community. They become instituted as forms of governance by reordering the relations within the State and the global community (Stoler 2007: 18). Through these family metaphors, along with the values associated with them, our representations of political life appear inscribed in our bodies as noble emotions and feelings. This recourse to feelings to express political ties is truly effective, however, only when victims are constructed through the appropriate symbols - that is, as legitimate victims, deserving compassion - and when these feelings and values are in turn mobilized through agents possessing the social skills necessary to do so. This is also recognizable in the context of the claims made by relatives of the victims of police repression in the suburban peripheries, who are stigmatized as mothers of criminals or drug dealers and considered to be illegitimate victims (Bermudez 2017; Vianna & Farias 2011). Or again, in the intriguing paths taken by the political movement seeking legal reparation for the human rights violations perpetrated by the Brazilian government against the children of compulsorily institutionalized patients with Hansen's disease, who were involuntarily separated from their parents and raised by other families or the State. Starting out from a stigmatized condition, they attained public recognition of the traumatic experience of forced separation from their mother and/or father after forty years of activism (Fonseca 2015).

As shown in the different scenes described in this text, the worthiness and skills required to become an object of feelings of compassion and empathy are unevenly distributed. The public indifference towards María Victoria Moyano's arrest sheds light on the effectiveness of the family-State in imposing the appropriate feelings and expectations and its capacity to establish moral hierarchies and boundaries among the victims' families. When included in political analyses, affect and emotion are often reduced to an instrumental mechanism of governmental power or treated as epiphenomenal to the real business of rule. In contrast, this analysis has explored the State as the object of emotional investment by considering how emotion is implicated in a variety of everyday and exceptional encounters between citizens and state agents. The politicization of the affects of particular spaces - that is, the act of binding these intensities to political symbols and discourses - is one important way in which the State acquires a tangible, affective and spatial reality, as well as becoming invested by the moral values associated with kinship and victimhood.
Performance of TPC-based ranging signal for more than 2 services multiplexing

This paper presents the signal structure and power efficiency performance for simultaneous transmission of Global Navigation Satellite System (GNSS) service signals based on Tiered Polyphase Code (TPC). For the simultaneous transmission of three or more service signals, intermodulation terms are added and the power allocations are modified; these operations are applied first to the spreading signals with binary pseudorandom noise (PRN) codes, and a constant envelope signal is generated. Then, a Zadoff-Chu sequence is applied as a secondary code to generate a multiplexed satellite navigation signal that retains the constant envelope characteristic. Simulation results show that a power efficiency of more than 80% can be achieved when multiplexing three service signals.

Introduction

The use of Tiered Code (TC), a pseudorandom noise (PRN) code of hierarchical structure, is becoming popular as a method to facilitate signal acquisition in the weak-signal environments of Global Navigation Satellite System (GNSS) navigation signals. TC, which consists of a dual code structure of primary and secondary codes, is a binary PRN code in which both codes are combined to increase processing gain. Tiered Polyphase Code (TPC) structures have been proposed to reduce intra-system interference as well as increase processing gain by applying non-binary polyphase codes to the secondary codes [1, 2].

On the other hand, with the proliferation of GNSS services, attempts have been made to simultaneously transmit more service signals using the same frequency band. Simultaneous transmission of three or more signals may degrade the power efficiency of the satellite high-power amplifier. In order to minimize this degradation, various methods have been proposed to keep the envelope of the generated signal constant [3]. Most of the proposed methods are based on the assumption that the signals to be multiplexed have binary values. Therefore, when the simultaneous transmission is performed based on the TPC, the signal generation structure needs to be changed, since the secondary code of the TPC is non-binary. In this paper, we propose a signal generation structure that can maintain constant envelope characteristics when simultaneously transmitting three or more GNSS service signals based on TPC. Also, it is shown that the power efficiency of the amplifier is no different from that obtained with binary signals when amplifying the signal generated by the proposed method.

Generation of the multiplexed signal

In order to generate a transmission signal by multiplexing three or more signals based on TPC, the signals to be multiplexed are defined as follows:

si(t) = di(t) Cp,i(t) Cs(t), i = 1, …, N,   (1)

where N is the number of services to be multiplexed, di(t) denotes the data signal of the i-th service, Cp,i(t) denotes the primary code of the i-th service, and Cs(t) denotes the secondary code, which is common to all services and is a non-binary polyphase sequence.

In general, the multiplexed and transmitted signal can be represented as a function of the signals for multiplexing. The signals multiplexed in the baseband are frequency-shifted by the carrier signal and transmitted in the allocated frequency band. Equation (2) shows this multiplexing process, where f(si(t), i = 1, …, N) is a function representing the signal multiplexing relationship in the baseband:

s(t) = Re{ f(si(t), i = 1, …, N) exp(j2πfct) },   (2)
where Re{·} denotes a function that extracts the real part of the argument and fc is the carrier frequency. In (2), the power variation of the signal s(t) is directly related to the equivalent baseband signal component f(si(t), i = 1, …, N), since the carrier signal cannot contribute to the envelope variation of the signal. To make the power variation constant, the multiplexing function in the baseband should generate a constant envelope baseband signal.

The baseband multiplexing function serves to synthesize the service signals to be multiplexed. In this process, since the secondary code Cs(t) is commonly applied to all service signals, the signal components other than Cs(t) are synthesized to generate the multiplexed signal. This synthesis process can be modelled as follows:

f(si(t), i = 1, …, N) = Cs(t) g(sb,i(t), i = 1, …, N),   (3)

where g(sb,i(t), i = 1, …, N) is a baseband multiplexing function for synthesizing the multiplexed signal from the binary service signals spread by the primary codes, and sb,i(t) represents the i-th binary service signal, which can be expressed as sb,i(t) = di(t) Cp,i(t). The power variation of the multiplexed signal can be obtained by taking the squared absolute value of the result of equation (3):

|f(si(t), i = 1, …, N)|² = |Cs(t)|² |g(sb,i(t), i = 1, …, N)|²,   (4)

where |Cs(t)|² is constant since the secondary code Cs(t) is a complex-valued function drawn on the unit circle in the complex plane. Therefore, |g(sb,i(t), i = 1, …, N)|² should be kept constant to make the multiplexed signal constant envelope. Figure 1 shows the signal generator structure for producing a multiplexed signal with the constant envelope property, where the signals for multiplexing are spread by TPC. For the signal multiplexer in figure 1, many research results have achieved the constant envelope characteristic for multiplexing of three or more binary signals: by adding some intermodulation signals, they can achieve the constant envelope characteristics. In this paper we add intermodulation terms and modify the gains of the signals to achieve the constant envelope characteristic of the multiplexed signal.

Power efficiency and constellation

The performance of the signal multiplexing can be measured by the power efficiency, defined as the ratio of the sum of the useful signal powers in the multiplexed signal to the total power of the multiplexed signal. The total multiplexed signal power is the sum of the powers of the signals and the powers in the intermodulation terms. To evaluate the power efficiency of the multiplexed signal based on TPC spreading, we sought intermodulation terms and their weights. Also, to increase the power efficiency, we modified the signal gains incorporated in the multiplexing process. As a specific example, we found a solution for the 3-signal multiplexing case. The baseline form of the multiplexed signal is given in equation (5), where si(t), i = 1, 2, 3, are the signals for multiplexing, all of them binary. By numerical simulation, we obtained a multiplexed signal form with the constant envelope characteristic, given in equation (6). Figure 2 shows the constellation of the multiplexed signal with the constant envelope property of equation (6). By allowing a power allocation difference of up to 2 dB between the signals for multiplexing, the power efficiency of the multiplexed signal in equation (7) becomes 84.77%. The results given in equation (7) and figure 2 illustrate that different results may be obtained by changing the maximum allowed power allocation difference.
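Since the specific constant-envelope solution of equations (5)-(7) is not reproduced above, the following is a minimal numerical sketch of the general recipe described in this section, using a classical Interplex-style construction as a stand-in: one triple-product intermodulation term whose gain is chosen so that the envelope is constant over all 2³ binary signal states. The gains a, b, c, the 2 dB power offset, and the Zadoff-Chu parameters N and u are illustrative assumptions, not values taken from this paper.

```python
import numpy as np
from itertools import product

# Assumed per-signal amplitude gains: signal 1 is 2 dB stronger than
# signals 2 and 3 (a 2 dB power allocation difference, as in the text).
a = 1.0
b = c = 10 ** (-2 / 20)  # -2 dB in amplitude

# Interplex-style intermodulation gain forcing a constant envelope:
# for g = s1*(a + d*s2*s3) + 1j*(b*s2 + c*s3),
# |g|^2 = a^2 + b^2 + c^2 + d^2 + 2*(a*d + b*c)*s2*s3,
# which is constant over the binary states iff d = -b*c/a.
d = -b * c / a

envelopes = []
for s1, s2, s3 in product((-1, 1), repeat=3):  # all 8 binary signal states
    g = s1 * (a + d * s2 * s3) + 1j * (b * s2 + c * s3)
    envelopes.append(abs(g) ** 2)
print("constant envelope:", np.allclose(envelopes, envelopes[0]))  # True

# Power efficiency: useful signal power over total (useful + intermodulation).
eta = (a**2 + b**2 + c**2) / (a**2 + b**2 + c**2 + d**2)
print(f"power efficiency: {eta:.4f}")  # about 0.85 for the 2 dB offset above

# The polyphase secondary code multiplies the whole composite signal and,
# being unit modulus, leaves the envelope unchanged (cf. equation (4)).
# A Zadoff-Chu sequence of odd length N and root u illustrates this:
N, u = 127, 25
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)
print("secondary code unit modulus:", np.allclose(np.abs(zc), 1.0))  # True
```

With this stand-in scheme, the 2 dB offset yields an efficiency of about 85%, in the same range as the 84.77% reported for equation (7), although the intermodulation solution found by the authors need not coincide with this particular one.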
Conclusions

In this paper, we proposed a signal generator structure for TPC-based signal multiplexing. Due to the polyphase structure of the TPC, legacy binary signal multiplexing schemes are not directly applicable to the problem; the proposed reordering of the multiplexing and spreading, however, works well in the TPC case. The numerical simulation results for the proposed signal multiplexer show that the power efficiency of the proposed scheme for the 3-signal multiplexing case approaches 85% and can be higher with modification of the power allocation. To verify the validity of the proposed scheme, more numerical results for multiplexing of 3 or more signals are needed.
Global Distribution of Three Types of Drop Size Distribution Representing Heavy Rainfall from GPM/DPR Measurements

1 School of Earth and Environmental Sciences, Seoul National University, Seoul, Korea
2 National Institute of Meteorological Sciences, Korea Meteorological Administration, Jeju, Korea
Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science & Technology, Nanjing 210044, China
Key Laboratory for Aerosol-Cloud-Precipitation of China Meteorological Administration, School of Atmospheric Physics, Nanjing University of Information Science & Technology, Nanjing 210044, China

Introduction

The drop size distribution (DSD) of rainfall is important for estimating rain intensity and latent heating profiles using remote sensing data (Chapon et al., 2008; Liao et al., 2014; Nelson et al., 2016) and for parameterizing rain microphysics in numerical weather forecasting models (Lim & Hong, 2010; Zhang et al., 2006, 2008). It also helps in understanding and interpreting physical processes related to rain development (Chen et al., 2011). Thus, a better understanding of its global characteristics can significantly advance our meteorological knowledge.

Previous DSD-related studies mainly used ground observations, through which regional DSD characteristics were examined (Bringi et al., 2003; Dolan et al., 2018; Gatlin et al., 2015; Tang et al., 2014; Thompson et al., 2015; Ulbrich & Atlas, 2007; Zhang et al., 2020). Although these studies provided fundamental results by identifying DSD characteristics in various precipitation regimes, certain limitations exist where characteristics at any particular region are concerned, especially with regard to the understanding of rain processes over remote areas, such as the open ocean. Furthermore, generalizing DSD characteristics has been challenging owing to the differences in the measuring instruments, study regions, time periods, and methodologies used in various studies. Thus, acquiring general characteristics applicable to any local area is necessary.

Satellite measurements seem to be an appropriate solution for examining such spatially and temporally varying DSD behaviors across diverse precipitation regimes. The Tropical Rainfall Measuring Mission (TRMM) was the first space-borne radar dedicated to rainfall measurement. The diameter parameter for the DSD was retrieved using the precipitation radar (PR) measurements of reflectivity at a single frequency, assuming that the DSD can be characterized by the diameter parameter itself (Iguchi et al., 2000). However, as DSD variations cannot be fully expressed with a single parameter, rain rates (RRs) retrieved from single-frequency radar measurements have often been found to be prone to errors and biases (Iguchi et al., 2010). The Global Precipitation Measurement (GPM) satellite, a successor to TRMM, launched on February 27, 2014, carries a dual-frequency precipitation radar (DPR) and is much better at detecting DSDs as compared to the single-band approach of PR (Iguchi et al., 2010). The difference between the radar reflectivities at the two frequencies (Ku-band: 13.60 GHz; Ka-band: 35.55 GHz) enables the retrieval of the two DSD parameters, the mass-weighted mean diameter (Dm) and the normalized intercept parameter (or normalized scaling parameter for concentration, Nw).
This has enabled applications in different types of studies, for example, the study of vertical structures of DSDs for stratiform and convective precipitation (Sun et al., 2020), microphysical features of tropical cyclones (Huang & Chen, 2019), and differing DSD features between land and ocean (Kumar & Silva, 2019; Radhakrishna et al., 2016). Well-known features, such as the difference between Dm over continental areas and oceanic regions, were confirmed using globally retrieved DSD data (Seto et al., 2016; Yamaji et al., 2020). With the large amount of accumulated data, comparing DPR-retrieved DSDs with in situ ground observations and understanding the comprehensive features of DSDs across various rainfall regimes are now more plausible. In particular, examining the behavior of DSD characteristics in two dimensions carries significant importance. Note that in remote areas, such as open oceans or rain forest areas, which are presumably under different atmospheric environments, DSD features are difficult to obtain using the conventional disdrometer and polarimetric approaches.

In this study, we analyze the global DSD characteristics of heavy rainfall using multi-year DPR-retrieved DSDs and evaluate the remote sensing analysis results against ground observations provided in the literature. The obtained results can improve our understanding of cloud development and rain formation microphysics across global convection and rainfall regimes. This study can also help improve the microphysical parameterizations needed for numerical models, owing to the understanding of regionally different heavy rainfall microphysics.

Data and Methodology

In this study, GPM DPR-retrieved near-surface RR, Dm, and Nw at the clutter-free bottom level, and attenuation-corrected radar reflectivity profiles at the Ku-band (version 6), over the entire observation domain (65° N-65° S) during 2014-2019 are used (see Iguchi & Meneghini, 2016 for detailed DPR-retrieved parameters). Solid or mixed precipitation is not considered, as the focus here is rain precipitation. We use the inner swath data of the matched scan mode, where reflectivities are measured at both the Ku- and Ka-bands. For the initial version of DPR-retrieved DSD data, it has been reported that the uncertainty is high for RR > 8 mm h-1 cases, compared to the disdrometer measurements over Gadanki, India (Radhakrishna et al., 2016). From the validation of the updated version 4 data over the Mediterranean region, D'Adderio et al. (2019) reported that the probability distribution of DPR-retrieved Dm is well matched with disdrometer measurements, but logNw is subject to uncertainties. For the latest version (i.e., version 6), the agreement is good between retrievals and disdrometer measurements over central China for Meiyu monsoon events; the correlation coefficient is higher than 0.6 with no significant mean bias (Sun et al., 2020), satisfying the DSD measurement requirement of a 0.5 mm error range for the GPM mission (Skofronick-Jackson et al., 2017). Indirect validation of the microphysical assumption used for the DSD retrievals (version 6) was undertaken using disdrometer measurements from numerous NASA Ground Validation (GV) field campaign sites over the United States and Department of Energy-Atmospheric Radiation Measurement (DOE-ARM) mobile facility deployments over the globe (Chase et al., 2020).
It was demonstrated that the employed microphysical assumption of the rain drop-size relationship is in good agreement with disdrometer measurements, further assuring the quality of the version 6 DPR-retrieved DSDs.

The DSD function for raindrops (N(D)) is normally described by the gamma distribution function (Ulbrich, 1983), and its normalized form (Testud et al., 2001) can be written as follows:

N(D) = Nw f(μ) (D/Dm)^μ exp[-(μ + 4) D/Dm], with f(μ) = (6/4^4) (μ + 4)^(μ+4) / Γ(μ + 4),   (1)

where μ, D, and N(D) are the shape parameter, the diameter bin in mm, and the number concentration in mm-1 m-3, respectively. Dm is the mass-weighted mean diameter in mm, and Nw is the normalized intercept parameter in mm-1 m-3. In this study, the shape parameter μ = 3 is used, as in other studies (Liao et al., 2014; Seto et al., 2013). By using the values of Dm and Nw retrieved from DPR measurements, the corresponding number concentration N(D) is calculated using Eq. (1).

The procedures for the data construction needed for examining the global DSD characteristics are shown in Figure S1. We first define an equal-area grid, equivalent to the 5° × 5° grid area over an equatorial region, such that the number of heavy rain pixels and the corresponding Dm and Nw values can be saved at each equal-area grid. If there exists more than one pixel showing an RR greater than 10 mm h-1 at a given equal-area grid (about a 5° × 5° area at the equator), the grid is considered to have a heavy rain event. An RR of 10 mm h-1 represents a threshold separating the stratiform rain type from the convective one (Tokay & Short, 1996). Subsequently, the Dm, Nw, and RR data of all heavy rain pixels at a given grid and time are constructed. After repeating the data construction procedure over the entire domain and analysis period, we construct a raw dataset containing the Dm, Nw, and RR of all the heavy rain pixels over the 65° N-65° S observation domain and the six-year period (2014-2019). However, considering the high visiting frequency of the GPM satellite at higher latitudes, the raw dataset will have a bias of more samplings at higher latitudes. The sampling is therefore homogenized to construct the final dataset used for the clustering analysis. After counting the visiting frequency of the satellite depending on latitude, the number of samples for each latitude is scaled using the ratio of the visiting frequency at that latitude to the visiting frequency at the equator. Afterwards, heavy rain events at a given latitude are randomly selected to match the scaled number at that latitude, reducing the uneven sampling problem. As a result, 328,391 heavy rain events (or number of grids showing heavy rain) and 6,258,800 heavy rain pixels over the study domain are collected for analysis. Finally, heavy rain pixels within a grid are averaged to yield the mean Dm and logNw for that specific grid. The constructed Dm and logNw data for heavy rain events are used for classifying the DSD types based on a Gaussian mixture model (GMM; Bishop, 2006). GMM is a statistical model used to group sample data into clusters assuming the presence of a certain number of Gaussian distributions in the sample data. Thus, each classified type best satisfies its own Gaussian distribution with an associated mean and standard deviation. Details on how the classification was carried out using GMM are found in the Supplementary Information.
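A minimal computational sketch of the two steps just described may help: evaluating the normalized gamma DSD of Eq. (1) with μ = 3, and clustering (Dm, logNw) pairs with a three-component GMM. Synthetic samples stand in for the actual homogenized DPR dataset; their means and spreads are borrowed from the type statistics reported in the next section, but the data themselves, and the helper name normalized_gamma_nd, are illustrative assumptions.

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from sklearn.mixture import GaussianMixture

MU = 3.0  # shape parameter fixed to 3, as in the paper

def normalized_gamma_nd(d_mm, dm_mm, log_nw, mu=MU):
    """Number concentration N(D) [mm^-1 m^-3] from the normalized gamma DSD, Eq. (1)."""
    nw = 10.0 ** log_nw
    f_mu = (6.0 / 4.0**4) * (mu + 4.0) ** (mu + 4.0) / gamma_fn(mu + 4.0)
    return nw * f_mu * (d_mm / dm_mm) ** mu * np.exp(-(mu + 4.0) * d_mm / dm_mm)

# Synthetic stand-in for the homogenized (Dm, logNw) dataset of heavy rain events.
rng = np.random.default_rng(0)
samples = np.vstack([
    rng.normal([2.25, 3.49], [0.49, 0.49], size=(1000, 2)),  # Type-1-like events
    rng.normal([1.62, 4.15], [0.34, 0.45], size=(1000, 2)),  # Type-2-like events
    rng.normal([1.25, 4.64], [0.27, 0.47], size=(1000, 2)),  # Type-3-like events
])

# Three-component GMM classification of the Dm-logNw pairs.
gmm = GaussianMixture(n_components=3, random_state=0).fit(samples)
labels = gmm.predict(samples)
print("cluster means (Dm, logNw):")
print(gmm.means_)

# N(D) for one cluster mean over a range of drop diameters.
d = np.linspace(0.1, 5.0, 50)
dm, log_nw = gmm.means_[0]
print(normalized_gamma_nd(d, dm, log_nw)[:5])
```

scikit-learn's GaussianMixture is used here for convenience; the paper does not state which GMM implementation was employed.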
Three classified DSD types

For classifying the DSD types, we first examine whether the constructed Dm and logNw satisfy Gaussian distributions, to estimate the applicability of the GMM. The distributions of Dm and logNw were found to be similar to Gaussian distributions (not presented). The number of classified types can be subjective, but the types must be consistent with known meteorological features. Three types appear most relevant when the DSD characteristics and associated geographical features are examined. Figure 1a-c shows the frequency distributions of Dm and logNw for the three heavy rainfall types classified using the GMM. It can be seen that Dm decreases from Type 1 to Type 3, whereas logNw increases. The mean ± standard deviation of Dm for the three types is 2.25 ± 0.49, 1.62 ± 0.34, and 1.25 ± 0.27 mm, respectively, and the corresponding logNw values are 3.49 ± 0.49, 4.15 ± 0.45, and 4.64 ± 0.47 (Nw in mm⁻¹ m⁻³). Dm for Type 1 exhibits a high frequency at 3 mm because the maximum value of Dm retrieved by the DSD algorithm was set to 3 mm (Seto et al., 2016). As the DSD types of heavy rainfall are classified based on the frequency histograms of the events, the one-standard-deviation ranges of Type 1 and Type 2 overlap by 9.5%, and those of Type 2 and Type 3 by 17.0%.

To obtain the DSD distributions corresponding to the three Dm-logNw types, the respective Dm and logNw values are inserted into Eq. (1), given μ = 3. The obtained results are shown in Figure 1d-f. As a form of the gamma function, the number concentration increases until a certain diameter (i.e., an inflection point) and subsequently decreases. The diameters at the inflection points for the three types are 1.0, 0.6, and 0.4 mm. The number concentration around these diameters appears lowest for Type 1 and highest for Type 3. In contrast, toward the larger-diameter side (e.g., D > 3 mm), Type 1 shows the highest number concentration, whereas Type 3 shows the lowest; however, Type 2 and Type 3 resemble each other. Geographical distributions of Dm and logNw that support the description provided in Figure 1 are displayed in Supplementary Figure S2.

The DSD at the surface should be closely linked to the vertical structure of the cloud system, so we examine the vertical structures of clouds for the three DSD types. For this, the frequency of Ku-band radar reflectivity is provided in reflectivity-height coordinates (Figure 1g-i). The clouds develop highest for Type 1 and lowest for Type 3, suggesting the strongest convection intensity for Type 1 and the weakest for Type 3. The normalized distributions of storm height for the three types (Figure S3), defined by a minimum reflectivity of approximately 12-15 dBZ (Hamada & Takayabu, 2016), are likewise highest for Type 1 and lowest for Type 3, so the surface mean reflectivity appears to be proportional to the storm height as well as to the convection strength. The vertical shapes of reflectivity for the first two types given in Figure 1g-i are consistent with the results of previous studies on precipitation characteristics representing continental and oceanic types (Liu et al., 2007; Hamada et al., 2015; Sohn et al., 2013; Song & Sohn, 2015; Xu & Zipser, 2012; Zipser et al., 2006).
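The step of evaluating Eq. (1) at the mean (Dm, logNw) of each type, as done for Figure 1d-f, can be sketched as follows. The code is an illustrative reconstruction assuming μ = 3; the diameter grid and the function name are choices made here, not part of the original study.

```python
import numpy as np
from scipy.special import gamma as gamma_func

def normalized_gamma_dsd(d, dm, nw, mu=3.0):
    """Normalized gamma DSD N(D) after Testud et al. (2001).

    d  : drop diameter bins (mm)
    dm : mass-weighted mean diameter Dm (mm)
    nw : normalized intercept parameter Nw (mm^-1 m^-3)
    Returns N(D) in mm^-1 m^-3.
    """
    f_mu = (6.0 / 4.0**4) * (mu + 4.0)**(mu + 4.0) / gamma_func(mu + 4.0)
    return nw * f_mu * (d / dm)**mu * np.exp(-(mu + 4.0) * d / dm)

d_bins = np.linspace(0.1, 5.0, 200)  # assumed diameter grid (mm)
# Mean (Dm, logNw) of the three classified types from the text:
for dm, log_nw in [(2.25, 3.49), (1.62, 4.15), (1.25, 4.64)]:
    n_d = normalized_gamma_dsd(d_bins, dm, 10.0**log_nw)
    peak = d_bins[np.argmax(n_d)]
    print(f"Dm = {dm:.2f} mm -> N(D) peaks near D = {peak:.2f} mm")
```

Note that with μ = 3 the maximum of N(D) falls analytically at 3Dm/7, which is close to the inflection-point diameters quoted above for the three types.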
The diurnal variations in convective rainfall are well established to be distinctly different between continents and oceans; maximum precipitation over continents and oceans occurs at 15-18 and 03-06 LST, respectively (Liu & Liu, 2016; Nesbitt & Zipser, 2003; Song & Sohn, 2015; Takayabu, 2002; Yang & Smith, 2006; Zipser et al., 2006). The diurnal variations for DSD Type 1 and Type 2 are similar to those noted for the continent and ocean (Figure S4). Thus, the DSD characteristics for Types 1 and 2 should represent the continental and oceanic types of heavy precipitation, respectively. Type 3 shows variations similar to Type 2, but with smaller amplitudes.

Global distributions of three classified DSD types

The spatial distributions of the occurrence frequencies and volumetric heavy rainfalls associated with each DSD type can be drawn, and from these the dominant DSD type can be examined at any given location of the domain (Figure 2). Compared to surface-based DSD studies conducted in limited regions, which provided only the locally dominant DSD features, this study provides information on the combination of DSD types, or the dominant DSD type, at any given location. Type 1 shows a relatively high frequency over Africa, Europe, the US, South America, and the western Pacific maritime continent, thereby confirming that Type 1 rainfall is mostly the continental type (Figure 2a). Type 2 shows the dominant occurrence frequency for heavy rainfall over the tropical oceanic regions. The Southeast Asian and East Asian monsoon regions as well as the North Atlantic Ocean also show prevalent Type 2 rainfall. Thus, Type 2 mainly represents the ocean type. It should also be noted that the Amazon rainforests fall into the oceanic Type 2, with the less frequent continental Type 1 also evident in these forests. The dominant ocean-type behavior of rainfall over the Amazon has been well recognized in previous studies (Williams et al., 2002; Zipser et al., 2006). The geographical distributions of Type 3 (Figure 2e) mostly overlap with Type 2, indicating that oceanic convection causing heavy rainfall comprises deep as well as shallow convection. Thus, heavy rainfall over the ocean regions may be characterized by a bimodal distribution with two main modes, Type 2 and Type 3, with Type 2 being the dominant one. Despite the overlapping tendency of Type 3 with Type 2, the main locations for Type 3 are toward the subtropical high regions off the dominant regions for Type 2, except for the equatorial eastern Pacific region, where Type 3 is also prevalent.

The obtained Dm and logNw results for the three DSD types are consistent with known microphysical processes of precipitation. In the continental regions where Type 1 occurs most frequently, it is well known that abundant ice particles, including graupel and hail, are present in clouds due to the strong convection. Such clouds generally produce larger raindrops at the surface after the melting and collision-coalescence processes (Cecil, 2011; Cecil et al., 2012; Sohn et al., 2015; Xu & Zipser, 2012; Zipser et al., 2006, among many others). Thus, the larger raindrop sizes (and smaller number concentrations) found in this study are consistent with the general rain characteristics found over land.
In tropical oceanic regions where Type 2 is dominant, the ice water content is relatively small (Cecil, 2011; Sohn et al., 2015) while the liquid water content is abundant (Wood et al., 2002), in comparison to the land type. With convection weaker than over land, collision-coalescence processes are known to be the main rain-growth physics (Xu & Zipser, 2012), resulting in drop sizes smaller than the land type. Type 3, mostly found over the subtropical subsidence regions and the equatorial eastern Pacific, appears to be largely associated with warm rain processes under weaker and shallower convection, giving relatively lower cloud tops, smaller drop sizes, and larger number concentrations (Sekaranom & Masunaga, 2019; Xu & Zipser, 2012).

We also examine the rainfall amount at each grid contributed by each DSD type. For this, the volumetric heavy rainfall is calculated at each grid by summing all the selected pixel-level RRs within the grid. The results are shown in Figure 2b, d, and f. It can be seen that 63.5% of the total rainfall is contributed by Type 2, predominantly from the oceans. Another 18.3% of the total heavy rainfall is contributed by Type 3, also mostly over the oceans, and the remaining 18.2% is contributed by Type 1, mostly over the land regions. Again, the eastern Pacific ITCZ area shows nearly comparable amounts of heavy rainfall from both Type 2 and Type 3. Overall, the spatial distribution analysis leads to the conclusion that Type 1, Type 2, and Type 3 should be linked to continental convection, oceanic deep convection, and oceanic shallow convection, respectively.

We also examine the seasonally varying occurrence frequencies and volumetric rains for the three types. In the boreal summer, the occurrence frequency and volumetric heavy rainfall of the continental type dominate over most continents in the Northern Hemisphere, except the Sahara and Arabian desert regions (Figure S5a). Contrastingly, in the boreal winter, the continental type is most commonly found over Southern Hemisphere land regions, whereas the oceanic type is most commonly found over the Indo-western Pacific Oceans and the Amazon area (Figure S6c-f). Despite the seasonally varying geographical distributions of occurrence frequencies and volumetric rains, the annual-mean results for the DSD parameters, as depicted in Figure 1, are found to be persistent.

Comparison with ground-based DSD observations

These globally classified DSD types are important for comprehensively interpreting the DSD results of previous studies, which have often represented local or regional characteristics. The global mean Dm and logNw values for the three types (Figure 1) are compared with the results of previous studies (Figure 3 and Table S1). Note that heavy rains as presented in Table S1 represent cases with rain rates stronger than 10 mm h⁻¹ (as observed in this study), whereas convective rains represent cases with rain rates higher than 5 mm h⁻¹ and standard deviations greater than 1.5 mm h⁻¹ observed over a certain time period (Bringi et al., 2003).
The values of Dm and logNw for the continental convective type (2.25 ± 0.49 mm and 3.49 ± 0.49, with Nw in mm⁻¹ m⁻³, respectively) are consistent within one standard deviation with the results obtained from Colorado (US), Austria, Sydney (Australia), and Puerto Rico (Bringi et al., 2003), the United Kingdom and Greece (Montopoli et al., 2008), northern China (Chen et al., 2016), and the Tibetan Plateau (Chen et al., 2017). Furthermore, the values of Dm and logNw for the oceanic deep convective type (1.62 ± 0.34 mm and 4.15 ± 0.45, respectively) are found to be in good agreement with the results obtained from India (Lavanya et al., 2019; Radhakrishna et al., 2020), Indonesia (Marzuki et al., 2013), southern China (Huo et al., 2019; Sun et al., 2020), Taiwan (Seela et al., 2018), South Korea (Suh et al., 2016), Japan (Montopoli et al., 2008), Darwin (Australia), Papua New Guinea (Bringi et al., 2003), the western Pacific (Bringi et al., 2003; Huang & Chen, 2019), and the Amazon regions (Bringi et al., 2002). The oceanic shallow convective type is comparable with typhoon rainfall observations reported over China (Wen et al., 2018) and India (Janapati et al., 2017; Janapati et al., 2020). The typhoon generally shows features of the oceanic shallow convective type, but this seems to depend on the location or development stage (Janapati et al., 2020). The lack of observations associated with the oceanic shallow convective type is likely due to the fact that this type is mostly found over the open oceans near the subtropical subsidence regions, the equatorial eastern Pacific, and the equatorial Atlantic Ocean.

These classified types are also in good agreement with the three DSD groups for convective precipitation based on ground-based disdrometer measurements covering diverse meteorological regimes from the tropics to the high latitudes (Dolan et al., 2018); Types 1, 2, and 3 are very similar to their 'ice-based convection', 'warm rain with high liquid water content', and 'weak convective warm rain shower' groups. Since the comparison of the classified results with ground-based measurements serves as an indirect validation of DPR DSD products, the close agreement of the classified rain types with ground-based results further assures the quality of the DPR-retrieved data and the obtained results.

Conclusions and discussion

In this study, we examined the global characteristics of two DSD parameters (Dm and logNw) of heavy rainfall using multi-year GPM DPR retrievals and classified them into three types. Results from this study are important because the DSD characteristics of heavy rainfall in any region can be interpreted as a combination of different DSD types, with origins closely associated with cloud-scale processes and environments. Furthermore, the global distributions of the means and standard deviations of Dm and logNw for the three types, the associated distributions of the occurrence of each type, and the contribution of each type to the total heavy rainfall will help in understanding the physical processes of heavy rain formation, particularly over remote areas, such as open oceans, where conventional observations cannot be readily made.

Although the classification results capture the overall DSD features noted over the globe, we acknowledge the caveats of statistically based DSD classifications. The same resultant DSD type could be attributed to many different physical processes responsible for the rain. For example, if breakup processes dominate over a continent, DSD types assigned without physical consideration may be interpreted as an ocean type, although the physical processes responsible for the ocean type can be quite different from those of the continental type.
Thus, further studies are needed for a better understanding of precipitation microphysics, especially combined with regionally based physical processes such as collision and breakup processes, water vapor convergence, and aerosol loading. While the main discussion in this study has focused on interpreting the three classified DSD types and their geographical distributions, the obtained results can further support studies of climate change features, the validation of climate models, and the improvement of cloud microphysical parameterizations. For example, the results obtained in this study may allow us to examine how the two oceanic types of rainfall revealed here respond to changes in tropical circulations over the Pacific, such as the Walker circulation. The separated rain types and their contributions to the total rainfall can also be used for the validation of climate model simulations, which should be more useful than a simple comparison of total rainfall. The obtained results may also provide more insight into how specific rain types are linked to specific cloud structures in a given area, because precipitation cannot be separated from cloud microphysics.
Relationship between Maternal Postpartum Intention to Breastfeed and Actual Breastfeeding Duration — Four Provinces, China, 2015–2017

What is already known about this topic? Several studies have reported that maternal antenatal intention to breastfeed is a strong predictor of actual breastfeeding duration. However, little research has investigated whether maternal postpartum intention also extends breastfeeding duration. What is added by this report? Maternal postpartum intention to breastfeed was a protective factor for extending actual breastfeeding duration after controlling for potential confounders. What are the implications for public health practice? It is crucial to address and promote the intrinsic and extrinsic factors that influence a mother's intention to breastfeed after delivery, thereby extending the actual breastfeeding duration.

Breast milk is universally recognized as the optimal source of nutrition for infants. The World Health Organization (WHO) and the United Nations International Children's Emergency Fund (UNICEF) recommend exclusive breastfeeding until 6 months old, with continued breastfeeding to 2 years old or beyond (1). Nevertheless, breastfeeding duration in China is still far below the global nutrition targets (2). Several studies have reported that maternal antenatal intention to breastfeed is a strong predictor of the actual duration of breastfeeding (3). Maternal postpartum intention could be even more closely associated with actual breastfeeding duration because of the shorter time interval and fewer intervening factors, compared with maternal antenatal intention. However, very little research has been done to determine whether the maternal postpartum intention to breastfeed increases the duration of feeding (4). This study was performed to examine the relationship between maternal postpartum intention and actual breastfeeding duration, to provide data for extending breastfeeding duration. Data used in this study (e.g., infants' breastfeeding status) were drawn from the "Maternal and Child Health Monitoring Project", which was implemented by the National Center for Women and Children's Health of China CDC in five districts in four provinces (Hebei, Liaoning, Hunan, and Fujian) from 2015-2017. The results showed that maternal postpartum intention to breastfeed was a protective factor for extending actual breastfeeding duration. Addressing key factors influencing a mother's postnatal intention to breastfeed may be important for prolonging this duration.

The data were collected during a surveillance project funded by the Central Financial Project, called the "Maternal and Child Health Monitoring Project" (2015-2019), which aimed to promote infants' health. This surveillance project was implemented by the National Center for Women and Children's Health of China CDC. Five districts in four provinces (Hebei, Liaoning, Hunan, and Fujian) were selected as monitoring sites in this project. These districts were selected based on their good compliance and their existing management systems for child health, according to the requirements of the National Basic Public Health Service Project covering the whole area. Pregnant women in their third trimester were recruited as participants, and their children were followed for 3 years. To be included in the project, pregnant women had to have a gestational age between 28 and 36 weeks, have a singleton birth, and have lived in the monitoring site for more than half a year.
Additionally, they had to be registered residents at the site, be expected to live in the monitoring site until the child reached 3 years of age and to ensure that their child received routine child health care, possess a handbook for maternal health care containing complete records of antenatal examinations, and agree to participate for the entire duration of the follow-up, including the provision of informed consent. Pregnant women with mental illness, brain diseases, cardiovascular and cerebrovascular diseases, endocrine diseases, and cancer were excluded. Ultimately, 2,731 mother-infant pairs were included in the project. Of the 2,731 mother-infant pairs recruited, 228 pairs were excluded from the study because breastfeeding information was missing. Ultimately, 2,503 eligible mother-infant pairs were assessed in this study. Data on maternal postpartum intention to breastfeed were acquired from mothers 1 month after birth. Investigators collected information on breastfeeding duration at face-to-face physical examinations of the infants at two follow-up time points (ages 6 and 12 months). Both maternal postpartum intention to breastfeed and the actual duration of breastfeeding were divided into three classes: ≤6 months, 7-12 months, and >12 months. Breastfeeding was defined as feeding with milk (direct from the breast or expressed) with or without other drinks, formula, or other infant foods (5).

The chi-square test was used to investigate the correlation between maternal postpartum intention to breastfeed and actual breastfeeding duration. Multinomial logistic regression was performed to analyze the association between the maternal postpartum intended period of breastfeeding and the actual period of breastfeeding after controlling for potential confounders that could be related to breastfeeding duration (including sex, gestational age at birth, method of delivery, birth weight, the timing of beginning complementary food, maternal age, parity, maternal education, annual family income, and region of the country). All data analyses were performed in SAS (version 9.4; SAS Institute). P<0.05 was considered statistically significant.

The sociodemographic characteristics of the study population are shown in Table 1. Of the mothers who intended to breastfeed up to 6 months, only 50.72% were still breastfeeding at 6 months after birth, compared to 89.58% of mothers who had intended to breastfeed for 7 to 12 months and 94.30% of mothers who had intended to breastfeed for more than 12 months. There was a positive correlation between maternal postpartum intention to breastfeed and actual breastfeeding duration (Pearson contingency coefficient=0.42; P<0.05) (Table 2). As shown in Table 3, after adjustment for confounding factors, multinomial logistic regression showed that, relative to mothers intending to breastfeed less than 6 months at 1 month after birth, the intention to breastfeed more than 6 months or more than 12 months was a protective factor for extending actual breastfeeding duration to more than 6 months [OR=5.77, 95% confidence interval (CI)=4. Moreover, the timing of beginning complementary food, parity, and region of the country were statistically significantly associated with actual breastfeeding duration (P<0.05). Starting complementary food at ≥6 months, multiparity, and living in South China were protective factors for a longer breastfeeding duration.
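To illustrate the adjusted analysis described above, a hypothetical equivalent in Python/statsmodels is sketched below. The original study used SAS 9.4, so this is only a structural analogue; the file and column names are invented for demonstration.

```python
import pandas as pd
import statsmodels.api as sm

# Assumed tidy data: one row per mother-infant pair; column names are illustrative.
df = pd.read_csv("breastfeeding.csv")

# Outcome: actual breastfeeding duration class (0: <=6, 1: 7-12, 2: >12 months).
y = df["actual_duration_class"]

# Exposure (postpartum intention class) plus the confounders named in the text.
confounders = ["sex", "gest_age", "delivery_method", "birth_weight",
               "compl_food_timing", "maternal_age", "parity",
               "maternal_education", "family_income", "region"]
X = sm.add_constant(pd.get_dummies(df[["intention_class"] + confounders],
                                   drop_first=True, dtype=float))

# Multinomial logit; exponentiated coefficients give ORs vs. the reference class.
fit = sm.MNLogit(y, X).fit(disp=False)
print(fit.summary())
```

Exponentiating the fitted coefficients yields adjusted odds ratios of the form reported in Table 3, with the lowest intention class as the reference.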
DISCUSSION

In the monitored regions, 87.06% and 50.46% of mothers continued breastfeeding at 6 months and 12 months, respectively, which was far below the global nutrition targets. This result suggests that health guidelines and education should continue to be strengthened in the monitored regions to extend the duration of breastfeeding. In addition, this study found that maternal intention to breastfeed at 1 month after birth was a protective factor for extending actual breastfeeding duration after controlling for potential confounders, which implies that promoting a mother's postnatal intention to breastfeed may be an essential measure for extending breastfeeding duration. Furthermore, regional variation was also seen in breastfeeding duration, the reasons for which might include different rates of maternal postpartum intention among the regions. The rate of maternal postpartum intention to breastfeed until 7-12 months was 37.78% in Liaoning and 69.87% in Hunan. Li et al. determined that the mean duration of breastfeeding in China from 2007-2018 was only 10 months and that a wide gap in the prevalence of breastfeeding remained among different cities (6), consistent with this study.

Studies in China conducted in Shihezi (7) and Sichuan (8) also reported a positive association between maternal postpartum intention and actual breastfeeding duration. In a study in Thailand, mothers who intended to breastfeed more than 6 months after delivery were more likely to be breastfeeding at 6 months (4). In another study, infants whose mothers had the postpartum intention to breastfeed longer than 6 months were more likely to have been breastfed at 6 months (9); our results are consistent with these findings. Furthermore, even if mothers have a strong intention to breastfeed, they may not achieve successful breastfeeding in reality if they are primiparous, are concerned about the amount of breast milk or about pain, lack professional support, or are required to return to an unsupportive work environment after giving birth (10-11). Thus, enhancing mothers' intrinsic power, including their self-efficacy regarding breastfeeding, and their extrinsic support, such as the support they receive from family and society for breastfeeding during the postnatal period, may strengthen maternal postpartum intention to breastfeed, including the intended breastfeeding duration.

The present study had some limitations. First, it was conducted at five monitoring sites in four provinces of China; the results cannot be generalized to China as a whole. Second, the need to return to work after giving birth may reduce overall breastfeeding duration; however, this variable was not collected in this study. Third, the presence of maternal illness, lack of breast milk, nipple pain during breastfeeding, and other factors that can affect breastfeeding were not considered, which may have affected the accuracy of the results. Finally, although maternal intention to breastfeed at 1 month after birth may be more closely correlated with actual breastfeeding duration, it seems more practical to educate pregnant women before delivery, or those hospitalized after delivery, to improve their intention to breastfeed than to educate mothers at 1 month after birth to improve their breastfeeding duration.
Indoxyl sulfate, a uremic toxin in chronic kidney disease, suppresses both bone formation and bone resorption

Abnormalities of bone turnover are commonly observed in patients with chronic kidney disease (CKD), and low-turnover bone disease is considered to be associated with low serum parathyroid hormone (PTH) levels and skeletal resistance to PTH. Indoxyl sulfate (IS) is a representative uremic toxin that accumulates in the blood of patients with CKD. Recently, we have reported that IS exacerbates low bone turnover induced by parathyroidectomy (PTX) in adult rats, and suggested that IS directly induces low bone turnover through the inhibition of bone formation by mechanisms unrelated to skeletal resistance to PTH. To define the direct action of IS in bone turnover, we examined the effects of IS on bone formation and bone resorption in vitro. In cultures of mouse primary osteoblasts, IS suppressed the expression of osterix, osteocalcin, and bone morphogenetic protein 2 (BMP2) mRNA and clearly inhibited the formation of mineralized bone nodules. Therefore, IS directly acts on osteoblastic cells to suppress bone formation. On the other hand, IS suppressed interleukin (IL)-1-induced osteoclast formation in cocultures of bone marrow cells and osteoblasts, and IL-1-induced bone resorption in calvarial organ cultures. In cultures of osteoblasts, IS suppressed the mRNA expression of RANKL, the receptor activator of NF-κB ligand, which is a pivotal factor for osteoclast differentiation. Moreover, IS acted on osteoclast precursors, bone marrow-derived macrophages and RAW264.7 cells, and suppressed their RANKL-dependent differentiation into mature osteoclasts. IS may induce low-turnover bone disease in patients with CKD by its direct action on both osteoblasts and osteoclast precursors to suppress bone formation and bone resorption.

Abbreviations: BMM, bone marrow macrophages; BMP2, bone morphogenetic protein 2; CKD, chronic kidney disease; CKD-MBD, CKD-related mineral and bone disease; IL, interleukin; OATs, organic anion transporters; PTH, parathyroid hormone; PTX, parathyroidectomy; q-PCR, quantitative PCR; RANK, receptor activator of NF-κB; RANKL, receptor activator of NF-κB ligand; sRANKL, soluble RANKL; TRAP, tartrate-resistant acid phosphatase.

Bone metabolism consists of bone formation induced by osteoblasts and bone resorption regulated by osteoclasts. Osteoblasts are derived from mesenchymal stem cells, and the differentiation of osteoblast precursors to mature osteoblasts is regulated by various factors and transcription factors such as osteocalcin, bone morphogenetic protein 2 (BMP2), and osterix [1,2].
Mature osteoblasts synthesize calcium phosphate crystals and extracellular matrices, such as type 1 collagen [3,4], and deposit these substances as bone tissue. Osteoclast precursors are bone marrow macrophages that express RANK, the receptor activator of nuclear factor κB, and the interaction between RANK and RANK ligand (RANKL) expressed on the osteoblast surface induces macrophage differentiation into mature osteoclasts [5,6]. Transcription factors such as NFATc1 are involved in the process of osteoclast differentiation [7]. It is well known that osteoblasts and osteoclasts cooperatively regulate bone turnover, and this communication is called bone coupling [8].

Abnormalities of bone turnover are commonly observed in patients with chronic kidney disease (CKD), and this was recently termed CKD-related mineral and bone disease (CKD-MBD) [9]. The CKD-MBD Work Group reported that various bone abnormalities, including osteitis fibrosa, adynamic bone disease, and osteomalacia, occurred in patients with CKD stage 3-5 (83%) and on dialysis (98%) [10]. It is known that the risk of hip fracture is increased in dialysis patients compared to the general population [11], and mortality associated with hip fracture in patients on hemodialysis was higher than that in fracture-free patients on hemodialysis [12]. Thus, abnormalities of bone turnover are thought to be associated with the risk of fracture and mortality in patients with CKD.

Renal dysfunction leads to the accumulation of uremic retention solutes in patients with CKD [13,14], and the European Uremic Toxin Work Group has proposed more than 100 substances that are classified as uremic toxins by molecular weight and the ability of protein binding [13,15]. Some uremic toxins show adverse biological impacts on the cardiovascular system in patients with CKD and in CKD model animals [15,16]. Indoxyl sulfate (IS) is an organic anion uremic toxin and is known as a representative toxin in patients with CKD [13]. Dietary tryptophan can be metabolized to indole by intestinal bacteria, and absorbed indole is transported to the liver, where it is converted to IS [17] (Fig. 1). IS is rapidly excreted into urine in healthy subjects, but it accumulates in the blood of patients with impaired renal function and is involved in glomerular sclerosis and renal fibrosis with the progression of CKD in rats [18,19]. Low-turnover bone disease is commonly observed in patients with CKD and has been associated with low serum parathyroid hormone (PTH) levels and skeletal resistance to PTH [20,21].
We have recently reported that IS exacerbates low bone turnover induced by parathyroidectomy (PTX) in adult rats, suggesting that IS directly inhibits bone formation by mechanisms unrelated to skeletal resistance to PTH [22]. In this study, we examined the effects of IS on bone formation and bone resorption in cultures, and we suggest a direct action of IS in bone tissues with low turnover in animals and patients with CKD.

Animals and reagents

Newborn and six-week-old ddY mice were obtained from Japan SLC Inc. (Shizuoka, Japan). All procedures were performed in accordance with the institutional guidelines for animal research, and the experimental protocol was approved by the Animal Care and Use Committee of the Tokyo University of Agriculture and Technology. IL-1 was obtained from R&D Inc. Soluble RANKL (sRANKL) was obtained from Peprotech Co. Ltd. IS was obtained from Glycosynth Co. Ltd., Cheshire, UK.

Culture of primary mouse osteoblastic cells

Primary osteoblastic cells were isolated from newborn mouse calvariae after five routine sequential digestions with 0.1% collagenase and 0.2% dispase, as described previously [23]. Osteoblastic cells were collected from fractions 2-4, combined, and cultured for 3 days in αMEM with 10% FBS under 5% CO2 in air at 37°C. After the cells reached confluence, they were trypsinized, counted, and used for the respective experiments.

Bone formation with mineralization

Primary osteoblastic cells were cultured for 14 days in a medium containing the bone-inducing factors ascorbic acid (50 µg·mL⁻¹) and β-glycerophosphate (10 mM) to form calcified bone nodules. For the control culture, a medium without bone-inducing factors was used. IS was added to the medium containing bone-inducing factors. After culturing, the areas of alizarin-positive cells were defined as mineralized bone nodules. The areas of alizarin-positive bone nodules were measured on NIH images.

Bone-resorbing activity in the organ cultures of mouse calvaria

Calvariae were collected from newborn mice, dissected in half, and cultured for 24 h in BGJb containing 1 mg·mL⁻¹ of bovine serum albumin. After 24 h, the calvariae were transferred into a new medium with or without IL-1 and IS, and cultured for 5 days. The bone-resorbing activity was expressed as the increase in the level of calcium in the medium [23].

Osteoclast formation in cocultures of mouse bone marrow cells and osteoblasts

Bone marrow cells (3 × 10⁶ cells) were isolated from six-week-old mice and cocultured with the primary osteoblastic cells (1 × 10⁴ cells) in αMEM containing 10% FBS [23]. After culturing for 7 days, the cells adhering to the well surface were stained for tartrate-resistant acid phosphatase (TRAP). TRAP-positive multinucleated cells containing three or more nuclei per cell were counted as osteoclasts.

Osteoclast differentiation from macrophages

Bone marrow macrophages were prepared by 3 days of culturing with M-CSF, followed by 5 days of culturing with or without soluble RANKL (sRANKL). RAW264.7 cells (a murine macrophage cell line) were also cultured for 5 days with or without sRANKL. TRAP-positive multinucleated cells containing three or more nuclei per cell were counted as osteoclasts.

Quantitative PCR and RT-PCR

Total RNA was extracted from mouse osteoblasts and from RAW264.7 cells using ISOGEN (Nippon Gene, Tokyo, Japan). cDNA was synthesized from 5 µg of total RNA by reverse transcriptase (Superscript II Preamplification System, Invitrogen, Carlsbad, CA).
The quantitative PCR (q-PCR) was performed with iQ SYBR Green Supermix (Bio-Rad). The primers used in the q-PCR for the mouse RANKL, osterix, Col1a1, osteocalcin, BMP2, Nfatc1, RANK, and TRAP genes were constructed from the sequence of the respective gene. RT-PCR was performed to examine the mRNA expression of the organic anion transporters (OATs) OAT1 and OAT3 in mouse osteoblasts, bone marrow macrophages, and RAW264.7 cells. The primer pairs used in the RT-PCR for the mouse OAT1, OAT3, and β-actin genes were constructed from the sequence of the respective gene. The PCR products were run on a 1.5% agarose gel and stained with ethidium bromide.

Statistical analysis

The data are expressed as the means ± SEM. Data were analyzed using one-way ANOVA, followed by Tukey's test for post hoc analysis. Statistical analyses were performed using IBM SPSS Statistics version 23 software.

IS suppressed bone formation in osteoblast cultures

We first examined the effects of IS on bone formation in primary osteoblast cultures. When osteoblasts were cultured in a medium containing the bone-inducing factors ascorbic acid and β-glycerophosphate, alizarin-stained mineralized bone nodules could be detected on day 14 (Fig. 2A). The addition of IS (30-300 µM) suppressed the formation of mineralized bone nodules in the cultures of primary mouse osteoblasts in a dose-dependent manner (Fig. 2A). The expression of bone formation-related genes such as osterix, osteocalcin, and BMP2 mRNA in osteoblasts was found to be suppressed by the addition of IS (Fig. 2B). The mRNA expression of collagen (Col1a1) tended to be suppressed by IS, but the effect was not significant. Osterix, osteocalcin, and BMP2 are all important for osteoblast differentiation into mature osteoblasts. Thus, IS directly acted on osteoblasts to suppress bone formation by inhibiting the expression of bone formation-related genes.

Effects of IS on IL-1-induced osteoclastic bone resorption

In the cocultures of mouse bone marrow cells and osteoblasts, IL-1 markedly induced the formation of TRAP-positive osteoclasts, while the addition of IS suppressed IL-1-induced osteoclast formation in a dose-dependent manner (Fig. 3A). It is well known that bone-resorbing factors such as IL-1 induce RANKL expression in osteoblasts to elicit osteoclast differentiation. We therefore examined the effects of IS on the expression of RANKL in osteoblasts. In the cultures of primary mouse osteoblasts, the addition of IS suppressed the mRNA expression of RANKL induced by IL-1 in a q-PCR assay (Fig. 3B). These results indicate that IS acts on osteoblasts to suppress the expression of RANKL and to negatively regulate IL-1-induced osteoclast formation. A mouse calvarial organ culture is a typical ex vivo assay system for defining the effects of a test compound on bone resorption and bone loss, and bone-resorbing factors such as IL-1 are known to induce bone resorption in this model. Using these ex vivo cultures, we examined the effects of IS on IL-1-induced bone-resorbing activity. [Figure 3 legend, in part: osteoblasts were treated with IL-1 (2 ng·mL⁻¹) with or without IS (300 µM) for 24 h, total RNA was extracted, and the mRNA expression of RANKL was detected by q-PCR. (C) Mouse calvariae were cultured for 24 h in BGJb medium containing 1 mg·mL⁻¹ of BSA and transferred to new media for 5 days of culture, with or without IL-1 (2 ng·mL⁻¹) and with or without IS (30, 100, or 300 µM). The concentration of calcium in the medium was measured to calculate the bone-resorbing activity. The data are expressed as the means ± SEM of three to four independent wells.
Asterisks and hashes indicate a significant difference: ***P < 0.001 vs. control; #P < 0.05, ##P < 0.01, ###P < 0.001 vs. IL-1.] The application of IL-1 markedly induced bone-resorbing activity, while IS (30-300 µM) significantly suppressed bone resorption in a concentration-dependent manner (Fig. 3C).

IS suppressed the differentiation of macrophages into mature osteoclasts

Bone marrow macrophages, which are precursor cells for osteoclasts, can differentiate into mature osteoclasts by RANK/RANKL-mediated mechanisms. To examine the possible actions of IS on the osteoclast precursors, IS was added to cultures of bone marrow macrophages and RAW264.7 cells (a mouse macrophage cell line) in the presence of sRANKL. The differentiation of M-CSF-induced bone marrow-derived macrophages into mature osteoclasts was clearly suppressed by the addition of IS (Fig. 4A). IS dose-dependently suppressed the sRANKL-dependent differentiation of RAW264.7 cells into osteoclasts (Fig. 4B). In RAW264.7 cells, sRANKL markedly induced the expression of Nfatc1, Rank, and TRAP mRNA for the differentiation into mature osteoclasts, and IS significantly suppressed the expression of these genes (Fig. 4C). As Nfatc1 is an essential transcription factor for osteoclast differentiation, IS may act on macrophages to inhibit their differentiation into osteoclasts. [Figure 4 legend, in part: (C) RAW264.7 cells were cultured for 5 days with IS (30, 100, or 300 µM) in the presence of sRANKL (100 ng·mL⁻¹), and the mRNA expression of Nfatc1, Rank, and TRAP was measured by q-PCR. The data are expressed as the means ± SEM of three to five independent wells. Asterisks and hashes indicate a significant difference: **P < 0.01, ***P < 0.001 vs. control; #P < 0.05, ##P < 0.01, ###P < 0.001 vs. sRANKL.]

Expression of OAT3 in osteoblasts, macrophages, and RAW264.7 cells

OATs are known to eliminate organic anions from cells. In the kidney, OAT1 and OAT3 are localized in the proximal tubules and are involved in the uptake of organic anions from the blood [24]. To explore the possible roles of OAT1 and OAT3 in bone tissues, we examined the mRNA expression of these OATs in osteoblasts using RT-PCR and found that osteoblasts expressed only OAT3, not OAT1 (Fig. 5). In addition, bone marrow macrophages and RAW264.7 cells also expressed OAT3 mRNA, but not OAT1 (Fig. 5). These data suggest that OAT3 may be involved in the mechanism of IS action in bone tissues.

Discussion

Using CKD rat models, previous studies have indicated that an increase in IS in the blood is related to glomerular sclerosis, renal fibrosis, and the progression of CKD in rats [18,21]. IS is known to induce proximal tubular injury via an increase in free radical production [25], cardiovascular disease [26,27], and renal anemia [28,29]. Previous studies have suggested that low-turnover bone loss is associated with low serum PTH levels and with skeletal resistance to PTH in patients with CKD [20,21]. Nii-Kono et al. [30] have shown that IS suppresses PTH-stimulated intracellular cAMP production and PTH receptor expression, and induces oxidative stress, in primary cultured murine osteoblastic cells. However, we have recently reported, using PTX rats, that IS exacerbates low bone turnover, suggesting that IS-induced low bone turnover may be due to the inhibition of bone formation by mechanisms unrelated to skeletal resistance to PTH [22].
Oral administration of indole induced low bone turnover in PTX rats, and the serum IS levels in indole-treated rats at weeks 2 and 4 were 5.1 mg·dL⁻¹ (240 µM) and 3.9 mg·dL⁻¹ (183 µM), respectively [22]. Niwa et al. [18] reported that the serum IS level was 1.8 ± 1.5 mg·dL⁻¹ (84 µM) in predialysis CKD patients and 5.3 ± 2.1 mg·dL⁻¹ (249 µM) in patients on hemodialysis before dialysis. In the present study, we found that IS at 100-300 µM acted on osteoblasts and suppressed bone formation in osteoblast cultures. The concentrations of IS used in the present study are similar to the serum IS levels in indole-treated rats and in patients with CKD, suggesting that the direct action of IS on bone formation may occur in vivo in CKD animals and patients.

Various uremic toxins accumulate in the blood during renal failure, and these toxins are classified by molecular weight and mode of protein binding. In addition to IS, p-cresyl sulfate (PCS) is a protein-bound uremic toxin, and serum PCS levels are significantly higher in patients with CKD than in healthy controls [31]. Tanaka et al. [32] have reported that PCS induces osteoblast dysfunction through the suppression of PTH-induced cAMP production and the induction of intracellular production of reactive oxygen species. Further in vivo studies are needed to define the roles of PCS in bone metabolism.

In the present study, IS acted on osteoblasts and suppressed bone formation with mineralization by inhibiting the mRNA expression of osterix, osteocalcin, and BMP2. However, the molecular mechanisms of IS action in osteoblasts are unknown. The OAT family consists of six isoforms; all OATs are expressed in the kidney, while some OATs are expressed in the liver, brain, and placenta [33]. In CKD rats [34] and patients with CKD [35], IS may induce nephrotoxicity by mechanisms involving OAT1 and OAT3. In the present study, we detected the mRNA expression of OAT3, but not OAT1, in osteoblasts, bone marrow macrophages, and RAW264.7 cells (Fig. 5). Although further studies are needed to define the roles of OATs in IS action in osteoblasts and osteoclast precursors, OAT3 may be involved in the mechanism of IS action in bone. After the uptake of IS into target cells, IS may modulate signal transduction in osteoblasts and macrophages. Tanaka et al. [32] have reported that IS induces ERK1/2 phosphorylation in mouse osteoblasts, whereas PCS enhances the JNK and p38 MAPK pathways in osteoblasts. Further studies of uremic toxins and OATs are needed to unveil the pathogenesis of CKD.

Roles of inflammatory cytokines in the pathogenesis of CKD have been reported, and increased levels of serum cytokines such as IL-1, IL-6, and TNF-α are associated with poor clinical outcomes in patients with CKD [36]. In bone tissues, IL-1 is a typical bone-resorbing cytokine associated with inflammation, but other cytokines such as TNF-α, IL-6, and IL-17 are known to be involved in the bone destruction associated with rheumatoid arthritis. Therefore, we used IL-1, a general bone-resorbing cytokine, in the present study to examine the effects of IS on bone resorption in vitro. In the present study, IS suppressed IL-1-induced osteoclast formation in the cocultures of bone marrow cells and osteoblasts, and also suppressed IL-1-induced bone resorption in calvarial organ cultures. As IS acted on osteoblasts to suppress the expression of RANKL, IS may suppress bone resorption by mechanisms involving osteoblasts.
In addition, IS acted on osteoclast precursors to suppress their RANKL-dependent differentiation into mature osteoclasts. These results are consistent with the data reported by Mozar et al. [37] using RAW264.7 cells. Based on these results, IS may act on both osteoblasts and osteoclast precursors to suppress osteoclast differentiation and bone resorption. In our previous study using rats, treatment with indole slightly suppressed bone resorption, as measured by bone morphometry markers such as OcS/BS and ES/BS, although there were no significant differences [22]. Therefore, further studies are needed to define the role of IS-regulated bone resorption in the low-turnover bone disease of patients with CKD. In conclusion, the present study demonstrated for the first time that IS directly suppresses bone formation and osteoblast/osteoclast coupling in vitro. Taken together with our previous report, we suggest that the exacerbation of low bone turnover by IS may be due to the direct action of IS in bone tissues.
Risk factors and outcome of stroke in young in a tertiary care hospital

INTRODUCTION

Stroke is an important cause of disability among adults and is one of the leading causes of death worldwide. It is a major health problem in India. The stroke burden has been rising in India, in contrast to the developed countries, where it has reached a plateau or decreased. 1 Stroke incidence rises steeply with age. Therefore, stroke in younger people is less common; however, stroke in a young person can be devastating in terms of productive years lost and impact on a young person's life. While a specific definition of "young stroke" is lacking, the vast majority of authors consider "young stroke" to pertain to individuals less than 45 years of age. 2 In India, 10-15% of strokes occur in people below the age of 40 years. 3 It is believed that the average age of patients with stroke in developing countries is 15 years younger than that in developed countries. In India, nearly one-fifth of patients with first-ever strokes admitted to hospitals are aged <40 years. 4

While a greater proportion of strokes are due to subarachnoid hemorrhage and intracranial hemorrhage in young adults (40-55%) compared to the general stroke population (15-20%), cerebral infarction is still the most common. 2 Diabetes mellitus, hypertension, heart disease, current smoking, and long-term heavy alcohol consumption are major risk factors for stroke in young adults, as in the older population. 5 It is important to identify the causative factors in young stroke patients in order to prevent recurrences. Despite its substantial societal impact, there remains a paucity of literature regarding the etiological subtyping and risk factors for stroke in young Indian patients. 6 Hence, this study was an attempt to determine the risk factors and outcomes of these cases in a tertiary care hospital.

METHODS

This study was conducted in Sri Dharmasthala Manjunatheshwara College of Medical Sciences and Hospital (SDMCMSH), a tertiary care centre in Dharwad, Karnataka. It was a cross-sectional study, conducted for a period of one year from November 2013 to October 2014. The study was approved by the institutional ethical committee. All cases of acute stroke fulfilling the WHO definition of stroke admitted to SDMCMSH, Dharwad, diagnosed and confirmed by radiological investigations, were included in the study. Old cases of stroke admitted for co-morbidities and those with neurological deficits caused by non-vascular causes were excluded. Cases aged ≤45 years were considered as stroke in young cases.
Informed consent was taken from the cases that were willing to participate in the study. A semi-structured and pre-tested questionnaire was used to assess the socio-demographic profile. History regarding smoking, alcohol consumption, diabetes, hypertension, dyslipidemia, and CVA was elicited. Hypertension was defined as a BP recording of >140/90 mm Hg on two or more readings on two or more occasions after initial screening. Patients already on antihypertensive medications were also taken as hypertensive. Diabetes was diagnosed as per the WHO recommendations for the diagnostic criteria for diabetes (fasting plasma glucose ≥126 mg/dl or 2-hour plasma glucose >200 mg/dl). Patients on antidiabetic medications were also considered diabetic. Patients were included as suffering from heart disease if they had ischemic heart disease, congestive heart failure, rheumatic heart disease, or atrial fibrillation. Dyslipidemia was taken as serum cholesterol >200 mg/dl, LDL cholesterol >130 mg/dl, or HDL cholesterol <35 mg/dl in females and <40 mg/dl in males. Cases diagnosed and treated earlier as stroke were taken as having a previous history of stroke. A family history of stroke was entertained if first-degree relatives of the patient had suffered from stroke. Smoking, tobacco chewing, and alcohol intake were based on the clinical history of past and present consumption of these substances. BMI was classified according to the WHO classification. If the patient was not in a position to respond, an immediate family member was interviewed. The outcome of the cases at the time of discharge and after 28 days of admission was assessed using the Modified Rankin Scale. 7 If the patient was discharged before 28 days, the outcome was assessed through personal telephone enquiry. Data were analysed using SPSS version 21. The chi-square test was used to determine whether the differences observed were statistically significant. A P-value <0.05 was considered significant.

RESULTS

The socio-demographic profile of stroke in young cases is given in Table 1. There was a slight male preponderance (53.8%), and most of the cases were Hindus (84.6%) and of urban locality (59.6%). Stroke was more common in the age group of 31-45 years (80.8%). The age-wise distribution of stroke in young cases is given in Figure 1. Most strokes in the young were ischemic (39; 75%), followed by haemorrhagic stroke (11; 21.2%) and subarachnoid haemorrhage (SAH) (2; 3.8%). Risk factors associated with stroke in young cases are given in Table 2. Overweight and obesity (63.4%) was the most common risk factor, followed by hypertension (50%) and cigarette and tobacco use (40.3%). When non-modifiable risk factors like sex and family history were compared with stroke in >45 years of age, as given in Table 3, family history was statistically significant for stroke in young (p value=0.0299). Table 4 gives the comparison of modifiable risk factors between stroke in young and stroke in >45 years of age; overweight and obesity was significantly associated with stroke in young. The outcome of stroke cases was assessed at discharge and after 28 days of admission (Figure 2). At 28 days, 25% of stroke in young cases had disabilities (slight disability 13.5%, moderate disability 5.8%, moderately severe disability 3.8%, and severe disability 1.9%), and mortality was found to be 11.5%.
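Before turning to the between-group comparison below, the chi-square testing described in the methods can be sketched briefly; the 2x2 counts here are invented for demonstration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = age group (<=45 years, >45 years),
# columns = risk factor present / absent. Counts are illustrative only.
table = [[33, 19],   # stroke in young: overweight/obese vs. not
         [95, 89]]   # stroke in >45 years

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A p-value below the study's 0.05 threshold would mark the factor as significantly associated with stroke in the young, as reported for family history and overweight/obesity.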
When this was compared with the outcome of stroke in the >45 years age group, in which 25.5% had disabilities and mortality was 25%, no significant association was found (Table 5). However, as far as mortality is concerned, it was much lower among stroke in young than in the >45 years age group.

DISCUSSION

In the study, data were collected and analysed from 236 acute stroke cases admitted to SDMCMSH, Dharwad, of whom 22% were ≤45 years of age. This proportion was higher than that reported by Kapoor. 12 In a study done in Lucknow, 56% had hypertension, 54% had heart diseases, and 38% were using cigarettes and tobacco. 15 In a study of stroke in young done by Dash et al. in Delhi, it was found that at the time of hospital discharge, 392 (89%) patients had mRS scores in the range of 0-2, and 37 (8.4%) patients had mRS scores of 3 or 4. Death was reported in 11 (2.5%) patients. 16 Nedeltchev et al. studied the outcome of stroke in young cases using the modified Rankin scale (mRS). In their study, 68% achieved an mRS score of 0 to 1, 26% had an mRS score of 2 to 5, and 3% were dead by 3 months. Our study had an almost similar outcome at the end of 28 days, with 63.5% achieving an mRS score of 0-1, 25% having an mRS score of 2-5, and 11.5% dead. 17

CONCLUSION

This study demonstrated the substantial role of modifiable risk factors like overweight and obesity, hypertension, cigarette smoking/tobacco consumption, and alcohol consumption in stroke in young. Primary and secondary prevention strategies targeting the traditional modifiable risk factors among the young Indian population would benefit them, and the same is emphasized. It was also found that the mortality of stroke in young cases was much lower than that of stroke in the older age group. Health care facilities need to be strengthened to provide immediate and essential care for stroke cases.
2019-03-13T13:32:21.052Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "ea2795650057f04276b3b6da321f24bf3153de2b", "oa_license": null, "oa_url": "https://ijcmph.com/index.php/ijcmph/article/download/700/596", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c1f010de7e3827f7a560915149dbdc92b25192dc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246409224
pes2o/s2orc
v3-fos-license
Serum Proteomics Identifies Immune Pathways and Candidate Biomarkers of Coronavirus Infection in Wild Vampire Bats The apparent ability of bats to harbor many virulent viruses without showing disease is likely driven by distinct immune responses that coevolved with mammalian flight and the exceptional longevity of this order. Yet our understanding of the immune mechanisms of viral tolerance is restricted to a small number of bat–virus relationships and remains poor for coronaviruses (CoVs), despite their relevance to human health. Proteomics holds particular promise for illuminating the immune factors involved in bat responses to infection, because it can accommodate especially low sample volumes (e.g., sera) and thus can be applied to both large and small bat species as well as in longitudinal studies where lethal sampling is necessarily limited. Further, as the serum proteome includes proteins secreted from not only blood cells but also proximal organs, it provides a more general characterization of immune proteins. Here, we expand our recent work on the serum proteome of wild vampire bats (Desmodus rotundus) to better understand CoV pathogenesis. Across 19 bats sampled in 2019 in northern Belize with available sera, we detected CoVs in oral or rectal swabs from four individuals (21.1% positivity). Phylogenetic analyses identified all RdRp gene sequences in vampire bats as novel α-CoVs most closely related to known human CoVs. Across 586 identified serum proteins, we found no strong differences in protein composition or abundance between uninfected and infected bats. However, receiver operating characteristic curve analyses identified seven to 32 candidate biomarkers of CoV infection, including AHSG, C4A, F12, GPI, DSG2, GSTO1, and RNH1. Enrichment analyses using these protein classifiers identified downregulation of complement, regulation of proteolysis, immune effector processes, and humoral immunity in CoV-infected bats alongside upregulation of neutrophil immunity, overall granulocyte activation, myeloid cell responses, and glutathione processes. Such results denote a mostly cellular immune response of vampire bats to CoV infection and identify putative biomarkers that could provide new insights into CoV pathogenesis in wild and experimental populations. More broadly, applying a similar proteomic approach across diverse bat species and to distinct life history stages in target species could improve our understanding of the immune mechanisms by which wild bats tolerate viruses. Relationships between bats and CoVs specifically have been of particular interest for zoonotic risk assessment (21,22). CoVs are RNA viruses found across mammals and birds, with at least seven known human viruses: two and five in the genera Alphacoronavirus and Betacoronavirus (23,24). A novel α-CoV with canine origins was recently detected in humans, but its role in zoonotic disease remains unclear (25). CoVs are highly diverse in bats, which are the likely ancestral hosts of α- and β-CoVs (26)(27)(28). The evolutionary origins of highly pathogenic CoVs (i.e., SARS-CoV, MERS-CoV, SARS-CoV-2) have also been ascribed to bats, but spillover has typically involved intermediate hosts rather than direct bat-to-human transmission (5,29,30). Our understanding of bat-CoV interactions and the bat immune response to infection remains limited and has stemmed mostly from experimental studies of a few select species, including Rousettus leschenaulti, R.
aegyptiacus, Artibeus jamaicensis, and Eptesicus fuscus (10,11,28,31,32). In vivo studies have typically found short-term CoV replication and shedding without substantial weight loss or pathology, supporting viral tolerance, although some species seem to entirely resist infection. Tolerance appears driven by innate immune processes, such as increased expression of ISGs, with little adaptive immune response. In vitro studies have further supported bat receptor affinity for CoVs (i.e., susceptibility) but little host inflammatory response in bats (10,11,16,33). Despite the clear insights afforded by experiments, model bat systems are heavily limited by logistical constraints (e.g., necessity for specialized facilities, colony maintenance) and have been mostly focused on frugivorous bats, which are relatively easy to keep in captivity compared to other dietary guilds (34,35). Transcriptomics of key tissues (e.g., spleen) has helped advance the field by identifying immune responses to infection in a wider array of species (36)(37)(38). However, such approaches are restrictive when lethal bat sampling is limited, such as when working with threatened species, in protected habitats, or in longitudinal studies to assess how these viruses persist in bat populations. Blood transcriptomes overcome such challenges to a degree and are increasingly feasible (39,40), yet such assays are only informative about the immune response in the blood itself. Proteomics, on the other hand, provides a unique and more nuanced perspective into the immune system, because the blood proteome includes proteins secreted from not only blood cells but also proximal organs such as the liver (41). Proteomics holds special promise for illuminating the innate and adaptive immune factors involved in bat responses to infection, because it can accommodate the small bat blood volumes typical of field studies (e.g., <10 µL). Recently, we surveyed the serum proteome of wild vampire bats (Desmodus rotundus) (42). Owing to its diet of mainly mammal blood, this species is the primary reservoir host of rabies lyssavirus in Latin America, and alpha- and betacoronaviruses have also been identified in this species (43)(44)(45)(46). Using only 2 µL sera from these small (25-40 gram) bats, we identified 361 proteins across five orders of magnitude, including antiviral and antibacterial components, the 20S proteasome complex, and redox activity proteins. Mass spectrometry-based proteomics can thus facilitate the relative quantification of classical immunological proteins while also providing insight into proteins yet to be fully recognized for their importance in resolving viral infection (47)(48)(49). Here, we expand our recent work on serum proteomics in vampire bats in the context of CoV infection. First, by profiling the serum proteome of the same host species in an additional year of study, we provide a more general and comprehensive characterization of the wild bat immune phenotype. Second, owing to changes in regulations for importing vampire bat samples into the United States, we also assess the impact of heat inactivation, a common method of inactivating bat sera (50), on the proteome. Lastly, and most importantly, we assess differences in the serum proteomes of wild bats with and without acute CoV infection. We aimed to identify up- and downregulated immune responses of wild bats to CoV infection, with particular interest in comparisons with results from experimental infections.
We also aimed to guide the discovery of candidate serum biomarkers of viral infection. Because we took an agnostic approach via discovery proteomics (51), such biomarkers could provide new and mechanistic insight into CoV pathogenesis in wild bats (52). Vampire Bat Sampling As part of an ongoing longitudinal study (53), we sampled vampire bats in April 2019 in the Lamanai Archeological Reserve, northern Belize. This same population was sampled in 2015 for our earlier proteomic analysis (42). For the 19 individuals included in this study, we used a harp trap and mist nets to capture bats upon emergence from a tree roost. All individuals were issued a unique incoloy wing band (3.5 mm, Porzana Inc) and identified by sex, age, and reproductive status. For sera, we collected blood by lancing the propatagial vein with 23-gauge needles followed by collection with heparinized capillary tubes. Blood was stored in serum separator tubes (BD Microtainer) for 10−20 minutes before centrifugation. Following recent CDC guidelines, all sera were inactivated for importation to the United States by heating at 56°C for one hour. We also collected saliva and rectal samples using sterile miniature rayon swabs (1.98 mm; Puritan) stored in DNA/RNA Shield (Zymo Inc). Samples were held at −80°C using a cryoshipper (LabRepCo) prior to long-term storage. Bleeding was stopped with styptic gel, and all bats were released at their capture location. Field protocols followed guidelines for safe and humane handling of bats from the American Society of Mammalogists (54) and were approved by the Institutional Animal Care and Use Committee of the American Museum of Natural History (AMNHIACUC-20190129). Sampling was further authorized by the Belize Forest Department via permit FD/WL/1/19(09). Serum specimens used for proteomic analysis were approved by the National Institute of Standards and Technology Animal Care and Use Coordinator under approval MML-AR19-0018. CoV Screening and Phylogenetics As part of a larger viral surveillance project, we extracted and purified RNA from oral and rectal swabs using the Quick-RNA Viral Kit (Zymo Research). With the exception of one bat, RNA from both swab types was available for all sera. We used a semi-nested PCR targeting the RNA-dependent RNA polymerase gene (RdRp) of alpha- and betacoronaviruses, following previous protocols (55). Amplicons were submitted to GENEWIZ for sequencing. Resulting sequences were aligned using Geneious (Biomatters) (56), followed by analysis using NCBI BLAST (57). PhyML 3.0 was used to build a maximum-likelihood phylogeny of these and additional CoV sequences (58). To assess possible risk factors for CoV infection (in oral or rectal swab samples), we fit univariate logistic regression models with bat age, sex, and reproductive status as separate predictors. Because the small sample sizes used here could bias our estimates of odds ratios, we used the logistf package in R to implement Firth's bias reduction (59). Proteome Profiling In addition to profiling serum from the 19 bats described above, we also performed another proteomic experiment to evaluate effects of heat inactivation with previously analyzed samples collected in 2015 (42). From four non-inactivated sera, we submitted an aliquot of each sample to the heat inactivation process used for our 2019 samples (56°C for one hour).
We then processed the paired non-inactivated and heat-inactivated sera and the 19 sera in two batches using the S-Trap method for digestion with the S-Trap micro column (ProtiFi; ≤100 µg binding capacity). Full details are provided in the Supplemental Information. Briefly, we used 2 µL (approximately 100 µg protein) of each serum sample for digestion, and a pooled bat serum was digested across the two batches. Proteins were reduced with DL-dithiothreitol (DTT) and alkylated with 2-chloroacetamide (CAA). Digestion was performed with trypsin at an approximate 1:30 mass ratio, followed by incubation at 47°C for one hour. The resulting peptide mixtures were then reduced to dryness in a vacuum centrifuge at low heat before long-term storage at -80°C. Before analysis, samples were reconstituted with 100 µL 0.1% formic acid (volume fraction) and vortexed, followed by centrifugation at 10,000 × g for 10 minutes at 4°C. The sample peptide concentrations were determined via the Pierce quantitative colorimetric peptide assay with a Molecular Devices SpectraMax 340PC384 microplate reader. We used the same LC-MS/MS method earlier applied for vampire bat serum proteomics (42). The original sample randomization was used to give a randomized run order, and injection volumes were determined for 0.5 µg loading (0.21-0.44 µL sample). The run order and data key are provided in Table S1. Peptide mixtures were analyzed using an UltiMate 3000 Nano LC coupled to a Fusion Lumos Orbitrap mass spectrometer (ThermoFisher Scientific). A trap-elute setup was used with a PepMap 100 C18 trap column (ThermoFisher Scientific) followed by separation on an Acclaim PepMap RSLC 2 µm C18 column (ThermoFisher Scientific) at 40°C. Following 10 minutes of trapping, peptides were eluted along a 60-minute two-step gradient of 5-30% mobile phase B (80% acetonitrile volume fraction, 0.08% formic acid volume fraction) over 50 minutes, followed by a ramp to 45% mobile phase B over 10 minutes, ramped to 95% mobile phase B over 5 minutes, and held at 95% mobile phase B for 5 minutes, all at a flow rate of 300 nL per minute. The data-independent acquisition (DIA) settings are briefly described here. The full scan resolution using the orbitrap was set at 120,000 and the mass range was 399 to 1200 m/z (corresponding to the DIA windows used), with 40 DIA windows that were 21 m/z wide, with 1 m/z overlap on each side, covering the range of 399 to 1200 m/z. Each DIA window used higher-energy collisional dissociation at a normalized collision energy of 32 with quadrupole isolation width at 21 m/z. The fragment scan resolution using the orbitrap was set at 30,000, and the scan range was specified as 200 to 2000 m/z. Full details of the LC-MS/MS settings are in the Supplementary Material. The method file (85min_DIA_40x21mz.meth) and mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE (60) partner repository with the dataset identifier PXD031075. In our earlier analysis of vampire bat serum proteomes (42), we used Spectronaut software to analyze our DIA data. Here, we instead used the DIA-NN software suite, which uses deep neural networks for the processing of DIA proteomic experiments (61). To search the bat samples, we used the NCBI RefSeq Desmodus rotundus Release 100 GCF_002940915.1_ASM294091v2 FASTA (29,845 sequences).
Our full DIA-NN settings are provided in the Supplementary Material, and search settings and the generated spectral library (*.speclib) are included in the PRIDE submission (PXD031075). Briefly, a precursor false discovery rate of 0.01 was used, search parameters were chosen based on the DIA settings, trypsin (cleaving after K and R but excluding cuts before P) was selected, and cysteine carbamidomethylation was set as a fixed modification. Since DIA-NN was made to handle protein inference and grouping assuming a UniProtKB-formatted FASTA file, and the NCBI RefSeq Desmodus rotundus Release 100 was used (RefSeq format), settings were chosen such that DIA-NN effectively ignored protein grouping, which can be performed on the backend following ontology mapping. To additionally search for CoV proteins (42), we performed a secondary search using the same settings and the addition of a Coronaviridae FASTA (117,709 sequences) retrieved from UniProtKB (2021_03 release) using taxon identifier 1118 with all SwissProt and TrEMBL entries. Search settings are included in the PRIDE submission (PXD031075). Our identified bat proteins were then mapped to human orthologs using BLAST+ (62) and a series of python scripts described previously (42) to facilitate downstream analysis using human-centric databases (see Supplementary Material for full details). In those cases where human orthologs do not exist, such as mannose-binding protein A (MBL1), we used ad hoc ortholog identifiers. Specifically, eight identified vampire bat proteins are not found in humans (APOR, Bpifb9a, HBE2, ICA, LGB1, MBL1, Patr-A, and REG1), and we thus used UniProt identifiers from chimpanzee, cow, horse, mouse, and pig (Table S2). Proteomic Data Analyses The final data matrix of relative protein abundance for all identified proteins was stratified into two datasets for differential analysis: (i) the four 2015 samples analyzed before and after heat inactivation (Table S3) and (ii) the 19 samples collected in 2019 analyzed for CoV infection (Table S4). Our analysis also included a pooled serum sample as a quality control between the two digestion batches (Table S2), and digestion was evaluated by the number of peptide spectral matches. For formal statistical analyses, missing abundance values were imputed as half the minimum observed intensity of each given protein; however, for summary statistics (e.g., means, log2-fold change [LFC]), missing values were excluded (63,64). For the inactivation analyses, log2-transformed protein abundance ratios were used for each paired sample. These ratios were used in a moderated t-test with the limma package in R to evaluate protein changes within sera samples before and after heat treatment (65), followed by Benjamini-Hochberg (BH) correction (66). For the CoV infection analyses, we first reduced dimensionality of our protein dataset using a principal components analysis (PCA) of all identified proteins, with abundances scaled and centered to have unit variance. We then used a permutational multivariate analysis of variance (PERMANOVA) with the vegan package to test for differences in protein composition between uninfected and infected bats (67). Next, we used a two-sided Wilcoxon rank sum test in MATLAB to detect differentially abundant proteins between uninfected and infected bats. We again used the BH correction to control the false discovery rate. We also calculated LFC as the difference of mean log2-transformed counts between uninfected and infected bats.
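As a minimal sketch of the differential-abundance workflow just described, and assuming a hypothetical relative-abundance matrix prot (proteins as rows, the 19 bats as columns) and a logical vector infected for CoV status, the imputation, Wilcoxon testing, BH correction, LFC calculation, and PERMANOVA could look as follows in R (the study ran the Wilcoxon tests in MATLAB; vegan is the package named in the methods):

```r
library(vegan)  # for adonis2 (PERMANOVA)

# Impute missing values as half the minimum observed intensity of each protein
imputed <- t(apply(prot, 1, function(x) {
  x[is.na(x)] <- min(x, na.rm = TRUE) / 2
  x
}))
lp <- log2(imputed)

# Two-sided Wilcoxon rank sum test per protein, then Benjamini-Hochberg correction
pvals <- apply(lp, 1, function(x) wilcox.test(x[infected], x[!infected])$p.value)
padj  <- p.adjust(pvals, method = "BH")

# LFC as the difference of mean log2 abundances between the two groups
lfc <- rowMeans(lp[, infected]) - rowMeans(lp[, !infected])

# PERMANOVA on scaled, centered protein profiles (bats as rows)
d <- dist(scale(t(lp)))
adonis2(d ~ infected, data = data.frame(infected))
```

Note that for summary statistics such as LFC the study excluded, rather than imputed, missing values, so the last averaging step above simplifies that detail.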
To next identify candidate biomarkers of CoV infection, we used receiver operating characteristic (ROC) curve analysis. We used a modified function (https://github.com/dnafinder/roc) in MATLAB to generate the area under the ROC curve (AuROC) as a measure of classifier performance with 95% confidence intervals, which we calculated with standard error, α = 0.05, and a putative optimum threshold closest to 100% sensitivity and specificity (68,69). We considered proteins with AuROC ≥ 0.9 to be strict classifiers of CoV positivity, whereas proteins with AuROC ≥ 0.8 but less than 0.9 were considered less conservative; all other proteins were considered to be poor classifiers (70). Variation in the abundance of strict classifiers by CoV infection status was visualized using boxplots. We also visualized the matrix of all candidate serum biomarkers with the pheatmap package, using log2-transformed protein abundances (scaled and centered around zero) and Ward's hierarchical clustering method (71,72). Lastly, we interrogated up- and down-regulated responses to CoV infection using gene ontology (GO) analysis. First, we programmatically accessed GO terms for all identified proteins using their associated UniProt identifiers and the UniprotR package (73). Next, we used the gprofiler2 package as an interface to the g:Profiler tool g:GOSt for functional enrichment tests (74,75). Enrichment was performed for all candidate protein biomarkers based on AuROC, with up- and downregulated proteins determined using log2-fold change. We ranked our proteins by AuROC to conduct incremental enrichment testing, with the resulting p-values adjusted by the Set Counts and Sizes (SCS) correction. We restricted our data sources to GO biological processes, the Kyoto Encyclopedia of Genes and Genomes (KEGG), and WikiPathways (WP). We ran the enrichment tests for both our strict and less-conservative protein classifiers. We note that the eight bat proteins lacking human orthologs all had AuROC < 0.8 and therefore did not require any manual GO and pathway mapping. Bat Demographics and CoV Positivity Our sample of 19 vampire bats included for proteomic analyses consisted predominantly of females (84%) and adults (79%). One male was reproductively active, whereas four females were lactating (n = 3) or pregnant (n = 1). Four of the 19 sera samples had paired oral or rectal swabs test positive through PCR for CoVs (21.1%); sequences are available in GenBank under accession numbers OM240577−80. Phylogenetic analyses of the four sequenced amplicons confirmed all positives in the genus Alphacoronavirus (Figure 1A). All four vampire bat sequences are novel α-CoVs and displayed the most genetic similarity (94.6−97.3%) to human CoVs (HCoVs; HCoV-NL63 and HCoV-229E) rather than known bat α-CoVs, including those previously found in other vampire bat colonies and other Neotropical bat species more broadly (76,77). In particular, NCBI BLAST did not identify any closely related α-CoV sequences from one of the PCR-positive vampire bat rectal swabs (BZ19-95; OM240579). Univariate logistic regressions did not find significant (unadjusted) effects of any bat demographic variables on CoV positivity. Males were no more likely than females to be infected (OR = 2.31, p = 0.49), although non-reproductive individuals (OR = 6.16, p = 0.17) and subadult bats (OR = 5.40, p = 0.13) were weakly more likely to be positive for CoVs (Figure 1B).
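A minimal sketch of the Firth-corrected univariate models behind these odds ratios, assuming a hypothetical data frame bats with a binary CoV status column and the demographic predictors (logistf is the package named in the methods; the data here are simulated placeholders):

```r
library(logistf)

set.seed(42)
bats <- data.frame(
  cov_pos   = rbinom(19, 1, 0.21),                          # hypothetical CoV status
  sex       = factor(sample(c("F", "M"), 19, replace = TRUE)),
  age_class = factor(sample(c("adult", "subadult"), 19, replace = TRUE))
)

# One univariate model per predictor, with Firth's bias reduction
fit <- logistf(cov_pos ~ sex, data = bats)
exp(coef(fit))  # bias-reduced odds ratio
fit$prob        # penalized-likelihood p-values
```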
Serum Proteome Characterization Bottom-up proteomics using DIA identified 586 proteins in the 19 bat sera samples, with relative quantification covering 5.6 orders of magnitude (Figure 2A; Table S4). The overall number of identified proteins in our former analysis (i.e., 361 proteins) (42) and the current dataset was within the same order of magnitude and had a similar dynamic range (approximately 10³−10⁸ in our prior analysis versus 10⁴−10⁹ in the current analysis). There was also high overlap in identified proteins between the datasets (91% of the original 361 proteins were included in our analysis here; Figure S1) and similar protein ranks (Table S5). Although the prior and current study have low sample sizes (n = 17 and 19, respectively) and were sampled across different years, the similarity in protein abundance, composition, and ranks suggests that these proteomic patterns (e.g., guanylate-binding proteins, circulating 20S proteasome, and hyaluronidase-1) are not the result of sampling bias and are instead likely a consistent vampire bat phenotype, with differences in protein identifications driven by technical advances. Effects of Heat Inactivation One key difference between the analyses here and our prior proteomic study of this vampire bat population is the potential for unknown technical artifacts from heat inactivation. To assess these possible effects, we compared proteomes before and after treatment of four serum samples used in our prior study (42). Using a moderated t-test of the four paired sera samples, 34 proteins showed significant changes after heat inactivation (unadjusted p < 0.05), but no differences remained after BH adjustment (even using a liberal adjusted p < 0.3). Although we found no statistically significant changes in protein abundance with heat inactivation, we observed a mean 28% absolute change across the proteome, with a maximum mean 500% absolute change. Most proteins (52.6%) changed less than 17% in response to heat inactivation (Figure S2; Table S3). Mining for CoV Proteins Given prior proteomic identification of putative viral proteins in undepleted serum, including CoVs (42), we broadened our search space to include any CoV proteins. As observing non-host proteins is a rare event, we used additional stringent criteria to verify any initial CoV peptide spectral matches (see Supplementary Material). Of the 749 CoV peptide spectral matches, none passed these more stringent criteria (Table S6). Thus, we cannot firmly say that viral proteins were identified in this set of undepleted sera, regardless of CoV status. Proteomic Differences With CoV Infection To assess differences in the serum proteome between CoV-infected and uninfected bats, we first used multivariate tests. Across the 586 identified proteins, the first two PCs explained 25.46% of the variance in serum proteomes (Figure S3). A PERMANOVA found no difference in proteome composition by viral infection status (F1,17 = 0.99, R² = 0.05, p = 0.46), although variation was greater in infected bats. Using Wilcoxon rank sum tests, we initially identified 22 proteins with significantly different abundance in CoV-infected bats (unadjusted p < 0.05), but likewise no differences remained after BH adjustment (even using a liberal adjusted p < 0.3; Figure 2B). In contrast to multivariate and differential abundance tests, ROC curve analyses identified 32 candidate protein biomarkers of CoV infection using strict (n = 7, AuROC ≥ 0.9) and less-conservative (n = 25, 0.9 > AuROC ≥ 0.8) criteria (Figure 2C).
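A minimal sketch of this per-protein ROC screening, substituting the pROC package in R for the modified MATLAB function used in the study, and reusing the hypothetical lp matrix and infected vector from the sketch above (the protein name "C4A" is used purely for illustration):

```r
library(pROC)

# AuROC for every protein as a univariate classifier of CoV positivity
auroc <- apply(lp, 1, function(x) as.numeric(auc(roc(infected, x, quiet = TRUE))))

strict <- names(auroc)[auroc >= 0.9]                # strict classifiers
loose  <- names(auroc)[auroc >= 0.8 & auroc < 0.9]  # less-conservative classifiers

# 95% CI and the threshold closest to 100% sensitivity and specificity
r <- roc(infected, lp["C4A", ], quiet = TRUE)
ci.auc(r)
coords(r, "best", best.method = "closest.topleft")
```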
Together, the 32 candidate biomarkers provided clear discriminatory power in differentiating the protein phenotypes of uninfected and infected bats (Figure 3). DISCUSSION Despite an increasing interest in bat-virus interactions, especially for CoVs given their human health relevance (22,23), we still have limited insights into the immune mechanisms involved in infection of bats (28). Here, we used serum proteomics to broadly profile the immune phenotype of wild vampire bats in the presence of relatively common CoV infections. Novel α-CoVs detected in these bats had little association with serum protein composition or abundance, although ROC curve analyses identified 7-32 candidate biomarkers of CoV infection, including AHSG, C4A, F12, GPI, DSG2, GSTO1, and RNH1. Enrichment analyses using these protein classifiers identified strong downregulation of complement, regulation of proteolysis, immune effector processes, and humoral immunity in CoV-infected bats alongside an upregulation of neutrophil immunity, overall granulocyte activation, myeloid cell responses, and glutathione processes. Such results denote a mostly cellular immune response of vampire bats to CoVs and further identify putative biomarkers that could provide new insights into CoV pathogenesis in both wild and experimental populations. Much bat immunology work to date has understandably focused on model bat systems under captive conditions (14,18). However, identifying the immune correlates of infection is especially important for wild populations, where susceptibility and tolerance to infection, alongside other immunological processes, can vary based on habitat quality and host life history (e.g., reproduction) (78). Such efforts could provide a mechanistic basis for establishing when and where pathogen pressure from bats is greatest and thus help predict viral spillover (79,80). Unlike some other global profiling techniques, proteomics has the benefit of leveraging the small blood volumes that can be obtained non-lethally from most small bats (e.g., members of the Yangochiroptera, including over 1000 species) and thus is especially amenable to the long-term, mark-recapture studies required to study bat virus dynamics (52,81). To facilitate this work, we here built on our prior proteomic characterization of Desmodus rotundus (42). We identified a core serum protein phenotype for this species from multiple years of sampling that could serve as a reference for long-term proteomic studies, although technical advances (e.g., DIA-NN) likely contributed to an expanded protein repertoire in the current study. We also assessed the impacts of heat treatment, a common inactivation method for sera, given recent shifts in United States importation regulations. Artifacts from heat inactivation were not sufficiently consistent to be statistically significant, and most serum proteins had small changes in abundance before and after this treatment. Yet given the extent of such changes, we suggest original samples should typically not be analyzed with heated samples for comparative purposes. In contexts where sera inactivation is required, however, heat treatment across samples should not bias characterization of bat serum proteomes. Such optimizations could next be applied across longitudinal timepoints to more broadly study bat-virus interactions in the wild.
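For the enrichment analyses summarized above, a minimal sketch of the g:GOSt call via gprofiler2 (the package and data sources named in the methods), assuming a hypothetical AuROC-ordered vector of human ortholog symbols for the upregulated classifiers:

```r
library(gprofiler2)

# Illustrative symbols only; the real query would be the AuROC-ranked classifiers
up <- c("RNH1", "GPI", "GSTO1")

res <- gost(query = up,
            organism = "hsapiens",
            ordered_query = TRUE,                # AuROC-ranked incremental testing
            sources = c("GO:BP", "KEGG", "WP"),  # GO biological process, KEGG, WikiPathways
            correction_method = "g_SCS")         # Set Counts and Sizes correction
head(res$result[, c("source", "term_name", "p_value")])
```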
We here focused this initial study on CoVs, which have been previously characterized as genetically diverse (α- and β-CoVs) in Neotropical bats, including but not limited to Desmodus rotundus (43,45,82,83). Our detection of CoVs from a northern Belize colony of vampire bats in swab samples with paired sera (4/19) was higher than in other CoV surveys of this species (46,76,82,84), although future studies with larger sample sizes are needed to test if this represents a true geographic difference in virus prevalence. Yet despite many available CoV sequences in GenBank from diverse bat surveys, all CoVs detected here fell within the genus Alphacoronavirus but outside of known bat α-CoV clades. Instead, these viruses were either entirely novel or more closely related to human α-CoVs, specifically HCoV-NL63 and HCoV-229E, suggesting greater genetic diversity of CoVs within vampire bats than previously recognized. Such results could be further interrogated through whole-genome sequencing of these viruses (45). Similarly, such results further suggest the possibility of high zoonotic potential (or spillback from humans to bats) for vampire bat CoVs, which likely varies based on geography given the high degree of genetic differentiation across the broad distribution of Desmodus rotundus (85)(86)(87). Such findings should be confirmed with larger sample sizes and characterization, including in vitro assessments and attempts at virus isolation. Despite identifying CoV infection in a small number of bat samples, we were unable to detect CoV proteins in the serum proteomes. Previously, we detected two CoV peptides in sera from this population, but these were likely at the edge of detection limits (42). As such detection limits are susceptible to technical artifacts, including but not limited to sample handling, protein processing, or instrument performance, heat inactivation could have affected our ability to identify similar peptides in these bat samples, especially for bats positive for CoVs by PCR. Additionally, our ability to detect viral proteins may have been further restricted by ongoing limitations in applying proteomics to wild species. In humans, over 3000 serum proteins can be detected by mass spectrometry after depletion of the most abundant proteins (41). However, using antibody-based depletion techniques is not an effective strategy in non-human mammals (47), such that undepleted serum proteomics in bats will be limited to the top 300-600 proteins, with false negatives for low abundance proteins such as those of viruses (88). Alternatively, lack of detection of CoV proteins in sera despite detection of CoV RNA in oral and rectal swabs could indicate tropism, as CoVs have been more readily detected in bat feces and saliva than in blood (89). Using our novel α-CoVs, we then tested for differential composition and abundance of serum proteins between uninfected and infected vampire bats. In both cases, we found negligible overall differences in serum proteomes with CoV infection. This lack of differences is not necessarily surprising, because bats and their α- and β-CoVs share a long coevolutionary history (27,90). However, such null results should also be qualified by the challenges posed to differential abundance tests by sample imbalance, given the small number of infected relative to uninfected bats (91).
To partly address this imbalance, we used ROC curve analyses to identify proteins with strict (AuROC ≥ 0.9; n = 7) and less conservative (AuROC ≥ 0.8; n = 25) classifier ability for infection (47,92). The unbiased query of proteins in relation to infection through proteomics can in turn result in detection of unexpected candidate biomarkers and new insights into CoV pathogenesis in bats. For example, we identified increased ribonuclease inhibitor (RNH1) as a putative biomarker. RNH1 inhibits RNase 1 and blocks extracellular RNA degradation, possibly resulting in increased tumor necrosis factor (TNF)-α activation (93). Prior cell line studies of Eptesicus fuscus have shown limited production of TNF-α upon stimulation (94), whereas those of Pteropus alecto have suggested an induced TNF-α response (16). Whether greater abundance of proinflammatory cytokines such as TNF-α occurs with CoV infection in bats would thus be a fruitful area for future work based on RNH1 differences here. We also identified lower complement C4A (one of two C4 isotypes) as another putative biomarker. In humans, lower complement C4A and C3 can signal elevated autoimmunity (95,96), and decreased complement C4 and C3 in COVID-19 patients also corresponds to disease severity (97). The processes that shape serum complement, namely complement synthesis, activation, and clearance, remain poorly characterized in bats (17), but the identification of C4A as a classifier could suggest specific explorations into how complement affects CoV infection. Other candidate biomarkers also had more direct implications for the antiviral response in bats. AHSG (alpha-2-HS-glycoprotein) is a negative acute phase reactant (98) and here was a positive predictor of CoV infection. In humans, elevated AHSG is accordingly protective against progression of disease caused by SARS-CoV (99), and decreased inflammation could also contribute to viral tolerance in bats. We also identified poly(rC)-binding protein 1 (PCBP1) as a positive, albeit weaker, predictor of CoV infection (AuROC = 0.87). This RNA-binding protein is upregulated in activated T cells to control effector T cells converting into regulatory T cells and thus stabilizes the innate immune response (100,101); PCBP1 may also prevent virus-related inflammation (102). Whereas human patients with prolonged SARS-CoV-2 infections showed lower PCBP1 compared to patients with short-term infections (103), bats with CoV infection here had elevated PCBP1 and more generally harbor more PCBP1 than humans (42). Despite focusing on two different viral genera, such results suggest both similarities and differences in how bats and humans may respond to CoV infection. More generally, the list of such candidate biomarkers identified here can be used to create accurate, sensitive, quantitative, and bat-specific parallel reaction monitoring mass spectrometry-based protein assays (104,105). Such assays could facilitate more thorough investigations into bat immune response to CoV infection. Further, such putative biomarkers are only observational correlates of naturally occurring CoV infection, but bats can be infected by a high diversity of viruses that could elicit similar immune responses (106,107). Experimental infection with CoVs would be an important future step to identify the roles of such proteins in pathogenesis.
In addition to identifying candidate biomarkers, we also leveraged these proteins to more generally assess broad up- and downregulated biological processes with CoV infection through enrichment analyses. Using all candidate biomarkers, we found that CoV-infected bats displayed downregulation of the complement system, regulation of proteolysis, immune effector processes, and humoral immunity while also showing upregulated neutrophil-mediated immunity, overall granulocyte activation, myeloid cell responses, and glutathione processes. These results in part support findings from limited experimental infections of select bat species, which have shown little humoral response to CoVs (11,31). Yet while Rousettus aegyptiacus challenged with SARS-like CoVs did not show hematological changes following infection (31), CoV-infected vampire bats here had largely upregulated cellular immune responses. Similarly, experimental studies have suggested bat tolerance of CoVs to be driven by upregulation of cytokine responses and a downregulated inflammatory response (11,16,33), but we did not find GO terms related to cytokines or inflammation in our analyses. Such discrepancies could again result from our ability to only detect the top 300-600 proteins without antibody-based depletion, which could cause low-abundance proteins (including but not limited to IFNs) to be especially difficult to characterize here (88). Alternatively, these differences could reflect distinct immune responses of bats to α-CoV infection, given that experimental studies to date have focused on β-CoVs. Additionally, these findings could also signal immunological variation within and among bat clades, given that Desmodus rotundus and the closely related Artibeus jamaicensis may respond differently to CoVs (11). Future proteomic analyses across bat species in the wild could provide a tractable means to broadly characterize host responses to viruses, including but not limited to hypothesized immune mechanisms of tolerance in this order of mammals and to infection with diverse CoVs. By leveraging the benefits of proteomics to quantify hundreds of proteins from the small sera volumes that can be obtained from most bat species (41,42), such analyses could evaluate whether particular immune responses to viruses such as CoVs are conserved across bats (e.g., downregulation of humoral immunity) and which may be features of particular bat clades. In particular, further comparative proteomic analyses across Neotropical bats, including both additional members of the Phyllostomidae as well as sister families such as the Mormoopidae (108), would illuminate whether vampire bats have particular immunological relationships with CoVs that may facilitate viral tolerance. As suggested in our work here on Desmodus rotundus, such studies could also identify putative biomarkers that may suggest novel mechanisms of pathogenesis and facilitate development of protein-specific assays to improve the resource base for studying the immunology of wild bats and bat-virus dynamics. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. ETHICS STATEMENT The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of the American Museum of Natural History (AMNHIACUC-20190129). AUTHOR CONTRIBUTIONS DB, MF, and NS conducted fieldwork. G-SL and RR ran CoV diagnostics.
MJ, AB, and BN conducted proteomic analyses and bioinformatics. DB and BN analyzed data. DB and BN wrote the manuscript with contributions from all co-authors. All authors contributed to the article and approved the submitted version. ACKNOWLEDGMENTS For assistance with bat sampling logistics and research permits, we thank Mark Howells, Melissa Ingala, Kelly Speer, Neil Duncan, and the staff of the Lamanai Field Research Center. Identification of certain commercial equipment, instruments, software, or materials does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the products identified are necessarily the best available for the purpose.
2022-01-29T14:11:34.718Z
2022-01-27T00:00:00.000
{ "year": 2022, "sha1": "797a137a386b8ff92e0e1c70f94d6ee1e760866e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fviro.2022.862961/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "c6844bcaf2d8e33fca6f37ae3b2d1df138541c5d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
52877631
pes2o/s2orc
v3-fos-license
Relevance of MicroRNAs as Potential Diagnostic and Prognostic Markers in Colorectal Cancer Colorectal cancer (CRC) is currently the third and the second most common cancer in men and in women, respectively. Every year, more than one million new CRC cases and more than half a million deaths are reported worldwide. The majority of new cases occur in developed countries. Current screening methods have significant limitations. Therefore, a lot of scientific effort is put into the development of new diagnostic biomarkers of CRC. Currently used prognostic markers are also limited in assessing the effectiveness of CRC therapy. MicroRNAs (miRNAs) are a promising subject of research, especially since a single miRNA can recognize a variety of different mRNA transcripts. MiRNAs have important roles in epigenetic regulation of basic cellular processes, such as proliferation, apoptosis, differentiation, and migration, and may serve as potential oncogenes or tumor suppressors during cancer development. Indeed, in a large variety of human tumors, including CRC, significant distortions in miRNA expression profiles have been observed. Thus, the use of miRNAs as diagnostic and prognostic biomarkers in cancer, particularly in CRC, appears to be an inevitable consequence of the advancement in oncology and gastroenterology. Here, we review the literature to discuss the potential usefulness of selected miRNAs as diagnostic and prognostic biomarkers in CRC. Introduction Colorectal cancer (CRC) accounts for about 10% of all cancer cases worldwide. The latest Global Cancer Observatory data from 2012 estimate that nearly 1.4 million new cases of CRC and over 694,000 deaths were reported. CRC is more common in developed countries, which account for over 65% of cases. In Europe, about 400,000 new patients are diagnosed with CRC each year, and more than 200,000 die annually. The majority of CRCs occur in patients over the age of 50, with more than 75% of patients being over 60 years old. The risk of CRC increases with age and is 1.5-2 times higher in men than in women. Over the past three decades, there has been a steady increase in mortality in the male population. In the female population, the increase in mortality stabilized in the mid-1990s and since then, mortality has remained relatively constant [1]. Currently used screening methods, including the fecal occult blood test (FOBT), stool DNA test, double-contrast barium enema (DCBE), and colonoscopy, have significant limitations. Colonoscopy is an invasive procedure that carries the risk of bowel perforation. Additionally, due to the nature of this test, some patients refuse to undergo it. Furthermore, the other tests mentioned are limited by either insufficient sensitivity or a high percentage of false positive results. Therefore, the development of new biomarkers for CRC screening tests is currently the subject of intensive research. However, most of the results of recent studies require confirmation on larger groups of patients before being implemented in clinical practice [2]. Currently, tumor-node-metastasis (TNM) classification is the primary prognostic marker in CRC therapy. The TNM system describes the size of the primary tumor, the degree of invasion of the intestinal wall, and the involvement of nearby lymph nodes and distant organs [3]. Although the TNM classification is the basis for CRC prognosis, this system has important caveats. Insufficient analysis of lymph node status may lead to underestimation of tumor progression, which in turn may result in ineffective treatment [3].
In addition, histologically indistinguishable CRCs may have various genetic and epigenetic backgrounds that contribute to different progressions and responses to treatment. For example, patients with TNM stage II CRC without lymph node metastasis have relatively good survival rates, but still around 25% of these patients have a high risk of relapse after surgical removal of the tumor. Unfortunately, currently used CRC prognostic markers have limited use in the identification of patients with increased risk of recurrence and are still not highly accurate in assessing the effectiveness of treatment. MicroRNAs (miRNAs), which are short, single-stranded non-coding RNA sequences of approximately 21-23 nucleotides, are interesting and promising targets in CRC therapy and diagnostics. MiRNAs are products of intracellular processing of hairpin precursor molecules. Firstly, a miRNA gene is transcribed into a primary miRNA (pri-miRNA), which is then processed in the nucleus by Drosha RNase to form a precursor miRNA (pre-miRNA). The pre-miRNA is then transported to the cytoplasm via the activity of exportin 5, which interacts with the Ran protein. In the cytoplasm, Dicer RNase is responsible for further processing of pre-miRNA. Subsequently, mature miRNA is complexed with Argonaute family proteins to form the RISC complex. Physiologically, miRNAs are responsible for epigenetic regulation of translation by attaching to the 3′-untranslated region (3′-UTR) of the target messenger RNA (mRNA) (Figure 1). In this way, miRNA mediates mRNA degradation and the repression of translation [4].
One type of miRNA molecule can regulate the expression of multiple target genes, the activation of different signaling pathways, and their crosstalk. MiRNA expression is unique to different tissues (including tumor tissue), and it participates in maintaining a differentiated cellular state. Therefore, disorders of miRNA expression may shift the cell toward an undifferentiated proliferative phenotype. MiRNA expression disorders may be caused by a variety of molecular processes, such as deletions, amplifications, mutations within the miRNA locus, epigenetic silencing, or abnormal regulation of transcription factor activity. It is estimated that miRNAs are responsible for the regulation of translation of about 30-60% of human genes [5,6]. Because miRNAs recognize many different mRNA transcripts, these molecules are important in epigenetic regulation of basic cell processes, such as proliferation, apoptosis, differentiation, and migration. Therefore, miRNAs may act as potential oncogenes or suppressors in tumor development processes. Indeed, for many different human cancers, including CRC, significant abnormalities in miRNA expression have been observed. In CRC, altered expression of a large number of miRNAs associated with the development, progression, and treatment response of this cancer has been observed [7][8][9]. Changes in miRNA expression are associated with more frequent metastases, promotion of tumor mass growth, and increased malignancy of tumor cells. MiRNA expression is also associated with the risk of recurrence, response to the therapeutic regimen, and survival time in different cancers. Therefore, miRNA profiling may be a new and valuable tool in the diagnosis and prognosis of many types of cancer, including CRC. However, knowledge in this area still remains fragmented, and previous studies were conducted only on small groups of patients. In addition, the vast majority of studies conducted so far have evaluated the expression of miRNAs only in tumor tissues. However, miRNA expression profiling in tumor tissue has significant disadvantages. The heterogeneity of tumor cells produces variable results. Obtaining reliable data in this case is problematic and requires the isolation of individual cancer cells, e.g., using the laser microdissection method. This method, however, is costly, requires specialized equipment and is therefore not commonly used in clinical practice. Furthermore, studying tumor tissues does not allow for the assessment of changes in miRNA expression during anti-cancer therapy. Alternatively, choosing serum or blood plasma as the test material is a more practical and cheaper approach to miRNA profiling. Stool is also suitable for miRNA expression analysis. In both cases, the material is readily available, which makes it possible to identify potential biomarkers at any stage during and after therapy, e.g., for early detection of cancer recurrence.
MiRNA in plasma and serum is encapsulated in exosomes, which makes it highly stable. Therefore, the material does not require any special storage or protection operations prior to analysis. Potential applications of miRNA expression analysis for diagnostic, therapeutic and prognostic purposes are summarized in Table 1. The use of miRNAs as diagnostic and prognostic markers in neoplastic diseases, particularly in CRC, appears to be an inevitable consequence of advances in clinical oncology and gastroenterology. MiRNAs as Diagnostic Biomarkers Circulating miRNAs are stable due to encapsulation in exosomes [10], but the mechanism of their formation and secretion has not yet been fully understood. There is increasing evidence that miRNAs contained in exosomes may be involved in the exchange of information between distant tissues [11]. Serum miRNA expression is known to be altered in CRC patients, and current studies attempt to investigate the correlation of the expression of certain types of miRNAs in both serum and tumor tissue. Due to the minimal invasiveness of miRNA harvesting procedures, circulating miRNAs may potentially be used as diagnostic biomarkers of various cancers, including CRC. A significant limitation of miRNA expression analysis is its poor utility in diagnosis due to the non-specificity and high variability of expression of a single miRNA type. Therefore, recent studies have attempted to evaluate the expression of miRNA sets. Such studies were conducted using microarray technology as well as quantitative real-time reverse transcriptase polymerase chain reaction (qRT-PCR) [12]. Insufficient sensitivity resulting from low concentrations of miRNA in serum or plasma of the patients is the principal limitation of a microarray experiment. qRT-PCR is characterized by better sensitivity, but evaluation of the expression of numerous miRNAs using this method is a difficult and time-consuming task. Next-generation sequencing (NGS) is a novel and promising method that may be applied to evaluate the expression profiles of many miRNAs simultaneously [13]. Moreover, novel isolation techniques allow obtaining more miRNA for analysis, which, in combination with specialized NGS protocols, can increase the utility of this method in the diagnostics of cancer, including CRC, in the near future. Some non-invasive screening methods used in CRC diagnosis are based on stool testing. Endogenous miRNAs encapsulated in exosomes are protected against RNases, in contrast to mRNAs or proteins, which are rapidly degraded. For this reason, the detection of miRNA in stool is relatively easy. However, in order to ensure sensitivity and replicability, appropriate protocols, including material preparation, extraction, and quantitative analysis of miRNA, are required in this case [14]. Stool tests allow for earlier detection of tumor cells and most tumor markers, as compared to peripheral blood tests. Therefore, stool miRNA assays may be useful in detecting precancerous lesions [15]. Stool miRNA purification kits are commercially available, making it possible to obtain high-quality and high-purity nucleic acids for further analysis. However, methods that use stool miRNA molecules as biomarkers of CRC are still in their infancy. Although many recent studies indicate that stool miRNA tests have higher specificity, higher sensitivity and higher reproducibility than peripheral blood assays, no particular stool test has yet passed the preclinical phase.
Therefore, it is necessary to carry out further studies and validation of methods based on miRNA derived from this material. MiRNAs as Prognostic Biomarkers and Therapeutic Agents MiRNA molecules also appear to be promising prognostic biomarkers, as has been shown so far in many preclinical and clinical studies [16][17][18][19][20][21][22]. The profiling of miRNA expression for prognostic purposes has been demonstrated in many human tumors, including colorectal, pancreatic, ovarian, and breast cancers, as well as glioblastoma [15,[23][24][25]. Since one type of miRNA molecule can influence the regulation of expression of many different genes, the use of these molecules in anti-cancer therapy also seems promising. Currently, there are two potential strategies: (1) the inhibition of oncogenic miRNAs and (2) the activation of suppressive miRNAs. Both strategies can be effective, as shown in preclinical studies. Direct inhibition involves antisense oligonucleotides used to sequester a given miRNA. Modified antisense oligonucleotides used to inhibit miRNA in vivo are often referred to in the literature as antagomirs [26]. For example, a study conducted by Lanford et al. [27], published in Science in 2010, showed that the use of anti-miR-122 in chimpanzees chronically infected with hepatitis C virus (HCV) contributes to an improvement in liver disease. Currently, anti-miR-122 is in phase II clinical trials of HCV therapy in humans, and this miRNA-based therapy may possibly be included in clinical treatments in the coming years. The use of miRNA antagonists seems to be a promising form of therapy, as evidenced by the successful treatment of patients with chronic HCV infection [28]. Additionally, miRNAs can also be inhibited indirectly using a variety of chemical compounds [29]. Moreover, there are also studies on genetic knockout of miRNAs in cancer cells. These studies can provide valuable information about the role of miRNAs in cancer and contribute to the development of novel anti-cancer strategies. For example, Shi et al. [30] showed in a mouse model that knockout of oncogenic miR-21 attenuates proliferation in colitis-associated CRC. In turn, Jiang and Hermeking [31] and Rokavec et al. [32] performed studies on suppressive miR-34 in mouse models. In these studies, the authors showed that genetic knockout of miR-34a and miR-34b/c can contribute to CRC progression. MiR-21 One of the most intensively studied miRNA molecules is miR-21, which is often overexpressed in CRC [23,33]. MiR-21 lowers the expression of several different suppressor genes that influence various biological functions, such as proliferation, adhesion, angiogenesis, migration, metabolism, and apoptosis [34]. Therefore, aberrant miR-21 has potentially oncogenic properties. It is worth noting that some colorectal polyps transform into malignant tumors as a result of successive genetic events. Many studies have shown that miR-21 is associated with the progression of polyps into malignant tumors, and that expression of this miRNA may be increased in CRC [35,36]. One study evaluated the expression of miR-21 in different stages of CRC in 39 surgically removed tumors and 34 polyps after endoscopic resection. Using in situ hybridization (ISH) of nucleic acids, expression of miR-21 was shown to be increased in non-malignant polyps in comparison with controls and was highest in advanced CRC tumors and also in adjacent stromal fibroblasts [36]. In another study, Bastaminejad et al.
[37], using the qRT-PCR method, investigated the expression level of miR-21 in serum and stool samples from 40 patients with CRC and 40 healthy controls. The expression level of this miRNA was significantly up-regulated in serum (12.1-fold) and stool (10.0-fold) in CRC patients, compared to the control group. The sensitivity and specificity of the serum miR-21 expression level were found to be 86.05% and 72.97%, respectively, and the sensitivity and specificity of stool miR-21 expression were 86.05% and 81.08%, respectively. The expression level of miR-21 was able to significantly distinguish CRC stages III-IV from stages I-II (according to the American Joint Committee on Cancer (AJCC) TNM staging system) in stool samples with a sensitivity and a specificity of 88.1% and 81.6%, respectively, and in serum samples with a sensitivity and a specificity of 88.10% and 73.68%, respectively. Significantly increased miR-21 expression was also demonstrated in stool samples of 88 CRC patients compared to a control group of 101 healthy volunteers [38]. Similar results were obtained in 29 patients with CRC and eight healthy subjects [39]. Therefore, the expression of this miRNA in tumor tissues as well as in serum and stool may be a potential and minimally invasive diagnostic biomarker of CRC. MiR-21 overexpression is closely related to proliferation and lymph node metastases in CRC, which are important prognostic factors in this type of cancer. Analysis of the expression of miR-21 derived from CRC tissues may also be helpful in prognosis. Fukushima et al. [40] assessed the prognostic value of miR-21 in a group of 306 CRC patients. The authors found that high miR-21 expression was correlated with low overall survival (OS), as well as low disease-free survival (DFS), in CRC patients in Dukes stages B, C, and D [40]. In another study, the prognostic value of miR-21 was also considered in patients classified in the TNM staging system. After tumor tissues from 301 patients with varying degrees of CRC were investigated, a statistically significant correlation between miR-21 expression and prognosis was observed [23]. Moreover, high expression of this miRNA was associated with low OS. Oue et al. [23] demonstrated that miR-21 expression in tumor tissues is significantly increased in patients with tumors infiltrating adjacent organs (T4), compared to patients with tumors limited to the colon (T1-T3). The study also showed a similar relationship in patients with regional lymph node metastases present (N1) compared to patients without cancer in the lymph nodes (N0). Furthermore, high miR-21 expression in CRC patients was correlated with insensitivity to 5-fluorouracil (5-FU) treatment. Oue et al. [23] analyzed the expression of miR-21 in German (stage II, n = 145) and Japanese (stage I-IV, n = 156) cohorts of patients using the qRT-PCR method. MiR-21 overexpression was associated with poor prognosis in both Japanese (stage II/III) and German patients (stage II). These correlations were not dependent on other clinical data in a multivariate model. In addition, the use of adjuvant chemotherapy did not benefit patients with high miR-21 expression in either cohort. Similar correlations were also obtained in other studies [16,19,[41][42][43]. Moreover, Schetter et al. [16], using the ISH method, observed a high expression level of miR-21 in colonic epithelial cells in tumor tissues compared to adjacent non-tumor tissues. In turn, Nielsen et al.
[42] detected miR-21 expression mainly in stromal fibroblasts adjacent to tumors. On the other hand, Xia et al. [44] showed in a meta-analysis of miR-21 expression profiles of 1174 CRC tissue samples that overexpression of this miRNA is associated with low OS, but there was no correlation with the carcinoembryonic antigen (CEA) level. Additionally, Chen et al. [45] and Peng et al. [46] in their meta-analysis studies showed that miR-21 expression in tumor tissues is associated with poor DFS and OS in CRC patients. However, Chen et al. [45] revealed a significant correlation between miR-21 expression and poor OS in studies based on qRT-PCR analysis but not the fluorescence in situ hybridization (FISH) method. Nonetheless, a few previous studies showed that a higher miR-21 expression level detected with the use of the ISH method is associated with poor recurrence-free survival of CRC patients in stage II [41,47]. Moreover, in these studies, miR-21 expression was also detected mainly in stromal fibroblasts adjacent to tumors and only in a few samples in cancer cells. All the studies discussed above indicate that the analysis of miR-21 expression in tumor tissues may be a potential, but certainly not ideal, biomarker of CRC prognosis. The prognostic value of miR-21 expression in the blood and stool of CRC patients is also the subject of intensive research. Kanaan et al. [48] observed significantly increased plasma levels of miR-21 in CRC patients. In turn, Toiyama et al. [33] evaluated the expression of miR-21 in a cell culture medium from two different CRC lines, and separately in serum collected from 12 CRC patients and 12 healthy volunteers, and confirmed that this miRNA belongs to the secretory group of the miRNAs. The same research group subsequently measured miR-21 expression in 246 blood samples from CRC patients, 53 healthy volunteers, and 43 patients with colorectal polyps. They also compared the expression of miR-21 in serum and tumor tissues in 166 paired samples. A statistically significant increase in serum miR-21 expression was observed in patients with benign polyps and in those with CRC. Moreover, a decrease in the expression of this miRNA in serum was observed in patients after surgical removal of the tumor [33]. Furthermore, many studies showed that expression of miR-21 both in tissue and serum samples of CRC patients is associated with lower OS and DFS [33,49]. However, Chen et al. [45] showed no significant association of serum miR-21 expression with poor OS of CRC patients in their meta-analysis studies (421 patients). In another study, it was shown that the increase in miR-21 expression in the stool of CRC patients may be correlated with TNM classification [50]. Similarly, Bastaminejad et al. [37] revealed that increased expression of miR-21 was associated with AJCC TNM staging, more strongly with stages III and IV than with stages I and II, in both serum and stool samples of 40 CRC patients. The above-mentioned studies show that miR-21 expression is associated with tumor size, metastases and low patient survival. Therefore, the expression of this miRNA in tumor tissues as well as in serum and stool may be a potential and minimally invasive prognostic biomarker of CRC.

MiR-29 Family

Another promising potential CRC biomarker is the miR-29 family, which includes three related miRNAs: miR-29a, miR-29b, and miR-29c. This family is associated with various molecular functions, such as the regulation of cell proliferation, cell senescence and tumor metastasis.
Therefore, these molecules can participate in carcinogenesis and progression. It has been shown that the expression of miRNAs belonging to this family is altered in many different cancers [51]. Wang et al. [52] performed a study in 114 patients with CRC: 56 subjects without metastasis and 58 with liver metastasis, which is commonly found in this type of cancer. The authors demonstrated that the expression of miR-29a in the serum of patients with liver metastasis was significantly increased. In addition, a significantly increased expression of miR-29a was also observed in patients in stage T4, compared to T2. The authors concluded that miR-29a enables the early detection of liver metastasis in CRC [52]. In addition to early metastasis detection, miR-29a was also tested for potential use as a diagnostic biomarker for CRC. Huang et al. [53], using qRT-PCR, studied the expression of 12 miRNAs (miR-17-3p, miR-25, miR-29a, miR-92a, miR-134, miR-146a, miR-181d, miR-191, miR-221, miR-222, miR-223, and miR-320a) in the plasma of patients with advanced stages of colorectal neoplasia (CRC and advanced adenomas), compared to a group of healthy volunteers. It was shown that the expression of two miRNAs, miR-29a and miR-92a, can have a significant diagnostic value in CRC. For miR-29a, the area under the curve (AUC) was 0.844, while for miR-92a, the AUC was 0.838 in differentiating patients with CRC from healthy volunteers. The utility of both miRNAs was also demonstrated in the differentiation of advanced adenomas from normal tissues. In this case, the AUC value for miR-29a was 0.769, while for miR-92a, it was 0.749. Overall, a receiver operating characteristic (ROC) analysis for both miRNAs combined showed an AUC of 0.883 (sensitivity = 83.0% and specificity = 84.7%) in the differentiation of CRC and an AUC of 0.773 (sensitivity = 73.0% and specificity = 79.7%) in the differentiation of advanced adenomas (a worked sketch of how such ROC metrics are obtained follows this paragraph). Similarly, Al-Sheikh et al. [54] revealed up-regulation of miR-29 and miR-92, and down-regulation of miR-145 and miR-195, in 20 CRC patients (both in tissue and plasma) compared to controls. The above-mentioned results suggest that the determination of miR-29a and miR-92a expression in plasma may be a novel and potential biomarker in the diagnosis of CRC. MiR-29b is also a member of the miR-29 family. This miRNA inhibits proliferation and induces apoptosis in CRC cells. MiR-29b mediates the inhibition of the epithelial-mesenchymal transition. In many tumors that originate from the epithelium, including CRC, the epithelial-mesenchymal transition is considered to be a key process in the initiation of metastasis. Li et al. [55] showed decreased expression of miR-29b in tissue and plasma samples obtained from CRC patients compared to controls. In addition, Basati et al. [56] showed down-regulation of miR-29b and miR-194 in serum samples obtained from 55 CRC patients compared to controls. Moreover, these authors showed a negative correlation between the expression of these miRNAs and TNM stage. MiR-29 is also a potential prognostic biomarker of CRC. Tang et al. [57] analyzed the expression of miR-29a and KLF4 mRNA in 85 tumor tissues of CRC patients and in CRC cell lines using the qRT-PCR method. It was shown that reduced expression of KLF4 mRNA is associated with the presence of metastasis. Moreover, increased miR-29a expression indicated the presence of metastasis and worsened prognosis of patients with CRC. It is known that KLF4 is a target of miR-29a and that it acts to inhibit metastasis by reducing MMP-2 and increasing E-cadherin expression [57].
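Since sensitivity, specificity and AUC values are quoted throughout this review, a minimal sketch of how such ROC metrics are computed may be helpful. The Python code below is purely illustrative: the expression values are invented for the example and do not come from any of the cited studies.

# Toy relative expression levels (hypothetical, not data from the cited studies).
cases = [3.1, 2.4, 4.0, 2.9, 3.6]      # patients with CRC
controls = [1.0, 1.8, 1.2, 2.6, 1.5]   # healthy subjects

def auc(pos, neg):
    """AUC = probability that a randomly chosen case scores higher than a
    randomly chosen control (ties count as 1/2); this rank-based identity
    is equivalent to the area under the ROC curve."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(pos, neg, cutoff):
    """Sensitivity and specificity when 'expression >= cutoff' calls CRC."""
    sens = sum(p >= cutoff for p in pos) / len(pos)
    spec = sum(n < cutoff for n in neg) / len(neg)
    return sens, spec

print(auc(cases, controls))             # 0.96 for these toy numbers
print(sens_spec(cases, controls, 2.5))  # (0.8, 0.8)

In the cited studies, the same construction is applied to qRT-PCR expression values from far larger cohorts, and the cutoff is chosen from the ROC curve to balance sensitivity against specificity.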
The Tang et al. study [57] showed that high expression of miR-29a is associated with metastasis and poor prognosis. The predictive value of miR-29a was also shown in stage II CRC. Weissmann-Brenner et al. [58] performed studies on 110 CRC patients (51 with stage I cancer and 59 patients with stage II cancer according to the AJCC TNM staging system) using miRNA microarrays, and afterwards verified the microarray results using the qRT-PCR technique. RNA was extracted from formalin-fixed paraffin-embedded tumor tissues. The authors defined a poor prognosis as a recurrence of the disease within 36 months of surgery. Patients with a good prognosis (n = 87; 79%) and a poor prognosis (n = 23; 21%) were compared. There were no statistically significant differentially expressed miRNAs between good- and poor-prognosis stage I CRC patients among the set of 903 analyzed miRNAs. On the other hand, the expression of miR-29a was significantly higher in stage II CRC patients with a good prognosis compared to the poor-prognosis group. High expression of this miRNA was associated with longer DFS in both univariate and multivariate analysis. In the case of miR-29a expression, the positive predictive value (PPV) for non-recurrence of the disease was 94% (two recurrences among 31 cases). Differences in miR-29a expression were confirmed using qRT-PCR, and this method confirmed the effect of overexpression of this miRNA on prolonging DFS. This study demonstrated a significant association of the miR-29a expression level with the risk of CRC recurrence in stage II patients. For the patients in stage I, no such correlation was demonstrated. Moreover, significantly decreased expression of miR-29a and miR-29c was reported in tumor tissues of 43 early-recurrence patients compared to the control group [59]. Although increased expression of both miR-29a and miR-29c was associated with a better outcome after 12 months of therapy, the authors suggested that only miR-29a may be used as a predictive marker for an early recurrence of the disease. The low PPV of miR-29c in this case may result from the short follow-up time of the patients and the small study group [59]. In another study on 245 patients by Inoue et al. [60], the miR-29b expression level in tumor tissues was used to divide the patients into two groups. The reference value was the median expression of this miRNA. The authors concluded that higher expression of miR-29b is associated with higher five-year DFS and OS values. Analysis of the severity of the disease showed that miR-29b expression reflects the five-year DFS and has a significant prognostic value, but only in patients with stage III CRC. In addition, a low level of miR-29b expression was also a predictor of lymph node metastasis. These findings confirmed the prognostic value of this miRNA in stage III CRC patients. In another study, Yuan et al. [61] studied the expression of miR-29b in tumor tissue and adjacent normal mucosa samples of 41 patients using the qRT-PCR method. The authors found a significant decrease in the expression of this miRNA in CRC and concluded that the level of miR-29b may be associated with the size of a tumor, clinical status and lymph node metastasis. Basati et al. [56] studied 55 serum samples of CRC patients and revealed that lower expression of miR-29b is correlated with poor prognosis. In turn, Ulivi et al.
[62] showed that a higher miR-29b level in plasma samples is associated with longer progression-free survival (PFS) and OS in metastatic CRC patients treated with bevacizumab-based chemotherapy. As indicated by numerous studies, analysis of changes in miR-29 expression may be helpful in assessing an early recurrence and evaluating DFS in CRC patients.

MiR-34 Family

There are also studies on miR-34, a group of miRNAs that includes miR-34a, miR-34b and miR-34c. These molecules show suppressor properties and are regulated by the p53 protein and DNA hypermethylation. The miR-34 group influences various processes that occur in tumor cells, such as differentiation, drug resistance and metastasis [63]. For example, miR-34a overexpression inhibits NOTCH signaling and suppresses symmetric cell division, which prevents expansion of the CRC stem cell niche [64]. Wu et al. [65] studied the possibility of using miR-34 as a potential diagnostic biomarker for CRC. The authors showed abnormal methylation of the miR-34a, miR-34b, and miR-34c genes in tissue and stool samples of 82 CRC patients. In turn, Aherne et al. [66] reported higher expression of miR-34a in plasma samples obtained from CRC patients compared to controls. MiR-34 expression has also been found to be useful for CRC prognosis. The usefulness of miR-34 as a biomarker of a recurrence of the disease was evaluated in two independent groups of 268 CRC patients. It was shown that miR-34a expression in CRC tissues is directly proportional to DFS, and therefore this molecule may be a good prognostic factor in assessing the risk of the recurrence of CRC. In addition, the expression of miR-34a was significantly higher in patients with high expression of p53 compared to those with low expression of this protein. The authors suggested that miR-34 inhibits the growth and invasiveness of CRC in a p53-dependent manner, which allows this miRNA to be used as a potential biomarker for a recurrence in patients with stage II and stage III CRC [67]. The PAR2 receptor also plays an important role in the progression of CRC. Previous reports have indicated that miR-34a expression is inhibited by PAR2, which results in increased synthesis of cyclin D1 and transforming growth factor β (TGF-β) in CRC cells [68]. Furthermore, silencing of miR-34a expression through promoter methylation in CRC tissues is associated with the occurrence of metastases [69]. In other studies, a lower expression of this miRNA was observed in some patients with CRC in tissue [70] and serum/plasma samples [71], which suggests that miR-34a may play a role in the progression of this cancer. Li et al. [72] showed that lower expression of miR-34a in CRC tissues is correlated with lymph node metastasis and TNM stage. Zhang et al. [73] performed studies on 103 CRC tissue samples and, with the use of the ISH technique, showed that miR-34a expression was down-regulated compared to adjacent normal mucosa samples. Moreover, these authors indicated that the lower expression of this molecule is correlated with more distant metastasis and shorter OS time. Studies on the effect of the miR-34 group on the prognosis of CRC have also been performed. The purpose of these studies was to determine the relationship between miR-34b and miR-34c expression in CRC tissues and the development of this disease. Samples were obtained from 159 American and 113 Chinese patients with stage I-IV CRC. Using the qRT-PCR method, Hiyoshi et al.
[74] showed an increased expression of miR-34b and miR-34c in advanced CRC, which was associated with poor prognosis in both study groups. Similarly to miR-34a, the expression of miR-34b and miR-34c is also regulated by the p53 protein at the transcriptional level. The results of all of these studies show that miR-34 may be an interesting prognostic tool and may be used to assess the risk of a recurrence in patients with CRC. Furthermore, recent studies have examined the possibility of using miR-34 as a potential therapeutic agent. A low level of this miRNA was observed in the DLD-1 CRC cell line, which was resistant to 5-FU. Restoration of miR-34a expression caused the sensitization of cells to 5-FU treatment and resulted in an inhibition of cell growth [75]. Moreover, Sun et al. [76] showed that miR-34a expression was down-regulated in blood samples of CRC patients after oxaliplatin-based chemotherapy. A negative correlation between miR-34a and TGF-β/SMAD4 expression was also noted. The authors suggested that changes in miR-34a and TGF-β/SMAD4 expression can lead to activation of macroautophagy and oxaliplatin resistance in CRC cells.

MiR-124

MiR-124 is known to inhibit cell proliferation, metastasis and invasion in CRC. This miRNA not only down-regulates rho-associated protein kinase 1 (ROCK1) [77], which functions as an oncogene, but also inhibits the activity of cyclin-dependent kinase 4 (CDK4), which is responsible for cell cycle progression at the G1/S checkpoint [78]. Studies have shown that, in CRC cells, the expression of miR-124 is silenced through DNA methylation [79,80]. Since the miR-124 gene is more likely to be methylated in CRCs compared to other tumors, the DNA methylation status of this miRNA may be used as a specific marker for CRC. Harada et al. [79] performed the detection of DNA methylation in bowel lavage fluid for CRC screening. These authors analyzed DNA methylation status in a total of 508 subjects: 56 with CRC, 53 with advanced adenoma, 209 with minor polyps, and 190 healthy individuals. Three genes showed the greatest sensitivity for CRC detection (miR-124-3, LOC386758, and SFRP1) after training set analysis (n = 345). A scoring system based on the methylation of these three genes achieved 82% sensitivity and 79% specificity, and the AUC was 0.834. These results were subsequently validated in an independent test set (n = 153; AUC = 0.808). In another study, Xi et al. [77] investigated the expression of ROCK1 and miR-124 in CRC patients using 68 paired tissue specimens (38 cases of non-metastatic CRC and 30 cases of metastatic CRC). The use of qRT-PCR revealed that expression of miR-124 was higher in normal compared with CRC tissues, and in non-metastatic compared to metastatic CRC tissues. In contrast, ROCK1 was significantly overexpressed in CRC compared with control tissues, and in metastatic compared with non-metastatic CRC tissues. The above-mentioned results suggest that miR-124, as a tumor suppressor, may play a role in tumor growth and metastasis. Recently, the miR-124a level has also been studied in 40 patients with ulcerative colitis (without colorectal cancer), four patients with CRC or inflammatory dysplasia, eight patients with CRC (without an inflammatory background), and 12 healthy volunteers. It was found that the miR-124a-1, miR-124a-2, and miR-124a-3 genes are methylated in tumor tissues.
The authors suggested that the methylation of miR-124a-3 occurring during oncogenesis in patients with ulcerative colitis can be used to evaluate the individual's risk of developing cancer [81]. MiR-124 may also be a prognostic marker for CRC. Wang et al. [82] studied 96 tumor tissue samples and showed that down-regulated expression of miR-124 is correlated with poor OS and DFS in CRC patients. Moreover, Jinushi et al. [83] showed that higher expression of miR-124-5p (both in plasma and tissues) was associated with a better prognosis of CRC patients. In turn, Slattery et al. [84] performed studies in 1893 patients with CRC. The authors found that miR-124-3p is among the miRNAs that are infrequently highly expressed in tumor tissues, and showed that up-regulation of miR-124-3p can worsen the prognosis of these CRC patients.

MiR-130b

The direct target of miR-130b is the PPARγ receptor, whose inhibition results in the regulation of the expression of cadherin E, vascular endothelial growth factor (VEGF) and phosphatase and tensin homolog (PTEN). Since tissue miR-130b overexpression was observed in stage III and IV CRCs, it is suggested that miR-130b-PPARγ signaling may play a significant role in increasing tumor malignancy. Furthermore, evaluating the expression of proteins in this pathway, including miR-130b, may be a promising prognostic biomarker in CRC [85]. Finally, a study performed in 53 cancer and non-cancerous samples [86] suggested miR-130a as a good biomarker of CRC because of its correlation with TNM staging and lymph node metastasis.

MiR-155

The sequence of this miRNA is located in a non-coding region of the MIR155 host gene (BIC). Altered expression of miR-155 has been observed in many different tumors, and is associated with severity of disease, progression, and response to treatment. Interestingly, Sempere et al. [87], using the ISH method, observed that miR-155 expression is detected mainly in tumor-infiltrating immune cells. Lv et al. [88] examined the possibility of using serum miR-155 expression as a diagnostic tool. Using qRT-PCR, they measured the expression levels of miR-155 in 146 CRC patients and 60 healthy controls. Serum miR-155 was up-regulated in CRC patients compared with the matched healthy controls. Moreover, ROC curve analysis indicated that miR-155 is a suitable marker for discriminating CRC patients from healthy controls, with an AUC of 0.776. Therefore, this molecule can be used as a potential tumor biomarker in the diagnosis of CRC. MiR-155 can also play an important role in CRC prognosis. Shibuya et al. [43] showed that patients with an increased expression of miR-155 in tumor tissues were characterized by shorter OS and DFS, compared to those with lower expression of this miRNA. In another study, multivariate analysis also demonstrated a relationship between the level of miR-155 expression and poor prognosis in CRC, depending on the severity of the disease. The control group consisted of 60 healthy volunteers, while the experimental group consisted of 146 patients. The authors did not observe changes in the miR-155 serum level in patients with stage I CRC, but reported a statistically significant overexpression of this miRNA in patients with stages II-IV of the disease [88]. In another study, the serum CEA level and the miR-155 expression in tissues were measured before and after surgery in 84 CRC patients. It is well known that CEA is used for determining the prognosis, evaluating the effectiveness of therapy and monitoring the recurrence of CRC.
In this study, miR-155 expression was observed to be significantly increased in patients with CRC, and this increase was associated with metastases and a recurrence of the disease [89]. Therefore, the evaluation of miR-155 expression in association with serum CEA may provide additional diagnostic information and enable a more accurate assessment of the risk of metastasis of CRC. A statistically significant increase in miR-155 expression in tumor tissues, compared to normal samples, was also observed by Zhang et al. [90] in a study of 76 patients with CRC. In addition, the authors observed a correlation between miR-155 expression and lymph node and distant metastases and disease progression. Moreover, they observed that miR-155 overexpression inhibited E-cadherin expression and positively regulated the zinc finger E-box binding homeobox 1 protein (ZEB-1), which resulted in increased cell migration and metastases. Similarly, Qu et al. [91] revealed an association of miR-155-5p expression in tumor tissues with tumor location, grade, TNM stage and distant metastasis. Ulivi et al. [62], in a multivariate analysis, also showed that increased expression of circulating miR-155 is associated with shorter PFS and OS in metastatic CRC patients treated with bevacizumab-based chemotherapy. The above-mentioned observations indicate that miR-155 may play a role in the development and metastasis of CRC.

MiR-224

Recent reports have revealed that miR-224 may influence many processes that are associated with tumor cell growth and development, such as proliferation, growth, differentiation and apoptosis [92]. Some groups have investigated the expression of miR-224 in CRC patients. Zhu et al. [93], in their retrospective analysis of miR-224 levels in fecal samples from 80 CRC patients and 51 normal controls, found significantly lower miR-224 levels in the feces of CRC patients than in those of normal volunteers. The authors suggested that the miR-224 expression level in feces can be used for screening and early diagnosis of CRC. MiR-224 is also a potential prognostic biomarker in CRC. Zhang et al. [94] evaluated the clinical and pathological information of 40 patients with a recurrence and 68 patients without a recurrence within three years after a surgical intervention. Moreover, using the qRT-PCR and Western blot methods, the authors analyzed samples from all 108 patients with stage I and II CRC. They showed that miR-224 is involved in the regulation of the SMAD4 protein, which is involved in cell signaling. SMAD4, together with other proteins from this family, forms a DNA-binding complex that acts as a transcription factor. Furthermore, a significant increase in miR-224 expression in CRC tissues was observed in the study, and this change was associated with a higher risk of a recurrence and a shorter DFS. In another study, Ling et al. [95] showed that miR-224 is an activator of metastasis and that the target of this miRNA is SMAD4. The authors concluded that an evaluation of miR-224 alone, or an evaluation of miR-224 together with SMAD4, may be an independent prognostic marker in patients with CRC. MiR-224 expression in tumor tissues and its association with the survival of patients were evaluated in 449 CRC patients. In this study, the patients were divided into two groups, characterized by low and high levels of miR-224 expression, respectively. A shorter OS and a shorter survival time without metastases were observed in patients with miR-224 overexpression. Moreover, Wang et al.
[96] showed an inverse correlation between SMAD4 and miR-224 expression in tumor tissues of 40 CRC patients. These authors also revealed that miR-224 can regulate USP3 expression and that its higher expression is associated with poor prognosis. In turn, Liao et al. [97] observed a statistically significant increase in the expression of this miRNA in tumor tissues of patients with an aggressive CRC phenotype and poor prognosis. In another study, miR-224 expression was evaluated in 79 patients with CRC and 18 healthy volunteers. The authors observed a significant inhibition of miR-224 expression in tumor tissues. Since the molecular target of miR-224 is CDC42, a lower expression of this miRNA reduces tumor cell migration. In general, the study indicated an important role of miR-224 in the migration of CRC cells. The authors concluded that miR-224 may be a promising biomarker in evaluating the development of CRC [98]. Furthermore, Zhang et al. [99] showed in their meta-analysis that an increased level of miR-224-5p is correlated with poor OS of CRC patients.

MiR-378

MiR-378 is known to play a significant role in the development of different types of cancer, including CRC. Current studies have shown that miR-378 is overexpressed in CRC cells and that its targets include the FUS-1 and SUFU suppressor genes [100,101]. In addition, miR-378 is involved in tumor progression by promoting cell survival, migration and angiogenesis [102]. Significant differences in the level of blood and tumor miR-378 expression between oncological and healthy subjects have been observed. Zanutto et al. [103] analyzed miRNA expression in serum samples from 65 CRC patients and 70 healthy volunteers, and found significantly increased miR-378 levels in CRC patients compared to the control group. At the same time, the authors observed a statistically significant decrease in the expression of this miRNA after the surgical removal of the tumor. Similar results were obtained in patients who had no recurrence of the disease within four to six months after surgery. The results suggest that serum miR-378 levels may be useful for differentiating CRC patients from healthy subjects, and also indicate that miR-378 is synthesized in the tumor tissue and that its concentration is associated with tumor mass and possible recurrence. In turn, Wang et al. [104] qualified miR-378 as a tumor suppressor after analyzing miRNA expression in 47 CRC samples that were matched with normal tissue samples. In another study, Zhang et al. [105] observed an inhibition of miR-378 expression in 84 CRC samples compared to normal mucosal samples. Similarly, Zeng et al. [106] showed lower miR-378 expression in 27 CRC samples compared to paired adjacent normal samples. There have also been studies that showed an association of a reduced expression of miR-378 in cancer tissues with increased tumor size, metastasis and shorter OS in patients with CRC [105,107]. The above-mentioned results suggest that miR-378 may play an important role in carcinogenesis and may have clinical value as a potential biomarker of CRC.

Conclusions

Our paper presents the latest reports on the diagnostic and prognostic values of selected miRNAs in CRC. Although the number of published papers that describe miRNAs as potential biomarkers for CRC has increased significantly over the past decade, clinical knowledge still remains fragmented. Only two miRNAs (miR-21 and miR-29) have been described in more detail in many previous studies.
However, it is necessary to conduct further prospective validation studies before translating this knowledge into clinical use. Most of the current findings come from preliminary studies, which are often not free of methodological limitations such as small sample size, lack of detailed patient information, untested replicability, and statistical errors. It is also worth noting that using the expression of a single miRNA as a diagnostic or prognostic biomarker of CRC is often limited due to insufficient specificity and sensitivity. Currently, many groups of researchers are investigating miRNA panels as CRC biomarkers, which appears to be a more promising strategy than the use of single-miRNA tests. The development of panels containing many miRNA biomarkers seems to be essential and may enable more accurate diagnoses and prognoses of CRC in the future. However, the cost-benefit issue is also important in this case. In addition, for every potential miRNA biomarker, it is necessary to understand its molecular and biological functions as well as the mechanisms that are associated with its regulation. Understanding these processes is key to clinical application and to the identification of new therapeutic targets.

Conflicts of Interest: The authors declare no conflict of interest.
2018-10-11T13:15:26.027Z
2018-09-27T00:00:00.000
{ "year": 2018, "sha1": "fd49f31478d95f6b503eddb7a4f850559dc7a327", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/19/10/2944/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "74ea9e1ffb0d95073370640bacde1ba1f03c7731", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
21451200
pes2o/s2orc
v3-fos-license
A short proof of the equivalence of left and right convergence for sparse graphs

There are several notions of convergence for sequences of bounded degree graphs. One such notion is left convergence, which is based on counting neighborhood distributions. Another notion is right convergence, based on counting homomorphisms to a target (weighted) graph. Borgs, Chayes, Kahn and Lovász showed that a sequence of bounded degree graphs is left convergent if and only if it is right convergent for certain target graphs $H$ with all weights (including loops) close to $1$. We give a short alternative proof of this statement. In particular, for each bounded degree graph $G$ we associate functions $f_{G,k}$ for every positive integer $k$, and we show that left convergence of a sequence of graphs is equivalent to the convergence of the partial derivatives of each of these functions at the origin, while right convergence is equivalent to pointwise convergence. Using the bound on the maximum degree of the graphs, we can uniformly bound the partial derivatives at the origin, and show that the Taylor series converges uniformly on a domain independent of the graph, which implies the equivalence.

Introduction

The theory of graph limits has been extensively developing in recent years, primarily in the dense case [6,7]. For sequences of sparse graphs, in particular, sequences of graphs with bounded degree, much less is known. A notion of convergence, which we will refer to as left convergence, was first defined by Benjamini and Schramm [2]. One of the major open problems on bounded degree graph sequences, the conjecture of Aldous and Lyons [1] on left convergent sequences of graphs, is very closely related to Gromov's question about whether all countable discrete groups are sofic [9]. In a certain sense, left convergence is too rough to "distinguish" graphs which should be different. Because of this, other, finer notions of convergence for sequences of bounded degree graphs [3,4,10] have been proposed. The full picture of their mutual relations is not completely understood. Borgs, Chayes, Kahn and Lovász [5] prove that a sequence of bounded degree graphs is left convergent if and only if it is "right convergent" for a certain set of target graphs. We provide an alternative, shorter proof of this statement, which we believe also gives some new and useful insights. Here and throughout this paper we use the following setting: $G_n = (V(G_n), E(G_n))$ is a sequence of graphs with degrees bounded uniformly by a constant $D$, and $v(G_n) = |V(G_n)| \to \infty$. Given two simple graphs $G$ and $H$, we let $\hom(G, H)$ denote the number of maps $V(G) \to V(H)$ such that any edge of $G$ is mapped to an edge of $H$. We define $\operatorname{inj}(G, H)$ as the number of maps that are injective on the set of vertices, and $\operatorname{ind}(G, H)$ as the number of injective maps that are isomorphisms between $G$ and the subgraph of $H$ induced by the images of the vertices in $G$. Suppose now that $H$ is a weighted graph, with vertex weights $w_i$ and edge weights $w_{ij}$. We think of each pair of vertices as having a weight, perhaps equal to $0$. We define $\hom(G, H)$ as the sum over all maps $f : V(G) \to V(H)$:
$$\hom(G, H) = \sum_{f : V(G) \to V(H)} \ \prod_{v \in V(G)} w_{f(v)} \prod_{uv \in E(G)} w_{f(u)f(v)}.$$
Note that if all vertices in $H$ have weight $1$ and all edges have weight $0$ or $1$, then $\hom(G, H)$ is equal to $\hom(G, H')$ where $H'$ is the unweighted graph on the vertex set of $H$ formed by the edges with weight $1$. We also define $t(G, H)$ as the average of this product over all maps; this simply divides it by a factor of $v(H)^{v(G)}$.
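To make the weighted definitions above concrete, here is a minimal brute-force sketch in Python. The function names, argument conventions and the toy example are ours for illustration (they do not appear in the paper), and the enumeration over all $v(H)^{v(G)}$ maps is of course only feasible for very small graphs.

from itertools import product

def hom(G_edges, n_G, w_vertex, w_edge):
    """Weighted homomorphism count hom(G, H) by direct summation over all
    maps f : V(G) -> V(H). G has vertices 0..n_G-1 and edge list G_edges;
    H is given by vertex weights w_vertex[i] and a symmetric matrix of
    pair weights w_edge[i][j] (a zero entry means 'no edge')."""
    k = len(w_vertex)
    total = 0.0
    for f in product(range(k), repeat=n_G):
        term = 1.0
        for v in range(n_G):
            term *= w_vertex[f[v]]
        for u, v in G_edges:
            term *= w_edge[f[u]][f[v]]
        total += term
    return total

def t(G_edges, n_G, w_vertex, w_edge):
    """t(G, H): the average of the same product over all maps."""
    return hom(G_edges, n_G, w_vertex, w_edge) / len(w_vertex) ** n_G

# Unweighted sanity check: a triangle has no homomorphism into a single
# edge (an odd cycle is not 2-colorable), so hom(K_3, K_2) = 0.
triangle = [(0, 1), (1, 2), (2, 0)]
print(hom(triangle, 3, [1.0, 1.0], [[0.0, 1.0], [1.0, 0.0]]))  # 0.0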
One important notion of convergence is the following, defined by Benjamini and Schramm [2]:

Definition 1 (Left convergence). Given a sequence of graphs $(G_n)$ with all degrees bounded by a positive integer $D$, we say that the sequence is left convergent if for any connected graph $F$, the limit
$$\lim_{n \to \infty} \frac{\operatorname{ind}(F, G_n)}{v(G_n)}$$
exists.

It is well known that we obtain an equivalent definition if we replace ind with inj or hom. Note that the original definition by Benjamini and Schramm involved looking at the distribution of the $r$-neighborhoods of vertices for any $r$, but it is easy to see that the two notions are equivalent. We also examine the notion of right convergence:

Definition 2 (Right convergence). Given a sequence of graphs $(G_n)$ with bounded degrees, we say that the sequence is right convergent with soft-core constraints, or simply soft-core right convergent, if for any weighted target graph $H$ that is complete and has positive weights (including on loops), the limit
$$\lim_{n \to \infty} \frac{1}{v(G_n)} \log \hom(G_n, H)$$
exists. If the limit also exists for all graphs $H$ with nonnegative edge weights, we will refer to it as right convergence with hard-core constraints, or hard-core right convergence.

If we consider $t(G_n, H)$ instead of $\hom(G_n, H)$, we obtain an equivalent definition; this simply decreases each term in the sequence by the same constant, $\log v(H)$. Given a sequence $G_n$ of bounded degree graphs, if for each $n$ we add or delete $o(v(G_n))$ edges in $G_n$, neither soft-core right convergence, nor left convergence is affected. However, hard-core right convergence can change. Borgs, Chayes, and Gamarnik [4] show that if a sequence is soft-core right convergent, then one can delete $o(v(G_n))$ edges from $G_n$ to make it hard-core right convergent. In particular, this implies that if every hard-core right convergent sequence is left convergent, then every soft-core right convergent sequence is left convergent. Borgs, Chayes, Kahn and Lovász [5] proved that hard-core right convergence implies left convergence, and left convergence implies right convergence for a subset of target graphs $H$, where the set depends on $D$, the bound on the maximum degree. Their proof considers target graphs $H$ with zero weights. The proof in this note only considers target graphs with positive weights, and thus gives a more direct proof that (soft-core) right convergence implies left convergence. Specifically, we provide a new short proof of the following theorem:

Theorem 1. For any sequence $(G_n)$ of graphs with all degrees bounded by $D$, the following are equivalent:
1. The sequence $(G_n)$ is left convergent.
2. For every target graph $H$ with each vertex and edge weight between $e^{-(4eD)^{-1}}$ and $e^{(4eD)^{-1}}$, the sequence $\frac{1}{v(G_n)} \log t(G_n, H)$ is convergent.
3. For each $k$, there exists an $\epsilon_k$ such that for any target graph $H$ on $k$ vertices with each weight between $e^{-\epsilon_k}$ and $e^{\epsilon_k}$, the sequence $\frac{1}{v(G_n)} \log t(G_n, H)$ is convergent.

We will now introduce an alternative way of looking at right convergence that provides a more natural setting for the proof.

Definition 3. Given a graph $G$, and a positive integer $k$, we define $X(G, k) \in \mathbb{R}^{k+k^2}$ as follows. Take a random coloring of $V(G)$ with $k$ colors. For a fixed coloring $C$, we define $X(G, C)_i$ as the proportion of vertices with color $i$. That is, we take the number of vertices with color $i$ and divide it by $v(G)$. We define $X(G, C)_{i,j}$ as the number of edges between color class $i$ and $j$, divided by $v(G)$. Note that $X(G, C)_{i,j} = X(G, C)_{j,i}$. We now define the random variables $X_i = X(G, k)_i$ and $X_{i,j} = X(G, k)_{i,j}$ as the values of $X(G, C)$ where the coloring is uniformly random among all $k^{v(G)}$ colorings of the vertices.
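As an illustration of Definition 3, here is a small sketch that evaluates the vector $X(G, C)$ for one fixed coloring; sampling the coloring uniformly then gives realizations of the random variables $X_i$ and $X_{i,j}$. The names are ours, and we adopt the convention, suggested by the stated symmetry $X(G, C)_{i,j} = X(G, C)_{j,i}$, that an edge between two distinct color classes is recorded in both entries.

import random

def X_of_coloring(G_edges, n_G, coloring, k):
    """The vector X(G, C) of Definition 3 for a fixed k-coloring C:
    X_i = (# vertices of color i) / v(G),
    X_{i,j} = (# edges between classes i and j) / v(G)."""
    Xi = [0.0] * k
    Xij = [[0.0] * k for _ in range(k)]
    for v in range(n_G):
        Xi[coloring[v]] += 1.0 / n_G
    for u, v in G_edges:
        i, j = coloring[u], coloring[v]
        Xij[i][j] += 1.0 / n_G
        if i != j:            # keep the matrix symmetric: X_{i,j} = X_{j,i}
            Xij[j][i] += 1.0 / n_G
    return Xi, Xij

# One realization of X(G, 2) for the 4-cycle under a uniform 2-coloring.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
C = [random.randrange(2) for _ in range(4)]
print(X_of_coloring(C4, 4, C, 2))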
Note that $X \in [0, 1]^k \times [0, D]^{k^2}$, where $D$ is the bound on the degree. We will examine the cumulant generating function of this variable.

Definition 4. Given a positive integer $k$ and $\lambda \in \mathbb{R}^{k+k^2}$, we define $f_{G,k}(\lambda)$ as the (normalized) cumulant generating function
$$f_{G,k}(\lambda) = \frac{1}{v(G)} \log \mathbb{E}\left(\exp\left(\langle \lambda, v(G) X(G,k) \rangle\right)\right).$$

For $\lambda \in \mathbb{R}^{k+k^2}$, define the weighted graph $H_\lambda$ on the vertex set $[k]$ with vertex $i$ having weight $e^{\lambda_i}$ and edge $ij$ having weight $e^{\frac{1}{2}(\lambda_{ij}+\lambda_{ji})}$. Then it is not difficult to see that
$$f_{G,k}(\lambda) = \frac{1}{v(G)} \log t(G, H_\lambda).$$
Thus, given a graph sequence $G_n$, the pointwise convergence of $f_{G_n,k}(\lambda)$ for $\lambda \in S$ for some $S \subset \mathbb{R}^{k+k^2}$ is equivalent to right convergence for all target graphs $H_\lambda$ for $\lambda \in S$. With this notation, Theorem 1 is equivalent to the following theorem.

Theorem 2. For any sequence $(G_n)$ of graphs with all degrees bounded by $D$, the following are equivalent:
1. The sequence $(G_n)$ is left convergent.
2. For every $k$ and every $\lambda$ with $\|\lambda\|_\infty < (4eD)^{-1}$, the sequence $f_{G_n,k}(\lambda)$ is convergent.
3. For every $k$, there exists an open neighborhood $S_k$ of the origin in $\mathbb{R}^{k+k^2}$ such that for any $\lambda \in S_k$, the sequence $f_{G_n,k}(\lambda)$ is convergent.

Note that trivially (2) ⇒ (3). Our proof strategy is the following. We show that for a fixed graph $G$ the functions $f_{G,k}$ are analytic around zero, and for a fixed $k$ the Taylor series around the origin converges uniformly on the region $\|\lambda\|_\infty < (4eD)^{-1}$. Then we show that left convergence is equivalent to the convergence of the partial derivatives of $f_{G_n,k}$ at the origin for all $k$ and all partial derivatives. The equivalence then follows from standard analysis arguments. In order to analyze the partial derivatives, we use the notion of the joint cumulant of random variables. If we have $l$ random variables $Z_1, Z_2, \ldots, Z_l$, their joint cumulant $\kappa(Z_1, \ldots, Z_l)$ is defined as follows. Let $g(\lambda_1, \ldots, \lambda_l) = \log \mathbb{E}(\exp(\sum_{i=1}^{l} \lambda_i Z_i))$ and take the partial derivative of $g$ once according to each variable, at the origin. In particular, $\kappa(X)$ is the expected value $\mathbb{E}(X)$ and $\kappa(X, Y) = \mathbb{E}(XY) - \mathbb{E}(X)\mathbb{E}(Y)$. The mapping $\kappa$ is a multilinear function of the variables, and if we let $\pi$ run over the partitions of the set $[l]$, we have the formula
$$\kappa(Z_1, \ldots, Z_l) = \sum_{\pi} (-1)^{|\pi|-1} (|\pi|-1)! \prod_{B \in \pi} \mathbb{E}\Big(\prod_{i \in B} Z_i\Big).$$
We also define the $r$-th cumulant $\kappa_r(Z) = \kappa(Z, Z, \ldots, Z)$ where $Z$ appears $r$ times.

Analyticity and the domain of convergence

In this section, we will show that for each $k$, and for each graph $G$ with maximum degree at most $D$, the Taylor series of $f_{G,k}$ converges on an open neighborhood of the origin, uniformly for fixed $D$.

Proposition 1. Suppose $G$ is a graph with maximum degree at most $D$. Then $f_{G,k}$ is analytic around $0$, and at any point in the domain $\|\lambda\|_\infty < (4eD)^{-1}$, or even on the set $\|\lambda\|_\infty \le c$ for any $c < (4eD)^{-1}$, the Taylor series centered at the origin converges uniformly across all graphs with maximum degree $D$.

In order to prove this proposition, we first need to state an auxiliary result. Specifically, we will use Theorem 9.3 in [8], which states the following:

Theorem 3. Let $\{Y_\alpha\}_{\alpha \in W}$ be a family of random variables with finite moments, and assume there is a graph $L$ on $W$ with maximum degree $\Delta$ that has the following property: If $W_1$ and $W_2$ are disjoint subsets of $W$ with no edges between them, then $\{Y_\alpha\}_{\alpha \in W_1}$ and $\{Y_\alpha\}_{\alpha \in W_2}$ are independent. Assume that each $\|Y_\alpha\|_r \le A$, where $\|X\|_r = (\mathbb{E}|X|^r)^{1/r}$. Let $Y = \sum_\alpha Y_\alpha$. Then we have the following bound on the cumulant:
$$|\kappa_r(Y)| \le 2^{r-1} r^{r-2} \Delta^{r-1} |W| A^r.$$

A rough outline of the proof is as follows. The joint cumulant is a multilinear function, so it is just the sum of $\kappa(Y_{\alpha_1}, Y_{\alpha_2}, \ldots, Y_{\alpha_r})$ where each $\alpha_i \in W$. If we let $H$ be the graph just on the $\alpha_i$'s (where we take a node multiple times if it appears as multiple $\alpha_i$'s), then it is shown that
$$|\kappa(Y_{\alpha_1}, Y_{\alpha_2}, \ldots, Y_{\alpha_r})| \le 2^{r-1} A^r \operatorname{tree}(H),$$
where $\operatorname{tree}(H)$ is the number of spanning trees of $H$.
Since we can think of $H$ as always having $[r]$ as its vertex set, we can take each possible tree on $[r]$ (we have $r^{r-2}$ of them), and using the bound $\Delta$ on the maximum degree of $L$, we can bound the number of times it will be a subtree of $H$.

Proof of Proposition 1. Clearly if we fix $G$, $f_{G,k}$ is analytic on some domain around zero. To analyze the domain, we analyze it in a fixed direction. Take any $\lambda_0 \in \mathbb{R}^{k+k^2}$ such that $\|\lambda_0\|_\infty = 1$, and consider the function $g : \mathbb{R} \to \mathbb{R}$, $g(z) = f_{G,k}(z \lambda_0)$. Let $Y = \langle \lambda_0, v(G) X \rangle$. We can write $Y = \sum_\alpha Y_\alpha$, where $\alpha$ is either a vertex or an edge of $G$, and gives the contribution of that vertex or edge to $\langle \lambda_0, v(G) X \rangle$. We define the graph $L$ with vertex set $W = V(G) \cup E(G)$, and we connect each $v \in V(G)$ with its incident edges, and each $e \in E(G)$ with its two incident vertices and its incident edges. The maximum degree of $L$ will be at most $\Delta = 2D$. We also have that for any $l \ge 1$, $\|Y_\alpha\|_l \le \|Y_\alpha\|_\infty \le 1$. Using Theorem 3,
$$|\kappa_r(Y)| \le 2^{r-1} r^{r-2} (2D)^{r-1} |W| \le \tfrac{2+D}{2}\, r^{r-2} (4D)^{r-1}\, v(G),$$
so the Taylor series converges for $z \in \mathbb{R}$, $|z| < 1/(4eD)$. Because of the uniform bound on the partial derivatives, the convergence is also uniform across all graphs (of bounded degree $D$) for $|z| \le c$.

Left convergence and the partial derivatives

In this section, we will prove the equivalence of left convergence with the convergence of the partial derivatives of $f_{G,k}$ at the origin. We will use the following notion, closely related to inj. Let $F$ be a multigraph with $l$ labeled edges $e_1, e_2, \ldots, e_l$ (for some positive integer $l$), and no isolated vertices. For a graph $G$, we define $i(F, G)$ as the number of ways of choosing a sequence of $l$ edges of $G$ (that is, a multiset of $l$ edges of $G$ labeled from $1$ to $l$), such that the multigraph induced by these $l$ edges is isomorphic to $F$. Let $\mathcal{F}^*_l$ be the collection of multigraphs with $l$ edges labeled from $1$ to $l$, but vertices unlabeled, with no isolated vertices. Let $\mathcal{F}_l$ be the subset of $\mathcal{F}^*_l$ consisting of just the connected multigraphs. We observe that left convergence is equivalent to saying that the limit
$$\lim_{n \to \infty} \frac{i(F, G_n)}{v(G_n)}$$
exists for any $F \in \mathcal{F}_l$ for all $l > 0$. Indeed, for any $F \in \mathcal{F}_l$, $i(F, G_n) = i(F', G_n)$, where $F'$ is obtained from $F$ by removing parallel edges. Then $i(F', G) = \operatorname{inj}(F', G)$, unless $F'$ is a single edge, in which case $i(F', G) = \operatorname{inj}(F', G)/2$. Thus, convergence of $i(F, G_n)/v(G_n)$ for all $F \in \mathcal{F}_l$ is equivalent to convergence of $\operatorname{inj}(F, G_n)/v(G_n)$ for all connected graphs $F$ with at most $l$ (unlabeled) edges (note $\operatorname{inj}(F, G) = 0$ for $F$ with parallel edges).

Proposition 2. Suppose we have a graph $G$ and a positive integer $k$. Consider the function $f_{G,k} : \mathbb{R}^{k+k^2} \to \mathbb{R}$ defined in Definition 4. For any nonnegative integer $l$, the values of the $l$-th partial derivatives at the origin are linear combinations of the values of $i(F, G)/v(G)$ over various choices of $F \in \cup_{l' \le l} \mathcal{F}_{l'}$ (recall that in the definition of $f_{G,k}$ we also divided by $v(G)$), and the coefficients do not depend on $G$. Moreover, if $k \ge 2l$, then this is a bijection; that is, if two graphs $G_1$ and $G_2$ have different values of $i(F, G_1)/v(G_1)$ and $i(F, G_2)/v(G_2)$ for some connected graph $F$ on at most $l$ edges, then their $l$-th partial derivatives will be different.

Proof of Proposition 2. For ease of presentation of the proof, we will assume that each variable in the partial derivative corresponds to an edge; it will be clear that the proof also works if some of the variables correspond to vertices. In this case, we will only need $F \in \mathcal{F}_l$. We know that the partial derivative according to the variables $\lambda_{i_1,j_1}, \lambda_{i_2,j_2}, \ldots, \lambda_{i_l,j_l}$ is equal to
$$\frac{1}{v(G)}\, \kappa\big(v(G) X_{i_1,j_1}, v(G) X_{i_2,j_2}, \ldots, v(G) X_{i_l,j_l}\big).$$
Suppose the pairs $(i_1, j_1), (i_2, j_2), \ldots, (i_l, j_l) \in [k]^2$ form the (multi)graph $\tilde{J}$. We think of $\tilde{J}$ as a multigraph on $[k]$ with $l$ edges whose edges are labeled $1$ to $l$. Define $\kappa_G(\tilde{J})$ as the joint cumulant $\kappa(v(G) X_{i_1,j_1}, v(G) X_{i_2,j_2}, \ldots, v(G) X_{i_l,j_l})$. Note that if we permute the vertices of $\tilde{J}$, $\kappa_G(\tilde{J})$ does not change (this is the same as permuting the colors). Each $\tilde{J}$ has a representative $J$ in $\mathcal{F}^*_l$ (after removing isolated vertices and unlabeling the remaining vertices), and because permuting the vertices does not change the expression, $\kappa_G(\tilde{J})$ depends only on the representative $J$ (and on $k$, which is fixed), so with a slight abuse of notation we will write $\kappa_G(J)$ for $J \in \mathcal{F}^*_l$. If $k \ge 2l$, then each graph in $\mathcal{F}^*_l$ will have a corresponding graph on the vertex set $[k]$ (after adding isolated vertices). By multilinearity, writing $v(G) X_{i,j} = \sum_{e \in E(G)} Y_{i,j,e}$, we have
$$\kappa_G(\tilde{J}) = \sum_{e_1, \ldots, e_l \in E(G)} \kappa\big(Y_{i_1,j_1,e_1}, Y_{i_2,j_2,e_2}, \ldots, Y_{i_l,j_l,e_l}\big).$$
Here $Y_{i_p,j_p,e_p}$ is the indicator random variable of whether the two endpoints of $e_p$ are colored with colors $i_p$ and $j_p$. Let us look at a term $\kappa(Y_{i_1,j_1,e_1}, Y_{i_2,j_2,e_2}, \ldots, Y_{i_l,j_l,e_l})$. For simplicity of notation, let $Z_p = Y_{i_p,j_p,e_p}$. It is not difficult to see from the definition of the joint cumulant that if $Z_1, Z_2, \ldots, Z_l$ can be partitioned into two groups that are independent of each other, then $\kappa(Z_1, Z_2, \ldots, Z_l)$ is zero. This happens if $e_1, e_2, \ldots, e_l$ do not form a connected subgraph. Otherwise, we can write
$$\kappa(Z_1, \ldots, Z_l) = \sum_{\pi} (-1)^{|\pi|-1} (|\pi|-1)! \prod_{B \in \pi} \mathbb{E}\Big(\prod_{p \in B} Z_p\Big). \qquad (1)$$
We now introduce additional notation to facilitate the analysis of this expression. Let $E$ be any multigraph with $l$ labeled edges $f_1, f_2, \ldots, f_l$. Denote $\mathbb{E}(\prod_p Y_{i_p,j_p,f_p})$ as $x(E, J)$. Note that if $E$ is disconnected, the disjoint union of $E_1$ and $E_2$, then $x(E, J) = x(E_1, J) x(E_2, J)$. We can define this value for any $J$ that comes from a graph on the vertex set $[k]$. Now, given a partition $\pi$ of $[l]$, we define $E_\pi$ as follows. For each $B \in \pi$, take the subgraph of $E$ formed by the edges with labels in $B$, and the vertices that are endpoints of at least one of these edges. Then take $E_\pi$ as the disjoint union of these graphs, where each edge keeps its original label from $E$. Then (recalling that $J$ is the graph formed by the pairs $(i_p, j_p)$) we have
$$\kappa_G(J) = \sum_{F \in \mathcal{F}_l} i(F, G) \sum_{\pi} (-1)^{|\pi|-1} (|\pi|-1)!\; x(F_\pi, J).$$
This proves the first part of the Proposition. Let us prove the second part. Since $k \ge 2l$, each graph $J \in \mathcal{F}^*_l$ has a corresponding graph on $[k]$, so we just need to show that if we have two graphs $G$ and $G'$ such that $i(F, G) \ne i(F, G')$ for some $F \in \mathcal{F}_l$, then $\kappa_G(J) \ne \kappa_{G'}(J)$ for some $J \in \mathcal{F}^*_l$. Let $K$ be the matrix with columns indexed by $F \in \mathcal{F}_l$, rows indexed by $J \in \mathcal{F}^*_l$, and entries $\kappa(F, J) = \sum_{\pi} (-1)^{|\pi|-1} (|\pi|-1)!\; x(F_\pi, J)$. Let $E$ be the matrix with rows and columns indexed by $\mathcal{F}^*_l$, where the entry in column $F$ and row $J$ is $x(F, J)$. Let $P$ be the matrix with columns indexed by $\mathcal{F}_l$, rows indexed by $\mathcal{F}^*_l$, and the entry in column $F$ and row $F'$ is $(-1)^{|\pi|-1}(|\pi|-1)!$ if $F' = F_\pi$ for some partition $\pi$ of $[l]$, and $0$ otherwise. Then $K = EP$. If $u \in \mathbb{R}^{\mathcal{F}^*_l}$ is the vector with entries $\kappa_G(J)$, and $w \in \mathbb{R}^{\mathcal{F}_l}$ is the vector with entries $i(F, G)$, $F \in \mathcal{F}_l$, then $u = Kw$. Thus, we need to show that the columns of $K$ are independent, and we will do this by showing that $E$ is invertible, and that the columns of $P$ are independent. First, let us look at $E$. Its entries are $x(F, J)$, which is zero if it is not possible for the edges of $F$ to map to $J$. In particular, $x(F, J)$ is zero if $J$ has more non-isolated vertices than $F$, or if the number of non-isolated vertices is equal and the two graphs are not isomorphic (taking the edge labels into consideration).
However, it is easy to see that the diagonal entries $x(F, F)$ are nonzero. Thus, if we order $\mathcal{F}^*_l$ such that the number of non-isolated vertices is increasing, then the matrix will have nonzero entries in the diagonal, and zero entries above the diagonal. Thus, $E$ is invertible. To see that $P$ has independent columns, we can just restrict to the rows in $\mathcal{F}_l$, and see that we obtain the identity matrix. So we have shown that $P$ has independent columns, $E$ is invertible, and so $K = EP$ has independent columns. Thus, if two graphs have different values of $i(F, G)$ for some $F \in \mathcal{F}_l$, then at least one of their $l$-th partial derivatives must be different. ✷

From the proposition, we easily obtain the following corollary:

Corollary 1. A sequence $(G_n)$ of graphs with all degrees bounded by $D$ is left convergent if and only if, for every $k$ and every multi-index, the corresponding partial derivative of $f_{G_n,k}$ at the origin converges as $n \to \infty$.

We are now ready to prove our main theorem.

Proof of Theorem 1. First, we show that (1) implies (2). By Proposition 1, if we are within the domain $\|\lambda\|_\infty < (4eD)^{-1}$, then the Taylor series of $f_{G,k}$ converges uniformly across all graphs $G$ with bounded maximal degree $D$. Since the coefficients of the Taylor series are the partial derivatives (multiplied by fixed constants), this immediately implies that if a sequence $(G_n)$ is left convergent, then the values $f_{G_n,k}(\lambda) = \frac{1}{v(G_n)} \log t(G_n, H_\lambda)$ converge for every $\lambda$ with $\|\lambda\|_\infty < (4eD)^{-1}$. Since (2) trivially implies (3), it remains to show that (3) implies (1): by the uniform convergence of the Taylor series on a neighborhood of the origin, pointwise convergence of $f_{G_n,k}$ on $S_k$ forces the convergence of all partial derivatives at the origin, which by the corollary gives left convergence. ✷

Concluding Remarks

We remark that it also makes sense to define left and right convergence for sequences of bounded degree graphs with parallel edges. There are a few different ways to extend the definition of hom, inj and ind; however, they will produce equivalent notions of left convergence. The notion will also still be equivalent to convergence of $i(F, G_n)/v(G_n)$ for $F \in \mathcal{F}_l$. The definition of $\hom(G, H)$ if $H$ is weighted extends naturally to the case when $G$ has parallel edges, and so does the definition of right convergence, and our proof will still work.
2015-05-10T23:39:43.000Z
2015-04-11T00:00:00.000
{ "year": 2016, "sha1": "9154213e3a0765adee35fe284e78a676d9a16290", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.ejc.2015.10.009", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "9154213e3a0765adee35fe284e78a676d9a16290", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
234083020
pes2o/s2orc
v3-fos-license
Socio-Political Education and Women Empowerment in Buddhist Perspective

The purpose of research was to study the socio-political education and women empowerment in Buddhist perspective. The researchers studied and collected the data from Buddhist scriptures, texts, and related documents about socio-political education and women empowerment in Buddhist perspective, and analyzed them by using content analysis. The results indicated that, for decades, women have been part of the supply of cheap, unskilled or semi-skilled labor for the industrial and service sectors. Gender discrimination continues even in the present times. At the same time, the problems of rural and urban lower-class women cannot be ignored. The empowerment of women is one of the solutions to the problems of inequality, subordination and marginalization that women face in the society. However, this kind of empowerment is only partial, for although they have economic and political power, they are kept out of decision making or they are dependent on their husband, father or brother for crucial decisions. Buddhism accepts that every human being, independent of the consideration of sex, gender, class etc., is composed of five elements (Pañcakkhandhā): namely rūpa-skandha, saṁjñā-skandha, vedanā-skandha, saṁskāra-skandha and vijñāna-skandha. On this basis, Buddhism has advocated the equality between man and woman and thus has transcended the gender difference. It treats man and woman equally. Buddhism reflected in the Buddhist scriptures that there is a biological difference between women and men, but they have similar intellectual, mental as well as spiritual capabilities.

Introduction

The increasing influence and relevance of Buddhism in its various forms in the global society of the 21st century have given rise to a vibrant and evolving movement. In the field of gender and development, an understanding of the influence that religious and cultural traditions have upon women's social status or economic opportunity is slowly being recognized as an important factor in the pursuit of female empowerment in developing countries. While institutional religion can legitimize values and rules that disempower women, the importance of religion in the lives of millions of poor women across the globe means that secular feminism is often perceived not only as western but also as lacking cultural relevance [1]. Over the last few decades Buddhism, environmentalism, the ecological movement and feminism have been the subject of much interdisciplinary work. Buddhist philosophy, ethics and its system of meditation have found common ground with the movements known as Eco-Buddhism and deep ecology, with the core acknowledgement of the interrelatedness of all beings and their intrinsic value for the health and survival of the planet and all its inhabitants. The increasing influence and relevance of Buddhism in a global society have given rise to a vibrant and evolving movement, particularly in the west, loosely called Socially Engaged Buddhism. Today many look to Buddhism for an answer to one of the most crucial issues of all time: eradicating discrimination against women. There is general agreement that Buddhism does not have a reformist agenda or an explicit feminist theory.
The researchers explored this issue from a Theravāda Buddhist perspective using the scriptures as well as recent work by Buddhist scholars, conceding that there are deep-seated patriarchal and even misogynistic elements reflected in the ambivalence towards women in the Pāli Canon, and bias in the socio-cultural and institutionalized practices that persist to date in Theravāda Buddhist countries. However, the Buddha's acceptance of a female monastic order and, above all, his unequivocal affirmation of their equality in intellectual and spiritual abilities in achieving the highest goals clearly establish a positive stance. While social and legal reforms are essential, it is meditation that ultimately uproots the innate conditioning of both the oppressors and the oppressed, as the Dhamma at its pristine and transformative core is genderless [2]. However, compared to the other major religions, women have always played a significant role in Buddhism, as lay disciples as well as monastics, later on influencing the Order and the societies where Buddhism took root [3]. Dewaraja also notes that, unlike in the other major religions, marriage is a purely secular matter in Buddhism, and also cites the Sigalovāda Sutta, where the marital relationship is described as a reciprocal one with mutual obligations. As there is no central creator in Buddhism and hence no sacredness attached to the human body [4], nor a strong differentiation of what is natural or unnatural, Buddhism has nothing against contraception [5,6] or homosexuality. Most of all, the mere fact of women being included in the teachings and practices was remarkable given that this took place over 2,500 years ago in a patriarchal society where women had few rights with regard to education and religious practices, as Buddhism's greatest contribution to the social and political landscape of ancient India is the radical assertion that all men and women, regardless of their caste, origins, or status, have equal spiritual worth [7]. This is especially pertinent concerning the status of women, who were traditionally prevented by the brāhmanas from performing religious rites and studying the sacred texts of the Vedas. A core positive characteristic of Buddhism with regard to gender equality is the absence at its centre of an omnipotent Creator God, traditionally portrayed as a male, providing legitimacy to male supremacy. While the main agenda of Buddhism is not social reform, its ethical, doctrinal and psychological frameworks explicitly condemn creating mental or physical suffering for any other being, and the key concepts that suffuse the Dhamma are harmlessness (ahiṁsā), mettā (universal loving kindness) and compassion (karuṇā). As Buddhism becomes an increasingly relevant globalized force, with the scriptures of all schools available freely to so many and both lay and monastic women taking on key roles, it is inevitable that outdated prejudice and barriers begin to crumble, which is essential for the flourishing of Buddhism in the 21st century. The Buddhist perspective on empowerment has efficiently provided possible solutions for these problems. For example, the ambivalent attitude of women to other women and also the societal ignorance about the capacities of a woman need to be removed. The Buddha's teaching offers methods of practical solutions to all housewives and nuns and also to the society of modern times.
This perspective on Buddhism will enable the society to remove its ignorance towards women and rekindle the spirit of wisdom towards the roles and responsibilities of women.

Research Objective

The purpose of research was to study the socio-political education and women empowerment in Buddhist perspective.

Research Methodology

The researchers studied and analyzed the Pāli Buddhist literature, which constitutes the primary source, together with the commentaries of related works. In addition to the Buddhist literature, the literature on feminism and gender studies has also been considered to understand the concept of empowerment. The researchers also studied and analyzed the Pāli Tipiṭaka, the Nikāyas, and commentaries.

Women in Buddhist Perspective

Buddhism, in its origins, was a pragmatic soteriology, a theory of liberation that sought to free humanity from suffering, first by thoroughly analyzing the fundamental human predicament and then by offering a practical method or path for eliminating the afflictions, cognitive and dispositional, that are perpetuated as greed, hatred, and delusion. The Buddha was frequently critical of conventional views, including those carrying the authority of Brahmanic tradition. In marked contrast to the sacerdotal ritualism of the Brahmins, he offered a path that was open to all. The first canonical attitude to consider, soteriological inclusiveness, thus arguably is the most basic and also the most distinctively Buddhist attitude regarding the status of women that one can find in the vast literature of the 2,500-year-old tradition. The earliest Buddhists clearly held that one's sex, like one's caste or class (varṇa), presents no barrier to attaining the Buddhist goal of liberation from suffering. Women can, as affirmed by the Buddhist tradition, pursue the path. Moreover, they can become arhats, Buddhist saints who have completely broken the cycle of the suffering of death and rebirth (saṃsāra) [8]. In fact, the position that femaleness is no barrier to the achievement of the Buddhist human ideals takes two forms in Buddhist texts. The more common variation on this theme essentially proclaims that the Dhamma is neither male nor female, that gender is irrelevant or even non-existent when one truly understands Buddhist teachings. One also finds the infrequent claim that, in fact, for those with good motivation, femaleness is actually an advantage. Though that assessment is not by any means common or well-known, its very existence is important for gathering the fullness of an accurate record of Buddhist attitudes toward gender. In addition, the Buddha's main argument against this was that no man or woman could be superior or inferior in society merely by reason of his or her birth. The Buddhist view of woman's nature pointed out that, despite the fact that the Buddha elevated the status of women, he was practical in his observations and advice given from time to time. He realized the social and biological differences between men and women. The reality of the nature of women was brought out by the Buddha, who pointed out not only their weaknesses, but also their abilities and potential. The Buddha did not talk about the concept of spiritual empowerment of woman, but the investigations of the epistemological and metaphysical considerations behind Buddhist thought enable us to understand the Buddhist concept of empowerment.
This concept of empowerment will further enable us to solve the problems that arise in a woman's life. Over 2,500 years ago, the Buddha laid down thoughts about women that were substantial enough to guide feminist thought and movements.
Social Empowerment of Women in Buddhism
As we have seen, the empowerment of a woman is attained through her self-realization, that is, through her mental and spiritual development and through her knowledge of herself and of society, as well as through society's realization and acceptance that a woman has her own independent existence. Her self-realization makes her aware of her attributes; her qualities belong to her and are not those imposed on her by social and cultural conventions through the processes of enculturation and socialization. Only when women and society clearly understand each other's nature can there be holistic development of society as well as of its members. The Buddha, through his discourses, always tried to enlighten people about the myth of gender difference. In the Buddha's time, the birth of a female child was not welcomed; it made the parents unhappy. This may have been because parents believed that daughters, after marriage, went to their husband's house, leaving no one to take care of the parents in their old age, whereas a son lived with them even after his marriage and could take care of them. Because of this belief, parents preferred a male child. The Buddha tried to remove this kind of belief from people's minds. According to him, there was no reason to feel gloomy at the birth of a daughter. For example, King Pasenadi was unhappy when a girl was born to his Queen Mallikā. He went to the Buddha to share this news, and when the Buddha observed that the King was unhappy, he said: "Indeed, a woman of a certain sort is better than a man, lord of folk: wise, virtuous, revering her husband's mother, a devoted wife; the man born of her is a hero, ruler of the regions; such a son of a good wife is one who advises his realm" [9].
Political Empowerment of Women in Buddhism
The Buddha acknowledged women's independent religious status, but his views with regard to their political status remained conventional. We do not have many references to the political status of women in Buddhism. However, Śrīmālā, in Mahāyāna Buddhism, held the position of queen. This consolidates the view that Buddhism did give equal status to women: a woman could rule a kingdom. As secular women, their only business was to prove themselves as good housewives and affectionate mothers. The truth is that Gautama was little interested in temporal matters; systems of government did not appeal much to him. He did not concern himself with the general position of women, or even men, in the social or political fabric of the country. In the extensive kingdom of the great Buddhist monarch Asoka, there is no record of any office of significance ever being occupied by a woman. In the noble mission of propagating the Buddhist doctrine far and wide, women travelled to every nook and corner of the globe and mixed with every sort of people, putting away all gender differences. They preached to all men and women and expounded the doctrine in a worthy manner. Although women held eminent religious positions, lay sisters in the world had no respectable status to enjoy.
Nothing was done by any law-giver to improve their secular existence and to ensure their general welfare. Thus, all went on as usual, without any betterment of the secular status of women. As we can see, Śrīmālā, the Queen of Andhra in the first century, made an important contribution to the development of Buddhist thought [10].
Educational Empowerment of Women in Buddhism
As a whole, women in Buddhism enjoyed higher status, greater freedom, more equality, and a more liberal environment than in the preceding ages. Women were more empowered in the Buddhist period. Buddhism's contribution to the liberation and uplift of women in the social and educational sectors was equally immense. In this respect, the elevation of women in the Buddhist setup was conceptually much nobler; it was much more than a question of "rights" and "duties". The Buddhists really respect interpersonal relationships and therefore do not desire to tear away any portion of society and isolate it. For Buddhism, the achievement lies in the total integration of the woman into the social fabric of the human community. The family in this respect is the smallest unit. In Buddhist thinking, the male's respect for the female had to be high; Buddhists knew what was meant by courteous behaviour towards women, and women therefore had to be treated with due courtesy and consideration. Women under Buddhism maintained their traditional legal position, and the laws of the land did not change in their favour in this respect. In the Buddhist ages, women enjoyed religious and educational independence and spiritual and ethical advancement, but in other spheres, such as the social, political, and economic, the situation remained the same as in the preceding ages. Overall, the rise of Buddhism brought an improvement in the status of women. Through its practices, it has also facilitated the self-confidence, empowerment, and spiritual and educational liberation of both women and men. Women such as Mahāpajāpatī Gotamī, the Buddha's maternal aunt and foster mother; Khemā, the queen of King Bimbisāra of Rajgriha; Paṭāchārā from Shravasti, proficient in duties; Bhaddā Kuṇḍalkeshā; Ambapālī; and Isidāsī attained positions of high repute in the religious order of Buddhism. Sāmāvatī from Bhaddiya, Khujjuttarā, and Visākhā are known for their devotion and charitable deeds [11]. Buddhism does not restrict either the educational opportunities of women or their religious freedom. The Buddha unhesitatingly accepted that women are capable of realizing the Truth, just as men are. This is why he permitted the admission of women into the Order, though he was not in favour of it at the beginning because he thought their admission would create problems in the Sasana. Once women proved their capability of managing their affairs in the Order, the Buddha recognized their abilities and talents and gave them responsible positions in the Bhikkhuni Sangha. The Buddhist texts record eminent, saintly Bhikkhunis who were very learned and expert in preaching the Dhamma. Dhammadinnā was one such Bhikkhuni; Khemā and Uppalavannā are two others [12].
Conclusion
Buddhism has accepted that women are as eligible for spiritual emancipation as men; both can follow the Four Noble Truths and the Noble Eightfold Path, surrender to the three jewels (the Buddha, the Dhamma, and the Saṅgha), cultivate the Pañcasīla, possess the threefold training of Sīla, Samādhi, and Paññā, and become eligible for emancipation through meditation [13]. The empowerment of women is one of the solutions to the problems of inequality, subordination, and marginalization that women face in society. However, this kind of empowerment is only partial, for although women may have economic and political power, they are kept out of decision-making or remain dependent on their husband, father, or brother for crucial decisions. Hence, in order to change this situation, women must realize their own nature and understand the value of their own existence. When they realize their own nature, they will have confidence and will participate in decision-making independently. This will be possible only when they are empowered spiritually. However, this also requires a significant change in social mentality. Many a time, a woman has confidence and has realized her own potential, but when society prohibits and blocks her progress, her problems cannot be solved. In other words, to solve the problems of women, and in order to empower them, it is also necessary that society change its patriarchal mentality. The Buddha did not speak explicitly of the spiritual empowerment of women, but investigation of the epistemological and metaphysical considerations behind Buddhist thought enables us to understand the Buddhist concept of empowerment. This concept of empowerment will enable us to solve the problems that arise in a woman's life. Over 2,500 years ago, he laid down his thoughts about women, thoughts substantial enough to guide feminist thought and movements.
2021-05-10T00:04:19.134Z
2021-01-29T00:00:00.000
{ "year": 2021, "sha1": "bdd8a64fa3225bb13db2807c7b79105f8d1b1095", "oa_license": "CCBY", "oa_url": "http://psychologyandeducation.net/pae/index.php/pae/article/download/954/768", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "033ab352906f000700049bd3c3d4a975589b3e7e", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
23954985
pes2o/s2orc
v3-fos-license
Khellin and Visnagin, Furanochromones from Ammi visnaga (L.) Lam., as Potential Bioherbicides
Plants constitute a source of novel phytotoxic compounds to be explored in searching for effective and environmentally safe herbicides. From a previous screening of plant extracts for their phytotoxicity, a dichloromethane extract of Ammi visnaga (L.) Lam. was selected for further study. Phytotoxicity-guided fractionation of this extract yielded two furanochromones, khellin and visnagin, for which herbicidal activity had not been described before. Khellin and visnagin were phytotoxic to the model species lettuce (Lactuca sativa) and duckweed (Lemna paucicostata), with IC50 values ranging from 110 to 175 μM. These compounds also inhibited the growth and germination of a diverse group of weeds at 0.5 and 1 mM. These weeds included five grasses [ryegrass (Lolium multiflorum), barnyardgrass (Echinochloa crus-galli), crabgrass (Digitaria sanguinalis), foxtail (Setaria italica), and millet (Panicum sp.)] and two broadleaf species [morningglory (Ipomea sp.) and velvetleaf (Abutilon theophrasti)]. During greenhouse studies, visnagin was the most active and showed significant contact postemergence herbicidal activity on velvetleaf and crabgrass at 2 kg active ingredient (ai) ha−1. Moreover, its effect at 4 kg ai ha−1 was comparable to the bioherbicide pelargonic acid at the same rate. The mode of action of khellin and visnagin was not a light-dependent process. Both compounds caused membrane destabilization, photosynthetic efficiency reduction, inhibition of cell division, and cell death. These results support the potential of visnagin and, possibly, khellin as bioherbicides or lead molecules for the development of new herbicides.
■ INTRODUCTION
Billions of tons of agricultural production are lost annually due to weeds. Herbicides are the most important method of weed management, as this technology has been much more effective than previous weed management approaches. After more than 70 years of the dominance of synthetic herbicides for weed control, evolved resistance to herbicides has become a major problem. 1−3 This problem is exacerbated by the fact that no herbicides with new modes of action have been introduced in more than 30 years to help manage evolved herbicide resistance. 4 Natural phytotoxins are a source of compounds with new modes of action, which has fueled interest in their discovery. 5−7 Furthermore, the biggest pest management need of organic farmers is economical and efficient natural herbicides approved for use in the organic marketplace. 8 Plant natural products provide an attractive alternative in finding effective and environmentally safe phytotoxic compounds, with high structural diversity and novel modes of action. 9 Such compounds may be formulated and directly used as bioherbicides or used as lead structures for the development of new products by chemical modifications. Our group has developed a systematic process to search for, evaluate, and select plant extracts with promising phytotoxic activity. Active extracts could then be used to discover new herbicidal molecules. As a result of this screening process of nearly 2400 plant extracts for their herbicidal activity, an extract from Ammi visnaga (L.) Lam. was selected for further studies. A. visnaga, also known as toothpick weed, visnaga, or khella, is an annual or biennial herb belonging to the Apiaceae (Umbelliferae) family, growing to about 1 m in height.
The stem is erect and highly branched, and leaves are dissected into many small linear to lance-shaped segments up to 20 cm long. The inflorescence of A. visnaga is a compound umbel of white flowers, and fruits are compressed oval-shaped structures around 3 mm in length. 10,11 This herb is native to the Mediterranean region of Europe, Asia, and North Africa, and it can be found as an introduced species in Argentina, Brazil, Chile, Uruguay, North America, Southwest Asia, and some Atlantic islands. 10−12 It grows preferentially under high sun exposure in clay soils, which are well drained and quickly desiccated on the surface, in the semiarid superior and subhumid bioclimatic zones. 13 In some regions, this plant has become an invasive weed of cultivated fields. 14−16 Fruits of A. visnaga have been described in pharmacopoeias as an antispasmodic, muscle relaxant, and vasodilator. 11 Other uses in traditional medicine include treatment of mild angina symptoms, supportive treatment of mild obstruction of the respiratory tract in asthma or spastic bronchitis, and postoperative treatment of conditions associated with the presence of urinary stones. This herb has also been used as treatment for gastrointestinal cramps, as a diuretic, and for treatment of vitiligo, diabetes, and kidney stones. 11,17 Aqueous and ethanolic extracts as well as the essential oil of A. visnaga have antibacterial activity. 18−20 Also, different extracts of this species and their constituents have antioxidant activity 21 and prevented renal crystal deposition and cell damage caused by oxalate. 22,23 With regard to its pesticide uses, alcoholic and aqueous extracts and essential oil of A. visnaga have insecticidal properties on different insect species. 24−32 Previous work on the allelopathic potential of A. visnaga crude extracts reported some phytotoxicity toward legumes and maize and toward weeds associated with wheat cultivation. 33−35 However, the compounds responsible for these activities of the crude extracts were not isolated and identified. The objectives of the present work were to isolate and identify the phytotoxic compounds from a crude extract of A. visnaga and to evaluate their herbicidal effects on different model and weed species in laboratory and greenhouse tests. The possible modes of action of the isolated compounds were also explored.
■ MATERIALS AND METHODS
Plant Material. A. visnaga (Figure S1) was collected near Los Telares, Salavina Department, Santiago del Estero Province, Argentina, in January 2014. A voucher specimen, no. SI-209406, has been deposited in the herbarium of the Darwinion Botanical Institute, Buenos Aires, Argentina. Ground dried flowers (39 g) and leaves (27 g) of A. visnaga were consecutively extracted by maceration at 25°C with two solvents: dichloromethane (DCM) first and then ethanol (EtOH). Plant material was extracted twice (24 h each) with each solvent, followed by Buchner funnel filtration and concentration under reduced pressure in a rotary evaporator. For each extraction step, 10 mL of solvent was added per gram of plant material. This procedure provided the following extracts: flowers DCM (1.74 g), flowers EtOH (3.57 g), leaves DCM (1.2 g), and leaves EtOH (2.5 g). Germination and Plant Growth Bioassays with Lettuce (Lactuca sativa), Creeping Bentgrass (Agrostis stolonifera), and Ryegrass (Lolium multiflorum). Extracts of A. visnaga and column chromatography fractions were evaluated for their phytotoxic activity in lettuce and creeping bentgrass bioassays as described by Dayan et al.
36 Briefly, a filter paper (Whatman no. 1) and five lettuce seeds (L. sativa L. cv. iceberg A from Burpee Seeds) or 10 mg of creeping bentgrass (A. stolonifera var. Penncross from Turf-Seed Inc.) seeds were placed in each well of a 24-well plate. Stock solutions (10 mg mL −1 ) of test extracts or fractions were prepared in acetone. Distilled water (180 μL) was added to each well together with 20 μL of the stock solution or acetone in solvent control. The final concentration per well was 1 mg mL −1 for extracts or fractions and 10% v/v for acetone. This percentage of acetone was used only for these miniaturized assays, and two controls were done, with and without solvent, to verify that germination and growth of seedlings were comparable between the solvent control and the water control. Plates were sealed with Parafilm and incubated at 26°C in a Conviron (Winnipeg, Canada) growth chamber set at 173 μmol photons m −2 s −1 continuous photosynthetically active radiation. Phytotoxic activity was qualitatively evaluated by visually comparing germination and growth in each well with the solvent control at 7 days. A qualitative estimation of phytotoxicity was obtained by using a visual rating scale from 0 to 5, where 0 was no effect (control), 1 was <50% seedlings shorter than control, 2 was >50% seedlings shorter than control, 3 was <50% seeds germinated (only radicles observed), 4 was <50% seeds germinated (only radicle tips observed), and 5 was no germination of seeds. This procedure was also used for testing the phytotoxicity of pure compounds and for the dose−response assay on lettuce (L. sativa var. Waldmann's green, Guasch SRL Argentina) and ryegrass (L. multiflorum provided by Dr. Ricardo Pavon, BASF Argentina), instead of creeping bentgrass, because ryegrass is a weed species with a higher agronomical impact. The experiment with this grass was carried out in 12-well plates. Each well contained 10 seeds and 300 μL of test solution. Eight concentrations of the pure compounds (from 0 to 1 mM) were tested. Germination percentage and plant growth (length of plants) were measured at 7 days to determine the concentration required for 50% germination and growth inhibition (IC 50 ) with respect to control. All experiments were performed in triplicate. Bioassay-Guided Fractionation. The flowers DCM extract of A. visnaga (1.6 g) was divided into two 800 mg portions, and each was subjected to column chromatography using an Isolera One system (Biotage, Uppsala, Sweden), equipped with a UV detector (302 and 365 nm) and an automatic fraction collector. Separation was performed by a normal-phase chromatography column using a SNAP Cartridge KP-Sil (37 × 157 mm, 50 μm irregular silica, 100 g, Biotage) and a prepackaged SNAP Samplet Cartridge KP-Sil (37 × 17 mm, Biotage). A full gradient of hexane/ethyl acetate was used for elution, from 100:0 to 0:100 over 3000 mL. The flow rate was 40 mL min −1 , and 25 mL fractions were collected. Fractions with similar TLC and chromatogram profiles were recombined, providing 19 fractions named I−XIX. All fractions were evaluated for their phytotoxicity in the germination bioassay with lettuce and creeping bentgrass. Chemical Analysis and Compound Identification. The crude flowers DCM extract of A. visnaga and fractions XIV and XVII were analyzed by GC-MSD on an Agilent Technologies 7890A GC system coupled to a 5975C Inert XL MSD. The GC was equipped with a DB-5 fused silica capillary column. 38,39 Duckweed (Lemna paucicostata (L.) Hegelm.)
Bioassay. Phytotoxic activities of khellin and visnagin were evaluated on duckweed by using a previously described method. 40 Briefly, duckweed stocks were grown from a single colony consisting of a mother and two daughter fronds in a beaker on modified Hoagland medium. The medium was adjusted to pH 5.5 with 1 M NaOH and filtered through a 0.2 μm filter (no. 431118, Corning Inc.). Each well of nonpyrogenic polystyrene sterile 6-well plates (CoStar 3506, Corning Inc.) was filled with 4950 μL of Hoagland medium and 50 μL of double-distilled water (ddH 2 O) or 50 μL of acetone in solvent control or 50 μL of acetone containing the appropriate concentration of test compound. The final concentration of acetone was 1% v/v. Two three-frond colonies from 4−5-day-old stock cultures were placed in each well. Plates were kept in an incubator (no. CU-36L model, Percival Scientific) with white light (94.2 μmol photons m −2 s −1 ). Total frond area per well was recorded by the Scanalyzer (LemnaTec, Wurselen, Germany) image analysis system from days 0 to 7. Percentage of increase between days 1 and 7 was determined relative to the baseline area at day zero. Experiments were done in triplicate, and nine concentrations (from 0 to 1 mM) were tested. Bioassays with Weed Species. Phytotoxicities of pure natural compounds were tested in 9 cm diameter Petri dishes on a diverse group of weeds, including five grasses [barnyardgrass (Echinochloa crus-galli), crabgrass (Digitaria sanguinalis), foxtail (Setaria italica), millet (Panicum sp.), and ryegrass (L. multiflorum)] and two broadleaf species [morningglory (Ipomea sp.) and velvetleaf (Abutilon theophrasti)]. For these assays, stock solutions of test compounds (100×) were prepared in acetone, and an aliquot of 40 μL was diluted in 3960 μL of distilled water prior to application. The effect on germination and growth was evaluated. Postemergence Assay. The assay described by Sampietro et al. 41 was used. Seeds of ryegrass were pre-incubated in 9 cm diameter Petri dishes on a moistened filter paper disk with 4 mL of distilled water for 2 days at 22°C. After that period, seedlings were moved to another Petri dish (20 seedlings per dish) with a filter paper disk. Four milliliters of the aqueous solution containing the test compound (500 μM) was added to each plate. The herbicide glyphosate (750 μM) was used as a positive control, and water with acetone (1% v/v) was used as solvent control. Lids were sealed with Parafilm, and Petri dishes were incubated at 22°C for an additional 4 days. Each treatment was performed in triplicate, and the plant growth (length of plants) was measured for each condition at 6 days (4 days of treatment). Pre-emergence Assay. Twenty seeds of the species of interest were placed in a 9 cm diameter Petri dish over a filter paper disk. Four milliliters of the aqueous solution containing test compounds (0.5 or 1 mM), the herbicide acetochlor (0.5 mM) as a positive control, or water with acetone (1% v/v) as solvent control was added to each plate. Lids were sealed with Parafilm, and Petri dishes were incubated at 22°C. Percentages of germination and plant growth were measured at 7 days. For a more direct comparison among different species, results are expressed as percent with respect to the solvent control. This procedure was also used to compare the phytotoxicity of pure compounds in light and dark. In these experiments, 5 cm diameter Petri dishes were used, with a final volume of 2 mL in each one, and the herbicide acetochlor was used at 1 mM.
For treatment in the dark, Petri dishes were kept in a dark chamber for 7 days. Glyphosate and acetochlor concentrations used were based on the recommended application rates, considering the surface area of the Petri dishes used. All experiments were carried out in triplicate, and results are expressed as the mean ± SD. Greenhouse Studies. Herbicidal activity of pure khellin and visnagin from Sigma-Aldrich was evaluated according to previously described methods 42 with some modifications. Compounds were tested on a broadleaf weed, velvetleaf (A. theophrasti), and two grasses, crabgrass (D. sanguinalis) and barnyardgrass (E. crus-galli). For all experiments, 10 cm diameter plastic pots were used. Pots were filled with silty clay loam soil collected in a field that had never been treated with herbicides near the USDA Jamie Whitten Research Center in Stoneville, MS, USA. After seeding, pots were placed in trays without drainage holes and watered from the bottom. Pots were subsequently moved to trays with drainage holes and watered as needed. Plants were grown in the greenhouse of the NPURU-USDA-ARS, Oxford, MS, USA, during May and June of 2015 at 25°C without supplemental illumination. Test compound and herbicide applications were performed simulating field applications with a Generation III Spray Booth (Devries Manufacturing, Hollandale, MN, USA) equipped with a model TeeJet EZ 8002 nozzle (Springfield, IL, USA) with a conical pattern and 80° spray angle. The distance from the nozzle to soil level was 40 cm for all experiments. The spray head was set to move over the plants at 1.3 km h −1 , and the apparatus was calibrated to deliver the equivalent of 400 L ha −1 at a pressure of 207 kPa with agitation. To reach an application rate of 1 kg active ingredient (ai) ha −1 , solutions of test compounds were prepared at 2.5 mg mL −1 . Final rates of 2 and 4 kg ai ha −1 were achieved by spraying two or four times, with a gap of 20−40 min between sprays; a worked check of this arithmetic is sketched below. Pure compounds were dissolved in acetone and then in distilled water with surfactant (1% v/v) using agitation. The final concentration of acetone in solution was 1% v/v. Three pots of each species were sprayed with each treatment or control. The height of the aerial part of the plants was measured 11 days after treatment (DAT). Plants were subsequently removed, washed with distilled water, and air-dried on absorbent paper towels for 7 days at 35°C for determination of dry weight. Pre-emergence Activity Assays with Pure Compounds. Selected weeds were seeded 1 day before treatment. Ten seeds of velvetleaf and 20 of crabgrass were seeded per pot. The soil was watered after seeding, and then pots were moved to trays with drainage holes for 24 h. The following day, pots were sprayed with khellin and visnagin at 2 kg ai ha −1 . Tween 20 was used as a surfactant at 1% v/v. An aqueous solution with acetone and surfactant, without compounds, was sprayed in control pots. Postemergence Activity Assays with Pure Compounds. Seeds of velvetleaf, crabgrass, and barnyardgrass were planted to determine germination rates and specific times required for each species to emerge. Once established, pots of each weed species were initiated at staggered intervals to provide seedlings that would be at a similar developmental stage at the same time. Velvetleaf plants were thinned to 5 and grasses to 10 seedlings per pot before treatment. For one of the experiments, seedlings of velvetleaf were 6 days old and those of crabgrass 8 days old at treatment.
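The spray calibration above implies a simple mass balance: delivered rate (kg ai ha−1) = spray volume (L ha−1) × solution concentration (mg mL−1) / 10^6. A minimal Python sketch of this arithmetic, for illustration only (the function name and structure are ours, not part of the study):

```python
# Illustrative check of the spray-rate arithmetic described above.
# Stated in the text: the booth delivers 400 L ha^-1 per pass and the
# test solutions were prepared at 2.5 mg ai mL^-1.

def rate_kg_ai_per_ha(spray_volume_l_per_ha: float,
                      conc_mg_per_ml: float,
                      passes: int = 1) -> float:
    """Active-ingredient rate (kg ai ha^-1) delivered over a given number of passes."""
    mg_per_ha = spray_volume_l_per_ha * 1000 * conc_mg_per_ml * passes  # L -> mL
    return mg_per_ha / 1e6  # mg -> kg

print(rate_kg_ai_per_ha(400, 2.5, passes=1))  # 1.0 -> 1 kg ai ha^-1 per pass
print(rate_kg_ai_per_ha(400, 2.5, passes=2))  # 2.0 -> the 2 kg ai ha^-1 treatments
print(rate_kg_ai_per_ha(400, 2.5, passes=4))  # 4.0 -> the 4 kg ai ha^-1 treatments
```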
Khellin and visnagin were tested at 2 kg ai ha −1 . Tween 20 (1% v/v) was added to the solution as a surfactant, and the control consisted of water with acetone and surfactant. Another experiment was conducted with plants at the two to three true-leaf stage. Visnagin was sprayed on 13-day-old velvetleaf and 16-day-old crabgrass and barnyardgrass at 4 kg ha −1 . The postemergence activity of visnagin was compared with the bioherbicide pelargonic acid at the same rate. AGRI-DEX (1% v/v) was used as a surfactant, and the control consisted of water with acetone and surfactant. Electrolyte Leakage Assay. The effects of pure furanochromones (Sigma-Aldrich) on membrane stability were studied as described by Dayan and Watson. 43 Cucumber cotyledon disks were exposed to each furanochromone at 100 and 300 μM. Control tissues were exposed to the same solvent as treated tissues but without the compounds. Conductivity measurements were carried out at the beginning of the dark incubation period; a second measurement was made after 16 h, at which time the samples were placed under high light intensity, and final measurements were made after 8 and 26 h of light exposure. Each experiment consisted of three replicates. Maximum conductivity was measured by boiling three samples of each treatment for 20 min. To study if electrolyte leakage caused by khellin and visnagin was light-dependent, two sets of Petri plates were prepared with cucumber cotyledon disks exposed to test compounds and control in both. One set was treated as described before (dark 16 h/light 26 h), whereas the other one was kept in darkness for 42 h. Conductivity measurements were done at the indicated times. Effect of Compounds on Photosynthetic Efficiency. The effect of pure khellin and visnagin was evaluated by chlorophyll fluorescence measurements according to the procedure of Dayan and Zaccaro. 44 Cucumber cotyledon disks were exposed to different dilutions of each furanochromone (10, 30, 100, and 300 μM). Control tissues were exposed to the same solvent as treated tissues but without the test compounds. The cotyledon disks were incubated in darkness for 18 h before exposure to light for 24 h. Photosynthetic quantum yield and electron transport rate (ETR) were measured. ETR values were expressed as percent of the average ETR values observed in control treatments. A time course experiment was performed by measuring induced fluorescence of cotyledon disks after treatment at 3 h in darkness (start) and after 18 h in darkness, after which the samples were placed in the light, and further measurements were made after 6 and 24 h of light exposure. Three replicates were performed for each experiment. Detection and Measurement of Reactive Oxygen Species (ROS). ROS cellular localization was determined by confocal microscopy using the fluorescent probe 2′,7′-dichlorofluorescein diacetate (DCFDA). Cucumber cotyledon disks (1 cm) were treated as described for the electrolyte leakage assay. Five disks were placed in 5 cm Petri plates and exposed to different dilutions of khellin and visnagin (0, 100, and 300 μM). Disks were incubated in darkness for 16 h before exposure to high light intensity for 5 h or in darkness for 30 h. As positive controls, disks were exposed to the same solvent as treated tissues but with 10 mM hydrogen peroxide for 30 min in the light.
After each treatment, they were vacuum-infiltrated in the dark with 50 μM DCFDA in 10 mM Tris-HCl pH 7.5, and ROS were visualized in an Eclipse TE-2000-E2 Nikon confocal microscope with excitation at 488 nm and emission at 515/530 nm. Green fluorescence intensities were quantified using the image processing package Fiji of ImageJ software. 45 Effect on Cell Division. Onion seed (Allium cepa L. Evergreen Longwhite Bunching, Burpee & Co., 2012, USA) germination was carried out for 7 days with a 14 h photoperiod in 9 cm diameter Petri dishes on a filter paper disk that was moistened with a dilution (2.5 mL) of test compound or control. Stock solutions of test compounds (100×) were prepared in acetone, and aliquots were diluted in ddH 2 O to get the final concentration. The control consisted of water with the same proportion of acetone (1% v/v) applied in the treatments. At 7 days of incubation, samples were processed for mitotic index analysis according to the method of Armbruster et al. 46 Twenty root tips (1 cm sections) were fixed in glacial acetic acid/absolute ethanol (1:3 v/v) for 30 min. The segments were incubated with 5 N HCl at 25°C for 1 h and washed several times with distilled water. After that, segments were stained with Schiff's reagent for 45 min in the dark at 25°C. Stained meristematic regions were identified as purple tips. The root segments were transferred with tweezers to a drop of 45% acetic acid in water on a microscope slide. The tips were cut with a razor blade, and a coverslip was carefully placed over the tips and gently squashed by applying slight and constant pressure directly over the tissues. The edge of the coverslip was sealed with nail polish to delay the evaporation of acetic acid. The mitotic index was calculated by tallying the cells in the various stages of mitosis; a minimal sketch of this calculation is given below. At least 1000 cells per slide, in triplicate (3000 cells per treatment), were counted for suitable statistical analysis of the data. An Olympus BX60 microscope (Olympus, Center Valley, PA, USA) was used, and cells with abnormal mitotic configurations were counted as a separate class. This procedure was slightly modified to evaluate if the cell division inhibition caused by furanochromones was reversible. For this experiment, onion seeds were incubated with solutions containing khellin or visnagin for 3 days. After this period, all seeds were washed three times with distilled water and placed in new Petri dishes on moistened filter paper disks with ddH 2 O (wash + treatment). Seeds were then incubated for an additional 4 days, after which a mitotic index analysis was performed. As a reference to compare results, another set of onion seeds was kept with test compounds or control for 7 days (wash − treatment) before analysis. Cell Death Determination. To evaluate cell death in roots, onion seeds were germinated and treated as described for the postemergence assay with weed species. Onion seedlings were exposed to different doses of each furanochromone (0, 100, and 300 μM) for 4 days. For the experiment with leaf disks, 3-week-old cucumber plants were used. One disk (1 cm) was placed in each well of a 12-well plate together with 1 mL of a 2% w/v sucrose/1 mM MES, pH 6.5, solution containing each of the compounds tested at the appropriate concentration (0, 100, or 300 μM) and acetone (1% v/v). Each assay was performed with leaf disks from different plants. Plates were sealed with Parafilm and incubated in a growth chamber at 21−27°C with a 16/8 h light/dark cycle for 7 days.
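The mitotic index tally referred to above reduces to simple proportions. A minimal sketch, with hypothetical counts (not data from this study):

```python
# Minimal sketch of the mitotic index calculation described above.
# All counts below are hypothetical; the study tallied at least 1000
# cells per slide, in triplicate (3000 cells per treatment).

def mitotic_index(cells_in_mitosis: int, total_cells: int) -> float:
    """Percentage of counted cells found in any mitotic phase."""
    return 100.0 * cells_in_mitosis / total_cells

# Hypothetical per-phase tallies for one treatment.
phases = {"prophase": 42, "metaphase": 18, "anaphase": 9,
          "telophase": 11, "abnormal": 6}
total_counted = 3000

dividing = sum(phases.values())
print(f"mitotic index = {mitotic_index(dividing, total_counted):.2f}%")
for phase, n in phases.items():  # relative distribution among dividing cells
    print(f"{phase}: {100.0 * n / dividing:.1f}% of dividing cells")
```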
Cell death was determined by Evans blue staining. 47 Root tips (5 mm sections) and leaf disks were incubated for 30 min in 0.25% w/v Evans blue aqueous solution at 25°C on a rotary shaker. After staining, unbound dye was removed by extensive washing with deionized water. Three root tips (three replicates) or one leaf disk (four replicates) were ground in a tissue grind tube with 500 μL of 1% w/v sodium dodecyl sulfate (SDS). The resulting suspension was centrifuged for 20 min at 20000g, and the supernatant was used for dye quantification by monitoring the absorbance at 600 and 680 nm. Relative cell death was expressed as A 600 for root tips and A 600 /A 680 ratios for leaf disks. 48 Statistical Analysis. Data from dose−response experiments were analyzed by a log−logistic model 40,49 using the dose−response curve module 50 of R software version 2.2.1. 51 Concentrations required for 50% germination or growth inhibition relative to control (IC 50 values) were obtained from the estimated parameters in the regression curves; an illustrative sketch of this kind of fit is given below. The standard error of each estimation is provided. Data from phytotoxicity bioassays in the laboratory and greenhouse, ROS, and cell death quantifications were analyzed by ANOVA using InfoStat statistical software version 2015, 52 and Scheffe's test was employed to compare the means at α = 0.05.
■ RESULTS AND DISCUSSION
Phytotoxicity Bioassay-Guided Isolation and Identification of Khellin and Visnagin. Different extracts of A. visnaga were phytotoxic to lettuce (L. sativa) and creeping bentgrass (A. stolonifera) ( Table 1). The dichloromethane extract of flowers (flowers DCM ) exhibited the highest inhibitory effect on seed germination and growth. Therefore, its bioassay-guided fractionation was carried out to isolate the phytotoxic compounds. The flowers DCM crude extract was subjected to normal-phase flash column chromatography, providing 19 fractions, I−XIX. All fractions were evaluated for their phytotoxicity on lettuce and creeping bentgrass at 1 mg mL −1 . Four fractions, XIV−XVII, were the most active, causing complete germination inhibition of both species at 7 days ( Table 2). The 1 H NMR spectra of these four active fractions revealed that fractions XIV and XVII were two different pure compounds, whereas the intermediate fractions XV and XVI contained a mixture of different compounds ( Figures S2, S4, S5, and S6). The pure compounds in fractions XIV and XVII were identified by GC-MS, high-resolution mass spectrometry, and 1 H and 13 C NMR data as the furanochromones khellin (4,9-dimethoxy-7-methyl-5H-furo[3,2-g]chromen-5-one) and visnagin (4-methoxy-7-methyl-5H-furo[3,2-g]chromen-5-one), respectively ( Figure 1). For both compounds, the 1 H and 13 C chemical shifts matched spectral data reported for khellin ( Figures S2 and S3) and visnagin (Figures S6 and S7). 37−39 These furanochromones have been previously isolated from A. visnaga. 10 We therefore assayed the activity of commercial chemical standards of khellin and visnagin. We used a standard concentration of 1 mM to compare the effects of compounds of unknown phytotoxicity; at this concentration, almost all commercial herbicides produce an effect, providing a basis for comparison. At the same concentration, the technical standards were as phytotoxic to lettuce and creeping bentgrass as the two furanochromones isolated from A. visnaga (Table ST1). Five commercially available structural analogues ( Figure 1) were also tested (Table ST1).
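The study fitted dose−response data with a log-logistic model in R (the drc package), as noted under Statistical Analysis above. Purely for illustration, an equivalent fit can be sketched in Python with SciPy; the four-parameter log-logistic form below is a standard choice and the data are synthetic, so neither should be read as the authors' exact analysis:

```python
# Illustrative four-parameter log-logistic fit for IC50 estimation.
# Synthetic data; the study used the drc module of R. Zero doses are
# omitted here because the model is evaluated at x > 0.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(x, b, c, d, e):
    """y = c + (d - c) / (1 + (x/e)**b); e equals the IC50 when c = 0 and d = 100."""
    return c + (d - c) / (1.0 + (x / e) ** b)

dose = np.array([1, 3, 10, 30, 100, 300, 600, 1000], dtype=float)  # uM
growth = np.array([99, 97, 92, 80, 58, 28, 12, 5], dtype=float)    # % of control

popt, pcov = curve_fit(log_logistic, dose, growth,
                       p0=[1.0, 0.0, 100.0, 150.0], maxfev=10000)
b, c, d, e = popt
se_e = np.sqrt(np.diag(pcov))[3]
print(f"IC50 = {e:.0f} +/- {se_e:.0f} uM")  # fitted midpoint and its standard error
```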
Interestingly, benzo-γ-pyrone (chromone), the structure of which is smaller and simpler than those of khellin and visnagin, showed a similar level of activity to them. This may indicate that the rest of the structure contributes to activity but is not essential for the phytotoxicity. Also, the replacement of the methyl group in C-7 of khellin by a carboxyl group (as in 4,9-dimethoxy-5-oxo-5H-furo[3,2-g]chromene-7-carboxylic acid) did not increase the phytotoxicity, although it did improve the aqueous solubility of the molecule. Except for these two compounds, the effects of which were close to those of khellin and visnagin, none of the other analogues was as phytotoxic as these furanochromones. Accordingly, only the herbicidal activities of khellin and visnagin were studied in more detail. Phytotoxic Effects of Khellin and Visnagin on Different Species, Including Weeds. During dose−response bioassays in the laboratory, both furanochromones inhibited the growth of duckweed (L. paucicostata) and the germination and growth of lettuce and ryegrass (L. multiflorum) ( Table 3) (Figures S8, S10, and S11). For dose−response curves with lettuce and ryegrass, germination and growth of seedlings were comparable between the solvent control (acetone 10% v/v) and the control in water ( Figure S12). These compounds caused almost complete necrosis of duckweed tissue and total inhibition of growth at 333 and 1000 μM (Figures S8 and S9). This model genus (Lemna) is routinely used to study the action of phytotoxins and herbicides by the pesticide industry, environmental toxicologists, and others. 40,62,63 Image analysis allows for very accurate determination of growth of the same plants over time. The Lemna species used in this study, L. paucicostata, is smaller than L. minor and L. gibba, facilitating miniaturization. Because a wide range of herbicides have been previously tested by this method, 40,64 it was possible to compare khellin and visnagin with those products. The IC 50 values for growth inhibition of duckweed were 162 ± 29 and 122 ± 28 μM for khellin and visnagin, respectively. Comparatively, in a study in which 26 different herbicides were tested on duckweed, there was a large range of responses, with IC 50 figures ranging from 0.003 to 407 μM. 40 Therefore, the IC 50 values of the isolated furanochromones are in the middle range of commercial herbicides in this bioassay, similar to clomazone (IC 50 = 126 μM) and naptalam (IC 50 = 128 μM). Ryegrass is a weed with high economic impact, which has evolved resistance to herbicides with several modes of action. 1 Khellin and visnagin also showed dose-dependent phytotoxic activity when they were applied to ryegrass seeds ( Figure S11). Additionally, the postemergence application of khellin and visnagin at 500 μM on 2-day-old seedlings of ryegrass in Petri dishes significantly reduced the growth of this weed (Figure 2). The plant growth at 6 days was much lower after treatment with visnagin than with khellin. The growth inhibitory effect caused by visnagin at 500 μM on ryegrass was similar to the one caused by the postemergence herbicide glyphosate at 750 μM ( Figure 2A). Moreover, the phytotoxic activity of khellin and visnagin was observed on other problematic weeds.
To evaluate if this effect was selective, these compounds were tested on a diverse group of target weeds, which included grasses such as barnyardgrass (E. crus-galli), crabgrass (D. sanguinalis), foxtail (S. italica), and millet (Panicum sp.), and broadleaf species such as morningglory (Ipomea sp.) and velvetleaf (A. theophrasti). As it was observed that these molecules caused the highest reduction in plant growth at 500 μM or more ( Figures S8, S10, and S11), they were used at 0.5 and 1 mM to determine their herbicidal potential, and the effect was compared with the pre-emergent herbicide acetochlor. Khellin and visnagin showed a pre-emergence, nonselective effect because they significantly reduced the growth of the various weed species tested by application on the seeds ( Figure 3A). Germination of the grasses but not the broadleaf species was affected by these treatments ( Figure 3B). Interestingly, at 0.5 mM, visnagin caused a reduction in the growth and germination of weeds comparable to acetochlor at the same dose. Foxtail and crabgrass were shorter after treatment with visnagin than with acetochlor. By increasing the concentration of both furanochromones to 1 mM, germination of grasses and growth of all species were significantly more affected than at 0.5 mM (Figure 3), and the natural compounds were more phytotoxic to weeds than 0.5 mM acetochlor. To evaluate if khellin and visnagin were also effective on weeds in soil, their herbicidal activity was tested in greenhouse studies. As these compounds showed a generally nonselective effect, two weed species were selected for these assays: crabgrass (D. sanguinalis) as representative of grasses and velvetleaf (A. theophrasti) as representative of broadleaf species. No significant pre-emergence activity 11 DAT was observed when these compounds were sprayed at 2 kg ai ha −1 (data not shown). This result is in contrast to what was detected in laboratory assays in Petri dishes. Many natural phytotoxins may decrease their activities in soil due to several factors. 65,66 There are physical, chemical, and microbiological factors that can reduce herbicide activity by making herbicides biologically unavailable or by degrading them through chemical or biological processes. 67 On the other hand, during this greenhouse assay, both furanochromones produced a significant postemergence effect 11 DAT at 2 kg ai ha −1 (Figure 4). Visnagin was the most active under these conditions, causing a biomass reduction of >50% in crabgrass and velvetleaf with respect to the control ( Figure 4A). It also significantly reduced the height of both weeds, affecting crabgrass more severely ( Figure 4B). Plant growth reduction was observed with both compounds but was greater with visnagin. Necrosis was observed on the leaf edges of both weeds sprayed with khellin and visnagin ( Figure 4C). Necrotic lesions were not found in parts of the plants not directly wetted by the spray, such as new leaves, indicating a contact effect. As visnagin was more active than khellin as a postemergence treatment, a second greenhouse study was carried out to evaluate its postemergence herbicidal activity at 4 kg ai ha −1 on velvetleaf, crabgrass, and barnyardgrass (E. crus-galli). Because crabgrass was more sensitive to visnagin, a second grass was included in this assay. As a reference, treatment with pelargonic acid (n-nonanoic acid) at the same rate was used, because it is a commercial herbicide but is also a natural product.
The postemergence effect of visnagin at 11 DAT was comparable to the one caused by pelargonic acid at the same dose ( Figure 5). The dry weight of the three weeds sprayed with these two compounds was significantly smaller than that of the control ( Figure 5A). A similar reduction was observed in weed plant height, except for barnyardgrass, the height of which was not affected by visnagin or pelargonic acid ( Figure 5B). As in the previous greenhouse study, necrotic lesions of leaves after treatment with visnagin at 4 kg ai ha −1 indicated it can act as a contact herbicide. This was observed with both visnagin and pelargonic acid ( Figure 5C). The oldest leaves, which received the spray with these compounds, were significantly more injured than newer leaves. Pelargonic acid is a natural fatty acid, which is sold as a broad-spectrum bioherbicide for nonselective vegetative burndown in many situations. It disrupts plant cell membranes, causing rapid loss of cellular functions. 68 This fatty acid is more effective than other bioherbicides such as acetic acid or corn gluten meal that must be applied at rates of tons per hectare. Pelargonic acid is recommended at rates of 10−15 kg ai ha −1 . 69 The potential of visnagin as a bioherbicide seems comparable to that of pelargonic acid according to the results of the greenhouse studies. Both compounds caused a significant and comparable reduction in biomass and height of weeds at equivalent doses. Formulation and chemical modification of lead molecules can greatly improve herbicide efficacy. 70−72 The analogues tested in the course of this study were less active than visnagin or khellin, but different analogues could have improved activity. However, their identification and study go beyond the objectives of the present work. Studies on the Possible Mode of Action of Khellin and Visnagin. The herbicidal activity or phytotoxicity of these two furanochromones has not been described before. Therefore, we conducted several assays that can provide information about their mode(s) of action. Because the IC 50 values of these compounds to inhibit growth of duckweed, lettuce, and ryegrass were in the range from 110 ± 11 to 244 ± 37 μM, doses in the range of the IC 50 values were used for these assays. The integrity of the plant plasma membrane is a good biomarker to help identify modes of action of herbicides and their dependence on light. 43 Stress conditions are often accompanied by the accumulation of high levels of ROS exceeding the detoxification mechanisms of plant cells. This can lead to membrane lipid peroxidation, resulting in the uncontrolled release of cellular electrolytes. 73 Khellin and visnagin produced a destabilization of the cell membranes at 100 and 300 μM, leading to significant electrolyte leakage ( Figure 6). The stress caused by these furanochromones may trigger this phenomenon through direct or indirect effects. It has been recently suggested that electrolyte leakage, which stimulates proteases and endonucleases, and programmed cell death are often linked to each other when plant cells are severely stressed.
74 In cucumber leaf disks exposed to 100 μM khellin or visnagin, an increase (∼35%) in relative cell death was detected with respect to the control, as estimated by the Evans blue staining procedure 47 (Figure 7A,B). This rose to a 3.5-fold increase when the concentration was 300 μM, indicating severe damage to leaf tissue at the higher dose. Such processes of plasma membrane destabilization and cell death induced by these furanochromones would explain the necrosis observed in plants sprayed with these compounds in greenhouse assays. At 42 h of incubation, either in the dark or after 26 h of high light intensity exposure, both furanochromones triggered significant electrolyte leakage in cucumber cotyledon disks ( Figure 6). Ion leakage caused by 100 and 300 μM khellin and visnagin in the dark ( Figure 6B,D) indicates that the mode of action of these compounds is not light-dependent and involves cellular leakage. This was also shown when the phytotoxicity of khellin and visnagin was compared under light and dark conditions. There were no significant differences in their phytotoxic effects on lettuce in light and darkness (Figure 8). However, the most intense electrolyte leakage was observed after incubation of cucumber cotyledon disks with khellin and visnagin plus 26 h of high light intensity ( Figure 6A,C). Under these conditions, the effect of these compounds at 100 and 300 μM was comparable to the positive control obtained by boiling the cotyledon disks, and bleaching was also observed ( Figure S13). This may be a consequence of the higher level of ROS produced in light, combined with the stress caused by the furanochromones. Most biocides cause ROS generation as a side effect before cell death occurs. To evaluate if there was an increase in ROS levels after treatment with khellin and visnagin, cucumber cotyledon disks were studied under presymptomatic conditions (before the detection of high levels of electrolyte leakage). As for the cell death and electrolyte leakage assays, each compound was tested at 100 and 300 μM. The tissues were incubated in darkness (16 h) before exposure to high light intensity for 5 h. Cucumber cotyledon disks were subsequently treated with the ROS-dependent fluorescent probe DCFDA for visualization by confocal microscopy. In control disks and disks treated with the compounds at 100 μM, most of the label was recovered in chloroplasts, as expected in light conditions, co-localizing with chlorophyll autofluorescence ( Figure 9A). Image analysis indicates that ROS levels in cotyledon disks exposed to 300 μM khellin or visnagin were significantly higher than in the control ( Figure 9B) and comparable to the treatment with 10 mM H 2 O 2 . Under these conditions, additional green fluorescence was detected in other cellular compartments and membranes ( Figure 9A), indicating increased peroxidation. According to these results, the treatment with these furanochromones at 300 μM, together with high light intensity, caused an increase in cellular ROS levels prior to the plasma membrane destabilization. However, no significantly higher generation of ROS was detected after treatment with compounds at 100 μM plus 5 h of high light intensity ( Figure 9), even though visnagin caused electrolyte leakage at this dose after 8 h in the light. A longer exposure to light is probably required to detect ROS generation.
Thus, the increase in ROS may not be directly associated with the molecular target site of these compounds, but is more likely to be a secondary or tertiary effect of whatever is causing cellular leakage. Moreover, after incubation in complete darkness for 30 h, ROS generation was not detected in cucumber cotyledon disks with either khellin or visnagin at 300 μM ( Figure S14). This observation suggests that the main source of ROS in furanochromone-treated plants is associated with photosynthetic activities. Moreover, whereas ROS propagation may contribute to damage in the light, the basic mechanism of visnagin and khellin toxicity would be ROS-independent. Chlorophyll fluorescence measurements showed that khellin and visnagin significantly reduce the photosynthetic efficiency of cucumber cotyledon disks. The ETR was reduced by ∼35 and ∼50% after 24 h of incubation in the light (42 h total) with these compounds at 100 and 300 μM, respectively ( Figure 10). These results suggest that photosynthesis is not a primary target of khellin or visnagin. Considering the long period of incubation and irradiation required for a significant photosynthetic efficiency decline, khellin and visnagin probably affect photosynthesis indirectly, most likely altering chlorophyll fluorescence as a consequence of the membrane peroxidation and destabilization previously detected under these conditions (Figures 6 and 9). The effect of khellin and visnagin on cell division was evaluated by mitotic index analysis of root meristem cells of onion (A. cepa). 36,75 Because the sensitivity of this plant species to these furanochromones was unknown, a broader range of concentrations was tested (0−1000 μM). Both compounds inhibited cell division in a dose-dependent manner ( Figure 11A). Visnagin was more active than khellin and completely inhibited onion cell division at 300 μM. Inhibition was not associated with the arrest of a particular phase of mitosis. The percentage of cells observed in each mitotic phase after exposure to these furanochromones was smaller than in the control. However, when inhibition was not total, the relative proportion of cells in the mitotic phases after treatment with both compounds was comparable to the control ( Figure 12). A different situation was observed regarding cells with abnormal configurations such as chromosome aberrations (chromosome bridges, breaks, and losses), nuclear abnormalities (lobulated nuclei, nuclei carrying nuclear buds, polynuclear cells, etc.), or micronuclei. The proportion of these types of cells with abnormal configurations after treatment with khellin or visnagin was higher than in the control ( Figure 12). Inhibition of cellular division caused by khellin and visnagin at 100 μM was irreversible under the experimental conditions employed ( Figure 11B). As in the dose−response assay, we failed to detect alterations in the distribution of dividing cells into the different mitotic phases with respect to the control, except for cells with abnormal configurations, the proportion of which after treatment with the furanochromones was higher than in the control ( Figure S15). However, root meristem onion cells did not recover their normal division rate after washing the seeds at 3 days with distilled water. This effect might be associated with a cell death process induced by these compounds.
In onion roots exposed to 100 μM khellin or visnagin, a 2-fold increase in relative cell death was detected ( Figure 7C,D). These furanochromones inhibited cell division, but also caused cell death in onion roots, which was dependent on the dose used. At 300 μM, both compounds induced a 2-fold increase in relative cell death compared to 100 μM and a 3−4-fold increase relative to the control conditions ( Figure 7C,D). Our results indicate that the mode of action of khellin and visnagin could be a complex process involving multiple targets. The inhibition of cell division and the increased cell death caused by these furanochromones, together with cell membrane destabilization, would account for the reduction in plant growth. In addition, these effects explain the development of necrosis and abnormal leaves observed after treatment with both compounds. Because membrane destabilization caused by these furanochromones was intensified after a light irradiation period, and considering their chemical nature, a phototoxic effect might be expected. Some biological activities caused by furanochromones, as well as their possible role in plant defense, have been associated with their photoactivity. 57,58,60,61,76−78 However, their phytotoxicity was not light-dependent, because both compounds induced electrolyte leakage in darkness, and the inhibitions of lettuce growth under light and darkness were similar. Despite the similarities in chemical structures and properties that furanochromones share with furanocoumarins, or psoralens, they differ in their photochemical properties and in their ability to photodamage eukaryotic cells and form cross-links. 79 Visnagin is much less phototoxic and photomutagenic than bergapten when compared at equimolar concentrations and equal UV-A doses on the green alga Chlamydomonas reinhardtii. 58 According to Martelli et al., 80 visnagin and khellin could react with DNA and generate activated oxygen species upon UV irradiation. However, in later work, the extent of photoaddition was low compared with most furanocoumarins, and oxygen-dependent photo-oxidation of DNA was not observed. 81 The absence of DNA photo-oxidation after treatment with visnagin or khellin plus UV-A suggested that furanochromones do not have any photodynamic effect on DNA. The phototoxicity of these molecules, albeit low when compared to furanocoumarins, might contribute to their herbicidal activity. Under high irradiation conditions, ROS production increased after treatment with khellin and visnagin (Figure 9), resulting in higher oxidative damage to cell membranes and other cellular components. However, the potential phototoxic effect under high light would not explain the phytotoxicity in the dark, indicating that other mechanisms are involved. In conclusion, the mode of action of these furanochromones appears to be a complex process. It is not light-dependent and involves effects on membrane stability, cell division, and cell viability in leaves and roots that may not be related. Both compounds also reduce photosynthetic efficiency through indirect effects and induce oxidative damage under high light intensity. Visnagin had the best contact postemergence herbicidal activity in greenhouse assays.
Its effect was comparable to that of the commercial bioherbicide pelargonic acid at the same rate, indicating visnagin's potential as a bioherbicide or lead molecule for the discovery of new synthetic herbicides.

Supporting Information: The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.jafc.6b02462.
Immune activation and inflammatory biomarkers as predictors of venous thromboembolism in lymphoma patients

Background: Lymphomas are characterized by elevated synthesis of inflammatory soluble mediators that could trigger the development of venous thromboembolism (VTE). However, data on the relationship between specific immune dysregulation and VTE occurrence in patients with lymphoma are scarce. Therefore, this study aimed to assess the association between inflammatory markers and the risk of VTE development in patients with lymphoma.

Methods: The erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), lactate dehydrogenase (LDH), total protein (TP), and albumin were assessed in 706 patients with newly diagnosed or relapsed lymphoma. Data were collected for all VTE events, while the diagnosis of VTE was established objectively based on radiographic studies. ROC (receiver operating characteristic) curve analysis was performed to define the optimal cutoff values for predicting VTE.

Results: The majority of patients were diagnosed with aggressive non-Hodgkin lymphoma (58.8%) and had advanced stage disease (59.9%). Sixty-nine patients (9.8%) developed VTE. The NLR, PLR, ESR, CRP, and LDH were significantly higher in the patients with lymphoma with VTE, whereas the TP and albumin were significantly lower in those patients. In the univariate regression analysis, the NLR, PLR, TP, albumin, LDH, and CRP were prognostic factors for VTE development. In the multivariate regression model, the NLR and CRP were independent prognostic factors for VTE development. ROC curve analysis demonstrated acceptable specificity and sensitivity of the NLR, PLR, and CRP for predicting VTE.

Conclusion: Inflammatory dysregulation plays an important role in VTE development in patients with lymphoma. Widely accessible, simple inflammatory parameters can classify patients with lymphoma at risk of VTE development.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12959-022-00381-3.

Introduction

Venous thromboembolism (VTE) is a leading cause of cancer-associated thrombosis (CAT) in patients with malignancy. The pathophysiological relationship between VTE development and malignancy was established decades ago [1]. Patients with cancer have up to a 7-fold higher risk of developing VTE than do healthy individuals [2]. Moreover, VTE is the second leading cause of mortality in patients with cancer, immediately after cancer progression [3]. Additionally, VTE prolongs the duration of hospitalization and consequently raises the costs of treatment [4]. Furthermore, Khorana et al. [3] identified CAT as the leading cause of mortality in ambulatory patients with active malignancy receiving chemotherapy. The clinical consequences and financial burden of VTE in specific groups of cancer patients have been gaining increased attention [5].

Lymphomas comprise a heterogeneous group of clonal hematological neoplasms characterized by a varying clinical course, from utterly indolent to extremely aggressive [6]. Their biological diversity is reflected in the various mechanisms through which the disease advances and causes complications. A particular pathophysiological feature of lymphomas is immune dysregulation, which is deeply related to activation of inflammation.
Multiple pro-inflammatory cytokines have a principal role in lymphomagenesis: interleukin 6 (IL-6) is related to the Th17 immune response, which has an association with non-Hodgkin lymphoma (NHL). In addition, IL-10 gene polymorphisms are linked to an elevated risk of NHL development, and higher levels of tumor necrosis factor alpha (TNF-α) are associated with the development of particular types of NHL [7]. The level and nature of inflammatory dysregulation vary between different types of lymphomas [8]. In patients with a compromised immune system due to both the malignancy (primary disease) and cancer treatment, the possibility of infections is substantial, which can generate circumstances for VTE complications [9].

The pathways that trigger inflammation are subject to fine modulation and differ based on the type of lymphoma. However, inflammation has classically been assessed by the white blood cell (WBC) count and standardized acute phase reactants, including C-reactive protein (CRP), sedimentation rate, and fibrinogen level. Moreover, the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) are newer biomarkers of systemic inflammation [10]. The NLR and PLR have been widely studied in different medical fields, where they have demonstrated prognostic significance for various outcomes [11][12][13][14]. Additionally, their potential strength for predicting VTE in cancer patients receiving chemotherapy has been emphasized in several recently published studies [15,16]. However, their predictive performance and reliability have not been evaluated in patients with lymphoma.

Inflammation and inflammation-related conditions are associated with an increased risk of VTE due to dysregulation of multiple pathophysiological pathways, including venous stasis, hypercoagulability, inflammation, IL-6 expression, and inhibition of natural inhibitors of coagulation and anticoagulants [17]. The inflammatory and thrombotic pathways overlap [17]. In response to inflammation, among other stimuli, endothelial cells transform toward a prothrombotic phenotype by increasing the expression of leukocyte adhesion molecules (P- and E-selectin), tissue factor (TF), and angiopoietin-2 (Angpt2). These modifications result in a loss of vascular integrity coupled with lowered expression of antithrombotic molecules and overexpression of procoagulants (complement effectors and the coagulation factors thrombin, VIIa, and Xa) [18]. Furthermore, inflammatory cytokines, together with P-selectin, the formation of luminal von Willebrand factor, and TF expression, lead to the recruitment and activation of monocytes, neutrophils, and platelets, and to coagulation activation [18,19]. At the same time, the fibrinolytic system is partially impaired by elevated levels of plasminogen activator inhibitor-1 (PAI-1) [18]. Immunothrombosis, a relatively new term coined by Engelmann and Massberg [20], emphasizes the role of innate immunity in VTE development.

However, data on the relationship between specific immune dysregulation in patients with lymphoma and VTE occurrence are scarce. Therefore, our study aimed to assess the association between inflammatory markers and the risk of VTE development in patients with lymphoma receiving chemotherapy and to evaluate the relationship between VTE and treatment course in patients with lymphoma.
Study population

The study included 706 patients with newly diagnosed or relapsed lymphomas (including NHL and Hodgkin lymphoma [HL] and excluding chronic lymphocytic leukemia [CLL] and the leukemic phases of all lymphomas) at the Clinic for Hematology, University Clinical Center of Serbia (UCCS). The study protocol was approved by the UCCS Ethics Committee, and written informed consent was obtained from all participants. Eligible patients were recruited from January 2010 to November 2019. Patients with CLL and other leukemic phases of lymphomas were excluded because their differential WBC count is shifted, which affects the NLR and PLR and renders these values invalid. Data of patients with newly diagnosed and relapsed lymphoma were collected for all VTE events. VTE was diagnosed objectively based on radiographic studies, including compression ultrasonography, contrast-enhanced thoracic computed tomography, and magnetic resonance imaging for central nervous system (CNS) thrombosis, as well as clinical examination and laboratory evaluation. All probable cases of VTE were reviewed by a final diagnosis committee composed of two specialists (an internist and a radiologist).

Laboratory investigations

Blood samples from the patients with lymphoma included in the study were collected using vacuum tubes. Samples were anticoagulated with EDTA, and an automated complete blood count (CBC) with leukocyte differential counts was performed. The NLR and PLR were calculated using the CBC with leukocyte differential counts. Citrated blood samples were analyzed in batches using commercially available ELISA kits at the University Clinical Center of Serbia. Furthermore, the following biochemical parameters were analyzed: erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), lactate dehydrogenase (LDH), fibrinogen, total protein (TP), and albumin.

Statistical analysis

Categorical variables are displayed as counts with percentages, and numerical variables are presented as medians with ranges. Normality of distribution was assessed using the Kolmogorov-Smirnov test. LDH, TP, and albumin were transformed into categorical variables and defined as "under the lower reference range limit," "in the reference range," or "over the upper reference range limit." Differences between patients with lymphoma who developed thrombosis and those without thrombosis were assessed using the Mann-Whitney test for numerical variables and the chi-square test for categorical variables. ROC (receiver operating characteristic) curve analysis was used to define the best cutoff values for predicting VTE. Multivariate logistic regression analysis was performed to identify significant predictors of thrombosis in patients with lymphoma. Significant variables from the univariate logistic regression analysis were fitted into the multivariate analysis. The results are presented as odds ratios (ORs) with corresponding 95% confidence intervals (CIs). Statistical significance was set at p < 0.05. Statistical analysis was performed using IBM SPSS statistical software (SPSS for Windows, release 25.0, SPSS, Chicago, IL, USA).

Results

The mean age of the patients included in the study was 52.8 years (range, 18-89 years); 53% of the patients were men. A total of 415 patients (58.8%) had aggressive NHL, 172 (24.3%) had indolent NHL, and 119 (16.9%) had HL.
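As a minimal illustration of the derived ratios and the two-group comparison described in the Statistical analysis subsection above, the sketch below computes the NLR and PLR from CBC values and applies the Mann-Whitney test. All values are synthetic placeholders; only the cohort size and event count mirror the study.

```python
# Sketch of the NLR/PLR derivation from CBC counts and the Mann-Whitney
# U comparison described above (synthetic values, not the study data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n = 706                               # cohort size, as in the study
vte = np.zeros(n, dtype=bool)
vte[:69] = True                       # 69 VTE events (9.8%)

# Hypothetical CBC values (cells x10^9/L); VTE group drawn slightly higher.
neutrophils = np.where(vte, rng.lognormal(1.7, 0.3, n), rng.lognormal(1.5, 0.3, n))
lymphocytes = rng.lognormal(0.5, 0.3, n)
platelets = rng.normal(250.0, 60.0, n)

nlr = neutrophils / lymphocytes       # neutrophil-to-lymphocyte ratio
plr = platelets / lymphocytes         # platelet-to-lymphocyte ratio

u, p = mannwhitneyu(nlr[vte], nlr[~vte], alternative="two-sided")
print(f"NLR Mann-Whitney U = {u:.0f}, p = {p:.4f}")
```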
Most of the patients were newly diagnosed (90.4%) and had advanced stage disease, with Ann Arbor stages III and IV accounting for 20.6 and 39.3% of the cases, respectively. Most patients had good performance status (Eastern Cooperative Oncology Group Performance Status [ECOG PS] 0-1: 81.7%), and B symptoms were present in 55.8% of patients. A "bulky" tumor mass (lymphoma masses or conglomerates of lymph node masses measuring ≥7 cm) was observed in 30.7% of the patients, and mediastinal involvement was found in 31.4% of the patients. The median follow-up was 25 months.

Sixty-nine patients (9.8%) developed VTE events: 39 developed deep vein thrombosis (DVT) of the extremities, three developed abdominal vein thrombosis, 12 developed superficial vein thrombosis, 11 developed jugular vein thrombosis, and 16 developed pulmonary embolisms (some patients had more than one thrombotic event). Most patients (59.4%) had symptomatic VTE (41/69). The majority of patients developed VTE during treatment (52.1%); however, 46.5% of patients were diagnosed with VTE prior to treatment initiation, and 1.2% developed VTE after completion of treatment. VTE was more frequent in the patients with aggressive lymphoma (11.8%) than in those with HL (8.4%) and indolent lymphoma (5.3%) (Table 1). None of the patients with lymphoma with disease dissemination in the CNS developed VTE. Most patients with lymphoma with VTE had advanced stage disease (stage III, 29%; stage IV, 34.8%). The demographic and clinical characteristics of lymphoma patients with and without VTE are presented in Table 2.

Compared with patients without VTE, the NLR, PLR, ESR, CRP, and LDH were significantly higher in the patients with lymphoma with VTE (p = 0.001, p = 0.001, p = 0.023, p < 0.001, and p = 0.035, respectively), whereas the TP and albumin were significantly lower (p = 0.024 and p = 0.032, respectively). In the patients with diffuse large B-cell lymphoma (DLBCL), an IPI > 1 (intermediate/high-risk group) was more frequent in those with VTE (p = 0.027). In the univariate regression analysis, the NLR, PLR, TP, albumin, LDH, and CRP were found to be prognostic factors for VTE development in the patients with lymphoma (Table 3). In the subgroup analysis excluding superficial vein thrombosis events, both the NLR and PLR remained prognostic factors for VTE development. B-symptomatology, a "bulky" tumor mass, mediastinal involvement, and ECOG PS were the significant clinicopathological prognostic factors for VTE development in the patients with lymphoma (p = 0.001, p = 0.001, p = 0.005, and p = 0.015, respectively) (Table 3). In the multivariate regression model, the NLR and CRP were found to be independent prognostic factors for VTE development in the patients with lymphoma. A high NLR was defined as an NLR of 3 or higher, a high PLR as a PLR of 10 or higher, and a high CRP level as CRP > 20 mg/L. For the previously defined Khorana score cut-off for high risk (≥3), sensitivity and specificity were 11.6% and 88.0%, respectively, in our study.

There was no difference in the use of thromboprophylaxis between the patients with lymphoma with and without VTE (13% vs. 18.4%, p = 0.268). A poor therapeutic response to chemotherapy and immunotherapy was associated with the development of VTE (p = 0.011). Complete remission was less frequent in the patients with lymphoma who developed VTE than in those who did not develop VTE (36.9% vs. 53.6%, p = 0.011).
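Cutoffs such as NLR ≥ 3 are the kind of threshold that the ROC analysis described earlier can yield. A minimal sketch of that selection step, using Youden's J statistic, follows; the class sizes mirror the cohort (69 of 706), but the score distributions are invented for illustration.

```python
# Sketch of ROC-based cutoff selection via Youden's J statistic
# (synthetic labels and NLR-like scores, not the study data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y = np.r_[np.ones(69), np.zeros(637)]        # 69 VTE events of 706 patients
scores = np.r_[rng.normal(4.0, 1.5, 69),     # hypothetical NLR, VTE group
               rng.normal(2.5, 1.0, 637)]    # hypothetical NLR, no VTE

fpr, tpr, thresholds = roc_curve(y, scores)
j = tpr - fpr                                # Youden's J at each threshold
best = np.argmax(j)                          # threshold maximizing Sn + Sp - 1
print(f"AUC = {roc_auc_score(y, scores):.2f}, cutoff = {thresholds[best]:.2f}, "
      f"Sn = {tpr[best]:.2f}, Sp = {1 - fpr[best]:.2f}")
```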
The patients receiving intensive first-line or "salvage" chemotherapeutic regimens experienced a higher VTE rate than did those treated with standard first-line therapy regimens, such as R-CHOP, CHOP, and ABVD (18.2% vs. 7.3%, p < 0.001).

Discussion

In this study, we aimed to evaluate the correlation between inflammatory markers and the risk of VTE development in a cohort of patients with lymphoma, and to assess the relationship between VTE and treatment course in those patients. Our analysis found that the inflammatory markers correlated well with the risk of VTE development in patients with lymphoma, with the NLR and CRP being the most accurate VTE predictive markers. Furthermore, we identified an insufficient therapeutic response to (immuno)chemotherapy as a risk factor for VTE in the patients with lymphoma. In summary, immune dysregulation in the lymphoma setting has a substantial impact on VTE occurrence.

In our study of patients with different types of lymphoma, the rate of VTE development was 9.8%. In a meta-analysis by Caruso et al., [21] which included 18,018 patients with lymphoma, the rate of VTE development was 6.4%. In that study, a higher rate of VTE development was observed in patients with NHL than in those with HL. In a study by Mahajan et al., [22] the cumulative 2-year incidences of acute VTE were 2.1, 4.8, and 4.5% in patients with low-grade, intermediate/aggressive, and high-grade lymphomas, respectively. Two studies [23,24] focusing only on DLBCL found rates of VTE development of 11 and 11.1%, respectively. In a study examining the frequency of VTE in patients with cancer, Khorana et al. [25] observed that 4.8% of patients with NHL developed VTE, whereas 4.6% of those with HL developed VTE. In the study by Antic et al., [26] the rates of VTE development among patients with lymphoma were 5.3% in the derivation cohort and 5.8% in the validation cohort. In a recently published article [27] focusing on DLBCL and follicular lymphoma, the reported rate of VTE development was 13.4%. These variations in the VTE rate among patients with lymphoma are notable and may be caused by several factors, including a focus on distinct types of lymphoma, study methodology (e.g., retrospective vs. prospective), and publication time (more recent studies have been dedicated to CAT). Our results are similar to those from studies focusing only on aggressive lymphoma, which is in accordance with the fact that more than half of our study population had aggressive lymphoma.

In our cohort of patients with lymphoma with disease dissemination in the CNS, we did not observe any VTE. Various factors may have contributed to this result. Just under 60% of the patients with lymphoma with CNS disease were in satisfactory performance status at the time of therapy initiation. Moreover, during the last three years covered by the study, almost 70% of these patients were administered thromboprophylaxis, demonstrating higher adherence to the thromboprophylaxis guidelines in this specific subgroup of patients with lymphoma.

In our study, the NLR, PLR, ESR, CRP, and LDH were significantly higher in the patients with lymphoma with VTE than in those without VTE, whereas the TP and albumin were significantly lower in the patients with lymphoma with VTE than in those without VTE.

[Fig. 1: Receiver-operating characteristic curve of the neutrophil-to-lymphocyte ratio]
The ROC curve analysis indicated acceptable specificity and sensitivity values of the NLR, PLR, and CRP in predicting VTE in the patients with lymphoma. In particular, the univariate regression analysis indicated that the NLR, PLR, TP, albumin, LDH, and CRP were prognostic factors for VTE development in the patients with lymphoma. However, the multivariate regression model demonstrated that only the NLR and CRP were independent prognostic factors for VTE development.

Tumor-associated neutrophils have a notable role in the cancer microenvironment and serve as a link between malignancy and inflammation, influencing cancer progression through several complex mechanisms. These mechanisms include the mobilization of neutrophils from the bone marrow toward tumor sites (mainly via the CXCR2 axis) and active participation in the tumor microenvironment (release of reactive oxygen species and secretion of pro-tumor cytokines and chemokines) [28]. Likewise, platelets widely interact with tumor cells, while also playing a significant role in inflammation by releasing numerous inflammatory mediators, such as PF4 (CXCL4), P-selectin, and CD40L [29]. Both the NLR and PLR have been used as prognostic markers in a variety of pathological conditions, including sepsis, lupus erythematosus, and solid tumors [30]. Additionally, the NLR and PLR have been suggested as adverse prognostic markers in patients with DLBCL [31] and mantle cell lymphoma [32]. However, some studies have found conflicting results [33]. Regarding the association of the NLR and PLR with thrombotic events, some previous studies have shown the predictive power of the NLR and PLR for VTE development [16,34]. In contrast, Artoni et al. [35] could not find an association between the NLR and PLR and an increased risk of VTE or cerebral vein thrombosis. To the best of our knowledge, there are no published studies on using the NLR and PLR to assess the risk of VTE in patients with lymphoma. An intermediate/high-risk IPI score in patients with DLBCL was significantly more frequent in the patients who developed VTE compared to those without VTE, which is in line with recently published data [36].

An increasing number of studies aim to assess the relationship between inflammation and thrombosis, as well as the specific mechanisms underlying this relationship. However, the most validated mechanisms are yet to be discovered. The best studied mechanisms that have been shown to trigger thrombosis development, or have been frequently observed in patients who develop thrombosis, are increased levels of TNF-α, [37] hyperexpression of IL-6, [17] neutrophil extracellular traps, [38] soluble CD40L, [39] and microparticles (MPs) [40]. Kapoor et al. [41] significantly advanced our understanding of these processes by introducing a fourth element, immune dysregulation, to Virchow's triad, naming it the "tetrad of thrombosis." They clearly stressed that there is a sufficient amount of evidence supporting the impact of immune dysregulation on the pathophysiology of thrombosis. A few studies identified higher CRP levels in patients with VTE (mainly DVT), [42] whereas the study by Antic et al. [43] published results similar to ours, showing the effect of a broad inflammatory and hemostatic biomarker spectrum (including D-dimer, factor XIIIa, von Willebrand factor, TNF-α, protein S, β2-glycoprotein I, MPs, urokinase-type plasminogen activator, fibronectin, and plasminogen activator inhibitor type 1).

[Fig. 2: Receiver-operating characteristic curve of the platelet-to-lymphocyte ratio]
Similar to the results of previous studies, [21,22,44] we found that the patients with advanced stage disease more frequently developed VTE; however, this was not statistically significant. A "bulky" tumor mass, mediastinal involvement, and ECOG PS were identified as prognostic factors for VTE development in patients with lymphoma in the univariate analysis. A large mediastinal tumor mass is an important risk factor for the development of VTE, mainly due to the mechanical compression of blood vessels and the consequent narrowing of the lumen [45,46]. Performance status is included in newer VTE risk assessment models, underlining its importance in VTE development [26,46]. Immobility has been recognized as a contributing factor for VTE development. It is of particular importance in patients with CNS lymphoma, as they have a strikingly high rate of VTE development (up to 59.5%) [47].

In our cohort, the patients with aggressive lymphoma had a higher rate of VTE development (11.8%) than did those with indolent lymphoma (5.3%) and HL (8.4%). Aggressive histology predisposes the clinical course of lymphoma to complication by VTE occurrence [21,22,27,44,46,48]. However, one large study by Sanfilippo et al. [49] concluded that the VTE risk for DLBCL was lowered after adjusting for additional risk factors. In general, aggressive lymphomas have a higher proliferation rate that enables them to advance promptly and to acquire VTE risk factors ("bulky" tumor mass, extranodal localizations, poor performance status) more rapidly, consequently increasing the risk for VTE development.

Consistent with our results, the predominant timing of VTE occurrence in patients with lymphoma was prior to, or within three months of, the initiation of specific hematologic treatment [45,50,51]. These data draw attention to the role of thromboprophylaxis, which remains underused in cancer patients [26,52]. Considering the absence of a statistically significant difference in thromboprophylaxis between the patients with lymphoma with and without VTE, our data confirmed the underutilization of thromboprophylaxis. There are several reasons why thromboprophylaxis continues to be underused in patients with lymphoma: the lack of a reliable and widely accepted VTE risk assessment model for this heterogeneous patient population, the lack of prospective studies with risk stratification and randomization for thromboprophylaxis, [48] the excessively diverse data throughout the literature concerning this topic, and clinicians' overestimation of the bleeding risk associated with anticoagulant therapy in cancer patients. Further disease-specific and appropriately designed clinical trials on thromboprophylaxis are required to achieve high-quality evidence and improve clinical guidelines.

[Fig. 3: Receiver-operating characteristic curve of C-reactive protein]

Importantly, the patients with lymphoma who achieved unsatisfactory therapeutic responses were more susceptible to VTE development. This finding is in accordance with published data confirming the connection between aggressive lymphoma and advanced stage disease, resulting in shorter overall survival (OS) and a higher mortality rate [22,44,45,53]. However, one study [54] did not observe an OS difference between the patients with lymphoma with and without VTE. The biology of aggressive lymphoma leads to an aggravated clinical course. Moreover, immune dysregulation in patients with aggressive lymphoma subtypes is probably more pronounced, which contributes to the risk of VTE occurrence.
In our study, the patients receiving intensive first-line or "salvage" chemotherapeutic regimens experienced a higher rate of VTE development than those treated with standard first-line therapy regimens (R-CHOP, CHOP, and ABVD). Chemotherapy itself is known to be a risk factor for VTE development [3,46]. The incidence of VTE was higher in the patients with lymphoma treated with dose-intense regimens [55]. Furthermore, anthracycline drugs have been associated with an increased risk of VTE [27,49]. Intensive first-line therapeutic regimens are used to treat more aggressive lymphoma subtypes, and both intensive regimens and aggressive subtypes are potential risk factors for VTE development. Relapsing lymphomas are inclined to follow a more aggressive clinical course, primarily due to the disease biology and the development of resistance features. Consequently, those patients are treated with more intensive, so-called "salvage" chemotherapeutic regimens. These patients frequently have other VTE risk factors, such as poor performance status and advanced stage disease, which significantly increase the risk of VTE development.

Consistent with several previous findings, [56,57] the Khorana score did not demonstrate satisfactory thrombotic prediction performance in the patients with lymphoma. Therefore, the paradigm shift toward disease-specific risk assessment models (RAMs) has increasingly prevailed in recent years in the lymphoma field as well.

Our study has several limitations. The main limitation is the heterogeneity of the study population, which might have affected the results and subsequent conclusions. The impact of VTE on the survival rates of lymphoma patients was outside the scope of this study; assessing it would further contribute to establishing the actual clinical impact of VTE in patients with lymphoma.

Conclusions

In conclusion, immune activation represents a distinctive feature of lymphomas, especially aggressive lymphomas. Dysregulation of inflammation plays an important role in VTE development in patients with lymphoma. The findings of this study demonstrate that easily and widely accessible simple parameters that reflect the level of inflammation can identify patients with lymphoma at risk for VTE who may be candidates for thromboprophylaxis. In addition, the possible use of anti-inflammatory drugs in this specific group of patients would extend the tools for VTE prophylaxis. Further studies are required to better understand VTE in the lymphoma setting and the utilization of these inflammatory markers in VTE risk assessment.
The Seismic Electromagnetic Emissions During the 2010 Mw 7.8 Northern Sumatra Earthquake Revealed by DEMETER Satellite

The abnormal electromagnetic emissions recorded by the DEMETER (Detection of Electro-Magnetic Emissions Transmitted from Earthquake Regions) satellite in association with the April 6, 2010 Mw 7.8 northern Sumatra earthquake are examined in this study. The variations of wave intensities recorded along revisiting orbits from August 2009 to May 2010 indicate that abnormal enhancements in the Extremely Low Frequency range of 300-800 Hz occurred from 10 to 3 days before the main shock, while the intensities maintained a relatively smooth trend during quiet seismic activity times. The perturbation amplitudes relative to the background map, which was built using data from the same seasonal window (February 1 to April 30) of 2008 to 2010, further suggest strong enhancements of wave intensities during the period prior to the earthquake. We further computed the wave propagation parameters for the electromagnetic field waveform data by using the Singular Value Decomposition method, and the results show that certain portions of the Extremely Low Frequency emissions were obliquely propagating upward from the Earth toward outer space at 10 and 6 days before the main shock. The potential energy variation of acoustic-gravity waves suggests the possible existence of acoustic-gravity wave instability, with wavelengths roughly varying from 5.5 to 9.5 km, in the atmosphere at the time of the main shock. In this study, we comprehensively investigated the link between the electromagnetic emissions and the earthquake activity through a convincing observational analysis, and preliminarily explored the seismic-ionospheric disturbance coupling mechanism, which is still not fully understood by the scientific community.

INTRODUCTION

The abnormal electromagnetic emissions associated with earthquake (EQ) activities, during either their preparation phase or their occurrence, have been widely documented since the last century. Both ground-based observations and laboratory experiments on rock rupture processes confirm that the electromagnetic emissions induced by EQ activities can appear over a broad frequency range, from Direct Current (DC) to Ultra Low Frequency (ULF), Extremely Low Frequency (ELF), Very Low Frequency (VLF), and even up to the High Frequency range (e.g., Gokhberg et al., 1982; Huang and Ikeya, 1998; Sorokin et al., 2001; Pulinets et al., 2018). With the development of space technology, by the early 1980s some satellites had recorded abnormal electromagnetic emissions, plasma parameter irregularities, and energetic particle precipitations over seismic fault zones (e.g., Gokhberg et al., 1982; Larkina et al., 1989; Parrot, 1989; Serebryakova et al., 1992), indicating that the possible seismic-ionospheric perturbations likely propagate upward from the lithosphere to the atmosphere and ionosphere and, in particular circumstances, even up to the inner magnetosphere. For example, Larkina et al. (1989) revealed abnormal ELF/VLF emissions in the 0.1-16 kHz frequency range before strong EQs according to the observations of the Intercosmos-19 and Aureol-3 satellites; Serebryakova et al. (1992) presented strong ELF emissions below 450 Hz over seismic regions based on the Cosmos-1809 and Aureol-3 satellites.
Parrot (1994) statistically studied 325 EQs with magnitude larger than 5 based on the Aureol-3 satellite, and found that the seismic ELF/VLF emissions can be observed all along the magnetic meridian passing over the epicenter. Admittedly, these ideas were not universally accepted: for example, Henderson et al. (1993) found no clear ELF/VLF signatures related to earthquakes in a statistical analysis of the DE 2 satellite, and Rodger et al. (1996) reported no significant precursory, co-seismic, or post-seismic effects associated with the ELF/VLF electromagnetic activities recorded by the ISIS (International Satellites for Ionospheric Studies) 2 satellite.

In the early 21st century, France launched the Detection of Electro-Magnetic Emissions Transmitted from Earthquake Regions (DEMETER) satellite mission, which successfully operated from 2004 to 2010 and is regarded as the world's first space platform mainly devoted to studying ionospheric perturbations caused by earthquakes, volcanic eruptions, and human activities (Parrot et al., 2006a). Since then, a growing number of studies have been devoted to the scientific field of seismic-ionospheric disturbances. For example, Parrot et al. (2006b) examined the abnormal ELF waves as well as the simultaneous variations of the ionospheric plasma parameters and energetic particle precipitations occurring over several seismic zones. Bhattacharya et al. (2007) reported strong ULF/ELF emissions occurring 4 days before the 2006 Gujarat EQ with a magnitude of 5.5. Nemec et al. (2009) investigated the statistical variations of ELF/VLF wave intensity values for shallow earthquakes with magnitude over 4.8 (depth less than 40 km) that occurred all over the world during the period of DEMETER observations, and confirmed the existence of a very small but statistically significant decrease of wave intensity at 1.7 kHz about 0-4 h before the main shocks. Błeçki et al. (2010) found some abnormal ELF emissions from 11 days before the 2008 Mw 7.9 Wenchuan EQ, with the most intensive emissions, from a few tens of hertz up to 350 Hz, appearing 6 days before the Wenchuan main shock. Zeng et al. (2009) further analyzed the wave propagation parameters and reported a portion of emissions at ELF 300 Hz obliquely propagating upward to the satellite's position over the Wenchuan epicenter zone with right-handed polarization. More recently, Bertello et al. (2018), using DEMETER electromagnetic data, found the appearance of anomalous electromagnetic waves at 333 Hz one day before the April 6, 2009 L'Aquila EQ. However, at present, the physical mechanism of how abnormal signals from the seismic fault zone couple into the ionosphere and excite electromagnetic emissions or disturb the plasma parameters is still poorly understood, and the currently proposed mechanisms remain questionable. It is still a challenge to extract a real seismic anomaly, or so-called "earthquake precursor," from either ground-based or space-based observations.

This study searches for possible ionospheric electromagnetic disturbances in DEMETER satellite observations, and reports another interesting case study: the 2010 moment magnitude 7.8 (Mw 7.8) northern Sumatra EQ, which occurred at 22:15 UT on April 6, 2010, with an epicenter at 2.38°N, 97.05°E and a depth of 31 km. This Mw 7.8 EQ is the result of the Indo-Australian plate moving north-northeast relative to the Sunda plate at a velocity of about 60-65 mm/year (https://earthquake.usgs.gov/).
The Sumatra region in Indonesia is located at the boundary between the Indo-Australian and Sunda plates, with very active fault movements, so this area is naturally prone to strong EQs, eventually producing great disasters. In recent decades, strong seismic activity in the Sumatra region has become more and more frequent; the most devastating event was the December 26, 2004 Mw 9.1 EQ, which resulted in the largest tsunami event in recorded history. Previous studies on EQ activities of the Sumatra region found some clear seismic-ionospheric disturbance phenomena (e.g., Molchanov et al., 2006; Liu et al., 2010; Kumar et al., 2013; Liu et al., 2016). Kumar et al. (2013) analyzed ground-based VLF transmitter receiver network data and found that the VLF radio wave amplitude decreased by about 5 dB at nighttime and 3 dB at daytime during a magnitude 5.8 shock on December 18, 2006. Molchanov et al. (2006) reported a decrease of signal-to-noise ratio values of the VLF radio wave amplitude before the 2004 Mw 9.0 EQ based on DEMETER's observations. Heki et al. (2006) presented various waveforms and relative amplitude changes of the short-term pre- and co-seismic ionospheric disturbances during the 2004 Mw 9.0 EQ using GPS-TEC (Global Positioning System, Total Electron Content) data. Liu et al. (2016) reported GPS-TEC perturbations appearing to the east of the epicenter 2 days before the 2005 Ms 7.2 Sumatra EQ, with electron density simultaneously enhanced at an altitude of 710 km to the west of the GPS-TEC perturbations due to E × B drift effects. Marchetti et al. (2020) combined multi-source observations of skin temperature, total column water vapor, aerosol optical thickness of the atmosphere, magnetic field, and electron density of the ionosphere, and revealed evidence of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) phenomena during the September 28, 2018 Mw 7.5 EQ in the same region.

In the present work, we report abnormal ELF electromagnetic emissions that appeared at frequencies of 300-800 Hz under quiet ionospheric conditions preceding the April 6, 2010 northern Sumatra earthquake. This paper is organized in the following way. A brief introduction to the DEMETER satellite and its associated payloads is provided in Dataset; the variations of ELF wave intensity investigated by the revisiting orbits and the background map methods are presented in Wave Intensity Analysis; Wave Vector Analysis presents the wave vector analysis using the Singular Value Decomposition method; Discussions are devoted to the possible mechanism of the abnormal seismic emissions through acoustic-gravity wave (AGW) instability evaluations; and Conclusions briefly summarizes the main results.

DATASET

In this study, we mainly utilized the ELF/VLF electromagnetic field observations from the low earth orbit satellite DEMETER. This satellite was launched on June 29, 2004 into a sun-synchronous circular orbit with an initial altitude of 710 km (before December 2005), later lowered to 660 km (after December 2005), and ended operation on December 10, 2010 (Parrot et al., 2006a). DEMETER had a full orbit period of ∼1.6 h, i.e., it performed ∼15 orbits per day, and its measurements were operated in the region with magnetic latitudes below 65° (Parrot et al., 2006a). DEMETER is the first electromagnetic satellite aimed at detecting and studying the electromagnetic signals likely associated with earthquakes, volcanic eruptions, or anthropogenic activities.
The scientific payloads of DEMETER included five sensors, which allowed it to measure the electromagnetic fields and waves, plasma parameters (both electrons and ions), and energetic particles. In the present study, we mainly used the observations provided by the electric field experiment (ICE, Instrument Champ Electrique) and the search coil magnetometer (IMSC, Instrument Magnetic Search Coil) (Parrot et al., 2006a). ICE consisted of four spherical electrodes with embedded preamplifiers separately installed at the ends of four booms (4 m long), measuring the electric field over a wide frequency range from DC to 3.175 MHz, subdivided into four frequency channels, i.e., DC/ULF, ELF, VLF, and High Frequency. IMSC comprised three orthogonal magnetic antennas linked to a pre-amplifier unit via an 80-cm shielded cable; each antenna included a permalloy core on which a main coil with several thousand turns (12,000) of copper wire and a secondary coil with a few turns were wound (Parrot et al., 2006a).

DEMETER had two observation modes: survey and burst mode. For the ELF/VLF electromagnetic field detection, the survey mode provided the power spectral density (PSD) data for one component of the electric field and one component of the magnetic field in the frequency range from 19.5 Hz to 20 kHz, with a frequency resolution of 19.5 Hz. The burst mode provided six components of the electromagnetic waveform data at frequencies below 1.25 kHz, with a sampling rate of 2.5 kHz, over earthquake-prone areas or above ground-based experiments. The burst-mode waveform data are not available for the whole orbit trajectory, being very limited compared to the survey-mode observations. In this study, we collected DEMETER's survey-mode observations of the variant magnetic field from 2008 to 2010, and burst-mode electromagnetic field waveform data recorded from March 20 to April 10, 2010 in the earthquake's epicenter ±8° area (5.6°S-10.4°N, 89°E-105°E).

We used the vertical temperature profile of the atmosphere, retrieved from the ERA-5 climate reanalysis dataset (https://confluence.ecmwf.int/), to compute the AGW instability at the time of the earthquake. ERA-5 is an assimilated climate reanalysis dataset released by the European Centre for Medium-Range Weather Forecasts (ECMWF). ERA-5 provides global and hourly temperature profiles with high resolution at 137 different pressure levels from near the surface to 0.01 hPa (∼80 km altitude). The horizontal resolution is about 0.28° in both longitude and latitude. In this study, gridded data with a resolution of 0.3° were produced and downloaded from the ECMWF Web Applications Server (http://apps.ecmwf.int/datacatalogues/era5/).

WAVE INTENSITY ANALYSIS

Revisiting Orbits

First, we examined the wave intensity values of the variant magnetic field at different frequency ranges (200 Hz-20 kHz) from March 20 to April 10, 2010 over the Mw 7.8 northern Sumatra epicentral area by using the PSD values provided by the survey-mode observations of DEMETER. These results reveal that the orbits passing over the epicentral area show a certain enhancement of wave intensity at ELF frequencies (300-800 Hz) (Figure 1). Figure 1 displays the average PSD values of the magnetic field at 300-800 Hz from March 20 to April 10, 2010, with the red star marking the epicenter (2.4°N, 97.1°E). It can be seen that the enhancement phenomenon at the ELF frequency band (300-800 Hz) over the seismic zone is evident.
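As a rough sketch of how band-averaged intensities such as those in Figure 1 can be derived from survey-mode spectra, the snippet below reduces a placeholder PSD array (time × frequency, at the 19.5-Hz resolution quoted above) to one mean value per time step over 300-800 Hz; the array contents are synthetic, not actual DEMETER files.

```python
# Band-averaging a survey-mode PSD spectrogram over 300-800 Hz.
# Array shape and values are hypothetical placeholders.
import numpy as np

freq_res = 19.5                                   # Hz, survey-mode resolution
freqs = np.arange(1024) * freq_res                # 0 to ~20 kHz
psd = np.random.default_rng(2).lognormal(size=(600, 1024))  # time x freq, nT^2/Hz

band = (freqs >= 300.0) & (freqs <= 800.0)        # ELF band of interest
band_mean = psd[:, band].mean(axis=1)             # mean PSD per time step
band_mean_log = np.log10(band_mean)               # log10(nT^2/Hz), as plotted
print(band_mean_log[:5])
```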
In order to exclude external origins for this enhancement (such as solar flare events, geomagnetic storms, etc.), which directly disturb the upper ionosphere, we removed all the orbits recorded under disturbed space weather conditions (Dst ≤ −30 nT; Kp ≥ 3). The Dst and Kp indices are used to characterize the disturbance condition of space: Dst is derived from equatorial geomagnetic stations, while Kp is computed from geomagnetic stations located at middle-to-high latitudes. The Geomagnetic Data Service (http://wdc.kugi.kyoto-u.ac.jp/index.html) provides real-time data for the Dst and Kp indices. The space weather conditions from March to April 2010 are presented in Figure 2. It can be seen that the 2010 Mw 7.8 northern Sumatra EQ occurred during the recovery phase of a moderate geomagnetic storm (the Dst index reached a minimum of −80 nT on April 6, 2010). For this reason, the orbits recorded during the main phase and recovery phase (mostly from April 3-8) are not illustrated in Figure 1, and data from these times were not considered in our later analysis.

It can be seen that around the epicenter ±8° area (denoted by the red square in Figure 1), the enhancement of wave intensity mainly occurred on those orbits passing near the epicenter area, especially orbits No. 306891 on March 27, No. 307041 on March 28, and No. 307481 on March 31, 2010. On these days the Dst index fluctuated between −30 and ∼0 nT and the Kp index remained below 3, so it can be said that no significant external geomagnetic activity was present. However, it is not convincing to simply relate the enhancements along those orbits to the seismic activity according to a short space and time window including only the earthquake location and occurrence. We further selected previous observations along the same orbit trajectories (i.e., revisiting orbits) to investigate the long-term variation pattern. The sun-synchronous circular orbit of DEMETER allows the satellite to return to the same orbit trajectory at the same local time approximately every ∼13 days in 2010 (the recursive period changes due to the slight shift of the satellite position). By using this feature of revisiting orbits, we can examine a longer time window to determine the normal electromagnetic background trend along the same orbital trace at the same local time. We also checked the solar activity during this half-year period, which kept a weak and stable level, as revealed by the sunspot numbers, and there were no strong solar proton events during this time period (not shown). We finally obtained 25 revisiting orbits for each of the above five orbits. The trajectories of those five orbits (colored) and their revisiting orbits (gray) are shown in Figure 3.

Interestingly, in this time window (August 2009 to May 2010), another earthquake, with a magnitude of 6.8, occurred on March 5, 2010 in the vicinity of the 2010 Mw 7.8 northern Sumatra epicenter area (https://earthquake.usgs.gov/earthquakes/map), indicating that the fault movement in the Sumatra area is very active. In Figure 3, the red star represents the April 6, 2010 Mw 7.8 EQ, and the orange one denotes the March 5, 2010 Mw 6.8 EQ. According to the empirical equation of the earthquake preparation zone put forward by Dobrovolsky et al. (1979), the influential zone of a 7.8 magnitude earthquake in the lithosphere is a circle with a radius of 2,260 km.
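The quoted radius is consistent with the empirical Dobrovolsky relation R = 10^(0.43M) km, which can be checked directly:

```python
# Dobrovolsky et al. (1979) preparation-zone radius: R = 10**(0.43*M) km.
M = 7.8
print(f"R = {10 ** (0.43 * M):.0f} km")   # -> ~2259 km, matching the quoted 2,260 km
```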
For convenience, considering the projection feature of the satellite orbit on the ground, we chose a square area of around 1,800 km at the satellite's altitude, that is, the epicenter (2.4°N, 97.1°E) ±8° area, or (5.6°S-10.4°N, 89°E-105°E). We then extracted the PSD values at 300-800 Hz over the studied area from each revisiting orbit of the above five orbits, re-sorted them as five sets of time-series data, and applied a running quartile method (Zhima et al., 2012b; Liu et al., 2013; Shen et al., 2017) to examine the long-term trend, as shown in Figure 4. The running medians, along with the inter-quartile ranges (IQR, equal to the difference between the third and first quartiles), were computed using the three previous and three successive orbits of the current orbit (7 orbits in total). We defined the running median PSD values as the background trend (denoted by blue lines in Figure 4) and the median PSD values of the current orbit as the current variation level (represented by red lines). The upper and lower bounds were computed as the running median PSD values ± the IQR values, denoted by the black and green lines, respectively.

It can be seen in Figure 4 that three orbit trajectories, No. 306891, No. 307041, and No. 307921, show enhancements over the background trend before the two EQs. The recorded PSD values at 300-800 Hz along these three orbit trajectories are comparatively stable from August 2009 to February 2010, then start to fluctuate near the time of the two major shocks. However, the other two orbit traces on the west side of the epicenter (No. 307191 and No. 308071, the last two panels in Figure 4) show no obvious difference between earthquake and non-earthquake times; it is difficult to identify any earthquake-related abnormal signals from their variation patterns, so we do not discuss their relationship to the earthquakes any further.

Specifically, for orbit No. 306891 (see the first panel in Figure 4), which was recorded on the east side of the epicenter on March 27, 2010, the PSD values reach ∼10^−6.7 nT²/Hz before the Mw 7.8 EQ, far exceeding the background threshold (see the black and green lines). In addition, this orbit shows a relatively smaller enhancement (∼10^−7.5 nT²/Hz) within one month before the Mw 6.8 EQ. Orbit No. 307041, which is located right above the Mw 7.8 epicenter (see Figure 3) on March 28, 2010, presents a low and stable trend until very near the two major shocks, then becomes strongly disturbed (see the second panel in Figure 4), reaching maximum values of ∼10^−7.8 nT²/Hz during the Mw 6.8 EQ and ∼10^−6 nT²/Hz before the Mw 7.8 EQ. Along orbit No. 307921, which closely passed over the Mw 6.8 epicenter area (see Figure 3) on April 3, 2010, the variation also keeps a relatively normal background trend far before the main shock time; it mainly gets disturbed on February 23, 2010 before the Mw 6.8 EQ and after the Mw 6.8 EQ on March 8, and becomes highly enhanced, from 10^−8.4 nT²/Hz to 10^−6.3 nT²/Hz, during the Mw 7.8 EQ (see the third panel in Figure 4). In all, the variation patterns of the above three orbits indicate that the wave intensity at 300-800 Hz indeed shows enhancements near the epicenter during the period of the Mw 7.8 EQ, compared with quiet seismic activity times.

In summary, on March 27, 2010, 10 days before the Mw 7.8 EQ, the wave intensities started to increase, with a peak value around 10^−6.7 nT²/Hz on the orbit trace of No. 306891; and 3 days before the main shock, the wave intensities varied from 10^−8.4 to 10^−7.6 nT²/Hz along the orbit trace of No. 307921 (April 3). The strongest enhancement, of about 10^−6.0 nT²/Hz, was recorded along the orbit trace closest to the epicenter of the Mw 7.8 main shock (No. 307041 on March 28).
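A minimal sketch of this running-quartile screening, assuming one band-median PSD value per revisiting orbit (synthetic numbers stand in for the orbit series):

```python
# Running-quartile background as described for Figure 4: running median
# and IQR over a centered window of 7 revisiting orbits (3 before +
# current + 3 after). Values below are synthetic placeholders.
import numpy as np
import pandas as pd

psd = pd.Series(np.random.default_rng(3).lognormal(size=25))  # 25 revisiting orbits

window = psd.rolling(7, center=True, min_periods=4)
median = window.median()                      # background trend (blue lines)
iqr = window.quantile(0.75) - window.quantile(0.25)
upper, lower = median + iqr, median - iqr     # bounds (black and green lines)

anomalous = (psd > upper) | (psd < lower)     # orbits outside median +/- IQR
print(psd[anomalous])
```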
Background Map

To obtain more convincing evidence of these abnormal ELF emissions, longer-term observations under quiet space weather conditions (Dst ≥ −30 nT and Kp ≤ 3) from 2008 to 2010 were selected to build a background map over the Mw 7.8 Sumatra EQ area. We extracted the data in the same seasonal window, from February 1 to April 30, for each year from 2008 to 2010. In this way, variations related to seasonal conditions can be eliminated. We adopted the method put forward by Zhima et al. (2012a, 2012b) to build a background map based on the longer-term satellite observations over the epicenter area. First, we extracted the observations over the ±8° area about the epicenter and computed the perturbation amplitude

\Delta\rho = \frac{\alpha_{T_i,2010} - \beta_{2008\text{-}2010}}{\sigma_{2008\text{-}2010}},   (1)

where Δρ is regarded as the perturbation amplitude during earthquake time T_i compared to the background map β_{2008-2010}. The difference between the magnetic field wave intensity at earthquake time (α_{T_i,2010}) and the long-term background map (β_{2008-2010}) is normalized by the standard deviation (σ_{2008-2010}) in each 2° × 2° data bin. We computed Δρ for different frequency ranges from 300 to 800 Hz, and found that in the 468-566 Hz frequency range there exist strong enhancements during the earthquake-impending time (March 21 to April 6, 2010), as shown in Figure 5.

Figure 5 shows the variation pattern of the perturbation amplitude Δρ during time intervals from February 1 to April 30, 2010. It can be seen that during the February 1 to March 20 time period (Figures 5A,B), the Δρ values in the epicenter area remain at a relatively low level, mostly varying around 0, with the maximum value peaking at about 1.2. However, during the earthquake-impending time interval from March 21 to April 6 (see Figure 5C), Δρ becomes enhanced (to over ∼3), with the strongest enhancement mainly spread along the latitudinal direction on the near-northwest side of the epicenter area (almost right above the epicenter). The eastern part of the epicenter shows a wide-scope enhancement in both the longitudinal and latitudinal directions, marked by the dashed square in Figure 5. After the main shock (Figure 5D), Δρ returns to a relatively low disturbance amplitude level, mostly around 0 with a maximum value of 1.2, similar to the levels in Figures 5A,B. Due to the disturbed space weather conditions, the observations during the earthquake-impending days from April 3 to 6, 2010 (see Figure 2) were not included in this computation, so the enhancement shown in Figure 5C is very likely attributable to the seismic activity.
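A minimal sketch of the Eq. 1 normalization, with synthetic samples standing in for the real 2° × 2° binned intensities:

```python
# Perturbation amplitude of Eq. 1: earthquake-time wave intensity
# normalized against a 2008-2010 background map, bin by bin.
# Synthetic arrays stand in for the binned DEMETER observations.
import numpy as np

rng = np.random.default_rng(4)
nbins = (8, 8)                                    # 16 x 16 degrees in 2-degree bins
background = rng.lognormal(size=(30,) + nbins)    # quiet-time samples per bin, 2008-2010

beta = background.mean(axis=0)                    # long-term background map
sigma = background.std(axis=0)                    # standard deviation per bin
alpha = rng.lognormal(size=nbins) * 1.5           # earthquake-time intensity map

delta_rho = (alpha - beta) / sigma                # Eq. 1
print(delta_rho.round(2))
```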
WAVE VECTOR ANALYSIS

We further checked the burst-mode observations, which were automatically triggered when DEMETER flew above known seismic fault zones (Parrot et al., 2006a). The electromagnetic payloads ICE and IMSC in burst mode provide six components of waveform data at frequencies below 1.25 kHz with a sampling rate of 2.5 kHz. However, the waveform data are not available at all times of interest during this earthquake. Fortunately, orbits No. 306891 on March 27 (10 days before the main shock) and No. 307481 on March 31, 2010 (6 days before the main shock) coincidentally triggered burst-mode observations over the epicenter zone, allowing us to compute the wave propagation parameters by the wave vector analysis method. Figure 6 shows the exact burst-mode operation locations of these two orbits.

Figures 7A,B show the detailed electromagnetic spectral values computed from the waveform data of orbit No. 306891. It can be seen that near the epicenter area (2.38°N, 97.05°E), at latitudes from ∼8° to 3.8°S and longitudes from 107°E to ∼105°E, there exist electromagnetic wave activities (denoted by white arrows), mainly from 14:43 to 14:45 UT at L shells roughly from 1.4 to 1.09, where the satellite is quite near the Mw 7.8 epicenter area.

To compute the wave propagation parameters of these emissions over the epicenter zone, we built a Field Aligned Coordinate (FAC) system in the orbit space of the DEMETER satellite. Under this FAC coordinate system, the Z-axis is along the direction of the background magnetic field B0; the Y-axis is horizontally perpendicular, given by the cross product of the Z-axis and the position vector of the satellite (so that the positive Y-axis is nominally eastward at the equator); and the X-axis completes the right-handed system. The background magnetic field B0 is obtained from the IGRF 2000 model (Olsen et al., 2000) according to DEMETER's position. The angles θ and ϕ are defined as the wave normal angle (polar angle) and the azimuthal angle between B0 and the wave vector k, respectively. A value of 180° in the azimuthal angle ϕ indicates that k propagates toward the decreasing L shell direction, i.e., downward to the Earth, while 0° means that k propagates toward the increasing L shell direction in the meridian plane (i.e., from the Earth direction upward toward outer space). Then the Singular Value Decomposition method put forward by Santolik et al. (2003), which has been widely used in the analysis of ELF/VLF space electromagnetic waves (Parrot et al., 2006b; Wei et al., 2007; Zhima et al., 2015a; Zhima et al., 2015b), was adopted to compute the wave normal angles, ellipticity, polarization, and planarity.

The computed wave propagation parameters from No. 306891 are shown in Figures 7C-F. The parameters computed from the low-intensity waveform data (lower than 10^−7.8 nT²/Hz) are not shown, for better visual inspection. The wave normal angles θ of the wave vector k are displayed in Figure 7C; they vary roughly from 40° to 80°, indicating that these emissions are obliquely propagating. Figure 7D shows the azimuthal angles ϕ, with a wide varying range from 0° to 180°. However, it can be clearly identified that there are some portions of emissions showing azimuthal angles ϕ ∼ 0° (see the black squares), mainly at frequencies from 300 to 800 Hz (even up to ∼1,100 Hz) at ∼14:43:44 to 14:44:58 UT. According to the FAC coordinate system defined above, the propagation direction of these portions of emissions points upward from the Earth direction toward outer space (increasing L shell), indicating that these waves come from altitudes lower than the satellite.
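The sketch below illustrates the structure of this analysis chain: a FAC basis built from the model field and the satellite position, followed by SVD-based estimators on a magnetic spectral matrix in the spirit of Santolik et al. (2003). It is a simplified illustration, not the full published method: the input vectors and spectral matrix are placeholders, the sign convention for k is arbitrary, and the ellipticity sign is taken from Im(S_xy) as a crude proxy.

```python
# Simplified wave-vector analysis: FAC basis from model field B0 and
# satellite position, then SVD estimators on a 3x3 Hermitian magnetic
# spectral matrix (after Santolik et al., 2003). Placeholder inputs.
import numpy as np

def fac_basis(b0, r):
    """Rows of the returned matrix transform vectors into FAC:
    z along B0; y along z x r (nominally eastward at the equator);
    x completing the right-handed system."""
    z = b0 / np.linalg.norm(b0)
    y = np.cross(z, r)
    y /= np.linalg.norm(y)
    x = np.cross(y, z)
    return np.vstack([x, y, z])

def svd_wave_parameters(S):
    """Wave normal/azimuthal angles, planarity, and an ellipticity proxy
    from a spectral matrix S expressed in FAC coordinates."""
    A = np.vstack([S.real, S.imag])      # 6x3 real matrix
    _, w, Vh = np.linalg.svd(A)          # singular values, descending
    k = Vh[2]                            # vector of the smallest singular
                                         # value ~ wave normal direction
    if k[2] < 0:                         # SVD fixes k only up to sign;
        k = -k                           # orient along +B0 by convention
    theta = np.degrees(np.arccos(k[2]))              # wave normal angle
    phi = np.degrees(np.arctan2(k[1], k[0]))         # azimuthal angle
    planarity = 1.0 - np.sqrt(w[2] / w[0])           # 1 = plane wave
    # Magnitude from the singular-value ratio; sign of Im(S_xy) used
    # here as a simple proxy for the polarization sense.
    ellipticity = (w[1] / w[0]) * np.sign(S[0, 1].imag)
    return theta, phi, planarity, ellipticity

b0 = np.array([2.0e4, 1.0e3, -3.0e4])    # model field, nT (placeholder)
r = np.array([6.9e3, 1.2e3, 0.3e3])      # satellite position, km (placeholder)
R = fac_basis(b0, r)

Bf = R @ np.array([1 + 0.5j, 0.3 - 1.0j, 0.1 + 0.1j])  # one FFT bin, in FAC
S = np.outer(Bf, np.conj(Bf))            # rank-1 spectral estimate
print(svd_wave_parameters(S))
```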
It is noted that the strong emissions around 400-450 Hz, with wave normal angles θ ∼ 90° (Figure 7C), azimuthal angles ϕ ∼ 180° (Figure 7D), and right-handed ellipticity values of 1, are obliquely propagating downward from altitudes higher than the satellite position (i.e., in the decreasing L shell direction) toward the Earth; they are identified as ionospheric hiss waves, which might originate from the plasmasphere or the inner magnetosphere (Zhima et al., 2017; Xia et al., 2019), and these emissions are not related to the earthquake activity. The ellipticity values are given in Figure 7E; they represent the ratio of the axes of the polarization ellipse. An ellipticity value of +1 means that the wave is right-hand polarized, −1 indicates that the wave is left-hand polarized, and 0 indicates linear polarization. For the portions of upward-propagating waves (azimuthal angles ϕ ∼ 0°), the ellipticity mainly varies around 0 at frequencies of 300-400 Hz, meaning that they are linearly polarized, while at frequencies below 300 Hz or from 450 to 800 Hz (even up to 1,100 Hz), the waves become left-hand polarized (ellipticity varying from 0 to −1). The planarity of the waves, which indicates whether a wave propagates in a single plane (+1) or spherically (0), is presented in Figure 7F, with values mostly being +1, implying that the observed waves are coming toward the spacecraft as plane waves.

We also statistically analyzed the distributions of the wave propagation parameters for the waves in the 300-800 Hz frequency range marked by the black square area in Figure 7 (including the downward right-handed hiss at 400-450 Hz), as shown in Figure 8. The overlaid red curves represent the fitted curves computed by the kernel density distribution function. The majority of wave normal angles θ vary below 80°. The azimuthal angles ϕ mainly peaked at 0°; the ϕ values of ±180° are mostly attributed to the downward hiss waves at 400-450 Hz. For the ellipticity, the values of −0.5 to −1 are mostly related to the upward-direction waves, and +1 to the downward hiss waves. The planarity predominates at values of 1.

As with orbit No. 306891, the wave propagation parameters of the waveform data for orbit No. 307481 at 15:08-15:10 UT on March 31, 2010 are shown in Figure 9. As can be seen from Figure 9, the waveforms recorded at latitudes from 1.95°S to 6.8°N and longitudes from 99.29°E to 97.44°E are exactly over the epicenter zone (see Figure 5). The strong electromagnetic emissions along this orbit mainly appeared at frequencies below 500 Hz. The wave propagation parameters also show features basically similar to those of the waves recorded along No. 306891, although they are not as significant. However, it can be clearly identified that there are waves with azimuthal angles ϕ of 0°. Figure 10 shows the statistical features of the waves recorded from (1.95°S-6.89°N, 99.29°-97.44°E). For these waves, the wave normal angles θ vary over a broad range from 0° to 90°, indicating that the waves are obliquely propagating. The azimuthal angles ϕ have three peaks, at ±180° and 0°, meaning there is a mixture of waves from the Earth direction (ϕ ≈ 0°) and from the outer space direction (ϕ ≈ ±180°). The ellipticity mainly peaks around ±0.5, and the planarity is 1.
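The kernel-density fitting mentioned above can be sketched as follows, with synthetic azimuthal angles standing in for the measured distributions:

```python
# Kernel-density fit over a wave-parameter distribution, as overlaid on
# the histograms of Figures 8 and 10. Angles are synthetic placeholders.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
phi = np.concatenate([rng.normal(0, 15, 400),     # upward waves, phi ~ 0
                      rng.normal(180, 10, 200),   # downward hiss, phi ~ +180
                      rng.normal(-180, 10, 200)]) # downward hiss, phi ~ -180

kde = gaussian_kde(phi)
grid = np.linspace(-180, 180, 361)
density = kde(grid)                               # the fitted "red curve"
print(grid[np.argmax(density)])                   # dominant peak, near 0 degrees
```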
DISCUSSIONS

Because the 2010 Mw 7.8 northern Sumatra EQ occurred in the equatorial area, where the ionosphere experiences far less energetic particle precipitation than the high-latitude ionosphere, we do not need to consider the possibility of wave activity induced by energetic particle precipitation in this study. Considering that space weather conditions in the upper ionosphere were quiet during the studied time window, there are two major generation sources of electromagnetic emissions that we must consider: atmospheric lightning activity and ground-based VLF transmitters. Lightning activity in the atmosphere serves as an embryonic source of strong ELF/VLF emissions in the upper ionosphere (Santolík et al., 2009; Shklyar et al., 2012; Zhima et al., 2017). The azimuthal angles of the wave vectors of lightning-induced ELF/VLF emissions usually predominate around 0° in the FAC coordinate system defined above, which means that this kind of wave propagates away from the Earth toward outer space (in the increasing L shell direction). Lightning-induced waves also present either right- or left-handed polarization (Santolík et al., 2009), but, most importantly, lightning-induced ELF/VLF emissions usually appear as a series of intense burst spectra with vertical lines or whistler-mode falling/rising tones spanning the whole frequency range, from a few hertz up to over 3 kHz or even 10 kHz. In this study, the strong ELF emissions over the Sumatra epicenter zone appeared in a much lower frequency range (below 1,100 Hz), mainly at 300-800 Hz. Additionally, the variations along the revisiting orbits also confirm that these emissions were mainly enhanced near the earthquake time, while keeping a relatively smooth trend during quiet seismic periods. Further, the perturbation amplitude relative to the background map also indicates that the enhancement of wave intensity at 300-800 Hz mainly occurred during the earthquake-impending time intervals (see Figure 5C) and not in other time windows. We therefore exclude lightning activity as the generation source of these abnormal ELF emissions. Another kind of known electromagnetic emission that can propagate from the lithosphere to the ionosphere is the artificial VLF radio wave emitted by powerful ground-based VLF transmitters. VLF radio waves are mainly used for long-distance communication and navigation in the lithosphere-ionosphere waveguide; however, some portion of the VLF radio wave energy leaks into the ionosphere, propagating upward and reaching satellite altitudes. Satellite-recorded VLF radio waves usually appear at frequencies from above 10 kHz up to 30 kHz (Zhao et al., 2019), and their spectra usually exhibit a narrow transversal peak at the central frequency of the emitted radio waves (Shen et al., 2017; Zhang et al., 2018). Additionally, no VLF transmitter is reported near the Sumatra area. So the association with VLF radio waves is also excluded as a possible explanation of the observations presented here. Previous studies (Sorokin et al., 2001; Sorokin et al., 2003; Molchanov et al., 2004; Pulinets and Ouzounov, 2011; Pulinets et al., 2018) have been undertaken to interpret the mechanism of the electromagnetic disturbances induced by earthquakes.
Pulinets and Ouzounov (2011) presented the LAIC mechanism based on a complex multidisciplinary approach, trying to interpret the physical processes involved in the generation of anomalous atmospheric and ionospheric phenomena associated with strong earthquakes. First, the lithospheric rocks, stressed by tectonic plate movement, release radon or other kinds of gases into the air (e.g., methane, helium, hydrogen, and carbon dioxide); subsequently, the radon radiation in the atmosphere changes the air conductivity, resulting in a vertical electric current (see Figure 10 in Pulinets and Ouzounov (2011) and Figure 6 in Sorokin et al. (2001)). Correspondingly, the local growth of electric currents in the atmosphere develops AGW instabilities as well as a horizontal inhomogeneity of ionospheric conductivity (Sorokin et al., 2001), finally generating magnetic field-aligned currents, plasma irregularities, or ULF/ELF emissions (Sorokin et al., 2001). For example, during the 2004 Mw 9.0 Sumatra-Andaman EQ, a clear co-seismic AGW instability appeared in the atmosphere at frequencies from 1.4 to 2.8 mHz, with a group velocity around 300-314 m/s and amplitudes varying from ∼1 to 12 Pa (Mikumo et al., 2008). AGW can be evaluated from wind field and temperature data, and the total wave energy $E_0$ of AGW can be described as the sum of the kinetic ($E_K$) and potential ($E_P$) energies, which correspond to the fluctuations in the wind field and in the temperature of the atmosphere, respectively (Yang et al., 2019). $E_0$ and $E_P$ are proportional to each other (VanZandt, 1985; de la Torre et al., 1999), so that we can examine AGW instability through $E_P$, which is defined as (VanZandt, 1985; Yang et al., 2019):

$E_P = \frac{1}{2}\left(\frac{g}{N}\right)^2 \overline{\left(\frac{T'}{T}\right)^2}$,   (2)

where g is the gravitational acceleration constant (9.8 m s⁻²), T′ is the perturbation of the atmospheric temperature from the background temperature T, and N is the Brunt-Väisälä frequency, defined as (Fritts and Alexander, 2003):

$N^2 = \frac{g}{\theta}\frac{\partial\theta}{\partial z}$,   (3)

where N is a function of altitude and potential temperature, $\theta = T\left(\frac{P_0}{P}\right)^{R/c_p}$ is the potential temperature, z is the altitude, $P_0$ is the standard reference pressure (1,000 hPa), P is the air pressure, R is the gas constant of dry air, and $c_p$ is the specific heat of air at constant pressure. The variance term $\overline{(T'/T)^2}$ is calculated within a layer of 2 km thickness as:

$\overline{\left(\frac{T'}{T}\right)^2} = \frac{1}{z_{\max}-z_{\min}}\int_{z_{\min}}^{z_{\max}}\left(\frac{T'(z)}{T(z)}\right)^2 dz$,   (4)

where $z_{\max}$ and $z_{\min}$ are the top and bottom altitudes of the layer. Here we computed the $E_P$ variation over the 2010 Sumatra epicenter area using the technique developed by Yang et al. (2019), with some procedures modified to fit the ERA-5 data, as shown in Figure 11. Figure 11A shows the vertical temperature profile retrieved from the ERA-5 dataset over the Sumatra EQ epicenter, and Figure 11B is the background temperature, obtained from Figure 11A by a moving average over every 2 km; Figure 11C displays the temperature deviation computed by removing the background from the original temperature profile; Figure 11D represents the squared Brunt-Väisälä frequency computed by Eq. 3 from the temperature profile; and Figure 11E is the potential energy calculated by Eq. 2. It can be seen from Figure 11E that the $E_P$ value peaks around the altitude of 17 km (the tropopause), which is a common phenomenon (Yang et al., 2019). Four wave crests can be identified in the temperature deviation profile, at altitudes of 18.23, 27.71, 36.36, and 41.82 km (denoted by arrows in Figure 11C). We computed the wavelength of a full sinusoidal period from these wave crests in the temperature deviation, not in the $E_P$ profile.
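As a concrete illustration of Eqs. 2-4, the sketch below computes an $E_P$ profile from a single ERA-5-like temperature/pressure column. The uniform vertical grid, the moving-average background (2 km window, as in Figure 11B), and all function names are our assumptions; this is a minimal sketch of a Yang et al. (2019)-style procedure, not the authors' code.

```python
import numpy as np

G = 9.8        # gravitational acceleration, m s^-2
P0 = 1000.0    # standard reference pressure, hPa
R_CP = 0.286   # R/c_p for dry air

def agw_potential_energy(z_km, T, P, window_km=2.0):
    """E_P profile (J/kg) from one temperature/pressure column (Eqs. 2-4)."""
    z = np.asarray(z_km, dtype=float) * 1e3              # altitude in meters
    T = np.asarray(T, dtype=float)
    dz = np.mean(np.diff(z))
    half = max(1, int(round(window_km * 1e3 / (2 * dz))))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)

    T_bg = np.convolve(T, kernel, mode="same")           # background (Figure 11B)
    T_prime = T - T_bg                                   # deviation (Figure 11C)

    theta = T * (P0 / np.asarray(P, dtype=float)) ** R_CP  # potential temperature
    N2 = (G / theta) * np.gradient(theta, z)             # Brunt-Vaisala squared (Eq. 3)

    var = np.convolve((T_prime / T_bg) ** 2, kernel, mode="same")  # Eq. 4
    # crude floor on N^2 to avoid division blow-up in near-neutral layers
    return 0.5 * (G ** 2 / np.maximum(N2, 1e-8)) * var   # Eq. 2
```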
The corresponding vertical wavelengths derived from these wave crests are 9.5, 8.7, and 5.5 km, consistent with the previous understanding that the vertical wavelength of stratospheric AGW is about 2-10 km (Tsuda et al., 1994). Therefore, we suggest the possible existence of AGWs in the atmosphere at the moment of the 2010 Sumatra EQ occurrence. It must be admitted that, through this computation, we only found the possibility of AGW generation during the main shock. Because of the complexity of the LAIC coupling mechanism, it is impossible to build a coupling model at every key altitude (or layer) from the lithosphere to the satellite's location and to evaluate how the coupling process develops. Nevertheless, we can tentatively interpret the link between the AGW propagation and the electric field observations in terms of a mechanical interaction between the atmospheric pressure gradient induced by the AGW and the ionosphere, which causes a local instability in the plasma distribution. Such a plasma variation gives rise, in the E-layer, to a local non-stationary electric current which, in turn, generates an electromagnetic (EM) wave (Yang et al., 2019; Piersanti et al., 2020, submitted). The interpretation of the LAIC mechanism requires a multidisciplinary synergy (Pulinets and Ouzounov, 2011), with simultaneous observational data at different altitudes in the lithosphere-atmosphere-ionosphere system, which are sensitive to various kinds of disturbances. The relatively weak precursors of earthquakes can be masked by other, stronger perturbations even during quiet space weather conditions. At present, the LAIC mechanism still lacks reliable experimental evidence from direct and simultaneous observations at different layers or altitudes, and interpreting the mystery of seismo-ionospheric coupling involves geophysical, chemical, and even biological knowledge. Many of the reported seismo-ionospheric case studies still require further experimental confirmation and objective statistical study.

CONCLUSIONS

This paper investigated the abnormal electromagnetic emissions around the April 6, 2010 Mw 7.8 Sumatra earthquake based on DEMETER satellite observations. The PSD values show certain enhancements of wave intensity in the frequency range 300-800 Hz at 10-3 days before the main shock. The variation patterns along the same orbit trajectories, computed from the revisiting orbits (August 2009 to May 2010), further indicate that the wave intensity was indeed enhanced during the seismically active period, compared to the relatively stable variation patterns during quiet seismic periods. Specifically, on March 28, 2010 (9 days before the main shock), the wave intensity started to increase, with a peak value around 10^−6.7 nT²/Hz along the orbit trace of No. 306891; 3 days before the main shock, the wave intensity varied from 10^−8.4 to 10^−6.3 nT²/Hz along the orbit trace of No. 307921. The strongest enhancement, 10^−6.0 nT²/Hz, was recorded along the orbit trace nearest to the epicenter (No. 307041). We further investigated the perturbation amplitude relative to the background map, which was built from four years of quiet space weather data over the same time window (each year from February 1 to April 30), and found that the perturbation amplitudes of the wave intensities in the frequency range 468-566 Hz were indeed enhanced during the earthquake-impending time interval (from March 21 to April 6).
We further computed the wave propagation parameters for the electromagnetic field waveform data using the Singular Value Decomposition method. The results show that there do exist portions of ELF emissions, mainly at 300-800 Hz, propagating upward from altitudes lower than the satellite over the seismic zone. We excluded the other generation sources of ELF/VLF emissions under quiet space weather conditions, namely lightning activity and ground-based VLF transmitters. Considering the wave propagation features and their locations, we suggest that these portions of upward-propagating ELF waves are very likely excited during the earthquake preparation process. According to previous studies (e.g., Gokhberg et al., 1982; Larkina et al., 1989; Bhattacharya et al., 2007; Błeçki et al., 2010; Zhima et al., 2012a; Zhima et al., 2012b; Pulinets et al., 2018) and the references therein, ELF electromagnetic emission is a promising tool for earthquake precursor detection. It usually appears a few days or weeks before a shock over the earthquake preparation zone, especially close to the moment of rupture, when the varying stress on the rocks excites broadband electromagnetic emissions. In this study we mainly took the approach of extracting the anomaly information before the strong earthquake through a case study. Aftershock effects were not considered; we leave them for future in-depth research, once enough evidence of abnormal seismic emissions has been accumulated from satellite observations. Regarding the possible mechanism, we computed the potential energy of AGW at the time of the earthquake, and the results confirm the possible existence of AGW, with wavelengths roughly varying from 5.5 to 9.5 km, in the atmosphere at the moment of the main shock. It must be admitted that in this study we only suggest the possibility of AGW generation over the epicenter area, owing to the very complicated LAIC coupling mechanism and the impossibility of building a coupling model at every key altitude (or layer) from the lithosphere to ionospheric space at the present level of science and technology. A comprehensive interpretation of the LAIC is beyond the scope of the present study, but we hope to explore this topic in the future.

AUTHOR CONTRIBUTIONS

ZZ: scientific analysis and manuscript writing. YH: data collection and data processing. MP: AGW wave simulation and its scientific analysis. XS and AS: scientific analysis. RY, YY, SZ, ZZ, QW, JH, and FG: data collection and scientific analysis.
Studies on the Influence of Monomers on the Performance Properties of Epoxy Acrylate Resin

Twelve blend samples were prepared by physical mixing of epoxy acrylate resin with various monomers, viz. ethoxylated phenol monoacrylate (EOPA), tripropylene glycol diacrylate (TPGDA) and trimethylolpropane triacrylate (TMPTA), at epoxy acrylate resin to monomer weight ratios of 50:50, 60:40, 70:30 and 80:20. These samples were cured under UV radiation using 5% photoinitiator by weight. The blends were evaluated for mechanical, chemical and thermal properties. It was found that the samples containing the mono- and trifunctional monomers show better properties than the samples containing the difunctional monomer.

Introduction

Photo-initiated curing reactions are powerful processes to quickly create highly cross-linked polymer networks by supplying an appropriate form of energy, generally ultraviolet (UV) light, to a thermosetting material such as epoxy, unsaturated polyester or acrylic resin containing photo-initiating species [1-5]. In recent years, epoxy acrylate prepolymers suitable for use in ultraviolet (UV) curing systems have commanded the largest use in the market and are typically used in applications ranging from paper and card overprint varnishes, wood coatings, and screen and lithographic inks to solder resist inks for printed circuit boards [6-9]. Other areas such as vacuum metallizing base coatings, adhesive laminates, release coatings and video disc coatings are also becoming more important [10]. Epoxy acrylates are noted for their adhesive properties, flexibility, non-yellowing, hardness, and chemical resistance [11-13]. The epoxy backbone imparts toughness and flexibility to cured films, whilst its carbon-carbon and ether bonds improve the chemical resistance. The epoxy acrylate resins have the merits of (a) high reactivity, (b) the possibility of structural variation in the backbone (e.g., bisphenol A or epoxy novolacs) [14-16], which provides good coating properties after curing, and (c) excellent adhesion performance (due to the presence of pendant hydroxyl groups). However, UV-cured films based on epoxy diacrylate as the oligomer are very brittle, and their relative elongation at break is very low [17-19]. The epoxy acrylate resin can be further cross-linked with monomers of different functionality to change or modify its performance. These acrylic monomers are an essential part of every UV/EB formulation. Nevertheless, only a few published studies explain the underlying logic of the use of acrylic monomers. In past experimental work, trimethylolpropane triacrylate (TMPTA) has been used by various researchers [20,21]. Therefore, for the first time, we prepared blends by physical mixing of epoxy acrylate resin with mono-, di- and trifunctional monomers and studied the effect of different concentrations of these monomers.

Preparation of epoxy acrylate resin

A general-purpose, bisphenol A-based epoxy resin (EEW: 180), acrylic acid and triethylamine were used for the synthesis of the epoxy acrylate resin. The epoxy acrylate resin was synthesized by reacting the epoxy resin and acrylic acid in a 1:1 mole ratio, with triethylamine as catalyst. The acid value of the prepared resin was 3.0 mg KOH/g solid, as shown in Scheme 1.

Preparation of blends of epoxy acrylate & monomers

Epoxy acrylate resin was physically mixed with the different monomers at different concentrations, in intervals of 10 wt.%. All the samples were designated as shown in Table 1.
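The blend compositions above amount to simple batching arithmetic, sketched below. The resin:monomer ratios and the 5 wt.% Darocure 1173 dose come from the text; whether the photoinitiator is dosed on the total blend or on the resin+monomer mass is not stated, so dosing it on the total is an assumption here, and the sample keys are generic (the paper's own designations are in Table 1, which is not reproduced).

```python
# Illustrative batching helper for the twelve blends described above.
RATIOS = [(50, 50), (60, 40), (70, 30), (80, 20)]
MONOMERS = ["EOPA", "TPGDA", "TMPTA"]

def batch_recipes(total_g=100.0, pi_frac=0.05):
    recipes = {}
    base = total_g * (1.0 - pi_frac)          # assumption: PI dosed on total mass
    for mono in MONOMERS:
        for resin_pct, mono_pct in RATIOS:
            recipes[f"{mono} {resin_pct}:{mono_pct}"] = {
                "epoxy acrylate": base * resin_pct / 100.0,
                mono: base * mono_pct / 100.0,
                "Darocure 1173": total_g * pi_frac,
            }
    return recipes   # 12 blends, e.g. recipes["TMPTA 70:30"]
```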
Curing of blend samples

The blend samples, with 5 wt.% photoinitiator (Darocure 1173), were cured in a UV oven at a speed of 10 m per minute, in atmospheric air, with an 80 W/cm mercury vapor bulb.

Preparation of panels

The blend samples were applied on sand-blasted steel panels of size 150 × 100 × 1.25 mm with a bar applicator (M/s Sheen Instruments Ltd., U.K.). A dry film thickness of about 10-15 µm was maintained on all the panels. The films were cured as per the cure schedule discussed earlier.

Mechanical properties

Adhesion

The adhesion of the cured films was evaluated by the cross-cut adhesion test method. In this test, parallel cuts were made in the coating in two directions to form a series of small squares, originally one hundred of them, 1.25 mm in size. An adhesive tape was applied to the cross cuts and rolled in place to assure good adhesion, and then removed with a force perpendicular to the coated substrate. The number of squares removed gave the numerical value of adhesiveness.

Impact resistance of cured film

The impact resistance of the cured film samples was evaluated by dropping a hemispherical 2 lb weight from different heights, ranging from 15 to 25 inches (DEF 1053 specification). The tests were carried out with the uncoated side of the panels facing the falling weight.

Pencil hardness

As per ASTM D3363, the pencil hardness of the cured films was determined using a calibrated set of drawing leads ranging from 6B, the softest, to 6H, the hardest. The first pencil that scratched the coating off the substrate was reported as the coating's hardness.

Chemical properties

The panels were also examined for visual changes in the film by conducting spot tests with different chemicals, such as H2SO4 (10% solution), NaOH (10% solution) and toluene, for 24 h at ambient temperature.

Thermal properties

The thermal stability of the blend samples was determined by comparing the onset degradation temperatures (up to 5% wt. loss) of the cured samples, measured with a thermogravimetric analyzer (TGA; Universal V3.9A, TA Instruments) at a heating rate of 20 °C/min in a nitrogen atmosphere, from 50 to 600 °C.

The blend samples with EOPA and TMPTA show improved adhesion and pencil hardness. For EOPA this is due to the very bulky side group, directly linked to the polymeric network, which limits the freedom of movement of that network, whereas in TMPTA the polarity of the linkage group gives a greater degree of dipolar interaction, which leads to better mechanical properties. TPGDA shows inferior mechanical properties, except for impact resistance, as compared to EOPA and TMPTA. This is due to the higher freedom of movement around the linkage, which enables stretching, together with the polarity giving the dipolar interaction responsible for the mechanical properties. Table 2 presents the surface properties of the cured films of the different blend samples.

Comparative acid, alkali & solvent resistance of epoxy acrylate modified with different monomers at different concentrations, cured under UV radiation

Table 3 shows the comparative acid, alkali and solvent resistance of the cured film blends. A quick perusal of Table 3 shows that the coating films prepared with TMPTA offered the maximum resistance towards acid, alkali and solvent, as compared to the films with the other monomers. This behavior is due to the high reactivity of TMPTA, which results in a more complex, more densely cross-linked structure and hence better resistance than the other samples.
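The onset-degradation criterion used above (the temperature at 5% weight loss) is easy to extract from an exported TGA curve. Below is a minimal, hypothetical sketch assuming the instrument exports temperature and weight-percent arrays; the array names and export format are assumptions, and this is not tied to the Universal V3.9A software.

```python
import numpy as np

def onset_temperature(temp_c, weight_pct, loss_pct=5.0):
    """Interpolate the temperature (deg C) at which loss_pct % weight is lost."""
    temp_c = np.asarray(temp_c, dtype=float)
    weight_pct = np.asarray(weight_pct, dtype=float)
    target = weight_pct[0] - loss_pct                 # e.g. 100% -> 95%
    past = np.where(weight_pct <= target)[0]          # first point past the loss
    if past.size == 0:
        return None                                   # never reached 5% loss
    j = max(past[0], 1)
    # linear interpolation between the two bracketing points
    t0, t1 = temp_c[j - 1], temp_c[j]
    w0, w1 = weight_pct[j - 1], weight_pct[j]
    return t0 + (target - w0) * (t1 - t0) / (w1 - w0)
```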
Thermal properties

The system was evaluated for thermal stability in a nitrogen atmosphere by thermogravimetric analysis. Thermogravimetric (TG) traces were obtained for the samples.

Conclusion

From the preceding results and discussion it can be concluded that the films of the blend samples containing EOPA and TMPTA show better adhesion, hardness, chemical resistance and thermal resistance than those containing TPGDA. Epoxy acrylate resin containing EOPA or TMPTA can be used in harsh environments such as automotive applications, and it is also suitable for metal coating at ambient temperature. As the epoxy acrylate resin containing TPGDA has high impact resistance, it can be used as an oligomer in UV-cured coatings for plastics instead of flexible urethane acrylate.

Table 2 shows the surface properties of the cured films of the different blend samples; it indicates that all the blend samples except R1D1, R1D2, R1D3 and R1D4 pass the adhesion test. Table 3 gives the comparative acid, alkali and solvent resistance of the cured film blends.
Band crossings in $^{168}$Ta: a particle-number conserving analysis

The structures of two observed high-spin rotational bands in the doubly-odd nucleus $^{168}$Ta are investigated using the cranked shell model with pairing correlations treated by a particle-number conserving method, in which the blocking effects are taken into account exactly. The experimental moments of inertia and alignments are reproduced very well by the calculations, which confirms the configuration assignments for these two bands made in previous works. The backbending and upbending mechanisms in these two bands are analyzed in detail by calculating the occupation probabilities of each orbital close to the Fermi surface and the contributions of each orbital to the total angular momentum alignment. The investigation shows that the level crossing in the first backbending is the neutron i13/2 crossing, and that in the second upbending is the proton h11/2 crossing.

Introduction

The high-spin rotational structures of the rare-earth nuclei with proton number Z ≈ 72 and neutron number N ≈ 94 have drawn considerable attention due to the existence of exotic excitation modes [1,2,3,4]. Great efforts have been made to find these novel phenomena, e.g., the wobbling mode [1,5,6,7,8,9,10]. Meanwhile, a considerable amount of data on the high-spin rotational bands in this mass region has been obtained in this process. In particular, some doubly-odd nuclei, e.g., 166,168,170,172Ta [11,12,13,14] and 168,170,172,174,176Re [15,16,17,18,19], which are characterized by fairly small quadrupole deformations, provide a good opportunity for understanding the dependence of band crossing frequencies and angular momentum alignments on the occupation of specific single-particle orbitals. Furthermore, these data provide a benchmark for various nuclear models, e.g., the cranked Nilsson-Strutinsky method [20], the Hartree-Fock-Bogoliubov cranking model with Nilsson [21] and Woods-Saxon potentials [22,23], the cranking non-relativistic [24] and relativistic mean-field models [25], the projected shell model [26], the projected total energy surface approach [27], etc. For example, two high-spin rotational bands in the doubly-odd nucleus 168Ta were extended up to spin ∼40ℏ in Ref. [12], in which the second band crossing in 168Ta was observed for the first time. However, the cranked shell model (CSM) could only reproduce the first, neutron crossing [12]. The difficulty of the CSM in reproducing the second, proton crossing in the high-spin region may come from a change of deformation with increasing rotational frequency and/or an improper treatment of the pairing correlations [28,29]. Therefore, it is interesting to investigate the proton alignments of the h11/2 orbitals involved in this nucleus using a reliable nuclear model. In the present work, the CSM with pairing correlations treated by a particle-number conserving (PNC) method [30,31] is used to investigate the band crossings of the two high-spin rotational bands observed in 168Ta [12]. In contrast to the conventional Bardeen-Cooper-Schrieffer or Hartree-Fock-Bogoliubov approaches, in the PNC method the Hamiltonian is solved directly in a truncated Fock space [32]. Therefore, the particle number is conserved and the Pauli blocking effects are taken into account exactly.
The PNC-CSM has been employed successfully in describing various nuclear phenomena, e.g., the odd-even differences in moments of inertia (MOIs) [33], identical bands [34,35], superdeformed bands [34,36], the nuclear pairing phase transition [37], antimagnetic rotation [38,39], and high-K isomers in the rare-earth [40,41,42,43] and actinide nuclei [44,45,46,47,48], etc. The PNC scheme has also been adopted both in non-relativistic [49,50] and relativistic mean-field models [51] and in the total-Routhian-surface method with the Woods-Saxon potential [52,53]. Most recently, the shell-model-like approach, originally referred to as the PNC method, based on the cranking covariant density functional theory has been developed [54]. Note that the covariant density functional theory provides a consistent description of nuclear properties, especially the spin-orbital splitting, the pseudo-spin symmetry [55,56,57,58,59,60,61,62,63,64] and the spin symmetry in the anti-nucleon spectrum [65,66], and is reliable for the description of nuclei far away from the β-stability line [67,68], etc. Similar approaches with exactly conserved particle number in the treatment of the pairing correlations can be found in Refs. [70,71,72,73,74,75].

This paper is organized as follows. A brief introduction to the PNC treatment of pairing correlations within the CSM is presented in Sec. 2. The calculated results for the two observed high-spin rotational bands in 168Ta using the PNC-CSM are shown in Sec. 3. A brief summary is given in Sec. 4.

Theoretical framework

The cranked shell model Hamiltonian of an axially symmetric nucleus in the rotating frame can be written as

$H_{\rm CSM} = H_{\rm Nil} - \omega J_x + H_{\rm P}$,   (1)

where $H_{\rm Nil}$ is the Nilsson Hamiltonian [76] and $-\omega J_x$ is the Coriolis interaction with the cranking frequency ω about the x axis (perpendicular to the nuclear symmetry z axis). $H_{\rm P}$ is the monopole pairing interaction,

$H_{\rm P} = -G_0 \sum_{\xi\eta} a^\dagger_\xi a^\dagger_{\bar{\xi}} a_{\bar{\eta}} a_\eta$,   (2)

where $\bar{\xi}$ ($\bar{\eta}$) labels the time-reversed state of a Nilsson state ξ (η), and $G_0$ is the effective strength of the monopole pairing interaction. Instead of the usual single-particle level truncation in conventional shell-model calculations, a cranked many-particle configuration (CMPC) truncation is adopted, which is crucial to make the PNC calculations for low-lying excited states both workable and sufficiently accurate [77,32]. Usually a CMPC space with a dimension of 1000 is enough for the investigation of the rare-earth nuclei. By diagonalizing $H_{\rm CSM}$ in a sufficiently large CMPC space, accurate solutions for the low-lying excited eigenstates of $H_{\rm CSM}$ can be obtained; they can be written as

$|\Psi\rangle = \sum_i C_i |i\rangle \quad (C_i \ {\rm real})$,   (3)

where $|i\rangle$ is a CMPC (an eigenstate of $H_0$). The angular momentum alignment of the state $|\Psi\rangle$ is

$\langle \Psi | J_x | \Psi \rangle = \sum_i C_i^2 \langle i | J_x | i \rangle + 2\sum_{i<j} C_i C_j \langle i | J_x | j \rangle$,   (4)

and the kinematic MOI of the state $|\Psi\rangle$ is

$J^{(1)} = \frac{1}{\omega} \langle \Psi | J_x | \Psi \rangle$.   (5)

Because $J_x$ is a one-body operator, the matrix element $\langle i | J_x | j \rangle$ (i ≠ j) may not vanish only when $|i\rangle$ and $|j\rangle$ differ by one particle occupation [31]. After a certain permutation of the creation operators, $|i\rangle$ and $|j\rangle$ can be recast into

$|i\rangle = (-1)^{M_{i\mu}} |\mu \cdots \rangle, \qquad |j\rangle = (-1)^{M_{j\nu}} |\nu \cdots \rangle$,   (6)

where µ and ν denote two different single-particle states, and $(-1)^{M_{i\mu}} = \pm 1$, $(-1)^{M_{j\nu}} = \pm 1$ according to whether the permutation is even or odd. Therefore, the angular momentum alignment of $|\Psi\rangle$ can be written as

$\langle \Psi | J_x | \Psi \rangle = \sum_\mu j_x(\mu) + \sum_{\mu<\nu} j_x(\mu\nu)$,   (7)

where the diagonal contribution $j_x(\mu)$ and the off-diagonal (interference) contribution $j_x(\mu\nu)$ can be written as

$j_x(\mu) = \langle \mu | j_x | \mu \rangle n_\mu, \qquad j_x(\mu\nu) = 2 \langle \mu | j_x | \nu \rangle \sum_{i<j} (-1)^{M_{i\mu}+M_{j\nu}} C_i C_j \quad (\mu \neq \nu)$,   (8)

and

$n_\mu = \sum_i |C_i|^2 P_{i\mu}$   (9)

is the occupation probability of the cranked orbital $|\mu\rangle$, with $P_{i\mu} = 1$ if $|\mu\rangle$ is occupied in $|i\rangle$, and $P_{i\mu} = 0$ otherwise.
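To make the bookkeeping in Eqs. (7)-(9) concrete, here is a small, self-contained sketch in a toy CMPC basis (4 particles in 6 cranked orbitals). The basis size, the random eigenvector, and the placeholder single-particle matrix elements are all illustrative assumptions; a real PNC-CSM run obtains the coefficients by diagonalizing $H_{\rm CSM}$ in the ∼1000-dimensional CMPC space described above.

```python
import numpy as np
from itertools import combinations

# Toy CMPC basis: every way of occupying 4 of 6 cranked orbitals.
N_ORB, N_PART = 6, 4
CONFIGS = list(combinations(range(N_ORB), N_PART))

def occupation(C, mu):
    """n_mu = sum_i |C_i|^2 P_imu (Eq. 9)."""
    return sum(c ** 2 for c, cfg in zip(C, CONFIGS) if mu in cfg)

def diagonal_alignment(C, jx_sp):
    """Diagonal part of <Psi|J_x|Psi>: sum_mu <mu|j_x|mu> n_mu (Eqs. 7, 8)."""
    return sum(jx_sp[mu] * occupation(C, mu) for mu in range(N_ORB))

rng = np.random.default_rng(0)
C = rng.normal(size=len(CONFIGS))
C /= np.linalg.norm(C)                      # normalized stand-in eigenvector
jx_sp = rng.uniform(0.0, 6.5, size=N_ORB)   # placeholder <mu|j_x|mu> values
print(occupation(C, 0), diagonal_alignment(C, jx_sp))
```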
Results and discussion

In this work, the deformation parameters ε2 = 0.217 and ε4 = 0 are taken from Ref. [78]; they are chosen as an average of those of the neighboring even-even Hf and W isotopes. The Nilsson parameters (κ and µ) are taken from Ref. [79], with a slight change of κ6 (modified from 0.062 to 0.068) and µ6 (modified from 0.34 to 0.32) for the neutron N = 6 major shell; the neutron Nilsson parameters are changed to account for the observed ground state of 167Hf. In addition, the proton orbital π1/2−[541] is slightly shifted upward, by about 0.2ℏω₀, which is adopted to avoid the defect caused by the velocity-dependent l² term in the Nilsson potential for the MOIs and alignments in the high-spin region [20]. The effective pairing strengths can be determined from the odd-even differences in nuclear binding energies and are connected with the dimension of the truncated CMPC space. In this work, the CMPC space is constructed in the proton N = 4, 5 major shells and the neutron N = 5, 6 major shells, with truncation energies of about 0.7ℏω₀ for both protons and neutrons. For 168Ta, ℏω₀p = 7.106 MeV for protons and ℏω₀n = 7.755 MeV for neutrons [76]. The dimensions of the CMPC space are 1000 for both protons and neutrons in the present calculation. The corresponding effective pairing strengths are G_p = 0.34 MeV for protons and G_n = 0.46 MeV for neutrons, the same as those adopted for 166Ta [43]. Figure 1 shows the cranked Nilsson levels near the Fermi surface of 168Ta; several high-Ω orbitals lie close to both the proton and the neutron Fermi surfaces, which may lead to the formation of various high-K 2-quasiparticle isomers. It can also be seen that, near the Fermi surface, there exists a proton sub-shell at Z = 76 and a neutron sub-shell at N = 98.

Figure 2 shows the experimental [12] and calculated kinematic MOIs J^(1) (upper panel) and alignments i (lower panel) of the two high-spin rotational bands in 168Ta. The alignment is defined as i = ⟨Jx⟩ − ωJ₀ − ω³J₁, and the Harris parameters J₀ = 28 ℏ²MeV⁻¹ and J₁ = 58 ℏ⁴MeV⁻³ are taken from Ref. [12]. The experimental MOIs and alignments are denoted by black solid circles (signature α = 0) and red open circles (signature α = 1), respectively; the calculated MOIs and alignments are denoted by black solid lines (α = 0) and red dotted lines (α = 1). In Ref. [80], these two bands were first observed and, based on the relative intensities, their configurations were tentatively assigned as π9/2−[514] ⊗ ν5/2+[642] (πh11/2 ⊗ νi13/2) for band 1 and π5/2+[402] ⊗ ν5/2+[642] (πd5/2 ⊗ νi13/2) for band 2. Later, these two bands were extended up to spin ∼40ℏ in Ref. [12], in which the second upbending at ℏω ∼ 0.5 MeV in 168Ta was observed for the first time. The configurations were then confirmed by analyzing the energy staggerings, electromagnetic transition probabilities and rotational alignments of these two bands. It can be seen in Fig. 2 that the signature splittings in these two bands at low rotational frequency are quite small. Due to the large signature splitting of the νi13/2 orbital, these two bands should be constructed from the favored α = +1/2 sequence of νi13/2 coupled to both signature partners of the single proton.
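The alignment extraction plotted in Fig. 2 is a one-line transformation of the data, sketched below with the Harris parameters quoted above from Ref. [12]; the example input values are illustrative only.

```python
import numpy as np

# i = <J_x> - omega*J0 - omega**3*J1, with J0 = 28 hbar^2/MeV and
# J1 = 58 hbar^4/MeV^3 (Harris reference subtracting the smooth
# collective rotation).
J0, J1 = 28.0, 58.0

def alignment(omega_mev, jx_hbar):
    """Aligned angular momentum i(omega), in units of hbar."""
    w = np.asarray(omega_mev, dtype=float)
    return np.asarray(jx_hbar, dtype=float) - w * J0 - w ** 3 * J1

# e.g. alignment([0.2, 0.3], [8.0, 13.5])
```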
Using these configuration assignments, the MOIs and alignments of these two bands are reproduced quite well by the PNC-CSM calculations, which in turn supports the configuration assignments [12,80]. In particular, the second upbending around ℏω ∼ 0.5 MeV in the α = 0 sequence of band 1 is reproduced quite well, as is the corresponding signature splitting. Note that in Ref. [12] the CSM could not reproduce this upbending, which was tentatively interpreted as a crossing with the proton π²: 9/2−[514](α = 1/2) ⊗ 1/2−[541](α = −1/2) configuration. However, the gradual upbending in the α = 0 sequence of band 2 at ℏω ∼ 0.5 MeV is not reproduced by the PNC-CSM, which needs further investigation. In the following, the level crossings in the backbending and upbending are discussed in detail. It is well known that the backbending and upbending phenomena in the rare-earth region are caused by the alignment of the high-j neutron i13/2 and proton h11/2 orbitals [81]. In band 1 of 168Ta, the proton h11/2 orbital π9/2−[514] and the neutron i13/2 orbital ν5/2+[642] are both blocked. Therefore, it is interesting to investigate the level crossings in this band. To understand the first (ℏω ∼ 0.3 MeV) and second (ℏω ∼ 0.5 MeV) level crossings in band 1 of 168Ta, the occupation probability n_µ of each orbital µ (including both α = ±1/2) near the Fermi surface of band 1 is shown in Fig. 3. The top and bottom rows are for neutrons and protons, respectively. The positive (negative) parity levels are denoted by blue (red) lines. The Nilsson levels far above (n_µ ∼ 0) and far below (n_µ ∼ 2) the Fermi surface are not shown. It can be seen in Fig. 3(a) that, for neutrons, the Coriolis mixing between the i13/2 orbitals ν5/2+[642] and ν3/2+[651] is very strong, since they are both very close to the Fermi surface (see Fig. 1). Around the rotational frequency ℏω ∼ 0.3 MeV, the occupation probabilities of ν5/2+[642] and ν3/2+[651] suddenly increase, while the occupation probability of ν5/2−[523] suddenly decreases. Therefore, the first backbending, around ℏω ∼ 0.3 MeV in band 1, may come from the contribution of these two i13/2 neutrons. At ℏω > 0.3 MeV, the occupation probabilities of all neutron orbitals change gradually, so they may not contribute to the second upbending. Fig. 3(b) shows the proton occupation probabilities of band 1. It can be seen that around ℏω ∼ 0.5 MeV, the occupation probabilities of π7/2−[523] and π9/2−[514] decrease, while the occupation probability of π5/2+[402] increases. Therefore, the second upbending, around ℏω ∼ 0.5 MeV in band 1, may come from the contribution of these two h11/2 protons. The contributions of the neutron N = 5, 6 major shells (upper panel) and the proton N = 4, 5 major shells (lower panel) to the angular momentum alignment ⟨Jx⟩ of band 1 in 168Ta are shown in Fig. 4. The diagonal Σµ jx(µ) and off-diagonal Σµ<ν jx(µν) contributions in Eq. (7) from the neutron N = 6 and proton N = 5 major shells are shown as dashed lines. It should be noted that in this figure the smoothly increasing part of the angular momentum alignment represented by the Harris formula is not subtracted (cf. the caption of Fig. 2). It can be seen in Fig. 4(a) that the first backbending at ℏω ∼ 0.3 MeV mainly comes from the contribution of the neutron N = 6 major shell, and in particular from its off-diagonal part. Fig.
4(b) shows that the second upbending at ℏω ∼ 0.5 MeV mainly comes from the contribution of the proton N = 5 major shell, and mainly from its diagonal part, while the off-diagonal part also contributes a little. In order to have a clearer understanding of the level crossing mechanism, the contributions of (a) each neutron orbital in the N = 6 major shell and (b) each proton orbital in the N = 5 major shell to the angular momentum alignment ⟨Jx⟩ of band 1 in 168Ta are shown in Fig. 5. The diagonal (off-diagonal) part jx(µ) [jx(µν)] in Eq. (7) is denoted by black solid (red dotted) lines. It can be seen from Fig. 5(a) that, for neutrons, the off-diagonal part jx(ν3/2+[651] ν5/2+[642]) changes considerably at ℏω ∼ 0.3 MeV. The alignment gain after the backbending mainly comes from this interference term. In addition, the off-diagonal parts jx(ν1/2+[660] ν3/2+[651]) and jx(ν5/2+[642] ν7/2+[633]) also contribute appreciably. This tells us that the first backbending at ℏω ∼ 0.3 MeV is mainly caused by the neutron i13/2 orbitals. From Fig. 5(b) one can see that, for protons, the diagonal part jx(π7/2−[523]) contributes strongly to the upbending; the diagonal part jx(π9/2−[514]) and the off-diagonal part jx(π5/2−[532] π7/2−[523]) also contribute a little. Therefore, we conclude that the second upbending at ℏω ∼ 0.5 MeV is mainly caused by the proton h11/2 orbitals.

Summary

The structures of two observed 2-quasiparticle high-spin rotational bands in the doubly-odd nucleus 168Ta are investigated using the cranked shell model with pairing correlations treated by a particle-number conserving method, in which the blocking effects are taken into account exactly. The experimental moments of inertia and alignments are reproduced very well by the particle-number conserving calculations, which confirms the configuration assignments for these two bands made in previous works. The backbending and upbending mechanisms in these two bands are analyzed by calculating the occupation probabilities of each orbital close to the Fermi surface and the contributions of each orbital to the total angular momentum alignment. It was found that the interference terms between the neutron i13/2 orbitals contribute strongly to the first backbending at ℏω ∼ 0.3 MeV. For the second upbending at ℏω ∼ 0.5 MeV, the diagonal part of the proton h11/2 orbitals contributes strongly, and their off-diagonal part also contributes a little. Note that the second, gradual upbending in band 2 with signature α = 0 is not reproduced by the present calculation. This may be caused by the deformation changing with increasing rotational frequency, especially the triaxial deformation; in the present PNC-CSM, however, the deformation is fixed. Therefore, a more sophisticated theory is needed for further investigations of this band. The recently developed shell-model-like approach, originally referred to as the PNC method, based on the cranking covariant density functional theory can treat the deformation self-consistently with increasing rotational frequency and may provide more detailed information on this upbending.

Acknowledgement

This work was partly supported by the National Natural Science Foundation of China (Grants No. 11505058, 11775112, and 11775026) and the Fundamental Research Funds for the Central Universities (2018MS058).
Risks and Benefits of Liver Biopsy in Focal Liver Disease

Even with the recent evolution of imaging techniques, and with the ever-increasing role of serum markers, direct analysis of tissue samples maintains its role in modern medicine. This is especially true for the diagnosis and for assessing the prognosis and evolution of a series of viral, tumoral and inflammatory liver diseases. Thus, liver biopsy and histological assessment of the liver parenchyma can still be called by many the "gold standard" in the diagnosis and staging of the associated disease. However, liver biopsy in itself implies a series of risks and inherent discomfort for the patient. With the increasing availability of other non-invasive methods routinely used in the diagnosis and staging of liver-related diseases, many debate the necessity and ethical implications of tissue sampling.

Introduction

In the following pages, we will try to synthesize the historical evolution of liver biopsy, describe the techniques used over the years and present its current recommendations and their alternatives, with a focus on the so-called "virtual liver biopsy" techniques currently employed.

Historical landmarks and recent developments in liver biopsy

The first documented written report of a successful liver biopsy was made by Paul Ehrlich in the book "On diabetes", published in 1884. He published an account of the procedure, performed in 1880 in Berlin, along with graphical illustrations of the instruments and the liver samples collected. This detailed account was based on the theoretical advantages of the technique discussed earlier by the French physician A. G. M. Vernois in 1844, who in turn based his assumptions on successful procedures performed for puncturing purulent echinococcus, as early as 1825 (Récamier) and 1833 (Stanley). Cytology was reported as a diagnostic method for liver disease by L. Lucatello (in Rome) in 1895, while F. Schupfer performed liver and spleen biopsies with a thicker needle twelve years later, in 1907. This new approach provided cylindrical tissue samples which could be histologically prepared and analyzed [1].

A new stage in modern liver biopsy techniques was reached when, in 1957, and again the following year, Menghini performed and reported the first "one-second needle biopsy", carried out with a special small-caliber needle with no trocar and a sharp bevel. This was the first time needle liver biopsy was introduced worldwide as a praised diagnostic technique capable of providing enough histological material for an accurate interpretation of the pathological changes present in the parenchyma [1].
Following this radical advancement, liver biopsy became more widespread, and the technique evolved once modern imaging methods allowed for better and safer puncturing of the liver parenchyma. Thus, the technique entered the image-guided age of investigation, performed under computed tomography (CT) or real-time ultrasound (US) screening. Reports from Denmark, China, the United Kingdom, France and the United States of America populated the 1960-1980 literature, once the technique became widespread and fully acknowledged by the academic community. Its utility in diagnosing liver diseases, and later on in staging hepatitis or malignancies, was undisputed for entire decades of the 20th century [1].

Recent advancements, based on the advent of new high-accuracy imaging techniques built on both US and CT/MR approaches, have greatly diminished the role played by this invasive investigation. The term "virtual biopsy" has become more and more present in the recent literature, once doctors and patients alike became more confident with, and were introduced to, high-yield methods such as transient or acoustic radiation force elastography. Moreover, advanced serum markers (such as the FibroTest-ActiTest battery of tests) allow an accurate non-invasive staging in hepatitis. The introduction of arterial-phase contrast-enhanced US and CT/MR techniques has substantially decreased the role of biopsy in diagnosing liver lesions [2-4].

However, histology remains one of the most accurate methods for evaluating liver parenchymal changes, and it is always used in malignancies when the diagnosis is uncertain, or when other non-invasive methods fail to provide an accurate staging in hepatitis. Along with these non-invasive techniques came a revolution in in-situ biopsy methods. One such technique is probe-based confocal laser endomicroscopy (pCLE), which uses miniaturized probes connected to a laser source through fiber optics, small enough to fit inside a biopsy needle, thus providing rapid live assessment of liver architecture [5].

Percutaneous biopsy

All modern percutaneous liver biopsy techniques have rapidity as a common denominator. Either cutting or suction needles can be used for transthoracic or subcostal biopsy, either after palpation or imaging assessment of the puncture zone or, preferably, under continuous image guidance. The transthoracic approach is the preferred method, used under real-time US or (more rarely) CT guidance and after a thorough imaging investigation of the liver and of the puncture route. All percutaneous methods imply two phases: an extra-hepatic one, corresponding to the needle puncturing the skin and reaching the liver, and a hepatic stage, in which the needle passes the liver capsule, collects the parenchymal material, and is swiftly extracted. It is considered a relatively safe procedure, with complication rates varying between studies from 0.75% up to 13.6% [6].

Trucut needles and their modified versions, driven by spring-loaded biopsy guns, are increasingly used and are the instruments of choice in many centers worldwide, especially in Europe [7]. Needle diameters vary from 1.20 mm to 1.60 mm, smaller calibers being used when a high risk of complications is suspected.
Suction needles are less expensive and their operation allows for rapid intrahepatic handling; they are thus easier to use and may imply fewer bleeding-related complications. The most widespread types are the Menghini, Jamshidi and Klatskin needles, which have remained virtually unchanged since their introduction in the second half of the last century. The maximum time required for a complete syringe suction of the cytological material and the consecutive needle retraction is 0.5 seconds; the intrahepatic phase is reduced to as little as 0.1 seconds when the needle is operated by an expert practitioner [8].

Image guidance has become mandatory in centers where the gastroenterologist can perform his or her own US exam. Real-time surveillance of the procedure greatly decreases the risk of complications (such as bleeding) and minimizes post-procedural complaints such as pain or hypotension. Hepatologists in the United States usually prefer to have a radiologist perform the procedure under CT or US guidance [8].

Transjugular (transvenous) biopsy

The transjugular route is preferred when the risk of complications is high and a percutaneous approach is therefore not considered safe enough for the patient. Patients with clinical ascites, a known hemostatic defect, a cirrhotic liver with clinical signs of organ deficiency (smaller size and increased palpatory stiffness) or morbid obesity are usually prime candidates for this approach. Another situation in which the transvenous approach is preferred is when additional pressure measurements in the hepatic vein are required [8].

The resources needed for this procedure are greater than for percutaneous approaches; however, complication rates are lower (2.5% up to 6.5%) according to some authors [9], with mortality rates of approximately 0.09% in high-risk patient groups [10]. The expertise of the performing physician also plays a crucial role in the success rate of this procedure and should be considered, along with the higher resource costs, when choosing this access route for a lower-risk patient [1].

Another very important aspect is the lower quality of the tissue specimens collected through the transjugular approach. The tissue cylinders are thinner and more fragmented than those obtained through percutaneous biopsy, and usually represent only 1-2 cm of the liver parenchyma, containing fewer portal fields [11].

Surgical or laparoscopic biopsy: novel approaches for liver biopsy

This approach is preferred in patients with peritoneal involvement when an abdominal cancer is present, with associated ascites, or in peritoneal disease with ascites of suspected hepatic origin. Focal hepatic lesions can also be targeted for biopsy through the laparoscopic channel.
Biopsy can thus be performed either with normal needle systems or by wedge resection. However, the latter approach may overestimate the level of fibrosis, as the resection is performed too close to the fibrotic capsule that envelops the liver. The procedure is always conducted under general anesthesia and requires a controlled pneumoperitoneum created by insufflation of nitrous oxide; it is always performed by trained physicians, and the large working area created allows good control of bleeding and a minimal set of complications. In direct comparison with percutaneous biopsy, the laparoscopic approach provides a higher level of accuracy, as it allows the evaluation of the surrounding peritoneum [12]. The main complications are related to the general anesthesia used for the procedure and to the associated local abdominal and intraperitoneal trauma, as well as to the risk of bleeding, which is also present in the other types of biopsy.

Advancements in surgical techniques led to the development of natural orifice transluminal endoscopic surgery (NOTES), a new surgically derived endoscopic technique that uses a transgastric or transanal route to facilitate access to the abdominal cavity. One recent study presented a liver biopsy performed through a transgastric flexible endoscopic device, which permitted the inspection of the liver and the surrounding intraperitoneal space. The technique can be applied to morbidly obese patients or to patients at high risk of complications [13]. This approach remains, however, limited at the present time to a few highly selected patients, and is performed only by trained surgeons and gastroenterologists, at moderate to high costs and in selected centers.

Recent studies have also focused on evaluating the liver capsule in cirrhotic patients through pCLE inserted through a laparoscopic channel, this being a promising field in the advancement of minimally invasive biopsy techniques [14]. Another study describes the use of pCLE in a routine minilaparoscopy setting, performed under conscious sedation. The authors could record serial subsurface images in real time, allowing an in vivo analysis of the liver parenchyma [5]. This approach may lead the way to targeted biopsy through live assessment of the liver parenchyma, as well as immediate morphological and dynamic evaluation of intrahepatic structures.

Adequacy of liver biopsy samples

Analysis of the biopsy material under ultraviolet fluorescent light may be required in order to identify porphyria. Liver tissue obtained through biopsy is quickly transferred into a fixative solution, usually 4% or 10% neutral buffered formalin, to avoid the alterations it may sustain due to autolysis by hepatic enzymes. It can then be subjected to various preparation techniques, according to the diagnostic tests that will follow with that specific sample (frozen section, RNA detection etc.) [1].

An adequate biopsy fragment is between 1 and 4 cm long, weighs between 10 and 50 mg, and has a minimal diameter of 1 mm. Fragmented samples from Menghini needles are acceptable, as their added length is in the vicinity of 2 cm (samples usually range from 1 to 2.5 cm in length). In order to properly represent the parenchymal architecture, at least 10-11 portal tracts should be completely present, six being the minimally acceptable number. Specimens of inadequate length usually lead to understaging of fibrosis and underestimation of the grade of inflammation. Cirrhotic parenchyma usually comes out fragmented through biopsy, leading to sampling errors in approximately 20% of cases [15,16].
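The adequacy criteria above translate directly into a simple check, sketched below. The function name and the pass/borderline/inadequate labels are our own illustrative choices, not a published algorithm; only the lower bounds are enforced, since the 4 cm and 50 mg figures describe typical adequate specimens rather than hard maxima.

```python
def specimen_adequacy(length_cm, weight_mg, diameter_mm, portal_tracts):
    """Classify a liver biopsy specimen against the adequacy criteria above."""
    if length_cm < 1.0 or weight_mg < 10 or diameter_mm < 1.0:
        return "inadequate"      # too small to be representative
    if portal_tracts < 6:
        return "inadequate"      # below the minimally acceptable number
    if portal_tracts >= 10:
        return "adequate"        # fully representative architecture
    return "borderline"          # 6-9 portal tracts: acceptable minimum
```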
As a liver biopsy specimen is estimated to represent about 1/50,000 of the total organ mass, discussions regarding how representative it can be of diffuse lesions have always existed in the literature [8,17]. It is nevertheless appreciated that most diffuse lesions (steatosis, inflammation etc.) or focal lesions (both malignant and benign), as well as structural lesions such as fibrosis, can be visualized with a fairly high degree of accuracy if the minimum amount of liver parenchyma and the required number of portal spaces are present. It has, however, been demonstrated that insufficient sample size is directly correlated with underestimation of inflammatory changes [18]; this observation has been extended to fibrotic changes and has a direct effect on the subsequent grading and staging [1,19,20].

Another issue highly debated in the literature is inter-observer variability, even with the wide usage of quantification scores for both inflammation and fibrosis, such as the Knodell scoring system [21], its revised Ishak version [22] or the METAVIR score [23]. All interpretations are subject to the experience and training of the pathologist, which is an independent variable in itself, separate from the inherent sampling and procedural errors. A second opinion is always recommended, and two pathologists are usually present in most large referral centers. Collaboration between the pathologist and the clinician performing the liver biopsy is also preferable, as some studies have indicated [24-26].

The most important quantification parameters refer to the geometry of, and the relationship between, the principal compartments: the portal tracts and the elements of the arterial vascular system; the configuration adopted by the hepatocyte plates; the sinusoids and the perisinusoidal compartment; the amount of connective tissue and fat and the number of ducts present; as well as the normal cellular infiltrates of lymphoid origin [8]. Nodular regenerative hyperplasia or macronodular cirrhosis can sometimes be classified as normal parenchyma, and the inherent variations of the normal inflammatory cellular infiltrate can be misleading for an inexperienced pathologist observing low-grade inflammatory lesions [8,27].

Risks, complications and post-procedural complaints of liver biopsies

The main risks for a patient subjected to liver biopsy have already been briefly discussed in the previous paragraphs. Their frequency and the predisposition of certain patient groups are determinant factors in choosing one biopsy technique over another. The risk of bleeding cannot be excluded with any instrument, and liver biopsy is not recommended in most cases of suspected primary liver cancer because of needle track seeding of tumor cells. These considerations do not, however, exclude liver biopsy as a last-resort diagnostic tool when imaging or serum tests have proven consistently inconclusive or do not converge to an outcome.
The most commonly occurring complication of percutaneous liver biopsy is pain, present in up to 84% of procedures and ranging from mild discomfort to severe pain [28]. It is usually located in the right upper quadrant and referred to the right shoulder, with varying intensity and time of onset. Moderate to severe pain is present in fewer than 5% of all patients and may be the sign of a more severe complication, such as bleeding or puncture of the gallbladder [16,29]. The mechanisms that lead to pain after the biopsy maneuver are not fully understood; however, it is likely caused by bile or blood extravasation with subsequent capsular swelling (the capsule being the only liver component with sensory nerve endings) [30]. Another cause of upper abdominal pain is traction on the falciform ligament after the puncture. Cervical pain, as well as pain in the right shoulder, may also be caused by irritation of the phrenic nerve. A subcapsular hematoma may lead to respiratory pain, and irritation of the pleura or peritoneum may lead to vagal stimulation and consecutive vagal shock, manifested through bradycardia, severe hypotension, a weak pulse and intense pain in the upper abdomen [1]. In some cases of extreme pain, hospitalization and further imaging tests are required to determine the correct course of action for these patients.

However, the most important complication of liver biopsy is bleeding. The most severe bleedings occur intraperitoneally, when they determine a drop in the vital signs and can be visualized through imaging [16,31]. Urgent hospitalization and blood transfusion, possibly followed by surgery or radiological intervention, may be required. Nevertheless, these cases are scarce, with an incidence of 1 in 2,500 up to 1 in 10,000 biopsies, while less severe cases, which do not require blood transfusion or surgical maneuvers, are more frequent, at approximately 1 in 500 biopsies [16]. Serious bleeding-related complications usually occur within 2 hours of the procedure, and over 90% of all bleedings become evident within 24 hours. The clinical symptoms are revelatory, as patients experience hypotension and shock. Age and the underlying conditions are also predictive factors, as older patients and liver masses are more frequently associated with post-puncture bleeding. A correlation between the needle type and the risk of bleeding has also been cited in the literature, as cutting needles seem to pose an increased risk compared to their suction counterparts [15]. Other factors are related to operator experience and the diameter of the needles used [16].

A correlation between conventional coagulation tests and the risk of bleeding has not been sufficiently demonstrated until now; therefore, no firm recommendations in this regard are currently in place [16]. The option of inserting coagulation agents along the needle tract is considered, especially in the US, with no definite data on its ability to prevent possible bleeding. As already mentioned, the transvenous approach is preferred in certain categories of patients, as it is considered safer, even though several pooled analyses showed risks similar to those of standard percutaneous methods [10,16].
The single most severe complication of liver biopsy, caused in turn by severe bleeding, is patient death. No consistent data on post-procedural mortality exist in the literature, the most commonly quoted rate being less than or equal to 1 in 10,000 biopsies [16]; mortality seems to be greater after biopsies of malignant liver masses than after biopsies for diffuse parenchymal disease [6].

Other complications of liver biopsy include perforation of other viscera, bile peritonitis (a major complication which can result in death), infections (especially in post-transplant patients due to immunosuppressive medication), hemobilia, pneumothorax (instantly recognized on radiographs, and essential to diagnose quickly due to the high risk of death) and hemothorax. Correct usage of imaging methods, both when choosing the biopsy site and for surveillance of the procedure, minimizes many of these risks, especially those related to puncturing adjacent structures [16]. The risk of needle-track seeding when puncturing liver malignancies exists in 1 to 3% of all cases [32], as will be detailed below.

Current recommendations regarding conditions that require liver biopsy

The indications for liver biopsy have been greatly reduced since the recent introduction of accurate non-invasive tests which can evaluate the liver parenchyma with minimal or no patient trauma. The concept of liver biopsy may evolve even further if in vivo direct histological methods such as pCLE provide important additional data. It is most likely that the recommendations for liver biopsy will undergo further changes in the following years. A series of these advancements will be discussed separately within this chapter. Below, we describe some of the main indications for liver biopsy, either for diagnostic purposes or for evaluating and staging liver disease.

Grading and staging of chronic viral hepatitis

The recent surge of viral hepatitis cases (especially as a result of the increasing number of newly diagnosed virus C infections) represents a major health burden worldwide. With almost four million people infected in the United States alone, and between 130 and 170 million worldwide, for chronic hepatitis C virus (HCV), and more than double those figures for hepatitis B virus (HBV) infections, this ensemble of viral diseases currently represents the main cause of liver-related morbidity [33,34].

Nowadays, the role of liver histology in the positive diagnosis of chronic viral hepatitis has greatly diminished. However, it still plays a central role when assessing both the activity and the progression of the disease [8,35]. Sampling issues arise when evaluating liver parenchyma affected by chronic hepatitis, as the quality of the obtained specimens can greatly influence the semi-quantitative scores developed over the last four decades to quantify disease progression. A number of changes are present within the liver, and their heterogeneity makes the "10 complete portal spaces" paradigm essential when evaluating disease severity. All scoring systems are bound to yield significantly different results, primarily because of sample variability, but also as a result of the different levels of expertise of the pathologists involved in their evaluation. All modifications of the liver parenchyma (inflammation, necrosis or fibrosis) exhibit particularities and can be interpreted subjectively even within a scoring system [8].
The first approach to liver biopsy scoring for chronic hepatitis dates from the early 1980s, when the histological activity index (HAI) was introduced by Knodell and Ishak [21]. This model did not clearly delimit disease grade (that is, the intensity of any inflammatory activity present) from stage, which refers to the degree of fibrosis and parenchymal remodeling. The later modification performed by Ishak resolves most of these issues and is currently used worldwide, partially replacing or at least complementing the earlier Knodell classification. The preferred approach is a parallel evaluation using several scoring methods, such as the modified HAI, the Scheuer or Ludwig systems and the Knodell classification, or the METAVIR algorithm devised in France [23].

Abnormal hepatic biochemical tests, alcoholic and non-alcoholic liver disease

Chronically elevated hepatic biochemical parameters are a common concern for many patients during routine screenings or general consults. Gastroenterologists facing abnormal aspartate aminotransferase/alanine aminotransferase, gamma-glutamyltransferase or alkaline phosphatase levels have to conduct a thorough anamnesis to determine the underlying condition. Many such patients either acknowledge high alcohol consumption or are diagnosed with non-alcoholic fatty liver disease (NAFLD) associated with their lifestyle, while a few remain undiagnosed until they begin to display signs of liver cirrhosis (cryptogenic cirrhosis, or cirrhosis of unknown etiology). The latter two classes are usually diagnosed through liver biopsy, as no other condition can be identified from either their background or non-invasive investigations and blood tests [8,16].

The most common finding revealed by liver biopsy in these patients is macrovesicular steatosis, an intracellular lipid accumulation involving more than 5% of hepatocytes. This macrosteatosis is generally termed fatty liver disease (FLD) and can be identified either as alcoholic liver disease (ALD), when regular alcohol consumption above established thresholds is documented, or as NAFLD, when obesity, type 2 diabetes mellitus and/or hyperlipidemia are associated. Steatohepatitis, whether of alcoholic origin (alcoholic steatohepatitis, ASH) or metabolic (non-alcoholic steatohepatitis, NASH), shares histological similarities across both forms. NASH is recognized as a form of NAFLD with ballooning hepatocytes and necroinflammatory changes, as well as fibrosis and parenchymal remodeling. The NAFLD activity score (NAS) was developed in an attempt to objectively quantify the extent of this disease. This score sums three pathologic features, steatosis (0-3), lobular inflammation (0-3) and hepatocellular ballooning (0-2), on a 0 to 8 scale, 5 being the cut-off point for a definite diagnosis of NASH and 3-4 being labeled as borderline steatohepatitis [36,37].

Currently, even though liver biopsy is still regarded as the "gold standard" when diagnosing these conditions, no consensus has been reached. Liver biopsy therefore remains a controversial decision which should ultimately be performed only when a clear diagnosis cannot be extracted from serum values, imaging findings and clinical features [38].
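Because the NAS is a simple sum of three bounded sub-scores, the scoring rule described above can be captured in a few lines. The following Python sketch is illustrative only (the component ranges and thresholds follow the definitions cited in [36,37]); it is of course no substitute for histological judgment.

```python
def nafld_activity_score(steatosis: int, lobular_inflammation: int,
                         ballooning: int) -> tuple[int, str]:
    """Sum the three NAS components and map the total to the
    interpretation used in the text (>=5 NASH, 3-4 borderline)."""
    if not (0 <= steatosis <= 3 and 0 <= lobular_inflammation <= 3
            and 0 <= ballooning <= 2):
        raise ValueError("component out of its defined range")
    nas = steatosis + lobular_inflammation + ballooning  # total on 0..8 scale
    if nas >= 5:
        label = "consistent with definite NASH"
    elif nas >= 3:
        label = "borderline steatohepatitis"
    else:
        label = "NASH unlikely"
    return nas, label

print(nafld_activity_score(2, 2, 1))  # -> (5, 'consistent with definite NASH')
```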
Metabolic liver disease

Diseases that determine intrahepatic iron accumulation are, besides NAFLD and ALD, the main indications for liver biopsy when a metabolic condition is suspected. Hereditary hemochromatosis, in its various forms identified today, is routinely diagnosed and staged through liver biopsy [8,39]. Increased accumulation of iron within hepatocytes can also be seen in the context of NAFLD and the metabolic syndrome (syndrome X). These deposits are not distributed equally among the various regions of the liver; therefore, deeper biopsies are needed in order to collect more tissue for analysis [8,40]. For this purpose, at least two scores are currently used: the Deugnier and the Brissot scores [41,42]. The hepatic iron index is calculated through a mathematical formula which takes into account the hepatic iron concentration (evaluated on the biopsy specimen), the atomic weight of iron, and the age of the patient. An index above 1.9 is an indicator of hemochromatosis; however, its sensitivity is low, as it depends on the timing of the liver biopsy [8].

Focal liver lesions

Discovery of a focal liver lesion (FLL) can occur after imaging tests used routinely for either screening or diagnosis. The practitioner may encounter lesions of various sizes, numbers and locations, some of them associated with pre-existing conditions. This is especially the case for primary malignant liver tumors, whether hepatocellular carcinoma (HCC) or cholangiocarcinoma (CC). Early discovery of an FLL is possible in up to 60% of all cases, especially in developed countries where surveillance programs are well established and health services are available to the majority of the population, irrespective of location and economic status [43,44].

Imaging alone is currently the main diagnostic procedure for HCC, as modern contrast-enhanced techniques, whether CT or MRI, are sufficient to highlight the hallmark pattern of tumor vascularization. Diagnostic criteria in the United States of America, Europe and Asia stipulate that imaging techniques are sufficient to diagnose the majority of HCC lesions, biopsy being reserved for the few situations where imaging is unclear, where two methods are discordant, or where tumor size does not allow a precise imaging diagnosis [43][44][45]. A defining criterion when evaluating FLLs is the presence of an underlying hepatic condition such as hepatitis or cirrhosis.
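Returning to the hepatic iron index mentioned under metabolic liver disease above, the usual formulation divides the biopsy-measured hepatic iron concentration, converted to micromoles per gram of dry tissue via the atomic weight of iron (approximately 55.85 g/mol), by the patient's age. A minimal sketch, with the 1.9 threshold quoted in the text; the example values are hypothetical:

```python
IRON_ATOMIC_WEIGHT = 55.85  # g/mol, converts ug/g to umol/g

def hepatic_iron_index(hic_ug_per_g_dry: float, age_years: float) -> float:
    """Hepatic iron index: hepatic iron concentration (umol/g dry weight)
    divided by patient age in years."""
    hic_umol_per_g = hic_ug_per_g_dry / IRON_ATOMIC_WEIGHT
    return hic_umol_per_g / age_years

hii = hepatic_iron_index(hic_ug_per_g_dry=9000, age_years=45)
print(f"HII = {hii:.2f}", "-> suggests hemochromatosis" if hii > 1.9 else "")
```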
When HCC is suspected in cirrhotic patients, the criteria for liver biopsy are set by the size of the tumor. In nodules between 1 and 2 centimeters, the diagnosis should ideally be based on non-invasive criteria; however, confirmation through biopsy should be sought whenever possible. The evaluation should ideally be performed by a pathologist with extensive experience in evaluating liver biopsies. In case of inconclusive findings after the initial biopsy, a second one should be performed if no other imaging criteria emerge during the evaluation period. Nodules larger than 2 centimeters discovered through routine US should ideally be diagnosed through non-invasive procedures; however, when radiological findings are atypical, a liver biopsy should be obtained as confirmation [43][44][45]. A panel of immunohistochemical markers has been proposed for evaluating liver biopsies for HCC: a combination of glypican 3, heat shock protein 70 and glutamine synthetase is recommended for the differential diagnosis between early HCC and high-grade dysplastic nodules [46]. A final recommendation of the EASL-EORTC guidelines is that liver biopsy should be performed within the controlled settings of scientific research, for identifying new markers for HCC and for tissue bio-banking [44].

The current tendency in diagnostic medicine is to avoid liver biopsy when evaluating HCC [44]. The main reasons against performing liver biopsy are the high rate of sampling errors, which diminishes the sensitivity of the investigation; a higher rate of post-transplant recurrence in patients who underwent liver biopsy; and, finally, the small but well-established risk of needle-track seeding. In transplant referral centers, liver biopsy is performed more frequently, as there is an increased need for a correct final diagnosis; however, these procedures are subject to wide variation depending on country-specific regulations [43,44]. Another argument for liver biopsy in HCC cases that benefit from chemotherapy is the importance of histological grading. The response to local or systemic anti-angiogenic or antiproliferative agents might be dictated by the microscopic configuration of the tumor and the amount of angiogenesis markers present on histological samples [16].

The second most important primary liver malignancy is CC. It can also develop in the presence of an underlying liver condition, such as chronic biliary tract disease. Imaging diagnosis is sometimes difficult, as it may present contrast-enhancement patterns similar to those of HCC; the majority of CCs are solitary masses present in the hilum, while a minority develop in other regions [43,44]. Mixed CC/HCC forms may also be present, their non-invasive diagnosis being even more difficult. All these forms, whether atypical CCs or mixed presentations, are usually subjected (with various degrees of variability, depending on setting and context) to liver biopsy. Surgical intervention, whether resection or liver transplantation, is the approach that yields the best survival chances for the patient. Therefore, liver biopsy may be indicated, as well as concomitant biopsy of lymph nodes in the upper abdominal area [16].
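The size-based work-up just described is essentially a small decision rule. The following Python sketch encodes only the logic summarized in this text (nodules of 1-2 cm versus larger than 2 cm in a cirrhotic liver); it is an illustration of the described criteria, not a clinical decision tool, and the function name and parameters are introduced here for illustration.

```python
def hcc_workup_in_cirrhosis(nodule_cm: float, imaging_typical: bool) -> str:
    """Map nodule size and imaging pattern to the work-up suggested above."""
    if 1.0 <= nodule_cm <= 2.0:
        # non-invasive criteria preferred, biopsy confirmation when possible
        return "non-invasive criteria; seek biopsy confirmation when feasible"
    if nodule_cm > 2.0:
        if imaging_typical:
            return "non-invasive diagnosis (hallmark contrast pattern)"
        return "liver biopsy (atypical radiological findings)"
    return "nodule < 1 cm: not covered by the criteria summarized here"

print(hcc_workup_in_cirrhosis(1.5, imaging_typical=False))
```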
Metastases have the overall highest incidence among malignant liver lesions [47]. When a secondary malignant liver lesion is suspected and the physician cannot identify the primary site, liver biopsy is usually diagnostic, even when imaging fails to provide enough detail. If an underlying parenchymal disease is also suspected, biopsy should be performed outside the lesion site as well, for an extended and more precise diagnosis. A vast panel of markers may be employed in an immunohistochemistry study; however, the histologic architecture identified through standard techniques may be sufficient for an expert pathologist to determine the primary site of origin [1,16].

Other rare malignant or benign neoplasms of the liver parenchyma or bile ducts can ultimately be identified through histological analysis, after careful imaging-guided liver biopsy is performed. This diagnosis is often not possible on cross-sectional imaging studies or with serum tumor markers, as their specificity for such lesions is inadequate. An expert hepatologist should closely collaborate with an experienced pathologist, as the diagnosis is difficult most of the time. These lesions may develop in the presence of an underlying liver condition, which would support the clinical diagnosis or raise the suspicion of the clinician [1,16].

The majority of lesions discovered through imaging techniques in patients without pre-existing liver conditions are benign in origin, mostly solitary and occasionally multiple. They exhibit particular vascular patterns on contrast-enhanced imaging and are thus easily diagnosed without the use of invasive techniques. Such is the case of liver hemangiomas, mostly solitary benign tumors with characteristic contrast enhancement throughout all phases of an imaging investigation. Other lesions such as focal nodular hyperplasia are also usually solitary and may display distinct features such as a "central scar" or particular enhancement patterns (spoke-wheel enhancement, etc.). All these particularities have a morphological substrate: central hypoechoic areas which do not show vascular hyperenhancement usually correspond to areas of necrosis; zones of intense signal enhancement indicate high microvessel density and neo-angiogenic vessels; and the peripheral rim seen on US or CT reflects certain particularities of the fibrous capsule [1,16,44].
Overall, lesions may present as cystic, solid or vascular, all these particularities usually being identified through non-invasive procedures prior to liver biopsy. In the USA, for instance, liver biopsy is performed by radiologists, as they can perform the pre-biopsy or real-time assessment of the procedure, while in Europe most gastroenterologists or hepatologists perform the procedure themselves, under US surveillance [43,44]. A core biopsy is usually preferred to fine-needle aspiration, as histology is considered superior to cytology from a diagnostic perspective; another reason is that experts in evaluating histology are more numerous than cytologists. The risk of puncturing blood vessels, whether major arteries in the normal parenchyma or intra-tumoral vessels, is considerably diminished by real-time imaging guidance, for instance US with color Doppler. The risk of track seeding exists, even if it is extremely low (one study estimates a risk of 0.13%, while in other studies no such incidents were reported) [48,49]. A certain dependency on the technique and the size of the needle has also been shown [50]. Infectious lesions may be biopsied; even though echinococcal cysts were once considered an absolute contraindication, as puncturing can be associated with anaphylactic shock and death, it has been shown that these lesions can be aspirated with 19- or 22-gauge needles, taking all preparations for possible anaphylaxis [51].

Probe-based confocal laser endomicroscopy

The latest development in the histological evaluation of gastrointestinal structures is confocal laser endomicroscopy. It allows the in vivo evaluation of dysplasia and malignancies of the gastrointestinal tract, or can be used to obtain directed biopsies that allow rapid and more precise diagnoses [52,53]. The first embodiments of this technique required dedicated endoscopes for evaluating cavitary structures accessible from both ends of the digestive tract.

Recent advancements, however, have miniaturized the technology, so the imaging microprobe can be connected to 30,000 fiber-optic threads that enable point-to-point, real-time detection at 12 frames per second. The imaging device itself measures less than 1.5 millimeters in diameter, thus allowing its use through 19G or tru-cut biopsy needles, or its insertion by laparoscopy or NOTES [53]. This technology will allow in vivo, real-time imaging of liver histology, technically enhancing the capabilities of liver biopsy [54]. A few studies on animal models exist in the literature, detailing pCLE use for liver histological imaging [14,55,56]. The technique can be used for assessing the state of hepatocytes and the morphology of the liver tissue, or can be limited to the study of the exterior liver capsule, yielding interesting preliminary results in the setting of cirrhosis. Mennone et al. reported interesting results regarding a fibrotic pattern and collagen deposits in animal models with cirrhosis induced by bile duct ligation [14]. The technology shows promise and may someday allow safer histological assessment of patients with chronic liver disease irrespective of its stage, whether cirrhotic or complicated by conditions such as HCC.
Non-invasive imaging and serum tests for the assessment of fibrosis

Transient elastography (TE, Fibroscan®, developed by Echosens, Paris, France) and acoustic radiation force impulse (ARFI) imaging are two ultrasound-based methods for quantifying liver fibrosis without the need for histological assessment. Another approach is through serum markers of fibrosis, processed in complex mathematical formulas which give a quantitative result for liver stiffness, such as the Fibrotest (Biopredictive) and the aspartate transaminase to platelet ratio index (APRI).

TE is a novel and rapid non-invasive examination which involves minimal patient discomfort over a relatively short time (one examination may take 5-10 minutes, depending on the skeletal and adipose conformation of the patient). The device consists of a hand-held vibrating unit with an ultrasound transducer probe mounted on its axis, which generates medium-amplitude vibrations at a low frequency, thus inducing an elastic shear wave in the underlying tissue. The hand-held probe is connected to a modified tower US machine which registers the result and, through the on-screen software interface, presents the user with an elastogram as a function of depth over time. The patient lies on his or her side and the probe is placed against the skin on the median clavicular line, directed towards the anatomical location of the liver, at a 90-degree angle to the skin surface. Results are expressed in kilopascals (kPa). The median of a series of 10 measurements gives the final liver stiffness value, which is equivalent to an F-stage fibrosis measurement obtained through biopsy [2].

ARFI is another technology, using short-duration, high-intensity acoustic pulses which exert mechanical excitation upon the tissues, generating local displacement and resulting shear waves. Their velocity can be assessed in a selected cylindrical region of interest of 0.5 cm (length) x 0.4 cm (diameter), up to 5.5 cm below skin level. Results are expressed as velocities, in m/s [4].

Fibrotest-Actitest (Biopredictive, France) is a serologic marker-based algorithm which represents an alternative to invasive biopsy techniques. It has received clinical validation in patients with chronic hepatitis B and C, ALD and NAFLD. Fibrotest consists of a panel of markers designed to appreciate liver fibrosis: gamma-glutamyltranspeptidase (GGT), total bilirubin, alpha-2-macroglobulin, haptoglobin, and apolipoprotein A1. Necroinflammatory activity is appreciated through the Actitest component, which adds alanine transaminase (ALT) to the above-mentioned serum markers [3,57]. All these tests are performed in validated laboratories, owing to the complexity and variability of their different components, and the results are entered into a complex mathematical formula through a web-based interface, the end result being correlated with other quantitative scoring systems such as METAVIR, Knodell or Ishak [58].
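As a worked illustration of how a TE session is reduced to a single stiffness value, the sketch below takes the median of 10 valid acquisitions and applies commonly cited reliability checks and F-stage cut-offs. The specific thresholds (IQR/median of at most 0.30 for reliability; roughly 7.1, 9.5 and 12.5 kPa for F2, F3 and F4 in chronic hepatitis C) are assumptions drawn from the wider TE literature rather than from this text, and they vary by etiology and study.

```python
from statistics import median, quantiles

def te_stiffness(measurements_kpa: list[float]) -> tuple[float, bool, str]:
    """Median stiffness, a reliability flag, and an illustrative F stage."""
    m = median(measurements_kpa)
    q1, _, q3 = quantiles(measurements_kpa, n=4)   # quartiles of the session
    reliable = len(measurements_kpa) >= 10 and (q3 - q1) / m <= 0.30
    # illustrative HCV cut-offs; real cut-offs depend on etiology and study
    if m >= 12.5:
        stage = "F4 (cirrhosis)"
    elif m >= 9.5:
        stage = "F3"
    elif m >= 7.1:
        stage = "F2"
    else:
        stage = "F0-F1"
    return m, reliable, stage

readings = [6.8, 7.4, 7.0, 7.9, 7.2, 6.9, 7.5, 7.1, 7.3, 7.0]
print(te_stiffness(readings))  # -> (7.15, True, 'F2')
```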
The best results are provided by a combination of two or more non-invasive methods, one study in particular finding that Fibrotest plus Fibroscan offers the best diagnostic performance against liver biopsy as the gold standard, at least for advanced fibrosis (F values of 2 and beyond) or cirrhosis (F3 or F4) [2]. A similar conclusion was reached by another, more recent study performed by Boursier and collaborators [59]. Such combinations diminish the number of patients who require liver biopsy; however, the procedure cannot be excluded in all cases. Some studies have shown high variability between Fibroscan results, dependent on body mass index and population factors [60,61]. Discordance between liver biopsy staging and the estimates provided by non-invasive methods has also been identified [34]. It has been estimated that 30-40% of all patients investigated by a combination of non-invasive imaging and marker-based methods still require liver biopsy, in either sequential or simultaneous protocols [60,61].

Conclusion

Despite all its limitations and the advances in modern, less invasive techniques, liver biopsy remains the gold standard for evaluating a wide array of liver diseases.

The main concern when turning to tissue sampling through biopsy is the risk/benefit ratio, the decision ultimately belonging to the clinician involved. The risks may at times outweigh the expected diagnostic benefit, in which case other methods are preferred for the diagnosis.

Currently, it is recommended that all interpretations be based on proper tissue blocks, obtained with the correct technique. It is preferable that more than one pathologist with extensive experience in liver pathology formulate the final histological diagnosis. This is especially true for FLLs and liver malignancies, as benign features may at times overlap, making the diagnosis uncertain.

Modern imaging techniques allow precise non-invasive evaluation of liver fibrosis in the context of hepatitis; however, the correct methodology for interpreting these tests is yet to be established. Novel imaging approaches may in time open new perspectives for liver biopsy, by providing in vivo, real-time data on liver parenchymal features, which would prove useful for accurately diagnosing otherwise difficult-to-interpret pathologies.
2017-09-17T18:45:33.874Z
2012-11-21T00:00:00.000
{ "year": 2012, "sha1": "7e7eb10e728e4be607dcc510a2f4a3dfc65ef027", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/41006", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "96a2eb3cbf04789bd3cfe343bb4bbf9ac0ce9fed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214049469
pes2o/s2orc
v3-fos-license
Assessment of Waste to Energy Potential in the Central Zone of Afghanistan

The central zone of Afghanistan has enough cattle to be considered for generating biogas. The cattle population in the zone was 634,524, 647,229 and 633,362 head in 2012-13, 2014-15 and 2016-17, respectively. As a result of field experiments, the fresh manure generation of cattle in the zone is 19 kg/head/day, the recoverable fraction of the generated cattle manure is 80%, and the dry-matter proportion of the manure is 23.7%. Based on these manure parameters, about 834,320, 851,026 and 832,792 tons of recoverable dry matter could be generated in the three years mentioned, respectively. By using a biogas digester, this recoverable dry matter could be enough for generating about 86,769,319, 88,506,691 and 86,610,419 m3 of biogas in 2012-13, 2014-15 and 2016-17, respectively. The amount of generated biogas is equivalent to 1,735, 1,770 and 1,732 TJ of energy in the mentioned years, respectively. In the case study of Kabul province, it was found that no biogas plants have been constructed in the zone to date. For the financial evaluation of biogas utilization, a dairy of 24 cattle was selected. It was determined that the manure from 24 cattle can generate about 9 m3 per day (3,285 m3 per year) of biogas in a 24 m3 DSAC-Model biogas digester. By comparing the biogas energy value with the equivalent energy of LPG, the biogas has a value of 66,521.25 Afg per year (978 USD per year). Considering the cooking and lighting requirements of a family of 8 members, the generated biogas (9 m3/day) in the mentioned dairy farm can be enough for two families. Considering the situation of the zone, the DSAC-Model biogas plant was considered the most suitable among its various types. The techno-financial analysis results were quite attractive: the NPV was 2,664.6 USD, the B/C ratio 2.37, the IRR 33% and the discounted payback period (PP) 4.09 years (4 years and about one month). As all these financial indicators are in the acceptable range, biogas generation with the DSAC-Model biogas plant in the central zone of Afghanistan is beneficial.

In Afghanistan, the potentials of renewable energy resources remain largely unknown and their efficient usage is not promoted in the country. Animal manure is the main energy source, along with crop residues and wood. Women and children make manure cakes, and the sun-dried cakes are used for cooking and space heating. This animal manure is burned directly, which is not an efficient way of using it. As an alternative, the manure can be used for biogas production, which provides a better and cleaner fuel than animal dung. Burning manure in an open-fire stove causes indoor air pollution and respiratory diseases, whereas biogas is a cleaner, odorless and smokeless fuel. In addition, biogas can also be used for lighting and refrigeration. Cattle are the dominant animals, kept by most families all over the country. Generating biogas from cattle manure has two major advantages: first, biogas is a cleaner fuel than cattle manure; second, the digested cattle manure, called slurry, is a harmless fertilizer for agriculture. Although the utilization of biogas is very common in the surrounding countries, such as India, China, Nepal and Pakistan [1], the people of Afghanistan are unfamiliar with this source of renewable energy. To take advantage of this renewable energy source in Afghanistan, it is necessary to determine its potential and to analyze it techno-economically for a zone.
For this purpose, a study on the biogas energy potential of cattle manure and a techno-economic analysis of biogas utilization in the central zone (Kabul, Parwan, Kapisa, Logar and Wardak) of Afghanistan were conducted.

A. Livestock in Afghanistan

Afghanistan is an agricultural country, and livestock products account for 50% of agricultural GDP. The livestock population has fluctuated over the past 30 years, from about 4 million cattle and over 30 million sheep and goats to 3.7 million cattle and around 16 million sheep and goats in drought years [2]. Animals play an important role in the Afghan agriculture sector. Nearly 79% of rural households and 94% of the nomad (Kochyan) population keep some kind of animal, such as cattle, oxen, horses, donkeys, camels, goats, sheep and poultry, not only for meat, dairy products and eggs, but also for cooking fuel and fertilizer [3]. Nowadays, there are many small-scale dairies in urban and suburban areas, which provide a good source of revenue for farmers. The cattle manure produced in the dairies is used for domestic cooking and as fertilizer for crops in rural households. The main livestock kept by Afghan people are cattle, sheep, goats, donkeys and camels [1]. Cattle manure is a very promising feedstock for generating biogas. Biogas generation is very common in the world: globally, about 30 million households use biogas for cooking, heating and lighting [4]. Most of these are in China (25 million biogas digesters), India (nearly 4 million), Nepal (200,000) and Vietnam (150,000). In contrast, only 75 biogas digesters were in operation in Afghanistan in 2010 [5]. Based on animal numbers from the 2008-9 statistics, the theoretical biogas potential in Afghanistan was about 1,408 million cubic meters (32 trillion Btu) per year, nearly double the national energy consumption of 2005 (18 trillion Btu) [6]. Using cattle manure, nearly 896,000 domestic biogas plants could have been installed in Afghanistan, and about 26% of households could have been served with an efficient and clean fuel in 2008-9.

B. Biogas Technology

Biogas is produced by microorganisms through the anaerobic fermentation of biodegradable biomass. In this process, the organic fraction of the biomass is digested by bacteria in an anaerobic environment, producing CH4, CO2, H2 and a decomposed mass. Depending on the feeding material and its degradable fraction, biogas has a variable composition: about 50% to 70% methane (CH4), 30% to 40% carbon dioxide (CO2), 5% to 10% hydrogen (H2), 1% to 2% nitrogen (N2), 0.3% water vapor (H2O) and traces of hydrogen sulfide (H2S). Methane is a combustible gas. Its net calorific value is 20 MJ/m3 and the air-to-fuel ratio required for its combustion is 5.7. A temperature of about 700 °C is required for its ignition, and the density of methane is 0.94 kg/m3 [7].

C. Fixed Dome Biogas Plant

A fixed dome biogas plant is a single unit in which the dome acts as the gasholder and there are no moving parts. The plant is constructed underground, which helps avoid the effects of temperature variations and leaves the space above free for other activities [7]. The common types of fixed dome biogas plants are the Janatha, the Deenbandhu and the DSAC-Model.
The Janatha biogas plant is an Indian design, built entirely of masonry. The inlet and outlet structures of the plant are tank-shaped; the animal manure is fed through the inlet and the digested slurry is extracted from the outlet. The gas produced by slurry digestion in the digester rises and collects in the dome. As the amount of gas increases, it pushes the digested slurry towards the outlet and also pressurizes the gas leaving through the gas outlet. Since this is a fixed structure, the gas pressure is not constant, unlike in the floating-drum type [7].

The Deenbandhu biogas plant is an improved version of the Janatha plant, shown in Fig. 1. It was developed in India by Action for Food Production (AFPRO) in 1984 and can be constructed with locally available materials. Its construction cost is low (30-45 percent less than the Janatha). Losses through the inlet chamber are smaller and its practical retention time (RT) is close to the theoretical RT. The structure of this model is composed of two spheres of different diameters joined at their bases, with no need for masonry walls around the digester. The inlet and outlet are similar to those of the Janatha plant. To avoid the entrance of slurry into the gas outlet, the gas outlet is constructed 150 mm below the slurry outlet. The gas-holding capacity of this model is 33 percent of the total capacity of the digester [7].

The DSAC-Model biogas plant is a rectangular fixed dome digester, shown in Fig. 2. It was modified from the Chinese and Indian models and is used in the Philippines. The structure of this plant is built entirely from concrete, bricks and other similar locally available materials, so it is more durable. The plant is adaptable for small, medium and large scale applications with low-cost investment. It is environmentally friendly and can reduce pollution by 60% to 80%. In addition, it is self-stirring and can produce biogas equal to 35% to 60% of the digester volume [8].

D. Floating Drum Biogas Plant

The floating drum biogas plant is constructed underground from bricks, with a circular metal dome, shown in Fig. 3. A partition wall is built in the middle of the digester to avoid mixing of fresh and digested slurry. The gas produced is collected in the metal structure, which is movable: as the volume of produced gas increases, the drum rises, and it drops down as gas is withdrawn through the gas outlet. The drum is rotated horizontally to break scum formation in the digester, and a central guide frame is used for its vertical movement. The cost of the drum accounts for around 60 percent of the overall plant cost. The weight of the drum provides the gas with a constant pressure. The inlet and outlet tanks of the plant are the same as in the fixed dome biogas plant, and the gas pipe is provided at the top. This type is widely used in India [7].

III. METHODOLOGY

A. Determining Biogas Energy Potential

Animal manure is composed of organic material, moisture and ash. In an anaerobic environment it is decomposed, producing CH4, CO2 and stabilized organic materials (SOM). The energy potential of animal manure through biogas production is estimated by the following equations [11]:

EPmanure = ABPmanure x CVbiogas
ABPmanure = ADMR x Fvs x Ybiogas
ADMR = N x Mfresh x 365 x Frec x Fdm

In the above equations, EPmanure is the annual energy potential of animal manure (TJ/y); ABPmanure is the annual amount of biogas from recoverable manure (Nm3/y); CVbiogas is the calorific value of biogas (MJ/m3); ADMR is the annual dry matter recoverable (kg/y); Fvs is the volatile-solids fraction of the dry matter; Ybiogas is the biogas yield (m3 per kg of volatile solids); N is the cattle population (head); Mfresh is the fresh manure generation (kg/head/day); Frec is the recoverable fraction of the fresh manure; and Fdm is the dry-matter fraction of the manure.
B. Method for Sizing the DSAC-Model Biogas Plant

There are two approaches to sizing the biogas digester: determining the size of the digester for a specific amount of slurry to be treated, or determining the size of the digester for a specific amount of biogas needed. If the slurry to be digested is known, the following equations can be used to size the DSAC-Model biogas plant [8]:

Slurry volume (m3/day) = animal manure (m3/day) x 2 (manure-to-water ratio = 1:1)
Digester volume (m3) = slurry volume (m3/day) x RT (days)

As the DSAC-Model biogas plant is a rectangular digester, the length (L), width (W) and height (H) of the digester can be determined based on space availability.

C. Method for Financial Analysis

The financial analysis determines the suitability of an investment. It is usually used to evaluate whether a project is stable and profitable enough to be invested in. The indicators are net present value (NPV), benefit-to-cost ratio (B/C), internal rate of return (IRR) and payback period (PP) [12].

A. Cattle Population

In the central zone of Afghanistan, groups of cattle are kept in dairy farms as well as in households for producing milk. Keeping a group of cattle makes it easy to collect their daily manure for biogas generation. In this zone of the country, most cattle are kept in Kapisa province, followed by Parwan, Kabul, Logar and Wardak. In 2012-13, 2014-15 and 2016-17 the total cattle population in the central zone of Afghanistan was 634,524, 647,229 and 633,362 head, respectively. As shown in Table I, the cattle population increased from 2012-13 to 2014-15 and decreased again from 2014-15 to 2016-17 [13].

B. Cattle Manure Parameters

For estimating the biogas potential of cattle manure and the energy potential of the biogas, different characteristics of the cattle manure are needed, such as fresh manure generation per head per day, the recoverable fraction of the fresh manure, the fraction of dry matter, the fraction of volatile solids, the biogas yield and the calorific value of biogas. For determining these parameters, a dairy in Kabul province of Afghanistan was selected during data collection in December 2018. There were 24 cattle in the dairy. The cattle manure was collected twice daily, the fresh manure was weighed, and samples were taken for determining the manure parameters. The field measurements showed that the average fresh cattle manure production is 19 kg per head per day, and its recoverable fraction is 80 percent of the total produced manure. The recoverable fraction is high compared with other countries because Afghanistan does not have grazing areas for cattle feeding, so they are always farmed in small, confined areas. Moreover, cattle manure has energy and fertilizer value in Afghanistan, and the standing areas of the cattle are brick-covered in most places, which helps to collect the maximum amount of manure. In comparison, the collection efficiency of cattle manure is 50 percent in Sri Lanka [14], 60 percent in India [15] and 80 percent in Thailand [16]. The proportion of dry matter resulting from the field test is 23.7%. The other parameters are taken from previous studies on biomass energy potential in other countries: a cattle manure volatile-solids percentage of 52 percent [1], a biogas yield of 0.2 m3 per kg of volatile solids, and a calorific value of 20 MJ/m3 of biogas, as shown in Table II [14].
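The sizing rule in Section III-B, combined with the manure parameters just reported, can be sketched in a few lines of Python. The manure density of roughly 1,000 kg/m3 and the retention time of about 26 days are assumptions made here for illustration (the paper states neither explicitly); with them, the 24-cattle case study arrives at approximately the 24 m3 digester used later in the text.

```python
# Minimal sketch of the DSAC-Model sizing rule from Section III-B.
# ASSUMPTIONS (not stated in the paper): fresh manure density ~1,000 kg/m3,
# retention time RT ~26 days, back-computed to match the 24 m3 case study.
def digester_volume_m3(manure_kg_per_day: float, rt_days: float,
                       manure_density_kg_m3: float = 1000.0) -> float:
    manure_m3_per_day = manure_kg_per_day / manure_density_kg_m3
    slurry_m3_per_day = manure_m3_per_day * 2      # manure : water = 1 : 1
    return slurry_m3_per_day * rt_days             # digester volume, m3

# 24 cattle x 19 kg/head/day = 456 kg/day of fresh manure
print(digester_volume_m3(24 * 19, rt_days=26))     # ~23.7, close to 24 m3
```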
C. Annual Dry Matter Recoverable from the Manure

The dry matter recoverable was estimated based on the cattle manure parameters described in Table II and was 834,320, 851,026 and 832,792 tons in 2012-13, 2014-15 and 2016-17, respectively. As most of the cattle population is concentrated in Kapisa province, followed by Parwan, the dry matter recoverable is also high in these two provinces compared with Kabul, Logar and Wardak. As shown in Table III, the dry matter recoverable increased by 2% from 2012-13 to 2014-15 and decreased again by about 2.14% from 2014-15 to 2016-17. The estimated dry matter recoverable is used as fertilizer for crops and as fuel for cooking and space heating; the cattle manure cakes are sun-dried before being used as fuel. There is no study or data for Afghanistan showing the fractions of cattle manure used as fertilizer and as fuel. Table III shows the annual dry matter recoverable from cattle manure in the central zone of Afghanistan.

D. Annual Biogas Potential

Considering the cattle manure parameters and the dry matter recoverable, the biogas potential of cattle manure was estimated for the three selected years. As the biogas potential depends on the cattle population and the manure parameters, the most biogas-productive provinces are Kapisa and Parwan, as shown in Table IV (annual biogas potential by province, thousand m3):

Year / Kabul / Parwan / Kapisa / Logar / Wardak
2012-13 / 13,625 / 23,664 / 27,250 / 11,474 / 10,757
2014-15 / 13,898 / 24,138 / 27,795 / 11,703 / 10,972
2016-17 / 13,600 / 23,621 / 27,200 / 11,453 / 10,737

E. Annual Biogas Energy Potential

The annual biogas energy potential is estimated from the annual biogas potential of cattle manure and the biogas calorific value, discussed in the previous sections. Table V shows the annual biogas energy potential in the five provinces of the central zone of Afghanistan in the three selected years.

For this case study, a dairy was selected in Kabul province. Considering the environmental situation, material availability and construction expertise, the DSAC-Model biogas plant was considered the most suitable among the other plant types, such as the Janatha, the Deenbandhu and the floating drum. About 24 cattle were farmed in the dairy selected for this study, and the required cattle manure parameters were measured on site in December 2018. Based on the cattle population and the manure generation potential, a techno-financial analysis of the identified suitable biogas plant was carried out. The DSAC-Model biogas plant was designed based on the design method described in Section III-B of the methodology. For this case, the dairy needs a 24 m3 digester, which costs about 1,949 USD. Considering the number of cattle, the daily biogas generation of the plant would be 9 m3. To establish the serviceability of the daily generated biogas, biogas appliances and their consumption rates are required; here, biogas cook stoves and lamps are considered. A double-burner biogas stove consumes 0.44 m3 of biogas per hour and a biogas lamp consumes 0.14 m3 per hour [8]. In Afghanistan, the average family size is 7.4 people [17]. In this case, one double-burner biogas stove operating 5 hours daily for cooking and 4 biogas lamps operating 4 hours at night were assumed for a family of 8 members. Based on these parameters, the family requires 2.2 m3 of biogas for daily cooking and 2.24 m3 for lighting, giving a total daily biogas requirement of 4.44 m3.
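The full estimation chain of Sections C through E can be reproduced directly from the Table II parameters. The Python sketch below is a straightforward restatement of the equations in Section III-A; running it with the 2012-13 cattle population returns the paper's own figures (about 834,320 tons of dry matter, 86.77 million m3 of biogas and 1,735 TJ).

```python
# Parameters from Table II of the study.
M_FRESH   = 19.0    # fresh manure, kg/head/day
F_REC     = 0.80    # recoverable fraction of fresh manure
F_DM      = 0.237   # dry-matter fraction of the manure
F_VS      = 0.52    # volatile-solids fraction of the dry matter
Y_BIOGAS  = 0.2     # biogas yield, m3 per kg of volatile solids
CV_BIOGAS = 20.0    # calorific value of biogas, MJ/m3

def manure_energy(cattle_heads: int):
    """Return (dry matter recoverable [t/y], biogas [m3/y], energy [TJ/y])."""
    admr_kg = cattle_heads * M_FRESH * 365 * F_REC * F_DM   # kg dry matter/y
    biogas_m3 = admr_kg * F_VS * Y_BIOGAS                   # m3 biogas/y
    energy_tj = biogas_m3 * CV_BIOGAS / 1e6                 # MJ -> TJ
    return admr_kg / 1000, biogas_m3, energy_tj

dmr, gas, energy = manure_energy(634_524)   # 2012-13 zone cattle population
print(f"{dmr:,.0f} t, {gas:,.0f} m3, {energy:,.0f} TJ")
# -> 834,320 t, 86,769,319 m3, 1,735 TJ
```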
As the daily biogas generation potential is 9 m3, it can fulfill the cooking and lighting requirements of two families of 8 members, or a single family of 16 members, in the central zone of Afghanistan. To find the annual benefit of the biogas plant, the unit value of the generated biogas must be estimated, which can be derived from its fuelwood or LPG equivalent: 1 m3 of biogas is equivalent to 3.47 kg of fuelwood or 0.45 kg of LPG [8]. The biogas potential of the plant is 9 m3 daily, or 3,285 m3 per year, which equals 11,398.95 kg of fuelwood or 1,478.25 kg of LPG per year. In Kabul, the cost of fuelwood was 8.63 Afg per kg (0.15 USD per kg) and the cost of LPG was 45 Afg per kg (0.66 USD per kg) in December 2018. Based on the fuelwood cost, the estimated value of the biogas is 98,372.94 Afg per year (1,710 USD per year), and based on the LPG cost it is 66,521.25 Afg per year (978 USD per year). Here we take the biogas value based on the LPG cost, 66,521.25 Afg per year (978 USD per year), as the annual revenue for the biogas plant owner. Based on the plant type and construction, the lifetime of the plant is assumed to be 20 years. After the first year, the 4.83 percent mean inflation rate is applied to the annual value of the biogas [18]. The financial analysis of the biogas plant is based on the investment in the plant, its service period, the annual revenue, the inflation rate, corporate taxes and the interest rate in Kabul, Afghanistan. The total investment for the 24 m3 DSAC-Model biogas plant is 132,533 Afg (1,949 USD), and the annual biogas revenue of 66,521.25 Afg (978 USD) is escalated by Afghanistan's 4.83 percent mean inflation rate each subsequent year. The corporate tax rate is 20 percent [19] and the interest rate is 15 percent in Afghanistan [18]. Based on these parameters, the net present value (NPV), benefit-to-cost ratio (B/C), internal rate of return (IRR) and discounted payback period (PP) of the biogas plant were estimated using the method described in Section C of the methodology; the results are shown in Table VII. As all these indicators are in the acceptable ranges, the investment in biogas generation from cattle manure in the central zone of Afghanistan is beneficial.

This study was completed with the achievement of two main objectives. The first objective was to assess the biogas energy potential of cattle manure in the central zone of Afghanistan. In the central zone of the country, the total cattle population was 634,524, 647,229 and 633,362 head in 2012-13, 2014-15 and 2016-17, respectively. Based on the manure parameters determined from the field and experiments, shown in Table II, about 834,320, 851,026 and 832,792 tons of recoverable dry matter could be generated in the three years mentioned, respectively. By using a biogas digester, about 86,769,319, 88,506,691 and 86,610,419 m3 of biogas could be generated in 2012-13, 2014-15 and 2016-17, respectively, equivalent to 1,735, 1,770 and 1,732 TJ of energy in those years. The second objective of the study was to analyze the techno-economic aspects of biogas utilization in the central zone of Afghanistan. From the field observations, it was noted that biogas plants are not currently used in the central zone. For the evaluation of biogas utilization in the zone, a dairy farm was selected in Kabul province.
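The discounted cash-flow indicators reported in Table VII can be sketched as below. Because the paper does not spell out every cash-flow detail (for example, operation and maintenance costs or the exact tax base), this is an illustrative computation under the stated parameters rather than an exact reproduction of the published NPV of 2,664.6 USD.

```python
# Illustrative discounted cash-flow sketch under the stated parameters.
# ASSUMPTION: net cash flow = inflation-escalated revenue less 20% tax;
# O&M costs and the exact tax base are not specified in the paper.
INVESTMENT = 1949.0   # USD, 24 m3 DSAC-Model plant
REVENUE_Y1 = 978.0    # USD/year, LPG-equivalent value of 3,285 m3 of biogas
INFLATION  = 0.0483   # mean inflation rate applied after year 1
TAX_RATE   = 0.20     # corporate tax rate
DISCOUNT   = 0.15     # interest (discount) rate
LIFE_YEARS = 20       # assumed plant lifetime

def indicators():
    pv_benefits, payback_year = 0.0, None
    for t in range(1, LIFE_YEARS + 1):
        cash = REVENUE_Y1 * (1 + INFLATION) ** (t - 1) * (1 - TAX_RATE)
        pv_benefits += cash / (1 + DISCOUNT) ** t
        if payback_year is None and pv_benefits >= INVESTMENT:
            payback_year = t   # discounted payback period, whole years
    npv = pv_benefits - INVESTMENT
    return npv, pv_benefits / INVESTMENT, payback_year

npv, bc_ratio, payback = indicators()
print(f"NPV = {npv:,.0f} USD, B/C = {bc_ratio:.2f}, payback in year {payback}")
```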
Based on the cattle population in the dairy (24 cattle) and the manure parameters, it was determined that about 9 m3 per day (3,285 m3 per year) of biogas can be generated in a 24 m3 DSAC-Model biogas digester. By comparing the biogas energy value with the equivalent energy of LPG, it was determined that the generated biogas has a value of 66,521.25 Afg per year (978 USD per year). The daily generated biogas (9 m3) in the mentioned dairy farm could be enough for the lighting and cooking requirements of a single family of 16 members or two families of 8 members. For the conditions of the central zone, various biogas plant types used in neighboring countries were discussed, and among them the DSAC-Model biogas plant was considered the most suitable; its techno-financial analysis was carried out for the mentioned dairy case. The techno-financial analysis results were quite attractive: the NPV was 2,664.6 USD, the B/C ratio 2.37, the IRR 33% and the discounted payback period (PP) 4.09 years (4 years and about one month). As all these financial indicators are in the acceptable range, biogas generation with the DSAC-Model plant is beneficial and attractive in the central zone of Afghanistan.
2020-02-06T09:02:54.964Z
2019-12-30T00:00:00.000
{ "year": 2019, "sha1": "437796b313ab2b4b0b8e89037260df2a2f8a80b0", "oa_license": "CCBY", "oa_url": "https://www.ejers.org/index.php/ejers/article/download/1698/732", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d78fe9c2688ffcce8ba0b3a9cb62abbec35d4cb0", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
270388098
pes2o/s2orc
v3-fos-license
Recurrent ventricular tachycardia in a patient with A19D mutation-associated hereditary transthyretin amyloidosis: a case report

Abstract

Background: Previous literature suggests that patients with transthyretin amyloidosis (ATTR) experience a high burden of ventricular arrhythmias. Despite this evidence, optimal strategies for arrhythmia prevention and treatment remain subject to debate.

Case summary: We report the case of a patient with hereditary ATTR cardiomyopathy who developed recurrent ventricular tachycardia prior to a decline in his left ventricular ejection fraction (LVEF). Although he ultimately received an implantable cardioverter-defibrillator (ICD) for secondary prevention of ventricular tachycardia, his clinical course raises the question of whether more aggressive arrhythmia prevention upfront could have prevented his global functional decline.

Discussion: Given the advent of new disease-modifying therapies for ATTR, it is imperative to reconsider antiarrhythmic strategies in these patients. New decision tools are needed to determine what additional parameters (beyond LVEF ≤ 35%) may warrant ICD placement for primary prevention of ventricular arrhythmias in these patients.

Learning points

• Transthyretin amyloidosis (ATTR) cardiomyopathy causes fibril deposition within the cardiac conduction system, the cardiac microvasculature, and the myocardial tissue, provoking the development of scar tissue. This late evolution of the disease might be associated with an increased risk for ventricular arrhythmias.

• Left ventricular ejection fraction ≤ 35% is a threshold reached very late in ATTR cardiomyopathy; thus, it is imperative to elucidate other indices (e.g. left ventricular global longitudinal strain on echocardiography or scar burden on cardiac magnetic resonance imaging), which may be more useful for identifying arrhythmia risk.

Introduction

Transthyretin amyloidosis (ATTR) is a chronic condition caused by the misfolding of transthyretin monomers into fibrils, which most commonly accumulate in cardiac and/or peripheral nervous system (PNS) tissues. Among the two forms of ATTR, wild-type ATTR (wtATTR) is the most common and primarily affects the elderly (mean age 75 years at diagnosis). Although less common, hereditary ATTR (hATTR) affects younger patients (ages 30-80 years) and is caused by transthyretin gene mutations [1]. The clinical presentation of hATTR depends on the causal mutation involved, of which over 130 variations have been identified [2]. Transthyretin amyloidosis-related cardiomyopathy (ATTR-CM) and PNS involvement are most common; however, ocular, gastrointestinal, and musculoskeletal manifestations can also occur, including carpal tunnel syndrome and lumbar spinal stenosis [3]. From a cardiac standpoint, the deposition of fibrils into the myocardium causes left ventricular hypertrophy (LVH), which often precipitates restrictive cardiomyopathy and progressive heart failure. Moreover, patients with ATTR-CM may suffer from both conduction abnormalities and arrhythmias. Proposed mechanisms include the deposition of fibrils within the cardiac conduction system, inflammation, fibrosis of damaged cardiomyocytes, and myocardial ischaemia caused by microvascular fibril deposition [4].

In addition to fluid and arrhythmia management, current strategies for treating ATTR include transthyretin stabilizers (tafamidis) and gene silencers (patisiran and inotersen), which may slow disease progression [5]. Clinical trials investigating monoclonal antibody treatments are ongoing [6].
To date, the only curative treatment is orthotopic heart transplantation (OHT) (± liver transplantation), which is limited by the availability of donor organs [7]. Without OHT or specific treatment, life expectancy following hATTR-CM diagnosis is 69 months [8].

Case presentation

In 2019, a 57-year-old Caucasian patient with no significant past medical history was seen in our centre for dyspnoea and lower extremity pain. He underwent transthoracic echocardiography (TTE) revealing severe concentric LVH, a left ventricular ejection fraction (LVEF) of 52%, and LV strain of −13% with apical sparing (Figure 1). Given the patient's age, this was followed by cardiac magnetic resonance imaging (MRI) (Figure 1) and laboratory testing to exclude light chain (AL) amyloidosis.

After confirming a diagnosis of ATTR-CM by technetium-99 scintigraphy, the patient underwent genetic testing, revealing a rare A19D mutation (substitution of alanine with aspartic acid at position 19) [9]. Given this hereditary aetiology, thorough ophthalmologic, gastrointestinal, and musculoskeletal examinations were performed to determine the degree of extracardiac involvement, which revealed early polyneuropathy and ophthalmologic damage. At this time, the patient was started on diuretics and tafamidis 61 mg.

In 2020, given his ongoing dyspnoea [New York Heart Association (NYHA) Class II] and neuropathy, the patient was started on patisiran. In 2021, he was found to have atrial fibrillation during exercise and underwent pulmonary vein isolation, achieving sinus rhythm.

In summer 2023, the patient returned to clinic for worsening symptomatology, including lightheadedness on exertion and functional decline. He underwent a stress test revealing a VO2 max of 15.2 mL/min/m2 (59% of the theoretical value) and no sustained ventricular arrhythmias. Three-dimensional transthoracic echocardiography showed an LVEF of 44%.

In July 2023, the patient called the local emergency medical services (EMS) for an episode of chest pain and palpitations at home. An electrocardiogram revealed a monomorphic, wide-complex tachycardia at a rate of 176 b.p.m. (Figure 2A), which resolved spontaneously. After a recurrence of this ventricular tachycardia (VT) in the hospital, he was started on amiodarone and admitted to the intensive care unit, where he underwent coronary angiography (without significant coronary artery lesions), repeat AL testing, and repeat TTE to exclude other causes of arrhythmia. A cardiac resynchronization therapy defibrillator (CRT-D) was placed for secondary prevention of VT. A few weeks later, cardiac MRI revealed a severely hypertrophied LV with diffuse fibrosis and an LVEF of 26% (Figure 3).

Several months later, in October 2023, the patient called the EMS again after an episode of exercise-induced VT at home. During transport, his VT deteriorated into ventricular fibrillation, necessitating six electric shocks. The only trigger found was mild hypokalaemia. He was started on bisoprolol (2.5 mg/day); however, his arrhythmias persisted (Figure 2B). Ventricular tachycardia ablation was considered; however, it was not pursued given the multi-focal origin of his VT. With the progression of the patient's symptoms (NYHA Class III) and global functional decline, he was listed for OHT and liver transplantation (Figure 4).

Discussion

Current literature on amyloidosis-associated ventricular arrhythmias exists; however, most studies have focused on AL amyloidosis, leaving ATTR-CM largely understudied in terms of optimal treatment strategies.
Despite this lack of guidelines, there is evidence that the burden of new VT in patients with ATTR-CM is significant. In an American cohort of 16 ATTR-CM patients with implantable cardioverter-defibrillators (ICDs) placed for primary prevention, Kim et al. [10] reported an appropriate ICD therapy rate of 25%. In another cohort of 25 hATTR patients (majority Val122Ile mutations) with ICDs placed for primary prevention, Brown et al. reported an appropriate ICD therapy rate of 28%. However, comparing those with LVEF ≤ 35% with and without ICDs, Brown et al. [11] did not reveal any significant ICD survival benefit (3.3 ± 0.5 vs. 2.8 ± 0.4 years, P = 0.699).

Nevertheless, this lack of demonstrated survival benefit may be related to the fact that the current guideline threshold for ICD placement (LVEF ≤ 35%) [12] is reached very late in ATTR-CM. As such, LVEF may not be the best predictor of future VT risk in these patients. Another group [13] studied the rates of appropriate ICD therapy (84% primary prevention, with implantation according to hospital-specific criteria). In this cohort, the appropriate ICD therapy rate was 27%, with a mean LVEF of 48% in patients experiencing therapy, during a follow-up period of 17 ± 13.7 months, comparable with the ICD therapy rates seen in LVEF ≤ 35% cohorts.

Beyond device-related strategies, current medical strategies for preventing and treating ATTR-associated arrhythmias are hotly debated. Given their negative chronotropic and inotropic effects, which can inadvertently reduce cardiac output in a stiff ventricle, beta-blockers are not typically recommended. However, a recent study by Ioannou et al. [14] showed that low-dose bisoprolol (≤2.5 mg daily) may improve prognosis in patients with LVEF < 40%. That said, the antiarrhythmic efficacy of this dose remains uncertain. Per recent papers, amiodarone may also be used [15]. To our knowledge, there are no data on VT ablation in ATTR patients, perhaps due to the multi-focal triggers for VT in ATTR. Thus, to date, OHT remains the only curative treatment for ATTR-associated arrhythmias. Orthotopic heart transplantation is more frequently pursued in hATTR patients given their younger age and lack of comorbidities relative to wtATTR. In a retrospective study of eight patients with ATTR-CM transplanted between 2008 and 2017, Kristen et al. [7] showed a 5-year survival of 75%, comparable with that of patients transplanted for other heart diseases. In the event of associated neuropathy, combined cardio-hepatic transplantation is often proposed, with results comparable with heart transplantation alone (75.8% survival at 5 years). Of note, OHT is contraindicated in those with severe multi-systemic involvement (gastrointestinal damage or autonomic neuropathy).

Conclusion

Although rare, ventricular arrhythmias should be considered a complication and a marker of poor prognosis in patients with ATTR-CM. Additional studies are needed to clarify the antiarrhythmic strategy in these patients. Given the emerging treatments for patients with ATTR-CM, strategies for assessing myocardial damage should be elucidated, as ATTR disease lesions may increase the risk for both inflammation- and scar-related arrhythmias.
Lead author biography

Tanguy Bois is a cardiology resident (4th year) at Rennes University Hospital, specializing in echocardiography. His fields of interest are cardiomyopathies and heart failure. He has carried out internships in specialized units in the fields of heart failure, heart transplantation, sports medicine, and intensive care.

Consent: The authors confirm that written consent for submission and publication of this case report, including the images and associated text, has been obtained from the patient in line with COPE guidance.

Figure 1: Baseline imaging utilized to establish the patient's diagnosis of transthyretin amyloidosis-related cardiomyopathy (2019). (A) Baseline patient electrocardiogram with no abnormalities besides R wave flattening in V3. (B, C) Baseline patient transthoracic echocardiography, which revealed an LVEF of 52% with biventricular hypertrophy (septal thickness 18 mm) and left atrial dilatation (volume = 53 mL/m2). (D) Baseline patient MRI revealing fibrosis across all four cardiac chambers.

Figure 2: First two recorded episodes of ventricular tachycardia experienced by the patient (2023). (A) Patient's first recorded episode of ventricular tachycardia (176 b.p.m.) showing a positive concordance morphology in the precordial leads with a negative and inferior axis, suggesting an inferobasal septal origin. (B) Patient's second recorded episode of ventricular tachycardia (162 b.p.m.) showing left bundle branch block morphology, transition in V5, and a superior axis, suggesting an inferomedial septal origin.

Figure 3: Repeat cardiac MRI performed following recurrent episodes of ventricular tachycardia (2023). (A) Cardiac MRI demonstrating worsening of the patient's left ventricular hypertrophy (interventricular septum newly hypertrophied to 25 mm) and further reduction in LVEF to 26%. Notably, hypertrophy of the interatrial septum is also seen. (B) Elevation in the patient's native T1 signal. (C, D) Diffuse, circumferential late gadolinium enhancement of all four cardiac chambers appreciated in 2023, which notably includes the atria and the lateral wall of the right ventricle. The presence of subendocardial sparing argues against an ischaemic aetiology.

Figure 4: Decline in the patient's left ventricular strain from diagnosis (2019) to case conclusion (2023). The figure shows "bull's eye" strain diagrams over the progression of the patient's disease from 2019 to 2023. Dark red indicates a higher left ventricular strain value; light blue indicates a lower (worse) left ventricular strain.
2024-06-12T15:05:29.079Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "f7e77ef31b5b08de4e8eed3da0d04f8867d5ebb5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/ehjcr/ytae273", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a9cf70043f89e4749406425f988237b763d81fe", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15344919
pes2o/s2orc
v3-fos-license
Minimum Lateral Bone Coverage Required for Securing Fixation of Cementless Acetabular Components in Hip Dysplasia Objectives. To determine the minimum lateral bone coverage required for securing stable fixation of the porous-coated acetabular components (cups) in hip dysplasia. Methods. In total, 215 primary total hip arthroplasties in 199 patients were reviewed. The average follow-up period was 49 months (range: 24–77 months). The lateral bone coverage of the cups was assessed by determining the cup center-edge (cup-CE) angle and the bone coverage index (BCI) from anteroposterior pelvic radiographs. Further, cup fixation was determined using the modified DeLee and Charnley classification system. Results. All cups were judged to show stable fixation by bone ingrowth. The cup-CE angle was less than 0° in 7 hips (3.3%) and the minimum cup-CE angle was −9.2° (BCI: 48.8%). Thin radiolucent lines were observed in 5 hips (2.3%), which were not associated with decreased lateral bone coverage. Loosening, osteolysis, dislocation, or revision was not observed in any of the cases during the follow-up period. Conclusion. A cup-CE angle greater than −10° (BCI > 50%) was acceptable for stable bony fixation of the cup. Considering possible errors in manual implantation, we recommend that the cup position be planned such that the cup-CE angle is greater than 0° (BCI > 60%). Introduction Developmental dysplasia of the hip (DDH) is a common cause of hip osteoarthritis [1], characterized by insufficient acetabular coverage on the femoral head and shallow acetabular concavity [2]. During cementless total hip arthroplasty (THA) in patients with DDH, a hypoplastic acetabulum often makes it difficult to obtain sufficient bone coverage and initial stability of the acetabular component (cup), especially in cases of severe dysplasia [3]. Several techniques have been reported to manage insufficient bone coverage, including structural autograft [4], superior placement of the cup [5], and medialization of the cup [6]. However, the minimum bone coverage required on the porous-coated cup for securing stable fixation without these special techniques remains unclear, since the previously reported values vary greatly among studies, ranging from 50% to 80% [7][8][9][10][11][12][13][14], and adequate evidence is not available to determine the absolute value. Thus, the purpose of the present study was to determine the effect of lateral bone uncoverage on the fixation of the porous-coated cups and the minimum requirement for lateral bone coverage on the cup. Patients. We reviewed the clinical and radiographic data of 260 consecutive patients (281 hips) with hip osteoarthritis who underwent primary THA using cementless components between April 2010 and March 2013. The institutional ethics committee of JCHO Kyushu Hospital and Kyushu University Hospital approved this study. The inclusion criterion for this study was hip osteoarthritis secondary to hip dysplasia, defined as a lateral center-edge angle of Wiberg [15] of less than 20 ∘ on anteroposterior pelvic radiographs. Of the 260 patients, 224 patients (243 hips) met this criterion. The exclusion criteria included lack of a minimum 24-month follow-up, prior acetabular osteotomy, and other hip diseases. Therefore, 4 patients (5 hips) with a history of acetabular osteotomy and 20 patients (22 hips) who had not been followed up for a minimum of 24 months were excluded from this study (follow-up rate: 91%). One patient (1 hip) with a structural bone graft was also excluded. 
A total of 199 patients (215 hips) were finally eligible for this study. They included 31 men and 168 women with an average age of 66.8 ± 8.4 years (range: 50-85 years). The average body mass index was 24.9 ± 3.8 kg/m 2 (range: 14.6-37.8 kg/m 2 ). The average follow-up period was 49.4 ± 11.9 months (range: 24-77 months). According to the classification system of Crowe et al. [3], 175 hips were classified as type I, 29 hips were classified as type II, 10 hips were classified as type III, and 1 hip was classified as type IV. Fifteen patients (15 hips) had a history of femoral osteotomy. Sixteen patients had staged bilateral procedures during the study period. The patients were evaluated preoperatively, at 3, 6, and 12 months postoperatively, and annually thereafter, both radiographically and clinically. Radiographic evaluations included anteroposterior pelvic and cross-table lateral view radiographs. Further, their Merle d'Aubigné and Postel hip scores were recorded for clinical evaluation. Surgical Procedures. Total hip arthroplasties were performed through the posterolateral approach in 209 hips and through the trochanteric approach in 6 hips. The cups were implanted using the line-to-line technique, and initial stability was obtained in all hips. One to 3 screws (average: 2) were used to secure the initial fixation of the cup. In 29 hips, morselized allografts from the resected femoral head were applied to fill the gap between the host bone and the uncovered portion of the cup. A concomitant subtrochanteric shortening osteotomy was performed in 3 patients (3 hips). Patients were routinely allowed full weight bearing starting on postoperative day 1. Those who underwent THAs through the trochanteric approach or who had intraoperative femoral fractures were allowed weight bearing from 4 weeks after surgery. Implants. An arc-splayed titanium cup with hydroxyapatite coating (AMS HA Cup; Kyocera, Osaka, Japan) [18] and a cross-linked ultra-high-molecular-weight polyethylene liner (Aeonian 910 AMS Liner; Kyocera, Osaka, Japan) were used in all hips. The bearing couples were ceramic on polyethylene in 193 hips and metal on polyethylene in 22 hips. The femoral head diameters were 26 mm in 4 hips, 28 mm in 96 hips, and 32 mm in 115 hips. Two cementless stems, namely, the Perfix910 HA-coated stem (Kyocera, Osaka, Japan) and the S-ROM (DePuy, Warsaw, IN, USA), were used in 193 hips and 22 hips, respectively. Radiographic Evaluations. The cup center-edge (cup-CE) angle and the bone coverage index (BCI) were measured on postoperative anteroposterior pelvic radiographs as indices of the degree of lateral bony coverage on the cups [7,19] (Figure 1). The radiographic anteversion and inclination angles of the cups were determined with the interteardrop line as a baseline [20,21]. The horizontal and vertical distances from the tip of the ipsilateral teardrop to the hip center were measured with the interteardrop line as a baseline. Hips with a vertical distance exceeding 35 mm were defined as having a high hip center [22]. Cup fixation was determined on the radiographs at the latest follow-up on the basis of the modified classification system of DeLee and Charnley [16,17] (Table 1). Osteolysis was defined as a circular or oval area of distinct bone loss. Heterotopic ossification was graded according to the classification system of Brooker et al. [23].
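To make the geometry of the cup-CE angle measurement concrete, the sketch below computes it from 2D landmark coordinates on an anteroposterior pelvic radiograph, using the interteardrop line as the horizontal reference as in the study. This is only an illustrative reading of the measurement, not the authors' software; the function name, the coordinate conventions, and the example landmarks are invented for demonstration.

```python
import math

def cup_ce_angle(cup_center, lateral_edge, teardrop_ipsi, teardrop_contra):
    """Cup-CE angle (degrees) from (x, y) pixel landmarks, y increasing downward.

    Positive angles mean the lateral bony edge lies lateral to the vertical
    through the cup center (good coverage); negative angles mean part of the
    cup is uncovered laterally, as in severe dysplasia.
    """
    # Tilt of the interteardrop line relative to the image x-axis.
    tilt = math.atan2(teardrop_contra[1] - teardrop_ipsi[1],
                      teardrop_contra[0] - teardrop_ipsi[0])

    # Position of the bony edge relative to the cup center, de-rotated
    # so that the interteardrop line becomes exactly horizontal.
    x = lateral_edge[0] - cup_center[0]
    y = lateral_edge[1] - cup_center[1]
    lat = x * math.cos(tilt) + y * math.sin(tilt)  # along the interteardrop line
    sup = x * math.sin(tilt) - y * math.cos(tilt)  # superior direction (-y)

    # Lateral is assumed to be the +x direction for this hip; mirror x for
    # the contralateral side.
    return math.degrees(math.atan2(lat, sup))

# Edge 5 mm medial to and 40 mm above the cup center -> about -7 degrees,
# i.e. a laterally uncovered cup comparable to the most dysplastic cases here.
print(round(cup_ce_angle((0, 0), (-5, -40), (-30, 10), (90, 10)), 1))
```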
Morselized allografts were judged as incorporated if continuity of the trabecula was found between the host bone and the graft, and they were judged as absorbed if the allograft had disappeared. Thus, the clinical results, the radiographic results of the porous-coated cup, and the association between lateral bone coverage and cup fixation were analyzed using the data described above. Statistical Analysis. Student's t-tests, Welch's t-tests, or Wilcoxon rank sum tests were used to compare continuous parameters between any 2 groups, depending on data distribution (Shapiro-Wilk W test and F-test; a sketch of this test-selection logic is given below). Chi-square tests or Fisher's exact tests were used to compare categorical parameters, as appropriate. Correlation between the cup-CE angle and the BCI was evaluated using a linear regression analysis. The significance level was set at p < 0.05 for all tests. Statistical analyses were performed using JMP version 11.0 (SAS Institute Inc., Cary, NC, USA). Clinical Results. The average Merle d'Aubigné and Postel hip score improved from 9.8 (range: 4-16) preoperatively to 16.5 (range: 12-18) at the time of the latest follow-up. No postoperative dislocation or symptomatic thromboembolic events occurred. Further, no cases of revision for any reason were noted during the study period. Complications occurred in 8 hips: 3 hips had intraoperative femoral metaphysis fractures, and concomitant wiring was performed on 2 hips. Two hips had superficial infection, which healed conservatively. Two hips had postoperative fractures of the greater trochanter after the patients fell, and these healed conservatively. Finally, 1 hip showed nonunion of the trochanteric osteotomy site. Radiographic Results. At the time of the latest follow-up, all cups were determined to show stable fixation by bone ingrowth: 210 hips (97.7%) were judged as type IA and 5 (2.3%) as type IB (Table 2). Of the type IB hips, 2 showed thin radiolucent lines in zone I and 3 showed them in zone II. No cases of pelvic or femoral osteolysis or stem loosening were found. Heterotopic ossification occurred in 30 hips (14.0%), and the morselized allograft was incorporated in 27 of 29 hips (93.1%) (Table 2). The average anteversion and inclination angles of the cup were 16.2 ∘ and 42.4 ∘ , respectively (Table 2). The average position of the hip center was 33.5 mm lateral and 21.0 mm superior to the tip of the ipsilateral teardrop, and 8 hips (3.7%) were determined to have a high hip center. Association between Lateral Bone Coverage and Cup Fixation. The median cup-CE angle and BCI of the 5 hips (2.3%) with radiolucent lines (type IB) did not differ from those of the 209 hips (97.2%) without radiolucent lines (type IA), with the numbers available (Table 3). No radiolucent lines were observed in hips with a cup-CE angle of <0 ∘ (7 hips) or those with a BCI of <60% (6 hips). Among the demographic parameters, patients with hips with radiolucent lines were older than those without radiolucent lines (p = 0.0404). Other demographic, surgical, and radiographic parameters showed no association with the presence of radiolucent lines (Table 3). Discussion In DDH, a hypoplastic acetabulum often compromises sufficient bone coverage and the initial stability of the cup during THA [3]. The minimum requirement for bone coverage on the porous-coated cup to ensure stable fixation remains unclear.
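The test-selection rule quoted in the Statistical Analysis subsection above can be expressed compactly. The sketch below is an illustration of that logic in Python with SciPy, not the analysis code actually used; the alpha threshold for the normality and variance screens is an assumption.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick Student's t, Welch's t, or Wilcoxon rank sum, mirroring the
    distribution-dependent selection described in the paper."""
    # Normality screen for each group (Shapiro-Wilk W test).
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if not normal:
        return "Wilcoxon rank sum", stats.ranksums(a, b).pvalue

    # Two-sided F-test for equality of variances.
    va, vb = stats.tvar(a), stats.tvar(b)
    f = max(va, vb) / min(va, vb)
    dfn, dfd = (len(a) - 1, len(b) - 1) if va >= vb else (len(b) - 1, len(a) - 1)
    p_var = min(1.0, 2 * stats.f.sf(f, dfn, dfd))

    if p_var > alpha:
        return "Student's t", stats.ttest_ind(a, b, equal_var=True).pvalue
    return "Welch's t", stats.ttest_ind(a, b, equal_var=False).pvalue
```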
In the present study, all 215 cups had stable fixation by bone ingrowth at an average follow-up of 4 years, and a minimum cup-CE angle of −9.2 ∘ (BCI: 48.8%) was acceptable for stable bony fixation. Thin radiolucent lines were observed for 5 hips (2.3%), and these hips were associated with older age but not with decreased lateral bone coverage. From our study results, we assume that a cup-CE angle of approximately −10 ∘ (BCI: 50%) indicates acceptable bone coverage for stable fixation by bone ingrowth in the short term. However, we could not determine the actual threshold (cut-off value) of minimum bone coverage required to ensure stable fixation because there were no cases with failure of fixation in our study. The previously reported minimum requirements of lateral bone coverage on the cup vary greatly among studies [7][8][9][10][11][12][13][14], probably because of differences in implant selection, patient selection, and surgical techniques. The AMS HA Cup used in this study has shown excellent outcomes at a minimum follow-up of 10 years [24]. In a study of 98 THAs using the press-fit-only technique at a mean follow-up of 7.4 years, Takao et al. [7] reported that a cup-CE angle of 8.4 ∘ (BCI: 65.5%) was adequately high for press-fit cups to resist superiorly directed loads and achieve bone ingrowth. Another study of 81 THAs using porous-coated cups with screws at a mean follow-up of 10.6 years reported that there was no loosening when the cup surface was in contact with more than 60% of the host bone [13]. Additionally, Y.-H. Kim and J.-S. Kim [8], in a study of 116 THAs at a mean follow-up of 9.7 years, reported that 11 hips (9%) with bone coverage of less than 60% by the host bone showed aseptic loosening, while the remaining 105 hips (91%) with bone coverage exceeding 60% showed solid fixation. Lastly, Li et al. [10] observed no cup loosening in their study of 52 THAs with bone coverage between 50% and 70% at a mean follow-up of 4.8 years, and they concluded that bone coverage of 50% is acceptable. They recommended the use of morselized allografts when the host bone coverage is less than 70%. These studies indicate that when performing THA using a porous-coated cup with screws and morselized autograft, the minimum bone coverage required for securing stable fixation lies between 50% and 60%. In the present study, thin radiolucent lines were observed for 5 hips (2.3%), and these hips were associated with older age but not with decreased lateral bone coverage, based on the numbers available. The previously reported incidence of a thin radiolucent line around the porous-coated cup ranged from 1.9% to 20% [7,10,13]. Although none of these radiolucent lines were progressive in the short to intermediate term, further studies are needed to confirm a possible correlation between the incidence of these radiolucent lines and cup loosening in the long term. We adopted the cup-CE angle as a key indicator of the bony cup coverage. Previous studies used various methods to estimate the bony cup coverage radiographically, including the cup-CE angle, the BCI, and circumferential bone coverage. Takao et al. [7] have shown that the cup-CE angle showed the highest correlation with three-dimensional bone coverage measured on computed tomography. We also determined the correlation of the cup-CE angle with the BCI to collectively evaluate the results of the present and previous studies.
On the basis of the collective results of the present and previous studies, we recommend that the cup position be planned such that a cup-CE angle greater than 0 ∘ (BCI > 60%) is achieved. Although a cup-CE angle of approximately −10 ∘ (BCI: 50%) ensured acceptable bone coverage for stable bony fixation in the short term, this value is not recommended as a target value in preoperative planning, because errors in manual implantation can result in cup positioning different from the preoperatively planned position [25] and an unintended severe lack of bony cup coverage. Additionally, only 7 hips (3.3%) had a cup-CE angle less than 0 ∘ in the present study. We intend to perform three-dimensional preoperative planning to ensure that cup placement replicates the native hip center to the best possible extent and to move the cup template superiorly to achieve a cup-CE angle greater than 0 ∘ (BCI > 60%). Sufficient bony cup coverage by the anterior and posterior acetabular walls on the axial plane should be confirmed. In cases in which a cup-CE angle greater than 0 ∘ (BCI > 50%) cannot be achieved without a high hip center, we intend to consider special techniques such as structural bone grafting. This study has several limitations. First, the follow-up periods were relatively short. We believe that stable fixation by bone ingrowth in the short term guarantees favorable results in the long term. A previous study showed that radiolucent lines were identified within 2 years postoperatively and no new radiolucent lines were observed thereafter [7]. However, further studies are needed to determine whether fixation definitively lasts in the long term. Second, this study had a retrospective design, and we were not able to evaluate three-dimensional bone coverage of the acetabular component. The bone coverage on the cup was evaluated on plain radiographs, which only provide two-dimensional information. A three-dimensional analysis is required to clearly determine the extent of bone coverage on the acetabular component. However, Takao et al. [7] have shown a significant correlation between the radiographic parameters of lateral bone coverage (cup-CE angle and BCI) and three-dimensional bone coverage measured on computed tomography. As radiography is the most convenient and prevalent method used in preoperative planning and patient follow-up, we believe that our results will still be useful for clinicians planning THA. In conclusion, a cup-CE angle greater than −10 ∘ (BCI > 50%) was acceptable to achieve stable bony fixation of the cup in the short term. Considering possible errors in manual implantation and the limited number of hips with a cup-CE angle less than 0 ∘ , we recommend that the cup position be planned such that a cup-CE angle of >0 ∘ (BCI > 60%) is achieved.

Abbreviations
DDH: Developmental dysplasia of the hip
THA: Total hip arthroplasty
Cup-CE: Cup center-edge
BCI: Bone coverage index.

Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. All the authors certify that their institutions approved the human protocol for this investigation and that all the investigation was conducted in conformity with ethical principles of research.

Consent For this type of study, formal consent is not required.
Disclosure This work was performed at the Department of Orthopaedic Surgery, JCHO Kyushu Hospital, and the Department of Orthopaedic Surgery, Kyushu University Hospital. Level of evidence is Therapeutic Level IV, retrospective case series.
2018-04-03T02:37:28.055Z
2017-02-19T00:00:00.000
{ "year": 2017, "sha1": "e460183842980a0864b12fa75b534cacda0ce234", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2017/4937151.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c0b8f117206c7beab8e8cd9b690663db321a85cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266995280
pes2o/s2orc
v3-fos-license
An adjudication algorithm for respiratory-related hospitalisation in idiopathic pulmonary fibrosis This algorithm for the adjudication of respiratory-related hospitalisation in IPF clinical trials will help achieve consistency in the reporting of this end-point, increasing the comparability of data across trials.

Introduction Idiopathic pulmonary fibrosis (IPF) is a chronic progressive lung disease that places a high burden on patients, with excessive mortality and increasing prevalence [1,2]. Pirfenidone and nintedanib are approved for the treatment of IPF [3,4], but neither reverses existing pathology [5][6][7]. Novel therapeutic agents are under development [8], leading to a pressing need to optimise and standardise clinical trial end-points [9].

Respiratory-related hospitalisation is common in individuals with IPF. In a 5-year follow-up study, 87% of patients with IPF were hospitalised at least once, with 37% of hospitalisations due to acute respiratory worsening [10]. Each year in the UK alone, there are an estimated 9000 hospital admissions due to IPF [11], and 5-14% of patients in a typical IPF clinical trial are hospitalised during the study [12][13][14]. The financial impact of IPF respiratory-related hospitalisation is significant, with one USA study estimating the mean cost per admission to be USD 16 000 [15]. Respiratory-related hospitalisation is associated with high morbidity and increased mortality, irrespective of the cause of respiratory worsening [14,[16][17][18]. In a cohort study of 592 patients, median survival was 2.8 months following respiratory-related hospitalisation, compared with 27.7 months following nonrespiratory-related admissions [19]. Data from a large USA medical insurance database showed pirfenidone or nintedanib treatment decreased the risk of all-cause mortality and of acute (mostly respiratory-related) hospitalisation when compared with an untreated IPF matched cohort, supporting the use of respiratory-related hospitalisation as an end-point [20].

Many of the symptoms and clinical features of respiratory-related causes of hospitalisation are nonspecific. For example, worsening dyspnoea could have pulmonary or extrapulmonary causes [21]. Moreover, many patients with IPF have serious comorbidities, such as pulmonary hypertension [22,23], COPD, lung cancer and heart disease [23]; comorbidity burden is associated with high morbidity and mortality [24]. Owing to this complexity, hospitalisations during IPF treatment trials may be centrally adjudicated to ensure accurate and standardised end-point classification. However, there is currently no universally accepted definition of respiratory-related hospitalisation; thus, there is impetus for the creation of a clear, standardised, pre-determined methodology to define and adjudicate these events.

We developed an algorithm for the adjudication of respiratory-related hospitalisation in IPF trials, based on a literature review and using clinical data available to practising clinicians [25][26][27][28]. The algorithm provides a methodology for defining types of respiratory-related hospitalisation events, and thus for confirming respiratory-related hospitalisation. The algorithm was used by a blinded clinical end-point adjudication committee (CEAC) to adjudicate respiratory-related hospitalisation events in two phase-3 IPF trials: ISABELA 1 and 2 (NCT03711162 and NCT03733444) [29]. The concordance between the CEAC and investigators with regard to the cause of respiratory-related hospitalisation was assessed.
Literature review A literature review was conducted to identify English-language reports of phase-2 and -3 randomised clinical trials (RCTs) in IPF in which respiratory-related hospitalisation and/or acute exacerbations of IPF (AEIPFs) were pre-specified end-points. To identify articles, the PubMed database was searched for ("idiopathic pulmonary fibrosis" AND "trial" AND ["hospital" OR "acute exacerbation"]); a scripted version of this search is sketched below. All articles published between 1 January 2000 and 31 October 2018 were retrieved. This process was repeated during the development of this article (with an end date of 28 September 2019), to capture full or follow-on publications of studies previously only reported within a clinical trials registry. We identified 322 articles, which was reduced to 128 when further filtered by the inclusion of "clinical trials". When the abstracts and/or full texts of articles were manually reviewed, 16 RCTs that met the criteria for inclusion and two cohort studies of post hoc adjudication of RCTs were identified. Supplementary table S1 summarises the reasons for exclusions.

Additional studies were identified from the reference lists of articles. To identify ongoing studies, the ClinicalTrials.gov database was searched using the terms "idiopathic pulmonary fibrosis" AND "hospital", as well as "idiopathic pulmonary fibrosis" AND "acute exacerbation", filtering for phase-2 and -3 trials (supplementary table S1). Additional identified articles included nine RCTs identified from other sources, eight ongoing RCTs identified on ClinicalTrials.gov, and two additional cohort studies identified from other sources. Hence, a total of 33 RCTs and four cohort studies were included.

Algorithm development An international working group was established, comprising nine expert clinician researchers with experience in adjudicating IPF clinical trials (supplementary table S2). Based on the literature review results, plus flowcharts previously developed for the diagnosis of AEIPF [26,28], five members of the expert group (P. Ford, K.K. Brown, N. Hirani, J. Behr and R.J. Kaner) developed an algorithm for the adjudication of respiratory-related hospitalisation. The proposed algorithm was circulated to the wider group for review, and the algorithm was revised. This process was repeated, and a third version was approved by the whole expert group. This final version was used by the CEAC of the ISABELA trials, which comprised eight members, three of whom were among the experts responsible for developing the algorithm. Details of the ISABELA trials have been reported previously. Study protocols were approved by the independent ethics committee/institutional review board for each site or country, as applicable, and all patients provided written informed consent.
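The PubMed query described under Literature review can be scripted for reproducibility. The sketch below uses Biopython's Entrez utilities and is an assumption-laden illustration: the e-mail address is a placeholder required by NCBI, and it reproduces only the automated part of the search, not the "clinical trials" filter or the manual review described above.

```python
from Bio import Entrez  # Biopython

Entrez.email = "you@example.org"  # placeholder; NCBI requires a real address

# The query and date window described in the literature-review subsection.
query = ('"idiopathic pulmonary fibrosis" AND "trial" AND '
         '("hospital" OR "acute exacerbation")')
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2000/01/01", maxdate="2018/10/31", retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # number of matching articles
print(record["IdList"][:5])   # first few PubMed identifiers
```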
Algorithm validation The CEAC of the ISABELA studies adjudicated respiratory-related hospitalisations and deaths; the algorithm was used for the adjudication of respiratory-related hospitalisation. If an event causing death and death itself occurred on the same calendar day, then death was the only event classified; death and the event causing death were classified as separate events if they occurred on different calendar days. Events (hospitalisations and deaths regardless of cause) were identified primarily from the completed electronic case report forms reported by site investigators via an electronic data capture system. Source documents (listed in supplementary table S3) were then requested from the site to support adjudication of the event by the CEAC. The case was not adjudicated if the necessary source documentation for adjudication could not be obtained. Two members of the CEAC independently evaluated each case; a third CEAC member evaluated discrepant cases (agreeing with one of the previous adjudicators or forming an alternative verdict). To validate the algorithm, the cause of hospitalisation as determined by the CEAC using the algorithm (i.e. the type of respiratory-related hospitalisation) was compared with the cause stated by the study investigator (provided in narrative form). The proportion of cases in which there was agreement between the CEAC and the investigator was recorded. This qualitative comparison was performed by two Galapagos (Mechelen, Belgium) employees. Concordance between the CEAC and the investigators for cause of death was also assessed qualitatively.

Literature review Respiratory-related hospitalisation Respiratory-related hospitalisation and/or AEIPF were used as end-points in the 33 included phase-2 and -3 IPF RCTs [6, 12, 13, 25] (table 1). However, the vast majority (16 out of 18) of studies that used respiratory-related hospitalisation as an end-point did not include a specific definition beyond "hospitalisation due to respiratory causes/worsening respiratory symptoms". Fewer than half of the studies stated that adjudication was performed, with most relying solely on investigator-defined events. Typically, RCTs did not describe the adjudication process beyond stating that adjudication was performed by a committee blinded to treatment group. Among the most detailed descriptions was that in the ARTEMIS-IPF trial, which stated that an end-point committee adjudicated whether the primary reason for hospitalisation was respiratory, nonrespiratory or elective, and whether the primary diagnosis was acute IPF disease progression, IPF disease progression without acute exacerbation, pneumonia, bronchitis, left heart failure or an alternative respiratory event [12].

The National Heart, Lung, and Blood Institute-sponsored IPF Clinical Research Network (IPFnet) published an article summarising the outcomes of the adjudication process for respiratory-related hospitalisation in the ACE-IPF and PANTHER-IPF trials [30]. Following a review of the available clinical records, the adjudication committee classified a hospitalisation as respiratory-related if worsening respiratory symptoms were considered the main reason for hospitalisation. Out of 36 investigator-reported hospitalisations in ACE-IPF, 28 were adjudicated as respiratory and eight as "other". Out of 57 investigator-reported hospitalisations in PANTHER-IPF, 28 were adjudicated as respiratory and 29 as "other".
Two studies performed post hoc classification of respiratory-related hospitalisation using pooled data from the CAPACITY and ASCEND trials [18], and from ACE-IPF, PANTHER-IPF and STEP-IPF [14] (table 2). DURHEIM et al. [14] categorised the following as respiratory-related hospitalisation: AEIPF, pulmonary embolism, respiratory tract infection, pneumothorax, aspiration event, COPD exacerbation, lung transplantation and other respiratory worsening (including increased dyspnoea, hypoxia, respiratory distress and other/unclassifiable acute respiratory worsening).

AEIPF The majority (28 out of 32) of RCTs included AEIPF as an end-point, reporting the proportion of patients affected and/or the time-to-first AEIPF (table 1). Fewer than half of the studies stated that central adjudication was used to distinguish between definite and suspected AEIPF events. Approximately one-third of studies reported that AEIPF was defined using the consensus criteria proposed by COLLARD and co-workers [25,28], with a minority reproducing the criteria they used in full (supplementary table S4). Several further studies stated that AEIPF was defined in the study protocol, but the protocol was not available.

IPFnet published details of the adjudication process for AEIPF in the ACE-IPF, PANTHER-IPF and STEP-IPF trials [30]. The definition of AEIPF used by IPFnet was based on the consensus criteria published by COLLARD et al. [25]. All suspected AEIPFs were referred to the adjudication committee. Events were classified as "definite acute worsening" (all criteria met, no alternative aetiology), "unclassifiable acute worsening" (insufficient data to evaluate all criteria, no alternative aetiology) or "not acute exacerbation" (alternative aetiology identified that explained the acute worsening) (refer to supplementary table S4 for the stated criteria). The committee adjudicated 88 suspected AEIPF events; 29 were judged as definite and 31 as unclassifiable. Of the unclassifiable cases, 75% were missing a computed tomography scan, 10% were missing data on infection status and in 15% the data were too ambiguous to reach a definite conclusion.

In a post hoc analysis of the INPULSIS trials, fewer than two-thirds of investigator-reported AEIPFs were judged by retrospective central adjudication as AEIPFs [61,62] (table 3). Out of 79 investigator-reported AEIPFs, the adjudication committee rated nine (11%) to be correct AEIPFs and 33 (42%) to be suspected acute exacerbations; 35 (44%) were not considered acute exacerbations, and two could not be adjudicated because of insufficient data [61]. A similar pattern emerged in a second analysis, in which 31 (63%) out of 49 serious adverse events reported by trial investigators were judged by adjudication to be a confirmed/suspected AEIPF, whereas 18 (37%) out of 49 were deemed "not an AEIPF" [62]. For 14 investigator-reported nonserious adverse events deemed to be AEIPFs, the adjudication committee found five (36%) to be confirmed/suspected AEIPFs; nine (64%) were "not an AEIPF".
Respiratory-related hospitalisation algorithm The algorithm we developed (figure 1) builds on the most recent consensus-based recommendations for AEIPF diagnosis [25][26][27][28] and the findings from the literature review. It incorporates additional decision points to capture other (non-AEIPF) respiratory-related causes of hospitalisation. In brief, all patients hospitalised because of increasing pulmonary symptoms should undergo nonenhanced, high-resolution computed tomography to distinguish parenchymal from extraparenchymal causes. If chest imaging combined with other clinical data suggests something other than AEIPF (typical signs of acute exacerbation not identified) and indicates that extraparenchymal causes can be excluded, the classification "other cause of respiratory hospitalisation" is assigned. Note that in the absence of significant left ventricular dysfunction in a patient with IPF, right-sided heart failure is considered a respiratory cause.

In patients with worsening respiratory symptoms within the past month, for whom chest imaging and other clinical data are compatible with AEIPF, the hospitalisation is rated to be due to "AEIPF". This is further classified as "definite AEIPF" when all criteria are met (there is radiological or histological evidence of diffuse alveolar damage), with all other cases classed as "suspected AEIPF". If the trigger for AEIPF (which may be infective, post-procedural, traumatic, drug toxicity-related or aspiration-related) is identified, the cause of hospitalisation is classified as "known AEIPF" (i.e. triggered AEIPF) and, if not, as "idiopathic AEIPF". This applies for both "definite" and "suspected" AEIPF.

"Extraparenchymal" cases (table 4), "other respiratory" cases and all cases of "AEIPF" (definite or suspected; known trigger or idiopathic) are considered "respiratory causes of hospitalisation". All other hospitalisations are classified as "nonrespiratory". Note that the algorithm excludes elective hospital admission for lung transplantation.
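Read as pseudocode, the classification flow above (figure 1) reduces to a small decision tree. The sketch below is one possible encoding for illustration only: the field names are invented, and it is not the CEAC's adjudication software.

```python
def adjudicate(case):
    """Classify one hospitalisation along the lines of the figure 1 flow.

    `case` is a dict of booleans invented for this sketch:
      elective_transplant, extraparenchymal_cause,
      symptoms_worsened_past_month, imaging_compatible_with_aeipf,
      diffuse_alveolar_damage, trigger_identified, other_respiratory_cause
    """
    if case["elective_transplant"]:
        return "excluded: elective admission for lung transplantation"
    if case["extraparenchymal_cause"]:
        # e.g. pneumothorax or PE; right-sided heart failure without
        # significant LV dysfunction also counts as respiratory.
        return "respiratory: extraparenchymal"
    if (case["symptoms_worsened_past_month"]
            and case["imaging_compatible_with_aeipf"]):
        certainty = "definite" if case["diffuse_alveolar_damage"] else "suspected"
        trigger = "known" if case["trigger_identified"] else "idiopathic"
        return f"respiratory: AEIPF ({certainty}, {trigger})"
    if case["other_respiratory_cause"]:
        return "respiratory: other"
    return "nonrespiratory"
```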
Algorithm validation A total of 349 respiratory-related hospitalisations were identified for adjudication in the ISABELA studies. Overall, 338 (97%) out of 349 hospitalisations were adjudicated by the CEAC using the algorithm; the remaining 11 (3%) were not adjudicated due to insufficient data. Among adjudications, the rate of disagreement between the first and second adjudicator was 30%; in these instances, a third CEAC member decided the cause of hospitalisation. The third adjudicator reached a different verdict to either of the two previous adjudicators in 10.7% of cases. Discordance between the CEAC (using the algorithm) and study investigators (reporting a case narrative) occurred in 21 (6.2%) out of 338 adjudicated cases. Four Galapagos representatives checked discordant cases for commonalities to potentially inform improvements in the algorithm; however, no events were more prevalent than others (no events were particularly conflictual). Furthermore, there was no substantial overlap between cases that were discordant and cases that were adjudicated differently by CEAC members.

Table 3. Cohort studies that performed post hoc adjudication of acute exacerbation events in phase-2/3 IPF randomised controlled trials.

[62] (INPULSIS 1 and 2): AEIPF defined as per protocol. "Adverse events were adjudicated by a committee of three experts blinded to treatment assignment as a confirmed acute exacerbation (if all protocol-defined criteria were met), a suspected acute exacerbation (if the event was felt to be an acute exacerbation but did not meet all protocol-specified criteria) or not an acute exacerbation (if an alternative cause was identified)". Out of 49 investigator-reported SAEs deemed by the investigator to be AEIPFs, 31 (63%) were adjudicated as being a confirmed/suspected AEIPF, and 18 (37%) were adjudicated as "not an AEIPF". Of 14 investigator-reported non-SAEs deemed by the investigator to be AEIPFs, five (36%) were adjudicated as being a confirmed/suspected AEIPF, and nine (64%) were adjudicated as "not an AEIPF".

COLLARD, 2017 [61] (INPULSIS 1 and 2; NCT01335464, NCT01335477): AEIPF defined as per protocol. "The adjudication committee comprised three experts in IPF who were not investigators in the INPULSIS trials. An event was adjudicated as a 'confirmed acute exacerbation' if all the protocol-defined criteria were met, a 'suspected acute exacerbation' if the event was felt to be an acute exacerbation but failed to meet all protocol-specified criteria, or 'not an acute exacerbation' if an alternative cause was identified". Out of the 79 investigator-reported AEIPFs, nine (11%) were adjudicated as confirmed acute exacerbations, 33 (42%) as suspected acute exacerbations and 35 (44%) as not acute exacerbations. For two events, insufficient data were available for adjudication. Mortality was similar following investigator-reported acute exacerbations, adjudicated confirmed/suspected acute exacerbations and events adjudicated as not acute exacerbations.

SAE: serious adverse event; AEIPF: acute exacerbation of IPF.
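The disagreement figures above are raw proportions. For two adjudicators assigning categorical verdicts to the same cases, a chance-corrected statistic such as Cohen's kappa can be reported alongside the raw rate; the self-contained sketch below uses toy verdict labels, not ISABELA data.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same set of cases."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[label] * c2[label] for label in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)

# Toy verdicts for five cases.
a = ["AEIPF", "other resp", "nonresp", "AEIPF", "other resp"]
b = ["AEIPF", "nonresp",    "nonresp", "AEIPF", "other resp"]
print(f"raw agreement: {sum(x == y for x, y in zip(a, b)) / len(a):.0%}")  # 80%
print(f"Cohen's kappa: {cohen_kappa(a, b):.2f}")                           # 0.71
```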
When respiratory-related hospitalisations and deaths were considered together, 427 cases were identified for adjudication. A total of 416 (97%) out of 427 hospitalisations and deaths were adjudicated (11 cases were not adjudicated due to insufficient data). The CEAC adjudicators disagreed in 34% of their evaluations, requiring a final decision by a third CEAC member, who reached a different verdict to either of the two previous adjudicators in 10.5% of cases. Discordance between the CEAC and study investigators occurred in 44 (10.6%) out of 416 cases. Data regarding nonagreement between CEAC members were assessed to see whether there was a learning effect with repeated algorithm use. When the assessment period was divided into two sequential periods (August 2019 to mid-June 2020 and mid-June 2020 to May 2021), internal CEAC disagreement occurred in 24% and 37% of cases in the first and second periods, respectively. There were no differences in the types of events occurring between the two periods. When the total number of cases assessed (n=416) was divided into two, CEAC disagreement occurred in 31% and 38% of the first and last 208 cases, respectively. These findings indicate that algorithm use did not improve over time.

Discussion Findings from the literature review showed that respiratory-related hospitalisation was neither defined nor adjudicated in the majority of studies, and while the development of standardised criteria for AEIPF [25][26][27][28] has improved the reliability of AEIPF classification, the complexity of diagnosis means that central adjudication is still required. The diagnostic ambiguity associated with respiratory-related end-points in IPF highlights the need for an algorithm for respiratory-related hospitalisation adjudication.

Adjudication will impose additional requirements and costs (e.g. a formal adjudication committee with regular training in the use of the algorithm, alongside access to complete medical records). In the ISABELA trials, an electronic system was used to maximise efficiency. In instances of missing data, an adjudication committee may request additional information from investigators to determine the nature of the hospitalisation and more accurately categorise the event.

The algorithm was developed by experts in the USA and Western Europe; resource availability, clinical practice and opinion may vary in other regions, e.g. in Asia, where acute exacerbations are more frequent, with perhaps more devastating outcomes [31,63,64]. Therefore, the algorithm was purposefully designed to be simple, to help minimise discordance and to ensure that the clinical data required are available to most clinicians. This will also help ensure that differences between sites and countries with respect to, for example, the imaging equipment available will not impede application of the algorithm. Developing a simple algorithm for a complex end-point is not without its challenges, and we acknowledge that some difficulties related to terminology may remain. For example, a documented viral infection could be classed as pneumonia or as a triggered acute exacerbation.
As there is no generally accepted, gold-standard definition for respiratory-related hospitalisation, it is not possible to compare the algorithm with a current standard. However, comparisons can be drawn with reported definitions of AEIPF. The definition used by IPFnet states that AEIPF includes "unexplained worsening of dyspnoea or cough within 30 days" [30], whereas our algorithm uses the broader AEIPF criterion of worsening respiratory symptoms within the past month. In addition, the IPFnet definition requires both "new superimposed ground-glass opacities or consolidation on computed tomography scan, or new alveolar opacities on chest radiograph", a "decline of ⩾5% in resting room air oxygen saturation by pulse oximetry from last recorded level or decline of ⩾8 mmHg in resting room air partial pressure of oxygen from last recorded level" and a lack of clinical and microbiological evidence of infection [30]. In IPFnet studies, cases were adjudicated as "unclassifiable acute worsening" when these criteria were not met or insufficient data were available (e.g. missing imaging, oxygen saturation or partial pressure of oxygen data). Our algorithm uses broader criteria to classify "definite" AEIPF and allows cases of "suspected" AEIPF to be recorded as such; thus, it may be more practical to apply and less likely to incorrectly classify those with AEIPF due to missing radiological or physiological data.

The algorithm was used to adjudicate events in the ISABELA studies. There was a high rate of agreement between investigator- and CEAC-determined causes of hospitalisation, and of deaths and hospitalisations combined. However, there is potential bias, as adjudicators may base their decisions on information reported by the investigator, which cannot be independently verified, and three of the CEAC members were involved in algorithm development. Notably, there was a higher rate of disagreement between CEAC adjudicators (a 30% disagreement rate between the first and second adjudicator, perhaps reflecting the absence of a gold-standard definition for respiratory-related hospitalisations) than between CEAC adjudicators and investigators (6% discordance). Although the disagreement rate between the adjudicators may be viewed as a limitation, one could argue that these findings suggest that, if sites are well selected, investigators can accurately categorise hospitalisations without the need for central adjudication. While investigators may correctly classify events in most cases, these data show that their diagnoses can be corroborated by respiratory experts who are unfamiliar with the case, using a standardised process with predefined criteria. Furthermore, the high rate of agreement we report may reflect the experience of the ISABELA investigators. These findings may not be replicated in other clinical trials, as each differs with regard to the robustness of its data, and decisions made by site investigators are subject to many variables, including the framework within which they work and the level of support they receive from contract research organisations and sponsors. Central adjudication may help classify trial outcomes where there is variation or uncertainty in investigator-determined events and provide additional transparency. The algorithm defines a prescribed method for classifying outcomes, which can be used to compare data from future trials. In addition, the algorithm informs which clinical data are needed to retrospectively classify events and could therefore be programmed into data capture systems
before trial onset. The algorithm could also be used by site investigators to increase homogeneity and efficiency in the reporting of events.

The ISABELA studies enrolled 1306 patients, comprising the largest IPF population studied to date, and generated longer-term data than previous IPF trials. Although registry data on outcomes following hospitalisation are available [2,65], such data have not been systematically reported in the literature for previous trials. The ISABELA programme provides a sentinel cohort for the algorithm; our results indicate that the algorithm works well. Due to the early termination of the ISABELA studies, it was not possible to follow patients to determine the prognostic implications of differentiating the type of respiratory-related hospitalisation, but this could be investigated in future prospective IPF trials.

Table 1. Phase-2 and -3 idiopathic pulmonary fibrosis (IPF) clinical trials that included respiratory-related hospitalisation and/or acute exacerbation of IPF (AEIPF) as a pre-specified end-point.

Figure 1. Algorithm for the adjudication of respiratory-related hospitalisation in idiopathic pulmonary fibrosis (IPF). For simplicity, "extraparenchymal" and "other respiratory" are both considered respiratory causes of hospitalisation, together with acute exacerbation of IPF (AEIPF) ("definite" or "suspected", in which both are either "known" (i.e. known trigger) or "idiopathic"). All other admissions are "nonrespiratory" and classed as such. ER: emergency room; HRCT: high-resolution computed tomography; PE: pulmonary embolism. #: elective (nonemergency) admission to hospital for lung transplantation is excluded. ¶: rule out primary cardiac causes (e.g. congestive cardiac failure, myocardial infarction, arrhythmia); if no significant left ventricular dysfunction, right-sided heart failure is considered a respiratory cause. +: if no evidence of diffuse alveolar damage, the case is suspected. §: exacerbations with identified triggers (infective, post-procedural or traumatic, drug toxicity-related or aspiration-related) are classed as "known AEIPF"; those with no identified trigger are classed as "idiopathic AEIPF".
2020-07-09T09:11:09.482Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "3c71cc12a1d4a4b33b960baba3f95bc85be733b5", "oa_license": "CCBYNC", "oa_url": "https://openres.ersjournals.com/content/erjor/early/2023/11/09/23120541.00636-2023.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "98fae28ffa03f37e865a080666a4847c9ad17a72", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247932630
pes2o/s2orc
v3-fos-license
Particle Number Concentration: A Case Study for Air Quality Monitoring Particulate matter is one of the criteria air pollutants with the most considerable effect on human health in cities. Its legislation and regulation are based mostly on mass. We show here that the total number of particles and the particle number concentrations in different size fractions seem to be efficient quantities for air quality monitoring in urbanized areas. Particle number concentration (N) measurements were realized in Budapest, Hungary, for nine full measurement years between 2008 and 2021. The datasets were complemented by meteorological data and concentrations of the criteria air pollutants. The annual medians of N were approximately 9 × 10 3 cm −3 . Their time trends and diurnal variations were similar to those in other large continental European cities. The main sources of N are vehicle road traffic and atmospheric new aerosol particle formation (NPF) and consecutive growth events. The latter process is usually regional, so it appears to be more readily assessable for contribution quantification than mass concentration. It is demonstrated that the relative occurrence frequency of NPF was considerable, and its annual mean was around 20%. NPF events increased the contribution of ultrafine (UF, <100 nm) particles with respect to the regional particle numbers by 12% and 37% in the city center and in the near-city background, respectively. The pre-existing UF concentrations were doubled on the NPF event days.

Introduction Air pollution is one of the most important factors affecting human health, the climate, and the environment. Around 91% of the world's population live in places with poor air quality. Ambient air pollution is estimated to account for 4.2 million premature deaths per year worldwide, due mainly to stroke, lung cancer, heart disease, and acute and chronic respiratory diseases [1]. The sources of air pollution are multiple and complex. On a global scale, the major anthropogenic ambient sources include road vehicles, residential energy production for heating and cooking, power generation, industry, and agriculture [2]. The identification and characterization of the sources are crucial for understanding the effects of pollutants, as well as for developing suitable policies and technologies for moderating air pollution, especially in cities [3]. Ambient air quality is ordinarily expressed by concentrations of certain key air pollutants and their health limit values. According to the World Health Organization (WHO), the key air pollutants include O 3 , NO 2 , SO 2 , CO, PM 2.5 mass, and PM 10 mass [1]. The latter two species express the particulate matter (PM) with aerodynamic diameters below 2.5 and 10 µm, respectively. As far as the health limits are concerned, there are global guidelines which also offer quantitative health-based recommendations for air quality management, namely, guidance on how to decrease the levels of these pollutants [1]. The European Environment Agency (EEA) extended this list with NO x (= NO + NO 2 ), Pb, and benzene [4] and set their health limits. The U.S. Environmental Protection Agency (EPA) defined six outdoor criteria air pollutants, CO, Pb, ground-level O 3 , NO 2 , PM 2.5 or PM 10 masses, and SO 2 , and determined National Ambient Air Quality Standards (NAAQS) for them [5]. There are many further chemical species which are usually present in the ambient air and which can also cause harmful consequences for human health.
The list of the key air pollutants can be further extended, e.g., by soot. The regulatory issue of PM is especially complicated since it is not a single chemical species; rather, it is a heterogeneous system which contains a complex mixture of more than a thousand inorganic and organic compounds in their condensed phase dispersed in the air. In addition, as a colloidal system, it can be characterized by several different metrics that express important properties of the particles. Hence, it cannot be expected that a single metric, or a few metrics, of the PM will explain its comprehensive health effects. Particulate matter mass (in a certain size range), which is involved in the regulations, is one of the simplest quantities. It is usually associated with the health impacts of particles. Most epidemiological studies were based on mass as the dose metric. The mass of atmospheric aerosols is, however, made up of larger, i.e., coarse and fine, particles. The mass contribution of smaller, e.g., ultrafine (UF) particles (with an equivalent diameter <100 nm) is negligible. There are some PM types, atmospheric conditions, and specific health effects for which PM properties other than mass become important. These may include the number and surface area of particles. Many recent epidemiological and toxicological studies demonstrated that particle number concentrations, especially of UF particles, have a more considerable effect on human health than mass concentration [6][7][8][9][10][11][12][13][14][15]. UF particles, due to their size, can penetrate the respiratory system and even enter the bloodstream, and can cause inflammation and respiratory and cardiovascular diseases. This size fraction also represents an excess health risk relative to coarse and fine particles with the same or similar chemical composition [16,17]. It is worth mentioning that 70-80% of total particles in cities belong to the UF size range. It is, therefore, a reasonable intention and requirement to extend the list of the key air pollutants by further aerosol metrics such as particle number concentration. There have been mitigation policies and control regulations to reduce the emission of particle numbers as part of an overall air-quality improvement strategy since the 1990s. Legislation in the EU, including Hungary, focuses, e.g., on particle emissions from diesel engines [18]. There were some important changes in car emissions, which included the introduction of the Euro 5 and 6 regulations for light-duty vehicles in January 2011 and the Euro VI regulations for heavy-duty vehicles in September 2015 (the number of emitted particles with diameters >23 nm should be <6 × 10 11 km −1 ). The concentration of sulfur in diesel fuel for on-road transport was decreased in several phases to <10 ppm by January 2009 [19]. The sulfur content in fuels for mobile non-road diesel vehicles (including mobile machinery, agricultural and forestry tractors, inland waterway vessels, and recreational craft) was limited to a level of 1000 ppm in 2008 and to 10 ppm in 2011. Dangerous fuel types for domestic heating are also listed, their emission factors are determined, and the accumulated information is disseminated among potential users. As far as secondary particles are concerned, it is not straightforward to reduce their concentration levels because the effects of gaseous and aerosol species are complex due to nonlinear relationships and feedbacks in their related processes.
Total particle number concentrations are easily measured for monitoring purposes by condensation particle counters (CPCs), whereas particle number size distributions can be determined by online mobility particle size spectrometers. The latter systems possess the advantage that concentrations for different size fractions can be derived from the measured data. This is important, since different size fractions are related to different source types, atmospheric properties, and processes. There are two major source types of particle numbers in the atmosphere: new particle formation and growth (NPF) events and high-temperature emissions. The former process is the dominant source in the global troposphere [20][21][22][23]. In cities and urbanized areas, the particle number concentrations are strongly affected by high-temperature emission sources from different sectors such as industrial processes, domestic installations, residential heating and cooking, vehicular road traffic, and power production [24,25]. In large cities, primary particles prevail over secondary particles [26][27][28]. Vehicles emit primary aerosol particles but also contribute to secondary aerosol particle formation by emitting their precursors [3]. New particle formation and growth events have proven to be common in large cities as well, and they can make large contributions to particle numbers on nucleation days [29]. Local meteorological conditions and the long-range transport of air masses can play substantial roles in the concentrations actually realized [30]. It therefore seems relevant and useful to investigate and overview the properties and behavior of particle number concentrations in longer datasets from the air quality aspect as well. Atmospheric particle number concentrations in various size fractions and meteorological data for nine full measurement years are available for Budapest. Budapest is the capital of Hungary, which is located in the Carpathian Basin in Central Europe. It is the largest and most populous city of the country, with a land area of around 525 km 2 and a population of 1.72 million inhabitants. The number of passenger cars registered in Budapest (596 × 10 3 in 2008 and 691 × 10 3 in 2020) increased slowly, while the share of diesel-powered passenger cars grew somewhat faster, from approximately 20% in 2008 to 36% in 2020 [31]. The number of buses (ca. 4000) registered in Budapest and the share of diesel-powered buses (98%) in the national bus fleet remained constant. The major objectives of this study are to overview the properties and time trends in particle number concentrations, to investigate their main sources, and to determine and discuss the contribution of NPF events to particle number concentrations. Further goals are to discuss the trends in nucleation source intensity and to interpret its consequences for urban air quality. Experimental Part The measurements were performed at two different urban sites in Budapest. Most measurements were realized at the Budapest platform for Aerosol Research and Training (BpART) Laboratory (47° 28′ 29.9″ N, 19° 3′ 44.6″ E; 115 m above mean sea level) of the Eötvös Loránd University. The location represents a well-mixed, average atmospheric environment for the city center due to its geographical and meteorological conditions [32]. Therefore, it can be regarded as an urban background site.
The main local emissions are diffuse urban traffic exhaust, residential and household emissions, industrial sources, and some off-road transport [33]. The long-range transport of air masses can also play an important role. The other location was situated at the northwestern border of Budapest, in a wooded area of the Konkoly Astronomical Observatory. The measurements in year Y2 were performed in the near-city background, while they were realized in the city center in the other years. As the time base of the data, local time (LT = UTC + 1 or daylight saving time, UTC + 2) was chosen, because it has been observed that the daily activity of inhabitants significantly influences many atmospheric processes in urbanized areas [34][35][36]. The particle number size distributions were determined by a laboratory-made flow-switching-type differential mobility particle sizer (DMPS) [37,38]. The system operates in the electrical mobility diameter range from 6 to 1000 nm, in the dry state of particles (relative humidity, RH < 30%), in 30 channels, with a time resolution of 8 min [32,37]. Its main components include a Ni-60 radioactive bipolar charger, a Nafion semi-permeable membrane monotube dryer, a 28-cm-long Vienna-type differential mobility analyzer, and a butanol-based CPC (TSI model 3775, Shoreview, MN, USA). The instrument was operated with two sets of flows: in the high-flow mode, the aerosol flow rate was 2.0 L min −1 , and in the low-flow mode, it was 0.31 L min −1 , while the sheath air flow rates were 10 times larger than the sample flows. The measurements were performed continuously according to the international technical standards [39,40]. The CPC instrument (TSI model 3752, Shoreview, MN, USA) operated with an aerosol inlet flow of 1.5 L min −1 and measured concentrations of particles with diameters above 4 nm using butanol as a working fluid. Mean particle number concentration data were extracted from its database with a time resolution of 1 min. The data were utilized for quality control of the integrated DMPS data. The concentrations of SO 2 ; NO, NO x , and NO 2 ; O 3 ; CO; and PM 10 mass and PM 2.5 mass were measured by UV fluorescence (Ysselbach 43C, Budapest, Hungary), chemiluminescence (Thermo 42C, Waltham, MA, USA), UV absorption (Ysselbach 49C, Budapest, Hungary), IR absorption (Thermo 48i, Waltham, MA, USA), and beta-ray attenuation (two Environment MP101M instruments with PM 10 and PM 2.5 inlets) methods, respectively, with a time resolution of 1 h. The experimental data were acquired from the closest measurement stations of the National Air Quality Network in Budapest, which are located 4.5 km from the city center site and 6.9 km from the regional background site in the prevailing upwind direction [41]. Most meteorological measurements for the city center took place on-site. Air temperature (T), RH, wind speed (WS), and wind direction (WD) data were obtained by standardized meteorological methods (HD52.3D17, Delta OHM, Padova, Italy, and SMP3 pyranometer, Kipp and Zonen, Delft, The Netherlands) with a time resolution of 1 min (except for Y1, when it was 1 h). The meteorological data for the near-city background were measured using a mobile meteorological station installed at the measurement location at a height of ca. 2 m above the ground, with a time resolution of 10 min. The data coverage for particle number concentrations, meteorological data, and criteria air pollutants was 94%, >90%, and >85%, respectively, over the whole interval.
Data Treatment

The measured DMPS data were inverted into discrete size distributions, which were utilized to calculate particle number concentrations in the diameter ranges 6-25 nm (N6-25), 25-100 nm (N25-100), 6-100 nm (N6-100), 100-1000 nm (N100-1000), and 6-1000 nm (N6-1000). These size intervals were selected to represent various important particle source types. The concentrations N6-25 are associated with NPF events [20,26,42]; particles in the 25-100 nm range are mostly emitted by incomplete combustion (such as vehicular road traffic or household heating) in urbanized areas or generated by condensational growth of newly formed particles; and N100-1000 mostly represents physically and chemically aged, thus regional, particles [34,36]. The concentration N6-100 (of ultrafine, UF, particles) is of special interest, since it is often related to excess health effects, while N6-1000 represents the total particle number.

The DMPS data were also utilized to generate daily particle number size distribution surface plots for identification and classification of the NPF events [43][44][45]. The following classes were defined: event days, non-event days, undefined days, and missing days. The relative occurrence frequency of NPF events (fNPF) was determined individually for each month and year as the ratio of the number of event days to the total number of relevant days.

The importance and contribution of particles generated by NPF events were assessed by nucleation strength factors (NSFs) [34]. There are two factors, which are defined as follows:

NSF_NUC = (N6-100/N100-1000)_nucleation days / (N6-100/N100-1000)_non-event days,
NSF_GEN = (N6-100/N100-1000)_all days / (N6-100/N100-1000)_non-event days, (1)

where the concentrations are mean values over the indicated day classes. It was implicitly assumed that the production of larger particles (>100 nm) by NPF was much smaller than the concentration of UF particles. This is typical for cities, and it can be proved by considering the contributions of UF particles to the total particle numbers [45,46]. The NSF_NUC is determined exclusively for nucleation days, while the NSF_GEN is derived for all available days. The former property represents the concentration increment from NPF on an ordinary nucleation day with respect to N100-1000, while the latter quantity expresses the overall increment in particles in general, thus on an average day [26]. Both NSFs were calculated separately for each month, each measurement year, and the whole measurement interval. Their diurnal variations were also derived.
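To make the treatment above concrete, the following minimal sketch (in Python) computes the size-fractionated concentrations and the two NSFs. The input layout, column names, and day-classification encoding are hypothetical, since the actual data format is not given here, and summing channel columns directly assumes the inversion already yields per-channel number concentrations rather than dN/dlogDp densities.

```python
import pandas as pd

def size_fraction(scans: pd.DataFrame, d_min: float, d_max: float) -> pd.Series:
    """Sum per-channel concentrations (cm^-3) over channels whose midpoint
    diameter (nm, encoded in the column names) lies in [d_min, d_max)."""
    d = scans.columns.astype(float)
    return scans.loc[:, (d >= d_min) & (d < d_max)].sum(axis=1)

def mean_ratio(daily: pd.DataFrame, mask: pd.Series) -> float:
    """Ratio of the mean N6-100 to the mean N100-1000 over the selected days."""
    sel = daily.loc[mask]
    return sel["N6_100"].mean() / sel["N100_1000"].mean()

def nucleation_strength_factors(daily: pd.DataFrame) -> tuple[float, float]:
    """daily: one row per classified day with daily mean concentrations in
    columns 'N6_100' and 'N100_1000' (cm^-3) and a label column 'cls' taking
    'event' or 'non-event'. Returns (NSF_NUC, NSF_GEN) per Equation (1)."""
    non_event = mean_ratio(daily, daily["cls"] == "non-event")
    nsf_nuc = mean_ratio(daily, daily["cls"] == "event") / non_event
    nsf_gen = mean_ratio(daily, daily["cls"].isin(["event", "non-event"])) / non_event
    return nsf_nuc, nsf_gen
```

Monthly and annual values would then follow by grouping, e.g. daily.groupby(daily.index.year).apply(nucleation_strength_factors).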
Atmospheric Concentrations and Their Ratios

Basic descriptive statistics of the particle number concentrations in the different relevant size fractions are summarized in Table 1. They demonstrate that the minimum concentrations in a selected size range were usually similar to each other over the years, while the other concentrations tended to be distinctly larger in the city center (Y1, Y3-Y9) than in the near-city background (Y2). This is due to the larger anthropogenic sources of particles in the city than in its surroundings. Vehicular road traffic represents the major contribution, and the traffic intensity is larger in the city center than in the near-city background. From this aspect, particle number concentration is indeed a valuable and useful quantity for expressing several important urban anthropogenic activities. The ranges of the concentrations were rather large, spanning approximately a factor of 300. This is a specific feature of particle number concentrations, and it is mainly related to the dynamic character of their sources, atmospheric processes, and the relatively short residence time of smaller (UF) particles.

It is seen that there was a decreasing trend in N6-1000 and N6-100 for the city center from 2008 over the years, which was likely interrupted in 2016. The change in N100-1000 appeared to be more modest, which can be explained by the baseline character of this size range. The concentrations are in line with or similar to those in other large European cities [11,[47][48][49][50][51][52][53]. N6-100 represents the major contribution to the total particle numbers both in the city center and in the near-city background [41]; its mean contributions and SDs were (78 ± 10)% and (67 ± 16)%, respectively. In the city center, this is mostly due to vehicular road traffic and other high-temperature emission sources. In urban areas, N25-100 is also mainly composed of particles from high-temperature emission sources, which explains its larger temporal variability. The mean contributions and SDs of the chemically aged (regionally representative) particles to the total numbers were (22 ± 10)% and (33 ± 14)%, respectively.

Time Series

The annual time series of particle number concentrations in the different size ranges during all measurement years are shown in Figure 1. We can see that there was no considerable monthly or seasonal variability of the concentrations during a year. This is different from PM mass concentrations. It can be explained by the relatively constant sources, atmospheric processes, and sinks of particles over a year. The concentrations N6-25, N25-100, and N6-1000 showed larger temporal variability, while the changes in N100-1000 were more modest. This can be explained by the different major sources of the size-fractionated particles, as discussed in Section 3.1.

Table 1. Ranges, medians, and means with standard deviations (SDs) of particle number concentrations N6-25, N25-100, N6-100, N100-1000, and N6-1000 (all in 10³ cm⁻³) in the city center and near-city background for the measurement years Y1 to Y9.

The diurnal distributions of the air pollutants and of the particle number concentrations in different size ranges are presented in Figure 2. We can see that the concentrations of SO2 and PM2.5 mass did not change substantially during the day. They seem to have little relation to vehicular road traffic. This is because the sources of fine particles in Budapest are mostly related to non-vehicular processes [33]. The diurnal variation in PM10 mass changed slowly and modestly during the day, with tendencies for lower concentrations overnight and higher levels during daylight. This can be related to its major sources, such as the resuspension of urban dust, emissions from material wear of moving parts of vehicles, and the ageing of exhaust particles from vehicles, together with its relatively long atmospheric residence time [54]. At the same time, the NO, NO2, and CO concentrations show two peaks corresponding to the typical behavior of traffic [55]. These pollutants are mainly related to vehicular road traffic, and the peaks appeared around the typical rush hours in Budapest [55]. This property can be identified most evidently in the case of N25-100. This is explained by the fact that the main source of this size range (25-100 nm) in urban environments is incomplete combustion [36], thus vehicle traffic. In the diurnal pattern of N6-25, three peaks were identified; the first and the last can be associated with road traffic, while the peak around noon is linked to NPF events.
The diurnal variation of N100-1000 was more constant, and it showed only modest variability, as expected. All these features are reflected in the diurnal variation of the total particle number. The diurnal series clearly showed associations between particle number concentrations and vehicle traffic. Furthermore, it was estimated that ca. 70% of total particle numbers in cities are generated by emissions [24]. This can, however, sensitively depend on and change with the local and regional atmospheric properties and conditions; therefore, its contribution or importance is challenging to estimate. Instead, we assessed the contributions from NPF events in the present study, since this process is related to a larger region, at least in the study area, i.e., in the Carpathian Basin [38].

New Aerosol Particle Formation and Growth Events

The total number of NPF events in 9 years was 663; the annual mean and SD were 74 ± 17. This resulted in an overall mean relative occurrence frequency and SD of (21 ± 4)%, meaning that the phenomenon occurred at a considerable rate: there was an NPF event every fifth day on a yearly scale. This also suggests that NPF events are an important source of particles even in cities. The numbers of nucleation days for the different months in each year are summarized in Table 2. The distributions of the monthly mean counts exhibited obvious differences, while the annual total counts were similar to each other in various aspects, except for year Y5, which showed the smallest annual count. We could not find a plausible explanation for this extremely small value. The other differences can likely be explained by substantial changes in multifactorial conditions, by the complex interplay among the influential environmental variables over a year, and by inter-annual differences in chemical, aerosol, and meteorological properties and in biogenic cycling [56][57][58]. The number of NPF event days at the near-city background location (Y2) was the highest among them. The realization of NPF events depends on the competition between the sources and sinks of condensing vapors, expressed by their ratio [58]. Smaller source intensities can still create favorable conditions for NPF occurrence if the condensation sink (CS) is even lower. This is expected to be the typical case for the near-city background site.

The mean fNPF values were calculated separately for days of the week and for workdays and holidays in the city center for 8 years (Figure 3). It can be seen that the values for the holidays were significantly larger than for the workdays or for the overall mean. On weekends, especially on Sundays, some anthropogenic sources are substantially reduced. For instance, road traffic on weekends is decreased by approximately 30% with respect to workdays [55]. This results in a smaller CS on these days, which appears to be favorable for NPF event occurrence. As far as the workdays are concerned, they seem to fluctuate around their mean value. Mondays seem to exhibit a somewhat lower frequency, which can likely be explained by the usually larger traffic intensities on this day than on the other workdays. These results demonstrate that anthropogenic activities, in particular vehicular road traffic, do affect the urban NPF phenomenon.
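The occurrence-frequency statistics above lend themselves to a short sketch as well; the day-classification table and its column names are again hypothetical, and the convention that "relevant days" means all classified (non-missing) days is an assumption.

```python
import pandas as pd

def npf_frequency(days: pd.DataFrame) -> float:
    """days: one row per classified day (DatetimeIndex), with a column 'cls'
    in {'event', 'non-event', 'undefined'}; missing days are simply absent.
    f_NPF = event days / all classified days (assumed convention)."""
    return float((days["cls"] == "event").sum()) / len(days)

# Monthly relative occurrence frequencies:
# monthly_f = days.groupby(days.index.month).apply(npf_frequency)

# Weekday split (public holidays would need a separate calendar; weekends
# are used here as a stand-in for the holiday class):
# weekend = days.index.dayofweek >= 5
# f_holiday = npf_frequency(days[weekend])
# f_workday = npf_frequency(days[~weekend])
```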
Contributions of NPF Events

The mean NSFs calculated separately for the city center and near-city background are summarized in Table 3. New particle formation (represented by the NSF_NUC) increased the particle number concentrations by a factor of 1.7 in the city center and by ca. 2 in the near-city background. The importance of NPF over a longer time interval was demonstrated by the mean NSF_GEN values. In the city center, 12% of UF particles were generated by NPF as a single source, while it produced 34% of UF particles in the near-city background. The relatively large SDs point to the changing intensity of NPF events. In addition, it is also seen that both the NSF_NUC and NSF_GEN values were systematically larger for the background than for the center. This is mainly due to the higher particle emissions in the city center, which are caused by anthropogenic activities.

Figure 3. Relative occurrence frequency of new aerosol particle formation in the city center over 8 measurement years, separately for days of the week and for workdays and holidays. The solid horizontal line indicates the overall mean, and the yellow band shows its standard deviation. The actual counts of the new particle formation event days are written above the columns.

The annual mean NSFs for the city center are presented in Figure 4. The NSF_GEN seems to be constant during the whole measurement interval. It is worth mentioning that the occurrence frequency did not change significantly during the investigated years (except for Y5, see Section 3.2). However, the annual mean NSF_NUC values appear to indicate a slightly increasing tendency, with a slope and SD of (2.6 ± 2.5)% annually. This may be just a fluctuation, or it may suggest that either the contribution of NPF events became larger or the general level of N100-1000 decreased over the years. The latter tendency cannot be confirmed in the corresponding concentration data from Table 1, and moreover, the possible changes in the annual mean NSF_NUC and N100-1000 are not coherent with each other. This hints at a possibly increasing importance of NPF events with respect to emission sources in Budapest. The hypothesis should be further investigated by independent evaluation methods and on longer data sets.
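A slope with a standard deviation, such as the (2.6 ± 2.5)% yr⁻¹ quoted above, can be obtained from an ordinary least-squares fit to the annual means. The sketch below uses illustrative placeholder values, not the measured annual NSF_NUC means, and normalizing the fitted slope by the mean to express it as a relative annual change is an assumption about how the percentage was defined.

```python
import numpy as np

# Illustrative placeholder values only -- NOT the measured annual means.
years = np.arange(2008, 2017, dtype=float)
nsf_nuc = np.array([1.60, 1.62, 1.66, 1.63, 1.70, 1.68, 1.74, 1.71, 1.76])

# Linear fit with covariance matrix; slope uncertainty from its diagonal.
(slope, intercept), cov = np.polyfit(years, nsf_nuc, deg=1, cov=True)
slope_sd = float(np.sqrt(cov[0, 0]))

# Express the trend as a relative change per year.
rel = 100.0 * slope / nsf_nuc.mean()
rel_sd = 100.0 * slope_sd / nsf_nuc.mean()
print(f"trend: ({rel:.1f} +/- {rel_sd:.1f})% per year")
```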
The monthly mean NSF_GEN values changed modestly on an annual scale. Their distribution basically followed the shape of the NPF occurrence frequency, which was shown in [58]. This can be explained by the fact that the monthly NPF contributions on a general day are expected to be larger if the NPF frequency is higher. It is more interesting to investigate the variations in the monthly mean NSF_NUC, which are displayed in Figure 5. There seems to be a systematic tendency for lower values (down to 1.1) in summer and larger values (up to 2) in late autumn and early winter. This is, however, a consequence of the seasonal trend in the N100-1000 level. Its diurnal variations for the NPF days and non-NPF days in the non-winter seasons are presented in Figure 6a. The differences between the curves were negligible. In winter, the shapes were also similar to each other, while their magnitudes were considerably different (Figure 6b): the concentrations were substantially smaller on the NPF days than on the non-NPF days. In winter, the GRad and the biogenic precursor gases in the air are decreased [58], and therefore, the source strength of the condensing vapors is expected to be lower as well. Consequently, NPF events in this season occur only, or preferably, if the sink term (related mainly to the existing regional aerosol particles, thus to N100-1000) is even smaller. This explains the difference between the diurnal curves for the NPF and non-NPF days, and the elevated NSF_NUC values in winter (see Equation (1)). The NPF events preferentially took place on those winter days when the particle number concentrations were relatively small, and, therefore, the NPF increased the existing low concentration levels by a larger factor.

The mean diurnal variations in the NSF_NUC and NSF_GEN in the city center are shown in Figure 7a,b. The averaging was performed for the month in which the count of nucleation days (or the fNPF) was the largest in each year; this was typically March or April. The curves exhibited a single peak at noon. The baseline of the peaks from 00:00 to 07:00 and from 19:00 to 24:00 was around unity. In some years, the baseline of the NSF_NUC was elevated, which can be explained by the fact that particle growth can continue until the late morning of the next day; thus, NPF can influence the N6-100 concentrations even on the following morning. Furthermore, we can observe that the peaks have a longer tail on the afternoon side due to the particle growth process. The elevated baseline is a real effect, and it should be included in deriving the mean NSF_NUC. The contribution of NPF to the regional concentration level (N100-1000) is largest at noon, when it reaches a factor of 2.0 to 3.5 (typically 2.5). This is a considerable enhancement, although it lasts for only a few hours. The differences among the annual mean values were likely caused by year-to-year variability. The diurnal variation in NSF_GEN also exhibited a single peak, with a maximum at noon and a tail in the early afternoon. The maximum values represented concentration contributions of 50% to 100% due to NPF events for a limited time interval. This means that NPF events make an important contribution to UF particles during midday even in the city center. The values are lower estimates, since a considerable part of the N100-1000 particles can also be produced by NPF from previous days.

It is informative to compare the contribution values to the global shares of various source sectors in primary UF particle number emissions to get an idea of the relative extent of our results. Road transport, power production, and residential heating combustion are the three largest contributors to primary UF particles, with shares of 40%, 20%, and 17%, respectively [24]. The actual contributions can vary in different parts of the world and with economic development.

Summary and Concluding Remarks

Particulate matter is one of the criteria or key air pollutants (if not the most relevant species), and it represents the largest health risk for humans in the world. Its monitoring, quantification, and legislation are based on PM mass. It is, however, increasingly recognized that particle number concentrations in urban and industrial areas are important and valuable additional metrics for both health risk and some environmental considerations. As a consequence, particle number concentrations have been proposed to complement the air pollutants monitored at present. We showed here that the total number of particles and, in particular, the concentrations in several size fractions are very useful quantities for these goals.
The former property can easily be measured for long-term monitoring purposes, while the latter quantities are better determined as part of background research studies preparing the actual monitoring activity, in an occasional or expedient manner, rather than through continuous observations. This can create a nice example of mutually beneficial and close cooperation between researchers and regulatory bodies. Ultrafine particles make up a very considerable portion, typically 70-80%, of total particle numbers. The atmospheric residence time of these particles is relatively short, and their concentrations can change rapidly in time and space. Therefore, they reflect the active source processes, atmospheric transformations, and sinks of particles in a dynamic way. This is an important advantage of particle numbers with respect to PM mass as an air quality metric. Particle number characteristics in Budapest, including the concentration levels (annual medians of approximately 9 × 10³ cm⁻³), time trends (no strong seasonal dependency), and diurnal variations (with a remarkable temporal pattern), are similar to those in most large continental European cities. Their main source types include vehicular road traffic (and other high-temperature emission sources, such as household and residential heating and cooking) and atmospheric NPF. The latter process is usually of regional character, and therefore, its contribution seems to be more readily assessable and quantifiable. It was demonstrated that the occurrence frequency of NPF events is considerable even in large cities, with an annual mean value of ca. 20%. We also estimated that NPF events increase the ratio of UF particles with respect to the regional particle numbers (N100-1000) by 12% in the city center and 37% in the near-city background. At the same time, the pre-existing UF concentrations are doubled on NPF event days. The contributions exhibit substantial diurnal and seasonal variations.

Data Availability Statement: The observational data are available from Imre Salma upon reasonable request.
2022-04-04T15:05:05.898Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "35465f41d2e5d50481245333f2bcc7fddc66f6dc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4433/13/4/570/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2a583ffe89965cfb91081ba8d19a31d6b6e8da05", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
226737910
pes2o/s2orc
v3-fos-license
The biological activity of subspecies Trichoderma harzianum against Fusarium oxysporum, the causative agent of fusarium wilt of cucumber in vitro

The effect of the Trichoderma fungus strains Trichoderma atrobrunneum VKPM F-1434, Trichoderma harzianum 5/14, and Trichoderma lixii T4/14 on the population numbers of the pathogenic fungi Fusarium oxysporum isolate B/14, Fusarium oxysporum isolate MOS509, and Fusarium oxysporum isolate IMI 58289 was studied in vitro. It was found that the Trichoderma atrobrunneum strain VKPM F-1434 showed the highest degree of inhibition, 100%, on day 10 of co-cultivation with the phytopathogenic microorganisms. The study revealed that all Trichoderma species are capable of producing lytic enzymes. Trichoderma atrobrunneum strain VKPM F-1434 exhibits strong lipase and chitinase activity and medium proteinase activity. In addition, Trichoderma atrobrunneum strain VKPM F-1434 has a growth-promoting ability, which was reflected in the germination of seeds of the cucumber "German F1". The maximum values recorded were 98.4% for germination energy and 100% for germination.

Introduction

Fusarium wilt is one of the most harmful and widespread diseases of agricultural plants in the world, caused by soil-borne phytopathogenic fungi, including representatives of the genus Fusarium. Infection can occur at any stage of crop growth and can seriously reduce yield and degrade fruit quality, especially in protected ground [1]. It is economically feasible to use biological control methods, which are environmentally friendly, in the integrated fight against fusariosis [2]. The biological protection of plants from pathogens of Fusarium infections is becoming increasingly important in the production of greenhouse cucumber (Cucumis sativus). Fusarium wilt of cucumber (tracheomycosis), according to many authors, kills on average 10-15% of cucumber plants, and in some years up to 65%. The rate of wilting caused by fungi of the genus Fusarium can be successfully reduced using antagonistic microorganisms such as Bacillus, Enterobacter, and Pseudomonas, which are the main root colonizers and can stimulate plant protection [3]. Many researchers have shown that fungi of the genus Trichoderma spp. are used successfully in the biological control of Fusarium [4]. They employ five main mechanisms against phytopathogens, including mycoparasitism through the secretion of hydrolytic enzymes, competition for nutrients, antibiosis through the production of secondary metabolites, stimulation of plant growth, and induction of systemic disease resistance in plants [5]. To date, more than 340 species of Trichoderma have been described [6,7], which have potential biological activity against phytopathogenic fungi. The most commonly used species are T. asperellum, T. atroviride, T. harzianum, and T. polysporum [8]. Thus, the foregoing demonstrates the relevance of searching the genus Trichoderma for effective biological control agents against fungi of the genus Fusarium, the causative agents of fusarium wilt of cucumber. The aim of this study was to assess the effect of exometabolites of Trichoderma atrobrunneum VKPM F-1434 and other related strains of microorganisms against fungi of the genus Fusarium, followed by assessment of their effect on the growth and development of cucumber seeds in vitro and in vivo.
Object of research

For the experiment, we used live cultures of Trichoderma fungi: Trichoderma atrobrunneum VKPM F-1434, Trichoderma harzianum 5/14, and Trichoderma lixii T4/14; and phytopathogenic fungi of the genus Fusarium: Fusarium oxysporum B/14, Fusarium oxysporum MOS509, and Fusarium fujikuroi IMI 58289 from the academic collection of the Department of Biotechnology of the Oryol State Agrarian University, stored long-term in a refrigerator at +4 °C. Before 2015, all of the Trichoderma fungi listed here were assigned to the single species Trichoderma harzianum [9]. Cucumber seeds of the "German F1" variety were used as objects of research. "German F1" is a universal hybrid variety suitable for growing in greenhouses and on farms. Statistical processing of the results was performed using the Microsoft Office 2010 (Excel) package. All experiments were carried out in five-fold repetition.

Determination of antagonistic activity of fungi of the genus Trichoderma

To assess the degree of antagonistic activity and the mechanisms of action on phytopathogens, the influence of the antagonists Trichoderma atrobrunneum strain VKPM F-1434, Trichoderma lixii isolate T4/14, and Trichoderma harzianum isolate 5/14 on the phytopathogenic fungi Fusarium oxysporum isolate B/14, Fusarium oxysporum isolate MOS509, and Fusarium fujikuroi isolate IMI 58289 was studied in vitro by the dual culture method. The results showed that in the control all phytopathogenic microorganisms grew intensively and occupied almost the entire area of the Petri dish, forming a well-developed aerial mycelium with bright pigmentation (Table 1).

Determination of the activity of enzymes of fungi of the genus Trichoderma associated with mycoparasitism

Inhibition of pathogen growth is a generic feature of these fungi and is due to the ability of the mycoparasite to hydrolyze the cell walls of phytopathogenic fungi and use them as a substrate by means of the enzymes it produces and various secreted compounds [14]. The results of determining the enzymatic activity of the studied antagonist strains are presented in Table 2. In our study, all Trichoderma species were capable of producing lytic enzymes. In T. atrobrunneum strain VKPM F-1434, the lipase and chitinase activity was strong and the proteinase activity medium. T. lixii strain T4/14 and T. harzianum strain 5/14 had medium lipase and chitinase activity and weak proteinase activity, which confirms the reported mycoparasitic ability of strains within these species [15].

Determination of the stimulating and fungicidal effect of presowing treatment of cucumber seeds with spore suspensions of the studied antagonists in vitro

Many researchers have shown that microorganisms with antagonistic activity can stimulate the growth and development of various plants, as well as change the soil microbiota, thereby improving the phytosanitary state of the soil [16]. In our studies, particular importance was attached to the use of indigenous antagonist strains, since their biological activity is directly related to the habitat and to the entire soil complex as a whole [17]. Based on the results on the antagonistic, mycoparasitic, and enzymatic activity of the studied antagonist microorganisms, in vitro experiments were performed on their ability to stimulate the germination of cucumber seeds. The results are presented in Figure 1.
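The paper does not state the formula behind its degree-of-inhibition values, so the sketch below uses the percent-inhibition expression commonly applied to dual culture assays, I = (R1 − R2)/R1 × 100 (an assumption, not the authors' stated method), together with the mean ± SD over the five replicates mentioned above; the radii are illustrative placeholders.

```python
from statistics import mean, stdev

def percent_inhibition(control_radius_mm: float, treated_radius_mm: float) -> float:
    """Commonly used dual-culture formula (an assumption here, not taken from
    the paper): I = (R1 - R2) / R1 * 100, where R1 is the pathogen's radial
    growth in the control plate and R2 its growth toward the antagonist."""
    return (control_radius_mm - treated_radius_mm) / control_radius_mm * 100.0

# Hypothetical radii (mm) from five replicate plates -- illustrative only.
control = [42.0, 40.5, 43.0, 41.0, 42.5]
treated = [0.0, 0.0, 0.0, 0.0, 0.0]  # complete suppression, as for 100% inhibition

inhibition = [percent_inhibition(c, t) for c, t in zip(control, treated)]
print(f"inhibition: {mean(inhibition):.1f} +/- {stdev(inhibition):.1f} %")
```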
Biocontrol potential of the studied fungi of the genus Trichoderma against fungi of the genus Fusarium in vivo

Trichoderma species produce a large number of water-soluble metabolites, including pyrones, terpenoids, steroids, polyketides, and others [5], which are able to inhibit the growth of plant pathogens in vitro and in vivo. The biocontrol potential of the studied antagonist microorganisms was evaluated in vivo on cucumber microplants against an artificially created infectious background. 50 mL of an aqueous suspension of Fusarium fungi, at a titer higher than that of the antagonists (10⁹ conidia/mL), was added to plastic cuvettes containing 7-day-old cucumber seedlings treated with spore suspensions of the studied antagonist microorganisms. T. atrobrunneum strain VKPM F-1434 and T. harzianum isolate 5/14 stimulated root growth, while T. lixii isolate T4/14 was characterized by stimulation of both root and seedling growth.

It was shown that by the end of the experiment the phytopathogen populations had decreased by 75-80% relative to the initial number of phytopathogenic fungi owing to hyperparasitism by the micromycete T. atrobrunneum strain VKPM F-1434, so that they no longer affected the susceptibility of the cucumber seedlings; at the same time, the number of the introduced antagonist decreased by 52% owing to the reduction of its substrate (the phytopathogens) and the processes restoring the structure of the soil microbial pool. The populations of micromycetes of the genus Fusarium decreased by 58-82% relative to the initial number of phytopathogenic fungi owing to hyperparasitism by the micromycete T. harzianum isolate 5/14, while the number of the introduced antagonist decreased by 64%. The populations of micromycetes of the genus Fusarium decreased by 37-39%, and Fusarium fujikuroi isolate IMI 58289 by 79%, relative to the initial number of phytopathogenic fungi owing to hyperparasitism by the micromycete T. lixii isolate T4/14, which likewise no longer affected the susceptibility of the cucumber seedlings, while the number of the introduced antagonist decreased by 35% (Fig. 2).

Conclusion

The results of tests of the biological activity of antagonist strains can differ significantly between laboratory conditions and the open field, since microbial antagonism in soil is subject to many natural factors and often differs markedly from the antagonism of the same microbes on artificial nutrient media [18]. Therefore, the search for antagonist microorganisms should include studies of the interaction of microorganisms both under controlled conditions and in the natural environment. Given the widespread prevalence of fungal diseases of cucumber, especially in protected ground, it is particularly important to select indigenous antagonist strains that can efficiently reduce phytopathogen numbers and at the same time stimulate the growth and development of an environmentally friendly crop. The Trichoderma atrobrunneum VKPM F-1434 strain meets these requirements.
2020-06-25T09:06:04.584Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "adb48e68187327e04a333b3fb253add284036bf7", "oa_license": "CCBY", "oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2020/05/bioconf_bpp2020_00021.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "109a0b47521412beaee8b4c7ed273a5170b1f829", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
234153593
pes2o/s2orc
v3-fos-license
Existing potentials in Insect Growth Regulators (IGR) for crop pest control

The aim of this review is to explore the potentials existing in insecticides considered Insect Growth Regulators (IGR) for the control of insects considered crop pests, with attention to the main mechanisms of neuroendocrine modification and to the development and viability of the species used as study models. The data search on digital platforms, together with the screening of materials about crop pests, resulted in 74 references on IGRs and their potentials. The analysis of the information found showed that compounds belonging to the chitin synthesis inhibitor class were the most used in the works; the orders Hemiptera, Lepidoptera, Coleoptera, Orthoptera, Thysanoptera, and Diptera were represented in the studies. The main types of activity compiled were morphological and anatomical modifications, reproductive modifications, alterations in developmental stage, alterations in developmental period, ovicidal activity, larvicidal/nymphicidal activity, and phagoinhibition. The knowledge gathered about the main pests used as study models, the main IGR compounds, and their biological potentials allows an evaluation of their use as an informative source for crop pest control methods.

Introduction

The agricultural sector makes a great contribution to productivity, which can be expressed in values corresponding to roughly three quarters of the worldwide economy (FAO, 2013). An insect is considered an agricultural pest when its population density is high enough to cause financial losses in important crops. Sharma et al. (2017) emphasize the damage caused by agricultural pests on a global scale, with estimated losses of 18-20% of annual production, or $470 billion. The use of chemical substances for insect population control, which has among its purposes improving the economic return, can often result in environmental toxicity problems (Moreira, et al., 1996). Scientific progress in recent decades has made it possible to explore alternative methods of regulating insect development, based on compounds with more selective mechanisms and lower toxicity to non-target organisms, which could overcome problems caused by the use of organochlorine and organophosphate insecticides, characterized as first-generation insecticides (Faria, 2009). Insect growth regulators (IGRs) play a role in regulating essential physiological processes in insects; they are not necessarily toxic and may alter specific pathways of hormonal control related to molting, metamorphosis, and reproduction (Tunaz & Uygun, 2004). Since the discovery of their potential, credited to the "paper factor" reported by Sláma & Williams (1965), IGRs have been commercialized at an industrial level and widely used in pest control. Although several of these compounds have been explored in studies, few materials gather information about these substances in a concise way and show their evaluated potentials, particularly regarding the study of insect control in agriculture. Therefore, the aim of this article is to carry out a review of IGR use in the control of insects considered crop pests, based on the analysis of mechanisms concerning changes in hormonal and developmental pathways, the exploration of the main potentials of the existing biological activities, and the quantification and qualification of the analyzed data.
Methodology

For the data search, which was carried out from 2018 to 2020, the platforms Periódicos Capes, National Center for Biotechnology Information (NCBI), and Scientific Electronic Library Online (SciELO) were used. Combinations of terms from the main classes of IGRs, their mechanisms of action, and their potentials against agricultural pests were used, such as "ecdysteroid agonists against agricultural pests", in which the first and second terms were exchanged according to the IGR class. An initial screening excluded works on insects that are not agricultural pests [e.g., Rhodnius prolixus (Hemiptera: Reduviidae)]. Subsequently, a second screening was carried out to reduce the number of sources on IGR use in agricultural pests, excluding works on insects considered stored product pests [e.g., …]. The study gathered a total of 107 references and, among them, 74 deal specifically with the potential of IGRs in insects considered crop pests. The information includes studies ranging from classic works dating from 1934 to studies from 2019.
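The combination scheme described above can be illustrated with a short sketch; the term lists are assembled from classes mentioned in this review, and the exact query set used by the authors is not specified, so this is only an illustration.

```python
from itertools import product

# IGR classes mentioned in this review (an illustrative subset).
igr_terms = [
    "juvenoids", "anti-juvenoids", "ecdysteroid agonists",
    "ecdysteroid antagonists", "chitin synthesis inhibitors",
    "PTTH synthesis inhibitors",
]
target_terms = ["agricultural pests", "crop pests"]

# One query per (class, target) pair, following the stated pattern.
queries = [f"{igr} against {target}" for igr, target in product(igr_terms, target_terms)]
for q in queries:
    print(q)
```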
Hormonal control modifications by IGR activities

Insects have physiological processes that are directly related to endocrine centers and to integrated, correlated hormonal responses. In neuroendocrine control, represented in Figure 1, the neurosecretory cells (NSC) present in the insect brain are neurons specialized in hormone production, which project their axons into a series of endocrine glands and neurohemal organs (Hartenstein, 2006). While the endocrine glands are structures adapted to produce and release hormones into the circulatory system, neurohemal organs store hormones until neuroendocrine signals mediate their release (Gullan & Cranston, 2014). The prothoracicotropic hormone (PTTH) is produced by NSC and stored in the neurohemal organ corpora cardiaca (CC); it is later released to stimulate the prothoracic glands and the consequent production of ecdysone (Ec). This Ec is in its inactive form and is converted to 20-hydroxyecdysone (20HE) in the epidermal cells by a 20-hydroxylase (Song, et al., 2017); 20HE circulates in the hemolymph and starts a new cycle of division of the epidermal cells to form a new cuticle (Klowden, 2013). The endocrine gland corpora allata (CA) is responsible for the production and release of juvenile hormone (JH), first described by Wigglesworth (1934), a sesquiterpene whose function is to inhibit genes that promote the development of adult characteristics; it participates in molting and metamorphosis (Klowden, 2013; Gullan & Cranston, 2014) and, later, in reproductive mechanisms. According to Klowden (2013), the presence of JH during an Ec-induced molt results in the same type of cuticle, while the absence of JH in the presence of Ec stimulates the reprogramming of epidermal cells to produce proteins specific to the next stage and the completion of the metamorphosis process.

Substances characterized as IGRs can interfere with the neuroendocrine balance existing in insects, acting as agonists or antagonists of the hormones involved in the main physiological processes of development. Among the IGRs, there are juvenoids, anti-juvenoids, ecdysteroid agonists and antagonists, chitin synthesis inhibitors, and PTTH synthesis inhibitors.

Juvenoids, also known as JH mimics or JH agonists, can prolong the nymph/larva/pupa development period by increasing JH levels (Gallo, et al., 2002). Examples of JH analogues are substances such as methoprene and pyriproxyfen, whose capacity for prolonging the larval period is proven (Miranda, et al., 2002). Anti-juvenoids, also known as JH antagonists or precocenes, were first recognized by Bowers et al. (1976) in studies with Oncopeltus fasciatus (Dallas, 1852) (Hemiptera: Lygaeidae), who observed nymphs developing precociously into adults, and sterile adults. They are able to interfere with the synthesis of JH through injury to the CA, maintaining high levels of PTTH and stimulating the reprogramming of epidermal cells for an early metamorphosis (Gallo et al., 2002). Hypotheses suggest that anti-juvenoids compete with JH for binding to carrier proteins, reducing the activity of JH (Staal, 1986; Tunaz & Uygun, 2004). Reports in the literature describe the role of precocenes in CA degeneration (Ergen, 2001; Gotoh, et al., 2008), and of allatostatins in inhibiting JH synthesis in the CA (Woodhead et al., 1989). Chitin synthesis inhibitors (CSI) comprise a group of substances called benzoylphenylureas, with an inhibitory mechanism acting on chitin synthetase (Gallo, et al., 2002). According to Tunaz and Uygun (2004), this inhibition can occur in three ways: by inhibiting chitin synthase; by inhibiting proteases that activate chitin synthase; and by inhibiting UDP-N-acetylglucosamine membrane transport. Among the various substances known for CSI activity are diflubenzuron, triflumuron, and lufenuron.

Compounds that act as IGRs have a wide range of action in insect control, representing potential regulators of the development of disease vectors, urban pests, and stored product pests. IGR activities in crop pests can demonstrate several types of biological potentials, which are elucidated below and are also organized and visualized in Table 1.

Data quantification and qualification

The present review covers 45 compounds characterized as IGRs, cited around 119 times over the 74 references used to compose the study of potentials. Among the existing classes of regulators, CSI proved to be the most explored in the works, accounting for 45.38% of the substances with explored potentials. These are compounds that have been widely commercialized for some decades and that show interference activities in chitin formation, as well as in insect reproduction and development (Merzendorfer, 2012). Thereafter come JH analogs (16.81%), JH antagonists (12.60%), ecdysteroid agonists (11.76%), azadirachtin and derivatives as PTTH inhibitor representatives (6.72%), and ecdysteroid antagonists (6.72%). Among the most cited substances were diflubenzuron, represented in nine works; methoxyfenozide, explored in eight studies; lufenuron and azadirachtin, cited in seven studies each; and novaluron, hexaflumuron, precocene I, and pyriproxyfen, present in six studies each. The information obtained about the crop pests used as models in the different studies allows an observation of the main orders and species mentioned. Hemiptera was the most represented order in the works (45.76%), followed by Lepidoptera (33.90%), Coleoptera (11.86%), Orthoptera (3.39%), Diptera (3.39%), and Thysanoptera (1.69%).
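Percentage shares like those just quoted reduce to simple category counting over the citation records; the record layout below is hypothetical, and the three entries are placeholders, not the review's actual tabulated data.

```python
from collections import Counter

# Hypothetical records: one tuple per compound citation gathered from the
# 74 references, as (compound, igr_class, insect_order). Placeholders only.
records = [
    ("diflubenzuron", "CSI", "Lepidoptera"),
    ("methoxyfenozide", "ecdysteroid agonist", "Lepidoptera"),
    ("precocene I", "JH antagonist", "Hemiptera"),
]

def shares(values: list[str]) -> dict[str, float]:
    """Percentage share of each category among all records."""
    counts = Counter(values)
    total = sum(counts.values())
    return {k: round(100.0 * v / total, 2) for k, v in counts.items()}

print(shares([r[1] for r in records]))  # shares per IGR class
print(shares([r[2] for r in records]))  # shares per insect order
```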
The orders present in this review coincide with those reported by Culliney (2014), who characterizes them as the most important orders of agricultural pests. Among the species, Spodoptera littoralis (Boisduval, 1833) (Lepidoptera: Noctuidae) was the most used in the studies, being mentioned in nine articles, which reflects its importance as a subject for control-method studies, as it is one of the lepidopterans with the greatest economic impact on plantations of cotton, tomatoes, tobacco, and maize (CABI, 2019). Orthoptera demonstrated great relevance in the studies, being represented by insects of economic and historical importance due to their polyphagy and migratory capabilities. Schistocerca gregaria (Forskål, 1775) (Orthoptera: Acrididae) and Locusta migratoria (Linnaeus, 1758) (Orthoptera: Acrididae) appeared in seven and five studies, respectively. The mechanisms of action exerted by IGRs on crop pests revealed a diversity of biological potentials. Larvicidal/nymphicidal potentials were demonstrated in 27.89% of the results, and morphological/anatomical modifications in 23.81%, these being the two major activities found. Reproductive modifications accounted for 18.37% of the results, while phagoinhibitory activities accounted for 12.24%. Alterations in the developmental period appeared in 8.84% of the results; ovicidal potential was observed in 6.12%; and 2.72% of the data analyzed correspond to alterations in developmental stage.

Ovicidal potential

There are methods based on applying IGRs to eggs in order to assess the effects on their viability and hatching capacity. Juvenoids are among the substances that can express these activities. Ascher and Eliyahu (1988) […] (Boina, et al., 2009). Precocenes may also demonstrate ovicidal potential through their mechanisms of JH antagonism and by acting at different stages of egg development. Pener et al. (1986) evidenced the reduction of JH levels by precocene III treatments in 10-day-old eggs of L. migratoria. Kafi-Farashah et al. (2018), in analyses of precocene I activity in Eurygaster integriceps (Puton, 1881) (Hemiptera: Scutelleridae), the Sunn pest, showed greater mortality and susceptibility in older eggs.

Larvicidal/nymphicidal potential

The use of IGRs to control larvae, nymphs, or pupae represents an important strategy to prevent the emergence of reproductively viable adults, affecting the density of insects characterized as agricultural pests. Many of these substances can develop toxic activities when applied to insects at an early stage, affecting their hormone levels and development (Eisa, et al., 1991; Perez-Farinos, et al., 1998; Khajepour, et al., 2012; Li, et al., 2014). JH mimics demonstrate larvicidal/nymphicidal activity in the insect orders Lepidoptera and Hemiptera. In lepidopterans, the substances methoprene, pyriproxyfen, and fenoxycarb were tested in S. littoralis and Spodoptera frugiperda (J.E. Smith, 1797) (Lepidoptera: Noctuidae), the fall armyworm, with fenoxycarb showing the greatest toxicity in S. littoralis larvae (El-Sheik, et al., 2016). The effectiveness of fenoxycarb was also seen in treatments of P. xylostella, showing high toxicity in third instar larvae (Mahmoudvand & Moharramipour, 2015). In Hemiptera, the activities of a series of compounds derived from […] were reported (Suchy, et al., 1968).
Eisa et al. (1991) used C. floridensis as a study model to analyze its development against JH analogues such as fenoxycarb, Pro-done, R-20458, and dofenapine, with fenoxycarb and dofenapine preventing the development of treated nymphs. Pyriproxyfen expressed lethality in D. citri nymphs, which was not observed in treatments of adults (Boina, et al., 2009).

Potential in reproductive modifications

Substances with IGR activity can modify the mechanisms of insect reproduction, altering not only the reproductive system of males and females but also fertility, oviposition, and egg hatching. Ecdysteroid agonists showed responses in lepidopteran agricultural pests. Adel and Sehnal (2000), analyzing the effects of methoxyfenozide in S. littoralis, reported that the insects that managed to escape lethality developed into adults with reduced fertility, with sterility linked to accumulation of the compound in the body and its penetration into the developing gonads. Effects on the reproductive system were also evidenced by Seth et al. (2004), where treatment with tebufenozide in Spodoptera litura (Fabricius, 1775) (Lepidoptera: Noctuidae), also known as the tobacco cutworm, resulted in a decrease of the reproductive potential of males through reduced testicular volume and sperm release. Other mechanisms may be associated with reduced fertilization, as speculated by Sun et al. (2003) who, in studies of methoxyfenozide and tebufenozide application in C. pomonella, pointed out the possibility of inhibition of vitellogenin synthesis in the fat body, a translocation of the substances in the hemolymph, or their absorption by the ovary. Exceptions can occur and demonstrate that not all action exerted by ecdysteroid agonists on the fertilization of crop pests is negative; there are reports that methoxyfenozide increased not only fertility in S. littoralis but also egg laying (Ishaaya, et al., 1995). Oviposition alteration and egg hatching activities can also demonstrate differences in mode of action. While Seth et al. […] Among antagonistic activities, the ecdysteroid antagonist cucurbitacin B caused fertility suppression in the parental generation of A. gossypii and generated effects that influenced its F1 generation (Yousaf, et al., 2018). JH antagonists can stimulate fertility and egg maturation, as seen in treatments of precocene I in Myzus persicae (Sulzer, 1776) (Hemiptera: Aphididae), known as the green peach aphid (Ayyanath, et al., 2015), and in precocene II treatments in S. gregaria (Eid, et al., 1988). On the other hand, Amiri et al. (2010) demonstrated reduced egg laying and hatching in E. integriceps insects submitted to precocene I. JH analogs expressed the ability to alter the fertility of agricultural pests. While the dofenapine treatment in C. […] Transovarian activities related to a low number of nymphs were also seen after the treatment of S. pyrioides adults with azadirachtin (Joseph, 2019). The same substance demonstrated, in a treatment of Chaetosiphon fragaefolii (Cockerell, 1901) (Hemiptera: Aphididae), the strawberry aphid, a 28% reduction in fertility (Bernardi, et al., 2012). Compounds that inhibit chitin synthesis have mechanisms for altering the reproductive system of treated insects.
The evaluation of flucycloxuron on the development of Dysdercus koenigii (Fabricius, 1775) (Hemiptera: Pyrrhocoridae), another of the bugs commonly known as cotton stainers, showed reduced fertility, disintegration of the follicular epithelium, a reduced number of oocytes, and an inhibition of vitellogenesis (Khan & Qamar, 2011). Tests with lufenuron showed a sperm reduction in males of Anthonomus grandis Boheman (1843) (Coleoptera: Curculionidae), the boll weevil, as well as ovarian changes in females (Costa, et al., 2017). The same compound was tested in S. gregaria, revealing ovarian and testicular disruption (Ghazawy, 2012). Tail et al. (2008) observed, in diflubenzuron treatment of S. gregaria, a reduction in egg number per ootheca and, from the observed results, hypothesized that the treatment reduced ecdysteroid levels in the hemolymph, reflected in reduced ovarian synthesis due to alterations in the follicular chambers. Several CSIs demonstrate other types of activity. Novaluron showed effects of reduced egg viability in L. […]

Morphological and anatomical modifications generated by IGR treatments

Insecticides can often have sublethal effects that compromise the morphological and anatomical structures of insects. Many of the abnormalities reported for IGR-treated insects are related to ecdysis failures and wing deformities, which can impair their locomotion, viability, and longevity. According to Bransby-Williams (1971), malformations in the developmental process can influence the dispersion and reproduction of insects. Anti-juvenoids demonstrate an influence on the wing development process and, according to Hardie et al. (1996), can affect morphogenic pathways of induction or inhibition. Aphids showed sensitivity to JH antagonists regarding alar development: an inhibition in M. persicae was evaluated by treatment with precocene III (Hales & Mittler, 1981); an induction of winged offspring was obtained by treatment with precocenes I and III in Acyrthosiphon pisum (Harris, 1776) (Hemiptera: Aphididae), known as the pea aphid, and Aphis fabae Scopoli (1763) (Hemiptera: Aphididae), known as the black bean aphid (Hardie, 1986); and both induction and inhibition were obtained by treatment with the 2,2-dimethyl chromene and 2,2-dimethyl chroman precocene derivatives in A. pisum (Hardie, et al., 1996). Deformational aspects in wings were observed in Orthoptera through precocene I treatment in L. migratoria (Pedersen, 1978). Precocene I demonstrates other types of deforming effects in treatments of crop pests, such as the appearance of a poorly developed ventral thoracic portion in L. migratoria (Pedersen, 1978), and the presence of E. integriceps insects with deformities in the scutellum and wings and with a disproportionately small and narrow abdomen and stomach (Amiri, et al., 2010; Kafi-Farashah, et al., 2018). Precocene II treatment in S. littoralis caused elongation of treated larvae and deformations in pupae and adults (Khafagi & Hegazi, 1999). Other alteration activities attributed to precocenes are modifications in the sensory system due to a reduction in the number of sensillae and disturbances in antennal development (Triseleva, 2003), and changes in the pigmentation of treated insects (Pedersen, 1978; Eid, et al., 1988). Different types of deformations could be seen in treatments with JH analogues in insects belonging to Lepidoptera, Hemiptera, and Orthoptera.
Singh and Kumar (2011), in pyriproxyfen treatments of Papilio demoleus (Linnaeus, 1758) (Lepidoptera: Papilionidae), the lime swallowtail, observed effects that comprised incomplete detachment of the exuvia in the ecdysis process, culminating in mortality, and the appearance of the old head capsule linked to the new one in some larvae, as well as rectal prolapse in larvae, different degrees of melanization in pupae, and deformations in the wings, antennae, and legs of adults. Studies of S. litura with pyriproxyfen and diofenolan demonstrated the presence of "larva-pupa mosaics", that is, insects whose development was altered so that they acquired both larval and pupal characteristics, in addition to pupae with mouthparts and appendages outside the chrysalis and adults with alterations in wings, legs, and genitalia (Singh & Kumar, 2015). Deformations were also reported in L. migratoria (Cotton & Anstee, 1991) and D. citri (Boina, et al., 2009) in tests with methoprene and pyriproxyfen, respectively. The compound methyl farnesoate dihydrochloride induced modifications in Dysdercus fasciatus (Signoret, 1861) (Hemiptera: Pyrrhocoridae), another cotton stainer, such as wings longer than normal and, in some insects, antennae with an extra segment (Critchley & Campion, 1971). Azadirachtin produced, according to Bernardi et al. (2012), a color change and reduced mobility in treated nymphs, while Mordue and Nisbet (2000) pointed out abnormalities in the molting processes of S. gregaria and P. brassicae. […] Other effects of methoxyfenozide were reported by Zarate et al. (2009) in S. frugiperda, where larval treatment not only reduced the size of pupae and females but also culminated in wing malformations in adults. Ecdysteroid antagonists have few evaluations of morphological changes in the literature; however, the work of Bélai & Fekete (2003) serves as a source for this type of potential. In their study of D. cingulatus exposed to azolic compounds, molting failures and insects attached to the old cuticle were reported, in addition to deformed wings that did not cover the entire abdomen. A role of 20HE in the control of wing morphogenesis has been speculated to underlie the alar deformations.

Alterations in developmental period

Certain IGRs have mechanisms that modify insect developmental time, either by delaying the nymph/larva stages or by prolonging them, which constitutes a control method that prevents the appearance of adult insects. Changes in the developmental period are also observed in precocene treatments, with results of delays in molting and metamorphosis (Chenevert, et al., 1980; Eid, et al., 1988), as well as in treatments with juvenoids and ecdysteroid agonists. Delays in ecdysis, larval prolongation, and decreased pupation time were evidenced in pyriproxyfen applications in P. […]

Alterations in developmental stage

The ability of juvenoids to keep endogenous JH levels high can lead to the appearance of supernumerary nymphs, which present some characteristics of an adult insect but are not reproductively mature. In their studies, Singh and Kumar (2015) observed the appearance of adultoids in pyriproxyfen and diofenolan treatments of S. litura, corroborating the characterization of this type of potential in JH-like substances. The compound methyl farnesoate dihydrochloride was able to stimulate the presence of supernumerary insects of D. cardinalis and D. fasciatus (Bransby-Williams, 1971; Critchley & Campion, 1971).
However, the mechanisms that culminate in the appearance of supernumerary nymphs have not yet been fully elucidated, and there may even be other compounds capable of inducing these transformations, an example being the presence of supernumerary nymphs of E. figulilella in treatments with the CSIs hexaflumuron and lufenuron (Khajepour, et al., 2012).

Phagoinhibition

Azadirachtin can be considered one of the main substances studied for its potential to alter insect feeding, integrating inhibitory and physiological processes. According to Mordue and Nisbet (2000), at low concentrations azadirachtin is capable of inducing changes in the chemoreceptors present in the mouthparts, triggering a phagoinhibition that culminates in the starvation of treated insects. Other compounds can exhibit the same type of activity, being distributed over almost all known classes of IGRs. Adel and Sehnal (2000), in studies with S. littoralis, demonstrated feeding prevention and death not only in treatments with azadirachtin but also with the ecdysteroid agonist methoxyfenozide. Methoxyfenozide also caused feeding interruption and weight reduction in S. nonagrioides, in addition to modifications in the digestive tract that restricted food intake (Eizaguirre, et al., 2007). Ascher et al. (1987) analyzed the activity of ecdysteroid antagonists such as withanolide E and 2,3-dihydrowithanolide E in S. littoralis and Epilachna varivestis (Mulsant, 1850) (Coleoptera: Coccinellidae), a pest known as the Mexican bean beetle, and linked the results obtained either to a phago-repellency causing a reduction in the consumption of treated leaves or to toxicity exerted by the compounds. Tallamy et al. (1997) analyzed cucurbitacin B activity on various agricultural pests and proposed a hypothesis of phagoinhibitory potential for mandibulate species and a possible stimulating potential for sucking insects, using as study models Popillia japonica Newman, 1841 (Coleoptera: Scarabaeidae), known as the Japanese beetle; Cerotoma trifurcata (Forster) (Coleoptera: Chrysomelidae), the bean leaf beetle; Trichoplusia ni (Hubner, 1803) (Lepidoptera: Noctuidae), known as the cabbage looper; Gargaphia solani (Heidemann, 1914) (Hemiptera: Tingidae), the eggplant lace bug; Corythucha ciliata (Say, 1832) (Hemiptera: Tingidae), the sycamore lace bug; Peregrinus maidis (Ashmead, 1890) (Hemiptera: Delphacidae), the corn delphacid; Ostrinia nubilalis (Hubner, 1796) (Lepidoptera: Pyralidae), the European corn borer; S. exigua; and A. pisum.

Conclusion

The review demonstrated the different types of potentials existing in IGRs for the control of insects considered crop pests. Mechanisms of hormonal deregulation were presented that culminate in the alteration of processes such as development, molting, metamorphosis, and reproduction, which would allow a reduction in insect density and, consequently, in economic losses. The knowledge gathered on the main pests used as study models and on the main IGR compounds permits an assessment of their use as a source of information for agricultural pest control methods. Further studies will be conducted to obtain a greater understanding of IGRs and their specific mechanisms of action in the development of insects considered pests.
Carving Out the End of the World or (Superconformal Bootstrap in Six Dimensions)

We bootstrap N = (1, 0) superconformal field theories in six dimensions, by analyzing the four-point function of flavor current multiplets. Assuming an E 8 flavor group, we present universal bounds on the central charge C T and the flavor central charge C J . Based on the numerical data, we conjecture that the rank-one E-string theory saturates the universal lower bound on C J , and we numerically determine the spectrum of long multiplets in the rank-one E-string theory. We comment on the possibility of solving the higher-rank E-string theories by bootstrap and thereby probing M-theory on AdS 7 × S 4 /Z 2 .

1 Introduction and summary

Conformal field theories in six dimensions parent a plethora of conformal field theories in lower dimensions through compactification. A primal example is the compactification of N = (2, 0) theories on Riemann surfaces to class S theories in four dimensions [1,2]. While no argument exists for the necessity of supersymmetry, all known interacting conformal field theories in six dimensions are in fact superconformal. 1 It follows from representation theory that these interacting theories have neither marginal nor relevant deformations [5][6][7][8][9][10]. Moreover, no known interacting theory admits a classical limit (hence they are essentially strongly coupled), or arises in the infrared limit of renormalization group flows from a Lagrangian theory. For these reasons, only a scarcity of tools exists for extracting physical quantities in these theories.

The conformal bootstrap aims to extract physical observables in strongly coupled conformal field theories, using only the basic assumptions: unitarity, (super)conformal symmetry, and the associativity of operator product expansions (OPEs) [11][12][13][14]. The past decade has seen substantial developments of numerical bootstrap techniques, most notably the linear functional method, in constraining conformal field theories. In particular, the bootstrap has been applied to N = (2, 0) superconformal symmetry in six dimensions, and substantial evidence was found to support the conjecture that the bootstrap bound on the central charge is saturated by the A 1 theory, which arises in the infrared limit of the worldvolume theory of two coinciding M5 branes [34]. For theories that saturate the bootstrap bounds, the linear functional method determines the scaling dimensions and OPE coefficients of all the operators that contribute to the correlators under analysis [20]. By incorporating more and more correlators, the conformal bootstrap potentially solves these theories completely. 2

In this paper, we apply the conformal bootstrap to study yet another interesting class of six-dimensional conformal field theories, the E-string theories, which arise in the infrared limit of the worldvolume theory of M5 branes lying inside an "end-of-the-world" M9 brane [45,46]. These N = (1,0) theories have tensionless string excitations charged under an E 8 flavor symmetry, and are related to various lower-dimensional conformal field theories. For instance, upon compactification on a circle with the presence of E 8 Wilson lines, they reduce to Seiberg's E n theories in five dimensions [47][48][49]. Compactifying on Riemann surfaces lands us on various N = 1 theories in four dimensions [50,51]. There is a larger class of N = (1, 0) theories coming from F-theory constructions that contains the E-string theories as a subclass [52][53][54][55].
In order to pinpoint specific theories on the solution space of the bootstrap, we need to know the values of certain physical observables. One physical observable that has been computed in known six-dimensional theories is the anomaly polynomial [56][57][58][59][60][61][62]. By superconformal symmetry, the anomaly polynomial uniquely fixes both the central charge C T and the flavor central charge C J , which are in turn related to certain OPE coefficients [63][64][65]. The precise relation between C J and the 't Hooft anomaly coefficients should appear in [66], and the relation for C T was determined in [65,67,68]. Employing numerical bootstrap techniques, we analyze the four-point function of scalar superconformal primaries in the E 8 flavor current multiplets. Based on the results, we propose the following conjecture:

Conjecture 1 The rank-one E-string theory has the minimal flavor central charge C J = 150 among all unitary interacting superconformal field theories in six dimensions with an E 8 flavor group.

We emphasize to the reader that the true virtue of this conjecture is not that we can compute C J by bootstrap, but rather the fact that if the rank-one E-string theory indeed saturates the bootstrap bound, then the entire OPEs between the flavor current multiplets can be determined (up to signs) by the linear functional method. This would be invaluable input towards a full solution of the rank-one E-string theory by the conformal bootstrap. We shall comment on the possibility of solving the higher-rank E-string theories and thereby probing the dual M-theory on AdS 7 × S 4 /Z 2 .

The organization of this paper is as follows. Section 2 reviews the superconformal representation theory of the N = (1, 0) algebra in six dimensions. In Sections 3 and 4, we write down the general form of the four-point function involving 1/2-BPS scalars in flavor current multiplets that solves the superconformal Ward identities, and determine the superconformal blocks. Section 5 explains how to introduce non-abelian flavor symmetry. In Section 6, we relate the central charge C T and the flavor central charge C J to certain coefficients in the OPEs between flavor current multiplet scalars. In Section 7, we review the linear functional method, which turns the problem of bounding OPE coefficients into a problem in semidefinite programming. Section 8 presents the numerical bounds and their physical implications. Section 9 discusses the future outlook.

2 Review of superconformal representation theory

The six-dimensional N = (1, 0) superconformal algebra is osp(8*|2), which contains a bosonic subalgebra so(2, 6) × su(2) R . There are sixteen fermionic generators: eight supercharges Q A α and eight superconformal supercharges S α A , where α = 1, · · · , 4 and A = 1, 2 are the so(6) and su(2) R spinor indices, respectively. Superconformal primaries are operators that are annihilated by all the superconformal supercharges S α A . A highest weight state of osp(8*|2) is a superconformal primary that is also a highest weight state of the maximal compact subalgebra so(2) × so(6) × su(2) R . Representations of the superconformal algebra are generated by successively acting with the supercharges Q A α and the lowering generators of so(6) × su(2) R on the highest weight states. When some descendants of a highest weight state have zero norm, in unitary theories they must be decoupled, and the shortened multiplets are referred to as short multiplets.
Each superconformal multiplet can be labeled by the charges (∆, h 1 , h 2 , h 3 , J R ) of its highest weight state under the Cartan generators of so(2) × so(6) × su(2) R , where h 1 , h 2 , h 3 are the charges under the subgroup so(2) 3 ⊂ so(6). All the charges are real for unitary representations of the Lorentzian conformal algebra so(2, 6) × su(2) R . The short representations are classified into A, B, C, D types, satisfying the shortening relations of [5,6,8,9], where c 1 , c 2 and c 3 are the Dynkin labels of su(4), which are related to h 1 , h 2 and h 3 by a linear change of basis. The D-type highest weight states are annihilated by the four supercharges with positive R-charge, and are therefore 1/2-BPS. The A-, B-, and C-type multiplets always contain BPS operators, although their highest weight states are not BPS. The long representations satisfy the inequality (2.3). Due to OPE selection rules, later we only have to consider multiplets whose superconformal primaries are in the symmetric rank-ℓ representation of so(6). The ∆, ℓ subscripts for D-type multiplets and the ∆ subscript for B-type multiplets will be omitted, since their values are fixed by (2.1) and (2.5).

Important short multiplets

We give names to certain special short multiplets, some of which contain conserved currents.

• Flavor current multiplet D[2]: contains conserved currents transforming in the adjoint of a flavor symmetry, and their superpartners.

3 Four-point function of half-BPS operators

In this section, we consider the four-point function of the scalar superconformal primaries in the 1/2-BPS multiplet D[k], and review the constraints from superconformal symmetry [72]. The 1/2-BPS condition implies that this four-point function uniquely fixes the entire set of four-point functions of the (primary or descendant) operators in D[k]. 4 Although we are interested in N = (1, 0) in six dimensions, the setup is the same for superconformal field theories in other dimensions where the R-symmetry is su(2) R , namely N = 1 in five dimensions and N = 3 in three dimensions. 5 Hence we keep the spacetime dimension general and write it as d = 2(ε + 1). The scalar superconformal primaries form a spin-k/2 representation of su(2) R , and their weight is fixed by the BPS condition ∆ = εk. The scalars can be written as O A 1 ···A k (x), which is a symmetric rank-k tensor of the fundamental representation of su(2) R , A i = 1, 2. We can contract the indices with auxiliary variables Y A to form an operator O(x, Y) that has homogeneous degree (−εk, k). The four-point function of O(x, Y) is then a homogeneous degree (−4εk, 4k) function, and is polynomial in the Y A . Therefore it must take the form (3.1).

4 The superfield for a 1/2-BPS multiplet only depends on four fermionic coordinates (half the number of fermionic coordinates in full superspace). The four-point function of such superfields depends on sixteen fermionic coordinates, which is the same as the number of fermionic generators in the superconformal algebra. Hence the four-point function of the superfields can be obtained by supersymmetrizing the four-point function of the superconformal primaries. There is no extra constraint coming from the crossing symmetry of the four-point functions of superconformal descendants. 5 Our setup does not apply to N = 2 in four dimensions. In particular, such a theory has a protected subsector corresponding to a two-dimensional chiral algebra [72,29].
Here u and v are the standard conformal cross ratios, u = x 12 2 x 34 2 /(x 13 2 x 24 2 ) and v = x 14 2 x 23 2 /(x 13 2 x 24 2 ), and w is the analogous R-symmetry cross ratio built out of the Y i . 6 As all four external scalars are identical, the invariance of (3.1) under (x 1 , Y 1 ) ↔ (x 3 , Y 3 ) leads to the crossing symmetry constraint (3.3). Similarly, the invariance of (3.1) under (x 1 , Y 1 ) ↔ (x 2 , Y 2 ) leads to the constraint (3.4). The four-point function is further constrained by the superconformal Ward identities, which we review in Appendix B. They were solved in [72], and the solutions are parametrized by functions b n (u, v) with n = 0, . . . , k − 2, where the differential operator D ε is defined in (3.6). In even dimensions, D ε is a well-defined differential operator, and is invariant under crossing. One approach to solving the crossing equation is to "factor out" (D ε ) −1 and write down a crossing equation for the b n (u, v) (while carefully taking care of the kernel of (D ε ) −1 ), as was the approach of [34]. However, in odd dimensions, the differential operator (D ε ) −1 is defined only formally on the functional space spanned by Jack polynomials with eigenvalues given in (A.10), and this functional space does not map to itself under crossing u ↔ v. 7 To make our setup easily generalizable to five and three dimensions, we will not study the crossing equation for the b n (u, v), but will instead analyze the crossing equation for G(u, v; w) directly. See Appendix C for the setup of the crossing equation for b n (u, v) in the special case ε = k = 2.

The rest of the paper specializes to the case k = 2. Then G(u, v; w) is a second-degree polynomial in w −1 . By matching the coefficients of the monomials in w −1 , the crossing equation (3.3) can be separated into three equations involving only u and v, where the G i are defined in (3.1), and the third equation is trivially equivalent to the first. In Appendix B, we show that the second equation also follows from the first as a consequence of the superconformal Ward identities (B.1). Moreover, the superconformal Ward identities imply an identity (B.8) on the first equation, which is important when we need to identify the independent constraints from the crossing equation when applying the linear functional method.

4 Superconformal blocks

The four-point function can be expanded in superconformal blocks as in (4.1), where A X (u, v; w) is the superconformal block of the superconformal multiplet X. The sum is over the superconformal multiplets allowed in the OPE of two D[2]; the selection rule is that of [73], with the obvious modifications. By the arguments of [73], O must correspond to either a D- or B-type multiplet if O has 2J R = 2, and a D-type multiplet if 2J R = 4. 9 Note that the bosonic conformal blocks satisfy G ∆,ℓ (u/v, 1/v) = (−1) ℓ G ∆,ℓ (u, v), and the R-symmetry dependence of the superconformal blocks is carried by Legendre polynomials P J R (x). The summation over (2J R , ∆, ℓ) ∈ X is over all primary operators in the superconformal multiplet X that appear in the OPE, labeled by (2J R , ∆, ℓ). It is a finite sum, as there are only finitely many primary operators contained in each superconformal multiplet. Bosonic conformal blocks are reviewed in Appendix A. The coefficients c 2J R ,∆,ℓ are fixed by the superconformal Ward identities (B.1). The superconformal block expansion (4.1) implies that the functions b n (u, v) parameterizing solutions to the superconformal Ward identities (see (3.5)) also decompose into blocks. The relation (4.5) can then be written as (4.7), where we abbreviate b 0 as b, since there is no other b n . In the following subsections, we give explicit expressions for the superconformal blocks by solving (4.7). The bosonic conformal blocks are normalized such that in the limit u = v ≪ 1, the leading term in the u-expansion is u ∆/2 .
The superconformal blocks are normalized such that in the same limit, the leading term is (−1) J R u ∆/2 P J R (1 + 2/w).

Short multiplets

The superconformal blocks for the short multiplets can be obtained by taking limits of the superconformal block for L[0] ∆,ℓ , as in (4.10) and (4.11), where the first and third equations follow from the recombination rules at the unitarity bound. 10 In the second and fourth equations, we need to analytically continue the superconformal block A L[0] ∆,ℓ to ∆ below the unitarity bound (2.3), so the limits should be regarded as mere tricks to generate solutions to the superconformal Ward identities. One can explicitly check that the superconformal blocks for short multiplets obtained this way indeed have the correct decompositions into bosonic conformal blocks. One can also show that, given the content of each multiplet, (4.10) or (4.11) is the unique combination of bosonic conformal blocks that solves the superconformal Ward identities. In fact, as mentioned earlier, the lack of a solution for A[0] ℓ and C[0] ℓ proves their absence in the selection rule (4.2). 11

10 See (4.4) in [8] or (2.63) in [9]. 11 All the bosonic component fields in C[0] ℓ are R-symmetry neutral, hence the superconformal Ward identities reduce to ∂ χ G(u, v; w)| w→χ = 0, ∂ χ̄ G(u, v; w)| w→χ̄ = 0, (4.12) which cannot be satisfied by any non-vacuum block. The superconformal block for A[0] ℓ must take the form (4.13), where ρ and θ are defined by χ = ρe iθ and χ̄ = ρe −iθ . It is then clear that (4.13) cannot satisfy the superconformal Ward identities unless a = b = 0.

5 Flavor symmetry

We want to consider theories with non-abelian flavor symmetry. Since flavor currents are contained in the D[2] multiplets, the superconformal primaries O a (x i , Y i ) transform in the adjoint representation of the flavor symmetry group G F , where a is the adjoint index. The four-point function then carries four adjoint indices, and G abcd (u, v; w) admits a decomposition into superconformal blocks as in Section 4. The operators that appear in the OPE of O a (x 1 , Y 1 ) and O b (x 2 , Y 2 ) transform in the tensor product representation adj ⊗ adj, which can further be decomposed into irreducible representations R i . The decomposition of G abcd (u, v; w) takes the form (5.2), where P abcd i is the projection matrix that projects onto the contributions of operators in the OPE that transform in the representation R i ; these matrices satisfy the standard completeness and orthogonality relations [75]. The projection matrices of the trivial and adjoint representations involve h ∨ , the dual Coxeter number, and ψ 2 = 2, the length squared of the longest root of the flavor group; explicit expressions in a common normalization are recorded below. The identity operator and the stress tensor multiplet B[0] 0 can only transform in the trivial representation 1 of the flavor group, while the flavor current multiplet D[2] can only be in the adjoint representation adj. Their OPE coefficients satisfy (5.5). In Section 6, we will relate the coefficients λ 2 B[0] 0 and λ 2 D[2] to the central charge C T and the flavor central charge C J , which are in turn related to the anomaly coefficients and can be determined through other methods. Because all four external scalars are identical, the four-point function is invariant under the exchange (x 1 , Y 1 , a) ↔ (x 3 , Y 3 , c), leading to the crossing symmetry constraint (5.7), where the crossing matrix F i j is defined with |R i | = 0 for R i appearing in the symmetric tensor product of two adjoint representations and |R i | = 1 for R i appearing in the anti-symmetric tensor product. The constraint (5.8) amounts to imposing the selection rule ℓ + J R + |R i | ∈ 2Z on the intermediate primary operators.
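For reference, the two projectors just named take the following form in a common normalization with ψ 2 = 2 and f acd f bcd = 2h ∨ δ ab ; this normalization is our assumption, and the paper's own equations may differ by convention-dependent factors:

\[
P_{\mathbf{1}}^{abcd} \;=\; \frac{\delta^{ab}\,\delta^{cd}}{\dim G_F}\,,
\qquad
P_{\mathrm{adj}}^{abcd} \;=\; \frac{f^{abe}\,f^{cde}}{2h^{\vee}}\,.
\]

One can check the projector property P i P j = δ ij P i , and that the trace P i abab reproduces the dimension of R i (namely 1 and dim G F for the two cases above).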
We will be interested in the SU(2) and E 8 flavor groups. The adj ⊗ adj decompositions and crossing matrices are summarized in Table 1.

6 Central charges

In this section, we review the definitions of the central charge C T and the flavor central charge C J , and derive their relations to the OPE coefficients λ 2 B[0] 0 and λ 2 D[2] .

Central charge C T

Conformal symmetry fixes the two-point function of the stress tensor up to an overall coefficient. Since the stress tensor has a canonical normalization, this coefficient is physical and is referred to in the literature as the central charge C T . More precisely [76], the two-point function takes the form (6.1), where S d is the volume of a unit (d − 1)-sphere, and the conformal structure I µν,σρ (x) is given by I µν,σρ (x) = (I µσ (x)I νρ (x) + I µρ (x)I νσ (x))/2 − δ µν δ σρ /d, with I µν (x) = δ µν − 2x µ x ν /x 2 . In Appendix D.1, we review how the contribution of the stress tensor multiplet to the four-point function of identical scalars is fully determined by the value of C T , assuming that there is a unique flavor-singlet stress tensor multiplet.

To later compare with numerical bounds, we present here the values of C T for six-dimensional superconformal field theories of interest, by relating C T to a Weyl anomaly coefficient. The Weyl anomaly in six-dimensional conformal field theories takes the form [77][78][79]

A 6d = (4π) 3 T µ µ = −a E 6 + c 1 I 1 + c 2 I 2 + c 3 I 3 + (scheme dependent), (6.4)

where E 6 is the Euler density and I 1,2,3 are certain Weyl invariants. I 3 is normalized as I 3 = C µνσρ ∇ 2 C µνσρ + · · · , C µνσρ being the Weyl tensor (see [79] for the precise definitions). The a-coefficient appears in the stress tensor four-point function, c 1 and c 2 in the stress tensor three-point function, and c 3 in the stress tensor two-point function; the relation between c 3 and C T is linear.

In theories with supersymmetry, the Weyl anomaly coefficients are linearly related to the 't Hooft anomaly coefficients [63][64][65], which appear in the anomaly polynomial involving gravitational and R-symmetry anomalies (see [65] for precise definitions and normalizations). In [65], the authors proposed that the coefficients appearing in the linear relations can be fixed by computing the values of α, β, γ, δ and a, c 1 , c 2 , c 3 in free theories, e.g., the free hypermultiplet, the free tensor multiplet, and a class of non-unitary free theories. The relation between c 3 and α, β, γ, δ was determined up to an unfixed parameter ξ, as in (6.7). The value of ξ can be further fixed by considering a superconformal vector multiplet V (1,0) , which has the same field content as the flavor current multiplet, but whose component fields have higher-derivative kinetic terms. More explicitly, the multiplet consists of a four-derivative vector, a three-derivative Weyl fermion, and three standard two-derivative scalars. The anomaly coefficients are given in [65], so the constant ξ can be determined as follows. Since the theory is free, the C T of V (1,0) is simply the sum of those of its component fields. The C T of a free scalar is known from [76], and that of a free four-derivative vector was computed in [80,81]. In [68], the authors computed the C T of a three-derivative Weyl fermion by studying the partition function on S 1 × H 5 . In Appendix E, we verify this answer by explicitly constructing the stress tensor for the three-derivative fermion and computing its two-point function. Putting these together fixes ξ, which corroborates what was first found in [67] via a different method. 14
14 In [68], the conformal anomaly coefficients for an infinite family of free, non-unitary, higher-derivative N = (1, 0) superconformal multiplets were also computed, and indeed found to satisfy the linear relation (6.7) with this value of ξ.

There are various techniques for inferring the values of 't Hooft anomaly coefficients in superconformal field theories, even when the theory is strongly interacting and direct handles are lacking. For instance, if a construction within string theory or M-theory exists, the 't Hooft anomaly coefficients can be computed by anomaly inflow [56,58]. Another approach is anomaly matching by going onto the tensor branch or the Higgs branch [57,59,60,62]. In the following, we present the values of C T for the free hypermultiplet and the E-string theories.

Free hypermultiplet

The C T for each free scalar φ and each free Dirac spinor ψ are given in [76]; the C T of a free hypermultiplet is the corresponding sum.

E-string theories

The rank-N E-string theory is realized by stacking N M5 branes inside an end-of-the-world M9 brane [45,46]. The flavor symmetry is E 8 for rank one and E 8 × SU(2) for higher ranks. The 't Hooft anomaly coefficients and the conformal anomaly coefficient c 3 are known (including the free hypermultiplet describing the center-of-mass degrees of freedom parallel to the M9 brane). The minimal central charge is achieved in the N = 1 case, after decoupling the free hypermultiplet.

Flavor central charge C J

We can perform a similar analysis for the flavor currents J a µ , which are canonically normalized in the following way. In radial quantization, the non-abelian charge of a state on the cylinder corresponding to an operator inserted at the origin x µ = 0 is measured by (6.19), where r̂ µ = x µ /|x| is the radial unit vector, and the integral is over an S d−1 surrounding the origin. If we consider a state that corresponds to the current J b µ , then the non-abelian charge of this state is given by the structure constants. We can normalize the structure constants by demanding f acd f bcd = 2h ∨ δ ab , where h ∨ is the dual Coxeter number and ψ 2 = 2 is the length squared of the longest root of the flavor group. This then endows the currents with a normalization. Conformal symmetry constrains the two-point function of the flavor currents J a µ up to an overall coefficient, which is called the flavor central charge C J [76]. The contribution of the flavor current multiplet to the four-point function of identical scalars is fully determined by the value of C J . In Appendix D.2, we derive the relation (6.23) between the OPE coefficient λ D[2] and the central charge C J .

Similar to the central charge C T , the flavor central charge C J can be linearly related to 't Hooft anomaly coefficients [66]. We list the values of C J for the theories of interest. Free hypermultiplet: the flavor central charge of a single free hypermultiplet can be determined from (6.23) and (F.23). E-string theories: the C J of the E 8 flavor group of the E-string theories follows from the anomaly coefficients; for rank one, C J = 150.

7 Semidefinite programming

We proceed by employing the linear functional method [15] to exploit the crossing symmetry constraint (3.3) (setting ε = k = 2), as well as the non-negativity of the coefficients in the superconformal block expansion (4.1), where X is summed over the multiplets (4.2) allowed by selection rules. To keep the discussion simple, we only display formulae for U(1) flavor symmetry. Also recall that G(u, v; w) has an expansion in w −1 , as shown in (3.1).
Putting these together, we obtain the crossing equation (7.3), where each superconformal block A X (u, v; w) also has an expansion in w −1 that terminates at quadratic order. 15 The precise formulae for these superconformal blocks are detailed in Section 4. As explained in the final paragraph of Section 3, the superconformal Ward identities imply that the independent constraints from crossing symmetry are contained in the first of the equations (3.7). Putting things together compactly, the constraints we need to analyze are (7.4). 16

15 We use the coefficients (4.6), which are the coefficients in the expansion of the superconformal blocks A X (u, v; w) in Legendre polynomials rather than in monomials in w −1 . 16 Recall from (5.5) that when the flavor group is non-abelian, the normalization of λ 2 X involves the flavor projectors.

Here I, the putative spectrum of superconformal multiplets with the identity multiplet excluded, contains a subset of the multiplets in (4.2). It is a subset because there are further restrictions on the set of X over which we sum:

• With abelian flavor symmetry, there is a further selection rule that requires ℓ + J R to be even.
• With non-abelian flavor symmetry, the selection rule allows symmetric representations in adj ⊗ adj for ℓ + J R even and anti-symmetric ones for ℓ + J R odd.
• D[0] only appears in the trivial representation of the flavor group.
• D[2] can only appear in the adjoint representation of the flavor group, since these multiplets contain flavor currents (hence D[2] is absent for abelian flavor).
• In interacting theories with a unique stress tensor, B[0] 0 only exists in the trivial representation, and B[0] ℓ for ℓ > 0 do not exist, since these multiplets contain higher spin conserved currents. 17 18

Our goal is to put bounds on the central charges C T and C J , which are inversely proportional to λ 2 B[0] 0 and λ 2 D[2] via (6.3) and (6.23). We presently explain how to put a universal upper bound on λ 2 D[2] , or equivalently a lower bound on C J , using the linear functional method; the bound on C T via λ 2 B[0] 0 is obtained analogously. Simple modifications of the following setup allow us to further bound theories to within a finite region in the C −1 T − C −1 J plane. Consider the space of linear functionals on functions of u, v. Suppose we can find a linear functional α that satisfies the conditions (7.6); these constraints, combined with the constraints (7.4), imply an upper bound on λ 2 D[2] , as in (7.7). The optimal upper bound is obtained by maximizing α[K D[2] ] within the space of linear functionals satisfying (7.6). The resulting functional is referred to as the extremal functional, which we denote by α E [20]. Thus the linear functional method turns the problem of putting an upper bound on λ 2 D[2] into a problem in semidefinite programming. Generically, there is a unique four-point function saturating (7.7), called the extremal four-point function [20,23]. This four-point function satisfies (7.8), which, given (7.6), means that the long multiplets that can contribute to this extremal four-point function must have ∆, ℓ at which the extremal functional has a zero.

In practice, we can only perform the above optimization within a finite-dimensional subspace of linear functionals, with the constraints (7.6) imposed on a finite number of multiplets. We achieve the latter by restricting to multiplets with spins no larger than a certain maximum ℓ max , and estimate how the bound weakens with increasing ℓ max . Empirically we find that the amount of weakening is roughly inversely proportional to ℓ max , and so we can estimate the errors by extrapolations. This issue is examined further in Appendix G. A toy version of the optimization just described is sketched below; after it, we describe a convenient truncation of the space of linear functionals.
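The following is a minimal, self-contained sketch of the linear functional method, not the actual computation of this paper: it replaces the 6d superconformal blocks by schematic one-variable blocks g ∆ (z) = z ∆ , discretizes the spectrum, and solves a linear program in place of SDPB's semidefinite formulation. All parameter values are illustrative.

```python
# Toy linear-functional bootstrap: bound an OPE coefficient from above.
# Schematic blocks g_delta(z) = z**delta stand in for the real 6d blocks.
import numpy as np
import sympy as sp
from scipy.optimize import linprog

z = sp.symbols('z')
D_PHI = 1      # toy external dimension
N_FUNC = 5     # functional = odd z-derivatives at the crossing point z = 1/2

def K(delta):
    """Toy analogue of the crossing vectors K_X appearing in (7.4)."""
    F = (1 - z)**(2*D_PHI) * z**sp.Float(delta) \
        - z**(2*D_PHI) * (1 - z)**sp.Float(delta)
    return np.array([float(sp.diff(F, z, 2*n + 1).subs(z, sp.Rational(1, 2)))
                     for n in range(N_FUNC)])

grid = np.arange(3.0, 25.0, 0.25)   # discretized spectrum above a toy gap
K_id, K_tgt = K(0.0), K(3.0)        # identity block and the bounded multiplet
# Crossing reads K_id + sum_X lambda_X^2 K_X = 0.  Demand alpha.K_id = -1 and
# alpha.K_Delta >= 0 on the grid; then lambda_tgt^2 <= 1/(alpha.K_tgt), and
# maximizing alpha.K_tgt gives the best such bound (cf. (7.6)-(7.7)).
res = linprog(c=-K_tgt,
              A_ub=-np.array([K(d) for d in grid]), b_ub=np.zeros(len(grid)),
              A_eq=[K_id], b_eq=[-1.0],
              bounds=[(-100.0, 100.0)] * N_FUNC)  # box keeps the toy LP bounded
if res.success and res.x @ K_tgt > 0:
    print("toy bound: lambda_target^2 <=", 1.0 / (res.x @ K_tgt))
```

In the real problem the positivity conditions must hold for continuous ∆, which is achieved through polynomial approximations of the blocks; that is what makes semidefinite programming, and hence SDPB, the appropriate tool.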
As for truncating the linear functionals, a convenient subspace is given by the following. Define variables z, z̄ by u = z z̄ and v = (1 − z)(1 − z̄), such that crossing u ↔ v amounts to (z, z̄) ↔ (1 − z, 1 − z̄). Consider the expansion of linear functionals in the basis of derivatives with respect to z and z̄, evaluated at the crossing symmetric point z = z̄ = 1/2. Our subspace is simply the truncation of these derivatives to total degree no larger than Λ. Bosonic conformal blocks and their derivatives evaluated at the crossing symmetric point are computed by utilizing the recursive representation [82], the diagonal limit [19,83], and a recursion relation on transverse derivatives [19] that follows from the conformal Casimir equation. The computations are described in Appendix A. We use the SDPB package [31] to perform the semidefinite programming. Details on the numerical implementation are discussed in Appendix G.

Free hypermultiplet: a check

In the semidefinite programming approach to constraining superconformal field theories, free theories differ from interacting theories by the presence of multiplets that contain higher spin conserved currents, B[0] ℓ with ℓ > 0. This means that the functional α acted on these multiplets must also be non-negative, leading to weaker constraints than in the interacting case.

[Figure 1: universal lower bounds on C T and C J at various derivative orders Λ; also shown are the extrapolations to Λ → ∞ using the ansatz (8.2), for Λ ∈ 4Z and Λ ∈ 4Z + 2, separately.]

A single free hypermultiplet has SU(2) flavor symmetry. In particular, the SO(4) that rotates the four real scalars is the combination of the flavor SU(2) and the R-symmetry SU(2) R . The superconformal primaries of the D[2] multiplets are scalar bilinears, and their four-point function can be computed explicitly by Wick contractions. We refer the reader to Appendix F.2 for the explicit form of this four-point function and its decomposition into superconformal blocks. An important property is the absence of B[0] ℓ in the 5 representation, an additional condition that we impose in the bootstrap analysis. We also note that the long multiplets appearing in the 1 channel have lowest scaling dimension ∆ = 8, and in the 5 channel lowest ∆ = 10. Assuming SU(2) flavor symmetry and the existence of higher spin conserved currents in the trivial 1 or adjoint 3 representation, Figure 1 shows the universal lower bounds on C T and C J at various derivative orders Λ, as well as extrapolations to Λ → ∞ using the quadratic ansatz. We see that both min C T and min C J tend towards the values for a single free hypermultiplet. The left side of Figure 2 shows the extremal functional optimizing the lower bound on C J acted on the contribution of the spin-zero long multiplet to the crossing equation, in the 1 and 5 channels of the SU(2) flavor. We can read off the low-lying spectrum of long multiplets from the zeroes. 19 The right side of Figure 2 shows how the lowest ∆ in each channel varies with increasing Λ and tends towards ∆ = 8 and ∆ = 10, together with extrapolations to infinite Λ using the ansatz (8.2). Due to the oscillatory behavior of the data points, we perform separate extrapolations for Λ ∈ 4Z and Λ ∈ 4Z + 2, for both min C T/J and ∆ gap . These results suggest that a free hypermultiplet saturates the lower bounds on both C T and C J .

E-string theories

Let us now turn our attention to the E-string theories. We first present universal lower bounds on C T and C J for theories whose flavor group contains E 8 as a subgroup. Figure 3 shows the bounds on C T and C J at different derivative orders Λ, and extrapolations to infinite Λ using the quadratic ansatz (8.1); a sketch of such an extrapolation follows.
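The quadratic-in-1/Λ extrapolations quoted throughout are elementary fits; here is a sketch with synthetic data (the numbers are invented for illustration and are not the paper's):

```python
# Fit bound(Lambda) = b_inf + c1/Lambda + c2/Lambda**2 and read off b_inf.
import numpy as np

def extrapolate(orders, bounds):
    """Return the Lambda -> infinity limit of a quadratic fit in 1/Lambda."""
    coeffs = np.polyfit(1.0 / np.asarray(orders, float), bounds, deg=2)
    return coeffs[-1]   # constant term = value at 1/Lambda -> 0

orders = np.array([24, 28, 32, 36, 40])                  # Lambda in 4Z
bounds = 150.0 * (1 + 2.0 / orders + 5.0 / orders**2)    # synthetic C_J data
print(extrapolate(orders, bounds))                       # recovers ~150.0
```

As in the text, data for Λ ∈ 4Z and Λ ∈ 4Z + 2 would be fit separately when the bounds oscillate between the two families.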
Table 2 summarizes the results of the extrapolations, as well as the C T and C J values in the rank-one E-string theory. Notice that the extrapolated lower bound on C J sits close to the rank-one E-string value, while that on C T is still some distance away. The former observation motivates Conjecture 1 stated in the introduction. To supply further evidence for Conjecture 1, we perform a full survey of the range of allowed (C J , C T ). Figure 4 shows the allowed region in the C −1 T − C −1 J plane for derivative orders Λ = 24, 28, . . . , 40. Notice that the point of minimal C J has a value of C T that sits close to the value of C T in the rank-one E-string theory. To quantify this observation more precisely, we show in Figure 5 how the value of C T at min C J tends to the rank-one E-string value with increasing derivative order. The value appears to be rather stable between derivative orders 24 and 48, and although it is somewhat smaller than the rank-one E-string value, a closer examination shows a trend of potential convergence to the rank-one E-string value at higher derivative orders. 20

While our data do not permit a reliable extrapolation of the entire allowed region to infinite derivative order, we comment on some of the features. First, given any two unitary solutions to crossing, G 1 (u, v; w) and G 2 (u, v; w), we can construct a family of unitary solutions α G 1 (u, v; w) + (1 − α) G 2 (u, v; w) for 0 ≤ α ≤ 1 that populate the line segment between the two corresponding points on the C −1 T − C −1 J plane. This means that the allowed region is convex. 21 Second, there seem to be two kinks, one corresponding to the rank-one E-string theory, and another with a C J value close to that of the rank-one E-string, but with a smaller C T . 22 A third feature is that the lower boundary appears to approach the locus of points corresponding to the higher-rank E-string theories.

20 The deviation of C T at min C J from the rank-one E-string value (∼ 7%) is larger than the estimated error due to the truncation on spins (≲ 2%). See Appendix G. 21 Unitary solutions to crossing that populate the boundary of the allowed region can be explicitly constructed using the extremal functional method. 22 We do not know what to make of the proximity of C J at min C T to the rank-one E-string value, as shown in Figure 6, nor are we aware of any candidate theory that sits at this second kink; one logical possibility is that min C T changes trend at very high derivative orders and becomes saturated by the rank-one E-string theory.

We discuss the last feature more in Section 9. A further check of Conjecture 1 is the following. The Higgs branch of the rank-one E-string theory is the one-instanton moduli space of the flavor group E 8 , which is isomorphic to the minimal nilpotent orbit of E 8 [45,84,85]. The minimal nilpotent orbit can be defined by quadratic polynomial equations in the complexified e 8 Lie algebra: for r ∈ e 8 , the defining equations set to zero certain irreducible components of the symmetric tensor square of r (a standard presentation is recorded below). The Higgs branch chiral ring is isomorphic to the coordinate ring of the Higgs branch [86,87,84,88], and the latter admits a description as the polynomial ring generated by the components of r, subject to these quadratic relations. 23 Assuming that Conjecture 1 is true, we can determine various physical properties of the rank-one E-string theory, such as the spectrum of long multiplets.
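For orientation, a standard presentation of these quadratic equations, assuming the usual decomposition of the symmetric square of the adjoint of e 8 (the paper's own equation may be normalized differently):

\[
\mathrm{Sym}^2(\mathbf{248}) \;=\; \mathbf{1} \,\oplus\, \mathbf{3875} \,\oplus\, \mathbf{27000}\,,
\qquad
\left.(r \otimes r)\right|_{\mathbf{1}\,\oplus\,\mathbf{3875}} \;=\; 0\,, \quad r \in \mathfrak{e}_8\,.
\]

Setting the 1 and 3875 components of r ⊗ r to zero is the statement that r lies on the closure of the minimal nilpotent orbit; the surviving 27000 component spans the degree-two part of the coordinate ring.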
The left side of Figure 7 shows the extremal functional acted on the contribution of the spin-zero long multiplet to the crossing equation. 23 We thank Yifan Wang for explaining this fact to us.

[Figure 7: the lowest ∆ in each channel at different Λ, and an extrapolation to Λ → ∞ using the ansatz (8.2), for Λ ∈ 4Z and Λ ∈ 4Z + 2, separately; the resulting spectrum is recorded in Table 4.]

9 Outlook

Based on our observations on Figure 4, we put forward an optimistic conjecture.

Conjecture 3 The E-string theories of all ranks sit at the boundary of the space of unitary solutions to crossing.

As a piece of supporting evidence, Figure 8 shows the lower bound on C J assuming the value C T = 151956/5 of the rank-two E-string theory, where we see that the extrapolated C J sits close to the rank-two E-string value C J = 420. There is actually more we can do. For N > 1, the E-string theories have a larger flavor group E 8 × SU(2), and the SU(2) flavor central charge is likewise determined. This additional input may be necessary to put the higher-rank E-string theories on the boundary of the space of unitary solutions to crossing.

Figure 8: The lower bounds on C J at different derivative orders Λ, for interacting theories with E 8 flavor group and assuming C T = 151956/5, which is the value in the rank-two E-string theory. Also shown is an extrapolation to infinite derivative order using the quadratic ansatz (8.1) with Λ ≥ 24.

If Conjecture 3 is true, then the conformal bootstrap can potentially solve the E-string theories of arbitrary rank N. We can then consider the large-N regime, and study the dual M-theory on AdS 7 × S 4 /Z 2 beyond the supergravity limit. On the M-theory side, the low energy excitations consist of a supergravity multiplet in the eleven-dimensional bulk and an N = 1 E 8 vector multiplet supported on a ten-dimensional locus, AdS 7 × S 3 , that is fixed by Z 2 . With enough computational power, we can collect information about the non-BPS spectra in the E-string theories at large N, filter out the operators dual to multi-particle excitations of the bulk supergravity and E 8 vector multiplets, and determine for instance the scaling dimension of the operator that corresponds to the first M-brane excitation. 24 The scaling dimension of this operator should behave as ∆ ≈ a N b to leading order at large N. The knowledge of a and b would be an important step towards understanding the quantum nature of M-branes.

We are also exploring other flavor groups. For instance, the Sp(4) R R-symmetry in N = (2, 0) theories breaks up into R-symmetry and flavor symmetry parts, Sp(2) R × Sp(2), when interpreted as N = (1, 0) theories. For the A N −1 theory, which is the infrared fixed point of the world-volume theory on a stack of N M5 branes, the central charge and flavor central charge are known. Other N = (1, 0) theories include the large class of theories constructed in F-theory [52][53][54][55], whose C T and C J can be computed using the anomaly polynomials given in [59,61]. Finally, a particularly interesting example is a conjectural theory that has SU(3) flavor symmetry, and whose Higgs branch is given by the one-instanton moduli space of SU(3), recently proposed in [62]; its central charge and flavor central charge are known. This theory does not seem to appear in the F-theoretic "classification" of N = (1, 0) theories [52][53][54][55]. 25 The conformal bootstrap can provide evidence for the existence or non-existence of this theory.
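Extracting a and b from hypothetical large-N data is an elementary log-log fit; a sketch with invented numbers (nothing here comes from actual bootstrap output):

```python
# Sketch: fit Delta ~ a * N**b from hypothetical large-N spectra.
import numpy as np

N = np.array([8, 16, 32, 64, 128])
Delta = 2.1 * N**0.25 * (1 + 0.3 / N)      # synthetic data with a 1/N tail
b, log_a = np.polyfit(np.log(N), np.log(Delta), 1)
print("a ~", np.exp(log_a), " b ~", b)     # recovers roughly a = 2.1, b = 0.25
```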
The system of equations studied in this paper has straightforward generalizations to superconformal field theories in lower spacetime dimensions, N = 1 in five and N = 3 in three dimensions, which have SU(2) R R-symmetry [92]. The C T of such theories can be computed by taking the second derivative of the squashed three- or five-sphere partition function with respect to the squashing parameter [93][94][95][96][97][98][99][100][101][102][103][104]. 26 In five dimensions, there is another distinguished class of superconformal field theories: Seiberg's E n theories [47,48]. If an analog of Conjecture 3 is true for these theories, then we can study the type I' string theory on a warped product of AdS 6 and S 4 [105]. In three dimensions, the Chern-Simons-matter theories provide many examples of N = 3 superconformal field theories [106][107][108].

24 Such an operator is analogous to the Konishi operator in N = 4 SYM, whose dimension to leading order at large N is 2 g YM 1/2 N 1/4 at strong coupling [89,90] and 3 g YM 2 N/(4π 2 ) at weak coupling [91]. 25 We thank Tom Rudelius for a discussion on this point. 26 We thank Hee-Cheol Kim for a discussion on this point.

A Bosonic conformal blocks

This appendix reviews properties of bosonic conformal blocks for the four-point function of scalar primaries with scaling dimensions ∆ 1 , ∆ 2 , ∆ 3 , ∆ 4 in d = 2ε + 2 spacetime dimensions. The conformal blocks depend on the external scaling dimensions only through the differences ∆ 12 ≡ ∆ 1 − ∆ 2 and ∆ 34 ≡ ∆ 3 − ∆ 4 , and will be denoted by G ∆12,∆34 ∆,ℓ . In Section A.1, we keep ∆ 12 and ∆ 34 arbitrary, since blocks with nonzero ∆ 34 will be needed in Appendix C, but for later sections we set ∆ 12 = ∆ 34 = 0. For notational simplicity, we abbreviate G 0,0 ∆,ℓ as G ∆,ℓ . The standard conformal cross ratios u, v are defined in terms of the positions of the operators as u = x 12 2 x 34 2 /(x 13 2 x 24 2 ) and v = x 14 2 x 23 2 /(x 13 2 x 24 2 ). We also introduce the variables z, z̄ and χ, χ̄ as alternative ways to parameterize the cross ratios, with u = z z̄ and v = (1 − z)(1 − z̄). 27 Radial coordinates r and η, defined as in [109], will be the variables in which we expand the conformal block in the recursive representation.

A.1 Expansion in Jack polynomials

The conformal block can be expanded in Jack polynomials [110], where the expansion coefficients r mn are determined recursively with the initial condition r 00 = 1. Jack polynomials can be defined in terms of Gegenbauer polynomials, which satisfy the standard orthogonality condition. 27 The reader should be careful when comparing with [72], as we have swapped what they called z and χ.

A.2 Recursive representation

From now on we only consider the conformal blocks for the four-point function of identical scalar primaries, and set ∆ 12 = ∆ 34 = 0. When the scaling dimension of the internal primary is taken to values where a descendant becomes null, the conformal block encounters a simple pole whose residue is again another conformal block. This fact was first used in [111,112] to write down a recursion formula for Virasoro blocks. The generalization to higher dimensions was obtained in [82], where the authors found that when the external operators are scalars, the degenerate primaries come in three classes, as we list in Table 5. The conformal blocks then admit a recursive representation as an expansion in r, with a prefactor built from (1 − r 2 ) and (1 + r 2 + 2rη), where C ℓ (ε) (η) is the Gegenbauer polynomial. The coefficients c i (k) for the three types of degenerate weights are fixed accordingly. The virtue of this recursive representation is not only its computational efficiency.
Firstly, the expansion in r converges better than the z expansion, as r = 3 − 2√2 ≈ 0.17 at the crossing symmetric point. Secondly, to a fixed order in r, the truncated conformal block with the (4r) ∆ prefactor stripped off is a rational function of ∆, whose poles are at values of ∆ below the unitarity bound. This latter fact is crucial because semidefinite programming is much more efficient when the inputs are polynomials (for the sake of imposing non-negativity, we can strip off manifestly positive factors from the truncated conformal block); in fact, the SDPB package [31] only allows polynomial input.

For the purpose of computing derivatives of conformal blocks evaluated at the crossing symmetric point, we find it most efficient, instead of implementing the above recursion relation, to expand closed form expressions for conformal blocks in the diagonal limit z̄ → z to a fixed order in r (η = 1 on the diagonal), take the diagonal derivatives at the crossing symmetric point, and then apply a further recursion relation to obtain the transverse derivatives [19]. The closed form expressions and the recursion on transverse derivatives are reviewed in the next two sections.

A.3 Diagonal limit

When all external scalars have the same scaling dimension, the conformal blocks admit closed form expressions in the diagonal limit z̄ → z, defined via a recursion relation [19] starting from a small set of seed blocks.

A.4 Recursion on transverse derivatives

Define variables a and b that vanish at the crossing symmetric point, with the diagonal z̄ = z located at b = 0, and denote ∂ m a ∂ n b G ∆,ℓ | a=b=0 by h m,n . Given the diagonal limit of the conformal block, we can compute h m,0 for all m ≥ 0. The transverse derivatives can then be obtained by a recursion relation [19] that follows from the conformal Casimir equation.

B Superconformal Ward identities

The superconformal Ward identities read as in (B.1) [72], where the variables χ and χ̄ provide an alternative parameterization of u and v (see Appendix A). We presently show, in the case of k = 2, that the second equation in (3.7) follows from the first as a consequence of the first superconformal Ward identity in (B.1), which explicitly reads (B.3). It can be rewritten as (B.4). Applying u ↔ v, or equivalently (χ, χ̄) ↔ (χ −1 , χ̄ −1 ), the equation becomes (B.5). The difference of the two equations gives (B.6). Similarly, with χ replaced by χ̄, we obtain (B.7). From (B.6) and (B.7), we see that the first and third equations of (3.7) imply the second equation of (3.7) up to a constant. This constant can be fixed by considering the case u = v. The compatibility between (B.6) and (B.7) gives the identity (B.8), which is important when we want to identify the independent constraints from the crossing equation.

C Crossing equation for b(u, v)

Specializing to ε = k = 2, let us substitute the solution (3.5) of the superconformal Ward identities into the crossing equation (3.3). The general solution of the first resulting equation is 28

H(z, z̄) = Σ n a n P (2) n,0 (z, z̄).

We also have

D 2 z z̄ H = Σ n a n (n + 3) P (2) n,0 (z, z̄), D 2 (z + z̄) H = Σ n a n n P (2) n−1,0 (z, z̄). (C.4)

Using the fact that the P n,0 (z, z̄) are orthogonal polynomials for non-negative integers n, one can argue that (C.2) has no non-trivial solution if we restrict to such n. However, the orthogonality condition fails if we allow n to take negative integer values, and indeed (C.2) has a unique solution. Therefore, the original crossing equation is equivalent to (C.6), where c is an unphysical constant. For example, the generalized free field solution (F.2) corresponds (up to the unphysical term) to a solution of (C.6) with c = 0.
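Returning to the radial coordinates of Appendix A.2, the quoted value r = 3 − 2√2 ≈ 0.17 at the crossing symmetric point is easy to check numerically, assuming the standard radial map of the conformal-blocks literature:

```python
# Check: the radial coordinate at the crossing-symmetric point z = 1/2.
import numpy as np

def rho(z):
    """Standard radial map rho(z) = z / (1 + sqrt(1 - z))**2."""
    return z / (1.0 + np.sqrt(1.0 - z + 0j))**2

r = abs(rho(0.5))
print(r, 3.0 - 2.0 * np.sqrt(2.0))   # both ~ 0.1715729
```

On the diagonal z = z̄ the angular variable is η = 1, consistent with the diagonal limit used above.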
A function b(u, v) that gives rise to a physical four-point function G(u, v; w) (via (3.5)) also admits a decomposition into blocks with non-negative coefficients. The blocks b X (u, v) for the superconformal multiplets (4.2) can be expressed (up to the unphysical term on the right-hand side of (C.6)) in terms of bosonic conformal blocks with ∆ 12 = 0 and ∆ 34 = −2, as in (C.8), where the unphysical block G 0,−2 5,−1 (u, v) is formally defined by its expansion into Jack polynomials. 28 There may appear to be another class of solutions P −2,n+2 (z, z̄), but they are related to the P n,0 (z, z̄) by (A.11). Explicitly, b D[2] (u, v) can be written as (C.9), which has a branch point at the origin of the z-plane, with monodromy around it given by (C.10). This monodromy can be absorbed into a shift of the constant c. We can therefore restrict to the zeroth sheet, where b D[2] (u, v), along with the other b X (u, v), are all real functions of z, z̄. Moreover, on this sheet, the b X (u, v) are regular as z̄ → z, whereas the term on the right-hand side of (C.6) is not. Hence, the constant c must vanish for a solution of (C.6) to also admit an expansion into blocks.

D Relating central charges to OPE coefficients

D.1 C T to λ 2 B[0] 0

Conformal symmetry fixes the three-point function of the stress tensor with two identical scalars O to be of the form (D.1) [76], where the conformal structure t µν is given by (D.2). The OPE coefficient C OOT is fixed by the conformal Ward identity [17,113], using identities such as I µν,σρ (x 13 ) t σρ (X 23 ) = t σρ (X 12 ). (D.4) From the three-point function (D.1), and using the identities (D.4), we can deduce the stress tensor contribution to the OPE of two identical scalars, which can be written in terms of the cross ratios u and v as in (D.7). Comparing (D.7) with the conformal block expansion, we determine the coefficient that sits in front of the bosonic stress-tensor block G d,2 (u, v). The bosonic conformal block G d,2 (u, v) sits inside the B[0] 0 superconformal block with the coefficient given in (4.16). We thus obtain the relation (6.3) between the OPE coefficient λ B[0] 0 and the central charge C T .

D.2 C J to λ 2 D[2]

Consider the three-point function of one flavor current with two scalars transforming in a representation R of the flavor group. Conformal symmetry fixes this three-point function to take the form (D.10), 29 where i, j are the indices of the representation R, the T a R are the generators of the flavor group in the representation R, and the two-point functions of the scalars are canonically normalized. We are particularly interested in external scalars that transform in the adjoint representation, in which case (T a ) b c = f ab c . From the three-point function (D.10), and using the identities (D.4), we obtain the OPE of two scalars in the adjoint representation, (D.11). Now consider the four-point function of four scalars O a . Using the OPE (D.11) and the three-point function (D.10), we find the flavor current contribution, which can be expressed in terms of the cross ratios u and v as in (D.13). By comparing (D.13) with the conformal block expansion, we can determine the coefficient sitting in front of the bosonic conformal block G d−1,1 (u, v) of the flavor current. 29 Acting with the charge (6.19) on the scalar O j (0) fixes the overall coefficient of the three-point function (D.10). The bosonic conformal block G d−1,1 (u, v) sits inside the D[2] superconformal block with the coefficient given in (4.18). We thus obtain the relation (6.23) between the OPE coefficient λ D[2] and the flavor central charge C J .
E The central charge C T of the three-derivative fermion

The C T of a free three-derivative Weyl fermion was recently computed in [68] as the second derivative of the partition function on S 1 × H 5 with respect to the S 1 radius. In this appendix, we verify their answer by explicitly constructing the stress tensor of a three-derivative Dirac fermion and computing its two-point function. The C T of a Weyl fermion is simply half that of a Dirac fermion. Since the three-derivative Dirac fermion exists in arbitrary spacetime dimension d, we keep d = 2ε + 2 general. The two-point function of a free Dirac fermion with scaling dimension ∆ ψ is proportional to x̸/|x| 2∆ψ+1 , where x̸ = x µ Γ µ , and the Γ µ are 2 ε+1 × 2 ε+1 matrices obeying the Clifford algebra {Γ µ , Γ ν } = 2δ µν 1. For a three-derivative fermion, ∆ ψ = ε − 1/2.

Our approach is to work in flat space, write down the most general symmetric traceless spin-two primary operator of scaling dimension d, impose current conservation, and identify the stress tensor by demanding that it has the correct OPE (E.2) with the fundamental fermion [35]. Let us first list all the symmetric traceless spin-two operators of scaling dimension d constructed as fermion bilinears, T 1 µν , . . . , T 14 µν . Eleven linearly independent combinations of the fourteen T i µν are descendants (total derivatives); these include combinations such as T 10 µν + T 11 µν , T 11 µν + T 12 µν , T 5 µν + T 8 µν , T 2 µν + T 11 µν , and T 13 µν + T 14 µν . (E.4) Hence there are three linearly independent combinations of the T i µν that are primary operators, which by conformal symmetry must have vanishing two-point functions with all the descendant operators (E.4). To find the correct linear combinations, we consider the two-point functions involving all fourteen T i µν , (E.5). We outline the intermediate steps of this computation. First, we compute the four-point functions (E.6). Then the two-point functions (E.5) can be obtained by taking derivatives of (E.6), followed by the limit x 1 , x 2 → x and x 3 , x 4 → 0. For i, j = 13, 14, it is convenient to define K ν 1 ν 2 = ∂ 4 /(∂x 1,ρ 1 ∂x 2,σ 1 ∂x 3,ρ 2 ∂x 4,σ 2 ). (E.7) Then we have (E.8).

The two-point functions (E.5) allow us to identify the three-dimensional space of primary operators as the space orthogonal to the descendants. In unitary theories, a primary operator with scaling dimension saturating the unitarity bound must be conserved, but this is false in non-unitary theories. Indeed, using the explicit two-point functions (E.5), we find that there are two conserved spin-two primaries and one non-conserved spin-two primary. The stress tensor is the particular linear combination of the two conserved spin-two primaries that satisfies the T µν ψ OPE (E.2). A consequence of this OPE is the large-x 2 behavior (E.9). To find the stress tensor, we compute the three-point functions (E.11) and identify the correct linear combination of conserved primaries to match with (E.9). This computation can be done by taking derivatives of the three-point functions (E.11), followed by the limit x 3 , x 4 → x 1 . In four spacetime dimensions, the stress tensor T µν , the spin-two conserved primary orthogonal to T µν , and the spin-two non-conserved primary Θ µν orthogonal to both are given in (E.13); in six spacetime dimensions, in (E.14). We can then read off the central charge C T from the two-point function (6.1) of the stress tensor T µν , in both four and six spacetime dimensions. These values are in agreement with [114,115,68].
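As a cross-check of the Clifford-algebra conventions used in this appendix (in six dimensions ε = 2, so the Γ µ are 8 × 8), one can build explicit gamma matrices from tensor products of Pauli matrices. This particular construction is one standard choice, not necessarily the authors':

```python
# Explicit 8x8 Euclidean gamma matrices in d = 6 and a Clifford-algebra check.
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
kron3 = lambda a, b, c: np.kron(a, np.kron(b, c))

Gamma = [kron3(sx, s0, s0), kron3(sy, s0, s0),
         kron3(sz, sx, s0), kron3(sz, sy, s0),
         kron3(sz, sz, sx), kron3(sz, sz, sy)]

for m in range(6):
    for n in range(6):
        anti = Gamma[m] @ Gamma[n] + Gamma[n] @ Gamma[m]
        assert np.allclose(anti, 2 * (m == n) * np.eye(8))
print("{Gamma_mu, Gamma_nu} = 2 delta_munu verified in d = 6")
```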
F Analytic examples of solutions to crossing

We write down two analytic solutions to the superconformal crossing equation (3.3), using first generalized free fields (mean field theory) and second a free hypermultiplet. Since these solutions exist in arbitrary spacetime dimensions, we keep d = 2ε + 2 general.

F.2 Free hypermultiplet

A free hypermultiplet consists of a pair of complex scalars transforming in the fundamental representation of su(2) R , and a fermion singlet. The fermion could be Dirac, Majorana, or Weyl depending on the number of spacetime dimensions; in six dimensions, it is a Weyl fermion. Let us denote the complex scalar doublet by φ A , and by φ̄ A its complex conjugate, φ̄ A = (φ A ) * . They are normalized by the two-point function (F.8). 30 The superconformal primaries of a D[2] superconformal multiplet have scaling dimension 2ε, and can be constructed as scalar bilinears, where A, B = 1, 2, a = 1, 2, 3, and the (σ a ) B A are the Pauli matrices. To keep track of su(2) R , we can contract the scalars with auxiliary variables Y A , and consider the four-point function (F.10), where G hyper (u, v, w) is computed by Wick contractions. A single free hypermultiplet has SU(2) flavor symmetry. 31 We can construct a triplet of D[2] superconformal primaries; one can write O(x, Y, Ȳ) = (φ Ȧ A Y Ȧ Y A ) 2 , (F.14) where Y a ≡ i(σ a ) ȦḂ Y Ȧ Y Ḃ . Then G a 1 a 2 a 3 a 4 hyper (u, v, w) admits a superconformal block decomposition, 33

G a 1 a 2 a 3 a 4 hyper (u, v, w) = Σ i∈{1,3,5} P a 1 a 2 a 3 a 4 i Σ X λ 2 X,i A X (u, v, w). (F.22)

30 The indices can be raised and lowered by Y A = ε AB Y B and Y A = Y B ε BA . 31 The two complex scalars, regarded as four real scalars, can be rotated by an SO(4) action which is a direct sum of the SU(2) R R-symmetry and the SU(2) flavor symmetry. The Weyl spinor in six dimensions admits a quaternionic structure, and also transforms as a doublet under the SU(2) flavor symmetry.

In six dimensions, the low-lying nonzero OPE coefficients can be read off from this decomposition.

G Details of the numerical implementation

The numerical bounds depend on the truncation ℓ max on spins, and on the order n r to which the r-expansion of the superconformal blocks (see Appendix A.2) is truncated. For fixed Λ, we would in principle need to extrapolate to infinite n r and ℓ max to obtain rigorous bounds. However, in practice, we find that if we set n r ≥ 2Λ, then the bounds are stable to within numerical precision against further increases in n r . The numerical bounds in this paper are obtained using ℓ max = 64, with n r = 80 for Λ ≤ 40 and n r = 96 for 40 < Λ ≤ 48. The relevant parameter settings for the SDPB package are

precision = 1024, initialMatrixScalePrimal = initialMatrixScaleDual = 1e20, dualityGapThreshold = 1e-10. (G.1)

In the past, the weakening of the bounds with increasing ℓ max has been handled by imposing non-negativity conditions on functionals acted on a few blocks of very high spin (such as ℓ = 1000, 1001 in [18]), in addition to blocks below some ℓ max . 34 We find that this approach does not make our bounds stable against increasing ℓ max . But numerical extrapolations to infinite ℓ max require data spanning a large range of ℓ max for each derivative order, which is computationally intensive and impractical. 35 Our strategy is to use ℓ max = 64, and to estimate the errors by performing the extrapolations to infinite ℓ max in simpler cases. We shall consider E 8 flavor in the absence of higher spin conserved currents.
The left side of Figure 9 shows the extrapolations of the lower bound on C J at derivative order Λ = 24, and the right side shows the relative error between ℓ max = 64 and extrapolations to ℓ max → ∞ using the quadratic ansatz (G.2), obtained at various derivative orders. We see that the relative error decreases to below 0.5% at high enough derivative orders. In light of the slight discrepancy between the value of C T at min C J and the rank-one E-string value, as shown in Figure 5, we estimate its error due to spin truncation. Figure 10 shows the upper and lower bounds on C T when the value of the flavor central charge C J is set close to saturating the lower bound, C J = (1 + 10 −4 ) min C J , at derivative orders Λ = 24, 32 and across a range of spin truncations ℓ max . The data appear less regular than those for min C J .

Figure 9: Left: The lower bounds on C J for interacting theories with E 8 flavor group, at derivative order Λ = 24 and across a range of spin truncations ℓ max . Also shown is an extrapolation to ℓ max → ∞ using the quadratic ansatz (G.2). Right: The relative errors between ℓ max = 64 and the extrapolations to ℓ max → ∞, at different Λ.

Extrapolations using the ansatz (G.2) do not look reliable here, but we estimate that the error due to truncating spins to ℓ max = 64 is less than 2% for Λ ≥ 24. Similar to min C J , this error decreases with increasing derivative order.
2017-08-18T01:22:37.848Z
2017-05-15T00:00:00.000
{ "year": 2017, "sha1": "fc2ec9818aa72984e87dbf36c9f153a36128c91e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP08(2017)128.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "fc2ec9818aa72984e87dbf36c9f153a36128c91e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
3423430
pes2o/s2orc
v3-fos-license
Appearance of the universal value $e^{2}/h$ of the zero-bias conductance in a Weyl semimetal-superconductor junction

We study the differential conductance of a time-reversal symmetric Weyl semimetal-superconductor (N-S) junction with an s-wave superconducting state. We find that there exists an extended regime where the zero-bias differential conductance acquires the universal value $e^{2}/h$ per unit channel, independent of the pairing and chemical potentials on each side of the junction, due to a perfect cancellation of Andreev and normal reflection contributions. This universal conductance can be attributed to the interplay of the unique spin/orbital-momentum locking and s-wave pairing that couples Weyl nodes of the same chirality. We expect that the universal conductance can serve as a robust and distinct signature for time-reversal symmetric Weyl fermions, and be observed in the recently discovered time-reversal symmetric Weyl semimetals.

In this work, we study a 3D time-reversal symmetric N-S junction constructed from a WSM and an s-wave superconducting Weyl metal. Near the Weyl nodes, the intra-orbital pairing dominates the superconducting state. Denoting by µ_N and µ_S the chemical potentials of the WSM and superconductor, respectively, and by ∆_s the superconducting pairing potential, we find that in the regime |µ_N| ≪ [|∆_s|² + µ_S²]^{1/2} the contributions of Andreev and normal reflections perfectly cancel at vanishing excitation energy. In this regime, the zero-bias differential conductance thus takes the universal value e²/h per unit channel, independent of µ_N, µ_S, and ∆_s. We attribute this universal conductance to the interplay of the unique spin/orbital-momentum locking and s-wave pairing in the Weyl junction. We also discuss its robustness and expect that it can serve as a distinct signature for time-reversal symmetric Weyl fermions. We are confident that the universal conductance can be observed in the recently discovered time-reversal symmetric WSMs [39,40].

Model Hamiltonian.-We start with a low-energy model for a time-reversal symmetric WSM [67],

H(k) = k_x s_x σ_z + k_y s_y σ_0 + (κ_0² − |k|²) g s_z σ_0 + β s_y σ_y − α k_y s_x σ_y , (1)

where k = (k_x, k_y, k_z) is the wave vector and the four-component basis spinor is written in terms of annihilation operators c_{s,σ,k} with spin indices σ = ↑, ↓ and orbital indices s = A, B. Here, σ_i (i = 0, x, y, z) are the 2 × 2 identity and Pauli matrices for the spin-1/2 space, and s_i (i = 0, x, y, z) for the orbital space. κ_0, α and β are real model parameters. The model (1) breaks inversion symmetry, i.e., s_z H(k) s_z ≠ H(−k), through the β term, but preserves time-reversal symmetry, as shown by σ_y H*(k) σ_y = H(−k). Supposing 0 < β < κ_0, the model (1) has four Weyl nodes at ±Q_±, where Q_± = (β, 0, ±k_0) and k_0 = [κ_0² − β²]^{1/2}. Near the Weyl nodes, we can linearize the model (1) and rewrite it as a sum of four effective Hamiltonians, each of which describes the electrons near one of the Weyl nodes, where k_z has been re-scaled by 1/(2k_0) and k_y by 1/α, the indices γ = 1, 2, 3, 4 label the Weyl nodes at Q_+, −Q_+, Q_−, −Q_−, respectively, and k is confined to the vicinity of the Weyl nodes. The spinors Ψ_{γ,k} ≡ (ψ_{γ,↑,k}, ψ_{γ,↓,k})^T of the Weyl nodes are given accordingly. H_1(k) and H_2(k) describe the two Weyl nodes of positive chirality, while H_3(k) and H_4(k) describe the two Weyl nodes of negative chirality. All the Weyl nodes consist of different orbitals and spins, and exhibit a nontrivial spin/orbital-momentum locking.
They form two time-reversed pairs, i.e., σ_y H_1*(k) σ_y = H_2(−k) and σ_y H_3*(k) σ_y = H_4(−k), each of them with definite chirality. Next, introducing the s-wave superconducting coupling with both intra- and inter-orbital pairing potentials and projecting onto the spinors of Weyl nodes, one can see that the inter-orbital pairing is strongly suppressed due to the mismatch of spins or momenta [68]. Suppose the Weyl nodes are well separated and the chemical potential is close to the Weyl nodes; then only the intra-orbital pairing is important. The pairing potential ∆_s couples electrons on Weyl nodes stemming from the time-reversed pairs. The whole system can thus be understood as two effectively independent and equivalent subsystems with opposite chirality. In the following, we will discuss the physics of the subsystem with positive chirality. Using the Nambu spinor in real space for positive chirality, we arrive at the BdG Hamiltonian (7), where ∆_s(r) = |∆_s(r)| e^{iφ(r)}. We have introduced the identity and Pauli matrices ν_i and τ_i (i = 0, x, y, z) for electron-hole and Weyl-node degrees of freedom, respectively, and moved the k_0 and β dependence into the wave function by performing a unitary transformation Φ(r) = e^{i(k_0 z σ_z + β x σ_x) τ_z ν_z} Ψ(r). In a uniform system, the eigenenergies are given by ε = ±[|∆_s|² + (|k| ± µ)²]^{1/2}. The superconductor is fully gapped. The BdG Hamiltonian (7) decouples into two identical 4 × 4 blocks which can be treated separately. We will consider one block, which is enough to fully describe the junction problem.

Reflection probabilities in a Weyl N-S junction.-The time-reversal symmetric Weyl N-S junction can be described by the BdG Hamiltonian (7) with ∆_s(z) = ∆e^{iφ} Θ(z) and µ(z) = µ_N Θ(−z) + µ_S Θ(z). Here Θ(z) is the Heaviside step function, and ∆ > 0 and a constant superconducting phase φ are assumed. The wave vector k_∥ = (k_x, k_y) parallel to the N-S interface is conserved. We can treat each k_∥ separately and work with a quasi-1D junction problem.

Differential conductance.-At zero temperature, the differential conductance (per unit area) in the N-S junction is given by Eq. (10) [69], where eV is the bias voltage. Note that only real k_e contribute in Eq. (10). We normalize the conductance to the value G_0 = e²(µ_N + eV)²/(4πh), corresponding to the number of available channels at energy µ_N + eV on the N side. With the expressions (8) and (9) in Eq. (10), we are able to analyze the behaviors of the conductance. We concentrate, in the following, on two particular parameter regimes: (i) µ_S = µ_N (∆ arbitrary); and (ii) µ_S ≫ ∆ (µ_N arbitrary) [77], which have a distinct zero-bias feature in common (see below).

For regime (i), µ_S = µ_N, the normalized conductance g_NS ≡ G_0^{-1} dI/dV [78] as a function of eV is plotted in Fig. 1. At large bias eV ≫ ∆, all curves converge to unity. This is expected, since at large excitation energies the influence of superconductivity is negligible, which together with an identical chemical potential on both sides makes the interface transparent. The g_NS-eV relation is rich in the subgap region, depending on the ratio µ_N/∆. For µ_N/∆ ≫ 1, the Fermi momentum mismatch of the two sides is negligible, i.e., k_{eq(hq)} ≈ k_{e(h)}; thus normal reflection is suppressed, leading to perfect Andreev reflection with g_NS = 2. Similar behavior occurs for conventional electron systems [69]. For smaller µ_N, but µ_N > ∆, g_NS bends down and even shows a dip at eV = ∆. For 0 < µ_N < ∆, g_NS vanishes at eV = µ_N, as no hole state is available for Andreev reflection.
This is typical for gapless Dirac systems [70]. In the limit µ_N/∆ ≪ 1, specular Andreev reflection dominates in the bias region µ_N < eV < ∆ and gives rise to g_NS = 2 [68]. Nevertheless, in the limit of low biases, g_NS approaches unity for µ_N/∆ ≪ 1 (see solid curves in Fig. 1), implying the universal conductance e²/h per unit channel.

Let us now consider regime (ii), µ_S ≫ ∆, which corresponds to the most relevant experimental condition and is depicted in Fig. 2. For µ_N > µ_S, g_NS varies little in the subgap region and it decreases smoothly to a constant at large bias. With decreasing µ_N, g_NS increases in the subgap region or at large bias. For µ_N = µ_S, g_NS is maximized for any bias and shows perfect Andreev reflection with g_NS = 2 in the subgap region. For µ_N < ∆, the vanishing of g_NS can also be observed at eV = µ_N, where no Andreev reflection is allowed. Most remarkably, for µ_N ≪ µ_S, one can notice again that all the curves approach unity in the limit of low biases, despite the fact that they vary substantially away from zero bias, and converge to a constant 4 log(2) − 2 at large bias (see solid curves and inset in Fig. 2).

Zero-bias conductance and universal value.-Figure 3 focuses on the behavior of the zero-bias conductance g_NS. In particular, Fig. 3(a) displays various salient features of g_NS as a function of µ_S and µ_N. First, g_NS is centrosymmetric in the phase space {µ_N, µ_S}, as a hallmark of particle-hole symmetry of the system. Second, g_NS shows a ridge along the line µ_N = µ_S, where the small Fermi momentum mismatch strongly suppresses normal reflection. In contrast, when |µ_S| ≪ |µ_N|, the Fermi momentum mismatch is large and normal reflection is enhanced; we have thus vanishing g_NS. Finally, g_NS is always smaller than unity in the bipolar regime with µ_N µ_S < 0, implying that the normal reflection contribution is larger than the Andreev reflection contribution. Figure 3(c) instead displays g_NS with respect to µ_N = µ_S. The universal conductance e²/h clearly appears in the regime (11), where the Fermi momenta on the two sides of the interface are very different, i.e., |k_e| ≪ |k_eq|. We note that such a regime corresponds to an ideal semimetal phase on the N side, which should be experimentally accessible.

To understand the occurrence of the universal conductance, we focus on the regime (11) and analyze our analytical results. Since only real k_e contribute to the conductance given by Eq. (10), the channels with k_∥ < |µ_N| are relevant. From the BdG Hamiltonian (7), we observe that while on the N side the parallel wave vector k_∥, which couples different spins and orbitals, is significant, on the S side it becomes negligible compared to the perpendicular momentum, i.e., k_∥ ≪ |k_eq| ≈ [∆² + µ_S²]^{1/2}. Thus, the A- and B-orbital components are decoupled from each other on the S side. As a result, the reflection probabilities at zero energy reduce to Eqs. (12) and (13): they become functions of a single parameter |k_∥/µ_N|. Notably, normal and Andreev reflections have opposite contributions to the conductance, according to Eq. (10). Plugging Eqs. (12) and (13) into Eq. (10), it is straightforward to see that the contributions from Andreev and normal reflections cancel each other perfectly, giving rise to the universal conductance e²/h per unit channel.
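This cancellation is easy to illustrate numerically. The following is a minimal sketch, not the paper's actual calculation: it only assumes the structure stated above for Eq. (10), namely that each open channel contributes 1 + R_eh − R_ee to the normalized conductance (normal and Andreev reflections entering with opposite signs), and it uses an arbitrary hypothetical profile for the common zero-energy function of |k_∥/µ_N|, since Eqs. (12) and (13) are not reproduced above.

```python
import numpy as np

def normalized_conductance(r_ee, r_eh):
    """Normalized zero-bias conductance g_NS, assuming each open channel
    contributes 1 + R_eh - R_ee, with normal and Andreev reflections
    entering with opposite signs as in Eq. (10)."""
    return np.mean(1.0 + r_eh - r_ee)

# Channels are labeled by the parallel momentum; at zero energy only
# k_par < |mu_N| contributes.  In the regime (11), both reflection
# probabilities reduce to functions of the single parameter k_par/mu_N.
k_over_mu = np.linspace(0.0, 0.999, 2000)

# Hypothetical common profile f(k_par/mu_N): any 0 <= f <= 1/2 works for
# this demonstration.  For simplicity we take R_eh = R_ee channel by
# channel, which is sufficient for the perfect cancellation.
f = 0.5 * k_over_mu**2
r_ee, r_eh = f, f

print(normalized_conductance(r_ee, r_eh))  # -> 1.0, i.e. e^2/h per channel
```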
The perfect cancellation in the 3D Weyl junction can be understood as a result of the unique spin/orbital-momentum locking and s-wave pairing, which can be inferred from the analogy of the Weyl system to a 1D ferromagnet-superconductor junction [68].

Robustness of the universal value.-We note that in a conventional electron system with parabolic spectrum, the zero-bias conductance can also exhibit a universal value in the regime (11). However, it is trivially zero. Indeed, since in that case velocity and current are linear in momentum, for large momentum mismatch the conservation of the flux at the interface is only possible if the flux vanishes. By contrast, in a Dirac system, the Fermi velocity is constant and the flux conservation is less sensitive to the Fermi momentum mismatch. As a consequence, non-vanishing flux and conductance are possible. In graphene, a 2D Dirac system, a finite characteristic value (4e²/3h) of the zero-bias conductance can be found [70]. However, the instabilities of the 2D Dirac cone to small perturbations, such as the intrinsic spin-orbit coupling [71] or the coupling to the substrate [72], likely mask such an effect. In fact, to the authors' knowledge, the value 4e²/3h in graphene has never been observed experimentally. By contrast, the Weyl nodes in a WSM are topologically protected and cannot be gapped out. Therefore, we expect that the universal conductance e²/h found here is accessible in experiments. Finally, we stress that the universal conductance predicted by us is robust in the presence of an interface barrier, due to Klein tunneling [68,73]. The interface barrier can be modeled by a potential term V_0 ν_z Θ(z + d) Θ(−z) in the BdG Hamiltonian, where we assume the barrier length d → 0 and potential V_0 → ∞ but the barrier strength χ ≡ V_0 d remains finite [74]. Then, g_NS is an oscillating function of χ with a period π. In the regime (11), g_NS oscillates slightly around the universal value, as shown in Fig. 3. Note that if the system is not deep in the regime (11), only a small deviation from e²/h appears. Therefore, the universal conductance can be used as a distinct signature for time-reversal symmetric Weyl fermions.

Experimental relevance.-Recently, an ideal time-reversal symmetric WSM phase has been proposed in 3D HgTe under compressive strain [39,40]. There are likely four pairs of Weyl nodes in the WSM phase [40]. However, as long as the Fermi energy is close enough to the Weyl nodes, the system can be decoupled into multiple equivalent time-reversed subsystems. Then our analysis and main results should hold. Importantly, superconductivity in 3D compressively strained HgTe could be realized by proximity to a conventional s-wave superconductor, similar to the case of tensilely strained HgTe, a 3D topological insulator [75,76]. Therefore, we expect that the universal conductance e²/h could be measured on compressively strained HgTe systems.

Summary.-We have analyzed a time-reversal symmetric Weyl N-S junction with an s-wave superconducting pairing state. In an accessible regime, the zero-bias differential conductance takes the universal value e²/h per unit channel, independent of the pairing and chemical potentials, as the Andreev and normal reflection contributions perfectly cancel at vanishing excitation energy. The universal conductance can be understood as a consequence of the interplay of the unique spin/orbital-momentum locking and s-wave pairing in the WSM system.
Acknowledgments.-We thank Jian Li, Benedikt Scharf, Martin Stehno, and Xianxin Wu for valuable discussions.

Supplemental Material

In this Supplemental Material, we show (S1) the derivation of the effective Hamiltonian for the s-wave superconducting pairing; (S2) the transport probabilities of the Weyl N-S junction; (S3) the analogy of the Weyl junction to a 1D ferromagnet-superconductor (F-S) junction; (S4) the effect of an interface barrier on the conductance.

S1. EFFECTIVE HAMILTONIAN FOR THE S-WAVE SUPERCONDUCTING COUPLING

The s-wave superconducting coupling with both intra- and inter-orbital pairing potentials is given by Eq. (S1.1), where ∆_s and ∆̄_s measure the amplitudes of the intra- and inter-orbital pairing potentials. Under a unitary transformation of the operators c, we can rewrite Eq. (S1.1) in the Nambu spinor form, where the BdG Hamiltonian reads as in Eq. (S1.5). At low energy, the whole Nambu spinor containing 16 components in real space can be written as

Ψ(r) = [Ψ_{1,q}(r), Ψ_{2,q}(r), Ψ_{3,q}(r), Ψ_{4,q}(r), Ψ*_{1,q}(r), Ψ*_{2,q}(r), Ψ*_{3,q}(r), Ψ*_{4,q}(r)]^T , (S1.6)

where |q| ≪ k_0, β and the basis functions for the Weyl nodes read as in Eq. (S1.10). The projection of the pairing potential onto the Nambu spinor (S1.6) is calculated componentwise, where ψ_i is the i-th component of the Nambu spinor (S1.6). Here Ω_0 is the volume of the system. In calculating the element H^S_{1,10}, a large length L_x or L_z of the system in the x or z direction, or a large Weyl-node separation k_0 or β, is assumed, such that βL_x ≫ 1 or k_0 L_z ≫ 1 and the integral vanishes. Along these lines, we obtain the 16 × 16 effective BdG Hamiltonian for the pairing, Eq. (S1.14). We can see that the inter-orbital pairing vanishes when βL_x ≫ 1 or k_0 L_z ≫ 1. To physically understand the vanishing of the inter-orbital pairing, let us analyze one of its terms: the component ψ_{↑,−k} must correspond to either Weyl node 2 or 4. This coupling, however, is not allowed, since the B-orbital component of Weyl node 2 or 4 always carries ↓-spin. A similar analysis can be applied to the other three terms of the inter-orbital pairing. Therefore, at low energy the inter-orbital pairing ∆̄_s is suppressed and only the intra-orbital pairing ∆_s is important. From Eq. (S1.14), we can also observe that ∆_s couples Weyl nodes of the same chirality, i.e., Weyl node 1 to Weyl node 2 and Weyl node 3 to Weyl node 4. Thus, the whole effective BdG Hamiltonian decouples into four equivalent 4 × 4 blocks.

S2. TRANSPORT PROBABILITIES OF THE WEYL N-S JUNCTION

On the WSM (N) side, the basis functions for a given excitation energy ε can be written as in Eqs. (S2.2)-(S2.5) (we neglect the e^{i k_x x + i k_y y} part for simplicity), where θ_k = arctan(k_y/k_x), α_{e(h)} = arctan(k_∥/k_{e(h)})/2, and k_{e(h)} = sgn(ε ± µ_N + k_∥) √((ε ± µ_N)² − k_∥²). On the superconducting (S) side, the basis functions are given by Eqs. (S2.6)-(S2.9), where ᾱ_{e(h)} = arctan(k_∥/k_{eq(hq)})/2 and k_{eq(hq)} = sgn[ε ± sgn(µ_S ± k_∥) √(∆² + (µ_S ± k_∥)²)] √((µ_S ± Ω)² − k_∥²). For subgap energies ε ≤ ∆, β = arccos(ε/∆) and Ω = i√(∆² − ε²), while for supragap energies ε > ∆, β = −i arccosh(ε/∆) and Ω = sgn(ε)√(ε² − ∆²). Note that α_{e(h)} is always real, while ᾱ_{e(h)} can be complex. At an excitation energy ε ≥ 0, the wave function for the scattering state of an electron injected from the WSM and moving towards the interface can be described by a superposition whose coefficients a_0, b_0, c_0, and d_0 represent the amplitudes of Andreev reflection, normal reflection, and transmission into the two right-moving quasiparticles, respectively.
These coefficients are determined by the continuity of the wave functions at the N-S interface. With the basis functions and the coefficients, we can calculate the probabilities of Andreev and normal reflection and of transmission, which are defined as the Andreev-reflected, normal-reflected, and transmitted current densities normalized by the incident current density, respectively. In general, the transport probabilities can be found as Eqs. (S2.12)-(S2.15). Eqs. (S2.12) and (S2.13) are the results [Eqs. (8) and (9)] given in the main text. In the Dirac system, on requiring the continuity of the wave function, the continuity of the current flux is also satisfied, as shown by R_ee + R_eh + T_ee + T_eh = 1. One can see clearly that for subgap energies ε ≤ ∆, β is real, and thus there is no transmission probability, i.e., T_ee = T_eh = 0. In the following, we will analyze R_eh and R_ee, since they are the only functions required in the calculation of the differential conductance.

• At ε = µ_N < ∆, Eqs. (S2.12) and (S2.13) simplify such that Andreev reflection is not allowed physically, because there is no hole state on the N side. As a result, the differential conductance vanishes. The critical energy ε = µ_N separates two energy regions: in the region ε < µ_N, Andreev retroreflection occurs, while in the region ε > µ_N, specular Andreev reflection occurs.

S3. ANALOGY OF THE WEYL JUNCTION TO A 1D F-S JUNCTION

To see the role played by spin/orbital-momentum locking and s-wave pairing in the universal conductance e²/h, it is instructive to consider a 1D Dirac F-S junction. The 1D F-S junction, with a ferromagnet on the negative side (z < 0) and a superconductor on the positive side (z > 0), can be described by a 1D Dirac BdG Hamiltonian with m(z) = m_0 Θ(−z), ∆(z) = ∆Θ(z) and µ(z) = µ_F Θ(−z) + µ_S Θ(z). Here the basis is Ψ = (c_{1,↑}, c_{1,↓}, c†_{2,↓}, −c†_{2,↑})^T, with 1 and 2 denoting two valleys. Note that the magnetization m(z) is valley dependent, i.e., it is opposite at the two valleys, and the pairing potential ∆(z) couples the same chirality (defined by the projection of the momentum onto the spin orientation). This is important to mimic the physics of the Weyl junction. At zero excitation energy, the basis functions of the right-moving electron, left-moving electron and left-moving hole on the ferromagnetic side z < 0 are given in terms of α_m = arctan(m_0/k_m)/2 and k_m = sgn(µ_F) √(µ_F² − m_0²). Note that these zero-energy states on the ferromagnetic side exist only when m_0 < |µ_F|. Thus, in the following calculation, we focus on the case m_0 < |µ_F|. On the S side, the basis functions of the two "right-moving" particles involve k̄_eq = |µ_S| + i∆. Both right-moving basis functions φ_eq(z) and φ_hq(z) decay away from the interface into the superconductor as e^{−z/ξ}, with ξ = 1/∆. The matching of the wave function at the interface, z = 0, gives rise to the matching equation, where a_0, b_0, c_0 and d_0, similarly to the previous section, represent the coefficients of Andreev reflection, normal reflection and transmissions, respectively. The coefficients of Andreev and normal reflection are found as in Eq. (S3.8). Then, the probabilities of Andreev and normal reflection are given by Eqs. (S3.9), respectively. Eq. (S3.9) indicates that in the absence of magnetization, m_0 = 0, the 1D junction exhibits perfect Andreev reflection, as expected from the conservation of chirality. A finite magnetization couples the right and left movers (i.e., different orbitals) and leads to finite normal reflection.
The µ_S and ∆ dependence disappears in the final results (S3.8) and (S3.9), because the space-dependent phases of the wave functions drop out in the continuity equation (S3.7). Most importantly, one can find that Eqs. (S3.9) resemble the form of Eqs. (S2.22), but with k_∥ replaced by the magnetization m_0. By contrast, if the pairing potential coupled opposite chirality (e.g., spin-triplet) or if the magnetization were valley independent, then, following the same approach, one would find different results.

In the large-momentum-mismatch regime |µ_N| ≪ √(∆² + µ_S²) of the Weyl N-S junction, the parallel spin/orbital-momentum locking is significant on the N side but negligible on the S side; thus the system becomes equivalent to a bundle of 1D Dirac F-S junctions, where the wave vector k_∥ acts as the valley-dependent parallel magnetization. In this way, one can see that the universal conductance e²/h per unit channel is due to the interplay of the unique spin/orbital-momentum locking and s-wave pairing that couples Weyl nodes of the same chirality.

S4. EFFECT OF A NON-MAGNETIC INTERFACE BARRIER

In the presence of an interface barrier, the junction can still be described by the BdG Hamiltonian (S2.1), but with

∆_s(z) = ∆e^{iφ} Θ(z). (S4.1)

Here the length d and potential V_0 of the barrier are assumed to satisfy the limit (S4.3), such that the dimensionless barrier strength χ = V_0 d remains finite [74]. On the N and S sides, the basis functions are still given by Eqs. (S2.2)-(S2.5) and (S2.6)-(S2.9), respectively. In the barrier region, 0 < z < d, the basis functions can be written accordingly. Note that these expressions are valid only in the limit (S4.3).
2017-11-21T16:28:47.000Z
2017-11-21T00:00:00.000
{ "year": 2017, "sha1": "b5e6f8408849f53b7dc243aedd49bc683de12929", "oa_license": null, "oa_url": "https://iris.polito.it/bitstream/11583/2699319/1/Zhang-Dolcini-Trauzettel_PRB_97_041116_2018%20(Appearance%20of%20the%20universal%20value%20e2h%20of%20the%20zero-bias%20conductance%20in%20a%20Weyl%20semimetal-superconductor%20junction).pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b5e6f8408849f53b7dc243aedd49bc683de12929", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
14744355
pes2o/s2orc
v3-fos-license
Association of HPA axis-related genetic variation with stress reactivity and aggressive behaviour in pigs

Background

Stress, elicited for example by aggressive interactions, has negative effects on various biological functions including immune defence, reproduction, growth, and, in livestock, on product quality. Stress response and aggressiveness are mutually interrelated and show large interindividual variation, partly attributable to genetic factors. In the pig little is known about the molecular-genetic background of the variation in stress responsiveness and aggressiveness. To identify candidate genes we analyzed association of DNA markers in each of ten genes (CRH g.233C>T, CRHR1 c.*866_867insA, CRHBP c.51G>A, POMC c.293_298del, MC2R c.306T>G, NR3C1 c.*2122A>G, AVP c.207A>G, AVPR1B c.1084A>G, UCN g.1329T>C, CRHR2 c.*13T>C) related to the hypothalamic-pituitary-adrenocortical (HPA) axis, one of the main stress-response systems, with various stress- and aggression-related parameters at slaughter. These parameters were: physiological measures of the stress response (plasma concentrations of cortisol, creatine kinase, glucose, and lactate), adrenal weight (which is a parameter reflecting activity of the central branch of the HPA axis over time) and aggressive behaviour (measured by means of lesion scoring) in the context of psychosocial stress of mixing individuals with different aggressive temperament.

Results

The SNP NR3C1 c.*2122A>G showed association with cortisol concentration (p = 0.024), adrenal weight (p = 0.003) and aggressive behaviour (front lesion score, p = 0.012; total lesion score p = 0.045). The SNP AVPR1B c.1084A>G showed a highly significant association with aggressive behaviour (middle lesion score, p = 0.007; total lesion score p = 0.003). The SNP UCN g.1329T>C showed association with adrenal weight (p = 0.019) and aggressive behaviour (front lesion score, p = 0.029). The SNP CRH g.233C>T showed a significant association with glucose concentration (p = 0.002), and the polymorphisms POMC c.293_298del and MC2R c.306T>G with adrenal weight (p = 0.027 and p < 0.0001 respectively).

Conclusions

The multiple and consistent associations shown by SNP in NR3C1 and AVPR1B provide convincing evidence for genuine effects of their DNA sequence variation on stress responsiveness and aggressive behaviour. Identification of the causal functional molecular polymorphisms would not only provide markers useful for pig breeding but also insight into the molecular bases of the stress response and aggressive behaviour in general.

Background

Stress responses promote the maintenance of homeostasis and adaptation to physiological and psychosocial challenges of a changing environment. This complex process involves coordinated activation of behavioural, autonomic, and neuroendocrine reactions. Concomitantly, pathways that promote vegetative functions such as growth, reproduction, and feeding are inhibited, the extent to which is dependent upon the duration and intensity of the stressor ("biological cost" of the stress response [1]). Aggression is a powerful stressor and has been shown to activate the hypothalamic-pituitary-adrenocortical (HPA) axis as well as the sympatho-adrenomedullar (SAM) system in various species including pigs [2][3][4]. In the pig, aggression commonly occurs when mixing unfamiliar individuals, which disturbs the social dominance order [5].
Besides negative effects on animal welfare, aggression has also been shown to have a negative impact on immune response [6], growth performance [7], and product quality in pigs [4,8]. Aggressive behaviour in turn is affected by the functional properties of the HPA axis and of the SAM system (reviewed in [9]). Baseline levels of glucocorticoids are inversely related with aggressiveness in various species including pigs (reviewed in [9,6,10]). In contrast, as reported in rodents, an acute increase in glucocorticoid levels may promote aggressive behaviour by a fast feed forward mechanism [11]. Functional properties of the stress response systems and aggressive behaviour show large interindividual variation, depending upon a variety of factors including genetic predisposition [12][13][14]. In humans and in model animals several genomic regions and gene variants associated with variation in stress responsiveness and aggressiveness have been identified using quantitative trait loci (QTL) mapping and candidate gene approaches (reviewed in [15][16][17][18]). In the pig, such studies are scarce, in spite of the widely recognized impact of aggressive behaviour and stress on the expression of meat and carcass quality traits [8]. Désautés et al. [19] mapped QTL for behavioural and neuroendocrine stress responses in a Meishan × Large White intercross leading to the identification of the corticosteroid binding globulin encoding gene as a major QTL for plasma cortisol levels in this population ( [20], reviewed in [18]). In addition Geldermann et al. [21] mapped QTL for plasma creatine kinase levels after pharmacological challenge. Using the candidate gene approach Fujii et al. [22] identified a mutation (SNP c.1843C>T) in the ryanodine receptor 1 (RYR1) gene responsible for malignant hyperthermia and porcine stress syndrome. The mutation has been shown to affect the basal functioning of the HPA axis in vivo [23] and in vitro [24], and analysis of this important mutation was included in the present study. The aim of the present study was to expand the current knowledge of the molecular-genetic basis of the variation in stress responsiveness and aggressive behaviour in the pig. To this end we analyzed the association of candidate gene DNA markers with physiological parameters of the stress response (cortisol, creatine kinase, glucose, and lactate concentration in plasma) and aggressive behaviour (measured by means of lesion scoring) in the context of the psychosocial stress of mixing individuals with different aggressive temperaments in a commercial pig herd [4]. In addition, we also analyzed the association with adrenal weight, a parameter reflecting activity of the central branch of the HPA axis (i.e. the release of corticotropin-releasing hormone and ACTH) over time, in a different herd of commercial crossbred pigs. To relate position of the candidate genes with known porcine QTL we physically mapped those candidates whose position had not previously been determined. The candidates represented genes encoding members of the HPA axis pathway (corticotropin-releasing hormone, CRH; CRH type 1 receptor, CRHR1; CRH binding protein, CRHBP; vasopressin, AVP; vasopressin V 1B receptor, AVPR1B; proopiomelanocortin, POMC; melanocortin type 2 (ACTH) receptor, MC2R; glucocorticoid receptor, NR3C1) and genes encoding members of the corticotropin-releasing hormone (CRH) system (urocortin, UCN; CRH type 2 receptor, CRHR2). 
CRH and vasopressin act synergistically to activate expression of proopiomelanocortin, the precursor of adrenocorticotropin (ACTH), in the pituitary through activation of the CRH type 1 and vasopressin V1B receptors respectively. CRH is the dominant trigger for HPA axis activation during acute stress, while vasopressin, which itself is a weak ACTH secretagogue, is important in mediating the response to chronic stress. Vasopressin also has an important function in the control of aggressive behaviour in various species [25] including the pig [26]. CRH binding protein functions as a buffer for CRH and related peptides and plays an inhibitory role in the modulation of CRH activity. ACTH stimulates synthesis and secretion of glucocorticoids from the adrenal cortex via the melanocortin type 2 receptor. The action of glucocorticoids on target tissues is mediated by the glucocorticoid receptor, which in the hypothalamus, in the pituitary, and in the hippocampus acts to terminate the stress response (reviewed in [27]). The urocortins, although not directly involved in the HPA axis action, may play a modulatory role via stimulation of the CRH receptor type 2 [28].

Results and Discussion

Allele distribution of the candidate genes

The candidate gene DNA markers are located mainly in the transcribed region (Table 1), with the exception of the SNPs CRH g.233C>T and UCN g.1329T>C, where the DNA markers are located in the promoter region in evolutionarily conserved segments, ~350 bp and ~90 bp upstream of the transcription start site respectively ([29] and unpublished data respectively). Two DNA markers lead to amino acid exchanges (CRHBP c.51G>A, AVPR1B c.1084A>G), and one to an amino acid deletion (POMC c.293_298del). The remaining markers are either silent SNPs (MC2R c.306T>G, AVP c.207A>G) or polymorphisms located in the 3' untranslated region (CRHR1 c.*866_867insA, CRHR2 c.*13T>C, NR3C1 c.*2122A>G). The SNP c.306T>G in MC2R was reported previously by others [30]. Allele frequencies of the candidate gene DNA markers in the two phenotyped commercial crossbred herds (SYN and PiF1a), along with allele frequencies in sets of pigs from three commercial pure breeds, German Landrace (LR), German Large White (LW), and Pietrain (Pi), in a set from the Vietnamese local breed Muong Khuong (MK), and in a set of European Wild Boars (WB), are summarized in Table 1. Contrary to the common assumption that selective breeding reduces genetic variability, the commercial breeds and crosses showed the highest variability, whereas Wild Boar showed the least. Six out of the ten candidate gene DNA markers tested, namely CRHR1 c.*866_867insA, CRHBP c.51G>A, AVPR1B c.1084A>G, POMC c.293_298del, MC2R c.306T>G, and UCN g.1329T>C, were fixed in Wild Boar. The increased variability of the commercial breeds compared to Wild Boar may be a result of the introgression of Asian genetics into European breeds in the 18th and 19th centuries. Ramirez et al. [31] showed that the proportion of Asian genetics in the genetic pool of European commercial breeds is still significant, at around 47-61%. The majority of the candidate gene DNA markers, as for example the SNP AVPR1B c.1084A>G, showed large differences in allele distribution between Wild Boar and Muong Khuong, indicating that the alleles missing in Wild Boar but segregating in commercial breeds may originate from the introgression of Asian genetics.
In view of the intriguing differences in allele frequency of the DNA markers of the HPA axis-related genes between commercial, local, and wild pigs, it is interesting to note that domestication of several species and selection for tameness (i.e. reduced interspecific aggression) in silver foxes and rats, which is an important aspect of domestication, was coupled with decreased activity of the HPA axis ([32], reviewed in [33]). Likewise, domestic pig breeds show lower cortisol levels compared to their wild ancestor [34]. Although the reduced cortisol level in modern pigs may result from selection for leaner body composition [35], it is conceivable that it might be related to domestication-related behaviour as well. A test of Hardy-Weinberg equilibrium revealed significant deviation in both crossbred herds for the SNPs RYR1 c.1843C>T and AVPR1B c.1084A>G, for the SNPs AVP c.207A>G and UCN g.1329T>C in SYN, and for the SNP NR3C1 c.*2122A>G in PiF1a. In contrast, in the pure breeds only the polymorphisms CRHR1 c.*866_867insA in Pietrain and CRHR2 c.*13T>C in Muong Khuong showed significant deviation, i.e. in two tests out of the fifty performed, which is a proportion that would be expected to occur at the significance level of 5% simply by chance. The more frequent deviation from Hardy-Weinberg equilibrium in SYN and PiF1a is thus unlikely to be explained by chance alone. (A minimal numerical sketch of this test is given at the end of the article.)

(Table 1 footnotes: gene region where the analyzed polymorphism is located; for polymorphisms in the coding region the effect on protein sequence is given in parentheses; references: a [22], b [29], c dbSNP submitter accession numbers listed in Additional File 1, Tables S1-S4, d [30].)

Association with stress responsiveness and aggressive behaviour

The physiological parameters of the stress response included in this study were shown previously to be affected by aggressive interactions [2,36], and also in the present experiment, as reported earlier [4]. In this context creatine kinase (CK), lactate and glucose concentration reflect the amount of physical activity and mobilization of energy sources, and indicate activation of the SAM system, whilst cortisol level indicates activation of the HPA axis. For evaluating aggressive behaviour at mixing we used the number of skin lesions, which is an established indicator trait. Turner et al. [14,37,38] showed that front lesion score is the best indicator of engagement in reciprocal aggression at mixing, whereas rear lesion score is the best indicator of receipt of non-reciprocal aggression at mixing. Effects with p ≤ 0.10 are summarized in Tables 2 and 3 for stress parameters and in Table 4 for lesion scores. We refer to results as significant when p < 0.05 and as showing a tendency when 0.05 ≤ p ≤ 0.10. To advise the reader of the increased risk of type I error due to multiple testing, we also provide the corresponding false discovery rate (q-value; Tables 2, 3, 4). The q-value is similar to the well-known p-value, except that it is a measure of significance in terms of the false discovery rate rather than the false positive rate [39]. All physiological stress parameters were significantly affected by the RYR1 SNP c.1843C>T (Table 2). In accordance with the known effect of the RYR1 SNP on metabolism of the skeletal muscle [40,41], the mutated T allele highly significantly (p < 0.0001) increased plasma CK and lactate concentration. Plasma glucose concentration, in turn, was reduced. Consistent with Weaver et al. [23], heterozygous carriers of the T allele showed significantly lower cortisol concentration compared to CC homozygous individuals.
Furthermore, the adrenal weight tended to be lower in the heterozygous carriers of the T allele in the PiF1a line, providing independent supporting evidence that the RYR1 SNP affects activity of the HPA axis (Table 3). With increased sample size this effect became significant (n = 316; data not shown). Concerning aggressive behaviour, the T allele of the RYR1 SNP c.1843C>T significantly increased rear lesion score (Table 4). The effect of RYR1 on aggressive behaviour might be related to its effect on HPA axis activity. Guárdia et al. [42] also reported an effect of RYR1 on skin lesions, but the direction of the effects was opposite to our findings, perhaps due to differences in recording of the skin lesions (in the study of Guárdia et al. [42] as a whole-carcass score on a 5-point scale). The SNP CRH g.233C>T showed a significant association with plasma glucose concentration and a tendency to affect plasma CK concentration; however, the effect on glucose showed neither an additive nor a dominance pattern (Table 2). We mapped CRH previously on SSC4 in the marker interval SW724-S0107 [29]. So far no QTL for physiological stress parameters have been reported in this genomic region. The effect on CK concentration is possibly related to aggressive behaviour, because carriers of the T allele, which tended to have higher CK concentration, also tended to have a higher rear lesion score (Table 4). It could be speculated that the association of the SNP CRH g.233C>T with aggressive behaviour might be related to the anxiogenic effect of CRH [43,44], because the SNP showed no association with activity of the HPA axis. The anxiogenic effect of CRH is, at least partly, independent of its action on the HPA axis [45]. Anxiety is related to aggressive behaviour in a complex manner, depending on the model used. Veenema and Neumann [46], for example, found an inverse relationship between anxiety and offensive aggression in rats divergently selected for anxiety-related behaviour. CRHR1 maps on SSC12 in the marker interval SW957-SW943 (Additional File 1, Table S6). No effects on either the stress parameters or aggressive behaviour were detected for the polymorphism CRHR1 c.*866_867insA. CRHBP maps on SSC2 in the marker interval SW1602-SW1320 (Additional File 1, Table S6). The SNP CRHBP c.51G>A showed a tendency to affect middle lesion score (Table 4), but showed no association with physiological stress parameters or adrenal weight. The polymorphisms POMC c.293_298del and MC2R c.306T>G both showed association only with adrenal weight in the PiF1a line (Table 3). This effect is in line with the established positive effect of POMC-derived peptides, in particular ACTH, on adrenal growth [47]. We mapped POMC on SSC3, in the marker interval SW314-S0002 (Additional File 1, Table S6), close to a QTL for basal glucose level [19]. POMC is involved in the control of glucose homeostasis via the HPA axis and other pathways [48]; however, in the present study we found no association of the polymorphism POMC c.293_298del with glucose concentration. MC2R was mapped on SSC6 close to marker SW2173 by Jacobs et al. [30] and close to QTL for plasma creatine kinase levels in the Meishan × Pietrain and Wild Boar × Pietrain families [49]. However, these QTL are most likely caused by the SNP c.1843C>T in RYR1, which is also located in the same QTL region. The SNP NR3C1 c.*2122A>G showed a significant association with plasma cortisol concentration in SYN and a significant association with adrenal weight in PiF1a (Tables 2 and 3).
The allele A, which is associated with lower cortisol concentration, is also associated with lower adrenal weight (Tables 2 and 3), suggesting that it is associated with an enhanced negative feedback effect on the activity of the HPA axis. A similar phenotype, including decreased basal plasma corticosterone and ACTH levels, reduced adrenal weight and adrenocortical size, has been reported in a knock-in mouse line showing increased functioning of a modified glucocorticoid receptor [50]. A clue about possible molecular mechanisms underlying the enhanced negative feedback effect might be obtained from the report of Perreau et al. [51], who observed a higher density of glucocorticoid receptors in the pituitary in Large White pigs, which exhibit lower activity of the HPA axis compared to Meishan pigs, suggesting that genetic factors cause variation in glucocorticoid receptor density in the pituitary of the pig. Furthermore, the SNP NR3C1 c.*2122A>G showed a significant association with the total and front lesion score, but not with the middle or rear lesion score (Table 4). This indicates that the animals accumulated lesions mainly through reciprocal fighting. Because, as mentioned in the introduction, baseline levels of glucocorticoids are inversely related with aggressiveness, the decreased activity of the HPA axis in the carriers of the allele A provides a possible explanation for their enhanced aggressive behaviour. NR3C1 maps on SSC2 in the marker interval SW1879-SWR308 (Additional File 1, Table S6). So far no QTL for physiological stress parameters have been reported in this genomic region. AVP maps on SSC17 in a QTL region for post-stress ACTH level [19]. However, in the present study the SNP AVP c.207A>G showed associations with neither the physiological stress parameters nor aggressive behaviour. The SNP AVPR1B c.1084A>G in turn showed significant association only with aggressive behaviour (Table 4). The allele G consistently decreased lesion score. However, the effect was most pronounced for the middle lesion score, less pronounced for the front lesion score, and did not reach the 0.1 level for the rear lesion score (Table 4). The vasopressin pathway plays a prominent role in the regulation of social behaviour, including aggression [25,26]. Vasopressin V1B receptor knockout mice display reduced levels of social forms of aggressive behaviour. While male vasopressin V1B receptor knockout mice demonstrate deficits in offensive and defensive aggression, female knockout mice have deficits in maternal aggression. However, there is no global deficit in aggressive behaviour, as vasopressin V1B receptor knockout mice show normal predatory aggression [52]. Hence, there is strong functional evidence supporting the identified association of the SNP AVPR1B c.1084A>G with aggressive behaviour in the context of the social stress of mixing. AVPR1B maps on SSC9 in the marker interval SW1879-SWR308 (Additional File 1, Table S6). The SNP UCN g.1329T>C showed no association with physiological stress parameters but a significant association with adrenal weight in PiF1a (Table 3), indicating that it might affect activity of the HPA axis. Urocortin is thought to be involved in the autonomic stress response [53], but so far studies with knockout mouse models have revealed no consistent evidence for an involvement of urocortin in the HPA axis response to acute stress (reviewed in [54]). The study of Zalutskaya et al.
[55] on the response of urocortin knockout mice to repeated restraint indicates that urocortin may play a role in the adaptation of the HPA axis to chronic stress. Little is known about the function of urocortin in the pig. Parrott et al. [43] showed that intracerebroventricular injection of urocortin increases cortisol release in the pig. Besides the association with adrenal weight, the SNP UCN g.1329T>C also showed significant association with front lesion score (Table 4). Urocortin, similar to CRH, possesses an anxiogenic effect [43,56]. As discussed above for the SNP CRH g.233C>T, this might also underlie the association of the SNP UCN g.1329T>C with aggressive behaviour. UCN maps on SSC3 in the marker interval SW730-SWR201 (Additional File 1, Table S6), close to the position of POMC and to a QTL for basal glucose level [19]. Urocortins are involved in the regulation of glucose homeostasis [57,58]. However, in the present study we found no association of the SNP UCN g.1329T>C with glucose concentration. CRHR2 maps on SSC18 in the marker interval SW787-SW1682 (Additional File 1, Table S6). The only effect we found for the SNP CRHR2 c.*13T>C was a tendency for adrenal weight in PiF1a (Table 3). On SSC18, QTL for basal and stress-induced increase in cortisol level were detected [19]; however, these map distal to CRHR2. In the present study, to examine phenotypic effects of the HPA axis-related genes, we used a single DNA marker per gene. However, the HPA axis-related genes are usually highly polymorphic, with several polymorphisms affecting gene expression and/or function of the encoded protein [29,59,60], with the consequence that a single DNA marker most likely does not capture all of the genetic information. Furthermore, for several polymorphisms, e.g. for POMC c.293_298del or CRHBP c.51G>A, the power to detect an association was limited by the low frequency of the minor allele. Therefore, genes that showed no significant associations here could still harbour functional DNA sequence variation with phenotypic effects. On the other hand, the SNPs in NR3C1 and AVPR1B showed multiple consistent effects, partly significant even after correction for multiple testing (the SNP NR3C1 c.*2122A>G on adrenal weight and the SNP AVPR1B c.1084A>G on middle lesion score respectively), providing convincing evidence for a genuine effect of the DNA sequence variation of these two genes on stress responsiveness and aggressive behaviour. Consequently, the SNPs used here are either directly involved or are in linkage disequilibrium with the causal variants.

Conclusions

In the pig, knowledge about the molecular basis of the stress response, aggressive behaviour and their interindividual variation is very limited. In the present study we analyzed the association between DNA markers of ten HPA axis-related genes with stress reactivity and aggressive behaviour in the context of the psychosocial stress of mixing individuals with different aggressive temperaments. From this we obtained convincing evidence for an effect of two genes: NR3C1 on HPA axis activity and aggressive behaviour, and AVPR1B on aggressive behaviour. Our results provide a foundation for future studies directed at the identification of the causal functional DNA sequence variation, which would not only provide markers useful for pig breeding but also insight into the molecular basis of the stress response and aggressive behaviour in general.

Animals

The structure and phenotyping of the pig line designated SYN (synthetic) was described in detail by D'Eath et al.
[4]. Briefly, pigs of the SYN line were progeny of Pietrain sires and crossbred dams from six different commercial parent lines of the Pig Improvement Company (PIC). The RYR1 SNP c.1843C>T was segregating among the Pietrain sires. Aggressive temperament was measured by counting skin lesions (lesion scoring) immediately before and 24 h after mixing pigs into new groups at approximately 10-11 weeks of age. The increase in number of skin lesions has been shown by Turner et al. [14] to be positively associated with the duration of involvement in reciprocal fighting behaviour. Pigs in each mixed group were ordered by the change in total skin lesions: half of the pigs in each group (those with the most lesions) were designated as high aggressiveness and the remaining half as low aggressiveness. In four slaughter batches pigs were assigned to one of four mixing treatments based on their aggressiveness (number of animals per treatment included in this study: high with high n = 63, high with low n = 61, low with low n = 65, and unmixed n = 64). Pigs were mixed into their treatment groups as they were loaded onto a vehicle for transport to the abattoir. In a fifth treatment, experienced by another four batches, mixing of pigs occurred at loading onto the truck and at lairage in an uncontrolled way, typical of commercial practice (n = 165). Skin lesions were counted before mixing and after slaughter on the carcass, dividing the body into front (head, neck, shoulders and front legs), middle (flanks and back) and rear (rump, hind legs and tail) sections. The difference in lesion number was again taken as the lesion score. The mixing treatment had a significant effect on aggressive behaviour, as reported by D'Eath et al. [4]. The animal experiment received approval from the Scottish Agricultural College Animal Experiments Committee. Pigs were stunned by means of CO2 gas and, at exsanguination, a 50 ml sample of trunk blood was collected from each pig in a plastic tube containing 1 ml of 0.5 M EDTA and was stored on ice until plasma preparation, after which the samples were stored at -80°C. Glucose, lactate and creatine kinase activity were measured with a clinical biochemistry automate (COBAS-MIRA Plus, Roche). Cortisol concentration was measured with the automated analyzer Centaur (Siemens) using a kit designed for human serum that we validated for pig serum. The PiF1a line consisted of performance-tested pigs (n = 208) of the German commercial cross Pietrain × (German Large White × German Landrace). At slaughter in the FBN experimental slaughterhouse, the left adrenal gland was dissected, trimmed of visceral fat and weighed.

Detection of DNA sequence variation and genotyping

Genomic DNA was isolated from skin or liver samples according to the standard phenol-chloroform extraction protocol. DNA sequence variation was detected either in silico (POMC, NR3C1, AVP, CRHR2), by alignment of available porcine sequences and confirmation by direct sequencing, or de novo, by comparative sequencing of two individuals each of the breeds Pietrain, German Large White, German Landrace and Wild Boar (CRHR1, CRHBP, AVPR1B, UCN). The SNPs MC2R c.306T>G and CRH g.233C>T were published previously by Jacobs et al. [30] and Murani et al. [29] respectively. Genotyping of the SNPs RYR1 c.1843C>T and CRH g.233C>T was performed by PCR-RFLP and SSCP respectively, as described previously (D'Eath et al. [4] and Murani et al. [29] respectively).
The polymorphisms CRHR1 c.*866_867insA, CRHBP c.51G>A, MC2R c.306T>G, NR3C1 c.*2122A>G, AVPR1B c.1084A>G and UCN g.1329T>C were genotyped by PCR-RFLP. Briefly, the polymorphic sites were amplified in 20 μl PCR reactions containing 100 ng genomic DNA, 0.2 mM dNTP, primers as listed in Additional File 1 (Table S1), and 0.5 U SupraTherm Taq polymerase (Ares Biosciences, Köln, Germany). The temperature profile included initial denaturation at 95°C for 3 min, followed by 40 cycles of denaturation at 95°C for 15 s, annealing at the specific temperature (Additional File 1, Table S1) for 60 s, and extension at 72°C for 60 s, and one cycle of final extension at 72°C for 5 min. To detect the polymorphisms, 10 μl of the amplified DNA were digested using the appropriate enzyme (Additional File 1, Table S1) overnight according to the manufacturer's recommendations (Fermentas, St. Leon-Rot, Germany), and the resulting RFLP was analyzed on a 2% ethidium bromide-stained agarose gel. The SNP AVP c.207A>G was genotyped by SSCP. The PCR was performed as described above, only scaled down to 10 μl (Additional File 1, Table S2). PCR products were separated on a 12% (49:1 AA:Bis) PAA gel containing 10% urea at 450 V for 4 hours at room temperature and subsequently visualized by silver staining. The SNP CRHR2 c.*13T>C was genotyped by pyrosequencing. The PCR was performed as described above, only scaled up to 25 μl, using a step-down temperature profile (Additional File 1, Table S3). The subsequent pyrosequencing reaction was performed as described by Srikanchai et al. [61]. The insertion/deletion c.293_298del in POMC was analyzed as a length polymorphism on a MegaBACE 750 capillary sequencer using the MegaBACE Fragment Profiler v1.2 software (GE Healthcare, Munich, Germany).

Statistical analysis

Population genetic analyses were performed using the PowerMarker V3.25 software [62]. Association between the candidate gene DNA markers and phenotypic variation was analyzed using a general linear model (Proc GLM, SAS V9.1, SAS Institute, Cary, NC, USA). Genotypes of the RYR1 SNP c.1843C>T were considered as a fixed effect in the model for the analysis of every other DNA marker, because it has been shown to affect stress responsiveness and aggressiveness in previous studies and also in the present study (see the Results and Discussion section). For the behaviour traits, the model included fixed effects of the marker genotype, RYR1 c.1843C>T genotype, slaughter batch, sex and treatment, and for the physiological stress parameters also the fixed effect of the dam line. Skin lesion scores and creatine kinase concentrations were log10-transformed before analysis. For adrenal weight, the model included fixed effects of the marker genotype, RYR1 c.1843C>T genotype, farm, sex, and body weight as a covariate. Least squares means for marker genotypes were compared by a t-test and the p-values were adjusted by Tukey-Kramer correction. The false discovery rate (q-value, [39]) was computed using the JMP Genomics 3 software (SAS Institute, Cary, NC, USA). (A sketch of this association model is given at the end of the article.)

Radiation hybrid physical mapping

Physical mapping was performed using the INRA-University of Minnesota porcine Radiation Hybrid (IMpRH) panel. The panel was screened by a standard PCR and the products were resolved on 2% agarose gels. The primers and PCR conditions used are detailed in Additional File 1 (Table S5).
Regional assignment was obtained using the multipoint analysis option of the IMpRH mapping tool at the IMpRH server http://www.toulouse.inra.fr.

Additional material

Additional file 1: Tables S1 to S6. Primer sequences, PCR and genotyping assay conditions, results of the IMpRH mapping
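The Hardy-Weinberg test referred to in the Results and Discussion can be reproduced with a simple chi-square goodness-of-fit computation. The sketch below (in Python) is a generic illustration with made-up genotype counts, not the study's data; PowerMarker, used by the authors, implements the same idea.

```python
from scipy.stats import chi2

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square test for Hardy-Weinberg equilibrium at a biallelic locus."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)      # frequency of allele A
    q = 1.0 - p
    expected = [n * p**2, 2 * n * p * q, n * q**2]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e)**2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)        # 3 classes - 1 - 1 estimated allele freq
    return stat, p_value

# Hypothetical genotype counts for one SNP in one herd.
print(hwe_chi_square(n_AA=70, n_Aa=90, n_aa=48))
```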
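Likewise, the association model from the Statistical analysis section can be sketched as follows. This is a hypothetical illustration using Python's statsmodels rather than SAS Proc GLM; the column names (lesion_total, genotype, ryr1, batch, sex, treatment) and the simulated data are invented placeholders for the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-ins for the recorded phenotypes and genotypes.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "lesion_total": rng.gamma(shape=3.0, scale=8.0, size=n) + 1,
    "genotype": rng.choice(["AA", "AG", "GG"], size=n),
    "ryr1": rng.choice(["CC", "CT"], size=n),
    "batch": rng.choice(["b1", "b2", "b3", "b4"], size=n),
    "sex": rng.choice(["f", "m"], size=n),
    "treatment": rng.choice(["HH", "HL", "LL", "unmixed"], size=n),
})

# Lesion scores are log10-transformed before analysis, as in the paper;
# marker genotype and RYR1 genotype enter as fixed effects alongside
# slaughter batch, sex, and mixing treatment.
model = smf.ols(
    "np.log10(lesion_total) ~ C(genotype) + C(ryr1) + C(batch)"
    " + C(sex) + C(treatment)",
    data=df,
).fit()
print(model.summary())
```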
2014-10-01T00:00:00.000Z
2010-08-09T00:00:00.000
{ "year": 2010, "sha1": "3dd22e70934486937171e76840fc6d61ea45d49a", "oa_license": "CCBY", "oa_url": "https://bmcgenomdata.biomedcentral.com/counter/pdf/10.1186/1471-2156-11-74", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4786eba5bccc4da82b478bc5c6956ca452e6e5d", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
226670063
pes2o/s2orc
v3-fos-license
Kinesiophobia and functionality perception in postmenopausal women with chronic low back pain

BACKGROUND AND OBJECTIVES: Low back pain is the main cause of global disability and is prevalent in women, tending to increase after menopause. The present study aimed to analyze the correlation between body mass index, muscle strength, kinesiophobia, estradiol, functional disability, and low back pain perception in postmenopausal women with chronic low back pain.

METHODS: Twenty-two postmenopausal women with chronic low back pain were evaluated. Abdominal and lower back strength were assessed using isometric tests. Basal serum estradiol levels were analyzed using the chemiluminescence method. Kinesiophobia, low back pain perception, and low back functional disability were determined using the Tampa Scale for Kinesiophobia, the visual analog scale, and the Roland Morris Questionnaire, respectively.

RESULTS: The Spearman correlation test showed correlations between the levels of kinesiophobia and the value of body mass (rho = -0.513; p = 0.015) and the levels of kinesiophobia and the values of body mass index (rho = -0.576; p = 0.005). There was correlation between the levels of kinesiophobia and perception

INTRODUCTION

Non-specific low back pain (LBP) is a symptom with no defining cause and is considered the main cause of global disability [1], affecting people of all ages [2]; however, it is prevalent in women [3], mainly those in the postmenopausal period [4]. In this phase of life, women present reduced levels of hormones such as estradiol [5], which can be a risk factor for degeneration of the intervertebral discs of the lumbar spine [6]. This hormonal reduction is related to the climacteric period, which precedes and lasts for some time after menopause [7]. Besides the reduction of estradiol levels, other health indicators related to pain and functionality tend to change during the climacteric, such as an increase in total body mass (TBM) and body mass index (BMI), as well as a reduction in muscle strength levels [8]. To control these variables, and especially to treat pain, physical exercise is considered the primary non-pharmacological intervention, owing to its capacity to increase muscle strength levels and, consequently, reduce pain perception levels [9]. Training program models that improve the strength of the flexor and spinal extensor muscles, such as resistance and stabilization training, can generate positive results in individuals with chronic non-specific LBP [10]. If not treated, prolonged exposure to this pain can contribute to the development of kinesiophobia, characterized as the fear of feeling pain when making movements or maintaining certain specific positions [11]. Kinesiophobia can develop independently from the levels of pain perception and can limit, among other tasks, the practice of physical activities [12]. The sensation of fear caused by kinesiophobia is considered more disabling than the severity of the pain itself [13], because of the impediment to performing tasks, especially those related to mobility. The limitation of movement can further aggravate the functional disability of the individual [14]. Despite the association between kinesiophobia and LBP and their dysfunctions, there is still a gap in the scientific literature on kinesiophobia and LBP associated with variables related to postmenopause. A better understanding of the association between these variables is important for the control and
reduction of LBP in all women, postmenopausal or not. The present study's objective is to analyze the relations between BMI, muscle strength, kinesiophobia, estradiol, low back functional disability, and perception of pain in postmenopausal women with chronic LBP.

METHODS

This was a correlational, cross-sectional, descriptive original study. The population was composed of women from an orthopedic clinic in Rio de Janeiro, Brazil, going through postmenopause and suffering from LBP. Participants were included if they: presented non-specific chronic LBP 2; presented LBP perception of at least 4 points on the visual analog scale (VAS) 15; were in the postmenopause period 16; and had not practiced physical exercise systematically in the last three months. The study excluded participants who: were under the effect of antidepressants or anxiolytics; presented any kind of condition or pain that could worsen during the tests; or had received any kind of physiotherapy treatment in the last three months. The sample size calculation was done in the G*Power 3.1 software (Germany), considering a two-way correlation model with an effect size of 0.5, an alpha error probability of 0.05 and a power of 0.8. The calculated sample size with this information was 26 participants. The sample was obtained from the orthopedic clinic database, contact was made through e-mail or telephone, and 26 women were selected. During the collection of anthropometric data, four patients did not attend, and 22 were included. On the first data collection visit, the participants went through an anamnesis, signed the Free and Informed Consent Term (FICT), answered the VAS and underwent a blood exam for the measurement of estradiol levels. On the second visit, the anthropometric evaluation was performed, the kinesiophobia and low back functional disability questionnaires were answered, and the neuromuscular assessments were carried out. LBP perception was evaluated by the VAS, a non-millimetered scale ranging from zero to 10 cm, in which zero represents the absence of pain and 10 the worst possible pain 15. The participants were asked to indicate their current level of pain by tracing a perpendicular straight line on this scale. After the marking, the examiner positioned a ruler in the same direction and orientation as the scale, measuring the marking in centimeters 17. The blood sample was collected at 8 a.m., after 12 h of fasting. A qualified professional independent from the study protocol performed the blood collection. Next, the levels of estradiol were evaluated by the chemiluminescence method (IMMULITE - DPC MED LAB, vacuum closed system). As a reference standard, the result needed to be <20 pg/mL to characterize postmenopause 16. The Tampa Scale for Kinesiophobia (TSK) 18 was used to evaluate the excessive fear of movement and physical activity. This scale is a questionnaire composed of 17 questions that address pain and intensity of symptoms, scored on a 4-point Likert scale. The answers to items 4, 8, 12 and 16 have to be inverted when counting the scores. The final score can vary between 17 and 68 points; the higher the score, the higher the level of kinesiophobia 18. A score above 41 points indicates a greater degree of impairment related to movement beliefs 19.
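As a rough illustration of the scoring rule just described (items 4, 8, 12 and 16 reverse-scored, then all 17 items summed), the following Python sketch can be used; the function name and the example responses are hypothetical, not part of the study:

```python
def tsk_score(responses):
    """Score the 17-item Tampa Scale for Kinesiophobia (1-4 Likert per item).

    Items 4, 8, 12 and 16 are reverse-scored (5 - answer); the total ranges
    from 17 to 68, and totals above 41 suggest a higher degree of kinesiophobia.
    `responses` is a list of 17 integers in [1, 4] (hypothetical input).
    """
    assert len(responses) == 17 and all(1 <= r <= 4 for r in responses)
    reversed_items = {4, 8, 12, 16}  # 1-based item numbers
    return sum(5 - r if i in reversed_items else r
               for i, r in enumerate(responses, start=1))

print(tsk_score([3] * 17))  # 13 items at 3 plus 4 reversed items at 2 -> 47
```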
A version of the Roland Morris Questionnaire (RMQ) 20, validated and adapted for Brazil, was used to assess the perception of low back functional disability. This questionnaire is composed of a subjective scale with 24 statements referring to the status of the lumbar spine. Patients can answer ''yes'' or ''no'' to each statement, according to their own perception at that moment. Each ''yes'' answer is worth 1 point and each ''no'' answer is worth zero points. The final score can vary from zero to 24 points. The mean score is 11.4, and scores above 14 indicate significant disability 21.

For the measurement of total body mass (TBM) and height, a mechanical Filizola® (Brazil) PL-150 scale, number 8346/97, with a stadiometer, ABNT NBR ISO 9001 certified, with 100 g precision and a maximum capacity of 150 kg, was used. With these data, the BMI was calculated as the ratio between the TBM (kg) and the squared height (m²); a minimal computational illustration is given at the end of this section. All measurements were performed in accordance with the International Standards for Anthropometric Assessment (ISAK) protocol 21.

The abdominal and spine extensor isometric tests were used to assess the strength and resistance of the abdominal and lumbar spine extensor muscles. Both tests use protocols that measure the time, in seconds, during which an individual can hold a determined position while contracting the target muscle, according to the study 22.

The abdominal isometric test evaluates abdominal muscle strength (AbdStr). In this test, the individual lay down in dorsal decubitus with the hips flexed at 45° and the knees flexed at 90°. The individual then moved to the final position of each verification level and was instructed to hold the position for as long as possible. The score is given according to the final position in which he or she was able to perform the test, in addition to the time measured in seconds. The score varies from 1 to 5, in which 5 represents higher levels of abdominal strength and resistance: (5) hands crossed behind the nape of the neck, scapulas off the ground; (4) hands crossed over the chest, scapulas off the ground; (3) arms along the body, extended elbows, scapulas off the ground; (2) hands behind the head, only the head off the ground; (1) arms along the body, only the head off the ground 22.

The isometric spine extensor test assessed the strength of the iliocostalis muscles of the lumbar spine and the multifidus (LumbStr). In this test, the individuals lay down in the ventral decubitus position and tried to extend the spine as much as possible, lifting the head and trunk from the ground. The score was given according to the position achieved by the individual and the time he or she was able to maintain isometry. The score ranged from 1 to 5, where 5 represented higher levels of strength of the spine extensor muscles: (5) hands behind the head, the individual raised the head, chest and ribs from the ground; (4) arms along the body, the individual raised the head, chest and ribs from the ground; (3) arms along the body, the individual raised the sternum from the ground; (2) arms beside the body, the individual raised the head from the ground; (1) only a slight contraction of the muscle, with no apparent movement 22.

The present study was approved by the Research Ethics Committee of the Universidade do Estado do Rio de Janeiro (UERJ), opinion number 1.360.167.
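As a small illustration of the BMI computation referenced above, the following sketch can be used; the function and the example values are hypothetical, not measurements from the study:

```python
def bmi(total_body_mass_kg, height_m):
    """BMI = TBM (kg) divided by the squared height (m), as in the anthropometric assessment."""
    return total_body_mass_kg / height_m ** 2

# Hypothetical participant: 72.5 kg, 1.60 m
print(round(bmi(72.5, 1.60), 1))  # 28.3, which falls in the pre-obese range (25.0-29.9 kg/m2)
```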
Statistical analysis

The data were analyzed in the IBM SPSS Statistics 23 software and are presented as mean and standard deviation. The Shapiro-Wilk test was used to check the normality of the sample data. The Spearman correlation test was applied to analyze the associations between the TBM, BMI, muscle strength, kinesiophobia, estradiol, lumbar functional disability, and LBP perception variables. The following parameters were used for interpreting the magnitude of the correlation coefficient (rho): 0.00-0.30: negligible; 0.30-0.50: low; 0.50-0.70: moderate; 0.70-0.90: high; 0.90-1.00: very high 23. The study adopted p < 0.05 for statistical significance (a minimal computational sketch of this analysis is given below).

RESULTS

Table 1 presents the characteristics of the 22 patients who participated in the study and the descriptive results, in scores, of the VAS, RMQ and TSK variables and the isometric tests. Table 2 presents the results of the Spearman correlation test for the studied variables. Two negative moderate correlations related to the levels of kinesiophobia were found, one referring to the TBM (rho = -0.513; p = 0.015) and another referring to the BMI (rho = -0.576; p = 0.005). This shows that the higher the TBM and BMI values, the lower the kinesiophobia levels tend to be. Kinesiophobia also had a moderate correlation with the perception of lumbar functional disability, although a positive one (rho = 0.434; p = 0.043). This means that the higher the levels of kinesiophobia, the greater the perception of lumbar functional disability. No significant correlations were found for the variables of muscle strength, estradiol and perception of lumbar pain.

DISCUSSION

The results showed a positive correlation between kinesiophobia and the perception of lumbar functional disability. This indicates that the higher the levels of kinesiophobia, the higher the perception of lumbar functional disability. A negative correlation between TBM, BMI and kinesiophobia was also found. This demonstrates that the higher the TBM and BMI values, the lower the levels of kinesiophobia tend to be. The BMI values found classified the sample as overweight, within the pre-obese range (25.0-29.9 kg/m²) 24. According to study 25, overweight and obesity are strongly associated with increased LBP incidence. Excess weight can increase the risk of LBP, which is often related to a sedentary lifestyle. However, the association between obesity and LBP can run in different directions, as obesity can be either a cause or a consequence of LBP 26. The systematic practice of physical exercise can be efficient in maintaining BMI values within the parameters considered adequate 27. Treatments and exercise programs with an emphasis on recovering functional movement, muscle strength, flexibility and anticipatory stabilization mechanisms should be the basis of LBP prevention and intervention processes 10. Study 28 correlated the variables of kinesiophobia, pain intensity, quality of life and functional disability in 132 patients diagnosed with chronic LBP. There is a correlation between kinesiophobia and functional disability in patients with chronic LBP: the greater the fear of movement, the higher the individual's levels of functional disability, results which are in accordance with this study's findings. A positive correlation between kinesiophobia and pain intensity was also found there, different from the results of the current study, in which there was no correlation between the two variables.
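The correlation analysis described in the Statistical analysis section can be sketched as follows; the data values below are hypothetical and the interpretation thresholds mirror those cited in the paper, so this is an illustrative sketch rather than the study's code:

```python
from scipy.stats import spearmanr

# Hypothetical TSK totals and BMI values for ten participants (not study data)
tsk = [45, 39, 52, 41, 48, 36, 44, 50, 38, 42]
bmi = [27.1, 29.4, 24.8, 28.0, 25.5, 30.2, 26.3, 24.1, 29.0, 27.6]

rho, p = spearmanr(tsk, bmi)

def magnitude(r):
    """Interpretation thresholds for |rho| as listed in the paper."""
    r = abs(r)
    if r < 0.30: return "negligible"
    if r < 0.50: return "low"
    if r < 0.70: return "moderate"
    if r < 0.90: return "high"
    return "very high"

print(f"rho = {rho:.3f}, p = {p:.3f}, magnitude = {magnitude(rho)}")
```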
In patients diagnosed with chronic LBP, the belief of pain during movement is associated with more pain, more disability and a lower probability of returning to professional activities. Besides these factors, which are directly related to the perception of LBP felt by the patient, it is also possible to observe brain activity in specific areas related to emotions, such as those related to fear beliefs 29. People in pain tend to have thoughts based on the need to protect themselves and avoid feeling more pain as an escape mechanism 30. Authors 31 evaluated 192 patients diagnosed with chronic LBP, divided into obese and non-obese groups. Kinesiophobia was assessed by the TSK, and functional disability was assessed by the Oswestry Scale. Quality of life was also evaluated. The results showed higher levels of kinesiophobia in the obese population compared to the non-obese, contrary to the findings of the present study, which showed lower levels of kinesiophobia in individuals with higher TBM and BMI. It is possible that these results are due to the subjective nature of kinesiophobia, since beliefs related to movement and fear of pain during movement can change according to the individual's previous experiences of pain and the area of pain 32. Alterations in hormonal levels are common in the aging process and can have a direct influence on LBP. This condition was observed in study 22 with 11 postmenopausal women with chronic LBP. These women presented higher levels of LBP perception and lower levels of estradiol, but no significant correlations between these two variables were found. These reduced hormonal levels can be associated with the loss of bone mass, which can trigger the deterioration of the lumbar spine intervertebral discs and, consequently, cause pain in this area. Older women presented lower levels of lumbar strength 22. As with LBP, this reduction in strength levels, known as dynapenia, can be caused by changes arising from the aging process. This provokes a reduction in the number and size of muscle fibers and progressively reduces muscle function due to the loss of motor neurons, which is not properly compensated by the reinnervation of muscle fibers by the remaining motor neurons 33. The limitations of this research include the cross-sectional study design, from which causal relationships cannot be established. The relatively small sample size and the absence of any control over the participants' work activities or rest duration during the study were also considered limitations. Due to these factors, and because BMI cannot distinguish between muscle mass and body fat mass, it is suggested that the study's results be interpreted with caution. Studies using a probabilistic sample and seeking to understand the relationship of postmenopause variables such as sleep quality and quantity, hormone levels, body fat and muscle mass are necessary, since changes resulting from this physiological and chronological transition can affect the quality of life of women, especially at more advanced ages.
CONCLUSION

Women in the postmenopause period with chronic LBP who had higher values of TBM and BMI presented lower levels of kinesiophobia, that is, less fear of pain during movement or when maintaining a specific position. There is a positive relation between the levels of kinesiophobia and the perception of lumbar functional disability, indicating that the fear of pain during movement or when maintaining a specific position is directly related to the perception of functionality and safety of the lumbar spine.

Figure 1. Participant data collection flow.
Table 1. Descriptive results of the sample characteristics for the study variables.
Table 2. Results of the studied variables by the Spearman correlation test. TBM = total body mass; BMI = body mass index; VAS = visual analog scale; RMQ = Roland Morris Questionnaire; TSK = Tampa Scale for Kinesiophobia; AbdStr = abdominal strength; LumbStr = strength of spine extensors. * p<0.05.
The Moderating Effects of Organizational Support on the Relationship Between Mentoring Behavior and Innovative Work Behavior

Innovative behavior is a complex behavior consisting of activities pertaining both to the introduction of new ideas (either one's own or adopted from others) and to the realization or implementation of those ideas. This study aims to deepen understanding of issues associated with the adjustment perspective. This study expects that the empirical validation of the research framework will develop into a new, broader framework for understanding mentoring behavior, expatriate adjustment, and innovative work behavior. The results of this study show that mentoring behavior has a positive influence on the expatriate adjustment process. Moreover, this study found that expatriate adjustment has a positive influence on innovative work behavior. Finally, organizational support is a moderating variable that can enhance the success of expatriation.

I. INTRODUCTION

Consultancy surveys have reported an increasing number of expatriates sent on foreign assignments in many major companies across the world, a number that will continue to expand in the future [1], [2]. MNCs send many employees overseas because international experience is a key channel for developing global talent and leadership [3], [4]. Global staffing is an important aspect of human resource management [5]. Moreover, international assignments play a vital role in expanding and building global skills [6], and using international assignments for developmental goals has become more popular [7]. Expatriates also need other potential resources for their foreign adjustment. Likewise, expatriates need relationships and personal networks to create exchanges [8]. Social networks during international assignments may have significant implications for an expatriate's effectiveness or successful performance [9]. Therefore, mentoring behavior, as an act of providing informational and psychosocial support, provides the best fit as an overarching adjustment perspective for identifying new areas of research. Moreover, innovation is one of the most powerful sources of competitive advantage and successful business performance [10]. Continuing innovation, generating and implementing new ideas, is the key to MNCs' success in the global market [12]. Expatriates are responsible for transferring and developing knowledge across the organization [12]. Adjustment is very important for improving organizational performance because well-adjusted expatriates have more energy to work [13].

II. LITERATURE REVIEW

Moreover, organizational support has several important effects on expatriates. Communication with colleagues may enhance cross-cultural learning and understanding [11]. Organizational support can give employees lower levels of depression and work conflict and can also help expatriates feel comfortable in the organization [14], [15]. Therefore, organizational support may act as a mediating variable for the influence of individual factors, family factors, and social capital on expatriate adjustment. This study may be helpful in examining various issues associated with expatriate adjustment that have not yet been fully investigated. This study also expects that the empirical validation of the research framework develops into a new, broader framework for understanding expatriates and innovative work behavior.
A. Mentoring behavior

Mentoring behavior concerns whether an expatriate has mentors, who those mentors are, and what the benefits of mentoring are. The term "mentor" refers to a person who has broad knowledge and can guide the inexperienced [16]. Usually, a mentor is a source of knowledge about the environment for newcomers. Using a mentoring network enhances expatriate adjustment and development for a successful transition [17]. As expatriates become acclimated to new conditions, they go through a process similar to that of an individual entering a new work environment (Carraher et al. 2008). The concept of mentoring has been divided into two sources: host country mentoring and home country mentoring [17], [18]. Host country mentors (host mentors) are host country nationals (HCNs) who have knowledge about the lifestyle and culture of the host country that can help the expatriate's adjustment process. Home country mentors are parent country nationals (PCNs) who have been expatriated to the same country and are supposed to have knowledge about both the home and the host countries.

B. Innovative work behavior

Innovation is one of the most powerful sources of competitive advantage and successful business performance. Innovative behavior is a complex behavior consisting of activities pertaining both to the introduction of new ideas (either one's own or adopted from others) and to the realization or implementation of those ideas [10]. Contemporary MNCs depend on continued innovation, broadly understood as a process of generating and implementing new ideas. Through innovation, firms create and sustain the competitive advantages that enable their survival and successful performance.

C. Expatriate adjustment

The concept of expatriate adjustment originally comes from Black and his colleagues [19], [20], who identified three dimensions of expatriate adjustment: general, interaction, and work. General adjustment refers to the degree to which expatriate managers feel psychologically comfortable with their host country's living environment [19]. Expatriate interaction adjustment refers to the degree to which expatriate managers feel psychologically comfortable in interpersonal relations with HCNs (Black 1988). Expatriate work adjustment refers to the degree to which expatriate managers feel psychologically comfortable with their new work roles [20]. Adjustment is generally described as a process in which employees leave a familiar culture and enter an unfamiliar one, and also as the adaptation process of an expatriate living and working in a foreign country [21]. Likewise, the adjustment process abroad is an ingredient of foreign performance [22].

D. Organizational support

Organizational support refers to the organization's concern for its expatriates' well-being, aimed at increasing expatriates' loyalty and performance [14], [23]. Organizational support is the degree to which employees perceive that the organization cares about their health and well-being, reducing conflict between employees' personal and professional lives [14]. The hypotheses developed for this study are:

H1: Mentoring behavior has a positive relationship with expatriate adjustment.
H2: Expatriate adjustment has a positive relationship with innovative work behavior.
H3: Organizational support moderates the positive relationship between expatriate adjustment and innovative work behavior, such that a higher level of organizational support strengthens this positive relationship.
III. METHODOLOGY

This research acquired 287 responses from expatriates in Taiwan and Mainland China, with an effective response rate of 15.9%. In order to achieve the purposes of this study and to test the hypotheses, SPSS 18 software was used to analyze the collected data. Questionnaire items were measured on a seven-point rating scale indicating the level of agreement with each statement, from 1 = strongly disagree to 7 = strongly agree.

IV. RESULTS AND DISCUSSION

This section explains the results. Regression model M1 presents the hypothesis test of H1. The results show that mentoring behavior has a positive and significant influence on expatriate adjustment (β = 0.841, R² = 0.708, F-value = 179.844, p-value = 0.000). This result is in line with prior research explaining that, taking home and host country mentors as a whole, mentors' psychosocial support can enhance expatriate general adjustment in the host country [24]. By assisting the expatriate to become more quickly acculturated in the new working environment, the host-country mentor may create a positive impression of the host country and its workers, while demonstrating to the expatriate that host-country nationals do want him or her to succeed. These results support hypothesis H1, that mentoring behavior has a positive and significant impact on expatriate adjustment.

Regression model M2 presents the hypothesis test of H2. The results show that expatriate adjustment has a positive and significant influence on innovative work behavior (β = 0.627, F-value = 184.282, p-value = 0.000). This result is in line with work role transition theory [25], which frames role innovation as part of the adjustment process. It means that expatriates who succeed in their adjustment process in the new environment go on to implement new ideas. Successful innovation creates a sustainable competitive advantage for the organization.

To examine the moderating effect of organizational support on the relationship between expatriate adjustment and innovative work behavior, the ANOVA analysis method was adopted. This study divided the respondents into four groups: low OS-low EA, low OS-high EA, high OS-low EA, and high OS-high EA (a rough computational illustration of this grouping approach is given at the end of this article). The results indicate that expatriates with higher organizational support tend to show stronger influences of expatriate adjustment on innovative work behavior (F = 72.367, p = 0.000).

V. CONCLUSION

The first major objective of this study was to explain the influence of mentoring behavior on expatriate adjustment. The second objective was to explain the influence of expatriate adjustment on innovative work behavior. The third objective was to test whether organizational support moderates the positive relationship between expatriate adjustment and innovative work behavior. Hypothesis H1 is supported, stating that mentoring behavior has a positive relationship with expatriate adjustment; a mentor with broad knowledge and networks enhances expatriate adjustment and development [17]. Moreover, hypothesis H2 is supported, stating that expatriate adjustment has a positive and significant effect on innovative work behavior. The prior study [7] examined work role transition theory (Nicholson 1984). Work role transitions involve both personal development (change) and role development (innovation) [20]. An expatriate's international experience abroad may influence the development of knowledge, skills, attitudes and behaviors [26], [27]. These experiences can influence the level of adjustment [28].
The results also showed that expatriate adjustment has a positive and significant influence on expatriate performance. This result is in line with prior studies finding that expatriates' work adjustment is strongly correlated with expatriate performance and is positively related to job performance [29], [27]. There are several implications of this research for multinational companies, especially for human resource departments, which arrange the needs of and manage the expatriation process. Organizational support has a moderating effect on the relationship between expatriate adjustment and innovative work behavior. This result means that a higher level of organizational support strengthens the positive impact of expatriate adjustment on innovative work behavior. Prior studies have stated that organizational support reflects the organization's concern for its expatriates' well-being, aimed at increasing expatriates' loyalty and performance [14], [23].
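As referenced in the Results, the four-group moderation check can be roughly illustrated as follows; the data are simulated and all names are hypothetical, so this is only a sketch of the grouping-and-ANOVA idea, not the study's actual analysis:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
os_ = rng.normal(5.0, 1.0, 287)   # simulated 7-point organizational support (OS) scores
ea = rng.normal(5.0, 1.0, 287)    # simulated expatriate adjustment (EA) scores
# Simulated innovative work behavior (IWB) with a small OS x EA interaction built in
iwb = 0.5 * ea + 0.3 * os_ * ea / 7 + rng.normal(0, 1, 287)

# Median splits yield the four groups: low/high OS crossed with low/high EA
hi_os, hi_ea = os_ > np.median(os_), ea > np.median(ea)
groups = [iwb[~hi_os & ~hi_ea], iwb[~hi_os & hi_ea],
          iwb[hi_os & ~hi_ea], iwb[hi_os & hi_ea]]

# One-way ANOVA across the four cells, analogous to the test reported in the paper
f, p = f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.4f}")
```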
Research on the Construction of an Enterprise Accounting Data Analysis Platform Based on Cloud Computing

With the emergence of massive data inside and outside the enterprise, paying close attention to the processing and analysis of accounting big data can bring huge added value to enterprises and help them adapt to a complex and changing economic environment. On this basis, by analyzing the theories related to accounting big data, accounting informationization and cloud computing, we build the cloud computing storage module of a big data analysis platform and apply an Apriori data mining algorithm based on association rules to process massive accounting data. A comparative prediction of the financial status of a group shows that the maximum error is less than 8% compared with the actual results, verifying the reliability and superiority of the established cloud-computing-based accounting big data analysis platform.

Introduction

In the era of big data, data, information and knowledge are important resources for enterprises (Hashem, Yaqoob, Anuar et al., 2019). Many enterprises in developed countries are striving to become knowledge-based, constantly using advanced information technology to extract knowledge from accounting big data and enhance their core competitiveness. China has also explicitly made the "upgrading of information technology" one of the goals of building a moderately prosperous society in an all-round way, so as to ensure sustained economic development and enhance comprehensive national strength. How to adapt to a complex economic environment and realize the value added of the enterprise is also the focus of enterprise information construction (Zhang, 2021). The core of enterprise informatization is accounting informationization. Because of high costs, low efficiency, long construction cycles, limited technology and other factors, it is difficult for existing accounting information systems to obtain large amounts of accounting data from inside and outside enterprises and to find knowledge in them, so as to provide a scientific basis for business managers to make timely decisions (Xu, Huang, Chen et al., 2018). Therefore, how to acquire and mine the valuable knowledge hidden behind big data and promote the sustainable development of enterprises is a difficult problem for academic and business circles to tackle together. With the development of Internet of Things technology, cloud computing has also become widely used. It has the advantages of low cost, large storage space, fast processing speed and so on, and information-related research worldwide has started to rely on it (Zhang & Chen, 2016). In the construction of accounting information systems, using cloud computing technology to obtain, cluster and analyze accounting big data not only overcomes the high costs of the traditional accounting informationization model, but also greatly improves the efficiency of analyzing massive accounting data, gradually realizing the value of accounting big data analysis. It provides a new idea for further developing and utilizing accounting big data, improving the relevance of accounting information, helping managers to make decisions, and achieving low-cost accounting control in enterprises (Yang, Huang, Li et al., 2019). For example, Alibaba Cloud migrated the cloud platform for Zheshang Securities.
In order to ensure that customers across the country experience similar network access speeds, Zheshang Securities had set up several clusters in the computer rooms of different operators across the country to ensure a good user experience. However, deploying clusters in multiple regions of the country not only involves a long construction period and high costs, but also faces great challenges in system expansion and management. At present, Zheshang Securities has deployed part of its market quotation and entrustment system on Alibaba Cloud. With the support of Alibaba Cloud's powerful network platform, multi-line BGP access has been realized. No matter which operator the end customer accesses through, whether in the north or the south, they obtain good network access speed and quality, which reduces the difficulty and cost of management and facilitates flexible expansion, thus ensuring a good end-customer experience.

State of the Art

As information technology is widely applied in the field of accounting informationization, academic research on accounting informationization has entered a new stage. Abroad, information technology has been applied to accounting since the 1950s. Driven by information technology, foreign scholars have studied process reengineering and the construction of management information systems (Sookhak, Gani, Khan et al., 2015). Regarding existing research, scholars point out that a variety of research methods exist within the framework of accounting informationization research, which can only promote the construction of accounting informationization (Park, Ki, Jeong et al., 2016). Domestic scholars have proposed that accounting informationization relies on existing information technology to integrate information flow, capital flow, business flow and logistics, realizing an accounting information system that combines accounting and information technology and is digital, dynamic, diversified and real-time (Huang, Lu, & Zhang, 2020). According to scholars' investigations, research on existing accounting informationization theory is not deep enough, and a theoretical framework for a comprehensive accounting information system has not been formed (Liu, 2016). Scholars also consider the main reason to be that researchers pay too much attention to the application of accounting information technology and its influence on accounting theory, ignoring its influence on the essence of accounting informatization (Yang, Huang, Li et al., 2019). Scholars hold that an intelligent accounting information system rests on the theoretical knowledge of complex systems: all accounting data are transformed into knowledge, with knowledge representation and complete intelligent storage, learning and memory, push, update and other functions, so as to achieve human-machine combination and intelligent-decision accounting software. In addition, scholars have pointed out the characteristics of accounting informationization in the era of big data and proposed strengthening the construction of cloud computing and setting up accounting big data analysis platforms to mine the value behind big data. Because cloud computing technology has many advantages, using cloud computing in accounting information systems can greatly improve the value of accounting data (Ranjan, Georgakopoulos, & Wang, 2018).

The Theoretical Basis of Accounting Big Data Analysis

The financial department is one of the departments most closely connected to the rest of the enterprise.
The arrival of the big data era provides more information resources for accounting department managers' financial analysis. Big data refers to large and complex data sets whose acquisition, storage, management, sharing and analysis within a reasonable time can provide huge value. Big data has four characteristics: the rapid and continuous increase of data volume, the fast speed of data input/output, the diversity of data types and sources, and the low value density of the data. Accounting informationization is the comprehensive application of modern information technologies, such as computers and network communications, to obtain, process, output and apply accounting data resources. It provides adequate, real-time and all-directional accounting information for enterprise management, control, decision-making and economic operation, and benefits managers' informed decision-making, thus improving the core competitiveness of the enterprise. Accounting informationization has the characteristics of universality, integration, dynamism and gradualness. Its essence is the process of accounting data processing, its goal is to establish a modern accounting information system by means of information technology, and its purpose is to provide sufficient, real-time and omnidirectional accounting information. Figure 1 shows the application of information technology in accounting big data processing. Data mining technology can extract useful knowledge from large amounts of data and can be applied to all aspects of enterprise financial early warning, product sales budgeting, customer value analysis and so on, so all walks of life pay close attention to data mining. Data mining research is also a hot spot in today's application development. With the production of massive data and the sharing of information resources, the original information technology could not handle massive data, and cloud computing technology emerged. Cloud computing is a business computing model. It concentrates all computing resources and uses hardware virtualization technology to provide cloud computing users with powerful computing power, storage space and bandwidth. It assigns computing tasks to a resource pool integrating a large number of computers, enabling different application systems to obtain, according to their needs, corresponding storage space, information services and computing power equal to or greater than that of traditional large servers.

Architecture of the Big Data Storage Mechanism in the Cloud Computing Environment

Cloud computing provides a dynamic, easily extensible big data storage space and structural model through the Internet. In order to realize the clustering and classification of big data storage in the cloud computing environment, it is necessary to build a big data storage system architecture for that environment. In the cloud computing environment, big data storage uses a virtualized storage pool structure, and cloud computing deployment depends on computer clusters. From top to bottom, the layers are the I/O virtual computer, the USB interface sequence and the disk layer, respectively. The enterprise data center acquires application services through various terminals, so that computing is distributed over a large number of distributed computers. When all the cloud computing virtual machines are allocated to the physical machines, formula (1) can be used to calculate the global optimal solution in this cluster.
It can also distribute the big data feature clustering centers. The samples are analyzed and collected to judge whether each sample is typical, and a large database and data flow sample set is established. Data sampling is carried out in the time periods T1, T2, ..., TK respectively. The big data set X in the cloud computing environment is divided into c classes, where 1 < c < n. The segmentation of the data is transformed into a segmentation of the space, and the central vectors of the big data storage structure are the cluster-center vectors Vi, where Vi is the i-th cluster-center vector of the target clustering feature, together with the corresponding fuzzy partition matrix. A single data source is processed by redundant data reduction. In the process of virtual machine clustering mining for multi-channel QoS requirements, the inputs (the set of virtual machines and physical machines) and the related parameters are set: the factor α, the expected value of the heuristic factor β, and the maximum number of mining iterations Imax. The data blocks uploaded by the client thus provide fixed-size data blocks to achieve cloud clustering. The analysis of the big data storage system architecture in the cloud computing environment provides an accurate data basis for big data analysis.

Modeling of the Big Data Mining Algorithm Based on Association Rules

Because of the huge amount of data in accounting work, an Apriori data mining algorithm based on association rules is used to handle the massive accounting data. The Apriori algorithm belongs to the class of association algorithms and is one of the most basic. It describes the underlying associations between data items, which are classified here as single-dimensional, single-level, Boolean association rules. The algorithm uses a level-by-level sequential search to mine frequent itemsets. The Apriori algorithm has the following property: any k-itemset that contains an infrequent (k − 1)-itemset cannot be frequent. This is because, if the percentage of transactions containing a (k − 1)-itemset is not greater than the minimum support threshold, then the percentage of transactions containing a k-itemset that extends it cannot be greater than or equal to the minimum support threshold either. In conceptual terms, the connotation of the concept formed by transactions containing the k-itemset is larger than that of the concept formed by the (k − 1)-itemset, so its extension inevitably decreases, and the number of transactions involved also decreases. Therefore, this property makes it possible to delete the infrequent (k − 1)-itemsets before the candidate k-itemsets are generated: the frequent (k − 1)-itemsets are obtained by deleting the infrequent itemsets from the candidate (k − 1)-itemsets. The basic process of the algorithm is as follows: 1) first, compute all candidate 1-itemsets C1; 2) scan the database, delete the infrequent subsets and generate L1; 3) join L1 with itself to generate C2; 4) scan the database, delete the infrequent subsets in C2, and generate L2; 5) by analogy, Ck is generated by joining Lk−1 with itself, and the database is then scanned to generate Lk, until no more frequent itemsets are produced. Frequent itemset generation is divided into two steps, connection (join) and pruning, detailed below; a minimal sketch of the whole procedure follows.
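The level-wise procedure above can be sketched as follows; the toy transactions and minimum support are hypothetical, and the implementation is an illustrative reading of the join and prune steps rather than the platform's actual code:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: returns {itemset: support_count} for all frequent itemsets.

    transactions: list of sets of items; min_support: minimum support count.
    """
    # L1: count single items and keep the frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    all_frequent = dict(frequent)
    k = 2
    while frequent:
        # Join step: L_{k-1} joined with itself to form candidate k-itemsets C_k
        prev = list(frequent)
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                union = prev[i] | prev[j]
                if len(union) == k:
                    candidates.add(union)
        # Prune step: drop any candidate with an infrequent (k-1)-subset
        candidates = {
            c for c in candidates
            if all(frozenset(s) in frequent for s in combinations(c, k - 1))
        }
        # Scan the database to count support and keep the frequent k-itemsets L_k
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: n for s, n in counts.items() if n >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

# Toy usage with hypothetical transactions
txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(apriori(txns, min_support=3))
```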
Join step: the recursive connection method is used to find Lk, joining Lk−1 with itself to generate the set of candidate k-itemsets, recorded as Ck. Letting l1 and l2 be itemsets in Lk−1, their union is added to the candidate set, and this continues until no new itemset is produced. Pruning step: the set Ck generated by the connection contains some k-itemsets that are not frequent, but all frequent k-itemsets are included in Ck. Comparing each candidate k-itemset against the transactions in the database yields the support of the set; deleting the k-itemsets that do not meet the minimum support then gives the frequent k-itemsets.

Construction Feasibility Analysis of the Big Data Analysis Platform Based on Cloud Computing

The enterprise's accounting big data Y can be regarded as the integral over the independent variable x of ρ(x), the density of the accounting big data: Y = ∫ρ(x)dx. All objective information is obtained through this form. On this basis, the useful accounting data V is a value amendment of the accounting big data Y: V = rY, with the value coefficient r ∈ [0, 1]. When r = 1, V = Y and all the accounting big data are valuable; when r = 0, V = 0 and none of the accounting data has value. The knowledge K is the integral of the useful accounting data V, K = i∫V, where i is the knowledge conversion coefficient of useful information. According to this framework, the accounting information system classifies, summarizes and mines the objective information Y and automatically provides the decision information K. The accounting big data analysis platform in this paper expands the scope of accounting data. Based on the above theories, we use information technology to mine and analyze accounting big data and provide decision information, so the establishment of the platform is theoretically feasible.

Forecast of Operating Income and Operating Cost

The performance of the big data analysis platform is analyzed through a group case. The group is a large enterprise with pharmaceutical retail as its core business; it is characterized by large scale, rapid development and many institutions. With the continuous development and expansion of the group, problems with its management model have become more and more obvious. The group adopts centralized management: the group headquarters centrally manages the budgets and financial decisions of subsidiaries, factories and departments. In order to test the predictive ability of the platform, we imported all the financial data of the group from 2008 to 2013 and predicted the financial status of operating income and operating cost from 2015 to 2020. The accuracy of the platform's predictions is verified by comparing the predicted figures with the actual figures. Based on the operating income and operating cost from 2008 to 2013, the platform automatically draws the corresponding trend line. The user can choose to forecast the next few years, and the forecast line extends automatically, as shown in Figure 2. The predicted operating income for 2008 is 41,829,787,656 yuan against an actual amount of 38,721,656,259 yuan, an error rate of 8%. The predicted operating income for 2009 is 54,795,373,842 yuan against an actual amount of 68,078,217,820 yuan, an error rate of 0.05%. The predicted operating income for 2010 is 67,497,346,641 yuan against an actual amount of 68,078,217,820 yuan, an error rate of 1%.
The predicted operating income for 2011 is 78,313,511,241 yuan against an actual operating income of 78,232,818,357 yuan, an error rate of 0.1%. Except for the 2008 prediction, the errors for the other years are within the allowable range. The forecast of operating cost is affected by operating income; apart from the 8% forecast error in 2008, the other forecasting errors are also within the allowable range. The main reason sales in 2011 fell short of the forecast is that the introduction of the new health care reform brought a new round of structural adjustment and market expansion; the macroeconomic environment was uncertain, and changes in policies and regulations affected the industry, resulting in a decline in sales. The group then adopted measures to integrate other companies to restore sales in 2011.

Three Items of Cost Forecast

The platform forecasts the financial situation of the group's three expense items: sales expenses, management expenses and financial expenses, with actual financial figures available for 2011-2017. For 2011, the forecast sales expense is 2,847,458,501.50 yuan against an actual amount of 3,071,521,304.81 yuan, an error of 3%. For 2012, the forecast sales expense is 3,272,341,449.25 yuan against an actual amount of 3,288,786,883.64 yuan, a forecast error of 0.5%. For 2013, the forecast sales expense is 3,857,956.28 yuan against an actual amount of 4,407,325,498.15 yuan, an error of 3%. For 2014, the forecast sales expense is 4,394,103,832 yuan against an actual amount of 4,417,235,498.08 yuan, an error of 0.3%. The sales expenses for 2015, 2016 and 2017, and the forecast results for the management and financial expenses of each year, are shown in Figure 2. There are larger errors for management expenses and sales expenses in 2011 and 2013, mainly because many enterprises were integrated in those years, bringing large sales and management expenses into the consolidated statements. The increase in financial expenses in 2012 was caused by large remittance losses, and 2014 returned to normal. Moreover, internal financing was adopted to reduce financial expenses, but a real difference between the exchange rate predicted within the platform and the actual exchange rate resulted in the prediction errors in 2011 and 2013. These were sudden situations that the platform could not foresee, which also reveals a shortcoming of the platform.

Conclusion

With the development of the economy and the improvement of computing, networking and intelligent technology, modern society has entered the information age. Especially in the field of accounting, with the deep application of cloud computing and other technologies, the construction of enterprise accounting informationization has accelerated. The current situation of traditional accounting information systems is analyzed, along with their shortcomings, such as single-source accounting data collection, poor integration with other systems, and insufficient accounting information. It is difficult for them to meet the new needs of managers and the need for value creation from accounting data. Therefore, it is necessary to apply advanced technology to solve these problems. Using cloud computing technology to build the platform has the advantages of high data processing efficiency and low construction cost.
Based on the theory of accounting informationization, we used cloud computing technology to build an accounting information system centered on accounting big data analysis, mainly constructing an accounting big data analysis platform. The cloud computing storage module of the big data analysis platform was modeled in detail, and an Apriori data mining algorithm based on association rules was applied to process the massive accounting data. Finally, based on analysis and prediction of a group's past and future financial situation, comparison with the actual data of the past few years shows that the platform's predictions are reliable. A big data analysis platform based on cloud computing can help enterprises develop rapidly and enhance their core competitiveness.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.
Infrared Spectroscopy for the Quality Assessment of Habanero Chilli: A Proof-of-Concept Study

Habanero chillies (Capsicum chinense cv Habanero) are a popular species of hot chilli in Australia, with their production steadily increasing. However, there is limited research on this crop due to its relatively low levels of production at present. Rapid methods of assessing fruit quality could be greatly beneficial both for quality assurance purposes and for use in breeding programs or experimental growing trials. Consequently, this work investigated the use of infrared spectroscopy for predicting dry matter content, total phenolic content and capsaicin/dihydrocapsaicin content in 20 Australian Habanero chilli samples. Near-infrared spectra (908–1676 nm) taken from the fresh fruit showed strong potential for the estimation of dry matter content, with an R²cv of 0.65 and standard error of cross-validation (SECV) of 0.50%. A moving-window partial least squares regression model was applied to optimise the spectral window used for dry matter content prediction, with the best-performing window being between 1224 and 1422 nm. However, the near-infrared spectra could not be used to estimate the total phenolic content or capsaicin/dihydrocapsaicin content of the samples. Mid-infrared spectra (4000–400 cm−1) collected from the dried, powdered material showed slightly more promise for the prediction of total phenolics and the ratio of capsaicin to dihydrocapsaicin, with an R²cv of 0.45 and SECV of 0.32 for the latter. The results suggest that infrared spectroscopy may be able to determine dry matter content in Habanero chilli with acceptable accuracy, but not the capsaicinoid or total phenolic content.

Introduction

Habanero chillies (Capsicum chinense cv Habanero) are some of the hottest commonly consumed chillies in Australia. The pungency of chilli arises from capsaicinoids, which are compounds classified as N-vanillylamides of branched fatty acids. The two most abundant capsaicinoids present in chilli are capsaicin and dihydrocapsaicin; however, a number of other capsaicinoids may be present in minor amounts. There is an ongoing interest in developing new Habanero chilli varieties with higher capsaicin contents, as these form a niche high-value market sector. Capsaicinoid contents are generally measured using high-performance liquid chromatography (HPLC), which provides a high level of specificity and accuracy. However, this technique is time-consuming and expensive, which means that it may not be suitable for the routine assessment of large numbers of samples. Hence, there is recent interest in using rapid analytical techniques such as infrared spectroscopy for the quality assurance/analysis of chilli. Near-infrared (NIR) spectroscopy has previously been used for the estimation of capsaicinoid content [1] and total phenolic content in chilli [2]. Furthermore, NIR spectroscopy has an extensive history of use for food quality analysis [3,4]. Hence, this study aimed to conduct a proof-of-concept investigation into the potential application of infrared spectroscopy for the quality analysis of Habanero chilli and to compare the relative performance of NIR and mid-infrared (MIR) spectroscopy for this purpose.

Methods

Twenty samples of Habanero chilli were sourced from Austchilli (Bundaberg, Queensland), incorporating a wide range of environmental variability.
Near-infrared spectra between 908 and 1676 nm were collected from the fresh, intact chillies using a MicroNIR OnSite handheld spectrometer (Viavi, Santa Rosa, CA, USA). Duplicate spectra were collected from opposite sides of each chilli, providing four spectra per sample (n = 80 spectra in total). The chillies were subsequently oven-dried and ground to a fine powder. Mid-infrared spectra were collected from the dried, ground chilli powder in triplicate using a Bruker Alpha Fourier transform infrared (FTIR) spectrophotometer (Ettlingen, Germany) fitted with an attenuated total reflectance (ATR) module (4000–400 cm−1). Polar compounds were extracted from the dried, powdered samples in duplicate using 90% methanol, following previously described protocols [5]. The total phenolic (TP) content of the extracts was measured as previously reported for other matrices [5], with the results expressed as gallic acid equivalents (GAE) per 100 g (dry weight basis). Capsaicin and dihydrocapsaicin contents were analysed in the methanolic extracts using HPLC, following the method of Waite and Aubin [6]. Chemometric analysis was performed in the Unscrambler X software (version 10; Camo Analytics; Oslo, Norway). Moving-window PLS-R was conducted using a custom script in R Studio running R 4.0.2.

Results and Discussion

Figure 1 shows the NIR spectra of the chilli samples.
The major peaks were centred at approximately 1447, 1193 and 976 nm, corresponding to the OH second overtone, CH3 second overtone and OH third overtone, respectively. Following Standard Normal Variate (SNV) normalisation, two of the spectra were identified as outliers and removed.

As can be seen in Table 1, only the prediction of dry matter gave acceptable model statistics when using the NIR spectra, with an R²cv of 0.65 and standard error of cross-validation (SECV) of 0.50%. The comparison of results obtained using NIR spectroscopy and the reference DM method is presented in Figure 2. Although the model statistics could be further improved, these results suggest that NIR spectroscopy may be useful for the rapid, in-field estimation of dry matter content. This could be used to monitor the maturity of the chilli crop and determine the optimum time for harvest. However, PLS-R prediction of the other parameters showed poor R²val values and high SECV values, indicating that the NIR spectral range used was not able to detect the functional groups responsible for these compound classes (i.e., total phenolics, capsaicinoids). One outlier sample (n = 4 spectra) was excluded.

Moving-window PLS-R was implemented to determine the optimum range of wavelengths to use for the prediction of dry matter content, following the method of Anderson et al. [7]. The procedure used the SNV+1d5 pre-processed spectra, with a step increment of 6 nm throughout the NIR spectrum. The optimum model was found between wavelengths of 1224 and 1422 nm, with an R²val of 0.67, RMSECV of 0.46% and RPD of 2.20.

As shown in Table 2, the model performance for all parameters aside from dry matter was slightly higher when using MIR compared to NIR spectra; nevertheless, model performance remained quite poor. The best performing MIR model was for the capsaicin-to-dihydrocapsaicin ratio, which showed an R²val of 0.45, SECV of 0.32 and RPD of 1.34.
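A rough sketch of the SNV pre-treatment and the moving-window PLS-R search described above is given below; the spectra and dry matter values are randomly generated placeholders, and the window width, component count and cross-validation settings are illustrative assumptions rather than the study's actual custom R script (the sketch is written in Python for consistency with the other examples):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavelengths = np.arange(908, 1677, 6)          # 6 nm step, as in the text
spectra = rng.random((80, wavelengths.size))   # placeholder spectra (80 = 4 per sample)
dm = rng.normal(12.0, 1.0, 80)                 # placeholder dry matter values (%)

# SNV: centre and scale each spectrum by its own mean and standard deviation
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

best = (None, -np.inf)
window = 34  # number of adjacent bands per window (an illustrative choice)
for start in range(0, wavelengths.size - window):
    X = snv[:, start:start + window]
    # Cross-validated predictions for a PLS model fitted on this window only
    pred = cross_val_predict(PLSRegression(n_components=5), X, dm, cv=10)
    r2 = 1 - np.sum((dm - pred.ravel())**2) / np.sum((dm - dm.mean())**2)
    if r2 > best[1]:
        best = ((wavelengths[start], wavelengths[start + window - 1]), r2)

print("best window (nm):", best[0], "cross-validated R2:", round(best[1], 3))
```

With real spectra, the window delivering the highest cross-validated R² would be reported, analogous to the 1224-1422 nm window identified in the study; with the random placeholders above, the R² values are meaningless and may be negative.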
Conclusions

This proof-of-concept study sought to investigate the potential of NIR and MIR spectroscopy for the rapid quality assessment of Habanero chillies, including the prediction of dry matter content, total phenolic content and capsaicin/dihydrocapsaicin content. Spectra collected using a handheld NIR instrument showed strong potential for the estimation of DM content, but not for TP or capsaicinoid content. The major benefits of handheld instrumentation include portability, speed of measurement (almost instantaneous) and low cost (virtually no ongoing costs). This means that NIR instrumentation could potentially be applied to the in-field assessment of fruit maturity. Furthermore, the method is non-destructive, meaning that samples can be analysed at different points throughout the harvest season to determine the optimum time for harvest. MIR spectroscopy did not perform well for the estimation of capsaicinoid content, although it performed slightly better for the estimation of TP content and the capsaicin-to-dihydrocapsaicin ratio.
2021-12-03T16:17:43.207Z
2021-11-23T00:00:00.000
{ "year": 2021, "sha1": "48097451694afe371738e76ed7a6b0bc0749c686", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-4591/8/1/19/pdf?version=1637644355", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "83bf10a7564e4a1461a1894834c961f380bcebc4", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
229464657
pes2o/s2orc
v3-fos-license
Accessing the Frank-Kasper Phase of Block Copolymer in the Fuzzy Colloid Regime

The discovery of the Frank-Kasper (FK) phase in block copolymer (bcp) has prompted progress in the field of soft quasicrystals. In principle, the formation of FK phase from the supercooled liquid phase of the bcp micelles should involve the mass transport of constituent molecules to transform the unimodal distribution of micelle size into the multimodal distribution prescribed by the volume asymmetry of the Voronoi cells in the FK phase. Here we present a new regime in which the Laves C14 phase of bcp developed below the glass transition temperature of the micelle core, where the mass transport was inhibited by the immobile block chains forming the core. The bcp micelle comprising a glassy core and a soft corona resembles the fuzzy colloid, and the strong van der Waals attraction between the cores directs their organization into the C14 phase to minimize the interparticle interaction energy under the metastable condition.

Introduction

Colloidal crystallization signifies the self-organization of mesoscopic particles into periodic crystalline or aperiodic quasicrystalline order and is of both fundamental and practical significance. Colloidal crystals have found vast applications in photonics, hydrogels, mechanically robust composites and templates for porous materials. 1 From the fundamental perspective, spherical colloidal particles have been exploited as the model system to explore the crystallization mechanism of atoms due to easier in-situ observation in real space, much slower ordering kinetics, and adjustable strength and range of interparticle interaction. 2 Broadly speaking, colloidal particles cover both hard particles that retain their shape and size upon packing in the lattice and soft deformable particles such as dendrimers and the micelles formed by block copolymers and surfactants. The packing of hard spherical particles cannot fill the space entirely due to their curved surfaces. In a colloidal system composed of hard spheres of identical size, the void fraction is minimized via close packing in the face-centered cubic (FCC) or hexagonal close-packed (HCP) lattice, where the former is more stable due to slightly higher positional entropy. [3][4][5] Soft colloids in the melt state tend to fill the space homogeneously with the constituent molecules; therefore, these particles are deformed into polyhedrons known as the Voronoi or Wigner-Seitz cells, with their geometries determined by the lattice structure. 6,7 The Voronoi cells of the body-centered cubic (BCC) and FCC lattices are the truncated octahedron and rhombic dodecahedron, respectively. 8 One of the most exciting discoveries of colloidal crystallization in recent years is the formation of the quasicrystalline phase and the quasicrystal approximants known as the Frank-Kasper (FK) phases in soft matter, which opened the field of "soft quasicrystals". 9,10 In contrast to the canonical lattice structures comprising only one type of Voronoi cell, FK phases are constructed by the specific packing of distorted icosahedral cells with coordination numbers of 12, 14, 15 or 16 (denoted as Z12, Z14, Z15 and Z16, respectively) in the unit cell, where each FK phase comprises at least two types of these motifs. 10,11 Since the Voronoi cells in a given FK phase may have different volumes, this type of structure was thought to form only in systems composed of nonequivalent particles with multiple sizes.
Indeed, quasicrystals and FK phases were initially discovered in metallic alloys composed of atoms with distinct sizes and electronic states. 11 The concept of multiple particle sizes was later extended to create colloidal quasicrystals by mixing nanoparticles with different sizes. 12 The necessity of prior mixing of particles with different sizes was relaxed after the discovery of FK phases in single-component soft matter, including block copolymer (bcp), 13-18 surfactant 19,20 and shape amphiphile. [21][22][23] Though these systems comprise only one component, the basic motif of the lattice is the micelle assembled from multiple molecules. These micelles are allowed to adjust their association number, and hence volume, via mass transport of the constituent molecules to fit into the lattice that minimizes the total free energy. 14,[24][25][26] In the case of conformationally asymmetric bcp, the diblock foam model (DFM) shows that the packing of the unequal-sized micelles in the FK σ phase minimizes the total free energy comprising the interfacial free energy and the conformational free energy of the stretched block chains. 27 The redistribution of association number to yield multiple particle sizes makes the micellar system effectively a mixture of particles of distinct volume, though the multiplicity stems from self-adjustment via mass transport instead of intentional prior mixing. It should be noted that the redistribution of association number has to occur above the glass transition temperatures (Tg) of both constituent blocks to assure that the required mass transport can take place. In this study, we present an exceptional scenario in which the Laves C14 phase was able to develop from the supercooled micellar liquid phase of a bcp below the Tg of the micellar core, where the mass transport mechanism was inaccessible. The C14 phase thus formed dissipated upon heating above the Tg of the core (Tg,core), implying that its stability was relevant to the vitrification of the micellar core. In contrast to the previously studied bcp micelles with fluid core and corona, the present system approached the so-called "fuzzy colloid" comprising a hard core surrounded by a soft corona, and the interaction energy between the cores plays an important role in selecting the stable packing lattice. 28,29 Our finding adds a new regime for the ordering of bcp micelles, in that the development of the FK phase does not involve mass transport and the lattice structure is governed by a hidden free energy component.

Results and Discussion

Glass transition temperatures and conformational asymmetry of the bcp. The bcp studied was a poly(2-vinyl pyridine)-block-poly(dimethyl siloxane) (P2VP-b-PDMS) with number average molecular weights of the P2VP and PDMS blocks of 2,000 and 10,000 g/mol, respectively, and a polydispersity index of 1.15 (see Figure 1 for its chemical structure). The volume fraction of P2VP in the copolymer was 0.16. The large compositional asymmetry assured that the P2VP and PDMS blocks formed the core and the corona of the micelle, respectively. The Tg of the P2VP block in the copolymer was measured by rheological measurement at a frequency of 10 Hz while ramping the temperature from 40 to 100 °C at a heating rate of 10 °C/min. Figure 2 displays the measured storage modulus G' and loss modulus G'' as a function of temperature. The glass transition of the P2VP core was manifested by an abrupt drop of G' in the heating ramp; the Tg determined from the midpoint of the observed glass transition region was ca.
65 °C. The Kuhn lengths of the two constituent blocks determined by small angle neutron scattering (SANS) were bP2VP = 1.46 nm and bPDMS = 0.99 nm (see Supplementary Information for details), which yielded the conformational asymmetry parameter ε = 1.49 by considering the mass densities ρ2VP = 0.977 g/cm³ and ρDMS = 0.965 g/cm³. 30 The value of ε was similar to those of the bcp systems having been reported to form FK phases. [13][14][15]18,[31][32][33][34]

Micelle ordering resolved by small angle X-ray scattering (SAXS). Figure 3(a) shows the SAXS profiles collected by heating the solvent-cast P2VP-b-PDMS. At the onset temperature (i.e., 30 °C), the SAXS curve showed a broad peak centering at 0.47 nm⁻¹ along with a shoulder at 0.75 nm⁻¹. This scattering profile was fitted well by the Percus-Yevick model of polydisperse spherical particles (see Figure S2 of the Supplementary Information), indicating that the micellar entity of the bcp was still retained, but the micelles exhibited only short-range order. That is, the bcp formed the micellar liquid phase. The broad hump marked by "i = 1" was the first-order form factor maximum of the P2VP core. The fact that this peak was broad and there were no discernible higher-order peaks attested that the distribution of the core size was quite broad, as manifested by the relatively large polydispersity index (= 0.154) given by the ratio of the standard deviation to the mean value (= 4.77 nm) of the sphere radius assuming a Schulz size distribution. As the temperature was raised to 80 °C, which was 15 °C above the Tg of the P2VP core (Tg,core), the micelles organized into a BCC lattice with a unit cell dimension of a = 22.0 nm, as evidenced by the emergence of sharp peaks with the position ratio of 1 : √2 : √3. The SAXS results suggested that the micelles developed during the solvent evaporation and subsequent drying processes were unable to undergo fast ordering and were trapped in a metastable micellar liquid phase at 30 °C. Upon heating above Tg,core, the micelles gained sufficient mobility to proceed with the ordering into the stable BCC phase within the experimental time scale. An order-disorder transition (ODT) occurred upon further heating to T > 160 °C, where the BCC phase turned into a micellar liquid phase exhibiting broader interaction and form factor peaks in the SAXS curve. The micellar liquid phase attained at high temperature persisted in the subsequent cooling cycle, as demonstrated in Figure 3(b). The copolymer sample thus cooled was then stored at 30 °C for prolonged annealing. Interestingly, the sample having been annealed for 60 days was found to exhibit a large number of diffraction peaks in the SAXS curve, as shown in Figure 3(b). The diffraction peaks were indexed well according to the P6₃/mmc space group of the hexagonal unit cell (see Table S2 and Figure S3 of the Supplementary Information), and the entire diffraction pattern was consistent with that of the Laves C14 phase of other bcp systems reported previously. [14][15][16][17] The dimensions of the large hexagonal unit cell deduced from the peak positions were a = 37.59 nm and c = 61.39 nm, yielding a ratio of c/a = 1.633, in accord with that of an ideal hexagonal cell. The unit cell of the C14 phase comprises 12 particles and is filled by three types of Voronoi cells, i.e., two types of Z12 and one Z16 cell, 14 as schematically illustrated in Figure 1(d).
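The lattice assignments above follow from simple diffraction geometry, and the expected peak positions are straightforward to verify numerically. The sketch below (illustrative only, not the authors' indexing code) computes the allowed BCC peak-position ratios and the peak position of a given hexagonal reflection using the unit-cell dimensions reported in the text.

```python
import numpy as np

def bcc_peak_ratios(n=5):
    """Peak-position ratios q_hkl/q_110 for a BCC lattice: reflections
    require h+k+l even, and q is proportional to sqrt(h^2+k^2+l^2)."""
    s = sorted({h*h + k*k + l*l
                for h in range(4) for k in range(4) for l in range(4)
                if (h, k, l) != (0, 0, 0) and (h + k + l) % 2 == 0})
    return np.sqrt(np.array(s[:n]) / s[0])  # 1, sqrt(2), sqrt(3), 2, ...

def hex_q(h, k, l, a=37.59, c=61.39):
    """q (nm^-1) of reflection (hkl) for a hexagonal cell (defaults are
    the C14 cell dimensions reported in the text):
    1/d^2 = (4/3)(h^2 + hk + k^2)/a^2 + l^2/c^2, with q = 2*pi/d."""
    inv_d2 = 4.0 / 3.0 * (h*h + h*k + k*k) / a**2 + l**2 / c**2
    return 2 * np.pi * np.sqrt(inv_d2)

print(bcc_peak_ratios())   # [1. 1.4142 1.7321 2. 2.2361]
print(hex_q(1, 0, 0))      # ~0.19 nm^-1 for the (100) reflection
```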
The present study revealed that P2VP-b-PDMS was another bcp system capable of forming an FK phase, where the micelles in the supercooled micellar liquid phase at 30 °C underwent a slow organization to form the Laves C14 phase. According to the conventional Voronoi tessellation, the 12 particles in the unit cell of the C14 phase have three different volumes, with the ratio of the largest cell volume to the smallest one being 1.23. On the other hand, the micelles in the micellar liquid phase, from which the C14 phase developed, displayed a unimodal size distribution. A redistribution of the association numbers of the micelles should in principle occur during the phase transition, transforming the unimodal distribution in the micellar liquid into the multimodal distribution in the C14 phase. 24 However, such a symmetry breaking process was not accessible here, since the structural organization occurred at 35 °C below Tg,core, thereby prohibiting the mass transport required for redistributing the association number. As a matter of fact, the SAXS profiles at q > 0.7 nm⁻¹, which were dominated by the form factor scattering of the P2VP core, associated with the micellar liquid and C14 phase were superimposable (see Figure 4), confirming that the micelle size distribution was preserved upon the phase transformation. Strictly speaking, the unimodal distribution of micelle size in the micellar liquid phase did not fit the multiplicity of the cell volume in the C14 phase; nevertheless, compared to the scenario of monodisperse particle size, the relatively high polydispersity of micelle size in the present system could be advantageous for accommodating the volume asymmetry underlying the C14 phase. 36 Moreover, the lattice formed by bcp micelles is usually distorted, where the centroids of the micelles deviate from the ideal positions due to size distribution and thermal fluctuations. There is an allowable range of distortion within which the scattering pattern still contains a sufficient number of diffraction peaks (but with broadening in peak breadth) for assigning the packing structure. In the case where mass transport is forbidden, the requirement of volume asymmetry for FK phase formation can be alleviated by lattice distortion. The C14 phase dissipated almost completely upon heating to 90 °C, as demonstrated in the temperature-dependent SAXS profiles of the C14-forming sample collected in a heating cycle shown in Figure 5. The TODT of the C14 phase was ca. 25 °C higher than Tg,core and was much lower than the TODT of the BCC phase observed in Figure 3(a). The result suggests that the C14 phase developed here was metastable relative to the BCC phase. This is understandable in that the glassy state of a polymer is nonequilibrium in nature, such that the micelle was indeed a metastable entity below Tg,core. If the vitrification of the core did not occur in the cooling process, the micelles would have been able to adjust their association numbers (and hence sizes) in response to the change of segregation strength; in this case, BCC should have been the thermodynamically stable ordered structure along the equilibrium free energy path representing the temperature change of the structure for micelles with fluid core and corona. On the other hand, once the micelle size was frozen in by the vitrification of the core, the system would go through another free energy path representing the change of the structure for micelles composed of a glassy core and fluid corona.
The C14 phase then became the favored packing structure under this metastable condition.

Thermodynamic driving force leading to the formation of C14 phase. The key issue that remains is the thermodynamic driving force leading to the formation of the C14 phase at temperatures Tg,corona < T < Tg,core. On the basis of the DFM, Reddy et al. have calculated the free energy of the micelle confined within the Voronoi cells associated with various lattice structures in the polyhedral interface limit (PIL), and demonstrated that the C14 phase was unstable relative to the BCC, A15 and σ phases. 27 In this model, the interfacial free energy governed by the surface area per unit volume of the core is coupled with the geometry of the Voronoi cell, as the core is assumed to be an affinely shrunk copy of the cell. Therefore, the interfacial free energy is directly determined by the lattice structure chosen to calculate the total free energy of the micelle. When the micelles are brought below Tg,core, the core geometry is arrested upon vitrification; in this case, the PIL ansatz is no longer applicable in that the interfacial free energy becomes a constant and does not vary with the lattice structure chosen for calculating the total free energy of the micelles in the Voronoi cell below Tg,core. Now the conformational free energy of the coronal block becomes the sole variable in the DFM, and BCC will be the favored packing lattice for minimizing the entropic penalty arising from stretching of the coronal blocks in the Voronoi cell, provided that the micelle core arrested (e.g. from the micellar liquid phase) adopts a spherical geometry. 27 Nevertheless, the micelles of P2VP-b-PDMS were found to organize spontaneously into the C14 phase below Tg,core, implying that there exists a free energy component not considered explicitly in the DFM. At Tg,corona < T < Tg,core, the micelle approaches the so-called "fuzzy colloid" defined by Ziherl and Kamien to describe a particle composed of a hard core and a thin soft corona, such as a dendrimer. 28 In treating the packing problem of the fuzzy colloid, the Ziherl-Kamien (Z-K) model postulated that the packing lattice is governed by the balance between two free energy components, i.e., the bulk free energy arising from the interaction between the cores, which were treated as hard spheres, and the surface free energy arising from the loss of orientational entropy of the chain segments constituting the corona upon overlapping with the segments associated with the neighboring particles. This model predicted that, if the corona is thin compared to the core, the bulk free energy dominates and a close-packed lattice such as FCC is favored. But when the corona is sufficiently thick, the A15 lattice becomes the stable packing symmetry, in that it minimizes the contact area and hence the surface free energy of the Voronoi cells with fixed volume. 28 The micelles of P2VP-b-PDMS, however, formed neither the close-packed lattice nor the A15 phase predicted by the Z-K model. The Z-K model was originally developed to predict the packing of fuzzy colloids composed of a thin corona formed by short alkyl chains, so the conformational free energy of these short chains was not taken into account. Recently, Pansu and Sadoc extended the Z-K model to include the conformational free energy change of the coronal chains attached to hard spherical particles packed in the lattice by treating the chain as an entropic spring. 29
Moreover, the hard sphere interaction assumed in the Z-K model was replaced by the van der Waals attraction in formulating the bulk free energy. As expected, the theory predicts BCC as the packing lattice that minimizes the conformational free energy. Most intriguingly, the distribution of the interparticle distance in the lattice of the C14 phase was found to minimize the van der Waals interaction energy of the particles. Consequently, fuzzy colloids prefer to organize in the C14 phase once they experience strong van der Waals attraction. This theoretical prediction was consistent with the experimental finding of the C14 phase in gold nanoparticles coated with hydrophobic ligands. 37 The energy of interaction between the cores was normally neglected in treating the packing problem of bcp micelles. This is a good assumption for weaker inter-core interaction; under this condition, the calculation of the intramicellar free energy associated with the Voronoi cells is sufficient to evaluate the stability of the corresponding lattice. Once the bcp micelles fall into the fuzzy colloid regime, the interfacial free energy of the micelle becomes independent of the lattice structure; the inter-core interaction energy determined by the positions of the micelles in the lattice may then emerge as an important variable in the total free energy. Because particles showing poor affinity to the matrix phase tend to aggregate, the van der Waals attraction between the cores of the micelles is expected to be stronger in bcps displaying a larger Flory-Huggins interaction parameter χ. In other words, the contribution of the inter-core interaction energy will be particularly important in high-χ bcps, where the Pansu-Sadoc (P-S) model will serve as the appropriate tool for analyzing the stabilities of the packing lattices. The solubility parameters of P2VP and PDMS are 20.6 and 15.5 MPa^1/2, respectively; the large difference in their solubility parameters prescribes a large χ for P2VP-b-PDMS. Therefore, we believe that the formation of the C14 phase in P2VP-b-PDMS was driven mainly by the strong attractive force between the P2VP cores, according to the P-S model. 29,38 In summary, we have disclosed a new approach for generating the Laves C14 phase of bcp through accessing the fuzzy colloid regime at Tg,corona < T < Tg,core. This approach is particularly plausible for bcps displaying large χ, as the strong van der Waals attraction between the cores could outweigh the conformational free energy of the coronal blocks to drive the organization of the micelles into the C14 phase that minimizes the interaction energy under the metastable condition. The FK phases having been disclosed for bcp thus far include the σ, A15 and Laves C14 and C15 phases, with the σ phase being the most common. Bates and coworkers reported the first discovery of the C14 and C15 phases via thermal-path-dependent processes in compositionally asymmetric polyisoprene-block-polylactide. 14

Methods

Rheological measurement. The Tg of the P2VP core of the micelle was determined by rheological measurements performed on an Anton-Paar MCR 302 stress-controlled rheometer. A 25 mm diameter cone-and-plate geometry with a one-degree angle (Anton Paar CP25-1) was used due to the small sample volume requirement of 0.07 ml, together with a medium diameter giving higher sensitivity at the required low strain amplitudes.
The sample loading was performed by heating the rheometer stage to 180 °C and then adding a carefully cut, bubble-free section of the sample, followed by lowering the gap to slightly above the measurement height of 0.054 mm. The excess sample was trimmed, the gap was moved to the measurement height, and the temperature was then lowered to 40 °C. Initially, a strain sweep measurement was performed over a strain amplitude of 0.01 to 10% to ascertain that the linear viscoelastic range (LVER) was accessed. Heating the sample to 150 °C for 5 min of annealing and cooling back to 40 °C was performed to remove any shear-induced changes during the strain sweep measurement. For subsequent oscillatory shear measurements, the strain amplitude was set to 0.02%, within the LVER. To measure the glass transition temperature of the P2VP block, constant-strain-amplitude measurements at a frequency of 10 Hz were performed while ramping the temperature from 40 to 100 °C with a heating rate of 10 °C/min.

Synchrotron small angle X-ray scattering (SAXS) measurements. SAXS measurements were conducted at beamline TPS 25A1 of the Taiwan Photon Source storage ring at the National Synchrotron Radiation Research Center in Hsinchu, Taiwan. The instrument configuration utilized 25 keV, corresponding to a wavelength of 0.083 nm, with a 0.5 second exposure time for each measurement. Data were collected using an EIGER 16M detector mounted three meters from the sample to produce a q range of 0.05 to 0.5 nm⁻¹, where q = (4π/λ)sin(θ/2), and θ and λ are the scattering angle and X-ray wavelength, respectively. All the scattering profiles were corrected for the scattering from air and the cell.

Data availability
2020-11-19T09:14:42.175Z
2020-11-12T00:00:00.000
{ "year": 2020, "sha1": "9438b555675c5de3d1a2886f9acb47f839eb1854", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-101485/v1.pdf?c=1606836420000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "827cd7a7471333a1517ee2e189e4bdfa3152f8d7", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
59275005
pes2o/s2orc
v3-fos-license
MicroRNA-29a activates a multi-component growth and invasion program in glioblastoma Background Glioblastoma is a malignant brain tumor characterized by rapid growth, diffuse invasion and therapeutic resistance. We recently used microRNA expression profiles to subclassify glioblastoma into five genetically and clinically distinct subclasses, and showed that microRNAs both define and contribute to the phenotypes of these subclasses. Here we show that miR-29a activates a multi-faceted growth and invasion program that promotes glioblastoma aggressiveness. Methods microRNA expression profiles from 197 glioblastomas were analyzed to identify the candidate miRNAs that are correlated to glioblastoma aggressiveness. The candidate miRNA, miR-29a, was further studied in vitro and in vivo. Results Members of the miR-29 subfamily display increased expression in the two glioblastoma subclasses with the worst prognoses (astrocytic and neural). We observed that miR-29a is among the microRNAs that are most positively-correlated with PTEN copy number in glioblastoma, and that miR-29a promotes glioblastoma growth and invasion in part by targeting PTEN. In PTEN-deficient glioblastoma cells, however, miR-29a nevertheless activates AKT by downregulating the metastasis suppressor, EphB3. In addition, miR-29a robustly promotes invasion in PTEN-deficient glioblastoma cells by repressing translation of the Sox4 transcription factor, and this upregulates the invasion-promoting protein, HIC5. Indeed, we identified Sox4 as the most anti-correlated predicted target of miR-29a in glioblastoma. Importantly, inhibition of endogenous miR-29a decreases glioblastoma growth and invasion in vitro and in vivo, and increased miR-29a expression in glioblastoma specimens correlates with decreased patient survival. Conclusions Taken together, these data identify miR-29a as a master regulator of glioblastoma growth and invasion. Electronic supplementary material The online version of this article (10.1186/s13046-019-1026-1) contains supplementary material, which is available to authorized users. Background MicroRNAs are short (about 22 nucleotides) non-coding RNAs that generally repress translation by targeting complementary sequences in the 3′ untranslated region (3'-UTR) of messenger RNAs [1]. A single microRNA can target dozens of messenger RNAs, thereby regulating complex biological processes. Numerous reports have detailed the important role that microRNAs play in development and in carcinogenesis [2][3][4]. Because the function of microRNAs is determined in part by the co-expression of their specific target mRNAs, their roles are complex and tissue specific. One striking example of this tissue-specific complexity is miR-29a, which has widely been reported to be a tumor suppressor in acute myeloid leukemia [5], lymphoma [6][7][8], hepatocellular carcinoma [9,10] and gastric cancer [11,12]. However, other studies have reported an oncogenic role for miR-29a in acute myeloid leukemia [13] and chronic lymphocytic leukemia [14]. Likewise, miR-29a has been reported to either decrease invasion in human carcinoma cell lines [15] or to increase invasion in human epithelial cancers [16] and in human hepatoma cells [17]. Thus, it is essential that the role of individual microRNAs such as miR-29a be evaluated in their native context, as tissue-specific or cell type-specific gene expression patterns have a tremendous impact on their function. 
One of the cancers where microRNAs have been shown to play an important role is glioblastoma, the most common and most malignant intrinsic brain tumor [18][19][20][21]. Despite treatment, the median survival of patients with glioblastoma is only 14 to 16 months and, at present, there is no cure [22]. Recent studies have used molecular features to divide glioblastomas into several subclasses [19,[23][24][25]. We recently used microRNA expression profiles to classify glioblastoma into five genetically and clinically distinct subclasses, and showed that microRNAs contribute significantly to the phenotypic characteristics of each subclass [19]. Although a growing number of microRNAs have been implicated in glioblastoma, the functional role of a majority of these molecules remains unknown. Here we report that miR-29a is expressed primarily in the most aggressive glioblastoma subclasses, and its expression correlates with short patient survival. miR-29a downregulates PTEN, EphB3 and SOX4 expression to activate a complex post-transcriptional program of growth and invasion in glioblastoma. Antagonism of miR-29a inhibits glioblastoma growth and invasion in vitro and in vivo, suggesting that this approach may represent a novel therapeutic strategy in glioblastoma.

Lentiviruses and cell lines

All studies involving primary human tissues were conducted under the auspices of a human subjects protocol approved by the Institutional Review Board at Brigham and Women's Hospital. Primary glioblastoma stem-like cells were prepared from surgical glioblastoma specimens as described previously [26]. Human U87, U251 and LN229 glioblastoma cell lines were purchased from the American Type Culture Collection. The hsa-miR-29a sequence with ~264 bp of flanking sequence was cloned from human genomic DNA by PCR and confirmed by DNA sequencing. The forward primer was 5′-gcacctcgattagttctcg-3′, and the reverse primer was 5′-ccaagctggcctaacttcag-3′. The PCR product was transferred into the pLenti6-IRES-GFP vector and packaged in 293FT cells [19]. A control EGFP lentivirus lacking a microRNA sequence was also prepared. miR-29a sponge (miR-Locker) and control sponge lentiviral vectors were purchased from Biosettia and packaged in 293FT cells. LN229 and U251 glioblastoma (GBM) cells were transduced with the control, miR-29a or miR-29a sponge lentiviruses, and stable cell lines were selected using blasticidin.

Genome-scale expression data and analysis

Array comparative genomic hybridization (CGH), microRNA, mRNA and clinical data for 197 GBM patients were downloaded from The Cancer Genome Atlas project data portal (http://cancergenome.nih.gov) in March 2009. Data use certification was obtained for the use of controlled-access data. Details on the processing and platforms used, as well as the methods for selection of highly informative microRNAs and consensus clustering, are as described previously [19]. In brief, we used the online website http://www.microrna.org/microrna/home.do to identify potential targets and looked at other anti-correlated targets using The Cancer Genome Atlas (TCGA) dataset. Additional data from TCGA for Kaplan-Meier survival analyses for Sox4 and HIC5 was obtained at http://hgserver1.amc.nl/cgi-bin/r2/main.cgi.

Luciferase reporter assays

Reporter constructs were generated using a vector in which firefly luciferase was fused to the Sox4 3'-UTR containing the putative miR-29a binding site (Addgene).
Expression of the luciferase fusion protein in 293T cells was then determined as we have described previously [27] in the presence of a miR-29a mimic (100 nM), a control oligonucleotide (100 nM) or a miR-335 mimic (as a positive control).

RNAi studies

A miRIDIAN double-stranded RNA miR-29a mimic and a hairpin inhibitor (antagomiR) for miR-29a, as well as the corresponding negative controls, were purchased from Dharmacon. SOX4 siRNA and a matched oligonucleotide control were purchased from Invitrogen. The miR-29a mimic or inhibitor was added to the medium at a concentration of 100 nM without the use of additional transfection reagents for 48 h prior to performing assays. For SOX4, EphB3 and HIC5 siRNA experiments, the siRNAs were purchased from Ambion. Oligofectamine was used to transiently transfect the cells overnight prior to performing siRNA assays.

Real-time polymerase chain reaction (PCR)

Total RNA enriched for microRNA was extracted from LN229 or U251 GBM cell lines using a commercially available kit (Qiagen). cDNA was then prepared using 1 μg of total RNA from each sample (SuperScript III First-Strand Synthesis SuperMix, Invitrogen). A miR-29a-specific real-time PCR primer was purchased from Applied Biosystems. Six nanograms of cDNA were used for real-time PCR analysis in a final reaction volume of 20 μL. The samples were analyzed in triplicate using an ABI 7300 real-time PCR machine, and statistical analysis was performed using the t test.

In vitro proliferation, growth, apoptosis and invasion assays

Cell growth or cell survival after DNA damage was assayed in vitro using the MTT assay, and cell proliferation was measured using bromodeoxyuridine (BrdU) incorporation into DNA as we have described previously [27]. Matrigel transwell invasion assays were performed as we have described previously [28].

In vivo tumor growth assay

All animal studies were conducted under the auspices of an IACUC protocol approved by the Harvard Medical Area Standing Committee on Animals. Human LN229 glioblastoma cells were transduced with either a miR-29a lentivirus or a control virus, and stable cell lines were established. 5 × 10⁶ control or miR-29a-expressing glioblastoma cells were then injected subcutaneously into nude mice (n = 6 animals per condition). Subcutaneous tumor growth was then measured serially, and tumor volume was calculated using the formula for a spheroid (a short illustrative sketch of such a formula is given below). Significance was determined using an unpaired t-test.

Intracranial invasion assay

U251 glioblastoma cells were transduced with either a miR-29a sponge lentivirus encoding Green Fluorescent Protein (GFP) or a control sponge lentivirus encoding Red Fluorescent Protein (RFP). The green or red color was reinforced by DiO (green) or DiI (red) staining. Control and miR-29a sponge-expressing cells were mixed in a 1:1 ratio, and 2 × 10⁵ cells were then injected intracranially into nude mice. After 1 week, the animals were sacrificed and the brains were sectioned and processed for immunofluorescence imaging. The extent of radial invasion by glioblastoma cells transduced with the miR-29a sponge lentivirus (green) versus the control sponge lentivirus (red) was then assessed.

miR-29a promotes glioblastoma growth

We previously reported that patients with glioblastomas from the astrocytic and neural glioblastoma subclasses have the shortest survival among glioblastoma patients [19]. In the current study, we set out to identify microRNAs that contribute most significantly to the aggressive characteristics of these glioblastoma subclasses.
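Returning briefly to the in vivo growth assay above: the text states only that tumor volume was calculated "using the formula for a spheroid". A common caliper-based convention, assumed here purely for illustration, is V = (π/6)·L·W², treating the tumor as a prolate spheroid with two equal short axes.

```python
import math

def spheroid_volume(length_mm, width_mm):
    """Caliper-based tumor volume estimate (an assumed convention; the
    paper does not give the exact formula): V = (pi/6) * L * W^2."""
    return math.pi / 6.0 * length_mm * width_mm ** 2

print(spheroid_volume(10.0, 6.0))  # volume in mm^3 for a 10 x 6 mm tumor
```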
We analysed microRNA expression profiles from 197 glioblastomas and observed that members of the miR-29 subfamily (miR-29a, miR-29b, and miR-29c) were unique among the 171 informative microRNAs examined in that they showed a selective increase in expression in both the astrocytic and neural glioblastoma subclasses (Fig. 1A). Patients harboring glioblastomas from either of these two subclasses displayed the shortest median survival (Fig. 1B). We therefore examined the effects of microRNA mimics for miR-29a, miR-29b and miR-29c on proliferation in human U87 glioblastoma cells, and found that only the miR-29a mimic significantly increased proliferation (Fig. 1C, P = 0.026, unpaired t-test). Consequently, we selected miR-29a for further study. To determine the effect of miR-29a on glioblastoma tumor growth in vivo, we used lentiviral transduction to overexpress the miR-29a transcript in human LN229 glioblastoma cells. This afforded an approximately 2-fold increase in miR-29a expression in the cells (Additional file 1: Figure S1). Overexpression of miR-29a significantly increased the growth of LN229 glioblastoma cells in vitro (Fig. 1D, P < 0.02, unpaired t-test). LN229 glioblastoma cells transduced with the miR-29a lentivirus or a control lentivirus were subsequently transplanted subcutaneously into nude mice, and tumor growth was monitored over time. LN229 glioblastoma cells overexpressing miR-29a formed significantly larger tumors than cells transduced with a control lentivirus (Fig. 2A, P < 0.05, unpaired t-test). These data indicated that miR-29a promotes glioblastoma growth.

miR-29a targets PTEN in glioblastoma

We next investigated the mechanisms by which miR-29a increases glioblastoma growth. miR-29a has previously been reported to directly target the 3'-UTR of the PTEN tumor suppressor in hepatoma cells in vitro [17] and in neural stem cells [29]. PTEN is frequently mutated or deleted in glioblastoma, and we and others have reported that it is a target of oncogenic microRNAs in this tumor [30][31][32]. PTEN loss increases glioblastoma growth and invasion in part by activating the PI3 kinase/AKT pathway [33]. Western blot analysis indicated that miR-29a downregulates PTEN protein expression in LN229 glioblastoma cells and in primary glioblastoma stem-like cells (GSCs, Fig. 2B). As expected, the miR-29a-mediated repression of PTEN expression was accompanied by activation of AKT (Fig. 2C). Integrated copy number, mRNA and microRNA expression analysis using data from 197 TCGA glioblastoma specimens failed to demonstrate an anti-correlation between miR-29a and PTEN mRNA expression (Pearson correlation coefficient (PCC) = 0.038). Strikingly, however, miR-29a and its subfamily members (miR-29b and miR-29c) were foremost among microRNAs that were positively correlated with PTEN copy number (Fig. 2D, PCC = 0.219 for miR-29a). Thus, miR-29a is well positioned to suppress PTEN expression in glioblastomas in which the PTEN gene is intact.

miR-29a decreases EphB3 to increase AKT in PTEN-deficient glioblastoma cells

Our earlier finding that miR-29a increased the proliferation of human U87 glioblastoma cells (which lack functional PTEN) suggested the existence of additional mediators of miR-29a-induced glioblastoma growth (see Fig. 1C). To investigate this possibility further, we examined the effect of miR-29a on the growth of human U251 glioblastoma cells, which also lack functional PTEN.
Lentiviral-mediated overexpression of miR-29a increased U251 glioblastoma cell proliferation significantly (Fig. 3A, P < 0.0005, unpaired t-test). Conversely, exposure of PTEN-deficient U251 cells to the miR-29a inhibitor (100 nM) significantly decreased proliferation (Fig. 3B, P < 0.05, unpaired t-test). Additionally, inhibition of endogenous miR-29a using the miR-29a inhibitor (100 nM) significantly decreased the growth of PTEN-deficient U251 glioblastoma cells (Fig. 3C, P < 0.01, unpaired t-test). In order to identify growth-promoting pathways activated by miR-29a in the absence of PTEN, we exposed PTEN-deficient U251 glioblastoma cells to a miR-29a mimic (100 nM), collected the protein and examined the lysates using an antibody array that assays several key growth regulatory pathways in the cell (Human Phospho-Kinase Array Kit, R&D Systems). This assay revealed that miR-29a increased AKT phosphorylation and β-catenin expression in U251 glioblastoma cells, and this was confirmed by Western blot (Fig. 4A). [Displaced Fig. 1 caption, panels c-d: BrdU proliferation assay showing the effect of miR-29a, miR-29b, miR-29c or a control mimic (100 nM) on proliferation in human U87 glioblastoma cells, mean ± SEM, *P < 0.026, unpaired t-test; MTT cell growth assay showing the effect of lentiviral-mediated overexpression of miR-29a or a control sequence on the growth of primary human glioblastoma stem-like cells, *P < 0.0001, unpaired t-test.] AKT can phosphorylate and inactivate GSK3β [34] which, in turn, phosphorylates β-catenin and targets it for degradation [35]. Indeed, miR-29a induced inhibitory phosphorylation of GSK3β on serine 9 (Fig. 4B), suggesting that the increased β-catenin expression observed in the presence of miR-29a may result from AKT-dependent phosphorylation and inhibition of GSK3β activity. In order to identify upstream mediators of the effect of miR-29a on AKT activation, we identified predicted anti-correlated mRNA targets of miR-29a using microRNA and mRNA expression profiles from 261 TCGA glioblastoma specimens. We identified EphB3 as an anti-correlated (PCC = −0.508) predicted target of miR-29a. EphB3 encodes a receptor tyrosine kinase that suppresses AKT activation in lung cancer cells [26]. Western blot confirmed decreased EphB3 expression in human U251, LN229 and U87 cells transduced with the miR-29a lentivirus (Fig. 4C). Moreover, siRNA-mediated knockdown of EphB3 in LN229 glioblastoma cells increased AKT phosphorylation and activation (Fig. 4D). In addition to their effects on proliferation, both AKT and β-catenin can inhibit apoptosis in glioblastoma cells [26,27]. Enforced overexpression of β-catenin increased glioblastoma cell growth (Fig. 4E, P < 0.05, unpaired t-test). We therefore examined the effect of miR-29a on the survival of PTEN-deficient U251 glioblastoma cells after DNA damage. The miR-29a mimic (100 nM) significantly increased cell growth under basal conditions and increased survival after exposure to the DNA-damaging agent camptothecin (Fig. 4F, P < 0.006, unpaired t-test). Taken together, these data suggest that miR-29a decreases EphB3 to activate the PI3K/AKT and Wnt pathways, thereby promoting proliferation and survival in glioblastoma cells. Like PTEN, EphB3 suppresses AKT activation and inhibits lung cancer cell migration [37]. Although AKT activation can promote glioblastoma cell invasion, β-catenin reportedly inhibits invasion in these cells [38].
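The anti-correlation screen used here, and again below for Sox4, amounts to ranking candidate target mRNAs by their Pearson correlation with miR-29a across tumor specimens. A minimal sketch (assuming expression matrices already matched by specimen; this is not the authors' pipeline, and the variable names are illustrative):

```python
from scipy.stats import pearsonr

def rank_anticorrelated_targets(mirna_expr, mrna_expr_df):
    """Rank candidate target mRNAs by Pearson correlation coefficient
    (PCC) with a microRNA across specimens, most negative first.
    mirna_expr: 1-D array with one value per specimen.
    mrna_expr_df: pandas DataFrame, rows = specimens, columns = genes."""
    pcc = {gene: pearsonr(mirna_expr, mrna_expr_df[gene])[0]
           for gene in mrna_expr_df.columns}
    return sorted(pcc.items(), key=lambda kv: kv[1])

# Usage: rank_anticorrelated_targets(mir29a, mrna_df)[:10] lists the ten
# most anti-correlated genes (e.g. Sox4 at PCC = -0.636 in this study).
```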
Our finding that miR-29a decreases expression of both PTEN and EphB3 in glioblastoma raised the possibility that miR-29a regulates a coordinated invasion program in glioblastoma. We therefore searched for additional miR-29a targets that might mediate its effects on glioblastoma invasion. Computational analysis of microRNA and mRNA expression profiles from 261 primary glioblastoma specimens identified Sox4 as the most anti-correlated predicted mRNA target of miR-29a (PCC = −0.636). Likewise, miR-29a was the microRNA that was most anti-correlated with Sox4 (Additional file 1: Figure S3). Sox4 is an HMG box transcription factor that regulates a variety of biological processes, including neural differentiation [39]. Numerous but conflicting reports indicate that Sox4 may act as either an oncogene or a tumor suppressor in a range of cancers [39]. Relevant to the current study is a previous report that loss of Sox4 promotes melanoma cell invasion via a mechanism that involves activation of NFκB [40,41]. Western blot analysis indicated that miR-29a robustly downregulates Sox4 protein expression in multiple human glioblastoma cell lines and in primary glioblastoma stem-like cells (Fig. 6A), and the miR-29a inhibitor (100 nM) increased Sox4 protein levels in primary glioblastoma stem-like cells (Fig. 6B). Real-time PCR revealed a miR-29a-induced decrease in Sox4 mRNA levels (Additional file 1: Figure S1). Importantly, a luciferase reporter assay in which the SOX4 3'-UTR was fused to the firefly luciferase mRNA sequence indicated that miR-29a directly targets the Sox4 3'-UTR (Additional file 1: Figure S3). We depleted endogenous Sox4 protein expression via transient transfection of Sox4 siRNA in LN229 or U251 glioblastoma cells (Fig. 6C), and found that this significantly increased glioblastoma cell invasion (Fig. 6D, P < 0.0001, unpaired t-test). Conversely, Sox4 overexpression in PTEN-deficient U87 glioblastoma cells decreased invasion (Fig. 6E, P < 0.01, unpaired t-test). Manipulation of Sox4 expression did not increase AKT phosphorylation or β-catenin levels (data not shown). Taken together, these data indicate that miR-29a targets Sox4 to promote glioblastoma cell invasion independent of PTEN.

HIC5 is a downstream mediator of the miR-29a/Sox4 invasion pathway in glioblastoma

Loss of Sox4 has been reported to promote invasion by activating the NFκB pathway [40,42].
However, the miR-29a mimic failed to promote NFκB nuclear translocation in human glioblastoma cells, suggesting that NFκB activation is not responsible for the miR-29a-induced increase in glioblastoma cell invasion (data not shown). To identify other possible downstream mediators of the miR-29a/Sox4 invasion pathway, we queried public databases [43,44] to search for Sox4-regulated transcripts that have been implicated in cell migration or invasion. We identified HIC5 (a.k.a. TGFβ1I1) as a migration-related transcript [45][46][47][48] that is upregulated after knockdown of Sox4 in other cell types. Western blot analysis revealed upregulation of HIC5 protein expression after exposure of human U251 glioblastoma cells to the miR-29a mimic or after siRNA-mediated knockdown of Sox4 (Fig. 6F). Knockdown of HIC5 expression using siRNA significantly inhibited LN229 (PTEN-competent) and U251 (PTEN-deficient) glioblastoma cell invasion (Fig. 6G, *P < 0.0001, unpaired t-test). Moreover, knockdown of HIC5 completely abrogated the increase in invasion induced by miR-29a in PTEN-deficient U251 cells, suggesting that it plays an essential role in this process (Fig. 6H).

miR-29a regulates glioblastoma invasion in vivo and correlates with survival

Our earlier studies using human LN229 glioblastoma cells transplanted subcutaneously into nude mice indicated that miR-29a promotes glioblastoma tumor growth (see Fig. 2A). We next investigated the effect of endogenous miR-29a on glioblastoma invasion in vivo using an intracranial human glioblastoma xenograft mouse model. [Displaced figure caption fragment: quantitation of Matrigel invasion assays after exposure to control or miR-29a inhibitor (100 nM), mean ± SEM of 6 replicates; *P < 0.05 and *P < 0.01 for LN229, *P < 0.0001 for U251 glioblastoma cells.] PTEN-deficient human U251 glioblastoma cells were transduced with a lentivirus containing a nucleotide sequence complementary to miR-29a (miR-29a sponge/miR-locker) or a control sequence (control sponge/miR-locker), and a stable cell line was then selected. The control and miR-29a sponge lentiviral vectors also encoded either Red Fluorescent Protein (control) or Green Fluorescent Protein (miR-29a), respectively. Overexpression of the bulged miR-29a sponge sequence increased miR-29a levels (Additional file 1: Figure S1), presumably because it interfered with RISC-mediated degradation of the miRNA/mRNA target duplex. The sponge effectively antagonized the ability of miR-29a to degrade its mRNA targets, as evidenced by the elevation of Sox4 mRNA (Additional file 1: Figure S1). This elevation was in contrast to the decrease in Sox4 mRNA expression induced by miR-29a itself (Additional file 1: Figure S1). Overexpression of the miR-29a sponge increased Sox4 protein expression and decreased HIC5 protein expression in U251 glioblastoma cells (Fig. 7A). In addition, it significantly decreased U251 glioblastoma cell proliferation (Fig. 7B, P < 0.02, unpaired t-test). The miR-29a sponge also inhibited cell growth (P < 0.0001, unpaired t-test) and increased DNA damage-induced apoptosis (P < 0.05, unpaired t-test) in U251 glioblastoma cells in vitro (Additional file 1: Figure S1).
Inhibition of endogenous miR-29a using the miR-29a sponge significantly decreased glioblastoma cell invasion in vitro (Fig. 7C and Additional file 1: Figure S5). We examined the effect of miR-29a on glioblastoma cell morphology using human U251 glioblastoma cells transduced with control, miR-29a or miR-29a sponge lentiviruses. When compared to control U251 glioblastoma cells, cells overexpressing miR-29a were smaller and displayed moderately fewer filopodia (Fig. 7D). In contrast, cells overexpressing the miR-29a sponge adopted a rounded morphology with a marked reduction in filopodia and lamellipodia (Fig. 7D). In order to investigate the role of miR-29a in glioblastoma cell invasion in vivo, PTEN-deficient U251 glioblastoma cells expressing either the control (RFP) or miR-29a (GFP) sponges were mixed 1:1 and injected intracranially into the brains of nude mice. After one week, the brains were collected and processed for fluorescence imaging to identify invading cells. Glioblastoma cells overexpressing the miR-29a sponge (green fluorescence) migrated from the injection site less than control cells (red fluorescence, Fig. 7E). Our initial observations using primary glioblastoma specimens indicated that miR-29a is preferentially expressed in the astrocytic and neural glioblastoma subclasses. Because these subclasses display the shortest median survival among the five glioblastoma subclasses identified by microRNA profiling, our findings suggested that miR-29a may be associated with decreased patient survival. Indeed, Kaplan-Meier survival analysis using microRNA expression data from 261 primary glioblastoma specimens obtained from the TCGA portal indicated that increased miR-29a expression is associated with decreased patient survival (Fig. 7F, P = 0.038, log-rank). Consistent with the miR-29a/Sox4/HIC5 invasion pathway identified by our in vitro studies, increased Sox4 mRNA expression is positively correlated with patient survival (Fig. 7F, P = 0.023, log-rank), and HIC5 mRNA expression is negatively correlated with survival (Fig. 7F, P = 0.027, log-rank). Of note, decreased EphB3 mRNA expression also correlated with decreased survival (Fig. 7F, P = 0.045, log-rank). Taken together, these data establish a role for endogenous miR-29a in glioblastoma growth and invasion.
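The survival comparisons in Fig. 7F are standard Kaplan-Meier/log-rank analyses. A minimal sketch with the lifelines package, assuming a per-patient table split at the median expression value (the column names and the median split are illustrative assumptions, not the authors' exact procedure):

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_median(df, expr_col="mir29a"):
    """Compare survival between high- and low-expression groups defined
    by a median split; df needs 'time' (follow-up) and 'event' (1 = death
    observed) columns. Returns the log-rank P value."""
    high = df[expr_col] >= df[expr_col].median()
    kmf = KaplanMeierFitter()
    for label, grp in (("high", df[high]), ("low", df[~high])):
        kmf.fit(grp["time"], grp["event"], label=f"{expr_col} {label}")
        kmf.plot_survival_function()
    res = logrank_test(df[high]["time"], df[~high]["time"],
                       df[high]["event"], df[~high]["event"])
    return res.p_value
```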
Discussion MicroRNA-29a is a conserved microRNA that is involved in the regulation of several coordinated post-transcriptional programs affecting different biological processes. For example, miR-29a represses the translation of multiple extracellular matrix proteins, and miR-29a depletion leads to fibrosis in several tissues [49]. miR-29a also regulates the myeloid differentiation program [5]. We report here that miR-29a regulates a complex program of cell growth and invasion in glioblastoma. This program not only involves co-activation of the AKT/PI3K and Wnt pathways via downregulation of PTEN and EphB3, but also activation of a newly discovered Sox4/Hic5 invasion pathway (Additional file 1: Figure S6). MicroRNA-29a has previously been reported to promote hepatoma cell migration by directly targeting PTEN, a key regulator of migration in many cell types [17,50]. We observed that miR-29a robustly downregulates PTEN in glioblastoma cells that have intact PTEN function. Surprisingly, however, we did not find an anti-correlation between miR-29a and PTEN mRNA expression. This may be due in part to the impact of other mechanisms that regulate PTEN expression and function in glioblastoma, including deletions, mutations and the impact of other microRNAs [19,27,31,32,51]. Interestingly, miR-29a is among the top 1% of microRNAs in terms of its positive correlation with PTEN copy number in glioblastoma, suggesting that miR-29a-mediated downregulation of PTEN provides a selective growth advantage in glioblastoma cells with intact PTEN. In many glioblastomas, PTEN is deleted or mutated [51]. In such tumors, we find that miR-29a nevertheless promotes growth by downregulating EphB3, thereby increasing AKT activation and β-catenin levels. Activation of the EphB3 receptor leads to PP2A-mediated dephosphorylation and inactivation of AKT [37]. Interestingly, EphB3 also inhibits the migration of lung cancer cells [37]. Consistent with previous reports [34], we find that AKT activation is accompanied by phosphorylation and inactivation of GSK3β which, in turn, phosphorylates β-catenin and targets it for degradation [35]. In osteoblasts, miR-29a has also been reported to increase β-catenin by directly targeting several Wnt pathway antagonists other than GSK3β [52]. Additional studies are needed to determine whether miR-29a also increases β-catenin via these mechanisms in glioblastoma. In addition to the AKT/PI3 kinase and Wnt pathways, miR-29a activates a newly-discovered Sox4/Hic5 invasion pathway in glioblastoma cells. This pathway operates in PTEN-deficient glioblastoma cells to robustly promote invasion. Sox4 is an HMG transcription factor that promotes neuronal differentiation during nervous system development [53]. Recent reports indicate that decreased Sox4 expression promotes invasion in melanoma [39,40]. This effect is thought to involve activation of the NFκB pathway and regulation of DICER expression. In the current study, we did not find evidence for miR-29a-induced nuclear translocation of NFκB, suggesting that this is not the mechanism underlying miR-29a-induced invasion in glioblastoma. However, we did observe robust upregulation of HIC5 after miR-29a exposure or Sox4 knockdown in glioblastoma cells. Increased HIC5 expression promoted glioblastoma cell invasion, and HIC5 knockdown abrogated the miR-29a-induced increase in invasion. HIC5 is homologous to paxillin and associates with the focal adhesion kinases, FAK and Pyk2, at focal contacts [45,46]. 
HIC5 has also been reported to promote epithelial-to-mesenchymal transition (EMT) and invadopodia formation by regulating the Rho/ROCK pathway [11,47]. However, as a tumor promoter, miR-29 mediates EMT and promotes metastasis in breast cancer, colon cancer and pancreatic cancer [54][55][56]. There is also evidence that miR-29 may regulate MMP2 or Mcl-1 [56,57], which partly participate in the process of metastasis, yet the underlying mechanisms remain controversial. Interestingly, we saw no evidence of Rac1 activation by miR-29a (personal observations). Additional studies are thus underway to determine the downstream signaling pathway activated by HIC5 in glioblastoma cells.
2019-01-27T14:02:49.686Z
2019-01-25T00:00:00.000
{ "year": 2019, "sha1": "6686d2a04f01a30fa1a601d17b785c74586601eb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13046-019-1026-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "28f3b0c2f783456aa300d8389e76b45beb88fad1", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
219629206
pes2o/s2orc
v3-fos-license
Financial Transactions Flow Chart of Fish Marketing at Fish Landing Center (PPI) Lhok Pawoh

a Teknik Industri, Politeknik Aceh Selatan, Tapaktuan, 23711, Indonesia
b Teknik Informatika, Politeknik Aceh Selatan, Tapaktuan, 23711, Indonesia
c Mahasiswa Teknik Industri, Politeknik Aceh Selatan, Kabupaten Aceh Selatan, 23711, Indonesia
asbahrul.alr@gmail.com*, hardiantozainal@gmail.com, meraty.ramadhini@yahoo.com, safrijal3681@gmail.com, firdausmuhammad828@gmail.com, fahri.fahri0203@gmail.com

Abstract

The fisheries and marine sector is one of the significant sectors in Aceh Province, where more than 55% of the population of Aceh depends on this sector. Based on 2014 data, fisheries production by South Aceh fishermen reached 20,370.06 tons/year, which was marketed inside and outside South Aceh. The problem examined in this study is to determine the flow chart of financial transactions in fish marketing at the Fish Landing Center (PPI) level, as part of efforts to determine the quality and safety of the catch. This research is designed to answer how the financial transaction and payment processes between market operators are carried out. The types of data used in this study are primary and secondary data. Qualitative analysis is carried out descriptively, aiming to understand in depth the financial transactions at the Lhok Pawoh Fish Landing Center. The results of this study indicate that the transactions used at the Fish Landing Center are external transactions, with the payment methods divided into two types, namely cash and non-cash payments. Most of the financial transactions carried out by market operators at PPI Lhok Pawoh use the non-cash payment method, except for Muge motor originating from outside Lhok Pawoh Village, Muge lapak, and consumers who buy fish directly at the PPI. In addition, it was found that only 25% of Muge motor make cash payments.

I. Background of Research

The fisheries and marine sector is one of the significant sectors in Aceh Province, where more than 55% of the population of Aceh depends on this sector both directly and indirectly [1]. Since 2006, after the tsunami, Aceh's fish production began to increase, from 126,400 tons produced in 2006 to 159,500 tons in 2014 [2]. The sizable fish production was accompanied by high fish consumption in Aceh, exceeding the national level: fish consumption in Aceh is 45.83 kg/cap/year, while national fish consumption is 38.14 kg/cap/year. Based on the results of 2014 data collection, the fish produced by Aceh Selatan (South Aceh) fishermen reached 20,370.06 tons/year, of which 40% (roughly 8,000 tons) is sent to North Sumatra and then exported to foreign countries, about 3% is processed by the fishermen themselves, and the rest is consumed locally by the people of South Aceh and neighboring regencies such as Subulussalam and Southwest Aceh [3]. Such great potential is not accompanied by an understanding of how the market mechanism operates. Therefore, knowing the marketing system, especially its financial transactions, is very important as a basis for determining the level of safety and quality of the fish distributed to consumers.

A. Market

A simple market can be interpreted as a place of exchange of goods and services [4].
According to Stanton, W.J., a market is a group of people who have a desire to be satisfied, money to spend, and a willingness to spend it [5]. From these definitions it can be concluded that every market provides facilities for transactions to occur, with sellers and buyers who are interested in, and agree on, the exchange of goods or services.

B. Marketing
Marketing is a social and managerial process whereby individuals and groups get what they need and want by creating and exchanging products and values with other parties [6]. In other words, marketing is a process or activity that delivers products from producers to consumers, forming a bridge between the two [7].

C. Market Operator
A market operator is a person who manages and operates a regulated market business, and may be the regulated market itself. Every market operator must at all times have a good reputation; have sufficient knowledge, skills, and experience to carry out their functions; and act with honesty, integrity, and independence of mind [8].

D. Financial Transaction
A transaction is an activity carried out by a person that changes the assets or finances they own, either increasing or decreasing them: for example, selling property, buying goods, paying debts, and paying the various costs of making ends meet. Every transaction involves transaction administration, meaning the careful recording, using certain methods, of changes in the finances of a person or organization [9].

E. Payment System
A payment system is a system comprising a set of rules, institutions, and mechanisms used to transfer funds in order to meet an obligation arising from an economic activity. Payment instruments have developed very rapidly. Looking back at the earliest known means of payment, bartering traded goods was the custom of the premodern era. Later, certain units with a recognized payment value emerged, better known as money, which to this day remains one of the main payment instruments in use in the community. Payment instruments have continued to evolve from cash to non-cash instruments, such as paper-based instruments (for example, checks and crossed checks) and paperless instruments such as electronic funds transfers and card-based payment instruments [10].

F. Flow Chart
A flow chart is a logical, system-oriented description of data flow that maps the flow of information onto the structure of a program, making it easier for users to understand the system being developed [11]. The function of a flow diagram is to illustrate and simplify a series of processes so that they are easily understood and easily followed through the sequence of steps of a process. Table 1 lists the symbols commonly used in flow charts, including the Process symbol (the activities that occur in the flow chart) and the Connector symbol (a connecting point between parts of the chart).

III. Research Method
This research was conducted at PPI Lhok Pawoh, Sawang District, South Aceh Regency, Aceh Province, over a period of 5 months, from 1 March 2019 to 31 July 2019.

A. Research Approach
The flow chart of financial transactions carried out by market operators at PPI Lhok Pawoh does not yet have sufficient initial data, so this study is classified as basic or preliminary research.
Basic research is research intended for the development of a science and is directed at developing existing theories or discovering new ones [12].

B. Research Design
According to Burns & Grove, a research design is a blueprint for conducting a study with maximum control over the factors that can influence the validity of a finding [13]. Polit et al., on the other hand, describe the research design as everything researchers do to answer a research problem or test a research hypothesis, which is why a research design must be arranged [14]. This research is designed to answer how the financial transaction and payment processes between market operators are carried out, by means of a case study of the market operators at PPI Lhok Pawoh.

C. Data Types and Source
The data used in this study consist of primary and secondary data. Secondary data are data that have already been collected and analyzed by someone else [15], whereas the primary data were obtained through direct observation and interviews with market operators, using unstructured interview methods. The respondents who served as sources of the required primary data are listed in Table 2 and include Toke bangku (3), Toke ikan (1), Muge besar (2), Muge lapak (2), Muge motor (2), and consumers (2), out of a total of 14 respondents.

D. Data Analysis Method
Qualitative analysis requires creativity: the challenge is to put raw data into a meaningful logic, test it as a whole, and find ways to communicate the interpretation to others [16]. The qualitative analysis here is descriptive and aims to understand in depth the pattern of financial transactions at PPI Lhok Pawoh, so the researcher or analyst plays an important role at this stage.

IV. Result and Discussion
PPI Lhok Pawoh began operating in 2013 and has 13 fishing boats, with at least 500 workers accommodated at the PPI. Business activities at PPI Lhok Pawoh involve fishermen as fish producers and market operators as agents who simplify the marketing process. Fishermen are people actively engaged in fishing as a livelihood, while the consumers are scattered across the sub-district where PPI Lhok Pawoh is located, Sawang, and neighboring sub-districts. Based on field observations, there are several types of market operators at the Lhok Pawoh fish landing center (PPI), as shown in Table 3. Toke bangku is the term for a fisherman's representative who sells the catch and provides information on its price in the market. Toke ikan are individuals who collect fish through the toke bangku to be marketed outside the area or to factories. Muge are people whose job is to buy, distribute, and sell the fishermen's catch. Field observations show several types of muge operating at PPI Lhok Pawoh, listed in Table 4. Muge besar buy fish in large quantities and then market them to muge lapak and directly to consumers in traditional markets. Muge lapak buy fish directly at the PPI or from muge besar and then market them to consumers. Muge motor take fish from a manager and then market them to consumers. From the above, it can be concluded that there are five types of market operators at PPI Lhok Pawoh, namely: toke bangku, toke ikan, muge besar, muge lapak, and muge motor.

A. Types of Financial Transactions Carried Out by Market Operators
PPI Lhok Pawoh is a large-scale fish-producing PPI in Sawang District.
Many market operators are therefore involved in marketing the catch. Every muge entering PPI Lhok Pawoh can buy fish directly from a toke bangku without involving any other party in the transaction, and the same holds between market operators and consumers. At the Fish Landing Center (PPI), the transaction process begins with the fixing of a base price by the toke bangku and the customer, followed by price approval and payment for the fish; once this is complete, marketing to consumers begins. Transactions between market operators are carried out according to their respective mutual agreements. Every financial transaction is conducted personally, because the market operators do not constitute an integrated entity within one organization; each conducts business independently. This shows that every transaction carried out by the market operators is an external transaction. The payment methods commonly used by market operators at PPI Lhok Pawoh are divided into two types, namely: (1) cash payments and (2) non-cash payments.

B. Cash Payment
Cash payment is most often made by muge motor, muge lapak, and consumers who buy fish directly from a toke bangku, with a minimum quantity of fifty kilograms (50 kg). Not all muge motor pay cash, however; one of the toke bangku said that those who often pay cash come from outside Lhok Pawoh Village, for example from the districts of Meukek, Samadua, and Tapaktuan. Of the many muge motor who collect fish at PPI Lhok Pawoh, only 25% pay cash when collecting fish at the PPI. Meanwhile, consumers who buy fish from a muge motor or muge lapak pay directly in cash; this was confirmed in an interview with one consumer, who said that purchases from a muge lapak or muge motor are always paid for in cash on the spot. The proof of payment used by market operators at PPI Lhok Pawoh is the receipt, a proof of transaction acknowledging receipt of money as payment for goods. Receipts are made and signed by both parties, the one receiving the money and the one making the payment, and usually consist of two parts, a first and a second copy. The operators making cash payments are summarized in Table 5.

C. Non-Cash Payment
Non-cash payment here does not mean payment by credit card, but payment made only after the fish taken from a toke bangku have been sold on to consumers. In an interview, one toke bangku, Rusman, said that almost every market operator pays non-cash: they pay after the fish taken from the toke bangku are sold, sometimes the next day or when collecting the next batch of fish. Non-cash payments involve toke ikan, muge besar, and muge motor paying the toke bangku, and the toke bangku paying the fishermen. The toke bangku pays the fishermen the next day, once the fish they manage have sold out. The non-cash payments are summarized in Table 6; for example, transaction T.9 (payments from toke bangku to fishermen) uses the non-cash method, while transaction T.10 (consumers buying from muge besar) uses the cash method.
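To make the reported flow concrete, the sketch below encodes the payment relationships described in this section as a small directed graph in Python. The actor names follow the paper; the specific edge list and payment methods are our summary of the narrative above and of Tables 5 and 6 as reported, so individual edges should be read as illustrative assumptions rather than a reproduction of the authors' tables.

# A minimal sketch of the financial-transaction flow at PPI Lhok Pawoh,
# encoded as a directed graph of (payer, payee, method) edges. The actors
# come from the paper; the edges are illustrative, not the authors' tables.
from collections import namedtuple

Transaction = namedtuple("Transaction", ["payer", "payee", "method"])

flows = [
    Transaction("toke bangku", "fisherman",   "non-cash"),  # paid the next day
    Transaction("toke ikan",   "toke bangku", "non-cash"),
    Transaction("muge besar",  "toke bangku", "non-cash"),
    Transaction("muge motor",  "toke bangku", "non-cash"),  # ~75% of muge motor
    Transaction("muge motor",  "toke bangku", "cash"),      # ~25%, from outside the village
    Transaction("muge lapak",  "toke bangku", "cash"),
    Transaction("consumer",    "toke bangku", "cash"),      # direct purchase, min. 50 kg
    Transaction("consumer",    "muge lapak",  "cash"),
    Transaction("consumer",    "muge motor",  "cash"),
    Transaction("consumer",    "muge besar",  "cash"),      # transaction T.10
]

for t in flows:
    print(f"{t.payer:12s} --{t.method:>8s}--> {t.payee}")

Printing the edge list reproduces, in text form, the kind of flow chart the study set out to draw: every payment path terminates at a toke bangku before reaching the fishermen, which is why the toke bangku sits at the center of the chart.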
V. Conclusion
The research revealed that PPI Lhok Pawoh is run by five types of market operators, namely: toke bangku, toke ikan, muge besar, muge lapak, and muge motor. All financial transactions carried out by market operators at PPI Lhok Pawoh are external transactions, because the operators do not belong to the same organization and run their businesses individually. Most of the financial transactions at PPI Lhok Pawoh use the non-cash payment method, the exceptions being muge motor originating from outside Lhok Pawoh Village, muge lapak, and consumers who buy fish directly at the PPI. In addition, it was found that only 25% of muge motor pay cash, while the rest use non-cash payments.

ACKNOWLEDGMENT
The study was supported by the Ministry of Research, Technology and Higher Education of the Republic of Indonesia, DKP South Aceh District, and the fishermen and market operators at PPI Lhok Pawoh. We thank our colleagues from South Aceh Polytechnic, who provided insight and expertise that greatly assisted the research.
2020-05-28T09:12:06.638Z
2020-05-26T00:00:00.000
{ "year": 2020, "sha1": "95f6be37c63adbebed70b1483717a55351aeb8d5", "oa_license": null, "oa_url": "https://inotera.poltas.ac.id/index.php/inotera/article/download/102/84", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ddf40e0aad86e72b611cc666ad3c89833ab85452", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Business" ] }
235346855
pes2o/s2orc
v3-fos-license
Factors associated with gynaecological morbidities and treatment-seeking behaviour among adolescent girls residing in Bihar and Uttar Pradesh, India

Background
Gynaecological morbidities are more common than reproductive and contraceptive morbidities and constitute a substantial proportion of the disease burden in women. This study aimed to examine the prevalence of, and factors associated with, gynaecological morbidities and the treatment-seeking behaviour among adolescent girls residing in Bihar and Uttar Pradesh, India.

Methodology
The study utilized data from the Understanding the Lives of Adolescents and Young Adults (UDAYA) survey, with a sample of 14,625 adolescent girls aged 10-19 years. We defined gynaecological morbidity in dichotomous form, created from five questions on different morbidities. Treatment-seeking behaviour was assessed for gynaecological morbidities reported in the three months prior to the survey. Univariate and bivariate analyses were performed to obtain preliminary results. Additionally, the study employed the heckprobit selection model, a two-equation model, to identify the determinants of the outcome variables.

Results
Overall, about one-fourth (23.6%) of the adolescent girls reported suffering from gynaecological morbidities, and only one-third of them sought treatment. Non-Scheduled Caste/Scheduled Tribe (non-SC/ST) adolescents were significantly less likely to have gynaecological morbidities (β: -0.12; CI: -0.18, -0.06) than their SC/ST counterparts; however, they were more likely to seek treatment (β: 0.09; CI: 0.00, 0.19). Adolescents who had 8-9 years (β: 0.17; CI: 0.05, 0.29) or ten and above years of education (β: 0.21; CI: 0.09, 0.34) had a higher likelihood of seeking treatment than adolescents with no education. Moreover, adolescents from rural areas were less likely to seek treatment for gynaecological morbidities (β: -0.09; CI: -0.17, -0.01) than their urban counterparts.

Conclusion
Multi-pronged interventions are urgently needed to raise awareness of healthcare-seeking behaviour for gynaecological morbidities, especially in rural areas. Adolescent girls should be prioritized, as they may lack knowledge about gynaecological morbidities, and such morbidities may go unnoticed for years. Mobile clinics may be used to disseminate appropriate knowledge among adolescents and to screen asymptomatic adolescents for any possible gynaecological morbidity.

Introduction
Adolescence is a transition period of physical and psychological change from puberty to legal adulthood, covering individuals between the ages of 10 and 19 years [1]. Globally, more than 1.2 billion people are adolescents, meaning that one in every six persons is an adolescent; in absolute numbers (243 million), India is home to more adolescents than any other country [2]. The WHO defines reproductive morbidity as consisting of three types of morbidity: obstetric, contraceptive, and gynaecological; gynaecological morbidity includes any condition, disease, or dysfunction of the reproductive system that is not related to pregnancy, abortion, or childbirth but may be related to sexual behaviour [3]. Gynaecological morbidity symptoms include irregular menstrual patterns, white vaginal discharge, itching of the vulva, burning urination, and inguinal swelling [4]. Globally, gynaecological problems are significant contributors to morbidity and mortality, with the highest burden of disease borne by women in low-resource countries.
Gynaecological disease accounts for approximately 4.5 percent of the overall global disease burden, exceeding that of other major global conditions such as malaria, tuberculosis, ischaemic heart disease, and maternal conditions [5]. Menstruation is often a traumatic and very negative experience for young girls in most parts of India. Many traditional beliefs, misconceptions, and practices are associated with menstruation, making girls vulnerable to stress and depression as well as to reproductive problems [6]. Evidence from existing studies in India shows that a large proportion of girls suffer from various gynaecological morbidities [6]. A population-based cross-sectional study found that 15 percent of Indian adolescent girls suffer from some form of gynaecological morbidity, with prevalence varying by socio-demographic characteristics [7]. Heavy menstrual bleeding, dysmenorrhea, menstrual irregularities, and primary and secondary amenorrhea are common gynaecological problems among adolescent girls. Studies in Maharashtra and Bangladesh reported menstrual disorders, dysmenorrhea, per-vaginal discharge, and vulval itching as the common gynaecological problems among adolescent girls [8,9]. Despite being common problems during puberty and adolescence, they also run the risk of delayed diagnosis and treatment [10]. A youth survey from six Indian states reported low treatment-seeking for symptoms of reproductive tract infections (RTIs) by married and unmarried young women (15-24 years), with factors such as stigma, shame, and social isolation more likely to deter unmarried youth from seeking treatment for RTIs [11]. Studies indicate that treatment is delayed because most adolescents do not consider their reproductive health problems or pain serious, and seek treatment only when the pain becomes unbearable [12]. Another study among adolescents from Bangladesh reported that the reasons for not receiving treatment for gynaecological problems include lack of knowledge, economic hardship, shyness about being examined by a doctor, and a perception that the problems do not need treatment [13]. Seeking treatment for gynaecological morbidity is a complex process for adolescents; it depends mainly on the individual's comfort and familiarity with the service providers and on the accessibility of health services [14]. Despite the significant proportion of adolescents in India's population, studies have highlighted a lack of information on adolescents' sexual and reproductive health [15]. Existing studies have indicated that programs and policies on sexual and reproductive health should give special attention to young and adolescent girls in India [16,17]. Only a few studies in India have focused on adolescents' gynaecological morbidity and their treatment-seeking [11,18]. This paper contributes to the literature on the prevalence of gynaecological morbidity and the associated treatment-seeking behaviour, with a particular focus on adolescent girls. In our analysis, we apply the Heckman model approach to explore the socio-economic determinants of treatment-seeking behaviour. The advantage of the Heckman approach is that it improves the estimates by accounting for unobserved or unmeasured factors that may influence both the outcome (seeking treatment) and the selection (having any gynaecological disease) variable [19,20]. The objective of the study is to determine the factors associated with gynaecological morbidity and treatment-seeking behaviour among adolescents in Bihar and Uttar Pradesh.
Data
The authors used a secondary source of data collected by the Population Council, New Delhi, India. The Population Council Institutional Review Board provided ethical approval for the study. Adolescents provided individual written consent to participate in the study, along with a parent/guardian for unmarried adolescents younger than 18 years. The study utilized data from the Understanding the Lives of Adolescents and Young Adults (UDAYA) project survey conducted in two Indian states, Uttar Pradesh and Bihar, in 2016 by the Population Council under the guidance of the Ministry of Health and Family Welfare, Government of India [21]. The survey collected detailed information on family, media, community environment, assets acquired in adolescence, and indicators of the quality of transitions to young adulthood. The sample size was 10,350 adolescents aged 10-19 years in each of Uttar Pradesh and Bihar. The required sample for each sub-group of adolescents was set at 920 younger boys, 2,350 older boys, 630 younger girls, 3,750 older girls, and 2,700 married girls in both states. The effective sample size for this study was 14,625 adolescent girls aged 10-19 years. UDAYA adopted a multi-stage systematic sampling design to provide estimates for the states as a whole as well as for their urban and rural areas [21].

Outcome variables
The outcome variable was formed using the following questions: a) Have you experienced genital ulcers in the last three months? b) Have you experienced itching in the genitals in the last three months? c) Have you experienced swelling in the groin in the last three months? d) Have you experienced burning while passing urine in the last three months? e) Have you experienced white discharge in the last three months? The responses were coded as 0 for "no" and 1 for "yes." A variable named gynaecological morbidity was then generated from these five questions: if the respondent had experienced any of the issues above, it was coded as 1 ("yes"), and if she had experienced none of them, it was coded as 0 ("no"). Apart from this, treatment-seeking behaviour was assessed using the question "Did you seek treatment for this complaint?", with responses coded as 0 ("no") and 1 ("yes"). Hence both outcome variables were binary.
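As a concrete illustration of the coding just described, the short Python (pandas) sketch below derives the composite binary morbidity variable from the five symptom questions. The column names and example rows are hypothetical stand-ins, not UDAYA's actual variable names.

# Sketch of the outcome-variable construction described above. Column
# names and the toy rows are hypothetical; each symptom is 0 = "no", 1 = "yes".
import pandas as pd

symptoms = ["genital_ulcer", "genital_itching", "groin_swelling",
            "burning_urination", "white_discharge"]

df = pd.DataFrame({
    "genital_ulcer":     [0, 0, 1, 0],
    "genital_itching":   [0, 1, 0, 0],
    "groin_swelling":    [0, 0, 0, 0],
    "burning_urination": [0, 1, 0, 0],
    "white_discharge":   [0, 0, 1, 0],
    "sought_treatment":  [None, 1, 0, None],  # asked only of those with a complaint
})

# Gynaecological morbidity = 1 if any of the five symptoms was reported.
df["gyn_morbidity"] = (df[symptoms].sum(axis=1) > 0).astype(int)

# Treatment-seeking is observed only where gyn_morbidity == 1, which is
# exactly the selection structure the heckprobit model is designed to handle.
print(df[["gyn_morbidity", "sought_treatment"]])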
Predictor variables
The predictor variables were selected after an extensive literature review [4,6,7,11].

Individual variables
1. The sexually active variable was generated using "whether the respondent was married or not" and "whether or not she had sexual intercourse with her boyfriend"; if the response was yes in either case, she was coded as sexually active, 1 ("yes"), and otherwise as sexually inactive, 0 ("no").
2. Use of sanitary napkin was coded as "sanitary napkin," "cloth," and "others."
3. Toilet facility was coded as "own flush/pit," "shared flush/pit," and "no facility."
4. Age was coded as 10-12, 13-14, 15-17, and 18-19 years.

Household variables
1. The wealth index was coded as "poorest," "poorer," "middle," "richer," and "richest." The wealth status variable was created using information given in the survey: households were scored based on the number and kinds of consumer goods they own, ranging from a television to a bicycle or car, and on housing characteristics such as the source of drinking water, toilet facilities, and flooring materials. These scores were derived using principal component analysis. Wealth quintiles were compiled by assigning the household score to each usual (de jure) household member, ranking each person in the household population by their score, and then dividing the distribution into five equal categories, each with 20 percent of the population.
2. Caste was coded as "Scheduled Caste/Scheduled Tribe (SC/ST)" and "non-SC/ST." The Scheduled Castes include "untouchables," a group of the population that is segregated socially and financially/economically by their low status in the Hindu caste hierarchy. The Scheduled Castes (SCs) and Scheduled Tribes (STs) are among India's most disadvantaged socio-economic groups. The OBCs are the group of people identified as "educationally, economically, and socially backward"; they are considered low in the traditional caste hierarchy but are not considered untouchable [22].
3. Religion was coded as "Hindu" and "non-Hindu."
4. Residence was available in the data as "urban" and "rural."
5. The survey was conducted in two states, "Uttar Pradesh" and "Bihar."

Statistical analysis
Univariate and bivariate analyses were performed to obtain preliminary results. Additionally, the study employed the heckprobit selection model, which is a two-equation model. First, there is a selection model (in this study, "Did the respondent have any gynaecological morbidity in the last three months? (yes or no)"). Second, there is an outcome model with a binary outcome (in this study, "Did the respondent seek treatment for it? (yes or no)"). The model provides a two-step analysis and deals with the zero-sample issue. It can accommodate the heterogeneity (i.e., shared unobserved factors) between respondents and thereby address the endogeneity (between the occurrence of gynaecological morbidity and opting for its treatment) among adolescents. The Heckman model is identified when the same independent variables in the selection equation appear in the outcome equation [23]. However, this does not provide precise estimates in the outcome equation because of high multicollinearity; it is therefore suggested to have at least one independent variable in the selection equation that is not in the outcome equation. A p-value of less than 0.05 was considered statistically significant. The probit model with sample selection assumes an underlying latent relationship, $y_j^{*} = x_j\beta + u_{1j}$, of which we observe only the binary outcome $y_j^{\text{probit}} = (y_j^{*} > 0)$. The dependent variable, however, is not always observed; instead, it is observed for observation $j$ only if $y_j^{\text{select}} = (z_j\gamma + u_{2j} > 0) = 1$, where $u_1 \sim N(0,1)$, $u_2 \sim N(0,1)$, and $\text{corr}(u_1, u_2) = \rho$. When ρ ≠ 0, standard probit techniques applied to the first equation yield biased results; heckprobit provides consistent, asymptotically efficient estimates for all the parameters of such models. For the model to be well identified, the selection equation should have at least one variable that is not in the probit equation; otherwise, the model is identified only by functional form, and the coefficients have no structural interpretation [23]. Additionally, the svyset command was used to adjust for the complex design of the survey, which includes clustering and stratum effects, and the analysis was carried out after applying the survey weights available in the dataset. Moreover, the Variance Inflation Factor (VIF) was estimated to check for multicollinearity [24], and no multicollinearity was found among the variables. The Wald chi-square test was used to assess the goodness of fit of the heckprobit model [23]; a minimal numerical sketch of this selection likelihood follows.
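For readers who want to see the likelihood behind the heckprobit estimates, here is a minimal numerical sketch in Python. It implements the standard log-likelihood of a probit model with sample selection (bivariate-normal errors with correlation ρ), not the Stata heckprobit internals or the survey-weighted estimator actually used; the variable names and toy data are illustrative.

# Minimal sketch of the probit-with-sample-selection log-likelihood.
# y: binary outcome (treatment sought), observed only where s == 1;
# s: binary selection (any gynaecological morbidity reported).
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize

def heckprobit_negll(params, X, Z, y, s):
    kx, kz = X.shape[1], Z.shape[1]
    beta, gamma = params[:kx], params[kx:kx + kz]
    rho = np.tanh(params[-1])                     # keeps rho in (-1, 1)
    xb, zg = X @ beta, Z @ gamma

    ll = 0.0
    for i in range(len(s)):
        if s[i] == 0:                             # not selected: P(s=0) = Phi(-z*gamma)
            ll += norm.logcdf(-zg[i])
        else:                                     # selected: bivariate-normal CDF
            sign = 1.0 if y[i] == 1 else -1.0
            bvn = multivariate_normal([0.0, 0.0],
                                      [[1.0, sign * rho], [sign * rho, 1.0]])
            ll += np.log(max(bvn.cdf([sign * xb[i], zg[i]]), 1e-300))
    return -ll

# Toy data; in practice X and Z come from the predictor variables above,
# with at least one variable in Z excluded from X for identification.
rng = np.random.default_rng(0)
n = 200
Z = np.column_stack([np.ones(n), rng.normal(size=n)])
X = np.ones((n, 1))
s = (Z @ np.array([0.2, 0.8]) + rng.normal(size=n) > 0).astype(int)
y = ((rng.random(n) < 0.4).astype(int)) * s

fit = minimize(heckprobit_negll, x0=np.zeros(X.shape[1] + Z.shape[1] + 1),
               args=(X, Z, y, s), method="Nelder-Mead")
print(fit.x)  # [beta, gamma, atanh(rho)]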
In STATA 14, we used the rvfplot command to check for heteroskedasticity, and none was found [25].

Results
Fig 1 displays the different types of gynaecological morbidities among adolescents aged 10-19 years. About 16 percent of adolescents suffered from white discharge/urethral discharge, followed by burning while passing urine (10.7%) and itching in the genitals (7.4%). The socio-demographic profile of adolescents aged 10-19 years is presented in Table 1. Around 37 percent of adolescents were sexually active, and half of the adolescents used sanitary napkins. Interestingly, three-fifths of adolescents did not have toilet facilities, and most were in the 15-17 and 18-19 year age groups. Nearly one-third of adolescents had ten or more years of education, 16.7 percent were working, and about half used media frequently. A higher proportion of adolescents were Hindu (78.5%) and belonged to rural areas (83.9%). Gynaecological morbidities among adolescents and their treatment-seeking behaviour are presented in Table 2. Overall, about one-fourth of the adolescents reported gynaecological morbidities, and one-third of them sought treatment. Nearly one-third of sexually active adolescents suffered from gynaecological morbidities, and morbidity was higher among adolescents who used sanitary napkins (26.8%). Interestingly, gynaecological morbidities were significantly lower among adolescents who did not have toilet facilities than among those who used toilet facilities. Gynaecological morbidities and treatment-seeking for them were positively associated with the age of the adolescents: as age increased, both the reporting of gynaecological morbidities and treatment-seeking increased. Adolescents with no education (28.6%) reported more gynaecological morbidities, while adolescents with ten or more years of education (34.2%) sought treatment more often. Gynaecological morbidities were significantly higher among working adolescents (27.4%) than among those not working (22.9%). The reporting of gynaecological morbidities was higher among those who rarely had media exposure (26.3%), whereas treatment-seeking was more common among those who used mass media frequently (35%). As expected, the richest adolescents (37.2%) sought treatment more often than the poorest (20.7%). Adolescents belonging to the SC/ST group (24.9%) reported significantly more gynaecological morbidities than non-SC/ST adolescents (23.2%); the pattern was reversed for treatment-seeking (25.4% vs. 34.2%). A higher proportion of adolescents in urban areas (35.5%) sought treatment for gynaecological morbidities than their rural counterparts (31.1%). Estimates from the heckprobit model for gynaecological morbidities and treatment-seeking behaviour among adolescents are presented in Table 3. The model fit well, as the Wald chi-square statistic was statistically significant (65.24; p<0.05). Sexually active adolescents were significantly more likely to suffer from gynaecological morbidities (β: 0.38; CI: 0.32-0.44) than those who were not sexually active. Gynaecological morbidities were significantly less likely among adolescents who used cloth (β: -0.10) or other materials (β: -0.38) compared to those who used sanitary napkins.
Adolescents aged 15-17 (β: 0.28; CI: 0.09, 0.47) and 18-19 years (β: 0.36; CI: 0.17, 0.56) were significantly more likely to have gynaecological morbidities than adolescents in the 10-12 year age group. Moreover, adolescents in the 15-17 and 18-19 year age groups were significantly less likely to seek treatment for gynaecological morbidities (coefficients of -0.53 and -0.47, respectively) than 10-12-year-old adolescents. On the other hand, adolescents with 8-9 years (β: 0.17; CI: 0.05, 0.29) or ten or more years (β: 0.21; CI: 0.09, 0.34) of education were significantly more likely to seek treatment than those with no education. Non-SC/ST adolescents were significantly less likely to have gynaecological morbidities (β: -0.12; CI: -0.18, -0.06) than their SC/ST counterparts; however, the same group was more likely to seek treatment for gynaecological morbidities (β: 0.09; CI: 0.00, 0.19). Moreover, adolescents from rural areas were less likely to seek treatment for gynaecological morbidities (β: -0.09; CI: -0.17, -0.01) than their urban counterparts.

Discussion
This study examined gynaecological morbidities among adolescent girls aged 10-19 years and subsequent treatment for those morbidities. The results corroborate previously available literature on the risk factors for self-reported gynaecological morbidities and subsequent treatment. For instance, our finding of an increased risk of gynaecological morbidities among sexually active adolescent girls has been reported in various previous studies [26,27]. Similarly, as in our study, several studies have reported a strong association between the use of shared toilets and a high prevalence of gynaecological morbidities among adolescent girls [28]. Further, the marked association between increasing age and higher gynaecological morbidities among adolescents is also documented in previous studies [29]. The study has several other significant findings. Gynaecological morbidities were higher among working adolescents, SC/ST adolescents, non-Hindu adolescents, and adolescents in Uttar Pradesh. Furthermore, treatment for gynaecological morbidities was higher among educated adolescents, non-SC/ST adolescents, and adolescents in urban areas. The study noted that around one-fourth of the adolescent girls (23.6%) reported at least one of the five gynaecological morbidities. Genital ulcer was the least reported, and white discharge/urethral discharge was reported by around 16 percent of the adolescents. The prevalence of the various gynaecological morbidities found in this study was nearly the same as measured in previous studies in different settings in India [30][31][32][33]. Sexual activity was found to be strongly associated with gynaecological morbidities among adolescents; previous studies also noticed a high level of gynaecological morbidities among the sexually active [34,35]. This study deviates from previous studies, which reported an association between the use of sanitary napkins and a low level of gynaecological morbidities [36,37]. We are not sure how this association arises, as we could not find any relevant literature; however, it could be presumed that the accumulation of blood in the genital area for a prolonged period may be a risk factor.
For reasons such as the high cost of sanitary napkins, an adolescent girl may keep using a sanitary napkin for longer than recommended; this may explain why the association in our study ran the other way. A study in the Kenyan setting also noticed various factors associated with prolonged use of a sanitary napkin and assumed that such prolonged use may lead to the accumulation of blood in the genital area, which may in turn contribute to gynaecological morbidities [37]. Gynaecological morbidities were higher among adolescents who shared toilets than among those who did not. One study reported higher gynaecological morbidities among those sharing toilets than among those who do not [36]; sharing toilet seats may be a factor associated with high gynaecological morbidities [38]. Increasing age was another factor associated with higher gynaecological morbidities among adolescents. Dheresa et al., in their systematic review, also noticed the association between age and gynaecological morbidities [29]. With increasing age, adolescent girls may encounter many risk factors for gynaecological morbidities, such as the onset of sexual life, which may explain the higher morbidity. Moreover, gynaecological morbidities undiagnosed at an earlier age may be diagnosed later, raising the prevalence of gynaecological morbidities at later ages. Our finding that treatment-seeking for gynaecological morbidities declines with age is the opposite of what Savarkar observed in his study [39], in which increasing age, signifying greater maturity, was linked to higher treatment-seeking for gynaecological morbidities. Previous studies have highlighted the importance of education in reducing gynaecological morbidities among adolescents [29]; however, this study failed to find a significant association between education and gynaecological morbidities. Higher education should, in principle, lead to lower reporting of gynaecological morbidities, probably because educated girls have better knowledge of menstrual health, reducing the chances of such morbidities [40]. Despite failing to link education and gynaecological morbidities among adolescents, the study did conclude that treatment-seeking for gynaecological morbidities was significantly higher among educated adolescents than among their counterparts. Scholars have unanimously agreed on the association between higher education and higher levels of treatment-seeking for gynaecological morbidities [41]: educated girls are well informed about the consequences of gynaecological morbidities and therefore seek treatment. The 'culture of silence' associated with gynaecological problems often prevents participants from discussing their problems openly [42]. Females generally feel too shy or ashamed to discuss gynaecological problems with others [35], and often ignore the symptoms of gynaecological problems, perceiving them as not-so-serious health issues [35]. Being 'self-limiting' about the problem is the main reason for not seeking any healthcare [43]. Working status was another factor associated with gynaecological morbidities among adolescents in this study; however, the lower treatment-seeking for gynaecological morbidities among working adolescents was not statistically significant.
Previous studies have also highlighted that working women are more likely to suffer from gynaecological morbidities [44]. Working adolescents may find themselves busy with their work, so personal hygiene and care may be neglected; busy schedules could thus be a reason for high gynaecological morbidities among them. Although a previous study noted that urban girls have better menstrual hygiene practices than rural girls [45], this study found no association between the reporting of gynaecological morbidities among adolescent girls and their place of residence. However, this study found that treatment for gynaecological morbidities was lower among adolescent girls in rural areas than in urban areas, in line with previous studies reporting lower treatment-seeking for gynaecological morbidities among rural girls [46]. In rural areas, stigma related to gynaecological morbidities may be one reason for the lower treatment rates among adolescents [47]; in addition, healthcare services may be too far from home [47], and most married women and adolescent girls do not seek treatment because they do not feel that treatment is needed [46]. The study has several potential limitations. Foremost, gynaecological morbidities were self-reported by the respondents, and previous studies have noted differences between self-reported gynaecological morbidities and those diagnosed through clinical examination [48]. We therefore assume some underreporting of gynaecological morbidities in this study; however, morbidity was measured with a set of five questions, so the underreporting may not be substantial. Another limitation is the recall period: our study captured gynaecological morbidities for the three months preceding the survey date. The study sample covers only two states in India, and therefore the implications may differ for the wider population. Despite these limitations, this study contributes to a better understanding of gynaecological morbidities and their treatment-seeking among adolescents.

Conclusion
Several studies have previously examined menstrual hygiene among adolescents in various Indian settings; however, minimal scholarship exists on the prevalence of, and factors associated with, gynaecological morbidities and the subsequent treatment among adolescents. This study has several significant findings of importance from a policy perspective. Addressing gynaecological morbidities among adolescent girls is a complex process, as adolescents either do not consider them a significant health problem or hesitate to discuss them. Multi-pronged interventions are urgently needed to raise awareness of healthcare-seeking behaviour for gynaecological morbidities, especially in rural areas. Adolescent girls should be prioritized, as they may lack knowledge about gynaecological morbidities, and such morbidities may go unnoticed for years. Mobile clinics may be the right approach, as they also have an educational outreach component [49]; they may be used to disseminate appropriate knowledge among adolescents and to screen asymptomatic adolescents for any possible gynaecological morbidity.
2021-06-06T06:16:36.218Z
2021-06-04T00:00:00.000
{ "year": 2021, "sha1": "74abb0e3127992dfd1b38f4ba0e204e54d4aa97c", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0252521&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "412e06559beb75cf37cd1d6e97975649fd1f29b5", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
3292279
pes2o/s2orc
v3-fos-license
A functional variant in the OAS1 gene is associated with Sjögren's syndrome complicated with HBV infection

Hepatitis B virus (HBV) has been suspected to contribute to several autoimmune diseases, including Sjögren's syndrome (SS), although the exact mechanism is unknown. The 2′–5′ oligoadenylate synthetase 1 (OAS1) is one of the most important components of the immune system and has significant antiviral functions. We studied the polymorphism rs10774671 of the OAS1 gene in individuals of Han Chinese descent. The minor allele G was significantly associated with a decreased risk for SS, anti-SSA-positive SS, and anti-SSA-positive SS complicated with HBV infection; these associations were not seen in anti-SSA-negative SS and HBcAb-negative SS patients. Gene expression analysis showed that the risk-conferring A allele was correlated with lower expression of p46 and increased expression of p42, p48, and p44. A functional study of enzymatic activities revealed that the p42, p44, and p48 isoforms display a reduced capacity to inhibit HBV replication in HepG2 cells compared to the normal p46 isoform. Our data demonstrate that the functional variant rs10774671 is associated with HBV infection and anti-SSA antibody-positive SS. The SAS variant switches the primary p46 isoform to three alternatives with a decreased capacity to inhibit HBV replication. These data indicate that individuals harboring the risk allele might be susceptible to hepatitis B infection and SS development.

OAS1 plays an important role in the inhibition of viral replication 14,15. Four OAS1 isoforms have been identified 16. A single-nucleotide polymorphism (SNP), rs10774671, at the intron 5 splice acceptor site (SAS) of the OAS1 gene affects the production of the various OAS1 isoforms 17. The G reference allele produces OAS1 transcript variant 1 (TV1), which results in the production of the OAS1 p46 isoform; the alternate A allele changes the SAS, resulting in the loss of the canonical splice acceptor site, and produces three transcript variants, TV2 (p42), TV3 (p48), and TV4 (p44), which encode different isoforms with altered enzymatic activities 14,16,17. The SAS variant rs10774671 has been associated with multiple autoimmune diseases, including type 1 diabetes (T1D) [18][19][20] and multiple sclerosis (MS) 21,22. A recent large-scale association study showed a significant association of a variant at the OAS1 locus with SS in a European population 23,24. By comparing 835 T1D patients and 401 healthy siblings (subjects collected from 574 families of Danish, Canadian, and American heritage), the G allele of rs10774671 was found to significantly increase the risk of developing T1D 18. In MS, contradictory results have been reported: a Spanish study showed that the G allele is associated with increased risk for MS 21, whereas an Irish study showed that the G allele plays a protective role 25. These data suggest that rs10774671 has distinct effects across autoimmune diseases and ancestries 26. To date, there is only limited information on the SAS variant of the OAS1 gene and susceptibility to SS 23; a population-based case-control study evaluating the association between SS and the OAS1 gene is therefore critically needed. Hepatitis B virus (HBV) infection is highly prevalent in human populations: of the 350 million individuals worldwide infected with HBV, 33% reside in China 27.
Although there is accumulating evidence indicating that hepatitis C virus (HCV) infection contributes to the etiology of SS 28,29, there are only a limited number of reports on the nature of the association between SS and HBV infection [30][31][32]. Two large cohort studies have suggested that HBV infection might be more prevalent in patients with SS than in the general population 30,31. However, the pathogenic role of HBV in SS-susceptible individuals has not been well characterized. Between 2005 and 2016, we established a cohort of 588 patients with SS and 1455 healthy controls to study the genetic factors that influence the risk for SS. To evaluate the pathogenic role of HBV in triggering SS in susceptible individuals, we analyzed the clinical information of 368 anti-SSA-positive SS patients for HBV infection by screening their anti-HBc antibody status. Of the 68 SS patients with available test results, 30 had been previously infected by HBV (HBcAb+) and 38 were negative for HBcAb. We therefore set out to evaluate the genetic association of the SAS variant of the OAS1 gene in a case-control study complicated by HBV infection.

Results
The rs10774671 SNP is associated with SS and anti-SSA-positive SS. The demographics of the 588 cases and 1455 independent controls enrolled in this study are shown in Supplementary Table 1. There were no significant differences between the case and control subjects in terms of mean age or gender distribution. All cases and controls in the analysis were self-reported Han Chinese. As shown in Table 1, single-marker association testing was performed using logistic regression under multiple models. We observed an association of variant rs10774671 with SS (P = 0.039 for an additive model, P = 0.01 for a dominant model, and P = 2.9 × 10^-4 for a recessive model), with an effect size of odds ratio (OR) = 0.85. Given that anti-SSA autoantibody data were available for 429 SS patients, we further stratified the SS patients by anti-SSA autoantibody status. We observed a significant association of rs10774671 with anti-SSA-positive SS (P = 8.0 × 10^-5), with an effect size of OR = 0.70; the association was not seen in anti-SSA-negative SS (P = 0.1725) (Table 1). These results demonstrate, for the first time, an association of OAS1 with SS and with anti-SSA-positive SS in Han Chinese. The G allele of rs10774671 plays a protective role, and the A allele confers risk for SS. In our cohort, the frequencies of the risk allele A were 0.71 in controls and 0.78 in SS cases. A similar pattern of association was observed in SS patients of European ancestry, in whom the frequencies of the risk allele A were 0.65 in controls and 0.72 in SS cases 23,24. These findings suggest that this genetic risk factor could influence the risk of SS in both European and Asian populations. The variant rs10774671 is associated with anti-SSA-positive SS complicated by HBV infection, with a reduced OR. To evaluate the role of rs10774671 in the pathogenesis of anti-SSA-positive SS complicated with HBV infection, we classified the anti-SSA-positive SS patients into two groups: anti-SSA+/HBcAb+ (n = 30) and anti-SSA+/HBcAb− (n = 38). Comparisons were made between anti-SSA+/HBcAb+ SS, anti-SSA+/HBcAb− SS, and controls. We performed an association study using Fisher's exact test to account for the small sample sizes; a worked illustration of this type of allele-count analysis is sketched below.
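As a worked illustration of the kind of allele-count analysis reported here, the Python sketch below reconstructs an approximate 2×2 allele table from the cohort-wide frequencies quoted above (risk allele A: 0.78 in 588 cases, 0.71 in 1,455 controls) and applies Fisher's exact test. The counts are back-calculated approximations for illustration, so the resulting allelic OR (~0.69) will not exactly reproduce the model-based ORs in Table 1.

# Illustrative allele-count 2x2 analysis. Counts are reconstructed from
# the reported allele frequencies and cohort sizes, so they are
# approximations, not the authors' exact genotype tables.
from scipy.stats import fisher_exact

n_cases, n_ctrls = 588, 1455
cases_G = round(0.22 * 2 * n_cases)   # ~259 protective G alleles in cases
cases_A = round(0.78 * 2 * n_cases)   # ~917 risk A alleles in cases
ctrls_G = round(0.29 * 2 * n_ctrls)   # ~844 G alleles in controls
ctrls_A = round(0.71 * 2 * n_ctrls)   # ~2066 A alleles in controls

table = [[cases_G, cases_A],
         [ctrls_G, ctrls_A]]
odds_ratio, p_value = fisher_exact(table)
print(f"allelic OR for G ~ {odds_ratio:.2f}, P = {p_value:.1e}")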
As shown in Table 2, the protective allele G of the rs10774671 SNP was significantly associated with anti-SSA+/HBcAb+ SS (P = 0.0059), with an effect size of OR = 0.36. Interestingly, this association was not seen in the anti-SSA+/HBcAb− SS samples (P = 0.3746, OR = 1.24), even though the anti-SSA+/HBcAb− SS group (n = 38) was larger than the anti-SSA+/HBcAb+ SS group (n = 30). Our data showed that the protective allele G was less common in the anti-SSA+/HBcAb+ SS group (G allele frequency = 0.13), and the alternate risk allele A was strongly enriched in this group (A allele frequency = 0.87). These results demonstrate that the OAS1 rs10774671 A allele confers risk for SS patients with HBV infection compared to SS patients without HBV infection. In contrast to the effect sizes of the association signals of rs10774671 in SS patients (OR = 0.85) and in anti-SSA-positive SS (OR = 0.70), the OR in HBV-infected SS patients was decreased (OR = 0.36). These data suggest that the A allele of the genetic variant rs10774671 contributes to the risk for SS, and that HBV infection might therefore play a pathogenic role in individuals genetically predisposed at the OAS1 locus to develop SS.

OAS1 gene expression is significantly increased in anti-SSA-positive patients. Our data demonstrated a significant association between the variant rs10774671 and SS in Han Chinese. To begin to evaluate the role of rs10774671 in SS pathogenesis, a bioinformatic analysis using HaploReg (v4.1) 33,34 was performed to identify variants in linkage disequilibrium (LD) with rs10774671. As shown in Supplementary Table 2 and Supplementary Figure 2, 98 variants in Asian individuals spanning the OAS1-OAS3 locus were correlated with rs10774671, with an r^2 ≥ 0.99. These data suggest that an rs10774671-tagged haplotype spanning the OAS1-OAS3 locus and carrying 99 genetic variants is associated with SS. We therefore evaluated the role of rs10774671 in regulating gene expression of OAS1 and OAS3 in peripheral blood mononuclear cells (PBMCs) from patients with SS (n = 54) and from healthy controls (n = 104) by RT-qPCR assays, according to the methods in our previous studies [35][36][37][38]. In both healthy individuals and SS cases, the genotypes of rs10774671 did not influence the total expression of OAS1 and OAS3.

The major allele A of rs10774671 leads to an increased risk for SS by regulating pre-mRNA splicing of the OAS1 gene. Given that the risk-conferring A allele of rs10774671 is a splice acceptor site variant located at the junction between intron 5 and exon 6 and may switch the primary normal isoform to various alternatives, we assessed whether the alternate A allele influences the expression of the various OAS1 isoforms in PBMCs and EBV-transformed B cell lines from patients with SS. Primers that specifically detect the different transcript variants (TVs) of OAS1 were designed and are listed in Supplementary Table 3. Quantitative PCR analyses were performed to detect the expression of specific TVs of OAS1. As shown in Fig. 2, switching the reference allele G to the risk-conferring allele A results in reduced expression of the TV1 (p46) isoform and increased expression levels of TV2, TV3, and TV4. To further confirm this finding, we replicated the gene expression analyses in PBMCs obtained from 20 patients with SS.
As shown in Supplementary Figure 5, we consistently observed that the risk-conferring allele A reduces expression of the TV1 (p46) isoform and increases the expression levels of the three alternatives.

The risk allele A of the rs10774671 SNP shows reduced activity in the clearance of HBV infection. Previous studies on the enzymatic activities of the various OAS1 isoforms in the clearance of hepatitis C virus 39,40 and West Nile virus 41 showed that the p46 isoform is significantly more active than the TV3 (p48) isoform. Additionally, OAS1 is known to play a critical role in restricting HBV infection and replication 42. In this study, we observed that the rs10774671 SNP is significantly associated with SS complicated with HBV infection. We therefore hypothesized that the risk-conferring allele A, which is associated with reduced OAS1 activity, is inefficient in the clearance of HBV in SS-susceptible individuals and influences the risk for SS with HBV infection. To test this hypothesis, we co-transfected HepG2 cells with an HBV-producing plasmid and OAS1 isoform-expressing vectors. Six days post-transfection, we determined the expression levels of the OAS1 isoforms and the hepatitis B core protein in HepG2 cells by western blotting and measured the HBV DNA by RT-qPCR. As shown in Fig. 3, the various OAS1 isoforms were expressed evenly; however, the TV2, TV3, and TV4 isoforms showed a significantly lower capacity than TV1 (p46) to restrict HBc protein expression in the cells (Fig. 3). To evaluate the activities of OAS1 in restricting the release of HBV surface antigen from HepG2 cells, we measured the levels of HBsAg in the culture medium by ELISA. As shown in Fig. 3C, the levels of HBsAg in the culture media of TV2-, TV3-, and TV4-expressing cells were significantly higher than those of TV1-expressing cells. Our data demonstrate that the A allele-associated isoforms have significantly lower activity in inhibiting HBV replication than TV1 (p46), indicating that individuals harboring the risk allele might be susceptible to hepatitis B infection and SS development.

Discussion
Little is known about the etiology and pathogenesis of SS, partially owing to the complexity and heterogeneity of the disease mechanisms 1,29. Dysregulation of IFN signaling pathways has been observed in patients with SS, making the IFN-inducible gene OAS1 a good candidate risk gene for SS 9,12. A recent large-scale genetic association study demonstrated a significant association of a variant at the OAS1 gene with SS in a European population 24. In addition to the genetic associations of OAS1 with T1D 18-20, MS 21,22, and SS in Europeans, we demonstrated a genetic association of rs10774671 of the OAS1 gene with SS (OR = 0.85) in individuals of Han Chinese descent, especially with anti-SSA autoantibody-positive SS (OR = 0.70). Among SS patients who tested positive for the hepatitis B core protein antibody, the OR of the association was further reduced, to 0.36 (Fig. 4). The risk allele A is strongly enriched in anti-SSA+/HBcAb+ SS patients; interestingly, the genetic association was not seen in anti-SSA+/HBcAb-negative SS patients. Further functional studies revealed that the risk A allele changes a consensus sequence of the splice acceptor site at intron 5 of OAS1, resulting in the production of three alternative isoforms with a reduced capacity to inhibit HBV replication in HepG2 cells.
The data from these functional studies are consistent with our findings in the genetic association study. The association of SS with HCV has been the subject of intense debate over the past decades. In 1992, researchers found the first histological evidence of SS in 16 of 28 patients with chronic HCV infection 43, and numerous studies since then have shown significant associations of SS with HCV. The association of SS with HBV, however, has not been established, although both viruses are highly prevalent. Interestingly, a number of case reports demonstrating that SS occurs after hepatitis B vaccination suggest a role of HBV in SS pathogenesis 44. A recent study on the prevalence of HBV infection in patients with SS (603 cases) in Spain showed a slight increase in the percentage of HBV infection in SS patients (HBsAg+ 0.83%) compared to the population controls (HBsAg+ 0.7%) 31. Additionally, a study of SS patients from Taiwan (9,629 cases) showed that HBV infection was more frequent in SS patients (4.3%) than in the healthy population (3.6%, 38,516 controls) 30. Our data indicate an increased incidence of HBV in SS patients carrying the A allele. In addition to HBV, viruses such as HIV, HTLV-1, and HDV have also been shown to influence the risk for SS. The patients in our cohort, however, are HIV- and HDV-negative. Serological data for HTLV-1 are not available in our cohort; infections with other SS-associated viruses should therefore be considered, as they might correlate with the risk variant of the OAS1 gene. The SNP rs10774671 at intron 5 of the OAS1 gene affects pre-mRNA splicing and the enzymatic activities of the various OAS1 isoforms, which have been intensively studied in the context of viral infections, including HCV. However, the expression profiles of the OAS1 isoforms in SS patients and the activities of the different isoforms in controlling HBV replication remained to be elucidated. Our findings in the current study reveal a molecular mechanism by which rs10774671 impacts the pre-mRNA splicing of OAS1 in patients with SS, similar to previous findings in other tissues and diseases 17,19. Failure to clear the virus might lead to a chronic infection that drives the sustained overexpression of IFNs, which is associated with increased risk of SS 24,45; on the other hand, viral proteins may also indirectly cause IFN production through adaptive immune responses 45,46. Further mechanistic studies of whether hepatitis B viral proteins directly influence salivary gland function in vitro, together with animal models, are fundamental to understanding the causal link between HBV and the development of Sjögren's syndrome. In summary, we demonstrated a genetic association of the SNP rs10774671 in the OAS1 gene with SS, anti-SSA-positive SS, and HBV-positive SS patients, with an enhanced effect size. A functional study of the risk variant demonstrated that the SS-associated risk allele A leads to decreased expression of the normal OAS1 isoform (TV1, p46) and results in the expression of alternative isoforms with reduced activity in inhibiting HBV replication in HepG2 cells. These findings provide significant insight into how the risk variant might reduce the ability to clear HBV, leading to chronic infection and constitutive activation of IFN signaling, and thereby influence the risk for SS in both Asian and European populations.

Materials and Methods
Patients and samples.
[Figure legend: The isoforms OAS1_V1 and OAS1_V3 were amplified with the same pair of primers and resolved on the same gel. The isoforms OAS1_V2 and OAS1_V4 were amplified with two pairs of primers designed specifically for V2 and V4 and resolved separately. The GAPDH amplicon from each sample was used as a loading control.]

Patients were recruited in China. All SS patients were diagnosed according to the standards defined by the criteria of the American-European Consensus Group in 2002 and were enrolled in this study (Table 1). There were no sex or age restrictions (Supplementary Table 1). Four hundred twenty-nine patients were tested for anti-SSA and anti-SSB antibodies.

Patients with SS complicated with hepatitis virus infections. Viruses including HIV, HTLV-1, HCV, and HDV have been associated with an increased risk for SS. We checked the available serological data in our cohort and found that seventy-six SS patients had been tested for HCV infection, and only one patient was positive. All subjects, including cases and controls, were negative for HIV and HDV. HIV and HDV antibodies were detected with the Abbott HIVAB HIV-1/2 (rDNA) EIA kit and the Cusabio HDV IgG ELISA kit, respectively (Abbott, Tokyo, Japan; Cusabio, Guangzhou, China), according to the manufacturers' protocols. For HTLV-1, no serological information was available. Of the 368 anti-SSA antibody-positive SS cases, 68 patients underwent serological testing for HBsAb and HBcAb. Given that HBV vaccination is routinely administered in China, a positive HBsAb test could not distinguish HBV infection from vaccination. Therefore, we classified the 68 patients with serological test results into two groups based on their anti-HBc status. Patients who were positive for anti-HBc had demonstrated exposure to HBV; because anti-HBc can persist for life, these patients either had an active hepatitis B infection or had had hepatitis B in the past. Additionally, the patients with anti-HBc antibodies were also tested for the HBV core protein; all were negative, suggesting no active infection. As shown in Table 2, 30 anti-SSA-positive SS patients were positive for HBcAb, and 38 anti-SSA-positive SS patients were negative for HBcAb.

SNP genotyping. Genomic DNA was extracted from PBMCs using a DNA-Beads-400 DNA extraction kit (Zhiang Biotech, Changchun, China), according to the manufacturer's instructions. The PCR primers were designed as listed in Supplementary Table 3. Genomic DNA from each sample was amplified, and the genotype at rs10774671 for each sample was determined using a TaqMan SNP genotyping assay (Thermo Fisher Scientific Inc., Beijing, China) on an Applied Biosystems OpenArray real-time PCR instrument. In addition to the TaqMan SNP genotyping assays, several PCR products were randomly selected and subjected to Sanger sequencing to confirm the results (Supplementary Figure 1).

RNA isolation and quantitative RT-PCR. Total RNA from PBMCs was isolated using TRIzol (Invitrogen Inc., Carlsbad, CA, USA) according to the manufacturer's instructions. The concentrations of total RNA were determined by NanoDrop, and samples were diluted with 10 ng/μL of MS2-RNA (Hoffmann-La Roche, Inc., Nutley, NJ, USA) to a final concentration of 100 ng/μL. cDNA from each individual was synthesized using iScript cDNA Synthesis Kits (Bio-Rad Laboratories, Inc., Hercules, CA, USA). Quantitative RT-PCR was performed to determine the mRNA expression levels of the four OAS1 isoforms; the human GAPDH gene was used as a control (see the sketch below).
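The relative isoform expression implied by this design is conventionally computed with the 2^-ddCt (Livak) method. The following is a minimal sketch of that calculation, assuming the qPCR data are summarized as Ct values; all numbers are hypothetical placeholders, not data from this study.

def delta_delta_ct(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Return relative expression (fold change) by the 2^-ddCt method."""
    d_ct_sample = ct_target - ct_gapdh        # normalize target to GAPDH in the sample
    d_ct_ref = ct_target_ref - ct_gapdh_ref   # same normalization in the reference group
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Hypothetical example: OAS1 TV1 in an AA-genotype sample vs. a GG-genotype reference.
fold = delta_delta_ct(ct_target=26.1, ct_gapdh=18.0,
                      ct_target_ref=24.3, ct_gapdh_ref=18.2)
print(f"TV1 relative expression (AA vs GG): {fold:.2f}-fold")  # -> 0.25-fold, i.e. reduced TV1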
Statistical analysis. Single-marker associations were assessed using the logistic regression function in Plink, version 1.09. The Hardy-Weinberg proportion test P value of rs10774671 in the controls was greater than 0.01. For the associations between rs10774671 and SS and anti-SSA-positive SS, multiple models were used. The associations of rs10774671 in anti-SSA-positive SS complicated with HBV were assessed using Fisher's exact test, given the small sample size available for this particular analysis. ORs and 95% confidence intervals (CIs) were calculated to assess the relative risk conferred by allele and genotype (see the sketch at the end of this section). The comparisons of the mRNA expression levels of OAS1 and the different isoforms in PBMCs and EBV-transformed B cell lines between different genotypes were performed using a non-parametric Kruskal-Wallis test with correction for multiple comparisons. A P value less than 0.05 was considered statistically significant.

Molecular cloning of OAS1. To amplify DNA segments encoding the different isoforms of OAS1, we designed a group of primers for initial and nested PCRs. A human B cell cDNA library was used as the template, and PCR was performed with high-fidelity Phusion polymerase (Thermo Fisher Scientific Inc., Beijing, China). The PCR products were cloned into the pBluescript-KS vector. Plasmid DNAs were purified from E. coli clones using a DNA mini-prep kit (Thermo Fisher Scientific Inc., Beijing, China). The correct construct was selected based on restriction enzyme digestions, and DNA sequencing analyses with T7 and T3 primers then verified the entire sequence of the DNA insert. DNA segments encoding the four isoforms of OAS1 were then sub-cloned into the pCDNA3.1 expression vector for the follow-up functional studies.

Anti-HBV activity assays. HepG2 cells were maintained in complete Dulbecco's Modified Eagle's medium (DMEM, Gibco-BRL, CA) containing 10% fetal bovine serum (FBS, Hyclone, Fisher, PA), 100 units/mL of penicillin, and 100 μg/mL of streptomycin in a humidified incubator with 5% CO2 at 37 °C. HepG2 cells were co-transfected with expression vectors encoding the various isoforms of OAS1 along with the HBV-producing plasmid pCMVayw-HBV. The culture medium was replaced every three days. Six days after transfection, the cells were harvested for HBV core protein assays. The supernatants were harvested for detection of HBV antigens by ELISA, and DNA from the cells was isolated for determination of HBV-DNA by quantitative PCR. The expression levels of the different isoforms of OAS1 and the HBV core protein in HepG2 cells were determined by a standard western blot procedure using an antibody against the tag co-expressed with OAS1 and an anti-HBc antibody. β-actin was detected and used as a loading control for each lane. Three independent experiments were performed to assess statistical significance. All methods were performed in accordance with the institutional and national guidelines, regulations, and approvals.
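As an illustration of the association statistics described above, the following sketch computes a Fisher's exact test with a Woolf-type (log) 95% CI for the odds ratio, and a chi-square test of Hardy-Weinberg proportions. The 2x2 counts and genotype counts are hypothetical placeholders, not the study's data.

import math
from scipy.stats import fisher_exact, chi2

# Allele counts: rows = cases/controls, columns = allele A / allele G (hypothetical).
table = [[30, 38], [60, 140]]
odds_ratio, p_value = fisher_exact(table)

# Woolf (log) method for the 95% confidence interval of the OR.
(a, b), (c, d) = table
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3g}")

def hwe_pvalue(n_aa, n_ab, n_bb):
    """Chi-square (1 df) test of Hardy-Weinberg proportions, e.g. in controls."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # allele-A frequency
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p)]
    stat = sum((o - e) ** 2 / e for o, e in zip([n_aa, n_ab, n_bb], expected))
    return chi2.sf(stat, df=1)

print(f"HWE p (controls) = {hwe_pvalue(20, 80, 100):.3f}")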
Cation complexation by mucoid Pseudomonas aeruginosa extracellular polysaccharide

Mucoid Pseudomonas aeruginosa is a prevalent cystic fibrosis (CF) lung colonizer, producing an extracellular matrix (ECM) composed predominantly of the extracellular polysaccharide (EPS) alginate. The ECM limits antimicrobial penetration and, consequently, CF sufferers are prone to chronic mucoid P. aeruginosa lung infections. Interactions between the anionic EPS and cations with elevated concentrations in the CF lung enhance the structural rigidity of the biofilm and exacerbate virulence. In this work, two large mucoid P. aeruginosa EPS models, based on β-D-mannuronate (M) and β-D-mannuronate-α-L-guluronate (M-G) systems, and encompassing thermodynamically stable acetylation configurations (a structural motif unique to mucoid P. aeruginosa), were created. Using highly accurate first-principles calculations, stable coordination environments adopted by the cations have been identified and their thermodynamic stability quantified. These models show the weak cross-linking capability of Na+ and Mg2+ ions relative to Ca2+ ions and indicate a preference for cation binding within M-G blocks due to the smaller torsional rearrangements needed to reveal stable binding sites. The geometry of the chelation site influences the stability of the resulting complexes more than electrostatic interactions do, and the results provide nuanced chemical insight into previous experimental observations.

Introduction

Bacterial biofilms consist of a community of bacteria embedded in an extracellular matrix (ECM) of polysaccharide. Polypeptide oligomers, extracellular proteins, and circular polynucleotides such as plasmids are more widely dispersed at longer length scales [1]. The ECM limits the penetration of antimicrobials, which contributes to the minimum inhibitory concentrations of antimicrobials against biofilms being 100-1000-fold higher than those required for treating planktonic bacteria [2]. Pseudomonas aeruginosa is one such species, whose biofilm is definitively associated with chronic disease, most notably in the cystic fibrosis lung [3]. Confocal laser scanning microscopy and fluorescent lectin-binding analysis have characterized P. aeruginosa biofilm matrix architecture in situ and shown that the bacterial microcolonies are embedded in an open 3D network of matrix material [4,5]. This network gives rise to interstitial void spaces, to which the vast majority of biofilm (bulk) water is confined [6]. The cystic fibrosis (CF) lung acts as a prime infection site for P. aeruginosa [7], and quantitative microbiological analysis of CF sputum over long periods of time has demonstrated that it is the most prevalent, and most dangerous, pathogen found in CF patients [8]. Initial colonization is by the non-mucoid phenotype, but over time the stress of the CF lung environment drives the conversion to the mucoid phenotype, which becomes the dominant variant [9]. P. aeruginosa is intrinsically resistant to antibiotic therapy due to its low outer membrane permeability, production of antibiotic-inactivating enzymes and expression of efflux pumps [10]. The mucoid biofilm ECM further adds to its pathogenicity, and mature biofilms are rarely eradicated by high doses of antimicrobial treatments such as tobramycin [11]. Consequently, chronic infection by mucoid P. aeruginosa leads to decreased lung function and, ultimately, death [12]. The polysaccharide component of the mucoid P.
aeruginosa ECM is predominantly composed of the unbranched anionic polysaccharide alginate, a copolymer of two uronate sugars, namely β-D-mannuronate (M) and its C5 epimer α-L-guluronate (G), linked via a 1-4 glycosidic bond [13,14]. The alginate polysaccharide is acetylated at the C2 and/or C3 position(s) exclusively on the M units, inferred from the absence of 1H-NMR chemical shifts characteristic of acetylated G-units in bacterial alginates [15]. Bacterial alginate 1H-NMR spectra also show the parallel presence of both mono-acetylated and di-acetylated M-units [16], which suggests that steric bulk at one carbon position following acetylation does not prevent acetylation at the other carbon position. Moreover, 1H-NMR spectroscopy has quantified the degree of M-unit acetylation in bacterial alginates to be between 4-57% [16]. Acetylation offers protection against epimerase activity [17], and consequently repetitions of G units (G-blocks), which are the distinguishing architecture of algal alginate, do not occur in bacterial alginate [18]. Despite the fact that biofilm matrices are highly solvated systems, the ECM has the physical characteristics of a solid (viscoelastic) material, evident from its high storage modulus and low loss modulus [19]. It adopts a gel-like structure, whereby the internal polymer network is stabilized by chemical cross-links including electrostatic interactions, hydrogen bonds and dispersive interactions [19]. Binding of amphiphilic fluorescent carbon dots to the P. aeruginosa ECM has shown that, in the absence of cations, the ECM is dendritic in morphology, stabilized solely through entanglements [20]. In the CF lung, as in all biological tissues, the most common serum metal ion is sodium. A recent study suggested that salinity levels, and thus sodium ion concentrations, in CF-lung sputum were slightly higher than in control subjects, although statistical significance was only achieved in age-matched populations [21]. When considering other metal ions, samples of expectorated sputum from cystic fibrosis sufferers and a non-CF control group have shown significantly (p < 0.001) elevated levels of magnesium (30 mg/L), calcium (102 mg/L), iron (797 μg/L) and zinc (1285 μg/L) [21]. Apart from iron, which is also associated with bacterial virulence and respiration [22], these ions are known to be implicated in a variety of inflammatory pathways [23,24]. Across all CF patients, iron, magnesium and zinc had the largest increases compared to non-CF sputum samples [21]. However, when looking specifically at samples from patients with P. aeruginosa infections, it was only magnesium (p < 0.05) and calcium (p < 0.01) that showed significant elevation compared to samples from patients with other common CF infections [21]. Furthermore, there was a high correlation coefficient between calcium and magnesium in the CF sputum samples [21]. These ions serve to create permanent electrostatic cross-links in the extracellular polysaccharide (EPS), established between neighbouring M-M and M-G junctions [25], further stabilizing the ECM [26]. Using compression measurements on mucoid biofilms, calcium ions have been observed to be strong cross-linkers, enhancing structural rigidity through an increased Young's modulus [27]. Moreover, the presence of calcium ions increases the amount of alginate produced, leading to thicker, more granular biofilm structures [28].
Indeed, experiments conducted in continuous-flow stirred tank reactors show that calcium ions create a biofilm where specific cellular growth outstrips the specific cellular detachment rate, leading to biofilm thickening and accumulation [29]. Relative to calcium ions, sodium ions appear to be poor cross-linkers, forming weaker, temporary cross-links and softer gel-like structures stabilized through entanglements rather than bridging cations [30,31]. NMR studies on 13C-labelled native P. aeruginosa biofilms showed that while calcium ions caused broadening of the CHOH-carbon signals (particularly within the guluronate units) as gel formation progressed and molecular mobility reduced, the effect of magnesium in the same system was significantly less pronounced [25]. Cation binding by algal alginate has been studied theoretically, using classical molecular dynamics (MD) and quantum chemical Density Functional Theory (DFT). A combination of MD and Monte Carlo simulations, studying calcium ion complexation by two poly-α-L-guluronate (G) chains, suggested that chain complexation is facilitated through ionic interactions between the cations and carboxylate groups [32,33]. Upon establishment of calcium-carboxylate ionic interactions, driving initial poly-α-L-guluronate aggregation, a hydrogen-bonding network is then established between chains, although the stability of the resulting aggregate is more dependent on the calcium cross-links [34]. By contrast, MD simulations of sodium ions with poly-α-L-guluronate indicate that no bonding interaction occurs, as the monovalent ion sits too deeply in the G-G junctions and is, therefore, too distant to attract a second chain [35]. Similarly, a lack of sodium-induced bonding is also observed in analogous poly-β-D-mannuronate simulations [36]. More accurate DFT calculations, investigating complexation of divalent cations (Mg, Ca, Sr, Mn, Co, Cu, Zn) by simple algal disaccharide polyuronates (M-M, G-G and M-G junctions), highlight the role of the hydroxyl, glycosidic and ring oxygen atoms in cation binding. It was shown that in these systems the alkaline earth cations in particular can, in principle, form five or six ionic bonds within each complex [37,38]. In these circumstances, the cation-carboxylate bonds are stronger relative to the hydroxyl, glycosidic and ring oxygen-cation bonds, as indicated by their shorter bond lengths. A more recent study examining the structure and reactivity of the M-M, G-G, M-G and G-M conformations showed that the stability, as defined by the hardness (η), of the cation-disaccharide complexes decreased from magnesium to calcium to sodium, a trend inversely proportional to the ionic radius [39]. This suggests that magnesium-cross-linked alginate chains are more stable than those cross-linked with calcium. However, 13C-NMR spectroscopy measurements of the interaction of P. aeruginosa alginate with bivalent metal ions do not corroborate this result, suggesting instead that binding of magnesium ions to the bacterial alginate framework is weak and non-specific [25]. In summary, previous molecular modelling studies have assessed the contributions specific cations make to the stability of algal alginate complexes upon the establishment of cross-links, but there remains disagreement as to which ions produce the most stable structures, and what the chemistry of the interactions between those cations and the alginate looks like.
Attention has been drawn to the importance of the cation-carboxylate interaction during the aggregation events and, although coordination to other oxygen functionality is possible, this has not been considered vital for aggregation. In the present work, the interaction between cations that are elevated in the CF lung and the extracellular polysaccharide was studied with the aim of quantifying how and where the cations contribute to EPS stability. Two simplified mucoid P. aeruginosa EPS molecular models were constructed that possess structural motifs unique to mucoid Pseudomonas. Each model consisted of two chains, each containing four saccharide units. DFT studies followed, to understand the chemical interactions between the chains and selected cations, to determine the stable coordination geometries and to quantify the thermodynamic stability of the resulting complexes. The choice of metal ions in this study was based on the highest concentrations in CF sputum, specifically the three metal ions Na+, Ca2+ and Mg2+, with the sodium ion effectively acting as a biological control [21]. Compared to non-CF controls, the calcium ion concentration was 7.5× higher and the magnesium ion concentration 2.5× higher [21]. The models predict the potent cross-linking ability of the ions relative to one another and indicate that stable cation complexation results from a combination of electrostatic and steric factors.

Computational details

All geometry optimizations were performed using the plane-wave Density Functional Theory (DFT) code CASTEP [40]. For all polyuronate and ion-complexation optimizations, a convergence-tested cut-off energy of 900 eV was employed, as well as a Monkhorst-Pack k-point grid of 1 × 1 × 1 to sample the Brillouin zone [41], in an orthorhombic box of size 45 Å × 27 Å × 16 Å. On-the-fly ultrasoft pseudopotentials were used [42] alongside the PBE exchange-correlation functional [43]. The semi-empirical dispersion correction of Tkatchenko and Scheffler [44] was employed to account for intra- and intermolecular dispersive forces. The SCF tolerance was set to 1×10−7 eV atom−1, and the energy, force and displacement tolerances for the geometry optimisations were set to 1×10−5 eV atom−1, 0.03 eV Å−1 and 1×10−3 Å respectively (see the input sketch after the Conclusions). Following each geometry optimization, Mulliken bond populations [45] were calculated to classify the nature of bonding in each of the complexed structures. All molecules were generated and visualized using CrystalMaker [46]. For the determination of formation energies, chemical potentials for sodium, magnesium and calcium were calculated from their respective 0 K energies per atom, from the pure metals in their lowest-energy configurations, namely HCP (sodium), hexagonal (magnesium) and BCC (calcium). Chemical potentials for hydrogen and oxygen were calculated from optimized single molecules, as was the energy of the ethanal molecule. All calculations were conducted at the same cut-off energy of 900 eV.

Molecular models

Mucoid P. aeruginosa alginate presents in vivo with acetylated M units and no contiguous G-blocks [15,18]. This creates structural variations throughout the EPS architecture, as certain fractions of the EPS are exclusively mannuronate while in others guluronate units are successfully incorporated. As such, two molecular templates were created. The first was a poly-β-D-mannuronate structure, a single chain of four M units linked via a 1-4 glycosidic bond.
The second was a copolymeric β-D-mannuronate-α-L-guluronate structure, a single chain of two M units and two G units linked via a 1-4 glycosidic bond in an alternating M-G pattern. Fifty percent acetylation of these templates at the C2 or C3 positions on the M units was then performed. This degree of acetylation falls within the 4-57% range that has been observed experimentally [15]. For the poly-β-D-mannuronate and copolymeric β-D-mannuronate-α-L-guluronate structures, this gave 12 and 4 possible acetylation configurations respectively. The thermodynamic stability of each acetylation configuration (Table 1) was determined by means of evaluating a formation energy (Ef) according to Eq 1 (see the reconstruction below), where Efinal is the final energy of the acetylated structure, Einitial is the energy of the initial non-acetylated template, Eethanal is the energy of an ethanal molecule and μH is the chemical potential of a hydrogen atom. For the poly-β-D-mannuronate structure, n = 2 and m = 4, and for the copolymeric β-D-mannuronate-α-L-guluronate structure, n = 1 and m = 2. The most thermodynamically stable acetylated poly-β-D-mannuronate and copolymeric β-D-mannuronate-α-L-guluronate structures are shown in Fig 1 and from here on are referred to as PolyM and PolyMG respectively.

Water

Pulsed-field gradient NMR has shown that biofilm bulk water is highly mobile and confined to channels within the P. aeruginosa biofilm matrix. A small amount of water is present entrapped within the secondary structures of polysaccharides but is exchanged frequently with the bulk solvent [6]. Additional NMR observations on water movement through polysaccharide gels provide consistent conclusions, highlighting that waters near to the polysaccharide do not have significantly reduced motion, i.e. the water does not bind specifically [47]. Binding of ions by polysaccharides occurs preferentially in positions where oxygen functionality is well positioned to displace water from the coordination shell of the cation [48], a requirement satisfied by positioning ions in-between chains. For these reasons, as well as for reduced computational expense, all optimisations were performed in vacuo with the omission of water molecules. This allowed for large-scale, tractable DFT calculations focusing solely on cation-polysaccharide interactions, which gave thermodynamic predictions in line with experimental observation.

2-chain systems

The most thermodynamically stable arrangement of two PolyM and two PolyMG chains was evaluated. The configurations individually tested were parallel, antiparallel, parallel-inverted (where one chain has been inverted 180° about the chain axis) and antiparallel-inverted, with all atoms being given complete freedom. The thermodynamic stability of each arrangement was determined by evaluation of the formation energy according to Eq 2 (see below), where Efinal is the final energy of the spatial arrangement and Einitial is the energy of a single PolyM or PolyMG structure. The formation energies of all configurations are presented in Table 2. The antiparallel arrangement of two PolyM structures and the parallel arrangement of two PolyMG structures were the most thermodynamically stable spatial arrangements and are shown in Fig 2. The antiparallel PolyM and parallel PolyMG arrangements, compared to the other spatial arrangements tested, establish a larger hydrogen-bonding network between chains.
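The display forms of Eqs 1 and 2 did not survive extraction into this text. From the variable definitions given above, and noting that each acetylation consumes one ethanal molecule and releases two hydrogen atoms (hence m = 2n), a plausible reconstruction is:

E_f = E_{final} + m\,\mu_H - (E_{initial} + n\,E_{ethanal})    (Eq 1, reconstruction)

E_f = E_{final} - 2\,E_{initial}    (Eq 2, reconstruction)

With these assumed sign conventions, a negative E_f indicates a thermodynamically favourable configuration, consistent with the usage elsewhere in the text.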
The network is larger in the parallel PolyMG system compared to the antiparallel PolyM system as, within M-G junctions, there is a wider variety of hydrogen-bonding functionality at a suitable orientation to sustain more hydrogen bonds. For example, oxygen functionality that participates in hydrogen bonding is limited to carboxyl and hydroxyl groups in the antiparallel PolyM system but extends also to the glycosidic oxygen in the parallel PolyMG system. This helps rationalize the greater stability of the hydrogen-bonded parallel PolyMG system. Hereafter, the antiparallel PolyM and parallel PolyMG arrangements are referred to as PolyM(ap) and PolyMG(p) respectively. Dihedral angles (ϕ, ψ) around the glycosidic bonds for the PolyM and PolyMG molecular models are given in S1 Table. For the minimum-energy PolyM(ap) system, ϕ and ψ fall in the ranges −54 to −107° and −123 to −156° respectively. For the PolyMG(p) system these ranges are −47 to −101° and −55 to −144° respectively. These torsional angles match well with the most energetically favourable helical conformations of poly-β-D-mannuronate and poly-α-L-guluronate hexamers calculated by Braccini et al. using a molecular mechanics method [49]. In the work of Bekri et al. [50] and Agulhon et al. [37], the two dihedral angles involved in the glycosidic linkage in M-M, M-G, G-M and G-G diuronates were calculated using DFT (B3LYP) and Hartree-Fock (HF) levels of theory, respectively. Both works identified multiple different minima corresponding to different values of (ϕ, ψ). B3LYP gave (ϕ, ψ) angles for minimum-energy M-M, M-G, G-M and G-G diuronates of (312°, 92°), (57°, 248°), (269°, 202°) and (270°, 203°), respectively [50]. HF gave (ϕ, ψ) angles for minimum-energy M-M and G-G diuronates of (274°, 344°) and (305°, 292°) respectively [37]. It is important to note that during the construction of the molecular models in this work, all atoms were given complete freedom and our ground-state structures were not identified by constrained conformational searching. Moreover, the polyuronate molecular models are larger in molecular weight and possess acetyl functional groups, both of which affect the axial conformational flexibility of the polyuronate. It is unsurprising, therefore, that the (ϕ, ψ) angles obtained in this work differ from those reported in both the works of Bekri et al. and Agulhon et al. Although it is clear that torsional space is a complex potential energy landscape, encompassing multiple local minima that correspond to different dihedral angles (ϕ, ψ), it is nevertheless a useful measure to compare polyuronate configurations in this study. Both the PolyM and PolyMG systems possess oppositely displaced carboxyl groups, a feature also present in the minimum-energy M-M and G-G diuronates in the work of Agulhon et al. [37], and M-M and M-G junctions shifted to lower angles of ϕ, an observation replicated by Bekri et al. [50]. Single-ion binding calculations on the individual chains (S2 Table) demonstrated that in the stable binding positions each ion formed multiple oxygen contacts to hydroxyl, acetyl, ring, glycosidic and carboxylate oxygen atoms. This aligns with observations from experimental crystal structures of calcium-carbohydrate complexes, which show that stable binding positions adopted by calcium ions occur in regions where the ion can adopt multiple bonds to polysaccharide oxygen functionality [48].
Moreover, charge-saturated PolyM and PolyMG structures (i.e., binding to multiple cations) with respect to all four carboxylic acid groups gave more thermodynamically stable structures compared to the single-ion complexes. The greater stability of these charge-saturated structures justifies the placing of multiple, rather than single, cations in-between the two chains. Therefore, to study the interactions between the extracellular polysaccharide and cations typically found in cystic fibrosis sputum, 8 Na+, 4 Mg2+ and 4 Ca2+ ions were positioned in-between chains in the PolyM(ap) and PolyMG(p) systems, in regions where multiple cation-oxygen contacts could be sustained and in the vicinity of carboxylate groups. These positions were determined by reference to the lowest-energy binding positions observed in the single-chain studies above. Hydrogen atoms were removed from the carboxylic acid groups to ensure charge balance, and the number of cations included represented a fully charge-saturated system with respect to the carboxylic acid groups. Full geometry optimizations were performed and the thermodynamic stability was determined by means of evaluating a formation energy according to Eq 3 (a plausible reconstruction is given below), where Efinal represents the final energy of the optimized complex, Einitial represents the energy of the PolyM(ap) or PolyMG(p) system, μA is the chemical potential of the cation and μH is the chemical potential of a hydrogen atom.

Thermodynamic stability of the cation cross-linked 2-chain systems

The formation energies for the cation cross-linked PolyM(ap) and PolyMG(p) systems are given in Table 3. Across all sputum ions, more stable complexes are formed (by 0.5-1.5 eV) with PolyMG(p), highlighting a slight preference for binding within M-G blocks. This is consistent with elevated guluronate levels increasing metal ion affinity in algal alginates [51]. The 2-chain calcium complexes are very stable relative to the PolyM(ap) and PolyMG(p) systems without cations (Fig 2), as expected given experimental observations when the P. aeruginosa ECM is exposed to calcium [26]. For both polyuronate systems, the calcium ions produce more stable cross-linked structures relative to both sodium (~9 eV) and magnesium (~5 eV). At the atomistic level, this provides a thermodynamic rationale behind the increase in gelation capability of alginates upon substitution of extracellular sodium for calcium ions, which has been observed experimentally and through MD simulations [31,52]. However, it should be noted that this is in contradiction to a recent DFT study on disaccharides that predicted that magnesium would produce the most stable cross-linked structures [50]. This suggests that the larger model we are employing here better predicts the actual chemistry of the in vitro alginate structures. From the formation energies in Table 3, it is clear that sodium ions establish the weakest cross-links between chains and that it is thermodynamically unfavourable (with a positive formation energy) for the PolyM(ap) system to aggregate about them. It can be interpreted that sodium ions are unable to induce the aggregation of bacterial alginate structures that only possess (acetylated) M-blocks. Indeed, even for P. aeruginosa, rheological experiments on its sodium alginate ECM show that the chains are stabilized by entanglements and form only weakly held-together, transient ionic networks [30]. Given the ubiquity of sodium in the extracellular environment, this result is not unexpected.
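As with Eqs 1 and 2, the display form of Eq 3 was lost in extraction. Assuming the same sign convention, with n_A cations added and n_H carboxylic-acid hydrogens removed for charge balance (n_H = 8 for both the 8 Na+ and the 4 Mg2+/Ca2+ cases), a plausible reconstruction is:

E_f = E_{final} + n_H\,\mu_H - (E_{initial} + n_A\,\mu_A)    (Eq 3, reconstruction)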
Magnesium ions have a weaker interaction with the extracellular polysaccharide (Table 3) relative to calcium ions, which supports 13C-NMR observations showing magnesium having a much smaller impact on the line broadening of the P. aeruginosa ECM chemical shifts compared to calcium, particularly at low concentrations [25]. For both polyuronate systems, the stability trend follows the order Ca2+ > Mg2+ > Na+. This trend matches experimentally determined metal-alginate affinities [53], as well as metal ion affinities demonstrated by acetylated bacterial polysaccharides in ion chromatography experiments [54]. However, this trend is not inversely proportional to the ionic radius (Ca2+ ≈ Na+ > Mg2+), meaning the charge density of the ions is not the only factor that dictates cross-linked network stability.

Cation coordination geometries

All cations preferentially bind in positions whereby multiple oxygen contacts are sustained. Calcium (Fig 3) and magnesium (Fig 4) ions adopt coordination environments displaying analogies to the egg-box model of divalent ion complexation by algal alginates [55], namely the formation of chelate pockets in-between adjacent chains. However, the chelate complex geometries adopted by Ca2+ and Mg2+ ions do not entirely match the geometries predicted by the egg-box model. Specifically, egg-box binding of divalent cations is facilitated through the establishment of ten ionic contacts, five from each polyuronate chain, where four uronate residues are responsible for binding a single divalent cation [55]. The coordination geometries highlighted in this work have 2-3 uronate residues per divalent ion, and a probe over the Van der Waals (VdW) surface of poly-β-D-mannuronate and poly-α-L-guluronate hexamers showed a wide variety of possible binding positions for Ca2+ ions within a 15 kcal/mol energetic window, where the only prerequisite was the suitable orientation of oxygen atoms [49]. Deviations away from the egg-box model have also been observed in MD studies where 2-chain associations have been created solely through Ca2+-COO− interactions [32,33], and through perpendicular chain conformations [35]. For both the calcium and magnesium cross-linked polyuronate systems, two COO− groups are responsible for binding a single cation, in agreement with predictions from thermogravimetric experiments investigating divalent cation complexation by algal alginate [56]. In contrast, sodium ions do not present defined binding regions as, due to the stoichiometry of the system, single COO− groups are involved in binding multiple sodium ions (Fig 5). Cooperative behaviour, the principle that the polyuronate chain offers defined binding sites distributed in regular arrays, is prevalent in divalent but absent in monovalent ion complexation. Ill-defined inter-chain sodium ion binding positions led to thermodynamically unfavourable complexation, Ef = +0.62 eV (Table 3), in the PolyM(ap) system (Fig 5A). This unfavourability can be attributed to two sodium ions (Na3 and Na7) binding to outward-facing COO− groups (O17, O18, O23 and O24) that do not contribute to binding of the two chains. In addition, Na8 binds in a position that fails to engage a COO− group, because it binds to terminal acetyl and hydroxyl groups (O25, O30 and O53) at the periphery of the terminal residue. Similar binding modes have been shown in previous MD simulations of 10-unit poly-β-D-mannuronate chains.
In these simulations the sodium failed to show any preferential binding for the carboxylate group and formed only transient interactions with other oxygen atoms, such that interchain cation-mediated cross-linking was not observed [36]. Coordination numbers of up to six are present in the calcium 2-chain complexes, which matches the number of contacts in optimized structures observed in a previous DFT study on calcium complexation by two algal alginate hexamers [38]. In both polyuronate systems, the calcium and magnesium ions bind in-between the chains and mediate cross-links; binding externally in either system is not observed. However, the maximum coordination number (CN) around the magnesium cation (CN = 5) is lower than for calcium (CN = 6), despite the higher charge density of magnesium. Similar observations have been previously reported for M-G diuronate cation complexes [50] and can be explained by the smaller ionic radius of Mg2+, which reduces the number of accessible oxygen atoms [57]. Considering the PolyM(ap) system, Ca1 (Fig 3A) and Mg1 (Fig 4A) adopt similar coordination environments, apart from Ca1-O9, which is absent in the magnesium complex. Moreover, in the PolyMG(p) system, the Mg2 and Mg3 (Fig 4B) coordination environments are similar to those of Ca2 and Ca3 (Fig 3B), other than the Mg2-O48 and Mg3-O49 interactions being absent. Consequently, the magnesium complexes are less thermodynamically stable than the calcium complexes, which helps to explain the poor gelation ability of magnesium ions [58]. It is worth noting that in a previous DFT study on a single algal alginate M-G junction, magnesium was shown to complex with greater stability relative to calcium [39]. This highlights the importance of employing a model constructed from multiple M-M/M-G junctions (from opposing chains) to predict the effect that ionic radii have on uronate oxygen accessibility and to capture the correct gelation trends.

Analysis of bonding architecture

By considering CN, electronic spin state and cation-anion interatomic separation, Shannon and Prewitt-derived ionic radii are appropriate for systems where bonding of cations occurs to oxygen [59]. For the sputum ions considered in this work, the trend in ionic radii is Na+ ≈ Ca2+ > Mg2+ across an array of different coordination environments. Cation-oxygen bond lengths (see S3-S5 Tables) follow a trend that reflects the trend in ionic radii. Specifically, the average Na+-oxygen length (2.36 Å) ≈ the average Ca2+-oxygen length (2.35 Å) > the average Mg2+-oxygen length (2.03 Å). The Ca2+-oxygen and Mg2+-oxygen bond lengths agree well (within 4-8%) with observations from previous DFT calculations on divalent cation complexation by M-M and G-G diuronates [37]. Mulliken bond populations (see S3-S5 Tables) show that all cation-oxygen bonds, across all 2-chain complexes, are ionic, as defined by cation-oxygen populations < 0.3 |e|. The average cation-COO− bond lengths for Na+, Ca2+ and Mg2+ in the PolyM(ap) system are 2.32 Å, 2.26 Å and 1.99 Å respectively, and in the PolyMG(p) system they are 2.27 Å, 2.26 Å and 1.94 Å respectively. These are the shortest of all the cation-oxygen contacts and indicate that there is a stronger interaction between the cations and the COO− groups compared with other oxygen functionality.
The average Na+-COO− populations (PolyM(ap) 0.06 |e|, PolyMG(p) 0.08 |e|) are smaller compared with the average Ca2+-COO− (PolyM(ap) 0.14 |e|, PolyMG(p) 0.14 |e|) and Mg2+-COO− (PolyM(ap) 0.13 |e|, PolyMG(p) 0.17 |e|) populations, indicating that the strength of the cation-COO− interaction is considerably larger for calcium and magnesium ions, which is to be expected following charge-density arguments. Furthermore, comparing the PolyM(ap) and PolyMG(p) 2-chain complexes for each sputum ion, fewer COO− groups are saturated in the sodium and magnesium PolyM(ap) systems compared to their PolyMG(p) analogues. In these instances, the formation energy (Table 3) difference between the two systems is ~2.2 eV for sodium and ~1.7 eV for magnesium, with PolyMG(p) always having the lower formation energy. In the case of calcium, the same number of COO− groups are saturated in both polyuronate systems and, subsequently, the formation energy difference is lower (~0.5 eV). Therefore, within the slight preference for binding M-G blocks, there is also a stabilizing effect of COO− saturation. All these factors highlight the electrostatic nature of cation complexation by the mucoid P. aeruginosa EPS. Comparing the calcium and magnesium 2-chain complexes, the atomic charges on the Ca and Mg cations range from 1.44 to 1.55 and from 1.55 to 1.66 respectively. Moreover, the magnitudes of the average Mg2+-COO− populations (PolyM(ap) 0.13 |e|, PolyMG(p) 0.17 |e|), although near equivalent in the PolyM(ap) complexes, exceed the Ca2+-COO− populations in the PolyMG(p) complexes by 0.03 |e|. The atomic charges and bond populations imply that, using electrostatic arguments, magnesium ions should give more thermodynamically stable 2-chain complexes. However, this is not observed. It is clear that, although electrostatics play a role in cation complexation, they do not dictate complex stability. The cross-linked complexes accommodate the number of cations required to saturate the total negative charge of the carboxylate groups, assuming mono- and divalent ion charges for sodium and magnesium/calcium respectively. Under these conditions, sodium ions display ill-defined inter-chain binding sites and single COO− groups are involved in ionic bonding to multiple sodium ions (Fig 5). The consequence of this is a bonding architecture constructed from many weak ionic bonds, shown by Na+-COO− bond populations between 0.03-0.09 |e| (see S3 Table), and external binding giving fewer cross-linking interactions (Fig 5A). This effect is not prevalent in the calcium and magnesium 2-chain complexes, which display more defined binding sites. Differences in stability between these complexes can be attributed to the lower coordination numbers adopted by magnesium ions due to their smaller ionic radius. Furthermore, larger global changes in the dihedral angles (ϕ, ψ) are observed in the divalent-ion 2-chain complexes compared to the monovalent-ion 2-chain complexes (S1 Table). Upon complexing Na+ ions, deviations in ϕ or ψ do not surpass 39° in either the PolyM(ap) or PolyMG(p) 2-chain systems. However, upon complexing Ca2+ and Mg2+ ions, deviations reach 49° in the PolyMG(p) 2-chain systems and 313° in the PolyM(ap) 2-chain systems. The large changes in the dihedral angles in the PolyM(ap) divalent-ion complexes are required to reorient the glycosidic linkage at O46 to face Ca2 and Mg2, establishing an ionic contact (Figs 3A and 4A). This is not observed with PolyM(ap) complexing Na+ ions.
As such, calcium and magnesium ions have a greater influence on the polyuronate conformation compared to sodium ions, which has also been seen in the complexes established between Na+, Ca2+ and Mg2+ ions and M-M, M-G, G-M and G-G diuronates [39]. Moreover, the larger torsional changes in the divalent-ion PolyM(ap) complexes indicate that M-G junctions offer oxygen functionality whose spatial arrangement is already suitable for saturating the coordination environment of the cation. Similar observations have been made in molecular mechanics studies on the conformational features of acidic polysaccharides interacting with calcium ions [49]. Therefore, it is likely that the requirement for large torsional change in the PolyM(ap) divalent-ion complexes is another reason for the preference for binding within M-G blocks and the greater thermodynamic stability of the PolyMG(p) divalent-ion complexes. This is further evidence of cooperative behaviour: the chain must adopt a conformation that creates stable, defined binding positions for the (divalent) cations. Overall, the stability of the 2-chain complexes is determined more significantly by the geometry of the chelation site than by the strength of the cation-oxygen bonds. This, in turn, is dependent on steric factors, namely defined inter-chain binding sites and uronate oxygen accessibility. We clearly show that such steric effects can only be captured by employing a molecular model of a large enough molecular weight, encompassing multiple M-M/M-G junctions from opposing chains. Although sodium ions are ubiquitous in human physiology, and are consequently the most abundant metal ion in CF sputum, it is clear that sodium is not implicated in the aggregation of the mucoid P. aeruginosa extracellular polysaccharide or in ECM stability. By comparison, both calcium and magnesium ions, which are greatly elevated in CF sputum, have been shown to have a direct effect on extracellular polysaccharide aggregation and, consequently, ECM stability. Calcium ions are predicted to induce the most stable aggregation of the EPS and can therefore be considered the most important sputum ion, of those tested, for mucoid P. aeruginosa ECM stability and biofilm chronicity.

Conclusions

In this work, the relationship between structural chemistry and bacterial virulence has been probed in detail for mucoid P. aeruginosa extracellular polysaccharide molecular systems. Specifically, two two-chain mucoid P. aeruginosa extracellular polysaccharide molecular models were constructed, representing areas of zero and 50% guluronate content as well as possessing acetyl groups, an in vivo structural motif unique to mucoid P. aeruginosa. Thus, these models are, uniquely, structurally representative of the mucoid biofilm exopolysaccharide architecture. Precise step-wise building of the models ensured we had the most accurate system constructed to date. We demonstrated that stable accommodation of sputum ions by the mucoid P. aeruginosa EPS is electrostatic in origin, but the stability of the resulting complex is, in fact, influenced more by the geometry of the chelation sites than simply by the strength of the cation-oxygen bonds. In regions where guluronate units are absent and in regions where they are in their highest possible abundance (represented by the PolyM(ap) and PolyMG(p) systems respectively), calcium ions are able to produce cross-linked structures ~9 eV and ~5 eV more thermodynamically stable relative to sodium and magnesium ions.
These are large differences, which show unequivocally how the chemistry of these two ions differs from that of the biologically ubiquitous sodium ion, and impacts upon the stability of the EPS structure. These observations are important in providing new chemical insight into previous experimental reports of thicker biofilms being produced in the presence of excess calcium [27-29]. In regions of guluronate inclusion, more thermodynamically stable cross-linked structures were obtained, highlighting a preference for binding within M-G blocks, where preferential binding sites can be achieved without the large torsional rotations required by the PolyM system. This demonstrates well the importance of using larger whole-chain models rather than relying on smaller subunits to infer chemical nuance, and clearly explains the significance of the guluronate units in the alginate chains with regard to bacterial virulence. The greater stability of the calcium 2-chain complexes rationalizes the virulent consequences of mucoid P. aeruginosa ECM exposure to calcium ions, namely the development of thicker, more granular and rigid biofilms that are difficult to detach [27-29]. Possible strategies to combat mucoid biofilm infection in the cystic fibrosis lung may therefore include calcium chelation. Indeed, Powell et al. have recently demonstrated that a low-molecular-weight, guluronate-rich alginate oligomer, which binds calcium ions, can disrupt established biofilms [60]. From the work presented here, it is clear that preventing calcium ions from mediating electrostatic cross-links between bacterial alginate structures will be critical in facilitating the disruption of mucoid biofilms, and that using sufficiently detailed theoretical models is necessary for establishing meaningful chemical insight. Indeed, these cation cross-linked exopolysaccharide structures will, in the future, serve as precise model systems to study other cation substitutions and small-molecule pharmaceutical interventions at the molecular scale.
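As a reproducibility aid for the Computational details section above, the stated settings would correspond roughly to CASTEP input fragments along the following lines. This is a sketch only: keyword names are assumed from standard CASTEP usage, not taken from the study's actual input files, and the cell/positions blocks are omitted.

# --- sketch of a .param file; keyword names assumed from standard CASTEP usage ---
task            : GeometryOptimization
xc_functional   : PBE
cut_off_energy  : 900 eV
sedc_apply      : true
sedc_scheme     : TS            # Tkatchenko-Scheffler dispersion correction [44]
elec_energy_tol : 1e-7 eV       # SCF tolerance per atom
geom_energy_tol : 1e-5 eV       # geometry-optimization energy tolerance per atom
geom_force_tol  : 0.03 eV/ang
geom_disp_tol   : 1e-3 ang

# --- corresponding line in the .cell file (45 x 27 x 16 A orthorhombic box) ---
kpoint_mp_grid : 1 1 1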
New additions to the flora of Uttarakhand, India

Botanical explorations in different parts of Uttarakhand resulted in the collection of seven angiosperm species which were not known previously from the state. These are described here with images of live plants and herbarium specimens.

Uttarakhand is one of the Himalayan states of India, with an area of about 53,483 km², mainly made up of mountainous terrain. The state covers hardly 1.69% of the land area of India but hosts more than 27.96% of its flowering plant diversity (Karthikeyan 2000; Uniyal et al. 2007), which speaks of the richness of the flora here. This area has been a focus of plant collections as far back as 1796, when Thomas Hardwicke collected plants from the Alaknanda Valley of Garhwal. Since then, a large number of plant collectors have explored the area, and a great deal of information was available about the flowering plants of this area by the beginning of the 21st century. Based on these collections, floristic reports and their own collections, Uniyal et al. (2007) compiled a checklist of flowering plants of Uttarakhand as baseline data for writing the flora of Uttarakhand. This valuable document indicates the presence of nearly 4,700 species of flowering plants (including 32 species of gymnosperms and a few cultivated species).

In routine botanical explorations in different parts of Uttarakhand, a few interesting specimens were collected and identified with the help of relevant taxonomic literature and by comparing them with authentic specimens housed at the herbaria of the Botanical Survey of India (BSI) and the Forest Research Institute (DD) at Dehradun. These species proved to be additions to the flora of Uttarakhand, as they were not mentioned in Uniyal et al. (2007). Considering this, these species are reported here for the first time from Uttarakhand. Their descriptions, including the correct name, basionym (based on The Plant List 2010, International Plant Name Index 2012 or other recent literature), name in the Flora of British India (Hooker 1872-97), photographs of their natural state and their flowering/fruiting times are provided in this communication for future reference and further correct identification. The species are arranged in the sequence of families as per Uniyal et al. (2007).

Flowering and Fruiting: September-November.

Flowering and Fruiting: July-October. On steep slopes in high alpine zones with mosses. This alpine plant species is a newly described species from Uttarakhand (Gornall et al. 2012) and is considered, to date, endemic to the Kedarnath area in Uttarakhand.

Flowering and Fruiting: September-December. Along streams, climbing over grasses and shrubs. Oxystelma esculentum is considered a widespread species, and Karthikeyan et al. (2009) have reported it from throughout the plains and lower hills of India. However, it is reported here for the first time from Uttarakhand, as it is not mentioned in Uniyal et al. (2007).

Flowering and Fruiting: September-January. In marshy areas, making dense thickets. Karthikeyan et al. (2009) have mentioned its distribution throughout India, but it is not included in Uniyal et al. (2007), which makes it a new addition to the flora of Uttarakhand.
Long-Term Follow-Up and Optimization of Interleukin-1 Inhibitors in the Management of Monogenic Autoinflammatory Diseases: Real-Life Data from the JIR Cohort

Objectives: The major role of interleukin (IL)-1 in the pathogenesis of hereditary recurrent fever syndromes has favored the employment of targeted therapies modulating IL-1 signaling. However, the best use of IL-1 inhibitors in terms of dosage is difficult to define at present. Methods: In order to better understand the use of IL-1 inhibitors in a real-life setting, our study assessed the dosage regimens of French patients with one of the four main hereditary recurrent fever syndromes (familial Mediterranean fever (FMF), TNF receptor-associated periodic syndrome (TRAPS), cryopyrin-associated periodic syndrome (CAPS) and mevalonate kinase deficiency). The patients were retrieved retrospectively from the JIR cohort, an international platform gathering data on patients with pediatric inflammatory diseases. Results: Forty-five patients of the JIR cohort with a hereditary recurrent fever syndrome had received an IL-1 inhibitor (anakinra or canakinumab) at least once. Of these, 43% received a lower dosage than the one suggested in the product recommendations, regardless of the type of IL-1 inhibitor. Especially patients with FMF and TRAPS seemed to need lower treatment regimens; in our cohort, none of the FMF or TRAPS patients received an intensified dose of IL-1 inhibitor. On-demand treatment with a short half-life IL-1 inhibitor has also been used successfully for some patients with one of these two conditions. The standard dose was given to 42% of the patients, whereas an intensified dose of IL-1 inhibitors was given to 15% of the patients (44% of CAPS patients and 17% of mevalonate kinase deficiency patients). In our cohort, each individual patient's need for treatment seemed highly variable, ranging from on-demand treatment regimens to intensified-dosage maintenance therapies depending on the activity and the severity of the underlying disease. Conclusion: IL-1 inhibitors are a good treatment option for patients with a hereditary recurrent fever syndrome, but the individual dosage of IL-1 inhibitors needed to control the disease effectively seems highly variable. Severity, activity, but also the type of the underlying disease belong to the parameters underpinning the treat-to-target strategy implemented in everyday practice.
INTRODUCTION

Interleukin (IL)-1 is implicated in the pathogenesis of several systemic autoinflammatory disorders, and this recognition has favored the employment of targeted therapies modulating IL-1 signaling in a wide number of diseases (Cavalli and Dinarello, 2015). Several IL-1 inhibitors have been developed, but in France marketing authorization has been obtained for only two of them: the IL-1 receptor antagonist analog anakinra and the IL-1β-selective monoclonal antibody canakinumab. The first was formerly licensed for rheumatoid arthritis, then cryopyrin-associated periodic syndrome (CAPS), and recently systemic JIA. The second has an indication in the treatment of systemic JIA and in four hereditary systemic autoinflammatory disorders (European Medicines Agency, 2018b). In 2018, the pivotal placebo-controlled umbrella study with canakinumab provided the highest level of evidence for the use of IL-1 blockers to control inflammatory symptoms in three diseases other than CAPS, i.e., mevalonate kinase deficiency (MKD), TNF receptor-associated periodic syndrome (TRAPS), and familial Mediterranean fever (FMF) (De Benedetti et al., 2018). Anakinra will shortly be licensed in France also for colchicine-resistant FMF (crFMF) patients (European Medicines Agency, 2018a). Although studies have provided short- or medium-term results, the long-term use of IL-1 inhibitors, especially in real life, may differ in terms of both the intervals between injections and the dosage. Indeed, patients responding insufficiently to IL-1 inhibition may respond completely to a dose increase or a shortening of the interval between doses (Bodar et al., 2011; Grimwood et al., 2015; Kone-Paut et al., 2017; Deshayes et al., 2018). Conversely, the minimum doses required to treat a patient effectively are less well known, considering that the majority of patients are currently treated with a treat-to-target strategy. In French tertiary care centers, IL-1 inhibitors have been used off-label in these indications for several years (Meinzer et al., 2011; Stankovic et al., 2012; Rossi-Semerano et al., 2015; Abbara et al., 2017). The analysis of these patients therefore presents a unique opportunity to compare the actual doses received by patients in a nation-wide "real-life" setting with the drug dosages recommended in the product recommendations.

Study Design and Participants

Patients were identified from the JIR cohort, an international multicenter data repository granted by the Swiss-Children-Rheumatisms foundation, which aims to collect both retrospective and prospective information on a variety of juvenile-onset systemic inflammatory disorders (http://www.fondationres.org/fr/jircohorte - NCT02377245). For the purpose of the study, only patients from French centers (pediatric and adult) with complete history data and at least one completed follow-up visit were analyzed. Inclusion criteria were all patients
1) had a monogenic autoinflammatory recurrent fever syndrome (CAPS, TRAPS, FMF, or MKD) according to the Eurofever/PRINTO classification criteria (Gattorno et al., 2019), and 2) received at least one IL-1 inhibitor during their follow-up. Export of patients' data took place on 12 June 2017, one month before the marketing authorization of canakinumab in France.

Protocol Approvals

This study conformed to the tenets of the Declaration of Helsinki, and the protocol was approved by the French Ethics Committee (CCTIRS). Patients were enrolled after comprehensive information and after checking that they (or their legal guardian) were not opposed to the study and to the storage of their personal data. The electronic case report form was approved by the French National Commission for Data Protection and Liberties (CNIL).

Aims and Endpoints

The primary objective of the study was to evaluate the consistency of dosing of IL-1 inhibitors in hereditary recurrent fevers (HRFs) against the European Medicines Agency labeled recommendations. The secondary aims were 1) to analyze the reasons for discrepancies with the product recommendations and 2) to assess the overall safety profile of IL-1 inhibitors in HRFs.

Assessment of the Accordance of the Received Dosage of Medication with the Recommended Dosing Regimen

All patients who received at least one IL-1 inhibitor for colchicine-resistant FMF, MKD, TRAPS, or CAPS were assessed. Starting and ending dates of IL-1 inhibition were recorded so that the total exposure time for each IL-1 inhibitor, expressed in patient-years, could be calculated. To study the different dosage regimens, we considered the dosage of IL-1 inhibitor received at the last visit (or at the last visit before discontinuation of the studied IL-1 inhibitor). Patients were classified into three groups: group 1, lower than recommended dosage; group 2, standard dosage; and group 3, intensified dosage. For anakinra, the standard dose was defined as 100 mg/day (among adults) or 2 (±0.5) mg/kg/day (among children) (European Medicines Agency, 2018a). For canakinumab, the standard dose depended on the indication: for CAPS patients it was defined as 150 mg (or 2 (±0.5) mg/kg) every 8 weeks, whereas for crFMF, MKD, and TRAPS patients it was the dose recommended by the European Medicines Agency, 150 mg (or 2 (±0.5) mg/kg) every 4 weeks (European Medicines Agency, 2018b). Patients treated with lower doses or less frequent injections were considered as receiving lower than recommended doses, whereas those receiving higher doses or more frequent injections were considered as receiving intensified dosages of canakinumab.

Analysis of the Reasons for Discrepancies with the Product Recommendations

To analyze the reasons for accordance or discrepancy of the different dosage regimens with the product recommendations, a descriptive analysis of the treatment modalities of the patients treated with IL-1 inhibitors was performed.

Assessment of the Overall Safety Profile of IL-1 Inhibitors

Frequency and description of adverse events were retrieved according to the MedDRA terminology.
For each adverse event, investigators had to indicate the intensity ("no effect," "mild," "moderate," "severe," or "very severe"), the seriousness (requiring hospitalization or not), the relationship between the medication and the event ("not related," "not likely," "possible," "probable," or "definitely"), and the consequence for the administration of the treatment ("no action," "drug interrupted," "drug discontinued," or "dose reduced"). Adverse events were expressed both as the absolute number of events during the whole follow-up and as the number of events per 100 patient-years.

RESULTS

Forty-five French patients who had received an IL-1 inhibitor at least once, either anakinra or canakinumab or both, were identified in the JIR cohort and included for analysis. Table 1 summarizes the patients' characteristics with their treatments; the treatment group of each patient (low, standard, or intensified) was defined by the dosage received at the last visit (or at the last visit before discontinuation of the studied IL-1 inhibitor). Anakinra was the most frequently given treatment (25/45, 56%), especially in FMF (9/13, 69%) and TRAPS (8/8, 100%) patients. The total treatment exposure to anakinra and canakinumab represented 54 and 202.9 patient-years, respectively. Figure 1 summarizes the actual doses received at the last visit (or at the last visit before discontinuation of the studied IL-1 inhibitor) according to the different diseases. Group 1 (lower dosage than in the product recommendations) constituted 43% of the patients, regardless of the type of IL-1 inhibitor. This was especially true for FMF, TRAPS, and MKD patients on canakinumab, of whom 100%, 75%, and 66%, respectively, received less than the standard dose (i.e., 150 mg or 2 mg/kg every 4 weeks). Group 2 (standard dose) comprised 42% of the patients, whereas an intensified dose of IL-1 inhibitors (group 3) was given to 15% of the patients: 44% of CAPS patients and 17% of MKD patients in our cohort received higher doses than the recommended standard dose, whereas neither FMF nor TRAPS patients required the intensified maintenance dose (i.e., 300 mg or 4 mg/kg every 4 weeks). The lower dosages in our cohort than those recommended in the summary of product characteristics (SPC) were explained by different treatment regimens:

• Fifty percent of the patients (i.e., 2 FMF and 3 TRAPS patients) treated with anakinra who received less than the recommended dose were treated with an on-demand regimen (anakinra administration only during flares); the other half received either a maintenance treatment by injections every other day instead of daily injections, or lower daily doses.

• Administration modalities for canakinumab also varied: one CAPS and one FMF patient received an "on-demand" regimen, i.e., an injection of canakinumab only if clinical and biological symptoms appeared. The other lower-dose regimens involved patients with the new indications of canakinumab (i.e., FMF, TRAPS, and MKD): they received less frequent injections than those stipulated in the SPC, varying from an injection every 10 weeks to every 6 weeks.

Concerning the reported adverse events occurring while on IL-1 inhibitors (Table 2), 6 led to therapeutic discontinuation, whereas 40 other adverse events possibly, probably, or certainly related to IL-1 inhibition were reported. The global incidence of adverse events with IL-1 inhibition was 17.1 per 100 patient-years.
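As an aside on how a rate like "17.1 per 100 patient-years" is derived, the R sketch below divides an event count by the total exposure and multiplies by 100. The exposure of 256.9 patient-years is the sum of the anakinra (54) and canakinumab (202.9) exposures given above; the event count of 44 is an assumption chosen to reproduce the reported rate (the text lists 6 discontinuation-triggering plus 40 other related events, so the authors' exact numerator is unclear), and the per-drug split is purely hypothetical. The paper does not state which test produced the between-drug comparison reported just below; a two-sample Poisson rate test is one reasonable choice.

```r
# Incidence per 100 patient-years: events / exposure * 100.
events_total   <- 44          # assumed; chosen to match the reported 17.1
exposure_total <- 54 + 202.9  # anakinra + canakinumab patient-years
100 * events_total / exposure_total  # ~17.1 per 100 patient-years

# Hypothetical split of events by drug, for a two-sample Poisson rate test
# (the actual per-drug counts are not given at this point in the text):
events_anakinra    <- 9
events_canakinumab <- 35
poisson.test(x = c(events_anakinra, events_canakinumab),
             T = c(54, 202.9))  # compares the two incidence rates
```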
No significant difference in the incidence of adverse events was found between anakinra and canakinumab therapy (p = 0.55). No link could be established between the frequency of adverse events and the dosage of IL-1 inhibitor received. Notably, of the nine patients with a side effect considered serious or very serious by the investigator, three received an intensified dosage regimen. No life-threatening adverse events were recorded in our study. The global drug retention rate was higher for canakinumab than for anakinra (Figure 2): 33 of the 36 patients (92%) who ever received canakinumab continued the treatment at the end of the study period, whereas this was the case for only 7 of the 25 (28%) anakinra-treated patients (p < 0.0001).

FIGURE 2 | Drug survival curves for anakinra (red line) and canakinumab (blue line). The retention rate for canakinumab was significantly higher than for anakinra. Time expressed in months.

DISCUSSION

This study assessed the dosing regimens of IL-1 inhibitors in patients with a monogenic autoinflammatory disease. During the study period, licensed use of IL-1 inhibitors in France was possible only in CAPS patients. Nevertheless, the French healthcare organization enables physicians belonging to secondary or tertiary care centers for rare diseases to prescribe off-label drugs, and our study focused on these patients. Almost half of the patients received lower dosages of IL-1 inhibitors than the recommended standard dose. These lower dosage regimens concerned 60% of the patients with the more recently licensed indications of IL-1 inhibitors: crFMF, TRAPS, and MKD (De Benedetti et al., 2018; European Medicines Agency, 2018b). In particular, the canakinumab injection rate was far lower and varied greatly from one patient to another, with injections ranging from every 6 to every 10 weeks. This was probably because patients received doses based on the licensed use of canakinumab (i.e., CAPS, in which the standard dose is lower than in the other recurrent autoinflammatory fever syndromes). Indeed, the publication of the phase 3 Canakinumab Pivotal Umbrella Study in Three Hereditary Periodic Fevers (CLUSTER) (De Benedetti et al., 2018), defining the standard dose of 150 mg (or 2 mg/kg) every 4 weeks, occurred after the end of our study. Nevertheless, it is a striking finding that, in a real-life setting, doses lower than the anticipated standard dose seem sufficient to control the disease. Moreover, it seems to show that the need for IL-1 inhibitors is not uniform: while 100% of patients with crFMF responded to low doses of IL-1 inhibition, patients with MKD required overall higher doses, with a need for intensified doses observed only in this group. TRAPS patients seem to display an intermediate profile of IL-1 thresholds, with more varied needs for the level of IL-1 inhibition to control the disease. Thus, the results show that the optimal dosage for properly treating any of these diseases is not yet fully defined. The other main reason for lower dosages was an on-demand treatment strategy in FMF and TRAPS patients. An on-demand strategy was previously described in only 3 studies, all with anakinra (Bodar et al., 2011; Grimwood et al., 2015; Babaoglu et al., 2019).
In a real-life setting, this strategy seems to be a realistic treatment option for selected patients (with anakinra as well as with canakinumab), as 5 of the 7 patients still received an on-demand regimen at the end of the study period. Both patients not responding to on-demand treatment with anakinra switched to maintenance therapy with canakinumab, with a good response according to the including physician. The global incidence rate of adverse events in our study was slightly higher than in an Italian study (17.1 per 100 patient-years in our study vs. 8.4 in the Italian study) (Sota et al., 2018), but only already known side effects were described by the participating physicians (Table 2), mainly, as anticipated, infectious complications (~11 per 100 patient-years). Most adverse events were considered mild and could be managed with minimal treatment modifications. No death, neoplasm, tuberculosis infection or reactivation, or opportunistic infection was reported in our study. Our observations are reassuring regarding the safety profile of IL-1 inhibitors in HRFs and support the hypothesis that severe adverse events under IL-1 inhibitors relate preferentially to the underlying diseases requiring IL-1 inhibition and to poor general clinical condition, rather than to an actual effect of IL-1 blockade (Sota et al., 2018). We show a far better drug retention for canakinumab than for anakinra, whereas side effects seemed equally frequent in both groups. Our hypothesis is that ease of treatment may be the most important factor for treatment persistence in patients. It is worth noting that, during the scheduled switch from anakinra to canakinumab, none of the attending physicians pointed out that anakinra was not sufficiently effective to justify changing the medication. Similarly, patients with inadequate disease control on on-demand anakinra therapy switched directly to canakinumab maintenance therapy, not to daily anakinra. These observations suggest that ease of treatment is also a major argument guiding the choice of drug for the prescribing physician. The major flaw of our study is its retrospective design: we were not able to retrieve a standardized disease activity score and consequently could not link the disease activity of the patients to their treatment regimens. However, we consider that we can infer the control of disease activity indirectly by assuming that the therapy adaptations decided by the investigating physicians were made on criteria related to the severity and control of the disease. The observed highly variable treatment regimens, ranging from on-demand regimens to intensified-dosage maintenance therapies, reflect, in our opinion, that in daily life the investigating physicians adapt drug dosages as closely as possible to disease activity. This is all the more true since our study took place before the French marketing authorization for IL-1 inhibitors in HRFs, at a time when dosages had not yet been standardized by the SPC. A second bias of our study concerns the heterogeneity of our sample, particularly regarding the diseases included. However, this heterogeneity also highlighted that individual treatment needs are highly variable. Future studies should focus on identifying and refining the parameters underpinning the treat-to-target strategy practiced in HRFs.

Key Messages

• IL-1 inhibitors are a good treatment option for patients with a hereditary recurrent fever syndrome.
• The individual need for IL-1 inhibitor dosage to control the disease effectively seems highly variable, with about 45% of patients responding well to low dosages of IL-1 inhibitors.

• On-demand treatment with a short half-life IL-1 inhibitor may be a treatment option for some selected patients with a hereditary recurrent fever syndrome.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the French Ethics Committee (CCTIRS). Patients were enrolled after comprehensive information and after checking that they (or their legal guardian) were not opposed to the study and to the storage of their personal data.

AUTHOR CONTRIBUTIONS

VH and SG-L were involved in the conception and design of the study. VH, SG-L, IK-P, AB, CG, GG, AC, AP, MH, and PP organized the database. VH and MD analyzed the data. VH wrote the first draft of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.

FUNDING

No specific funding was received from any bodies in the public, commercial, or not-for-profit sectors to carry out the work described in this article.
MCHM Acts as a Hydrotrope, Altering the Balance of Metals in Yeast

While drugs and other industrial chemicals are routinely studied to assess risks, many widely used chemicals have not been thoroughly evaluated. One such chemical, 4-methylcyclohexane methanol (MCHM), is an industrial coal-cleaning chemical that contaminated the drinking water supply in Charleston, WV, USA in 2014. While a wide range of ailments was reported following the spill, little is known about the molecular effects of MCHM exposure. We used the yeast model to explore the impacts of MCHM on cellular function. Exposure to MCHM dramatically altered the yeast transcriptome and the balance of metals in yeast. Transcriptomics and mutant analysis of the genetic variation underlying the response to MCHM uncovered the roles of the metal transporters Arn2 and Yke4 in the MCHM response. Expression of Arn2, which is involved in iron uptake, was lower in MCHM-tolerant yeast, and loss of Arn2 further increased MCHM tolerance. Genetic variation within Yke4, an ER zinc transporter, also mediated the response to MCHM, and loss of Yke4 decreased MCHM tolerance. The addition of zinc to MCHM-sensitive yeast rescued the growth inhibition. In vitro assays demonstrated that MCHM acted as a hydrotrope and prevented protein interactions, while zinc induced the aggregation of proteins. We hypothesized that MCHM altered the structures of the extracellular domains of proteins, and the addition of zinc stabilized the structure to maintain metal homeostasis in yeast exposed to MCHM.

Background

The potential for significant human exposure to toxic substances is increasing, as thousands of chemicals in routine use have had little safety testing [1-3]. 4-Methylcyclohexane methanol (MCHM) is an alicyclic primary alcohol used as a cleaning agent in the coal industry. Although health and safety information for this compound is limited, its widespread use in the coal-producing regions of the USA represents a potential hazard to humans and ecosystems. In January 2014, a large quantity of MCHM was spilled into the Elk River in West Virginia, USA, contaminating the drinking water supply of 300,000 people and exposing them to unknown health risks [4]. People exposed to MCHM through the contaminated drinking water reported a variety of significant ill effects [5]. MCHM is not easily degraded biologically because of its low reactivity [6]. In contrast to other well-studied hydrocarbons, such as cyclohexane and benzene, the effects of MCHM on metabolism are understudied [7]. Yeast strains exposed to MCHM exhibited increased expression of proteins associated with membrane, cell wall, and cell structure functions, while MCHM metabolites mainly induced proteins related to antioxidant and oxidoreductase activity [3]. In human A549 cells, MCHM mainly induced DNA damage-related biomarkers, indicating that MCHM is genotoxic through its DNA damage effect on human cells [3]. Yeast provides an ideal model system to understand the interplay between the metabolic pathways involved in the transport, toxicity, and detoxification of MCHM. Further, the use of yeast strains with mutations in various metabolic pathways allows direct evaluation of targeted pathways on the fate and toxicity of MCHM in cells. "Petite" yeast, strains with mutations that disrupt the electron transport chain that produces ATP in the mitochondria, grow more slowly and have a smaller cell size than "grande" (wild-type) yeast.
Because these yeast mutants can generate sufficient energy through glycolysis, these are not lethal mutations but provide slow-growing strains with which to evaluate metabolic rate and stress response. In addition to their roles in energy transformations, mitochondria are central to the synthesis of amino acids, nucleotides, and heme and Fe-S cluster proteins. Thus, such yeast strains are ideal models to assess the role of mitochondrial function in the response to stress. Petite yeast have different tolerances to chemicals, which may be related to the production of reactive oxygen species (ROS) and mitochondrial function. For example, petite yeast are more tolerant to 4-nitroquinoline 1-oxide (4NQO) than grande yeast when grown on non-fermentative carbon sources that favor respiration [8,9]. 4NQO is metabolized to its active form only in cells with functional mitochondria, and petite yeast, which lack functional mitochondria and favor fermentation, are therefore more resistant than wild-type yeast. However, petite yeast have higher levels of endogenous ROS and are sensitive to compounds that also generate ROS [10], such as MCHM. Petite yeast are additionally more sensitive to H2O2 [10,11] but more resistant to copper [12]. Sod1 is the main dismutase that neutralizes ROS in the cytoplasm and the mitochondria, and mitochondrial Sod2 also neutralizes ROS. Thus, the petite-grande yeast pair represents an ideal system to evaluate the roles of ROS systems and metal homeostasis in MCHM toxicity. The hydrophobicity of MCHM alters membrane dynamics, which changes how cells can respond to the environment, including the import and subcellular localization of metals. Metal homeostasis is critical in that metals such as iron (Fe) and zinc (Zn) play important roles in metabolism as co-factors for enzymes and other proteins, yet, if in excess, induce broad lesions in cell biology through the generation of ROS and by binding to a variety of biomolecules [13]. The coordinated activity of metal uptake and sequestration transporters functions to maintain metals at optimal levels (Fig. 1). For example, there are two Zn transporters located on the cell membrane of yeast: Zrt1 is the high-affinity transporter that imports Zn when extracellular levels are low [15], and Zrt2 is the low-affinity transporter [16]. Zrt3 transports Zn from storage in the vacuole to the cytoplasm when needed [17], while Zrc1, a Zn/H+ antiporter, and its paralog Cot1 [17,18] transport Zn into the vacuole from the cytoplasm. Izh1 and its paralog Izh4 are both involved in Zn homeostasis, by altering membrane sterol content or by directly altering cellular Zn levels [19,20]. In the current study, we evaluated the impacts of MCHM on petite and grande yeast strains, focusing on metal ion homeostasis and the divergent respiratory pathways in these strains as potential mechanisms of MCHM sensitivity. We integrate transcriptomics, ionomics, and QTL analysis to identify Fe and Zn homeostasis as central to MCHM toxicity and suggest that loss of metal homeostasis underlies ROS damage and MCHM toxicity.
While MCHM has low solubility in aqueous solutions, we propose that MCHM acts as a hydrotrope, altering membrane dynamics and changing how cells respond to nutrients, including the import and subcellular localization of metals via transmembrane ion transporters.

Experimental Procedures

Yeast Strains and Media

S96 petite yeast were previously generated from S96 (MATa lys5) by a 6-h incubation with ethidium bromide. The petite phenotype was validated by failure to grow on glycerol and loss of COX2 from the mitochondrial genome [9]. Yeast knockout strains were previously generated in the BY4741 background [21]. The entire coding region of YKE4 was knocked out with a hygromycin resistance marker in S96 and YJM789 [22]. S96-derived strains were grown in minimal media supplemented with lysine, while BY4741 strains grown in minimal media were supplemented with histidine, methionine, uracil, and leucine. S96 and BY4741 are considered to be in the S288c genetic background, while YJM789 is a clinical isolate [23]. Crude MCHM was provided directly by Eastman Chemical.

QTL Analysis

Isolates of the recombinant haploid collection between S96 and YJM789 [24] were used to perform a QTL analysis for MCHM tolerance. A genetic map was constructed and combined with phenotypes collected from growth curves of the segregants in YPD containing MCHM using a TECAN M200 plate reader. Briefly, cultures of segregants were grown overnight and then diluted to a starting OD600 of 0.1 in 200 μl of YPD with either 0 ppm (0 mM, control) or 400 ppm (2.8 mM) MCHM. Each segregant was grown in biological triplicate for both control and MCHM treatments in 96-well plates. Both parent strains were grown on every plate to normalize plate-to-plate variation. Plates were grown with constant shaking, and OD data were collected every 10 min for 24 h. Differences between the control and MCHM saturation ODs from hours 14-19 of the growth curves were averaged into a single data point to serve as the phenotype for each segregant. The S96 × YJM789 segregant collection used for genetic analysis contains 126 segregants genotyped at 55,958 SNPs identified by physical location. To perform the QTL analysis as previously described [25,26], a genetic map was estimated for use in place of the existing physical map. Computational efficiency was also improved by collapsing the 55,958 markers into only 5076 markers: the R/qtl package based this reduction on markers that did not recombine and segregate within the population, collapsing each such group into a single randomly selected representative marker. The genetic map was created using R version 3.4.3 and the qtl package version 1.41.6. A custom modification of the scripts contained in Karl W. Broman's genetic map construction guide was used to output the map, specifically utilizing known physical locations to validate that markers were ordered correctly and to identify yeast chromosome order. The qtl package was used to run the QTL analysis with the maximum likelihood EM algorithm selected to calculate LOD scores at all loci. Significance thresholds of alpha = 0.05 were applied using 1000 permutations to determine the significance of LOD scores (a minimal sketch of this scan appears after the next subsection).

Serial Dilution Assays

Yeast were serially diluted onto solid media as previously described [25]. MCHM and ZnSO4 were added to media that had been autoclaved and cooled to 65 °C, which was then poured. Specific experimental conditions varying the concentrations of MCHM, zinc, and yeast strains are outlined in the results.
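Referring back to the QTL Analysis subsection above, the following minimal R sketch shows how such a scan can be run with the qtl package. The input file name, its layout, and the phenotype column name are assumptions for illustration, and the authors' custom map-construction scripts are not reproduced here.

```r
library(qtl)  # R/qtl; the paper used version 1.41.6 with R 3.4.3

# Load the cross; the CSV name and layout are assumptions for illustration.
# Haploid yeast segregants are commonly coded as a backcross in R/qtl.
cross <- read.cross(format = "csv", file = "s96_x_yjm789.csv",
                    genotypes = c("S96", "YJM789"), crosstype = "bc")

# Collapse markers that segregate identically, as described in the Methods.
dups  <- findDupMarkers(cross, exact.only = FALSE)
cross <- drop.markers(cross, unlist(dups))

# Estimate a genetic map and use it in place of the physical map.
newmap <- est.map(cross, error.prob = 0.001)
cross  <- replace.map(cross, newmap)

# Single-QTL genome scan with the maximum-likelihood EM algorithm.
cross <- calc.genoprob(cross, step = 1)
out   <- scanone(cross, pheno.col = "mchm_growth", method = "em")

# Genome-wide significance via 1000 permutations at alpha = 0.05.
operm <- scanone(cross, pheno.col = "mchm_growth", method = "em", n.perm = 1000)
summary(out, perms = operm, alpha = 0.05)
```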
GO Term Analysis

GO term analysis was undertaken with clusterProfiler [27], an R package that implements methods to analyze and visualize functional profiles (GO and KEGG) of genes and gene clusters. For this, the ORF names of the genes up- or downregulated in each condition were translated to the corresponding Entrez IDs using the function bitr and the package org.Sc.sgd.db. The resulting gene clusters were processed with the compareCluster function, in enrichGO mode, using org.Sc.sgd.db as the database, with the Biological Process ontology and cutoffs of p value = 0.01 and q value = 0.05, adjusted by "BH" [28], to generate the corresponding GO profiles, which were then simplified with the function simplify. The simplified profiles were represented as dotplots showing up to the 15 most relevant categories.

Elemental Analysis

Yeast were grown to mid-log phase in YPD, at which time MCHM was added to a final concentration of 550 ppm (3.9 mM). A total of 1.2 × 10^8 yeast cells were harvested following 0-, 10-, 30-, and 90-min exposure to MCHM. Samples were split and washed, one set twice with water and the other once with 10 mM EDTA to remove metals adsorbed to the extracellular matrix. Four biological replicates for each sample were frozen in liquid nitrogen and stored at -80 °C. Cell pellets were digested in 300 μl of 30% H2O2 followed by 700 μl of concentrated nitric acid. Samples were heated to 85 °C until clear. Sample volume was brought up to 10 ml with HPLC-grade water. Digested samples were analyzed in technical triplicate by inductively coupled plasma emission spectrometry (Agilent 5110 ICP, Agilent, Santa Clara, CA, USA).

Hydrotrope Assay

Hydrotrope assays were carried out as previously described [29], with the following modifications. Eggs were purchased and used within 1 day. Egg whites were separated and diluted 1:6 in 50 mM Tris-HCl, pH 7.4. In glass tubes, 3 ml of diluted egg whites was mixed with different concentrations of ATP, MCHM, and zinc sulfate. Samples in the glass tubes were read at 450 nm after 45-60 s of incubation in a 60 °C water bath. All treatments were done in 3-4 replicates and averaged, with the standard deviation shown. Statistical differences were determined using Student's t test.

Spheroplasting

Spheroplasts of the BY4741 yeast strain were prepared as described [30] with the following modifications. HEPES was used as the buffer, and 100 mg of 20T zymolyase was added per OD unit (OD600 multiplied by the volume of culture); spheroplasts were incubated for 1 h at 30 °C. In a 96-well plate, 5 replicates of each treatment were recorded: empty well, spheroplast media only, no treatment, 0.1% SDS, 0.5% sterile distilled water, 10 mM sorbitol, 1 mM ATP, 10 mM NaXS, 10 μM ZnSO4, and 550-1000 ppm (3.9 to 7.1 mM) MCHM. Using a spectrophotometer, the absorbance at 600 nm was recorded at regular intervals over 15 h at room temperature.

Microscopy

Yeast with proteins tagged at the N-terminus with mCherry under the TEF2 promoter [31] were grown to log phase and then split into eight different cultures (four treated and four untreated). Once in log phase, samples were treated with 550 ppm (3.6 mM) MCHM for 30 min, followed by a 20-min incubation with 25 μM calcofluor white (Biotium catalog number 29067) on a shaker in a dark room. Then, 30 μl of each sample was pipetted onto a microscope slide pretreated with 40 μl of 250 μg/ml concanavalin A and left to sit under a hood for 30 min to dry. The coverslip was placed on top, with nail polish around the edges to hold the coverslip in place.
Cells were imaged on a Nikon A1R confocal microscope using FITC and DAPI lasers. Quantitative analysis of pixel intensity, to measure the change in expression of the proteins of interest after exposure to MCHM, was done with ImageJ on 17-20 cells for each condition. The signal was normalized to untreated yeast, and statistical differences were determined using Student's t test.

Results

Petite yeast have different responses to chemicals because their metabolism is shifted away from respiration. Indeed, compared to wild-type (grande) yeast, petite yeast were more sensitive to MCHM (Fig. 2). Growth of petite yeast was inhibited at 125 ppm MCHM, while growth of wild-type yeast was only affected at 500 ppm MCHM. Yeast have higher tolerance to MCHM when grown in minimal media (YM); however, petite yeast grew less on YM in general (Fig. 2). To assess the transcriptional response of petite yeast to MCHM, yeast were grown in both YPD and YM and then treated for 90 min with 550 ppm MCHM. A total of 949 genes were differentially regulated across strains and conditions (Supplemental Table 1). Gene expression levels between petite and isogenic grande yeast were similar when grown in YPD (only six upregulated genes, encoding cell wall components and iron transporters, and one downregulated gene, encoding a putative mitochondrial protein; Fig. 3a), but they were clearly different when grown in YM (131 upregulated and 117 downregulated genes; Supplemental Fig. 1A, Supplemental Table 1). In YM, petite yeast exhibited downregulated cell wall component genes, and there were also significant changes in genes related to small-molecule metabolism (Supplemental Figs. 1A, 2, and 3, Supplemental Table 1). MCHM treatment elicited the upregulation of genes involved in small-molecule and sulfur compound biosynthesis, among others, in both petite and grande yeast, while the regulation of other genes differed between strains and depended on the media. For example, genes related to nucleotide and nucleoside metabolism were upregulated in the petite strain treated with MCHM only when grown in YM (Fig. 3, Supplemental Figs. 1 and 2, Supplemental Table 1). Genes downregulated by MCHM treatment also depended on the media (Fig. 3, Supplemental Figs. 1 and 3, Supplemental Table 1). Among the genes with variable expression due to MCHM treatment and/or the use of petite vs. grande yeast were several involved in zinc homeostasis (COT1, IZH1, IZH2, IZH4, ZRT1, and ZRT2) and iron homeostasis (ARN1, ARN2, ENB1, FIT1, FIT2, FIT3, FTR1, GGC1, and SIT1) (Fig. 3, Supplemental Fig. 1, and Supplemental Tables 2 and 3). The increased expression of iron transporters was further explored given that mitochondria, the presence of which differs between the two strains, are the site of iron-sulfur cluster protein biogenesis. Strain ionomic profiles were evaluated for yeast grown in YPD, chosen to minimize the differences in growth between grande and petite yeast. Because there were differences in expression of cell wall genes such as CWP1, yeast were also washed in EDTA to determine whether increased iron or other metals were associated with the cell wall (water wash) or internalized (EDTA wash). There was no difference in metal levels between yeast washed with water alone and those washed with EDTA, indicating that the reported ions were absorbed into the cells and were not associated with the cell walls (Supplemental Fig. 4 and Supplemental Table 4).
Iron levels were three times higher in petite yeast than in grande yeast, while zinc was 60% lower in the petite strain (Fig. 4a). Other elements were typically lower in grande compared to petite yeast, except for calcium (Fig. 4a, Supplemental Table 4), as yeast mitochondria do not store calcium (reviewed in [32]). Levels of sodium, phosphorus, and magnesium were lower in petite yeast (Fig. 4a). Copper was below the limit of detection in this analysis. To determine whether the levels of metals in yeast changed on exposure to MCHM, yeast were grown in YPD, and MCHM was then added to a final concentration of 550 ppm. The levels of iron did not change for either grande or petite yeast over 90 min, although the strain-specific differences noted above were still notable (Fig. 4b). mRNAs encoding siderophore transporters such as Arn1 and Arn2, as well as the Fit mannoproteins, which bind siderophores, were expressed at higher levels in petite yeast in YPD and MCHM compared to wild-type yeast (Supplemental Table 3). In contrast, the levels of zinc increased twofold in grande yeast but did not change in petite yeast over 90 min (Fig. 4c). Calcium and sodium increased with MCHM treatment in both strains, with sodium increasing at a slower rate in the petite yeast (Fig. 4d, e). Potassium, phosphorus, and magnesium also increased with MCHM treatment in grande, but not in petite, yeast (Fig. 4f-h). The levels of these ions are comparable to other studies [33]. After 90 min of MCHM exposure, five of the seven ions measured were significantly higher in the grande yeast (Table 1).

[Figure caption fragment: Plates were incubated at 30 °C for 3 days and then photographed.]

There is significant variation in growth among genetically diverse yeast strains in response to MCHM. In particular, YJM789, a yeast isolated from a human lung [23], was more sensitive than S96 at 500 ppm MCHM (Fig. 5a). Using a segregant collection of YJM789 and S96 that has been used to map genes contributing to differences in phenotypes between strains [8,24,25], quantitative trait loci (QTL) analysis was carried out. The growth rate of the segregant yeast strains in MCHM was used to assess the association of various parts of the genome with increased growth in MCHM (Fig. 5b). Several peaks were noted, but only one broad peak on chromosome nine passed the 95% confidence threshold. Within that peak, we identified YKE4, a polymorphic ZIP family zinc transporter [34] that plays a role in zinc homeostasis by transporting zinc between the cytoplasm and the secretory pathway [34] and is localized to the ER [31]. Yke4 from YJM789 contains two SNPs that change the protein's amino acid sequence (H5Q and F86L) compared to Yke4 from S288c (Fig. 5c). H5Q is located in the cytoplasmic signal peptide at the N-terminus, while F86L is at the C-terminal end of the first transmembrane domain, as predicted by TMHMM [35]. To further characterize the role of Yke4, YKE4 was deleted from S96 and YJM789. Deletion of YKE4 did not alter growth in the presence of MCHM in these strains (Fig. 5d). Zinc levels were measured in these strains and normalized to wild-type S96. Both YJM789 and the isogenic yke4 knockout strains had twice as much zinc as S96 (Fig. 5e). Yke4 is an ER-localized zinc transporter that plays a role in the intracellular trafficking of zinc and did not appear to regulate total zinc levels. There were other genomic peaks in the QTL linked to genetic variation in the MCHM response, which likely contribute to differences seen between these strains.
To assess the contribution of other proteins involved in metal transport, we utilized the yeast knockout collection to determine the impact on growth of deleting genes differentially regulated by MCHM. This collection is in BY4741, an S96-related strain background. In contrast to S96 and YJM789 yeast, the yke4 mutant in this background was sensitive to MCHM (Fig. 6a). There were no significant changes in expression of YKE4 induced by MCHM (Supplemental Table 1). However, ARN2 expression was higher in petite yeast, and, from the ICP-MS analysis, the endogenous levels of iron were also higher. The BY4741 arn2 knockout was more tolerant to MCHM (Fig. 6a). Growth on MCHM of the izh1 and izh2 knockouts, genes involved in zinc transport that were also differentially regulated, was not altered. Iron levels did not change with the addition of MCHM (Fig. 4b). However, zinc levels increased in the wild-type grande yeast but not in the petite yeast with MCHM exposure (Fig. 4c). We tested whether additional zinc could alleviate the growth inhibition by MCHM. Growth improved with the addition of 10 μM zinc sulfate in MCHM in both BY4741 and the yke4 knockout (Fig. 6b). However, at a higher zinc concentration (100 μM), growth of all yeast was inhibited when MCHM was added, while zinc sulfate at this concentration alone did not alter growth (Supplemental Fig. 5A). Curiously, when zinc was added to YPD without MCHM, the media became slightly opaque; after several days, the media cleared around yeast colonies. YPD is an undefined medium composed of yeast extract, peptone, and dextrose. Zinc could have induced the precipitation of an unknown compound or compounds that are solubilized by the growth of yeast on solid media; the precipitation of these media components may limit yeast growth at this higher zinc concentration. Therefore, we tested whether yeast knockouts of several known zinc transporters would change the response to MCHM. First, the zinc tolerance of the zrt1, zrt2, zrt3, and zrc1 knockout yeast was tested. Only at the highest levels of zinc sulfate did the zrc1 mutant grow less than the other strains (Supplemental Fig. 5A). The addition of 5 μM zinc sulfate completely rescued the reduced growth in the presence of MCHM (Supplemental Fig. 5B), whereas increasing zinc to 100 μM further suppressed the growth of most of the yeast tested on MCHM. MCHM is composed of a saturated hexane ring with a methyl group and a methanol group at opposite carbons (Fig. 7a). The methanol and methyl groups can be in the cis or trans conformation. These characteristics allow MCHM to act as a hydrotrope, a compound that can solubilize hydrophobic substances in aqueous environments. We thus considered MCHM's role in altering protein-membrane and protein-protein interactions, which may explain the impacts of MCHM on the transcriptome and ionome. In vitro protein aggregation assays were carried out with sodium xylene sulfate (NaXS), an industrial hydrotrope; ATP, a biologically relevant hydrotrope [29]; and MCHM. Compared to no treatment (aggregation set at 100% for no treatment at 45 s), NaXS reduced aggregation to 48% (p = 0.0064), ATP reduced aggregation to 3% (p = 0.00025), and 550 ppm MCHM reduced aggregation to 60% (p = 0.02). However, at 1 min of incubation, MCHM allowed full protein aggregation and was not distinguishable from untreated controls (p = 0.28), while NaXS and ATP continued to prevent protein aggregation (Fig. 7b).
Levels of zinc sulfate that rescued MCHM-induced growth inhibition increased aggregation by 60% (Fig. 7b). Zinc sulfate on its own caused nearly immediate aggregation of protein, which was not prevented by the addition of MCHM. We tested whether adding MCHM before zinc changed the rate of aggregation. When MCHM was added first, followed by zinc sulfate, protein aggregation showed no difference at 45 s but was the highest of all treatments tested at 1 min of incubation. Yeast have cell walls that protect cells from osmotic stress, and the cell wall can be easily removed to produce spheroplasts. However, unlike plants and fission yeast, spheroplasted budding yeast continue to divide their nuclei but do not undergo cytokinesis [36-38], leading to large multinucleated yeast. In this way, we can determine whether hydrotropes cause yeast to lyse when the cell wall is not providing rigid support. SDS, a commonly used detergent, reduced the optical density as yeast cells lysed. Sorbitol provides osmotic support and did not affect yeast growth (Fig. 7c). Treatment with the known hydrotropes ATP and NaXS did not alter the growth of spheroplasted yeast over 15 h, while the growth of spheroplasts treated with MCHM was arrested, but without lysis (Fig. 7c). The dose-dependent reduction of spheroplast growth likely mirrors the growth inhibition on plates with MCHM. The growth arrest in MCHM is reversible, as cells continue growing after MCHM is removed (data not shown). The subcellular localization of Zrt1, Zrt2, Zrt3, and Yke4 was monitored as cells were exposed to MCHM. Proteins were tagged at the N-terminus with mCherry [31], and cells were stained with calcofluor white to highlight the cell wall. Fluorescence of each protein remained diffuse, and no foci appeared after 90 min of exposure (Supplemental Fig. 6A). The endogenous promoters were replaced with a common constitutive promoter (TEF2), so any increased expression of Zrt1, Zrt2, and Yke4 when exposed to MCHM would likely be at the protein level rather than the mRNA level. Protein levels of the Zrt transporters in YPD change by no more than 20% under the TEF2 promoter, while Yke4 levels were fourfold higher than Yke4 under its endogenous promoter [31]. When treated with MCHM, protein levels of Zrt1, Zrt2, and Yke4 increased modestly, while Zrt3 levels decreased (Supplemental Fig. 6B). Levels of ZRT1 and ZRT2 mRNAs were decreased with MCHM (Supplemental Table 2).

Discussion

The loss of the mitochondrial DNA and treatment with MCHM had pleiotropic effects on yeast. Petite yeast responded to stresses, including MCHM, differently than grande yeast. From the RNA-seq analysis, iron and zinc transporters were differentially regulated in petite and grande yeast in response to MCHM. In petite yeast, levels of zinc were lower while iron levels were higher. Transcriptomics pointed to metal transporters, while genetic analysis uncovered genetic variation in Yke4, an internal zinc transporter, as a contributor to the MCHM response. On non-fermentable carbon sources, yke4 mutants do not grow in the presence of excess zinc [34], further highlighting the importance of internal zinc transport; beyond this, the yeast strains differed in many other ways. To address how MCHM could affect the wide range of biochemical pathways seen, MCHM was tested and shown to be a hydrotrope in vitro, which could exert cell wall or membrane stress. Petite yeast grew more slowly and were especially sensitive to growth inhibition by MCHM. This may be in part due to the altered ionome of petite yeast.
This includes higher levels of iron, from increased expression of iron transporters, and lower endogenous zinc levels. These petite yeast were induced by loss of the mitochondrial genome, and petite yeast caused by other types of mutations also had differences in internal metals and in regulation of the iron regulome [39]. There is an interplay between metal levels, as zinc transporters are also important for responding to high levels of copper [25]. Zrt1 protein levels increase in response to high levels of copper [25], and, in contrast to the mRNA, Zrt1 under the control of a generic promoter and 5′UTR increased protein expression by 66% with MCHM treatment. While genetic variation in Zrt2 contributes to copper tolerance [25], Zrt2 protein levels also increased by 26% with MCHM exposure. Supplementation with zinc alleviates copper-induced growth inhibition as well as MCHM growth inhibition. The levels of sodium, calcium, phosphate, and magnesium also increased, suggesting drastic changes at the cell wall and membrane in response to MCHM. Although mRNAs of the iron acquisition pathway were increased with MCHM treatment, there was only a modest change in iron levels. Other stresses, such as starvation induced by rapamycin, also induce the iron regulon [40]. Deletion of arn2 improved growth; Arn2 is localized to the ER, suggesting a role in the subcellular localization of iron in yeast [31]. MCHM-treated yeast are not starved for iron, as GTL1 and GDH3 expression was not altered; Gtl1 and Gdh3 are iron-dependent enzymes that are downregulated in iron-limiting media and upregulated in iron-replete media [41]. The addition of zinc rescued growth of yeast on MCHM at low levels of zinc (5-10 μM), while levels above 100-500 μM in combination with MCHM drastically reduced growth. Therefore, it appears that there is an optimum level of these metals for ameliorating the effects of MCHM. Based on the RNA-seq and QTL analyses, two zinc transporters were identified as having an important role in the MCHM response. We found no correlation between levels of internal zinc and poor growth on MCHM, possibly because the subcellular localization of zinc, rather than the absolute level of the metal, was critical to growth. There could also be a period of adjustment that could not be captured due to the different time points at which metal levels and growth were measured: metal levels were measured at 30 min of exposure, while growth was measured after 2 days. YKE4 expression levels do not differ between YJM789 and S96 in unperturbed cells [42], and there are no SNPs in the 5′UTR or 3′UTR [43], suggesting that the polymorphisms in Yke4 itself contributed to MCHM sensitivity in addition to other genetic differences. While BY4741 yke4 was MCHM sensitive, the deletion in S288c and YJM789 had no effect. There are hundreds of genetic differences between BY4741 and S288c, and thousands in YJM789 [44,45].
The pleiotropic effects of MCHM on yeast, combined with the many smaller yet not quite statistically significant peaks in the QTL, point to MCHM resistance as a polygenic trait likely spread throughout the genome. Other yeast strains have the H5Q polymorphism, and a subset of those also have the F86L polymorphism. To separate the roles of transcription and 5′UTR-dependent regulation, the promoter and 5′UTR of YKE4 were replaced. With MCHM treatment, Yke4 protein levels increased 20%. A total of 582 proteins are predicted to bind zinc, with 20 proteins binding 90% of total cellular zinc [14], and zinc sparing ensures that essential Zn-binding proteins have zinc. As YJM789 normally expresses less Zrt1 protein [25], perhaps the internal levels of zinc and the ability of the yeast to quickly redistribute zinc in MCHM alter their ability to grow. Hydrotropes in cells prevent protein aggregation but, unlike surfactants, work at millimolar concentrations and display low cooperativity. In addition to changes in protein levels, post-translational modifications, and subcellular location, changes in protein conformation also regulate protein function. Protein aggregation is generally thought to inactivate proteins. Protein aggregation includes prions, which increase phenotypic plasticity without changing genetic diversity [46]. Intrinsically disordered regions of proteins can separate proteins without being membrane-bound, which is an important step in RNA granule formation [47]. Transmembrane proteins such as the Zrt, Yke4, and Arn transporters have multiple extracellular and intracellular loops that would also be disordered regions. MCHM slowed the aggregation of proteins in an in vitro assay, and zinc sulfate, which induced protein aggregation on its own, appeared to increase the rate of aggregation rather than prevent it. MCHM showed hydrotrope activity similar to NaXS, an industrial hydrotrope, but was not as potent as ATP. Transcriptomics carried out after 90 min, approximately one generation in yeast, detected changes in mRNAs encoding metal transporters. Zinc is required for the synthesis of both cell walls and phospholipids [34,48]. Exposure of yeast to MCHM increased intracellular sodium levels, yet yeast do not actively accumulate sodium [49], further supporting the idea that MCHM alters protein structures to either increase the bioavailability of ions or alter their transport across cell membranes. MCHM altered the levels of ions in the cell at the earliest time points. Therefore, given the diverse metals that changed during MCHM exposure, we conclude that this likely occurs through altered conformations of many proteins at the cell membrane. Proteins and organelles are increasingly found in altered conformations that change local concentrations of proteins in cells (reviewed in [50]). Inside the cell, MCHM could alter how molecules interact in liquid droplets, changing the function of proteins and metabolites in the cell.
Care Bundles to Improve Hemoperfusion Performance in Patients with Severe COVID-19: A Retrospective Study

Background/Objectives: Hemoperfusion (HP) is employed to modulate cytokine storms in severe coronavirus disease 2019 (COVID-19) patients and requires careful attention for success and safety. Therefore, we investigated whether our care bundles could enhance HP performance. Methods: We conducted a retrospective cohort study of adult patients (≥20 years old) with severe COVID-19 pneumonia. In the first wave (Phase I), we identified HP-related issues and addressed them with care bundles in the second wave (Phase II). The care bundles included early temperature control, precise hemodynamic monitoring, and clot prevention measures for the HP membrane. The HP success rate and associated adverse events (AEs) were assessed between the two phases. Results: The study included 60 HP (HA330) sessions from 27 cases (Phase I: 21 sessions from 9 cases; Phase II: 39 sessions from 18 cases). Patient characteristics and treatments for COVID-19 were similar, except for baseline body temperature (BT) and heart rate (HR). Phase II showed a higher success rate (67% vs. 89%, p = 0.19), although it did not reach statistical significance. Phase I recorded a significantly higher frequency of AEs (3 [IQR 1, 4] events/case vs. 1 [IQR 0, 2] events/case, p = 0.014). After implementing the care bundles, hypothermia significantly decreased (78% vs. 33%, p = 0.037), with an adjusted odds ratio of 0.15 (95% CI 0.02-0.95, p = 0.044) after adjusting for baseline BT. Conclusions: Further exploration with a larger sample size is required to establish the advantages of care bundles. However, implementation of the bundles significantly improved hypothermia prevention.

Introduction

The coronavirus disease 2019 (COVID-19) pandemic ushered the world into a chaotic situation [1,2]. The emergence of this new contagious disease made COVID-19 a burdensome condition, and numerous countries experienced a surge in the number of patients, resulting in high rates of morbidity and mortality [3-6].

HP serves as a rescue therapy for severe COVID-19, countering cytokine storms caused by the virus [17-19]. However, it demands an expert team and carries the risk of periprocedural adverse events (AEs). During the first wave of the pandemic (Phase I), problems related to HP care were identified; these were addressed in the subsequent wave (Phase II) by implementing care bundles, which aided nurses in conducting HP effectively. We investigated HP success rates and associated AEs before and after implementing the care bundles in severe COVID-19 patients.

Study Design

We conducted a retrospective observational study using data from COVID-19 patients treated during the period from 12 April 2021 to 27 January 2022 (Phase I: 12 April 2021 to 19 May 2021, and Phase II: 14 July 2021 to 27 January 2022, with 63 and 180 patients, respectively). The study received approval from the Research Ethics Committee of the Faculty of Medicine, Chiang Mai University, Thailand (study code: NUR-2566-0444). The study was performed following the Declaration of Helsinki, which outlines ethical principles for medical research involving human subjects. An informed consent waiver was approved by the Research Ethics Committee of the Faculty of Medicine, Chiang Mai University, given the minimal risk and anonymous data analysis intrinsic to the retrospective nature of this study.
We retrieved the information on those patients with severe COVID-19 who were admitted to the ten-bed intensive care unit (ICU) of the Chiang Mai Neurological Hospital, which was under a joint memorandum of operation with staff from the Faculty of Medicine of Chiang Mai University.

Inclusion and Exclusion

Participants were considered eligible if they were adults aged 20 or older; were hospitalized with severe COVID-19 pneumonia, as defined by a score from six to nine on the World Health Organization (WHO) ordinal scale [20]; and received treatment with a high-flow nasal cannula (HFNC), non-invasive ventilation (NIV), or invasive mechanical ventilation (IMV). They also needed to be confirmed positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) by a reverse transcription polymerase chain reaction (RT-PCR) test on respiratory tract specimens, to present evidence of pulmonary infiltration on chest X-ray images, and to have undergone HP treatment during their ICU stay. Exclusion criteria were patients who had consented to limited treatment and advance directives for medical therapy.

Data Collection

The data were retrieved from the electronic medical records by the first author, SM. The information included the patients' demographics, pre-existing comorbidities, vital signs, severity of illness, and laboratory test results at the initial phase of HP therapy. Throughout each patient's ICU stay, we documented treatments and types of respiratory support. The ICU mortality rate, hospital mortality rate, ICU length of stay, and hospital length of stay were examined. Furthermore, we reviewed HP-related information, including the initiation date of HP relative to the admission date, the total number of HP sessions, the total duration of HP operation, the success rate of HP, and HP-related AEs.

Study Outcomes

The primary outcome was the success rate of HP before and after the implementation of care bundles, with success defined as completion of a four-hour HP session without interruptions that significantly affected the HP procedure. Other outcomes, relating to HP-associated AEs including shivering, cardiac arrhythmia, hypotension, hypertension, hypothermia, and cartridge clotting, were compared between the phases.

Standard of Care for Severe COVID-19 Patients

All patients received oral favipiravir or intravenous remdesivir, depending on the severity at hospitalization, and were switched to remdesivir if the disease was determined to be progressing. Concomitant intravenous systemic corticosteroid, primarily dexamethasone or an equivalent dose of hydrocortisone, methylprednisolone, or prednisolone, was given as determined by the attending physician. Tocilizumab was optionally provided in severe cases. Empirical antibiotics were administered if indicated. Respiratory support was offered using HFNC, NIV, or IMV, as appropriate based on the patient's respiratory status.

Hemoperfusion Setting and Prescription

HP was conducted during the early stages of hospitalization in patients with progressive acute hypoxemic respiratory failure (AHRF) with a PaO2/FiO2 < 200 and positive evidence of systemic hyperinflammation (lymphopenia < 1000 cells/µL or a high level of C-reactive protein (CRP) > 30 mg/L) despite standard therapy for COVID-19. The use of HP was determined by the attending intensivist and nephrologist, and HP was performed by the ICU nurses after proper establishment of vascular access using an 11.5 Fr double-lumen catheter.
The hemoadsorption HA330 cartridge (Jafron Biomedical, Zhuhai, China), integrated with the HP machine, was utilized in our center. We primed the cartridge with 5000 IU of unfractionated heparin (UFH) for 30 min. We then rinsed the cartridge with 4 L of 0.9% normal saline at a flow rate of 100 mL/min. Since most patients had already received low-molecular-weight heparin, a standard prophylactic therapy for hospitalized COVID-19 patients, no additional UFH was administered during the HP session. We set the HP temperature at 37.0 °C and initiated the blood flow rate (Qb) at 80 mL/min. We gradually increased the Qb to 150 to 200 mL/min within ten minutes. We recommended a four-hour period of HP for at least two sessions, 24 h apart. However, one to four sessions might have been merited, depending on the patient's severity.

Care Bundles for Hemoperfusion

HP care-related issues discovered in the first wave of the pandemic (Phase I) were addressed during the second wave (Phase II) by specialized ICU nurses trained in renal replacement therapy. The AEs in Phase I included shivering, cardiac arrhythmia, hypotension, hypertension, hypothermia, cartridge clotting, and circuit shattering. These AEs were subsequently addressed, and care bundles were implemented to assist nurses in performing HP. The strategies to promote a completed four-hour HP session included early temperature control, precise hemodynamic monitoring with early management, and measures for clot prevention in the HP membrane. We monitored each patient's vital signs regularly before the HP procedure and at 5, 15, 30, 45, 60, 120, 180, and 240 min thereafter. Additionally, nurses actively monitored for AEs and promptly engaged physicians for necessary management, as shown in Table 1.

Statistical Analysis

Continuous data were summarized as medians and interquartile ranges (IQR). Categorical variables were summarized as numbers and percentages. The Wilcoxon rank-sum test was employed for comparisons of continuous variables, and Fisher's exact test for categorical variables. Univariable logistic regression analysis was used to assess the association of the care bundles, as the independent variable, with the dependent outcome. The dependent variables were entered into the model one by one in binary form (yes/no), and included the success rate and each AE. Results were reported as odds ratios (OR) with 95% confidence intervals (95% CI). Given the constraints of the limited sample size, we restricted model refinement to pertinent variables that differed at baseline and were related to the outcome of interest, such as adjusting for body temperature in the model in which the outcome was hypothermia. A p-value of less than 0.05 was considered statistically significant. Data were analyzed using STATA version 16.0 (Stata Corp LP, College Station, TX, USA).

Demographics, Clinical Features, Treatment, and Outcomes

We included all patients with severe COVID-19 who underwent HP at our center. Twenty-seven cases were involved in the study (Phase I: n = 9 and Phase II: n = 18). Table 2 summarizes the baseline characteristics of the patients. When comparing Phase I and Phase II, there were no significant differences in patients' demographics, including age and gender, with median ages of 63 [IQR 53, 67] years vs. 58 [IQR 52, 67] years, p = 0.81, and females comprising 56% vs.
28%, p = 0.22, respectively. Additionally, the results of laboratory investigations, including absolute lymphocyte count, D-dimer, C-reactive protein, and interleukin-6, did not differ between phases (all p > 0.05). Table 2 provides further information about the treatments administered during ICU admission. Of note, there were no significant differences in the treatments, types of respiratory support, or patient outcomes between the two phases (all p > 0.05).

HP-Related Information and Outcomes

All HP-related information is summarized in Table 3. The time to first HP initiation from admission did not differ between phases (1 [IQR 1, 2] day vs. 3 [IQR 2, 6] days, p = 0.08). In total, there were 60 HP sessions across 27 cases (Phase I: 21 sessions/9 cases and Phase II: 39 sessions/18 cases), with no significant differences in the median HP sessions per case or the total duration of HP operation between phases (2 [IQR 2, 3] sessions vs. 2 [IQR 2, 3] sessions, p = 0.78, and 480 [IQR 420, 720] minutes vs. 480 [IQR 480, 720] minutes, p = 0.91). The success rate of HP per case, calculated as the number of cases that fully completed a 4 h HP session divided by the number of cases in each phase, was slightly higher in Phase II (Table 3). However, it did not reach statistical significance (67% vs. 89%, p = 0.19). The result was similar when success was considered per session, calculated as the number of completed 4 h sessions divided by the number of sessions in each phase (81% vs. 95%, p = 0.11). The total number of AEs was 49 events (Phase I: 26 events and Phase II: 23 events). The median number of AEs per case was significantly higher in Phase I than in Phase II (3 [IQR 1, 4] events/case vs. 1 [IQR 0, 2] events/case, p = 0.014). The distribution of AEs per case also differed between phases (p = 0.039), as summarized in Table 3.

Discussion

Our study offers insights drawn from the first wave of the pandemic that were subsequently applied to the second wave, involving cytokine reduction using HP with HA330 for severe COVID-19 patients. We incorporated several technical measures to improve HP performance, including early temperature control, regular hemodynamic monitoring, ongoing surveillance for AEs, and timely contact with physicians to provide essential interventions when necessary. Although our approach did not lead to a significantly greater success rate of HP, it significantly reduced the number of AEs, particularly the incidence of hypothermia.

Interestingly, a slightly higher proportion of patients underwent HP therapy in Phase I compared to Phase II, although the difference lacked statistical significance. The proportion of patients receiving HP during these two phases was 14.3% (9/63 cases) vs.
10.0% (18/180 cases), respectively, with a p-value of 0.36. One factor contributing to the reduced utilization of HP in Phase II was the necessity of vascular access and specialized nurse support for continuous bedside HP operations lasting at least four hours. This additional complexity rendered HP more intricate than mere medication administration.

Moreover, as time progressed, the knowledge supporting best practices for treating COVID-19 patients continued to evolve. We noticed some disparities in patient treatment between the two phases. The administration of systemic corticosteroids showed a tendency towards a longer duration in Phase II (6 [IQR 5, 8] days vs. 12 [IQR 9, 16] days, p = 0.06). Additionally, the use of tocilizumab, an IL-6 receptor inhibitor, was more prevalent in Phase II (11% vs. 44%, p = 0.09).

In total, there were 60 HP sessions in our study (Phase I: 21 sessions and Phase II: 39 sessions). The comparative success rate of HP between Phase I and Phase II showed no statistical difference whether considered per case or per session (67% vs. 89%, p = 0.19 and 81% vs. 95%, p = 0.11, respectively). Remarkably, the introduction of the HP care bundles led to a substantial decrease in the median number of AEs (3 [IQR 1, 4] events/case vs. 1 [IQR 0, 2] events/case, p = 0.014). While shivering, cardiac arrhythmia, hypotension, and hypertension did not exhibit significant differences, hypothermia demonstrated a statistically significant reduction (78% vs. 33%, p = 0.037), with an OR of 0.15 (95% CI 0.02–0.95, p = 0.044) when adjusted for baseline BT. This result implies that the implementation of the care bundles during HP effectively mitigated the occurrence of hypothermia.

Although the ICU mortality rate was marginally lower in Phase II, there was no statistically significant difference between the two phases (33% vs. 28%, p = 0.55). One plausible explanation could be the slightly delayed initiation of HP compared to Phase I, with the median time to first HP initiation from ICU admission being 1 (IQR 1, 2) day vs. 3 (IQR 2, 6) days, p = 0.08. A delay in HP initiation could potentially impact the efficacy of HP. Exploring the effectiveness of cytokine modulation through a combination of early HP and cytokine-reducing medication could offer valuable insights for future research. Conversely, comparing medication alone to early HP could present another area of interest.

It is well established that cytokine storms contribute to endothelial dysfunction, trigger microvascular thrombosis, and lead to organ dysfunction such as acute respiratory distress and acute kidney injury [21]. Consequently, the cytokine storm is a major contributor to the increased mortality rate among critically ill patients with COVID-19 [22]. Therefore, a strategic intervention aimed at the timely and effective clearance of cytokines through HP may improve outcomes for severe COVID-19 patients [10,[17][18][19].

One of the largest investigations of CytoSorb was the CYCOV trial [27], which revealed that the level of IL-6 at 72 h after ECMO initiation did not significantly differ between the HP + ECMO group (n = 17) and the ECMO-alone group (n = 17). However, the 30-day survival rate was significantly lower in the HP + ECMO group (18% vs. 76%, p = 0.002). Caution is required when considering HP during the early phase of ECMO therapy, as HP can also remove anti-inflammatory cytokines, which might potentially contribute to this adverse outcome.
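Returning to the adjusted odds ratio reported above for hypothermia, a univariable logistic model with a single covariate can be fit in a few lines. The sketch below uses Python's statsmodels on synthetic data; it illustrates the model structure (hypothermia regressed on the bundle indicator, adjusted for baseline BT), not the study's actual STATA analysis or data.

# Sketch of the adjusted-odds-ratio analysis described above: hypothermia
# (yes/no) regressed on phase (0 = Phase I, 1 = Phase II with bundles),
# adjusted for baseline body temperature. All data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                    # 60 HP sessions in the study
phase = rng.integers(0, 2, n)             # bundle indicator
baseline_bt = rng.normal(37.5, 0.6, n)    # baseline body temperature (C)
# Synthetic outcome: hypothermia less likely with bundles / higher BT.
logit = 1.5 - 1.9 * phase - 0.8 * (baseline_bt - 37.5)
hypothermia = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([phase, baseline_bt]))
fit = sm.Logit(hypothermia, X).fit(disp=0)
or_, ci = np.exp(fit.params[1]), np.exp(fit.conf_int()[1])
print(f"adjusted OR (bundles) = {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")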
To our knowledge, one large retrospective study (n = 128) compared the use of HP with HA330 (n = 46) and CytoSorb (n = 9) to matched control patients (n = 73) [19]. This study showed a lower ICU mortality in the HP group than in the control group (67% vs. 89%, p = 0.002) [19]. The HP group also exhibited more favorable outcomes in terms of a shorter ICU length of stay, greater improvements in SpO2, and greater reductions in PaCO2 when compared to the control group [19]. However, it is worth noting that the mortality rate in this study was somewhat higher than in our study (30%). Nonetheless, intubation rates were similar, with approximately three-quarters of patients being intubated.

Another study employing HA330 was conducted by Surasit et al., in which HP was performed for three or more sessions (n = 15) and compared to no HP or fewer than three sessions of HP (n = 14) [17]. The results indicated that patients who received three or more HP sessions achieved a reduction in organ dysfunction (as measured by the SOFA score), a decrease in pulmonary infiltration on chest X-ray images, and lower CRP levels. Furthermore, the HP group outperformed the control group in terms of ICU mortality and 28-day mortality (13.3% vs. 92.9%, p < 0.001 and 13% vs. 86%, p < 0.001, respectively).

Although several studies have demonstrated the advantages of HP for severe COVID-19 patients [17,19,[23][24][25][26][27][28][29], a notable deficiency exists in the available evidence supporting best practices for enhancing HP performance. In addition, the overwhelming number of patients with severe COVID-19 led to a widespread shortage of healthcare professionals. In response, some centers eventually enlisted multidisciplinary healthcare workers who were not necessarily specialized in critical care medicine. Our center encountered this challenge as well; consequently, the execution of HP operations could occasionally present difficulties. The guidance of experienced nurses, coupled with comprehensive care bundles encompassing all essential elements, became indispensable. It is worth noting that this study originates from a resource-limited setting in which the implementation of HP can be financially burdensome. Given that the success rate of HP operations could potentially influence the survival of severe COVID-19 patients, it is imperative to develop a strategy for enhancing HP performance.

Hypothermia, or suboptimal thermal regulation, represents a noteworthy challenge in the context of extracorporeal organ support. Studies have indicated that hypothermia affects nearly half of all patients undergoing continuous renal replacement therapy (CRRT) [30,31]. This condition demands attention because it can exacerbate patients' thermal instability, thereby heightening susceptibility to sepsis; precipitate chills; and induce arrhythmias and hemodynamic instability. Furthermore, individuals experiencing hypothermia during CRRT face a notable increase in mortality risk, with rates reaching up to 60% [31]. The incidence of hypothermia can be expected to be similar for HP. Although certain centers may exhibit a heightened susceptibility to hypothermia, it is important to note that some HP machines, including those in our center, lack integrated warming capabilities. Therefore, a protocol to monitor patients' body temperatures is necessary to prevent this complication.
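Such a protocol can be encoded directly as a checkpoint schedule. The sketch below uses the time points from our Methods; the hypothermia threshold and the data layout are illustrative assumptions, not part of the study protocol.

# Toy encoding of the bundle's vital-sign schedule: checks before HP and at
# 5, 15, 30, 45, 60, 120, 180, and 240 minutes after initiation (from the
# Methods). The alert threshold is an assumption for illustration.
CHECKPOINTS_MIN = [0, 5, 15, 30, 45, 60, 120, 180, 240]
HYPOTHERMIA_THRESHOLD_C = 36.0  # assumed cutoff for triggering rewarming

def review_session(bt_by_checkpoint):
    """bt_by_checkpoint: dict mapping minute -> body temperature (C)."""
    alerts = []
    for t in CHECKPOINTS_MIN:
        bt = bt_by_checkpoint.get(t)
        if bt is None:
            alerts.append((t, "missing measurement"))
        elif bt < HYPOTHERMIA_THRESHOLD_C:
            alerts.append((t, f"hypothermia: {bt:.1f} C, notify physician"))
    return alerts

print(review_session({0: 37.2, 5: 36.9, 15: 35.8, 30: 36.1, 45: 36.4,
                      60: 36.6, 120: 36.8, 180: 37.0, 240: 37.1}))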
We conducted frequent monitoring of each patient's vital signs before the HP procedure and at intervals of 5, 15, 30, 45, 60, 120, 180, and 240 min thereafter. This standardized monitoring of vital signs before and after the initiation of HP was not fully established in Phase I. There was an understanding that vital signs were observed every 15 min during the initial hour of HP and every 60 min thereafter, in accordance with the tailored nursing approaches of each unit. Nevertheless, to foster uniformity in nursing practices, efforts were made to integrate these practices into a formalized protocol or agreement, particularly in Phase II. Regular monitoring of vital signs prompted nursing action during HP. Furthermore, the comprehensive bundles facilitated timely physician consultation in cases where initial management fell short.

An additional issue that impacted the effectiveness of HP was the formation of blood clots within the membrane. Furthermore, blood loss of approximately 285 mL (HA330 = 185 mL and circuit = 100 mL) may occur during circuit disposal. It is therefore advisable to promptly address any early indications of membrane clotting. This can be achieved by regularly checking vascular access, adjusting patient positioning, closely monitoring for any increase in transmembrane pressure (TMP), and further examining clot size when TMP rises. In advanced stages, where machine operation is compromised by clotting, a more vigorous blood return is recommended to minimize blood loss whenever feasible.

Our study had some limitations. Firstly, the small number of patients suitable for the investigation of the study hypothesis, centered on the care bundles to enhance HP performance, precluded a comprehensive assessment of HP benefits, including success rates and other measures of efficacy such as disease progression, intubation, and mortality rates. Further investigations with larger sample sizes could provide more insight into the impact of the care bundles, in conjunction with a standardized treatment protocol, including the type and duration of corticosteroid treatment or the use of an IL-6 receptor inhibitor, on patients' survival outcomes. In terms of efficacy, however, we observed a significant reduction in IL-6, an important surrogate marker of disease severity, following HP treatment. Across all phases, IL-6 levels decreased from 75 (IQR 29, 109) pg/mL to 25 (IQR 9, 60) pg/mL, p < 0.001. This reduction was consistent across both phases. In Phase I, IL-6 levels dropped from 79 (IQR 39, 86) pg/mL to 35 (IQR 9, 60) pg/mL, p < 0.005. In Phase II, IL-6 levels dropped from 68 (IQR 29, 109) pg/mL to 25 (IQR 13, 27) pg/mL, p = 0.03. Secondly, the improved HP performance in Phase II might be attributed to the experience gained during Phase I. Although experienced nurses managed the care bundles, it was not always feasible to have specialized nurses constantly available; therefore, the designated nurses were justified in following these care bundles. Thirdly, the retrospective nature of our study may introduce biases due to missing data. Regrettably, during the study period, our facility encountered constraints in conducting cytokine analyses, particularly for IL-6 during Phase II. Hence, we offer the available data while recognizing certain limitations. Nevertheless, we note that the initial IL-6 levels in both phases were comparable. Even so, we posit that certain surrogates, such as absolute lymphocyte count, D-dimer, and C-reactive
protein levels, could reasonably approximate cytokine storms or disease severity. Moreover, it appears that IL-6 levels themselves may not have a direct impact on the success or failure of HP operations. Lastly, we could not determine how the care bundles affected COVID-19 patients versus those with septic shock. We occasionally perform HP on septic shock patients, but we do not have enough data for this comparison.

Conclusions

Further investigation with a larger sample size is necessary to confirm the benefits of the care bundles. Nonetheless, the utilization of these care bundles has been shown to notably improve the safety of HP, with a specific focus on the successful prevention of hypothermia.

Institutional Review Board Statement: The study was approved by the Research Ethics Committee of the Faculty of Medicine, Chiang Mai University, on 16 November 2023. The study was performed in accordance with the Declaration of Helsinki, a statement of ethical principles for medical research involving human subjects.

Informed Consent Statement: An informed consent waiver was approved by the Research Ethics Committee of the Faculty of Medicine, Chiang Mai University, due to the minimal risk and anonymous data analysis intrinsic to the retrospective manner of this study.

Figure 1. Risk of hypothermia after hemoperfusion care bundle implementation. Blue dots and horizontal lines with bars indicate odds ratios and 95% confidence intervals, respectively.

Table 1. Adverse events and care bundles for hemoperfusion. * The interleukin-6 levels were aggregated from a subset of six cases during Phase II due to resource constraints in conducting cytokine analyses. Continuous data are presented as median (IQR). Abbreviations: HP, hemoperfusion; n/a, not applicable.
2024-06-09T15:18:28.282Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "9e6e9a9360c2a1e0729ffc139dd9ae3c5da2ead1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/jcm13123360", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "28509ea83bc75f3a56032531572e18b5f923954c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
8941685
pes2o/s2orc
v3-fos-license
Conformationally Selective RNA Aptamers Allosterically Modulate the β2-Adrenoceptor

G-protein-coupled receptor (GPCR) ligands function by stabilizing multiple, functionally distinct receptor conformations. This property underlies how "biased agonists" activate specific subsets of a given receptor's signaling profile. However, stabilization of distinct active GPCR conformations to enable structural characterization of the mechanisms underlying GPCR activation remains difficult. These challenges have accentuated the need for receptor tools that allosterically stabilize and regulate receptor function via unique, previously unappreciated mechanisms. Here, utilizing a highly diverse RNA library combined with advanced selection strategies involving state-of-the-art next-generation sequencing and bioinformatics analyses, we identify RNA aptamers that bind a prototypical GPCR, the β2-adrenoceptor (β2AR). Using biochemical, pharmacological, and biophysical approaches, we demonstrate that these aptamers bind with nanomolar affinity at defined surfaces of the receptor, allosterically stabilizing active, inactive, and ligand-specific receptor conformations. The discovery of RNA aptamers as allosteric GPCR modulators significantly expands the diversity of ligands available to study the structural and functional regulation of GPCRs.

INTRODUCTION

G protein-coupled receptors (GPCRs) are the superfamily of cell-surface, seven α-helical transmembrane-spanning receptors, with over 800 members identified in the human genome [1][2][3][4]. GPCRs are the targets of one-third of all pharmaceutical agents currently available on the market for the treatment of a wide range of health problems, including cardiovascular disease, neurological disorders, asthma, and immune system dysfunction 1,3. In response to agonist binding, GPCRs undergo conformational changes that activate intracellular signaling cascades and effector systems via coupling to G proteins and G protein-independent transducers such as β-arrestins 2,5,6. Importantly, these two signaling pathways can be pharmacologically separated through the use of "biased" agonists that preferentially activate one signaling arm over the other, potentially leading to therapeutics with more targeted efficacy and enhanced safety profiles [5][6][7][8]. Indeed, work over the past decade has produced a list of biased agonists for several GPCRs, and some of these biased agonists have even entered late-stage clinical trials for various disease conditions [6][7][8]. The development of such biased ligands depends on a detailed understanding of the structural basis of different signaling GPCR conformations. Numerous biophysical studies have demonstrated that GPCRs are dynamic allosteric machines that exhibit conformational heterogeneity in both ligand-occupied and ligand-free states [9][10][11]. These studies support a multi-state model for GPCR activation in which receptors adopt multiple active or inactive conformations, and specific ligands have a propensity to stabilize distinct conformational states and elicit ligand-specific activity. Therefore, structural information is essential to improve our understanding of the nature of ligand-specific receptor conformations and the mechanism by which these allosteric conformational changes are transmitted to transducers to initiate downstream signaling.
Although recent crystal structures of multiple GPCRs have provided significant atomic-level structural information [12][13][14][15][16], major challenges still exist in using X-ray crystallography to study the structures of GPCRs. These challenges stem primarily from the inherent flexibility and biochemical instability of functionally active conformational states 9,11,15,17. X-ray crystallography of GPCRs in the absence of stabilizing agents tends to capture lower-energy, thermodynamically stable inactive structures, even in the presence of high-affinity or covalently tethered orthosteric-site agonists, thus missing functionally active signaling conformations 11,17,18. Expanding the chemical profile of GPCR ligands has the potential both to aid in the development of biased drugs for various therapeutically important GPCRs and to provide molecular tools for structural and biophysical applications. Given their molecular diversity, ability to adopt unique 3D structures, lack of immunogenicity, and ease of chemical modification, RNA aptamers are emerging as valuable pharmacologic agents and conformation sensors for various targets [19][20][21][22][23][24][25][26][27][28][29][30][31][32]. While aptamers targeting a variety of molecules ranging from small molecules to whole cells have been identified, few studies have described the selection of RNA aptamers against membrane proteins such as GPCRs 20,[24][25][26][27]. Additionally, most of these studies utilized traditional selection strategies, specifically using complex cellular systems as targets and characterizing the most abundant aptamers after selection by conventional cloning methods. We hypothesized that isolating RNA aptamers with defined conformational specificities for GPCRs would require precise control of the selection conditions and more sensitive methods for analyzing clones. Here, we describe an integrated approach to discover conformationally specific RNA aptamer allosteric modulators for the β2-adrenoceptor (β2AR) 2, a model GPCR system, involving next-generation sequencing (NGS) 33,34 and comparative bioinformatics analysis of parallel selections against purified β2AR in different states. The resulting set of aptamers exhibits distinct preferences for binding to various β2AR conformational states with high affinity and selectivity, as determined using a combination of biochemical, functional, biophysical, and structural methods. Thus, our study reveals the potential of RNA aptamers to serve as molecular tools for elucidating the structural and mechanistic details underlying GPCR activation, as well as for developing improved therapeutics.

Preparation of the β2AR target

The β2AR is a prototypic and well-characterized member of the GPCR family. It was the first ligand-binding GPCR to be cloned, and its structure has been solved at high resolution in the active state and in complex with G protein 2,12,13,16. Purification of functionally active receptor and stabilization of purified GPCRs are major challenges in the field of GPCR biology research. We prepared β2AR from baculovirus-mediated expression in Spodoptera frugiperda (Sf9) insect cells via solubilization in detergent and a three-step affinity purification procedure (as previously described 4; see online Methods).
Purification of the β2AR to homogeneity was achieved primarily through the alprenolol affinity purification step, which selectively isolates functional receptors from non-functional receptors incapable of binding radioligand 2,4. We maintained the purified receptor in maltose-neopentyl glycol (MNG) 35, an amphiphilic detergent that enhances receptor stability. In order to lock the receptor into an active conformation for the selection of RNA aptamers, an agonist of high affinity and extremely slow off-rate, BI167107 12,36, was used (Fig. 1a).

We combined in vitro selection with NGS and comparative bioinformatics analysis to efficiently identify candidate aptamer binders with desired functional properties (Fig. 1a; see Supplementary Results and Supplementary Fig. 1 for a detailed selection schematic) [31][32][33][34]. To isolate unique RNA aptamers that bind at structurally relevant sites on the β2AR, we performed nine rounds of positive selection against unliganded β2AR and high-affinity agonist (BI167107)-bound β2AR. Prior to each round of positive selection, we performed negative selection to deplete filter- and other non-target-binding RNA molecules. In order to further enrich the population for aptamers that bound our targets, at round five we performed a counter-selection against a non-target membrane protein, an inactive angiotensin receptor subtype 1a (AT1aR). Enrichment of target-specific sequences was monitored by measuring bulk equilibrium dissociation constants (Kd) of successive aptamer pools for binding to the two β2AR selection targets via a nitrocellulose filter binding assay (Supplementary Fig. 2). We found that the initial library and the pools from R1 through R4 exhibited minimal binding, but we observed a noticeable increase in the binding affinity of the R5 pool for each selection target (β2AR and β2AR:BI167107). Accordingly, all selected RNA pools from R5 onward exhibited a progressive increase in binding affinity, with the most prominent enrichments occurring between rounds 6 and 9 (Supplementary Fig. 2).

NGS of the eluted aptamer pools improved the resolution of the data over that of the traditional single-clone Sanger sequencing method. The traditional sequencing method limits sampling to a potentially poorly representative portion of the clonal space (usually a few hundred clones), whereas NGS samples millions of sequences across successive rounds of selection. The ability to sample a large proportion of clonal space via NGS not only improves the power of the selection, but also reduces the risk of capturing non-specific or poorly representative clones. High-throughput sequencing (HTS) was accomplished by preparing multiple barcoded, Illumina-compatible dsDNA fragment libraries derived from each pool and subjecting them to multiplexed paired-end sequencing analysis on the Illumina HiSeq 2000 platform (Supplementary Fig. 3). The use of multiple barcodes allowed us to analyze all aptamer pools in a single flow cell lane. We obtained a total of 1,180,685 raw sequences from all pools. During the initial bioinformatics analysis, we observed a major decrease in the sequence diversity of pools over the course of the selections and an increase in copy numbers among the top, most frequent unique sequences, indicating enrichment of target-specific binders. To identify β2AR-specific aptamers, we tracked the enrichment of individual sequences across successive selection rounds.
This was performed by calculating the fold-enrichment for every sequence, which we defined as the ratio of the percent frequency of a given sequence in the later round to that in the earlier round. We ranked RNA aptamer sequences by comparing fold-enrichment across multiple selection rounds and selected the top 20 sequences primarily based on their high enrichment ratios (Supplementary Fig. 4a; see online Methods for details). A scatter plot of fold-enrichments from R4 to R9 for the top 20 aptamer sequences in the selections against unliganded β2AR and β2AR bound to BI167107 is shown in Figure 1b. Aptamer sequences skewed towards a particular axis are enriched towards, and potentially selective for, that conformational state of the receptor.

RNA aptamers display conformation-specific binding

The 20 putative β2AR-binding aptamers were characterized for their binding to, and specificity for, unliganded β2AR and β2AR bound to BI167107 with 32P-labeled and biotinylated aptamers, using nitrocellulose filter binding and pull-down assays, respectively (Supplementary Fig. 4). Several aptamers displayed varying levels of binding and specificity for the two β2AR selection targets (~75% of them bound to the receptor, while 35% were conformation specific). These results are consistent with the data obtained from the deep sequencing analysis, which demonstrated multiple aptamer sequences displaying high fold-enrichment for the particular conformational state of the β2AR for which they were selected (Fig. 1b and Supplementary Fig. 4). In the course of screening the initial 20 putative β2AR-binding aptamers, we obtained seven top aptamer candidates, which displayed robust β2AR binding and/or conformational selectivity. These seven aptamers were further grouped into three categories as follows (Fig. 1b,c and Supplementary Fig. 4): (i) four aptamers (A1, A2, A12, and A13) showed conformational selectivity for the BI167107-bound, or active, form of the β2AR; (ii) two aptamers (A15 and A16) demonstrated binding specificity for the inactive form of the β2AR; and (iii) one aptamer, A11, did not show clear selectivity but bound both unoccupied and BI167107-bound forms of the β2AR with high affinity. In contrast to these aptamers, the control aptamer did not display significant binding to either conformational form of the receptor. In silico-predicted secondary structures of these seven aptamers are shown in Supplementary Figure 5. Based on these screening results, the four candidate aptamers (A1, A2, A13, and A16) that showed strong conformational selectivity were selected for further characterization (boxed in Fig. 1c). To further characterize these four aptamers, we measured the affinity and kinetics of their binding to active (BI167107-bound) and inactive (ICI-118,551-bound) forms of the β2AR using a biophysical approach based on biolayer interferometry (BLI; ForteBio). BLI allows for quantification of the individual kinetic rate constants (kon and koff) that contribute to the equilibrium dissociation constant (Kd). We found that three aptamers (A1, A2, and A13) bound β2AR:BI167107 tightly, with nanomolar affinities (see Supplementary Table 1 for kinetic parameters). In contrast, no detectable binding of these aptamers to β2AR:ICI-118,551 was observed, indicating their specificity for an active conformation of the receptor.
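Since BLI resolves kon and koff separately, the equilibrium constant follows directly as Kd = koff/kon. A trivial sketch makes the relationship explicit; the rate constants below are illustrative, not the measured values.

# Kd from BLI rate constants: Kd = koff / kon. Values are hypothetical.
def kd_from_rates(kon_per_M_s: float, koff_per_s: float) -> float:
    return koff_per_s / kon_per_M_s

kon, koff = 2.0e5, 2.0e-3          # 1/(M*s), 1/s (illustrative)
print(f"Kd = {kd_from_rates(kon, koff) * 1e9:.0f} nM")  # -> 10 nM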
As expected, aptamer A16 bound β2AR:ICI-118,551 with nanomolar affinity (Kd [A16] = 93.1 ± 4.1 nM) but without any measurable affinity for β2AR:BI167107, demonstrating specificity towards an inactive conformation of the β2AR. To investigate the influence of the four aptamers on the affinity of an agonist (isoproterenol) for the β2AR, we performed competition radioligand binding experiments with the radio-iodinated β-adrenergic antagonist cyanoiodopindolol ([125I]-CYP), utilizing β2AR reconstituted into high-density lipoprotein (HDL) particles 37. As shown in Figure 3a, A13 promoted the greatest increase in the affinity of isoproterenol (ISO) for the β2AR, 33.9-fold (Ki for [ISO + CNT-Apt] = 112 nM and for [ISO + A13] = 3.3 nM), followed by aptamer A1, which enhanced the affinity of isoproterenol for the β2AR by 6-fold (Ki = 19.7 nM). In contrast, the presence of aptamers A2 and A16 did not affect the affinity of the β2AR for the agonist isoproterenol. Interestingly, although aptamer A2 had no effect on isoproterenol binding to the β2AR, it did appear to recognize a receptor conformation stabilized by the agonist BI167107, suggesting it has the ability to distinguish between active conformations induced by two full agonists, isoproterenol and BI167107. Similarly, aptamer A16 has no effect on agonist binding and appears to recognize only an inactive conformation of the β2AR. Next, we explored the ability of these aptamers to modulate transitions between active and inactive conformations, as well as whether they can stabilize unique ligand-specific conformations. To do this, we measured the ability of each aptamer to bind β2AR occupied with a panel of pharmacologically and structurally distinct β2AR ligands via a receptor pull-down assay using biotinylated aptamers (chemical structures of the different β-adrenoceptor ligands used in this study are shown in Supplementary Fig. 6). We used three full agonists (BI167107, isoproterenol, and fenoterol), two partial agonists (salbutamol and clenbuterol), and four antagonists and inverse agonists (propranolol, carazolol, carvedilol, and ICI-118,551). Among these, carvedilol and BI167107 are modestly biased agonists towards β-arrestin-dependent signaling pathways 38,39. Relative to the control aptamer, aptamers A1 and A13 robustly bound agonist-occupied β2AR, but this binding was significantly reduced in the presence of antagonists (Fig. 3b-e). Additionally, the binding specificity of these aptamers correlated directly with the rank order of agonist efficacy (Fig. 3b,d). Interestingly, we observed one aptamer, A2, whose effect on receptor binding did not correlate with ligand efficacy. With aptamer A2, we observed the largest receptor pull-down with BI167107-occupied β2AR, and a slight general selectivity trend towards agonist-occupied β2AR conformations (Fig. 3c). Surprisingly, aptamer A2 also displayed a unique selectivity, among the antagonists, towards a β2AR conformation stabilized by carvedilol. Aptamer A2's specificity for BI167107- and carvedilol-occupied β2AR suggests that it stabilizes a unique active conformation of the β2AR distinct from that stabilized by aptamers A1 and A13. In contrast to the other aptamers, A16 showed the opposite trend, selectively stabilizing antagonist-bound β2AR complexes (Fig. 3e).
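A note on the Ki values quoted above: in competition radioligand binding, Ki is conventionally derived from the fitted IC50 via the Cheng-Prusoff relation, Ki = IC50/(1 + [L]/Kd). The sketch below is a generic illustration of that conversion; the paper's exact analysis pipeline is not restated here, and all numbers in the example are hypothetical.

# Generic Cheng-Prusoff conversion for competition binding such as the
# [125I]-CYP assay above. All numbers here are hypothetical.
def cheng_prusoff_ki(ic50_nM: float, radioligand_nM: float, kd_nM: float) -> float:
    return ic50_nM / (1.0 + radioligand_nM / kd_nM)

# E.g., IC50 = 120 nM at 0.06 nM radioligand with Kd = 0.03 nM -> Ki = 40 nM.
print(f"Ki = {cheng_prusoff_ki(120, 0.06, 0.03):.1f} nM")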
To obtain further insight into the ability of the aptamers to stabilize active β2AR conformations, we performed fluorescence spectroscopic studies on a β2AR labeled with a bimane probe at the cytoplasmic end of TM6 at C265. The bimane probe enables direct monitoring of agonist-induced receptor conformational changes, with TM6 movement from a hydrophobic environment to a more polar, solvent-exposed position reported as a decrease in fluorescence intensity (Supplementary Fig. 7). Both binding of agonist and of G protein (or its mimetic nanobody) 12,16 have previously been shown to alter the environment around the label, resulting in a decrease in fluorescence intensity and a rightward shift (red shift) in the emission λmax. Both the catecholamine agonist isoproterenol and the high-affinity agonist BI167107 induced conformational changes in the receptor, changing the environment around the bimane label, as evidenced by the decrease in fluorescence intensity and the rightward shift in λmax (Fig. 4a-d). However, such changes were not observed in β2AR occupied by the inverse agonist ICI-118,551. Interestingly, the effects of A1, A2, and A13 were enhanced (i.e., a further decrease in fluorescence intensity and a rightward shift in emission λmax) when combined with full agonists, signifying further stabilization of active conformations (Fig. 4a-c). No significant change in bimane fluorescence was observed with aptamer A16, consistent with its ability to recognize an inactive conformation of the receptor (Fig. 4d).

Functional effect of β2AR aptamers

Stimulation of the β2AR system promotes activation of the membrane-associated effector enzyme adenylyl cyclase (AC) via the stimulatory G protein subunit, Gαs 40,41. AC catalyzes the conversion of ATP to cyclic AMP (cAMP), one of the main second messengers of the GPCR signal transduction system. To determine the functional effects of aptamer binding to the β2AR, we measured the ability of the aptamers to modulate isoproterenol-stimulated Gαs and adenylyl cyclase activity by measuring the accumulation of cAMP (Fig. 5). Aptamers A13, A2, and A1 significantly inhibited isoproterenol-stimulated adenylyl cyclase activity by 46.1%, 34.7%, and 28.3%, respectively (p < 0.01, p < 0.01, and p < 0.05, respectively, vs. ISO-stimulated β2AR; one-way ANOVA). Aptamer A16, on the other hand, was weaker and did not significantly alter β2AR-mediated adenylyl cyclase activity. The specificity of these inhibitory effects was further confirmed by the observation that the control aptamer had no effect on β2AR-mediated adenylyl cyclase activity.

Molecular architecture of β2AR-aptamer complexes

In addition to their distinct functional properties, these aptamers appear to possess unique structural features. Specifically, the four aptamers we characterized do not overlap with regard to sequence homology (Supplementary Fig. 4a) or predicted secondary structural motifs (Supplementary Fig. 5). In order to assess whether the aptamers bind at extracellular or intracellular regions of the β2AR, we conducted a competition pull-down binding assay utilizing β2AR-specific single-domain nanobodies (Nbs) as competitive allosteric modulators. Both Nb80 (a G-protein-mimetic nanobody) and Nb60 bind at the intracellular region around the G-protein-binding cavity of the β2AR, recognizing active and inactive conformations of the receptor, respectively 12,42.
The nanobodies were used in excess as competitors, and the magnitude of competition (or cooperativity) was evaluated with the corresponding aptamers based on the level of captured β2AR (pre-bound with either BI167107 or ICI-118,551). As illustrated in Supplementary Figure 8a, Nb80 strongly inhibits the interaction of A1 and A2 with the activated β2AR, consistent with substantial overlap between the binding sites of Nb80 and those of aptamers A1 and A2. In contrast, Nb80 increases the interaction of activated β2AR with aptamer A13, suggesting a positively cooperative effect and minimal overlap between the binding sites of Nb80 and aptamer A13. Likewise, Nb60 enhanced the binding of inactive β2AR to aptamer A16, indicating the possibility of cooperativity and a lack of competition between the two, consistent with the presumption that aptamer A16 may bind at the extracellular region of the β2AR (Supplementary Fig. 8b). To gain further insight into the binding epitopes and structural basis of the interactions between the aptamers and different β2AR conformations, we next used negative-stain transmission electron microscopy (EM) and single-particle reconstruction analysis 43,44. After successfully visualizing the aptamers in complex with the receptor via EM, we further improved the visualization and post-imaging alignment by increasing the size of the complex, to help us identify whether the aptamers were interacting with the extracellular or intracellular surface of the receptor. Improved EM imaging was achieved by affinity-purifying samples of β2AR-ligand-aptamer complexes labeled with an anti-FLAG antigen-binding fragment (Fab), derived from a monoclonal antibody that recognizes the FLAG epitope located at the extracellular N-terminus of the receptor 39 (Supplementary Fig. 9a,b and online Methods). From the EM two-dimensional class averages, we identified a central oval density as the receptor embedded in MNG detergent micelles (for receptor alone, see Supplementary Fig. 9c). Furthermore, in the EM particle averages, the Fab is shown to bind exclusively to the extracellular N-terminus of the β2AR and serves as a landmark to help locate the aptamer-binding domains on the receptor (see Supplementary Fig. 9d for the Fab-β2AR complex and Supplementary Fig. 9e for a representative aptamer-β2AR complex). By comparing the Fab-β2AR complex class averages with those of complexes bound to aptamers, we were able to identify densities corresponding to aptamers in complex with the receptor (Supplementary Fig. 9f and Fig. 6). As illustrated in the 2D class averages of the β2AR complexes bound to aptamers A1, A2, and A13, the binding locations (densities) corresponding to each of these aptamers appear localized on the side opposite to (or distant from) the Fab, suggesting that aptamers A1, A2, and A13 bind at the intracellular region of the β2AR (Fig. 6a-c). On the other hand, by virtue of its binding on the same side as the Fab, A16 appears to interact with the extracellular region of the receptor (Fig. 6d).

DISCUSSION

In recent years, a large body of work has emerged within the field of GPCR signaling exploring the structural determinants of ligand-receptor interactions associated with pathway-specific, functionally relevant receptor conformations. Characterizing such ligand-selective signaling conformations could serve as the basis for the design of GPCR ligands with better efficacy, improved safety profiles, and an enhanced therapeutic window.
Elucidating the structural and mechanistic features of these conformations using currently available tools has been challenging, in part due to the inherent flexibility of GPCRs and the fact that X-ray crystallography tends to capture thermodynamically stable inactive conformations 9,15,17,18,45. These challenges have underscored the need for conformationally selective allosteric agents that can stabilize distinct active and inactive receptor conformations. Although antibody-mediated stabilization of GPCRs and other proteins is a formidable advance 12,13,46-48, its widespread utility remains limited by problems associated with immunogenicity, economic feasibility, and the time-consuming nature of immunization and library construction. Owing to the vast library diversity attainable for selections, the chemical composition of nucleotides, and the unique 3D conformations they can attain, RNA aptamers have great potential as valuable conformation sensors and pharmacologic agents for GPCRs [19][20][21][22][23][24][25][26][27][28][29][30][31][32]. Herein, we describe the development of state-selective RNA aptamers that allosterically stabilize different conformations of the β2AR. Our results reveal that the aptamers have distinct preferences for binding specific receptor conformations with nanomolar-range affinities (Fig. 2 and Supplementary Table 1). We utilized a targeted selection method that allowed for the enrichment of RNA aptamers that selectively bind distinct active and inactive receptor conformations. In addition, our approach employed NGS and comparative bioinformatics analyses to monitor the complexities of the selected pools and the dynamics of enrichment of unique sequences (via evaluation of fold-enrichment) to derive state-selective aptamers (see online Methods). Our analysis, based primarily on the fold-enrichment of individual RNA clones, is capable of discerning which aptamers were strongly enriched by each specific β2AR target. Notably, these aptamer modulators would not have been identified by traditional clonal selection strategies that pick the most abundant clones, since non-specifically binding aptamers could dominate the selected population. Together, the aptamers that we isolated demonstrate the effectiveness of the selection strategies and NGS analysis applied here in identifying aptamer modulators for the β2AR that may have been obscured using traditional clonal selection strategies. Selectivity of the aptamers for specific β2AR conformations also correlated with receptor ligand efficacy, as demonstrated using biochemical, pharmacological, and biophysical approaches. Of the aptamers, A1, A2, and A13 showed strong conformational selectivity for the high-affinity agonist (BI167107)-bound active β2AR conformation, while aptamer A16 displayed conformational selectivity for the inverse agonist (ICI-118,551)-bound inactive β2AR conformation. Interestingly, while both aptamers A1 and A13 allosterically enhanced agonist (isoproterenol) binding affinity and bound the receptor in an agonist-dependent manner, aptamer A2 appeared to have a unique ligand specificity, with preferential binding to BI167107-bound and, to a lesser extent, carvedilol-bound β2AR states. Both BI167107 and carvedilol have been shown to act as modest β-arrestin-biased ligands at the β2AR 38,39.
This result may therefore suggest that there is possible overlap between the conformational states stabilized by BI167107 and carvedilol, and that aptamer A2 may recognize a unique ligand-induced, potentially β-arrestin-biased conformation of the β2AR. The influence of the aptamers on agonist-induced receptor conformational changes was also assessed in a fluorescence spectroscopic study using a bimane probe on TM6 12,16. Indeed, three of the aptamers (A1, A2, and A13) enhanced agonist-induced conformational rearrangement of TM6, consistent with their ability to stabilize active receptor conformations via a positively cooperative interaction between the β2AR and the aptamers. Conversely, aptamer A16 had little to no influence on the movement of TM6, in particular for the BI167107-bound state of the β2AR, consistent with its ability to stabilize an inactive conformation of the β2AR. Stimulation of the β2AR activates heterotrimeric G proteins and increases the rate of guanosine diphosphate/guanosine triphosphate (GDP/GTP) exchange on the Gα subunit to mediate the activation of adenylyl cyclase (AC), with subsequent accumulation of cAMP 40,41. Interestingly, aptamers A1, A2, and A13 significantly inhibit agonist-induced cAMP accumulation. It has previously been shown that binding of intracellularly expressed antibodies to the β2AR inhibits receptor-mediated downstream signaling at the G-protein-binding site 42. We hypothesize that the likely mechanism by which aptamers A1, A2, and A13 inhibit β2AR-mediated AC activity is their binding at the intracellular region of the receptor, with resultant steric blockade of G protein binding. The lack of significant inhibition of AC activity by aptamer A16 may be attributed to its relatively weak binding affinity for the receptor. The functional activity of these aptamers is intriguing with regard to their potential use as pharmacological agents targeting GPCRs. Indeed, aptamers have been developed to bind many drug targets and constitute potential therapeutic agents, as exemplified by the first aptamer-based drug for macular degeneration (pegaptanib sodium), as well as others that have undergone clinical trials [19][20][21][22][23]. Although we only identified ligands that inhibit agonist activity, the diversity of RNA libraries suggests it may be possible to identify receptor aptamers with diverse functionalities, ranging from agonists to positive and negative allosteric modulators. GPCRs are versatile allosteric machines, and their signaling activities can be affected by the binding of modulators at distinct sites. Indeed, several allosteric sites have been described recently, encompassing regions of the extracellular and intracellular surfaces of GPCRs, including the β2AR 49,50. Interestingly, none of the antibody-based allosteric modulators reported for the β2AR bind at the extracellular region 12,42. Our EM analysis 43,44 and competition studies using β2AR-specific nanobodies 12,42 revealed the architecture of the β2AR-aptamer complexes and the location of the interaction epitopes of the aptamers on the surface of the receptor. Notably, in the EM images, the BI167107-bound Fab-β2AR-aptamer complexes (A1, A2, or A13) show densities for the aptamers located opposite to the reference anti-FLAG Fab, suggesting their binding at the intracellular region of the receptor. The EM images obtained for ICI-118,551-bound Fab-β2AR-A16, on the other hand, suggest that A16 interacts with the extracellular region of the receptor.
The EM data are consistent with the hypothesis that the aptamers act through allosteric mechanisms involving key structural elements of allosteric sites located at either the intracellular or extracellular regions of the receptor. In summary, the present study illustrates that aptamers can act as allosteric modulators by distinguishing between receptor conformations stabilized by pharmacologically different ligands. Our results therefore establish the potential of RNA aptamers to serve as allosteric modulators for elucidating the structural and mechanistic aspects underlying GPCR activation and signaling. In addition, by virtue of their ability to lock receptors in biologically relevant conformations of interest, aptamers may also play a role in small-molecule drug discovery efforts aimed at identifying allosteric modulators against said conformations. Furthermore, given their favorable pharmacologic characteristics, relative tolerability for progression to the market [19][20][21][22][23], and broad library diversity, RNA aptamers could represent an attractive class of GPCR ligands. Finally, the general approach used here establishes a framework for developing aptamers aimed at a wide range of soluble and membrane proteins that undergo function-dependent conformational changes.

Reagents

Sf9 cell culture media and transfection kits to generate virus stocks were purchased from Expression Systems. BI-167107, synthesized as described previously 36, was a generous gift from Dr. Xin Chen (Changzhou University, Changzhou, Jiangsu, China). All other ligands were purchased from Sigma-Aldrich. Other reagents were of analytical grade, obtained from various suppliers, and used without further purification unless indicated otherwise.

DNA templates, synthesis of 2′-F-pyrimidine RNA transcripts, and biotinylated RNA aptamers

The starting double-stranded DNA (dsDNA) library was composed of individual sequences 107 nucleotides long, including flanking constant regions and a variable region of 40 nucleotides, as described by the following example: 5′-GGGGGAATTCTAATACGACTCACTATAGGGAGGACGATGCGG-N40-CAGACGACTCGCTGAGGATCCGAGA-3′. The final sequence complexity of our dsDNA library was approximately 10^15 (ref. 52).

Aptamer cloning and sequencing

Individual dsDNA forms of the RNAs were prepared by annealing, amplifying using PCR, and sub-cloning as described above. The template DNA oligonucleotide for each aptamer was purchased from Integrated DNA Technologies. The starting dsDNA sequences (107 base pairs long) of the aptamers were generated by annealing the template oligonucleotide (specific for each aptamer), 5′-TCTCGGATCCTCAGCGAGTCGTCTG-N40-CCGCATCGTCCTCCCTA-3′, and the 5′-primer oligonucleotide, 5′-GGGGGAATTCTAATACGACTCACTATAGGGAGGACGATGCGG-3′. Each annealed oligonucleotide was filled in with Klenow exo− and purified. The dsDNA products of the desired aptamers or aptamer pools were cloned into the pCR 2.1-TOPO cloning vector (Life Technologies), transformed into E. coli, and sequenced (Eton Bioscience).

In vitro selection of aptamers

RNA aptamers were generated using a Systematic Evolution of Ligands by Exponential Enrichment (SELEX) 31 procedure against purified β2AR that was either unliganded or BI167107-bound. The selection library consisted of 80-nucleotide-long RNA oligonucleotides with a central random region of 40 nucleotides, flanked by constant regions of a 15-base 5′-primer sequence and a 25-base 3′-primer sequence.
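The library layout can also be made explicit in code. The sketch below assembles a random library member from the constant regions given above (copied verbatim from the dsDNA example) and a 40-nt random insert; the assembly logic itself is purely illustrative.

# Sketch of the 107-nt dsDNA library design described above: a fixed 5'
# region (containing the T7 promoter), a 40-nt random region (N40), and a
# fixed 3' region. Constant sequences are copied from the text.
import random

FIVE_PRIME = "GGGGGAATTCTAATACGACTCACTATAGGGAGGACGATGCGG"   # 42 nt
THREE_PRIME = "CAGACGACTCGCTGAGGATCCGAGA"                    # 25 nt

def random_library_member(rng: random.Random) -> str:
    n40 = "".join(rng.choice("ACGT") for _ in range(40))
    return FIVE_PRIME + n40 + THREE_PRIME

rng = random.Random(1)
member = random_library_member(rng)
assert len(member) == 107  # 42 + 40 + 25
print(member)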
The RNA library in selection buffer (20 mM HEPES, pH 7.4, 50 mM NaCl, 2 mM MgCl2, 2 mM CaCl2) was heat denatured at 65 °C, slowly cooled to room temperature (RT), and supplemented with MNG and CHS at final concentrations of 0.01% (w/v) and 0.001% (w/v), respectively. The RNA library solution was mixed with the nitrocellulose matrix and FLAG peptide (0.5 µM), with or without BI167107 for selection against β2AR-BI167107 or β2AR, respectively. The mixtures were incubated for 30 min at 25 °C prior to every round of selection with unliganded or BI167107-bound β2AR to remove nonspecific target-binding species. In the negative-selection step during the fifth round, the RNA library was incubated with a non-target receptor (50 nM AT1aR bound to 10 µM telmisartan). The pre-cleared RNA libraries (2.25 nmol) were recovered and then incubated with the respective target protein, either unliganded or BI167107-bound β2AR (1.125 µM; RNA/receptor ratio 5:1), for 30 min at 25 °C in 400 µL selection buffer on a rotating wheel. Starting from round 2, yeast tRNA (20 ng/µL) was included in the selection mixture to eliminate nonspecific binding. After incubation, the selection mixtures were passed through nitrocellulose filters to capture β2AR-RNA complexes and remove the supernatant containing unbound aptamers. To extract the RNAs from the membrane, bound RNA molecules were incubated in phenol:chloroform:isoamyl alcohol for 30 min at RT, chloroform extracted, ethanol precipitated, and resuspended in TE buffer (10 mM Tris, pH 7.4, and 0.1 mM EDTA). One-quarter of the extracted RNA was reverse transcribed with the 3′-primer, dNTPs, and AMV Reverse Transcriptase (Roche). The reverse transcription reaction was PCR amplified with the 5′ and 3′ primers described above using Platinum Taq polymerase and standard PCR conditions. The PCR reactions were desalted, and excess reagents were removed, using Centricon 30 (Millipore) with TE buffer washes. The dsDNA products were then used to generate RNA pools for the next round by in vitro transcription as described above. Nine rounds of selection were performed, and selection pressure was increased throughout the process as follows: (i) the ionic strength (concentration of NaCl) was 50 mM for rounds 1 to 3, 75 mM for rounds 4 to 6, and 100 mM for rounds 7 to 9; (ii) the RNA library input (nmol) was decreased to 1, 0.5, and 0.25 in rounds 2, 5, and 8, respectively, and the aptamer:β2AR ratio was 5:1 for rounds 1 to 4, 7:1 for rounds 5 to 7, and 10:1 for rounds 8 and 9. A filter-binding assay was used to evaluate the binding affinity of the individual selected pools (see below).

High-throughput next-generation sequencing (NGS) of RNA aptamer pools

To determine the sequences enriched through in vitro selection, we performed high-throughput next-generation sequencing (NGS) on a HiSeq 2000 (Illumina). Each RNA pool was reverse transcribed using AMV Reverse Transcriptase (Roche), substituting the 3′-primer with the appropriate NGS 3′-primer carrying a 6-base barcode and an NKKNKK region (Supplementary Fig. 3). The cDNA for each selection round was amplified by PCR using Phusion Hot Start II High-Fidelity DNA Polymerase (Thermo Scientific) with 5′ and 3′ primers containing a barcode (a 6-base Illumina-compatible unique DNA barcode for each pool), the NKKNKK region, and 12-base complements of the original constant regions of the dsDNA PCR selection primers.
High-throughput next-generation sequencing (NGS) of RNA aptamer pools

To determine the sequences enriched through in vitro selection, we performed high-throughput next-generation sequencing (NGS) using a HiSeq 2000 (Illumina). Each RNA pool was reverse transcribed using AMV Reverse Transcriptase (Roche), substituting the 3′-primer with the appropriate NGS 3′-primer that carries a 6-base barcode and an NKKNKK region (Supplementary Fig. 3). The cDNA for each selection round was amplified by PCR using Phusion Hot Start II High-Fidelity DNA Polymerase (Thermo Scientific) with 5′ and 3′ primers containing a barcode (a 6-base Illumina-compatible unique DNA barcode for each pool), an NKKNKK region, and 12-base complements of the original constant regions of the dsDNA PCR selection primers. The NKKNKK sequence facilitates cluster identification and ensures generation of high-quality reads during Illumina sequencing (a common remedy for the Illumina amplicon low-diversity issue), while the 6-base-pair barcode sequence, unique for each pool, identifies each pool during multiplex sequencing. The resulting barcoded PCR products were purified with QIAEX II Gel Extraction Kit and QIAquick PCR Purification Kit protocols. The DNA fragment libraries (94-bp, Illumina-compatible dsDNA fragments) were further purified and concentrated using phenol-chloroform extraction and ethanol precipitation. The products were then ligated with the adapters (which included end-repair, A-tailing, and paired-end adapter ligation), amplified by cluster generation, and processed for NGS according to the protocols provided by Illumina, Inc. Final library sizes were determined using the Agilent Bioanalyzer and quantified using the Qubit (Life Technologies). Indexed DNA samples were pooled together at equal molar ratios and used for multiplex sequencing. High-throughput sequencing from both ends of the library inserts (i.e., paired-end sequencing) was performed by the sequencing core facility at the Duke Center for Genomic and Computational Biology, Duke University.

Bioinformatics and in silico methods for sequence analysis

Raw paired-end reads from each pool were processed to: 1) remove low-quality paired-end reads and adapters using CutAdapt (http://journal.embnet.org/index.php/embnetjournal/article/view/200); 2) trim the first six random bases that were designed to remedy the amplicon low-diversity issue; and 3) demultiplex and identify each pool of the multiplexed sample using the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/links.html). A custom Perl script was developed in-house for downstream data processing and analysis that 1) extracts the sequences of interest (the random regions) by removing the constant sequences containing barcodes and PCR primers at the 5′ and 3′ ends; 2) clusters the resulting individual clonal sequences based on sequence identity to generate a unique set of sequences; 3) calculates the frequency and percentage frequency of each unique sequence; and 4) transcribes the nucleotide sequences of the individual sequences into RNA sequences for downstream analysis. We used the frequency and percentage frequency of each unique sequence in individual selection pools to compute enrichment ratios between two rounds. Fold-enrichment of unique sequences was calculated by dividing the percent frequency in the later round by that in the earlier round. We used fold-enrichment to account for aptamers that might have had a low copy number but still showed relatively high numbers in the next round of selection. For this reason, we think the use of enrichment ratios of individual sequences, rather than copy number, provides a higher-resolution characterization of pools and of the dynamics of enrichment of unique sequences, and thus offers a significant advantage in ensuring efficient and successful isolation of target-specific binders. RNA sequences were ranked according to their fold-enrichment across successive rounds and their enrichment dynamics were evaluated.
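A minimal sketch of the fold-enrichment calculation described above: percent frequency of each unique sequence in a later round divided by its percent frequency in an earlier round. The read counts are invented stand-ins, and the pseudo-frequency guarding against sequences absent from the earlier pool is a detail the text does not specify.

```python
from collections import Counter

def percent_freq(reads):
    counts = Counter(reads)
    total = sum(counts.values())
    return {seq: 100.0 * n / total for seq, n in counts.items()}

def fold_enrichment(early_reads, late_reads, pseudo=1e-6):
    early = percent_freq(early_reads)
    late = percent_freq(late_reads)
    # pseudo keeps sequences absent from the earlier pool from dividing by zero
    return {seq: late[seq] / early.get(seq, pseudo) for seq in late}

r4 = ["A1"] * 2 + ["A2"] * 5 + ["X9"] * 93      # hypothetical round-4 pool
r9 = ["A1"] * 40 + ["A2"] * 45 + ["X9"] * 15    # hypothetical round-9 pool
for seq, fe in sorted(fold_enrichment(r4, r9).items(), key=lambda kv: -kv[1]):
    print(f"{seq}: {fe:.1f}-fold enriched (R4 -> R9)")
```

Note how A1, present at only 2% in round 4, shows the largest fold-enrichment; this is exactly the low-copy-number case the enrichment-ratio ranking is meant to capture.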
To narrow the list of candidates to synthesize and test, we next performed further bioinformatics analyses on the top-ranking aptamers using combinations of tools including Microsoft Access, Microsoft Excel, GraphPad Prism, MacVector (MacVector, Inc.), Clustal Omega (http://www.ebi.ac.uk/Tools/msa/clustalo/), and the RNA folding algorithm Mfold (http://unafold.rna.albany.edu/?q=mfold). The analyses included primary sequence alignments and comparisons between them to pick a representative of each cluster; analysis of the relative structural stability of individual sequences from minimal energies computed from secondary structure predictions; scatter-plot analysis of fold-enrichments between the two selection types; and rank-ordering of copy numbers (e.g., in the R6 and R9 selected pools). Based on these analyses, we selected the top 20 candidate aptamers to be cloned and synthesized as 5′-biotinylated or radiolabeled (32P) RNA aptamer versions, in order to subsequently evaluate their binding to the selection targets.

Radiolabeling of RNA aptamers

RNA aptamers were radioactively labeled with 32P at the 5′-end by first removing the 5′-terminal phosphate group with bacterial alkaline phosphatase (Life Technologies) at 65 °C for 1 hr and then purifying them by phenol/chloroform extraction followed by ethanol precipitation. 3 pmol of each dephosphorylated aptamer was incubated with 32P-labeled γ-ATP and T4 polynucleotide kinase (NEB) at 37 °C for 45 minutes. Radiolabeled aptamers were finally purified using a G25 spin column (GE Healthcare) following the manufacturer's instructions. Incorporated radioactivity was quantified using a scintillation counter.

Filter binding assay

The binding affinities of the different RNA pools or individual aptamers were determined by a nitrocellulose-membrane filtration-based saturation binding assay. Constant amounts of 5′-[32P]-radiolabeled RNA aptamers (at 2000 CPM/µL final) were incubated with increasing concentrations of β2AR or β2AR:BI (12 two-fold serial dilutions starting from 2 µM) in a buffer containing 20 mM HEPES, pH 7, 50 mM NaCl, 2 mM MgCl2, 2 mM CaCl2, 0.01% MNG, and 0.001% CHS for 30 minutes at RT. The final reaction volume was 20 µL. The β2AR-RNA aptamer mixtures were then passed through a stack of membranes on a vacuum manifold, consisting of a Protran nitrocellulose membrane that captures RNA-protein complexes and a GeneScreen Plus nylon membrane that captures unbound RNA molecules. After washing twice with 100 µL binding buffer, the membranes were air dried for 5 minutes, exposed to phosphorimager screens (1 hr), and scanned using a Typhoon phosphorimager (GE Healthcare). Finally, the fraction of bound RNA was calculated, adjusted for background, and graphed using GraphPad Prism. The equilibrium dissociation constants (Kd) for the RNA aptamers were obtained by fitting the fraction of nitrocellulose-bound RNA to the following equation: Y = (Bmax × X)/(X + Kd), where Bmax is the maximum value of Y (when X = ∞) and Kd, the dissociation constant, is the value of X when Y = Bmax/2.
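The one-site saturation fit just described can be reproduced with standard tools. The sketch below fits synthetic data to the stated equation Y = (Bmax × X)/(X + Kd); the Bmax and Kd values used to generate the data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(x, bmax, kd):
    return bmax * x / (x + kd)

# 12 two-fold serial dilutions starting from 2 uM, expressed in nM
conc_nM = 2000.0 / (2 ** np.arange(12))
true_bmax, true_kd = 0.85, 60.0  # hypothetical "true" values
rng = np.random.default_rng(0)
bound = one_site(conc_nM, true_bmax, true_kd) + rng.normal(0, 0.02, conc_nM.size)

(bmax, kd), _ = curve_fit(one_site, conc_nM, bound, p0=(1.0, 100.0))
print(f"Bmax = {bmax:.2f}, Kd = {kd:.0f} nM")
```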
Pull-down experiments

In order to measure the binding activity of aptamers to the β2AR, we pulled down receptors using NeutrAvidin beads (Pierce). For nanobody competition studies, pull-down experiments were performed as described here with minor adjustments. Intracellularly acting β2AR-specific nanobodies (Nb80 and Nb60, positive and negative allosteric modulators, respectively) were used to assess cooperativity and competition with the aptamers. 10 µM of nanobody (Nb80 or Nb60) or buffer alone was mixed with receptor (pre-reacted with carrier solvent, ICI-118,551, or BI-167107) and added to bead-aptamer mixtures. In both cases, after incubation, the receptor complexes (with or without aptamer and/or nanobody) were centrifuged and the unbound mixtures were washed three times. Bound complexes were eluted with 37.5 µL of buffer containing 20 mM HEPES, pH 7, 100 mM NaCl, 250 mM DTT, 500 µM biotin, and 50 mM EDTA for 20 min at RT. 12.5 µL of 4x SDS sample buffer was added to each eluted sample prior to Western blotting using an anti-β2AR antibody (sc-569; Santa Cruz) and ethidium bromide (EtBr) staining for RNAs on a 10% TBE gel (Life Technologies).

Binding affinity measurements by biolayer interferometry (BLI)

The kinetics of interactions of aptamers (A1, A2, A13, or A16) with BI-167107- or ICI-118,551-bound β2AR were measured by BLI on a ForteBio Octet RED96 system. Prior to immobilization, the biotinylated aptamers were incubated at 65 °C for 5 min and then cooled to RT. Biotinylated aptamers were then immobilized onto streptavidin (SA) biosensor tips (ForteBio) in a buffer composed of 20 mM HEPES, pH 7, 25 mM NaCl, 5 mM KCl, 5 mM MgCl2, and 2 mM CaCl2 by dipping the SA sensors into wells containing biotinylated aptamers for 600 seconds. The loading levels of aptamers were kept between 1 and 1.2 nm in screening assays and between 0.2 and 0.35 nm for titrations. The aptamer-loaded sensors were washed with buffer for 60 seconds. After obtaining a baseline in buffer containing 0.01% MNG and 0.001% CHS, the association of BI-167107- or ICI-118,551-bound β2AR (at a 1:20 receptor-to-ligand ratio) at varying concentrations was monitored for 300 seconds, followed by dissociation into buffer for 300 seconds. Aptamer-free blank SA sensors were used in parallel to record signals due to nonspecific interactions, which were subtracted out to obtain specific binding data. The signal from the interaction between receptor-free buffer with 0.01% MNG and the sensors was used as a double reference to remove drifts in the specific binding data. The association and dissociation rate constants (kon and koff) and the dissociation constant (Kd) values were obtained by fitting the aptamer-specific binding data globally to a 1:1 Langmuir binding model using ForteBio's Data Analysis software 7.1.0.36 (ForteBio) and/or BiaEval 4.1.

Competitive radioligand binding experiments

Competition radioligand binding assays were performed with purified β2AR reconstituted into HDL particles (nanodiscs) and radioiodinated cyanopindolol (125I-CYP). Samples contained a ligand, an aptamer (4 µM), or a combination of a ligand and an aptamer.

Fluorescence spectra were read in a SpectraMax M5 plate reader (Molecular Devices) using an excitation wavelength of 370 nm and an emission range from 430 to 600 nm in 1-nm increments. Spectra were corrected for background intensity from buffer, ligands, and aptamers. Fluorescence emission curves fit to a normal distribution were drawn using Prism.
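For the BLI analysis above, the vendor software fits a 1:1 Langmuir model; the sketch below shows the functional form such a fit assumes. The rate constants, response amplitude, and concentrations are hypothetical, not the study's fitted values.

```python
import numpy as np

def langmuir_association(t, conc, kon, koff, rmax):
    """Association-phase response for a 1:1 Langmuir interaction."""
    kobs = kon * conc + koff
    req = rmax * conc / (conc + koff / kon)  # equilibrium response at this conc
    return req * (1.0 - np.exp(-kobs * t))

def langmuir_dissociation(t, r0, koff):
    """Dissociation-phase response starting from response level r0."""
    return r0 * np.exp(-koff * t)

t = np.linspace(0, 300, 301)              # 300-s phases, as in the assay
kon, koff, rmax = 1e5, 5e-3, 0.3          # M^-1 s^-1, s^-1, nm (assumed)
for conc in [12.5e-9, 50e-9, 200e-9]:     # assumed receptor concentrations (M)
    assoc = langmuir_association(t, conc, kon, koff, rmax)
    print(f"{conc*1e9:>5.1f} nM: plateau {assoc[-1]:.3f} nm, "
          f"Kd = {koff/kon*1e9:.0f} nM")
```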
Adenylyl cyclase (AC) activity assay

The effect of aptamers on β2AR-dependent stimulation of AC activity was assessed by 3′,5′-cyclic AMP (cAMP) accumulation in HEK-293 membrane homogenates stably expressing β2AR [53] (a clone developed in the laboratory with an expression level of ~2 pmol/mg), by measuring the conversion of [α-32P]-ATP to [α-32P]-cAMP as previously described [54]. A typical assay had a final total volume of 100 µL and was performed in three steps. First, a premix sample (with or without ligand) of 60 µL was prepared, consisting of final concentrations of 50 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 1 mM ATP, 1 µM GTP, 1 mM cAMP, 2 µCi [α-32P]-ATP, an ATP-regenerating system (20 mM creatine phosphate and 13 units/100 µL of creatine phosphokinase), and phosphodiesterase inhibitors (250 µM Ro 20-1724 and 100 µM 3-isobutyl-1-methylxanthine), with or without 100 nM isoproterenol. Second, HEK-293 membrane homogenates (150 µg) were incubated with individual aptamers (4 µM; heat denatured and refolded as described above) or assay buffer alone in a total volume of 40 µL for 20 min on ice. Then, to measure AC activity in response to isoproterenol (100 nM) or isoproterenol (100 nM) in combination with aptamers (4 µM), the premix samples (60 µL) and membrane mixtures (40 µL) were incubated at 37 °C for 10 min. Reactions were terminated with 0.8 mL cold trichloroacetic acid (6.25% wt/vol), and 100 µL of [3H]-cAMP (~25,000 cpm) was added as a recovery marker. Samples were pelleted by centrifugation at 1,500 × g for 20 min at 4 °C. The [α-32P]-cAMP formed was then isolated from the remaining ATP by applying the 1 mL reaction mixture to sequential chromatography on a Dowex gel column followed by filtration on an aluminum oxide column and elution with 4 mL of 0.1 M imidazole, pH 7.5. The samples were counted for both 3H and 32P, and the counts were converted to AC activity as picomoles of cAMP/mg of protein/min as described previously [54].

Specimen preparation and EM imaging of negative-stained samples

To prepare the β2AR-aptamer complexes for EM visualization, affinity purification using the biotin/NeutrAvidin system was employed, as described above for the pull-down assays. An anti-FLAG Fab was developed to specifically label the FLAG-tagged β2AR at its extracellular N-terminus. This Fab was derived from a monoclonal mouse anti-FLAG M1 IgG that recognizes the FLAG epitope and was produced using hybridoma technology for antibody production [43]. The anti-FLAG Fab was isolated by digestion of the monoclonal mouse anti-FLAG M1 IgG on an immobilized papain protease resin followed by purification on a Protein A column (Pierce) [43]. The β2AR-aptamer complexes (assembled at 10 µM and 20 µM, respectively, in a 125 µL volume) were formed in a buffer composed of 20 mM HEPES, pH 7, 25 mM NaCl, 5 mM KCl, 5 mM MgCl2, 2 mM CaCl2, 0.01% MNG, 0.001% CHS, and 10 µM ligand. β2AR-aptamer complexes were eluted in a buffer containing 4 mM biotin and then prepared for EM using conventional negative-staining protocols as described previously [44]. Specimens were imaged at RT with an FEI Tecnai G2 Twin electron microscope operated at 120 kV using low-dose procedures. Images were recorded at a magnification of ×65,200 and a defocus value of ~1.5 µm on an Eagle 2K CCD camera.

Two-dimensional classification

Two-dimensional EM reference-free alignment and classification of particle projections were performed using ISAC [44].
Particles were both automatically and manually excised using Boxer (part of the EMAN 2.1 software suite) [44]. Over 10,000 0° particle projections of either β2AR alone, β2AR-aptamer (A1, A2, A13, or A16), Fab-β2AR, or Fab-β2AR-aptamer (A1, A2, A13, or A16) were subjected to ISAC, producing at least 50 classes. Given the challenge of observing aptamers via EM due to their small size, our goal was to identify the particle averages that allowed visualization of the β2AR-aptamer interaction. Approximately 5-10% of the particle averages demonstrated β2AR-aptamer interaction. To determine β2AR-aptamer conformations, each class average was designated as 'receptor alone', 'receptor-aptamer', or 'unassigned', and the number of projections resulting in 'receptor-aptamer' complex formation was used (relative to the Fab tag) to help distinguish extracellular versus intracellular interactions.

Statistical analysis

Statistical analysis and curve fitting were done using Prism 6 (GraphPad Software). For statistical comparisons, one-way analysis of variance (ANOVA) was used, with p-values of < 0.05 considered significant.

Supplementary Material

Refer to Web version on PubMed Central for supplementary material.

(a) Schematic overview of the selection strategy, NGS, bioinformatics analysis, and characterization of candidate aptamers. Ribbon diagram representation for selection against inactive β2AR (colored in blue; PDB: 2RH1) or active β2AR bound to the high-affinity agonist BI-167107 (colored in red; PDB: 3SN6). MNG detergent micelles are represented in gray. Encircled orange areas show potential binding regions for aptamers on different β2AR conformations. (b) Scatter plot from NGS analysis comparing fold enrichment ratios (R4 to R9) for the top 20 aptamer sequences from selection on unliganded β2AR (x-axis) versus BI-167107-bound β2AR (y-axis). Each point in the plot represents a unique aptamer (the top seven candidate binders are color-coded in red, blue, or purple) according to its enrichment and selectivity for a selection target. (c) Bar graph showing the top seven aptamers and their capacity to bind unliganded β2AR or BI-167107-bound β2AR as assessed by pull-down assay. Boxed bars denote the four aptamers selected for further characterization. Data shown represent the mean ± s.e.m. (*P < 0.05; **P < 0.01; ***P < 0.001) of three independent experiments, analyzed by one-way ANOVA followed by a Fisher's LSD post-test.

Data represent values ± s.e.m. of three independent experiments. Blue, gray, green, and light blue curves represent the measured responses for each tested concentration of β2AR (BI-167107- or ICI-118,551-bound β2AR), whereas the overlaid red curves show the global fitting results of the binding data.
Social Capital, Urbanization Level, and COVID-19 Vaccination Uptake in the United States: A National Level Analysis

Vaccination remains the most promising mitigation strategy for the COVID-19 pandemic. However, existing literature shows significant disparities in vaccination uptake in the United States. Using publicly available national-level data, we aimed to explore whether county-level social capital can further explain disparities in vaccination uptake rates when adjusting for demographic and social determinants of health (SDOH) variables, and whether the association between social capital and vaccination uptake varies by urbanization level. Bivariate analyses and a hierarchical multivariable quasi-binomial regression analysis were conducted, with the regression analysis stratified by urban-rural status. The current study suggests that social capital contributes significantly to the disparities in vaccination uptake in the US. The results of the stratification analysis show common predictors of vaccine uptake but also suggest varying patterns by urbanization level in the associations of COVID-19 vaccination uptake with SDOH and social capital factors. The study provides a new perspective for addressing disparities in vaccination uptake by fostering social capital within communities, which may inform tailored public health intervention efforts to enhance social capital and promote vaccination uptake.

Introduction

COVID-19 has caused profound health consequences among populations across the globe. As of 5 April 2022, there have been 979,610 deaths attributed to COVID-19 in the United States [1]. High uptake of the COVID-19 vaccine is among the most promising strategies to reduce the burden of disease and control the pandemic [2]. As of April 2022, there are three COVID-19 vaccines available, where one is recommended for children aged 5-11 years, one is for all individuals aged 12+, and two are for adults aged 18 and over [3]. Efficacious COVID-19 vaccines have been available at no cost to age-eligible people and administered in the US since December 2020 [4]. However, only 65.7% of the US population had been fully vaccinated as of 7 April 2022 [5], with notable disparities in vaccine uptake [6][7][8]. Per CDC vaccine administration data (14 December 2020-1 May 2021), vaccination coverage was lower in counties with high social vulnerability than in counties with low social vulnerability. This disparity was especially evident in large fringe metropolitan (suburban) and nonmetropolitan counties [9]. As such, social factors are found to exacerbate the vulnerability of communities when stratified by demographic features (e.g., racial minority populations and age) [9]. Overall, vaccine uptake rates are lower in counties with higher percentages of older adults and of residents with a race/ethnicity other than non-Hispanic White. Social capital is hypothesized to influence vaccination uptake through enhancing trust, promoting altruism, and providing social support. Firstly, residing in a community with high levels of generalized trust is likely to be associated with immunization, as trust in government and health institutions influences risk perceptions and health behaviors [23]. Those with high trust in the government are more likely to support governmental policies regarding vaccination [23]. People who trust health institutions are more likely to seek health care and ensure that they receive adequate treatment [24]. Institutional trust has been found to be significantly associated with acceptance of the COVID-19 vaccine [25].
Secondly, social capital is believed to foster altruism, which affects individual vaccination decision making. Vaccinating a society against communicable diseases can be characterized as a collective action, because doing so fosters herd immunity against the disease [19]. Eligible individuals may choose to contribute to herd immunity to protect the general public as well as those within their communities [19,22]. Social capital facilitates cooperation for altruistic collective action through the influence of strong norms: enforcing acceptable behavior tends to increase levels of cooperation, which may affect immunization decisions and reduce the prevalence of herd-immunity free-riding strategies (i.e., the individual benefits from herd immunity without contributing to it, in this case through vaccination) [22]. Finally, communities with high levels of social capital have access to more resources to overcome logistical challenges and barriers that hinder vaccination uptake, such as transportation and vaccination appointment availability. For example, local health organizations, non-governmental organizations, and neighborhood committees may help address these issues by offering patient navigation services, mobile vaccination clinics, and other health fair events to promote vaccine uptake, though such offerings may differ between communities depending on levels of social capital [26]. While there is evidence of an association between social capital and vaccination uptake, few studies have examined how social capital relates to rates of COVID-19 vaccination uptake at the county level in the US when controlling for demographic and SDOH variables. Additionally, although there are rationales to support the idea that the influence of social capital on vaccine uptake may vary by urbanization level, there are limited data testing this hypothesis. Therefore, based on national data, the current study aims to answer two main questions: (1) Is social capital associated with vaccination uptake when controlling for demographic characteristics and SDOH variables? and (2) Does the effect of social capital on vaccination uptake vary by urbanization level?

Data Sources

Data utilized in the current study come from five publicly available dataset sources (Table A1). Vaccination rates were extracted from the CDC's US county and local estimates for vaccine hesitancy as of 12 December 2021 [27]. Demographic information was compiled from the US Census American Community Survey (ACS), which includes five-year population estimates (2015-2019) [25]. Variables representing SDOH were extracted from Emory University's AIDSVu 2018 County SDOH data [28]. Social capital variables were compiled utilizing data from the Social Capital Index Project, which aimed to understand the geography of social capital in America [14]. Finally, urbanization level was assessed using data from the CDC's National Center for Health Statistics (NCHS) 2013 Rural Classification Scheme for Counties in the United States [29].

Measures

• Vaccination uptake rate: The county-level vaccine uptake rate was the percent of adults (aged ≥ 18 years) fully vaccinated with any Food and Drug Administration (FDA)-authorized COVID-19 vaccine (i.e., who have had the second dose of a two-dose vaccine series or one dose of a single-dose vaccine), based on the jurisdiction and county where the recipient resides [30].
• Demographic characteristics: Population demographic characteristics including gender, race, and ethnicity were aggregated to the county level.

• SDOH: The SDOH variables included percentage of the population living in poverty [31], high school education (i.e., percent of the population with a high school degree or equivalent) [32], median household income [31], the Gini coefficient (i.e., a measure of income inequality, with 0 reflecting complete equality and 1 reflecting complete inequality) [33,34], and percentage of the population without health insurance [35].

• Social capital: The Social Capital Index was employed to estimate social capital. This index was developed by the Social Capital Index Project in the United States and is composed of indicators in multilevel domains, including family unity (e.g., births in the past year to women who were unmarried and children living with a single parent), community health (e.g., number of nonreligious, nonprofit organizations), institutional health (e.g., voting rates in presidential elections, mail-back census response rates), and collective efficacy (e.g., violent crimes per 100,000 population) [14].

• Urbanization level: The urbanization level across counties was measured using the CDC's National Center for Health Statistics (NCHS) Rural Classification Scheme for Counties [30], a six-level urban-rural classification scheme for US counties and county-equivalent entities. The six categories consist of: (1) large central metropolitan, (2) large fringe metropolitan, (3) medium metropolitan, (4) small metropolitan, (5) micropolitan, and (6) noncore. For data analysis, we further grouped categories 1 and 2 into "large urban" areas, categories 3 and 4 into "small urban" areas, and categories 5 and 6 into "rural" areas.

Data Analysis

Given the various scales of existing measures, we first normalized all the variables by adjusting values measured on different scales to a common scale ranging from 0 to 1 using Min-Max scaling. Bivariate analyses were performed to explore the relationship between potential predictors and their associations with vaccination rates, using Pearson correlation for continuous variables. A multivariable regression analysis was then used to examine the significance of various factors on vaccination rates. Quasi-binomial models were applied, since the dependent variable is the proportion of adults fully vaccinated against COVID-19 and the proportion is conceived as the outcome of multiple binomial trials in the quasi-binomial model. Three sequential models were employed in the regression analysis. In Model 1, only demographic characteristics were utilized. In Model 2, SDOH variables were included in addition to the demographic characteristics of Model 1. In Model 3, social capital variables (in their various domains) were added to Model 2. To further explore whether the impacts of social capital on vaccination rate vary by urbanization level, we stratified the final regression model by urbanization level using Model 4 (for rural areas), Model 5 (for small urban areas), and Model 6 (for large urban areas). The alpha level was set to 0.05 for regression analyses (two-tailed). To assess multicollinearity, the variance inflation factor (VIF) was calculated for Models 3 to 6. Forest plots were created based on the final regression models to visualize the associations of COVID-19 vaccination rate with all the factors, including social capital variables, demographic variables, and SDOH. Data were analyzed using R (version 19.0).
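A sketch of the pipeline just described, assuming hypothetical column names; statsmodels' binomial GLM with a freely estimated (Pearson chi-square) scale is used here as a stand-in for R's quasibinomial family.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("county_data.csv")  # hypothetical county-level file
predictors = ["pct_black", "pct_hispanic", "median_income",
              "pct_uninsured", "social_capital_index"]

# Min-Max scaling of predictors to a common 0-1 range
X = (df[predictors] - df[predictors].min()) / (
    df[predictors].max() - df[predictors].min())
X = sm.add_constant(X)
y = df["pct_fully_vaccinated"] / 100.0  # proportion outcome

model = sm.GLM(y, X, family=sm.families.Binomial())
result = model.fit(scale="X2")  # Pearson chi2 scale ~ quasi-binomial dispersion
print(result.summary())
```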
Bivariate Analysis

Correlations among continuous variables are shown in Table A2. Vaccination rate was associated with gender, including percentage of females (r = 0.05); race/ethnicity, including percentage of non-Hispanic Black (r = −0.12), percentage of non-Hispanic Asian (r = 0.24), percentage of non-Hispanic Native Hawaiian/Pacific Islander (r = −0.10), and percentage of Hispanic populations (r = 0.14); SDOH, including percentage living in poverty (r = −0.23), percentage completing high school (r = 0.29), median household income (r = 0.38), percentage uninsured (r = −0.33), percentage unemployed (r = 0.05), and percentage living with severe housing cost burden (r = 0.15); and social capital factors, including percentage of births to unmarried women (r = −0.06), number of non-religious non-profit organizations per 1,000 population (r = 0.12), number of religious congregations per 1,000 population (r = −0.32), presidential election voting rates in 2012 and 2016 (r = 0.21), and mail-back census response rates (r = 0.16). In addition, vaccination rate varied by urban status (r = −0.27), where larger urban areas had a higher vaccination rate.

Stratification Analysis

The results of the stratified models are presented in Table A4 and Figure A1. Model 6 has an adjusted R2 of 0.569. The variance inflation analysis did not suggest multicollinearity (VIF values ranged from 1.03 to 7.62 across all steps).

Interpretation of Results

The current study suggests that social capital (especially its structural-level and community-level domains) contributes significantly to the disparities in vaccination uptake in the US when adjusting for demographic characteristics and SDOH. The results of the stratification analysis, by urban-rural categorization, show common predictors (e.g., race/ethnicity, health insurance, educational attainment) of vaccine uptake but also suggest varying patterns across rural, small urban, and large urban areas in the associations between COVID-19 vaccine uptake, SDOH, and social capital factors. The models suggest race as a strong predictor of the COVID-19 vaccination rate [36]. This is reflected in that, regardless of urbanization level, higher proportions of Hispanic and non-Hispanic Asian populations were associated with higher rates of full vaccination against COVID-19, while higher proportions of non-Hispanic Native Hawaiian and Pacific Islander and non-Hispanic Black populations were associated with lower vaccination rates. The SDOH were also found to play a significant role in the likelihood of vaccination. For instance, education and median household income were found to facilitate vaccine uptake, while lack of health insurance impeded uptake. As household income is related to education, level of education may influence individuals' knowledge and perceptions regarding COVID-19 vaccines [7]. Both education level and its associated knowledge and perceptions may contribute to vaccination hesitancy among those with limited health literacy or with mistrust in science and health agencies [37]. Additionally, a significant finding is that, although the COVID-19 vaccine was administered at no cost to all people in the US, being uninsured was associated with low vaccination uptake. This may be due to a lack of inclusion of uninsured individuals within existing health systems, acting as a barrier to accessing health-related information and healthcare services [38]. Similarly, the relationships between social capital factors and COVID-19 vaccination rates are complex.
Among factors of the community health domain, non-religious nonprofit organizations were found to be related to higher vaccination rates, likely reflecting the presence of healthcare resources in communities and of organizations that address SDOH-related disadvantages. Alternatively, religious congregations were found to be associated with a decreased vaccination rate, likely reflecting opposition to, and doubt cast on, vaccination within some religious organizations. Among factors of the institutional domain, participation in elections was associated with increased vaccination uptake. Finally, when assessing the role of family unity, the percentage of births to unmarried women was associated with increased vaccination uptake. These findings of mixed associations between social capital and vaccine uptake reflect interesting and sophisticated decision-making dynamics, demonstrating the roles of risk perceptions, social support, and community resources [13]. Generally, those with more social capital are assumed to be more likely to overcome the barriers associated with vaccination uptake through access to social networks and community support [13]. However, when individuals are living in a harsh social environment (e.g., being unemployed, being the only adult taking care of children, etc.), they may feel more vulnerable and susceptible to the impacts of the COVID-19 pandemic, leading to its perception as a more severe health threat. Thus, fearing the consequences of infection and following public health guidance, such individuals rightfully utilize the vaccine to protect themselves from infection. When stratified by urbanization level, we found mixed results regarding the associations between COVID-19 vaccination uptake, SDOH, and social capital factors. High school education was associated with increased vaccine uptake, while lack of health insurance was found to impede vaccination uptake at all levels of urbanization considered: rural, small urban, and large urban. In rural areas, living in poverty, high median income, and unemployment status predicted a higher vaccination rate. Within small urban areas, living with severe housing cost burden was associated with increased vaccination rates, while living in poverty was found to be a barrier to vaccination. In large urban areas, there were no additional associations found between vaccination uptake and economic or employment status. The associations between vaccination uptake and social capital factors were also complicated by rural/urban status. First, more social capital variables were related to vaccine uptake rates among rural counties compared to the small and large urban counties. This finding implies that social capital may have more of an impact on individuals' health-related behaviors, such as vaccine uptake, in rural counties [39]. Second, among varying degrees of urbanization, vaccine uptake was associated with different social capital variables. For example, non-religious non-profit organizations and religious congregations were associated with increased and decreased vaccine uptake, respectively, in rural and small urban areas but not in large urban areas. Similarly, presidential election voting rates were associated with increased vaccination uptake in both rural and small urban areas but not in large urban areas. Additionally, the percentage of births to unmarried women was associated with increased vaccination uptake in rural and large urban areas but was found to have no association in small urban areas.
These differentiated patterns may be explained by varying degrees of community heterogeneity, based on the level of urbanization, with respect to the diversity of religions and social environments.

Public Implications

Our findings show a complex interplay between COVID-19 vaccination uptake, SDOH, and social capital factors. These mixed results were unexpected and require further exploration. However, these initial findings may imply that more SDOH factors affect vaccination uptake among people in rural and small urban areas compared to their counterparts in large urban areas. Similarly, we found that more social capital variables were associated with vaccine uptake rate in rural counties than in small and large urban counties. These findings should be considered in the policy-making process, in that SDOH factors and social capital factors may be differentiated by urban status, with the assumption that rural areas may be more vulnerable to the disadvantages of SDOH and low-level social capital. On the other hand, these findings may inform tailored public health intervention efforts to enhance social capital and community resilience and to reduce health disparities in vaccination. Investment in social capital building may benefit rural counties more in terms of improving health behaviors and enhancing health outcomes.

Strengths and Limitations

The current study was based on a national-level dataset, which enabled us to show a comprehensive picture of disparities in vaccination rates across counties. Another strength of this work is the utilization of a hierarchical regression analysis to investigate the impact of various SDOH and social capital dimensions on vaccine uptake. Further, the stratification by urbanization level allowed us to further explore how social capital affects vaccine uptake rates in different social and geospatial environments. Our study is also subject to several limitations. First, we were not able to include individual-level factors such as risk perceptions and attitudes toward the vaccine in the data analysis, which limits our ability to explore the mechanisms by which social capital affects vaccine uptake. Second, there were missing data on some key social capital measures, which may affect the final regression results. Third, we must be cautious when interpreting the results of the stratification analysis, because the sample sizes of Models 4, 5, and 6 were quite different: the number of rural counties was 1,589, small urban 659, and large urban 391. These varying sample sizes resulted in differential statistical power.

Future Research and Directions

Despite these limitations, our study demonstrates how different social capital domains may contribute to explaining the disparities in vaccine uptake based on national-level data, and it explores how the impacts of social capital on vaccine disparities may vary by urbanization level in the United States. The study provides a new perspective for addressing disparities in vaccination uptake through fostering social capital within communities. Further studies are needed to advance our understanding of the various associations between social capital and vaccine uptake rates by urbanization level. For example, future analyses should control for individual variables, including health beliefs about vaccination, perceptions of COVID-19 vaccines, and perceived susceptibility to and severity of COVID-19 infection.
For example, extant literature suggests that fear of side effects of the COVID-19 vaccine is one of the critical reasons for vaccination reluctance [40]. In addition, we need to consider the influence of government policy and stimulation strategies across states, as government stimulation of vaccination uptake varied significantly by state under the federal administration of the COVID-19 vaccine [41]. Various incentive/redress policies could also influence vaccination rates at the state level, which should then also be adjusted and controlled for in future data analyses.

Note: * p < 0.05, ** p < 0.01, *** p < 0.001.

Figure A1. Forest plots of Model 3 to Model 6. Notes: The logarithm of the odds ratio was used in developing the forest plots, given the large values of some odds ratios. Zero, instead of one, was therefore used as the criterion of significance.
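The figure note's shift from OR = 1 to log(OR) = 0 as the significance reference is illustrated below with invented odds ratios.

```python
# After log transformation, the "no effect" reference moves from 1 to 0.
import math

odds_ratios = {"high_school_pct": 3.2, "pct_uninsured": 0.4, "voting_rate": 1.9}
for name, or_ in odds_ratios.items():
    log_or = math.log(or_)
    direction = "positive" if log_or > 0 else "negative"
    print(f"{name}: OR = {or_}, log(OR) = {log_or:+.2f} ({direction} association)")
```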
A Comprehensive Evaluation of the Readability of Online Healthcare Materials Regarding Distal Radius Fractures

Introduction

Distal radius fractures (DRFs) are among the most common upper limb fractures reviewed in the emergency and orthopaedic departments. Approximately 40% of these fractures are unstable and require fixation to improve limb function. Confronted with an impending operation, many patients will access the internet, looking for information and reassurance. Previous studies have suggested that orthopaedic healthcare websites are beyond the comprehension of their target audience.

Objective

To assess the readability of healthcare websites regarding DRFs.

Methods

The terms distal radius fracture, broken wrist and wrist fracture were searched on Google and Bing. Of 101 websites initially considered, 52 unique websites underwent evaluation using readability software. Websites were assessed using two common methods for assessing readability: the Reading Grade Level (RGL) and the Flesch Reading Ease Score (FRES). In line with recommended guidelines and previous studies, an RGL of sixth grade or under and a FRES score above 65 were considered acceptable.

Results

The mean score for the FRES index was 56.67 (SD: ± 19.6), which resulted in the majority of pieces assessed being classified as 'fairly difficult to read'. The mean RGL was 8.61 (SD: ± 2.86); 17.3% of the websites assessed fulfilled the criterion of having an RGL of six or less. One-way t-tests comparing the FRES and RGL mean scores against the acceptable standards showed that they failed to meet the acceptable indexes (FRES: P<0.004; 95% CI: -13.8 to -2.8; RGL: P<0.0001; CI: 1.8-3.4). ANOVA testing showed no significant difference based on category (FRES: P=0.791; RGL: P=0.101).

Conclusion

The level of comprehension required for online healthcare education materials related to distal radius fractures exceeds the recommended guidelines. Improving the readability of these websites would enhance the internet's usability as an educational tool as well as improve patient post-operative outcomes.

Introduction

Distal radius fractures (DRFs) are extremely common orthopaedic injuries, accounting for up to 18% of all fractures in the elderly age group and 25% of all upper limb fractures encountered in the emergency department [1,2]. The higher incidence of this fracture type in the elderly results in substantial issues with loss of independence and function, as well as increased health care costs [3]. Based on extensive research and clinical practice, orthopaedic surgeons have developed multiple approaches for the treatment of distal radius fractures, including both conservative and non-conservative options depending on patient needs and co-morbidities [4]. Surgical treatment is used to restore function and allow for early wrist mobilisation. However, like all surgeries, it must be acknowledged that this is not without risk [4]. Confronted with a sudden reduction in function and independence as well as an impending operation, many patients may become understandably overwhelmed and frightened. They may be confused by the information clinicians are providing but too embarrassed to ask further questions or seek clarity. Instead, these patients and their families will peruse the internet as a 'quasi-second opinion' in an attempt to gain more understanding of their injury and treatment [5].
Considering that internet penetrance is due to reach approximately 97% by 2023, and that research shows that 90% of patients believe the internet to be a reliable source of healthcare education and information, it is of the utmost importance for information on the internet to be as inclusive and accessible as possible if we, as physicians and health advocates, are to ensure adequate health literacy [3][4][5][6][7]. Previous research in this area has shown that this is often not the case; orthopaedic-related information presented on the internet in particular has been shown to vary widely in terms of accuracy, quality and readability. Health literacy is defined as the comprehension of basic health information to a level of competence that allows the patient to use the information provided to make decisions that improve their health [8]. Previous studies in this field have comprehensively shown that lower levels of health literacy are keenly associated with increased post-operative complications and reduced rehabilitation compliance [6,7,[9][10][11][12]. Patients with poorer health literacy are also more likely to re-present to the hospital, have increased inpatient stay lengths, increased post-operative morbidity and mortality, and lower post-operative satisfaction [6,7,[9][10][11][12]. All of these negative associated outcomes result in increased healthcare costs [13][14][15]. It can thus be surmised that improving the readability of a text, the ease with which it is read and understood, is paramount to improving health literacy and positively impacting patients' resilience in the face of an impending surgery. Health literacy guidelines have previously been published by the United States Department of Health and Human Services (USDHSS) and by the National Institutes of Health (NIH) [6,7,15,16]. According to these institutions, over 88% of Americans are unable to fully understand the information provided to them regarding their health [6,7]. In a bid to combat the negative outcomes and high costs that may be associated with this health 'illiteracy', the USDHSS recommends that all patient education materials be written at a reading grade level (RGL) of no higher than the sixth grade [6,7,15]. However, previous studies in the area have shown that healthcare educational websites are rarely adherent to this criterion [5][6][7][16][17][18][19][20][21]. Based on our literature search, we found only one other paper which examined the readability of information on DRFs [19]; it demonstrated a readability standard beyond the comprehension of most adults. However, that study was conducted nearly a decade ago; within that time frame, many more people have gained access to and knowledge of the internet and are now more comfortable with it. Furthermore, the internet is constantly and rapidly updating and evolving, with every possibility that newer, more accessible healthcare websites have been created since the previous review occurred. In light of these updates and the potential improvement that may have occurred over the last decade, the aims of this study are two-fold: firstly, to evaluate the readability of healthcare information on the internet with regards to DRFs, and secondly, to determine whether there has been an improvement over the last decade.
Materials And Methods

In July 2021, the terms distal radius fracture, broken wrist and wrist fracture were searched using the two most popular search engines (Google and Bing), and, as per previous similar studies, the first two pages of website hits from each search term were evaluated (n=101) [6,7,21]. This limitation was applied based on evidence from previous studies demonstrating that the majority of people do not scroll beyond the first two pages of website hits when researching something, and that most people only look at the first page of hits [6,7,[19][20][21]. Table 1 shows the number of hits returned for each search engine and each search term.

Table 1: Returned results by search string.

Search string                      Returned results
Google & distal radius fracture    5,550,000
Google & wrist fracture            430,000,000
Google & broken wrist              684,000,000
Bing & distal radius fracture      686,000
Bing & wrist fracture              5,360,000
Bing & broken wrist                5,240,000

Duplicate websites were removed, and medical journals, sites requiring logins, and sites composed solely of videos were also excluded; previous authors had discerned that medical journals, with their extremely poor readability and accessibility indexes, require a significantly higher level of education to read and understand and would thus be beyond the capability of the majority of the population [6,7,[17][18][19][20][21]. Of the initial 101 websites, 52 unique web pages were identified as meeting the inclusion criteria and underwent further in-depth analysis [6,7,21]. A flow diagram showing a breakdown of this methodology is shown in Figure 1.

FIGURE 1: Website identification flow chart.

The websites were then categorised into academic, physician, non-physician, commercial, non-profit, media and news, social media and non-specified groupings [6,7,21]. 'Academic' refers to any website linked to a university, while 'Physician' describes any private website owned by a doctor [6,7,[19][20][21]. 'Non-physician' websites are those created by other multidisciplinary team members such as physical therapists, occupational therapists or radiographers. 'Commercial' denotes websites which contained advertising or products to sell. 'Social media' is an umbrella category encompassing Facebook, Instagram and TikTok website hits, among others, acknowledging their influence in the modern era. Sites that did not fall into any of the above categories were classed as 'Unspecified' [6,7,21]. Once classified, the websites were uploaded into the online readability software (WEB FX) [6,7,22]. This software was then used to produce two readability scores for each of the websites: a Reading Grade Level (RGL) and a Flesch Reading Ease Score (FRES). The FRES is an index score used to determine the difficulty of reading and comprehending any passage in English; this is done based on the number of syllables and the length of the sentences in each passage. It also accounts for the number of complex words in each passage. Complex words were defined as words with greater than three syllables or greater than six characters. Overly long sentences are defined as those with a word count greater than 22 words. The FRES is the only readability testing metric where a higher score indicates increased readability; a score of 65 or greater is considered acceptable [6,7,[21][22]. A breakdown of the FRES scoring system and its interpretation is shown in Table 3.
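For illustration, the standard Flesch Reading Ease formula can be computed as shown below. The syllable counter is a crude heuristic, and the WEB FX tool used in this study will differ in detail.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fres(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835 - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

sample = ("Your wrist has a broken bone. We will put your arm in a cast. "
          "The bone should heal in about six weeks.")
print(f"FRES = {fres(sample):.1f}")  # scores of 65 or more are acceptable
```

Short sentences built from one- and two-syllable words, as in the sample above, score well above the 65 threshold; dense clinical prose scores far lower.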
The Reading Grade Level (RGL) is a cumulative score for the readability of a passage; it refers to the ease with which a person can read and understand a document or passage on the first pass [6,7,19]. As per previous studies, all reading grade levels are reported in terms of the US educational system and denote the number of years of formal schooling a person would need to easily read and comprehend the text [6,7,15,16]. As previously stated, it is recommended that healthcare-related materials be written at no more than a sixth-grade level of education [6,7,15,16]. To further determine accessibility, each website was assessed for translation services and, if offered, how many translations were available. Once these data had been collected, statistical analysis was undertaken using SPSS version 26 (SPSS, Chicago, IL) [23]. The level of statistical significance was set at 5%. ANOVA testing was performed between groups and, where this achieved significance, post-hoc statistics were undertaken. A score of 65 or higher was determined to be acceptable for the FRES; this acceptable standard was compared to the findings using a one-way t-test [6,7]. RGL was compared to the sixth-grade standard using a one-way t-test [6,7].

Results

Of the initial 101 websites considered, 52 unique websites were evaluated using the readability tool [22]. These included 26 academic websites, 10 non-physician websites, three commercial, five non-profit and eight news and media websites. No physician, social media or unspecified websites were categorised. Of the 52 websites assessed, only 19 (36.5%) had a FRES score that met or exceeded the acceptable standard of 65. The mean FRES index score was 56.67 (SD: ± 19.6), which resulted in the majority of websites assessed being classified as 'fairly difficult to read' (Figure 2). Eighteen of the reviewed websites (34.61%) had FRES scores between 30 and 50, suggesting a college-level education would be required to read and interpret them [6,7]. As shown in Figure 2, the highest-scoring category for the FRES index was academic websites, i.e. those linked to universities and teaching hospitals. A one-way t-test comparing the FRES mean to the standard showed it was significantly below the recognised acceptable index (P<0.004; 95% CI: -13.8 to -2.8). An ANOVA showed no significant difference between FRES scores based on categories (P=0.791). With regard to the RGL, the mean score was 8.61 (SD: ± 2.86), the equivalent of an eighth-grade level of education [6,7]. Only 17.3% of the websites assessed fulfilled the acceptable criterion of having an RGL of six or less. Figure 3 demonstrates that the worst RGL scores were in the non-profit category, while the best were in the academic websites, followed by non-physician or allied health professional websites. As per previous studies [6,7], one-way t-tests demonstrated that these scores were significantly higher than the acceptable standard (P<0.0001; CI: 1.8-3.4) [6,7]. ANOVA testing showed no significant difference based on category (P=0.101).
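The one-sample comparisons reported above can be reproduced in outline as follows; the scores are simulated from the study's reported means and SDs rather than taken from the 52 measured websites.

```python
import numpy as np
from scipy import stats

# Simulated stand-ins matching the reported mean and SD for 52 websites
fres_scores = np.random.default_rng(1).normal(56.67, 19.6, 52)
rgl_scores = np.random.default_rng(2).normal(8.61, 2.86, 52)

t_fres, p_fres = stats.ttest_1samp(fres_scores, 65)  # acceptable FRES standard
t_rgl, p_rgl = stats.ttest_1samp(rgl_scores, 6)      # sixth-grade RGL standard
print(f"FRES vs 65: t = {t_fres:.2f}, p = {p_fres:.4f}")
print(f"RGL  vs 6 : t = {t_rgl:.2f}, p = {p_rgl:.4f}")
```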
Discussion

Distal radius fractures are among the most common injuries encountered in orthopaedic practice, accounting for approximately 25% of all upper limb fractures reviewed in the emergency department [1][2][3]. Patients require access to highly comprehensible educational materials to ensure that they can give their full and explicit consent for surgery [4][5][6][10][11][12][24]. Physicians must also be cognizant that many of their patients will have access to healthcare education materials on the internet which may provide invalid and inaccurate information, and that this can have a detrimental effect on patient outcomes and the patient-physician relationship if not addressed [6,7,12,24]. Similar to the trends exhibited in previous studies [11,[16][17][18][19][20][21], this research showed that the majority of available health education websites containing information about DRFs are beyond the comprehension levels of the majority of the population. With both the FRES and RGL scoring significantly worse than the recommended standards, patients seeking additional information online run the risk of becoming confused and overwhelmed. This may affect their compliance with post-operative instructions and rehabilitation [6,7]. A lack of credible and accessible healthcare education material may also potentiate the risk of complications or lead to patients developing cyberchondria [25]. It is considerably frustrating that, despite guidelines provided by the NIH and USDHSS on the appropriate reading levels for these health education materials, the majority of the websites included in the study (82.7%) exceed them [13,14]. While this trend has improved since a previous study done in 2012 [20], which at the time of publication showed that 92% of websites exceeded the requirements, it can hardly be appraised as a positive change when we consider that almost a decade has passed with only a 10% improvement in the readability of the provided online healthcare education materials. When contemplated alongside the increasing number of people who have gained access to the internet during that timeframe and the forecast 97% internet penetrance by 2023, it can perhaps be theorised that no real progress has been made in improving the readability of health education materials overall. Furthermore, it must be acknowledged that this study is not without limitations. Only the first two pages of each conducted search were analysed; while this was consistent with previous methodologies, it may also mean high-quality pages on later pages were excluded. The software used also determines the difficulty and readability of the websites based on the letters per word, syllables per word and number of words per paragraph [6,7,21,24]. This means that everyday words such as 'disagreement' may generate a higher RGL than words with fewer syllables and letters such as 'physis', which is a medical term and would be poorly understood by the general public [6,7,24].

Conclusions

The information provided on healthcare websites is beyond the scope of understanding of most potential patients and does not adhere to recommended guidelines. Despite a decade of guidance and advancements in accessibility on the internet, the improvement in the readability of information surrounding distal radius fractures has been a mediocre 10%. Steps should be taken to improve the readability of healthcare education materials based on the provided guidelines in a bid to improve post-operative satisfaction and compliance; this could be done by physicians creating their own accurate educational materials in line with the correct readability standards, or by providing additional education about the potential pitfalls of consulting "Dr Google" when preparing for an operation.
Additional Information

Disclosures

Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Blind Decoding of Multiple Description Codes over OFDM Systems via Sequential Monte Carlo

We consider the problem of transmitting a continuous source through an OFDM system. Multiple description scalar quantization (MDSQ) is applied to the source signal, resulting in two correlated source descriptions. The two descriptions are then OFDM modulated and transmitted through two parallel frequency-selective fading channels. At the receiver, a blind turbo receiver is developed for joint OFDM demodulation and MDSQ decoding. A transformation of the extrinsic information of the two descriptions is exchanged between the two detectors to improve system performance. A blind soft-input soft-output OFDM detector is developed, based on the techniques of importance sampling and resampling. Such a detector is capable of exchanging the so-called extrinsic information with the other component in the above turbo receiver, successively improving the overall receiver performance. Finally, we also treat channel-coded systems, and a novel blind turbo receiver is developed for joint demodulation, channel decoding, and MDSQ source decoding.

INTRODUCTION

Multiple description scalar quantization (MDSQ) is a source coding technique that can exploit the diversity of communication systems to overcome channel impairments. An MDSQ encoder generates multiple descriptions for a source and sends them over the different channels provided by the diversity system. At the receiver, when all descriptions are received correctly, a high-quality reconstruction is possible. In the event of failure of one or more of the channels, the reconstruction would still be of acceptable quality. The problem of designing multiple description scalar quantizers is addressed in [1,2]: a theoretical performance bound is derived in [1], and practical design methods are given in [2,3]. Conventionally, MDSQ has been investigated only from the perspective of transmission over erasure channels, that is, channels which either transmit noiselessly or fail completely [1,2,4]. Recently, it was shown in [5] that an MDSQ can be used effectively for communication over slow-fading channels. In that system, a threshold on the channel fade values is used to determine the acceptability of the received description, and the signal received from the bad connection is not utilized at the receiver. In this paper, we propose an iterative MDSQ decoder for communication over fading channels, where the extrinsic information of the descriptions is exchanged between the detectors by exploiting the correlation between the two descriptions. Although the MDSQ coding scheme provided in [2] is optimized under the constraint of erasure channels, it provides a very useful correlation property between the different descriptions. Therefore, the same MDSQ scheme is applied to the continuous fading environment considered in this paper [6,7,8]. Providing high-data-rate transmission is a key objective for modern communication systems. Recently, orthogonal frequency-division multiplexing (OFDM) has received a considerable amount of interest for high-rate wireless communications. Because OFDM increases the symbol duration and transmits data in parallel, it has become one of the most effective modulation techniques for combating multipath delay spread over mobile wireless channels.
In this paper, we consider the problem of transmitting a continuous source through an OFDM system over parallel frequency-selective fading channels. The source signals are quantized and encoded by an MDSQ, resulting in two correlated descriptions. These two descriptions are then modulated by OFDM and sent through two parallel fading channels. At the receiver, a blind turbo receiver is developed for joint OFDM demodulation and MDSQ decoding. A transformation of the extrinsic information of the two descriptions is exchanged between the two detectors to improve system performance; the transformation is expressed in terms of a transformation matrix which describes the correlation between the two descriptions. Another novelty in this paper is the derivation of a blind detector based on a Bayesian formulation and sequential Monte Carlo (SMC) techniques for the differentially encoded OFDM system. Being soft-input and soft-output in nature, the proposed SMC detector is capable of exchanging the so-called extrinsic information with the other component in the above turbo receiver, successively improving the overall receiver performance. For a practical communication system, channel coding is usually applied to improve reliability. In this paper, we also treat a channel-coded OFDM system, where each stream of the source description is channel encoded and then OFDM modulated before being sent to the channel. At the receiver, a novel blind turbo receiver is developed for joint demodulation, channel decoding, and source decoding. The rest of this paper is organized as follows. In Section 2, the diversity OFDM system with an MDSQ encoder is described. In Section 3, the turbo receiver is discussed for the MDSQ-encoded OFDM system. In Section 4, we develop an SMC algorithm for blind symbol detection of OFDM systems. A turbo receiver for a channel-coded OFDM system is derived in Section 5. Simulation results are provided in Section 6, and a brief summary is given in Section 7.

SYSTEM DESCRIPTION

We consider transmitting a continuous source through a diversity OFDM system. The diversity OFDM system is made up of two N-subcarrier OFDM systems, signalling through two parallel frequency-selective fading channels. Such a parallel channel structure was first introduced in [9]. A block diagram of the system is shown in Figure 1. A sequence of continuous sources $\{S(j)\}$ is encoded by a multiple description scalar quantizer (MDSQ), resulting in two sets of equal-length indices $\{(I_1(j), I_2(j))\}$, where $j$ denotes the sequence order. The detailed MDSQ encoder will be discussed in Section 2.1. These indices can be further described as a binary sequence $\{(x_n^1, x_n^2)\}$ with the order denoted by $n$. The bit interleavers $\pi_1$ and $\pi_2$ are used to reduce the influence of error bursts at the input of the MDSQ decoder. After the interleaved bits $\{a_n^1\}$, $\{a_n^2\}$ are modulated by OFDM, we use the parallel concatenated transmission scheme shown in Figure 1; that is, one description of the source is transmitted through one channel and the other description is transmitted through the other channel. At the receiver, the OFDM demodulators, which will be discussed in Section 4, generate soft information, which is then exchanged between the two OFDM detectors in the form of a priori probabilities of the information symbols. Next, we will focus on the structure of the MDSQ encoder and the diversity OFDM system.

Multiple description scalar quantizer

The conventional MDSQ model is shown in Figure 2. The channel model consists of two channels that connect the source to the destination.
Either channel may be broken or lossless at any time. The encoder of an MDSQ sends information over each channel at a rate of R bits/sample. Based on the decoder structure shown in Figure 2, the objective is to design an MDSQ encoder so as to minimize the average distortion when both channels are lossless (central distortion), subject to a constraint on the average distortion when only one channel is lossless (side distortion). Next, we give a brief summary of the MDSQ design presented in [2]. Denote an index set $\mathcal{I} = \{1, \ldots, 2^R\}$ and let $C \subseteq \mathcal{I} \times \mathcal{I}$ be the set of index pairs used by the index assignment. Assume a uniform quantizer. The main issues in MDSQ design are the choice of the set $C$ and the index assignment $\alpha(\cdot)$, which maps each quantizer cell to a pair of indices in $C$. Following [2], an example of a good assignment for R = 3 bits/sample is illustrated in Figure 3. We assume that the cells of a quantizer are numbered $1, 2, \ldots, N$, in increasing order from left to right as shown in Figure 3d. Intuitively, with a larger set $C$, the central distortion will be improved at the expense of degraded side distortion. With the same size of the set $C$, the central distortion is fixed, and a diagonal-like assignment is preferred to minimize the side distortion.

Multiple description scalar quantizer for diversity fading channels

Although MDSQ was originally designed for diversity erasure channels, it provides a possible solution that combines source coding and channel coding to exploit the diversity provided by communication systems. Next, we consider the application of MDSQ techniques to diversity fading channels. At the transmitter, we apply the MDSQ encoder as in the conventional setting (cf. Figure 2). For each continuous source $S(j)$, a pair of indices $(I_1(j), I_2(j))$ is generated by the MDSQ, and is further mapped to the binary bits $\{x_n^1, x_n^2\}_{n=(j-1)R+1}^{jR}$. Recall that R denotes the bit-length of each description. At the receiver, instead of using the side decoders and the central decoder, a soft MDSQ decoder is employed for MDSQ over fading channels. It is assumed that a soft demodulator is available at the receiver, which generates the a posteriori probability $P(x_n^i \mid Y)$ for each bit $x_n^i$, where $Y$ denotes the received signal given by (3). Based on this posterior information, the soft MDSQ decoding rule is given by
$$(\hat{I}_1(j), \hat{I}_2(j)) = \arg\max_{(l_1,\, l_2) \in C} P\big(I_1(j) = l_1,\, I_2(j) = l_2 \mid Y\big),$$
which maximizes the posterior probability of the indices subject to a code structure constraint, that is, $(I_1(j), I_2(j)) \in C$.

Signal model for diversity OFDM system

Consider an OFDM system with N subcarriers signaling through a frequency-selective fading channel. The channel response is assumed to be constant during one symbol duration. The block diagram of such a system is shown in Figure 4. The diversity OFDM system is simply the parallel combination of two such OFDM systems. The binary information data $\{a_n^i\}_n$ are grouped and mapped into multiphase signals, which take values from a finite alphabet set $\mathcal{A} = \{\beta_1, \ldots, \beta_{|\mathcal{A}|}\}$. In this paper, QPSK modulation is employed. The QPSK signals $\{d_k^i\}_{k=0}^{N-2}$ are differentially encoded to resolve the phase ambiguity inherent in any blind receiver, and the output is given by
$$Z_k^i = Z_{k-1}^i \, d_{k-1}^i, \qquad k = 1, \ldots, N-1,$$
with $Z_0^i$ serving as the reference symbol. These differentially encoded symbols are then inverse DFT transformed. A guard interval is inserted to prevent possible interference between OFDM frames. After pulse shaping and parallel-to-serial conversion, the signals are transmitted through a frequency-selective fading channel. At the receiver end, after matched-filtering and removing the guard interval, the sampled received signals are sent to a DFT block to demultiplex the multicarrier signals.
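Before turning to the receiver processing, the index assignment just described can be made concrete with a small sketch. The toy below builds a diagonal-like assignment matrix for R = 3 (our own simplified two-diagonal construction, not the optimized assignments of [2]) and shows central and side decoding of a uniformly quantized sample.

```python
import numpy as np

R = 3                       # bits per description
M = 2 ** R                  # 8 index values per channel

# Fill cells along the diagonals |i - j| <= 1 of the M x M index matrix,
# a simplified diagonal-like assignment; this yields 22 occupied cells.
pairs = [(i, j) for i in range(M) for j in range(M) if abs(i - j) <= 1]
pairs.sort(key=lambda p: (p[0] + p[1], p[0]))       # order cells left to right
N = len(pairs)                                      # number of quantizer cells

cell_to_pair = {c: pairs[c] for c in range(N)}          # encoder alpha(.)
pair_to_cell = {p: c for c, p in cell_to_pair.items()}  # central decoder

def encode(sample, lo=-1.0, hi=1.0):
    """Uniform quantizer over (lo, hi) followed by the index assignment."""
    cell = min(int((sample - lo) / (hi - lo) * N), N - 1)
    return cell_to_pair[cell]

def central_decode(i1, i2, lo=-1.0, hi=1.0):
    """Both indices received: recover the exact cell midpoint."""
    cell = pair_to_cell[(i1, i2)]
    return lo + (cell + 0.5) * (hi - lo) / N

def side_decode(index, which, lo=-1.0, hi=1.0):
    """Only one index received: average the cells consistent with it."""
    cells = [c for c, (a, b) in cell_to_pair.items()
             if (a if which == 1 else b) == index]
    return np.mean([lo + (c + 0.5) * (hi - lo) / N for c in cells])

i1, i2 = encode(0.37)
print(i1, i2, central_decode(i1, i2), side_decode(i1, 1))
```

The diagonal structure is what makes the two indices strongly correlated: given one index, the other can take only a few neighboring values, which is exactly the property the turbo receiver exploits below.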
For the ith OFDM system, with proper cyclic extensions and proper sample timing, the demultiplexed sample of the kth subcarrier can be expressed as [10]
$$y_k^i = H_k^i Z_k^i + v_k^i, \qquad H_k^i = \sum_{l=0}^{L-1} h_l^i \, e^{-j 2\pi k l / N},$$
where $h^i = [h_0^i, \ldots, h_{L-1}^i]^T$ contains the time responses of all L taps; $L = \tau_m \Delta f + 1$ denotes the maximum number of resolvable taps, with $\tau_m$ being the maximum multipath spread and $\Delta f$ being the tone spacing of the carriers; and $v_k^i$ is the ambient noise.

TURBO RECEIVER

The receiver under consideration has an iterative structure, as shown in Figure 5. It consists of two blind Bayesian OFDM detectors, which compute the soft information for the corresponding descriptions. At the output of each blind detector, information about one description is transferred to the other based on the correlation between the two descriptions. Such information transfer is then repeated between the two blind detectors to improve the system performance. Next, we focus on the operation on the first description to illustrate the iterative procedure.

Figure 5: Turbo decoding for multiple description over a diversity OFDM system; $\Pi_i$ and $\Pi_i^{-1}$ denote the interleaver and deinterleaver, respectively, for the ith description.

Blind Bayesian OFDM detector

Denote $Y^1$ as the received signals for the first description. The blind Bayesian OFDM detector for the first description computes the a posteriori probabilities of the information bits $\{a_n^1\}_n$, $P(a_n^1 = 1 \mid Y^1)$. The design of such a blind Bayesian detector will be discussed later in Section 4. For now, we assume the Bayesian detector provides us with such soft information, and focus on the structure of the turbo receiver. The a posteriori information delivered by the blind detector can be further expressed as the sum of two terms. The second term in (6), denoted by $\lambda_{21}^p[n]$, represents the a priori log-likelihood ratio (LLR) of the bit $a_n^1$ fed from detector 2; the superscript p indicates a quantity obtained from the previous iteration. The first term in (6), denoted by $\lambda_1[n]$, represents the extrinsic information delivered by detector 1, based on the received signals $Y^1$, the structure of the signal model (4), and the a priori information about all other bits $\{a_l^1\}_{l \neq n}$. This extrinsic information is then transformed into a priori information about the bits $\{a_n^2\}_n$ of the second description. This information transformation procedure is described next.

Information transformation

Assume that $\{a_n^i\}_n$ is mapped to $\{x_n^i\}_n$ after passing through the ith deinterleaver $\Pi_i^{-1}$, with $x_n^i \triangleq a_{\pi_i(n)}^i$. To transfer the information from detector 1 to detector 2, the following steps are required. (1) Compute the bit probabilities of the deinterleaved bits, $P(x_n^1 = b)$, $b \in \{0, 1\}$, from the extrinsic LLRs. (2) Compute the probability distribution of the first index $I_1$ based on the deinterleaved bit probabilities,
$$P\big(I_1(j) = l\big) = \prod_{k=1}^{R} P\big(x_{(j-1)R+k}^1 = b_k(l)\big),$$
where $\{b_k(l), k = 1, \ldots, R\}$ is the binary representation of the index $l \in \mathcal{I}$. Recall that R denotes the bit length of each description. (3) Compute the probability distribution of the second index $I_2$ according to
$$P\big(I_2(j) = m\big) = \sum_{l} P\big(I_2(j) = m \mid I_1(j) = l\big) \, P\big(I_1(j) = l\big).$$
(4) Compute the bit probabilities associated with the index $I_2(j)$. (5) Compute the log-likelihood ratios of the interleaved code bits. It is important to mention here that the key step is the calculation of the conditional probability $P(I_2(j) = m \mid I_1(j) = l)$ in (9). Hence, the proposed turbo receiver exploits the correlation between the two descriptions, which is measured by the conditional probabilities in (9). From the discussion in the previous section, these conditional probabilities can be easily obtained from the index assignment rule $\alpha(\cdot)$ shown in Figure 3.
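A compact sketch of this transformation step follows. It is illustrative only: the MSB-first bit-to-index mapping, the LLR sign convention, and the toy conditional matrix derived from a two-diagonal assignment are our assumptions rather than details fixed by the paper.

```python
import numpy as np

R = 3
M = 2 ** R

def bit_probs_from_llrs(llrs):
    """P(x = 1) for each bit, from LLRs defined as log P(x=1)/P(x=0)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(llrs)))

def index_distribution(p1):
    """Step 2: P(I1 = l) as the product of its R bit probabilities (MSB first)."""
    dist = np.ones(M)
    for l in range(M):
        for k in range(R):
            bit = (l >> (R - 1 - k)) & 1
            dist[l] *= p1[k] if bit else 1.0 - p1[k]
    return dist / dist.sum()

def transfer(llrs_desc1, cond):
    """Steps 1-5: bit LLRs for description 1 -> bit LLRs for description 2.
    cond[l, m] = P(I2 = m | I1 = l), derived from the index assignment."""
    p_i1 = index_distribution(bit_probs_from_llrs(llrs_desc1))   # steps 1-2
    p_i2 = p_i1 @ cond                                           # step 3
    llrs2 = np.empty(R)
    for k in range(R):                                           # steps 4-5
        mask = np.array([(m >> (R - 1 - k)) & 1 for m in range(M)], bool)
        p1 = p_i2[mask].sum()
        llrs2[k] = np.log(p1 / max(1.0 - p1, 1e-12))
    return llrs2

# Toy conditional matrix from a diagonal-like assignment: I2 within one of I1.
cond = np.zeros((M, M))
for l in range(M):
    nbrs = [m for m in range(M) if abs(m - l) <= 1]
    cond[l, nbrs] = 1.0 / len(nbrs)

print(transfer([2.0, -1.0, 0.5], cond))
```

The sparser the conditional matrix, the more informative the transferred LLRs, which is why the diagonal-like assignment helps the turbo iteration.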
Problem statement

The Bayesian OFDM receiver estimates the a posteriori probabilities of the information symbols based on the received signals $Y^i$ and the a priori symbol probabilities, without knowing the channel response $h^i$. Assume the bit $a_n^i$ is mapped to symbol $d_{\kappa(n)}^i$. Based on the symbol a posteriori probabilities, the LLR of the code bit, as required in (5), can be computed by summing the a posteriori probabilities of the symbols corresponding to $a_n^i = 1$ and to $a_n^i = 0$, respectively, and taking the logarithm of their ratio. Assume that the unknown quantities $h^i$ and $Z^i$ are independent of each other and have a priori distributions $p(h^i)$ and $p(Z^i)$, respectively. The direct computation of (12) requires marginalizing over the unknown channel,
$$p\big(Z^i \mid Y^i\big) \propto \int p\big(Y^i \mid h^i, Z^i\big) \, p\big(h^i\big) \, p\big(Z^i\big) \, dh^i,$$
where $p(Y^i \mid h^i, Z^i)$ is a Gaussian density function [cf. (4)]. Clearly, the computation in (14) involves a very high-dimensional integration, which is certainly infeasible in practice. Therefore, we resort to the sequential Monte Carlo method for the numerical evaluation of the above multidimensional integration.

SMC-based blind MAP detector

Sequential Monte Carlo (SMC) is a family of methodologies that use Monte Carlo simulations to efficiently estimate the a posteriori distributions of the unknown states in a dynamic system [11,12,13]. In [14], an SMC-based blind MAP symbol detection algorithm for OFDM systems is proposed. This algorithm is summarized as follows. The following steps are implemented at the kth recursion (k = 0, . . . , N − 1) to update each weighted sample. For j = 1, . . . , m, the following hold. (1) For each $a_i \in \mathcal{A}$, compute the predictive likelihood of the observation under each candidate symbol, given the sample trajectory $Z_{0:k-1}^{(j)}$ and the past observations. (2) Impute the symbol $Z_k$: draw $Z_k^{(j)}$ from the set $\mathcal{A}$ with probability proportional to the product of the predictive likelihood and the a priori symbol probability. (3) Compute the importance weight by scaling the previous weight with the total predictive mass of $y_k$. (4) Update the a posteriori mean and covariance of the channel using the imputed sample $Z_k^{(j)}$. (5) Perform resampling when k is a multiple of $k_0$, where $k_0$ is the resampling interval.

APP detection

The above sampling procedure generates a set of random samples properly weighted with respect to the distribution $p(Z_k \mid Y_k)$. Based on these samples, an online estimate and a delayed-weight estimate of the symbol a posteriori probabilities can be obtained straightforwardly as weighted fractions of the samples,
$$\hat{P}\big(d_k = \beta_l \mid Y_k\big) = \frac{1}{W_k} \sum_{j=1}^{m} \mathbf{1}\big(d_k^{(j)} = \beta_l\big) \, w_k^{(j)}, \qquad W_k \triangleq \sum_{j=1}^{m} w_k^{(j)},$$
where $\mathbf{1}(\cdot)$ denotes the indicator function. Note that both of these estimates are only approximations to the a posteriori symbol probability $P(d_k = \beta_l \mid Y_{N-1})$. We next propose a novel APP estimator, where the channel is estimated as a mixture vector, based on which the symbol APPs are then computed. The symbol a posteriori probability is then given by an integral of one Gaussian pdf with respect to another Gaussian pdf [cf. (22)]; the resulting distribution is still Gaussian, with mean and variance given by (24) and (25), respectively. Equations (24) and (25) follow from the fact that, conditioned on the channel h, $Y_k$ and $Y_{k+1}$ are independent. The symbol a posteriori probability can then be computed in closed form as in (26).

CHANNEL-CODED SYSTEMS

Although the MDSQ introduces some redundancy to the system, it has limited capability for error correction. In order to improve the system reliability, we next consider introducing channel coding to the proposed MDSQ system. A block diagram of an MDSQ system over a channel-coded diversity OFDM system is shown in Figure 6. A stream of source signals $\{S(j)\}_j$ is MDSQ encoded, resulting in two sets of indices $\{I_1(j), I_2(j)\}_j$. Binary descriptions of these indices are interleaved, channel encoded, and OFDM modulated; one set of interleavers, $\{\Pi_{i,1}\}_{i=1}^2$, is applied between the MDSQ encoder and the channel encoder, and the other set, $\{\Pi_{i,2}\}_{i=1}^2$, is applied between the channel encoder and the OFDM modulator.
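Before describing the receiver for the channel-coded system, the flavor of the SMC detector summarized above can be illustrated with a sketch. This is a simplified single-channel setup in our own notation for the model $y_k = (f_k h) Z_k + v_k$, with a per-particle Gaussian channel posterior updated by a Kalman-style recursion; the exact quantities, priors, and resampling schedule of [14] differ, and the residual phase ambiguity of blind detection is left unresolved here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, m, sigma2, k0 = 64, 5, 50, 0.05, 4   # subcarriers, taps, particles, noise var, resample interval
A = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # QPSK alphabet

# Partial DFT matrix: row k gives the subcarrier response of the L taps.
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L)) / N)

def smc_detect(y):
    mu = np.zeros((m, L), complex)                             # channel posterior means
    P = np.tile(1000.0 * np.eye(L, dtype=complex), (m, 1, 1))  # diffuse prior covariances
    logw = np.zeros(m)
    app = np.zeros((N, len(A)))
    for k in range(N):
        f = F[k]
        probs = np.empty((m, len(A)))
        for j in range(m):
            s2 = np.real(f @ P[j] @ f.conj()) + sigma2         # predictive variance (|a| = 1)
            for a_idx, a in enumerate(A):
                e = y[k] - a * (f @ mu[j])
                probs[j, a_idx] = np.exp(-abs(e) ** 2 / s2) / s2  # predictive likelihood
        tot = probs.sum(axis=1)
        logw += np.log(tot + 1e-300)                           # weight update: total predictive mass
        w = np.exp(logw - logw.max()); w /= w.sum()
        app[k] = w @ (probs / tot[:, None])                    # weighted symbol APP estimate
        for j in range(m):
            a = A[rng.choice(len(A), p=probs[j] / tot[j])]     # impute the symbol
            c = a * f                                          # observation row: y_k = c @ h + v_k
            S = np.real(c @ P[j] @ c.conj()) + sigma2
            K = P[j] @ c.conj() / S                            # Kalman gain
            mu[j] = mu[j] + K * (y[k] - c @ mu[j])
            P[j] = P[j] - np.outer(K, c @ P[j])
        if (k + 1) % k0 == 0:                                  # multinomial resampling
            idx = rng.choice(m, m, p=w)
            mu, P, logw = mu[idx], P[idx], np.zeros(m)
    return app

# Quick self-test on a random channel and random symbols.
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
Z = rng.choice(A, N)
y = (F @ h) * Z + np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
app = smc_detect(y)
print("symbol error rate:", np.mean(A[app.argmax(1)] != Z))   # phase ambiguity aside
```

In the paper's system this phase ambiguity is exactly what the differential encoding resolves, since information is carried by symbol transitions rather than absolute phases.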
At the receiver, a novel blind iterative receiver is developed for joint demodulation, channel decoding, and MDSQ decoding. The receiver structure, as shown in Figure 7, consists of two loops of iterative operations. For each description, there is an inner loop (iterative procedure) for joint OFDM demodulation and channel decoding. At the outer loop, soft information of the coded bits is exchanged between the two inner loops to exploit the correlation between the two descriptions. Next, we discuss the operation of both the inner loop and the outer loop.

Inner loop: joint OFDM demodulation and channel decoding

We consider a subsystem of the original MDSQ system, which consists of the channel coding and OFDM modulation for only one source description. Since the combination of a differential encoder and an OFDM system acts as an inner encoder, the above subsystem is a typical serially concatenated code, and an iterative (turbo) receiver can be designed for such a system, denoted as the inner loop in Figure 7. It consists of two stages: the SMC OFDM detector developed in the previous sections, followed by a MAP channel decoder [15]. The two stages are separated by a deinterleaver and an interleaver. Note that both the SMC OFDM detector and the MAP channel decoder can incorporate a priori probabilities and output a posteriori probabilities of the code bits $\{a_n^i\}_n$; that is, they are soft-input and soft-output algorithms. Based on the turbo principle, extrinsic information about the channel-coded bits can be exchanged iteratively between the SMC OFDM detector and the MAP channel decoder to improve the performance of the subsystem.

Outer loop: exploiting the correlation between the two descriptions

In Section 3, an iterative receiver was proposed for joint MDSQ decoding and OFDM demodulation. Extrinsic information from one description is transformed into soft information for the other description, and is fed into the OFDM demodulator as a priori information. For channel-coded MDSQ systems, a similar approach can be used to exploit the correlation between the two descriptions. As shown in Figure 7, the MAP channel decoder incorporates the a priori information for the channel-coded bits, and outputs the a posteriori probabilities of both the channel-coded bits and the uncoded bits. The OFDM detector, on the other hand, incorporates and produces as output only the soft information of the channel-coded bits. Taking into account that only uncoded bits are considered by the MDSQ decoder, the inner loop, when considered as one unit operation, is a SISO algorithm that incorporates the a priori information of the channel-coded bits and produces as output the a posteriori information of the uncoded bits. Altogether, the two inner loops constitute a parallel turbo structure, and the transferred soft information provided by the information transformation block (IF-T) can be exchanged iteratively between the two inner loops. This iterative procedure is the outer loop of the system, which aims at further improving the system performance by exploiting the correlation between the two descriptions. It was shown in Section 3 that this correlation can be measured by the probability transformation matrix, which is adopted by the IF-T block. For the outer loop, the soft output of the inner loop can be used directly as the a priori information for the IF-T; the soft output of the IF-T, however, must be transformed before being fed into the inner loop as a priori information.
Specifically, a soft channel encoder implementing the BCJR algorithm [15] is required to transform the soft information of the uncoded bits into soft information of the coded bits.

SIMULATION RESULTS

In this section, we provide computer simulation results to illustrate the performance of the turbo receiver for MDSQ over diversity OFDM systems. In the simulations, the continuous-alphabet source is assumed to be uniformly distributed on (−1, 1), and a uniform quantizer is applied. The source range is divided into 8, 22, and 34 intervals. Two indices are assigned to describe the source according to the index assignment $\alpha(\cdot)$ shown in Figure 3, where each index is described with R = 3 bits. Assume the channel bandwidth of each OFDM system is divided into N = 128 subchannels. The guard interval is long enough to protect the OFDM blocks from intersymbol interference due to the delay spread. The frequency-selective fading channels are assumed to be uncorrelated. All L = 5 taps of the fading channel are Rayleigh distributed with the same variance, normalized such that $E\{\sum_{n=0}^{L-1} |h_n|^2\} = 1$, and have delays $\tau_l = l/\Delta f$, $l = 0, 1, \ldots, L-1$. For channel-coded systems, a rate-1/2, constraint-length-5 convolutional code (with generators 23 and 35 in octal notation) is used. The interleavers are generated randomly and fixed for all simulations. The blind SMC detector implements the algorithm described in Section 4.2. The variance of the noise $v_k$ in (24) is assumed known at the detector, with values specified by the given SNR. The SMC algorithm draws m = 50 Monte Carlo samples at every recursion, with the initial channel covariance set to $\Sigma_{-1} = 1000 I_L$. Two quantities were used in the simulations to measure the performance of the SMC detector: bit error rate (BER) and word error rate (WER). Here, the bit error rate denotes the information bit error rate, and the word error rate denotes the error rate of the whole data block transferred during one symbol duration. In addition, the mean square error (MSE) is used to measure the performance of the whole system.

Performance of the SMC detector

The blind SMC detector, as a SISO algorithm for OFDM demodulation, is an important component of the proposed turbo receiver. Next, we illustrate the performance of the blind SMC detector. In Figure 8, the BER and WER performance is plotted. In the same figure, we also plot the known-channel lower bound, where the fading coefficients are assumed to be perfectly known to the receiver and a MAP receiver is employed to compute the a posteriori symbol probabilities. Although the SMC detector generates soft outputs in terms of the symbol a posteriori probabilities, only hard decisions are used in an uncoded system. In a coded system, however, the channel decoder, such as a MAP decoder, requires the soft information provided by the demodulator. Next, we examine the accuracy of the soft output provided by the SMC detector in a coded OFDM scenario. In Figure 9, the BER and WER performance for the information bits is plotted, together with the known-channel lower bound. The MAP convolutional decoder is employed in conjunction with the different detection algorithms. It is seen from Figure 9 that the three SMC estimators yield different performance after the MAP decoder because of the differing quality of the soft information they provide. Specifically, the APP detector achieves the best performance.
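As a concrete illustration of this simulation setup, the snippet below generates one realization of the fading channel under the stated parameters. It is a minimal sketch: the tap normalization matches $E\{\sum_{n=0}^{L-1}|h_n|^2\} = 1$, while the SNR handling, pulse shaping, and guard-interval mechanics are deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 128, 5                       # subcarriers and resolvable taps

def rayleigh_channel():
    """L equal-power complex Gaussian taps with E{sum |h_n|^2} = 1."""
    return (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.sqrt(1 / (2 * L))

def subcarrier_gains(h):
    """Frequency response H_k = sum_l h_l exp(-j 2 pi k l / N) on the N subcarriers."""
    k = np.arange(N)[:, None]
    l = np.arange(L)[None, :]
    return np.exp(-2j * np.pi * k * l / N) @ h

h = rayleigh_channel()
H = subcarrier_gains(h)
print("tap power:", np.sum(np.abs(h) ** 2))        # ~1 on average
print("mean |H_k|^2:", np.mean(np.abs(H) ** 2))    # also ~1
```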
Performance of the turbo receiver for the MDSQ system

The performance of the turbo receiver is shown in Figures 10, 11, and 12 for MDSQ systems with assignments of 8, 22, and 34 cells, respectively, as in Figure 3. The SMC blind detector is employed. In each figure, the BER, WER, and MSE are plotted, together with the quantization error bound $s^2/12$, where $s$ denotes the width of the uniform quantization interval.

Performance of the turbo receiver for the channel-coded MDSQ system

Finally, we consider the performance of the channel-coded MDSQ system discussed in Section 5. Performance is compared for systems with different iteration profiles. Specifically, the BER, WER, and MSE performance for the information bits and coded bits are plotted in Figures 13 and 14 for the 4-inner-loop/1-outer-loop turbo receiver and the 3-inner-loop/2-outer-loop turbo receiver, respectively. In the simulation, the source range is divided into 22 intervals, as shown in Figure 3b. It is seen that the proposed turbo receiver structure can successively improve the receiver performance through iterative processing. Moreover, the quantization error bounds are achieved at very low SNR, that is, 10 dB.

CONCLUSIONS

In this paper, we have proposed a blind turbo receiver for transmitting MDSQ-coded sources over frequency-selective fading channels. A transformation of the extrinsic information of the two descriptions is exchanged between the two detectors to improve the system performance. A novel blind APP OFDM detector, which computes the a posteriori symbol probabilities, is developed using sequential Monte Carlo (SMC) techniques. Being soft-input and soft-output in nature, the proposed SMC detector is capable of exchanging the so-called extrinsic information with the other component in the above turbo receiver, successively improving the overall receiver performance. Finally, we have also treated channel-coded systems, for which a novel blind turbo receiver is developed for joint demodulation, channel decoding, and MDSQ decoding. Simulation results have demonstrated the effectiveness of the proposed techniques.
Glucocorticoid Receptor β Isoform Predominates in the Human Dysplastic Brain Region and Is Modulated by Age, Sex, and Antiseizure Medication

The glucocorticoid receptor (GR) at the blood–brain barrier (BBB) is involved in the pathogenesis of drug-resistant epilepsy with focal cortical dysplasia (FCD); however, the roles of the GR isoforms GRα and GRβ in the dysplastic brain have not been revealed. We utilized dysplastic/epileptic and non-dysplastic brain tissue from patients who underwent resective epilepsy surgery to identify the GRα and GRβ levels, subcellular localization, and cellular specificity. BBB endothelial cells isolated from the dysplastic brain tissue (EPI-ECs) were used to decipher the key BBB proteins related to drug regulation and BBB integrity compared to control and transfected GRβ-overexpressing BBB endothelial cells. GRβ was upregulated in dysplastic compared to non-dysplastic tissues, and an imbalance of the GRα/GRβ ratio was significant in females vs. males and in patients > 45 years old. In EPI-ECs, the subcellular localization and expression patterns of GRβ, Hsp90, CYP3A4, and CYP2C9 were consistent with GRβ+ brain endothelial cells. Active matrix metalloproteinase levels and activity increased, whereas claudin-5 levels decreased, in both EPI-ECs and GRβ+ endothelial cells. In conclusion, GRβ has a major effect on dysplastic BBB functional proteins in an age- and gender-dependent manner, suggesting a critical role of brain GRβ in dysplasia as a potential biomarker and therapeutic target in epilepsy.

Introduction

The glucocorticoid receptor (GR) has recently been uncovered as a critical molecular regulator of drug permeability and barrier integrity at the epileptic blood-brain barrier (BBB), where it is found to be overexpressed and to undergo accelerated maturation [1][2][3]. After alternative splicing of the human GR transcript, multiple isoforms of this receptor are produced, two of the most well-characterized being GRα and GRβ [4,5]. These two GR isoforms differ at the carboxyl terminus [5,6], and due to the splicing at this position, GRβ is not able to bind ligands such as glucocorticoids [5,6]. Although GRα is the classic GR isoform, binding glucocorticoids and activating transcription of glucocorticoid response element-containing genes, GRβ has shown important implications in inflammation and diseases like rheumatoid arthritis, asthma, and glioma [7][8][9], but the independent roles of the GRα and GRβ isoforms have not yet been investigated in epilepsy. Focal cortical dysplasia (FCD) is a common epilepsy pathology that stems from focal malformations in the cerebral cortex [10], where neuroinflammation is prominent [11,12]. Pharmacoresistance in epilepsy still remains a major clinical challenge, as about one-third of epilepsy patients are non-responsive to antiseizure medications (ASMs) [13,14], and local drug metabolism and efflux activity at the BBB play a critical role in this phenomenon [3,15]. Cytochrome P450 (CYP) drug-metabolizing enzymes and efflux transporters (e.g., P-glycoprotein, Pgp) are functionally important at the BBB and could contribute to pharmacoresistance in epilepsy [1,2,15]. The expression of these enzymes and drug efflux transporters has been found to be regulated by the glucocorticoid receptor (GR) [1][2][3]. The importance of the GRβ isoform has been implicated in other brain disorders, such as glioma, where GRβ plays a critical part in the reactive astrocyte phenotype [9].
However, the specific role of GRβ in the human epileptic brain is not well established and could be an important target for drug regulation and BBB properties in epilepsy. To identify the involvement of the GRα and GRβ isoforms in FCD, we used cortical brain tissues from patients who underwent surgery for refractory epilepsy to determine: (1) the expression pattern of GRα and GRβ in dysplastic (epileptic) and non-dysplastic (relative control) tissues, (2) changes in the GRα/GRβ ratio based on the gender and age of these individuals, (3) the neurovascular localization of the GR isoforms in dysplastic vs. non-dysplastic brain tissue, and (4) the subcellular localization of GRα and GRβ based on whether the ASM combination taken by these patients before surgery followed a CYP-dependent or a partially/fully CYP-independent metabolic pathway. To delineate the involvement of these two GR isoforms in BBB endothelial cells, where GR has been found to have a significant role [1][2][3], we used control primary human brain microvascular endothelial cells, with and without GRβ overexpressed by transfection, and compared them to primary dysplastic human brain endothelial cells (EPI-ECs) isolated from the well-characterized dysplastic brain region. We compared and evaluated the association of GRβ with the subcellular localization and expression levels of other key protein targets involved in drug metabolism and penetration through the BBB (CYP enzymes, P-glycoprotein) and GR regulation (Hsp90). Additionally, in these brain endothelial cells, we investigated the involvement of GRβ in BBB integrity (MMP-9, occludin, claudin-5) and matrix metalloproteinase (MMP) activity; MMPs are extracellular-matrix-degrading proteins responsible for a multitude of events linked to homeostasis and several physiological processes. Together, these data elucidate the distinct roles of GRα and GRβ in the FCD brain and BBB, providing a deeper understanding of the significance of GR isoforms in epilepsy.

Decreasing GRα/GRβ Ratio Is Dependent on Age and Gender in Human Dysplastic Brain Tissues

The cortical brain tissues from dysplastic and non-dysplastic regions of patients with FCD (n = 14) revealed significant (* p < 0.0001) GRβ overexpression in the dysplastic vs. non-dysplastic brain regions, while GRα expression did not change (Figure 1a). This increase in GRβ expression in dysplastic vs. non-dysplastic tissue was found to be dependent on gender. Female patients (n = 9) showed a significant decrease in the GRα/GRβ ratio (* p = 0.0409) in dysplastic tissues compared to non-dysplastic, a trend that was not observed in male patients (n = 5, Figure 1b). Additionally, changes in the GRα/GRβ ratio in dysplastic compared to non-dysplastic brain tissues were also shown to be age-dependent, with a significant decrease in the ratio only being observed in the dysplastic tissue of patients over 45 years old (* p = 0.0381, Figure 1c). Figure 1. Overexpression of GRβ in the dysplastic brain region compared to non-dysplastic is age and gender-dependent. (a) Western blot shows a significant increase (* p < 0.0001) in GRβ (~90 kDa) expression in dysplastic (DYS/EPI) compared to non-dysplastic (NON-DYS) brain tissues from patients with FCD (n = 14). There was no significant difference in the expression pattern of GRα (~94 kDa) between dysplastic and non-dysplastic tissues. β-actin (~43 kDa) was used as a loading control and for normalization.
(b) The GRα/GRβ ratio using values obtained from the Western blot in (a) was plotted and compared based on the gender of each patient (n = 9 females and 5 males). The female group showed a significant decrease (* p = 0.0409) in the GRα/GRβ ratio in dysplastic vs. non-dysplastic tissues, corresponding to low GRα and high GRβ levels. There was no significant difference in the GRα/GRβ ratio in dysplastic vs. non-dysplastic tissues in the male patients, implying that GRβ overexpression in the dysplastic region of these patients is gender-dependent. (c) Patients from (a) were grouped into three age brackets (0-20 years old, 21-45 years old, and >45 years old) based on their age at the time of surgery. The GRα/GRβ ratio followed a decreasing trend with age in dysplastic compared to non-dysplastic brain tissues, and there was a significantly decreased ratio (* p = 0.0381) in patients that were above 45 years old. Western blots were performed in duplicate. All values are presented as mean with SD by paired t-test.

Differential Expression and Localization of GRα and GRβ Is Evident in Dysplastic and Non-Dysplastic Human Brain Tissues

The histology of cortical brain tissues resected from the epileptic lesion (dysplastic) and from the surrounding, relatively normal brain area (non-dysplastic) was confirmed by cresyl violet staining and visualization of dysmorphic neurons, characteristic of FCD pathology (n = 3 patients, Figure 2a). In these same patients, DAB staining of both the GRα and GRβ isoforms showed that, in general, GRβ levels were significantly elevated (* p = 0.0155) and GRα levels significantly decreased (* p = 0.0116) in the dysplastic brain region compared to the non-dysplastic region (Figure 2a).
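For readers who wish to reproduce this style of paired, per-patient comparison, the sketch below illustrates a paired t-test on dysplastic vs. non-dysplastic GRα/GRβ ratios using SciPy. The ratio values are made up for illustration; the study's actual patient data are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical GRalpha/GRbeta ratios for the same patients in the two regions;
# values are illustrative only, not the study's data.
non_dysplastic = np.array([1.10, 0.95, 1.20, 1.05, 0.90, 1.15, 1.00, 0.98, 1.12])
dysplastic     = np.array([0.70, 0.80, 0.65, 0.90, 0.60, 0.75, 0.85, 0.72, 0.68])

# Paired t-test, matching the per-patient design described above.
t, p = stats.ttest_rel(dysplastic, non_dysplastic)
print(f"t = {t:.2f}, p = {p:.4f}")
print("mean +/- SD (dysplastic):",
      dysplastic.mean().round(2), dysplastic.std(ddof=1).round(2))
```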
Co-immunohistochemistry of GRα with NeuN (neuronal nuclei marker) and GFAP (glial fibrillary acidic protein) confirmed the presence of GRα in the neurons and astrocytes of both the dysplastic and non-dysplastic brain regions (Figure 2b). GRβ staining was most prominent in the neurons of the dysplastic tissues and scattered in the astrocytes (Figure 2b). Both GR isoforms were consistently expressed in the microvessels of dysplastic brain tissues in the cortex, marked with dotted lines in a few locations for reference (Figure 2b).

Antiseizure Drug Combination Regulates Subcellular Localization of GRα and GRβ Isoforms in Human Cortical Brain Tissue

The cytosolic fraction of cortical brain tissue from the non-dysplastic regions of FCD patients taking two or more CYP-mediated ASMs (n = 5) shows a significant increase (* p = 0.000476) in cytosolic GRα expression compared to the dysplastic tissue (Figure 3a). Compared to the nuclear fraction, there are significantly greater GRα levels (* p = 0.0256) in the cytosolic fraction of the non-dysplastic tissue of the CYP-mediated + CYP-mediated ASM group, which is opposite to what is observed in the dysplastic tissue, which shows significantly increased nuclear GRα (* p = 0.0306). Additionally, the nuclear fractions from these same tissues revealed increased levels of GRβ in dysplastic tissues compared to non-dysplastic (* p < 0.0001, Figure 3a). In the nuclear fraction of dysplastic tissues from this group, GRβ is significantly increased (* p = 0.000266) compared to the dysplastic cytosolic fraction, but between the non-dysplastic cytosolic and nuclear fractions, there is no difference in GRβ expression. The cytosolic and nuclear fractions of brain tissues from patients taking CYP+NON-CYP-mediated ASM combinations (n = 4) showed a different pattern of GRα and GRβ localization. There is no significant difference in GRα or GRβ expression between non-dysplastic and dysplastic tissues of this group in either subcellular fraction, or between the cytosolic and nuclear fractions for either GR isoform (Figure 3b). Figure 2. (a) DAB staining confirms increased GRβ isoform levels in the dysplastic brain tissue compared to the non-dysplastic tissue (* p = 0.0155). GRα levels are also decreased (* p = 0.0116) in the dysplastic tissue compared to non-dysplastic. Images were obtained using a Leica DMIL brightfield microscope. Scale bar = 50 µm. Values are presented as mean with SD by paired t-test. (b) Immunofluorescent co-staining of GRα and GRβ with neuronal (NeuN) and astrocytic (GFAP) markers in dysplastic (EPI) vs. non-dysplastic human brain tissues (n = 3) elucidates the localization of GRα in both the neurons (white arrows) and astrocytes (yellow arrows) in dysplastic and non-dysplastic brain tissue.
GRβ immunofluorescent staining shows extensive localization in neurons (NeuN) and astrocytes (GFAP) in the dysplastic tissue in relation to the non-dysplastic tissue. Select microvessels are outlined for reference with a dotted white line, where both GRα and GRβ immunostaining is evident. Images were obtained using a Leica DMIL LED microscope with a gain of 1.0. Scale bar = 20 µm.

GRβ Overexpression in Human Brain Microvascular Endothelial Cells Regulates Expression and Subcellular Localization of Critical BBB Proteins and MMP Activity

The human brain microvascular endothelial cells (HBMECs, n = 3) transfected with HA-tagged GRβ (HBMEC+HA-GRβ, n = 3) and the dysplastic/epileptic endothelial cells (EPI-ECs, n = 2) evaluated by Western blot (shown by the representative blots in Figure 4a, with quantification in Figure 4b) show changes in the subcellular localization of the GR isoforms and of a heat-shock protein chaperone (Hsp90) critical for GR maturation and function, compared to HBMECs (the non-transfected control group, Figure 4). In HBMEC controls, GRα was only found in the cytosol, whereas in cells with overexpressed GRβ, GRα was found mostly in the nuclear fraction (Figure 4). In EPI-ECs, GRα was found in the cytoplasmic and partially in the nuclear fractions and was not significantly impacted by drug treatment. After GRβ overexpression in HBMECs, GRα levels increased in the nucleus after OXC, LEV, or DEX treatment for 24 h but not in the cytosol (Figure 4), which is reversed in the case of HBMEC/control endothelial cells (non-transfected). In all three cell types, GRβ localization remained most prominent in the nuclear fraction and was unaffected by drug treatment (Figure 4). In both the cytosolic and nuclear fractions, EPI-ECs showed the highest GRβ expression of the three cell types, as determined by the two-way ANOVA group effect. Hsp90 expression was increased in the cytosol after 24 h of OXC, LEV, or DEX treatment, but only in HBMEC+HA-GRβ (Figure 4). Hsp90 was not extensively found in the nucleus of HBMECs. However, besides the cytosolic fractions, Hsp90 expression was also prominent in the nuclear fraction in HBMEC+HA-GRβ (* p = 0.000152) and EPI-ECs (* p < 0.0001) compared to HBMECs, analyzed by two-way ANOVA group effect. Figure 3. GRα and GRβ subcellular localization in human dysplastic brain tissues is dependent on the antiseizure medication (ASM) combination. (a) In patients who took a combination of two or more cytochrome P450 (CYP)-mediated ASMs (n = 5 patients), cytoplasmic GRα (~94 kDa) was significantly decreased (* p = 0.000476) and nuclear GRβ (~90 kDa) significantly elevated (* p < 0.0001) in dysplastic compared to non-dysplastic tissues. (b) In patients who took a combination of CYP-mediated and NON-CYP-mediated ASMs (n = 4 patients), there were no significant differences in GRα or GRβ expression between dysplastic and non-dysplastic tissues in either the cytoplasm or nucleus. β-actin (~43 kDa) and PCNA (~35 kDa) were used as loading controls for the cytosolic and nuclear fractions, respectively, and for normalization. Western blots were performed in duplicate. All values are presented as mean with SD by one-way ANOVA with a Tukey post hoc test.
Figure 4. GRβ alters the expression and subcellular localization of GRα and other key drug-regulatory proteins in human brain endothelial cells. (a) Protein targets critical for drug metabolism and transport at the BBB showed altered expression and subcellular localization patterns in HBMEC+HA-GRβ (n = 3) and EPI-ECs (n = 2) compared to HBMECs (n = 3) with endogenous GRβ levels by Western blot. Quantification is shown in (b). GRα (~94 kDa) was only present in the cytosol of HBMECs but was highly present in the nuclear fraction of HBMEC+HA-GRβ. EPI-ECs showed a pattern of GRα subcellular localization that was a mixture of what was seen in HBMECs and HBMEC+HA-GRβ, with expression in both the cytosol and nucleus. Oxcarbazepine (OXC), levetiracetam (LEV), and dexamethasone (DEX) treatments all significantly increased the expression of GRα after 24 h in the cytosol of HBMECs and in the nuclear fraction of HBMEC+HA-GRβ but caused no change in GRα expression in EPI-ECs. After HA-GRβ overexpression in HBMECs, GRβ (~90 kDa) was exclusively localized in the nucleus and negligible in the cytosol, but in EPI-ECs it was present in both the cytoplasm and the nucleus. Also, Hsp90 (~90 kDa) was almost exclusively seen in the cytosolic fraction of HBMECs, but HA-GRβ overexpression caused Hsp90 to be found in the nuclear fraction as well as the cytosolic, which is consistent with what was observed in EPI-ECs. Pgp (~170 kDa) was only expressed in the cytosol of HBMECs and EPI-ECs but in the nucleus of HBMEC+HA-GRβ. After OXC, LEV, or DEX treatment for 24 h, Pgp levels in the cytosol and nucleus were increased in HBMEC+HA-GRβ, although only 24 h DEX treatment increased cytosolic Pgp expression in HBMECs. CYP3A4 (~57 kDa) and CYP2C9 (~59 kDa) levels in the cytosol and nucleus were both significantly lower in HBMECs with endogenous GRβ compared to HBMEC+HA-GRβ (CYP3A4 cytosolic: * p < 0.0001, CYP3A4 nuclear: * p = 0.000487, CYP2C9 cytosolic: * p < 0.0001, CYP2C9 nuclear: * p = 0.000161) and EPI-ECs (CYP3A4 cytosolic: * p < 0.0001, CYP3A4 nuclear: * p < 0.0001, CYP2C9 cytosolic: * p < 0.0001, CYP2C9 nuclear: * p < 0.0001). Although 24 h OXC treatment significantly increased cytosolic CYP3A4 levels in HBMECs, 24 h LEV treatment significantly decreased nuclear CYP3A4 and CYP2C9 levels in HBMEC+HA-GRβ. EPI-ECs show elevated levels of both of these CYP enzymes compared to HBMECs, but drug treatment did not affect expression levels. β-actin (~43 kDa) and PCNA (~35 kDa) were used as loading controls for the cytosolic and nuclear fractions, respectively, and for normalization. Western blots were performed in duplicate. All values are presented as mean with SD by two-way ANOVA with a Tukey post hoc test.
Although 24 h OXC treatment significantly increased cytosolic CYP3A4 levels in HBMECs, 24 h LEV treatment significantly decreased nuclear CYP3A4 and CYP2C9 levels in HBMEC+HA-GRβ. EPI-ECs show elevated levels of both of these CYP enzymes compared to HBMECs, but drug treatment did not affect expression levels. βactin (~43 kDa) and PCNA (~35 kDa) were used as loading controls for the cytosolic and nuclear fractions, respectively, and for normalization. Western blots were performed in duplicate. All values are presented as mean with SD by two-way ANOVA with a Tukey post hoc test. . GRα (~94 kDa) was only present in the cytosol of HBMECs but was highly present in the nuclear fraction of HBMEC+HA-GRβ. EPI-ECs showed a pattern of GRα subcellular localization that was a mixture of what was seen in HBMECs and HBMEC+HA-GRβ, with expression in the cytosol and nucleus. Oxcarbazepine (OXC), levetiracetam (LEV), and dexamethasone (DEX) treatments all significantly increased the expression of GRα after 24 h in the cytosol of HBMECs and in the nuclear fraction of HBMEC+HA-GRβ but caused no change in GRα expression in EPI-ECs. After HA-GRβ overexpression in HBMECs, GRβ (~90 kDa) was exclusively localized in the nucleus and negligible in the cytosol, but in EPI-ECs it was present in both the cytoplasmic and nucleus. Also, Hsp90 (~90 kDa) was almost explicitly seen in the cytosolic fraction of HBMECs, but HA-GRβ overexpression caused Hsp90 to be found in the nuclear fraction as well as the cytosolic, which is consistent with what was observed in EPI-ECs. Pgp (~170 kDa) was only expressed in the cytosol of HBMECs and EPI-ECs but in the nucleus of HBMEC+HA-GRβ. After OXC, LEV, or DEX treatment for 24 h, Pgp levels in the cytosol and nucleus were increased in HBMEC+HA-GRβ; although, only 24 h DEX treatment increased cytosolic Pgp expression in HBMECs. CYP3A4 (~57 kDa) and CYP2C9 (~59 kDa) levels in the cytosol and nucleus were both significantly lower in HBMECs with endogenous GRβ compared to HBMEC+HA-GRβ (CYP3A4 cytosolic: * p < 0.0001, CYP3A4 nuclear: * p = 0.000487, CYP2C9 cytosolic: * p < 0.0001, CYP2C9 nuclear: * p = 0.000161) and EPI-ECs (CYP3A4 cytosolic: * p < 0.0001, CYP3A4 nuclear: * p < 0.0001, CYP2C9 cytosolic: * p < 0.0001, CYP2C9 nuclear: * p < 0.0001). Although 24 h OXC treatment significantly increased cytosolic CYP3A4 levels in HBMECs, 24 h LEV treatment significantly decreased nuclear CYP3A4 and CYP2C9 levels in HBMEC+HA-GRβ. EPI-ECs show elevated levels of both of these CYP enzymes compared to HBMECs, but drug treatment did not affect expression levels. β-actin (~43 kDa) and PCNA (~35 kDa) were used as loading controls for the cytosolic and nuclear fractions, respectively, and for normalization. Western blots were performed in duplicate. All values are presented as mean with SD by two-way ANOVA with a Tukey post hoc test. The expression changes in the three cell types and with drug treatment were also evaluated for downstream proteins relating to drug efflux activity and local drug metabolism at the BBB. Pgp was found in the cytosolic fraction in HBMEC and EPI-EC but was only observed in the nuclear fraction in HBMECs with GRβ overexpression. Pgp expression in the HBMEC and HBMEC+HA-GRβ increased after 24 h of each drug treatment-OXC, LEV, and DEX ( Figure 4). 
Cytosolic and nuclear CYP3A4 and CYP2C9 expression was significantly increased in HBMEC+HA-GRβ (CYP3A4 cytosolic: * p < 0.0001, CYP3A4 nuclear: * p = 0.000487, CYP2C9 cytosolic: * p < 0.0001, CYP2C9 nuclear: * p = 0.000161) and EPI-ECs (CYP3A4 cytosolic: * p < 0.0001, CYP3A4 nuclear: * p < 0.0001, CYP2C9 cytosolic: * p < 0.0001, CYP2C9 nuclear: * p < 0.0001) compared to HBMECs, according to the two-way ANOVA group effect. A 24 h LEV treatment significantly decreased the nuclear expression of both CYP enzymes in HBMEC+HA-GRβ (CYP3A4: * p = 0.0415, CYP2C9: * p = 0.0355), while 24 h OXC treatment significantly increased CYP3A4 levels (* p = 0.0268) in the cytosol of HBMECs, both compared to their respective vehicle controls at 24 h (Figure 4a,b).

Discussion

GR has already proven to be an important player in drug-resistant epilepsy due to focal cortical dysplasia (FCD), but the individual roles and clinical significance of the brain GRα and GRβ isoforms in FCD have not been well defined. The current study identifies for the first time that upregulated GRβ, or a decreased GRα/GRβ ratio, in the dysplastic brain could contribute to pathogenesis and drug response in pharmacoresistant epilepsy, particularly in certain subsets of patients, such as females and those over 45 years old. By recognizing the imbalance of these two GR isoforms in the dysplastic brain region compared to an adjacent, relatively non-dysplastic region, which could serve as a biomarker of the dysplastic focus, and the effect of GRβ on drug response in BBB endothelial cells, the mechanism underlying BBB involvement in drug-resistant epilepsy is further unveiled. We found that of the two GR isoforms, GRβ is overexpressed in the dysplastic brain region compared to a non-dysplastic brain region of the same patients (Figure 1), whereas GRα expression was not significantly altered in the dysplastic brain. Previously, our group discovered that total GR is overexpressed in the dysplastic brain [1][2][3], suggesting, based on these novel findings, that a great proportion of that GR overexpression is possibly due to GRβ. Although the expression and ratio of the GRα and GRβ isoforms in epilepsy have not been reported until now, this information has, interestingly, been found to be pertinent to other diseases such as asthma, rheumatoid arthritis, and glioma [7][8][9]. Sex and age differences relating to GR have been implicated in inflammatory bowel disease (IBD) as well [16], which is consistent with our case, where a significant difference in the dysplastic vs. non-dysplastic GRα/GRβ ratio was identified in female patients and in patients > 45 years old. Reports also indicate that female patients with IBD were more likely to develop a dependence on glucocorticoid treatment compared to male patients, who were less likely to relapse after glucocorticoid dose reduction [16]. In general, sex differences have been identified in epilepsy, with FCD being more common in male pediatric epilepsy patients compared to females [17]. The relationship between a decreased GRα/GRβ ratio in female patients compared to males would be interesting to investigate further, such as the possibility that females with a lower GRα/GRβ ratio could be more susceptible to the development of focal cortical dysplasia in epilepsy. Additionally, age played a statistically significant role in glucocorticoid response in these IBD patients, with glucocorticoid-resistant patients being older than responders [16].
GRβ is known to be involved in glucocorticoid resistance in multiple diseases [18,19], so the current findings of GRβ overexpression, and of sex and age dependency of the GRα/GRβ ratio, in dysplastic compared to non-dysplastic brain tissue are consistent with findings in other disorders related to inflammatory factors. Treatment with corticosteroids, such as prednisolone, has been successful in managing seizures in some pediatric epilepsy patients [20,21], although steroid treatment in older adults with epilepsy has not been as extensively studied. Steroid treatment in older patients with focal cortical dysplasia may not be as successful due to the decreased GRα/GRβ ratio, possibly contributing to glucocorticoid resistance. In glioma, GRβ was also found to be overexpressed in the nuclei of injured astrocytes, where it was associated with β-catenin. After GRβ downregulation, the reactive astrocyte phenotype seen in glioma was dampened, showing that overexpression of this particular GR isoform is functionally relevant to the disease phenotype and pathogenesis of glioma [9]. In epilepsy, GRβ overexpression could have similar effects at multiple levels of the neurovasculature. We were able to detect GRβ overexpression predominantly in the dysplastic microvessels, astrocytes, and neurons. GRα was also located in these cell types, which was expected due to the importance of glucocorticoid signaling in brain function and regulation in these cells [22][23][24], although this isoform was not as robustly expressed as GRβ, which is clearly distinguishable between dysplastic and non-dysplastic brain regions. It has been previously implicated that a decreasing GRα/GRβ ratio (lower GRα and higher GRβ levels), which is what we observe in the current study, relates to the ability of GRβ to act as a dominant negative regulator of GRα function [25,26]. In addition to expression changes, the drug regimen of the FCD patients affected the subcellular localization of the GRα and GRβ isoforms (Figure 3). Previous studies have shown that GR is the upstream regulator of CYP3A4, CYP2C9, and Pgp expression [1,2]. Here, GR isoform subcellular localization was followed in individuals who received multiple CYP-mediated ASMs vs. a combination of CYP+NON-CYP-mediated ASMs, within the dysplastic and non-dysplastic brain regions. In the dysplastic tissue of the CYP+CYP group, both GRα and GRβ were primarily located in the nuclear fraction, the functionally active location. Both GR isoforms in the CYP+NON-CYP ASM group trended more towards the cytosolic fraction in the dysplastic and non-dysplastic tissues. GRα moves to the nucleus only after ligand binding, so the CYP+CYP ASM combination could trigger faster GR maturation and nuclear translocation through a drug-dependent mechanism that is not present with CYP+NON-CYP ASM combinations, possibly facilitated by heat-shock protein interaction with GR [3,27,28]. With that finding in mind, we further asked whether GRβ was the governing GR isoform that caused the changes in subcellular localization with drug treatment in EPI-ECs. Our overall goal was to determine whether the expression and subcellular localization changes in targets important for BBB drug regulation observed between EPI-ECs and control endothelial cells could be attributed to GRβ overexpression.
One possible explanation for increased nuclear GRα levels in endothelial cells with overexpressed GRβ could be that GRβ either drives GRα into the nucleus or traps it there, possibly modulating the downstream events differently in a disease state; however, this warrants further investigation. Because GRβ expression itself is driven by other factors, such as cytokine levels, and GRβ does not bind ligands [5,6], it is possible that drug monotherapy does not play as large a role in its expression and nuclear translocation as polytherapy, as seen in Figure 3. In terms of GR regulation, Hsp90 is a major target for GR maturation and nuclear translocation [27,28]. However, Hsp90 does not interact only with the GRα isoform. It has been previously shown that Hsp90 is essential for GRβ nuclear translocation and that increased nuclear Hsp90 levels correspond with GRβ overexpression in glaucomatous trabecular meshwork cells [29]. In EPI-ECs and HBMEC+HA-GRβ, there were increased nuclear levels of Hsp90 compared to HBMECs. In other reports, nuclear Hsp90 accumulation was positively associated with metastasis and negatively associated with survival in patients with non-small cell lung cancer [30]. Interestingly, we show for the first time that GRβ overexpression contributes to the nuclear accumulation of Hsp90 in epilepsy; this accumulation could have clinical relevance and should be further investigated. The subcellular localization of Hsp90 could be related to GR maturation, which is associated with the expression of downstream targets, such as the cytochrome P450 enzymes CYP3A4 and CYP2C9. Both of these CYP isoforms had elevated expression in EPI-ECs and in HBMEC+HA-GRβ, with localization in both the cytosol and the nucleus, suggesting that these CYP enzymes are functional in the nucleus of epileptic endothelial cells in association with GRβ overexpression. mRNA of CYP1B1, another CYP isoform, has also previously been found in the cytoplasm and nucleus of human neurons and astrocytes in the cortex of the brain, although the nuclear function remains unclear [31]. The role of nuclear CYP3A4 and CYP2C9 in epilepsy relating to GRβ overexpression needs to be explored in the future. In both EPI-ECs and HBMEC+HA-GRβ, unlike claudin-5, occludin levels were not significantly changed by drug treatment and remained relatively similar among the three cell types. Claudin-5 expression increased after 24 h of treatment with OXC, LEV, and DEX in HBMEC control cells [32,33] but not in HBMEC+HA-GRβ or EPI-ECs. This discrepancy suggests that, in this scenario, claudin-5 levels could not be rescued by pharmacological treatment because of the high levels of GRβ in the non-responding cells. In a study of bone marrow-derived macrophages, LPS-induced resistance to DEX treatment was attributed to a 7-fold increase in GRβ mRNA levels in these cells [34], which could explain the resistance to the DEX-mediated claudin-5 increase in HBMEC+HA-GRβ and EPI-ECs in our study. Interestingly, a recent study also found that claudin-5 expression was significantly decreased (1.97-fold) in epileptic brain microvessels compared to controls, whereas occludin expression was not significantly different between the two groups [35].
This supporting evidence further confirms that the decrease in tight junction proteins in the epileptic brain region may not affect all types of tight junction proteins, and we describe here for the first time that GRβ may play a role in that phenomenon. Tight junction proteins can also be broken down by matrix metalloproteinases (MMPs), such as MMP-9 [36], exacerbating the epileptic condition. Not only can MMP-9, a calcium-dependent zinc-containing endopeptidase critical for neurovascular homeostasis, inflict BBB damage, but it also affects neuronal function [37,38]. The increased expression of the active form of MMP-9 and the increased MMP-2 activity in HBMEC+HA-GRβ and EPI-ECs were not significantly affected by drug treatment (OXC, LEV, or DEX for 24 h), which could imply that MMP function in the dysplastic BBB is a pathological issue that is neither rescued nor worsened by drug treatment. GRβ overexpression has been found to enhance the expression of tumor necrosis factor-α (TNF-α) in a human monocyte cell line [39], and similarly, MMP-9 expression has been found to be increased by TNF-α in a human epithelial cell line [40]. The increase in MMP expression and activity observed in HBMEC+HA-GRβ and EPI-ECs could therefore be related to cytokine production, such as TNF-α, likely mediated by GRβ overexpression. A role for GR isoforms has been implicated in blood components, such as monocytes and platelets, in major depressive disorder [39] and immune thrombocytopenia [41,42], although there is little to no evidence regarding GR isoforms in the blood of epilepsy patients. Clinical studies of patients with neuronal migration disorders have compared lesional and non-lesional regions to detect epileptogenicity, with clustered magnetoencephalography spike sources produced under total intravenous anesthesia. It would be interesting to investigate whether the GRα/GRβ difference that was observed in the dysplastic brain tissue compared to the non-dysplastic brain tissue of these patients could be detected in the blood of epilepsy patients as a disease biomarker. In conclusion, the GRα/GRβ imbalance observed in the dysplastic tissue of patients, particularly females and those above 45 years old, compared to the non-dysplastic tissue could be a critical marker of the diseased brain region. Moreover, GRβ overexpression in brain microvascular endothelial cells altered the subcellular localization and expression of multiple protein targets vital to the proper functioning of the neurovasculature (summarized in Figure 6). Delineating the predominant GR isoform in the dysplastic region could allow for future isoform-specific targeting that would be critical for BBB functional homeostasis in the dysplastic brain region and better-targeted therapy for patients with drug-resistant epilepsy.
Figure 6. Summarizing the importance of GRβ overexpression in the dysplastic brain. We found an imbalance of GRα and GRβ, with increased GRβ levels, in the dysplastic brain region compared to a non-dysplastic region in patients with focal cortical dysplasia, particularly in females or individuals greater than 45 years old. The GR isoform imbalance, with GRβ being dominant in the dysplastic brain region, causes changes in the subcellular localization and expression patterns of critical BBB proteins related to drug regulation and BBB integrity, as well as MMP activity, in dysplastic endothelial cells. This is confirmed by overexpressing GRβ in normal brain microvascular endothelial cells, which makes them more comparable to dysplastic conditions. Delineating the role of GRβ in the dysplastic brain brings us one step closer to improved targeted therapy for epilepsy patients with focal cortical dysplasia. Figure created with BioRender.com (accessed on 6 April 2022).

Ethical Approval

Informed consent was obtained from patients prior to tissue procurement under a Cleveland Clinic Institutional Review Board-approved protocol (IRB #07-322). This study was compliant with the principles outlined in the Declaration of Helsinki, and the authors understand the ethical principles. Brain specimens from both male and female subjects (n = 23) with pharmacoresistant epilepsy were obtained following focal surgical resections. Brain tissues from epileptic/dysplastic (DYS/EPI) and non-dysplastic/relatively normal (NON-DYS) regions were resected after prior non-invasive (scalp video-EEG monitoring, magnetic resonance imaging, and positron emission tomography) and invasive (stereoelectroencephalography) evaluations. The non-dysplastic resected tissue region from each subject was considered an internal relative control for the dysplastic tissue. The experimental outline is provided in Figure S1. Additional patient information (age, gender, ASMs, seizure frequency, epilepsy duration, resected tissue region, pathology, and experimental use of tissue) is summarized in Table 1.

Tissue Lysate Preparation and Fractionation

Approximately 50 mg of fresh-frozen human cortical tissue resected from patients with drug-resistant epilepsy due to FCD (n = 14) was lysed with radioimmunoprecipitation assay (RIPA; Sigma-Aldrich, Burlington, MA, USA, cat. R0278) buffer combined with 1× protease inhibitor cocktail (Sigma, cat. P8340) as previously described [3,15]. To obtain cytoplasmic and nuclear fractions of the tissue, 50 mg of fresh-frozen tissue (n = 9 patients) was fractionated using the NE-PER Nuclear and Cytoplasmic Extraction Reagents kit (Thermo Fisher Scientific, Waltham, MA, USA, cat. 78833) according to the manufacturer's instructions and as previously described [1].
The protein concentration of the lysates was estimated by the Bradford method.

Western Blotting

For the human brain tissue lysates, GRα and GRβ were separated by 8% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to polyvinylidene fluoride (PVDF) membranes (EMD Millipore Corp., Burlington, MA, USA, cat. IPVH00010) by semidry transfer (Trans-Blot SD Semi-Dry Transfer Cell, Bio-Rad, Hercules, CA, USA). In brief, the membranes were probed overnight at 4 °C with the respective primary antibody followed by the appropriate secondary antibody for 1 h at room temperature (Supplementary Table S1), as previously described [15]. For the target proteins, the PVDF membranes were incubated in stripping buffer (Thermo Scientific, cat. 21059) for 20 min at room temperature followed by blocking of the membranes in 5% milk for 4 h before re-probing. In each case, protein expression was normalized to β-actin (total lysate and cytoplasmic fractions) or proliferating cell nuclear antigen (PCNA, nuclear fractions) as loading controls, and densitometric quantification of the images was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Western blots using cell lysates and subcellular fractions for GRα, GRβ, Hsp90, Pgp, CYP3A4, and CYP2C9 (8% gels) and MMP-9, occludin, and claudin-5 (10% gels) were performed in a similar manner as stated above. Antibody information can be found in Table S1 and full representative blots in Figure S2.

Histology by Cresyl Violet Staining

Gross anatomical evaluation of brain tissue specimens from patients who had undergone surgical resection for intractable epilepsy due to FCD, comparing dysplastic with the respective non-dysplastic regions, was performed by cresyl violet histological staining on brain slices (n = 3 patients, 5 sections each) for observation of the cellular structures to identify dyslamination, ectopic neurons, and vascular malformations [43].

Diaminobenzidine Staining

Brain sections (n = 3 patients, 5 sections each) were permeabilized (0.3% Tween in 0.1 M PBS), blocked for endogenous peroxidase (0.3% hydrogen peroxide in methanol) and non-specific staining (5% normal goat serum in 0.1 M PBS + 0.4% Triton X-100), and incubated at 4 °C overnight with GRα or GRβ primary antibodies. The detailed method has been described previously [43,44]. After washing, the sections were incubated for 1 h at room temperature with the respective biotinylated secondary antibody, followed by 1 h with the avidin/biotin complex (Vector Labs, Burlingame, CA, USA, Elite Vectastain ABC kit, cat. PK-6102), visualization with diaminobenzidine (DAB) (Vector Labs, peroxidase substrate kit, SK-4100; nickel omitted), dehydration, and mounting with Permount (Thermo Fisher Scientific, cat. SP15-500). Primary and secondary antibodies used are listed in Table S1. Images were obtained by bright-field microscopy using a Leica DMIL microscope and Q Capture for image acquisition. Quantification of the DAB staining (n = 4 images/patient) was performed using ImageJ software (National Institutes of Health). The background was removed using the brightness and contrast controls and the Rolling Ball Radius function. Images were converted to 8-bit, and the threshold was maintained using the Adjust Threshold function. The resulting highlights after adjustment were then measured for average relative DAB intensity using the Measure function.
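The DAB quantification pipeline above can also be reproduced outside of ImageJ. Below is a minimal Python sketch of the same steps (rolling-ball background removal, 8-bit conversion, fixed thresholding, and mean-intensity measurement) using scikit-image; the radius, threshold value, and file names are illustrative assumptions, not the settings used in this study.

```python
# Minimal sketch of the ImageJ DAB-quantification steps using scikit-image.
# Assumptions (not from the paper): rolling-ball radius of 50 px and a fixed
# 8-bit threshold of 100; both should be adjusted to match the original analysis.
import numpy as np
from skimage import io, util
from skimage.restoration import rolling_ball

def mean_dab_intensity(path, radius=50, threshold=100):
    image = io.imread(path, as_gray=True)              # load one DAB image
    image = util.img_as_ubyte(image)                   # convert to 8-bit (0-255)
    background = rolling_ball(image, radius=radius)    # "Rolling Ball Radius" step
    corrected = image - np.minimum(background, image)  # subtract background
    # DAB signal is dark on a light background, so invert before thresholding
    inverted = 255 - corrected
    mask = inverted >= threshold                       # "Adjust Threshold" step
    if not mask.any():
        return 0.0
    return float(inverted[mask].mean())                # "Measure" step

# Example: average over n = 4 images per patient, as in the study design
# intensities = [mean_dab_intensity(f"patient1_img{i}.tif") for i in range(4)]
# patient_mean = sum(intensities) / len(intensities)
```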
Origin Pro 9.0 software (version 90E, OriginLab Corp., Northampton, MA, USA) was then utilized to identify significant differences in expression between the dysplastic and non-dysplastic brain tissue regions.

Immunofluorescence Staining

We also determined the expression and localization patterns of these two GR isoforms by immunofluorescence staining on contiguous brain slices (n = 4 patients, 5 sections each). The slices were immunostained for GRα and GRβ. Astrocytic (GFAP: glial fibrillary acidic protein) and neuronal (NeuN: neuronal nuclei) markers were also used to identify the cellular localization of the two GR isoforms. The concentrations and sources of all primary and secondary antibodies used are listed in Supplementary Table S1. After blocking for 1 h, the sections were incubated with the targeted primary antibody overnight at 4 °C followed by the respective secondary antibody for 2 h at room temperature. The tissues were blocked for autofluorescence with Sudan Black prior to mounting with VECTASHIELD Mounting Medium with DAPI (Vector Laboratories, cat. H-1200). Images were acquired by fluorescence microscopy using a Leica DMIL LED microscope with a gain of 1.0. The acquired images were processed using ImageJ software. Antibody information can be found in Table S1.

Overexpression of HA-GRβ by Transfection

To simulate the increase in GRβ expression observed in the dysplastic tissue compared to non-dysplastic tissue, HBMECs were transfected with HA-tagged GRβ DNA (1.075 µg/µL). The custom HA-tagged GRβ plasmid was obtained from OriGene Technologies utilizing the open reading frame (ORF) from cat. RC220377 (OriGene Technologies, Rockville, MD, USA) cloned in a pCMV6-AC-HA vector (OriGene Technologies, cat. PS100004). To achieve this transfection, 5 µg of HA-GRβ DNA was mixed in serum-free Dulbecco's modified Eagle medium (DMEM/F12) and later combined with a 30 µg mixture of Lipofectamine (Thermo Fisher Scientific, cat. 18324-012) in serum-free DMEM to form the DNA+Lipofectamine complex. This mixture was set aside for 25 min at room temperature. Once formed, the DNA+Lipofectamine complex was combined with additional serum-free DMEM, added to a 100 mm Petri dish of 70% confluent HBMECs, and left to incubate for 5 h at 37 °C. After the incubation was complete, the serum-free medium containing the DNA+Lipofectamine complex was aspirated, the plate was washed with 0.1 M phosphate-buffered saline (PBS), and the PBS was replaced with normal HBMEC medium (Cell Systems, cat. 4Z0-500) until the following day, when subsequent drug treatment experiments were performed. These transfected cells are denoted throughout as "HBMEC+HA-GRβ".

Drug Treatment with Cellular Fractionation

To determine the effect of ASM (oxcarbazepine: OXC or levetiracetam: LEV) or GR agonist (dexamethasone: DEX) treatment on the subcellular localization of various protein targets crucial for drug metabolism/transport and BBB integrity, HBMECs, HBMEC+HA-GRβ, and EPI-ECs were divided into four treatment groups each: vehicle control, OXC (25 µg/mL), LEV (15 µg/mL), and DEX (10 µM) for 24 h. The cells were fractionated into cytoplasmic and nuclear fractions using the NE-PER Nuclear and Cytoplasmic Extraction Reagents kit as described above (Thermo Fisher Scientific, cat. 78833) at 6 h (HBMEC and HBMEC+HA-GRβ) and 24 h (HBMEC, HBMEC+HA-GRβ, and EPI-EC) and analyzed by Western blot. Protein concentration was estimated by the Bradford method. Cell culture supernatant samples were also collected at each time point.
Determining MMP Activity by Zymography

MMP activity was determined by gelatin zymography using the samples obtained from the endothelial cell supernatant with and without drug treatment. A total of 20 µL of each sample was loaded into gelatin zymography gels (Thermo Fisher Scientific, cat. ZY00102BOX) and run at 100 V for about 2 h [45]. The gels were then incubated in renaturing buffer (2.5% Triton X-100 in distilled water) for 30 min at room temperature followed by 1× developing buffer (Thermo Fisher Scientific, cat. LC2671) for 30 min at room temperature to equilibrate the gels. Then, the gels were incubated at 37 °C in 1× developing buffer for 18 h. Gels were stained with 0.5% Coomassie Brilliant Blue R-250 (Bio-Rad Laboratories, cat. 161-0400) prepared in destaining solution (60% distilled water, 30% methanol, 10% acetic acid) for 30 min and cleared with destaining solution for 30 min to 1 h to visualize the bands before imaging. Images were processed and quantified densitometrically using ImageJ software. Origin Pro 9.0 software was then utilized to identify significant differences in MMP activity.

Data Analysis and Statistics

Origin Pro 9.0 software was used for data analysis and statistical interpretation of the data. A paired t-test was used to compare dysplastic and non-dysplastic brain tissue regions of the same individual patient. One-way or two-way analysis of variance (ANOVA) was utilized to compare multiple groups, with a Tukey post hoc test. All data are presented as mean with standard deviation (SD), and p < 0.05 was considered statistically significant.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms23094940/s1.

Author Contributions: R.W. drafted the manuscript and performed part of the cell culture experiments, Western blot, quantification, and analysis. N.C. performed immunohistochemistry and Western blot analysis and quantification. A.G. assisted with cell transfection and Western blot analysis. W.B., L.F. and I.M.N. helped to provide the patient tissues used for this experiment. C.G. designed the study/experiments and helped with data analysis and manuscript drafting. All authors participated in editing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This work is supported in part by the National Institute of Neurological Disorders and Stroke/National Institutes of Health grant R01NS095825 awarded to Chaitali Ghosh.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the Cleveland Clinic (IRB #07-322).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: Data that do not compromise ethical standards and patient confidentiality will be available upon reasonable request.
2022-05-02T15:03:46.545Z
2022-04-29T00:00:00.000
{ "year": 2022, "sha1": "60794ccd1e6e9b35e4bfa0b15987726c98f5dddf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/9/4940/pdf?version=1651218490", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1693951c9c6b9255a11e9194c90fb8cb598eeaa6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
189991072
pes2o/s2orc
v3-fos-license
Solving the Schrödinger Equation Using Anharmonic Potentials and a Variational Quantum Monte Carlo Method

Introduction

Oscillators are important models in quantum systems because they are good approximations for different problems. The harmonic oscillator is the first approximation used for vibrational spectroscopy models. 1,2 In this model, Hooke's law defines the potential energy operator:

V(x) = (1/2) k x^2 (1)

where k is the force constant, which depends on the nature of the bonded atoms, and x is the vibration coordinate. A more realistic model to describe molecular vibrations is given by the anharmonic oscillator. In this case, many different forms of analytical operators have been proposed. One of the most popular is the well-known Morse potential, 1

V(x) = De [1 - e^(-βx)]^2 (2)

where De is the molecular dissociation energy and β is a parameter associated with the curvature of V(x). The potential operator can also be described from accurate quantum mechanical calculations. Multireference configuration interaction (MRCI) is among the quantum mechanical methods able to describe the dissociation process, as well as other spectroscopic constants, with a very high level of accuracy. However, such methods are limited to relatively small molecules. An interesting alternative for large systems is the composite methods. This alternative corresponds to a combination of well-defined ab initio calculations that achieves an accurate total energy at a low computational cost compared to high-level calculations. These methods have been applied successfully in the calculation of thermochemical properties. [3][4][5] However, potential energy curves provided by such accurate methods have not been used to estimate spectroscopic properties. The objective of this work is to explore potential curves of some diatomic molecules from a composite method to estimate spectroscopic constants. The composite method to be explored is the Gaussian 3 theory, or simply G3. 6 In order to obtain the spectroscopic constants, a new variational numerical procedure is presented to solve the Schrödinger equation.
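To make Eqs. (1) and (2) concrete, here is a small Python sketch of both potential operators evaluated on a discretized coordinate grid, the form in which they are used in the numerical procedure below; the parameter values are illustrative placeholders, not values taken from this work.

```python
# Sketch of the harmonic (Eq. 1) and Morse (Eq. 2) potentials on a grid,
# in atomic units. The parameter values below are illustrative only.
import numpy as np

def v_harmonic(x, k=1.0):
    """Hooke's-law potential, V(x) = (1/2) k x^2."""
    return 0.5 * k * x**2

def v_morse(x, de=0.17, beta=1.0):
    """Morse potential, V(x) = De [1 - exp(-beta x)]^2."""
    return de * (1.0 - np.exp(-beta * x))**2

# Discretized domain, e.g. the +/-5 a.u. box used for the harmonic test case
x = np.linspace(-5.0, 5.0, 200)
v_h = v_harmonic(x)
v_m = v_morse(x)
```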
Methods

The general G3 energy is defined as:

E(G3) = E[MP4/6-31G(d)] + ΔE(+) + ΔE(2df,p) + ΔE(QCI) + ΔE(G3large) + ΔE(SO) + E(HLC) + E(ZPE) (3)

where the corrections are: ΔE(+) for diffuse functions; ΔE(2df,p) for polarization functions; ΔE(QCI) for electronic correlation effects beyond fourth-order perturbation theory using the method of quadratic configuration interaction; ΔE(G3large) for larger basis set effects and for the nonadditivity caused by the assumption of basis set extensions for diffuse functions and higher polarization functions; the spin-orbit correction, ΔE(SO), for atomic species only; E(HLC), a higher level correction added to take into account remaining deficiencies in the energy calculations; and finally E(ZPE), the zero-point energy at 0 K and thermal effects. In the present work, E(ZPE) is not included to describe the potential curves. All the calculations were performed using the Gaussian09 software. 7 From the G3 potential curves, the time-independent Schrödinger equation was solved using a form of variational quantum Monte Carlo (VQMC) method. The one-dimensional mean energy of a system is described as:

⟨E⟩ = ⟨ψ|T + V|ψ⟩ / ⟨ψ|ψ⟩ (4)

where the right side of Eq. (4) contains the kinetic (T) and potential (V) operators. The kinetic energy operator is a second-order derivative, which can be described numerically by 8

T ψ_i = -(1/2μ) [ψ_(i+1) - 2ψ_i + ψ_(i-1)] / h_i^2 (5)

where ψ_i is the i-th component of a discretized wavefunction vector, μ is the reduced mass of the system, and h_i = x_(i+1) - x_i. The potential energy operator is described as a set of discretized points obtained from the G3 calculation. After the application of the operators to an arbitrary and discretized wave function, Eq. (4) gives the energy of the system. The systematic procedure to obtain the result by VQMC is: 9

i. Generate a random vector to be the initial wave function.
ii. Calculate the average energy using a discretized version of Eq. (4), Eq. (5), and the desired potential function.
iii. Select at random a single point of the vector and modify its value according to the equation:

ψ_i(new) = ψ_i + δ (2 rand - 1) (6)

where rand is a random number generator with uniform distribution between zero and 1, and δ is a number which defines the range of change of the selected wave function value.
iv. Recalculate the energy with the modified wave function using Eq. (4). If the new energy is lower than the previous one, the modification is accepted. If it is not, the previous wave function is restored.
v. The procedure is repeated until a determined number of steps and/or the convergence factor is reached.

After the optimization of the ground state wave function, the first excited states can be obtained using the same procedure while preserving the orthogonality of the system. In this work, the Gram-Schmidt method was used. 10 The procedure is repeated to provide as many excited states as necessary. After 10^6 steps of VQMC, an extra optimization of the meshes was carried out using the modified Simplex algorithm of Nelder and Mead. 11

Results and Discussion

The harmonic oscillator, which has the potential energy operator shown in Eq. (1), was used as an initial test case. The boundaries of the domain were defined between ±5 atomic units. The mass of the particle and the force constant were set to 1 a.u. Table 1 shows that, as expected, increasing the number of points yields more accurate energies, which tend toward the exact values. The accuracy is reduced for higher excited states: the more orthogonal functions are required from the calculations, the more pronounced is the error propagation. Figure 1 shows the first three harmonic wave functions.
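The VQMC procedure of steps i-v can be illustrated with a short, self-contained Python sketch applied to the harmonic test case above; the step count, δ, and mesh size are illustrative choices, and the update rule follows the random-perturbation form of Eq. (6), not necessarily the exact settings used in this work.

```python
# Minimal VQMC sketch for the harmonic oscillator (atomic units), following
# steps i-v. Step count, delta, and mesh size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # mesh points
x = np.linspace(-5.0, 5.0, n)
h = x[1] - x[0]
v = 0.5 * x**2                            # Eq. (1) with k = 1; mu = 1

def energy(psi):
    """Discretized <E> of Eq. (4) with the kinetic operator of Eq. (5)."""
    d2 = np.zeros_like(psi)
    d2[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / h**2
    h_psi = -0.5 * d2 + v * psi
    return np.sum(psi * h_psi) / np.sum(psi * psi)

psi = rng.random(n)                       # step i: random initial vector
psi[0] = psi[-1] = 0.0                    # fixed-box boundary condition
e = energy(psi)                           # step ii
delta = 0.05
for _ in range(200_000):                  # steps iii-v
    i = rng.integers(1, n - 1)            # pick one interior point
    old = psi[i]
    psi[i] = old + delta * (2.0 * rng.random() - 1.0)  # Eq. (6)-style move
    e_new = energy(psi)
    if e_new < e:                         # step iv: keep only improvements
        e = e_new
    else:
        psi[i] = old                      # restore previous wave function

print(f"VQMC ground-state energy estimate: {e:.4f} (exact value: 0.5)")
```

Excited states would additionally require Gram-Schmidt orthogonalization against the already-converged lower states after each accepted move, as described in the Methods.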
The method generates not only accurate energies but also well-behaved wave functions, as shown in Fig. 1. Based on the present uncertainty for the harmonic oscillator, a mesh of 200 points was used for the calculations involving the Morse and G3 potentials. The reduced mass, dissociation energy, and curvature for the Morse potential, as well as the experimental energies, were taken from Herzberg. 12 The same domain of the Morse potential was used to calculate the energies from the G3 theory. All meshes were constructed first with a few points, which were then increased by spline interpolation. Table 2 shows the vibrational energies for LiH, H2, HF, HCl, HBr, O2, and Cl2 from the Morse potential, the G3 energies, and the experimental results. All the data are in atomic units. Table 2 shows that the error in energy increases for higher energy states. For example, the error of the zeroth state of the H2 molecule with the Morse potential is 0.23%, while that of the fourth state is 3.0%. This growth of the error is explained by error accumulation. Some spectroscopic parameters were estimated by the equation:

E_v = (v + 1/2) ν_e - (v + 1/2)^2 x_e ν_e (7)

where ν_e is the fundamental frequency and x_e ν_e is the anharmonic constant. Table 3 shows the values of ν_e and x_e ν_e from the calculations using the Morse and G3 potentials, plus the experimental values. The calculated constants show that the errors do not follow a regular behavior, implying that there are some situations in which the Morse potential fits better than G3, but this is not a rule.

Conclusions

The simulations performed in this work reveal that the present version of the variational principle associated with a Monte Carlo search is an accurate method to generate vibrational energies and well-behaved wave functions. The calculated spectroscopic parameters were usually considerably close to the experimental data, with small errors. The approximate nature of both potentials tested may be pointed out as responsible for the deviations. Tests considering rigorous potential curves are in progress.
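As a worked illustration of Eq. (7), the two constants can be obtained from the first few computed vibrational levels by a linear least-squares fit; the sketch below uses hypothetical energy values, not the ones reported in the tables.

```python
# Sketch: extract nu_e and x_e*nu_e (Eq. 7) from computed vibrational levels
# by least squares. The level energies below are hypothetical placeholders.
import numpy as np

levels = np.array([0.00986, 0.02916, 0.04775, 0.06562, 0.08278])  # E_v, a.u.
v = np.arange(len(levels))
s = v + 0.5

# E_v = nu_e * s - (x_e nu_e) * s^2  ->  linear in the two unknowns
a = np.column_stack([s, -s**2])
(nu_e, xe_nu_e), *_ = np.linalg.lstsq(a, levels, rcond=None)

print(f"nu_e     = {nu_e:.6f} a.u.")
print(f"x_e nu_e = {xe_nu_e:.6f} a.u.")
```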
2019-06-14T14:47:39.420Z
2015-07-01T00:00:00.000
{ "year": 2015, "sha1": "6452056e9d001214c18b818423d46cd64289ae22", "oa_license": "CCBYNC", "oa_url": "http://ojs.rpqsenai.org.br/index.php/rpq_n1/article/download/289/279", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7d39997672fe7fa6312a9352b4fdaa16867b1cdf", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
49657945
pes2o/s2orc
v3-fos-license
Haemosporidian parasite community in migrating bobolinks on the Galapagos Islands

Bobolinks (Dolichonyx oryzivorus) migrate from their breeding grounds in North America to their wintering grounds in South America during the fall each year. A small number of Bobolinks stop temporarily in Galapagos, and potentially carry parasites. On the North American breeding grounds, Bobolinks carry at least two of the four Plasmodium lineages recently detected in resident Galapagos birds. We hypothesized that Bobolinks carried these parasites to Galapagos, where they were bitten by mosquitoes that then transmitted the parasites to resident birds. The haemosporidian parasite community in 44% of the Bobolinks we captured was consistent with those on their breeding grounds. However, the lineages were not those found in Galapagos birds. Our results provide a parasite community key for future monitoring.

Introduction

Avian haemosporidian parasites are a highly diverse group of dipteran-borne blood parasites. Plasmodium, Haemoproteus, and Leucocytozoon, the most common genera, are widely distributed around the world (Valkiūnas, 2005). However, avian haemosporidians are limited in number and diversity on remote islands, likely because the diversity of avian hosts and/or the required arthropod hosts is also limited (Clark et al., 2014). Four Plasmodium lineages, avian haemosporidian parasites that require both a vertebrate and a mosquito host, were recently documented in diverse bird species on the Galapagos Islands (Levin et al., 2009, 2013). Colonization by these parasites is of significant conservation concern, as they can lower host fitness and survival (Atkinson and Samuel, 2010; Lachish et al., 2011). There is no evidence, however, that the Plasmodium parasite(s) found in the islands complete their transmission cycle in Galapagos native (Levin et al., 2013) or introduced birds (Gottdenker et al., 2005; Deem et al., 2011; Jaramillo et al., in preparation). Therefore, we sought to determine if migratory species could be carrying parasites that may be transmitted to the Galapagos avifauna. We hypothesized that these parasites were brought to the islands by Bobolinks (Dolichonyx oryzivorus), the only passerine bird species that regularly migrates through Galapagos, and transmitted across the avifauna by mosquitoes. Galapagos currently has three mosquito species: one native, brackish-water species (Aedes taeniorhynchus); one species (Culex quinquefasciatus) accidentally introduced in 1985 that is a known Plasmodium vector elsewhere; and one species (Aedes aegypti), also accidentally introduced in the 1990s, but thought to feed less on birds than the other two species (Whiteman et al., 2005; Bataille et al., 2009). Bobolinks breed across much of the northern United States and southern Canada, and winter in eastern Bolivia, Paraguay, and northeastern Argentina. Their main migration route in South America is inland, but a likely small yet unknown number of Bobolinks moves through the Galapagos each year. In an earlier study, we found Plasmodium parasite lineages B and C, detected in two Galapagos passerine species, Small ground finches (Geospiza fuliginosa) and Yellow warblers (Setophaga petechia), in the blood of Bobolinks sampled on their North American breeding grounds (Levin et al., 2013).
We then characterized the geographic origins of these two lineages: Plasmodium lineage B was of South American origin and Plasmodium lineage C was of North American origin, potentially California (Levin et al., 2016). Although these results offer a compelling explanation as to how parasites may have arrived on the Galapagos Islands, we sought more evidence for the role of the Bobolink in Plasmodium transmission to Galapagos resident birds by sampling Bobolinks for haemosporidian parasites in the Galapagos. In October 2015, we traveled to San Cristobal Island to try to find, capture, and sample blood from migrating Bobolinks stopping over in Galapagos. Here we describe the infection rate and parasite community of nine Bobolinks caught on the Galapagos.

Material and methods

On October 12-23, 2015, we searched for Bobolinks on migration stopover in native grassland and agricultural habitat in the highlands of San Cristobal Island, Galapagos, Ecuador (see Perlut and Renfrew, 2016 for details). We used mist-nets and playbacks to capture nine Bobolinks at Finca de las Gemelas (422 m elevation; 0°53′26.23″S, 89°27′15.35″W), a 2.5 ha grassland patch, and in a pasture at Santa Monica, an agricultural complex 9.15 km west of Gemelas, owned by the Ecuadorian military (435 m elevation). We banded each bird with a unique U.S. Geological Survey band, recorded morphometric measurements, and collected a blood sample from the brachial vein that was placed in lysis buffer for DNA preservation (Longmire et al., 1988). We used a standard phenol-chloroform extraction protocol following Sambrook et al. (1989), with a final dialysis step in TNE2 (1 M Tris pH 8, 5 M NaCl, 0.5 M EDTA, dH2O), for DNA extraction of the Bobolink samples. For PCR-based molecular screening, we amplified a region of the parasite mitochondrial cytochrome b gene following Waldenstrom et al. (2004). The positive control was a consistently amplifying PCR-positive Galapagos Penguin (Spheniscus mendiculus), and the negative control consisted of all PCR reagents without DNA. Positive samples were sequenced to identify parasite DNA using the BigDye terminator v3.1 cycle sequencing kit (Applied Biosystems) in 10 μL reactions with a final primer concentration of 1 μM, following a standard cycle sequencing program. Reactions were then cleaned using ethanol precipitation before sequencing on an ABI 3130 automated sequencer at the University of Missouri - Saint Louis. Parasite cyt b sequences were assembled in Seqman Pro 12.2.0 and added to a data set containing previously amplified sequences from Galapagos birds and overlapping cyt b sequences from described morphospecies from GenBank used in Levin et al. (2013) to identify matches.

Results and discussion

Our molecular analysis identified 4 of 9 individuals (44%) that amplified Plasmodium sp. DNA from the Bobolink blood sampled in Galapagos. Two of the sequences matched the Plasmodium cathemerium lineage SEIAUR01 (GenBank accession number: AY377128; Loiseau et al., 2013), which is common in Bobolinks (Levin et al., 2013) and corresponds to lineage M found in those captured in North America (Levin et al., 2016). The other two sequences matched Plasmodium sp. lineages WW3 and RWB01 (GenBank accession numbers: KC867662 and KC867673, respectively; Levin et al., 2013), which correspond to North American lineages K and L in Levin et al. (2016), respectively.
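A minimal sketch of the lineage-matching step is shown below, assuming FASTA files of the assembled cyt b sequences and of reference lineage sequences trimmed to the same amplicon region; the file names and the simple percent-identity criterion are illustrative assumptions, not the exact pipeline used in this study.

```python
# Sketch: match assembled parasite cyt b sequences against reference lineage
# sequences by simple percent identity. File names are illustrative; the study
# used data sets assembled as in Levin et al. (2013).
from Bio import SeqIO  # Biopython

def percent_identity(a, b):
    """Identity over the shared length of two pre-trimmed sequences."""
    n = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a[:n], b[:n]) if x == y)
    return 100.0 * matches / n

references = {rec.id: str(rec.seq).upper()
              for rec in SeqIO.parse("reference_lineages.fasta", "fasta")}

for rec in SeqIO.parse("bobolink_cytb.fasta", "fasta"):
    query = str(rec.seq).upper()
    name, ref_seq = max(references.items(),
                        key=lambda kv: percent_identity(query, kv[1]))
    pid = percent_identity(query, ref_seq)
    print(f"{rec.id}: best match {name} ({pid:.1f}% identity)")
```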
The haemosporidian parasite community found in Bobolinks captured on Galapagos during migration was consistent with the parasite community in Bobolinks sampled on their North American breeding grounds (Levin et al., 2013), but did not match the lineages found in Galapagos resident bird species. The prevalence of haemosporidian DNA was higher in our samples (44%; 4 of 9 birds) than in Bobolinks sampled on their North American breeding grounds (17.8%; 78 of 438 birds; Levin et al., 2013), although the sample sizes differ dramatically. We do not yet know if and how haemosporidian parasites affect Bobolinks throughout any phase of their life cycle. Other studies have found both weak (Risely et al., 2018) and no (Cornelius et al., 2014; Sorensen et al., 2016) effects of parasites on migratory birds during migration and while wintering. These results indicate that 1) the Bobolinks that we sampled in Galapagos have been exposed to parasite communities similar to those of Bobolinks in North America (e.g., population connectivity on either the breeding or wintering grounds), and 2) the Bobolinks provide a parasite community key to use for future monitoring, as little is known about the transmission paths of these parasites or their impact on bird survival or reproduction. Our small sample size limited the probability of capturing a Bobolink that matched one or more of the four Galapagos lineages. Although we captured nine individuals during our 12 days of fieldwork, we observed but could not capture an additional 2-6 birds. Nonetheless, the plausibility of our proposed pathway for parasite introduction into Galapagos (transport by migratory Bobolinks and transmission by resident mosquitoes) is supported by recent work on the altitudinal distribution of mosquitoes in the archipelago. Both A. taeniorhynchus and C. quinquefasciatus were found in the Galapagos at the same elevation (422-435 m) as the sites that Bobolinks selected (Asigau et al., 2017). We recommend 1) sampling additional Bobolinks and native avifauna on Galapagos to monitor changes in, and sources of, the haemosporidian parasite community; 2) studying the pathogenic effects of these lineages across the diverse resident species known to be infected (e.g., Levin et al., 2013); and 3) studying any costs or benefits to Bobolinks of carrying these parasites. A number of studies have found haemosporidian parasites in migratory birds throughout their breeding and wintering ranges (Pérez-Tris and Bensch, 2005; Ramey et al., 2016; Ricklefs et al., 2016). Cornuault et al. (2012) used a phylogenetic approach to understand parasite origins on remote islands of the Mascarene archipelago and found that, while there is evidence for a lineage introduction through human-introduced birds, most lineages were likely brought by immigration of avian hosts or vectors and have diversified in situ. To our knowledge, Cornuault et al. (2012) and this study are the only examples of haemosporidian parasites arriving at remote islands through migratory birds.
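Given the caveat about sample sizes above, one way to gauge whether the 44% vs. 17.8% prevalence difference could plausibly arise by chance is a two-sided Fisher's exact test on the underlying counts; this is an illustrative calculation, not an analysis performed in the study.

```python
# Illustrative check (not from the study): Fisher's exact test comparing
# prevalence in Galapagos-sampled Bobolinks (4/9) vs. the North American
# breeding grounds (78/438; Levin et al., 2013).
from scipy.stats import fisher_exact

#                 infected  uninfected
table = [[4,   9 - 4],      # Galapagos stopover sample
         [78,  438 - 78]]   # North American breeding grounds

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```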
2018-07-11T00:13:43.420Z
2018-05-30T00:00:00.000
{ "year": 2018, "sha1": "4020b6c7e98d5e46f767b4af526f34eafc6afb59", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijppaw.2018.05.006", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "11061add7d60ed19de4c878039dc4967f0d4ba89", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }